A simple way to help you understand AI


Are you familiar with Artificial Intelligence?

In the past six months, chatbots such as ChatGPT and image generators such as Midjourney have quickly become popular.

But artificial intelligence (AI) or “machine learning” has been around for a long time.

In this beginner’s guide, we go beyond chatbots and look at other types of AI – and see how these remarkable new tools are starting to play a role in our lives.

The secret to all machine learning is a process called training, in which a computer program is given a large amount of data, a description of that data’s characteristics, and certain instructions.

The instructions could be something like: “find all photos that contain faces” or “put sounds into categories”.

The program then automatically searches the data for patterns that allow it to do what is requested.
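As a rough illustration of training, here is a deliberately simplified sketch in Python. The data, the features and the “nearest centre” rule are all invented for the example, but it shows the pattern the article describes: labelled data goes in, and a “model” comes out that can handle new cases.

```python
# A minimal sketch of "training": the program receives data,
# labels describing that data, and an instruction ("put sounds
# into categories"). All numbers here are made-up illustrations.

# Each sound is reduced to two invented features: (pitch, loudness).
training_data = [
    ((0.9, 0.8), "alarm"),
    ((0.8, 0.9), "alarm"),
    ((0.2, 0.3), "speech"),
    ((0.3, 0.2), "speech"),
]

# "Training" here simply averages the examples of each category.
def train(examples):
    sums, counts = {}, {}
    for (pitch, loud), label in examples:
        p, l = sums.get(label, (0.0, 0.0))
        sums[label] = (p + pitch, l + loud)
        counts[label] = counts.get(label, 0) + 1
    return {label: (p / counts[label], l / counts[label])
            for label, (p, l) in sums.items()}

# The "model" is what was learned: one centre point per category.
model = train(training_data)

# A new, unlabelled sound is assigned to the nearest centre.
def classify(model, sound):
    return min(model, key=lambda label:
               (model[label][0] - sound[0]) ** 2 +
               (model[label][1] - sound[1]) ** 2)

print(classify(model, (0.85, 0.85)))  # an alarm-like sound -> "alarm"
```

Real machine-learning systems use far richer data and learning rules, but the shape is the same: the model is nothing more than what was extracted from the labelled examples.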

The instructions can be quite loose – something like “this word sounds like that one” or “these two sounds are different” – but what the program learns from the data it receives, together with what it is told, becomes the AI’s ‘model’, and everything it absorbs in this exercise defines what it can do.

One way to see how this process can yield different types of AI is to think of different animals.

Just as changing environments have, over millions of years, shaped the abilities of different animals, so the data an AI is trained on shapes how detailed and capable its ‘model’ becomes.

What are some examples of how we’ve trained AI to have different abilities?

What are chatbots?

Think of a chatbot as a parrot. It memorizes and repeats words it has heard in a certain way, without fully grasping their meaning.

Chatbots work in much the same way – only far more sophisticated – and are starting to change the way we communicate and write.

But how do chatbots know how to write?

They are a type of AI known as large language models (LLMs), trained on very large amounts of text.

An LLM learns to recognize not just single words but whole phrases, and can compare how words are used across the many texts in its training data.

These billions of word and phrase patterns allow it to read a question and generate an answer.
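As a toy illustration of that idea, the sketch below counts which word follows which in a tiny made-up corpus and predicts the most common continuation. Real LLMs use neural networks over vastly more text and context, but the match-and-continue intuition is similar.

```python
# A toy sketch of next-word prediction: count which word follows
# which in training text, then choose the most frequent continuation.
# The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

A real model conditions on many preceding words at once rather than just one, which is what lets it produce whole coherent answers instead of single likely words.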

The most amazing thing about LLMs is that they can learn the rules of language by themselves and work out the meaning of words, without human help.

What the experts say: The future of chatbots

“In 10 years, I think we’ll have chatbots that act as experts in any field. So you’ll be able to ask an expert doctor, an expert teacher, an expert lawyer whatever you want and that expertise will respond to your needs.”

Can I talk to AI?

If you’ve used Alexa, Siri or other voice recognition technology, you’ve already used AI.

Think of a rabbit with its big ears, known for picking up the faintest of sounds.

The AI takes in the sound of your speech, removes the background noise, breaks down your voice into ‘phonemes’ – the individual sounds that make up a word – and matches them against a database of sounds.

Your speech is then transcribed into text, where any errors of understanding can be corrected before a response is given.
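The matching step can be sketched with a toy phoneme lookup. The phoneme symbols and the tiny “database” below are invented for illustration; real systems use statistical acoustic models rather than simple lookups.

```python
# A toy sketch of the matching step: break a spoken word into
# "phoneme" symbols and look them up in a small database of known
# words. The phoneme strings here are invented for illustration.
phoneme_db = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "EH", "DH", "ER"): "weather",
    ("Y", "EH", "S"): "yes",
}

def transcribe(phonemes):
    # Try an exact match first; otherwise pick the closest known word.
    word = phoneme_db.get(tuple(phonemes))
    if word is not None:
        return word
    def overlap(known):
        # Count positions where the heard and known phonemes agree.
        return sum(a == b for a, b in zip(known, phonemes))
    best = max(phoneme_db, key=overlap)
    return phoneme_db[best]  # best guess; may need correcting

print(transcribe(["Y", "EH", "S"]))         # exact match: "yes"
print(transcribe(["W", "EH", "TH", "ER"]))  # closest match: "weather"
```

The fallback in the sketch mirrors the correction step mentioned above: when the sounds don’t match perfectly, the system settles for the nearest known word.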

This type of artificial intelligence is known as ‘natural language processing’.

It’s the technology behind everything from saying “yes” to confirm a bank transfer, to asking your phone for the weather forecast in the city you’re visiting.

Can AI understand photos?

Has your phone ever grouped photos into a ‘folder’ and called it “at the beach” or “day out”?

If so, you’ve used AI without knowing it. An AI ‘algorithm’ finds the similarities in your photos and groups them together.

These programs are trained on a large number of photos, each given a simple label.

When an image-recognition AI is shown a series of photos labelled “bicycle”, it eventually learns to distinguish a bicycle from a car or a boat.

Sometimes AI is trained to detect very small differences in many similar photos.

This is how facial recognition works: spotting the small details that make your face different from every other face.

In medicine, these kinds of ‘algorithms’ have been trained to study scans and spot tumors across thousands of images in the time a doctor takes to assess a single one.

How does AI create new images?

More recently, the same image-recognition techniques have been turned around: AI can learn, in a similar way, to generate images rather than just recognize them.

These AI image generators are trained on millions of different images and can combine what they have learned into new, different pictures.

You can ask AI for a picture of something that has never happened – for example, a photo of a man walking on the surface of Mars.

Or you can take an artistic approach and ask it to adopt a certain style: “Make a picture of an English coach, painted in the style of Picasso.”

An advanced AI then starts to produce the new image, trying out different forms and colors.

It draws on every aspect of what it learned during training about how different things look.

Step by step it refines the image, comparing what it has produced against what was requested, until the result closely resembles what was asked for.
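That step-by-step refinement can be illustrated with a toy loop that starts from random “noise” and nudges each value toward a requested pattern. Real image generators (diffusion models) learn how to do the nudging from their training data; in this sketch the target pattern is simply given, which is an illustrative shortcut.

```python
# A toy illustration of step-by-step refinement: start from random
# "noise" and nudge each value a little closer to a requested
# pattern on every step. The 6-value "image" and target are invented.
import random

random.seed(0)
desired = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]    # stand-in for the request
image = [random.random() for _ in desired]  # start from pure noise

for step in range(50):
    # Move each value a small fraction of the way toward the target.
    image = [x + 0.2 * (d - x) for x, d in zip(image, desired)]

# After many small steps the image closely matches the request.
print(all(abs(x - d) < 0.01 for x, d in zip(image, desired)))  # True
```

Each pass shrinks the remaining error by the same fraction, which is why many small steps end up very close to the requested result.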

Combine all the necessary elements, such as “Martian landscape”, “space traveller” and “walking”, and it will give you a new picture.

The new image is, in many ways, something that has never existed before – yet it is built from the billions of images the AI was trained on.

Society is now starting to question what ‘copyright’ means in this context, and what the future of work holds for the artists and photographers who create images.

What about cars?

Self-driving cars have been a topic of discussion in AI for many years, and the concept has been demonstrated many times on video.

This use of AI is known as ‘autonomous driving’, where cars are equipped with cameras, radar and other distance-sensing systems.

Think of a butterfly: it can see in almost every direction at once, and its wings help it sense the air and change course whenever it needs to.

In the same way, the AI ‘models’ use information from their sensors to recognize what is around them and work out what each thing is – another car, a bicycle, a pedestrian or something else.

Thousands of hours of training in how to drive well mean the AI can now make split-second decisions to avoid collisions.
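A toy sketch of that decision step: given what the sensors report, classify the situation and choose an action. The object types, rules and thresholds below are invented for illustration; real autonomous-driving systems use learned models and far richer inputs.

```python
# A toy sketch of the decision step in autonomous driving: given
# what the sensors report about a nearby object, choose an action.
# The rules and thresholds here are invented for illustration.
def decide(obj_type, distance_m, closing_speed_mps):
    if obj_type == "pedestrian" and distance_m < 30:
        return "brake"
    if closing_speed_mps > 0 and distance_m / closing_speed_mps < 2:
        return "brake"  # less than 2 seconds to impact
    if obj_type == "cyclist":
        return "slow and give room"
    return "continue"

print(decide("pedestrian", 20, 0))  # close pedestrian -> "brake"
print(decide("car", 100, 10))       # 10 seconds away -> "continue"
```

The real systems make decisions like these many times per second, weighing every detected object at once rather than one at a time.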

Prediction algorithms struggled for years to mimic a real driver, but self-driving cars have now driven millions of miles on real roads. In San Francisco, they now carry paying passengers.