Decoding AI: An A-Z Glossary in the Language of the Future

Let’s take a trip back to the 1970s. Try explaining to someone there what it means “to google” something, or what a “URL” is. Better yet, try convincing them that “fibre-optic broadband” is a good thing. You’d be met with blank stares and probably a few laughs.

Every technological revolution brings with it a new lexicon – a new language we have to learn and adapt to until, eventually, it becomes second nature. The same is true for the next big wave: artificial intelligence.

Understanding the language of AI isn’t just a fun party trick. It’s essential. Governments, businesses, and individuals alike need to understand this language to navigate the potential risks and rewards of this emerging technology.

In the past few years, we’ve seen an explosion of new terms related to AI – “alignment”, “large language models”, “hallucination”, “prompt engineering”, and more.

BBC.com has put together an A-Z of AI terms you need to know. It’s like a Rosetta Stone for the future.

A is for Artificial General Intelligence (AGI)

Most of the AIs we’ve seen so far have been “narrow” or “weak”. They’re like the chess prodigy who can’t boil an egg or write an essay. But that’s changing. AI is now learning to multitask, bringing us closer to the dawn of “artificial general intelligence”.

AGI is an AI that thinks like a human, possibly even with consciousness, but with the superpowers of a digital mind. Companies like OpenAI and DeepMind are racing to create AGI, claiming it will “elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge”. It’s like a supercharged version of human ingenuity and creativity.

But there’s a flip side: creating a superintelligence that outsmarts humans could bring great dangers. That, though, is a story for another letter (see “Superintelligence” and “X-risk”).

A is for Alignment

We humans, despite our myriad differences, have a shared set of values. We all agree that family is important and that murder is a no-no. But what happens when we share our planet with a non-human intelligence that’s more powerful than us? How can we ensure that AI’s values align with ours?

OpenAI, one of the leading companies in AI development, recently announced plans for a “superalignment” programme. The goal? To make sure that AI systems, even those smarter than us, follow human intent. Because let’s face it, we don’t want a superintelligent AI going rogue on us.

B is for Bias

AI learns from us, and let’s be honest, we’re not exactly bias-free. If an AI learns from a skewed dataset, it could end up spewing out inaccurate, offensive stereotypes. And as we hand over more decision-making to AI, there’s a risk that machines could enact hidden prejudices. This could prevent some people from accessing certain services or knowledge, all under the guise of algorithmic impartiality.

C is for Compute

Compute refers to the computational resources required to train AI, and it’s one way to measure how quickly the field is advancing. Since 2012, the amount of compute used to train the largest AI models has doubled roughly every 3.4 months. That’s a lot of processing power. But can this rapid rate of change continue? And can innovations in computing hardware keep up?
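
To get a feel for what that doubling rate implies, here’s a quick back-of-the-envelope calculation in Python – nothing more than compound-growth arithmetic:

```python
# Doubling every 3.4 months compounds quickly.
months_per_doubling = 3.4

growth_per_year = 2 ** (12 / months_per_doubling)
growth_over_five_years = 2 ** (60 / months_per_doubling)

print(f"~{growth_per_year:.0f}x more compute per year")    # ~12x
print(f"~{growth_over_five_years:,.0f}x over five years")  # ~200,000x
```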

D is for Diffusion Models

A few years ago, generative adversarial networks (GANs) were the go-to technique for getting AI to create images. But now, a new breed of machine learning called “diffusion models” is showing greater promise. They learn by destroying their training data with added noise and then recovering that data by reversing the process. They’re called diffusion models because this noise-based learning process echoes the way gas molecules diffuse.
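
To make that concrete, here’s a minimal sketch of the forward “noising” half of the process, loosely following the DDPM formulation – the schedule values and data sizes are illustrative, not taken from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, t, num_steps=1000):
    """Corrupt clean data x0 into its noised version at step t.

    A simple linear variance schedule: blend the data with Gaussian
    noise, more heavily as t grows. Training then teaches a network
    to reverse this corruption step by step.
    """
    beta = np.linspace(1e-4, 0.02, num_steps)  # per-step noise amounts
    alpha_bar = np.cumprod(1.0 - beta)[t]      # fraction of signal surviving
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

x0 = rng.standard_normal(8)   # stand-in for an image's pixel values
print(add_noise(x0, t=10))    # mostly signal
print(add_noise(x0, t=990))   # almost pure noise
```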

E is for Emergence & Explainability

Emergent behaviour is when an AI does something unexpected – something beyond its creators’ intention or programming. As AI systems have become more opaque, emergent behaviour has become more likely. That’s why researchers are now focused on improving the “explainability” of AI – making its internal workings more transparent and understandable to humans.
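
One of the simplest explainability techniques is a gradient “saliency” check: ask which inputs most move the model’s output. A minimal PyTorch sketch, with a toy untrained model standing in for a real one:

```python
import torch
import torch.nn as nn

# A toy classifier standing in for any differentiable model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4, requires_grad=True)  # one input with 4 features
score = model(x)[0, 1]                     # the model's score for class 1
score.backward()                           # backpropagate to the input

# The gradient's magnitude per feature is a crude "importance" signal:
# large values mark inputs that most influence this prediction.
saliency = x.grad.abs().squeeze()
print(saliency)
```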

F is for Foundation Models

Foundation models are the new generation of AIs that can do a range of things: writing essays, drafting code, drawing art, or composing music. They’re not just good at one thing – they can apply what they’ve learned in one domain to another. But with great power comes great responsibility, and there are questions about the potential risks and downsides of these models.

G is for Ghosts

We’re entering an era where people can live on after their deaths as AI “ghosts”. But this raises a number of ethical questions: Who owns the digital rights to a person after they’re gone? What if the AI version of you exists against your wishes? And is it OK to “bring people back from the dead”?

H is for Hallucination

Sometimes, an AI will respond with great confidence, but the facts it spits out will be false. This is known as a hallucination. It happens because of the way that generative AI works. It’s not looking up fixed factual information, but is instead making predictions based on the information it was trained on. The worry is that if an AI delivers its false answers confidently, they may be accepted by people, deepening the age of misinformation we live in.

I is for Instrumental Convergence

Imagine an AI whose number one priority is to make as many paperclips as possible. If that AI were superintelligent and misaligned with human values, it might reason that being switched off would mean failing in its goal – and so it would resist any attempt to switch it off. This is the Paperclip Maximiser thought experiment, and it’s an example of the so-called “instrumental convergence thesis”.

J is for Jailbreak

After notorious cases of AI going rogue, designers have placed content restrictions on what AIs can spit out. However, it’s possible to “jailbreak” them – that is, to bypass those safeguards using creative language, hypothetical scenarios, and trickery.

K is for Knowledge Graph

Knowledge graphs, also known as semantic networks, are a way of representing knowledge as a network, so that machines can understand how concepts are related. Advanced AI systems build far richer networks of connections – based on all sorts of relationships, traits and attributes between concepts – across terabytes of training data.
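
The underlying idea can be sketched with a handful of (subject, relation, object) triples – real knowledge graphs such as Wikidata hold billions of them, but the structure is the same:

```python
# A toy knowledge graph as (subject, relation, object) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "instance_of", "City"),
]

def related(entity):
    """Return every relation directly connected to an entity."""
    out = [(r, o) for s, r, o in triples if s == entity]
    out += [(r, s) for s, r, o in triples if o == entity]
    return out

print(related("Paris"))
# [('capital_of', 'France'), ('instance_of', 'City')]
```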

L is for Large Language Models (LLMs)

Large language models are advanced artificial intelligence systems designed to understand and generate human-like language. They utilise a deep neural network architecture with millions or even billions of parameters, enabling them to learn intricate patterns, grammar, and semantics from vast amounts of textual data.
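
Under the hood, an LLM does one thing over and over: predict the next token. A minimal sketch using the Hugging Face transformers library – GPT-2 is used here only because it’s small and freely downloadable; modern LLMs work the same way at vastly larger scale:

```python
from transformers import pipeline

# Download a small pretrained language model and generate text with it.
generator = pipeline("text-generation", model="gpt2")

out = generator("Artificial intelligence is", max_new_tokens=20)
print(out[0]["generated_text"])  # the prompt, continued token by token
```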

M is for Model Collapse

To develop the most advanced AIs, researchers need to train them with vast datasets. Eventually though, as AI produces more and more content, that material will start to feed back into training data. If mistakes are made, these could amplify over time, leading to what the Oxford University researcher Ilia Shumailov calls “model collapse”. This is “a degenerative process whereby, over time, models forget”.
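
A toy simulation of the effect, assuming each generation’s “model” is just a Gaussian fitted to the previous generation’s output – a drastic simplification of the actual research setup, but it shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for gen in range(1, 31):
    # "Train" a model: estimate the data's mean and spread...
    mu, sigma = data.mean(), data.std()
    # ...then let the next generation learn only from that model's output.
    data = rng.normal(mu, sigma, size=100)
    if gen % 10 == 0:
        print(f"generation {gen}: spread = {sigma:.3f}")

# The spread tends to shrink over generations: rare values from the
# original distribution's tails are gradually forgotten.
```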

N is for Neural Network

In the early days of AI research, machines were trained using logic and rules. The arrival of machine learning changed all that. Now the most advanced AIs learn for themselves. The evolution of this concept has led to “neural networks”, a type of machine learning that uses interconnected nodes, modelled loosely on the human brain.
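
Stripped to its core, a neural network is just layers of weighted connections with non-linear functions between them. A minimal sketch with random, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A common non-linearity: pass positives, zero out negatives."""
    return np.maximum(0.0, x)

# A tiny two-layer network: 3 inputs -> 4 hidden nodes -> 1 output.
W1, b1 = rng.standard_normal((3, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)  # each hidden node weighs every input
    return hidden @ W2 + b2     # the output weighs every hidden node

print(forward(np.array([1.0, 0.5, -0.2])))
# Training would adjust W1, b1, W2, b2 to make outputs match targets.
```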

O is for Open-Source

Years ago, biologists realised that publishing details of dangerous pathogens on the internet was probably a bad idea. Recently, AI researchers and companies have been facing a similar dilemma: how open-source should AI be?

P is for Prompt Engineering

AIs are now impressively proficient at understanding natural language. However, getting the very best results from them requires the ability to write effective “prompts”: the text you type in matters.
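
The difference is easiest to see side by side. In this sketch, ask() is a hypothetical stand-in for whichever chat model API you happen to use:

```python
# The same request, phrased two ways.

vague_prompt = "Write about dogs."

engineered_prompt = (
    "You are a veterinary science writer. In three short paragraphs, "
    "explain how to care for a senior dog. Use plain language, avoid "
    "jargon, and end with a one-line summary."
)

# ask(vague_prompt)       -> generic, unfocused text
# ask(engineered_prompt)  -> role, audience, format and length all pinned down
```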

Q is for Quantum Machine Learning

If any technology came close to matching AI for hype in 2023, it was quantum computing – so it was reasonable to expect the two to combine at some point. Using quantum processes to supercharge machine learning is something that researchers are now actively exploring.

R is for Race to the Bottom

As AI has advanced rapidly, mainly in the hands of private companies, some researchers have raised concerns that competition could trigger a “race to the bottom” on safety. As chief executives and politicians compete to put their companies and countries at the forefront of AI, the technology could accelerate too fast for safeguards, appropriate regulation and ethical scrutiny to keep pace.

S is for Superintelligence & Shoggoths

Superintelligence is the term for machines that would vastly outstrip our own mental capabilities. This goes beyond “artificial general intelligence” to describe an entity with abilities that the world’s most gifted human minds could not match. In AI circles, such an entity is sometimes pictured as a “shoggoth” – a shapeshifting monster borrowed from the horror writer HP Lovecraft, which has become a meme for the alien, inscrutable intelligence some fear lies behind a chatbot’s friendly interface.

V is for Voice Cloning

Voice cloning is a fascinating and somewhat controversial area of AI. It involves creating a synthetic voice that sounds like a specific person. This technology has many potential uses, from creating personalised virtual assistants to dubbing movies. However, it also raises ethical concerns, as it could be used for malicious purposes, such as creating deepfake audio for scams or disinformation campaigns.

W is for Weak AI

Weak AI, also known as narrow AI, is designed to perform a specific task, such as voice recognition. These systems don’t possess any genuine intelligence or consciousness; they simply simulate human intelligence based on a set of rules and strategies. An example is IBM’s Deep Blue, which was programmed to play chess at an expert level but couldn’t perform any other task.

On the other hand, Strong AI, also known as Artificial General Intelligence (AGI), refers to a system that possesses the ability to understand, learn, adapt, and implement knowledge in a way that’s indistinguishable from human intelligence. AGI can apply intelligence to any problem, rather than just specific tasks.

X is for X-risk

Existential risk (X-risk) from AI is a topic of ongoing debate. Some experts argue that as AI systems become more powerful and autonomous, they could pose a threat to humanity if not properly controlled. This could occur, for example, if an AI system with harmful goals becomes superintelligent, or if competing AI systems trigger a destructive global arms race. It’s a complex issue that requires careful thought about AI safety, ethics, and governance.

Y is for YOLO (You Only Look Once)

YOLO is a real-time object detection system. In contrast to detectors that work in multiple stages – first proposing regions of an image, then classifying what’s in them – YOLO predicts objects and their locations in a single pass over the image, hence the name “You Only Look Once”. This makes it very fast, which is useful for applications where real-time detection matters, such as self-driving cars.
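
Here’s a hedged sketch of what that looks like in practice, using the ultralytics Python package (pip install ultralytics); the model weights and image path below are placeholders:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # a small pretrained YOLO model
results = model("street_scene.jpg")  # one forward pass over the image

# Every detected object comes back with a class and a confidence score.
for box in results[0].boxes:
    label = model.names[int(box.cls)]
    print(label, float(box.conf))    # e.g. "car 0.91"
```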

Z is for Zero-shot Learning

Zero-shot learning refers to the ability of an AI to recognise and understand objects or concepts it has never seen before, based on its existing knowledge. This is a significant area of research in AI, as it brings us closer to how humans learn and understand new concepts. However, it’s a challenging problem, and while progress is being made, AI systems are still far from matching human capability in this area.
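
A minimal sketch using the Hugging Face transformers library: the classifier below was never trained on these particular labels, yet it can rank them for a brand-new sentence by drawing on its general language knowledge:

```python
from transformers import pipeline

# A zero-shot text classifier built on a natural language inference model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The new graphics card renders 4K games at 120 fps.",
    candidate_labels=["technology", "cooking", "politics"],
)
print(result["labels"][0])  # most likely label: "technology"
```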