AI Term: Zero-shot, One-shot, and Few-shot Learning


Zero-shot, one-shot, and few-shot learning are machine learning methods for making predictions about new classes or categories of data that the model hasn’t explicitly seen during training. The terms are closely associated with transfer learning, where the goal is to apply knowledge learned in one context to another.

Here’s a more detailed look at each:

  1. Zero-shot Learning: This refers to the ability of a machine learning model to correctly infer or generalize about a class that it has never seen before during training. For example, if a language model like GPT-3 is asked to generate a story about a “flibberfudge” (a word it has never seen before), it might infer from the context or the structure of the word that it’s some kind of fanciful creature or object and generate a story accordingly. This demonstrates zero-shot learning.
  2. One-shot Learning: In one-shot learning, the model is able to make accurate predictions about a new class after seeing just one example of that class. For example, if you show a model a single image of a specific type of bird it has never seen before and then ask it to recognize that bird in other images, it’s demonstrating one-shot learning. This is a challenging task as the model must extract the right features from a single example to recognize it in other contexts.
  3. Few-shot Learning: Few-shot learning extends the concept of one-shot learning: the model generalizes about a new class after seeing only a handful of examples of that class. For instance, a machine learning model might be trained on an image dataset that includes no pictures of zebras. If we then show it a small number (say, five) of zebra images and it correctly identifies zebras in other images, it has demonstrated few-shot learning.
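In the context of prompting a language model, the three settings above differ only in how many solved examples are included before the query. A minimal sketch, using a made-up sentiment task and invented example sentences:

```python
# Sketch: how zero-, one-, and few-shot setups differ when prompting an LLM.
# The task description and sentences below are hypothetical, for illustration.

task = "Classify the sentiment of the sentence as positive or negative."

# Zero-shot: the task description alone, with no solved examples.
zero_shot = f"{task}\nSentence: The film was a delight.\nSentiment:"

# One-shot: a single solved example precedes the query.
one_shot = (
    f"{task}\n"
    "Sentence: I loved the soundtrack.\nSentiment: positive\n"
    "Sentence: The film was a delight.\nSentiment:"
)

# Few-shot: several solved examples precede the query.
examples = [
    ("I loved the soundtrack.", "positive"),
    ("The plot made no sense.", "negative"),
    ("A delight from start to finish.", "positive"),
]
few_shot = task + "\n" + "".join(
    f"Sentence: {s}\nSentiment: {label}\n" for s, label in examples
) + "Sentence: The film was a delight.\nSentiment:"

print(few_shot)
```

The model itself is unchanged in all three cases; only the prompt supplies zero, one, or a few demonstrations of the task.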

These concepts are particularly important in scenarios where we have limited labeled data for certain classes. Traditional machine learning models often require large amounts of labeled data to make accurate predictions, but zero-shot, one-shot, and few-shot learning offer ways to work around this requirement. However, these methods are more challenging and may not always produce results as accurate as those of traditional methods trained on large amounts of data.
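One common way to make predictions from a single labeled example per class is nearest-neighbour matching in a feature space: classify a query by which class's lone example it most resembles. A minimal sketch, using toy 3-dimensional "embedding" vectors invented for illustration rather than real learned features:

```python
import math

# Sketch: one-shot classification as nearest-neighbour matching.
# The vectors below are hypothetical toy embeddings, not real features.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Support set: exactly one labelled example (one "shot") per class.
support = {
    "sparrow": [0.9, 0.1, 0.2],
    "eagle":   [0.2, 0.8, 0.6],
}

def classify(query):
    # Pick the class whose single example is most similar to the query.
    return max(support, key=lambda label: cosine(query, support[label]))

print(classify([0.85, 0.15, 0.25]))  # nearest to the sparrow example
```

Few-shot learning follows the same idea with a handful of examples per class, for instance by comparing the query against each class's average embedding.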
