AI Term: Sequence-to-Sequence Models

Sequence-to-Sequence models, often abbreviated as Seq2Seq, are a type of model used in machine learning, a branch of artificial intelligence. These models work like translators: they transform one series of things (a “sequence”) into another series of things.

Imagine you have a comic strip, which is a sequence of pictures telling a story. Now, you want to tell that story to your friend, but instead of showing them the pictures, you use words. So, you look at the first picture and describe it, then the second, and so on, until you’ve turned the whole comic strip (a sequence of pictures) into a story (a sequence of words). That’s kind of what a Seq2Seq model does!

In AI, these models are often used for tasks like translating a sentence from one language to another or turning a question into an answer. They look at the first sequence (like the English sentence or the question) and figure out how to turn it into the second sequence (like the French sentence or the answer).
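
For readers who want a more concrete picture: in practice, most Seq2Seq models have two parts, an encoder that reads the input sequence into an internal summary, and a decoder that expands that summary into the output sequence. The sketch below shows this encoder-decoder idea in PyTorch; the vocabulary sizes, layer dimensions, GRU layers, and variable names are illustrative assumptions rather than details from this entry.

```python
import torch
import torch.nn as nn

# Minimal Seq2Seq sketch: an encoder summarizes the input sequence,
# a decoder turns that summary into the output sequence.
# All sizes below are illustrative assumptions.

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids; return the final hidden state
        # as a summary of the whole input sequence
        _, hidden = self.rnn(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) token ids; hidden: the encoder's summary
        output, _ = self.rnn(self.embed(tgt), hidden)
        return self.out(output)  # scores over the target vocabulary

# Example: map 8-token "source" sequences to 10-token "target" sequences.
encoder = Encoder(vocab_size=1000, emb_dim=64, hidden_dim=128)
decoder = Decoder(vocab_size=1200, emb_dim=64, hidden_dim=128)
src = torch.randint(0, 1000, (2, 8))    # e.g. English token ids
tgt = torch.randint(0, 1200, (2, 10))   # e.g. French token ids
logits = decoder(tgt, encoder(src))     # shape (2, 10, 1200)
```

In training, the decoder's predicted scores would be compared against the true target tokens; at inference time, the decoder generates the output one token at a time, feeding each prediction back in as its next input.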
