Common Terminology for LLMs


| Term | Description |
| --- | --- |
| Embedding | Representation of words or phrases as vectors in a multi-dimensional space, capturing semantic relationships (sketched after this table). |
| Tokenization | The process of breaking text down into smaller units (tokens), such as words or subwords, before the model processes it (sketched after this table). |
| Moderation | A control mechanism to ensure the language model generates appropriate and respectful content. |
| Foundation Model | A large model pre-trained on broad data that serves as the starting point for further customization, such as fine-tuning. |
| In-Context Learning | The ability of a model to pick up a task from the instructions and examples provided in the prompt, without any updates to its weights. |
| One-Shot Learning | Learning a task from a single example in the prompt rather than from extensive training data; learning from a handful of examples is called few-shot learning. |
| RAG (Retrieval-Augmented Generation) | An architecture that combines retrieval-based and generation-based approaches: relevant documents are fetched first, and the model generates its answer grounded in them (sketched below). |
| Perplexity | A measure of how well a language model predicts a sample of text, with lower values indicating better performance (computed below). |
| Fine-Tuning | The process of further training a pre-trained language model on specific tasks or datasets to improve performance. |
| Attention Mechanism | A neural-network component that weights the relevance of different parts of the input when producing each output, crucial for language understanding (sketched below). |
| Transformer | A deep learning architecture built around attention, known for handling sequential data efficiently and widely used in language processing tasks. |
| Beam Search | A decoding algorithm used in text generation that keeps the most likely partial sequences at each step to find a high-probability output (sketched below). |
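
To make the first two terms concrete, here is a minimal Python sketch of greedy subword tokenization followed by an embedding lookup. The vocabulary, the greedy longest-match rule, and the 8-dimensional vectors are all toy assumptions for illustration; production tokenizers (e.g. BPE) learn tens of thousands of subword units, and embedding matrices are learned during training.

```python
import numpy as np

# Hypothetical toy vocabulary: real tokenizers learn subword units from data.
vocab = {"un": 0, "believ": 1, "able": 2, "token": 3, "ization": 4}

def tokenize(text):
    """Greedy longest-match subword tokenization over the toy vocab."""
    tokens = []
    while text:
        for end in range(len(text), 0, -1):  # try the longest piece first
            if text[:end] in vocab:
                tokens.append(text[:end])
                text = text[end:]
                break
        else:
            raise ValueError(f"cannot tokenize: {text!r}")
    return tokens

# Embedding: each token id indexes a row of a matrix of vectors.
# 8 dimensions here for readability; real models use hundreds or thousands.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 8))

tokens = tokenize("unbelievable")    # ['un', 'believ', 'able']
ids = [vocab[t] for t in tokens]     # [0, 1, 2]
vectors = embedding_matrix[ids]      # shape (3, 8): one vector per token
print(tokens, vectors.shape)
```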
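
A retrieval-augmented generation loop can be sketched in a few lines. The word-overlap retriever and the `generate` stub below are hypothetical stand-ins; real systems rank documents by embedding similarity and send the assembled prompt to an actual LLM.

```python
docs = [
    "Tokenization splits text into subwords.",
    "Beam search keeps the k best partial sequences.",
    "Perplexity measures how well a model predicts text.",
]

def retrieve(query, docs, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt):
    # Stand-in for an LLM call; a real system would send `prompt` to a model.
    return f"[model answer grounded in: {prompt!r}]"

query = "How does beam search work?"
context = retrieve(query, docs)
print(generate(f"Context: {context}\nQuestion: {query}"))
```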
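
Perplexity has a simple closed form: the exponential of the average negative log-probability the model assigns to each token. The per-token probabilities below are made up, purely to show how a confident and an uncertain model compare.

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability; lower is better."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)

# Hypothetical per-token probabilities a model assigned to a sample.
confident = [0.9, 0.8, 0.95, 0.85]
uncertain = [0.2, 0.1, 0.3, 0.25]
print(perplexity(confident))  # ~1.15 (good)
print(perplexity(uncertain))  # ~5.1  (worse)
```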
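
The attention mechanism used in transformers is typically scaled dot-product attention: each query is compared against all keys, the scores become weights via a softmax, and the output is a weighted sum of the values. A bare-bones numpy version (the shapes are arbitrary illustration values):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted sum of values, one row per query

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 16))  # 4 query positions, d_k = 16
K = rng.normal(size=(6, 16))  # 6 key positions
V = rng.normal(size=(6, 16))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 16): one context vector per query
```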
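
Finally, beam search keeps only the `beam_width` highest-scoring partial sequences at each step instead of exploring every continuation. The `next_token_probs` function below is a hypothetical stand-in for a real model's next-token distribution over its vocabulary.

```python
import math

def beam_search(next_token_probs, start, steps, beam_width=2):
    """Keep the beam_width best partial sequences by total log-probability."""
    beams = [([start], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for token, p in next_token_probs(seq).items():
                candidates.append((seq + [token], score + math.log(p)))
        # Prune to the beam_width most likely extensions.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

# Hypothetical toy distribution: a real model would condition on `seq`.
def next_token_probs(seq):
    return {"a": 0.6, "b": 0.3, "c": 0.1}

for seq, score in beam_search(next_token_probs, "<s>", steps=3):
    print(seq, round(math.exp(score), 3))
```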
