FAQ - Dentro de AI
A quick Q&A overview of the core concepts covered on Dentro de AI, with links to deeper resources across the site.

About this site
What is Dentro de AI?
- Dentro de AI (“Inside AI”) is a learning hub that explains modern AI in plain language - with a focus on what happens before, inside, and after the AI “black box”.
- Start here | About
Where should I start if I’m new to AI?
- Start with the guided learning path that stitches the site into a coherent journey (3 weeks, light on math, no coding bootcamp required).
- How to Learn AI in 3 Weeks
What does “BEFORE → INSIDE → AFTER” mean?
- It’s a mental model to separate:
  - BEFORE: how models are built (data, training, lifecycle),
  - INSIDE: how they operate (tokens, embeddings, attention),
  - AFTER: how we use them (prompts, context, workflows).
- How to Learn AI in 3 Weeks | AI Model Lifecycle (Car Analogy)
What content types exist on the site?
- Course, blog articles, glossary, curated visualizations, an AI timeline, an industry map (“big players”), and a news digest.
- Home | Blog | Glossary | Visualizations | Timeline | Big AI Players | AI News
AI basics
What is Artificial Intelligence (AI)?
- AI is the umbrella term for systems that can perform tasks that look intelligent (language, perception, decision support, automation).
- AI (Glossary) | Timeline
What is Machine Learning (ML)?
- ML is a subset of AI where systems learn patterns from data instead of being explicitly programmed with rules.
- Machine Learning (Glossary)
What is Deep Learning?
- Deep learning is ML using multi-layer neural networks - the approach behind most modern “big” AI breakthroughs.
- Deep Learning (Glossary) | Neural Network (Glossary)
What is an AI model?
- A model is a trained mathematical function that maps inputs to outputs (e.g., text-in → text-out) based on patterns learned during training.
- Model (Glossary) | AI Model Lifecycle (Car Analogy)
Large Language Models (LLMs)
What is a Large Language Model (LLM)?
- An LLM is a language-focused AI model trained on lots of text to predict the next token in a sequence - repeatedly - which produces fluent output.
- LLM (Glossary) | How LLMs Work
How do Large Language Models work (high level)?
- They:
  1) turn your text into tokens,
  2) compute next-token probabilities inside a transformer,
  3) sample tokens repeatedly to generate an answer.
- How LLMs Work | AI Visualizations
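A minimal Python sketch of that three-step loop, with stand-in functions instead of a real transformer (tokenize, next_token_probabilities, and the tiny vocabulary below are illustrative, not a real model API):

```python
import random

def tokenize(text):
    # Illustrative only: real tokenizers split text into subword tokens, not words.
    return text.split()

def next_token_probabilities(tokens):
    # Stand-in for a transformer forward pass: a probability for every token
    # in the vocabulary, conditioned on the tokens so far.
    vocab = ["Paris", "is", "the", "capital", "of", "France", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}  # uniform, just to show the shape

def generate(prompt, max_new_tokens=5):
    tokens = tokenize(prompt)                              # 1) text -> tokens
    for _ in range(max_new_tokens):
        probs = next_token_probabilities(tokens)           # 2) next-token probabilities
        candidates, weights = zip(*probs.items())
        tokens.append(random.choices(candidates, weights=weights)[0])  # 3) sample
    return " ".join(tokens)

print(generate("The capital of France is"))
```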
Why do LLMs sometimes “hallucinate”?
- Because they generate plausible text from learned patterns and the prompt context - not guaranteed truth. When the model lacks grounding, it can confidently produce incorrect details.
- How LLMs Work | RAG (Glossary)
Why are LLMs sometimes bad at counting letters or doing exact math?
- Because they don’t “see” text as letters or numbers; they see tokens. Token splitting can make “simple” tasks weird (e.g., counting letters vs counting token chunks).
- Token (Glossary) | AI Visualizations (Tiktokenizer)
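To see the token view for yourself, a small sketch assuming the optional tiktoken package is installed (the exact split depends on which tokenizer a given model uses):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI tokenizer
word = "strawberry"
ids = enc.encode(word)
pieces = [enc.decode([i]) for i in ids]

print(len(word), "letters:", list(word))
print(len(ids), "tokens:", pieces)  # the model works with these chunks, not letters
```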
Tokens, vocabulary, embeddings
What is a token?
- A token is a chunk of text (sometimes a word, subword, punctuation, or part of a number) used as the basic unit the model processes.
- Token (Glossary) | AI Visualizations (Tiktokenizer)
What is tokenization?
- Tokenization is the conversion of raw text into a sequence of token IDs from a fixed vocabulary.
- Vocabulary (Glossary) | How LLMs Work
What is a model vocabulary?
- The vocabulary is the model’s fixed “dictionary” of allowed tokens (each with an ID). Output is always chosen from this set.
- Vocabulary (Glossary) | How to Learn AI
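A toy illustration of a fixed vocabulary and tokenization into IDs (this six-token vocabulary is made up; real ones hold tens of thousands of subword tokens):

```python
# Made-up vocabulary: every allowed token has a fixed ID.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize_to_ids(text):
    # Whitespace splitting stands in for real subword tokenization.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize_to_ids("The cat sat on the mat"))  # [1, 2, 3, 4, 1, 5]
```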
What is an embedding?
- An embedding is a vector (a list of numbers) that represents a token (or other data) in a way the model can compute with.
- Embedding (Glossary) | AI Visualizations
What is an embedding space?
- An embedding space is the geometric “map” where embeddings live; distances and directions can capture similarity and relationships.
- Embedding Space (Glossary)
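A small sketch of embeddings as vectors and "closeness" in embedding space as cosine similarity, using made-up 3-dimensional vectors (real embeddings are learned and have hundreds or thousands of dimensions):

```python
import numpy as np

# Made-up 3-dimensional embeddings for three words.
embeddings = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high: semantically close
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low: far apart
```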
Transformers, attention, parameters
What is a transformer?
- A transformer is the architecture powering most modern LLMs. It processes token sequences using attention to decide what matters in context.
- Transformer Architecture (Glossary) | AI Visualizations
What is attention?
- Attention is the mechanism that lets the model weigh which previous tokens are most relevant when producing the next token.
- Attention Mechanism (Glossary) | AI Visualizations
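A minimal NumPy sketch of scaled dot-product attention, the core of what the glossary entries describe (shapes and values are made up; real models add learned projections, multiple heads, and masking):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query scores every key; larger scores mean "attend more to that token".
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)   # one weight per token in the sequence
    return weights @ V                   # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): one blended vector per token
```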
What are parameters (model weights)?
- Parameters are the learned numbers inside the network that store the model’s capabilities after training. More parameters usually means more capacity (not automatically “better”).
- Parameters (Glossary) | AI Model Lifecycle
Training vs inference
What is training?
- Training is the compute-heavy process where a model’s parameters are updated based on data so it learns patterns.
- Training (Glossary) | AI Model Lifecycle
What is inference?
- Inference is the “usage” phase: the trained model is run to generate outputs. Parameters don’t change during inference.
- Inference (Glossary) | AI Model Lifecycle
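A tiny contrast of the two phases using gradient descent on a one-parameter model; nothing LLM-specific, just to show that parameters move during training and stay frozen during inference:

```python
# Training: the parameter w is repeatedly nudged to fit the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relationship: y = 2x
w = 0.0
for _ in range(100):
    for x, y in data:
        grad = 2 * (w * x - y) * x   # gradient of the squared error w.r.t. w
        w -= 0.01 * grad             # update the parameter

# Inference: the trained w is only read, never updated.
def predict(x):
    return w * x

print(round(w, 3), predict(10.0))  # w ends up near 2.0, so the prediction is near 20.0
```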
Does an LLM learn from my prompts while I’m chatting?
- Not in the moment. During inference, the model uses your prompt as context but does not update its weights.
- Inference (Glossary) | How to Learn AI
Prompting, context, and output control
What is a prompt?
- A prompt is the input you provide (instruction + context + constraints). It strongly shapes what the model generates.
- Prompt Engineering (Glossary) | How to Learn AI
What is “prompt engineering”?
- Prompt engineering is the practice of writing prompts so the model has the right role, task, context, and output format - reliably.
- Prompt Engineering (Glossary) | How to Learn AI
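One common way to assemble a prompt programmatically, shown as an illustrative Python template (the role/task/context/format breakdown is a convention, not a required API):

```python
def build_prompt(role, task, context, output_format):
    # The model ultimately sees one block of text; structure just makes it clearer.
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context:\n{context}\n"
        f"Answer format: {output_format}"
    )

print(build_prompt(
    role="a patient teacher explaining AI to beginners",
    task="explain what a token is in two sentences",
    context="The reader has never heard of tokenization.",
    output_format="plain text, no jargon",
))
```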
What is a context window?
- The context window is how much text (tokens) the model can consider at once. If you exceed it, older context is dropped or compressed.
- Context (Glossary) | How to Learn AI
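A minimal sketch of why older context falls away, using a naive "keep the most recent tokens" strategy (counting tokens by whitespace words is a simplification):

```python
CONTEXT_WINDOW = 8  # real windows are thousands to millions of tokens

def fit_to_window(tokens, window=CONTEXT_WINDOW):
    # Naive strategy: keep only the most recent tokens that still fit.
    return tokens[-window:]

history = "earlier you asked about the capital of France and then we discussed cheese".split()
print(fit_to_window(history))  # the oldest part of the conversation is dropped
```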
Why do answers vary even with the same prompt?
- Because generation often uses sampling (controlled randomness). Even small changes in sampling settings can change outputs.
- Deterministic (Glossary) | Temperature (Glossary)
What is temperature?
- Temperature controls randomness: lower = more consistent and conservative; higher = more diverse and creative (but riskier).
- Temperature (Glossary)
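A small sketch of how temperature reshapes next-token probabilities before sampling (the logit values for the three candidate tokens are made up):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits) / temperature
    e = np.exp(scaled - scaled.max())
    return e / e.sum()

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
for t in (0.2, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t).round(3))
# Low temperature sharpens the distribution (the top token almost always wins);
# high temperature flattens it (more variety, more risk).
```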
Model types and adaptation
What is a foundation model?
- A foundation model is a large, general-purpose model trained broadly and then adapted for many tasks via prompting or additional training.
- Foundation Model (Glossary)
What is fine-tuning?
- Fine-tuning updates a pre-trained model with additional task/domain data to specialize its behavior.
- Fine-tuning (Glossary)
What is an instruction-tuned model?
- An instruction-tuned model is trained further so it follows instructions and behaves more like an assistant than a raw text completer.
- Instruction-Tuned Model (Glossary) | How LLMs Work
What is multimodal AI?
- Multimodal models can handle more than text (e.g., images + text, sometimes audio/video), enabling richer workflows.
- Multimodal AI (Glossary) | Timeline
RAG, tools, and agents
What is RAG (Retrieval-Augmented Generation)?
- RAG adds external documents/search results to the prompt context so the model can answer with fresher or more specific information.
- RAG (Glossary)
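A bare-bones sketch of the RAG idea, with keyword-overlap retrieval standing in for embedding search (real systems use an embedding model and a vector database; the documents and question here are made up):

```python
documents = [
    "Dentro de AI explains modern AI in plain language.",
    "The context window limits how many tokens a model can read at once.",
    "Temperature controls how random the sampled output is.",
]

def score(question, doc):
    # Crude keyword overlap as a stand-in for embedding similarity.
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question, k=1):
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

question = "How does temperature change the output?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # the retrieved passage is added to the prompt before generation
```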
What is an AI agent?
- An agent is a system that can plan and take actions (often by calling tools/APIs) to achieve goals - not just chat.
- Agent (Glossary) | Agentic AI (Glossary)
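A minimal sketch of the agent loop: a (fake) model proposes an action, the system runs a tool, and the result is fed back in until the goal is met. The tool and the hard-coded decisions are hypothetical scaffolding, not a real agent framework:

```python
def calculator(expression):
    # The single "tool" this toy agent is allowed to call.
    return str(eval(expression, {"__builtins__": {}}, {}))  # restricted eval, demo only

def fake_model(goal, observations):
    # Stand-in for an LLM choosing the next step from the goal and past tool results.
    if not observations:
        return {"action": "calculator", "input": "12 * 7"}
    return {"action": "finish", "input": f"The answer is {observations[-1]}."}

def run_agent(goal):
    observations = []
    while True:
        step = fake_model(goal, observations)   # plan the next action
        if step["action"] == "finish":
            return step["input"]
        result = calculator(step["input"])      # act: call the tool
        observations.append(result)             # observe, then loop

print(run_agent("What is 12 times 7?"))  # -> "The answer is 84."
```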
Open source, open weights, and the ecosystem
What’s the difference between “open source” and “open weights” for AI models?
- Many models called “open source” are actually “open weights”: you can run the model, but training data/process aren’t fully disclosed.
- Open Source vs Open Weights
Who are the “big players” in AI?
- AI is shaped by big tech incumbents, specialized AI labs, and infrastructure enablers (chips, platforms, open model hubs).
- Big AI Players
Learning & staying current
Where can I see AI concepts visually?
- Use the curated set of interactive tools and videos (tokenizers, transformer explainers, attention visualizers).
- AI Visualizations
Where can I learn “how we got here” historically?
- The timeline mixes research milestones, product moments, and cultural touchpoints to show the path to modern AI.
- AI Timeline
Does Dentro de AI track what’s happening right now in AI?
- Yes - there’s a news digest of key launches, releases, and industry events.
- AI News
Glossary & terminology
Where do I look up AI terms quickly?
- Use the one-page glossary and jump via the table of contents (or site search).
- AI Glossary