<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    
    <title>dentro.de/ai</title>
    <description>Dentro de AI - Educational Resource to make Artificial Intelligence more Understandable</description>
    <link>https://dentro.de/ai/</link>
    <atom:link href="https://dentro.de/ai/feed_rss_created.xml" rel="self" type="application/rss+xml" />

    
    
    
    <language>en</language>

    
    <pubDate>Mon, 02 Mar 2026 10:33:33 -0000</pubDate>
    <lastBuildDate>Mon, 02 Mar 2026 10:33:33 -0000</lastBuildDate>
    <ttl>1440</ttl>

    
    <generator>MkDocs RSS plugin - v1.17.9</generator>

    
    
    <image>
      <url>https://dentro.de/ai/images/DENTROID_Logo.png</url>
      <title>dentro.de/ai</title>
      <link>https://dentro.de/ai/</link>
    </image>
    

    
    
    <item>
      <title>Claude Code: The Revolution Nobody Noticed</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;claude-code-the-revolution-nobody-noticed&#34;&gt;Claude Code: The Revolution Nobody Noticed&lt;/h1&gt;
&lt;h2 id=&#34;how-a-terminal-tool-quietly-kicked-off-the-age-of-ai-agents&#34;&gt;How a terminal tool quietly kicked off the age of AI agents&lt;/h2&gt;
&lt;p&gt;On February 24, 2025, Anthropic released a research preview of Claude Code - a command-line tool that let an AI read files, run commands, and write code from a terminal window. There was no launch event and no viral demo, which is why the world did not notice. The mainstream was busy playing with ChatGPT, and a developer tool living in a terminal with a blinking cursor was not accessible to it.&lt;/p&gt;
&lt;p&gt;But looking back, something fundamentally different had happened. For the first time, an AI system was not just answering questions - it was taking action. It could read and understand a codebase, form a plan, execute it step by step, write tests to check whether things worked, and course-correct when they did not. Not in a research lab, but in real working environments across the developer community. A genuine category shift, and almost nobody outside the software developer bubble noticed it.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Claude Code 1st Anniversary: the revolution nobody noticed&#34; src=&#34;../../../../claude-code-revolution/images/Claude_Code_1st_Anniversary.png&#34; /&gt;&lt;/p&gt;
&lt;!-- more --&gt;

&lt;hr /&gt;
&lt;h2 id=&#34;the-day-ai-learned-to-act&#34;&gt;The Day AI Learned to Act&lt;/h2&gt;
&lt;p&gt;To understand what made Claude Code different, it helps to understand what came before it. Every major AI tool on the market - ChatGPT, Gemini, Copilot - was built on the same foundation: &lt;a href=&#34;../../../../2025/04/25/how-ai-large-language-models-llm-work/&#34;&gt;next-token prediction&lt;/a&gt;. You provide a prompt, the model calculates the statistically most likely sequence of tokens that should follow, and it outputs that sequence as text. It is an extraordinarily powerful mechanism, but at its core the model is a reactive function - it takes input and produces output, and then it waits for more input. Even GitHub Copilot, which felt revolutionary when it arrived in 2021, was essentially autocomplete on steroids. It predicted what you might type next, but it never understood the project it was working in, never ran anything, and had no way of knowing whether its suggestions actually worked.&lt;/p&gt;
&lt;p&gt;Claude Code broke that pattern. When a developer described a problem, the system would start by exploring the codebase on its own, reading files and understanding how components connected before touching anything. It would then form a plan, make edits, run tests, read the error output if something failed, and adjust its approach accordingly. This loop of planning, acting, observing, and adjusting is what the AI industry now calls &#34;agentic behavior,&#34; and with Claude Code it went from being a research concept to something a developer could install and use on a regular workday.&lt;/p&gt;
&lt;p&gt;The underlying model at launch was Claude 3.7 Sonnet - capable but not yet the powerhouse that later versions would become. The experience had rough edges and not every task landed cleanly, but even in that early form the difference was unmistakable for anyone who tried it. This was no longer a text prediction engine producing suggestions for a human to evaluate and apply, but a system that could work through problems on its own and deliver working results.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;one-brain-one-body&#34;&gt;One Brain, One Body&lt;/h2&gt;
&lt;p&gt;Before Claude Code, the AI landscape had a clear division of labor. Companies like OpenAI, Google, and Anthropic built foundation models and exposed them through APIs, while separate companies like Cursor, GitHub with Copilot, and various open-source projects built the tools that developers actually used by plugging into whichever model API suited them best. The brain and the body came from different manufacturers, and the tool makers had to work around the limitations and quirks of models they did not control.&lt;/p&gt;
&lt;p&gt;Anthropic did something different with Claude Code. They built both the model and the tool that wraps around it, optimized end to end for each other. The closest analogy might be Apple, which famously designs its own chips and its own operating system so that hardware and software work together in ways that competitors assembling third-party components cannot easily replicate. With Claude Code, Anthropic created the same kind of vertical integration for agentic AI - the model&#39;s capabilities and the harness around it were designed as one coherent system, not bolted together from separate parts.&lt;/p&gt;
&lt;p&gt;This turned out to be a significant competitive advantage. When the model improved, the tool could immediately exploit those improvements without waiting for a third party to catch up. When the tool needed specific model behavior, Anthropic could tune both sides of the equation simultaneously. The result was a tighter feedback loop between model capability and tool design than anything else on the market at the time, and it showed in the experience.&lt;/p&gt;
&lt;p&gt;The competition noticed quickly. Within months, every major lab rushed to build their own vertically integrated equivalent - same company, same brain, same body. The playbook Anthropic had introduced became the template.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;nobody-noticed&#34;&gt;Nobody Noticed&lt;/h2&gt;
&lt;p&gt;Given the magnitude of this shift, it is worth understanding how it managed to fly under the radar of mainstream attention for so long. The answer is a combination of factors that almost seem designed to keep a revolution invisible.&lt;/p&gt;
&lt;p&gt;The most immediate one is the interface itself. Claude Code lives in a terminal - a black screen with monospaced text that most people associate with hackers in movies or IT departments they try to avoid. There is no colorful UI or friendly onboarding wizard to ease you in. For someone outside the software development world, even looking at a terminal feels like staring into a foreign language. Anthropic built something genuinely transformative and then wrapped it in the one interface guaranteed to keep non-developers away.&lt;/p&gt;
&lt;p&gt;Then there is the audience. Claude Code was explicitly built for software developers, and developers are a community that largely talks to itself. The excitement was real and immediate inside that bubble, with people sharing workflows and building extensions and rethinking how they approach entire projects. But developer enthusiasm rarely translates into mainstream headlines. The tech press covered it as a product launch among many, not as the paradigm shift it turned out to be. And the broader business press, which shapes how executives and decision-makers understand technology trends, barely registered it at all.&lt;/p&gt;
&lt;p&gt;Timing played a role too. In February 2025, the public conversation about AI was dominated by chatbot comparisons and the mental model most people had for AI was a chat window where you ask questions and get answers. A tool that worked autonomously in a terminal did not fit that frame, so it was easy to dismiss as a niche developer utility rather than recognizing it as the first real-world implementation of a fundamentally new way AI could operate.&lt;/p&gt;
&lt;p&gt;And finally, the models themselves were not yet at the level where the agentic experience felt consistently reliable. Claude 3.7 Sonnet was good enough to demonstrate the concept convincingly, but it still stumbled on complex tasks often enough that even enthusiastic early adopters treated it as promising rather than production-ready. The gap between &#34;this is impressive&#34; and &#34;this changes everything&#34; had not yet closed, which made it easier for the wider world to look the other way.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-race-that-followed&#34;&gt;The Race That Followed&lt;/h2&gt;
&lt;p&gt;Before Claude Code, coding was not where the AI competition was playing out. The big labs were focused on making their chatbots smarter and more versatile, and while coding assistance existed as a feature in that broader race, nobody treated it as a battleground in its own right. Claude Code changed that dynamic, because within months the ability to autonomously develop, test, and deploy a working application from a single prompt became the new frontier that every major lab felt compelled to compete on.&lt;/p&gt;
&lt;p&gt;OpenAI released Codex CLI in April 2025, two months after Claude Code&#39;s preview - their own terminal agent built on o3 and o4-mini. Google followed in June with Gemini CLI on Gemini 2.5 Pro. Both adopted the same fundamental architecture of a terminal-based agent loop that Claude Code had established. Meanwhile, the broader ecosystem of independent tools - Cursor, Windsurf, Kilo Code, OpenCode - all shifted their trajectory toward deeper agentic capabilities. OpenCode in particular grew into one of the most popular open-source alternatives, model-agnostic and free from vendor lock-in, with over 70,000 GitHub stars by early 2026. The entire industry pivoted.&lt;/p&gt;
&lt;p&gt;Anthropic had already laid additional groundwork before Claude Code even launched. In November 2024, they introduced the Model Context Protocol (MCP) as an open standard for connecting AI agents with external tools and data sources. Claude Code supported MCP from early on, allowing it to reach beyond the local codebase and interact with databases, APIs, and third-party services. As the agentic coding space grew throughout 2025, MCP saw broad adoption across the industry - including Google&#39;s Gemini Code Assist and OpenAI&#39;s ecosystem. Having a shared open protocol for how agents interact with the outside world turned out to be just as important as the agents themselves.&lt;/p&gt;
&lt;p&gt;The financial picture confirmed the scale of what was happening. By September 2025, Claude Code usage had grown more than tenfold in three months, reaching over 500 million dollars in annualized revenue. For a terminal tool that most of the world had never heard of, those are platform-level numbers.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;when-the-walls-came-down&#34;&gt;When the Walls Came Down&lt;/h2&gt;
&lt;p&gt;Everything described so far happened inside the developer community, which is impressive and fast-moving but still contained within a professional group that was already comfortable with terminals and code. The truly radical shift began when people who had never written a line of code in their lives started using these tools - and it worked.&lt;/p&gt;
&lt;p&gt;The combination of dramatically improved models and increasingly polished agentic tools lowered the barrier to the point where technical skill was no longer a prerequisite. Someone with a clear idea of what they wanted could sit down in front of Claude Code, describe an application in plain language, and watch it get built in front of them without touching a single line of code or even fully understanding what the code does. They did not need to. The application worked, and that was enough.&lt;/p&gt;
&lt;p&gt;This is a remarkable inversion of how software has been made for decades. Traditionally, building an application required either learning to code yourself or hiring someone who could. Both options involved significant investment of time or money, which meant that the vast majority of ideas people had for tools and applications simply never got built. The agentic tools removed that bottleneck, and in doing so they unlocked a population of potential creators who had always had the ideas but never had the means to realize them.&lt;/p&gt;
&lt;p&gt;By late 2025 and into early 2026, this was no longer a theoretical possibility but something happening in practice. Non-technical professionals were building internal tools for their teams and automating workflows that had previously required IT support or weeks of dedicated developer time. The terminal, which had been the very thing that kept this revolution invisible to the mainstream, was becoming a surprisingly accessible workspace for people who would never have imagined themselves using one.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;from-chatbot-to-agent&#34;&gt;From Chatbot to Agent&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&#34;../../../../2025/11/30/developing-3-week-ai-curriculum/&#34;&gt;AI curriculum&lt;/a&gt; I developed at &lt;a href=&#34;https://dentro.de/ai&#34;&gt;dentro.de/ai&lt;/a&gt; is structured around a simple mental model: &lt;a href=&#34;/ai/how_to_learn_ai/&#34;&gt;BEFORE → INSIDE → AFTER&lt;/a&gt; the AI black box. BEFORE is the large-scale training phase where the model gets built. INSIDE is the trained model itself, a &lt;a href=&#34;../../../../2025/04/25/how-ai-large-language-models-llm-work/&#34;&gt;mathematical function&lt;/a&gt; that takes input and produces output. AFTER is what happens when you use it. In the chatbot era, this was a linear sequence: your prompt went INSIDE the model, a response came out AFTER, and then it was up to you to do something with that response before sending the next prompt. A one-way flow with the human in control at every step.&lt;/p&gt;
&lt;p&gt;What Claude Code changed is that INSIDE and AFTER became a loop. The model produces not just text but an action - editing a file, running a command. The result of that action feeds back as new context, the model processes it again, produces the next action, and keeps going until the task is complete. The model INSIDE did not change - it is still &lt;a href=&#34;../../../../2025/04/25/how-ai-large-language-models-llm-work/&#34;&gt;predicting the next token&lt;/a&gt;. What changed is that the AFTER layer stopped being a dead end and became an autonomous feedback loop. From a straight line to a circle. That is the shift from chatbot to agent, and it is what I described in my earlier writing as the difference between &lt;a href=&#34;../../../01/08/ai-adoption-vs-ai-transformation/&#34;&gt;AI adoption and AI transformation&lt;/a&gt;. When a developer stops reading the code and simply evaluates whether the application works as intended, they have crossed that line. They are no longer doing the work with AI assistance - they have become the orchestrator.&lt;/p&gt;
&lt;p&gt;Most people have not yet come to terms with what chatbot-era AI means for their work. They are still attending workshops on prompt engineering and debating whether AI is overhyped, while AI has already moved on to a fundamentally different paradigm. The transition from reactive chatbot to autonomous agent accelerated out of sight, in a space most people never look at. Developers had a year-long head start, and their background in mathematics and IT gives them a natural advantage in grasping what is happening - even though many of them do not fully understand how AI functions inside the black box either. For everyone else, the gap is not just wider - it is growing in a direction they cannot even see. They are still playing catch-up on the chatbot era while the agentic era accelerates in the dark, and by the time it surfaces into mainstream awareness there will be no gentle on-ramp. This is exactly why I built dentro.de/ai - to help people build that understanding before the ground shifts under them.&lt;/p&gt;
&lt;p&gt;When ChatGPT launched in November 2022, it was immediately recognized as an inflection point - the &lt;a href=&#34;/ai/timeline/&#34;&gt;&#34;iPhone Moment&#34;&lt;/a&gt; that brought AI into mainstream consciousness. Claude Code&#39;s launch in February 2025 deserves a similar place on that &lt;a href=&#34;/ai/timeline/&#34;&gt;timeline&lt;/a&gt;, but for the opposite reason. Where ChatGPT was the most visible AI event in history, Claude Code was one of the least visible, and yet the shift it represents - from AI that talks to AI that acts - may prove to be the more consequential one. I described the &lt;a href=&#34;../../../01/19/ai-adoption-is-a-trap/&#34;&gt;AI adoption trap&lt;/a&gt; as optimizing for today locking you out of tomorrow. Claude Code&#39;s first year is a concrete illustration. The people who recognized the chatbot-to-agent shift early are positioned for what comes next. For everyone else, the good news is that catching up is still possible and not as difficult as it might seem - the underlying concepts are learnable, and understanding them changes how you see everything that is happening around you.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;The &lt;a href=&#34;/ai/how_to_learn_ai/&#34;&gt;dentro.de/ai curriculum&lt;/a&gt; covers everything from how models are built to how they behave when you use them, structured around the BEFORE → INSIDE → AFTER framework. It is designed for anyone who wants to understand AI at a level that makes shifts like this one visible before they become unavoidable.&lt;/em&gt;&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2026/02/24/claude-code-the-revolution-nobody-noticed/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2026/02/24/claude-code-the-revolution-nobody-noticed/</guid>
      
<enclosure url="https://dentro.de/ai/images/claude-code-revolution.jpeg" type="image/jpeg" length="0" />
      
    </item>
    
    <item>
      <title>AI Adoption is a Trap</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;ai-adoption-is-a-trap&#34;&gt;AI Adoption is a Trap&lt;/h1&gt;
&lt;h2 id=&#34;optimizing-for-today-might-lock-you-out-of-tomorrow&#34;&gt;Optimizing for Today Might Lock You Out of Tomorrow&lt;/h2&gt;
&lt;p&gt;Everyone tells you to implement AI. It is what the conference speakers promote, what competitors are doing, where the money is being made, where the big headlines are. And they are not wrong - AI delivers real productivity gains, cost savings and a strong feeling of competitive advantage. It seems like the right thing to do.&lt;/p&gt;
&lt;p&gt;It sounds paradoxical, but that is precisely why it is a trap.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;AI Adoption is a Trap&#34; src=&#34;../../../../ai-adoption-is-a-trap/images/ai-adoption-is-a-trap.png&#34; /&gt;&lt;/p&gt;
&lt;!-- more --&gt;

&lt;p&gt;Not because AI is overhyped, or because the productivity gains are smaller than vendors promise. The trap is more subtle: by adopting AI to optimize your current business, you lock yourself into today&#39;s structures that may not survive what comes next.&lt;/p&gt;
&lt;p&gt;You will not hear this argument at AI conferences. The stages are filled with people selling adoption. The insight that adoption itself might be the problem has no natural advocate - no one profits from saying it (aka you read it here first!).&lt;/p&gt;
&lt;p&gt;Most companies assume that using AI today is all that is needed to prepare for an AI-driven future. They deploy chatbots, automate workflows, accelerate content creation. Almost all of this is optimizing the present, not preparing for the future. A company using AI to make selected processes faster is not building toward a future where AI handles complete business functions autonomously. It is doing what it already does, slightly faster - while reinforcing structures that may need to be completely rethought.&lt;/p&gt;
&lt;p&gt;I explored the distinction between &lt;a href=&#34;../../08/ai-adoption-vs-ai-transformation/&#34;&gt;AI adoption and AI transformation&lt;/a&gt; in a previous article. Here I want to argue that the standard adoption path is not just insufficient - it is actively dangerous.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;how-optimization-becomes-lock-in&#34;&gt;How Optimization Becomes Lock-In&lt;/h2&gt;
&lt;p&gt;When you improve a process with AI, you don&#39;t just make it faster. You encode it into software, build APIs around it, train people to depend on it, and produce metrics that justify its existence. You celebrate the success, and every workflow you optimize becomes harder to replace. You are cementing your current business in place.&lt;/p&gt;
&lt;p&gt;It happens gradually, and each optimization makes sense on its own. In the long run, however, the cement hardens, and you find yourself locked into a direction - not by a single decision, but by the accumulated weight of many small improvements that now define your organization.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Dentroid locked into improved workflows - AI Adoption&#34; src=&#34;../../../../ai-adoption-is-a-trap/images/ai-adoption-in-company.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;This happens at precisely the wrong moment. Previous technology waves - personal computers, internet, mobile, cloud - changed infrastructure, channels, or interfaces. Companies could adopt new capabilities by replacing existing structures. AI changes something more fundamental: cognition itself, how decisions get made, who makes them, and at what speed. This is not a new tool to add - it is a potential reorganization of how value gets created.&lt;/p&gt;
&lt;p&gt;When the ground is stable, optimization makes sense. When the ground is shifting, it can leave you perfectly adapted to a landscape that no longer exists.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-zero-sum-of-organizational-capacity&#34;&gt;The Zero-Sum of Organizational Capacity&lt;/h2&gt;
&lt;p&gt;The trap has a second dimension: organizational capacity. Every company has limited resources for change - attention, talent, change tolerance, management support. This capacity is zero-sum. As soon as it is spent on optimizing current workflows, it is no longer available for preparing for a different future. The people running AI adoption are not simultaneously designing the AI-native organization. The executive attention consumed by deployments is not available for questioning fundamental assumptions.&lt;/p&gt;
&lt;p&gt;So optimization locks you into current structures - and consumes the very resources you would need to escape.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;consultants-wont-pull-you-out&#34;&gt;Consultants Won&#39;t Pull You Out&lt;/h2&gt;
&lt;p&gt;If this trap is obvious, why does everyone walk into it?&lt;/p&gt;
&lt;p&gt;Because no one gets paid to warn you. Consulting firms sell AI pilots, tools, and workflow automation - tangible, billable, with slide decks showing before-and-after metrics. They cannot easily sell strategic blueprints for a future that might or might not materialize. There is no case study for a transformation that anticipates capabilities which are still hard to imagine. The market systematically pushes toward adoption, while the advice companies might actually need is to invest in understanding rather than deployment, and in flexibility rather than efficiency.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Consultants Won&#39;t Pull You Out&#34; src=&#34;../../../../ai-adoption-is-a-trap/images/Consultancies-dont-pull-you-out.png&#34; /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;you-dont-know-what-you-dont-know&#34;&gt;You Don&#39;t Know What You Don&#39;t Know&lt;/h2&gt;
&lt;p&gt;The market dynamics are only the surface. The deeper issue is a skill gap.&lt;/p&gt;
&lt;p&gt;Understanding AI transformation requires a literacy most executives have never had reason to develop. Not the ability to use AI tools - which by now is straightforward. The ability to see how AI might reorganize value chains, which structures are temporary, what an AI-native competitor might look like. This does not come from vendor demos or pilot projects. It comes from sustained engagement with the technology itself.&lt;/p&gt;
&lt;p&gt;Most companies - especially outside software and technology - do not have this understanding in their leadership. Executives rose through product, sales, operations, finance, strategy. Their careers gave them no reason to develop technical intuition about AI - or even IT in general. And this gap is difficult to close from within. Technical people - geeks! - rarely climb organizational ladders in that kind of environment.&lt;/p&gt;
&lt;p&gt;This creates a catch-22. Because of the skill gap, it is difficult to recognize that the gap exists. Psychologists call this cognitive bias &lt;a href=&#34;https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect&#34;&gt;the Dunning-Kruger effect&lt;/a&gt;: the skills needed to evaluate your own competence are the same skills you are missing. The less you understand about AI transformation, the more confident you feel that adoption is enough. Consultants agree, the metrics look good, and everything appears fine.&lt;/p&gt;
&lt;p&gt;This is why closing the skill gap must come first - before pilots, tools, and consultants take over. Without it, every other decision is made blind.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-challenge-of-imagining-what-doesnt-exist&#34;&gt;The Challenge of Imagining What Doesn&#39;t Exist&lt;/h2&gt;
&lt;p&gt;There is a deeper challenge that makes this genuinely hard: transformation into an AI-native organization requires imagining a future that doesn&#39;t exist yet.&lt;/p&gt;
&lt;p&gt;Leaders are being asked to design organizations for a world two or three years out - a world of AI-native companies with capabilities we can only partially glimpse today. This requires not only the skills mentioned above, but also creativity, imagination, and risk-taking. It is easier to optimize what exists than to envision what might replace it.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;A futuristic AI-native production facility&#34; src=&#34;../../../../ai-adoption-is-a-trap/images/imagining-the-future.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;The companies that adopted AI heavily in 2025 optimized for 2025. By 2028, those optimizations may be liabilities. The question is not whether to use AI, but whether you are building for the future or locking into the present.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-way-out&#34;&gt;The Way Out&lt;/h2&gt;
&lt;p&gt;There is a way out. In the &lt;a href=&#34;../../08/ai-adoption-vs-ai-transformation/&#34;&gt;previous article comparing AI Adoption with AI Transformation&lt;/a&gt; I described the model of an elite unit - a satellite organization with access to the parent company&#39;s resources but freedom from its constraints. This unit can do something that internal AI adoption cannot: work on both timelines at once.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Legacy Inc. and AI Research Lab&#34; src=&#34;../../../../ai-adoption-is-a-trap/images/legecy-inc-and-ai-lab.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;The experts in such a unit understand your business and have the skills to implement tactical AI improvements. But because they are simultaneously designing the AI-native blueprint, they do this work differently. They know which optimizations are temporary and which would cement you in place. They can improve the present without blocking the future, because they are the same people building what comes next.&lt;/p&gt;
&lt;p&gt;This is the escape from the trap: not avoiding AI, but placing it in hands that are designing the future blueprint rather than defending the status quo.&lt;/p&gt;
&lt;p&gt;But even this model requires something from leadership: enough understanding to recognize what is being built, and to protect it from internal resistance. The skill gap cannot be delegated.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;The skill gap is the trap within the trap. For leaders ready to close it, &lt;a href=&#34;https://dentro.de/ai&#34;&gt;dentro.de/ai&lt;/a&gt; offers a learning path designed for non-technical executives - not to make you technical, but to give you the literacy to see what you are missing.&lt;/em&gt;&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2026/01/19/ai-adoption-is-a-trap/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Mon, 19 Jan 2026 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2026/01/19/ai-adoption-is-a-trap/</guid>
      
<enclosure url="https://dentro.de/ai/images/ai-adoption-trap.jpeg" type="image/jpeg" length="0" />
      
    </item>
    
    <item>
      <title>AI Adoption vs AI Transformation</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;ai-adoption-vs-ai-transformation&#34;&gt;AI Adoption vs AI Transformation&lt;/h1&gt;
&lt;h2 id=&#34;learn-from-dtm-to-build-ai-native-organization&#34;&gt;Learn from DTM to Build an AI-Native Organization&lt;/h2&gt;
&lt;p&gt;The pressure to &#34;adopt AI&#34; echoes earlier technology waves: the personal computer, the internet, mobile. Each time, organizations that merely bolted new technology onto existing structures captured only a fraction of the available value.&lt;/p&gt;
&lt;p&gt;AI is no different - except the stakes are higher, and the window shorter.&lt;/p&gt;
&lt;p&gt;The critical distinction is between &lt;strong&gt;AI adoption&lt;/strong&gt; (adding tools to existing workflows) and &lt;strong&gt;AI transformation&lt;/strong&gt; (redesigning the organization around AI capabilities). One makes humans faster. The other builds a machine with end-to-end workflows.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DTM - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/DTM_front.png&#34; /&gt;
 &lt;!-- more --&gt;
&lt;/p&gt;
&lt;h2 id=&#34;the-homologation-problem-why-ai-enhanced-falls-short&#34;&gt;The Homologation Problem: Why &#34;AI-Enhanced&#34; Falls Short&lt;/h2&gt;
&lt;p&gt;The term &#34;homologation&#34; comes from motorsport. In the late 1980s, DTM regulations required manufacturers to produce road-legal versions of their racing cars - a minimum number of street vehicles had to be sold to &#34;homologate&#34; (officially approve) the race car. The problem: building homologation specials required speed, risk tolerance, and engineering freedom that mass-production organizations simply couldn&#39;t deliver. BMW&#39;s solution was M GmbH; Mercedes-Benz leaned on AMG. These weren&#39;t internal departments - they were elite external satellites, structurally separate but strategically aligned, with permission to operate by different rules. These units didn&#39;t just operate differently - they attracted different people. Engineers and specialists who would have been absorbed into corporate hierarchies elsewhere could finally work alongside peers who shared their intensity, freed from committees and compromise.&lt;/p&gt;
&lt;p&gt;Today&#39;s legacy companies face their own homologation moment. AI demands capabilities - speed, experimentation, tolerance for failure - that their existing structures cannot deliver. But most are not following this proven playbook for radical innovation. Instead of creating structurally separate units to build something genuinely new, they are simply becoming &lt;em&gt;AI-enhanced&lt;/em&gt;. They deploy chatbots, provide co-pilots, accelerate workflows. They work faster but remain structurally unchanged - the same reporting lines, the same approval chains, the same data silos now with fragmented AI bolted on.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Legacy Manufacturing - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/legacy_manufacturing.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;By contrast, &lt;em&gt;AI-native&lt;/em&gt; organizations are architected differently from inception. Data flows continuously across the entire operation - clean, structured, and fully integrated into every workflow. This doesn&#39;t happen by accident; these organizations have done the hard work of preparing their data foundation before connecting the machine. AI systems don&#39;t just recommend; they execute within defined guardrails, with humans governing exceptions and setting strategy. Workflows aren&#39;t accelerated versions of legacy processes - they&#39;re redesigned from first principles around what AI makes possible.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;AI Native - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/ai-native.png&#34; /&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;AI-Enhanced (Adoption)&lt;/th&gt;
&lt;th&gt;AI-Native (Transformation)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data&lt;/td&gt;
&lt;td&gt;Fragmented, batch-processed&lt;/td&gt;
&lt;td&gt;Continuous flows, single source of truth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decisions&lt;/td&gt;
&lt;td&gt;AI recommends, humans approve&lt;/td&gt;
&lt;td&gt;AI executes within guardrails, humans govern&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Workflows&lt;/td&gt;
&lt;td&gt;Existing processes accelerated&lt;/td&gt;
&lt;td&gt;Processes redesigned around AI capabilities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Talent&lt;/td&gt;
&lt;td&gt;AI tools distributed to existing roles&lt;/td&gt;
&lt;td&gt;Roles reconceived; human focus on judgment, creativity, relationships&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Speed&lt;/td&gt;
&lt;td&gt;Incremental improvement&lt;/td&gt;
&lt;td&gt;Order-of-magnitude change&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The AI-enhanced company is a faster version of its former self. The AI-native company is a different species.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;why-an-ai-department-cannot-transform-you&#34;&gt;Why an AI Department Cannot Transform You&lt;/h2&gt;
&lt;p&gt;The instinctive organizational response to AI pressure is familiar: create a team. Companies establish AI Centers of Excellence, embed data scientists in business units, hire a Head of AI reporting into IT or Strategy.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;AI Department - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/ai-department.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;These moves are necessary but insufficient. An AI department can optimize components. It cannot redesign the machine while the machine is running.&lt;/p&gt;
&lt;p&gt;Genuine transformation requires:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AI literacy at the executive level.&lt;/strong&gt; Not technical fluency, but genuine understanding of what AI can do. Executives must have seen it in action, grasped its capabilities, and be able to think through what it means for their operations, their products, and what their customers will need next.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A Chief AI Officer with genuine authority.&lt;/strong&gt; Not a technical advisor, but an executive who owns outcomes, controls budget, and has a seat at resource allocation decisions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Board-level buy-in.&lt;/strong&gt; Directors who understand enough to back the transformation - approving the investment, protecting the elite unit, and holding steady when it starts to disrupt the core business.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Without this, AI remains an operational experiment rather than a strategic commitment.&lt;/p&gt;
&lt;p&gt;So if an internal AI department cannot deliver transformation, what can? The answer lies in a model that has already proven it works - in one of the most intensely competitive, publicly scrutinized environments imaginable: high-calibre motorsport.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-dtm-model-why-legacy-companies-need-external-elite-units&#34;&gt;The DTM Model: Why Legacy Companies Need External Elite Units&lt;/h2&gt;
&lt;p&gt;When BMW and Mercedes-Benz committed to winning DTM, they faced a fundamental problem: &lt;strong&gt;how does a company optimized for one thing become excellent at something entirely different?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The conventional approach would have been to create an internal motorsport department - hire some racing engineers, allocate budget, embed the team within the existing organization. Both companies recognized this wouldn&#39;t work. They didn&#39;t try to make their mass-production organizations faster and more agile. They recognized that the capabilities required for competitive racing - building homologation specials, developing and maintaining race cars, running a championship campaign - were structurally incompatible with what made them successful at building reliable, profitable sedans. The solution was separation: elite external units with access to parent company resources but freedom from parent company constraints.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;AI Unit - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/ai-unit.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;AMG, founded by former Mercedes engineers &lt;a href=&#34;https://www.mercedes-amg.com/de/aufrecht-melcher&#34;&gt;Hans Werner Aufrecht and Erhard Melcher&lt;/a&gt;, could attract talent motivated by performance rather than corporate advancement. BMW Motorsport - later known as M GmbH - could specify components that would never survive a standard business case review. Both could iterate at speeds impossible within normal product development cycles.&lt;/p&gt;
&lt;p&gt;Legacy companies facing AI transformation confront the same structural problem. The capabilities required to build AI-native systems - data fluency, rapid experimentation, comfort with ambiguity, willingness to cannibalize existing products - are often incompatible with the cultures and processes that made these organizations successful in the first place.&lt;/p&gt;
&lt;p&gt;The DTM lesson is clear: &lt;strong&gt;don&#39;t try to transform the whole organization at once. Establish or partner with elite external units that can operate by different rules.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The parallel is direct:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Motorsport Elite Unit&lt;/th&gt;
&lt;th&gt;AI Transformation Unit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Access to parent company resources (capital, brand, distribution)&lt;/td&gt;
&lt;td&gt;Access to enterprise data, customer relationships, existing revenue&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Freedom from parent company constraints (approval chains, cost targets)&lt;/td&gt;
&lt;td&gt;Freedom from legacy tech stack, existing process requirements&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Permission to cannibalize (racing cars competed with production image)&lt;/td&gt;
&lt;td&gt;Permission to disrupt (AI-native offerings may cannibalize core business)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Talent attracted by mission, not corporate ladder&lt;/td&gt;
&lt;td&gt;Talent attracted by building the future, not maintaining the present&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Clear performance metrics (lap times, championships)&lt;/td&gt;
&lt;td&gt;Clear performance metrics (speed to market, unit economics, customer value)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Technology transfer back to parent&lt;/td&gt;
&lt;td&gt;Capability transfer back to parent&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The strategic insight: &lt;strong&gt;you cannot transform a legacy organization by working within its constraints.&lt;/strong&gt; You must create a protected space where different rules apply, then manage the interface between old and new.&lt;/p&gt;
&lt;p&gt;And here&#39;s the unexpected dividend: both M GmbH and AMG still flourish today, many decades later. What began as elite units to win racing championships evolved into standalone brands. The elite unit you create to solve today&#39;s structural challenge may become tomorrow&#39;s growth engine.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-threat-landscape-known-competitors-and-unknown-entrants&#34;&gt;The Threat Landscape: Known Competitors and Unknown Entrants&lt;/h2&gt;
&lt;p&gt;Executives often frame AI transformation as competitive necessity: &#34;Our rivals are adopting AI, so we must keep pace.&#34;&lt;/p&gt;
&lt;p&gt;This framing is dangerously incomplete.&lt;/p&gt;
&lt;p&gt;Yes, established competitors are deploying AI - mostly in the same AI-enhanced mode, generating similar incremental gains. The greater threat comes from organizations being built &lt;em&gt;right now&lt;/em&gt;, designed around AI from inception.&lt;/p&gt;
&lt;p&gt;These AI-native entrants carry none of your burdens:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No legacy technology.&lt;/strong&gt; They build on modern data architectures, not decades of accumulated systems.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No cultural resistance.&lt;/strong&gt; Their people joined knowing AI would be central, not fearing displacement.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No process debt.&lt;/strong&gt; Their workflows are designed around what AI makes possible, not what humans historically did.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No cannibalization anxiety.&lt;/strong&gt; They have no existing revenue to protect.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Startups currently in formation will enter your market in the coming months and years with cost structures you cannot match and speeds you cannot achieve - unless you build the organizational capability to operate differently.&lt;/p&gt;
&lt;p&gt;The motorsport lesson applies here too: Mercedes and BMW were absolutely racing against each other - the 190 Evo 2 versus E30 M3 rivalry was one of the most intense battles in touring car history. But they were &lt;em&gt;also&lt;/em&gt; racing against irrelevance. And that threat was not hypothetical: Audi&#39;s V8 quattro took the DTM championship in both 1990 and 1991, proving that a competitor willing to do things differently - in this case, four-wheel drive to promote their quattro technology - could disrupt even the fiercest established rivalry.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DTM - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/DTM_back.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;The same dual pressure applies to AI transformation. You are competing against established rivals adopting AI. But you are also competing against the possibility that AI-native entrants will make your entire category obsolete. &lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;a-diagnostic-framework-where-does-your-organization-stand&#34;&gt;A Diagnostic Framework: Where Does Your Organization Stand?&lt;/h2&gt;
&lt;p&gt;Before defining initiatives, assess your current position honestly:&lt;/p&gt;
&lt;h3 id=&#34;1-data-readiness&#34;&gt;1. Data Readiness&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Can you access a unified view of customer, operational, and financial data today?&lt;/li&gt;
&lt;li&gt;How long does it take to answer a novel business question with data?&lt;/li&gt;
&lt;li&gt;What percentage of your data is trapped in spreadsheets, emails, and undocumented systems?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;2-leadership-alignment&#34;&gt;2. Leadership Alignment&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Can your executive team articulate what an AI-native version of your company looks like?&lt;/li&gt;
&lt;li&gt;Is there genuine agreement on the pace and scope of transformation required?&lt;/li&gt;
&lt;li&gt;Does AI appear in strategic planning as a capability priority or merely a technology line item?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;3-talent-and-culture&#34;&gt;3. Talent and Culture&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Do you have people who have built AI-native systems, or only people who have implemented AI tools?&lt;/li&gt;
&lt;li&gt;Is your culture comfortable with experimentation and failure, or optimized for predictability?&lt;/li&gt;
&lt;li&gt;Can you attract talent that has alternatives at AI-native companies?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;4-structural-permission&#34;&gt;4. Structural Permission&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Is there an organizational space where different rules apply - faster decisions, different success metrics, freedom from legacy constraints?&lt;/li&gt;
&lt;li&gt;Do leaders of transformation initiatives have genuine authority, or are they advisors to power?&lt;/li&gt;
&lt;li&gt;Is there explicit permission to cannibalize existing offerings?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;5-governance-and-risk&#34;&gt;5. Governance and Risk&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Are your board and executive team equipped to oversee AI-driven transformation?&lt;/li&gt;
&lt;li&gt;Have you addressed regulatory, ethical, and operational risks specific to AI deployment?&lt;/li&gt;
&lt;li&gt;Is there a clear accountability structure for AI outcomes?&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-north-star-defining-the-destination-before-the-journey&#34;&gt;The North Star: Defining the Destination Before the Journey&lt;/h2&gt;
&lt;p&gt;Transformation without a destination is expensive wandering. Before launching initiatives, leadership must articulate what a fully AI-driven version of their organization looks like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data flows freely&lt;/strong&gt;: All operational data moves continuously through the organization - no silos, no manual handoffs, no waiting for reports. The AI can see everything it needs to see.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI executes, humans govern&lt;/strong&gt;: Routine decisions and operations are handled by AI within defined guardrails. People focus on judgment, relationships, and strategy - the work that requires human insight.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The system learns&lt;/strong&gt;: Every interaction, every outcome feeds back into the machine, making it smarter over time. The organization improves continuously without deliberate effort.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This North Star serves as an alignment mechanism. Every AI initiative can be evaluated against it: &lt;em&gt;Does this move us toward the destination, or does it merely optimize a structure we intend to replace?&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-path-forward-principles-not-prescriptions&#34;&gt;The Path Forward: Principles, Not Prescriptions&lt;/h2&gt;
&lt;p&gt;Every organization&#39;s transformation path will differ. But certain principles hold broadly:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Start with the North Star.&lt;/strong&gt; Define the destination before launching initiatives. Use it to evaluate every proposal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Build an elite unit.&lt;/strong&gt; Create a protected space with access to resources but freedom from constraints. Staff it with people who want to build the future, not preserve the present. Give it real authority and clear metrics.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Elevate AI literacy.&lt;/strong&gt; Ensure executives and board members understand AI strategically, not just technically. Transformation cannot be delegated.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Accept cannibalization.&lt;/strong&gt; Your AI-native unit may threaten existing business lines. This is a feature, not a bug. Better to disrupt yourself than be disrupted by others.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Manage the interface.&lt;/strong&gt; The relationship between the elite unit and the legacy organization must be deliberately structured. Technology transfer, talent exchange, and strategic alignment require ongoing attention.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Move faster than comfortable.&lt;/strong&gt; The organizations being built to replace you are not waiting. The competitive window is shorter than it appears.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;conclusion-from-ai-tools-to-ai-designed-organizations&#34;&gt;Conclusion: From AI Tools to AI-Designed Organizations&lt;/h2&gt;
&lt;p&gt;The companies that won DTM battles in the 90s didn&#39;t just build faster cars. They built organizational structures capable of producing faster cars - elite units with permission to operate differently, connected to but protected from the parent organization.&lt;/p&gt;
&lt;p&gt;The companies that will lead their industries in the AI era will not simply adopt AI tools. They will design organizations around AI capabilities, creating structural advantages that tool adoption cannot match.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;AI Board Room - AI Adoption vs AI Transformation&#34; src=&#34;../../../../adaption-vs-transformation/images/board-room.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;This is not a technology decision. It is a leadership decision, a governance decision, an organizational design decision.&lt;/p&gt;
&lt;p&gt;The question is not whether to adopt AI. The question is whether to transform - and whether you have the strategic courage and organizational imagination to do it before someone else forces the choice upon you.&lt;/p&gt;
&lt;p&gt;AI is not a feature upgrade. It is a company redesign.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;For leaders looking to build AI literacy without drowning in technical detail, &lt;a href=&#34;https://dentro.de/ai&#34;&gt;dentro.de/ai&lt;/a&gt; offers a 3-week learning path designed for non-technical executives.&lt;/em&gt;&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2026/01/08/ai-adoption-vs-ai-transformation/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Thu, 08 Jan 2026 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2026/01/08/ai-adoption-vs-ai-transformation/</guid>
      
      <enclosure url="https://dentro.de/ai/images/ai-adoption-vs-ai-transformation.jpeg" type="image/jpeg" length="None" />
      
    </item>
    
    <item>
      <title>The Great Squeeze - Understanding LLM Information Density</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;p&gt;A modern &lt;a href=&#34;/ai/glossary/#llm&#34;&gt;Large Language Model (LLM)&lt;/a&gt; is capable of retrieving and connecting information from a massive body of knowledge, yet the resulting model weight is surprisingly small compared to the data it was trained on. This compression is possible because we have moved from an architecture of data storage to one of mathematical representation.&lt;/p&gt;
&lt;p&gt;In traditional computing, we rely on Data Persistence. If you want to &#34;know&#34; 10 trillion words, you must store 10 trillion words in a database. LLMs break this 1:1 relationship through a process of high-density compression. We aren&#39;t building a digital library; we are training a mathematical representation of that library.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Understanding Compression&#34; src=&#34;../../../../understanding-compression/images/understanding_compression.png&#34; /&gt;&lt;/p&gt;
&lt;!-- more --&gt;

&lt;p&gt;In this article, we will look at how this &#34;Squeeze&#34; works by breaking it down into five parts:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Bits vs. Brains: Why traditional databases hit a &#34;Storage Wall&#34; and how Conceptual Representation provides a way around it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Vocabulary Filter: How trillions of raw data points are funneled into a finite set of Tokens.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The 200:1 Ratio: Looking at the math of Llama 3, distilling 15 Trillion &#34;seen&#34; tokens into 70 Billion Parameters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Intelligence as Loss: Why the &#34;lossy&#34; nature of these models is actually what enables Reasoning and pattern recognition.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Outcome: How this compression allows us to fit the essence of a global knowledge base into a &#34;Pocket Galaxy.&#34;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;1-the-core-concept-bits-vs-brains-the-before&#34;&gt;1. The Core Concept: Bits vs. Brains (The BEFORE)&lt;/h2&gt;
&lt;p&gt;In traditional computing, we operate under the principle of Data Persistence. If you want a system to &#34;know&#34; a specific fact, you must store it as a discrete entry in a database. To scale this knowledge to encompass a vast body of information - trillions of words of human history, science, and code - you simply add more storage. You build a digital warehouse where every word has a fixed address. This is a 1:1 relationship: more knowledge requires more physical bits.&lt;/p&gt;
&lt;p&gt;However, as we move into the scale of the collective human record, we hit the Storage Wall. Storing every fact as a unique, searchable record is not just expensive; it is architecturally inefficient for the type of cross-domain synthesis we expect from modern AI. We don&#39;t just need to retrieve data; we need to represent the logic behind it.&lt;/p&gt;
&lt;p&gt;This is the shift from &#34;The Warehouse&#34; to &#34;The Mix.&#34; Imagine a gigantic mixing console (our LLM) with billions of faders (our Parameters). In this new paradigm, we don&#39;t save the books into the console. Instead, we pass the data through the circuitry to find the optimal &#34;position&#34; for every fader. By the time the process is finished, the original text is gone, but the faders are tuned to a specific frequency that represents the patterns of that information. We have successfully traded raw bits for a mathematical brain.&lt;/p&gt;
&lt;p&gt;Modern AI systems often combine both: they use the &#39;Mix&#39; (LLM) for reasoning and a traditional &#39;Warehouse&#39; (Vector Database/RAG) for facts. The LLM becomes the librarian who understands the logic of the books, while the database ensures the text remains exact.&lt;/p&gt;
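A heavily simplified sketch can make this division of labor concrete. The toy "warehouse" below stores exact facts alongside invented two-dimensional embeddings; a real system would use a vector database and learned high-dimensional embeddings, so every name and number here is illustrative only:

```python
# Minimal sketch of the "Warehouse" half of a RAG setup (toy data).
# The warehouse stores exact facts; retrieval finds the closest one.
import math

WAREHOUSE = {  # toy vector database: fact -> embedding (invented 2-D vectors)
    "Paris is the capital of France.": (0.9, 0.1),
    "Berlin is the capital of Germany.": (0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec):
    """Return the stored fact whose embedding is closest to the query."""
    return max(WAREHOUSE, key=lambda fact: cosine(WAREHOUSE[fact], query_vec))

# A question about France would embed near (1, 0) in this toy space:
print(retrieve((1.0, 0.0)))  # -> "Paris is the capital of France."
```

The exact wording of the fact survives retrieval untouched - that is the warehouse's job - while deciding what to do with the retrieved text is left to the "Mix."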
&lt;h2 id=&#34;2-the-input-the-vocabulary-gatekeeper&#34;&gt;2. The Input: The Vocabulary Gatekeeper&lt;/h2&gt;
&lt;p&gt;Before any knowledge can reach the faders, it has to pass through the &lt;a href=&#34;/ai/glossary/#vocabulary&#34;&gt;Vocabulary&lt;/a&gt;. This is the first bottleneck of the Squeeze. Think of it as a fixed &#34;Patch Bay&#34; at the front of our console. While the collective human record contains an almost infinite variety of words, characters, and symbols, the LLM only understands a specific, finite list of snippets.&lt;/p&gt;
&lt;p&gt;We call these snippets &lt;a href=&#34;/ai/glossary/#token&#34;&gt;Tokens&lt;/a&gt;. A typical modern model might have a Vocabulary of roughly 128,000 tokens. This is the first layer of compression: instead of dealing with the raw, chaotic stream of trillions of characters, the model maps everything it &#34;sees&#34; to this internal list.&lt;/p&gt;
&lt;p&gt;This is made possible by sub-word tokenization. The system doesn&#39;t necessarily store the word &#34;Information&#34; as a single unit. Instead, it might break it into &#34;In&#34;, &#34;form&#34;, and &#34;ation&#34;. This Lego-like approach allows a relatively small dictionary to represent almost any concept in any language. By the time the data enters the &#34;Inside&#34; of the model, it has already been filtered through this gatekeeper, turning petabytes of raw text into a standardized sequence of numerical IDs that the mixer can actually process.&lt;/p&gt;
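To make this Lego-like approach concrete, here is a minimal greedy tokenizer. The tiny vocabulary and the longest-match rule are simplifications for illustration - real models learn their sub-word merges from data (e.g. via byte-pair encoding) and use vocabularies of over 100,000 entries:

```python
# Toy greedy sub-word tokenizer - a sketch of the idea, NOT the
# actual tokenizer of any real model. The vocabulary is invented.
VOCAB = {"In": 1, "form": 2, "ation": 3, "al": 4, "a": 5, "t": 6,
         "i": 7, "o": 8, "n": 9, "f": 10, "r": 11, "m": 12}

def tokenize(word: str) -> list[str]:
    """Greedily match the longest vocabulary snippet at each position."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest snippet first
            piece = word[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word[i:]!r}")
    return tokens

print(tokenize("Information"))                   # ['In', 'form', 'ation']
print([VOCAB[t] for t in tokenize("Information")])  # [1, 2, 3]
```

The same snippets recombine freely: "form" plus "al" covers "formal" without any new dictionary entry, which is how a finite vocabulary can represent an effectively unbounded set of words.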
&lt;h2 id=&#34;3-the-math-15-trillion-vs-70-billion-the-inside&#34;&gt;3. The Math: 15 Trillion vs. 70 Billion (The INSIDE)&lt;/h2&gt;
&lt;p&gt;To understand the sheer scale of this transition, let&#39;s look at the classic &lt;a href=&#34;/ai/glossary/#llama&#34;&gt;Llama&lt;/a&gt; 3 family of models, which set the standard for these density ratios. During its &lt;a href=&#34;/ai/glossary/#training&#34;&gt;training&lt;/a&gt; phase, the model was exposed to a corpus of roughly 15 trillion tokens. If you were to store that volume of raw text in a traditional database, you would be looking at approximately 15 to 20 terabytes of data.&lt;/p&gt;
&lt;p&gt;In the &#34;Inside&#34; of the Squeeze, we funnel that entire 15-terabyte library through our mixer console. The result is the Llama 3 70B model, which contains exactly 70 billion &lt;a href=&#34;/ai/glossary/#parameters&#34;&gt;parameters&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Squeeze Ratio:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For every single fader on the console, the model has &#34;seen&#34; over 200 different tokens of information.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Weight Squeeze:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The final model weight is roughly 140 gigabytes.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We have effectively distilled the logic and patterns of 15,000 gigabytes of text into a 140-gigabyte mathematical representation. This is a 100:1 reduction in physical size, but the &#34;Information Squeeze&#34; is even more extreme in the smaller Llama 3 8B model. There, 15 trillion tokens were squeezed into only 8 billion parameters - a ratio of nearly 1875 tokens for every single fader on the console. This extreme &#34;over-training&#34; is why smaller models are increasingly capable of complex reasoning: they have a much higher density of learned experience per parameter.&lt;/p&gt;
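These ratios are easy to verify with back-of-the-envelope arithmetic. One assumption: the 140-gigabyte figure presumes 16-bit weights, i.e. 2 bytes per parameter:

```python
# Squeeze ratios for Llama 3, using the figures from the article.
tokens_seen = 15e12        # ~15 trillion training tokens
params_70b = 70e9          # Llama 3 70B parameters
params_8b = 8e9            # Llama 3 8B parameters
bytes_per_param = 2        # 16-bit weights -> 2 bytes each (assumption)

tokens_per_param_70b = tokens_seen / params_70b
tokens_per_param_8b = tokens_seen / params_8b
weight_gb = params_70b * bytes_per_param / 1e9

print(f"70B: ~{tokens_per_param_70b:.0f} tokens seen per parameter")  # ~214
print(f" 8B: ~{tokens_per_param_8b:.0f} tokens seen per parameter")   # 1875
print(f"70B final weight: ~{weight_gb:.0f} GB")                       # 140 GB
```

Quantized variants shrink the weight further (4-bit weights would halve it twice over), but the tokens-per-parameter ratio is fixed at training time.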
&lt;p&gt;While Llama 3 was a &#39;dense&#39; model (using all 70B faders at once), modern &#39;sparse&#39; models might have 200B faders but only use 10B for any given word. This makes the &#39;Squeeze&#39; even more complex - we are compressing knowledge not just into faders, but into routing logic that knows which faders to touch.&lt;/p&gt;
&lt;h2 id=&#34;4-intelligence-is-loss&#34;&gt;4. Intelligence is Loss&lt;/h2&gt;
&lt;p&gt;This massive reduction in size comes with a cost: it is a &#34;lossy&#34; process. Unlike a ZIP file which you can decompress to get back the exact original document, an LLM cannot recreate the 15 terabytes of training data perfectly. In our mixer analogy, we have 70 billion faders but we are trying to represent the patterns of trillions of tokens. There aren&#39;t enough faders to record everything exactly.&lt;/p&gt;
&lt;p&gt;However, this loss is not a bug; it is the source of intelligence. Because the model cannot &#34;memorize&#34; everything, it is forced to find mathematical short-cuts. It has to learn that &#34;Paris&#34; is often associated with &#34;France&#34; and &#34;Capital&#34; rather than trying to remember every specific sentence that mentions those words.&lt;/p&gt;
&lt;p&gt;This is the transition from retrieval to &lt;a href=&#34;/ai/glossary/#generalization&#34;&gt;Generalization&lt;/a&gt;. By discarding the specific, noisy details of individual data points, the model uncovers the underlying structures of language and logic. We call this Reasoning. The Squeeze forces the model to move beyond being a parrot that repeats facts and transforms it into a system that &#34;understands&#34; the relationships between them.&lt;/p&gt;
&lt;p&gt;This architecture also explains the phenomenon of Hallucination. Since the model is a probabilistic reconstruction and not a database, it does not &#34;look up&#34; facts - it generates them based on the tuned positions of its parameters. When the model encounters a gap in its signal density - a rare fact or an obscure connection - it still follows the logic of the &#34;Mix.&#34; It produces a result that is grammatically and logically consistent with its training, even if it is factually incorrect. In the Great Squeeze, we trade absolute factual fidelity for the ability to reason across the entire spectrum of human knowledge.&lt;/p&gt;
&lt;h2 id=&#34;5-the-big-picture-the-after&#34;&gt;5. The Big Picture (The AFTER)&lt;/h2&gt;
&lt;p&gt;The result of this operation is a fundamental decoupling of knowledge from raw storage. We have moved from the unmanageable BEFORE state of massive, static datasets to an AFTER state where information is functional rather than just persistent. The library is gone, but the &#34;Mix&#34; is set.&lt;/p&gt;
&lt;p&gt;This architectural shift has two primary implications. First, it changes the economics of information access. By compressing a petabyte-scale corpus into a few billion parameters, we move the burden from massive hardware clusters toward more efficient, specialized compute. We are no longer limited by the speed of a database search or the capacity of a physical warehouse.&lt;/p&gt;
&lt;p&gt;Second, it confirms that intelligence is a byproduct of efficient representation. The fact that 70 billion faders can represent 15 trillion data points shows that the collective human record is not just a pile of facts - it is a system of patterns. By finding the &#34;Squeeze,&#34; we haven&#39;t just saved space - we have created a mathematical map of human logic. This represents a pivot in computing: we are moving from machines that store the world to machines that represent its rules.&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2025/12/20/the-great-squeeze---understanding-llm-information-density/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Sat, 20 Dec 2025 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2025/12/20/the-great-squeeze---understanding-llm-information-density/</guid>
      
      <enclosure url="https://dentro.de/ai/images/understanding_compression.png" type="image/png" length="1943677" />
      
    </item>
    
    <item>
      <title>Developing 3-Week AI Curriculum</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;developing-a-three-week-ai-curriculum-as-a-personal-side-project&#34;&gt;Developing a Three-Week AI Curriculum as a Personal Side Project&lt;/h1&gt;
&lt;p&gt;About half a year ago I launched dentro.de/ai with a simple idea: collect clear, simplified explanations of modern AI in one place - &#34;Inside AI&#34;.&lt;/p&gt;
&lt;p&gt;In the background there was always a bigger plan. I wanted all those loose pieces to eventually line up as a complete, scalable learning path. Not just a glossary here, a visualizer there, a blog post over there, but something you could actually follow from start to finish.&lt;/p&gt;
&lt;p&gt;That missing piece is finally here: a &lt;a href=&#34;/ai/how_to_learn_ai/&#34;&gt;3 Week Learning Curriculum&lt;/a&gt; that ties everything together and walks you through the AI Black Box from the outside in.&lt;/p&gt;
&lt;p&gt;This post is a behind-the-scenes look at how and why I built it, what ended up in the course, and how you might use it - whether for yourself or to teach others.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Developing a Three-Week AI Curriculum as a Personal Side Project&#34; src=&#34;../../../../developing-ai-curriculum-as-side-project/images/dentroid-teacher-ai-black-box.png&#34; /&gt;
 &lt;!-- more --&gt;
&lt;/p&gt;
&lt;h2 id=&#34;the-problem-that-kept-poking-me&#34;&gt;The problem that kept poking me&lt;/h2&gt;
&lt;p&gt;Around me, more and more people were &#34;using AI&#34; without really knowing what it is - only to be puzzled when it did not seem to perform well. I have the strong belief that a) it is easy to gain that AI knowledge and b) this knowledge is a huge enabler for using this new technology well.&lt;/p&gt;
&lt;p&gt;Within our work environments we are now tasked not only with using AI, but with building workflows, educating business partners, evaluating risks and preparing for the future. All this without being an engineer - and with no desire to become one. We don&#39;t need to implement transformers from scratch. We need to reason about what these systems can and cannot do.&lt;/p&gt;
&lt;p&gt;I went through that journey myself and was puzzled that easy-to-consume AI education is so rare. Which is why I started outlining it myself:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;one article about how LLMs work&lt;/li&gt;
&lt;li&gt;an analogy about the AI lifecycle&lt;/li&gt;
&lt;li&gt;a glossary entry here, a visualization there&lt;/li&gt;
&lt;li&gt;a few good videos and visualizations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It helped with teaching too, but only if I sat next to learners and arranged the pieces in the right order.&lt;/p&gt;
&lt;p&gt;At some point I realised I was rebuilding the same mental map over and over again. That is when the idea of a structured curriculum stopped being &#34;nice to have&#34; and became the actual goal.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;the-mental-model-that-everything-hangs-on&#34;&gt;The mental model that everything hangs on&lt;/h2&gt;
&lt;p&gt;The anchor for the whole project is one simple model:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;BEFORE → INSIDE → AFTER the AI Black Box&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;BEFORE covers setup and training: data, architecture, tokenisation, model lifecycle&lt;/li&gt;
&lt;li&gt;INSIDE covers how the finished model functions: parameters, weights, embeddings, attention, layers, probabilities&lt;/li&gt;
&lt;li&gt;AFTER covers what happens when you use it: prompts, context, tools, workflows, evaluation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once that frame was in place, all the separate elements on the site suddenly had a clear place to live. The 3 week course is basically this model stretched out in time and filled with concrete reading, watching and doing.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;what-the-3-week-curriculum-looks-like&#34;&gt;What the 3 week curriculum looks like&lt;/h2&gt;
&lt;p&gt;The curriculum is called &#34;How to Learn AI in 3 Weeks&#34;. It is written for people who are serious about understanding modern AI at a high level, but who do not necessarily want to code or push matrices around.&lt;/p&gt;
&lt;p&gt;Each week focuses on one phase of the Black Box and has three flavours: Overview, Learn, Practice.&lt;/p&gt;
&lt;h3 id=&#34;week-1-before-where-models-come-from&#34;&gt;Week 1: BEFORE - where models come from&lt;/h3&gt;
&lt;p&gt;Week 1 is about everything that exists before you type a single prompt.&lt;/p&gt;
&lt;p&gt;You look at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;how text becomes tokens and token IDs&lt;/li&gt;
&lt;li&gt;how data, scale and training define the AI Model as product&lt;/li&gt;
&lt;li&gt;the lifecycle of a model from early research ideas to large training runs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You read the AI model lifecycle article, browse the AI timeline, and poke around with tools like Tiktokenizer to see sentences or numbers chopped into tokens.&lt;/p&gt;
&lt;p&gt;By the end of the week, &#34;AI&#34; looks a lot less like magic and more like an engineered product with a long supply chain.&lt;/p&gt;
&lt;h3 id=&#34;week-2-inside-the-model-as-a-mathematical-function&#34;&gt;Week 2: INSIDE - the model as a mathematical function&lt;/h3&gt;
&lt;p&gt;Week 2 is about peeking inside the Black Box. Originally this was my main driver years ago: the desire to understand what exactly is happening in the Black Box and why. It is a fascinating area, and people are rightfully puzzled as to why this works so well.&lt;/p&gt;
&lt;p&gt;The focus in the second week is on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;how a trained model is a fixed mathematical function - this is VERY important to get across. People still &lt;em&gt;argue&lt;/em&gt; with LLMs, thinking they are teaching them.&lt;/li&gt;
&lt;li&gt;why everything is cut in tokens and vectors rather than words and sentences - and how this has a big influence on the functionality of LLMs.&lt;/li&gt;
&lt;li&gt;how the model always outputs a probability distribution over possible next tokens - one after the other.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The material mixes the long form blog post on LLMs with high signal videos (by legends 3Blue1Brown and Andrej Karpathy) and visualizers where you can literally watch probabilities flow through a tiny model.&lt;/p&gt;
&lt;p&gt;The goal is not to memorise formulas, but to get the basic idea of what happens when sending a prompt to the model.&lt;/p&gt;
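&lt;p&gt;To make that &#34;probability distribution over next tokens&#34; concrete, here is a minimal sketch with invented scores (a real model produces one score per vocabulary entry, tens of thousands of them):&lt;/p&gt;

```python
import math

def softmax(scores):
    # Subtract the max score for numerical stability, then normalise.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores for the token following "The cat sat on the".
logits = {"mat": 4.0, "sofa": 2.5, "roof": 1.0, "banana": -2.0}
probs = softmax(logits)
# The probabilities sum to 1; "mat" comes out as the most likely token.
```

&lt;p&gt;That is the whole output of one forward pass: not an answer, just a ranking of candidates for the very next token.&lt;/p&gt;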
&lt;h3 id=&#34;week-3-after-how-models-behave-when-you-use-them&#34;&gt;Week 3: AFTER - how models behave when you use them&lt;/h3&gt;
&lt;p&gt;Week 3 looks at the output of the model and clarifies concepts like inference and the context window, but also hallucinations.&lt;/p&gt;
&lt;p&gt;You explore:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;why the same prompt can produce different answers&lt;/li&gt;
&lt;li&gt;why different models behave differently on the same task&lt;/li&gt;
&lt;li&gt;how prompting structure and context help or hurt&lt;/li&gt;
&lt;li&gt;what simple evaluations and benchmarks look like in practice&lt;/li&gt;
&lt;/ul&gt;
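&lt;p&gt;One reason the same prompt can produce different answers is sampling: the next token is drawn from the probability distribution rather than always taking the single most likely one, and a temperature setting controls how adventurous that draw is. A toy sketch (the helper name and the probabilities are assumptions for illustration):&lt;/p&gt;

```python
import random

def sample_next_token(probs, temperature=1.0, rng=random):
    # Temperature re-weights the distribution: values below 1 sharpen it,
    # values above 1 flatten it (equivalent to dividing logits by T).
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # guard against floating-point rounding

probs = {"mat": 0.7, "sofa": 0.2, "roof": 0.1}
# Two calls can return different tokens - same prompt, different answer.
```

&lt;p&gt;With a very low temperature the draw almost always lands on the top token; with a high temperature the less likely options show up more often.&lt;/p&gt;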
&lt;p&gt;This part is deliberately experimental. You try classic failure prompts, compare models and temperatures, and build a tiny evaluation set for your own domain.&lt;/p&gt;
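&lt;p&gt;A tiny evaluation set can be as simple as prompt/check pairs. In the sketch below, ask_model is a hypothetical placeholder with a canned answer so the example runs on its own - swap in a call to whatever model or API you actually use:&lt;/p&gt;

```python
# A minimal evaluation-set sketch. `ask_model` is a hypothetical
# placeholder, not a real API; replace it with your own model call.
def ask_model(prompt):
    return "Paris"  # canned answer so the sketch is self-contained

EVAL_SET = [
    # (prompt, check) pairs for your own domain
    ("Capital of France? One word only.", lambda a: "paris" in a.lower()),
    ("What is 17 * 23? Digits only.", lambda a: "391" in a),
]

def run_eval():
    results = [check(ask_model(prompt)) for prompt, check in EVAL_SET]
    return sum(results), len(results)
```

&lt;p&gt;Even a handful of such checks, rerun whenever you switch models or settings, tells you far more than a single impressive demo.&lt;/p&gt;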
&lt;p&gt;The point is to develop a feel for strengths, blind spots and failure modes. Impressive demos are one thing, but using the tools as solid assistants is another.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Developing a Three-Week AI Curriculum as a Personal Side Project&#34; src=&#34;../../../../developing-ai-curriculum-as-side-project/images/dentroid-teacher-ai-black-box_2.jpeg&#34; /&gt;&lt;/p&gt;
&lt;h2 id=&#34;stitching-the-site-into-one-coherent-thing&#34;&gt;Stitching the site into one coherent thing&lt;/h2&gt;
&lt;p&gt;The curriculum is not a separate product sitting next to the rest of dentro.de/ai. It is more like a guided tour that walks through all the existing content in a structured sequence. The Visualizations and Glossary are helpful, as are the three main articles I have written so far with teaching in mind:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;/ai/blog/2025/04/25/how-ai-large-language-models-llm-work/&#34;&gt;How Large Language Models Work&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/ai/blog/2025/04/08/explain-ai-lifecycle-with-car-analogy/&#34;&gt;AI Model Lifecycle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/ai/blog/2025/07/15/understanding-open-source-in-ai-models/&#34;&gt;Open Source vs Open Weights&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The sections &lt;a href=&#34;https://dentro.de/ai/big_players/&#34;&gt;Industry Landscape&lt;/a&gt; and &lt;a href=&#34;https://dentro.de/ai/news/&#34;&gt;AI News&lt;/a&gt; are obviously helpful when going further.&lt;/p&gt;
&lt;p&gt;The site started as a collection of these pieces. The curriculum is what makes them feel like a single, purposeful resource.&lt;/p&gt;
&lt;p&gt;It also works backwards: writing the course forced me to tighten, rename, and sometimes rewrite the underlying material so that it fits the mental model cleanly - and it helped me connect remaining loose ends.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;who-i-wrote-this-for&#34;&gt;Who I wrote this for&lt;/h2&gt;
&lt;p&gt;I had three kinds of people in mind while building this.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;People who already use AI tools a bit, but are keen to understand - at a high level - how they function under the hood.&lt;/li&gt;
&lt;li&gt;People who sit between technical and non-technical worlds and have to explain AI decisions in human terms.&lt;/li&gt;
&lt;li&gt;People who do not want to chase scattered links and need one place they can consult or refer others to.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It is not designed for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;builders or researchers&lt;/li&gt;
&lt;li&gt;people who want a coding bootcamp&lt;/li&gt;
&lt;li&gt;anyone looking for a bag of &#34;magic prompts&#34;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you work with AI models day in day out you will probably find the level too gentle. If you are trying to get from &#34;I am lost&#34; to &#34;It finally clicked&#34;, you are in the right place.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;from-curriculum-to-lecture-material&#34;&gt;From curriculum to lecture material&lt;/h2&gt;
&lt;p&gt;One side effect of finishing this is that it is now much easier to explain the topic live.&lt;/p&gt;
&lt;p&gt;Because the curriculum has a clear spine and a set of examples for each phase, it translates almost directly into:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a short &#34;AI in one hour&#34; talk&lt;/li&gt;
&lt;li&gt;a half day workshop using some of the experiments&lt;/li&gt;
&lt;li&gt;a three part lecture series that follows the weekly structure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I designed it so that, with a bit of preparation, other people could give those sessions too. Everything is on the site and referenced explicitly, so you do not have to dig through personal notes to reconstruct the argument.&lt;/p&gt;
&lt;p&gt;If you work in a company or teach and want a structured way to introduce modern AI without turning everyone into engineers, I hope this gives you a starting point.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2 id=&#34;where-it-stands-now&#34;&gt;Where it stands now&lt;/h2&gt;
&lt;p&gt;Is the curriculum finished and perfect? No :)&lt;/p&gt;
&lt;p&gt;What I can say is that it now feels legit and good enough that I am comfortable standing behind it as &#34;a solid starting point&#34;.&lt;/p&gt;
&lt;p&gt;Future me will almost certainly rearrange things, add better examples, and fix what time and progress in the field make outdated. For now, I am mostly satisfied that the original plan worked: the loose ends of dentro.de/ai have been stitched into something that feels like a complete path rather than a pile of resources.&lt;/p&gt;
&lt;p&gt;If you do decide to go through it, or use it with a group, I would be very interested in what helped, what confused, and what is missing. That feedback will probably shape the next half year of evening work.&lt;/p&gt;
&lt;p&gt;In the meantime, if someone asks you &#34;How should I learn how this AI stuff actually works?&#34;, you are welcome to send them here:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&#34;/ai/how_to_learn_ai/&#34;&gt;How to Learn AI in 3 Weeks&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Hopefully it saves you a few explanations and starts a few good ones.&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2025/11/30/developing-3-week-ai-curriculum/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Sun, 30 Nov 2025 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2025/11/30/developing-3-week-ai-curriculum/</guid>
      
      <enclosure url="https://dentro.de/ai/images/how-to-learn-ai-in-3-weeks.jpeg" type="image/jpeg" length="0" />
      
    </item>
    
    <item>
      <title>Understanding Open Source in AI Models</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;analogy-to-understand-open-source-in-generative-ai-models&#34;&gt;Analogy to understand Open Source in generative AI Models&lt;/h1&gt;
&lt;p&gt;We often hear the terms &#34;Open Source&#34; and &#34;Open Weights&#34; in the world of AI models - but what is the difference? Traditional software uses open source for full transparency and reproducibility. AI is not like traditional software, because the &#34;brain&#34; - our fully trained model - is a compressed representation of the huge amount of training data. &lt;/p&gt;
&lt;p&gt;The good news is we can use &lt;a href=&#34;/ai/how_to_learn_ai/&#34;&gt;our Mental Model&lt;/a&gt; BEFORE → INSIDE → AFTER the AI Black Box again - but this time to explain Open Source vs. Open Weights in AI:&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;BLACK BOX&#34; src=&#34;../../../../open-source/images/Open_Source_vs_Open_Weights_vs_Closed_Model.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;Let&#39;s dive deeper and explain Open Source vs Open Weights in the context of AI models - with the help of a car analogy (&lt;a href=&#34;/ai/blog/2025/04/08/explain-ai-lifecycle-with-car-analogy/&#34;&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;m5 11 1.5-4.5h11L19 11m-1.5 5a1.5 1.5 0 0 1-1.5-1.5 1.5 1.5 0 0 1 1.5-1.5 1.5 1.5 0 0 1 1.5 1.5 1.5 1.5 0 0 1-1.5 1.5m-11 0A1.5 1.5 0 0 1 5 14.5 1.5 1.5 0 0 1 6.5 13 1.5 1.5 0 0 1 8 14.5 1.5 1.5 0 0 1 6.5 16M18.92 6c-.2-.58-.76-1-1.42-1h-11c-.66 0-1.22.42-1.42 1L3 12v8a1 1 0 0 0 1 1h1a1 1 0 0 0 1-1v-1h12v1a1 1 0 0 0 1 1h1a1 1 0 0 0 1-1v-8z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; again!&lt;/a&gt;) for clarity and to help make informed decisions. For terms like &lt;a href=&#34;/ai/glossary/#parameters&#34;&gt;Parameters&lt;/a&gt; (the Model Weights in &#34;Open Weights&#34;) or &lt;a href=&#34;/ai/glossary/#training&#34;&gt;Training&lt;/a&gt;, see the &lt;a href=&#34;/ai/glossary/&#34;&gt;AI Glossary&lt;/a&gt;. &lt;!-- more --&gt;&lt;/p&gt;
&lt;h2 id=&#34;open-source-in-traditional-software&#34;&gt;Open Source in Traditional Software&lt;/h2&gt;
&lt;p&gt;Open source has transformed software development - examples include Linux, Python, the Apache web server and many others:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Full access and transparency. The complete source code acts like a recipe. Users can view, modify, and share changes.&lt;/li&gt;
&lt;li&gt;Community collaboration. Developers contribute fixes and improvements.&lt;/li&gt;
&lt;li&gt;Reproducibility. The code and instructions allow rebuilding from scratch.&lt;/li&gt;
&lt;li&gt;Licenses. Generous options like MIT or GPL support usage, modification, and distribution - including commercial implementations.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This approach lowers costs, speeds innovation and allows full transparency. AI models differ in key ways.&lt;/p&gt;
&lt;h2 id=&#34;open-source-applied-to-ai&#34;&gt;Open Source Applied to AI&lt;/h2&gt;
&lt;p&gt;AI models involve more than code. They result from processes like architecture design, data curation, and very compute-hungry, large-scale training. Also see the &lt;a href=&#34;/ai/blog/2025/04/25/how-ai-large-language-models-llm-work/&#34;&gt;how LLMs work&lt;/a&gt; post.&lt;/p&gt;
&lt;p&gt;Components of an AI model, &lt;em&gt;all&lt;/em&gt; of which you would expect for truly open source:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Architecture. The structure, such as transformer-based designs.    &lt;/li&gt;
&lt;li&gt;Training code. Scripts for the training process.    &lt;/li&gt;
&lt;li&gt;Training data. Large datasets used in training.    &lt;/li&gt;
&lt;li&gt;Model weights. Learned parameters after training.    &lt;/li&gt;
&lt;li&gt;Inference code. Tools to run the model.    &lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;open-weights-vs-open-source&#34;&gt;Open Weights vs Open Source&lt;/h2&gt;
&lt;p&gt;Many AI models labeled as open source are actually only open weights. These provide weights, architecture, and inference code, while keeping training data and training processes hidden. Users can run, fine-tune, and deploy the model with this setup - but they cannot recreate it from scratch.&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M20 12a8 8 0 0 1-8 8 8 8 0 0 1-8-8 8 8 0 0 1 8-8c.76 0 1.5.11 2.2.31l1.57-1.57A9.8 9.8 0 0 0 12 2 10 10 0 0 0 2 12a10 10 0 0 0 10 10 10 10 0 0 0 10-10M7.91 10.08 6.5 11.5 11 16 21 6l-1.41-1.42L11 13.17z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp; Customization becomes easier, and community contributions grow.   &lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 12 12&#34;&gt;&lt;path d=&#34;M6 0a6 6 0 1 1 0 12A6 6 0 0 1 6 0m3 5H3v2h6Z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp; No insight into biases or data sources means risks like ethical issues remain unknown.&lt;/p&gt;
&lt;p&gt;True open source models share all elements for full reproducibility; this transparency allows audits for safety and expands innovation.&lt;/p&gt;
&lt;p&gt;By mid-2025 all major organisations (except Anthropic) had released Open Weights models.&lt;/p&gt;
&lt;p&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M12 6a6 6 0 0 1 6 6c0 2.22-1.21 4.16-3 5.2V19a1 1 0 0 1-1 1h-4a1 1 0 0 1-1-1v-1.8c-1.79-1.04-3-2.98-3-5.2a6 6 0 0 1 6-6m2 15v1a1 1 0 0 1-1 1h-2a1 1 0 0 1-1-1v-1zm6-10h3v2h-3zM1 11h3v2H1zM13 1v3h-2V1zM4.92 3.5l2.13 2.14-1.42 1.41L3.5 4.93zm12.03 2.13 2.12-2.13 1.43 1.43-2.13 2.12z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp; To get an overview - and also a ranking - the &lt;a href=&#34;https://lmarena.ai/leaderboard/text&#34;&gt;LMArena&lt;/a&gt; is a good resource: they list the model licence, and Open Weights models are typically released under a licence other than &#34;Proprietary&#34;.&lt;/p&gt;
&lt;h2 id=&#34;car-analogy-for-clarity&#34;&gt;Car Analogy for Clarity&lt;/h2&gt;
&lt;p&gt;The car analogy illustrates the differences and highlights levels of control in AI:  &lt;br /&gt;
From Closed Source to Open Weights to truly Open Source.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Open Source&#34; src=&#34;../../../../open-source/images/open-source.png&#34; /&gt;&lt;/p&gt;
&lt;h3 id=&#34;closed-source&#34;&gt;Closed Source:&lt;/h3&gt;
&lt;p&gt;A taxi ride. Users decide direction and route. The engine under the hood stays unknown.&lt;/p&gt;
&lt;h3 id=&#34;open-weights&#34;&gt;Open Weights:&lt;/h3&gt;
&lt;p&gt;A purchased car. Users know engine details and specs. Build processes and component sources remain secret.&lt;/p&gt;
&lt;h3 id=&#34;open-source&#34;&gt;Open Source:&lt;/h3&gt;
&lt;p&gt;A self-built kit car, such as a Lotus 7 replica. All parts and specs arrive. Users can improve, change, and build from scratch.&lt;/p&gt;
&lt;h2 id=&#34;examples-from-2025&#34;&gt;Examples from 2025&lt;/h2&gt;
&lt;p&gt;The AI landscape is highly dynamic and constantly updated. The examples give an indication, but in reality there are many thousands of additional models.&lt;/p&gt;
&lt;h3 id=&#34;closed-source_1&#34;&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 16 16&#34;&gt;&lt;path d=&#34;M4 4a4 4 0 0 1 8 0v2h.25c.966 0 1.75.784 1.75 1.75v5.5A1.75 1.75 0 0 1 12.25 15h-8.5A1.75 1.75 0 0 1 2 13.25v-5.5C2 6.784 2.784 6 3.75 6H4Zm8.25 3.5h-8.5a.25.25 0 0 0-.25.25v5.5c0 .138.112.25.25.25h8.5a.25.25 0 0 0 .25-.25v-5.5a.25.25 0 0 0-.25-.25M10.5 6V4a2.5 2.5 0 1 0-5 0v2Z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp; Closed Source:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;OpenAI ChatGPT 5   &lt;/li&gt;
&lt;li&gt;Anthropic Claude Opus 4.1   &lt;/li&gt;
&lt;li&gt;Google Gemini 2.5 Pro   &lt;/li&gt;
&lt;li&gt;xAI Grok 3   &lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;open-weights_1&#34;&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 16 16&#34;&gt;&lt;path d=&#34;M4 4a4 4 0 0 1 8 0v2h.25c.966 0 1.75.784 1.75 1.75v5.5A1.75 1.75 0 0 1 12.25 15h-8.5A1.75 1.75 0 0 1 2 13.25v-5.5C2 6.784 2.784 6 3.75 6H4Zm8.25 3.5h-8.5a.25.25 0 0 0-.25.25v5.5c0 .138.112.25.25.25h8.5a.25.25 0 0 0 .25-.25v-5.5a.25.25 0 0 0-.25-.25M10.5 6V4a2.5 2.5 0 1 0-5 0v2Z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M12 5c-1.11 0-2 .89-2 2s.89 2 2 2 2-.89 2-2-.89-2-2-2m10-4v5h-2V4H4v2H2V1h2v2h16V1zm-7 10.26V23h-2v-5h-2v5H9V11.26C6.93 10.17 5.5 8 5.5 5.5V5h2v.5C7.5 8 9.5 10 12 10s4.5-2 4.5-4.5V5h2v.5c0 2.5-1.43 4.67-3.5 5.76&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp; Open Weights:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Meta Llama family&lt;/li&gt;
&lt;li&gt;Google Gemma family&lt;/li&gt;
&lt;li&gt;xAI Grok 2.5&lt;/li&gt;
&lt;li&gt;DeepSeek models like R1 or V3.1&lt;/li&gt;
&lt;li&gt;selected Mistral AI models&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;open-source_1&#34;&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M9 5.82 7.36 4.16A4.99 4.99 0 0 1 12 1c2.76 0 5 2.24 5 5v2h1a2 2 0 0 1 2 2v6.8L11.2 8H15V6c0-1.66-1.34-3-3-3-1.59 0-2.89 1.25-3 2.82m13.11 15.64-1.27 1.27-1.38-1.38c-.36.4-.88.65-1.46.65H6c-1.11 0-2-.89-2-2V10a2 2 0 0 1 2-2h.11l-5-5 1.28-1.27zm-8.26-5.72-2.59-2.59C10.5 13.44 10 14.16 10 15a2 2 0 0 0 2 2c.84 0 1.56-.5 1.85-1.26&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp; Open Source:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://huggingface.co/swiss-ai&#34;&gt;Apertus LLM&lt;/a&gt; from Swiss AI&lt;/li&gt;
&lt;li&gt;EleutherAI models&lt;/li&gt;
&lt;li&gt;BigScience Bloom&lt;/li&gt;
&lt;li&gt;TII/UAE Falcon-40B&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;https://opensource.org/ai&#34;&gt;The Open Source Initiative&lt;/a&gt; serves as the primary authority in establishing and reviewing AI standards within the open-source community.&lt;/p&gt;
&lt;h2 id=&#34;business-implications&#34;&gt;Business Implications&lt;/h2&gt;
&lt;h3 id=&#34;these-differences-guide-ai-choices-and-support-strategic-decisions&#34;&gt;These differences guide AI choices and support strategic decisions.&lt;/h3&gt;
&lt;p&gt;&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 512 512&#34;&gt;&lt;!--! Font Awesome Free 7.0.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2025 Fonticons, Inc.--&gt;&lt;path fill=&#34;currentColor&#34; d=&#34;M480 8c0-13.3-10.7-24-24-24H354.2c-13.3 0-24 10.7-24 24s10.7 24 24 24h43.9L286.8 143.3l-95.1-81.5c-9-7.7-22.2-7.7-31.2 0l-112 96c-10.1 8.6-11.2 23.8-2.6 33.8s23.8 11.2 33.8 2.6l96.4-82.6 96.4 82.6c9.5 8.2 23.7 7.6 32.6-1.3l127-127v43.9c0 13.3 10.7 24 24 24s24-10.7 24-24zM48 256c-26.5 0-48 21.5-48 48v160c0 26.5 21.5 48 48 48h416c26.5 0 48-21.5 48-48V304c0-26.5-21.5-48-48-48zm47.3 56c-3.4 20.1-19.2 36-39.4 39.4-4.4.7-8-2.9-8-7.3v-32c0-4.4 3.6-8 8-8h32c4.4 0 8.1 3.6 7.3 8zm0 144.1c.7 4.4-2.9 8-7.3 8H56c-4.4 0-8-3.6-8-8v-32c0-4.4 3.6-8.1 8-7.3 20.1 3.4 36 19.2 39.4 39.4zM456 416.7c4.4-.7 8 2.9 8 7.3v32c0 4.4-3.6 8-8 8h-32c-4.4 0-8.1-3.6-7.3-8 3.4-20.1 19.2-36 39.4-39.4zM416.7 312c-.7-4.4 2.9-8 7.3-8h32c4.4 0 8 3.6 8 8v32c0 4.4-3.6 8.1-8 7.3-20.1-3.4-36-19.2-39.4-39.4zM192 384a64 64 0 1 1 128 0 64 64 0 1 1-128 0&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp;&amp;nbsp; Innovation and savings. Open weights allow customization without full development costs. &lt;br /&gt;
&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;m13.64 3.81-1.23-1.23C12.04 2.21 11.53 2 11 2h-1v2h1l1.23 1.23zm-.71 2.12 1.42-1.43 2.12 2.14-1.42 1.41zM4 11l1.23 1.23-1.42 1.41-1.22-1.22c-.19-.19-.34-.42-.44-.65-.1-.24-.15-.51-.15-.77v-1h2zm.5 3.35 1.43-1.42 2.12 2.12-1.41 1.42zm4.97 4.95-2.12-2.12 1.41-1.42 2.12 2.12zm6.29-10.54 2.12 2.12 1.42-1.41-2.12-2.12zm2.83 2.83L20 10.17l1.41 1.41c.19.19.34.42.44.65.1.27.15.51.15.77 0 .53-.21 1.04-.59 1.41zm2.11 3.53-1.41-1.41-2.44 2.44 1.41 1.41zm-4.55 1.73 1.41 1.41-2.44 2.44-1.41-1.41zm-1.74 4.56-2.82-2.82L10.17 20l1.42 1.42c.38.37.88.58 1.41.58s1.04-.21 1.41-.59M4 9H2V6h2zM2 5h2V4h1V2H4c-.53 0-1.04.21-1.41.59C2.21 2.96 2 3.47 2 4zm4-1V2h3v2zm.5 4C7.33 8 8 7.33 8 6.5S7.33 5 6.5 5 5 5.67 5 6.5 5.67 8 6.5 8&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp;&amp;nbsp; Risk control. Partial openness may hide biases or compliance problems. &lt;br /&gt;
&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M12 .25a.75.75 0 0 1 .673.418l3.058 6.197 6.839.994a.75.75 0 0 1 .415 1.279l-4.948 4.823 1.168 6.811a.751.751 0 0 1-1.088.791L12 18.347l-6.117 3.216a.75.75 0 0 1-1.088-.79l1.168-6.812-4.948-4.823a.75.75 0 0 1 .416-1.28l6.838-.993L11.328.668A.75.75 0 0 1 12 .25m0 2.445L9.44 7.882a.75.75 0 0 1-.565.41l-5.725.832 4.143 4.038a.75.75 0 0 1 .215.664l-.978 5.702 5.121-2.692a.75.75 0 0 1 .698 0l5.12 2.692-.977-5.702a.75.75 0 0 1 .215-.664l4.143-4.038-5.725-.831a.75.75 0 0 1-.565-.41z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp;&amp;nbsp; Ethics and trust - to enable fairness checks for areas like healthcare or government. &lt;br /&gt;
&lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M15 6h7v3h-4v4h-4v4h-4v4H3v-3h4v-4h4v-4h4zm-4.83.66-5.51 5.51-1.83-1.83 5.51-5.51L6.5 3H12v5.5z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; &amp;nbsp;&amp;nbsp; Market trends. Open models already approach closed-source performance in benchmarks.&lt;/p&gt;
&lt;h2 id=&#34;in-summary&#34;&gt;In Summary&lt;/h2&gt;
&lt;p&gt;What is often labeled as Open Source is really only Open Weights, and does not share training data and methods. While Open Weights models are powerful and economical, they lack transparency and might surface unwanted biases.&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2025/07/15/understanding-open-source-in-ai-models/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2025/07/15/understanding-open-source-in-ai-models/</guid>
      
      <enclosure url="https://dentro.de/ai/images/open-source.png" type="image/png" length="628357" />
      
    </item>
    
    <item>
      <title>How AI Large Language Models (LLM) Work</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;how-large-language-models-like-chatgpt-work&#34;&gt;How Large Language Models like ChatGPT Work&lt;/h1&gt;
&lt;p&gt;You&#39;re likely interacting with AI (Artificial Intelligence) more and more, perhaps using tools like ChatGPT for drafting emails, summarizing reports, or even brainstorming ideas. These Large Language Models (LLMs) have become remarkably capable, &lt;em&gt;but how do they actually work&lt;/em&gt;? What&#39;s happening behind the screen when you type a prompt and receive a well-informed response?&lt;/p&gt;
&lt;p&gt;This post aims to provide a high-level, conceptual understanding of the core mechanics behind LLMs. It&#39;s designed for the tech-interested who want to grasp the fundamentals without needing a deep dive into complex mathematics or code. Think of it as looking under the hood to see the main components, not rebuilding the engine. We&#39;ll explore how these models process language, how they &#34;learn,&#34; and what&#39;s actually happening during that near-instantaneous generation of text. &lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Prompt to Text&#34; src=&#34;../../../../prompt-to-text/images/prompt-to-text-1.png&#34; /&gt;&lt;/p&gt;
&lt;!-- more --&gt;

&lt;h2 id=&#34;the-building-blocks-turning-words-into-numbers&#34;&gt;The Building Blocks: Turning Words into Numbers&lt;/h2&gt;
&lt;p&gt;At their core, computers operate on numbers - not the usual complexities of human language. The first step for an LLM is to convert text into a format it can understand: numbers. This involves two key concepts: &lt;a href=&#34;/ai/glossary/#vocabulary&#34;&gt;Vocabulary&lt;/a&gt; and &lt;a href=&#34;/ai/glossary/#token&#34;&gt;Tokenization&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An LLM has a predefined, fixed dictionary called a Vocabulary. This isn&#39;t just a list of words; it includes common words, punctuation, and often parts of less common words (sub-words). The size varies - smaller models might have 30k-50k &#34;tokens&#34; in their vocabulary, while large, multilingual models can exceed 100k. (See an example &lt;a href=&#34;https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/raw/main/tokenizer.json&#34;&gt;here&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;Tokenization is the process of breaking down your input text (the prompt) into these predefined tokens from the vocabulary. For example, the word &#34;unbelievably&#34; might be split into tokens like &#34;un&#34;, &#34;believe&#34;, &#34;ably&#34;. Each token in the vocabulary has a unique numerical ID. So, your sentence becomes a sequence of numbers that the model can mathematically process.&lt;/p&gt;
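&lt;p&gt;A minimal sketch of the idea (the sub-words and IDs below are invented for illustration; real tokenizers learn their vocabulary from data, e.g. via byte-pair encoding):&lt;/p&gt;

```python
# A toy greedy longest-match tokenizer over a tiny hand-made vocabulary.
VOCAB = {"un": 0, "believ": 1, "ably": 2}

def tokenize(text):
    ids = []
    i = 0
    while i != len(text):
        # Take the longest vocabulary entry matching at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i += length
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return ids

# "unbelievably" splits into the IDs for "un", "believ" and "ably".
```

&lt;p&gt;The model never sees the letters again - from here on it works only with the sequence of IDs.&lt;/p&gt;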
&lt;p&gt;You might wonder how the model handles words with multiple meanings, like &#39;bank&#39; (river side) versus &#39;bank&#39; (financial institution). In most modern LLMs, the Tokenization process itself is primarily based on the text&#39;s characters and frequency statistics, not its meaning. Therefore, the word string &#39;bank&#39; would typically be assigned the &lt;strong&gt;same initial token ID&lt;/strong&gt; (or sequence of sub-word IDs) regardless of its intended meaning in the sentence. The crucial step happens &lt;em&gt;inside&lt;/em&gt; the neural network: through mechanisms like attention, the model analyzes the surrounding &lt;a href=&#34;/ai/glossary/#context-window&#34;&gt;Context&lt;/a&gt;. This allows it to generate vastly different &lt;em&gt;internal representations&lt;/em&gt; (called contextual &lt;a href=&#34;/ai/glossary/#embedding&#34;&gt;embeddings&lt;/a&gt;) for that &#39;bank&#39; token ID depending on whether nearby words relate to finance or geography. The model learns this ability to interpret context &lt;a href=&#34;/ai/glossary/#training&#34;&gt;during its massive Training phase&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;the-core-task-predicting-what-comes-next&#34;&gt;The Core Task: Predicting What Comes Next&lt;/h2&gt;
&lt;p&gt;Surprisingly, the fundamental task of most LLMs is simple to state: &lt;strong&gt;predicting the next token&lt;/strong&gt;. Based on the sequence of tokens it has seen so far (the Context), the model calculates the probability for every single token in its vast vocabulary of being the &lt;em&gt;next&lt;/em&gt; one in the sequence.&lt;/p&gt;
&lt;p&gt;Think of it like an incredibly sophisticated auto-complete feature. It considers the context – the words and sub-words that came before – to make the most statistically likely prediction for what should follow. Complex abilities like answering questions, summarizing text, or translating languages emerge from repeatedly performing this next-token prediction, stringing together the most probable sequence of tokens that fulfill the prompt&#39;s request.&lt;/p&gt;
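&lt;p&gt;As an illustration only, counting which token follows which in a minuscule invented corpus shows the shape of the task - a probability for every observed follower given the context. Real LLMs condition on long contexts with deep networks, not bigram counts:&lt;/p&gt;

```python
from collections import Counter, defaultdict

# A minuscule invented "corpus"; real models train on vast datasets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_distribution(token):
    # Probability of each observed follower of the given token.
    counts = follows[token]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}
```

&lt;p&gt;Asking for the distribution after &#34;the&#34; gives &#34;cat&#34; half the probability mass, with &#34;mat&#34; and &#34;fish&#34; splitting the rest - the same kind of ranked prediction an LLM produces, just vastly cruder.&lt;/p&gt;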
&lt;h2 id=&#34;inside-the-model-layers-weights-and-fixed-knowledge&#34;&gt;Inside the Model: Layers, Weights, and Fixed Knowledge&lt;/h2&gt;
&lt;p&gt;So, how does the model make these predictions? It uses a complex structure inspired by the human brain: an &lt;a href=&#34;/ai/glossary/#neural-network&#34;&gt;artificial neural network&lt;/a&gt; with many layers. Input tokens (as numbers) enter the network and are processed through these layers. Each layer performs mathematical transformations on the data passed from the previous one, allowing the model to analyze the sequence and identify increasingly complex patterns and relationships between tokens – even those far apart in the text. This ability to weigh the importance of different parts of the context is crucial for understanding meaning and generating relevant text.&lt;/p&gt;
&lt;p&gt;The &#34;intelligence&#34; or capability of the model resides in its &lt;a href=&#34;/ai/glossary/#parameters&#34;&gt;parameters&lt;/a&gt;, which are primarily Weights and Biases. These are millions, often billions, of numerical values associated with the connections between the artificial neurons in the layers. They act like tuning knobs, determining how information flows and is transformed throughout the network. When you hear that a model has &#34;7 billion parameters&#34; or &#34;7B&#34;, it is mostly these weights and biases being counted.&lt;/p&gt;
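&lt;p&gt;The arithmetic behind such parameter counts is simple. As a rough sketch, one fully connected layer contributes one weight per input-output connection plus one bias per output; real Transformer layers also contain attention matrices, but those are counted the same way:&lt;/p&gt;

```python
# Parameter arithmetic for one fully connected layer: one weight per
# input-output connection, plus one bias per output neuron.
def dense_layer_params(n_inputs, n_outputs):
    weights = n_inputs * n_outputs
    biases = n_outputs
    return weights + biases

# An example layer with 4096 inputs and 4096 outputs:
print(f"{dense_layer_params(4096, 4096):,}")  # prints 16,781,312

# Stacking many such layers is how totals reach the billions:
total = sum(dense_layer_params(4096, 4096) for _ in range(100))
print(f"{total:,}")  # prints 1,678,131,200
```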
&lt;p&gt;Crucially, after the initial training phase (which we&#39;ll discuss next), these weights and biases are fixed! &lt;strong&gt;The trained LLM, on its own, is like a final, incredibly complex mathematical function.&lt;/strong&gt; Its core knowledge is static and encoded within these parameters. During standard use (called &lt;a href=&#34;/ai/glossary/#inference&#34;&gt;inference&lt;/a&gt;), the model itself does not learn new information from your prompts. However, it&#39;s important to know that this base model can be part of a larger system that uses techniques like Retrieval-Augmented Generation (&lt;a href=&#34;/ai/glossary/#rag&#34;&gt;RAG&lt;/a&gt;) to pull in live information, giving the appearance of real-time learning.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Prompt to Text&#34; src=&#34;../../../../prompt-to-text/images/prompt-to-text-2.png&#34; /&gt;&lt;/p&gt;
&lt;h2 id=&#34;learning-the-patterns-the-immense-task-of-training&#34;&gt;Learning the Patterns: The Immense Task of Training&lt;/h2&gt;
&lt;p&gt;How do these billions of parameters get their values? Through a massive and compute-intensive Training process. The model is fed enormous amounts of text data – potentially &lt;em&gt;the internet&lt;/em&gt;, books, articles, code, and more, known as &lt;a href=&#34;/ai/glossary/#data-set&#34;&gt;Datasets&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;During training, the model is repeatedly given sequences of text and asked to predict the next token. Its prediction is compared to the &lt;em&gt;actual&lt;/em&gt; next token in the training data. An &#34;error&#34; value is calculated based on the difference. Then, using sophisticated optimization algorithms, the model&#39;s weights and biases are minutely adjusted to slightly reduce this error. This cycle of predicting, comparing, and adjusting is repeated trillions of times across the vast dataset.&lt;/p&gt;
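&lt;p&gt;The predict-compare-adjust cycle can be sketched with a single weight and plain gradient descent. This is a drastic simplification (real training adjusts billions of weights via backpropagation), shown only to illustrate how repeated small corrections reduce the error:&lt;/p&gt;

```python
# Drastically simplified training loop: one weight, nudged by gradient
# descent so the "prediction" approaches the value seen in the data.
weight = 0.0          # starts uninformed
target = 0.7          # the value the training data says is correct
learning_rate = 0.1

for step in range(100):
    prediction = weight                   # trivial "model": output = weight
    gradient = 2 * (prediction - target)  # slope of the squared error
    weight -= learning_rate * gradient    # small adjustment to reduce error

print(round(weight, 3))  # prints 0.7: the weight has absorbed the pattern
```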
&lt;p&gt;This training process is computationally immense, requiring specialized hardware (like thousands of &lt;a href=&#34;/ai/glossary/#gpu&#34;&gt;GPUs&lt;/a&gt; or TPUs) running continuously for weeks or months. It&#39;s a major reason why developing foundational LLMs is incredibly resource-intensive and expensive. (For an analogy of the complete lifecycle see &lt;a href=&#34;/ai/blog/2025/04/08/explain-ai-lifecycle-with-car-analogy/&#34;&gt;How AI Models work, explained with Cars&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;Through this intensive process, the model doesn&#39;t &#34;understand&#34; language in a human-like, conscious way. Instead, it builds a powerful internal model of the statistical relationships in the data. This goes beyond simple patterns; it learns to represent grammar, facts, reasoning paths, and different styles as complex mathematical transformations, all encoded within the final values of its weights and biases. Critically, the patterns it learns directly reflect the data it was trained on. If the data contains bias, inaccuracies, or predominantly certain viewpoints, the model will inevitably learn and potentially replicate them in its outputs.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Prompt to Text&#34; src=&#34;../../../../prompt-to-text/images/prompt-to-text-3.png&#34; /&gt;&lt;/p&gt;
&lt;h2 id=&#34;putting-it-all-together-from-prompt-to-generated-text-inference&#34;&gt;Putting It All Together: From Prompt to Generated Text (Inference)&lt;/h2&gt;
&lt;p&gt;This is what the journey from prompt to model response (Inference) looks like:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Prompt:&lt;/strong&gt; You provide your input text.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tokenize:&lt;/strong&gt; The text is broken down into tokens with numerical IDs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Initial Context:&lt;/strong&gt; These tokens form the starting context sequence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Pass:&lt;/strong&gt; The sequence is fed into the neural network with its fixed weights and biases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Predict Probabilities:&lt;/strong&gt; The model calculates the probability of every token in its vocabulary being the next one.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Select Token:&lt;/strong&gt; A token is chosen from the probability distribution. While simply picking the single most likely token (&lt;a href=&#34;/ai/glossary/#greedy-decoding&#34;&gt;greedy decoding&lt;/a&gt;) is an option, it often leads to repetitive text. Instead, a sampling strategy is used. A controlled amount of randomness is introduced (often adjusted by a setting called &lt;a href=&#34;/ai/glossary/#temperature&#34;&gt;Temperature&lt;/a&gt;), which allows less probable, but still plausible, tokens to be selected. This is key to the model&#39;s ability to generate creative and diverse outputs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Append to Context:&lt;/strong&gt; The chosen token is added to the end of the current context sequence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Repeat:&lt;/strong&gt; This updated, longer context sequence is fed back into the model (Step 4) to predict the &lt;em&gt;next&lt;/em&gt; token. This loop continues, generating the response one token at a time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stop:&lt;/strong&gt; The process stops when the model generates a special &#34;end-of-sequence&#34; token, reaches a predefined maximum length, or fulfills another stopping condition. Models have a Maximum Context Length (Context Window) – they can only effectively &#34;remember&#34; or consider a certain number of recent tokens when making predictions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;De-Tokenize:&lt;/strong&gt; The sequence of generated numerical token IDs is converted back into human-readable words and sentences.&lt;/li&gt;
&lt;/ol&gt;
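&lt;p&gt;The nine steps above can be sketched as a small generation loop. The vocabulary, probabilities, and the &#34;END&#34; stop token below are invented stand-ins; the temperature-based sampling mirrors step 6:&lt;/p&gt;

```python
import math
import random

# Toy generation loop mirroring steps 4-8: predict probabilities,
# sample a token (with temperature), append it to the context, repeat.
vocab = ["the", "cat", "sat", "mat", "on", "END"]

def predict_probs(context):
    """Stand-in for the network pass: invented probabilities that
    simply favor one fixed continuation of the toy sentence."""
    table = {
        ("the",): "cat", ("the", "cat"): "sat",
        ("the", "cat", "sat"): "on", ("the", "cat", "sat", "on"): "the",
        ("the", "cat", "sat", "on", "the"): "mat",
    }
    likely = table.get(tuple(context), "END")
    return [0.9 if tok == likely else 0.02 for tok in vocab]

def sample(probs, temperature=1.0):
    """Temperature below 1 sharpens the distribution; above 1 flattens it."""
    weights = [math.exp(math.log(p) / temperature) for p in probs]
    return random.choices(vocab, weights=weights)[0]

random.seed(0)
context = ["the"]
for _ in range(9):  # a maximum length acts as one stopping condition
    token = sample(predict_probs(context), temperature=0.2)
    if token == "END":  # the model's end-of-sequence token
        break
    context.append(token)
print(" ".join(context))  # join tokens back into readable text
```

&lt;p&gt;Lowering the temperature toward zero makes the loop behave almost greedily; raising it lets less probable tokens through more often, which is where the variety in a model&#39;s answers comes from.&lt;/p&gt;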
&lt;h2 id=&#34;summary-sophisticated-patterns-not-true-understanding&#34;&gt;Summary: Sophisticated Patterns, Not True Understanding&lt;/h2&gt;
&lt;p&gt;Large Language Models like ChatGPT are fascinating feats of engineering. They operate by tokenizing language, processing it through layered networks governed by billions of pre-learned weights and biases, and predicting the next token in a sequence based on context. Their apparent knowledge comes from statistically learning patterns in vast amounts of text data during an intensive, costly training phase, after which their core parameters are fixed.&lt;/p&gt;
&lt;p&gt;Understanding these basics helps you appreciate both the power and the limitations of these AI tools. They are incredibly sophisticated pattern-matching and generation engines, excellent at manipulating language based on the data they&#39;ve seen. They are not, however, thinking or understanding entities in the human sense. Keeping this conceptual framework in mind can help users leverage these tools effectively while maintaining realistic expectations about their capabilities and potential pitfalls, like inherited biases.&lt;/p&gt;</description>
      <link>https://dentro.de/ai/blog/2025/04/25/how-ai-large-language-models-llm-work/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Fri, 25 Apr 2025 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2025/04/25/how-ai-large-language-models-llm-work/</guid>
      
      <enclosure url="https://dentro.de/ai/images/prompt-to-text-3.png" type="image/png" length="2669544" />
      
    </item>
    
    <item>
      <title>Explain AI Lifecycle with Car Analogy</title>
      
      
        
      <author>Michael Frison</author>
        
      
      
      
      <description>&lt;h1 id=&#34;ai-model-lifecycle-analogy-explained-with-cars&#34;&gt;AI Model Lifecycle - Analogy Explained with Cars&lt;/h1&gt;
&lt;p&gt;The process of creating, deploying, and using Large Language Models (LLMs) involves numerous intricate steps. Given the complexity of AI development, drawing parallels to more familiar industrial processes can aid understanding. This guide presents an analogy, comparing each stage of the LLM lifecycle to the established process of designing, building, distributing, and using a car.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;LLM vs Car&#34; src=&#34;../../../../llm-vs-car/images/llm-vs-car-9.png&#34; /&gt;&lt;/p&gt;
&lt;p&gt;While AI models and automobiles are fundamentally different, examining their lifecycles side-by-side offers insights into the structure and challenges of LLM development. A key contrast arises during the core &#34;production&#34; phase: car manufacturing centers on the &lt;em&gt;physical assembly&lt;/em&gt; of components, whereas LLM creation hinges on a vast &lt;em&gt;computational training process&lt;/em&gt; to embed knowledge from data.&lt;/p&gt;
&lt;!-- more --&gt;

&lt;p&gt;We are using our proven mental model BEFORE → INSIDE → AFTER the AI Black Box:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Stages 1-5 → &lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M12 1.25c2.487 0 4.773.402 6.466 1.079.844.337 1.577.758 2.112 1.264.536.507.922 1.151.922 1.907v12.987l-.026.013h.026c0 .756-.386 1.4-.922 1.907-.535.506-1.268.927-2.112 1.264-1.693.677-3.979 1.079-6.466 1.079s-4.774-.402-6.466-1.079c-.844-.337-1.577-.758-2.112-1.264C2.886 19.9 2.5 19.256 2.5 18.5h.026l-.026-.013V5.5c0-.756.386-1.4.922-1.907.535-.506 1.268-.927 2.112-1.264C7.226 1.652 9.513 1.25 12 1.25M4 14.371v4.116l-.013.013H4c0 .211.103.487.453.817.351.332.898.666 1.638.962 1.475.589 3.564.971 5.909.971s4.434-.381 5.909-.971c.739-.296 1.288-.63 1.638-.962.349-.33.453-.607.453-.817h.013L20 18.487v-4.116a8 8 0 0 1-1.534.8c-1.693.677-3.979 1.079-6.466 1.079s-4.774-.402-6.466-1.079a8 8 0 0 1-1.534-.8M20 12V7.871a8 8 0 0 1-1.534.8C16.773 9.348 14.487 9.75 12 9.75s-4.774-.402-6.466-1.079A8 8 0 0 1 4 7.871V12c0 .21.104.487.453.817.35.332.899.666 1.638.961 1.475.59 3.564.972 5.909.972s4.434-.382 5.909-.972c.74-.295 1.287-.629 1.638-.96.35-.33.453-.607.453-.818M4 5.5c0 .211.103.487.453.817.351.332.898.666 1.638.962 1.475.589 3.564.971 5.909.971s4.434-.381 5.909-.971c.739-.296 1.288-.63 1.638-.962.349-.33.453-.607.453-.817 0-.211-.103-.487-.453-.817-.351-.332-.898-.666-1.638-.962-1.475-.589-3.564-.971-5.909-.971s-4.434.381-5.909.971c-.739.296-1.288.63-1.638.962C4.104 5.013 4 5.29 4 5.5&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; BEFORE  &lt;/li&gt;
&lt;li&gt;Stage 6 → &lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M8.75 8h6.5a.75.75 0 0 1 .75.75v6.5a.75.75 0 0 1-.75.75h-6.5a.75.75 0 0 1-.75-.75v-6.5A.75.75 0 0 1 8.75 8m.75 6.5h5v-5h-5Z&#34;/&gt;&lt;path d=&#34;M15.25 1a.75.75 0 0 1 .75.75V4h2.25c.966 0 1.75.784 1.75 1.75V8h2.25a.75.75 0 0 1 0 1.5H20v5h2.25a.75.75 0 0 1 0 1.5H20v2.25A1.75 1.75 0 0 1 18.25 20H16v2.25a.75.75 0 0 1-1.5 0V20h-5v2.25a.75.75 0 0 1-1.5 0V20H5.75A1.75 1.75 0 0 1 4 18.25V16H1.75a.75.75 0 0 1 0-1.5H4v-5H1.75a.75.75 0 0 1 0-1.5H4V5.75C4 4.784 4.784 4 5.75 4H8V1.75a.75.75 0 0 1 1.5 0V4h5V1.75a.75.75 0 0 1 .75-.75m3 17.5a.25.25 0 0 0 .25-.25V5.75a.25.25 0 0 0-.25-.25H5.75a.25.25 0 0 0-.25.25v12.5c0 .138.112.25.25.25Z&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; INSIDE  &lt;/li&gt;
&lt;li&gt;Stages 7-9 → &lt;span class=&#34;twemoji&#34;&gt;&lt;svg xmlns=&#34;http://www.w3.org/2000/svg&#34; viewBox=&#34;0 0 24 24&#34;&gt;&lt;path d=&#34;M20.322.75h1.176a1.75 1.75 0 0 1 1.75 1.749v1.177a10.75 10.75 0 0 1-2.925 7.374l-1.228 1.304a24 24 0 0 1-1.596 1.542v5.038c0 .615-.323 1.184-.85 1.5l-4.514 2.709a.75.75 0 0 1-1.12-.488l-.963-4.572a1.3 1.3 0 0 1-.14-.129L8.04 15.96l-1.994-1.873a1.3 1.3 0 0 1-.129-.14l-4.571-.963a.75.75 0 0 1-.49-1.12l2.71-4.514c.316-.527.885-.85 1.5-.85h5.037a24 24 0 0 1 1.542-1.594l1.304-1.23A10.75 10.75 0 0 1 20.321.75Zm-6.344 4.018v-.001l-1.304 1.23a22.3 22.3 0 0 0-3.255 3.851l-2.193 3.29 1.859 1.744.034.034 1.743 1.858 3.288-2.192a22.3 22.3 0 0 0 3.854-3.257l1.228-1.303a9.25 9.25 0 0 0 2.517-6.346V2.5a.25.25 0 0 0-.25-.25h-1.177a9.25 9.25 0 0 0-6.344 2.518M6.5 21c-1.209 1.209-3.901 1.445-4.743 1.49a.24.24 0 0 1-.18-.067.24.24 0 0 1-.067-.18c.045-.842.281-3.534 1.49-4.743.9-.9 2.6-.9 3.5 0s.9 2.6 0 3.5m-.592-8.588L8.17 9.017q.346-.519.717-1.017H5.066a.25.25 0 0 0-.214.121l-2.167 3.612ZM16 15.112q-.5.372-1.018.718l-3.393 2.262.678 3.223 3.612-2.167a.25.25 0 0 0 .121-.214ZM17.5 8a1.5 1.5 0 1 1-3.001-.001A1.5 1.5 0 0 1 17.5 8&#34;/&gt;&lt;/svg&gt;&lt;/span&gt; AFTER  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The comparison below demystifies the LLM journey:&lt;/p&gt;
&lt;h2 id=&#34;before-the-ai-black-box&#34;&gt;BEFORE the AI Black Box&lt;/h2&gt;
&lt;h3 id=&#34;1-architecture-design-research&#34;&gt;1. Architecture Design &amp;amp; Research&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;Define the LLM&#39;s goals (chat, code generation, image analysis, etc.).&lt;/li&gt;
&lt;li&gt;Select the target model size (influences capability and resource needs).&lt;/li&gt;
&lt;li&gt;Design the core neural network structure (e.g., Transformer architecture, number of layers).&lt;/li&gt;
&lt;li&gt;Choose the fundamental learning algorithms.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Conceptualization &amp;amp; Engineering&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Define purpose (sedan, truck), target market.&lt;/li&gt;
&lt;li&gt;Create detailed CAD designs (chassis, body).&lt;/li&gt;
&lt;li&gt;Simulate performance.&lt;/li&gt;
&lt;li&gt;Select core technologies (engine type).&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id=&#34;2-data-curation-preprocessing&#34;&gt;2. Data Curation &amp;amp; Preprocessing&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;Gather massive datasets (text, code, images) - the raw information the LLM learns from.&lt;/li&gt;
&lt;li&gt;Clean, filter, and format this data for training effectiveness.&lt;/li&gt;
&lt;li&gt;Decide the mix of different data sources to shape the LLM&#39;s knowledge base.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Material Sourcing &amp;amp; Specification&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select specific materials (steel, plastic).&lt;/li&gt;
&lt;li&gt;Define quality standards for parts.&lt;/li&gt;
&lt;li&gt;Source components (engines, tires).&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id=&#34;3-training-tuning-alignment&#34;&gt;3. Training, Tuning &amp;amp; Alignment&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;Run initial, smaller-scale training experiments to test approaches.&lt;/li&gt;
&lt;li&gt;Tune technical settings (hyperparameters) for optimal learning.&lt;/li&gt;
&lt;li&gt;Perform alignment (e.g., using human feedback - RLHF) to make the LLM safer, more helpful, and better at following instructions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Prototype Building, Testing &amp;amp; Calibration&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Build test vehicles (prototypes).&lt;/li&gt;
&lt;li&gt;Conduct extensive tests (road, crash, emissions).&lt;/li&gt;
&lt;li&gt;Tune engine software (ECU), adjust suspension based on test data.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id=&#34;4-infrastructure-code-finalization&#34;&gt;4. Infrastructure &amp;amp; Code Finalization&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;Configure large clusters of powerful GPUs or specialized AI chips (TPUs).&lt;/li&gt;
&lt;li&gt;Finalize the optimized software code that will manage the training process.&lt;/li&gt;
&lt;li&gt;Set up systems to monitor the complex training run.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Factory Tooling &amp;amp; Manufacturing Process Design&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set up the factory assembly lines.&lt;/li&gt;
&lt;li&gt;Program the manufacturing robots.&lt;/li&gt;
&lt;li&gt;Define quality control checkpoints.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id=&#34;5-large-scale-model-training-core-creation&#34;&gt;5. Large-Scale Model Training (Core Creation)&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;Execute the main training process, feeding the entire curated dataset through the model architecture using the prepared infrastructure and code.&lt;/li&gt;
&lt;li&gt;This is the primary &lt;em&gt;learning phase&lt;/em&gt; where the LLM&#39;s internal &lt;strong&gt;parameters&lt;/strong&gt; are iteratively adjusted based on patterns identified in the vast amounts of data, crystallizing its knowledge and capabilities.&lt;/li&gt;
&lt;li&gt;A computationally intensive step requiring significant time (weeks/months) and energy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Mass Manufacturing (Assembly Line)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run the production lines, systematically bringing together sourced components and sub-assemblies according to the design.&lt;/li&gt;
&lt;li&gt;This is the primary &lt;em&gt;physical construction phase&lt;/em&gt; where the car is assembled piece by piece into a functional vehicle.&lt;/li&gt;
&lt;li&gt;Relies on defined processes, robotics, and human labor to build cars efficiently at scale.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id=&#34;inside-the-ai-black-box&#34;&gt;INSIDE the AI Black Box&lt;/h2&gt;
&lt;h3 id=&#34;6-trained-model-weights-parameters&#34;&gt;6. Trained Model Weights (Parameters)&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;The result of training: the final set of numerical &lt;a href=&#34;/ai/glossary/#parameters&#34;&gt;parameters&lt;/a&gt; (often called model weights). These numbers &lt;em&gt;are&lt;/em&gt; the trained LLM&#39;s core intelligence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;What are Parameters?&lt;/strong&gt; Think of parameters like the potential &lt;strong&gt;&#34;information capacity&#34;&lt;/strong&gt; of the model. More parameters (e.g., 7 billion vs. 70 billion) mean the model has more capacity to store complex patterns and knowledge learned from the data. It is &lt;em&gt;roughly&lt;/em&gt; analogous to a car having more &lt;strong&gt;seats&lt;/strong&gt; or a larger engine - it indicates a greater potential capacity or power, though not guaranteeing better performance in all situations. This parameter count is a key differentiator between LLM sizes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Finished Car (Off the Production Line)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The physical vehicle, fully assembled.&lt;/li&gt;
&lt;li&gt;Calibrated and ready for distribution.&lt;/li&gt;
&lt;li&gt;The number of seats or engine size indicates its capacity/power category.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id=&#34;after-the-ai-black-box&#34;&gt;AFTER the AI Black Box&lt;/h2&gt;
&lt;h3 id=&#34;7-deployment-access-provisioning&#34;&gt;7. Deployment &amp;amp; Access Provisioning&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;Package the model weights and necessary software code.&lt;/li&gt;
&lt;li&gt;Make the LLM accessible via the company&#39;s digital channels: API portals, websites (for direct use or downloads), or developer platforms (SDKs).&lt;/li&gt;
&lt;li&gt;This is how the finished LLM &#34;product&#34; reaches its users.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Logistics, Dealer Network, Sales &amp;amp; Marketing&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ship finished cars to established dealerships.&lt;/li&gt;
&lt;li&gt;Utilize the sales and service network infrastructure.&lt;/li&gt;
&lt;li&gt;Market and sell the vehicles to customers through these channels.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id=&#34;8-inference-engine-interaction&#34;&gt;8. Inference Engine &amp;amp; Interaction&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;The software environment that loads the model weights (parameters).&lt;/li&gt;
&lt;li&gt;It handles &lt;strong&gt;Inference&lt;/strong&gt;: the process of using the trained model to generate output based on new input.&lt;/li&gt;
&lt;li&gt;Users provide this input as a &lt;strong&gt;Prompt&lt;/strong&gt;: the specific text, question, or instruction given to the LLM.&lt;/li&gt;
&lt;li&gt;The engine manages the computation needed for the LLM to respond to the prompt.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Operational Systems &amp;amp; Controls (ECU, Pedals, Wheel)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The car&#39;s computer (ECU) interpreting sensor data and driver input.&lt;/li&gt;
&lt;li&gt;The physical controls (steering wheel, pedals) used by the driver to direct the car.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id=&#34;9-ongoing-model-usage-inference-not-live-training&#34;&gt;9. Ongoing Model Usage (Inference, Not Live Training)&lt;/h3&gt;
&lt;div class=&#34;grid cards&#34;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;div class=&#34;admonition example&#34;&gt;
&lt;ul&gt;
&lt;li&gt;The end-user repeatedly interacts with the LLM by providing &lt;strong&gt;prompts&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Each interaction triggers the &lt;strong&gt;inference&lt;/strong&gt; process: running the user&#39;s input through the fixed, pre-trained model weights (parameters) to generate a response.&lt;/li&gt;
&lt;li&gt;Crucially, this standard interaction does &lt;strong&gt;not&lt;/strong&gt; typically retrain or modify the underlying LLM in real-time. &lt;strong&gt;It is executing a complex mathematical function&lt;/strong&gt; on a static set of parameters, not engaging in a &#34;live&#34; learning conversation. (Note: Interaction data might be collected separately by the provider for &lt;em&gt;future&lt;/em&gt; model improvements, but that&#39;s distinct from the immediate inference.)&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;div class=&#34;admonition info&#34;&gt;
&lt;p class=&#34;admonition-title&#34;&gt;Car Analogy: Driving the Car&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The driver continuously uses the controls (like the steering wheel, pedals) to operate the car within its designed capabilities.&lt;/li&gt;
&lt;li&gt;Adjustments like changing the seat position, turning on the heater, or selecting a radio station are user inputs/settings within the car&#39;s fixed system - analogous to providing different prompts.&lt;/li&gt;
&lt;li&gt;Ignoring wear and tear, the act of driving and making these adjustments does not fundamentally change the car&#39;s engine design, chassis, or core engineering. The user is operating a defined, isolated product.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;</description>
      <link>https://dentro.de/ai/blog/2025/04/08/explain-ai-lifecycle-with-car-analogy/?utm_source=rss&amp;utm_medium=RSS&amp;utm_campaign=feed-syndication</link>
      <pubDate>Tue, 08 Apr 2025 00:00:00 +0000</pubDate>
      <source url="https://dentro.de/ai/feed_rss_created.xml">dentro.de/ai</source>
      
      <guid isPermaLink="true">https://dentro.de/ai/blog/2025/04/08/explain-ai-lifecycle-with-car-analogy/</guid>
      
      <enclosure url="https://dentro.de/ai/images/llm-vs-car-9.png" type="image/png" length="1292315" />
      
    </item>
    
  </channel>
</rss>