Over the past year, this blog has grown into a collection of deep dives into AI - from how models actually work to what AI transformation means for organizations. Each article aims to make complex concepts click, especially if you're approaching AI from a business or practical IT perspective.

Never miss a post - Subscribe via RSS

AI Adoption is a Trap

Optimizing for Today Might Lock You Out of Tomorrow

Everyone tells you to implement AI. It is what the conference speakers promote, what competitors are doing, where the money is being made, where the big headlines are. And they are not wrong - AI delivers real productivity gains, cost savings, and a strong feeling of competitive advantage. It seems like the right thing to do.

It sounds paradoxical, but that is precisely why it is a trap.


AI Adoption vs AI Transformation

Learn from DTM to Build an AI-Native Organization

The pressure to "adopt AI" echoes earlier technology waves: the personal computer, the internet, mobile. Each time, organizations that merely bolted new technology onto existing structures captured only a fraction of the available value.

AI is no different - except the stakes are higher, and the window shorter.

The critical distinction is between AI adoption (adding tools to existing workflows) and AI transformation (redesigning the organization around AI capabilities). One makes humans faster. The other builds a machine that runs workflows end to end.

DTM - AI Adoption vs AI Transformation

The Great Squeeze - Understanding LLM Information Density

A modern Large Language Model (LLM) can retrieve and connect information from a massive body of knowledge, yet the resulting model weights are surprisingly small compared to the data the model was trained on. This compression is possible because we have moved from an architecture of data storage to one of mathematical representation.

In traditional computing, we rely on Data Persistence. If you want to "know" 10 trillion words, you must store 10 trillion words in a database. LLMs break this 1:1 relationship through a process of high-density compression. We aren't building a digital library; we are training a mathematical representation of that library.
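To get a feel for the scale of that compression, here is a rough back-of-envelope calculation. The corpus size follows the 10-trillion-word example above; the model size and bytes-per-word figures are illustrative assumptions, not numbers from the original post:

```python
# Back-of-envelope comparison of training-corpus size vs. model size.
# All concrete numbers are illustrative assumptions.

words_in_corpus = 10e12      # 10 trillion words, as in the example above
bytes_per_word = 6           # ~5 characters plus a space in plain UTF-8 text
corpus_bytes = words_in_corpus * bytes_per_word

parameters = 70e9            # an assumed 70B-parameter model
bytes_per_parameter = 2      # 16-bit weights, e.g. bfloat16
model_bytes = parameters * bytes_per_parameter

print(f"Corpus: ~{corpus_bytes / 1e12:.0f} TB of raw text")
print(f"Model:  ~{model_bytes / 1e9:.0f} GB of weights")
print(f"Ratio:  roughly {corpus_bytes / model_bytes:.0f}:1")
```

Even with generous assumptions, the weights come out hundreds of times smaller than the text they were trained on - a mathematical representation, not a copy.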

Understanding Compression

Developing a Three-Week AI Curriculum as a Personal Side Project

About half a year ago I launched dentro.de/ai with a simple idea in mind: provide clear, simplified explanations of modern AI in one place - "Inside AI".

In the background there was always a bigger plan. I wanted all those loose pieces to eventually line up as a complete, scalable learning path. Not just a glossary here, a visualizer there, a blog post over there, but something you could actually follow from start to finish.

That missing piece is finally here: a 3 Week Learning Curriculum that ties everything together and walks you through the AI Black Box from the outside in.

This post is a bit of a behind-the-scenes story of how and why I built it, what ended up in the course, and how you might use it - whether for yourself or to teach others.


Analogy to understand Open Source in generative AI Models

We often hear the terms "Open Source" and "Open Weights" in the world of AI models - but what is the difference? In traditional software, open source means full transparency and reproducibility. AI is not like traditional software, because the "brain" - our fully trained model - is a compressed representation of the huge amount of training data.

The good news is we can use our Mental Model BEFORE → INSIDE → AFTER the AI Black Box again - but this time to explain Open Source vs. Open Weights in AI.


Let's dive deeper and explain Open Source vs Open Weights in the context of AI models - with the help of a car analogy (again!) for clarity, and to help you make informed decisions. For terms like Parameters (the Model Weights in "Open Weights") or Training, see the AI Glossary.
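To make the "Open Weights" half of the distinction concrete, here is a minimal sketch using the Hugging Face transformers library (the model name is a placeholder, not a real release): downloading the weights lets you run the model, but it gives you neither the training data nor the training pipeline.

```python
# Minimal sketch of running an open-weights model locally, assuming the
# Hugging Face transformers library; the model name is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "example-org/open-weights-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)      # fetch tokenizer
model = AutoModelForCausalLM.from_pretrained(model_name)   # fetch the weights

inputs = tokenizer("Open weights let you", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)      # run inference
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# What this step does NOT give you: the training data, the data-cleaning
# pipeline, or the training code - the parts "Open Source" would cover.
```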

How Large Language Models like ChatGPT Work

You're likely interacting with AI (Artificial Intelligence) more and more, perhaps using tools like ChatGPT for drafting emails, summarizing reports, or even brainstorming ideas. These Large Language Models (LLMs) have become remarkably capable, but how do they actually work? What's happening behind the screen when you type a prompt and receive a well-informed response?

This post aims to provide a high-level, conceptual understanding of the core mechanics behind LLMs. It's designed for tech-interested readers who want to grasp the fundamentals without a deep dive into complex mathematics or code. Think of it as looking under the hood to see the main components, not rebuilding the engine. We'll explore how these models process language, how they "learn," and what's actually happening during that near-instantaneous generation of text.
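One way to see the core mechanic up front: text generation is an autoregressive loop - predict a distribution over the next token, sample one, append it, repeat. The sketch below replaces the real neural network with a hypothetical predict_next_token function; the numbers are toy stand-ins, but the loop structure matches what the post describes.

```python
import random

# Stand-in for the trained model: given the tokens so far, return a
# probability distribution over candidate next tokens. In a real LLM this
# is a neural network with billions of parameters; here it is a toy.
def predict_next_token(tokens: list[str]) -> dict[str, float]:
    return {"world": 0.6, "there": 0.3, "again": 0.1}  # dummy distribution

def generate(prompt_tokens: list[str], max_new_tokens: int = 5) -> list[str]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = predict_next_token(tokens)                   # "forward pass"
        candidates, weights = zip(*probs.items())
        next_token = random.choices(candidates, weights)[0]  # sample one token
        tokens.append(next_token)                            # append and repeat
    return tokens

print(generate(["Hello"]))  # e.g. ['Hello', 'world', 'there', ...]
```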

Prompt to Text

AI Model Lifecycle - Analogy Explained with Cars

The process of creating, deploying, and using Large Language Models (LLMs) involves numerous intricate steps. Given the complexity of AI development, drawing parallels to more familiar industrial processes can aid understanding. This guide presents an analogy, comparing each stage of the LLM lifecycle to the established process of designing, building, distributing, and using a car.

LLM vs Car

While AI models and automobiles are fundamentally different, examining their lifecycles side by side offers insights into the structure and challenges of LLM development. A key contrast arises during the core "production" phase: car manufacturing centers on the physical assembly of components, whereas LLM creation hinges on a vast computational training process to embed knowledge from data.