In the early 1990s, I was building predictive regression models for direct marketing campaigns. There was no cloud infrastructure, no API to call, no pre-trained model to fine-tune. You designed the analytical framework by hand, ran your regression on whatever hardware the organization could afford, printed the output — sometimes literally printed it — and then spent weeks convincing a room full of executives that the numbers meant what you said they meant, and that the decision they were about to make should be informed by them.
Most of the time, the hard part wasn't the model. The hard part was the last mile: turning a prediction into a decision that an organization would actually act on.
I have been thinking about that period a great deal lately, because I keep encountering a version of a mistake I watched organizations make repeatedly thirty years ago, now dressed in the language of AI, large language models, and machine learning pipelines. The mistake is treating the model as the answer.
It was wrong then. It is wrong now.
The conversation we are not having
The current discourse around AI operates almost entirely in one of two registers. The first is evangelical: AI changes everything, the old rules no longer apply, the companies that move fastest will win. The second is skeptical: AI is overhyped, the risks are underappreciated, the ROI hasn't materialized. Both camps are generating enormous amounts of noise, and neither is particularly useful to someone who actually has to make a decision about how to apply AI to a real business problem.
What is almost entirely absent from the conversation is the practitioner's perspective — the view from someone who has been building analytical systems for long enough to have watched multiple technology cycles come and go, who can separate what is genuinely new from what is a faster version of something that was always true.
That distinction matters enormously. Because if you believe the analytical discipline began with AI, you'll skip the fundamentals and go straight to the model. And if you skip the fundamentals, the model will fail — not spectacularly, but quietly, in the way that analytical failures always fail: the output will be technically correct and operationally useless.
What has actually changed
Let me be precise about what AI changes, because it does change real things.
Speed and scale. What once took weeks of data preparation, model training, and manual validation can now be done in hours. The compute that required institutional infrastructure in the 1990s is now accessible to a two-person startup. That is not a minor upgrade — it is a genuine democratization of analytical capability that has real commercial implications.
Modality. Until recently, predictive modeling operated almost entirely on structured data — rows, columns, numbers you could put in a spreadsheet. Modern AI systems can operate on text, images, audio, and video simultaneously. The emergence of multimodal AI is a genuine capability expansion, not just a marketing category. I have spent the last two years building a system that classifies video content at the scene level — something that was not practically feasible at commercial scale before this generation of AI infrastructure existed.
Accessibility of implementation. You no longer need a PhD in machine learning to build a system that uses one. The tooling has matured to the point where someone with strong analytical judgment and a command of modern software development can ship a production AI application. That combination — analytical rigor plus implementation fluency — is what the market is short of. The raw technical capability is no longer the bottleneck.
These are meaningful changes. I am not dismissing them. But notice what is not on this list.
What has not changed
The principles that determine whether an analytical system produces value have not changed at all. They are the same principles I learned in the early 1990s, the same ones I applied at Digitas, the same ones I used to design the scoring architecture for the AI platform I am building now.
Principle One: Problem framing precedes model selection
The single most common failure mode I see in organizations adopting AI is selecting a technology before defining the problem. The sequence gets inverted: we have a large language model, now what can we do with it? This is the equivalent of buying a regression package in 1994 and then looking for a variable to regress.
The discipline of analytical problem framing — defining precisely what you are trying to predict or classify, what decision that prediction will inform, what the cost of being wrong is in each direction — has to happen before any model enters the conversation. AI does not change this. It makes it more important, because the sophistication of the tooling makes it easier than ever to produce output that looks rigorous and isn't.
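To make that concrete, here is a minimal sketch, in Python, of what a written-down problem frame can look like before any model is selected. The field names and costs are hypothetical, borrowed from the direct-marketing setting above; the point is that the asymmetric costs of being wrong, not the model, determine the action threshold.

```python
from dataclasses import dataclass

@dataclass
class ProblemFrame:
    """What we predict, what decision it informs, and what each error costs."""
    target: str                  # the quantity the model estimates
    decision: str                # the action the prediction will inform
    cost_false_positive: float   # cost of acting when we should not have
    cost_false_negative: float   # cost of failing to act when we should have

    def action_threshold(self) -> float:
        # Act when the expected cost of acting is lower than not acting:
        # (1 - p) * cost_fp < p * cost_fn  =>  p > cost_fp / (cost_fp + cost_fn)
        return self.cost_false_positive / (
            self.cost_false_positive + self.cost_false_negative
        )

frame = ProblemFrame(
    target="probability this customer responds to the offer",
    decision="include the customer in the mailing",
    cost_false_positive=1.50,    # a wasted mail piece
    cost_false_negative=40.00,   # margin lost on a responder we skipped
)
print(f"Mail anyone scoring above {frame.action_threshold():.3f}")  # about 0.036
```

Notice that nothing in this frame names a model. The threshold falls out of the economics of the decision, and it has to be written down before the modeling starts.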
Principle Two: A model is only as good as its validation
I have always insisted on validating frameworks against reality. Not against the training data. Not against a held-out test set that was drawn from the same distribution as the training data. Against the real world, under real conditions, where the assumptions built into the model are tested against outcomes you didn't engineer.
This instinct is not common. Most analytical work stops at "the model performed well on the test set." That is necessary but not sufficient. The question that matters is whether the model's predictions changed the decisions that were made, and whether those decisions led to better outcomes. That loop — prediction, decision, outcome, validation — is the only one that produces genuine organizational learning.
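Here is a minimal sketch of what that loop can look like when it is actually recorded, assuming a simple tabular setup. The field names are illustrative, not a prescribed schema; the point is that validation compares realized outcomes, not test-set metrics, and separates the cases where the model actually changed the decision from those where it did not.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class DecisionRecord:
    """One pass through the loop: prediction -> decision -> outcome."""
    case_id: str
    predicted_score: float
    default_action: str              # what would have happened without the model
    action_taken: str                # what the organization actually did
    outcome: Optional[float] = None  # filled in later, from real-world results

def validate_against_reality(records: list[DecisionRecord]) -> dict:
    """Did the model change any decisions, and did those decisions pay off?"""
    settled = [r for r in records if r.outcome is not None]
    changed = [r.outcome for r in settled if r.action_taken != r.default_action]
    unchanged = [r.outcome for r in settled if r.action_taken == r.default_action]
    return {
        "decisions_changed": len(changed),
        "avg_outcome_when_model_changed_decision": mean(changed) if changed else None,
        "avg_outcome_when_it_did_not": mean(unchanged) if unchanged else None,
    }
```

If decisions_changed comes back as zero, the model may be perfectly accurate and still worthless.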
Principle Three: Know what your model cannot tell you
Every model has a boundary condition — a class of inputs for which it was not designed, a type of question it cannot answer, a scenario where the assumptions embedded in its training break down. Experienced analytical practitioners know where those boundaries are and communicate them clearly. Inexperienced ones don't know, or don't say.
With AI systems, this principle is more important than ever, because the outputs are more fluent and more confident-sounding than anything analytical systems produced before. A large language model that is wrong still sounds authoritative. A regression model that is wrong at least shows you the confidence interval. The discipline of knowing your model's limits — and communicating them honestly to the people making decisions — is a human skill. AI does not supply it.
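One unglamorous way to make a boundary explicit is to check each new input against the ranges the model actually saw in training, and to attach that caveat to every score before a decision-maker sees it. A minimal sketch, with hypothetical names, assuming numeric features:

```python
def within_known_bounds(features: dict[str, float],
                        training_ranges: dict[str, tuple[float, float]]) -> bool:
    """The simplest boundary check: have we seen inputs like this before?"""
    return all(
        lo <= features[name] <= hi
        for name, (lo, hi) in training_ranges.items()
        if name in features
    )

def report_score(score: float,
                 features: dict[str, float],
                 training_ranges: dict[str, tuple[float, float]]) -> dict:
    """Never hand over a bare number; say what the model cannot tell you."""
    ok = within_known_bounds(features, training_ranges)
    return {
        "score": score,
        "in_bounds": ok,
        "caveat": None if ok else
            "Input falls outside the data this model was built on; "
            "do not act on this score alone.",
    }
```

It is crude, and it will not catch every failure mode, but it puts the model's limits in writing instead of leaving them in the analyst's head.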
Principle Four: The decision is the only thing that matters
I spent the first decade of my career watching sophisticated analytical work disappear into organizations without changing anything. Beautiful models, rigorous methodology, carefully validated outputs — presented to executives who nodded, filed the report, and made the same decision they would have made without the analysis.
The failure was not in the analytics. The failure was in the last mile: the translation from what the model said to what the organization did. Closing that gap is not a technical problem. It is a problem of communication, organizational design, and executive trust. It requires someone who can stand in a room full of skeptical senior leaders and explain, without hedging, what the data says, why the framework is sound, and what the decision should be.
Thirty years of building models taught me that this is the rarest skill in analytical practice. AI has made the upstream work dramatically easier. The last mile is as hard as it ever was.
What this means for your organization
If you are a CMO, a chief analytics officer, or a founder building a data product, here is the honest assessment of where most AI initiatives fail.
They fail not because the technology doesn't work. They fail because the organization skips the analytical fundamentals, mistakes a working model for a solved problem, and never builds the organizational infrastructure to act on what the model produces. The model goes live. The dashboards are beautiful. Six months later, someone asks whether anything actually changed. The answer is usually no.
The organizations that are extracting real value from AI are doing something different. They are treating AI as a delivery mechanism for analytical capability — and investing as much in the analytical framework design, the validation discipline, and the decision infrastructure as they are in the technology itself. The model is one component of a system. The system includes the humans who designed the problem, the process that validates the output, and the organizational will to act on what the analysis produces.
That system — the full arc from problem definition to decision — is what I have been building in various forms for thirty years. The tools I am using now are incomparably more powerful than what I had in the 1990s. The principles are identical.
That is not a limitation of AI. It is, I would argue, the most important thing to understand about it.