The AI Readiness Problem

Most companies that claim to be “AI-forward” haven’t fixed the prerequisites. CI/CD is fragile or manual. Data infrastructure is scattered across spreadsheets and legacy databases. The engineering team has never shipped a model to production. AI features built on this foundation don’t ship, don’t scale, and don’t survive the engineer who built them leaving.

The AI Readiness Ladder framework maps where companies actually sit versus where they think they sit. The gap is almost always wider than leadership believes. During diligence, I assess a company’s true position on this ladder. Post-close, I move them up it—systematically, in a sequence that builds on itself rather than creating isolated experiments that die on the vine.

The question isn’t whether AI can create value in a portfolio company. It’s whether the company has the foundation to capture that value durably—or whether it will build something fragile that looks good in a board deck and collapses under production load.


What I Deploy

Document Ingestion and Classification

Built an AI-native document ingestion engine that compressed 45 days of manual processing into one business day. Open-source models running on client infrastructure—no per-API-call costs, no data leaving the building. Processing 100K+ legacy documents including damaged and handwritten scans with automated classification, extraction, and database integration. This is the pattern: identify the manual bottleneck, build the AI system, deploy it on infrastructure the team owns and can maintain.
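The classify-extract-integrate pattern described above can be sketched minimally. Every name here (classify_document, extract_fields, ProcessedDoc) is illustrative, standing in for locally hosted models; it is not the actual engine.

```python
# Minimal sketch of the classify -> extract -> load pattern.
# Function names are illustrative stand-ins for locally hosted models.
from dataclasses import dataclass, field

@dataclass
class ProcessedDoc:
    doc_id: str
    doc_type: str
    fields: dict = field(default_factory=dict)

def classify_document(text: str) -> str:
    """Stand-in for a local classification model."""
    return "invoice" if "invoice" in text.lower() else "unknown"

def extract_fields(text: str, doc_type: str) -> dict:
    """Stand-in for a local extraction model; empty when unclassified."""
    return {"raw_length": len(text)} if doc_type != "unknown" else {}

def ingest(doc_id: str, text: str) -> ProcessedDoc:
    doc_type = classify_document(text)
    return ProcessedDoc(doc_id, doc_type, extract_fields(text, doc_type))

batch = [("doc-001", "Invoice #4471 ..."), ("doc-002", "illegible scan")]
results = [ingest(i, t) for i, t in batch]
```

The point of the sketch is the shape, not the models: each stage is a replaceable function, so the team that owns the infrastructure can swap in better models without rewriting the pipeline.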

Agentic Workflows

AI agents that execute multi-step business processes autonomously. Not chatbots—systems that take action, verify results, and escalate when confidence is low. These are production systems embedded in the product, not demos. The distinction matters because agentic workflows that actually work require robust error handling, confidence thresholds, and human-in-the-loop escalation paths that most prototypes skip.
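The act, verify, escalate loop described above can be sketched as follows. The threshold value and step names are assumptions for illustration, not values from any production system.

```python
# Sketch of one step in an agentic workflow: execute, verify the result,
# and escalate to a human when the verifier's confidence is too low.
# CONFIDENCE_THRESHOLD is an illustrative value.
CONFIDENCE_THRESHOLD = 0.85

def run_step(step, execute, verify):
    """Run one workflow step; return an escalation record instead of
    proceeding when confidence falls below the threshold."""
    result = execute(step)
    confidence = verify(step, result)
    if confidence < CONFIDENCE_THRESHOLD:
        return {"step": step, "status": "escalated", "confidence": confidence}
    return {"step": step, "status": "done",
            "confidence": confidence, "result": result}

# Toy execute/verify callables standing in for model calls:
outcomes = [
    run_step("parse-order", lambda s: "ok", lambda s, r: 0.97),
    run_step("apply-refund", lambda s: "ok", lambda s, r: 0.60),
]
```

The escalation branch is the part most prototypes skip: a step that cannot be verified confidently produces a record a human can act on, rather than silently proceeding.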

Intelligent Automation

Replacing manual processes that were too complex for traditional rule-based automation but are well-suited for LLM-powered reasoning. The target is processes where a human currently applies judgment to unstructured inputs—document review, classification, routing, extraction—and where the cost of that human judgment is a bottleneck to scaling the business.

Model Selection and Deployment

Choosing between commercial APIs, open-source models, and fine-tuned models based on the specific use case. Deploying on client infrastructure when data sensitivity or cost demands it. The decision framework is pragmatic: commercial APIs for low-volume, non-sensitive workloads; open-source models for high-volume, cost-sensitive, or data-sensitive applications; fine-tuned models when domain specificity creates durable competitive advantage.
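The decision framework above can be expressed as a simple function. The volume cutoff and the priority ordering (fine-tuning first, then sensitivity and volume) are illustrative assumptions layered on the stated rules.

```python
# Sketch of the stated decision framework: commercial APIs for low-volume,
# non-sensitive workloads; open-source models for high-volume, cost-sensitive,
# or data-sensitive work; fine-tuned models when domain specificity creates
# durable advantage. The 100K-call cutoff is an illustrative assumption.
def select_deployment(monthly_calls: int, data_sensitive: bool,
                      domain_moat: bool) -> str:
    if domain_moat:
        return "fine-tuned model on owned infrastructure"
    if data_sensitive or monthly_calls > 100_000:
        return "open-source model on owned infrastructure"
    return "commercial API"
```

In practice the inputs are judgment calls made during diligence, but encoding the framework this way makes the trade-offs explicit and auditable.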


Durable vs. Disposable AI

Disposable AI

Wrapping an OpenAI API call, calling it a feature, and hoping the margin math works when usage scales. Vendor-dependent, undifferentiated, and margin-negative at volume. Every competitor can build the same thing in a weekend. The “AI feature” is a cost center that the next buyer will evaluate skeptically—because it creates no defensible advantage and its economics degrade with scale.

Durable AI

Models tuned on proprietary data, running on owned infrastructure, embedded in production workflows that customers depend on. Defensible, margin-positive, and increasingly valuable as the model improves on the company’s own data. The switching cost is real because the model has learned the company’s domain—that knowledge doesn’t transfer to a competitor’s generic implementation.

The difference matters for PE because durable AI creates enterprise value that survives an exit. A buyer’s diligence team can evaluate the model, the data pipeline, the infrastructure, and the competitive moat it creates. Disposable AI is a line item in the cost structure that a buyer will discount—or plan to replace.

Every AI deployment I build is designed for durability. Open-source models on client infrastructure. Data pipelines the team can maintain. Documentation that lets the next engineer extend what I built. The goal is not to create dependency on Blackmere—it’s to create AI capabilities that the company owns entirely and that create measurable, defensible value.


How This Fits Into Diligence

AI value creation is increasingly the reason deal teams engage Blackmere: not just to assess technical risk, but to identify and quantify the AI opportunity that becomes part of the deal thesis.

During buy-side diligence: Assess AI readiness, evaluate existing AI initiatives (if any), identify AI value creation opportunities that can be modeled into the deal economics. The assessment maps the company’s current position on the AI Readiness Ladder and estimates the investment required to move up it.
Post-close execution: Deploy AI where the diligence identified opportunity. Same principal, same context, no translation loss. The AI roadmap is written by the person who will build the systems—not a consultant who will hand it to a team that has never deployed a model to production.
Portfolio-wide AI strategy: For funds managing multiple software assets, I assess AI opportunity across the portfolio and prioritize deployment where it creates the most value. Common patterns emerge—document processing, customer onboarding acceleration, operational automation—and the frameworks built for one company can be adapted for others.