Something's happening underneath the AI conversation that most people haven't noticed yet.
The surface story is familiar. Every business is “using AI.” Every vendor added it to their roadmap. Every founder has a ChatGPT tab open and probably a Claude tab too. The noise makes it look like a universal shift.
The actual story is quieter and more uneven. Most people using AI right now are using it in ways that can make them slower, not faster. They feel productive because they're busy. They're producing work that looks polished and lands flat. And they've been told this is what AI adoption looks like, so they don't know there's anywhere else to go.
Meanwhile, a small group is building toward something fundamentally different. Not using AI as a tool their team types prompts into, but productizing their expertise so the firm's engine is software, not bodies. Their methodology runs in the product. Their firms scale without diluting the standard. Most of them aren't done yet. The ones who get there first compound while everyone else keeps paying the rewrite tax.
The gap between these two groups is already wider than most people realize. And it's compounding every quarter.
Here's the part most people miss: it's not a gap. It's a ladder.
There are six distinct levels of how AI shows up in a business. Each one is a rung. Each rung has its own advantages, its own problems, and its own reason most people stop there. You can climb it deliberately, if you know what the climb looks like.
What follows is that map. From not using AI at all, to running your entire practice as a system. Most businesses are stuck on the lower rungs, not because they can't climb, but because nobody's shown them where the handholds are.
This is the real AI story in 2026. Not the one about chatbots replacing jobs. The one about which experts climb this ladder faster, and how they do it.
The research on this is unflattering. A 2025 study from Stanford's Social Media Lab and BetterUp Labs coined a term for what most AI output looks like in real workplaces: “workslop.” Content that looks polished but says nothing. Forty-one percent of workers reported receiving it in a single month. Each instance cost the receiver about two hours of rework. Scaled to a 10,000-person organization, that's roughly nine million dollars a year in lost productivity.
A UC Berkeley study from early 2026 found that AI tools are increasing task-switching and multitasking, behaviors that decades of prior research have shown decrease productivity, not increase it. Workers reported feeling busier.
Goldman Sachs economists noted the same thing from a different angle: no measurable relationship between AI adoption and economy-wide productivity gains, despite unprecedented investment. Workers themselves reported that time spent on certain tasks had increased by up to 346%. A BCG study called it “AI brain fry”: focused, uninterrupted work sessions have dropped 9% since the ChatGPT era began.
The pattern across the research is consistent: the default way most people use AI is actively making them worse at their jobs. Not marginally. Measurably.
The tell: most of them don't know it's happening. They feel busy. They're producing output. The work looks done. The cost is distributed across their day in small enough chunks that it never adds up on a balance sheet.
That's the floor of the ladder. The levels above it exist in reaction to this failure mode, not as hypotheticals, but as how a small group of operators is actively climbing out of it.
Six levels. Most people don't know the ladder exists. Here it is, at a glance:
The Ladder · Six Rungs
Each rung has a core benefit, and a cost you pay by staying on it.
YaaS: You as a Service
Your expertise productized. The engine is the system; the judgment is still yours.
Core benefit
Durable leverage. Precedent exists: Pipedrive encoded activity-based selling, Harvey encodes legal thinking, Intercom encoded a product thesis. What's new is doing it at your firm's granularity.
The payoff
Your firm's engine becomes technology, not headcount. This is the frontier; most firms aren't here yet, which is exactly why getting here first compounds.
Training & self-learning
The system learns your taste, one edit at a time.
Core benefit
Compounding returns. Brief 100 needs less editing than brief 50. Every engagement sharpens the next one.
What staying costs
A firm capped by your calendar. Level 4 is a tool that knows you; Level 5 is the engine running as the product. Staying here keeps you as the bottleneck on scale.
Operating
A custom tool built around your workflow. AI invisible underneath.
Core benefit
Fit. The software is the cockpit: click “new brief,” review, tweak two lines, send. Your methodology, encoded.
What staying costs
A static tool that gets no sharper. Competitors whose systems learn from every edit pull further ahead while yours stays where you built it.
Subscribing
Autocomplete with a better badge.
Core benefit
Zero lift. The vendor did the integration; your software feels modern without effort on your part.
What staying costs
A generic ceiling. The AI doesn't know your methodology; you're paying more for software that still doesn't do what you do.
Prompting
Typing with extra steps.
Core benefit
Accessibility. Zero setup; anyone can start today; it feels like progress the moment you open a tab.
What staying costs
The rewrite tax. Minutes per task reshaping generic output into how you actually think: invisible, distributed, never on the balance sheet. The research says people get measurably slower, not faster.
Not using AI
Every output drafted from scratch, by hand.
Core benefit
Purity. Every output is unmistakably yours, no workslop, no dilution of voice, no false sense of progress.
What staying costs
A widening gap. Competitors using AI well produce at comparable quality, faster, every quarter. The climb gets steeper the longer you wait.
Which rung are you on?
10 questions. Two minutes. Tells you what the next rung requires.
Now let's walk the ladder with a single consultant: the same person, in six different states, to make the rungs visible.
They write every client deliverable by hand. Every brief drafted from scratch. Every new engagement starts from a blank page or template. They tried ChatGPT once, got something generic, closed the tab. They're running the business the way they ran it in 2019.
The advantage is real: every output is fully theirs, with no workslop and no false sense of progress. The problem is also real: competitors using AI well are producing faster at comparable quality, and the gap widens every quarter.
Why most people stop here: some feel no pressure to change, the way plenty of businesses didn't bother with a website at the start of the internet. For others, it's skepticism, a bad first experience, or “I don't have time to figure this out.” The irony is that avoiding AI isn't safety; it's a slow-motion competitive loss. The real talents who are too busy to bother lose the most, because this is a force multiplier for exactly what they already do.
They open ChatGPT before a client kickoff. They type “write me a stakeholder analysis for a mid-size healthcare company going through a merger.” They get back generic MBA-speak that sounds like it was written by someone who's never met their client. They spend 40 minutes rewriting it to match how they actually think. They do this before every engagement.
They call it using AI. It's really just typing with extra steps.
The advantage is accessibility: zero setup, anyone can start today, it feels like progress. The problem is the rewrite tax: massive, invisible, distributed across the day so it never adds up on a balance sheet. This is where the research finds people getting slower, not faster.
Why most people stop here: the output looks sophisticated on first read, the time cost is hidden, and they don't know another version exists. Outside the center of tech it's genuinely hard to see how fast things are progressing. People aren't changing their approach once a year anymore; the pace is measured in weeks. Benefiting from what's happening requires accepting that as the new normal.
Their CRM rolls out an “AI-powered” update. An assistant that summarizes meeting notes and drafts follow-up emails. They try it for a week. The summaries miss the nuances that matter. The emails sound like a generic sales rep, not them. They stop using it and go back to their old process, not necessarily because they don't see value in AI, but because nobody wants to run AI scattered across five different systems.
The vendor sent three more emails about the AI features. They ignored them.
Autocomplete with a better badge.
The advantage is zero lift: no new tool to learn, the vendor did the work, their software feels modern. The problem is that the AI doesn't know their methodology. It's autocomplete with a better badge. They're now paying more for software that still doesn't do what they do.
Why most people stop here: they assume this is the ceiling. It's still more reactive than proactive. The SaaS vendor sold it as cutting-edge. They benefit a bit, but they don't know a different model exists.
They have a custom tool built around their actual workflow. They click “new engagement brief.” It pulls the client's intake, applies their framework, and generates a brief calibrated to engagement type and client context. They review, tweak two lines, and send it.
They never type a prompt in much of their day. The AI is underneath, invisible. The software is the cockpit.
The advantage is fit: built around how they actually work, not how a generic tool assumes they work. Review replaces produce. The problem is upfront investment and the discomfort of articulating a methodology clearly enough to codify it.
Why most people stop here: the codification work is real. Most give up before they finish describing their own thinking. The ones who push through start compounding.
They review the first 20 briefs the system generates. Some are great. Some need edits. When they edit, they flag what they didn't like: “too corporate for this client,” “wrong framing for a family-owned business,” “this phrasing isn't mine.” Those flags become guardrails. Rules the system follows going forward.
By brief 50, they're editing less. By 100, they're mostly approving. The system has been shaped by their taste, one rejection at a time.
The advantage is compounding: every engagement makes the next one better. The problem is discipline: feedback has to be fed in deliberately; it won't happen automatically. The system has to work with the person smoothly, not as awkward extra work. The quality of the tooling built to do this makes or breaks the learning loop.
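The loop described above, where each rejection becomes a standing rule applied to every future generation, can be sketched in a few lines. This is a minimal illustration, not the actual system: every name here (`flag`, `build_prompt`, the JSON file) is hypothetical, and a real implementation would sit in front of a model call and a proper rule store.

```python
# Hypothetical sketch of the Level 4 feedback loop: edits become flags,
# flags become guardrail rules, and every future brief is generated
# under the full accumulated rule set.
import json
from pathlib import Path

GUARDRAILS = Path("guardrails.json")

def load_guardrails() -> list[str]:
    """Return the accumulated style rules, oldest first."""
    if GUARDRAILS.exists():
        return json.loads(GUARDRAILS.read_text())
    return []

def flag(feedback: str) -> None:
    """Record one rejection (e.g. 'too corporate for this client') as a durable rule."""
    rules = load_guardrails()
    rules.append(feedback)
    GUARDRAILS.write_text(json.dumps(rules, indent=2))

def build_prompt(task: str) -> str:
    """Assemble a generation prompt that carries every rule learned so far."""
    rules = load_guardrails()
    rule_block = "\n".join(f"- {r}" for r in rules)
    return f"{task}\n\nFollow these standing style rules:\n{rule_block}"
```

The compounding is visible in `build_prompt`: by brief 100 the rule block encodes dozens of past corrections, which is why later briefs need fewer edits than earlier ones.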
The gap between having a tool and having an apprentice.
Why most people stop here: they think Level 3 is the finish line. They don't treat the system as something that needs training. The gap between Level 3 and Level 4 is the gap between having a tool and having an apprentice.
This is the frontier. Your methodology encoded into software. Your approach baked into the product. Not packaged as a course. Not productized in the 2015 fixed-fee-offering sense. Encoded: decision rules, voice, and framework, built into a system your team operates daily and your clients experience as your firm.
The pattern has precedent. Pipedrive baked activity-based selling into a CRM, one sales thinker's methodology, productized. Intercom encoded Des Traynor's thesis on conversational commerce into a category-defining product. Harvey is encoding partner-level legal thinking for entire firms. EvenUp is doing the same for personal-injury process work. What's new is doing this at the granularity of a single firm's specific expertise, at a cost that's finally viable below the Fortune 500.
People don't disappear. Leadership still reviews, refines, sets direction. The judgment calls, the conversations, the reason clients pay you: that's where senior time goes. But the engine is the system, not bodies. When someone joins, they onboard through the software. Work is generated in your voice by default.
The engine is the system. The judgment is still yours.
This is YaaS, You as a Service. Your methodology running as software, with you still operating and refining it. The firm's engine becomes technology, not headcount.
The advantage is durable: expertise that compounds, a firm that scales without diluting, judgment no longer trapped in a single skull. The problem is real codification work: not every expert can do it, and the ones who can usually need help getting it out of their head. This is also where the tacit-knowledge limit shows up: the last 20% of what makes a great expert great doesn't encode. That 20% doesn't disappear; it just becomes the scarce, expensive thing your firm sells.
This is what Connectt builds systems for.
I work with an executive coach, a former financial-industry executive who turned to coaching after decades as a leader. Their coaching methodology lives mostly in their head and in countless documents and frameworks, with style preferences down to the sentence level.
The problem is that there is only one of them. Until now, experts could provide consulting and services only through general frameworks that were, to a degree, one-size-fits-all. Applying AI, we can now synthesize their framework, nuances, and approach into a personalized experience that doesn't require them to do everything themselves.
We're encoding it. Not packaging it into a course. Not productizing it in the 2015 sense of turning services into fixed-fee offerings. Encoding it: their methodology, their voice, their decision rules, built into a system that generates work in their style and learns from their edits.
They click “generate assessment.” Fifteen tailored questions come back using their frameworks, calibrated to the client's seniority and context. They review, flag what's off, send it. The flags become rules. The rules compound. Brief 50 requires less editing than brief 20. Brief 100 requires less than 50.
That's the Level 3 to Level 4 climb, in real time.
Level 5 (YaaS) is what we're building toward. When new associate coaches onboard through the system and deliver at their standard. When the methodology becomes the engine, operated by the team, refined by their reviews, enforced by the software itself. They're still in the seat; the firm just isn't bottlenecked by their calendar anymore.
This isn't abstract; it's what the climb looks like when a firm actually commits to it. The direction is established: Pipedrive encoded a sales methodology, Harvey encodes legal thinking, Intercom encoded a conversational-commerce thesis. What's new is doing this at the granularity of a single firm's expertise, on AI that can finally carry the weight. Some of it is pioneering, and that's the point. The firms that do it first compound while the rest keep paying the rewrite tax.
Now you do.
The diagnostic is above. If you skipped past it, that's your nudge. It takes two minutes and tells you which rung your firm is on and what the next one requires.
The rest is up to you.
One conversation, no deck, no sequence. We'll look at your firm and tell you which rung you're on, what's costing you every quarter, and what the climb to Level 3 or 4 looks like for your specific workflow.