# The Vibe-Coding Layoff Trap: Why CEOs Are Trading Judgment for Vendor Spend

A research brief on the managerial AI rush: why vibe-coding demos prompt premature layoffs, quality regressions, and upstream spend transfers to labs and hyperscalers.

- Canonical URL: https://buildooor.com/research/vibe-coding-layoff-trap
- Author: Rob Baratta
- Published: 2026-03-22
- Version: Working Paper v1.0
- Keywords: vibe coding, AI layoffs, managerial automation, verification economy, AI-first memo, software quality, delivery stability, vendor concentration, frontier lab hiring, AI capex, judgment work, headcount arbitrage

---

<ResearchAbstract>
  A large share of executive AI discourse has converged on a convenient but sloppy conclusion: if a
  competent employee can use an LLM to draft faster, then the firm can cut headcount and let a smaller
  team "vibe code" or "AI-first" its way to the same output. This paper argues that the
  conclusion confuses local task compression with whole-organization substitution. Drawing on the World
  Economic Forum's 2025 employer survey, IBM's 2025 CEO study, DORA's 2024–2025 software-delivery
  research, METR's randomized trial on experienced open-source developers, public executive memos from
  Shopify and Duolingo, Klarna's partial reversal on AI-led customer-service staffing, and 2026 capital
  expenditure guidance from Amazon and Alphabet, we show that the current AI labor narrative has three
  recurring pathologies. First, managers are using AI as a headcount gate before they have measurement
  systems good enough to prove durable quality gains. Second, software and support work still require a
  verification layer whose value becomes more important, not less, as cheap output volume rises. Third,
  much of the apparent labor savings is not disappearing from the system at all -- it is being transferred
  upstream into model, cloud, and integration vendors that continue to hire aggressively. What gets
  casually called a "psyop" is better understood as an incentive cascade: frontier demos create
  fear of missing out, public AI posture signals modernity, labor cuts are easy to book, and the slower
  quality losses arrive later in churn, incident load, review debt, and vendor dependency. The operational
  conclusion is blunt: automate drafting, not judgment; compress rote work, not the people who keep bad
  output from reaching customers.
</ResearchAbstract>

<ResearchSection number={1} title="Introduction: The Local Demo Fallacy">

The most common category error in executive AI thinking is not technical. It is organizational. A
manager watches a strong employee use Claude, ChatGPT, or Copilot to compress one narrow workflow --
draft a spec, summarize a call, scaffold a component, answer a routine support question -- and then
silently upgrades that observation into a staffing thesis. If one person can now do the first-draft
work of three, the logic goes, perhaps three people can now do the work of ten. The problem is that
the first claim is often directionally true while the second is frequently false. Most organizations
are not bottlenecked by first-draft production alone. They are bottlenecked by review, testing,
exception handling, approval latency, integration, and the messy human work required to decide whether
an apparently plausible output is actually correct.

This is why the word "psyop" -- while imprecise -- keeps surfacing in operator circles. The
feeling is not that AI is fake. The feeling is that a visible layer of executive narrative is claiming
much more certainty than the evidence warrants. Publicly, CEOs talk as if the organization is on the
verge of straightforward substitution. Privately, quality teams, support escalations, and senior
engineers absorb the hidden cost of checking what the new output machines produce. The real shift is
subtler: AI compresses some labor categories dramatically, but it also increases the value of the
people who can verify, contextualize, and reject bad output before it leaks into production.

The data now supports treating this as a management problem, not just a tooling story. The World
Economic Forum's 2025 Future of Jobs survey found that 41% of employers expect workforce reductions
as AI expands its ability to replicate roles. Yet the same survey found 77% plan to reskill workers
for AI collaboration and 69% plan to recruit talent skilled in AI tool design and enhancement. In
other words, employers are simultaneously planning cuts, retraining, and new hiring. This is not what
labor substitution looks like when it is clean and complete. It is what reallocation under uncertainty
looks like.

IBM's 2025 survey of 2,000 CEOs reinforces the same diagnosis from the top of the org chart. CEOs
expect the growth rate of AI investments to more than double over the next two years, but only 25% of
AI initiatives have delivered their expected ROI so far, and 64% of respondents admit that the risk of
falling behind pushes them to invest before they clearly understand the value. That is the shape of an
executive arms race, not a mature operating model. The rational response to an arms race is often to
move fast and signal alignment. The operational consequence is that headcount, vendors, and process
design all get rearranged before the measurement system is good enough to say whether the rearrangement
actually worked.

<ResearchTable
  caption="Table 1. Executive AI Posture Is Spreading Faster Than Durable Proof"
  columns={[
    { label: 'Signal' },
    { label: 'Claim / Move', muted: true },
    { label: 'What It Shows' },
    { label: 'Operational Reading', muted: true },
  ]}
  rows={[
    [
      'World Economic Forum (2025)',
      '41% of employers expect workforce reductions from AI role replication; 77% plan upskilling; 69% plan to recruit AI-skilled talent',
      'Organizations are cutting, retraining, and hiring at the same time',
      'This is not labor disappearance -- it is a reallocation experiment under uncertainty',
    ],
    [
      'IBM CEO Study (2025)',
      'AI investment growth expected to more than double; only 25% of initiatives delivered expected ROI; 64% invest from fear of falling behind',
      'Executive urgency is outrunning proof',
      'Public AI posture is becoming a management signal even where value capture is still weak',
    ],
    [
      'Shopify memo (Apr 2025)',
      'Teams must demonstrate why AI cannot do the work before asking for more headcount or resources',
      'AI is being used as a hiring gate, not just a productivity tool',
      'Management is treating AI as the default labor benchmark for incremental staffing',
    ],
    [
      'Duolingo memo (Apr 2025)',
      'Company shifted to AI-first and said it would gradually stop using contractors for work AI can handle',
      'Content scale pressure moved faster than craft-labor protection',
      'Once AI enters hiring and performance policy, organizational design changes before QA systems catch up',
    ],
    [
      'Klarna (2024--2025)',
      'AI assistant was framed as 700--800 FTE equivalent; later the CEO said cost had been over-weighted and quality suffered, prompting renewed human hiring',
      'The labor-arbitrage win hit a service-quality ceiling',
      'Cheap support is not the same thing as trusted support',
    ],
  ]}
  footnote="Sources: World Economic Forum (2025), IBM CEO Study (2025), TechCrunch reporting on Shopify, public reporting on Duolingo memos, Klarna press releases, and Fortune reporting on Klarna's 2025 customer-service hiring reversal."
  compact
/>

</ResearchSection>

<ResearchSection number={2} title="The Headcount-Gating Turn: AI Is Becoming a Management Policy">

The meaningful change in 2025 was not merely that more employees started using AI. It was that AI
moved from an optional tool to a formal policy input in hiring, budgeting, and performance review.
Shopify's April 2025 memo is the clearest articulation of the new regime: before teams ask for more
headcount or resources, they must demonstrate why AI cannot do the work. That sentence matters because
it turns AI from an enabler into a baseline assumption. Hiring no longer begins from "what work
exists?" It begins from "why has automation not already eliminated this role?"

Duolingo's shift to an AI-first operating posture pushed the same logic into content operations,
explicitly stating that the company would gradually stop using contractors for work that AI can handle.
What matters here is less whether one approves of the tactic and more what it reveals about management
sequence. AI gets inserted into labor design before the organization has durable evidence that the
downstream quality-control apparatus is strong enough to absorb the output expansion. The same pattern
showed up in Klarna. In 2024, the company touted its AI assistant as doing the equivalent work of 700
full-time agents, a figure later updated to approximately 800. In 2025, CEO Sebastian Siemiatkowski
told Bloomberg, in comments reported by Fortune, that the company had over-indexed on cost and accepted
lower quality, prompting renewed human hiring so customers could still reach a person when needed.

These examples are often framed as hypocrisy or walk-back. A better reading is structural. Public
management narratives optimize for legibility: a headcount gate is easy to explain, easy to score,
easy to surface to investors, and easy to align with a zeitgeist that already expects every serious
company to be "AI-first." Quality regressions are slower and less legible. They arrive as
reopened tickets, brittle releases, escalations that no chatbot can de-escalate, and senior staff
quietly spending their time repairing the mistakes of systems that were supposed to remove work.

Put differently: management gets a fast accounting win because salaries are visible, but the cost of
verification debt arrives later and is distributed. This is why firms can feel more productive in the
quarter while becoming less reliable in the year. The board sees fewer heads. The operator sees more
exception handling. The customer sees the checkout bug, the canned response, the support loop, or the
product regression. The only people who never see that local pain directly are the upstream vendors
selling more model access, more cloud capacity, and more enterprise deployment support into the same
company.
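That timing asymmetry can be made concrete with a toy ledger. Every figure below is a hypothetical assumption invented for illustration, not a number drawn from any of the sources cited in this paper:

```python
# Toy ledger: salary savings are booked immediately, while verification debt
# (escaped defects, rework, churn) compounds later. All numbers are invented.

QUARTERS = 4
salary_saved_per_quarter = 500_000            # assumed savings from cut roles
extra_incidents_per_quarter = [2, 5, 9, 14]   # assumed compounding defect load
cost_per_incident = 60_000                    # assumed triage + rework + churn cost

cumulative_savings = 0.0
cumulative_debt = 0.0
for q in range(QUARTERS):
    cumulative_savings += salary_saved_per_quarter
    cumulative_debt += extra_incidents_per_quarter[q] * cost_per_incident
    net = cumulative_savings - cumulative_debt
    print(f"Q{q + 1}: savings={cumulative_savings:,.0f}  "
          f"debt={cumulative_debt:,.0f}  net={net:,.0f}")
```

Under these assumed inputs the net position improves for the first two quarters and then erodes as incident load compounds. The shape, not the magnitudes, is the argument: the win is front-loaded and visible, the cost is back-loaded and distributed.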

<ResearchCallout>
  AI compresses drafting labor faster than accountability labor. If management cuts the accountability
  layer first, the organization does not become more automated. It becomes more brittle.
</ResearchCallout>

</ResearchSection>

<ResearchSection number={3} title="Why Vibe Coding Misleads Executives: Output Volume Is Not System Throughput">

"Vibe coding" is a useful phrase precisely because it exposes the category error. In the hands
of a strong engineer, AI-assisted coding can feel magical because the engineer already knows how to
reject dead-end branches, pressure-test abstractions, and smell architectural nonsense quickly. The AI
compresses keystrokes, boilerplate, search overhead, and local iteration. But an organization is not a
single engineer. It is a queueing system with hidden dependencies: testing, code review, environment
drift, release management, observability, rollback readiness, and domain-specific acceptance criteria.
Making the drafting stage cheaper does not automatically make the full system faster. It can easily
make later stages more congested.
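The queueing claim can be sketched with a deterministic toy model. The capacities below are invented for illustration; what matters is the structural behavior, not the specific numbers:

```python
# Deterministic fluid sketch of a two-stage pipeline: drafts arrive at one
# rate and a fixed review capacity drains them. All rates are assumptions.

def simulate(draft_per_day: float, review_per_day: float, days: int = 60) -> float:
    """Return the review backlog left after the simulated period."""
    backlog = 0.0
    for _ in range(days):
        backlog += draft_per_day                 # new drafts enter review
        backlog -= min(backlog, review_per_day)  # reviewers drain what they can
    return backlog

before = simulate(draft_per_day=8, review_per_day=10)   # drafting was the constraint
after = simulate(draft_per_day=16, review_per_day=10)   # AI doubles draft volume
print(f"review backlog, pre-AI drafting rate:  {before:.0f}")
print(f"review backlog, post-AI drafting rate: {after:.0f}")
```

With these assumed capacities the pre-AI backlog stays at zero, while doubling draft volume against fixed review capacity grows the backlog to 360 drafts over the simulated 60 days. Cheaper drafting moved the bottleneck; it did not remove it.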

The empirical literature now makes this impossible to ignore. METR's randomized 2025 study on
experienced open-source developers working on their own repositories found that AI assistance made them
19% slower on average, even though the developers expected the tools to speed them up. That gap between
measured throughput and felt productivity is the managerial danger zone. Developers can experience less
friction while the full task still takes longer because reviewing, steering, untangling, and correcting
generated code is itself work. METR's February 2026 update suggests newer tools may now produce real
speedups for some cohorts, but the researchers also note that the signal is increasingly hard to measure
because strong AI users resist no-AI baselines. Even the pro-AI direction of travel therefore carries an
uncomfortable implication: belief, workflow dependence, and measured value are not the same variable.

DORA's 2024 and 2025 research sharpens the organizational version of the same point. The 2024 report
found AI adoption associated with better documentation quality, code quality, and review speed -- but
also with lower delivery stability. The 2025 research went even further in its framing, arguing that AI
operates as an amplifier of existing organizational strengths and weaknesses. More than 90% of technology
professionals are now using AI in day-to-day work, roughly 80% report productivity gains, yet about 30%
still report little or no trust in AI output. That combination is not paradoxical. It means people are
happily using a tool that speeds some work while still requiring substantial verification. Organizations
with poor release discipline and weak internal platforms do not escape those weaknesses through AI. They
simply get to hit those weaknesses harder and faster.

GitClear's 2025 analysis of 211 million changed lines adds the maintainability dimension. As AI-assisted
coding rose, code cloning and copy-paste patterns increased materially -- including a reported 4x growth
in code clones. This is precisely the failure mode one should expect when management optimizes for
output volume without corresponding investment in architecture, refactoring discipline, and shared
internal standards. More code is arriving. The question is whether the system can metabolize it.
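The clone problem is mechanically simple to surface. A minimal sketch of one common technique is to hash normalized sliding windows of lines and report any window that occurs more than once; this illustrates the general idea only and is not GitClear's actual methodology:

```python
import hashlib
from collections import defaultdict

def find_clones(source: str, window: int = 4) -> dict:
    """Map duplicated window hashes to the 1-indexed lines where they start."""
    lines = [ln.strip() for ln in source.splitlines()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i:i + window])
        if not chunk.strip():
            continue  # ignore windows that are entirely blank
        digest = hashlib.sha256(chunk.encode()).hexdigest()
        seen[digest].append(i + 1)
    return {h: starts for h, starts in seen.items() if len(starts) > 1}

# A toy snippet containing an exact four-line duplicate.
code = """\
total = 0
for item in items:
    total += item.price
print(total)
total = 0
for item in items:
    total += item.price
print(total)
"""

for starts in find_clones(code).values():
    print("clone windows begin at lines:", starts)
```

On this toy snippet the two identical four-line blocks are reported as a clone pair starting at lines 1 and 5. The managerial point is that detection is cheap; paying down the duplication is the part that requires staffed refactoring discipline.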

<ResearchTable
  caption="Table 2. The Software Evidence Does Not Support Blind Headcount Substitution"
  columns={[
    { label: 'Source' },
    { label: 'Setting', muted: true },
    { label: 'Best Case Signal', align: 'right', mono: true },
    { label: 'Managerial Risk', muted: true },
  ]}
  rows={[
    [
      'METR RCT (2025)',
      'Experienced open-source maintainers working on their own repos',
      '19% slower',
      'Developers expected speedup anyway; subjective uplift can diverge sharply from measured throughput',
    ],
    [
      'METR update (2026)',
      'Later cohorts using newer tools',
      'Possible speedup',
      'The new data is noisy because heavy AI users increasingly refuse no-AI baselines; dependence can rise faster than reliable measurement',
    ],
    [
      'DORA 2024',
      'Large cross-company software delivery survey',
      '+3.4% code quality',
      'The same report ties AI adoption to lower delivery stability; local gains can be absorbed by downstream release chaos',
    ],
    [
      'DORA 2025',
      'AI-assisted software development research',
      '80% report productivity gains',
      '30% still report little or no trust in AI output; AI amplifies existing platform and process strengths or weaknesses',
    ],
    [
      'GitClear 2025',
      '211M changed lines of code across 2020--2024',
      'More output',
      '4x growth in code cloning suggests code volume can rise while maintainability falls',
    ],
  ]}
  footnote="Sources: METR (2025; 2026 update), DORA 2024 Accelerate report, DORA 2025 State of AI-assisted Software Development, and GitClear's 2025 code-quality analysis."
  compact
/>

The practical implication is straightforward. If a company believes AI lets one engineer produce 2x the
raw draft volume, it should not ask "how many engineers can we cut?" first. It should ask
"which downstream functions are about to become more valuable because they now have more potentially
wrong output to inspect?" In many teams the answer is: senior review, quality engineering, release
management, observability, and domain expert approval. This is why so many AI-first organizations can
feel faster while simultaneously becoming more annoying to use. Local speed rose. System quality did not.

</ResearchSection>

<ResearchSection number={4} title="The Spend Transfer: Labor Leaves the P&L, but the Money Still Goes Somewhere">

The most under-discussed part of the current wave is that even when companies do remove labor cost,
much of the spend does not disappear from the system. It moves upstream. IBM's 2025 CEO study says the
growth rate of AI investment is expected to more than double. Alphabet finished 2025 having invested
$91 billion in CapEx, then guided to an astonishing $175 billion to $185 billion in 2026 while stating
that it would continue hiring in key AI and Cloud areas. Amazon told investors it expected roughly
$200 billion in capital expenditures in 2026, explicitly tying that spend to AI, chips, robotics, and
adjacent infrastructure. Microsoft said in Q2 FY2026 that its AI business was already larger than some
of its biggest franchises, while Azure and other cloud services grew 39%. These are not the numbers of
an economy in which human effort is simply evaporating. They are the numbers of an economy rerouting
budget toward compute, models, deployment, and integration.

The labor picture looks similar. OpenAI's careers search was listing 620 open jobs at the time of
writing, spanning research, security, sales, government, deployment, and product. Anthropic's public
jobs board shows a comparably broad appetite: research engineering, pretraining, safeguards, policy,
ML systems, data infrastructure, education, and enterprise deployment. Frontier suppliers are still
aggressively buying scarce labor, especially labor close to model development, model deployment, safety,
enterprise transformation, and infrastructure. The market is not saying "people no longer matter."
It is saying that certain kinds of people matter more upstream than downstream.

That asymmetry is the real strategic problem for downstream firms trying to run the "smaller team,
more AI" playbook. Every headcount cut justified by AI can simultaneously increase dependence on
external vendors whose own labor bills, hiring plans, and capex budgets are still exploding. The firm
gets a temporary margin story. The vendor gets a durable revenue stream and a stronger position in the
stack. If the internal team has also been thinned in the name of efficiency, the buyer becomes even less
capable of replacing the vendor later. What looks like automation can therefore function as a form of
vertical dependency creation.

<ResearchTable
  caption="Table 3. The AI Efficiency Story Often Transfers Spend Upstream"
  columns={[
    { label: 'Upstream Layer' },
    { label: 'Current Signal', mono: true },
    { label: 'Why It Matters' },
  ]}
  rows={[
    [
      'IBM CEO budgets',
      'AI investment growth expected to more than double',
      'Payroll pressure does not mean total AI cost is falling; budget is being re-routed into models, integration, and infra',
    ],
    [
      'Microsoft',
      'AI business larger than some major franchises; Azure +39% in Q2 FY2026',
      'The vendors selling the picks and shovels are still absorbing enterprise budget at high velocity',
    ],
    [
      'Alphabet',
      '$91B CapEx in 2025; $175B--$185B planned for 2026; hiring continues in AI and Cloud',
      'Compute, infrastructure, and AI-specialist labor remain expansion businesses',
    ],
    [
      'Amazon',
      '~$200B capital expenditures planned for 2026',
      'The efficiency narrative keeps feeding a larger infrastructure bill -- not a smaller one',
    ],
    [
      'OpenAI + Anthropic',
      'OpenAI careers showed 620 open jobs; Anthropic jobs span research, safeguards, infra, policy, and enterprise deployment',
      'The downstream cuts are real, but the frontier suppliers are still net buyers of scarce human expertise',
    ],
  ]}
  footnote="Sources: IBM CEO Study (2025), Microsoft FY2026 Q2 earnings press release, Alphabet FY2025/Q4 earnings call and 2026 outlook, Amazon Q4 2025 results, OpenAI careers, and Anthropic jobs."
  compact
/>

<ResearchCallout>
  Many firms are not automating the organization so much as outsourcing more of it -- first to models,
  then to cloud bills, then to deployment vendors, and finally to the frontier labs still hiring the
  people they just cut downstream.
</ResearchCallout>

</ResearchSection>

<ResearchSection number={5} title="The Verification Economy: A Better Staffing Model for the AI Era">

The right organizational model is neither "protect every job from AI" nor "replace people wherever
AI can draft something plausible." The correct move is to distinguish
between production work and verification work, then redesign staffing around where errors are cheapest
to generate versus where they are most expensive to catch. AI is strongest when it handles abundant,
repetitive, low-liability first drafts under tight human review. It is weakest when management assumes
that once a draft exists, the rest of the organization can be safely downsized around it.

This implies a four-part operating discipline. First, automate the production layer aggressively:
note-taking, rough drafts, repetitive coding tasks, routine support triage, document synthesis, and
structured analysis. Second, thicken the verification layer rather than stripping it bare: senior code
review, test infrastructure, release engineering, domain approval, support escalations, compliance
sign-off, and measurement. Third, invest in internal context plumbing so AI can access clean data,
standards, runbooks, and architecture knowledge instead of hallucinating around weak documentation.
Fourth, measure the system at the level where management actually feels pain: escaped defects, rollbacks,
churn after support contact, audit findings, and rework load.
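The fourth discipline, measuring where management actually feels pain, can be sketched directly. The records, field names, and figures below are hypothetical assumptions used only to show the shape of the computation:

```python
from datetime import datetime, timedelta

# Sketch of the measurement layer: defect escape rate, rollback rate, and MTTR
# computed from hypothetical release and incident records. All data is invented.

releases = [
    {"id": "r1", "rolled_back": False, "defects_found_in_qa": 3, "defects_escaped": 1},
    {"id": "r2", "rolled_back": True,  "defects_found_in_qa": 2, "defects_escaped": 4},
    {"id": "r3", "rolled_back": False, "defects_found_in_qa": 5, "defects_escaped": 0},
]

incidents = [  # (detected, resolved) timestamps
    (datetime(2026, 3, 1, 9, 0), datetime(2026, 3, 1, 11, 30)),
    (datetime(2026, 3, 4, 14, 0), datetime(2026, 3, 4, 14, 45)),
]

total_defects = sum(r["defects_found_in_qa"] + r["defects_escaped"] for r in releases)
escaped = sum(r["defects_escaped"] for r in releases)
escape_rate = escaped / total_defects
rollback_rate = sum(r["rolled_back"] for r in releases) / len(releases)
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(f"defect escape rate: {escape_rate:.0%}")
print(f"rollback rate:      {rollback_rate:.0%}")
print(f"MTTR:               {mttr}")
```

None of this is exotic instrumentation. The point of the sketch is that these numbers are computable from records most organizations already have, which makes the common failure to track them before changing the staffing model a choice rather than a constraint.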

The managerial temptation is always to cut where work becomes easiest to see. Rote drafting is visible.
Verification is not. But the economics of cheap generation invert the logic. When output is abundant,
the scarce resource is no longer text production or code emission. The scarce resource is trustworthy
acceptance. That means the people with taste, judgment, domain liability, and debugging discipline are
not the expensive leftovers of a pre-AI org chart. They are the revenue-protection layer of the new one.

<ResearchTable
  caption="Table 4. Where To Automate Hard and Where Not To Cut First"
  columns={[
    { label: 'Workflow Layer' },
    { label: 'AI Leverage' },
    { label: 'Do Not Cut First', muted: true },
    { label: 'What To Track', muted: true },
  ]}
  rows={[
    [
      'First-draft production',
      'High',
      'Do not defend rote drafting headcount by default',
      'Cycle time, acceptance rate, edit distance, revision depth',
    ],
    [
      'Verification / QA / release engineering',
      'Medium',
      'Do not cut this layer first',
      'Defect escape rate, rollback rate, MTTR, post-release hotfix volume',
    ],
    [
      'Domain judgment / approval',
      'Assistive',
      'Protect or deepen expertise',
      'Exception accuracy, liability events, audit findings, pricing error rate',
    ],
    [
      'Customer escalation / trust repair',
      'Assistive',
      'Keep human coverage',
      'CSAT on escalations, reopen rate, churn after support contact',
    ],
    [
      'Internal platform / context plumbing',
      'High upside',
      'Invest before mandating blanket AI adoption',
      'Retrieval quality, environment reproducibility, deployment stability',
    ],
  ]}
  footnote="Author synthesis based on the evidence reviewed in this paper."
  compact
/>

This framing also clarifies why some of the strongest AI operators feel more capable without looking
obviously smaller. They have not confused output abundance with permission to hollow out the org. They
have built better internal platforms, clearer approval paths, denser monitoring, and stronger judgment
concentration at key points in the workflow. In those environments AI truly behaves like leverage. In
weaker environments it behaves like accelerant -- it makes whatever was already broken arrive sooner.

</ResearchSection>

<ResearchSection number={6} title="Conclusion: Stop Firing the People Who Keep the Slop Off the Customer">

The current wave of executive AI behavior is intelligible even if it is often wrong. Frontier model
demos are genuinely impressive. Boards want a story. Markets reward visible modernization. Salaries are
easier to cut than cloud spend is to explain. And individual employees can indeed produce more raw
output than before. But the leap from those premises to "fire broadly, shrink the team, and let AI
handle it" is not an empirical conclusion. It is an organizational gamble -- one that often shifts
cost out of wages and into defects, churn, incident response, review debt, and vendor concentration.

So did everyone get psyoped? Not exactly. A literal psyop implies a coordinated deception. The better
diagnosis is a coordinated incentive failure. Executives are rewarded for looking ahead of the curve;
vendors are rewarded for selling more capability; labs are rewarded for absorbing more enterprise spend;
and operators are left cleaning up the gap between what a demo suggests and what a production system can
safely absorb. The result feels like a psyop because the public certainty is much louder than the private
measurement.

The most useful corrective is brutally simple. Do not ask whether AI can draft the work. Ask whether the
organization can still verify the work after you change the staffing model around it. If the answer is
weak, then the cost savings are fake. They have merely been delayed. The firms that win this cycle will
not be the ones that fire first and brag loudest. They will be the ones that understand the new scarcity:
not output, but judgment.

</ResearchSection>

<ResearchReferences>

World Economic Forum. (2025). *Future of Jobs Report 2025: Workforce Strategies*. https://www.weforum.org/publications/the-future-of-jobs-report-2025/in-full/4-workforce-strategies/

IBM. (2025, May 6). *IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles*. https://newsroom.ibm.com/2025-05-06-ibm-study-ceos-double-down-on-ai-while-navigating-enterprise-hurdles

TechCrunch. (2025, April 7). *Shopify CEO says employees must show AI can't do jobs before asking for more headcount*. https://techcrunch.com/2025/04/07/shopify-ceo-says-employees-must-show-ai-cant-do-jobs-before-asking-for-more-headcount/

The Register. (2025, April 29). *Duolingo ditches more contractors in 'AI-first' refocus*. https://www.theregister.com/2025/04/29/duolingo_ceo_ai_first_shift/

Klarna. (2024, February 27). *Klarna AI assistant handles two-thirds of customer service chats in its first month*. https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/

Fortune. (2025, May 9). *Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver*. https://fortune.com/2025/05/09/klarna-ai-humans-return-on-investment/

METR. (2025, July 10). *Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity*. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

METR. (2026, February 24). *We are Changing our Developer Productivity Experiment Design*. https://metr.org/blog/2026-02-24-uplift-update/

DORA. (2024). *Accelerate State of DevOps Report*. https://dora.dev/research/2024/dora-report/2024-dora-accelerate-state-of-devops-report.pdf

DORA. (2025). *State of AI-assisted Software Development*. https://dora.dev/report/2025

GitClear. (2025). *AI Copilot Code Quality: 2025 Look Back at 12 Months of Data*. https://www.gitclear.com/ai_assistant_code_quality_2025_research

Microsoft. (2026). *FY26 Q2 Press Release & Webcast*. https://www.microsoft.com/en-us/Investor/earnings/FY-2026-Q2/press-release-webcast

Alphabet. (2026, February 4). *2025 Q4 Earnings Call*. https://abc.xyz/investor/events/event-details/2026/2025-Q4-Earnings-Call-2026-Dr_C033hS6/default.aspx

Amazon. (2026, February 5). *Amazon.com Announces Fourth Quarter Results*. https://ir.aboutamazon.com/news-release/news-release-details/2026/Amazon-com-Announces-Fourth-Quarter-Results/default.aspx

OpenAI. (2026). *Careers*. https://openai.com/careers/

Anthropic. (2026). *Jobs*. https://www.anthropic.com/careers/jobs

</ResearchReferences>

<ResearchColophon
  citation={'Baratta, R. (2026). "The Vibe-Coding Layoff Trap: Why CEOs Are Trading Judgment for Vendor Spend." Buildooor Research Brief, March 2026.'}
  email="buildooor@gmail.com"
/>
