Buildooor Research Brief -- March 2026

Short-Half-Life AI Tools: Edge Harvesting as the New Software Business Model

Published March 5, 2026 -- Working Paper v1.0
Keywords: short-half-life software, AI cannibalization, edge harvesting, micro SaaS monetization, agent harness architecture, platform risk, model deprecations, AI pricing compression, distribution velocity, software portfolio strategy

Abstract

Software defensibility has shifted from a multi-year moat game to a repeated edge extraction game. In 2024 alone, U.S. private AI investment reached $109.1B, organizational AI adoption hit 78%, and inference costs for GPT-3.5-class workloads fell by 280x from late 2022 to late 2024. Those numbers imply two simultaneous truths: building is cheaper than ever, and feature-level advantage decays faster than ever. This paper argues that short-half-life AI tools are not a pathological business category; they are a rational category with a different operating model. The correct frame is not traditional SaaS compounding but volatility trading: find temporary inefficiencies, monetize quickly, and recycle gains into an agent harness layer that compounds as models improve. We define practical monetization archetypes, pricing envelopes, deprecation-aware product operations, and a barbell portfolio strategy that combines 30-180 day utility waves with durable context/memory/policy infrastructure.

1. From SaaS Moats to Edge Half-Lives

The dominant software question of the 2010s was: can this product defend an ARR stream for a decade? The dominant software question of the late 2020s is different: can this product recover build cost before platform integration catches up? This is not rhetorical flourish. It is a structural consequence of cheaper intelligence, higher baseline capability in consumer products, and faster release cycles by model providers who increasingly ship direct-to-user surfaces.

In practical terms, many categories that previously supported venture-scale SaaS now behave like short-dated instruments. A utility can produce immediate cashflow for 3-12 months, then flatten when an upstream model, operating system, or distribution platform absorbs the core feature. The classic flashlight-app dynamic has moved from hardware toggles to cognition-layer workflows: summarize, route, prioritize, nudge, draft, reconcile, and plan.

The critical strategic error is treating this as a temporary anomaly. The more accurate reading is that software production and software consumption have both moved into a higher-frequency regime. Builders can launch in days. Users can switch in minutes. Platforms can clone in weeks. Under those conditions, persistence is no longer the default objective for every product line. Rapid payback and portfolio recycling become first-class objectives.

This does not eliminate durable businesses. It changes where durability sits. Durable value migrates from isolated feature execution to integrated memory, policy, trust, and distribution systems. If the user can reproduce your value with one new model release and a weekend project, you are in the feature layer. If better models improve your throughput while your proprietary context graph, decision policy, and trust loop stay unique, you are in the harness layer.

2. Edge Compression Is Measurable, Not Anecdotal

The edge compression thesis can be grounded in hard macro data rather than founder sentiment. Stanford's 2025 AI Index documents both rapid adoption and rapid cost compression: U.S. private AI investment reached $109.1B in 2024, global private investment rose to $252.3B (+26% YoY), organizational AI usage climbed to 78%, and inference costs for GPT-3.5-equivalent performance fell by roughly 280x between November 2022 and October 2024.

Table 1. Edge Compression Indicators (2024-2025)
| Signal | Latest Reading | Implication |
| --- | --- | --- |
| U.S. private AI investment (2024) | $109.1B | Capital concentrates where platform power already lives |
| Global private AI investment growth (2024) | +26% YoY to $252.3B | More capital chases faster cycles, not longer durability |
| Organizations using AI (2024) | 78% (up from 55%) | Adoption is now mainstream, so novelty windows close faster |
| Industry share of notable models (2024) | 90.16% | Roadmaps are controlled by a few labs and hyperscalers |
| Top-vs-10th model performance gap | 5.4% (down from 11.9%) | Feature advantages are shorter-lived and easier to copy |
| Inference cost for GPT-3.5-class tasks | 280x decline (Nov 2022 to Oct 2024) | MVP cost collapses, but pricing power collapses too |
Sources: Stanford HAI AI Index 2025 summary and chapter PDFs; figures rounded where needed for readability.

Each metric pushes in the same direction. More capital and broader adoption attract more builders, which increases competition intensity in the exact layers where build cost is falling fastest. Simultaneously, model quality convergence reduces the duration of quality-based differentiation: when the performance spread between the best and the tenth-best model narrows to single digits, feature-level advantages are increasingly packaging and timing effects, not enduring technical moats.

A second-order effect matters even more. Industry produced 90.16% of notable models in 2024. That concentration means a small number of platform actors can reset feature markets on short notice. You are no longer competing in a market with many independent innovation trajectories; you are operating downstream of a few release calendars.

The new default is not "build once, rent forever." It is "ship fast, collect value quickly, and assume the baseline will move under you." Builders who price and operate for that reality win even when individual products have short shelf lives.

3. Monetization Still Works Because Demand Is Explosive During Window Periods

Fast decay does not imply no revenue. It implies a different revenue curve. Consumer demand around AI utilities has been strong enough to support meaningful short-cycle monetization. Reporting based on Sensor Tower data shows AI app spending exceeded $1.0B in 2024 with >200% YoY growth, and the State of Mobile 2025 report placed GenAI app spending at $1.49B (+169% YoY). Additional reporting on the first half of 2025 cited $1.87B in GenAI app revenue and 1.7B downloads.

Table 2. Demand Momentum for AI Utility Surfaces
| Demand Metric | Observed Value | Commercial Meaning |
| --- | --- | --- |
| AI app consumer spending in 2024 | >$1.0B | Users pay for immediate utility despite free alternatives |
| YoY growth in AI app spending (2024) | >200% | Willingness-to-pay can spike before platform absorption |
| GenAI app spending in 2024 (State of Mobile 2025) | $1.49B | Category has moved from experiment to recurring spend |
| GenAI app spending growth in 2024 (State of Mobile 2025) | +169% YoY | Revenue windows are short but large when timing is right |
| GenAI app revenue in H1 2025 | $1.87B | Short cycles can still produce meaningful cashflow |
| GenAI app downloads in H1 2025 | 1.7B | Distribution velocity remains available to fast movers |
Sources: TechCrunch reporting on Sensor Tower / State of Mobile data (2025), January-August 2025 coverage.

Two observations matter for builders. First, willingness-to-pay exists even when free model chat surfaces are available. Users pay for speed, fit, and convenience in context, not raw model access. Second, category growth can be nonlinear for brief windows when model capability crosses a threshold and UX has not yet normalized across major platforms. Those windows are monetizable if onboarding, distribution, and pricing are designed for immediate conversion.

The market therefore rewards a barbell stance: treat feature products as intentionally time-bounded cashflow vehicles, while channeling earnings and telemetry into a longer-lived harness substrate. The failure mode is trying to force every short-wave product into a perpetual SaaS story with long sales cycles, heavy roadmap promises, and cost structures that assume multi-year retention.

4. Price for Payback, Not for Theoretical Lifetime Value

If half-life is short, monetization design must prioritize fast payback over elegant annual plans. In this regime, a 30-day cash recovery target often dominates a 24-month LTV narrative. The objective is to convert novelty and immediate task-value before integration pressure erodes differentiation.
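
The 30-day payback target can be made operational with simple arithmetic. A minimal sketch follows; the `payback_days` helper, the CAC figures, and the 90% margin default are illustrative assumptions, not figures from the data above:

```python
def payback_days(cac: float, monthly_price: float, gross_margin: float = 0.9) -> float:
    """Days of subscription revenue needed to recover customer acquisition cost.

    Illustrative helper: assumes revenue accrues evenly over a 30-day month.
    """
    daily_gross_profit = monthly_price * gross_margin / 30
    return cac / daily_gross_profit

# A $24/month tool at ~90% margin recovers a $15 CAC in about 21 days,
# inside a 30-day target; at $40 CAC the same offer misses the window.
assert payback_days(15, 24) < 30
assert payback_days(40, 24) > 30
```

The point of the exercise is that CAC ceilings, not LTV projections, become the binding pricing constraint when the half-life is short.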

Table 3. Monetization Archetypes for Short-Half-Life Tools
| Archetype | Price Envelope | Half-Life | Best Use Case |
| --- | --- | --- | --- |
| Single-job utility (lifetime) | $19-$99 one-time | 1-6 months | Fast novelty capture, low support load |
| Workflow accelerator subscription | $9-$29 / month | 3-12 months | Daily operators who value speed over perfection |
| Team micro-agent seat | $49-$199 / seat / month | 6-18 months | Domain teams with repeated high-value tasks |
| Template + automation bundle | $79-$299 bundle | 2-9 months | Buyers who need immediate implementation |
| Outcome-priced execution service | $200-$2,000 / outcome | 6-24 months | When confidence in value capture is high |
| Tool-led advisory retainer | $1k-$10k / month | 12-36 months | Convert tool demand into longer-cycle services |
Price ranges are operator heuristics based on current AI utility market behavior and observed conversion norms in low-friction digital tools.

This table highlights a non-obvious point: "short-lived" and "low quality" are not equivalent. A tool can be excellent, save users hours weekly, and still be structurally transient because a platform eventually internalizes the workflow. Monetization strategy should therefore encode temporal realism directly in pricing and packaging decisions.

Outcome framing usually outperforms feature framing under cannibalization risk. Features are copyable and often become default controls in upstream interfaces. Outcome claims tied to specific user jobs retain persuasive force longer, especially when backed by transparent before/after data. When possible, package around a completed job unit (e.g., reconciled lead list, routed task queue, finalized memo packet) rather than around the prompt or model selection UI.

Table 4. Offer Design Defaults Under Fast Edge Decay
| Design Variable | Fast-Cycle Default | Reason |
| --- | --- | --- |
| Time-to-first-value | <10 minutes | Novelty monetization fails if activation requires onboarding |
| Trial design | No trial or 3-day max | Short windows punish long conversion funnels |
| Payment trigger | Front-load at first successful output | Capture value before churn spike |
| Packaging | Outcome unit, not feature list | Features are easiest for platforms to clone |
| Refund policy | Tight but clear guarantee | Preserves trust while reducing abuse in low-ticket offers |
| Upsell path | From tool to workflow to advisory | Extends LTV beyond feature half-life |
Heuristics for maximizing conversion speed and reducing monetization lag in high-change model markets.

The tactical implication is straightforward: long free trials, delayed value realization, and complex tiering are often anti-patterns in this category. You are not optimizing a mature procurement process. You are optimizing rapid, trust-preserving value capture in a moving baseline.

5. Cost Curves Now Favor High Gross Margin Even at Low Ticket Prices

Falling inference costs radically improve short-cycle economics. OpenAI and Anthropic pricing surfaces in 2026 imply that substantial end-user utility can often be delivered for cents per active session at low-to-mid model tiers. This means a $19-$49 product can maintain strong gross margin if orchestration is disciplined and context handling is efficient.

Table 5. Current API Pricing Surface (Selected Models)
| Provider / Model | Input ($ / 1M tokens) | Output ($ / 1M tokens) | Strategic Read |
| --- | --- | --- | --- |
| Anthropic Claude Opus 4.6 | $5.00 | $25.00 | Frontier-tier reasoning at materially lower cost than prior Opus pricing bands |
| OpenAI GPT-5.3-Codex | $1.75 | $14.00 | High-capability coding agent economics that still support sub-$100 offers |
| Anthropic Claude Sonnet 4.6 | $3.00 | $15.00 | Production baseline for agent workflows where speed-cost-quality balance matters |
Sources: OpenAI model page for GPT-5.3-Codex; Anthropic Claude Opus 4.6 and Sonnet 4.6 model pages, accessed March 5, 2026.
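
Table 5 prices translate directly into per-session cost. A minimal sketch, using the Sonnet 4.6 figures above and an assumed session of 3,000 input and 800 output tokens (the token counts are illustrative, not a measured workload):

```python
def session_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    # Prices are USD per 1M tokens, as quoted on provider pricing pages.
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Sonnet 4.6 at $3 input / $15 output per 1M tokens (Table 5):
cost = session_cost(3_000, 800, 3.00, 15.00)
assert round(cost, 3) == 0.021  # roughly two cents per active session
```

At two cents per session, even a $19 one-time offer can absorb hundreds of sessions per buyer before inference cost becomes material.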

Margin discipline still matters. The fastest way to destroy a viable short-cycle product is uncontrolled context bloat, redundant model calls, and no routing policy. The right architecture routes most calls to the cheapest adequate model, escalates only when confidence thresholds fail, and aggressively caches reusable context. Providers themselves now expose cost controls (e.g., Anthropic Batch API discounts), reinforcing the feasibility of low-ticket monetization with healthy gross margin.
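
The routing policy described above can be sketched in a few lines. This covers the route-cheap, escalate-on-low-confidence step only (caching is omitted for brevity); `call_model` and `score_confidence` are placeholder interfaces rather than real client code, and the tier list and 0.8 confidence floor are illustrative:

```python
# Try the cheapest adequate tier first; escalate only when confidence fails.
# Model names and input prices follow Table 5.
TIERS = [("claude-sonnet-4.6", 3.00), ("claude-opus-4.6", 5.00)]  # (model, $/1M input)
CONFIDENCE_FLOOR = 0.8

def route(task, call_model, score_confidence):
    for model, _input_price in TIERS:
        answer = call_model(model, task)
        if score_confidence(task, answer) >= CONFIDENCE_FLOOR:
            return model, answer
    # Every tier failed the floor: surface the frontier-tier answer anyway,
    # flagged for the human override loop in a fuller implementation.
    return model, answer
```

The design choice to encode: escalation is an exception path with an explicit threshold, not a default, so most traffic lands on the cheapest adequate tier.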

Table 6. Illustrative Unit Economics for Fast-Cycle AI Products
| Scenario | Users / Month | ARPU | Gross Revenue | Illustrative Inference Cost | Gross Margin |
| --- | --- | --- | --- | --- | --- |
| Niche solo utility | 400 | $19 | $7,600 | $300 | ~96% |
| Power-user workflow app | 1,200 | $24 | $28,800 | $1,450 | ~95% |
| Team micro-agent | 250 seats | $149 | $37,250 | $3,800 | ~90% |
| Outcome-priced agent service | 140 outcomes | $450 | $63,000 | $6,900 | ~89% |
Illustrative scenarios, not audited financials. Inference assumptions reflect blended low/mid-tier routing and moderate context sizes.
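
The Table 6 margins follow directly from the table's own columns, as a quick check shows:

```python
def gross_margin(units: int, arpu: float, inference_cost: float) -> float:
    """Gross margin = (revenue - inference cost) / revenue."""
    revenue = units * arpu
    return (revenue - inference_cost) / revenue

# Recomputing the four Table 6 rows:
assert round(gross_margin(400, 19, 300), 2) == 0.96
assert round(gross_margin(1_200, 24, 1_450), 2) == 0.95
assert round(gross_margin(250, 149, 3_800), 2) == 0.90
assert round(gross_margin(140, 450, 6_900), 2) == 0.89
```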

These economics reframe the problem. You do not need a monopoly category winner to produce meaningful cashflow. You need disciplined scope, rapid iteration, and predictable conversion behavior in a well-defined niche. In other words, monetization viability no longer requires defending the entire category over many years; it requires operational precision during the useful window.

6. Deprecation and Integration Are Product Variables, Not Surprises

Builders who treat platform changes as random shocks are repeatedly punished. Deprecations and migrations are now routine. OpenAI, Anthropic, and Google all publish model lifecycle changes with explicit dates, and consumer-facing products can shift defaults quickly (as seen when GPT-4 was retired from ChatGPT). A robust short-cycle business therefore includes an explicit deprecation clock.

Table 7. Model Lifecycle Events That Define the Product Clock
| Platform Event | Date | What It Signals |
| --- | --- | --- |
| OpenAI retired GPT-4 from ChatGPT interface | 2025-04-30 | Consumer-facing model shelf life is now short |
| OpenAI scheduled chatgpt-4o-latest shutdown | 2026-02-17 | Alias stability cannot be assumed in product architecture |
| OpenAI scheduled gpt-4-32k deprecation | 2025-06-06 | Legacy premium tiers can disappear rapidly |
| Anthropic scheduled Claude 3.5 Sonnet (20240620) retirement | 2025-10-22 | Provider-led migrations are recurring, not exceptional |
| Google sunset gemini-2.5-flash-image-preview | 2026-01-15 | Preview capabilities are explicitly temporary monetization windows |
Sources: OpenAI deprecations docs and ChatGPT release notes; Anthropic model deprecations page; Google Gemini API changelog.

The operational pattern is to convert lifecycle announcements into backlog events. Every deprecation notice should trigger three parallel tracks: migration implementation, pricing review, and customer communication. If your margin model depends on a soon-to-retire endpoint, that is not a technical issue alone; it is a pricing and positioning issue. If your core UX mirrors a new platform release, that is not a roadmap coincidence; it is a revenue risk signal.
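
The three parallel tracks can be encoded directly, so a lifecycle notice mechanically becomes backlog items instead of a surprise. The dataclass shape, the 60-day urgency cutoff, and the task strings are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeprecationNotice:
    provider: str
    model: str
    retirement: date

def backlog_events(notice: DeprecationNotice, today: date) -> list[str]:
    """Expand one lifecycle notice into the three parallel tracks:
    migration, pricing review, and customer communication."""
    days_left = (notice.retirement - today).days
    urgency = "URGENT" if days_left < 60 else "scheduled"
    return [
        f"[{urgency}] migrate off {notice.model} ({days_left}d left)",
        f"[{urgency}] re-run pricing model without {notice.model} cost basis",
        f"[{urgency}] draft customer comms for {notice.provider} migration",
    ]
```

Run against the Table 7 rows, this turns a provider changelog into a dated work queue rather than a reactive scramble.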

This is where many operators misread the market. They spend months refining features while ignoring upstream lifecycle telemetry, then experience sudden churn as users can get 80-90% of the value natively. The correct stance is proactive: design migration-ready abstractions, measure feature substitutability continuously, and pre-announce upgrades that reposition the offer around outcomes rather than around specific model hooks.

7. The Agent Harness Layer Is Where Durability Reappears

The feature layer is where edge decays; the harness layer is where edge can compound. A harness is not simply an LLM wrapper. It is an integrated system that owns context ingestion, memory, policy, routing, tool execution, and human correction loops. Better base models increase the harness's effectiveness, because the harness owns the problem framing and decision boundary, not just one model call.

Table 8. Harness Layer Components and Compounding Mechanics
| Layer | What You Own | Why It Improves with Better Models |
| --- | --- | --- |
| Context ingestion | User events, docs, calendar, location, messages metadata | Richer models classify and prioritize context better |
| Memory model | Entity graph, preference graph, historical outcomes | Reasoning upgrades increase memory retrieval quality |
| Policy/routing engine | Task policy, risk thresholds, escalation logic | Better models reduce false positives and improve routing |
| Tool orchestration | API actions, retries, fallback providers | Model quality boosts tool-call success and recovery handling |
| Human override loop | Approval states, correction logs, confidence gates | Correction data compounds into higher precision over time |
| Feedback telemetry | Error classes, save-time metrics, intervention rates | Continuous tuning benefits from stronger base cognition |
| Distribution identity | Audience trust, niche positioning, voice | Brand and trust remain outside raw model commoditization |
Conceptual architecture for agent-layer products that benefit from model improvement instead of being replaced by it.
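
The layers in Table 8 can be collapsed into a minimal orchestration skeleton. Everything here is a placeholder interface rather than a real library; the structural point is that the model call is a swappable parameter, while memory and correction logs persist and compound across model generations:

```python
from dataclasses import dataclass, field

@dataclass
class Harness:
    memory: dict = field(default_factory=dict)       # entity/preference graph
    corrections: list = field(default_factory=list)  # human override log

    def run(self, task: dict, model_call, risk_threshold: float = 0.7) -> dict:
        context = self.memory.get(task["entity"], {})   # context ingestion
        draft, confidence = model_call(task, context)   # model is swappable
        if confidence < risk_threshold:                 # policy gate
            return {"status": "needs_review", "draft": draft}
        return {"status": "done", "output": draft}

    def correct(self, task: dict, fixed_output) -> None:
        # Correction data compounds: log it and fold it back into memory.
        self.corrections.append((task, fixed_output))
        self.memory[task["entity"]] = {"last_corrected": fixed_output}
```

Swapping a stronger model into `model_call` raises throughput without touching the harness's accumulated memory, policy thresholds, or correction history, which is the compounding mechanic the table describes.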

This distinction supports a useful emotional test: builders should feel excited on model release day. If a stronger model makes your product better, your architecture likely sits above the feature blast radius. If a stronger model makes your product unnecessary, your architecture is probably too close to raw inference and too far from durable workflow ownership.

Durable AI businesses increasingly resemble operating systems for decisions, not chat interfaces for prompts. The model is the engine; the harness is the vehicle. New engines should increase your speed, not destroy your business.

In enterprise and prosumer settings, this is also where trust economics emerge. Auditability, override controls, escalation policies, and correction histories are difficult for generalized platform features to replicate at domain depth. These assets are dull from a demo perspective, but they are exactly what converts intermittent utility spend into repeat organizational spend.

8. Portfolio Construction: Trade Short Waves, Compound Long Assets

Treating all products as identical long-duration bets is no longer efficient. A portfolio approach is more robust: dedicate part of build capacity to short-wave edge extraction, while reserving explicit allocation for harness, distribution, and data assets that compound across waves.

Table 9. Barbell Portfolio for the Cannibalization Era
| Portfolio Sleeve | Allocation | Primary Objective | Expected Horizon |
| --- | --- | --- | --- |
| Flash utilities | 30% | Exploit model or UX discontinuities quickly | 30-180 days |
| Workflow products | 30% | Capture recurring payments from repeat tasks | 6-18 months |
| Agent harness core | 25% | Compound proprietary memory and policy assets | 2-5 years |
| Distribution/media moat | 10% | Lower launch CAC for each new wave | Persistent |
| R&D optionality | 5% | Prototype frontier ideas before market consensus | Always-on |
Illustrative allocation for solo operators or small teams balancing immediate cashflow with durable capability building.

The portfolio view changes decision quality in three ways. First, it normalizes product sunsetting as a healthy outcome when expected payback has already been captured. Second, it prevents over-investment in defensive roadmaps for products with structurally limited duration. Third, it creates an explicit reinvestment path from short-cycle profits into long-cycle assets.

This is analogous to a trading desk funding a long-term strategy book: short-term volatility capture provides operating cash and market intelligence, while a smaller set of durable positions absorbs most compounding gains. In software terms, the durable book includes memory infrastructure, domain datasets, correction logs, and trusted distribution channels.

Table 10. Build Decision Matrix: Speed, Risk, and Durability
| Build Choice | Cannibalization Risk | Monetization Speed | Durability | Recommendation |
| --- | --- | --- | --- | --- |
| Single-model UI wrapper | Critical | Fast | Low | Ship only if payback target is <30 days |
| Prompt-packaged assistant | High | Fast | Low-Medium | Sell as bundle and collect upfront |
| Domain workflow automation | Medium | Moderate | Medium | Good bridge product if tied to outcome metrics |
| Agent harness with memory and policy | Low-Medium | Moderate | High | Best long-cycle compounding path |
| Data network + expert feedback loop | Low | Slow | Very High | Durable moat; fund via short-cycle tools |
Use this matrix to classify each new idea before development begins.
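
The matrix reduces to a small classification sketch. The risk ordering, cutoffs, and recommendation strings are illustrative heuristics, not the only reasonable mapping:

```python
def classify_build(cannibalization_risk: str, durability: str) -> str:
    """Map a Table 10-style (risk, durability) pair to a governance band."""
    risk_rank = ["low", "low-medium", "medium", "high", "critical"].index(
        cannibalization_risk
    )
    durable = durability in ("high", "very high")
    if risk_rank >= 3 and not durable:
        # High/critical risk without durability: short-wave guardrails.
        return "short-wave: hard payback threshold and explicit sunset criteria"
    if durable:
        return "long-wave: judge by compounding telemetry quality, not immediate MRR"
    return "bridge: tie pricing and roadmap to outcome metrics"

assert classify_build("critical", "low").startswith("short-wave")
assert classify_build("low", "very high").startswith("long-wave")
```

Running every new idea through an explicit function like this, before development, is what makes the guardrails enforceable rather than aspirational.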

The matrix also clarifies investor communication and self-governance. If you know a concept sits in "critical cannibalization risk / fast monetization," you can set explicit guardrails: limited engineering investment, hard payback thresholds, and clear sunset criteria. Conversely, if a concept sits in "low risk / slow monetization / high durability," you should expect longer payback and evaluate by compounding telemetry quality rather than immediate MRR.

9. Release-Day Operations: Converting Model Shocks into Revenue

In a fast-cycle market, model launch days are not merely technical events. They are commercial events. Teams that process these shocks quickly can capture outsized demand before market messaging and platform UX normalize. This is where a lightweight but disciplined operating cadence outperforms both ad hoc shipping and heavyweight planning.

Table 11. T+Window Operating Playbook After Major Model Releases
| T+Window | Operating Action | Monetization Intent |
| --- | --- | --- |
| T+0 to T+24h | Benchmark old vs new model on core user jobs | Detect instantly marketable quality/cost delta |
| T+24h to T+72h | Ship upgraded prompts, routing, and pricing copy | Capture novelty while search/social attention is high |
| T+3 to T+14d | Launch narrow use-case landing pages and micro-offers | Convert curiosity traffic into paid cohorts |
| T+2 to T+6w | Collect correction telemetry and publish comparative outcomes | Build trust and reduce churn |
| T+6 to T+12w | Decide: double down, bundle, or sunset | Reallocate to higher-edge opportunities |
Operational sequence for turning model improvements into shipped value and near-term revenue.

The first 72 hours are disproportionately important. Benchmarking determines whether your current offer should reprice, reposition, or split into a cheaper and a premium tier. Public artifacts during this period matter: comparative output examples, latency and cost deltas, and concrete statements of what the update does for user outcomes. Vague excitement posts waste the window.
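
The T+0 benchmarking step can be sketched as a comparison harness. `run_job` is a placeholder for an actual evaluation call, and the score/cost dict shape it returns is an assumption of this sketch:

```python
def benchmark_delta(jobs, run_job, old_model: str, new_model: str):
    """Run the same core user jobs through both models and summarize deltas.

    run_job(model, job) is assumed to return {"score": float, "cost": float}.
    """
    deltas = []
    for job in jobs:
        old = run_job(old_model, job)
        new = run_job(new_model, job)
        deltas.append({
            "job": job,
            "quality_delta": new["score"] - old["score"],
            "cost_delta": new["cost"] - old["cost"],
        })
    # Jobs that got better AND cheaper are the immediate repricing shortlist.
    shortlist = [d for d in deltas
                 if d["quality_delta"] > 0 and d["cost_delta"] < 0]
    return deltas, shortlist
```

The output feeds the 72-hour decision directly: the shortlist identifies where to reprice or split tiers, and the full delta list supplies the comparative artifacts worth publishing.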

The following 2-6 weeks are the telemetry phase. This is when you accumulate the signals that convert one-off launch demand into repeatable decision intelligence: where users still intervene, where output quality failed, what tasks retained willingness-to-pay despite platform improvements, and which cohorts churned after copying baseline improvements from free tools.

Critically, this cadence can coexist with a high quality bar. Rapid does not mean sloppy. It means tight scopes, pre-defined instrumentation, and clear go/no-go thresholds. In practice, this often leads to better product hygiene than a speculative long roadmap because each cycle forces concrete evidence of value.

10. Conclusion: A Practical Doctrine for the AI Edge Economy

Short-half-life AI tools are not a dead end; they are a different asset class. The appropriate doctrine is neither naive permanence nor nihilistic churn. It is structured edge harvesting: build fast, monetize quickly, instrument deeply, and recycle gains into durable harness assets.

Five operating principles follow from the evidence. First, assume edge decay by default and design offers for immediate payback. Second, package outcomes rather than features because outcomes survive cloning longer. Third, manage deprecation calendars as revenue variables. Fourth, allocate portfolio capacity explicitly across short-wave cashflow and long-wave compounding. Fifth, treat every major model release as a deploy-and-sell event, not a spectator event.

For builders with strong execution velocity, this regime can be positive-sum. Falling model costs and rising baseline capability reduce the capital required to launch, test, and monetize. The bottleneck shifts to judgment: selecting the right wedge, setting the right pricing clock, and building the right substrate beneath each wave. In this sense, software abundance does not remove edge; it changes edge from possession to process.

The deepest strategic inversion is psychological. In the old playbook, cannibalization was treated as failure. In the new playbook, cannibalization can be evidence that you correctly identified value early. If you captured cashflow, learning, and proprietary telemetry before integration, you did not lose the game; you completed the cycle. The compounding question is what your harness learned and what you can deploy next.

References

Stanford Human-Centered AI. (2025). 2025 AI Index Report. https://hai.stanford.edu/ai-index/2025-ai-index-report

Stanford Human-Centered AI. (2025). Chapter 1: The AI Research and Development Landscape. https://hai.stanford.edu/sites/default/files/2025-04/chapter_1_the_ai_research_and_development_landscape.pdf

Stanford Human-Centered AI. (2025). Chapter 4: AI in the Economy. https://hai.stanford.edu/sites/default/files/2025-04/chapter_4_ai_in_the_economy.pdf

OpenAI. (2026). API Pricing. https://openai.com/api/pricing/

OpenAI. (2026). GPT-5.3-Codex Model. https://platform.openai.com/docs/models/gpt-5.3-codex

OpenAI. (2026). API Deprecations. https://platform.openai.com/docs/deprecations

OpenAI Help Center. (2026). ChatGPT Release Notes. https://help.openai.com/en/articles/6825453-chatgpt-release-notes

Anthropic. (2026). Pricing. https://docs.anthropic.com/en/docs/about-claude/pricing

Anthropic. (2026). Claude Opus 4.6. https://www.anthropic.com/claude/opus

Anthropic. (2026). Claude Sonnet 4.6. https://www.anthropic.com/claude/sonnet

Anthropic. (2026). Model Deprecations. https://docs.anthropic.com/en/docs/about-claude/model-deprecations

Google AI for Developers. (2026). Gemini API Pricing. https://ai.google.dev/gemini-api/docs/pricing

Google AI for Developers. (2026). Gemini API Changelog. https://ai.google.dev/gemini-api/docs/changelog

Brynjolfsson, E., Li, D., and Raymond, L. R. (2023). Generative AI at Work (NBER Working Paper No. 31161). https://www.nber.org/papers/w31161

Bick, A., Blandin, A., and Deming, D. (2024). The Rapid Adoption of Generative AI (NBER Working Paper No. 32966). https://www.nber.org/papers/w32966

GitHub. (2022). Research: Quantifying GitHub Copilot's impact on developer productivity and happiness. https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/

Carta. (2025). The startup shutdown surge continues in 2024. https://carta.com/data/the-startup-shutdown-surge-continues-in-2024/

TechCrunch. (2025, January 22). AI apps saw over $1 billion in consumer spending in 2024. https://techcrunch.com/2025/01/22/ai-apps-saw-over-1-billion-in-consumer-spending-in-2024/

TechCrunch. (2025, July 30). GenAI apps doubled their revenue, grew to 1.7B downloads in first half of 2025. https://techcrunch.com/2025/07/30/gen-ai-apps-doubled-their-revenue-grew-to-1-7b-downloads-in-first-half-of-2025/

Suggested citation: Baratta, R. (2026). “Short-Half-Life AI Tools: Edge Harvesting as the New Software Business Model.” Buildooor Research Brief, March 2026.

Correspondence: buildooor@gmail.com