Buildooor Research Brief -- March 2026

The AI Product Clock Speed Regime: OpenAI, Anthropic, and the High-Frequency Software Market

buildooor % claude --model opus-4.6 -p "/research-paper software now reprices at frontier-lab tempo"
Published March 23, 2026 -- Working Paper v1.0
Keywords: AI product cycles, OpenAI, Anthropic, product clock speed, AI pricing compression, model distillation, usage limits, platform rent extraction, frontier labs, software market structure

Abstract

The user intuition behind phrases like "algorithmic-trading-level product updates" is directionally correct, even if the metaphor should not be taken literally. Frontier AI markets do not move in milliseconds, but compared with almost every prior software market they do exhibit unusually high clock speed. Between February 2025 and February 2026, OpenAI and Anthropic repeatedly reset the baseline through model launches, cheaper smaller variants, new consumer tiers, usage tapering, direct app integrations, and explicit retirement calendars. Stanford's 2025 AI Index provides the macro substrate for why this feels so violent: nearly 90% of notable models in 2024 came from industry, organizational AI adoption rose to 78%, and the inference cost for GPT-3.5-level quality fell from $20.00 per million tokens in November 2022 to $0.07 by October 2024. This paper argues that frontier labs now operate four interacting clocks -- release, price, usage, and retirement -- that compress product cycles faster than classic SaaS strategy assumes. Free usage is not evidence of weak monetization discipline; it is subsidized distribution. Tapered usage is not user-hostile inconsistency; it is price discrimination. Distilled or mini variants are not mere rumor; OpenAI explicitly documents distillation as a method for training smaller models from larger ones, while Anthropic's recent product ladder shows equivalent capability flowing into cheaper default surfaces. The result is a market that behaves less like stable software categories and more like repeated arbitrage closure: wrappers and mid-layer products get short monetization windows, then value is re-internalized by the labs through premium plans, enterprise workspaces, API volume, credits, and first-party product integration. The practical implication is straightforward: builders should stop asking whether the visible feature will remain differentiated for ten years and start asking which layer of the business improves when the next model release lands.

1. Software Has Acquired Market Microstructure

Normal software markets used to move on a slower stack of clocks. Product teams shipped quarterly, buyers evaluated annually, pricing changed sparingly, and core technical baselines remained stable long enough for wrappers, plugins, and point solutions to build comfortable middle classes around them. Frontier AI labs have changed that cadence. The relevant competitive arena is no longer only your direct category. It is the moving baseline set by a handful of labs that control both the underlying models and an increasing number of direct-to-user surfaces.

That is why the market feels closer to a high-frequency environment than prior software cycles did. Not because OpenAI or Anthropic literally update products every second, but because the loop from release to user sampling to category imitation to price compression to feature absorption can now occur inside a single quarter. In historical SaaS, a feature advantage might remain commercially distinct for years. In frontier AI, a feature may be real, useful, and monetizable while still being structurally temporary.

Table 1. The Four Clocks That Define Frontier AI Markets
| Clock | Observable Mechanism | 2025-2026 Evidence |
| --- | --- | --- |
| Release clock | New model, tool, or app default changes the baseline | GPT-4.5, GPT-4.1, GPT-5.x, Claude Sonnet 3.7, Claude 4, Sonnet 4.6 all shipped within a single 13-month window |
| Price clock | Capability moves down to cheaper tiers or smaller models | OpenAI GPT-5.4 mini and nano; Anthropic kept Sonnet 4.6 at Sonnet 4.5 pricing |
| Usage clock | Free, low-cost, and premium tiers meter the same capability differently | ChatGPT Free/Go/Plus/Pro and Claude Free/Pro/Max now form clear discrimination ladders |
| Retirement clock | Old models are deprecated quickly enough to force rewrites | GPT-4.5 preview, chatgpt-4o-latest, Claude Sonnet 3.7, Claude 3.5 Sonnet all hit dated retirement paths |
These clocks interact. Release resets the baseline, price pushes capability downmarket, usage tiers sort willingness to pay, and retirement forces migration.

The important shift is that these clocks are not independent. A new model release often coincides with cheaper routing options, a new paid tier, changed limits on the free tier, and an implied or explicit countdown for old endpoints. Builders are therefore not only competing on product quality. They are competing against a continuously repricing market structure.

The better phrase is not "AI is chaotic." It is "AI now reprices categories at frontier-lab tempo." That is the closest software has come to market microstructure thinking.

2. The Release Clock Now Resets Categories Faster Than Roadmaps Can Digest

The most visible source of compression is the release clock itself. OpenAI and Anthropic are no longer shipping isolated annual tentpole models. They are updating consumer defaults, developer-facing models, plan structures, and replacement guidance in a rolling sequence. The effect is not just faster innovation. It is faster baseline invalidation.

Table 2. OpenAI and Anthropic Release / Reset Cadence
| Date | Provider | Event | Strategic Read |
| --- | --- | --- | --- |
| 2025-02-24 | Anthropic | Claude Sonnet 3.7 launched in claude.ai and the API | Anthropic moved frontier capability directly into the main app and developer surface at once |
| 2025-02-27 | OpenAI | GPT-4.5 research preview launched to Pro users and developers | OpenAI used the consumer Pro tier as both prestige layer and market-sampling layer |
| 2025-04-14 | OpenAI | GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano launched in the API | Capability improvement arrived simultaneously with a smaller and cheaper ladder |
| 2025-04-14 | OpenAI | GPT-4.5 preview scheduled for July 14 shutdown | A major preview model was put on an explicit short clock almost immediately |
| 2025-05-22 | Anthropic | Claude Sonnet 4 was added to claude.ai after the Claude 4 model family rollout | Anthropic reset its lineup without waiting for long enterprise digestion cycles |
| 2025-08-05 | Anthropic | Claude Opus 4.1 entered the active model roster | The frontier tier kept moving while prior tiers were still being integrated downstream |
| 2025-08-13 | Anthropic | Claude Sonnet 3.5 models were deprecated for October 22, 2025 retirement | Anthropic turned a prior default family into a timed migration project |
| 2025-11-18 | OpenAI | chatgpt-4o-latest snapshot deprecated for February 17, 2026 shutdown | Even convenience aliases are now temporary instruments |
| 2026-01-16 | OpenAI | ChatGPT Go rolled out worldwide at $8 per month | OpenAI widened the low-cost funnel instead of forcing an all-or-nothing jump from free to Plus |
| 2026-02-13 | OpenAI | GPT-4o, GPT-4.1, GPT-4.1 mini, o4-mini, and GPT-5 were retired from ChatGPT | Consumer defaults were reset in bulk, not one model at a time |
| 2026-02-17 | Anthropic | Claude Sonnet 4.6 launched and became the default on Free and Pro plans | Anthropic pushed near-frontier quality downward into the mass market immediately |
Dates from official OpenAI and Anthropic release notes, launch posts, pricing pages, and deprecation logs.

Notice what is unusual here. The cadence is not merely "new models appear quickly." The cadence is that launches, default changes, and sunsets sit very close together. GPT-4.5 launched on February 27, 2025. GPT-4.1 arrived on April 14, 2025, with GPT-4.5 preview already placed on a shutdown path for July 14. Anthropic launched Sonnet 3.7 on February 24, 2025, then deprecated it on October 28, 2025, and retired it on February 19, 2026. These are not decade-long platform epochs. They are operating windows.
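These operating windows reduce to simple date arithmetic. A minimal sketch, using only the launch and shutdown dates already cited above:

```python
from datetime import date

# Launch and shutdown dates as cited in the text. The window is the time
# a builder had to monetize before migration became mandatory.
windows = {
    "GPT-4.5 preview": (date(2025, 2, 27), date(2025, 7, 14)),
    "Claude Sonnet 3.7": (date(2025, 2, 24), date(2026, 2, 19)),
}

for model, (launch, shutdown) in windows.items():
    days = (shutdown - launch).days
    print(f"{model}: {days} days (~{days / 30.4:.1f} months)")
```

Run as-is, this reports roughly 4.5 months for GPT-4.5 preview and just under 12 months for Sonnet 3.7, which is the operating-window framing in numeric form.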

Historically, downstream product builders could treat upstream API choice as a semi-stable implementation detail. That assumption no longer holds. Model selection, prompt behavior, cost envelope, and even which model names customers recognize are all changing quickly enough that roadmap inertia becomes a competitive tax. If your product or marketing language assumes a provider baseline that disappears within a quarter, you are already behind the market.

3. Quality Is Flowing Down the Ladder Faster Than Most Builders Admit

The second clock is price compression. Users perceive this as a confusing mix of rumors about distillation, mini models, and sudden improvements at lower price points. The cleaner reading is that the market is explicitly organized around downward capability flow. OpenAI makes this explicit in documentation: it teaches developers how to use a larger model to produce training data for a smaller model so the smaller model can perform similarly on a specific task. That is not rumor. It is productized method.
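The documented pattern can be sketched in a few lines. This is an illustrative outline of distillation data generation, not OpenAI's actual API: `teacher_answer` is a stand-in for a call to the larger model, and the chat-format records mirror the common JSONL fine-tuning shape.

```python
import json

def teacher_answer(prompt: str) -> str:
    # Placeholder for a call to the larger "teacher" model.
    return f"[teacher completion for: {prompt}]"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Turn teacher completions into chat-format records that could be
    used as supervised fine-tuning data for a smaller "student" model."""
    return [
        {
            "messages": [
                {"role": "user", "content": p},
                {"role": "assistant", "content": teacher_answer(p)},
            ]
        }
        for p in prompts
    ]

records = build_distillation_set(["Summarize this contract clause."])
# Fine-tuning endpoints typically ingest one JSON record per line (JSONL).
print("\n".join(json.dumps(r) for r in records))
```

The economic point is visible in the shape of the code: the expensive model runs once per prompt at training time, and the cheap model serves the task thereafter.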

Table 3. Evidence That Capability Is Moving Downmarket
| Compression Signal | Reading | Implication |
| --- | --- | --- |
| Stanford AI Index 2025 | Inference cost for GPT-3.5-level quality fell from $20.00 per 1M tokens in Nov 2022 to $0.07 by Oct 2024 | Falling cost shrinks the time any single product surface can command scarcity pricing |
| Stanford AI Index 2025 | 78% of organizations reported AI use in 2024, up from 55% in 2023 | Adoption is broad enough that every major release instantly has a large sampling market |
| Stanford AI Index 2025 | Nearly 90% of notable 2024 models came from industry | A small number of labs control the resets that downstream builders must absorb |
| OpenAI pricing | GPT-5.4 is $2.50 / $15.00 per 1M input/output tokens; GPT-5.4 mini is $0.75 / $4.50; nano is $0.20 / $1.25 | OpenAI prices a full internal quality ladder for immediate arbitrage and replacement |
| OpenAI distillation guide | OpenAI explicitly documents using a larger model to create data that trains a smaller model to perform similarly on a specific task | Capability-downshifting is not rumor; it is a published optimization path |
| Anthropic Sonnet 4.6 launch | Sonnet 4.6 became default for Free and Pro users while pricing stayed at $3 / $15 per 1M tokens | Anthropic is also moving more capability into lower-priced default lanes |
Where the paper interprets provider behavior beyond explicit wording, that interpretation is stated as inference rather than confirmed claim.

OpenAI's pricing page makes the ladder legible: GPT-5.4, GPT-5.4 mini, and GPT-5.4 nano offer the same family identity across materially different price points. Anthropic's public messaging uses different language, but the observed effect is similar. Sonnet 4.6 became the default on Free and Pro plans while remaining priced like Sonnet 4.5 at $3 / $15 per million input and output tokens. In Anthropic's own launch post, users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time and even preferred it to Opus 4.5 59% of the time in early testing. The economic implication is that yesterday's frontier experience is increasingly tomorrow's mass-market default.

The safest way to phrase the inference is this: OpenAI confirms a formal distillation path; Anthropic does not frame its stack the same way publicly, but its price-performance moves are consistent with the same underlying market logic. Higher-end capability is repeatedly harvested, packaged, and pushed into cheaper lanes. That is why quality improvements now feel simultaneously dramatic and non-monopolizable.
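Using the per-million-token prices quoted above, the ladder's economics are easy to make concrete. The prices are as cited; the 2,000-input / 500-output request shape is an assumption for illustration:

```python
# Per-million-token prices as quoted in the text: (input, output).
ladder = {
    "GPT-5.4":      (2.50, 15.00),
    "GPT-5.4 mini": (0.75, 4.50),
    "GPT-5.4 nano": (0.20, 1.25),
    "Sonnet 4.6":   (3.00, 15.00),
}

def request_cost(tier: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at a given tier."""
    inp, out = ladder[tier]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Assumed workload shape: 2,000 input tokens, 500 output tokens.
for tier in ladder:
    print(f"{tier}: ${request_cost(tier, 2_000, 500):.5f} per request")
```

At that shape the nano tier comes out roughly 12x cheaper per request than full GPT-5.4, which is what makes "better-enough" substitution move so fast once quality flows down the ladder.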

4. Free, Tapered, and Metered Usage Are Strategic, Not Contradictory

Users often read the current market as incoherent: providers offer advanced free access, then impose annoying caps, then introduce lower-cost tiers, then sell premium access on top. But this is exactly what a mature price-discrimination system looks like. Free access is subsidized distribution and behavior sampling. Tapered usage sorts casual from serious users. Premium tiers capture urgency, status, and workflow dependence. Credits monetize overflow without forcing a full plan upgrade.

Table 4. Usage Ladders as Distribution and Price Discrimination
| Tier | Access Pattern | Usage Shape |
| --- | --- | --- |
| ChatGPT Free | Limited GPT-5.3 access plus tools and GPTs | 10 GPT-5.3 messages every 5 hours, then automatic fallback to mini; ads can support broader access in some markets |
| ChatGPT Go / Plus | Low-cost and mid-tier paid access | Up to 160 GPT-5.3 messages every 3 hours; Go can include ads and has lower manual Thinking access |
| ChatGPT Business / Pro | High-dependence work tiers | Unlimited GPT-5 models subject to abuse guardrails plus richer reasoning and collaboration access |
| ChatGPT Credits | Overflow monetization after included limits | Pay-as-you-go credits extend Codex and Sora use without upgrading the base subscription |
| Claude Free | Full app with limited high-end usage | Session-based limit resets every 5 hours and varies with demand |
| Claude Pro | $20/month, or $17/month billed annually | At least 5x free-service usage during peak hours; roughly 45 messages every 5 hours for short chats |
| Claude Max | $100 / $200 premium tiers | Choose 5x or 20x Pro usage; at least 225 or 900 messages every 5 hours for short chats |
Usage details change over time; readings here reflect official plan descriptions and help-center documentation available in March 2026.

OpenAI's current stack makes the structure particularly obvious. Free users get limited flagship access and separate tool caps. Go at $8 per month widens the funnel with more messages and uploads while still allowing ads. Plus and Business users get more control and reasoning access, while Pro gets effectively unlimited top-tier usage subject to abuse guardrails. Anthropic mirrors the same economic intent through different branding: Free has demand-shaped five-hour sessions, Pro expands that budget materially, and Max sells 5x or 20x Pro usage with explicit priority at high traffic times.

Free usage is not anti-monetization. It is user acquisition. Limits are not random friction. They are segmentation. Credits are not a side feature. They are overflow monetization.

Once you see the ladder clearly, the frequent complaints about "high quality sometimes, lower quality later" become easier to interpret. The platforms are intentionally blending aspiration, habituation, and metering. They want broad adoption, observable demand signals, and a clear path for heavy users to move into higher-value monetization buckets.
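The metering pattern behind these ladders is mechanically simple. Here is a sketch of a rolling-window limiter with per-tier caps, using the figures quoted above; the fallback policy and data structure are illustrative, not either provider's implementation:

```python
from collections import deque

# (cap, window seconds); None means unmetered, subject to guardrails.
TIER_RULES = {
    "free": (10, 5 * 3600),    # 10 flagship messages per 5 hours
    "plus": (160, 3 * 3600),   # 160 flagship messages per 3 hours
    "pro": (None, None),
}

class UsageWindow:
    def __init__(self, tier: str):
        self.cap, self.window = TIER_RULES[tier]
        self.stamps = deque()  # send times (seconds) inside the window

    def route(self, now: float) -> str:
        """Return which model class serves a message sent at `now`."""
        if self.cap is None:
            return "flagship"
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()  # expire messages outside the window
        if len(self.stamps) < self.cap:
            self.stamps.append(now)
            return "flagship"
        return "mini-fallback"

w = UsageWindow("free")
routes = [w.route(t) for t in range(11)]  # 11 messages in quick succession
print(routes.count("flagship"), routes.count("mini-fallback"))  # 10 1
```

The same few lines express all three strategic functions at once: the cap is segmentation, the fallback is habituation on a cheaper lane, and the tier table is the price-discrimination ladder.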

5. Retirement Clocks Turn Product Updates Into Mini-Capitulations

The hidden clock is retirement. Every launch draws attention, but the commercial brutality of the current market often shows up later when older models are retired, aliases vanish, or defaults are bulk-replaced. That is when downstream teams are forced into mini-capitulations: benchmark resets, pricing changes, revised prompt stacks, sales-copy rewrites, support overhead, and sometimes outright repositioning.

Table 5. Retirement Paths Are Now Part of the Product Itself
| Model or Surface | Launch / Notice | Retirement | Why It Matters |
| --- | --- | --- | --- |
| GPT-4 in ChatGPT | Retirement announced 2025-04-10 | 2025-04-30 | Even iconic flagship models now exit the consumer surface on explicit dates |
| GPT-4.5 preview | Launched 2025-02-27; deprecated 2025-04-14 | 2025-07-14 | A marquee preview model spent only weeks before being placed on the off-ramp |
| chatgpt-4o-latest | Deprecated 2025-11-18 | 2026-02-17 | Alias-based architectures now inherit real shutdown risk |
| Claude Sonnet 3.5 (20240620) | Deprecated 2025-08-13 | 2025-10-22 | A once-mainline Anthropic workhorse became a migration project in under 18 months |
| Claude Sonnet 3.7 | Launched 2025-02-24; deprecated 2025-10-28 | 2026-02-19 | A main-app default went from launch to retirement in under 12 months |
| Claude Haiku 3 | Deprecated 2026-02-19 | 2026-04-20 | Even cheaper utility tiers now roll on a scheduled cadence |
Launch-to-retirement duration matters because it sets the true half-life of any dependent product surface.

GPT-4.5 preview is the clearest OpenAI example. It launched on February 27, 2025, was deprecated on April 14, and was scheduled to shut down on July 14. Anthropic's Sonnet 3.7 lasted longer, but even there the window was short enough to force migration planning within the same planning year. This is why AI product cycles feel more violent than the headline launch count alone suggests. The reset is not just that a new model appears. The reset is that an old one leaves.

For builders, every retirement date is a monetization date. If the product is still profitable after migration cost, keep it. If migration destroys the thesis, sunset it. Pretending retirement is only an engineering detail is how teams end up subsidizing their own obsolescence with roadmap labor.
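The keep-or-sunset test above is just arithmetic. A sketch with hypothetical numbers; `monthly_profit`, `migration_cost`, and the window length are all placeholders a team would fill from its own books:

```python
def keep_or_sunset(monthly_profit: float,
                   months_until_next_reset: float,
                   migration_cost: float) -> str:
    """Compare expected profit over the next operating window against
    the one-time cost of migrating to the replacement model."""
    expected_profit = monthly_profit * months_until_next_reset
    return "keep" if expected_profit > migration_cost else "sunset"

# Hypothetical wrapper: $4k/month profit, ~9 months until the next
# likely reset, $15k of engineering to migrate prompts and evals.
print(keep_or_sunset(4_000, 9, 15_000))  # keep: $36k > $15k
```

The point of writing it down is discipline: if the decision is never computed, roadmap labor quietly subsidizes obsolescence by default.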

6. This Does Resemble Amazon and Uber -- But the Loop Is Faster and More Vertical

The analogy to Amazon and Uber is useful because it points toward the right economic shape: subsidize access early, create dependence or habit, then capture value more selectively once the market structure is set. The difference is that frontier labs compress this loop and control more layers at once than those older platforms did.

Table 6. Historical Platform Analogies and the Frontier-Lab Twist
| Pattern | Amazon / Uber Analogy | Frontier Lab Version |
| --- | --- | --- |
| Subsidize adoption | Amazon normalized low prices and Prime convenience; Uber normalized cheap rides and dense supply | OpenAI and Anthropic normalize advanced AI with free tiers, cheap paid tiers, and broad feature access |
| Use scale to learn demand | Amazon learned which categories and seller tools compounded; Uber learned which routes, cohorts, and regions retained | Labs see which tasks, tools, and workflows generate the most demand across consumer and developer surfaces |
| Segment by willingness to pay | Prime, ads, AWS, subscriptions, and take rates created layered monetization | Free/Go/Plus/Pro/Business and Free/Pro/Max/API/Team create layered monetization around one capability stack |
| Extract after habit formation | By 2024, Amazon reported $108B in AWS revenue and $68.6B in total operating income; Uber reported $43.978B in 2024 revenue and $2.799B in GAAP operating income | Labs increasingly capture value through premium plans, enterprise workspaces, tool-specific credits, priority access, and API volume |
| Key difference | Amazon and Uber did not usually own the underlying cognition layer of everyone building on top of them | Frontier labs own the model, the app, the API, and increasingly the developer tooling too, which accelerates re-internalization |
The analogy is structural, not identical. Frontier labs own the cognition layer itself, which speeds feedback and re-internalization.

Amazon spent years normalizing low-price, high-convenience consumer behavior, then captured enormous value in more durable infrastructure and monetization layers. Andy Jassy's 2024 shareholder letter reported $108B in AWS revenue and $68.6B in total operating income. Uber normalized cheap, available rides long before the business looked traditionally healthy; by 2024 it reported $43.978B in revenue and $2.799B in GAAP operating income. These are not identical stories, but they share the same deeper pattern: subsidized demand can be rational if it teaches the platform where the eventual rents sit.

Frontier labs do something even more powerful. They subsidize the consumer side with free or low-cost chat access, subsidize the builder side with cheap smaller models and extensive tooling, and then learn from both sides simultaneously. They can watch what end users actually do, what developers try to productize, and where willingness to pay persists after the novelty wave. That shortens the path from subsidy to extraction.

7. The Frontier Lab Value-Capture Loop Is Now Visible

Once the clocks are put together, the market stops looking random. It looks like a repeatable capture loop. Providers subsidize access, observe what users and builders value, move quality downmarket, sort users by willingness to pay, absorb high-signal workflows into first-party surfaces, retire stale surfaces, and then meter the heavy users who remain.

Table 7. A Repeating Value-Capture Loop in Frontier AI
| Step | Observable Example | Builder Consequence |
| --- | --- | --- |
| 1. Subsidize | Free access, low-cost Go, and mass-market Pro tiers widen the funnel | You can acquire users cheaply, but you are entering a lab-owned distribution system |
| 2. Sample | Consumer apps, APIs, and coding products reveal which tasks people value most | Your product category becomes free market research for the platform above you |
| 3. Shrink | Distillation, mini and nano models, and lower-priced defaults move capability downmarket | Your differentiation window narrows as better-enough quality gets cheaper |
| 4. Segment | Higher reasoning tiers, premium usage bundles, and enterprise workspaces sort willingness to pay | Labs capture more of the surplus that wrappers hoped to monetize |
| 5. Integrate | Native search, coding, agents, memory, connectors, and office integrations absorb wrapper features | Standalone feature businesses face margin compression or positioning collapse |
| 6. Retire | Deprecation calendars force migration to the next default stack | Every missed migration becomes a mini-capitulation in pricing, UX, or architecture |
| 7. Meter overflow | Credits, priority access, and premium seats monetize heavy users without changing the free entry point | Value extraction continues after the initial subscription decision |
This framework is an inference synthesized from official product, pricing, and deprecation behavior across OpenAI and Anthropic.

The key consequence is that the middle of the market is under chronic pressure. Thin wrappers, prompt packs presented as products, and single-feature assistants can absolutely make money. But they should increasingly be treated as short-window edges, not as default forever-businesses. Their function is to exploit a temporary inefficiency, capture cashflow and telemetry, and either graduate into a deeper workflow layer or be retired without drama.

This is where the "mini capitulations then value extractions" intuition becomes precise. A new model or feature compresses a downstream market; downstream products capitulate on price or narrative; the lab later captures more of the remaining surplus through premium reasoning tiers, enterprise security, API volume, credits, or first-party product expansion. That is not a one-off event. It is becoming the standard rhythm.

8. Builders Need to Price for Clock Speed, Not for Hope

The practical response is not nihilism. It is better asset classification. If a product sits close to raw model capability, assume a short half-life and price for rapid payback. If a product owns workflow state, approvals, memory, routing, distribution, or human accountability, the cycle may be survivable or even beneficial because better models increase throughput rather than erasing the value.

Table 8. Survival by Distance From the Raw Model
| What You Sell | Clock-Speed Risk | Expected Window | Best Strategy |
| --- | --- | --- | --- |
| Pretty UI around the newest general model | Critical | Weeks to months | Treat it like a trade and price for immediate payback |
| Model brokerage or benchmark comparison alone | High | Months | Bundle into procurement, routing, or governance rather than pure comparison |
| Narrow workflow automation with domain fit | Medium | 6-18 months | Instrument value tightly and migrate fast when upstream baselines move |
| Agent harness with memory, policy, tools, and approvals | Low-Medium | 2-5 years | Let stronger models increase throughput while the harness owns the workflow |
| Human-accountable service with AI leverage | Low | Persistent | Own trust, judgment, and outcome accountability rather than raw capability access |
These are strategic buckets, not hard laws. The point is to classify exposure before you overbuild.

This framing also resolves a common emotional trap. Many founders interpret rapid feature absorption as proof that they built the wrong thing. Sometimes that is true. But often the more accurate conclusion is that they built a short-wave product and mistakenly funded it like a long-duration moat. If the product captured meaningful revenue or learning before integration pressure arrived, it may have succeeded on its actual time horizon.

The deeper strategic test is simple: what gets stronger when the next major model release lands? If the answer is nothing, you are probably renting a temporary edge. If the answer is your routing, memory, auditability, brand trust, or domain-specific workflow, you may actually be compounding on top of the release cycle instead of being crushed by it.
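The duration-matching test can be made mechanical. A sketch of the check implied by Table 8; the 2x safety factor and all dollar figures are assumptions for illustration, not rules:

```python
def months_to_payback(build_cost: float, monthly_gross_margin: float) -> float:
    return build_cost / monthly_gross_margin

def duration_matched(build_cost: float,
                     monthly_gross_margin: float,
                     expected_window_months: float) -> bool:
    """A product is duration-matched if it pays back its build cost
    well inside its expected competitive window (2x safety factor)."""
    return months_to_payback(build_cost, monthly_gross_margin) * 2 <= expected_window_months

# Hypothetical UI wrapper: $30k build, $10k/month margin, ~4-month window.
print(duration_matched(30_000, 10_000, 4))   # False: 3-month payback leaves no margin for error
# Hypothetical workflow product: same economics, 12-month window.
print(duration_matched(30_000, 10_000, 12))  # True
```

The same build can be a good trade or a bad moat; the variable that flips the answer is the window, not the product.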

9. Conclusion: Yes, We Are Seeing Turbo Product Cycles -- But the Better Frame Is Clock-Speed Arbitrage

So: are we witnessing turbo-accelerated product cycles, with rumors of distillation, free usage, tapered usage, quality tiers, and repeated value re-internalization? Yes. But the strongest version of the claim is not that the market has become irrationally noisy. It is that frontier AI has produced a new software regime where a small number of labs repeatedly reprice the market through four interacting clocks.

That regime is more aggressive than classic SaaS, more vertically integrated than the older platform stories, and closer to arbitrage closure than to slow category building. OpenAI and Anthropic do not need every downstream builder to fail for this structure to hold. They only need enough downstream experimentation to reveal where demand is real, then enough control over pricing, defaults, and retirement to reclaim the value layers that matter most.

The implication for founders is not "never build on frontier labs." It is: build with a more exact sense of duration. Some AI products are trades. Some are bridges. Some are real harness assets. The mistake is treating all three as the same thing. In a high-clock-speed market, bad duration matching is what kills strategy first.

References

Stanford Human-Centered AI. (2025). The 2025 AI Index Report. https://hai.stanford.edu/ai-index/2025-ai-index-report

Stanford Human-Centered AI. (2025). Research and Development -- The 2025 AI Index Report. https://hai.stanford.edu/ai-index/2025-ai-index-report/research-and-development

Stanford Human-Centered AI. (2025). Economy -- The 2025 AI Index Report. https://hai.stanford.edu/ai-index/2025-ai-index-report/economy

OpenAI. (2025, February 27). Introducing GPT-4.5. https://openai.com/index/introducing-gpt-4-5/

OpenAI. (2025, April 14). Introducing GPT-4.1 in the API. https://openai.com/index/gpt-4-1/

OpenAI. (2026). OpenAI API Pricing. https://openai.com/api/pricing/

OpenAI. (2026). Supervised Fine-Tuning: Distilling from a Larger Model. https://platform.openai.com/docs/guides/distillation

OpenAI. (2026). Deprecations. https://developers.openai.com/api/docs/deprecations

OpenAI Help Center. (2026). GPT-5.3 and GPT-5.4 in ChatGPT. https://help.openai.com/en/articles/11909943-gpt-53-and-54-in-chatgpt

OpenAI Help Center. (2026). ChatGPT Free Tier FAQ. https://help.openai.com/en/articles/9275245

OpenAI. (2026, January 16). Introducing ChatGPT Go, now available worldwide. https://openai.com/index/introducing-chatgpt-go/

OpenAI. (2026, February 9). Testing ads in ChatGPT. https://openai.com/index/testing-ads-in-chatgpt/

OpenAI Help Center. (2026). Using Credits for Flexible Usage in ChatGPT (Free/Go/Plus/Pro) & Sora. https://help.openai.com/en/articles/12642688-using-credits-for-flexible-usage-in-chatgpt-free-go-plus-pro-sora

Anthropic. (2026, February 17). Introducing Claude Sonnet 4.6. https://www.anthropic.com/news/claude-sonnet-4-6

Anthropic. (2026). Plans & Pricing. https://www.anthropic.com/pricing

Anthropic Help Center. (2026). About Free Claude Usage. https://support.anthropic.com/en/articles/8602283-about-free-claude-usage

Anthropic Help Center. (2026). About Claude's Pro Plan Usage. https://support.anthropic.com/en/articles/8324991-about-claude-s-pro-plan-usage

Anthropic Help Center. (2026). About Claude's Max Plan Usage. https://support.anthropic.com/en/articles/11014257-about-claude-s-max-plan-usage

Anthropic. (2026). Model deprecations. https://docs.anthropic.com/en/docs/about-claude/model-deprecations

Anthropic. (2025). API Release Notes. https://docs.anthropic.com/en/release-notes/api

Anthropic. (2025). Claude Apps Release Notes. https://docs.anthropic.com/en/release-notes/claude-apps

Amazon. (2025). 2024 Shareholder Letter. https://ir.aboutamazon.com/files/doc_financials/2025/ar/2024-Shareholder-Letter-Final.pdf

Uber Technologies, Inc. (2026, February 4). Uber Announces Results for Fourth Quarter and Full Year 2025. https://investor.uber.com/news-events/news/press-release-details/2026/Uber-Announces-Results-for-Fourth-Quarter-and-Full-Year-2025/default.aspx

TechCrunch. (2025, January 22). AI apps saw over $1 billion in consumer spending in 2024. https://techcrunch.com/2025/01/22/ai-apps-saw-over-1-billion-in-consumer-spending-in-2024/

TechCrunch. (2025, July 30). GenAI apps doubled their revenue, grew to 1.7B downloads in first half of 2025. https://techcrunch.com/2025/07/30/gen-ai-apps-doubled-their-revenue-grew-to-1-7b-downloads-in-first-half-of-2025/

Suggested citation: Baratta, R. (2026). "The AI Product Clock Speed Regime: OpenAI, Anthropic, and the High-Frequency Software Market." Buildooor Research Brief, March 2026.

Correspondence: buildooor@gmail.com