Meta introduced Muse Spark today, the first model in what the company is positioning as its flagship frontier line. The pitch is efficiency: performance competitive with the top Western and Chinese labs at roughly half the inference compute cost of comparable models, according to Meta's own numbers.

Alongside the launch, Meta confirmed AI capital expenditures in the range of $115 to $135 billion for 2026, nearly double last year's figure. A significant portion is earmarked for training clusters and custom accelerator silicon. The company's internal research roadmap now explicitly names Anthropic, OpenAI, and Google as the peer set, with Muse Spark framed as the first volley rather than a one-off.

The technical claims, if they hold up to independent benchmarking, matter beyond Meta. A material drop in inference cost rewrites the unit economics of every downstream application: search, ads, agents, AR glasses, on-device assistants. Meta's business model, ad-supported consumer products serving billions of users, can absorb compute costs that strangle subscription-first competitors. Cheaper models plus richer distribution is a combination rivals will have to answer.

Watch two things in the coming weeks. First, how aggressively Meta opens Muse Spark to enterprise developers. Second, whether the efficiency claims survive third-party evals on long-context, tool-use, and agentic tasks, the areas where benchmark wins have been most slippery.