News & Updates

The latest news and updates from companies in the WLTH portfolio.

Cerebras files to go public on Nasdaq and reports $510M in 2025 revenue, up 76% YoY, with a net income of $87.9M, up from a $485M net loss in 2024
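The growth figure implies a prior-year revenue number the item doesn't state directly. A minimal sanity-check sketch (the 2024 revenue is derived from the reported figures, not reported itself):

```python
# Back out the implied 2024 revenue from the reported 2025 figures:
# $510M in 2025 revenue, up 76% year over year.
revenue_2025 = 510.0           # $M, reported
growth_yoy = 0.76              # 76% YoY growth, reported
revenue_2024 = revenue_2025 / (1 + growth_yoy)  # implied, not reported
print(f"Implied 2024 revenue: ${revenue_2024:.1f}M")  # → $289.8M

# Swing in profitability: from a $485M net loss in 2024
# to $87.9M net income in 2025.
swing = 87.9 - (-485.0)
print(f"Net income swing: ${swing:.1f}M")  # → $572.9M
```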

@openai: Codex for (almost) everything. It can now use apps on your Mac, connect to more of your tools, create images, learn from previous actions, remember how you like to work, and take on ongoing and repeatable tasks. [video] Computer use is broadly here, and it's genuinely very cool, but it's worth flagging the structural asymmetry: unlike application-layer players, Apple and Google don't have to pipe everything through accessibility APIs. Vertical integration lets them operate deeper in the stack (the compositor, view hierarchy, and event loop itself), which is a real latency and reliability moat for on-device agents.

Cerebras (Techmeme, 5 days ago)

How Cerebras approaches competing against Nvidia

While Nvidia dominates the AI chip market, Cerebras Systems is working to differentiate itself. Founded in 2015, the AI inference vendor started with the idea of creating the world's largest computer chip. That mission led it to create a chip in 2019 that was about the size of a dinner plate, said James Wang, director of product marketing at Cerebras, on the latest Targeting AI podcast from Informa TechTarget. "Nothing like that had ever been done before," Wang said.

Wang added that he followed the chip's development as a technology analyst at the time, and that while vendors like Graphcore and SambaNova tried to compete with Nvidia using smaller chips, only Cerebras went large. "I thought that was probably the only chance anyone had of taking on Nvidia," Wang said. "If you're just going to make small changes, Nvidia will catch up and beat you."

Cerebras' approach has since shifted. Instead of focusing on AI training, the company has moved into AI inference. In August 2024, it launched Cerebras Inference, a product that delivers 1,800 tokens per second on Llama 3.1 8B and 450 tokens per second on Llama 3.1 70B. The vendor has seen substantial growth since the launch, Wang said. "The amount of inbound interest, the amount of companies that can use our products, the amount of startups building on Cerebras have just exploded," he said. He added that a vendor needs a large performance lead to compete against a technology giant like Nvidia; otherwise, Nvidia erodes that lead within a single generation. On Aug. 5, Cerebras Systems announced that it will help power OpenAI's open model gpt-oss-120B.
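For context on what those throughput figures mean in practice, a minimal sketch converting tokens per second into wall-clock response time (the 1,000-token response length is an illustrative assumption, not from the article):

```python
# Reported Cerebras Inference throughput (output tokens/second) per model.
RATES = {"Llama 3.1 8B": 1800, "Llama 3.1 70B": 450}

def generation_time(tokens: int, toks_per_sec: float) -> float:
    """Seconds to stream `tokens` output tokens at a steady rate."""
    return tokens / toks_per_sec

# Illustrative 1,000-token response (assumed length, not from the article):
for model, rate in RATES.items():
    print(f"{model}: {generation_time(1000, rate):.2f}s per 1,000-token response")
# → Llama 3.1 8B: 0.56s; Llama 3.1 70B: 2.22s
```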

Cerebras (TechTarget, 9 days ago)