The latest news and updates from companies in the WLTH portfolio.
The same ChatGPT chatbot that gave OpenAI's chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is now also doing her most mundane tasks at work, like summarizing her emails and Slack messages. Friar and other company executives are banking OpenAI's future on more of the latter as the company shifts its focus to business-oriented products and sheds some of its consumer offerings as a pathway to profitability.

OpenAI says it will introduce a new artificial intelligence model for "high-value professional work" as it faces heightened competition with rival Anthropic in attracting corporate customers to adopt AI assistants in their workplaces. "You'll see a new model coming from us in short order. We feel very excited about it," Friar said in an interview with The Associated Press.

OpenAI boasts more than 900 million weekly users of its core ChatGPT product, and Friar said about 95% of them "don't pay anything" for the popular chatbot. But while all those interactions build habits and reliance, they also strain the costly computing resources needed to power the company's AI systems and highlight the need for big business customers to help pay the bills. OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both lose more money than they make, putting the privately owned, San Francisco-based AI research laboratories in fierce competition to generate more revenue as they race toward becoming publicly traded on Wall Street.

A push to improve the performance and sales of OpenAI's business-oriented products -- already Anthropic's bread and butter -- has driven OpenAI to abandon some consumer initiatives, like the AI video generator app Sora. "I think it was a little heartbreaking, but we're like, OK, it's not the main event right now," Friar said. "We need to make sure that our new model that's coming has enough compute."

Codenamed Spud, OpenAI says its "smartest model yet" offers "stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production." It will be part of OpenAI's answer to Anthropic's new Claude Mythos, which Anthropic claims is so "strikingly capable" that it is limiting its use to select customers because of its apparent ability to surpass human cybersecurity experts in finding or exploiting computer vulnerabilities. While most people can't use Mythos, Anthropic on Thursday also released Opus 4.7, describing it as its most powerful "generally available" model.

Friar, the former CEO of neighborhood social platform Nextdoor, said business customers accounted for about 20% of OpenAI's revenue when she was hired as chief financial officer in 2024. She said it's now 40% and expected to account for half of OpenAI's sales by the end of the year. It's a sharp turnaround from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-shuttered Sora partnership with Disney, launching a plan to sell ads on ChatGPT and floating the idea of letting ChatGPT engage in erotica with paid adult users. Altman said on the "Mostly Human" podcast earlier this month that a sharper focus was needed -- and Friar agrees.
"Tech companies, when they're growing, it's just this natural thing that happens. There's so many cool things you could do," she said, adding that companies can end up doing "really badly" if they do too many things, while "great companies are very good at, in a reasonable period of time, kind of doing that winnowing down and refocusing and it's super painful."

Signaling that shift was the hiring three months ago of Slack CEO Denise Dresser as OpenAI's first chief revenue officer. Dresser said in a recent AP interview that she has been laser-focused on meeting with corporate leaders and positioning OpenAI as the go-to platform for workplaces employing AI agents to automate a variety of computer-based job tasks. "It's really clear to me that companies are past the experimentation phase and they're into using AI to do real work," Dresser said. "Leaders at companies are recognizing that AI is probably the most consequential shift of their lifetime."

But those leaders also have a choice, namely Anthropic's Claude, which has become widely used by software professionals. Founded in 2021 by a group of ex-OpenAI leaders who said they wanted to prioritize AI safety, Anthropic has positioned itself as the more responsible AI vendor. The distinction drew attention when President Donald Trump's administration punished the startup after a contract dispute over AI use in the military, and Altman used the opportunity to cement OpenAI's own deal with the Pentagon. Consumer interest in Anthropic surged, and the company said its annualized revenues hit $30 billion, a higher number than OpenAI has reported, though the two measure revenue differently. Friar and Dresser declined to reveal OpenAI's latest sales, but both have suggested that Anthropic's number is inflated because it doesn't account for revenue the company must share with cloud computing providers Amazon and Google.

Even so, it remains a tight competition that's also tied to the health of the stock market and the future of the economy. "They're likely quite close," said Luke Emberson, a researcher at the nonprofit institute Epoch AI. "Certainly the trends show Anthropic is growing much faster than OpenAI. If that continues, they're likely to cross soon."

The urgency led Dresser to send a memo to OpenAI employees on Sunday, first reported by The Verge, that asserted Anthropic's coding focus "gave them an early wedge" but expressed confidence that OpenAI has the "real structural advantage" as AI usage expands beyond software developers and OpenAI builds enough computing capacity to operate its AI systems. "Their story is built on fear, restriction, and the idea that a small group of elites should control AI," Dresser's memo said of Anthropic. "Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more."

But for skeptics of the financial viability of the AI industry, the trajectory of both money-losing companies is alarming as smaller startups become increasingly dependent on their AI tools. Anthropic has imposed rate limits on heavy users, forcing some to wait for hours to use Claude, and both companies have set up service tiers that reward premium payers, said author and AI critic Ed Zitron. "It's what I call the subprime AI crisis," Zitron said. "People built their lives and they built their businesses on top of these companies that, as they try and save money, will start turning the screws."
One thing both AI leaders and critics agree on is that this is an expensive technology, though whether it is worth the cost of the electricity-hungry computers behind it remains to be seen. "People will say, well, 'Once they go public, they're safe.' That's not true," Zitron said. "Public companies can and will die, especially ones that are dependent on $100 billion to $200 billion every year or so, just to keep breathing."

Anthropic released Claude Opus 4.7 for broad use while keeping the stronger Mythos Preview under limited access. The new model improves coding, agent work, and high-resolution vision, but the larger test is whether Anthropic can block risky cyber use without cutting off legitimate defenders. If that system works, Mythos-class access gets closer. If it fails, the airlock closes.

Anthropic released Claude Opus 4.7 on Thursday as its most powerful generally available model, while keeping the stronger Claude Mythos Preview behind limited access. The company says Opus 4.7 improves software engineering, high-resolution vision, instruction following, and long-running professional work, but remains below Mythos on the risk tests that matter for cyber deployment. Anthropic is using the public launch to test new safeguards that block prohibited and high-risk cybersecurity requests before it tries to widen access to Mythos-class systems.

That is the real story. Opus 4.7 is not only a better coding model. It is the airlock between the commercial AI market and a class of models Anthropic now treats as too capable to release normally. The model lets Anthropic sell confidence to developers while managing institutional anxiety over cyber misuse, model autonomy, and public trust. The company is asking customers to step into that airlock with it.

Anthropic's launch post says Opus 4.7 is generally available across Claude products, the API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. It also says the model is less broadly capable than Claude Mythos Preview. That distinction does real work. The Opus 4.7 system card says the model lands between Opus 4.6 and Mythos Preview, and does not advance Anthropic's capability frontier because Mythos already scores higher on every relevant axis Anthropic measured.

In other words, Anthropic has separated the product frontier from the risk frontier. For customers, Opus 4.7 is the model they can actually use. For Anthropic, Mythos remains the model that explains why distribution has changed. That split makes Opus 4.7 a commercial compromise. The company can claim a new public high-end model without treating the launch like an uncontrolled safety event. It can also gather live evidence about cyber filters, verification programs, and user behavior before deciding how far Mythos-like models can move beyond selected cyber defenders and infrastructure partners.

CNBC's launch coverage framed the release the same way: Anthropic is offering a stronger public model while holding back the more capable security-focused system. That framing matters because it turns a model launch into a governance trial. The Hacker News reaction caught the same tension more bluntly. One commenter wrote that the system card read "more like an advertisement for Mythos." Another called it "a 272 page report." That is not just snark. It shows how easily a safety document can also become product positioning for the model users cannot get.

The emotion inside that trial is caution. Anthropic wants the market to feel acceleration. It wants regulators, security officials, and enterprise buyers to see restraint.

The strongest business case for Opus 4.7 sits in software engineering. Anthropic reports 87.6% on SWE-bench Verified, 64.3% on SWE-bench Pro, 69.4% on Terminal-Bench 2.0, and 77.3% on MCP-Atlas. It also cites a 64.4% result on Finance Agent and a leading third-party GDPval-AA result against GPT-5.4 xhigh. Those numbers point in one direction: less babysitting for harder work.
Early customer quotes in the launch material make the same argument from different angles. GitHub said its 93-task benchmark saw a 13% lift over Opus 4.6. Cursor said Opus 4.7 cleared 70% on CursorBench versus 58% for Opus 4.6. Notion said complex workflows improved 14% with fewer tokens and about one-third of the tool errors. The most useful reactions were not the loudest ones. Hex said the model "correctly reports when data is missing." Genspark pointed to "loop resistance, consistency, and graceful error recovery." That is the commercial pitch in plainer language: fewer fake answers, fewer dead loops, fewer agent runs that need a human rescue. Treat those quotes as customer evidence, not neutral measurement. They still show where Anthropic expects money to move: code review, long-running agent work, document reasoning, finance research, dashboards, and interfaces.

The benchmark caveat is just as important as the gains. Agent scores depend on harnesses, time limits, retries, tool access, and scaffolding. Anthropic notes that Terminal-Bench comparisons used different setups across vendors, and that some older Opus 4.6 numbers changed after harness updates. SWE-bench also carries contamination risk because it draws from public repositories. So the right buyer question is not whether Opus 4.7 "wins." It is whether Opus 4.7 wins on your own work, with your tools, your permissions, your latency targets, and your review process. That is where the airlock metaphor becomes practical. Anthropic is not selling raw intelligence alone. It is selling a controlled passage from chat to action, with effort levels, task budgets, Claude Code review commands, and permission choices wrapped around the model.

The clearest technical change is visual input. The new ceiling is 2,576 pixels on the long edge and about 3.75 megapixels. Prior Claude models topped out at 1,568 pixels and 1.15 megapixels. That matters because dense screenshots often fail when small text disappears before reasoning starts. Axes blur. UI labels vanish. Menu items compress into noise. A model cannot reason about detail it never receives. With the higher ceiling, small labels survive, and Opus 4.7's reported vision gains follow from that. Anthropic reports large jumps on FigQA, CharXiv, ScreenSpot-Pro, and OSWorld-style computer-use tasks. The customer reaction points to the same gap. XBOW said Opus 4.7 scored 98.5% on its visual-acuity benchmark versus 54.5% for Opus 4.6, calling visual acuity the single pain point that had kept the company from using Opus for a class of autonomous penetration-testing work.

This matters beyond image chat. If you run agents against browsers, IDEs, spreadsheets, charts, patent drawings, slide decks, or dense enterprise dashboards, higher-resolution vision changes the work the model can see. It also changes the cost. Anthropic warns that larger images consume more tokens and says users who do not need extra detail should downsample. That is the hidden migration task. Better vision gives developers a quality lever. It also punishes sloppy input design.
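As a rough sketch of what that downsampling step might look like on the client side (the 2,576-pixel long-edge ceiling is the figure reported here; the function name and the use of the Pillow library are our own illustration, not anything Anthropic ships):

```python
# A minimal sketch: cap an image's long edge before sending it to a vision model,
# so token cost stays predictable. Assumes Pillow (pip install Pillow).
from PIL import Image

MAX_LONG_EDGE = 2576  # reported Opus 4.7 ceiling; larger inputs just burn tokens

def downsample_for_model(path: str, out_path: str, max_edge: int = MAX_LONG_EDGE) -> None:
    img = Image.open(path)
    long_edge = max(img.width, img.height)
    if long_edge > max_edge:
        scale = max_edge / long_edge
        new_size = (round(img.width * scale), round(img.height * scale))
        # LANCZOS resampling keeps small text legible better than cheaper filters
        img = img.resize(new_size, Image.LANCZOS)
    img.save(out_path)
```

If your dashboards genuinely need the detail, send them at full resolution and pay the tokens; if not, a cap like this is the cheap lever the launch notes point at.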
Anthropic says Opus 4.7 is not a cyber-focused model. That sentence almost reads like a disclaimer, because cyber is still the center of gravity. The system card says Opus 4.7 is roughly similar to Opus 4.6 on cyber capability and below Mythos Preview. It reports a near-saturated 96% pass@1 on Anthropic's 35-challenge Cybench subset, while also saying CTF-style tests may no longer tell the full story. On CyberGym, Opus 4.7 performed close to Opus 4.6 and below Mythos. On a Firefox exploitation evaluation, Opus 4.7 achieved partial control more often than Opus 4.6 but still struggled to produce reliable end-to-end exploit success.

The outside context is Mythos. The UK AI Security Institute reported that Mythos Preview completed a cyber range end to end in 3 of 10 attempts and averaged 22 of 32 steps. Anthropic says Opus 4.7 failed to fully solve a related range, although its best run completed steps estimated to take a human cyber expert about five hours. That does not make Opus 4.7 harmless. A model that can complete meaningful portions of an attack range can aid defenders and bad actors alike. But it does make Opus 4.7 the safer public test bed for a larger access question.

Anthropic's answer is a verified-access pattern. Prohibited and high-risk cyber requests are blocked by default. Security professionals with legitimate use cases can apply for the Cyber Verification Program. OpenAI has moved in a similar direction with trusted cyber access, which suggests the frontier labs are converging on identity, context, and user trust as part of the safety layer. That is a quiet but large shift. The safety system is no longer just the model refusing a bad prompt. It is the account, the customer, the use case, the logs, the tool permissions, and the exemption path.

Anthropic's safety results are mixed. The company says Opus 4.7 is broadly similar to Opus 4.6, with better honesty and stronger resistance to malicious prompt injection in some agentic settings. It also says Opus 4.7 performs worse on some harmlessness tests, especially illegal-substance harm-reduction prompts, where it gave overly detailed answers more often than Opus 4.6. That tradeoff has a product explanation. Opus 4.7 follows instructions more literally and refuses benign requests less often. Users like that. Enterprises like that. Developers like that. You may like it too when an agent stops dodging ordinary work. But a model that trusts framing more easily can also be easier to steer through a polished pretext. Anthropic's ambiguous-context testing found Opus 4.7 more willing than Opus 4.6 to accept a user's benign premise and provide specifics up front. In educational or defensive contexts, that helps. In weapons-adjacent or cyber contexts, it can hurt. This is the central safety problem of useful agents. Better compliance often feels like better alignment until the user is adversarial.

The model also brings migration risk. Anthropic says the updated tokenizer can map the same input to roughly 1.0 to 1.35 times more tokens, depending on content type. Higher effort levels can increase reasoning and output tokens, especially in later turns of agentic work. Pricing stays at $5 per million input tokens and $25 per million output tokens, but bills can still move if prompts, images, and long loops are left unchanged. That cost anxiety showed up fast in the Hacker News thread. One user asked whether a "20x plan is now really a 13x plan" if usage rises and subscription allotments do not. That is exactly the kind of practical confusion a benchmark table does not answer. If you run Opus 4.6 in production, a blind swap is weak engineering. Retune prompts. Reprice long tasks. Recheck refusal boundaries. Rebuild evaluations around the actions your agents can actually take.
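A back-of-the-envelope sketch shows why "pricing unchanged" and "bill unchanged" are different claims. The 1.35x factor and the $5/$25 list prices come from the material above; the workload volumes and the 1.2x output-growth factor are invented purely for illustration:

```python
# Rough cost model for the migration, using illustrative volumes.
INPUT_PRICE = 5 / 1_000_000    # $ per input token (unchanged list price)
OUTPUT_PRICE = 25 / 1_000_000  # $ per output token (unchanged list price)

def monthly_cost(input_tokens: int, output_tokens: int,
                 input_factor: float = 1.0, output_factor: float = 1.0) -> float:
    """Same prompts and tasks; only the token counts move."""
    return (input_tokens * input_factor * INPUT_PRICE
            + output_tokens * output_factor * OUTPUT_PRICE)

# Hypothetical agent workload: 2B input and 200M output tokens/month on Opus 4.6.
before = monthly_cost(2_000_000_000, 200_000_000)                      # $15,000
# Top of the reported tokenizer range on input, plus assumed 20% more output
# from higher effort levels on agentic turns.
after = monthly_cost(2_000_000_000, 200_000_000, 1.35, 1.2)            # $19,500
print(f"before: ${before:,.0f}  after: ${after:,.0f}  delta: ${after - before:,.0f}")
```

The exact numbers do not matter; the point is that an unchanged workload can plausibly cost roughly 30% more, which is what the 20x-plan complaint is really about.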
Anthropic's Responsible Scaling Policy once looked like a safety document. With Opus 4.7, it also looks like a distribution system. Models are no longer simply shipped or withheld. They are assigned to access tiers, wrapped in safeguards, routed through verification programs, and measured against risk thresholds that determine who gets what. Mythos sits behind the inner door. Opus 4.7 opens the outer door to the public market.

That gives Anthropic a credible story for regulators and customers. It can say it is not freezing progress, but it is not throwing the strongest model into general access either. It can collect real-world data from Opus 4.7's cyber filters before moving the next class of systems. The risk is that lab-run access tiers become private policy. Anthropic publishes a detailed system card, admits safety regressions, and cites outside evaluations where available. That is better than vague launch marketing. Still, many key facts remain inside the company: blocked request rates, appeal outcomes, cyber classifier misses, incident reports, customer exemptions, and the full Mythos risk profile. Transparency lowers suspicion. It does not replace independent audit.

Opus 4.7 therefore lands as a model with two jobs. It must beat Opus 4.6 at the work customers pay for. It must also prove that Anthropic can operate the airlock between public AI and higher-risk frontier capability. The first job will show up in coding queues, agent logs, finance workflows, and token bills within days. The second will take longer. Watch what Anthropic does when verified cyber users ask for more power, when benign users hit blocks, and when adversaries learn the shape of the new filters.

Anthropic has introduced Claude Opus 4.7, its latest and most powerful generally available model. The release is an upgrade from Opus 4.6, particularly for software engineering tasks, and excels in complex coding scenarios that previously required significant user assistance.

Enhanced Capabilities of Claude Opus 4.7
Opus 4.7 showcases improved skills in analyzing images and following user instructions. It also demonstrates enhanced creativity for generating slides and documents, according to Anthropic.

Context of Release
The launch of Opus 4.7 follows the announcement of Mythos Preview, a cybersecurity-centered model that Anthropic calls its most powerful. In comparison, Opus 4.7 is considered more limited in capability.

Performance Evaluation
An internal evaluation indicates that Opus 4.7 does not surpass the "capability frontier" defined by Mythos Preview; the more advanced model outperformed Opus 4.7 on all relevant metrics. Due to security concerns, Mythos Preview is currently available only to select partners, including:
* Nvidia
* JPMorgan Chase
* Google
* Apple
* Microsoft

Cybersecurity Features
Anthropic's statement highlights that the release of Opus 4.7 includes additional cybersecurity safeguards compared to its predecessor. The company said the data collected from these safeguards would inform future releases of Mythos-class models. Security professionals interested in using Opus 4.7 for cybersecurity initiatives, such as vulnerability research, may apply to the new Cyber Verification Program, which can relax some of the imposed safeguards for verified users.

Trial and Pricing
Early users of Opus 4.7 include major companies such as Intuit, Harvey, Replit, Cursor, Notion, Shopify, Vercel, and Databricks. Pricing remains unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens.

With Opus 4.7, Anthropic continues its push toward more refined AI models while emphasizing security and user support.

Worse: Anthropic is using Persona, an ID checker that rings alarm bells for the paranoids on Reddit

Anthropic may check your ID before letting you access certain Claude features, and the verification vendor it has picked is the same outfit that sparked controversy when Discord tested similar checks.

Anthropic quietly updated its support page on identity verification for Claude users this week to indicate that it's rolling the process out on a case-by-case basis. According to the help page, the company is introducing identity verification for "a few use cases," and users "might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures." In short, expect to be suddenly asked for verification at any time, for pretty much any reason Anthropic can come up with. "Identity verification helps us prevent abuse, enforce our usage policies, and comply with legal obligations," the company said in its new support language.

To further assuage user fears over the privacy of their data, Anthropic notes that it won't use any identity data to train its models, will only collect "the minimum information required to verify your identity," and won't share identity data with anyone other than Persona and Anthropic itself, except where legally required to respond to valid legal process.

You may recognize the name Persona Identities if you follow privacy news. Discord previously chose Persona as its age verification partner when the social discussion platform announced plans to enact a verification system similar to Anthropic's. But a security researcher reported exposure of Persona's front end on a government server, then speculated that this was part of a broader government surveillance scheme. Persona convincingly denied those allegations in discussions with The Register, but the uproar was enough for Discord to delay its plans to implement age checks. Discord later dropped Persona entirely, for ostensibly unrelated reasons.

This time around, Reddit users were quick to voice displeasure with Persona's involvement in Anthropic's identity verification plans, with some saying they planned to cancel their subscriptions. Others pointed to a February blog post by an individual who dug into Persona after finding out it was LinkedIn's identity verification partner. As that post pointed out, Persona lists a number of subprocessors that help it with various parts of its identity verification process, including AWS, Confluent, Google, OpenAI, Stripe, Twilio, and even, potentially, Anthropic, among others.

Anthropic claims on the help page that Persona is the party collecting selfie images and snapshots of identity documents for verification, and that Anthropic exercises tight controls over how Persona handles that data and what it can do with it. "We set the rules for how it's used and how long it's kept," Anthropic states. "Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud." Anthropic also made multiple mentions of being able to set its own retention period for the data of Claude users processed by Persona, but failed to state what that period is.

The larger point? When new information is gathered, it often passes through a whole chain of providers. If any one of those providers has sneaky intentions or lax data security practices, that information may end up in hands you never expected it to.
When all you maybe wanted to do was write some new code faster or ask a chatbot for relationship advice.

Perplexity today released a new expansion of Perplexity Computer for the Mac called Personal Computer. This brings the multi-modal orchestration capabilities of Computer to, well, a computer, where it can work with your files, apps, connectors, and the web. "Personal Computer makes Perplexity Computer a more personal orchestrator, elegantly hybridizing the local and server environments for maximum security and productivity," Perplexity explains. "AI changes how we think about the computer."

Perplexity announced Perplexity Computer back in February, describing it as "the next evolution of AI." The idea is that it integrates previous interfaces, like chat and agents, into a single system, a "general-purpose digital worker," that can execute entire workflows by running multiple asynchronous tasks. AI, Perplexity said at the time, is now the computer.

And now it's on the computer, or at least a computer: Perplexity Personal Computer for Mac takes Perplexity Computer local so it can integrate with your files, apps, and other tools. And you can initiate tasks from your phone, similar to how Claude Cowork for Windows/Mac can work with Dispatch. "You can ask Personal Computer to read your to-do list," Perplexity explains. "In fact, you can ask it to DO your to-do list. In Notes, just press both CMD keys to activate Personal Computer, and ask. Computer will read your Notes to-do list, reason how to accomplish each task, and work across all of your local files, iMessage, email, connected apps, and the open web to get it done."

Personal Computer can also organize messy folders of files into project sub-folders and compare local files against web-based information, and you can interact with it using your voice. It uses a secure sandbox for files and undertakes auditable and reversible actions. Personal Computer for Mac is available now to Perplexity Max subscribers, and Perplexity will bring it to other tiers soon while prioritizing users on the waitlist.

To fully understand the ongoing slugfest between banks and retailers, you have to go back to May 2024. But first, an explanation of interchange fees. Each time a shopper swipes their credit or debit card, it sets off a complicated string of payments between banks. The retailer's bank pays an "interchange fee," typically around 1% to 2% of the transaction cost, to the consumer's bank. The fees include both a set amount and a percentage of the transaction, but the credit card companies, namely Visa and Mastercard, control how they're calculated.
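As a concrete illustration of that structure (the 1% to 2% range is from the article; the flat 30-cent component and the 1.5% sample rate are placeholder figures, since the real schedules set by Visa and Mastercard vary by card type and merchant category):

```python
# Toy interchange calculation: a flat component plus a percentage of the sale.
# Real fee schedules are set by the card networks and vary widely.

def interchange_fee(amount: float, pct: float = 0.015, flat: float = 0.30) -> float:
    """Fee the retailer's bank pays the consumer's bank on one transaction."""
    return flat + amount * pct

# A $100 swipe at an assumed 1.5% + $0.30 schedule:
fee = interchange_fee(100.00)   # 0.30 + 1.50 = $1.80
print(f"${fee:.2f}")            # the retailer nets $98.20 before other processing costs
```

Multiply that $1.80 across billions of swipes a year and it is easy to see why retailers and banks are fighting over who controls the formula.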

Last August, U.S. Navy officials carrying out a test of unmanned vessels realized they had hit a single point of failure: Starlink. A global outage across Elon Musk's satellite network affecting millions of Starlink users had left two dozen unmanned surface vessels bobbing off the California coast, disrupting communications and halting operations for almost an hour. The incident, which involved drones intended to bolster U.S. military options in a conflict with China, was one of several Navy test disruptions linked to SpaceX's Starlink that left operators unable to connect with autonomous boats, according to internal Navy documents reviewed by Reuters and a person familiar with the matter.

As SpaceX rockets toward a $2 trillion public offering this summer -- expected to be the largest ever -- the company has secured its position as the world's most valuable space company in part by being indispensable to the U.S. government, with an array of technologies spanning satellite communications to space launches and military AI. Starlink, in particular, has proved key to crucial programs -- from drones to missile tracking -- with a low-earth orbit constellation of close to 10,000 satellites, a scale that provides the military with a network resilient against potential adversary attacks. But the Navy's mishaps with Starlink in its autonomous drone program, which have not been previously reported, highlight the challenges of the U.S. military's growing reliance on SpaceX and the risks it brings to the Pentagon.

"If there was no Starlink, the U.S. government wouldn't have access to a global constellation of low earth orbit communications," said Clayton Swope, a deputy director of the Aerospace Security Project at the Center for Strategic and International Studies. The Pentagon did not respond to questions about the drone test or SpaceX's work with the Navy. The Pentagon's chief information officer, Kirsten Davies, said the "Department leverages multiple, robust, resilient systems for its broad network." The Navy and SpaceX did not respond to requests for comment.

Despite facing growing competition from Amazon.com, which announced an $11.6 billion agreement this week to acquire satellite maker Globalstar, SpaceX remains far ahead in low-earth orbit communications. Beyond drones, SpaceX has cemented a near-monopoly on space launches and provides satellite communications with Starlink and its national security-focused constellation, Starshield, generating billions of dollars for the company. Last month, the U.S. Space Force said it had reassigned its upcoming GPS launch to a SpaceX rocket for the fourth time, due to a glitch in the Vulcan rocket made by the Boeing and Lockheed Martin joint venture United Launch Alliance.

Democratic lawmakers have warned the Pentagon about the risks of its reliance on a single company led by the world's richest man to deliver crucial national security capabilities. More recently, the Defense Department's disagreements with and blacklisting of AI startup Anthropic quickly revealed how over-reliance on one AI vendor could create problems should that vendor be dropped. Reuters reported last year that Musk unexpectedly switched off Starlink access to Ukrainian troops as they sought to retake territory from Russia, denting allies' trust in the billionaire. In Taiwan, SpaceX faced criticism over concerns it was withholding satellite communications from U.S. service members based there, "possibly in breach of SpaceX's contractual obligations with the U.S. government," according to a 2024 letter sent by then-U.S. Representative Mike Gallagher to Musk, reported by Forbes at the time. SpaceX disputed the claim in a post on X. Reuters could not determine whether SpaceX has since provided Starlink service in Taiwan to U.S. service members. The Pentagon and SpaceX did not respond to questions about Taiwan. "As a matter of operational security, we do not comment on or discuss plans, operations capabilities or effects," an official said in a statement.

SpaceX's Starlink broadband has been crucial to the Pentagon's drone program, providing connections to small unmanned maritime vessels that look like speedboats without seats, including those made by Maryland-based BlackSea and Austin, Texas-based Saronic. In April 2025, during a series of Navy tests in California involving unmanned boats and flying drones, officials reported that Starlink struggled to provide a solid network connection due to the high data usage needed to control multiple systems, according to a Navy safety report of the tests reviewed by Reuters. "Starlink reliance exposed limitations under multiple-vehicle load," the report stated. The report also faulted issues linked to radios provided by Silvus and a network system provided by Viasat. In the weeks leading up to the global Starlink outage in August, another series of Navy tests was disrupted by intermittent connection issues with the Starlink network, Navy documents reviewed by Reuters show. The causes of the network losses were not immediately clear.

Despite the setbacks, the upside of Starlink -- a cheap and commercially available service -- outweighs the risk of a potential outage disrupting future military operations, said Bryan Clark, an autonomous warfare expert at the Hudson Institute. "You accept those vulnerabilities because of the benefits you get from the ubiquity it provides," he said.

Claude Opus 4.7 shows visible chain-of-thought and unusually high token usage.

Anthropic shipped Claude Opus 4.7 today, calling it the company's most capable Opus model yet. We tested it, and the marketing lines up with the results. "Our latest model, Claude Opus 4.7, is now generally available," the company said in its official announcement. "Users report being able to hand off their hardest coding work -- the kind that previously needed close supervision -- to Opus 4.7 with confidence."

The model arrives on the heels of weeks of user complaints about Opus 4.6 allegedly losing its edge. Developers across GitHub, Reddit, and X documented what they called "AI shrinkflation" -- the feeling that the model they'd been paying for had quietly gotten worse. As we reported yesterday, Anthropic was already preparing 4.7 while sitting on something far more powerful that it can't release publicly: Claude Mythos. When the announcement dropped this morning, X users who had been loudest about 4.6's degradation were quick to reply with sarcasm: Opus 4.7, some joked, felt like "early Opus 4.6" -- the version people actually liked, before they believed Anthropic quietly turned the dials down. Anthropic, of course, has denied ever degrading model weights to manage compute demand.

Benchmarks back up Anthropic's claims. On SWE-bench Multilingual, a benchmark that measures coding skills, Opus 4.7 scored 80.5% against 4.6's 77.8%. On GDPVal-AA, a third-party evaluation of economically valuable knowledge work across finance and legal domains, 4.7 scored 1,753 Elo against GPT-5.4's 1,674 -- a clear margin over the closest competitor. Document reasoning via OfficeQA Pro showed the starkest jump: 80.6% for 4.7 versus 57.1% for 4.6, with GPT-5.4 and Gemini 3.1 Pro trailing at 51.1% and 42.9% respectively. Long-term coherence on Vending-Bench 2, a benchmark that measures how well models sustain long-context reasoning tasks like running a vending business, clocked in at a money balance of $10,937 versus $8,018 for 4.6 -- a proxy for how well the model sustains useful behavior over long autonomous runs.

Cybersecurity is the one area where Anthropic deliberately held back. Opus 4.7 launches with automated safeguards that detect and block prohibited or high-risk cybersecurity requests. Anthropic confirmed it "experimented with efforts to differentially reduce" 4.7's cyber capabilities during training. Security professionals can apply to a new Cyber Verification Program for access to those capabilities. This is the company's test run for the safeguards it will eventually need to deploy with Mythos-class models at scale.

Opus 4.7 is the most powerful model publicly available. Mythos Preview, Anthropic's true frontier model, remains restricted to vetted security firms. As the UK's AI Security Institute evaluated last week, Mythos was the first AI to complete "The Last Ones," a 32-step corporate network attack simulation that typically takes human red teams 20 hours. Opus 4.7 is not that. But it's the public-facing model that Anthropic will use to learn how those safety guardrails hold up in the wild before it dares release anything scarier.

On the token side, Opus 4.7 uses an updated tokenizer that can map the same input to roughly 1.0x-1.35x more tokens depending on content type. The model also reasons more at higher effort levels, particularly on later turns in agentic workflows. Anthropic published a migration guide for developers planning to upgrade from 4.6.
We ran our own test -- the same game-building prompt we've used to evaluate every major model release. Opus 4.7 produced the best result we've ever gotten from any model: the most visually polished game, the most genuinely challenging difficulty curve, the best mechanics, and the most creative win and loss screens. It appeared to generate levels procedurally, and none of them felt impossible -- a balance that has tripped up other models repeatedly. You can test the game here.

It wasn't zero-shot. Opus 4.6 had cleared that same test without any fixes; Opus 4.7 needed one round of bug fixes. That could be bad luck -- a single iteration is a thin sample -- but it's worth noting. What struck us more was how the model handled that round: it spotted additional bugs on its own, without being guided toward them. Opus 4.6 typically waited to be told where to look.

Xiaomi MiMo v2 Pro had held our best result until now, and unlike Opus 4.7 it produced a working game without needing a round of bug fixes. Some may argue it was more visually pleasing and had a soundtrack, which was an advantage, but its game logic and physics fell short of Opus 4.7's after that single round of fixes. Xiaomi's model also produces these results at a fraction of the cost charged by Anthropic, which could be a major consideration for serious projects.

The chain-of-thought behavior was different too. Unlike 4.6, which tucked its reasoning into a separate thinking box (meaning it was not part of the final answer), Opus 4.7 surfaced its chain of thought as part of the main text output. The reasoning was visible and traceable, not hidden behind a UI abstraction -- a plus for those who value transparency. Whether Anthropic will keep that behavior or eventually collapse it into a hidden block again is unclear.

The token usage was unlike anything we'd seen before. For the first time in our testing, a single session depleted our entire token quota. Watching the model work, we saw it complete a full draft -- then write what appeared to be the entire game again from scratch under the label "Rewrite Emerge with bug fixes and improvements," followed by a second pass labeled "Create a rewritten Emerge with bug fixes and improvements." If you're into serious coding, that means you'll be forced to either upgrade your plan, pay a lot in API tokens, or wait a long time until Anthropic resets your usage quotas -- or just use a comparable model that charges a lot less. Opus 4.6 had never done this. It is, however, consistent with what Anthropic warns in the migration guide: more output tokens, especially on agentic tasks at higher effort levels.

Opus 4.7 is available today at Claude.ai, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Pricing is unchanged from 4.6: $5 per million input tokens, $25 per million output tokens. Developers can access it via the string claude-opus-4-7.
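For developers wiring that in, a minimal call might look like the sketch below. The model string claude-opus-4-7 is the one given above; the rest assumes the current shape of Anthropic's Python SDK and Messages API, so treat it as a sketch and check the migration guide before relying on it:

```python
# Minimal sketch: send one prompt to Opus 4.7 via Anthropic's Python SDK.
# pip install anthropic; expects ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-7",   # model string from the release notes
    max_tokens=4096,           # cap output; 4.7 tends to produce more tokens
    messages=[
        {"role": "user", "content": "Review this function for bugs: ..."},
    ],
)
print(response.content[0].text)
```

Given the tokenizer change and the rewrite-happy behavior we saw, it is worth logging the usage counts the SDK returns on each response rather than assuming 4.6-era token budgets.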

Anthropic's newest AI tool could spark chaos in markets, but not for the reasons investors have become accustomed to lately. The AI giant -- which has rolled out new tools and updates this year that have upended parts of the market -- recently developed a new model aimed at improving cybersecurity. But the model risks exploiting a key vulnerability in the financial sector, introducing risks ranging from widespread identity theft to the destabilization of the financial system, the American Securities Association said.

In a public letter to the Treasury Secretary on Thursday, the trade group flagged concerns about Claude Mythos, the "general-purpose" AI model Anthropic announced in early April. The model -- which falls under Project Glasswing, the company's broader cybersecurity initiative -- is able to locate "thousands of high-severity vulnerabilities" in code across "every major operating system and web browser," the company said on its website. If wielded by bad actors, the ASA speculated, the tool could be used to hack into the Securities and Exchange Commission's Consolidated Audit Trail, a centralized database that contains investors' private information.

The group's letter comes about a week after reports said that Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened major US bank CEOs for an urgent meeting to flag the cyber risks posed by Mythos. "The subject matter of this meeting confirms what ASA has warned about for years: the US Securities and Exchange Commission's (SEC) Consolidated Audit Trail (CAT) is a significant cybersecurity vulnerability waiting to be exploited. This is no longer a hypothetical. The threat is here, it is identified, and it has a name," the letter said, referring to Mythos.

The ASA has long opposed the use of the Consolidated Audit Trail, citing data privacy concerns. The group outlined six specific risks it believes Mythos could pose to investors, along with actions regulators could take, including suspending CAT and getting rid of the platform's collected data.

Anthropic, which described Mythos as a work in progress on its website, also noted the potential consequences of the technology if it were used by bad actors. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe," it said on its website.

Anthropic and the US Treasury did not immediately respond to a request for comment from Business Insider.
Anthropic recently added "identity verification" to its safeguards, requiring some users to provide a passport, driver's license, or government ID, along with a live selfie. The company is rolling it out for "a few use cases," according to its Help Center.

Anthropic says it's the "data controller," setting the rules for how ID data is used and how long it is kept. But Persona Identities, an ID verification startup, will collect and store the user information. Persona is contractually obligated to employ user data "only to provide and support verification and to improve their ability to prevent fraud," Anthropic said.

So why is Anthropic asking some Claude users to prove who they are? "This applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior, which violates our usage policy," an Anthropic spokesperson wrote to Business Insider. If Anthropic deems that the activity violates its usage policy, the Claude user's account could be banned. Anthropic's help page lists several potential reasons an account might be banned after completing ID verification, and the company offers an appeals form that can be filled out if a user feels their account has been wrongfully banned.

Claude users on X have already started noticing the requests for an ID. One user posted a screenshot of the request in Claude, which asked for a "quick identity check," said it would take only two minutes, and required an ID and mobile camera access. Another screenshot posted online shows what it looks like once the process is completed: "Thank you for verifying your identity," it reads, accompanied by a celebratory graphic.

The backlash on X was swift. "Anthropic making unexplainable decisions," one user wrote. "We are living in 1984," another wrote.

In its Help Center, Anthropic also included a list of things it was not doing. It was not training its models on the data from ID verifications, it wrote, and it wasn't sharing ID data with anyone beyond Anthropic and Persona, except where legally required. "We are not collecting more than we need," Anthropic wrote. "We ask for the minimum information required to verify your identity."

On April 15, the Defense Advanced Research Projects Agency (DARPA) issued an invitation to a Proposers Day for Disruption through Intelligent Strategies, Counter Options, and Resilient Defenses (DISCORD). DARPA will host the Proposers Day in support of DARPA-PS-26-27, the DISCORD program, on 7 May 2026 at the SPA Arlington Research Collaboration Center (SPARCC), 4075 Wilson Blvd., Arlington, Virginia 22203, from 9:30 AM to 5:00 PM ET. The purpose of the conference is to provide information on the DISCORD program; promote additional discussion on this topic; address questions from potential proposers; and provide a forum for potential proposers to present their capabilities for teaming opportunities.

The objective of the DISCORD program is to develop an Artificial Intelligence (AI)-native tactics engine that can incorporate live sensor data and high-fidelity simulation to rapidly generate tactics that continuously adapt to changes in the operational environment. The goal is to provide commanders with a portfolio of diverse strategies that can pivot dynamically in response to evolving conditions. The DARPA DISCORD effort implements Ender's Foundry, one of several warfighting Pace-Setting Projects (PSPs) established by the Department of War. Ender's Foundry's intent is to accelerate AI-enabled simulation capabilities to ensure we stay ahead of AI-enabled adversaries.

Review the DARPA DISCORD Proposers Day announcement. Source: SAM

By JERRY NOWICKI/Capitol News Illinois

"Credit cards may not work for sales tax or tips starting July 1." By now, you've heard that claim, but whether it's true depends on whom you ask.

The ads -- funded by the Electronic Payments Coalition of banks, credit unions and card companies -- argue that Illinois lawmakers must repeal the state's first-in-the-nation Interchange Fee Prohibition Act, slated to take effect July 1. That law prohibits financial institutions from charging "swipe," or interchange, fees on the tax and tip portions of consumer bills and bans them from making up the fees elsewhere. If it's not repealed? "Credit card chaos" may ensue, the ads warn.

While the financial institutions are quick to cite a list of things that could hypothetically happen if the law isn't repealed, it's harder to pin down what's being done, and by whom, to comply with the law two years after it was signed. "The global payment system is not set up to where any one party to a transaction can make this happen on their own," Ashley Sharp of the Illinois Credit Union Association said at a Capitol news conference Wednesday. "There are multiple parties to every electronic transaction."

The financial institutions are adamant that the global payment system as it exists today can't discern the difference between tax, tips and total, and that it would need to be retooled at a heavy cost to banks, card companies, merchants, point-of-sale companies and more. Instead of complying, they say, the card companies could decide to stop serving Illinois or drastically alter the way consumers interact with merchants at the point of sale.

An alternate reality

But as with all matters in Springfield, there's another big-moneyed and powerful group on the other side of the issue. The Illinois Retail Merchants Association says the credit card companies already track all the information they need, and that it's a "complete fabrication" to say it would take more than a mere coding change to implement the state law.

Take your restaurant receipt, for example. "You have the subtotal, the sales tax, the tip, if it's applicable, and then the grand total, right? All they have to do is move their fee from the grand total to the subtotal," Rob Karr, president of IRMA, said.

Though the card networks operate in over 200 countries with as many different laws, the networks say the only information card processors ask for in any of them is the grand total. The receipt example, they say, erroneously conflates the point of sale with the actual processing of payments.

In short, the two sides present starkly different realities -- a muddying of the water that's not uncommon at the Capitol. But there is one concrete truth: The financial institutions have a lot to lose, and not just in Illinois. The tax and tip prohibition would shave approximately 10% off the revenue that banks and credit unions receive from retailers via interchange fees -- a transfer of wealth likely to number in the hundreds of millions. It would also create massive noncompliance fines.

And then there's the issue of precedent. The banks challenged the law but lost in court. Absent a successful appeal, the remaining battlefields would be other state legislatures. If the card companies implement Illinois' law, they'd be providing a blueprint for states across the nation to emulate -- driving potential revenue loss into the billions.
Thus far, Ben Jackson of the Illinois Bankers Association said, it hasn't opened the floodgates, although some 30 states are considering similar action. Still, it's no wonder that the Electronic Payments Coalition has pulled out all the stops in its seven-figure ad campaign to repeal the law.

How we got here

To fully understand the ongoing slugfest between banks and retailers, you have to go back to May 2024. But first, an explanation of interchange fees.

Each time a shopper swipes their credit or debit card, it sets off a complicated string of payments between banks. The retailer's bank pays an "interchange fee," typically around 1% to 2% of the transaction cost, to the consumer's bank. The fees include both a set amount and a percentage of the transaction, but the credit card companies, namely Visa and Mastercard, control how they're calculated.

The financial institutions say interchange fees help fund credit card reward programs and security upgrades and provide compensation for bearing the risk of fraud. The hit to interchange revenue, Jackson said, would inevitably lessen reward program offerings. Sharp said credit unions, as not-for-profit cooperatives, use the revenue to offer lower rates to customers.

But the fees have long drawn the ire of retailers and small businesses, which sometimes pass the costs directly to consumers via a surcharge on bills. It comes down to this: The retailers don't think they should have to pay a fee on the tax and tip portions of a transaction that they don't keep. And the financial institutions say that if they're handling those funds, they should be compensated for doing so via interchange fees.

As for the Illinois law's passage, it was, as the ads claim, tucked into the budget two years ago, giving little time for the bankers and their allies to mount an opposition campaign. Gov. JB Pritzker and lawmakers agreed to raise about $101 million in revenue to plug a budget hole by putting a $1,000 monthly cap on the "retailer's exemption," a tax break retailers claim for being the state's de facto sales tax collectors. But the retailers weren't going to take that lying down, and IRMA successfully lobbied for the long-sought tax and tip exemption.

After the law passed, the financial institutions quickly sued. To avoid uncertainty as the case played out, lawmakers delayed the measure's effective date from July 1 last year to the same date this year. U.S. District Judge Virginia Kendall ultimately determined in February that Illinois is within its rights to regulate the fees. She did, however, reject a portion of the law that prohibited banks from sharing certain data, which the credit unions say creates different rules for different institutions and further uncertainty. The case is now on appeal, and the legislative process is starting anew. This time, the financial institutions have also opened a second front in the court of public opinion.

The cost of compliance

Karr estimated the prohibition would bring in "north of $200 million" for retailers -- essentially letting them pocket that sum instead of transferring it to the banks. A study by the Electronic Payments Coalition pegged the number at $118 million, estimating that about 40% of the interchange windfall would go to the 40 largest retailers. Even so, Karr said, the largest retailers are subject to the $1,000 monthly retailer exemption cap that accompanied the swipe fee ban, while smaller retailers don't reach that mark.
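To make the dueling claims concrete, here is a minimal sketch of the per-bill fee arithmetic. It assumes, purely for illustration, a fee of 10 cents plus 1.75% of the fee base -- hypothetical round numbers consistent with the "set amount plus roughly 1% to 2%" structure described above, not actual Visa or Mastercard schedule values.

```python
# Illustrative sketch of the fee-base shift under the Interchange Fee
# Prohibition Act (IFPA). The rate and fixed fee are hypothetical round
# numbers in the 1%-2% range described above, not actual network values.

def interchange_fee(fee_base, pct_rate=0.0175, fixed_fee=0.10):
    """Interchange fee = set amount + percentage of the fee base."""
    return fixed_fee + pct_rate * fee_base

# A restaurant bill: subtotal, sales tax and tip.
subtotal, tax, tip = 100.00, 10.25, 20.00
grand_total = subtotal + tax + tip  # 130.25

fee_today = interchange_fee(grand_total)  # fee assessed on the grand total
fee_ifpa = interchange_fee(subtotal)      # tax and tip excluded from the base

print(f"fee on grand total:   ${fee_today:.2f}")             # $2.38
print(f"fee on subtotal only: ${fee_ifpa:.2f}")              # $1.85
print(f"kept by the retailer: ${fee_today - fee_ifpa:.2f}")  # $0.53
```

On Karr's telling, the entire change is moving the fee base from $130.25 to $100.00; on the networks' telling, distinguishing those two numbers at settlement, rather than on the printed receipt, is precisely what the global processing system was never built to do.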
Add in smaller retailers' cut of the reimbursed swipe fees, and it amounts to what Karr calls "the largest small business relief that Illinois has ever passed." But Jackson argued the cost of compliance could eat up any benefits for smaller retailers.

As for compliance, Kendall wrote in her February opinion that "It is an open question whether the transaction process could adapt to the impact of the IFPA in time." "The Interchange Fee Provision is indisputably disruptive, requiring additional investments, hires, and new procedures to replace the current process for authorizing and settling debit and credit card transactions," she wrote. The financial institutions argue it can't all be done by July 1. Kendall said the parties involved know what's required of them. "But those procedural changes are the product of an ecosystem built by Payment Card Networks and financial institutions to facilitate consumer transactions," she wrote. "And these entities understand the onus of IFPA compliance is on them."

Per the coalition, compliance "would require coordination across the industry and regulators worldwide," including with the International Organization for Standardization. It would also require more data collection, creating privacy concerns, the coalition says. Those global changes would require testing and certification of new equipment. Depending on their card companies or point-of-sale vendors, retailers may need to invest in new equipment, software and training.

Banks and credit unions may also have to add staff to process rebates under the law. It allows retailers or their processing companies to petition their financial institutions for reimbursement of fees charged on tax and tips within 180 days of a transaction. If financial institutions don't comply within 30 days, the law provides for civil penalties of $1,000 per transaction -- and hundreds of millions of these transactions happen annually.

So will that chaos come to fruition? Instead of complying, according to the coalition's literature, the card companies could simply stop processing cards altogether in Illinois. They could also stop processing the tax and tip portions of bills, or require two separate swipes: one for the subtotal and one for the tax and tip.

Such claims aren't uncommon in the legislature's annual adjournment push. Sports betting companies, for example, threatened to leave Illinois when the state raised its gambling taxes in the same budget cycle that yielded the interchange fee prohibition two years ago. Instead, they adapted, because Illinois has a lot of bettors -- and there are even more card users. Karr accused the coalition of ulterior motives in its use of hypothetical language. "There is no need for chaos," he said. "The only chaos is if the credit card companies impose it themselves on their consumers."

Ultimately, lawmakers will have to weigh how compelling the arguments are, if the courts don't intervene first. It's possible that the 7th Circuit appellate court -- or even the U.S. Supreme Court -- gives the banks a win. But oral arguments are slated for May 13, meaning the appellate court might not rule before the law takes effect.

Adding a new wrinkle on Wednesday, the federal Office of the Comptroller of the Currency, a bureau of the U.S. Treasury Department, appeared poised to issue an order preempting Illinois' law. It hadn't been published as of late Wednesday, making its impact unclear.
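The stakes behind the rebate provision are likewise a matter of simple arithmetic. In the sketch below, the annual transaction count is a hypothetical placeholder consistent with "hundreds of millions" and the failure rate is purely illustrative; only the $1,000 penalty comes from the law.

```python
# Rough, illustrative noncompliance exposure under the IFPA's rebate rules:
# $1,000 per transaction where a rebate isn't paid within 30 days.
PENALTY_PER_TRANSACTION = 1_000        # dollars, set by the law
annual_transactions = 300_000_000      # hypothetical "hundreds of millions"
failure_rate = 0.0001                  # 0.01% of rebates mishandled (illustrative)

exposure = annual_transactions * failure_rate * PENALTY_PER_TRANSACTION
print(f"${exposure:,.0f}")  # $30,000,000 at even a 0.01% failure rate
```

Even a vanishingly small error rate, in other words, translates into eight-figure penalty exposure -- one reason banks and credit unions say they may need new staff just to process rebates on time.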
"While the office has failed to explain their reasoning or allow public review, it's clear the goal is an end-run around the legal process after a judge recently upheld the law," Karr said. As for the legislative prospects, state Rep. Margaret Croke, D-Chicago, says she's seen enough to be concerned. The Democratic nominee for comptroller is sponsoring a bill to fully repeal Illinois' interchange fee prohibition. But as of last week, she said she wasn't planning to move it. Instead, she finds it more likely that lawmakers once again delay the law's implementation. "If this is a policy that the state of Illinois decides they're going to want to have, then we need to make sure we're doing it properly," she said. This story was originally published by Capitol News Illinois and distributed through a partnership with The Associated Press.
