The latest news and updates from companies in the WLTH portfolio.
The Pentagon, seen from the air in Washington. (Josh Roberts/Reuters)
Last August, U.S. Navy officials carrying out a test of unmanned vessels realized they had hit a single point of failure: Starlink. A global outage across Elon Musk's satellite network affecting millions of Starlink users had left two dozen unmanned surface vessels bobbing off the California coast, disrupting communications and halting operations for almost an hour. The incident, which involved drones intended to bolster U.S. military options in a conflict with China, was one of several Navy test disruptions linked to SpaceX's Starlink that left operators unable to connect with autonomous boats, according to internal Navy documents reviewed by Reuters and a person familiar with the matter. As SpaceX rockets toward a $2 trillion public offering this summer - expected to be the largest ever - the company has secured its position as the world's most valuable space company in part by being indispensable to the U.S. government with an array of technologies spanning satellite communications to space launches and military AI. Starlink, in particular, has proved key to crucial programs - from drones to missile tracking - with a low-earth orbit constellation of close to 10,000 satellites, a scale that provides the military with a network resilient against potential adversary attacks. But the Navy's mishaps with Starlink for its autonomous drone program, which have not been previously reported, highlight the challenges of the U.S. military's growing reliance on SpaceX and the risks it brings to the Pentagon. "If there was no Starlink, the U.S. government wouldn't have access to a global constellation of low earth orbit communications," said Clayton Swope, a deputy director of the Aerospace Security Project at the Center for Strategic and International Studies. The Pentagon did not respond to questions about the drone test or SpaceX's work with the Navy.
The Pentagon's chief information officer, Kirsten Davies, said the "Department leverages multiple, robust, resilient systems for its broad network." The Navy and SpaceX did not respond to requests for comment. Despite facing growing competition from Amazon.com, which announced an $11.6 billion agreement this week to acquire satellite maker Globalstar, SpaceX remains far ahead in low-earth orbit communications. Beyond drones, SpaceX has cemented a near-monopoly for space launches and provides satellite communications with Starlink and its national security-focused constellation, Starshield, generating billions of dollars for the company. Last month, U.S. Space Force said it had reassigned its upcoming GPS launch to a SpaceX rocket for the fourth time, due to a glitch in the Vulcan rocket made by the Boeing and Lockheed Martin joint venture United Launch Alliance.
WARNINGS ABOUT RELYING ON SPACEX
Democratic lawmakers have warned the Pentagon about the risks of its reliance on a single company led by the world's richest man to deliver crucial national security capabilities. More recently, the Defense Department's dispute with, and blacklisting of, AI startup Anthropic showed how quickly an over-reliance on one AI vendor can create problems should that vendor be dropped. Reuters reported last year that Musk unexpectedly switched off Starlink access to Ukrainian troops as they sought to retake territory from Russia, denting allies' trust in the billionaire. In Taiwan, SpaceX faced criticism over concerns it was withholding satellite communications from U.S. service members based there, "possibly in breach of SpaceX's contractual obligations with the U.S. government," according to a 2024 letter sent by then-U.S. Representative Mike Gallagher to Musk, reported by Forbes at the time. SpaceX disputed the claim in a post on X. Reuters could not determine whether SpaceX has since provided Starlink service in Taiwan to U.S. service members.
The Pentagon and SpaceX did not respond to questions about Taiwan. "As a matter of operational security, we do not comment on or discuss plans, operations capabilities or effects," an official said in a statement.
STARLINK 'EXPOSED LIMITATIONS'
SpaceX's Starlink broadband has been crucial to the Pentagon's drone program, providing connection to small unmanned maritime vessels that look like speedboats without seats, and include those made by Maryland-based BlackSea and Austin, Texas-based Saronic. In April 2025, during a series of Navy tests in California involving unmanned boats and flying drones, officials reported that Starlink struggled to provide a solid network connection due to the high data usage needed to control multiple systems, according to a Navy safety report of the tests reviewed by Reuters. "Starlink reliance exposed limitations under multiple-vehicle load," the report stated. The report also faulted issues linked to radios provided by Silvus and a network system provided by Viasat. In the weeks leading up to the global Starlink outage in August, another series of Navy tests was disrupted by intermittent connection issues with the Starlink network, Navy documents reviewed by Reuters show. The causes of the network losses were not immediately clear. Despite the setbacks, the upside of Starlink - a cheap and commercially available service - outweighs the risk of a potential outage disrupting future military operations, said Bryan Clark, an autonomous warfare expert at the Hudson Institute. "You accept those vulnerabilities because of the benefits you get from the ubiquity it provides," he said.
Sales of Tesla Inc.'s Cybertruck have been propped up in recent months by Elon Musk's other companies, an unusual arrangement that further indicates the polarizing pickup is failing to appeal to everyday buyers. SpaceX, the Musk-led rocket and satellite maker, accounted for 1,279 -- or more than 18% -- of the 7,071 Cybertrucks registered in the US during the fourth quarter, according to registration data that S&P Global Mobility provided to Bloomberg News. The billionaire's other ventures acquired another 60 vehicles during those months. That means almost one in every five Cybertrucks registered during the period was delivered from one part of Musk's sprawling business empire to another. And the purchases, likely exceeding $100 million in value, have continued into this year. The figures reinforce the extent to which consumer demand is faltering only two years after Tesla began delivering the electric pickup. Without those sales to other Musk-run companies -- which included xAI, Boring Co. and Neuralink, in addition to SpaceX -- Cybertruck registrations in the fourth quarter would have fallen 51%. "Tesla is running out of buyers for the Cybertruck," said Sam Fiorani, vice president of global vehicle forecasting for advisory firm AutoForecast Solutions. Tesla, Musk, SpaceX, Boring and Neuralink didn't respond to requests for comment. SpaceX acquired xAI in February. Tesla is under increasing pressure to reverse slumping sales across its lineup as it faces the prospect of a third straight annual decline. Once the undisputed electric vehicle leader, the company was surpassed by China's BYD Co. as the world's top seller of EVs last year. Investors have largely overlooked Tesla's declining auto sales as Musk reorients the company around futuristic pursuits including robotaxis and humanoid robots. But those products are still a ways off from becoming tangible business lines, and shareholders' patience appears to be wearing thin.
Since hitting a record high in mid-December, Tesla's stock has lost a fifth of its value.
High Hopes
The Cybertruck debuted with great fanfare in late 2023, diversifying Tesla's lineup as a rugged bruiser of a vehicle to counter the sleek Model Y SUV and Model 3 sedan that account for the vast majority of the company's auto sales. Tesla was keen to compete in the lucrative US pickup market dominated by Ford Motor Co., General Motors Co. and Stellantis NV. Musk predicted before the launch that the company would be churning out 250,000 Cybertrucks annually by 2025. He's called it the best product Tesla has ever made. From the outset, however, there were red flags. The Cybertruck's angular design was divisive, and the attention-grabbing vehicle occasionally became the target of ridicule and vandalism when a backlash against Musk swelled last year. The truck was also more expensive than expected, with initial versions fetching more than $100,000, far more than the under-$40,000 starting price tag first touted in 2019. The first Cybertruck registrations by SpaceX began in October of last year, according to S&P Global Mobility data. The sales to Musk-run companies have continued into 2026, with another 158 in January and 67 in February. While the financial terms of the inter-company sales haven't been disclosed, the Cybertruck's current starting price of around $70,000 suggests that SpaceX, xAI, Boring and Neuralink have paid Tesla more than $100 million combined for the vehicles. It's not entirely clear what Musk's other companies are doing with the Cybertrucks, or why an artificial intelligence and social media company would acquire 50 of them. Photos and videos have circulated online showing long rows of idle Cybertrucks on SpaceX property in Texas. The lead engineer for the pickup posted on social media in October that SpaceX was replacing gas-powered support vehicles with the trucks. At least some are being used as security vehicles.
EV news outlet Electrek reported in December that SpaceX could ultimately buy about 2,000 Cybertrucks. While Tesla has given no indication that it would discontinue the Cybertruck, it's phasing out the slow-selling Model X SUV and Model S sedan, its two oldest vehicles. Musk has indicated the company may look to boost fleet sales to commercial customers in response to questions about Cybertruck's murky prospects. "There's obviously a market there for cargo delivery," he said in January during a Tesla earnings call. "There's a lot of cargo that needs to move locally within a city, and an autonomous Cybertruck could be very useful for that."
Pickup Letdown
The sales woes aren't entirely unique to Cybertruck: electric pickups have been a bust within the broadly stalled US EV market. Ford recently decided to convert its electric F-150 Lightning pickup to an extended-range hybrid vehicle. The Cybertruck was still the top-selling battery-powered truck in the US during the first quarter, despite a 45% drop, according to Cox Automotive data. Musk's companies have long been intertwined through financial investments, business agreements and sometimes even shared personnel. xAI uses Tesla Megapack batteries and has integrated its Grok chatbot into Tesla vehicles; Las Vegas conference-goers can ride in Teslas through a Boring-built tunnel; Tesla and SpaceX are collaborating on a planned chip production project. Still, it's unusual for an automaker to unload significant volumes of a single model to an affiliated business with the same CEO. Car manufacturers will sometimes offer new incentives, lower prices or lease vehicles to employees when a model isn't selling well. "It's a way of keeping the plant running when retail demand does not equal production," said Tom Libby, an automotive analyst at S&P Global Mobility.
The US government is preparing to make a version of Anthropic PBC's powerful new artificial intelligence model available to major federal agencies amid concerns that the tool could sharply increase cybersecurity risk, according to a memo reviewed by Bloomberg News. Gregory Barbaccia, federal chief information officer of the White House Office of Management and Budget, told officials at Cabinet departments in an email Tuesday that OMB is setting up protections that would allow their agencies to begin using the closely guarded AI tool, Mythos. The email doesn't say definitively that the various agencies will get access to Mythos, nor does it provide a timeline for when it might come or how they might use it. It tells top technology and cybersecurity chiefs to expect more information "in the coming weeks." Anthropic has only provided Mythos to a limited group of technology companies, financial firms and others, urging them to use it to assess their cybersecurity risk. The firm limited the release of Mythos amid concerns that hackers could weaponize its capabilities to steal data or sabotage victim networks. Before its limited release of Mythos, Anthropic briefed senior officials across the US government on the model's full capabilities, including both its offensive and defensive cyber applications, according to a company official who spoke on condition that they not be identified discussing the talks with government.
The talks included staff at the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation, among others, the company official said, and Anthropic has continued to work with government on security issues arising from the model. Barbaccia's message was sent as leaders from Washington to Wall Street are grappling with the possibility that the model could make it dramatically easier for hackers to find ways to break into sensitive computer systems in industry and government. "We're working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies," Barbaccia wrote in the email, which had the subject, "Mythos Model Access." A White House official said in an email that the government continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities. They didn't answer specific questions on the matter. Anthropic declined to comment. Neither Anthropic nor the government said what, if any, federal agencies have gotten early access to Mythos. Barbaccia's email went to officials with the Department of Defense, Department of Treasury, Department of Commerce, Department of Homeland Security, Department of Justice and Department of State, among several other agencies. The Treasury Department has been seeking access to Mythos in order to uncover its own software flaws, Bloomberg has reported. Within Anthropic, company leaders became worried the model could be a national security risk after testers were able to use Mythos to turn up the types of critical bugs that it would normally take the world's best hackers to uncover. These concerns prompted the company's limited release of the model. It's similarly set off alarms in various parts of the US government. 
Among officials focused on national defense, the introduction of Mythos has created profound uncertainty about how to evaluate cybersecurity risk, a person familiar with the matter previously told Bloomberg. Equipping an individual hacker with the model, or similar AI tools, would likely be a transformation equivalent to turning a conventional soldier into a special forces operator, the person said. On the day Anthropic publicly disclosed Mythos' existence, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened Wall Street leaders for a meeting in Washington to urge them to use the model to find weaknesses in their own systems.


The April 16 market saw the heaviest activity, with $130,166 in USDC traded. The order book was thin: moving the market 5 percentage points cost only $258. The largest single move was a 15-point spike, likely one large order pushing odds from 55% to 70%.
What to watch
The Claude 4.7 release on April 16 matches earlier bullish signals from Anthropic's announcements and documentation updates. With these markets now resolved, there's no remaining speculative value on this model's release timing. Trader attention shifts to Claude 5, which is unaffected by these resolutions. Watch Anthropic's official channels and statements from Dario Amodei for any signals on that timeline.
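The "thin order book" point above can be made concrete: the cost of moving a prediction market's displayed odds is the sum of all resting asks swept below the target price. The sketch below is purely illustrative; the function name and every price and size in the sample book are invented, not the actual April 16 market data.

```python
# Illustrative sketch: estimate the USDC cost of moving a prediction
# market's price by sweeping a (hypothetical) order book. All numbers
# here are invented for illustration only.

def cost_to_move(asks, target_price):
    """Sum the cost of buying every resting ask priced below target_price.

    asks: list of (price, size) tuples, sorted ascending by price,
          with price in dollars per share.
    Returns (total_cost, last_price_swept).
    """
    total = 0.0
    last_price = asks[0][0] if asks else 0.0
    for price, size in asks:
        if price >= target_price:
            break  # remaining liquidity sits at or above the target
        total += price * size
        last_price = price
    return total, last_price

# A thin book: only a few hundred shares rest between 55 and 60 cents,
# so a modest order moves the displayed odds several points.
book = [(0.55, 200), (0.57, 150), (0.59, 100), (0.62, 500)]
cost, new_price = cost_to_move(book, 0.60)
print(round(cost, 2), new_price)  # ~$254.50 sweeps the book to 59 cents
```

With depth like this, a single few-hundred-dollar order repricing the market by five points is exactly the behavior the trading data above describes.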

The same ChatGPT chatbot that gave OpenAI's chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is also now doing her most mundane tasks at work like summarizing her emails and Slack messages. Friar and other company executives are banking OpenAI's future on more of the latter as it shifts its focus to business-oriented products while shedding some of its consumer offerings as a pathway to profitability. OpenAI says it will introduce a new artificial intelligence model for "high-value professional work" as the company faces heightened competition with rival Anthropic in attracting corporate customers to adopt AI assistants in their workplaces. "You'll see a new model coming from us in short order. We feel very excited about it," Friar said in an interview with The Associated Press. OpenAI boasts of more than 900 million weekly users of its core ChatGPT product, and Friar said about 95% of them "don't pay anything" for the popular chatbot. But while all those interactions build habits and reliance, they also strain the costly computing resources needed to power the company's AI systems and highlight the need for big business customers to help pay the bills. OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both lose more money than they make, putting the privately-owned San Francisco-based AI research laboratories in a fierce competition to generate more revenue as they race toward becoming publicly traded on Wall Street. A push to improve performance and sales of OpenAI's business-oriented products -- already Anthropic's bread and butter -- has driven OpenAI to abandon some consumer initiatives, like the AI video generator app Sora. "I think it was a little heartbreaking, but we're like, OK, it's not the main event right now," Friar said. "We need to make sure that our new model that's coming has enough compute." 
OpenAI says the new model, codenamed Spud, is its "smartest model yet," offering "stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production." It will be part of OpenAI's answer to Anthropic's new Claude Mythos, which Anthropic claims is so "strikingly capable" that it is limiting its use to select customers because of its apparent ability to surpass human cybersecurity experts in finding or exploiting computer vulnerabilities. While most people can't use Mythos, Anthropic also on Thursday released Opus 4.7, describing it as its most powerful "generally available" model. Friar, the former CEO of neighborhood social platform Nextdoor, said business customers accounted for about 20% of OpenAI's revenue when she was hired in 2024 as chief financial officer. She said it's now 40% and expected to account for half of OpenAI's sales by the end of the year. It's a sharp turnaround from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-shuttered Sora partnership with Disney, launching a plan to sell ads on ChatGPT and floating the idea of letting ChatGPT engage in erotica with paid adult users. Altman said on the "Mostly Human" podcast earlier this month that a sharper focus was needed -- and Friar agrees. "Tech companies, when they're growing, it's just this natural thing that happens. There's so many cool things you could do," she said, adding that companies can end up doing "really badly" if they do too many things, while "great companies are very good at, in a reasonable period of time, kind of doing that winnowing down and refocusing and it's super painful." Signaling that shift was the hiring three months ago of Slack CEO Denise Dresser to be OpenAI's first chief revenue officer.
Dresser said in a recent AP interview that she has been laser-focused on meeting with corporate leaders and positioning OpenAI as the go-to platform for workplaces employing AI agents to automate a variety of computer-based job tasks. "It's really clear to me that companies are past the experimentation phase and they're into using AI to do real work," Dresser said. "Leaders at companies are recognizing that AI is probably the most consequential shift of their lifetime." But those leaders also have a choice, namely Anthropic's Claude, which has become widely used by software professionals. Founded in 2021 by a group of ex-OpenAI leaders who said they wanted to prioritize AI safety, Anthropic has positioned itself as the more responsible AI vendor. The distinction drew attention when President Donald Trump's administration punished the startup after a contract dispute over AI use in the military, and Altman used the opportunity to cement OpenAI's own deal with the Pentagon. Consumer interest in Anthropic surged and the company said its annualized revenues hit $30 billion, a higher number than what OpenAI has reported, though they measure it differently. Friar and Dresser declined to reveal OpenAI's latest sales but both have suggested that Anthropic's number is inflated because it doesn't account for revenue it must share with cloud computing providers Amazon and Google. Even so, it remains a tight competition that's also tied to the health of the stock market and the future of the economy. "They're likely quite close," said Luke Emberson, a researcher at nonprofit institute Epoch AI. "Certainly the trends show Anthropic is growing much faster than OpenAI. If that continues, they're likely to cross soon."
The urgency led Dresser to send a memo to OpenAI employees on Sunday, first reported by The Verge, that asserted that Anthropic's coding focus "gave them an early wedge" but expressed confidence that OpenAI has the "real structural advantage" as AI usage expands beyond software developers and OpenAI builds enough computing capacity to operate its AI systems. "Their story is built on fear, restriction, and the idea that a small group of elites should control AI," Dresser's memo said of Anthropic. "Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more." But for skeptics of the financial viability of the AI industry, the trajectory of both money-losing companies is alarming as smaller startups increasingly become dependent on their AI tools. Anthropic has imposed rate limits on heavy users, forcing some to wait for hours to use Claude, and both companies have set up service tiers that reward premium payers, said author and AI critic Ed Zitron. "It's what I call the subprime AI crisis," Zitron said. "People built their lives and they built their businesses on top of these companies that, as they try and save money, will start turning the screws." One thing that both AI leaders and critics agree on is that it is an expensive technology, though whether it is worth the cost in electricity-hungry AI computers remains to be seen. "People will say, well, 'Once they go public, they're safe.' That's not true," Zitron said. "Public companies can and will die, especially ones that are dependent on $100 billion to $200 billion every year or so, just to keep breathing."

An Anthropic spokesperson wrote that ID verification will be used when it sees "potentially fraudulent or abusive behavior." Anthropic recently added "identity verification" to its safeguards, requiring some users to provide a passport, driver's license, or government ID, along with a live selfie. The company is rolling it out for "a few use cases," according to its Help Center. Anthropic says it's the "data controller," setting the rules for where ID data is used and how long it is kept. But Persona Identities, an ID verification startup, will collect and store the user information. Persona is contractually obligated to employ user data "only to provide and support verification and to improve their ability to prevent fraud," Anthropic said. So why is Anthropic asking some Claude users to prove who they are? "This applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior, which violates our usage policy," an Anthropic spokesperson wrote to Business Insider. If Anthropic deems that the activity violates its usage policy, the Claude user's account could be banned. Anthropic's help page lists several potential reasons an account might be banned after completing ID verification. Anthropic also offers an appeals form that can be filled out if a user feels their account has been wrongfully banned. Claude users on X have already started noticing the requests for an ID. One user posted a screenshot of the request in Claude, which asked for a "quick identity check." It wrote that the request would only take two minutes and required an ID and mobile camera access. Another screenshot posted online shows what it looks like once the process is completed. "Thank you for verifying your identity," it wrote, accompanied by a celebratory graphic. The backlash on X was swift. "Anthropic making unexplainable decisions," one user wrote. "We are living in 1984," another wrote.
In its Help Center, Anthropic also included a list of things it was not doing. Anthropic was not training its models on the data from ID verifications, it wrote. It also wrote that it wasn't sharing ID data with anyone beyond Anthropic and Persona, except where legally required. "We are not collecting more than we need," Anthropic wrote. "We ask for the minimum information required to verify your identity."

The same ChatGPT chatbot that gave OpenAI's chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is also now doing her most mundane tasks at work like summarizing her emails and Slack messages. Friar and other company executives are banking OpenAI's future on more of the latter as it shifts its focus to business-oriented products while shedding some of its consumer offerings as a pathway to profitability. OpenAI says it will introduce a new artificial intelligence model for "high-value professional work" as the company faces heightened competition with rival Anthropic in attracting corporate customers to adopt AI assistants in their workplaces. "You'll see a new model coming from us in short order. We feel very excited about it," Friar said in an interview with The Associated Press. OpenAI boasts of more than 900 million weekly users of its core ChatGPT product, and Friar said about 95% of them "don't pay anything" for the popular chatbot. But while all those interactions build habits and reliance, they also strain the costly computing resources needed to power the company's AI systems and highlight the need for big business customers to help pay the bills. OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both lose more money than they make, putting the privately-owned San Francisco-based AI research laboratories in a fierce competition to generate more revenue as they race toward becoming publicly traded on Wall Street. A push to improve performance and sales of OpenAI's business-oriented products -- already Anthropic's bread and butter -- has driven OpenAI to abandon some consumer initiatives, like the AI video generator app Sora. "I think it was a little heartbreaking, but we're like, OK, it's not the main event right now," Friar said. "We need to make sure that our new model that's coming has enough compute." 
OpenAI says the new model, codenamed Spud, is its "smartest model yet," offering "stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production." It will be part of OpenAI's answer to Anthropic's new Claude Mythos, which Anthropic claims is so "strikingly capable" that it is limiting its use to select customers because of its apparent ability to surpass human cybersecurity experts in finding or exploiting computer vulnerabilities. While most people can't use Mythos, Anthropic also on Thursday released Opus 4.7, describing it as its most powerful "generally available" model. Friar, the former CEO of neighborhood social platform Nextdoor, said business customers accounted for about 20% of OpenAI's revenue when she was hired in 2024 as chief financial officer. She said it's now 40% and expected to account for half of OpenAI's sales by the end of the year. It's a sharp turnaround from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-shuttered Sora partnership with Disney, launching a plan to sell ads on ChatGPT and floating the idea of letting ChatGPT engage in erotica with paid adult users. Altman said on the "Mostly Human" podcast earlier this month that a sharper focus was needed -- and Friar agrees. "Tech companies, when they're growing, it's just this natural thing that happens. There's so many cool things you could do," she said, adding that companies can end up doing "really badly" if they do too many things, while "great companies are very good at, in a reasonable period of time, kind of doing that winnowing down and refocusing and it's super painful." Signaling that shift was the hiring three months ago of Slack CEO Denise Dresser to be OpenAI's first chief revenue officer. 
Dresser said in a recent AP interview that she has been laser-focused on meeting with corporate leaders and positioning OpenAI as the go-to platform for workplaces employing AI agents to automate a variety of computer-based job tasks. "It's really clear to me that companies are past the experimentation phase and they're into using AI to do real work," Dresser said. "Leaders at companies are recognizing that AI is probably the most consequential shift of their lifetime." But those leaders also have a choice, namely Anthropic's Claude, which has become widely used by software professionals. Founded in 2021 by a group of ex-OpenAI leaders who said they wanted to prioritize AI safety, Anthropic has positioned itself as the more responsible AI vendor. The distinction drew attention when President Donald Trump's administration punished the startup after a contract dispute over AI use in the military, and Altman used the opportunity to cement OpenAI's own deal with the Pentagon. Consumer interest in Anthropic surged and the company said its annualized revenues hit $30 billion, a higher number than what OpenAI has reported, though they measure it differently. Friar and Dresser declined to reveal OpenAI's latest sales but both have suggested that Anthropic's number is inflated because it doesn't account for revenue it must share with cloud computing providers Amazon and Google. Even so, it remains a tight competition that's also tied to the health of the stock market and the future of the economy. "They're likely quite close," said Luke Emberson, a researcher at nonprofit institute Epoch AI. "Certainly the trends show Anthropic is growing much faster than OpenAI. If that continues, they're likely to cross soon." 
The urgency led Dresser to send a memo to OpenAI employees on Sunday, first reported by The Verge, that asserted that Anthropic's coding focus "gave them an early wedge" but expressed confidence that OpenAI has the "real structural advantage" as AI usage expands beyond software developers and OpenAI builds enough computing capacity to operate its AI systems. "Their story is built on fear, restriction, and the idea that a small group of elites should control AI," Dresser's memo said of Anthropic. "Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more." But for skeptics of the financial viability of the AI industry, the trajectory of both money-losing companies is alarming as smaller startups increasingly become dependent on their AI tools. Anthropic has imposed rate limits on heavy users, forcing some to wait for hours to use Claude, and both companies have set up service tiers that reward premium payers, said author and AI critic Ed Zitron. "It's what I call the subprime AI crisis," Zitron said. "People built their lives and they built their businesses on top of these companies that, as they try and save money, will start turning the screws." One thing that both AI leaders and critics agree on is that it is an expensive technology, though whether it is worth the cost in electricity-hungry AI computers remains to be seen. "People will say, well, 'Once they go public, they're safe.' That's not true," Zitron said. "Public companies can and will die, especially ones that are dependent on $100 billion to $200 billion every year or so, just to keep breathing."
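The revenue comparison above hinges on how "annualized" figures are computed and what gets netted out. A minimal sketch of why the two companies' numbers are hard to compare directly, assuming a hypothetical 15% revenue share paid to cloud partners (the actual rate is not public):

```python
# Sketch of annualized-revenue accounting. The 15% revenue-share rate is a
# made-up illustration, not a reported figure for Anthropic or anyone else.

def annualized_run_rate(latest_month_revenue: float) -> float:
    """Annualized revenue: the most recent month's sales multiplied by 12."""
    return latest_month_revenue * 12

def net_of_revenue_share(gross: float, share_rate: float) -> float:
    """Revenue left after paying a share to cloud computing partners."""
    return gross * (1 - share_rate)

gross = annualized_run_rate(2.5e9)       # a $2.5B month annualizes to $30B
net = net_of_revenue_share(gross, 0.15)  # hypothetical 15% share -> $25.5B
```

The gap between `gross` and `net` is the kind of difference Friar and Dresser point to when they argue Anthropic's headline number is inflated.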

Anthropic released Claude Opus 4.7 for broad use while keeping the stronger Mythos Preview under limited access. The new model improves coding, agent work, and high-resolution vision, but the larger test is whether Anthropic can block risky cyber use without cutting off legitimate defenders. If that system works, Mythos-class access gets closer. If it fails, the airlock closes. Anthropic released Claude Opus 4.7 on Thursday as its most powerful generally available model, while keeping the stronger Claude Mythos Preview behind limited access. The company says Opus 4.7 improves software engineering, high-resolution vision, instruction following, and long-running professional work, but remains below Mythos on the risk tests that matter for cyber deployment. Anthropic is using the public launch to test new safeguards that block prohibited and high-risk cybersecurity requests before it tries to widen access to Mythos-class systems. That is the real story. Opus 4.7 is not only a better coding model. It is the airlock between the commercial AI market and a class of models Anthropic now treats as too capable to release normally. The model lets Anthropic sell confidence to developers while managing institutional anxiety over cyber misuse, model autonomy, and public trust. The company is asking customers to step into that airlock with it. Anthropic's launch post says Opus 4.7 is generally available across Claude products, the API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. It also says the model is less broadly capable than Claude Mythos Preview. That distinction does real work. The Opus 4.7 system card says the model lands between Opus 4.6 and Mythos Preview, and does not advance Anthropic's capability frontier because Mythos already scores higher on every relevant axis Anthropic measured. In other words, Anthropic has separated the product frontier from the risk frontier. For customers, Opus 4.7 is the model they can actually use. 
For Anthropic, Mythos remains the model that explains why distribution has changed. That split makes Opus 4.7 a commercial compromise. The company can claim a new public high-end model without treating the launch like an uncontrolled safety event. It can also gather live evidence about cyber filters, verification programs, and user behavior before deciding how far Mythos-like models can move beyond selected cyber defenders and infrastructure partners. CNBC's launch coverage framed the release the same way: Anthropic is offering a stronger public model while holding back the more capable security-focused system. That framing matters because it turns a model launch into a governance trial. The Hacker News reaction caught the same tension more bluntly. One commenter wrote that the system card read "more like an advertisement for Mythos." Another called it "a 272 page report." That is not just snark. It shows how easily a safety document can also become product positioning for the model users cannot get. The emotion inside that trial is caution. Anthropic wants the market to feel acceleration. It wants regulators, security officials, and enterprise buyers to see restraint. The strongest business case for Opus 4.7 sits in software engineering. Anthropic reports 87.6% on SWE-bench Verified, 64.3% on SWE-bench Pro, 69.4% on Terminal-Bench 2.0, and 77.3% on MCP-Atlas. It also cites a 64.4% result on Finance Agent and a leading third-party GDPval-AA result against GPT-5.4 xhigh. Those numbers point in one direction: less babysitting for harder work. Early customer quotes in the launch material make the same argument from different angles. GitHub said its 93-task benchmark saw a 13% lift over Opus 4.6. Cursor said Opus 4.7 cleared 70% on CursorBench versus 58% for Opus 4.6. Notion said complex workflows improved 14% with fewer tokens and about one-third of the tool errors. The most useful reactions were not the loudest ones. 
Hex said the model "correctly reports when data is missing." Genspark pointed to "loop resistance, consistency, and graceful error recovery." That is the commercial pitch in plainer language: fewer fake answers, fewer dead loops, fewer agent runs that need a human rescue. Treat those quotes as customer evidence, not neutral measurement. They still show where Anthropic expects money to move: code review, long-running agent work, document reasoning, finance research, dashboards, and interfaces. The benchmark caveat is just as important as the gains. Agent scores depend on harnesses, time limits, retries, tool access, and scaffolding. Anthropic notes that Terminal-Bench comparisons used different setups across vendors, and that some older Opus 4.6 numbers changed after harness updates. SWE-bench also carries contamination risk because it draws from public repositories. So the right buyer question is not whether Opus 4.7 "wins." It is whether Opus 4.7 wins on your own work, with your tools, your permissions, your latency targets, and your review process. That is where the airlock metaphor becomes practical. Anthropic is not selling raw intelligence alone. It is selling a controlled passage from chat to action, with effort levels, task budgets, Claude Code review commands, and permission choices wrapped around the model. The clearest technical change is visual input. The new ceiling is 2,576 pixels on the long edge and about 3.75 megapixels. Prior Claude models topped out at 1,568 pixels and 1.15 megapixels. Small labels survive now. Dense screenshots often fail because small text disappears before reasoning starts. Axes blur. UI labels vanish. Menu items compress into noise. A model cannot reason about detail it never receives. Opus 4.7's reported vision gains follow from that. Anthropic reports large jumps on FigQA, CharXiv, ScreenSpot-Pro, and OSWorld-style computer-use tasks. The customer reaction points to the same gap. 
XBOW said Opus 4.7 scored 98.5% on its visual-acuity benchmark versus 54.5% for Opus 4.6, calling visual acuity the single pain point that had kept the company from using Opus for a class of autonomous penetration-testing work. This matters beyond image chat. If you run agents against browsers, IDEs, spreadsheets, charts, patent drawings, slide decks, or dense enterprise dashboards, higher-resolution vision changes the work the model can see. It also changes the cost. Anthropic warns that larger images consume more tokens and says users who do not need extra detail should downsample. That is the hidden migration task. Better vision gives developers a quality lever. It also punishes sloppy input design. Anthropic says Opus 4.7 is not a cyber-focused model. That sentence almost reads like a disclaimer because cyber is still the center of gravity. The system card says Opus 4.7 is roughly similar to Opus 4.6 on cyber capability and below Mythos Preview. It reports a near-saturated 96% pass@1 on Anthropic's 35-challenge Cybench subset, while also saying CTF-style tests may no longer tell the full story. On CyberGym, Opus 4.7 performed close to Opus 4.6 and below Mythos. On a Firefox exploitation evaluation, Opus 4.7 achieved partial control more often than Opus 4.6 but still struggled to produce reliable end-to-end exploit success. The outside context is Mythos. The UK AI Security Institute reported that Mythos Preview completed a cyber range end to end in 3 of 10 attempts and averaged 22 of 32 steps. Anthropic says Opus 4.7 failed to fully solve a related range, although its best run completed steps estimated to take a human cyber expert about five hours. That does not make Opus 4.7 harmless. A model that can complete meaningful portions of an attack range can aid defenders and bad actors. But it does make Opus 4.7 the safer public test bed for a larger access question. Anthropic's answer is a verified-access pattern. 
Prohibited and high-risk cyber requests are blocked by default. Security professionals with legitimate use cases can apply for the Cyber Verification Program. OpenAI has moved in a similar direction with trusted cyber access, which suggests the frontier labs are converging on identity, context, and user trust as part of the safety layer. That is a quiet but large shift. The safety system is no longer just the model refusing a bad prompt. It is the account, the customer, the use case, the logs, the tool permissions, and the exemption path. Anthropic's safety results are mixed. The company says Opus 4.7 is broadly similar to Opus 4.6, with better honesty and stronger resistance to malicious prompt injection in some agentic settings. It also says Opus 4.7 performs worse on some harmlessness tests, especially illegal-substance harm-reduction prompts, where it gave overly detailed answers more often than Opus 4.6. That tradeoff has a product explanation. Opus 4.7 follows instructions more literally and refuses benign requests less often. Users like that. Enterprises like that. Developers like that. You may like it too when an agent stops dodging ordinary work. But a model that trusts framing more easily can also be easier to steer through a polished pretext. Anthropic's ambiguous-context testing found Opus 4.7 more willing than Opus 4.6 to accept a user's benign premise and provide specifics upfront. In educational or defensive contexts, that helps. In weapons-adjacent or cyber contexts, it can hurt. This is the central safety problem of useful agents. Better compliance often feels like better alignment until the user is adversarial. The model also brings migration risk. Anthropic says the updated tokenizer can map the same input to roughly 1.0 to 1.35 times more tokens, depending on content type. Higher effort levels can increase reasoning and output tokens, especially in later turns of agentic work. 
Pricing stays at $5 per million input tokens and $25 per million output tokens, but bills can still move if prompts, images, and long loops are left unchanged. That cost anxiety showed up fast in the Hacker News thread. One user asked whether a "20x plan is now really a 13x plan" if usage rises and subscription allotments do not. That is exactly the kind of practical confusion a benchmark table does not answer. If you run Opus 4.6 in production, a blind swap is weak engineering. Retune prompts. Reprice long tasks. Recheck refusal boundaries. Rebuild evaluations around the actions your agents can actually take. Anthropic's Responsible Scaling Policy once looked like a safety document. With Opus 4.7, it also looks like a distribution system. Models are no longer simply shipped or withheld. They are assigned to access tiers, wrapped in safeguards, routed through verification programs, and measured against risk thresholds that determine who gets what. Mythos sits behind the inner door. Opus 4.7 opens the outer door to the public market. That gives Anthropic a credible story for regulators and customers. It can say it is not freezing progress, but it is not throwing the strongest model into general access either. It can collect real-world data from Opus 4.7's cyber filters before moving the next class of systems. The risk is that lab-run access tiers become private policy. Anthropic publishes a detailed system card, admits safety regressions, and cites outside evaluations where available. That is better than vague launch marketing. Still, many key facts remain inside the company: blocked request rates, appeal outcomes, cyber classifier misses, incident reports, customer exemptions, and the full Mythos risk profile. Transparency lowers suspicion. It does not replace independent audit. Opus 4.7 therefore lands as a model with two jobs. It must beat Opus 4.6 at the work customers pay for. 
It must also prove that Anthropic can operate the airlock between public AI and higher-risk frontier capability. The first job will show up in coding queues, agent logs, finance workflows, and token bills within days. The second will take longer. Watch what Anthropic does when verified cyber users ask for more power, when benign users hit blocks, and when adversaries learn the shape of the new filters.
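The tokenizer and pricing math above is worth making concrete. A back-of-envelope sketch using the published rates ($5 per million input tokens, $25 per million output tokens); the 1.2x inflation factor in the example is an assumed midpoint of the reported 1.0 to 1.35 range, not a measured value:

```python
# Hedged per-run cost sketch at the published Opus rates. The tokenizer
# inflation factor is a what-if knob, not a measurement for any workload.

INPUT_PER_M = 5.00    # dollars per million input tokens
OUTPUT_PER_M = 25.00  # dollars per million output tokens

def run_cost(input_tokens: int, output_tokens: int,
             tokenizer_inflation: float = 1.0) -> float:
    """Dollar cost of one run, optionally inflating the input token count
    to model the same text mapping to more tokens under a new tokenizer."""
    inp = input_tokens * tokenizer_inflation
    return inp / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

base = run_cost(200_000, 20_000)           # old token counts: $1.50
migrated = run_cost(200_000, 20_000, 1.2)  # same prompt at assumed 1.2x: $1.70
```

Re-running this against real prompt logs, rather than guessing the inflation factor, is the retuning work a blind swap skips.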

Anthropic has introduced Claude Opus 4.7, its latest and most powerful generally available model. The release is an upgrade from Opus 4.6, particularly for software engineering tasks, and excels in complex coding scenarios that previously required significant user assistance.
Enhanced Capabilities of Claude Opus 4.7
Opus 4.7 showcases improved skills in analyzing images and following user instructions. It also demonstrates enhanced creativity for generating slides and documents, according to Anthropic.
Context of Release
The launch of Opus 4.7 follows the announcement of Mythos Preview, a cybersecurity-centered model that Anthropic calls its most powerful. In comparison, Opus 4.7 is more limited in capability.
Performance Evaluation
An internal evaluation indicates that Opus 4.7 does not surpass the "capability frontier" defined by Mythos Preview, which outperformed it on every relevant metric. Due to security concerns, Mythos Preview is currently available only to select partners, including:
* Nvidia
* JPMorgan Chase
* Google
* Apple
* Microsoft
Cybersecurity Features
Anthropic's statement highlights that the release of Opus 4.7 includes additional cybersecurity safeguards compared to its predecessor, and that data collected from these safeguards will inform future releases of Mythos-class models. Security professionals interested in leveraging Opus 4.7 for cybersecurity initiatives, such as vulnerability research, may participate in the new Cyber Verification Program, which can relax some of the imposed safeguards for verified users.
Trial and Pricing
Early users of Opus 4.7 include major companies such as Intuit, Harvey, Replit, Cursor, Notion, Shopify, Vercel, and Databricks. The pricing remains unchanged from Opus 4.6, set at:
* $5 per million input tokens
* $25 per million output tokens
With Opus 4.7, Anthropic continues its journey toward refined AI models, while emphasizing security and user support.
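The image-analysis gains come with the higher resolution ceiling reported elsewhere in this issue: 2,576 pixels on the long edge and roughly 3.75 megapixels. A small helper, assuming those two limits are the only constraints, for deciding how far to downsample an image before upload:

```python
# Back-of-envelope sizing against the reported resolution ceiling.
# A sketch only; the API may resize or reject oversized images differently.

MAX_LONG_EDGE = 2576   # pixels on the longer side
MAX_PIXELS = 3.75e6    # ~3.75 megapixels total

def downscale_factor(width: int, height: int) -> float:
    """Return the factor (<= 1.0) to multiply both dimensions by so the
    image fits under both the long-edge and total-pixel ceilings."""
    edge_scale = MAX_LONG_EDGE / max(width, height)
    pixel_scale = (MAX_PIXELS / (width * height)) ** 0.5
    return min(1.0, edge_scale, pixel_scale)

# A 4000x3000 screenshot (12 MP) must shrink before upload:
w, h = 4000, 3000
s = downscale_factor(w, h)
new_size = (int(w * s), int(h * s))
```

A 4000x3000 screenshot, for example, gets a factor of about 0.56, landing just under the pixel ceiling; images already within both limits come back with a factor of 1.0 and need no resizing.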

Worse: Anthropic is using Persona, an identity checker that rings alarm bells for the paranoids on Reddit Anthropic may check your ID before letting you access certain Claude features, and the verification vendor it has picked is the same outfit that sparked controversy when Discord tested similar checks. Anthropic quietly updated its support page on identity verification for Claude users this week to indicate that it's rolling the process out on a case-by-case basis. According to the help page, Anthropic is rolling out identity verification for "a few use cases," and users "might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures." In short, expect to be suddenly asked for verification at any time, for pretty much any reason Anthropic can come up with. "Identity verification helps us prevent abuse, enforce our usage policies, and comply with legal obligations," the company said in its new support language. In order to further assuage user fears over the privacy of their data, Anthropic notes that it won't use any identity data to train its models, is only going to collect "the minimum information required to verify your identity," and won't share identity data with anyone other than Persona and Anthropic itself, except where legally required to respond to valid legal process. You may recognize the name Persona Identities if you follow privacy news. Discord previously chose Persona as its age verification partner when the social discussion platform announced plans to enact a verification system similar to Anthropic's. But a security researcher reported exposure of Persona's front end on a government server, then speculated that this was part of a broader government surveillance scheme. Persona convincingly denied those allegations in discussions with The Register, but the uproar was enough for Discord to delay its plans to implement age checks. 
It also cast a shadow over Persona for ostensibly unrelated reasons. This time around, commenters were quick to voice displeasure with Persona's involvement in Anthropic's identity verification plans, with some on Reddit saying they planned to cancel their subscriptions. Others pointed to the February personal account of an individual who dug into Persona after finding out it was LinkedIn's identity verification partner. As that blog post pointed out, Persona lists a number of subprocessors that help it with various parts of its identity verification process, including AWS, Confluent, Google, OpenAI, Stripe, Twilio, and even potentially Anthropic, among others. Anthropic claims on the help page that Persona is the one collecting selfie images and snapshots of identity documents for verification, and that it exercises tight controls over how Persona is able to handle that data and what it can do with it. "We set the rules for how it's used and how long it's kept," Anthropic states. "Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud." Anthropic also made multiple mentions of being able to set its own retention period on the data of Claude users processed by Persona, but failed to state what that period is. The larger point? When new information is gathered, it often goes through a whole chain of providers. If any one of those providers has sneaky intentions or lax data security practices, that information may end up in hands you never expected it to, when all you maybe wanted to do was write some new code faster or ask a chatbot for relationship advice.

Perplexity today released a new expansion of Perplexity Computer for the Mac called Personal Computer. This brings the multi-modal orchestration capabilities of Computer to, well, a computer, where it can work with your files, apps, connectors, and the web. "Personal Computer makes Perplexity Computer a more personal orchestrator, elegantly hybridizing the local and server environments for maximum security and productivity," Perplexity explains. "AI changes how we think about the computer." Perplexity announced Perplexity Computer back in February, describing it as "the next evolution of AI." The idea is that it integrates previous interfaces, like chat and agents, into a single system, a "general-purpose digital worker," that can execute entire workflows by running multiple asynchronous tasks. AI, Perplexity said at the time, is now the computer. And now it's on the computer, or at least a computer: Perplexity Personal Computer for Mac takes Perplexity Computer local so it can integrate with your files, apps, and other tools. And you can initiate tasks from your phone, similar to how Claude Cowork for Windows/Mac can work with Dispatch. "You can ask Personal Computer to read your to-do list," Perplexity explains. "In fact, you can ask it to DO your to-do list. In Notes, just press both CMD keys to activate Personal Computer, and ask. Computer will read your Notes to-do list, reason how to accomplish each task, and work across all of your local files, iMessage, email, connected apps, and the open web to get it done." Personal Computer can also organize messy folders of files into project sub-folders and compare local files against web-based information, and you can interact with it using your voice. It uses a secure sandbox for files and undertakes auditable and reversible actions. Personal Computer for Mac is available now to Perplexity Max subscribers, and Perplexity will bring it to other tiers soon while prioritizing users on the waitlist.

To fully understand the ongoing slugfest between banks and retailers, you have to go back to May 2024. But first, an explanation of interchange fees. Each time a shopper swipes their credit or debit card, it sets off a complicated string of payments between banks. The retailer's bank pays an "interchange fee," typically around 1% to 2% of the transaction cost, to the consumer's bank. The fees include both a set amount and a percentage of the transaction, but the credit card companies, namely Visa and Mastercard, control how they're calculated.
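As a worked example of that fee structure: the 22-cent fixed component and 1.8% rate below are illustrative assumptions, since actual schedules are set by the networks and vary by card type.

```python
# Interchange fee sketch: a set amount plus a percentage of the transaction.
# The fixed and percentage values here are illustrative, not a real schedule.

def interchange_fee(amount: float, fixed: float = 0.22, rate: float = 0.018) -> float:
    """Fee the retailer's bank pays the consumer's bank on one card swipe."""
    return round(fixed + amount * rate, 2)

fee = interchange_fee(100.00)  # $0.22 + 1.8% of $100 = $2.02
```

On a $100 purchase under these assumed numbers, $2.02 flows from the retailer's bank to the consumer's bank, which is why small changes to the formula add up to billions across all card transactions.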

Last August, U.S. Navy officials carrying out a test of unmanned vessels realized they had hit a single point of failure: Starlink. A global outage across Elon Musk's satellite network affecting millions of Starlink users had left two dozen unmanned surface vessels bobbing off the California coast, disrupting communications and halting operations for almost an hour. The incident, which involved drones intended to bolster U.S. military options in a conflict with China, was one of several Navy test disruptions linked to SpaceX's Starlink that left operators unable to connect with autonomous boats, according to internal Navy documents reviewed by Reuters and a person familiar with the matter. As SpaceX rockets toward a $2 trillion public offering this summer - expected to be the largest ever - the company has secured its position as the world's most valuable space company in part by being indispensable to the U.S. government, with an array of technologies ranging from satellite communications to space launches and military AI. Starlink, in particular, has proved key to crucial programs - from drones to missile tracking - with a low-earth orbit constellation of close to 10,000 satellites, a scale that provides the military with a network resilient against potential adversary attacks. But the Navy's mishaps with Starlink for its autonomous drone program, which have not been previously reported, highlight the challenges of the U.S. military's growing reliance on SpaceX and the risks it brings to the Pentagon. "If there was no Starlink, the U.S. government wouldn't have access to a global constellation of low earth orbit communications," said Clayton Swope, a deputy director of the Aerospace Security Project at the Center for Strategic and International Studies. The Pentagon did not respond to questions about the drone test or SpaceX's work with the Navy.
The Pentagon's chief information officer, Kirsten Davies, said the "Department leverages multiple, robust, resilient systems for its broad network." The Navy and SpaceX did not respond to requests for comment. Despite facing growing competition from Amazon.com, which announced an $11.6 billion agreement this week to acquire satellite maker Globalstar, SpaceX remains far ahead in low-earth orbit communications. Beyond drones, SpaceX has cemented a near-monopoly for space launches and provides satellite communications with Starlink and its national security-focused constellation, Starshield, generating billions of dollars for the company. Last month, the U.S. Space Force said it had reassigned its upcoming GPS launch to a SpaceX rocket for the fourth time, due to a glitch in the Vulcan rocket made by the Boeing and Lockheed Martin joint venture United Launch Alliance. Democratic lawmakers have warned the Pentagon about the risks of its reliance on a single company led by the world's richest man to deliver crucial national security capabilities. More recently, the Defense Department's disagreements with, and blacklisting of, AI startup Anthropic quickly revealed how an over-reliance on one AI vendor could create problems should that vendor be dropped. Reuters reported last year that Musk unexpectedly switched off Starlink access to Ukrainian troops as they sought to retake territory from Russia, denting allies' trust in the billionaire. In Taiwan, SpaceX faced criticism over concerns it was withholding satellite communications from U.S. service members based there, "possibly in breach of SpaceX's contractual obligations with the U.S. government," according to a 2024 letter sent by then-U.S. Representative Mike Gallagher to Musk, reported by Forbes at the time. SpaceX disputed the claim in a post on X. Reuters could not determine whether SpaceX has since provided Starlink service in Taiwan to U.S. service members. The Pentagon and SpaceX did not respond to questions about Taiwan.
"As a matter of operational security, we do not comment on or discuss plans, operations capabilities or effects," an official said in a statement. SpaceX's Starlink broadband has been crucial to the Pentagon's drone program, providing connection to small unmanned maritime vessels that look like speedboats without seats, and include those made by Maryland-based BlackSea and Austin, Texas-based Saronic. In April 2025, during a series of Navy tests in California involving unmanned boats and flying drones, officials reported that Starlink struggled to provide a solid network connection due to the high data usage needed to control multiple systems, according to a Navy safety report of the tests reviewed by Reuters. "Starlink reliance exposed limitations under multiple-vehicle load," the report stated. The report also faulted issues linked to radios provided by Silvus and a network system provided by Viasat. In the weeks leading up to the global Starlink outage in August, another series of Navy tests was disrupted by intermittent connection issues with the Starlink network, Navy documents reviewed by Reuters show. The causes of the network losses were not immediately clear. Despite the setbacks, the upside of Starlink - a cheap and commercially available service - outweighs the risk of a potential outage disrupting future military operations, said Bryan Clark, an autonomous warfare expert at the Hudson Institute. "You accept those vulnerabilities because of the benefits you get from the ubiquity it provides," he said.
Claude Opus 4.7 shows visible chain-of-thought and unusually high token usage. Anthropic shipped Claude Opus 4.7 today, calling it the company's most capable Opus model yet. We tested it, and the marketing lines up with the results. "Our latest model, Claude Opus 4.7, is now generally available," the company said in its official announcement. "Users report being able to hand off their hardest coding work -- the kind that previously needed close supervision -- to Opus 4.7 with confidence." The model arrives on the heels of weeks of user complaints about Opus 4.6 allegedly losing its edge. Developers across GitHub, Reddit, and X documented what they called "AI shrinkflation" -- the feeling that the model they'd been paying for had quietly gotten worse. As we reported yesterday, Anthropic was already preparing 4.7 while sitting on something far more powerful that it can't release publicly: Claude Mythos. When the announcement dropped this morning, X users who had been loudest about 4.6's degradation were quick to reply with sarcasm: Opus 4.7, some joked, felt like "early Opus 4.6" -- the version people actually liked, before they believed Anthropic quietly turned the dials down. Anthropic, of course, has denied ever degrading model weights to manage compute demand. Benchmarks back up Anthropic's claims. On SWE-bench Multilingual, a benchmark that measures coding skills, Opus 4.7 scored 80.5% against 4.6's 77.8%. On GDPVal-AA, a third-party evaluation of economically valuable knowledge work across finance and legal domains, 4.7 scored 1,753 Elo against GPT-5.4's 1,674 -- a clear margin over the closest competitor. Document reasoning via OfficeQA Pro showed the starkest jump: 80.6% for 4.7 versus 57.1% for 4.6, with GPT-5.4 and Gemini 3.1 Pro trailing at 51.1% and 42.9% respectively.
Long-term coherence on Vending-Bench 2, a benchmark that tests long-context reasoning by having models run a simulated vending-machine business, clocked in at a final balance of $10,937 versus $8,018 for 4.6 -- a proxy for how well the model sustains useful behavior over long autonomous runs. Cybersecurity is the one area where Anthropic deliberately held back. Opus 4.7 launches with automated safeguards that detect and block prohibited or high-risk cybersecurity requests. Anthropic confirmed it "experimented with efforts to differentially reduce" 4.7's cyber capabilities during training. Security professionals can apply to a new Cyber Verification Program for access to those features. This is the company's test run for the safeguards it will eventually need to deploy with Mythos-class models at scale. Opus 4.7 is the most powerful model publicly available. Mythos Preview, Anthropic's true frontier model, remains restricted to vetted security firms. As the UK's AI Security Institute found in an evaluation last week, Mythos was the first AI to complete "The Last Ones," a 32-step corporate network attack simulation that typically takes human red teams 20 hours. Opus 4.7 is not that. But it's the public-facing model that Anthropic will use to learn how those safety guardrails hold up in the wild before it dares release anything scarier. On the token side, Opus 4.7 uses an updated tokenizer that can map the same input to roughly 1.0x-1.35x as many tokens, depending on content type. The model also reasons more at higher effort levels, particularly on later turns in agentic workflows. Anthropic published a migration guide for developers planning to upgrade from 4.6. We ran our own test -- the same game-building prompt we've used to evaluate every major model release. Opus 4.7 produced the best result we've ever gotten from any model. The most visually polished game, the most genuinely challenging difficulty curve, the best mechanics, and the most creative win and loss screens.
It appeared to generate levels procedurally, and none of them felt impossible -- a balance that has tripped up other models repeatedly. You can test the game here. It wasn't zero-shot. Opus 4.6 had cleared that same test without any fixes. Opus 4.7 needed one round of bug fixes. That could be bad luck -- a single iteration is a thin sample -- but it's worth noting. What struck us more was how the model handled that round: It spotted additional bugs on its own, without being guided toward them. Opus 4.6 typically waited to be told where to look. Until now, Xiaomi MiMo v2 Pro had held the best result, and unlike Opus 4.7 it produced a working game without needing an extra iteration. Some may argue it was also more visually pleasing and had a soundtrack, but its logic and physics fell short of Opus after a single round of bug fixes. Xiaomi's model also produces these results at a fraction of what Anthropic charges, a major consideration for serious projects. The chain-of-thought behavior was different too at first glance. Unlike 4.6, which tucked its reasoning into a separate thinking box (meaning it was not part of the final answer), Opus 4.7 surfaced its chain of thought as part of the main text output. The reasoning was visible and traceable, not hidden behind a UI abstraction, a plus for anyone who values transparency. Whether Anthropic will keep that behavior or eventually collapse it into a hidden block again is unclear. The token usage was unlike anything we'd seen before. For the first time in our testing, a single session depleted our entire token quota. Watching the model work, we saw it complete a full draft -- then write what appeared to be the entire game again from scratch under the label "Rewrite Emerge with bug fixes and improvements," followed by a second pass labeled "Create a rewritten Emerge with bug fixes and improvements."
This means that, if you do serious coding, you'll be forced either to upgrade your plan, pay heavily for API tokens, or wait until Anthropic resets your usage quota. Or you could just use a comparable model that charges far less. Opus 4.6 never did this. It is, however, consistent with what Anthropic warns about in the migration guide: more output tokens, especially on agentic tasks at higher effort levels. Opus 4.7 is available today on Claude.ai, via the Claude API, and on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Pricing is unchanged from 4.6: $5 per million input tokens, $25 per million output tokens. Developers can access it via the model string claude-opus-4-7.
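Putting the published pricing together with the 1.0x-1.35x tokenizer inflation from the migration guide gives a rough way to budget for the heavier token usage. The workload numbers below are made up for illustration; only the per-token prices and the inflation range come from the announcement:

```python
# Opus 4.7 list prices, unchanged from 4.6 (USD per 1M tokens).
INPUT_PRICE = 5.00
OUTPUT_PRICE = 25.00

def estimate_cost(input_tokens, output_tokens, tokenizer_inflation=1.0):
    """Back-of-the-envelope API cost estimate for one session.

    tokenizer_inflation models the roughly 1.0x-1.35x increase in
    token counts Anthropic attributes to the updated tokenizer;
    the actual factor depends on content type.
    """
    eff_in = input_tokens * tokenizer_inflation
    eff_out = output_tokens * tokenizer_inflation
    return (eff_in * INPUT_PRICE + eff_out * OUTPUT_PRICE) / 1_000_000

# Hypothetical agentic session: 200k input tokens, 60k output tokens,
# priced at the worst-case 1.35x inflation.
print(round(estimate_cost(200_000, 60_000, 1.35), 2))  # 3.38
```

Because output tokens cost five times as much as input tokens, sessions where the model rewrites its work end to end (as we observed above) dominate the bill even when the prompt itself is large.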

Anthropic's newest AI tool could spark chaos in markets, but not for the reasons investors have become accustomed to lately. The AI giant -- which has rolled out new tools and updates this year that have upended parts of the market -- recently developed a new model aimed at improving cybersecurity. But the model risks exploiting a key vulnerability in the financial sector, introducing risks ranging from widespread identity theft to the destabilization of the financial system, the American Securities Association said. In a public letter to the Treasury Secretary on Thursday, the trade group flagged concerns about Claude Mythos, the "general-purpose" AI model Anthropic announced in early April. The model -- which falls under Project Glasswing, the company's broader cybersecurity initiative -- is able to locate "thousands of high-severity vulnerabilities" in code across "every major operating system and web browser," the company said on its website. In the hands of bad actors, the tool could be used to hack into the Securities and Exchange Commission's Consolidated Audit Trail, a centralized database that contains investors' private information, the ASA speculated. The group's letter comes about a week after reports said that Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened major US bank CEOs for an urgent meeting to flag the cyber risks posed by Mythos. "The subject matter of this meeting confirms what ASA has warned about for years: the US Securities and Exchange Commission's (SEC) Consolidated Audit Trail (CAT) is a significant cybersecurity vulnerability waiting to be exploited. This is no longer a hypothetical. The threat is here, it is identified, and it has a name," the letter said, referring to Mythos. The ASA has long opposed the use of the Consolidated Audit Trail, citing data privacy concerns. The group outlined six specific risks it believes Mythos could pose to investors.
The group outlined actions regulators could take, including suspending CAT and getting rid of the platform's collected data. Anthropic, which described Mythos as a work in progress on its website, also noted the potential consequences of the technology if it were used by bad actors. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe," it said on its website. Anthropic and the US Treasury did not immediately respond to a request for comment from Business Insider.
Anthropic recently added "identity verification" to its safeguards, requiring some users to provide a passport, driver's license, or government ID, along with a live selfie. The company is rolling it out for "a few use cases," according to its Help Center. Anthropic says it's the "data controller," setting the rules for where ID data is used and how long it is kept. But Persona Identities, an ID verification startup, will collect and store the user information. Persona is contractually obligated to employ user data "only to provide and support verification and to improve their ability to prevent fraud," Anthropic said. So why is Anthropic asking some Claude users to prove who they are? "This applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior, which violates our usage policy," an Anthropic spokesperson wrote to Business Insider. If Anthropic deems that the activity violates its usage policy, the Claude user's account could be banned. Anthropic's help page lists the following potential reasons for why an account might be banned after completing ID verification: Anthropic also offers an appeals form that can be filled out if a user feels their account has been wrongfully banned. Claude users on X have already started noticing the requests for an ID. One user posted a screenshot of the request in Claude, which asked for a "quick identity check." It wrote that the request would only take two minutes and required an ID and mobile camera access. Another screenshot posted online shows what it looks like once the process is completed. "Thank you for verifying your identity," it wrote, accompanied by a celebratory graphic. The backlash on X was swift. "Anthropic making unexplainable decisions," one user wrote. "We are living in 1984," another wrote. In its Help Center, Anthropic also included a list of things it was not doing. Anthropic was not training its models on the data from ID verifications, it wrote. 
It also wrote that it wasn't sharing ID data with anyone beyond Anthropic and Persona, except where legally required. "We are not collecting more than we need," Anthropic wrote. "We ask for the minimum information required to verify your identity."
The same ChatGPT chatbot that gave OpenAI's chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is also now doing her most mundane tasks at work like summarizing her emails and Slack messages. Friar and other company executives are banking OpenAI's future on more of the latter as it shifts its focus to business-oriented products while shedding some of its consumer offerings as a pathway to profitability. OpenAI says it will introduce a new artificial intelligence model for "high-value professional work" as the company faces heightened competition with rival Anthropic in attracting corporate customers to adopt AI assistants in their workplaces. "You'll see a new model coming from us in short order. We feel very excited about it," Friar said in an interview with The Associated Press. OpenAI boasts of more than 900 million weekly users of its core ChatGPT product, and Friar said about 95% of them "don't pay anything" for the popular chatbot. But while all those interactions build habits and reliance, they also strain the costly computing resources needed to power the company's AI systems and highlight the need for big business customers to help pay the bills. OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both lose more money than they make, putting the privately-owned San Francisco-based AI research laboratories in a fierce competition to generate more revenue as they race toward becoming publicly traded on Wall Street. A push to improve performance and sales of OpenAI's business-oriented products -- already Anthropic's bread and butter -- has driven OpenAI to abandon some consumer initiatives, like the AI video generator app Sora. "I think it was a little heartbreaking, but we're like, OK, it's not the main event right now," Friar said. "We need to make sure that our new model that's coming has enough compute." 
The model, codenamed Spud, is what OpenAI calls its "smartest model yet," offering "stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production." It will be part of OpenAI's answer to Anthropic's new Claude Mythos, which Anthropic claims is so "strikingly capable" that it is limiting its use to select customers because of its apparent ability to surpass human cybersecurity experts in finding or exploiting computer vulnerabilities. While most people can't use Mythos, Anthropic on Thursday also released Opus 4.7, describing it as its most powerful "generally available" model. Friar, the former CEO of neighborhood social platform Nextdoor, said business customers accounted for about 20% of OpenAI's revenue when she was hired in 2024 as chief financial officer. She said it's now 40% and expected to account for half of OpenAI's sales by the end of the year. It's a sharp turnaround from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-shuttered Sora partnership with Disney, launching a plan to sell ads on ChatGPT and floating the idea of letting ChatGPT engage in erotica with paid adult users. Altman said on the "Mostly Human" podcast earlier this month that a sharper focus was needed -- and Friar agrees. "Tech companies, when they're growing, it's just this natural thing that happens. There's so many cool things you could do," she said, adding that companies can end up doing "really badly" if they do too many things, while "great companies are very good at, in a reasonable period of time, kind of doing that winnowing down and refocusing and it's super painful." Signaling that shift was the hiring three months ago of Slack CEO Denise Dresser to be OpenAI's first chief revenue officer.
Dresser said in a recent AP interview that she has been laser-focused on meeting with corporate leaders and positioning OpenAI as the go-to platform for workplaces employing AI agents to automate a variety of computer-based job tasks. "It's really clear to me that companies are past the experimentation phase and they're into using AI to do real work," Dresser said. "Leaders at companies are recognizing that AI is probably the most consequential shift of their lifetime." But those leaders also have a choice, namely Anthropic's Claude, which has become widely used by software professionals. Founded in 2021 by a group of ex-OpenAI leaders who said they wanted to prioritize AI safety, Anthropic has positioned itself as the more responsible AI vendor. The distinction drew attention when President Donald Trump's administration punished the startup after a contract dispute over AI use in the military, and Altman used the opportunity to cement OpenAI's own deal with the Pentagon. Consumer interest in Anthropic surged and the company said its annualized revenues hit $30 billion, a higher number than what OpenAI has reported, though they measure it differently. Friar and Dresser declined to reveal OpenAI's latest sales but both have suggested that Anthropic's number is inflated because it doesn't account for revenue it must share with cloud computing providers Amazon and Google. Even so, it remains a tight competition that's also tied to the health of the stock market and the future of the economy. "They're likely quite close," said Luke Emberson, a researcher at nonprofit institute Epoch AI. "Certainly the trends show Anthropic is growing much faster than OpenAI. If that continues, they're likely to cross soon."
The urgency led Dresser to send a memo to OpenAI employees on Sunday, first reported by The Verge, that asserted that Anthropic's coding focus "gave them an early wedge" but expressed confidence that OpenAI has the "real structural advantage" as AI usage expands beyond software developers and OpenAI builds enough computing capacity to operate its AI systems. "Their story is built on fear, restriction, and the idea that a small group of elites should control AI," Dresser's memo said of Anthropic. "Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more." But for skeptics of the financial viability of the AI industry, the trajectory of both money-losing companies is alarming as smaller startups increasingly become dependent on their AI tools. Anthropic has imposed rate limits on heavy users, forcing some to wait for hours to use Claude, and both companies have set up service tiers that reward premium payers, said author and AI critic Ed Zitron. "It's what I call the subprime AI crisis," Zitron said. "People built their lives and they built their businesses on top of these companies that, as they try and save money, will start turning the screws." One thing that both AI leaders and critics agree on is that it is an expensive technology, though whether it is worth the cost in electricity-hungry AI computers remains to be seen. "People will say, well, 'Once they go public, they're safe.' That's not true," Zitron said. "Public companies can and will die, especially ones that are dependent on $100 billion to $200 billion every year or so, just to keep breathing."
