The latest news and updates from companies in the WLTH portfolio.
AI demand metrics are broken and only Anthropic is being realistic

The main demand signal for artificial intelligence looks explosive on paper, but it may be significantly overstated. Anthropic, by pricing its tools for that reality, might be the best-positioned AI company if a correction comes.

Tokens are the basic unit of AI usage: words and characters that make up both the queries users send and the output models generate. Chatting with an AI consumes a couple of hundred tokens per paragraph. Agentic AI, where models write code, browse the web, and execute multi-step workflows, burns through thousands more per session. Using the rates of Anthropic's latest model, one million tokens of input (prompts) costs $5, and one million tokens of output (the model's responses) costs $25.

AI companies cite the boom in token consumption to justify the hundreds of billions of dollars being spent on infrastructure to serve it. But token consumption is becoming a distorted metric. Meta and Shopify say they have created internal leaderboards that track how many tokens employees use. Nvidia CEO Jensen Huang has said he'd be "deeply alarmed" if an engineer earning $500,000 a year wasn't using at least $250,000 worth of compute -- measuring what an engineer spends on AI instead of what they produce with it. Once companies start measuring AI adoption by volume, employees optimize for the metric instead of the outcome.

"If your goal is to just burn a lot of money, there are easy ways to do that," said Ali Ghodsi, CEO of Databricks, which processes AI workloads for thousands of enterprises. "Resubmit the query to ten places. Put up a loop that just does it again and again. It's going to cost a lot of money and not lead to anything."

Jen Stave, executive director of the Harvard Business School AI Institute, hears the same from enterprise leaders.
"I've talked to a dozen CTOs or CIOs who are all saying, 'Actually I'm having a really hard time finding an ROI framework for this,'" she said. Anthropic is planning for the possibility that the demand projections are wrong. CEO Dario Amodei has described what he calls a "cone of uncertainty" - data centers take one to two years to build, so companies are committing billions now for demand they can't yet verify. Buy too little and lose customers when you don't have enough capacity. Buy too much and revenue doesn't arrive on schedule, the math stops working. "If you're off by a couple years, that can be ruinous," Amodei said on the Dwarkesh Patel podcast in February. "I get the impression that some of the other companies have not written down the spreadsheet. They're just doing stuff because it sounds cool." Anthropic's response has been to move away from flat-rate enterprise pricing and toward per-token billing, so the revenue it collects reflects actual usage. It has also cut off some third-party tools that were large consumers of tokens, while OpenAI has been making AI cheaper and easier to consume at scale. Flat-rate pricing has dominated the early years of AI adoption, with fixed monthly fees for generous or unlimited AI access. That model worked when people were chatting with AI. But agentic usage turned what cost thousands of tokens per session into millions, and broke the economics. Anthropic's most generous consumer offering, its $200-a-month Max plan, became a case study. Developers had been routing that subscription through third-party agentic tools like OpenClaw, running AI agents around the clock on a plan designed for conversation. Based on Anthropic's published rates for its latest model, a heavy Claude Code Max user could be paying as little as $200 a month for usage that would've cost the user up to $5,000 without a subscription. On April 4, Anthropic cut off those tools. 
Boris Cherny, head of Claude Code, wrote on X that the subscriptions "weren't built for the usage patterns of these third-party tools."

The same recalibration is happening in enterprise. Older Anthropic contracts included standard and premium seats -- flat monthly fees with a baked-in usage allowance. Those are now labeled "legacy seat types that are no longer available for new Enterprise contracts," according to the company's support page. New enterprise plans charge per seat, with token consumption billed at API rates on top.

Anthropic was first to move, but the pressure is building across the industry. OpenAI's Nick Turley, head of ChatGPT, acknowledged on the BG2 podcast that "it's possible that in the current era, having an unlimited plan is like having an unlimited electricity plan. It just doesn't make sense." If every token now carries a price, companies and consumers that budgeted for flat-rate AI are going to start asking what they actually got for it.

Ramp CEO Eric Glyman, who recently launched a token-tracking tool, sees the dynamic from the finance side. AI spending across Ramp's customer base has grown 13x over the past year, and no one knows how to budget for it. He pointed to Anthropic's approach as the more prudent long-term strategy, and raised a question that should concern OpenAI's investors: if your business model depends on extracting maximum token spend, do you have the incentive to help customers use AI more efficiently? Salesforce is making a similar bet, rolling out a new metric it calls "agentic work units" that tracks the work AI completes rather than the tokens it burns.

Both Anthropic and OpenAI are expected to pursue IPOs this year. When they do, the demand question will be the first thing public market investors try to answer. Anthropic, by moving to per-token billing, will have cleaner data on what its customers actually value. OpenAI will have bigger numbers but a harder time proving how much of that demand is real.
If even a meaningful fraction of today's AI demand is inflated, the company that priced for reality will be the one still standing when the correction arrives.
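The arithmetic behind those figures is easy to sanity-check. Here is a minimal sketch using only the per-token rates quoted above ($5 per million input tokens, $25 per million output tokens); the usage volumes are illustrative assumptions, not Anthropic's actual figures:

```python
# Per-token billing sketch, using the rates quoted above: $5 per million
# input tokens and $25 per million output tokens. The usage volumes below
# are illustrative assumptions, not Anthropic's actual figures.

INPUT_RATE_PER_MTOK = 5.00    # dollars per million input tokens
OUTPUT_RATE_PER_MTOK = 25.00  # dollars per million output tokens

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload billed at per-token API rates."""
    return (input_tokens / 1e6) * INPUT_RATE_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_RATE_PER_MTOK

# A chat exchange of a few hundred tokens each way costs about a cent.
print(f"chat turn:   ${api_cost(300, 500):.4f}")

# An agent running around the clock for a month on a flat-rate plan can
# rack up token volumes that would cost thousands at API rates -- the
# shape of the $200-vs-$5,000 mismatch described above.
monthly = api_cost(input_tokens=600_000_000, output_tokens=80_000_000)
print(f"agent month: ${monthly:,.2f} vs. $200 flat rate")
```

At these rates, the gap between conversational and agentic usage spans several orders of magnitude, which is why flat-rate plans sized for the former break under the latter.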

WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy.

A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation.

The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S.

President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts.

The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful.

U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic declined to speak about the meeting in advance.
The San Francisco-based Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities.

And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI. One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously."

"Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side."

Sacks said, "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit."

The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report.

Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei.
When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from "severe" fallout that the new model could pose to public safety, national security and the economy.

"We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said Anthropic co-founder and policy chief Jack Clark at this week's Semafor World Economy conference.

Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it."

___

O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

Elon Musk's SpaceX has accelerated its scheduled vesting date to as soon as next week. SpaceX notified employees that the vesting date, when stock option shares become eligible for sale, will occur this month instead of in May, Bloomberg reported, citing unnamed sources.

The company is now targeting a May listing at a valuation of more than $1.75 trillion, which would make it the largest IPO in history, surpassing Saudi Aramco's $29 billion debut in 2019. The offering is expected to price the week of June 15. Details regarding the IPO are subject to change, sources noted.

It is understood that SpaceX may also qualify for inclusion in major indexes such as the Nasdaq 100 shortly after its IPO, bypassing the traditional longer waiting periods.

SpaceX is also reportedly showing potential anchor investors its facilities in California, Mississippi and Texas. Musk will be offering the tour to large-stakes investors such as sovereign wealth funds. The plane is set to depart from New York, although plans and dates are subject to change.

According to previous media reports, Bank of America, Goldman Sachs, JPMorgan Chase and Morgan Stanley have all secured senior roles on the deal. Citigroup is also among the banks preparing the IPO. International banks are taking part as well: Royal Bank of Canada, Mizuho Financial Group and Macquarie Group are all focused on managing shares from their respective locations.

In February, Musk announced that SpaceX had acquired his artificial intelligence startup xAI. The transaction valued SpaceX at about $1 trillion and xAI at roughly $250 billion, according to a Reuters report.

Bank of Canada Governor Tiff Macklem says global financial systems need to "come to grips" with the risks posed by rapid advances in artificial intelligence models like Anthropic's Mythos.

Anthropic claims the upcoming Mythos version of its Claude AI is capable of quickly detecting long-hidden cybersecurity vulnerabilities, a claim that has made major financial market players and regulators anxious about the technology's disruptive potential. The Bank of Canada gathered representatives from big banks and financial agencies last week to discuss the risks Mythos poses for the Canadian financial system.

Macklem says that while there's been a fair amount of discussion about Mythos on the sidelines of the International Monetary Fund's spring meetings in Washington, no one yet knows the full implications of this latest AI advance. He says Mythos is not a one-off event, and the nature of AI development means firms, regulators and policy-makers need to grapple with how these rapidly evolving technologies will affect the integrity of financial systems in Canada and around the world.

Finance Minister Francois-Philippe Champagne was also in Washington for the IMF meetings and told reporters earlier in the day that Mythos has become a "test case" for how governments prepare for and react to new technologies.

This report by The Canadian Press was first published April 17, 2026. Craig Lord, The Canadian Press

Anthropic is a company that is in business to develop the capabilities of AI. As I've described here and here, there has been some level of panic about the degree to which AI in general is moving faster than anyone expected. In particular, people have cited Anthropic's tool known as Claude Code as essentially able to independently complete hours of coding work in a way that renders a lot of junior programmers obsolete.

"We're talking 10 to 20 -- to even 100 -- times as productive as I've ever been in my career," Steve Yegge, a veteran coder who built his own tool for running swarms of coding agents, told me. "It's like we've been walking our whole lives," he says, but now they have been given a ride, "and it's fast as [expletive]."

In theory this is great and could lead to a boom in production, but there's an obvious potential downside to having a tool like Claude Code that can do all of this better and faster than a lot of humans. In the wrong hands, this tool could absolutely wreck the cybersecurity of a lot of big institutions.

Now we get to what is either the genuinely scary part of this story or some genuinely brilliant PR work by Anthropic. The company says that its new model, which is called Mythos, is so good at coding that it can almost instantly find (and in theory exploit) vulnerabilities in code, some that have existed unnoticed for decades. Anthropic is so worried about the potential for this to go badly that it is essentially holding the new model back from public release.

One balmy February evening in Bali, Nicholas Carlini stepped away between events at a wedding, opened his laptop, and set out to do some damage. Anthropic PBC had just made a new artificial intelligence model, called Mythos, available for internal review, and Carlini -- a well-known AI researcher -- intended to see what kind of trouble it could cause. Anthropic pays Carlini to stress-test its AI models to see whether hackers could leverage them for espionage, theft or sabotage.
From Bali, where Carlini and his wife were attending an Indian wedding, he was staggered at what the model could do. Within hours Carlini found numerous techniques to infiltrate systems used around the world. Once Carlini was back in Anthropic's downtown San Francisco office, he discovered Mythos was able to autonomously create powerful break-in tools, including against Linux, the open-source code that underpins most of modern computing...

A previous model, Opus 4.6, had shown indications it could help people exploit vulnerabilities in software. Mythos could exploit the vulnerabilities on its own, Graham says. This was a national security risk, he warned Anthropic's executives. That left Graham with the unenviable task of telling his bosses that their next major revenue generator was too hazardous to release to the public.

What the company eventually decided to do was a kind of pre-release to insiders, which would allow them to use Mythos itself to, essentially, Mythos-proof their own systems.

...rather than make it widely available to Claude users, Anthropic gave 12 tech companies access via Project Glasswing, which it described as "an effort to secure the world's most critical software". They include cloud computing giant Amazon Web Services, device manufacturers Apple, Microsoft and Google, and chip-makers Nvidia and Broadcom. CrowdStrike, whose faulty software update caused a major global outage in July 2024, is also among the project's partners, with Anthropic saying it has also given access to Mythos to more than 40 organisations responsible for critical software.

In a video released alongside Project Glasswing's launch, Anthropic boss Dario Amodei said it had offered to work with US government officials to "help defend against the risk of these models".

Not everyone is convinced the Mythos threat is real, but White House AI adviser David Sacks says the claims have to be taken seriously.
Even beyond our borders, some experts are worried. Ciaran Martin, former head of the UK's National Cyber Security Centre, told the BBC earlier this week that the claim Mythos could unearth critical vulnerabilities much more quickly than other AI models had "really shaken people".

"The second thing is that even with existing weaknesses that we know about, but organisations might not have patched against, might not be well defended against, it's just a really good hacker," he said.

You may also remember that Anthropic has been in a big fight with the Pentagon after it refused to allow the military to use Claude for AI-guided weapons. The Pentagon responded by blacklisting the company, and now the fight is taking place in court after Anthropic sued. However, Anthropic founder Dario Amodei is heading to the White House today for what Axios has dubbed "peace talks," so we may see something change soon.

Reminder: Anthropic is suing the Pentagon for blacklisting the company after Amodei refused to allow his AI to be used without restrictions. Some parts of the U.S. intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of Homeland Security), are testing Mythos. Treasury and others want it.

Behind the scenes: After Anthropic took the administration to court, negotiations with the Pentagon chilled. But Anthropic has hired key Trumpworld consultants -- so expect a deal. Friday's meeting is designed to pave the way.

We'll have to wait and see but, ultimately, Mythos may be too valuable to other parts of the government to allow the Pentagon to scuttle it for everyone.

AI model upgrades have a habit of sounding more incremental than they really are. Is Claude Opus 4.7 continuing that tradition, or is it actually better? According to Anthropic, it doesn't reinvent what Claude does; it fixes the things that made you distrust it.

Opus 4.6 was capable but overly cautious in its interpretation of prompts, and many tasks were abandoned partway through when the agentic workload got difficult. Opus 4.7 interprets prompts very literally, almost to the point of being uncomfortable. Anthropic has cautioned developers that prompts written for Opus 4.6 may produce unexpected results on Opus 4.7, because the new model delivers very straightforward implementations of exactly what you asked for rather than what you were probably wanting. That's a feature, not a bug.

Coding has followed suit. Measuring dev teams against the same internal benchmarks, Cursor reported a 70% task completion rate with Opus 4.7, up from 58% with Opus 4.6. Notion also saw 14% better performance, with fewer tool errors, on multi-step workflows. These are not just minor improvements; they show the new model can finish what it starts.

Image handling has improved as well. The resolution of images Opus 4.6 could process was sufficient for general use but unreliable for precise applications. Opus 4.7 can now fully process images up to 3.75 megapixels, a significant step up from 4.6 that matters for complicated inputs such as technical diagrams and depictions of chemical structures. According to Oege de Moor, CEO of XBOW, scores on the company's internal visual acuity benchmark went from 54.5% on 4.6 to 98.5% on 4.7.
While Opus 4.6 started each session fresh, with minimal memory of previous files, Opus 4.7 provides improved multi-session, file-system-based memory. For users running long agentic workflows, this memory will significantly enhance the experience.

It would be dishonest to call 4.7 a pure upgrade with no caveats. The new tokenizer means the same input can map to 1.0 to 1.35 times more tokens than before, and higher effort levels generate more output. Costs can creep up. Anthropic acknowledges this and offers mitigation through effort parameters and task budgets, but teams should measure before assuming efficiency gains.

Still, the overall picture is of a model that has grown up. Opus 4.6 was impressive and occasionally unreliable. Opus 4.7 feels like the version Anthropic always meant to ship.
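The tokenizer caveat is worth quantifying before migrating. A back-of-the-envelope sketch of the budget impact, assuming the 1.0-1.35x multiplier described above; the monthly volume and per-token rate here are illustrative assumptions, not published Anthropic figures:

```python
# Back-of-the-envelope check on the tokenizer change: the same input can map
# to 1.0x-1.35x more tokens on Opus 4.7 than on 4.6, so a budget tuned for
# 4.6 can quietly overrun. The volume and rate below are illustrative
# assumptions, not published Anthropic figures.

def monthly_cost(tokens_46: float, rate_per_mtok: float, multiplier: float) -> float:
    """Cost after re-tokenization: a 4.6-era token count scaled by the multiplier."""
    return (tokens_46 * multiplier / 1e6) * rate_per_mtok

TOKENS_46 = 400_000_000  # assumed monthly input volume, measured under Opus 4.6
RATE = 5.00              # assumed dollars per million input tokens

baseline = monthly_cost(TOKENS_46, RATE, 1.0)
worst_case = monthly_cost(TOKENS_46, RATE, 1.35)

print(f"baseline:   ${baseline:,.0f}")
print(f"worst case: ${worst_case:,.0f}")  # a 35% overrun on the same workload
```

The point is the one the review makes: measure your own workload's token counts under 4.7 before assuming efficiency gains carry over.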

When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from "severe" fallout that the new model could pose to public safety, national security and the economy. "We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it." ___ O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.


WASHINGTON, D.C.: A Starlink outage that disrupted U.S. Navy drone tests off the California coast has highlighted growing concerns within the Pentagon over its increasing reliance on SpaceX for critical military operations. During an August test of unmanned vessels, a global outage affecting Elon Musk's satellite network left around two dozen autonomous surface vessels without connectivity, forcing operations to halt for nearly an hour, according to internal Navy documents reviewed by Reuters and a person familiar with the matter. The vessels, part of a program aimed at strengthening U.S. capabilities in a potential conflict with China, were unable to communicate after the outage cut off access to Starlink, exposing what officials described as a single point of failure. The incident was one of several disruptions linked to Starlink that affected Navy testing of autonomous systems, underscoring risks tied to the military's dependence on commercial satellite networks. Starlink has become central to a range of Pentagon initiatives, from drone operations to missile tracking, supported by a low-Earth-orbit constellation of nearly 10,000 satellites. Its scale has made it a critical communications backbone for U.S. defense systems. But the Navy's experience points to vulnerabilities. In April 2025, officials reported that tests involving unmanned boats and aerial drones showed the network struggled under the heavy data demands of operating multiple systems simultaneously. "Starlink reliance exposed limitations under multiple-vehicle load," a Navy safety report said, also citing issues with radios from Silvus and a network system provided by Viasat. 
Intermittent connectivity issues also disrupted earlier tests in the weeks leading up to the August outage, though the exact causes were unclear, according to Navy documents. Despite these challenges, defense experts say the benefits of Starlink's widespread availability and relatively low cost continue to outweigh the risks. "You accept those vulnerabilities because of the benefits you get from the ubiquity it provides," said Bryan Clark, an autonomous warfare expert at the Hudson Institute. Still, the incidents have added to broader concerns about relying heavily on a single private company for national security infrastructure. "If there were no Starlink, the U.S. government wouldn't have access to a global constellation of low-earth-orbit communications," said Clayton Swope, deputy director of the Aerospace Security Project at the Center for Strategic and International Studies. Lawmakers have also raised concerns about concentration risk, warning that dependence on a company led by a single individual could create strategic vulnerabilities. SpaceX's dominance extends beyond satellite communications. The company has secured a leading role in space launches and military-related technologies, including its Starshield network designed for national security applications. Its position has strengthened further as competitors struggle to catch up. While Amazon has sought to expand in low-Earth-orbit communications, including a US$11.6 billion deal announced this week to acquire satellite maker Globalstar, SpaceX remains well ahead. The Pentagon has defended its approach, with Chief Information Officer Kirsten Davies saying the department uses "multiple, robust, resilient systems for its broad network." However, past incidents have fuelled unease. Reuters reported previously that Musk restricted Starlink access to Ukrainian forces during operations against Russia, while questions have also been raised about service availability for U.S. troops in Taiwan. 
Neither the Pentagon nor SpaceX responded to requests for comment on the Navy test disruptions or related concerns. As SpaceX prepares for a potential public offering that could value the company at around $2 trillion, its deepening ties with the U.S. military are likely to remain under scrutiny, particularly as the Pentagon balances innovation with the risks of overdependence on a single provider.

Bank of Canada Governor Tiff Macklem says global financial systems need to "come to grips" with the risks posed by rapid advances in artificial intelligence models like Anthropic's Mythos. Developer Anthropic claims the upcoming Mythos model of its Claude AI is capable of quickly detecting long-hidden cybersecurity vulnerabilities, news which has made major financial market players and regulators anxious about the technology's disruptive potential. The Bank of Canada gathered representatives from big banks and financial agencies last week to discuss the risks Mythos poses for the Canadian financial system. Macklem says while there's been a fair amount of discussion about Mythos on the sidelines of the International Monetary Fund's spring meetings in Washington, no one knows yet the full implications of this latest AI advance. He says Mythos is not a one-off event and the nature of AI development means firms, regulators and policy-makers need to grapple with how these rapidly evolving technologies will affect the integrity of financial systems in Canada and around the world. Finance Minister Francois-Philippe Champagne was also in Washington for the IMF meetings and told reporters earlier in the day that Mythos has become a "test case" for how governments prepare for and react to new technologies. This report by The Canadian Press was first published April 17, 2026.


Crypto Exchange Kraken's Parent Agrees to $550 Million Bitnomial Acquisition
Payward, the parent company of Kraken, is set to acquire Bitnomial for up to $550 million in cash and stock. Bitnomial is the first fully CFTC-licensed derivatives company in the U.S., and its regulatory infrastructure will enhance Payward's B2B platform with crypto trading and derivatives capabilities.


DETROIT, MI - Detroit officials on April 16 unveiled a new summer safety plan aimed at curbing youth violence and large gatherings downtown, following a series of chaotic "teen takeovers." Mayor Mary Sheffield and Police Chief Todd Bettison announced the six-point plan Thursday, emphasizing a mix of enforcement and youth engagement strategies as the city prepares for warmer months. The plan comes after multiple recent incidents downtown, including a disturbance the night of April 11 when large crowds of teens flooded the area just hours after city leaders met with youth organizers to promote safer alternatives. Videos circulating on social media showed groups running through downtown streets, including along Woodward Avenue, where police said a 19-year-old man from Van Buren Township was chased through a crowd in an attempted robbery before officers arrived. Gunshots were also reported near Campus Martius, though no injuries were confirmed. Police detained multiple teens as they worked to regain control of the scene, with some individuals placed on buses. The April 11 incident followed an earlier "teen takeover" on April 3 tied to a national social media trend and Detroit Tigers Opening Day at Comerica Park, which was marred by vandalism and brawls. In response to that earlier gathering, Sheffield and Bettison held a press conference April 10 at the Butzel Family Recreation Center alongside teen organizers, pledging to create more structured activities for young people. But the renewed chaos the following night highlighted the ongoing challenge facing city leaders. The newly announced safety plan includes increased curfew enforcement, crowd control strategies for large gatherings, a focus on after-hours venues and neighborhood-level crime, and expanded youth programming. "I've said many times that we cannot arrest our way to a safe city," Sheffield said. 
"It is going to take a broad range of strategies that address not only criminal behavior but the circumstances that create the opportunity for it to occur." City officials say the Detroit Police Department will increase its presence at parks and recreation centers and enforce curfew rules for minors. Under the plan, juveniles found out past curfew can be detained, with parents facing fines. The strategy also emphasizes youth-centered programming, including plans for large-scale events at Hart Plaza featuring music, sports and games. Police Chief Todd Bettison said the approach builds on strategies that have helped reduce violent crime in recent years, while adding new tools focused on prevention. "Over the past several years, violent crime in Detroit has reached 60-year lows, although the onset of warmer temperatures often brings with it a rise in incidents," Bettison said. "Mayor Sheffield has been adamant that to continue the city's trend of lower levels of crime, the city must continue strategies that have proven successful and introduce new preventive strategies." The six-point community safety plan is as follows:


WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation. The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts. The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic declined to speak about the meeting in advance. 
The San Francisco-based Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI. One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks said, "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit." The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report. Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei. 
When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software against the "severe" fallout the new model could pose to public safety, national security and the economy. "We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and a year to a year and a half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it." ___ O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

The incident was one of several disruptions linked to Starlink that affected Navy testing of autonomous systems, underscoring risks tied to the military's dependence on commercial satellite networks.

WASHINGTON, D.C.: A Starlink outage that disrupted U.S. Navy drone tests off the California coast has highlighted growing concerns within the Pentagon over its increasing reliance on SpaceX for critical military operations. During an August test of unmanned vessels, a global outage affecting Elon Musk's satellite network left around two dozen autonomous surface vessels without connectivity, forcing operations to halt for nearly an hour, according to internal Navy documents reviewed by Reuters and a person familiar with the matter. The vessels, part of a program aimed at strengthening U.S. capabilities in a potential conflict with China, were unable to communicate after the outage cut off access to Starlink, exposing what officials described as a single point of failure. Starlink has become central to a range of Pentagon initiatives, from drone operations to missile tracking, supported by a low-Earth orbit constellation of nearly 10,000 satellites. Its scale has made it a critical communications backbone for U.S. defense systems. But the Navy's experience points to vulnerabilities. In April 2025, officials reported that tests involving unmanned boats and aerial drones showed the network struggled under the heavy data demands of operating multiple systems simultaneously. "Starlink reliance exposed limitations under multiple-vehicle load," a Navy safety report said, also citing issues with radios from Silvus and a network system provided by Viasat.
Intermittent connectivity issues also disrupted earlier tests in the weeks leading up to the August outage, though the exact causes were unclear, according to Navy documents. Despite these challenges, defense experts say the benefits of Starlink's widespread availability and relatively low cost continue to outweigh the risks. "You accept those vulnerabilities because of the benefits you get from the ubiquity it provides," said Bryan Clark, an autonomous warfare expert at the Hudson Institute. Still, the incidents have added to broader concerns about relying heavily on a single private company for national security infrastructure. "If there were no Starlink, the U.S. government wouldn't have access to a global constellation of low-earth-orbit communications," said Clayton Swope, deputy director of the Aerospace Security Project at the Center for Strategic and International Studies. Lawmakers have also raised concerns about concentration risk, warning that dependence on a company led by a single individual could create strategic vulnerabilities. SpaceX's dominance extends beyond satellite communications. The company has secured a leading role in space launches and military-related technologies, including its Starshield network designed for national security applications. Its position has strengthened further as competitors struggle to catch up. While Amazon has sought to expand in low-Earth-orbit communications, including a US$11.6 billion deal announced this week to acquire satellite maker Globalstar, SpaceX remains well ahead. The Pentagon has defended its approach, with Chief Information Officer Kirsten Davies saying the department uses "multiple, robust, resilient systems for its broad network." However, past incidents have fuelled unease. Reuters reported previously that Musk restricted Starlink access to Ukrainian forces during operations against Russia, while questions have also been raised about service availability for U.S. troops in Taiwan. 
Neither the Pentagon nor SpaceX responded to requests for comment on the Navy test disruptions or related concerns. As SpaceX prepares for a potential public offering that could value the company at around $2 trillion, its deepening ties with the U.S. military are likely to remain under scrutiny, particularly as the Pentagon balances innovation with the risks of overdependence on a single provider.
