The latest news and updates from companies in the WLTH portfolio.
WASHINGTON (TNND) -- A powerful new artificial intelligence model is drawing attention in the tech and cybersecurity world -- not just for what it can do, but for how it could be used if it falls into the wrong hands.

Anthropic, one of the leading AI firms, is developing an experimental system known as "Mythos." Unlike consumer-facing AI tools, this model is not publicly available. Instead, it's being quietly tested with a small group of major companies due to concerns over its capabilities.

A Tool Built for Cybersecurity -- and Potential Exploitation

At its core, Mythos is designed to excel at cybersecurity tasks. According to Anthropic, the model has already identified thousands of high-severity software vulnerabilities, including flaws in widely used operating systems and web browsers. In some cases, the system has even demonstrated the ability to identify and exploit so-called "zero-day" vulnerabilities -- previously unknown weaknesses that can be especially dangerous if discovered by malicious actors.

Independent testing by the UK AI Security Institute underscores both the promise and the risk. Evaluators found the model succeeded in expert-level cybersecurity challenges roughly 73% of the time and, in certain scenarios, could carry out complex, multi-step simulated cyberattacks from start to finish. However, those tests were conducted in controlled environments -- not against real-world, highly defended systems.

Why Access Is Being Restricted

Because of these capabilities, Anthropic and other AI companies are taking a cautious approach. Rather than releasing Mythos publicly, access is limited to a small group of major tech firms, including Google, Amazon, Apple, and Microsoft. The goal is to test the system while minimizing the risk of misuse. The company has also launched "Project Glasswing," an initiative focused on using advanced AI capabilities for defensive cybersecurity purposes.

As part of that effort, firms are conducting extensive "red teaming," where security experts attempt to break the system and uncover potential vulnerabilities before a wider rollout. Companies also say they are monitoring how these tools are used in real time -- with the ability to shut down access if abuse is detected. Still, experts warn that as AI systems become more powerful, the risk of misuse grows.

A Growing Cyber Threat Landscape

Those concerns come at a time when cyberattacks are already a major global issue, targeting everything from hospitals to government agencies. In a recent example, hackers linked to Iran reportedly accessed emails connected to FBI Director Kash Patel. While officials said no sensitive information was exposed, the incident highlights ongoing vulnerabilities. Security researchers warn that advanced AI could make these threats even more dangerous -- allowing attackers to identify weaknesses faster and carry out more sophisticated operations.

In the U.S., the Cybersecurity and Infrastructure Security Agency, or CISA, leads efforts to defend against cyber threats. The agency is responsible for protecting critical infrastructure, including power grids, election systems, and financial networks. But challenges remain. Concerns about staffing and resource constraints have raised questions about whether current defenses can keep pace with rapidly evolving threats -- especially as AI enters the equation.

WASHINGTON (AP) -- White House chief of staff Susie Wiles on Friday sounded out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy.

A White House official, who requested anonymity to discuss the meeting ahead of time, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation. The White House said afterward that the meeting was productive and constructive, as opportunities for collaboration were discussed as well as the goal of balancing innovation and safety.

Anthropic said in a statement that Amodei's meeting included senior administration officials and explored how the San Francisco-based company and the "U.S. government can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The company said it was "looking forward to continuing these discussions."

The meeting came after tensions had run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" When Trump was asked Friday while in Arizona if Anthropic had a meeting at the White House, the president said he had "no idea."

Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts. The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products.

Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI.

One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks added: "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit."

The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report.

Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei.

When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from "severe" fallout that the new model could pose to public safety, national security and the economy.

"We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it."

___

O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

Tesla Cybertruck sales don't appear to be as strong as previously thought. The polarizing pickup's sales numbers have been given a boost by other companies owned by Elon Musk, reports Bloomberg. The revelation offers up yet more proof that the general public isn't all that keen on an EV the equally polarizing billionaire once called the "coolest car I've ever seen."

Registration data provided by S&P Global Mobility show that the tech titan's other companies are propping up the battery-powered truck. SpaceX was responsible for 1,279, or more than 18 percent, of the 7,071 Cybertrucks that were registered in the U.S. during the fourth quarter of 2025. His other ventures, including xAI (which was acquired by SpaceX this February), Neuralink, and the Boring Company, acquired another 60 vehicles during that period. This means that nearly one in five Cybertrucks registered during the quarter just moved from one part of Musk's empire to another. The terms of the inter-company sales are unknown, but since the Cybertruck starts at around $70,000, that could put the value of the purchases at over $100 million. Sales of the EV to Musk-owned companies have continued into 2026 as well, with the same businesses purchasing 225 examples during the first two months of the year.

Without the help from Musk's other companies, sales of the Cybertruck would have fallen by 51 percent year-over-year in the final quarter of 2025. The billionaire once predicted that Tesla would need to build 250,000 examples of the pickup annually to keep up with demand, but that hasn't proven to be the case during its two-plus years on the market. There are a number of factors behind this, including a divisive (and potentially dangerous) design, higher-than-expected pricing, and several recalls. "Tesla is running out of buyers for the Cybertruck," Sam Fiorani, AutoForecast Solutions' vice president of global forecasting, told Bloomberg.

The Cybertruck's lack of sales success isn't just embarrassing for Tesla; it also represents a serious concern. The company has seen its sales fall for three straight years and has been surpassed by BYD as the world's top seller of EVs. In light of the drop, Musk has tried to change the company's focus, but ambitious projects like a robotaxi and a humanoid robot remain years away.
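
The registration figures above reduce to simple arithmetic. As a rough sanity check, using only the S&P Global Mobility counts quoted in the story:

```python
# U.S. Cybertruck registrations in Q4 2025, per the figures quoted above
spacex_regs = 1_279      # registered to SpaceX
other_musk_regs = 60     # xAI, Neuralink, and the Boring Company combined
total_regs = 7_071       # all U.S. registrations in the quarter

spacex_share = spacex_regs / total_regs                    # "more than 18 percent"
musk_share = (spacex_regs + other_musk_regs) / total_regs  # "nearly one in five"

print(f"SpaceX share: {spacex_share:.1%}")
print(f"All Musk companies: {musk_share:.1%}")
```

The numbers check out: SpaceX alone accounts for about 18.1% of the quarter's registrations, and all Musk-owned companies combined for about 18.9%.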

A long-held investment by Alphabet Inc. in SpaceX could become one of its most valuable bets if the rocket company moves ahead with a public listing, according to Bloomberg. Regulatory filings indicate Google owned about 6.11% of SpaceX at the end of 2025. At a projected $2 trillion IPO valuation, that stake would be worth roughly $122 billion. After SpaceX's merger with xAI, the holding is estimated to have diluted to around 5%, or about $100 billion at the same valuation. The figures offer a clearer picture of Google's position in SpaceX, which had previously been acknowledged without precise detail. Only Google and Elon Musk -- who controls roughly 40% -- were required to disclose holdings above 5%. Bloomberg writes that SpaceX is targeting a potential June IPO and could raise as much as $75 billion, which would make it one of the largest listings ever. At that valuation, even a small fraction of ownership would translate into significant dollar value. Early investors are positioned for outsized returns. Some analysts estimate that backers who entered as recently as 2021 could see gains of around 20 times their original investment. Founded in 2002, SpaceX reached a $1 billion valuation within eight years, a relatively fast climb for a capital-intensive aerospace company. Google first invested in 2015, joining Fidelity in a $1 billion funding round that valued SpaceX at $10 billion and gave the firms a combined 10% stake. Ownership stakes have shifted over time due to dilution and secondary share sales. In 2020, Google held about 7.64% while Musk's stake was around 47%. Early investor Founders Fund has since dropped below the 5% disclosure threshold. Alphabet does not separately report its SpaceX holdings in earnings, though it has recorded sizable unrealized gains tied to private investments, including an $8 billion increase in early 2025 linked to SpaceX. 
The IPO is expected to create significant liquidity for employees and insiders, potentially prompting departures as some cash out or pursue new ventures. Board members and long-time investors also stand to benefit, underscoring the scale of wealth that could be generated by SpaceX's anticipated debut.
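The stake figures above follow directly from the disclosed percentages and the projected valuation. As a quick arithmetic check (a sketch using only the numbers reported in the article, not a valuation model):

```python
# Back-of-the-envelope check of the stake values cited above.
# Inputs are the article's figures; this is arithmetic, not analysis.

def stake_value(valuation_usd: float, stake_pct: float) -> float:
    """Dollar value of a fractional ownership stake at a given valuation."""
    return valuation_usd * stake_pct / 100

IPO_VALUATION = 2_000_000_000_000  # projected ~$2 trillion

pre_merger = stake_value(IPO_VALUATION, 6.11)  # Google's stake at end of 2025
post_merger = stake_value(IPO_VALUATION, 5.0)  # estimated stake after xAI merger

print(f"Pre-merger stake:  ${pre_merger / 1e9:.1f}B")   # ~$122.2B
print(f"Post-merger stake: ${post_merger / 1e9:.1f}B")  # ~$100.0B
```

Both results match the article's roughly $122 billion and $100 billion figures.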

Anthropic has announced the launch of Claude Design, a new platform that brings advanced AI-powered design capabilities to its users. This release targets professionals such as designers, product managers, marketers, and founders, as well as teams seeking to streamline their visual design workflows. Claude Design is immediately available in research preview to Claude Pro, Max, Team, and Enterprise subscribers, with features being rolled out gradually. Currently, access is limited to subscribers, and Enterprise admins must enable the feature for their organizations. The tool is powered by Claude Opus 4.7, Anthropic's most advanced vision model, which allows users to generate and refine designs by describing their needs. Users can import assets from various formats, automatically apply their organization's design system, and collaborate with colleagues in real time. Claude Design supports interactive prototypes, wireframes, pitch decks, marketing materials, and code-powered prototypes featuring voice, video, 3D, and AI elements. The platform stands out for enabling rapid design iterations and seamless handoff to Claude Code for implementation. Anthropic, known for its responsible AI research and the Claude family of AI models, is positioning this release to compete directly with other AI-driven design tools. Early reactions from industry users highlight dramatic speed improvements in prototyping and collaboration compared to legacy solutions. Integration with platforms like Canva and positive feedback from teams such as Brilliant and Datadog indicate that Claude Design is already impacting professional workflows. Anthropic plans further integrations in the coming weeks to expand Claude Design's utility across more tools.

White House chief of staff Susie Wiles has met with Anthropic CEO Dario Amodei to discuss the company's new AI model, Mythos.
WASHINGTON (AP) -- White House chief of staff Susie Wiles on Friday sounded out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the meeting ahead of time, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation. The White House said afterward that the meeting was productive and constructive, as opportunities for collaboration were discussed as well as the goal of balancing innovation and safety. Anthropic said in a statement that Amodei's meeting included senior administration officials and explored how the San Francisco-based company and the "U.S. government can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The company said it was "looking forward to continuing these discussions." The meeting came after tensions had run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" When Trump was asked Friday while in Arizona if Anthropic had a meeting at the White House, the president said he had "no idea." 
Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts. The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI. One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks said: "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit." The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. 
The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report. Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei. When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from "severe" fallout that the new model could pose to public safety, national security and the economy. "We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it." ___ O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

SpaceX is planning to go public in June, and its initial public offering (IPO) is generating plenty of buzz among investors and analysts. The company recently merged with Elon Musk's artificial intelligence (AI) company, xAI, which itself owned the Grok chatbot and social platform X. The combined entity is reportedly seeking to raise up to $75 billion, which would shatter the record set by Saudi Aramco's $29 billion IPO in 2019. That would value the company at about $1.75 trillion, making it the sixth-most valuable company in the U.S., according to Barron's. But what does the IPO really mean for investors, and what might Musk's real endgame be? According to Reuters, Musk is setting aside 30% of SpaceX stock for individual investors, roughly triple the amount that's normally available during an IPO. This could be a smart strategy, as retail demand is expected to be extraordinarily high. As described by Rowan Taylor, managing partner of Liberty Hall Capital Partners, "This is one of those lifetime moments in which people may say they just have to get in." Between the attention drawn by Musk and the excitement behind space exploration and AI, investors should expect the SpaceX IPO to be a watershed event. The projected almost $2 trillion valuation for SpaceX would rank the company ahead of enormous, well-known companies like Meta Platforms and Berkshire Hathaway. Barron's noted that such a lofty valuation for SpaceX would equal 75 times estimated 2026 sales and 160 times estimated earnings before interest, taxes, depreciation and amortization (EBITDA). Barron's analysis noted that SpaceX would have to execute on many fronts, including making it cheaper to go to space. However, according to Forbes, nearly all of the company's current revenue comes from Starlink. This can be either a drag or a boon, depending on how the different elements of the company operate. The long game when it comes to SpaceX could be an integration with Musk's other controversial company, Tesla. 
Barron's reported that Musk has spoken about the "convergence" of all of his companies and noted that Baird analyst Ben Kallo said, "I think it's probable. It looks like that's going to happen." Such a company would be a conglomerate of futuristic technologies combining AI and space travel and would no doubt generate further investor excitement. However, such a tie-up is still speculative at this stage, and it might create additional concerns regarding valuation. The core question regarding SpaceX is whether the company can grow earnings fast enough to support its extraordinary valuation. While its futuristic technologies are exciting, hype alone isn't enough to sustain long-term valuations. Only time will tell if SpaceX can execute to the point that it supports its high share price.
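The multiples Barron's cites imply specific revenue and profitability targets. As a quick derivation (a sketch using the article's roughly $2 trillion projected valuation; the implied sales and EBITDA are arithmetic consequences, not reported estimates):

```python
# Derive the sales and EBITDA implied by the valuation multiples cited above.
# Inputs are from the article; the implied figures follow from division alone.

VALUATION = 2_000_000_000_000  # projected ~$2 trillion valuation
SALES_MULTIPLE = 75            # 75x estimated 2026 sales
EBITDA_MULTIPLE = 160          # 160x estimated EBITDA

implied_sales = VALUATION / SALES_MULTIPLE    # ~$26.7B in 2026 sales
implied_ebitda = VALUATION / EBITDA_MULTIPLE  # ~$12.5B in EBITDA

print(f"Implied 2026 sales: ${implied_sales / 1e9:.1f}B")
print(f"Implied EBITDA:     ${implied_ebitda / 1e9:.1f}B")
```

Those implied figures are the execution bar the valuation sets, which is why Barron's stresses that SpaceX "would have to execute on many fronts."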

An expected wave of technology initial public offerings is kicking into high gear. On Friday, the Silicon Valley artificial intelligence chip maker Cerebras took a key step toward listing its shares on the stock market by unveiling an investor prospectus. In the filing, Cerebras revealed that its revenue rose 75 percent last year to $510 million and that it swung to an annual profit of $238 million from a net loss in 2024. Cerebras initially filed to list its shares in the fall of 2024, before raising additional funding and withdrawing its offering plans without providing a reason. Now it is moving to go public again, before a slew of potentially enormous offerings from high-profile companies. This month, SpaceX, Elon Musk's rocket and satellite maker, which is valued at more than $1 trillion, filed to list its shares in what is likely to be one of the largest-ever public offerings. The A.I. companies OpenAI and Anthropic have also taken early steps to go public. These companies could suck the oxygen out of other I.P.O.s by commanding most investor attention. Cerebras, based in Sunnyvale, Calif., makes specialized computer chips for building A.I. technologies and delivering them to businesses and consumers. The market is dominated by the Silicon Valley chip maker Nvidia. Cerebras and others are trying to make a dent in Nvidia's chip empire. Google and Amazon are pulling in billions of dollars in revenues with their specialized A.I. chips. Other start-ups trying to grab a piece of the booming chip market include Graphcore and SambaNova. Its first prospectus in 2024 showed that Cerebras's business relied heavily on a single customer that was also an investor: G42, an A.I. company backed by the United Arab Emirates. In total, G42 represented 87 percent of Cerebras's revenue in the first half of 2024. 
In that 2024 prospectus, Cerebras said it had notified CFIUS, or the Committee on Foreign Investment in the United States, about selling shares to G42 so that the committee could review the partnership for national security risks. Several months later, the company announced that CFIUS had cleared the sale, before pulling its I.P.O. plans. Since then, Cerebras has landed two major U.S. companies -- Amazon and OpenAI -- as customers. Both are constructing massive computer data centers to build and deliver A.I. technologies. (The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to A.I. systems. The two companies have denied the claims.) In its prospectus on Friday, Cerebras revealed that it generated $510 million in revenue in 2025, up from $290 million in 2024. Though the company's chips can be used to build A.I. technologies, much of its business has been driven by the tech industry's growing need to deliver A.I. systems to customers -- a technical process called "inference." "As it turns out, inference is cool," Cerebras's chief executive and co-founder, Andrew Feldman, told The Times in January after signing the agreement with OpenAI. The profit of $238 million last year compared with a loss of $482 million in 2024, according to the prospectus. Cerebras warned in the filing that a substantial portion of its revenue would continue coming from a limited number of customers. G42 represented 24 percent of its revenues last year, and it said the Mohamed bin Zayed University of Artificial Intelligence, or MBZUAI, a research lab in the Emirates, accounted for 62 percent. Cerebras added that its deal with OpenAI was valued at more than $20 billion. Cerebras was founded in 2016 by Mr. Feldman and the chip industry veterans Jean-Philippe Fricker, Michael James, Gary Lauterbach and Sean Lie. Like other start-ups, the company saw the growing demand for chips that could help build A.I. and began designing its own. 
One challenge that A.I. companies faced at the time was connecting many chips so they could work together to analyze enormous amounts of data, an essential part of building modern A.I. systems. To address this, Cerebras created a chip around 56 times as large as those made by competitors to help crunch data faster. Cerebras began shipping its chips in 2019. The company has raised over $2.55 billion in venture capital funding, valuing it at $23 billion. In 2023, the company began building supercomputers for G42. The partnership later came under scrutiny as the Commerce Department weighed whether to place trade restrictions on G42 because of its work with China's military. G42 has denied having connections to the Chinese government and military. Cerebras is among the companies working with the Emirates to build A.I. data centers in the Middle East, projects that could be delayed by the war in the region. Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

TRM uses layered detection and reconciliation, anchoring all datasets to canonical transaction timestamps for accuracy.
Blockchain reorganizations continue to challenge data reliability across Ethereum-compatible networks. A recent post by TRM Labs explains how these events can alter transaction records, forcing engineering teams to rethink how real-time blockchain data is processed and maintained. TRM Labs shared the update through its official X account, pointing readers to a detailed breakdown of its internal systems. The post explains that blockchain reorgs do more than create duplicate entries. They can shift transaction positions, modify log indices, and even alter execution outcomes. A reorg occurs when a blockchain replaces recently accepted blocks with a different version of the chain. This can happen under both proof-of-work and proof-of-stake systems. In Ethereum's current structure, delays in block propagation or network partitions can trigger such changes. As a result, previously ingested data may become outdated without warning. Transactions might move to different blocks, while timestamps and execution paths can change. In some cases, a transaction that succeeded earlier may fail in the updated chain version. This creates challenges for data pipelines that process blockchain activity in real time. Once incorrect data enters storage systems, it remains alongside updated records. This leads to inconsistencies that extend across dependent datasets. TRM notes that relying only on transaction hashes for deduplication does not solve the issue. When positions shift, metadata such as log indices and trace identifiers also change. These differences cause systems to treat identical transactions as separate records. To manage these issues, TRM Labs built a layered system that detects and corrects reorg-related inconsistencies. The company processes blockchain data immediately after block production instead of waiting for finality. 
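The reorg-detection step described above can be sketched as a walk back from the chain tip, comparing ingested block hashes against the current canonical chain until the two agree. This is a hypothetical illustration, not TRM's implementation; `get_canonical_hash` stands in for a node RPC call such as `eth_getBlockByNumber`.

```python
# Hypothetical reorg detection: walk back from the tip, comparing the block
# hashes we ingested against the canonical chain, until we find the highest
# block where the two agree (the fork point). Everything above the fork point
# must be re-ingested and republished to downstream tables.

from typing import Callable, Optional


def find_fork_point(
    stored_hashes: dict[int, str],                       # height -> hash as ingested
    get_canonical_hash: Callable[[int], Optional[str]],  # height -> canonical hash
    tip_height: int,
    max_lookback: int = 64,
) -> Optional[int]:
    """Return the highest height where stored and canonical hashes match,
    or None if no agreement is found within the lookback window."""
    for height in range(tip_height, tip_height - max_lookback, -1):
        stored = stored_hashes.get(height)
        if stored is not None and stored == get_canonical_hash(height):
            return height
    return None


# Example: blocks 101 and 102 were replaced by a reorg; 100 is the fork point.
stored = {100: "0xaaa", 101: "0xbbb", 102: "0xccc"}
canonical = {100: "0xaaa", 101: "0xbb2", 102: "0xcc2"}
assert find_fork_point(stored, canonical.get, tip_height=102) == 100
```

A bounded lookback keeps the check cheap; in practice the window only needs to cover the deepest reorg the pipeline is prepared to tolerate before finality.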
This approach supports real-time monitoring needs but requires constant reconciliation. Waiting for finality could prevent most reorg issues. However, finality on Ethereum can take up to 15 minutes. For compliance and risk monitoring systems, such delays are not practical. TRM's system begins with reorg detection. Once identified, affected data is republished and corrected across all downstream tables. Each dataset applies its own deduplication rules, ensuring that outdated records are removed or replaced. Another key component is cross-table reconciliation. Since reorgs can affect multiple datasets differently, consistency must be restored across all related tables. Without this step, mismatched records could disrupt analytics and reporting systems. The transactions table plays a central role in this process. It serves as the main reference point for all other datasets. By anchoring downstream data to canonical transaction timestamps, the system restores alignment after a reorg occurs. The post also outlines different failure scenarios observed in production. In some cases, transactions retain the same outputs but shift positions. In others, execution paths change due to differences in blockchain state, leading to altered results. There are also situations where the number of token transfers changes between chain versions. These variations create mismatches that cannot be resolved through simple deduplication methods. TRM's approach addresses each of these scenarios through coordinated data correction. This ensures that real-time systems maintain accuracy even when the underlying blockchain structure changes.
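The deduplication point above, that transaction hashes alone are not a sufficient key, can be made concrete with a small sketch. The schema and field names here are illustrative assumptions, not TRM's actual tables: the same transaction hash can appear in two chain versions with different block hashes, positions, and outcomes, so records are keyed by the (block hash, transaction hash) pair.

```python
# Illustrative post-reorg deduplication: the same tx hash can appear in two
# chain versions with different block hashes, log indices, and outcomes, so
# records are keyed by (block_hash, tx_hash) rather than tx_hash alone, and
# rows from orphaned blocks are dropped during reconciliation.

from dataclasses import dataclass


@dataclass(frozen=True)
class TxRecord:
    tx_hash: str
    block_hash: str    # identifies which chain version produced this record
    block_number: int
    log_index: int
    succeeded: bool


def reconcile(records: list[TxRecord], canonical_blocks: set[str]) -> list[TxRecord]:
    """Keep only records whose block is still on the canonical chain."""
    return [r for r in records if r.block_hash in canonical_blocks]


# Example: one tx ingested twice across a reorg. Its position and outcome
# changed, and only the copy from the canonical block survives.
old = TxRecord("0xabc", "0xblock1", 100, 0, succeeded=True)
new = TxRecord("0xabc", "0xblock1b", 100, 2, succeeded=False)
assert reconcile([old, new], canonical_blocks={"0xblock1b"}) == [new]
```

The same filter, applied with each table's own deduplication key and anchored to the canonical transactions table, is what restores cross-table consistency after a reorg.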

Anthropic CEO Dario Amodei met with senior administration officials at the White House Friday, sparking new questions over the future of the artificial intelligence firm's relationship with the Trump administration amid its ongoing battle with the Pentagon. The White House said the "introductory meeting" with the company was "both productive and constructive." "We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology," the White House added in a statement shared with The Hill. "The conversation also explored the balance between advancing innovation and ensuring safety. We look forward to continuing this dialogue and will host similar discussions with other leading AI companies." A spokesperson for Anthropic confirmed the meeting, stating Amodei had a "productive discussion on how Anthropic and the U.S. government can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The high-stakes meeting involved Amodei, White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, Axios first reported. The Hill reached out to the Treasury Department for comment. "The meeting reflected Anthropic's ongoing commitment to engaging with the U.S. government on the development of responsible AI," the Anthropic spokesperson said. "We are grateful for their time and are looking forward to continuing these discussions." It comes just over a week after Anthropic announced it would limit the release of its new Mythos model to a limited set of companies, banks and government entities given its cybersecurity capabilities. While the tool can help governments find cybersecurity vulnerabilities in infrastructure, web browsers or software, it also makes it much easier for hackers to exploit these security gaps. 
Upon learning about Mythos and its capabilities earlier this month, multiple Trump administration officials quickly engaged with Anthropic, despite its ongoing legal fight with the Pentagon. Bessent and Federal Reserve Chair Jerome Powell convened Wall Street executives last week to discuss the cybersecurity concerns, multiple people familiar with the meeting told The Hill. CNBC later reported the Treasury secretary joined Vice President Vance on a call with a group of technology leaders, including Anthropic CEO Dario Amodei, xAI CEO and former Trump adviser Elon Musk, OpenAI CEO Sam Altman and others, to discuss the security of AI models. Friday's meeting could signal a new chapter for Anthropic's rocky relationship with the Trump administration. It comes nearly two months after negotiations between the Pentagon and Anthropic fell apart over safety guardrails, prompting the Pentagon to label the company as a supply chain risk. President Trump also directed federal civilian agencies to stop using Anthropic's models, prompting the company to sue the federal government. A preliminary injunction was granted by a California federal judge, putting a temporary pause on the government's directive not to use Anthropic's technology. Mythos has already found thousands of high-severity vulnerabilities, some of which date back more than two decades, according to Anthropic. Multiple banks on Wall Street are already using the model to spot vulnerabilities, while technology firms like Google and Apple are testing it as part of Anthropic's new security initiative Project Glasswing. Anthropic is also in conversations with the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation about Mythos, a company official told The Hill earlier this week. Bloomberg reported this week that the White House is preparing to make a version of Anthropic's AI model available to major federal agencies.

White House chief of staff Susie Wiles has met with Anthropic CEO Dario Amodei to discuss the company's new AI model, Mythos WASHINGTON (AP) -- White House chief of staff Susie Wiles on Friday sounded out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the meeting ahead of time, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation.
