The latest news and updates from companies in the WLTH portfolio.
Global financial systems need to "come to grips" with the risks posed by rapid advances in artificial intelligence models like Anthropic's Mythos, Bank of Canada Governor Tiff Macklem said on Friday. Developer Anthropic claims the upcoming Mythos model of its Claude AI is capable of quickly detecting long-hidden cybersecurity vulnerabilities. The model has not yet seen a wide commercial release, but major financial market players and regulators are already anxious about the technology's disruptive potential.

Mythos was discussed at a meeting last week of the Bank of Canada's financial sector resiliency group, which includes representatives from the finance department and major Canadian banks. U.S. officials have reportedly convened similar roundtables.

Macklem told reporters during a call Friday from the sidelines of the International Monetary Fund's spring meetings that there has been a fair amount of discussion about the model at the forum. He confirmed he spoke to Federal Reserve chair Jerome Powell about the U.S. approach. "I don't think anybody knows the full implications at this point. That's precisely what everybody's trying to get to the bottom of," Macklem said. He said policy-makers and financial institutions are still in "early discussions" about what Mythos means for the integrity of the global financial system.

But Macklem emphasized that Mythos is not a one-off event: the nature of AI development means firms, regulators and policy-makers need to put plans in place to grapple with a rapidly evolving technology. Whether it's Mythos or another AI model, Macklem said, the ability of these new technologies to both expose and exploit vulnerabilities "puts a premium" on having strong cybersecurity protections in place. "We're going to need to come to grips with how we're going to manage this on an ongoing basis," he said. "The world's moving quickly. We need to keep up."
Canada's AI Minister Evan Solomon said he met with Anthropic officials earlier this week to discuss global concern about the new Mythos model. "Our No. 1 goal is to protect Canadians, Canadian data and our institutions, so we're very aware of it," he said Monday. Finance Minister François-Philippe Champagne was also in Washington for the IMF meetings and told reporters earlier in the day that Mythos has become a "test case" for how governments prepare for and react to new technologies.

This report by The Canadian Press was first published April 17, 2026.

-- With files from Anja Karadeglija

Craig Lord, The Canadian Press

Anthropic has recently launched Claude Design, a cutting-edge AI design assistant that enhances creative capabilities. The new tool is part of the company's ongoing effort to revolutionize visual design.

What is Claude Design?

Claude Design is an advanced application that lets users generate designs, prototypes, and slides. It expands Anthropic's previous work in image generation, building on the foundational capabilities of its vision model, Opus 4.7.

Features of Claude Design

* Prompt-Based Customization: Every design project begins with a user-defined prompt, allowing for tailored outputs.
* Interactive Adjustments: Users can refine designs through inline comments and direct edits, enhancing collaboration.
* Custom Sliders: The app generates specific sliders for visual elements, making it easy to modify features like glow and density.
* Onboarding Process: Claude learns from a user's existing codebase and design documents, automatically applying brand colors and typography.
* Image and Document Support: Users can upload images and documents, facilitating a smoother design workflow.
* Web Capture Tool: This feature allows enterprise clients to capture elements directly from their websites for inspiration.
* Direct Export: Designs can be exported to Claude Code, making integration seamless.

Access and Availability

Claude Design is part of Anthropic's subscription services, including the Pro, Max, Team, and Enterprise plans. Users can explore the app while keeping track of their usage limits.

Market Context

The introduction of Claude Design coincides with releases from competitors like Adobe and Canva, both of which have launched their own visual AI assistants. Interestingly, Claude Design allows users to export projects directly to Canva, demonstrating a unique approach to market competition. With this initiative, Anthropic is setting the stage to further establish itself in the AI design sector, potentially reshaping how designers and businesses approach visual creativity.


White House Chief of Staff Susie Wiles plans to meet with Anthropic CEO Dario Amodei to discuss the company's new AI model, Mythos

WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy.

A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation.

The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize its potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, saying in a February social media post that the administration "will not do business with them again!" Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts.

The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons or in the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic declined to speak about the meeting in advance.
The San Francisco-based Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI.

One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks said, "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit."

The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report.

Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei.
When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from the "severe" fallout the new model could pose to public safety, national security and the economy.

"We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it."

___

O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

Related company purchases raise questions about Tesla's sales transparency

SpaceX bought 1,279 Cybertrucks in Q4 2025, representing over 18% of Tesla's total US sales during the quarter. That's not fleet expansion -- that's a bailout wearing corporate camouflage. When your biggest customer shares the same CEO, the sales numbers start looking less like market validation and more like financial theater.

The Numbers Tell a Brutal Story

Total US Cybertruck sales plummeted 48% from 2024 to just 20,300 units last year, a nosedive that makes the vehicle's polarizing design seem quaint compared to its market rejection. Elon Musk's 2019 forecast of 250,000 annual production looks like pure fantasy now -- Tesla hit roughly 8% of that target. Q1 2026 brought more pain, with deliveries crashing to 3,519 units, down 45% year-over-year. These aren't growing pains; they're death throes.

Family Business or Market Manipulation?

SpaceX's bulk purchases raise questions about the transparency of Tesla's reported sales. SpaceX justified the purchases as "fleet replacements" for operations at facilities like Starbase, Texas. Fair enough -- rocket companies need trucks. But when related entities controlled by the same person buy nearly one-fifth of your struggling product's output, you're not measuring consumer demand anymore. You're watching corporate accounting gymnastics. S&P Global Mobility data shows SpaceX and other Musk ventures combined for roughly 19% of Q4 registrations, turning what should be market metrics into family transactions.

What This Means for Your Investment Decisions

Tesla's credibility on future product launches now carries an asterisk. The Cybertruck situation creates a troubling precedent for evaluating Tesla's next moves. When a company props up disappointing sales through internal purchases, how do you trust the numbers on future products?
Tesla investors deserve transparency about which sales represent genuine market adoption versus corporate reshuffling. EV buyers considering Tesla alternatives now have concrete evidence that even Musk's most hyped products can face brutal market reality -- regardless of what the initial sales figures suggest.

Federal agencies secretly test Anthropic's models while presidential boycott remains active

The same administration that blacklisted Anthropic and ordered federal agencies to "IMMEDIATELY CEASE" using its technology is now hosting CEO Dario Amodei at the White House. What shifted? Mythos -- an AI model so advanced at finding software vulnerabilities that restricting access would be "exceedingly reckless" and "effectively hand a competitive edge to China," according to sources involved in the discussions. Chief of Staff Susie Wiles meets with Amodei on Friday to discuss how this cybersecurity breakthrough fits into national defense strategy.

The AI That Spots What Human Security Teams Miss

Picture this: an AI that identifies long-overlooked software flaws faster than seasoned cybersecurity professionals. Mythos represents a quantum leap in automated vulnerability discovery. The model excels at detecting security holes that human teams routinely miss -- the kind of oversight that leaves critical infrastructure exposed for years. But here's the double-edged reality: the same capability that helps defenders patch systems faster also hands attackers a roadmap to exploitation. You're looking at the Swiss Army knife of cybersecurity AI, equally useful for protection and destruction.

Government Agencies Queue Up Despite Presidential Ban

While Trump's Truth Social posts demanded boycotts, federal agencies quietly kept evaluating Anthropic's technology:

* The Cybersecurity and Infrastructure Security Agency
* The Treasury Department
* Factions of the intelligence community

All are testing Mythos despite the presidential directive. Treasury officials want to identify and fix unknown network flaws -- a reasonable request that highlights the absurdity of blanket restrictions. Even the Pentagon, which started this whole mess by demanding unrestricted access to all Anthropic models, reportedly continues using the company's technology in ongoing military operations.
When National Security Trumps Ideological Warfare

The administration that turned AI ethics into a culture war issue just discovered pragmatism has its place. Recent high-level meetings signal a dramatic recalibration. Vice President JD Vance and Treasury Secretary Scott Bessent met with Amodei alongside other tech executives to discuss AI cybersecurity. Bank executives huddled with federal officials about Mythos applications. Some administration officials now view the prolonged conflict as counterproductive -- a rare outbreak of strategic thinking in an administration better known for Truth Social ultimatums. The legal battles continue, but expect negotiated frameworks rather than corporate exile.

This reversal establishes a crucial precedent: even the most ideologically driven administrations must eventually reckon with technological reality. When AI capabilities become too strategically valuable to ignore, pragmatism beats punishment every time.

Pentagon tests multiple AI models to avoid single-vendor dependence after Anthropic walkaway

When Anthropic walked away from a $200 million Pentagon contract rather than strip Claude of its safety guardrails, it sent shockwaves through Silicon Valley. The popular AI assistant Claude suddenly became the center of a high-stakes military ethics debate -- one that's reshaping how consumer AI gets built.

The Conscience vs. Contract Battle

Anthropic's refusal to compromise on AI safety guardrails triggered the first-ever supply chain risk designation for an American AI company. Anthropic CEO Dario Amodei drew a hard line when Pentagon officials demanded Claude's safeguards against autonomous weapons and domestic surveillance be removed. The Pentagon responded by terminating the contract and designating Anthropic a supply-chain risk on February 27 -- tech industry speak for "you're blacklisted."

Google Steps Into the Breach

Now Google wants in on the action, but with strings attached. The company is reportedly negotiating Gemini deployment in classified Pentagon environments while insisting on contract language that prohibits domestic mass surveillance and autonomous weapons without human oversight. Think of it as Google's "we'll help, but we won't build Skynet" clause. This marks Google's cautious re-entry into defense work after employee protests forced the company to abandon Project Maven years ago.

The Multi-Vendor Arms Race

The Pentagon isn't putting all its eggs in one AI basket anymore. While Claude was first into classified networks, military officials are now testing various AI models across different classification levels to avoid single-vendor dependence.
Pentagon officials emphasize their commitment to rapidly deploying frontier AI capabilities through strong industry partnerships -- bureaucrat-speak for "we need backup plans."

The Reliability Problem

Here's where things get dicey for everyday users. AI search tools show concerning error rates that become magnified at Pentagon scale, where incorrect responses could mean life-or-death decisions based on hallucinated intelligence. Military platforms are adding retrieval-augmented generation and web grounding to reduce those AI fever dreams.

The stakes extend beyond military contracts. These negotiations are setting precedents for AI safety standards that will eventually flow back into the consumer tools you use daily. When Pentagon procurement shapes AI guardrails, your ChatGPT conversations inherit the aftermath.


White House Chief of Staff Susie Wiles plans to meet with Anthropic CEO Dario Amodei to discuss the company's new AI model, Mythos WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation. The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts. The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic declined to speak about the meeting in advance. 
The San Francisco-based Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI. One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks said, "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit." The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report. Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei. 
When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from "severe" fallout that the new model could pose to public safety, national security and the economy. "We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. So the world is going to have to get ready for more powerful systems that are going to exist within it." ___ O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation.

White House Chief of Staff Susie Wiles plans to meet with Anthropic CEO Dario Amodei to discuss the company's new AI model, Mythos WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation. The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts. The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic declined to speak about the meeting in advance. 
The San Francisco-based Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI.

One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side."

Sacks said, "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit."

The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report.

Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei.
When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software from "severe" fallout that the new model could pose to public safety, national security and the economy.

"We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference.

Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it."

___ O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

Things are moving at a breakneck pace: Anthropic has, as the rumor mill predicted, launched a new product called Claude Design that directly enters the market for AI-powered design and prototyping tools. The tool targets designers, product managers, founders, and marketers alike, putting it in direct competition with platforms such as Lovable, which had until now been leading in this niche. Lovable's response was not long in coming.

It had previously become known that Mike Krieger, Anthropic's Chief Product Officer, had stepped down from Figma's board because Anthropic was working on a competing product. It is now clear that the product is Claude Design.

Claude Design is a collaborative design tool based on Claude Opus 4.7, Anthropic's most powerful vision model. Users can create visual projects through simple text descriptions and then refine them through conversation, inline comments, or direct edits. The tool is currently available as a Research Preview for subscribers on the Pro, Max, Team, and Enterprise plans. Its core features include design system integration, interactive prototypes, team collaboration, and direct code handoff.

The tool thus addresses a classic bottleneck in the design process: even experienced designers can rarely prototype more than a few directions simultaneously due to time constraints. Claude Design is intended to remove this limitation.

Lovable has established itself in recent years as one of the leading platforms for AI-powered app and interface design. With Claude Design, the provider of the underlying AI model is now entering the market itself, offering many features that had until now been the unique selling point of third-party tools. That combination of design system integration, interactive prototypes, team collaboration, and direct code handoff makes Claude Design a serious competitor to Lovable, Figma, and other platforms. For users who are already Claude subscribers, Claude Design eliminates the need to pay for a separate tool like Lovable.
Access is included in the existing subscription, with only the option to purchase additional usage quotas if needed.

Lovable responded immediately to the launch. The company announced that, following the integration of Claude Opus 4.7 into its own platform, users will receive double the credits for their actions for a limited time. The offer applies automatically and expires on April 30, 2026, after which credits will revert to the normal calculation. "Opus 4.7 just dropped in Lovable, and for a limited time your credits will go up to 2x further," the company wrote.

The measure is best understood as a classic customer-retention strategy: users are meant to be incentivized by short-term added value to stay on the platform rather than switching to Claude Design. At the same time, Lovable is signaling that it is betting on the most powerful available model in order to remain competitive.

Lovable's and Figma's responses, however, reveal a fundamental strategic problem. By integrating Claude as their central model and even actively promoting it, these companies are making themselves directly dependent on Anthropic -- the very company that is now entering the market as a direct competitor with Claude Design. This dependency is structural in nature. Lovable, for example, cannot control the quality of the underlying model, cannot improve it independently, and is reliant on Anthropic's availability and pricing. Should Anthropic change its API terms or further expand Claude Design, Lovable has little room to push back. The coding tool Cursor, for instance, has attempted to become more independent of Anthropic by using an open-source model from China -- in that case, Kimi K2.5 -- and tailoring it for its own purposes.
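One common way tools in this position reduce vendor lock-in is a thin provider-abstraction layer, so application code depends on an interface rather than one vendor's SDK. The sketch below is purely illustrative -- all class and method names are hypothetical, and it does not reflect Lovable's or Cursor's actual code:

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Interface the application codes against, regardless of vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedProvider(ModelProvider):
    """Stand-in for a closed vendor API (e.g. a Claude model)."""
    def __init__(self, model: str):
        self.model = model
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's API here.
        return f"[{self.model}] response to: {prompt}"

class OpenWeightProvider(ModelProvider):
    """Stand-in for a self-hosted open-weight model (e.g. Kimi K2.5)."""
    def __init__(self, checkpoint: str):
        self.checkpoint = checkpoint
    def complete(self, prompt: str) -> str:
        # A real implementation would run local inference here.
        return f"[{self.checkpoint}] response to: {prompt}"

def generate_design(provider: ModelProvider, brief: str) -> str:
    # Application logic sees only the interface; swapping vendors
    # becomes a configuration change rather than a rewrite.
    return provider.complete(f"Design brief: {brief}")
```

With this shape, switching from a hosted Claude model to a tailored open-weight checkpoint is a one-line change at the call site, which is essentially the independence move the article attributes to Cursor.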

Not everything is clear or perfect, but something is changing and I can feel it. There's something strange about how quickly a year begins to feel real. One minute, it's January: fresh energy, clear intentions and a sense that this time, things will finally align. Then suddenly, it's already months in and life has quietly rewritten parts of your plan without asking. And then, you pause. Not because everything is falling apart, but because everything feels different from what you imagined. Some seasons don't come with answers; they come with awareness. So I want to use this opportunity to outline some aspects of this season that stood out for me.


@openai: Codex for (almost) everything. It can now use apps on your Mac, connect to more of your tools, create images, learn from previous actions, remember how you like to work, and take on ongoing and repeatable tasks. [video]

computer use is broadly here & it's genuinely very cool, but it's worth flagging the structural asymmetry: unlike app-layer players, apple & google don't have to pipe everything through accessibility apis. vertical integration lets them operate deeper in the stack -- the compositor, view hierarchy, & event loop itself -- which is a real latency & reliability moat for on-device agents.

The group has gradually evolved its business model, transitioning from chip sales to a cloud services offering powered by its own infrastructure. This strategy is underpinned by major partnerships, most notably with OpenAI, under an agreement to provide up to 750 megawatts of computing capacity by 2028 in a deal valued at over $10bn. This commitment has since been expanded to over $20bn, including investment mechanisms allowing OpenAI to acquire equity in Cerebras. Valued at $8.1bn following a $1.1bn funding round in September, the company seeks to position itself against rivals such as Nvidia and AMD by highlighting the speed of its processors for real-time processing. It also benefits from growth prospects through new partnerships, including with Oracle, and has attracted attention from high-profile investors, including Sam Altman. According to its CEO, Elon Musk had also considered an acquisition of the firm as early as 2018.
Personal Computer brings the multi-model orchestration of Computer to your machine. It can work across your local files, native applications, connectors, and the web to complete complex and even continuous workflows. Personal Computer makes Perplexity Computer a more personal orchestrator, elegantly hybridizing the local and server environments for maximum security and productivity.

Juli Clover: Perplexity Computer came out earlier this year, and it's an all-in-one "digital worker" able to create and execute entire workflows. With today's upgrade, it can run directly on a Mac with access to the file system and native apps. Pressing both Command keys on a Mac will activate Personal Computer, and it responds to text or voice commands. Personal Computer can work across any Mac app, and it can see active apps and display quick actions automatically. [...] Personal Computer for Mac is rolling out to Perplexity Max subscribers starting today, with Perplexity prioritizing waitlist members. Perplexity Max is priced at $200 per month, and the new feature is not available to $20/month Pro plan subscribers.

Andrew Orr: Perplexity is moving beyond the typical chatbot model by running in the background and carrying out multi-step tasks. The feature builds on Perplexity's existing agent system, which breaks a request into smaller jobs and assigns them to different sub-agents. [...] Actions can require approval and that activity is logged, but the setup still asks users to trust an always-on agent with broad access.

Previously:

* Codex for Almost Everything
* Gemini App for Mac
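The orchestration pattern Orr describes -- split a request into sub-tasks, assign each to a sub-agent, gate sensitive actions behind approval, and log everything -- can be sketched in a few lines. This is a hypothetical minimal shape of such a system, not Perplexity's actual implementation; the task names and the `approver` callback are illustrative:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

@dataclass
class SubTask:
    name: str
    needs_approval: bool = False  # sensitive actions are gated

@dataclass
class Orchestrator:
    approver: callable            # asked before any gated action runs
    completed: list = field(default_factory=list)

    def plan(self, request: str) -> list:
        # A real planner would call a model to decompose the request;
        # here the split is hard-coded for illustration.
        return [
            SubTask("search local files"),
            SubTask("draft summary"),
            SubTask("send email", needs_approval=True),
        ]

    def run(self, request: str) -> list:
        for task in self.plan(request):
            if task.needs_approval and not self.approver(task.name):
                log.info("skipped (not approved): %s", task.name)
                continue
            log.info("running: %s", task.name)   # every action is logged
            self.completed.append(task.name)
        return self.completed
```

Running `Orchestrator(approver=lambda name: False).run("weekly report")` completes the two benign sub-tasks and skips the gated one, which is the approval-plus-audit-trail trade-off the post describes: the gate only protects actions the planner marks as sensitive, so users are still trusting the agent's own classification.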
