The latest news and updates from companies in the WLTH portfolio.
Anthropic has appointed Vas Narasimhan, CEO of Novartis, to its board of directors, a move that strengthens the group's foothold in healthcare and illustrates the growing significance of AI applications within the life sciences sector. The appointment, announced on Tuesday, was decided by the Anthropic Long-Term Benefit Trust, an independent body tasked with maintaining the balance between shareholder interests and the group's public benefit mission. With this addition, directors designated by the trust now hold a majority on the board. Anthropic highlighted Narasimhan's profile as a physician-scientist and his experience in one of the world's most heavily regulated sectors. The group noted that he has overseen the development and approval of over 35 new medicines, and that it views healthcare and life sciences as among the most promising application areas for artificial intelligence. In a LinkedIn post, the Novartis chief emphasized the necessity of responsible AI deployment, stating that these tools can accelerate the understanding of disease biology, the identification of therapeutic targets and the design of better drugs. The appointment comes as Anthropic increasingly enters the market's radar: Bloomberg reported in late March that the group was considering an IPO as early as Q4 2026, although the company has provided no official confirmation at this stage. In this context, the arrival of Narasimhan can also be read as a strengthening of governance at a pivotal moment in the group's development.
Indian IT stocks have remained under sustained selling pressure, with the Nifty IT index declining 17% so far in 2026. Infosys, Tata Consultancy Services (TCS), LTIMindtree, Coforge, Wipro, and L&T Technology Services are among the major IT players that have all fallen more than 20% on a year-to-date (YTD) basis. The downturn is largely attributed to rising concerns over potential disruption from rapid advancements in artificial intelligence (AI). The Indian IT sector is grappling with slowing growth, limited earnings visibility, and the absence of the "AI froth" that has supported valuations of global technology companies. The launch of advanced generative AI models by rivals such as Claude-maker Anthropic and Palantir has further weighed on sentiment. Indian IT companies are now bracing for another potential wave of disruption following Anthropic's preview of its new model, Mythos, which is said to deliver a significant leap in agentic software development capabilities, based on qualitative assessments. Analysts caution that such developments could pose near- to medium-term risks to demand and valuations across the sector. According to Kotak Institutional Equities, the model demonstrates a "step-change" in benchmark performance across software engineering tasks, marking a clear departure from the incremental improvements observed in recent iterations. "We believe that the model raises near- to medium-term disruption risks for IT services with the caveat that model capabilities are largely unproven in real-world scenarios due to a lack of a public release. Risks can be higher for firms with more exposure to application services," said Kotak Institutional Equities.
If such performance enhancements translate effectively into real-world applications, Kotak Equities cautioned that its estimated 3%-3.5% annual growth headwind for the IT services industry over the next three years could shift from a conservative assumption to a more realistic baseline. Downside risks could increase further, especially if large capability improvements continue in future frontier models. "Yet, stronger agentic software engineering capabilities could result in widening the gap in productivity increase between application services (also called custom application development) and other IT services segments (including BPO)," said the brokerage firm. Among Tier 1 Indian IT firms, Infosys has a higher exposure to application services, while HCL Technologies has a lower exposure. In general, mid-tier IT has a higher exposure to apps, with Persistent Systems leading the pack among the Indian names. "Mid-tier challengers can offset headwinds by share gains from slower to adapt incumbents," Kotak said. Investors have worried about disruption before, and margins and growth did weaken; but Indian IT firms adopted those technologies, tweaked delivery models, and returned to a steadier growth path, noted DSP Mutual Fund. "Businesses evolve, and legacy sectors like IT services have done so repeatedly. The AI-led transition may look distant today, but it is early to doubt their ability to adapt. Even after the recent derating, the sector still shows solid ROEs, disciplined capital allocation and reasonable valuations, making it relatively attractive versus the broader market," said the fund house. It expects that a further fall in IT stock prices could make the sector attractive on an absolute basis. "Till such times a systematic investing approach seems logical," it said.
An influencer known as Clavicular had his livestream suddenly interrupted on Tuesday evening due to a medical emergency that resulted in a 911 call, as revealed by dispatch audio obtained by TMZ. The audio reportedly documents Miami-Dade emergency services responding to the location at approximately 5:46 p.m. local time. Dispatchers described the emergency as an overdose incident involving a "20-year-old male." The event occurred while Braden Eric Peters, who goes by the online moniker "Clavicular," was broadcasting live on the platform Kick. Those tuned into the stream noticed an abrupt and concerning shift in the broadcast, which was then unexpectedly cut off, sparking widespread alarm among viewers and across social media platforms. According to sources cited by TMZ, Clavicular is believed to have experienced an overdose during the live session and was subsequently transported to a hospital. No information about his current condition has been released. The internet quickly became a hub of worry as clips from the stream and viewer reactions rapidly spread online. One post on X described the moment as it happened, writing: "Clavicular may have just suffered an OVERDOSE while streaming live on Kick. His condition is still unknown, The live stream ended as Clavicular looks completely out of it... Prayers for Clavicular, his team and his family." Footage from the stream shared on social platforms appears to show Clavicular suddenly fading mid-broadcast. He can be seen placing his hands behind his back and over his head as people around him ask if he needs water, moments before the feed cuts. Clavicular, aged 20, is known for his controversial "looksmaxxing" content, promoting extreme self-improvement methods and the idea of "mogging" others as a measure of status. His content has drawn attention for its focus on drastic physical transformation, including "bone smashing." 
He has also reportedly admitted to health complications linked to long-term steroid use, including infertility, alongside more recent substance misuse concerns. His past legal issues include an arrest last month in Florida on misdemeanor battery charges, allegedly tied to an incident in which he instigated a fight between two women for social media content. He is also reportedly under investigation for the shooting of an alligator in the Everglades. Authorities have not released further updates on his condition, and details remain limited.

New model: GPT-5.4-Cyber. "Today we're expanding this program by introducing additional tiers of access for users willing to work with OpenAI to authenticate themselves as cybersecurity defenders. Customers in the highest tiers will get access to GPT-5.4-Cyber, a model purposely fine-tuned for additional cyber capabilities and with fewer capability restrictions."

Investing.com-- U.S. federal agencies are quietly testing advanced artificial intelligence tools from Anthropic despite a Trump administration ban, as officials weigh the technology's cybersecurity potential, Politico reported on Tuesday, citing people familiar with the matter. Staff from multiple agencies, including the Commerce Department's Center for AI Standards and Innovation, have been evaluating Anthropic's new "Claude Mythos" model, which has drawn attention for its ability to detect critical software vulnerabilities that can elude human experts, the report said. The outreach comes even after U.S. President Donald Trump directed agencies earlier this year to halt use of Anthropic's technology, following disputes over the firm's stance on military and surveillance applications. According to Politico, officials view the model's capabilities as potentially vital for strengthening cyber defenses, prompting quiet efforts to assess and deploy it despite formal restrictions. Lawmakers and congressional staff have also sought briefings on the system, reflecting growing concern that adversaries could develop similar tools, the report stated. The White House said it continues engaging with AI firms to address security risks, while balancing national security considerations.


ChatGPT-maker OpenAI has launched GPT-5.4-Cyber, a variant of its flagship model GPT-5.4, which launched in March this year. The new GPT-5.4-Cyber is specifically designed for defensive cybersecurity work. Announcing the new model, OpenAI said: "This is a version of GPT‑5.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows, including binary reverse engineering capabilities that enable security professionals to analyze compiled software for malware potential, vulnerabilities and security robustness without needing access to its source code". Notably, the launch comes days after OpenAI rival Anthropic introduced its frontier AI model Mythos. Similar to Anthropic, which said it will release the Mythos model to select companies, OpenAI's GPT-5.4-Cyber will initially be available to vetted security vendors, organizations and researchers because of its more permissive design. "Because this model is more permissive, we are starting with a limited, iterative deployment to vetted security vendors, organizations, and researchers. Access to permissive and cyber-capable models may come with limitations, especially around no-visibility uses like Zero-Data Retention (ZDR)," the company said. Additionally, the company is expanding its Trusted Access for Cyber (TAC) program to bring additional tiers of access for users willing to work with OpenAI to authenticate themselves as cybersecurity defenders. For those unaware, TAC was introduced in February this year, combining automated identity verification for individuals, intended to reduce the friction of safeguards on cybersecurity-related tasks, with partnerships with a limited set of organizations for more cyber-permissive models. OpenAI said that customers in the highest tiers will get access to GPT‑5.4‑Cyber.
How GPT-5.4-Cyber and Mythos differ: GPT-5.4-Cyber is based on an existing GPT model, while Mythos is a completely new version of Claude. Anthropic plans to offer Mythos to a limited number of companies as part of its "Project Glasswing", a controlled initiative under which select organisations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity purposes. OpenAI's GPT-5.4-Cyber model is likewise part of a controlled access programme for select users and cybersecurity experts.
In today's video, I discuss recent updates affecting CoreWeave (NASDAQ: CRWV) and other AI stocks. To learn more, check out the short video and consider subscribing. *Stock prices used were the post-market prices of April 10, 2026. The video was published on April 10, 2026. Jose Najarro has positions in Alphabet, Amazon, CoreWeave, Marvell Technology, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Alphabet, Amazon, Broadcom, Marvell Technology, Micron Technology, Microsoft, and Nvidia. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Despite a formal directive from President Donald Trump ordering federal agencies to stop using Anthropic's artificial intelligence technology, multiple U.S. government bodies have been quietly evaluating the company's latest AI model, according to a Politico report. The move signals a growing tension between executive policy and the urgent demand for cutting-edge cybersecurity tools. Staff from several agencies, including the Commerce Department's Center for AI Standards and Innovation, have reportedly been testing Anthropic's newest model, referred to as "Claude Mythos." The system has captured significant attention within government circles due to its advanced capability to identify critical software vulnerabilities -- flaws that often go undetected by even the most experienced human cybersecurity professionals. The Trump administration's ban on Anthropic's technology stems from disputes over the company's position on military and surveillance applications. However, federal officials appear willing to work around that restriction, quietly assessing and potentially deploying the model based on its perceived value in strengthening national cyber defenses. Interest has extended beyond the executive branch. Members of Congress and their staff have also requested briefings on the AI system, reflecting mounting concern that foreign adversaries -- particularly well-resourced state actors -- could develop or already be leveraging similar technologies to exploit American infrastructure. The White House has acknowledged its continued engagement with artificial intelligence companies to assess and mitigate security risks, while emphasizing the need to balance those efforts against broader national security priorities. This situation highlights a larger debate unfolding across Washington: as AI capabilities rapidly advance, federal agencies are increasingly struggling to reconcile restrictive policy frameworks with the practical need to stay ahead of emerging cyber threats. 
The quiet testing of banned technology suggests that for some officials, national security may ultimately outweigh political directives.


Jose Najarro enjoys investing in the tech market, particularly the semiconductor sector. Before partnering with the Fool, Jose worked as a Senior Electrical Engineer for General Dynamics, where he had first-hand experience seeing how emerging technology can change the world. He received his Bachelor's and Master's degrees in Electrical Engineering from NJIT.

April 14 (Reuters) - Federal agencies and government officials are quietly sidestepping U.S. President Donald Trump's ban on working with Anthropic, Politico reported on Tuesday. The Commerce Department's Center for AI Standards and Innovation is actively testing Anthropic's frontier AI model Mythos' hacking prowess, the report said. Reuters could not immediately confirm the report. Anthropic, the White House and the Commerce Department did not immediately respond to a request for comment. Staff on at least three congressional committees held or requested briefings from the company to learn about Mythos' cyber scanning capabilities over the past week, the report added. Anthropic's co-founder Jack Clark said at the Semafor World Economy event on Monday that the company is discussing Mythos with the Trump administration even after the Pentagon cut off business with the U.S. AI company following a contract dispute. The nature and details of Anthropic's talks with the U.S. government, including which agencies are involved, were not immediately clear. Mythos, announced on April 7, is Anthropic's "most capable yet for coding and agentic tasks," the company said in a blog post, referring to the model's ability to act autonomously. (Reporting by Gnaneshwar Rajan in Bengaluru; Editing by Muralikumar Anantharaman and Raju Gopalakrishnan)
OpenAI on Tuesday unveiled GPT-5.4-Cyber, a variant of its latest flagship model fine-tuned specifically for defensive cybersecurity work, following rival Anthropic's announcement of its frontier AI model Mythos. Mythos, announced on April 7, is being deployed as part of Anthropic's "Project Glasswing", a controlled initiative under which select organisations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity purposes. The model has reportedly found "thousands" of major vulnerabilities in operating systems, web browsers and other software.

OpenAI's $852 billion valuation is facing skepticism from some of its own investors as the company scrambles to reorient itself around enterprise customers and fend off Anthropic, according to the Financial Times. Anthropic's annualized revenue jumped from $9 billion at the end of 2025 to $30 billion by the end of March, driven largely by demand for its coding tools. One investor who has backed both companies told the FT that justifying OpenAI's round required assuming an IPO valuation of $1.2 trillion or more -- making Anthropic's current $380 billion valuation look like the relative bargain. The secondary market tells a similar story: demand for Anthropic shares has grown nearly insatiable, while OpenAI shares are trading at a discount. Altman has been here before. During his tenure leading Y Combinator, aggressive valuation inflation left some portfolio companies financially stranded while others proved worth every penny and then some. Iconiq Capital partner Roy Luo -- whose firm has invested over $1 billion in Anthropic while holding a smaller stake in OpenAI -- told the FT where he stood. "There's room for both, but there is fundamentally a number one and a number two dynamic, and the number one will win disproportionately," he said. "We picked." OpenAI CFO Sarah Friar pushed back, telling the FT that the company's $122 billion raise -- the largest private fundraising in history -- was evidence of continued investor confidence.

Crypto exchange Kraken has hinted it is still going ahead with an initial public offering despite reports suggesting the plan was put on hold last month due to market conditions. Kraken filed for a confidential IPO with the US Securities and Exchange Commission in November, but an unconfirmed report in March suggested that the plan may have been frozen. Speaking at the Semafor World Economy 2026 conference on Tuesday, Kraken co-CEO Arjun Sethi didn't address the pause but confirmed the company had "confidentially filed" for an IPO when asked by Semafor reporter Rohan Goswami whether "there are plans to take Kraken public soon." "Is that news?" Goswami asked, to which Sethi responded: "I believe that's news." Cointelegraph reached out to Kraken to confirm whether Kraken is actively pursuing the IPO or has pushed back the timeline, but did not receive an immediate response. Sethi's comments come as German financial markets platform Deutsche Börse Group invested $200 million in Kraken's parent firm, Payward, in exchange for a 1.5% fully diluted stake on Tuesday. The deal placed Kraken's valuation at $13.3 billion, down from $20 billion in November. Kraken told Cointelegraph that the Deutsche Börse Group investment seeks to bring crypto and TradFi closer together as a "single, cohesive infrastructure for institutional clients" rather than parallel systems. Speaking more broadly about going public at the Semafor conference, Sethi dismissed the idea that Kraken's IPO may have been driven, or stalled, by policy developments in Washington. "If you live day by day, quarter by quarter, these things are meaningful," Sethi said. But "if you're thinking about your company three, five, 10 or 20 years out, none of this is meaningful. It just doesn't matter."

Just weeks after Anthropic's headline-grabbing announcement of Claude Mythos, the dangerously powerful AI model that has been restricted to a small number of infrastructure organisations, arch-rival OpenAI has unveiled GPT-5.4-Cyber, a specialised cybersecurity-focused variant of its latest GPT-5.4 model that is likewise limited to a carefully controlled rollout. Like Claude Mythos, the model is not available to the general public or through ChatGPT. Instead, access to GPT-5.4-Cyber is granted exclusively to vetted cybersecurity professionals, researchers, and organisations participating in OpenAI's Trusted Access for Cyber program. OpenAI has chosen this cautious approach to rigorously test advanced AI capabilities before broader deployment, given concerns over potential misuse of powerful models.
How does GPT-5.4-Cyber differ from standard GPT models?
GPT-5.4-Cyber is a fine-tuned variant of OpenAI's flagship GPT-5.4 large language model, optimised specifically for defensive cybersecurity tasks. Unlike the standard versions, whose strict guardrails block risky or sensitive prompts, this variant features relaxed safeguards to enable thorough evaluation. Cybersecurity experts can test the model with complex, adversarial scenarios, including attempts to identify vulnerabilities, simulate jailbreaks, and test resilience against manipulation. The primary objective is defensive enhancement: by exposing the model to real-world attack simulations under controlled conditions, OpenAI aims to strengthen its built-in protections, improve resistance to exploitation, and refine its ability to assist in threat detection, vulnerability analysis, and secure coding practices.
Additional capabilities include support for binary reverse engineering, allowing researchers to examine compiled software for malware without source code access.
Who can access GPT-5.4-Cyber?
OpenAI runs a tiered Trusted Access for Cyber program, which is expanding to include thousands of verified individuals and hundreds of security teams. Participants are selected based on expertise and must undergo identity verification. Their feedback will inform iterative improvements, mirroring ethical hacking practices in traditional cybersecurity. OpenAI says the model will not be integrated into consumer-facing platforms like ChatGPT in the near term.
OpenAI GPT-5.4-Cyber vs Anthropic Claude Mythos: how they compare
The launch of GPT-5.4-Cyber comes just weeks after Anthropic's Claude Mythos announcement. While Anthropic has pursued from-the-ground-up development for its Mythos model, which was initially limited to about 40 organisations, OpenAI has opted for rapid iteration through targeted fine-tuning of its existing GPT-5.4 architecture. Both approaches show how cybersecurity is set to become an AI-driven domain, in which AI models must counter increasingly sophisticated threats from adversarial AI systems. Unlike Anthropic CEO Dario Amodei, OpenAI's Sam Altman has not spoken publicly about the Cyber model's capabilities. Amodei and his team have reiterated that the Claude Mythos model is substantially dangerous and, hence, best limited to cybersecurity uses.

OpenAI has unveiled GPT-5.4-Cyber, a specialised version of its latest flagship model tailored for defensive cybersecurity applications, just days after rival Anthropic announced its own frontier AI model, Mythos, according to a report by Reuters. Anthropic had introduced Mythos on April 7 as part of its Project Glasswing initiative, a controlled programme that allows select organisations to access the unreleased Claude Mythos Preview model for defensive cybersecurity use. The model has reportedly identified thousands of major vulnerabilities across operating systems, web browsers and other software systems. OpenAI, the company behind ChatGPT, stated that GPT-5.4-Cyber will initially be deployed on a limited basis to vetted security vendors, organisations and researchers, citing its more permissive design compared to standard models. The company also said it is expanding its Trusted Access for Cyber (TAC) programme to include thousands of verified individual defenders and hundreds of teams working to secure critical software infrastructure, as outlined in a post on its website. As part of this expansion, OpenAI is introducing additional tiers within the TAC programme, which was first launched in February. Higher levels of verification will unlock access to more advanced capabilities. Users approved under the highest tier will gain access to GPT-5.4-Cyber, which operates with fewer restrictions on sensitive cybersecurity tasks, including vulnerability research and analysis.
