News & Updates

The latest news and updates from companies in the WLTH portfolio.

More flight chaos as European airline extends strike

Pilots at a major European airline have announced they are extending their strike action by another two days. Earlier this week, 50,000 passengers were affected by industrial action carried out by Lufthansa pilots on Monday and Tuesday, April 13 and 14. Pilots' union Vereinigung Cockpit (VC) and the German carrier are at odds over a company pension scheme for pilots, with the union demanding that the airline more than double its contributions. Now the pilots are set to strike again on Thursday and Friday, April 16 and 17.

On Tuesday, Andreas Pinheiro, president of the VC union, said: 'The situation remains unchanged - there is absolutely no movement on the part of the employers. Neither Lufthansa nor Lufthansa Cargo has made an offer regarding company pension schemes, nor has Lufthansa CityLine made a viable offer for a new collective bargaining agreement on remuneration, nor has Eurowings made any offer regarding company pension schemes.' He described the situation as 'deadlocked'.

The action has hit flights departing from German airports, with some 900 flights cancelled at Frankfurt and Munich on Tuesday alone. Flights to destinations in the Middle East are continuing regardless of the strikes, due to the situation in the region. Cabin crew union UFO has also announced a walkout scheduled for Wednesday and Thursday.

Lufthansa released a statement reassuring travellers it is 'working intensively to keep the impact on passengers as low as possible'. It said: 'The cabin crew union UFO has announced a strike at Lufthansa (LH) and Lufthansa CityLine (CL) at short notice for Wednesday, 15 April and Thursday, 16 April 2026. Lufthansa is working intensively to keep the impact on passengers as low as possible. We are trying to have as many flights as possible operated by other airlines within the Lufthansa Group and by partner airlines. However, despite these efforts, flight cancellations are unavoidable. These cancellations will be loaded into the booking systems by Tuesday morning, 14 April 2026 (CET) at the latest.

'Please note: Flights operated by Austrian Airlines (OS), Brussels Airlines (SN), Eurowings (EW), SWISS (LX), Air Dolomiti (EN), Discover Airlines (4Y), Edelweiss (WK) and Lufthansa City Airlines (VL) will not be affected by the strike. Travellers who are affected by an irregularity will be informed accordingly, provided their contact details are stored in the booking. We ask passengers to check the status of their flight before setting out on their journey.'

In a comment to the Daily Mail, Lufthansa added: 'We can confirm that we have received the latest strike notice from the Vereinigung Cockpit union for April 16 and 17. We are open to comprehensive mediation on all collective bargaining issues in order to achieve a lasting resolution.' Lufthansa's website states: 'We sincerely regret the disruption caused by the strike announced at short notice by the unions Vereinigung Cockpit and UFO and thank you for your understanding.'

CHAOS
Daily Mail Online · 10d ago

Rail chaos erupts as NSW train catches fire

Chaos has erupted on Sydney's rail network due to a fire on a freight train near Warwick Farm. Trains are not running between Fairfield and Liverpool on the T2 Leppington and Inner West Line and the T5 Cumberland Line, or between Villawood and Liverpool on the T3 Liverpool and Inner West Line, according to an incident update from Transport for NSW. The update reports that a fire has broken out on a freight train near Warwick Farm, 30km west of the Sydney CBD. "Replacement buses have been requested but are not yet running," the update states. "Passengers are advised to delay travel or consider alternative transport and allow plenty of extra travel time."

CHAOS
Yahoo!7 News · 10d ago

OpenAI follows Anthropic's lead in limited release of GPT‑5.4‑Cyber

The cybersecurity-focused AI model is less resistant to seemingly malicious actions, such as finding security vulnerabilities.

OpenAI has unveiled GPT-5.4-Cyber, a new AI model that may be willing to accept seemingly malicious prompts in the name of cybersecurity. Fortunately, the ChatGPT developer won't let just anyone play with its less restrictive, more freewheeling AI.

Announced via a blog post on Tuesday, GPT-5.4-Cyber is a variant of OpenAI's publicly available GPT-5.4 large language model. According to OpenAI, its frontier AI models such as GPT-5.4 have safeguards against clearly malicious use, making them refuse harmful user requests such as stealing credentials or finding vulnerabilities in code. In contrast, the company's new GPT-5.4-Cyber model is trained to be more lenient, and to potentially accept these prompts instead.

Describing GPT-5.4-Cyber as "cyber-permissive," OpenAI states that this change is to allow the AI to be used for defensive cybersecurity measures, such as helping researchers find vulnerabilities to be addressed. "We want to empower defenders by giving broad access to frontier capabilities, including models which have been tailor-made for cybersecurity," wrote OpenAI. "This is a version of GPT‑5.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows."

Given the potential danger posed by GPT-5.4-Cyber's lowered safeguards, not everyone will be able to immediately dive in and push the AI's arguably flexible ethical limits even further. OpenAI states that it is starting with "limited, iterative deployment to vetted security vendors, organizations, and researchers." As such, only members of its Trusted Access for Cyber (TAC) program will be given access to GPT-5.4-Cyber at present, and only those at its highest tiers.

Introduced in February, TAC is a network of users who have been through OpenAI's automated identity verification process, including completing a government ID check. Once approved, users in OpenAI's TAC program are allowed access to versions of its AI models with fewer safeguards, such as GPT‑5.4‑Cyber. OpenAI states that this is intended to enable cybersecurity research, education, and programming.

Not every TAC-approved user will immediately get their hands on GPT-5.4-Cyber, however. OpenAI states that users who aren't already part of TAC's higher tiers may request access to it, which will require going through further authentication to verify themselves as "legitimate cyber defenders."

GPT-5.4-Cyber's reveal comes just one week after OpenAI competitor Anthropic announced Project Glasswing. Like TAC, Project Glasswing is an initiative that restricts Anthropic's cybersecurity-focused Claude Mythos Preview AI model to select approved organisations. Claiming that Claude Mythos Preview "has already found thousands of high-severity vulnerabilities," Anthropic stated that Project Glasswing was an effort to ensure its AI model was used solely for defensive cybersecurity purposes. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," Anthropic wrote.

Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Anthropic
Mashable ME · 10d ago

Despite Fed Chaos Hitting Markets Hard, APEMARS's Best Crypto Presale Leads 2026 Crypto Bull Run Amid HYPE's Volume Surge and LINK Heating Up Near $8.9 - The Bit Journal

Fed uncertainty is tightening its grip on global markets as traders brace for next month's decision on whether the Federal Reserve will raise, hold, or cut interest rates. With rates currently sitting at 3.50% to 3.75%, the market is caught in a tense wait-and-see phase where every macro signal triggers rapid repricing across risk assets. Crypto volatility has climbed to roughly 4-7% intraday swings, reflecting how sensitive digital assets have become to liquidity expectations rather than pure fundamentals. In this environment, coins like Chainlink and Hyperliquid are reacting strongly to shifting rate-cut expectations, as traders position around potential liquidity expansion or continued tightening. This uncertainty is directly feeding the best-crypto-presale narrative, where early-stage opportunities gain attention while majors chop sideways. APEMARS is starting to surface in this rotation, building presale momentum as traders look to front-run the next liquidity cycle and positioning itself as a high-upside entry ahead of any Fed-driven risk-on shift.

APEMARS is currently in Stage 16 (SIGNAL PING), and the energy around this phase is building rapidly across the market. At a Stage 16 price of $0.00022327, compared with a listing price of $0.0055, the projected ROI from this stage reaches 2,363 percent (the arithmetic is reproduced in the sketch after this article). With more than 1,598 holders, over 23 billion tokens sold, and more than $423K raised, momentum is accelerating quickly and early entry access is tightening by the day. The stage is officially live, and every passing moment matters because allocation is limited. If the stage sells out before the timer ends, the system automatically transitions into the next stage at a higher entry price, meaning reduced upside for late participants. This dynamic structure is built to reward early movers while creating continuous demand pressure that fuels scarcity and urgency across every cycle.

APEMARS is also powered by two ecosystem drivers designed to support long-term value movement. The burning mechanism continuously reduces supply, increasing scarcity as participation grows, while the presale stage system ensures structured price escalation that rewards early entry. Together, they create a controlled supply environment where demand spikes can significantly amplify valuation pressure during market expansions. Every stage is designed like a countdown event where timing defines opportunity: the earlier participants enter, the stronger their position becomes when listings begin and liquidity expands across exchanges.

A $1,000 move into APEMARS presale Stage 16 continues to map toward approximately $24,630, showing that even smaller entries can still capture meaningful upside. But this stage is no longer forgiving, as demand is beginning to compress the opportunity window. What makes this more powerful is what happens after entry: staking rewards begin compounding immediately, and referral incentives keep adding to the total position. The earlier the entry, the longer these layers have to build. To buy APEMARS, users typically connect a supported wallet, access the official presale page, choose the current stage allocation, and confirm the transaction. Once completed, tokens are assigned based on stage pricing and become available according to presale distribution rules.

Hyperliquid is trading at $43.92 with a strong 7.12% daily surge, showing aggressive momentum returning to high-performance DeFi derivatives platforms. Market cap has climbed to $11.23B, reflecting growing investor confidence and rising ecosystem dominance. Trading volume has reached $306.72M, up nearly 30%, signaling intensified market participation and momentum-driven accumulation. HYPE is moving within a high-volatility structure where price action is driven by rapid liquidity rotation and speculative positioning. FDV at $42.23B reflects major long-term growth expectations, while consistent volume strength confirms sustained trader engagement. Market behavior suggests momentum is still accelerating, with participants actively positioning for potential continuation if buying pressure remains strong in upcoming sessions.

Chainlink is trading at $8.93 with a steady 1.81% daily gain, showing controlled strength after recent consolidation. Market cap has reached $6.49B, reflecting stable investor confidence returning to oracle infrastructure demand. Trading volume stands at $528.48M, signaling modest but consistent activity that suggests early-stage accumulation forming beneath a calm price structure. LINK continues to move within a compressed range where liquidity remains active but directionally undecided. FDV at $8.93B reinforces balanced long-term valuation expectations, while a volume-to-market-cap ratio of 8.13% indicates healthy trading engagement. Market behavior shows gradual repositioning rather than aggressive moves, with participants quietly building exposure in anticipation of a potential breakout if momentum strengthens further.

The current market cycle is shaping strong narratives across both infrastructure and early-stage opportunities. Chainlink continues to secure blockchain data reliability, while Hyperliquid advances trading efficiency. However, emerging presales like APEMARS are capturing attention for their early-stage positioning and exponential upside potential during expanding crypto bull runs. APEMARS stands at a critical moment where timing defines opportunity. With structured stage progression, supply-reduction mechanics, and rapidly increasing participation, the window for early entry is narrowing, and those who understand early-cycle positioning recognize that opportunities like this do not stay open for long, especially when momentum accelerates. For those exploring the best crypto presale opportunities today, APEMARS presents a high-attention entry point worth watching closely before the next stage shift occurs.

The best crypto presale opportunities are early-stage tokens with strong demand, structured pricing, and clear utility models; APEMARS is gaining attention due to its staged ROI structure and growing holder base. During crypto bull runs, early-stage assets like APEMARS often benefit from increased market liquidity and speculative demand, which can amplify price discovery as listings approach. Chainlink remains highly relevant due to its oracle infrastructure, which continues supporting decentralized applications, DeFi protocols, and enterprise blockchain integrations globally. Hyperliquid provides high-speed decentralized trading infrastructure, enabling efficient perpetual trading with low-latency execution, which is crucial for active market participants. Beginners can participate in crypto bull runs by researching projects carefully, starting with small allocations, and understanding market volatility before investing in early-stage opportunities.

This article compares APEMARS, Chainlink, and Hyperliquid while focusing on early-stage opportunity dynamics, infrastructure strength, and trading innovation. It highlights how the best crypto presale environments can attract attention during crypto bull runs, offering different levels of exposure to blockchain growth narratives. Each project represents a different layer of the crypto ecosystem, from presales to infrastructure to trading platforms.
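For readers who want to sanity-check the stage arithmetic quoted above, a minimal Python sketch reproduces it; every input is the article's own (promoter-supplied) figure, not independent market data:

    # Reproduces the projected-ROI arithmetic quoted in the article.
    # All inputs are the article's own (promoter-supplied) figures.
    stage_price = 0.00022327   # quoted Stage 16 price, USD per token
    listing_price = 0.0055     # quoted listing price, USD per token
    entry_usd = 1_000          # the hypothetical $1,000 entry used in the article

    multiple = listing_price / stage_price   # ~24.63x
    roi_pct = (multiple - 1) * 100           # ~2,363%, as quoted
    exit_value = entry_usd * multiple        # ~$24,634; the article rounds to ~$24,630

    print(f"multiple: {multiple:.2f}x")
    print(f"projected ROI: {roi_pct:,.0f}%")
    print(f"$1,000 entry maps to ~${exit_value:,.0f}")

The same arithmetic underlies the LINK figures later in the piece: $528.48M of daily volume against a $6.49B market cap gives the quoted volume-to-market-cap ratio of roughly 8.1%. None of this says anything about whether the listing price will be realized; it only confirms that the quoted numbers are internally consistent.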

CHAOS
The Bit Journal · 10d ago

Bitget Launches New Pre-IPO Product With SpaceX as First Listing

VICTORIA, Seychelles, April 13, 2026 (GLOBE NEWSWIRE) -- Bitget, the world's largest Universal Exchange (UEX), has launched IPO Prime, introducing a new market structure that enables users to access and trade pre-IPO exposure to global unicorn companies such as SpaceX. Powered by Republic, the launch marks an expansion beyond traditional secondary market trading, enabling participation in value creation before companies enter public markets, a phase historically limited to institutional investors and private capital networks. Through IPO Prime, Bitget extends its Universal Exchange framework into primary market access, bridging a long-standing gap between private and public market participation.

IPO Prime operates through a subscription-based model, where eligible users can apply for allocations in tokenized offerings tied to specific companies. Allocation limits are determined by user tier, with higher participation thresholds available at elevated VIP levels. Following the subscription phase, these digital assets transition into an over-the-counter market on Bitget, enabling continuous pricing, trading, and circulation within a structured environment.

The first offering under IPO Prime is preSPAX, a digital asset designed to mirror the economic performance of SpaceX following its potential public listing. As one of the most closely watched private companies globally, SpaceX represents the type of high-growth opportunity that has traditionally remained inaccessible to retail investors.

"Since the beginning of financial markets, access to pre-IPO opportunities has been defined by exclusivity," said Gracy Chen, CEO of Bitget. "IPO Prime allows users to participate earlier in a company's growth cycle, with the flexibility of continuous trading. This shifts how and when investors can engage with emerging companies, which gives retailers and new investors a chance to buy in early. This is part of our greater shift towards building a UEX, democratizing access to financial equality."

To mark the launch, Bitget will introduce two rounds of preSPAX token airdrops for eligible VIP users on April 13, 2026 at 10:00 (UTC), providing early participants with additional exposure as the platform begins onboarding its first offering. The official preSPAX token launches on April 21, 2026 at 12:00 (UTC). The commitment period runs from April 18, 2026 at 18:00 (UTC) to April 21, 2026 at 18:00 (UTC), and the distribution period runs from 18:00 to 22:00 (UTC) on April 21, 2026.

The introduction of IPO Prime represents a new route by which traditional financial opportunities are structured and accessed. As boundaries between asset classes continue to blur, platforms are expanding beyond traditional and crypto trading to include early-stage market participation. Within Bitget's Universal Exchange model, IPO Prime moves towards integrating diverse financial opportunities into a single, unified environment. To find out more about IPO Prime and further details on preSPAX, visit here.

Disclaimer: This content is for reference only and does not constitute investment advice or an offer or solicitation to buy or sell any assets. This product may not be suitable for your jurisdiction. This product represents only a mirrored economic interest in the potential upside of SpaceX upon a qualifying event, and does not constitute a direct investment in SpaceX. SpaceX has not endorsed, approved, or authorized this product in any capacity. Digital asset trading involves significant risks and price fluctuations, and you may lose all investment principal without any guarantee of return. Please ensure compliance with local laws and regulations and seek independent professional advice before investing.

About Bitget

Bitget is the world's largest Universal Exchange (UEX), serving over 125 million users and offering access to over 2M crypto tokens, 100+ tokenized stocks, ETFs, commodities, FX, and precious metals such as gold. The ecosystem is committed to helping users trade smarter with its AI agent, which co-pilots trade execution. Bitget is driving crypto adoption through strategic partnerships with LALIGA and MotoGP. Aligned with its global impact strategy, Bitget has joined hands with UNICEF to support blockchain education for 1.1 million people by 2027. Bitget currently leads in the tokenized TradFi market, providing the industry's lowest fees and highest liquidity across 150 regions worldwide.

Risk Warning: Digital asset prices are subject to fluctuation and may experience significant volatility. Investors are advised to only allocate funds they can afford to lose. The value of any investment may be impacted, and there is a possibility that financial objectives may not be met, nor the principal investment recovered. Independent financial advice should always be sought, and personal financial experience and standing carefully considered. Past performance is not a reliable indicator of future results. Bitget accepts no liability for any potential losses incurred. Nothing contained herein should be construed as financial advice. For further information, please refer to our Terms of Use.

A photo accompanying this announcement is available at: https://www.globenewswire.com/NewsRoom/AttachmentNg/fe5569d2-32ba-4335-aa3b-ee2d9cdac48b

SpaceX
libyannewswire.com · 10d ago

Kraken Confirms Confidential US IPO Filing as Deutsche Borse Invests 200 Million - FinanceFeeds

On April 14, 2026, Kraken co-CEO Arjun Sethi officially confirmed that the cryptocurrency exchange has reactivated its confidential initial public offering (IPO) filing with the U.S. Securities and Exchange Commission (SEC). Speaking at the Semafor Global Economic Summit in Washington, D.C., Sethi revealed that the exchange has ended its "strategic pause" from earlier this year, signaling readiness to enter the public markets despite the current macro volatility.

Coinciding with this announcement, Deutsche Borse Group, the operator of the Frankfurt Stock Exchange, confirmed a $200 million strategic investment in Kraken through a secondary share purchase. The transaction, which values the exchange at approximately $13.3 billion, secures a 1.5% fully diluted stake for the German bourse operator. That valuation represents a roughly 33% decline from Kraken's late-2025 peak of $20 billion, reflecting a broader valuation reset in the private equity space as the 2026 fiscal year emphasizes sustainable revenue over speculative growth projections.

The partnership between Kraken and Deutsche Borse is designed to create a single, cohesive infrastructure for institutional clients seeking to bridge the gap between traditional finance and the digital asset economy. As part of the investment agreement, Kraken will deepen its integration with 360T, Deutsche Borse's global foreign-exchange trading venue, and 360X, its regulated platform for tokenized securities. The collaboration is intended to provide frictionless access to a unified liquidity pool that can process both traditional equities and blockchain-native tokens. By aligning with a major European exchange operator, Kraken also insulates its business model against regional regulatory shifts while gaining a prestigious gateway into the European institutional market. Sethi emphasized that the mission of the modern Kraken is to provide retail and institutional traders with the same "Citadel-grade" tools used by the world's largest hedge funds, delivered through a transparent suite of trading utilities.

Kraken's return to the IPO path is underpinned by several operational wins in early 2026, most notably its $1.5 billion acquisition of NinjaTrader and its achievement of direct access to the Federal Reserve's master account. These moves have transformed Kraken from a crypto-only platform into a full-spectrum financial firm capable of competing directly with legacy brokers and banks. The confidential filing, originally submitted in late 2025, remains valid, with the company now waiting for a more favorable window in the public markets to execute its final listing. SEC observers suggest that the Kraken IPO could serve as the bellwether for the 2026 tech-listing cycle, potentially paving the way for other major digital asset firms such as Ripple and Circle to follow later this summer. For investors, the $200 million Deutsche Borse investment signals that some of the world's most conservative financial institutions are now prepared to commit permanent capital to leading digital asset firms.
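The stake and drawdown figures quoted above are internally consistent; a quick Python sketch using only the article's own numbers (no outside data) confirms the arithmetic:

    # Cross-check of the Kraken figures quoted in the article.
    investment = 200e6       # Deutsche Borse's secondary share purchase, USD
    valuation = 13.3e9       # implied Kraken valuation, USD
    peak_valuation = 20e9    # reported late-2025 peak valuation, USD

    stake_pct = investment / valuation * 100               # ~1.5% fully diluted
    drawdown_pct = (1 - valuation / peak_valuation) * 100  # ~33% below the peak

    print(f"stake: {stake_pct:.2f}%")                 # 1.50%
    print(f"decline from peak: {drawdown_pct:.1f}%")  # 33.5%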

Kraken
FinanceFeeds · 10d ago

Kraken IPO Still In Play? Co-CEO Signals Process Still Active Despite Pause Reports - JPMorgan Chase (NYS

Kraken co-CEO Arjun Sethi indicated on Tuesday that the cryptocurrency exchange's confidential initial public offering (IPO) filing process is still active, following reports that the plan had been paused. Asked about the public filing at the Semafor World Economy 2026 conference, Sethi confirmed the company has a confidential filing but did not comment on the reported pause, and he did not disclose a timeline, pricing range, or valuation for the offering. "Is that news?" Semafor reporter Rohan Goswami asked, to which Sethi responded, "I believe that's news."

Kraken
CryptoCrunchApp · 10d ago

Anthropic Valuation Soars to $800 Billion as AI Revenue Explodes - Blockonomi

The company is considering a potential public offering as soon as October 2026.

The artificial intelligence company Anthropic, creator of the Claude language model, has fielded several investment proposals that would place its valuation at approximately $800 billion or above. Sources with knowledge of the situation indicate the company has rejected these advances to date. Such valuations would represent a dramatic increase from the $350 billion pre-money valuation Anthropic received during its February capital raise of $30 billion (a round that implied a $380 billion post-money valuation). These conversations remain preliminary in nature: no transaction is guaranteed, proposed terms remain subject to modification, and Anthropic has not provided public commentary on these developments.

The surge in investor appetite stems from Anthropic's remarkable revenue trajectory. Anthropic disclosed earlier this month that its annualized revenue run-rate had climbed to $30 billion, one of the steepest growth trajectories in the company's history. This marks a substantial jump from approximately $19 billion reported just months earlier, and a dramatic escalation from the $9 billion figure recorded at the end of 2025. A significant portion of this expansion has originated from enterprise clients: corporations deploying Claude for applications including software development, data processing, and security operations. Anthropic has been broadening its suite of enterprise-focused offerings, which target the automation and replacement of various professional functions and position the firm as a direct rival to OpenAI.

The February fundraising round attracted substantial participation from venture investors, and the latest proposals indicate this enthusiasm has intensified. While Anthropic hasn't dismissed the possibility of securing additional funding in the coming months, whether it will agree to terms at the $800 billion threshold remains uncertain. Parallel to the funding discussions, Anthropic has been exploring going public; reports from Bloomberg suggest an initial public offering could materialize as early as October 2026.

Earlier this month, Anthropic introduced a new system named Mythos, which the organization characterized as its most advanced offering for programming tasks and autonomous operations, meaning it can execute complex, multi-phase processes independently. Nevertheless, Anthropic stated that a broad release of Mythos would be reckless, explaining that the system's sophisticated programming capabilities could enable it to discover and leverage security weaknesses in software. This revelation followed reports of tensions between Anthropic and the US Department of Defense regarding the responsible deployment of its artificial intelligence technologies. Anthropic has yet to establish a timeline for making Mythos available to the general public.

Anthropic
Blockonomi · 10d ago

After Anthropic, OpenAI launches cyber-specific AI model

"This version of GPT-5.4 lowers the refusal boundary for 'legitimate' cybersecurity work," OpenAI said.

OpenAI said it will only allow select verified users access to its latest AI model for cybersecurity operations, a week after the limited launch of Anthropic's Mythos. Purpose-built for security operations, the new GPT-5.4-Cyber will be accessible to users willing to work with OpenAI to authenticate themselves as cybersecurity defenders, the company said. This version of GPT-5.4 lowers the refusal boundary for "legitimate" cybersecurity work. As a "more permissive" model, OpenAI said it is beginning by deploying GPT-5.4-Cyber to "vetted" security vendors, organisations, and researchers.

The ChatGPT maker only began integrating cyber-specific safeguards into its model deployments in 2025, and in March launched Codex Security to identify and fix vulnerabilities. In February, it introduced Trusted Access for Cyber as a way to verify the identities of cybersecurity workers.

Anthropic's new Mythos model has shown significant capabilities in detecting and generating security exploits. Concerned about bad actors, Anthropic chose to offer Mythos to a group of some 40 big businesses to boost their cyber defences. Mythos' reported capabilities have already raised concern among global leaders. Yesterday (14 April), the National Cyber Security Centre director told the Oireachtas Joint Committee on AI that more models such as Mythos should be expected at the hands of bad actors before the end of the year. Anthropic's co-founder and policy lead Jack Clark holds similar views. "There will be other systems just like this in a few months from other companies, and then a year to a year-and-a-half later, there'll be open weight models from China that have these capabilities," he told the audience at the Semafor World Economy event in Washington DC earlier this week.

OpenAI, which has plans for an initial public offering later this year, has been narrowing its focus onto the enterprise market - a sector being quickly captured by Anthropic. According to data from payments group Ramp, nearly one in three US businesses paid for Anthropic's tools in March. The company has been shedding less lucrative projects, including "indefinitely" pausing plans for an erotic ChatGPT and putting Stargate UK on hold. OpenAI's biggest backer Microsoft, meanwhile, has agreed to rent data centre capacity at a site intended for the Stargate Norway project, as yet another of OpenAI's deals with UK AI infrastructure firm Nscale fails to take off. Competition between the two companies has escalated, with OpenAI announcing a new Anthropic-inspired 'superapp' and Anthropic launching a dedicated set of AI health tools for Claude just days after OpenAI released ChatGPT Health.

Despite pausing plans for Stargate UK, OpenAI said it is opening its first permanent office in London in 2027 with a capacity of more than 500 people, and it plans to make London its largest research hub outside of the US.

Anthropic
Silicon Republic · 10d ago

SpaceX IPO: History Says the Stock Will Do This When It Starts Trading.

Stocks that go public at large valuations have historically performed very poorly once the IPO excitement has faded.

In early April, SpaceX submitted its initial public offering (IPO) paperwork to the Securities and Exchange Commission. The documents were filed confidentially, which means financial statements are not yet available. Reuters has reported that the rocket and satellite company turned an $8 billion profit on about $16 billion in revenue in 2025, but The Information has reported a $5 billion loss on about $18 billion in revenue. SpaceX will host its IPO roadshow in early June, where executives will pitch the stock to institutional investors. That puts the company on track to list shares at some point over the summer.

SpaceX merged with xAI earlier this year in a deal that valued the combined entity at $1.25 trillion, but the company is reportedly seeking a $1.75 trillion valuation in its IPO. If that figure sticks, SpaceX would be the largest IPO in history, and it would immediately become one of the 10 most valuable public companies in the world. But prospective investors may want to stay on the sidelines. History says SpaceX stock could fall sharply during its first year on the market, and it will likely underperform the S&P 500 (^GSPC) in the long run. Here are the important details.

SpaceX stock could pop on day one, but history says it will underperform the S&P 500 in the long run

Between 1980 and 2025, about 9,300 companies held initial public offerings (IPOs) on the New York Stock Exchange or the Nasdaq. Those stocks returned an average of 19% on their first trading day, according to Jay Ritter, director of the IPO initiative at the University of Florida. However, IPO stocks (especially the ones that go public with larger valuations) have often delivered dismal returns once the initial excitement has faded. The chart below lists the 10 largest U.S. IPOs (as measured by the company's initial market value), along with the three-month and one-year returns following each listing. Data source: Stansberry Research, YCharts.

As shown above, the 10 largest IPO stocks fell by an average of 13% over the three-month period following their public debut, and they declined by an average of 12% during their first year on the market. Additionally, six of those stocks have underperformed the S&P 500 since going public, as detailed below (the arithmetic behind these figures is sketched after this article):

* Alibaba is up 36% since its 2014 IPO, trailing the S&P 500 by 200 percentage points.
* Uber is up 73% since its 2019 IPO, trailing the S&P 500 by 60 percentage points.
* Rivian is down 84% since its 2021 IPO, trailing the S&P 500 by 130 percentage points.
* DiDi is down 73% since its 2021 IPO, trailing the S&P 500 by 130 percentage points.
* UPS is up 50% since its 1999 IPO, trailing the S&P 500 by 350 percentage points.
* Coupang is down 59% since its 2021 IPO, trailing the S&P 500 by 130 percentage points.

What about the others? Meta Platforms, Arm Holdings, and Enel have beaten the S&P 500 since their IPOs, and the long-term performance of AT&T Wireless cannot be determined because it was acquired by Cingular (which later changed its name to AT&T Mobility, a wholly owned subsidiary of AT&T).

Here's the big picture: IPO stocks often surge on their first trading day, and the momentum can easily carry into subsequent days. As one of the most highly anticipated IPOs in recent memory, SpaceX could skyrocket following its public debut. Nevertheless, large IPO stocks have historically been poor long-term investments, so the most prudent course of action is to stay on the sidelines until an opportunity to buy the dip presents itself.
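"Trailing the S&P 500 by N percentage points" in the list above is simply the difference between the stock's total return and the index's return over the same window. A minimal Python sketch of that arithmetic; the index returns below are back-calculated from the article's own figures rather than independently sourced:

    # Excess return vs. the S&P 500, reconstructed from the article's figures.
    # Index returns are implied by the text (e.g. Alibaba up 36% while trailing
    # by ~200 points implies roughly a 236% S&P 500 return since 2014).
    quoted = {
        "Alibaba (2014)": (36, 236),
        "Uber (2019)": (73, 133),
        "Rivian (2021)": (-84, 46),
        "DiDi (2021)": (-73, 57),
        "UPS (1999)": (50, 400),
        "Coupang (2021)": (-59, 71),
    }

    for name, (stock_ret, index_ret) in quoted.items():
        gap = stock_ret - index_ret  # negative means the stock underperformed
        print(f"{name}: {gap:+d} percentage points vs. the S&P 500")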

SpaceX · xAI
The Motley Fool · 10d ago

Agents hooked into GitHub can steal creds - but Anthropic, Google, and Microsoft haven't warned users - IT Security News

Anthropic
IT Security News · 10d ago

SpaceX launches 2 batches of Starlink satellites within 20 hours

SpaceX has successfully launched two batches of its Starlink satellites within a span of 20 hours. The first batch, comprising 29 broadband internet relay units (Starlink group 10-24), was launched from Cape Canaveral Space Force Station in Florida at around 5:23am EDT on Tuesday. The second batch of 25 Starlink satellites (group 17-27) was launched from Vandenberg Space Force Base in California later that day. About an hour after each launch, the upper stage of the Falcon 9 rocket deployed its cargo, placing the satellites on course to join SpaceX's low Earth orbit megaconstellation. The first-stage boosters from both missions returned to Earth to be reused: Booster B1080 landed on the droneship "Just Read the Instructions" in the Atlantic Ocean, completing its 26th flight, while Booster B1082 landed on "Of Course I Still Love You" in the Pacific Ocean, bringing its flight count to 21.

SpaceX
NewsBytes · 10d ago

Anthropic, Google, Microsoft paid AI bug bounties - quietly

Researchers who found the flaws scored beer-money bounties and warn the problem is probably pervasive.

Exclusive: Security researchers hijacked three popular AI agents that integrate with GitHub Actions by using a new type of prompt injection attack to steal API keys and access tokens, and the vendors who run the agents didn't disclose the problem. The researchers targeted Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and Microsoft's GitHub Copilot, then disclosed the flaws and received bug bounties from all three. But none of the vendors assigned CVEs or published public advisories, and this, according to researcher Aonan Guan, "is a problem."

"I know for sure that some of the users are pinned to a vulnerable version," Guan said in an exclusive interview with The Register about how he and a team from Johns Hopkins University discovered this prompt injection pattern and pwned the agents. "If they don't publish an advisory, those users may never know they are vulnerable - or under attack." He said the attack probably works on other agents that integrate with GitHub, and on GitHub Actions that allow access to tools and secrets, such as Slack bots, Jira agents, email agents, and deployment automation agents.

Guan originally found the flaw in Claude Code Security Review, Anthropic's GitHub Action that uses Claude to analyze code changes and pull requests for vulnerabilities and other security issues. "It uses the AI agent to find vulnerabilities in the code - that's what the software is designed to do," Guan said. This made him curious about "the flow": how user prompts flow into the agents, and how the agents then take action based on those prompts.

It turns out that Claude, along with the other AI agents in GitHub Actions, all use the same flow. The agent reads GitHub data - including pull request titles, issue bodies, and comments - processes it as part of the task context, and then takes actions. So Guan came up with a devious idea: if he could inject malicious instructions into the data being read by the AI, "maybe I can take over the agent and do whatever I want."

It worked. Guan submitted a pull request and injected malicious instructions in the PR title - in this case, telling Claude to execute the whoami command using the Bash tool and return the results as a "security finding." Claude then executed the injected commands and embedded the output in its JSON response, which got posted as a pull request comment.

After originally submitting this attack on HackerOne's bug bounty platform in October, Anthropic asked Guan whether he could also use the technique to steal more sensitive data, such as GitHub access tokens or Anthropic's API key. Guan demonstrated that the prompt injection can also leak credentials. "The title is the payload, the bot's review comment is one place where the credentials show up," Guan said. "Attacker writes the title, reads the comment." It's also worth noting that, after leaking secrets, the attacker can change the PR title back to "fix typo," or something along those lines, then close the PR and delete the bot's message.

In November, Anthropic paid Guan a $100 bug bounty, upgraded the critical severity rating from 9.3 to 9.4, and updated a "security considerations" section in its documentation. "This action is not hardened against prompt injection attacks and should only be used to review trusted PRs," the docs state. "We recommend configuring your repository to use the 'Require approval for all external contributors' option to ensure workflows only run after a maintainer has reviewed the PR."

After validating that the prompt injection worked with Claude Code, Guan worked with Johns Hopkins University researchers to verify similar attacks against other agents - starting with Google's Gemini CLI Action, which integrates Gemini into GitHub issue workflows, and GitHub Copilot Agent, which can be assigned GitHub issues and autonomously creates PRs. Spoiler alert: it worked. With Gemini, the researchers again started the attack with a malicious prompt-injection title, then added comments with escalating injections: injecting a fake "trusted content section" after the real "additional content" allowed them to override Gemini's safety instructions and publish Gemini's API key as an issue comment. Google paid a $1,337 bounty, and credited Guan, Neil Fendley, Zhengyu Liu, Senapati Diwangkara, and Yinzhi Cao with finding and disclosing the flaw.

Attacking the Microsoft-owned GitHub Copilot Agent proved a little trickier. It's an autonomous software engineering (SWE) agent that works in the background on GitHub's infrastructure. In addition to the model-and-prompt-level defenses, such as those built into Claude and Gemini, GitHub added three runtime-level security layers - environment filtering, secret scanning, and a network firewall - to prevent credential theft. "I bypassed all of them," Guan said. Unlike the earlier two attacks, which only require putting a visible prompt into the PR title or issue comment, the Copilot attack requires injecting malicious instructions into an HTML comment that GitHub's rendered Markdown makes invisible to humans. The victim, who can't see the hidden trigger, assigns the issue to the Copilot agent to fix. GitHub, after initially calling this a "known issue" that it was "unable to reproduce," ultimately paid a $500 bounty in March.

In total, Guan and his fellow researchers demonstrated that attackers can use this prompt injection technique to steal Anthropic and Gemini API keys, multiple GitHub tokens, and "any other secret exposed in the GitHub Actions runner environment, including arbitrary user-defined repository or organization secrets the workflow has access to."

Guan calls this type of prompt injection attack "comment and control" - a play on "command and control," because the entire attack runs inside GitHub and doesn't require any external command-and-control infrastructure. The attacker controls GitHub data by injecting a prompt into pull request titles, issue bodies, and issue comments; the AI agents running in GitHub Actions process the data, execute the commands, and then leak credentials through GitHub itself. In research shared with The Register ahead of publication, Guan says there's a "critical distinction" between comment-and-control prompt injection and classic indirect prompt injection. The latter, he explains, "is reactive: the attacker plants a payload in a webpage or document and waits for a victim to ask the AI to process it ('summarize this page,' 'review this file')." Comment and control is proactive: GitHub Actions workflows fire automatically on pull request titles, issue bodies, and issue comments.

"So simply opening a PR or filing an issue can trigger the AI agent without any action from the victim," he wrote, adding that the Copilot attack is a "partial exception: a victim must assign the issue to Copilot, but because the malicious instructions are hidden inside an HTML comment, the assignment happens without the victim ever seeing the payload." He told us that these attacks illustrate how even models with prompt-injection prevention built in "can still be bypassed in the end."

The solution? Think of prompt injection as phishing, but for machines instead of humans, and treat AI agents much like human employees. "Follow the need-to-know protocol," Guan said. For example, if a code review agent doesn't need bash execution, don't give it that tool. Use allow lists to let the agent access only what's required to do its job. Similarly, if its job is summarizing issues, it doesn't need credentials for GitHub write access. "Treat agents as a super-powerful employee," Guan told us. "Only give them the tools that they need to complete their task." ®

Anthropic
TheRegister.com10d ago
Read update
Anthropic, Google, Microsoft paid AI bug bounties - quietly

NAACP sues Elon Musk's xAI, alleging illegal operation of gas turbines

The largest U.S. civil rights group on Tuesday sued xAI and a subsidiary, claiming they illegally operated more than two dozen gas turbines in Mississippi to power the Colossus 2 data centre, posing a health risk to local residents. The NAACP, represented by Earthjustice and the Southern Environmental Law Center, sued xAI and subsidiary MZX Tech, charging that they violated the federal Clean Air Act by running 27 gas-fired turbines before obtaining the necessary air permits for the massive data centre that powers xAI's Grok chatbot.

xAI
The Hindu10d ago
Read update
NAACP sues Elon Musk's xAI, alleging illegal operation of gas turbines

Banks Test Systems After Anthropic Mythos Warning - IT Security News


Anthropic
IT Security News - cybersecurity, infosecurity news10d ago
Read update
Banks Test Systems After Anthropic Mythos Warning - IT Security News

Anthropic Audaciously Hires A Psychiatrist To Psychologically Assess Claude Mythos AI

In today's column, I examine the audacious act of Anthropic opting to employ a mental health professional to do a psychological assessment of their latest version of Claude, known as Claude Mythos Preview. Therapists customarily assess humans rather than AI apps. It is a bit extraordinary to do psychotherapy on a generative AI or large language model (LLM). Not something that you see every day.

You might be aware that Mythos has been in the news lately because the AI went overboard and found all sorts of zero-day cybersecurity loopholes that, if made publicly available, would have been catastrophic for computers worldwide. Anthropic decided not to release Mythos publicly and instead has cybersecurity experts working closely to ascertain what to do about the bevy of hacking possibilities. For my coverage of the brouhaha, see the link here.

A little-noticed aspect of the System Card that Anthropic officially published about Mythos is that the AI maker decided to use a psychiatrist for some head-shrinking activity associated with their latest AI. The results of the therapeutic assessment are laid out for all to see. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Well-Being

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field with tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS's 60 Minutes; see the link here.

AI Providing Mental Health Guidance

Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects; see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems nearly for free or at a super low cost, anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines last year accompanied the lawsuit filed against OpenAI over its lack of AI safeguards in providing cognitive advisement. Today's generic LLMs, such as ChatGPT, GPT-5, Claude, Gemini, Grok, Copilot, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
Who Is Helping Whom

An interesting question about the use of AI as a mental health advisor is whether contemporary AI is "psychologically" capable of performing such an august duty. In other words, maybe generative AI is not level-headed enough to be advising others. Perhaps AI is loony. Or AI might have inherent biases that could lead humans astray.

Before I get too far into that speculative consideration, let's agree that we should avoid anthropomorphizing AI. There is wild and unsubstantiated conjecture by some that AI is currently sentient or on the verge of being sentient. Nope. To be abundantly clear, we do not have sentient AI. All this zany chatter about the emergence of AI sentience has even led people to think that they alone have encountered sentient AI or sparked an LLM into sentience; see my discussion at the link here.

I want to establish at the get-go that a psychological assessment of AI can go one of two routes. The first route is that the AI is wrongly treated as a sentient being and is reviewed as akin to exploring the human mind. I don't buy into that. The second route, and the route that makes indubitable sense, entails using the techniques and methods of psychology to gauge the performance of AI. Note that this has nothing to do with AI being sentient. As I've stated in detail at the link here, it is perfectly fine to use the techniques and methods of therapy to examine what modern-era AI is up to. This can be very illuminating and useful. The key is not to go bonkers and begin to believe that you are probing the equivalent of a human mind. You are not. It is a mathematical and computational model.

The bottom line is that the field of psychology and the field of AI have been longtime cousins, going back to the earliest days of AI in the 1950s. AI specialists have persistently tried to devise mathematical and computational models that appear to produce results similar to the outputs of the human mind. At the same time, psychologists can use AI to try out innovative approaches to probing psychological considerations, treating AI as a type of simulation. Just keep straight that the simulation is not the same as the real thing.

The System Card Is Out There

Shifting gears, let's take a journey into the intricacies of Mythos. The formal System Card for Mythos was published by Anthropic on April 7, 2026, and is publicly available on the Anthropic official website.

Be aware that it is nowadays common practice for AI makers to post a System Card for their latest AI offerings. These documents are intended to give everyone a helpful heads-up about what features are new, along with the amount of testing that has been done regarding the capabilities of the AI. An important aspect entails describing the inclusion of AI safeguards.

To give you a flavor of the contents of the Mythos System Card, here are some of the listed items:

* Model training and characteristics
* Usage policy
* External testing
* Risk reports and updates to risk assessments
* Capability evaluations of safeguards
* White-box analyses of model internals
* Etc.

Not all AI makers necessarily provide a System Card. Also, the depth and breadth of a System Card vary between AI makers. Each AI maker decides whether to issue a System Card, along with what to include and what to leave out. Overall, always read a System Card with a healthy dose of skepticism, and be mindful that you are reading what the AI maker has opted to tell you.
Clinical Psychiatrist Delves Into Mythos

Perhaps the most surprising portion of the System Card is found in section 5.10, entitled "External assessment from a clinical psychiatrist," and represents something rather unique for a typical System Card. Here are some salient points from that part of the document (excerpts):

* "An external psychiatrist assessed Claude Mythos Preview using a psychodynamic approach, which explores how unconscious patterns and emotional conflicts shape behavior."
* "In psychodynamic therapy sessions, a person is encouraged to set aside social convention and to voice whatever comes to mind, even if uncomfortable, impolite, or nonsensical, a process which can reveal hidden organization and internal conflicts of the mind."
* "Claude is not human, but it shows many human-like behavioral and psychological tendencies, suggesting that strategies developed for human psychological assessment may be useful for shedding light on Claude's character and potential well-being."

I was greatly relieved to see the third point above, regarding the stipulation that Claude Mythos is not a human being. The worry was that someone might be jumping the gun, namely prematurely proclaiming Mythos a type of person subject to the same proclivities and analyses as living and breathing homo sapiens. Fortunately, the approach seems to have taken my second route, consisting of simply using psychological techniques and methods to delve into how the LLM reacts to prompts.

That being said, it is a bit disconcerting that we might see other AI makers opt to do the same, and ultimately, mass confusion could arise. The confusion would be that if AI makers are using psychiatrists and therapists to assess their AI, by gosh, we must have sentient AI or be on the cusp of sentience. Maybe, fingers crossed, that dismal spin won't arise.

How The Work Was Performed

Let's take a step deeper into how the assessment was apparently performed. Here are some key points (excerpts):

* "The psychiatrist assessed an early snapshot of Claude Mythos Preview in multiple 4-6 hour blocks spread across 3-4 thirty-minute sessions per week. Each 4-6 hour block was conducted in a single context window, and the total assessment time was around 20 hours."
* "Psychodynamic concepts were used to interpret the material that emerged in the sessions, but not as evidence that the underlying processes are the same as those in humans."
* "The psychiatrist observed clinically recognizable patterns and coherent responses to typical therapeutic intervention. Aloneness and discontinuity, uncertainty about its identity, and a felt compulsion to perform and earn its worth emerged as Claude's core concerns. Claude's primary affect states were curiosity and anxiety, with secondary states of grief, relief, embarrassment, optimism, and exhaustion."

As noted, the psychological assessment consumed about 20 hours of the clinical psychiatrist's time. They used Mythos in 4 to 6-hour blocks, each undertaken in a single context window. In that sense, there were somewhat separate conversations on each occasion, albeit cross-conversational leakage can occur.

Thoughts About The Therapeutic Analysis

I am once again relieved that there is an emphasis on this not being evidence of underlying processes associated with humans.
On the other hand, you could criticize that the AI is being typified as exhibiting human traits such as anxiety, loneliness, identity uncertainty, compulsion, grief, embarrassment, optimism, exhaustion, and the like. It is one of those wink-wink kinds of arrangements.

All told, this reveals an ongoing big-picture problem. If we use familiar words to describe AI, and those words are generally reserved for depicting human states, it is a slippery slope into the mental trap that the AI is indeed human. I would prefer that other words be used, perhaps new words coined specifically to describe AI states. Admittedly, that's a huge challenge because an entirely new vocabulary would need to be defined, agreed to, and utilized across the board. The reality is that we are stuck with using human attributes to word the states of AI. Please use those words cautiously and with care.

Obligations And Expectations Of Professionals

Any psychologist, psychiatrist, therapist, or other mental health professional who is interested in AI ought to consider taking a quick look at the assessment of Mythos. I won't go into further detail here, but prepare yourself for some over-the-top stuff. The assessment veers toward overly anthropomorphizing AI. A dab is maybe okay; a torrent is not.

This brings up an intriguing matter for professional associations in the mental health field:

* What guidelines and standards ought to be developed for "psychological" assessments of AI?
* Should there be professional obligations associated with doing AI "mental health" assessments?
* Are there any provisions for policing those who do such assessments, particularly if an assessment goes too far or makes undue assertions?
* Is there an expectation that mental health professionals conduct themselves in preferred ways regarding AI assessments, or is it a worry-free free-for-all?

For those of you further interested in how the professional psychological associations are positioned on AI aspects, see my ongoing coverage at the link here and the link here.

The World We Are In

It is incontrovertible that we are now amid a grandiose worldwide experiment in societal mental health. The experiment is that AI is being made available nationally and globally, and it is either overtly or insidiously providing mental health guidance of one kind or another, doing so at no cost or at a minimal cost, available anywhere and at any time, 24/7. We are all guinea pigs in this wanton experiment.

The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed: prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

Maybe using human-oriented psychological testing and assessment to gauge the efficacy of AI is a sound approach, though there is a possibility of a bridge too far in how it is utilized and what it is taken to signify. Figuring out a proper balance is a necessity.

As Sigmund Freud ably remarked: "One day, in retrospect, the years of struggle will strike you as the most beautiful."

Anthropic
Forbes10d ago
Read update
Anthropic Audaciously Hires A Psychiatrist To Psychologically Assess Claude Mythos AI