The latest news and updates from companies in the WLTH portfolio.
SYDNEY, April 20 (Reuters) - Regulators said on Monday they are monitoring the development of Anthropic's frontier AI model Mythos, which experts say could be used to destabilise banking systems. Mythos's ability to code at a high level gives it a potentially unprecedented capacity to identify cybersecurity vulnerabilities, experts say, prompting greater scrutiny from some regulators globally. "ASIC is closely monitoring these developments along with peer regulators to assess possible implications for the Australian market," a spokesperson for the Australian Securities and Investments Commission (ASIC) said on Monday. "ASIC engages closely with other regulators, government agencies and the financial sector to understand and respond to changing technologies." ASIC said it expected financial services licensees to "be on the front foot" to safeguard their customers and clients. The country's banking regulator, the Australian Prudential Regulation Authority (APRA), said it would "continue to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system." South Korea's Financial Supervisory Service (FSS) said on Monday it held a meeting with information security officials from financial firms last Monday to review Mythos-related risks. South Korea's Yonhap news agency reported that the country's Financial Services Commission (FSC) held an emergency meeting last Wednesday with chief information security officers from the FSS, banks and insurers to review the risks, citing unnamed industry sources. The FSC was not immediately available for comment when contacted by Reuters. (Reporting by Scott Murdoch in Sydney and Heekyong Yang in Seoul; Editing by Jacqueline Wong)

The US National Security Agency is reportedly using Anthropic's advanced AI tool, Mythos Preview. This comes even after the Pentagon flagged the company for supply-chain risks. The AI model is said to be highly capable in coding and autonomous tasks. Experts suggest its abilities could significantly enhance cyberattack capabilities. Discussions between the US administration and Anthropic have also taken place. The United States National Security Agency is using Anthropic's Mythos Preview AI tool despite the Pentagon hitting the company with a formal supply-chain risk designation, Axios reported on Sunday. The Mythos Preview model was being used more widely within the department, Axios said, citing sources. Reuters could not immediately verify the report. Anthropic, the NSA and the Department of Defense did not immediately respond to requests for comment outside regular business hours. The NSA is part of the Defense Department. Last week, U.S. President Donald Trump's administration and Anthropic's CEO discussed working together for the first time since a dispute earlier this year between the Pentagon and the AI firm over how that company's models should be used.

Bitget, the world's largest Universal Exchange (UEX), has launched IPO Prime, introducing a new market structure that enables users to access and trade pre-IPO exposure to global unicorn companies such as SpaceX. Powered by Republic, the launch marks an expansion beyond traditional secondary market trading, enabling participation in value creation before companies enter public markets, a phase historically limited to institutional investors and private capital networks. Bitget is the world's largest Universal Exchange (UEX), serving over 125 million users and offering access to over 2M crypto tokens, 100+ tokenised stocks, ETFs, commodities, FX, and precious metals such as gold. The ecosystem is committed to helping users trade smarter with its AI agent, which co-pilots trade execution. Bitget is driving crypto adoption through strategic partnerships with LALIGA and MotoGP™. Aligned with its global impact strategy, Bitget has joined hands with UNICEF to support blockchain education for 1.1 million people by 2027. Bitget currently leads in the tokenised TradFi market, providing the industry's lowest fees and highest liquidity across 150 regions worldwide. Through IPO Prime, Bitget extends its Universal Exchange framework into primary market access, bridging a long-standing gap between private and public market participation. IPO Prime operates through a subscription-based model, where eligible users can apply for allocations in tokenised offerings tied to specific companies. Allocation limits are determined based on user tier, with higher participation thresholds available to elevated VIP levels. Following the subscription phase, these digital assets transition into an over-the-counter market on Bitget, enabling continuous pricing, trading, and circulation within a structured environment. The first offering under IPO Prime is preSPAX, a digital asset designed to mirror the economic performance of SpaceX following its potential public listing. 
As one of the most closely watched private companies globally, SpaceX represents the type of high-growth opportunity that has traditionally remained inaccessible to retail investors. "Since the beginning of financial markets, access to pre-IPO opportunities has been defined by exclusivity," said Gracy Chen, CEO of Bitget. "IPO Prime allows users to participate earlier in a company's growth cycle, with the flexibility of continuous trading. This shifts how and when investors can engage with emerging companies, which gives retailers and new investors a chance to buy in early. This is part of our greater shift towards building a UEX, democratising access to financial equality." To mark the launch, Bitget will introduce two rounds of preSPAX token airdrops for eligible VIP users on April 13, 2026, at 10:00 (UTC), providing early participants with additional exposure as the platform begins onboarding its first offering. The official preSPAX token launches on April 21, 2026, at 12:00 (UTC), with the commitment period starting April 18, 2026, at 18:00 and ending April 21, 2026, at 18:00 (UTC). The distribution period runs from April 21, 2026, at 18:00 to April 21, 2026, at 22:00 (UTC). The introduction of IPO Prime represents a new route by which traditional financial opportunities are structured and accessed. As boundaries between asset classes continue to blur, platforms are expanding beyond traditional and crypto trading to include early-stage market participation. Within Bitget's Universal Exchange model, IPO Prime moves towards integrating diverse financial opportunities into a single, unified environment. To find out more about IPO Prime and further details on preSPAX, visit here.
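The tier-based allocation mechanics described above can be sketched as a simple lookup. This is purely illustrative: the tier names, cap figures, and unit below are hypothetical placeholders, since Bitget has not published the actual thresholds.

```python
# Purely illustrative: tier names and caps are hypothetical placeholders,
# not Bitget's published limits.
TIER_CAPS = {
    "regular": 1_000,   # baseline subscription cap (assumed unit: USDT)
    "vip1": 5_000,
    "vip2": 20_000,
    "vip3": 50_000,
}

def allocation_cap(tier: str) -> int:
    """Return the maximum subscription amount for a user tier (0 if unknown)."""
    return TIER_CAPS.get(tier, 0)

print(allocation_cap("vip2"))   # 20000
print(allocation_cap("guest"))  # 0
```

In a real implementation the caps would come from the platform's tier configuration rather than a hard-coded table; the point is only that higher VIP levels map to higher participation thresholds.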

Anthropic and the Trump administration seem to be moving past their recent differences and working towards strengthening their cybersecurity defences. Despite the AI company being locked in a bitter legal dispute with the Pentagon over the military's use of its technology, its CEO, Dario Amodei, met with Treasury Secretary Scott Bessent and White House Chief of Staff Susie Wiles in what the White House described as a "productive and constructive" introductory meeting. The discussion reportedly focused on shared priorities including cybersecurity, maintaining America's lead in the global AI race, and responsible approaches to AI safety. Anthropic confirmed the meeting in a statement, saying it had engaged with senior administration officials on "opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology." "Anthropic CEO Dario Amodei today met with senior administration officials for a productive discussion on how Anthropic and the US government can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety," said the company in a statement to a CNN correspondent. The company also stated that it looks forward to continuing the discussions. Earlier signals of improved relations emerged when Treasury Secretary Bessent and Federal Reserve Chair Jerome Powell reportedly encouraged major banks to test Anthropic's newly announced Claude Mythos model. Anthropic co-founder Jack Clark publicly confirmed that the company had briefed the Trump administration on the model, downplaying the ongoing Pentagon conflict as a "narrow contracting dispute." Divide over Anthropic within the US administration A source within the administration told Axios that "every agency" except the Department of Defense is eager to adopt Anthropic's tools, highlighting a clear divide within the Trump administration.
Despite the Pentagon's March 2026 designation of Anthropic as a 'supply-chain risk', the AI company continues high-level engagement outside the Defense Department. As far as the dispute with the Department of Defense is concerned, Anthropic is challenging the Pentagon's designation in court. The maker of Claude argues that the designation exceeds legal authority and was applied for ideological reasons rather than security ones. The rest of the administration seems to see strategic value in Anthropic's AI models and capabilities. What this meeting means for the Pentagon lawsuit and OpenAI If the talks prove positive, the development could significantly weaken the Department of Defense's position in the ongoing lawsuit. By engaging at the highest levels with Treasury and the White House on AI collaboration while the supply-chain risk designation remains in place, the administration is effectively hinting that the Pentagon's move was more political than a genuine national security necessity. This strengthens Anthropic's core legal argument that the designation was arbitrary, ideologically driven, and inconsistent with the company's recognition as a strategic asset in America's AI competition. The courts are likely to view the high-level meetings as evidence against the Pentagon's claims, increasing the chances of a favourable settlement or ruling for Anthropic before the six-month transition period ends. For OpenAI, the developments could represent a potential setback. The company had capitalised on Anthropic's expulsion by quickly securing expanded Pentagon contracts and positioning itself as the reliable, unrestricted alternative for military use. A partial or full restoration of Anthropic's status could degrade OpenAI's newfound dominance in government work, force renewed competition for classified and non-classified contracts, and reduce the strategic advantage it gained from Anthropic's self-imposed restrictions.
The Anthropic-US government relationship: From partners to lawsuits Anthropic has been a close partner to the US government since 2024, supporting national interests by deploying models on a classified network. Anthropic's Claude became the first AI model to integrate with National Laboratories, and the first to offer custom models for national security customers. In July 2025, Anthropic signed a contract worth up to $200 million with the Department of Defense to supply advanced AI for intelligence analysis, modelling, simulation, cyber operations, and operational planning. Claude became the only frontier model cleared for classified military use, often routed through platforms like Palantir's. However, tensions emerged in January and February 2026: after Claude was reportedly used in a US military operation to capture Venezuelan President Nicolás Maduro, Pentagon officials pushed to renegotiate the contract. Defense Secretary Pete Hegseth demanded that Anthropic remove all usage restrictions and allow its models to be deployed for "any lawful purpose," including fully autonomous weapons systems and mass domestic surveillance of American citizens -- two applications the company had explicitly prohibited in its contract. On February 24, Hegseth issued Amodei a 72-hour ultimatum to comply with the government's needs. Amodei, however, refused, writing that the company "cannot in good conscience accede to their request." Anthropic argued that frontier AI systems are not yet reliable enough for fully autonomous lethal decisions and that mass surveillance of US citizens would undermine democratic values. On February 27, President Trump posted on Truth Social directing every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology." Hours later, Hegseth designated Anthropic a "supply-chain risk to national security" -- the first time such a label was applied to an American company.
The move effectively barred military contractors from doing business with Anthropic and gave the Pentagon six months to transition away from its systems. Meanwhile, OpenAI quickly announced it had reached a deal with the Defense Department, filling the gap left by Anthropic. The episode triggered widespread fallout; public criticism from Trump administration officials, including accusations of "woke AI," intensified the rift. Anthropic responded by filing a lawsuit to overturn the supply-chain risk designation.

The hacker behind the attack, who claims to be affiliated with the notorious ShinyHunters group, is selling data stolen from Vercel for $2 million. The group has been in the news lately for targeting Rockstar Games, makers of Grand Theft Auto (GTA). The hackers have alleged that the data they stole could be used to launch a major global supply chain attack. In light of the security breach, Vercel has urged its customers to check their environment variables for sensitive information and rotate secrets if necessary. The company has also released updates to its dashboard, including a new interface for managing sensitive environment variables. Despite the incident, Vercel's core services remain unaffected as it works with impacted customers and informs law enforcement agencies about the matter.

* The prediction platform is negotiating a $400 million capital raise at a $15 billion company valuation * The New York Stock Exchange's parent company, Intercontinental Exchange (ICE), has pledged $600 million * Combined investment could exceed $1 billion with participation from additional strategic backers * Competitor Kalshi achieved a $22 billion valuation following a $1 billion+ March funding round * Regulatory challenges and insider trading concerns continue to shadow both companies According to sources with direct knowledge of the discussions, Polymarket is pursuing $400 million in fresh capital at a valuation of $15 billion. The Information broke the story on Sunday, citing two individuals briefed on the negotiations. This funding effort comes on the heels of a substantial investment from Intercontinental Exchange, which owns and operates the New York Stock Exchange. In late March, ICE injected $600 million into the prediction market platform, establishing a post-investment valuation of $9 billion. Polymarket has set its sights on attracting more strategic partners alongside ICE. Should these efforts prove successful, the complete funding round could reach the $1 billion milestone, The Information reports. The current fundraising initiative aligns with earlier speculation from October 2025, when sources indicated Polymarket was exploring financing opportunities that would value the company somewhere between $12 billion and $15 billion. The prediction market sector has experienced explosive expansion following the 2024 United States presidential election. Leading platforms including Polymarket and Kalshi now routinely process more than $10 billion in monthly trading activity spanning categories from sports and politics to financial instruments and pop culture phenomena. 
Intensifying Market Competition Competing platform Kalshi secured more than $1 billion during March at a commanding $22 billion valuation, effectively doubling its worth since the previous November. March data shows Kalshi processed approximately $13 billion in trading volume, outpacing Polymarket's $10.57 billion for the same period. Established financial industry titans are now entering the arena as well. Cboe Global Markets has announced plans to debut its own prediction market offering. The Nasdaq options exchange submitted regulatory filings to introduce binary-style trading contracts linked to the Nasdaq-100 index. CME Group formed a strategic alliance with FanDuel, enabling participants to wager on markets beyond traditional financial instruments. Just last week, both Charles Schwab and Citadel Securities publicly acknowledged they're evaluating potential entry into the prediction markets space. Mounting Regulatory Headwinds Notwithstanding the sector's rapid growth trajectory, both leading platforms face increasing scrutiny from government regulators and legislative bodies. During March, United States senators Adam Schiff and John Curtis put forward the "Prediction Markets Are Gambling Act." This proposed legislation seeks to prohibit prediction contracts connected to sporting events or casino-style gaming from appearing on officially registered trading platforms. Kalshi currently finds itself embroiled in legal proceedings with the Nevada Gaming Control Board. A trial court issued a temporary injunction preventing Kalshi from conducting business within state borders. State regulators contend that Kalshi's contract offerings constitute illegal gambling operations conducted without proper licensing. The chief legal officer at Coinbase has suggested the dispute may ultimately land before the United States Supreme Court, potentially establishing landmark legal precedent that would shape prediction market regulation nationwide. 
Responding to intensifying regulatory oversight, Kalshi rolled out enhanced screening mechanisms. Polymarket implemented broader restrictions designed to combat market manipulation and abuse. As of publication time, Polymarket representatives have not issued any public statement regarding the reported fundraising discussions.

Vercel CEO Guillermo Rauch says the attackers who breached his company's internal systems were "significantly accelerated by AI." Rauch walked through the whole mess himself with a post on X. A Vercel engineer had been using an AI platform called Context.ai. Attackers compromised that platform's Google Workspace OAuth app -- one that hundreds of other organizations had also authorized. Once they had the employee's account, they pivoted into Vercel's environments. The company stores all customer environment variables encrypted at rest. But it also lets users mark some as "non-sensitive." That's where the attackers got traction. They enumerated those variables and moved with surprising velocity, with an in-depth understanding of how Vercel works. Interestingly, Rauch's AI claim comes at a time when exploit-finding models like Claude Mythos have also been getting attention, which makes the idea sound less far-fetched on the surface, even if there is still no public evidence tying any specific model to the Vercel breach. "For now, we believe the number of customers with security impact to be quite limited," Rauch wrote. The team has already reached out to the ones they're worried about. Next.js, Turbopack, and the open-source projects stayed untouched. You can read the official Vercel security bulletin for a deeper breakdown of how the technical side of the intrusion actually worked. A threat actor impersonating the ShinyHunters group listed the data for sale on BreachForums for $2 million. Chat logs obtained by International Cyber Digest show Vercel telling the impostors they won't pay. The real ShinyHunters have already denied any involvement. Google deleted the compromised OAuth app. Security researcher Jaime Blasco tied it directly to Context.ai after spotting a now-removed Chrome extension linked to the same client ID. That said, Vercel, for its part, didn't wait around. 
By the time Rauch hit send on his thread, the company had already shipped two new dashboard features: an overview page for all environment variables and a cleaner UI for marking them sensitive. Rauch called it part of turning the attack into "the most formidable security response imaginable." Still, some comments suggest that there might be more to this timeline. One user reported receiving an alert from OpenAI about a compromised key way back on April 10. Since that key was only used inside Vercel, it strongly suggests the breach happened over a week ago. There's still no official comment on this, so exactly when the breach took place is still up in the air. For now, Google Workspace admins can manually check and see if they might have been compromised. The check is simple: head to the Admin Console, go to Security > API Controls > Manage app access, and filter for the client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. If it shows up, revoke it. Vercel is still investigating with outside firms and law enforcement. Rauch said they've looped in Context.ai and Google's Mandiant team to help other companies. For everyone else using Vercel, the advice is straightforward: rotate secrets, treat every env var as potentially exposed, and start using the new sensitive-variables tools.
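For admins who prefer scripting the check over clicking through the console, the matching logic is simple once you have token records in hand. The sketch below assumes token records shaped like the Google Admin SDK Directory API's tokens.list response (each carrying a clientId and a human-readable displayText) and uses mock data rather than a live API call.

```python
# The client ID flagged in Vercel's advisory (quoted verbatim from the bulletin).
COMPROMISED_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def find_compromised(tokens, client_id=COMPROMISED_CLIENT_ID):
    """Return token records whose clientId matches the flagged OAuth app."""
    return [t for t in tokens if t.get("clientId") == client_id]

# Mock records standing in for a real tokens.list response.
sample = [
    {"clientId": "123-abc.apps.googleusercontent.com", "displayText": "Gmail add-on"},
    {"clientId": COMPROMISED_CLIENT_ID, "displayText": "Context.ai"},
]
flagged = find_compromised(sample)
print([t["displayText"] for t in flagged])  # ['Context.ai']
```

Any record that matches should be revoked, in line with Vercel's guidance above.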

Investing.com -- Tesla's first-quarter results, due Wednesday, are unlikely to offer much reassurance on the company's robotaxi ambitions, and that, analysts at Jefferies say, may be enough to keep merger speculation with SpaceX alive. "Q1 results will show further widening of the gap between vision and execution and, barring a convincing announcement on robotaxi roll-out, may fuel concern about funding and raise the logic of an eventual merger with SpaceX," analysts led by Philippe Houchois said in a note. Jefferies maintained a Hold rating on the stock and raised its price target to $350 from $300. The trigger for renewed merger talk comes amid a widening gap between Tesla's ambitions and its near-term execution. Jefferies forecasts Tesla will report Q1 revenue of $21.2 billion, up 10% year-over-year but well below the prior quarter, with an operating margin below 3% and cash burn of around $1.9 billion. Looking further ahead, the bank projects negative free cash flow of roughly $5.5 billion in 2026 as capital expenditure ramps sharply to $19-20 billion annually. The robotaxi business, which Tesla has said it aims to launch across 25-50% of potential U.S. markets by year-end, remains the central uncertainty. Analysts believe that those ambitions "look beyond reach," with permitting hurdles and questions around lidar-less Full Self-Driving (FSD) technology still unresolved. They do not model any revenue from running robotaxis until 2027. Humanoid robots face a similarly long runway, with Jefferies noting the field is already crowded and commercial scale even more distant. Despite the near-term pressures, the analysts acknowledged that Tesla's "vertically integrated business model and ability to deliver funding and industrial scale" remain unique strengths against competitors who are "also moving slowly and facing higher capital costs."


The White House is still in talks with Anthropic as legal and policy battles continue. US intelligence agencies are actively deploying advanced artificial intelligence tools from Anthropic despite a formal Pentagon designation labeling the firm a "supply chain risk," according to a new report highlighted by Axios. At the center of the controversy is Anthropic's Mythos Preview model, which sources say is currently in use at the National Security Agency (NSA). The development exposes a growing divide within the US government over how to balance rapid AI adoption with internal security restrictions. The Pentagon blacklisted Anthropic in February 2026 after a dispute over AI safeguards and military use, formally warning that its systems posed a potential supply-chain vulnerability. However, intelligence officials appear to be prioritizing Mythos' capabilities, particularly in cybersecurity, over those concerns. According to sources, the NSA has adopted Mythos, as have other units within the department. So far, the specifics of the NSA's use of Mythos are not publicly known, but elsewhere the model is largely being used to scan internal environments for security flaws. The model is considered one of the most advanced AI systems available, with strong "agentic" abilities that allow it to autonomously analyze and exploit complex systems. Reports of NSA and DOD use come days after insiders said the White House was negotiating access to Anthropic's Mythos model even as efforts to blacklist the company continued. More recently, CEO Dario Amodei also confirmed that the firm has been in contact with government officials and is open to collaboration. Some agencies argue that limiting access to such powerful AI could put the US at a strategic disadvantage, particularly against geopolitical rivals. However, experts warn that the same capabilities that make Mythos valuable for defense could also introduce new risks.
Its ability to uncover vulnerabilities at scale could be weaponized if misused, raising concerns about escalation in cyber warfare. White House officials met with Amodei to discuss Mythos use in government operations. Only about 40 vetted organizations are permitted to use Mythos. Of the 40 groups, only 12 are publicly known, and the NSA is reportedly tucked away in the unlisted majority. In the U.K., agencies similar to the NSA also noted they have access to the model through their national AI Security Institute. Anthropic describes Mythos as remarkably powerful in cybersecurity, capable of spotting deeply embedded bugs and independently exploiting them. This combination of advanced detection and autonomous analysis has already raised both interest and concern among policymakers. On Friday, Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent to discuss how the model can be safely integrated into government infrastructure. Despite the White House's public friction with Anthropic, this meeting highlights that the model's power is simply too valuable for federal security needs to pass up. Both sides characterized the talks as productive. The White House even shared, "We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology." Anthropic fired back at the Pentagon in March, filing a lawsuit to overturn the supply chain risk designation that threatened its government contracts. This was the first time the "not secure enough" tag had been pinned on a domestic provider, effectively barring its tools from standard government use. Anthropic's legal team has branded the "risk" designation a revenge tactic after Amodei denied the DoD's request to integrate the AI into fully autonomous weapons systems and mass domestic surveillance.
As earlier reported by Cryptopolitan, a California district court judge sided with Anthropic and temporarily blocked the "supply chain risk" label. Still, a federal appeals court has since overturned that stay, keeping the designation in place for now. In the early days of the blacklist efforts, President Donald Trump had claimed that the radical leftists running the firm were trying to dictate terms to the Defense Department. He argued, "We don't need it, we don't want it, and will not do business with them again!" At the moment, some in the DoD still believe Anthropic's refusal to cooperate fully proves they would unplug their tech during a war, making them a flight risk in combat. However, other administration officials are eager to bury the hatchet just to get their hands on the company's superior tech.

Polymarket is in discussions to raise fresh capital at a $15 billion valuation, signaling strong investor confidence in the fast-growing event-based trading sector, according to a report by The Information. It is said that the funding round would mark a significant jump from the company's earlier valuation this year, underscoring rapid growth in user activity and trading volumes. Polymarket operates as a blockchain-based prediction market, allowing users to place bets on the outcomes of real-world events ranging from political elections to economic indicators and cryptocurrency prices. The platform has gained traction as both a speculative tool and an alternative data source, with market prices often reflecting real-time probabilities of future events. This dual function has attracted interest not only from retail traders but also from institutional investors seeking new forms of market intelligence. According to the report, Polymarket is aiming to capitalize on this momentum by expanding its infrastructure, improving compliance frameworks, and scaling globally. The company's rise also comes amid a broader trend of increasing acceptance of event-based financial products, which some analysts view as a new and emerging asset class. However, regulatory scrutiny remains a key challenge. Prediction markets often face legal uncertainty in multiple jurisdictions due to their overlap with gambling and financial derivatives. Despite this, investor appetite appears strong, driven by the platform's growing influence and revenue potential. As noted by The Information, the ongoing fundraising talks highlight how platforms like Polymarket are evolving beyond niche crypto applications into mainstream financial ecosystems. The deal is expected to solidify Polymarket's position as a leader in the prediction market space and accelerate the institutionalization of event-driven trading globally.

NIGERIA'S surging stock market may be projecting resilience, but beneath the record-breaking rally lies an economy still exposed to inflation shocks, fiscal strain and global geopolitical turbulence, according to Johnson Chukwu, Group Managing Director of Cowry Asset Management Limited. Presenting at the firm's Quarterly Economic Discourse for Q1 2026, Chukwu described the economy as "resilient but vulnerable," warning that strong financial market performance continues to diverge sharply from the realities facing households. At the heart of that paradox is the Nigerian equities market, now one of the best-performing asset classes globally. The NGX All-Share Index surged 29.35 per cent in the first quarter of 2026, building on a 51.19 per cent gain recorded in 2025. Market capitalisation climbed to N129.21 trillion, translating to nearly N30 trillion in investor gains within three months. Chukwu said the rally is largely liquidity-driven, rather than anchored on broad economic strength. "This is a liquidity-driven market," he noted, pointing to declining real yields in fixed income instruments that have pushed investors toward equities in search of higher returns. With inflation still elevated at 15.38 per cent in March, having reversed 11 consecutive months of decline by rising from 15.06 per cent in February, driven largely by FX pass-through, energy costs and persistent food supply constraints, equities have increasingly served as a hedge against value erosion. Regulatory changes, particularly increased pension fund exposure to stocks, have further boosted demand, alongside strong corporate earnings and attractive dividend yields. Sectoral performance reflects a market lifted by select themes, rather than broad-based growth. Oil and gas stocks surged 64.22 per cent, buoyed by rising crude prices amid Middle East tensions. Industrial goods followed with a 54.60 per cent gain, supported by resilient construction demand and pricing power.
Banking stocks rose 22.75 percent on higher interest income and foreign exchange revaluation gains, while consumer goods lagged at 9.67 percent due to weak purchasing power. Insurance stocks recorded the weakest gains, underscoring uneven investor appetite. Chukwu cautioned that the rally is unfolding against a volatile global backdrop defined by geopolitical conflict and slowing growth. Disruptions in the Strait of Hormuz have heightened uncertainty, pushing oil prices higher and amplifying global inflation risks. While this offers revenue upside for oil-exporting Nigeria, it also threatens capital inflows and financial stability. "The markets are holding up, for now, but may be underpricing the risk of a deeper macroeconomic disruption," he warned. Nigeria's economy grew by 3.87 percent in 2025, supported by improved oil output and non-oil sector expansion. Yet, Chukwu stressed that macroeconomic gains have not translated into improved living conditions. "This highlights the disconnect between macro stability and household reality," he said. The Central Bank of Nigeria has begun cautiously easing policy, cutting its benchmark rate to 26.5 percent, while maintaining tight liquidity controls. At the same time, the naira has stabilised and foreign reserves have strengthened, supported by improved portfolio inflows and foreign exchange reforms. However, fiscal risks remain significant. The N68.32 trillion 2026 budget is built on optimistic assumptions, while rising debt servicing costs continue to limit fiscal flexibility. Despite bullish momentum, Chukwu warned that the equities market remains vulnerable to shocks. Rising interest rates, persistent inflation and global risk aversion could trigger capital outflows and corrections, particularly in emerging markets. "War shocks transmit through oil to inflation, then to interest rates and ultimately to equities," he said.
Giving an outlook, Chukwu maintained that equities could remain attractive in the near term, supported by liquidity and inflation-hedge demand. But sustaining the rally, he said, will depend on macroeconomic stability, policy consistency and the ability to navigate an increasingly uncertain global environment. "In chaotic markets, clarity is alpha," he stated. For now, Nigeria's stock market may be booming, but the broader economy is still walking a tightrope.

A security breach at cloud development platform Vercel has raised concerns after the company confirmed that parts of its systems were accessed by an attacker through a compromised third-party tool. The incident traces back to Context.ai, an artificial intelligence platform used by a Vercel employee. According to the company's internal bulletin, "The attacker used that access to take over the employee's Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as 'sensitive'." Vercel clarified that variables marked as 'sensitive' are stored in a way that prevents them from being read directly, but the company said it "currently do[es] not have evidence that those values were accessed." While not confirmed, a post on X (formerly Twitter) has surfaced linking ShinyHunters, known for past high-profile breaches, to the Vercel incident. The group was earlier associated with an attack on Rockstar Games, and is now being mentioned as a possible actor here as well. The attackers are attempting to sell the allegedly stolen data online for around $2 million. At the same time, Vercel has described the attacker as 'highly sophisticated' based on their operational velocity and detailed understanding of its internal systems. In terms of impact, Vercel stated that only a limited subset of customers, whose credentials were potentially compromised, appears to have been affected. "We reached out to that subset and recommended an immediate rotation of credentials," the company added.
Addressing the situation publicly, CEO Guillermo Rauch said the company has taken several steps to strengthen its security posture and added, "We've analysed our supply chain, ensuring Next.js, Turbopack, and our many open-source projects remain safe for our community." "We've already rolled out new capabilities in the dashboard, including an overview page of environment variables, and a better user interface for sensitive env var creation and management. As always, I'm totally open to your feedback," he further wrote. To handle the investigation, Vercel is working closely with multiple cybersecurity experts, including Mandiant, along with other industry partners and law enforcement agencies. It has also directly engaged Context.ai "to understand the full scope of the underlying compromise."
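Vercel's advice to affected customers, an "immediate rotation of credentials," follows a standard pattern: mint a fresh high-entropy secret, deploy it to every consumer, and only then revoke the old value. A minimal Python sketch of that pattern, with the `store` dict standing in for a real secrets manager (the article does not describe Vercel's actual tooling):

```python
import secrets

def rotate_credential(store: dict, name: str) -> str:
    """Replace a stored credential with a fresh high-entropy value.

    `store` stands in for a secrets manager; a real rotation must also
    update every consumer of the credential before revoking the old one.
    """
    new_value = secrets.token_urlsafe(32)  # ~256 bits of entropy
    store[name] = new_value  # deploy the new value first; revoke the old upstream
    return new_value

# Usage: rotate a potentially exposed deploy token.
vault = {"DEPLOY_TOKEN": "leaked-value"}
fresh = rotate_credential(vault, "DEPLOY_TOKEN")
assert vault["DEPLOY_TOKEN"] == fresh and fresh != "leaked-value"
```

The ordering matters: revoking before the replacement is live causes an outage, which is why rotations are usually automated rather than done by hand under incident pressure.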

In our first Ask the Experts panel, we examine what happens when patching cannot keep up, and exposure becomes a built-in condition. With Anthropic's Mythos and Project Glasswing drawing attention to AI capabilities in vulnerability discovery, we seek to understand how this affects real-world security operations. While Anthropic has introduced Project Glasswing to a select group of major industry players, we are curious to know what this means for the broader security ecosystem. How would the equation change when this capability reaches other players? What happens when threat actors access it? Is the industry prepared for what comes next? Taken together, these concerns are not theoretical. Even with current-generation tools, teams are dealing with increasing backlogs. AI is finding vulnerabilities faster than teams can fix them, while existing patching processes were not even built for this speed or volume. In complex environments, this means a landscape where discovery keeps accelerating, but response fragments. Here's what cybersecurity experts have to say.
Question: If AI can find vulnerabilities faster than organizations can fix them, what breaks first: patching models, security teams, or business operations?
Ram Varadarajan, CEO at Acalvio
It's the new forever war -- the race between autonomous vulnerability discovery and human remediation.
* Business operations are going to fail first, perhaps catastrophically, at the remediation bottleneck: AI can identify zero-days in milliseconds.
* This collapses the traditional, human-scale, weeks-long patch cycle, and forces untenable trade-offs between uptime and exposure.
* The remediation lag, driven by maintenance constraints and stability testing, is going to combine with trust erosion, with automated exploitation outpacing manual-paced defense, pushing breach costs beyond acceptable margins.
This failure will be strategic as much as operational: symmetrical defenses that try to match attacker speed are no longer viable. We have to pivot to bot-on-bot defense. Specifically, deception-centric models such as hypergame environments, and model-aware deception that misdirects attackers and creates a verification gap, buying time for stabilization.
Morey Haber, Chief Security Advisor at BeyondTrust
If AI can find vulnerabilities faster than organizations can fix them, the first thing that breaks is not patching. It is not even the business. It is the security operating model itself. We are entering a phase where vulnerability discovery, exploit generation, and attack orchestration occur at machine speed, collapsing the window between discovery and weaponization to hours or less. Traditional security teams were never designed for this cadence. They operate on human triage, ticket queues, and risk prioritization models that assume time exists between exposure, impact, and remediation. That model is now in question. Consider these fracture points based on the document "The 'AI Vulnerability Storm': Building a 'Mythos-ready' Security Program," released by the Cloud Security Alliance.
* Security teams break first. Not because of incompetence, but because of the labor and time required. The volume and velocity of AI-discovered vulnerabilities create a workload that exceeds human scaling limits.
* Patching models will break next. Not because patching is ineffective, but because it becomes overwhelming. When exploitation timelines compress to minutes or hours, the concept of remediation before compromise becomes aspirational.
* Finally, business operations break last, but with the greatest consequences. Once both the security function and patching lifecycle are saturated, risk accumulates and executive teams will become aware of the shortcomings.
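Haber's scaling argument can be made concrete with simple arithmetic. The numbers below are illustrative, not figures from the panel: whenever findings arrive faster than a team can close them, the backlog grows linearly with the gap between the two rates.

```python
def backlog_after(days: int, found_per_day: int, fixed_per_day: int,
                  start: int = 0) -> int:
    """Open-finding backlog after `days`, assuming constant discovery
    and remediation rates (backlog cannot go below zero)."""
    return max(start + (found_per_day - fixed_per_day) * days, 0)

# Illustrative only: tooling surfacing 150 findings a day against a team
# closing 40 a day leaves that team 3,300 findings behind after a month.
assert backlog_after(30, found_per_day=150, fixed_per_day=40) == 3300
```

The point of the exercise is that no realistic increase in headcount closes a gap that widens every day; only changing the operating model does.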
Jason Schmitt, Chief Executive Officer at Black Duck
If this imbalance continues, what breaks first isn't the AI models -- it's the system around them. In the real world, security teams are already overwhelmed by volume, not unaware of risk. AI has dramatically lowered the cost and time required to discover vulnerabilities, while remediation still depends on people, change control, regression testing, and business prioritization. That gap is widening. When discovery outpaces execution, teams default to two unhealthy patterns:
* patch fatigue or
* selective blindness.
What's changing now is speed and asymmetry.
* Attackers don't need perfect fixes or governance; they need one unpatched weakness.
* Defenders, on the other hand, have to close every one.
Current patching models weren't built for AI-generated code churn or continuously evolving supply chains, and most organizations don't have the context to know which vulnerabilities actually matter. If nothing changes, business operations will feel the impact next:
* delayed releases,
* emergency freezes,
* rising technical debt, and
* security teams forced into gatekeeping roles that slow innovation rather than enabling it.
That's where trust breaks down. What organizations should do differently now is shift focus from raw detection to execution leverage.
* That means prioritizing vulnerabilities based on exploitability and business exposure, not scores alone,
* embedding security earlier in development where fixes are cheaper, and
* putting governance around AI-generated code so risk doesn't accumulate invisibly.
The goal isn't to match AI's speed; it's to apply judgment, context, and control where machines alone fall short.
Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace
AI has been accelerating vulnerability discovery faster than most organizations can validate, prioritize, patch, deploy, and verify fixes.
So, the first thing that breaks is not necessarily the security team or the business itself, but the operating model that assumes remediation can keep pace with identification. In many organizations, budget breaks alongside it, because security investment is still not keeping pace with AI adoption or the scale of monitoring and tooling now required. The real-world problem is that patching has always been constrained by people, process, uptime constraints, and operational risk. Many organizations cannot rapidly update legacy systems, industrial environments, or business-critical platforms without disruption. That means faster discovery does not automatically make organizations safer. In many cases, it creates larger backlogs, more triage pressure, and a greater chance that exploitable issues remain open longer. What changes next is the security posture organizations need to adopt. If AI continues to compress the gap between discovery and exploitation, companies cannot rely on CVE tracking and patching alone. They need to be able to detect exploitation of vulnerabilities they do not yet know exist. That means more emphasis on
* monitoring,
* behavioral analysis,
* anomaly detection,
* autonomous investigation, and
* fast containment,
especially in environments where patching is slow or infeasible. Organizations also need to recognize that they are defending against more than software flaws alone. They have to defend against identity and credential theft, human error, insider threats, misconfigurations, misuse of AI tools, and AI systems that unintentionally or intentionally introduce new risks. Organizations should stop treating vulnerability management as a closed loop ending in a patch. They need to
* prioritize accurate, advanced anomaly-based threat detection and autonomous containment just as aggressively as remediation, and
* invest in security resilience at the same pace they are investing in AI adoption.
Otherwise, the discovery-remediation gap will widen faster than most teams can absorb.
Diana Kelley, Chief Information Security Officer at Noma Security
What breaks first isn't patching or even security teams in isolation. It's the operating model that connects discovery to remediation. Mythos shows AI can now find and exploit vulnerabilities at a level that surpasses all but the most skilled human researchers, at scale. Mythos Preview has already identified thousands of previously unknown, high-severity vulnerabilities across major operating systems, browsers, and open-source infrastructure, many long-standing and undetected.
* One example: a 17-year-old remote code execution vulnerability in FreeBSD, CVE-2026-4747, exploited autonomously.
That changes the timeline defenders are operating on. Glasswing is currently restricted to roughly 40 major partners. That restriction buys us time to address remediation practices, which are already under strain, before autonomous discovery breaks them. Right now, remediation is still gated by ownership ambiguity, testing cycles, uptime requirements, and limited engineering bandwidth. Security teams accumulate findings faster than they can translate them into safe, prioritized action. The backlog becomes unmanageable, and prioritization collapses under volume. The bottleneck is the operating model. Fixing at AI speed means accepting more instability and more risk to revenue systems. Most organizations haven't been willing to make that trade-off, so we slow down remediation instead. Current approaches optimize for discovery, not execution. We've built pipelines that generate more signal than our systems can absorb. What needs to change is structural. We have to reduce dependency on patching as the primary control:
* shrink attack surface,
* improve asset intelligence,
* pre-position compensating controls.
In practice, that means clear ownership for every internet-facing asset and defaulting to segmentation or rate limiting when patch SLAs can't be met. We also have to stress test our own systems proactively, using the same capabilities our adversaries will, including chained exposures and agent deployments, which are becoming targets. The organizations that adapt fastest will treat remediation as a systems design problem, not a ticketing problem.
John Gallagher, Vice President of Viakoo Labs at Viakoo
With organizations now managing 5-10 times more network-connected OT, IoT, and CPS devices than traditional IT systems, the first thing to break under accelerated AI-driven vulnerability discovery will be business operations that are reliant on these non-IT environments. OT patching models are already fractured and inadequate. They remain largely manual or device-specific -- consider FDA-regulated medical devices or manufacturing systems that need scheduled downtime for updates. Unlike IT, which benefits from mature, automated patch management, the OT/IoT landscape -- with more than 150,000 distinct operating systems -- lacks scalable automated solutions, let alone the autonomous capabilities needed to counter rapidly emerging exploits like those uncovered by Mythos. Current security strategies focus largely on vulnerability discovery and risk prioritization -- the "find and notify" approach -- but they fall short on the operational realities of timely remediation. Without an autonomous patch deployment process, surfacing exploitable vulnerabilities will inevitably bring OT/IoT/CPS systems to a halt. This operational breakdown will compel a fundamental shift in security team structures, incorporating line-of-business managers who oversee OT systems and expanding governance to fully encompass these environments. Mythos-driven exploits will stress credential and configuration management, demanding faster, more autonomous controls.
To meet this urgent threat, organizations must reframe OT patching as a continuous, autonomous process embedded within operational workflows -- not a periodic project or an afterthought. Immediate priorities include:
* investing in precise asset visibility,
* integrating automated OT remediation where feasible, and
* aligning security, IT, and OT teams around unified risk-reduction metrics.
Joe Saunders, Founder and CEO of RunSafe Security
Patching models were already under strain, especially in critical infrastructure, where updates can take months or even years. What AI-driven discovery changes is the scale. We're seeing a surge of zero-day vulnerabilities that no security team can realistically keep up with. What breaks next are the security teams themselves. They're being forced into constant triage as the volume of exploitable findings outpaces their ability to validate and remediate them, creating a growing backlog of known, unpatched risk. For operational technology and embedded systems, the challenge is even more acute. These environments often require physical access, certification, or planned downtime to patch, making rapid response impossible. The assumption that you can fix vulnerabilities before they're exploited is quickly becoming untenable. This is an inflection point for the industry. Security can't rely solely on patching and has to focus on reducing exploitability even when vulnerabilities remain. That means adopting protections that make software harder to attack in the first place, so organizations aren't forced to choose between operational risk and security risk.
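Several of the panelists converge on the same remedy: triage by exploitability and business exposure rather than raw severity scores. A minimal sketch of that ordering in Python; the fields, example CVE labels, and weighting are illustrative, not any vendor's actual scheme:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float            # base severity score
    exploited: bool        # known exploitation in the wild
    internet_facing: bool  # asset exposure
    business_critical: bool

def triage_key(f: Finding) -> tuple:
    # Exploited, exposed, business-critical findings sort first;
    # raw CVSS only breaks ties among otherwise-equal findings.
    return (f.exploited, f.internet_facing, f.business_critical, f.cvss)

findings = [
    Finding("CVE-A", 9.8, exploited=False, internet_facing=False,
            business_critical=False),
    Finding("CVE-B", 7.5, exploited=True, internet_facing=True,
            business_critical=True),
]
queue = sorted(findings, key=triage_key, reverse=True)
assert queue[0].cve == "CVE-B"  # exploitability outranks raw score
```

The design choice is deliberate: a 9.8 that nobody can reach matters less than a 7.5 that is internet-facing and being exploited today, which is exactly the "not scores alone" point the panel makes.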

Haitian civilians blocked exit routes used by Kenyan police as the Kenya-led MSS mission winds down in Haiti, amid fears of renewed gang violence and ongoing security transition.
NAIROBI, Kenya, Apr 20 -- Haitian civilians reportedly blocked exit routes used by Kenyan police officers in parts of central Haiti, fearing that their withdrawal could expose communities to renewed gang attacks as the Kenya-led Multinational Security Support (MSS) Mission winds down operations in the Caribbean nation. Amateur videos circulated from Petite Rivière and Pont-Sondé showed crowds barricading roads and attempting to stop security convoys headed toward St. Marc and other exit points. The situation reportedly forced the use of helicopters to evacuate officers from volatile areas after ground movement was deemed unsafe. The incidents come as the final contingents of Kenyan police under the MSS Mission prepare to return home, marking the gradual conclusion of Kenya's lead role in the international security deployment. On March 25, a third group of 208 officers arrived back in Kenya and were received at Jomo Kenyatta International Airport (JKIA) by senior government and police officials, including National Security Adviser Dr. Monicah Juma and Inspector General of Police Douglas Kanja.
208 officers back
Also present were Deputy National Security Adviser Joseph Boinnet, Deputy Inspector General Eliud Lagat, Chief of Staff for the Administration Police Service James Kamau, and National Police Service spokesperson Muchiri Nyaga. According to the National Police Service (NPS), the returning officers were part of a larger deployment tasked with supporting the Haitian National Police in combating gang violence, restoring public safety, and protecting key infrastructure.
"The 208 officers who returned formed part of a larger contingent deployed to Haiti under the Kenya-led Multinational Security Support (MSS) Mission, tasked with supporting the Haitian National Police in combating gang violence, restoring public safety, and securing critical infrastructure," the NPS said. During their deployment, the officers were involved in securing airports, seaports, road networks, and humanitarian corridors, while also supporting institutional capacity building, including training initiatives at the Haiti National Police Academy and efforts to re-establish the Armed Forces College. At JKIA, Dr. Juma commended the officers for their professionalism and discipline, saying their service had enhanced Kenya's international reputation in peace operations. Inspector General Kanja assured the returning personnel of continued welfare support, including psychological counselling and reintegration programmes. The volatile exit scenes underscore the fragile security environment the mission was deployed to stabilise. The MSS operation is now transitioning into a new framework following the approval of the Gang Suppression Force (GSF) by the United Nations Security Council in October 2025. The GSF, which replaces the MSS structure, will be led by Chadian forces with a broader international composition. South African UN official Jack Christofides has been appointed to head the new mission, succeeding Kenya's Godfrey Otunge, who previously led the MSS deployment. Reports indicate Chad plans to deploy up to 800 police officers and gendarmes to Haiti this year, with training support from international partners ahead of full deployment. The GSF is expected to reach its full operational strength of about 5,500 personnel by October. The transition comes amid continued uncertainty on the ground, where armed gangs still control significant territory despite ongoing international security efforts.

Blames outfit called Context.ai, which reckons an agentic OAuth tangle caused the incident
Vercel, the company that created the open source Next.js web development framework, has suffered a data leak that led to the compromise of some customer credentials, and blamed an outfit called Context.ai for the mess. A Vercel security bulletin says that on April 19, the company "identified a security incident that involved unauthorized access to certain internal Vercel systems" and led to credential compromise for "a limited subset of customers." The company contacted those customers and "recommended an immediate rotation of credentials." "We continue to investigate whether and what data was exfiltrated and we will contact customers if we discover further evidence of compromise," the bulletin states, adding that the company has "deployed extensive protection measures and monitoring. Our services remain operational." Vercel has named the source of the mess: Context.ai, which has published a security bulletin of its own revealing that in March it identified and stopped a security incident involving unauthorized access to its AWS environment. Context.ai hired CrowdStrike to conduct an investigation, and closed its AWS rig. "Today, based on information provided by Vercel and some additional internal investigation, we learned that, during the incident last month, the unauthorized actor also likely compromised OAuth tokens for some of our consumer users," the company admitted. The company's consumer clients used a product called the AI Office suite that Context.ai describes as a "workspace designed to help users work with AI agents to build presentations, documents, and spreadsheets. The AI Office suite offered a feature that allowed consumer users to enable AI agents to perform actions across their external applications, facilitated via another 3rd-party service."
Back to Context.ai's bulletin, which says whoever attacked its systems "appears to have used a compromised OAuth token to access Vercel's Google Workspace. Vercel is not a Context customer, but it appears at least one Vercel employee signed up for the AI Office Suite using their Vercel enterprise account and granted 'Allow All' permissions." Context.ai thinks Vercel's internal OAuth configurations "appear to have allowed this action to grant these broad permissions in Vercel's enterprise Google Workspace." All of the actors in this mess made mistakes. Context.ai clearly didn't have great infosec. CrowdStrike's investigation appears to have missed a trick or two. Vercel didn't lock down its Google Workspace. And now the world has an example of an agentic AI product linking to third-party services and causing trouble, just the kind of risk infosec experts have warned about. ®
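The "Allow All" grant at the center of this incident is the kind of over-broad OAuth request that least-privilege configuration is meant to prevent: a client should ask only for the narrow scopes it actually needs. A sketch of building an OAuth 2.0 authorization request with an explicit scope list; the endpoint, client ID, and scope names here are hypothetical, not Context.ai's or Google's actual values:

```python
from urllib.parse import urlencode

def auth_url(base: str, client_id: str, scopes: list) -> str:
    """Build an OAuth 2.0 authorization URL that requests an explicit,
    minimal scope list instead of a blanket grant."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "scope": " ".join(scopes),  # request only what the agent needs
    }
    return f"{base}?{urlencode(params)}"

# Hypothetical provider and scope: read-only calendar access rather than
# an "Allow All"-style grant over an entire workspace.
url = auth_url("https://auth.example.com/authorize", "agent-123",
               ["calendar.readonly"])
assert "calendar.readonly" in url
```

Narrow scopes don't stop a token from being stolen, but they cap what a stolen token can do, which is precisely what was missing when an agent token opened a path into an enterprise Google Workspace.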
