The latest news and updates from companies in the WLTH portfolio.
🔑 Critical data: Projects risk exposure if API keys weren't flagged as sensitive in $ETH dApps. Cloud infrastructure provider Vercel has launched an investigation after discovering unauthorized access to its internal systems, spotlighting new security risks for crypto projects relying on its services. The incident, traced to a third-party AI integration, has revealed vulnerabilities in how environment variables and platform integrations are managed across decentralized application infrastructure.

Root cause: AI-linked compromise exposes Vercel accounts

According to details shared by Vercel and supported by cybersecurity firm Mandiant, attackers gained entry after compromising a Vercel employee's account. The breach began through an exploited third-party AI service connected to Google Workspace, which enabled the attackers to move laterally into Vercel's internal environment. Vercel, founded by Guillermo Rauch and headquartered in San Francisco, operates a leading platform for deploying and managing web applications, including critical infrastructure for many prominent decentralized projects. The platform is widely used for hosting crypto dashboards, wallet interfaces, and decentralized application frontends. Rauch, the company's CEO, reported that while customer environment variables tagged as sensitive remain encrypted, investigators found that non-sensitive variables were accessed. This distinction has become a focal issue: teams storing private API keys or other secrets without flagging them as sensitive may face exposure. Rauch emphasized ongoing transparency and assured the community that the incident is being handled directly, with customers advised to review stored variables and rotate any that were not classified as sensitive.
Vercel has enlisted external cybersecurity experts and notified authorities. The company is also working with Context.ai to determine the full scope of the breach, which is under continuous review.

Potential fallout for crypto infrastructure and project teams

The breach has broader implications, with BleepingComputer reporting that a threat actor associated with the group ShinyHunters is attempting to sell purported Vercel data -- including internal credentials, code, and employee records -- for $2 million. The authenticity of these claims has not yet been independently verified, but online samples showed detailed employee information. Developer Theo Browne highlighted potential impacts to integrations such as GitHub and Linear, echoing Vercel's recommendation for the immediate rotation of all environment variables not classified as sensitive. Browne noted that Vercel itself was the primary victim, but reiterated the need to secure environment data, especially variables not flagged as sensitive. For many web3 and crypto teams, Vercel forms the backbone of frontend hosting. A breach at this infrastructure layer can put sensitive API keys and RPC endpoints at risk if variables are not properly protected. Even without direct tampering of code, exposure of configuration data can give attackers critical access points. Recent attacks against other crypto infrastructure providers, including incidents at CoW Swap and DNS provider EasyDNS, have involved redirecting users to malicious sites. The Vercel incident differs, however, in granting attackers potential direct access to deployment outputs, raising concern about undetected code alterations in live applications.

Crypto sector reviews security after infrastructure breach

Crypto projects are now carefully reviewing their security postures, focusing on whether any sensitive data stored as non-encrypted variables could be at risk.
Teams are urged to audit their integrations and credentials, taking immediate measures to protect against future exploits. Despite FUD on dark web forums about stolen data, no major crypto project has confirmed tampered deployments or contacted Vercel publicly regarding the incident. Uncertainty remains about potential modifications to live platforms or exposure of user credentials. Vercel continues its investigation in collaboration with external cybersecurity groups and has not reported evidence of customer applications being changed. The episode underscores the growing threats posed by third-party integrations and highlights the persistent need for vigilant management of sensitive information across decentralized infrastructure.
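Vercel's guidance above -- review stored variables and rotate anything not flagged as sensitive -- can be partially automated. The sketch below is a minimal, hypothetical audit helper (the hint list and the sample variable names are illustrative assumptions, not anything from Vercel's API): it scans `.env`-style text for names that look like secrets, so a team can cross-check that each flagged variable is stored as "sensitive" on its hosting platform.

```python
import re

# Heuristic markers that usually indicate a secret; extend for your project.
SENSITIVE_HINTS = ("KEY", "SECRET", "TOKEN", "PRIVATE", "PASSWORD", "MNEMONIC")

def flag_likely_secrets(dotenv_text: str) -> list[str]:
    """Return env var names from .env-style text that look like secrets.

    Anything returned here should be stored as a 'sensitive' (encrypted)
    variable on the hosting platform, and rotated if it ever wasn't.
    """
    flagged = []
    for line in dotenv_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        match = re.match(r"([A-Za-z_][A-Za-z0-9_]*)=", line)
        if match:
            name = match.group(1)
            if any(hint in name.upper() for hint in SENSITIVE_HINTS):
                flagged.append(name)
    return flagged

# Illustrative sample only -- not real project configuration.
sample = """\
NEXT_PUBLIC_CHAIN_ID=1
RPC_URL=https://example-rpc.invalid
ALCHEMY_API_KEY=abc123
DEPLOYER_PRIVATE_KEY=0xdeadbeef
"""

print(flag_likely_secrets(sample))
```

A heuristic like this is a starting point, not a guarantee: a secret stored under an innocuous name (say, `CONFIG_VALUE`) would slip through, which is exactly why post-incident guidance favors rotating everything that wasn't already classified as sensitive.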

Kraken is a registered Restricted Dealer in Canada and one of the longest-standing crypto asset trading platforms globally. Under this listing, QCAD will be available for trading on the Kraken crypto asset trading platform, facilitating the settlement of digital asset transactions in a Canadian Dollar-denominated instrument.

Empowering Canadians in a Global Digital Economy

"Canadians are leading digital asset adoption because they want more financial choice. Listing QCAD on Kraken helps meet that demand with a compliant, Canadian Dollar-denominated stablecoin - and it makes it easier for global participants to engage with Canada's digital economy. We are excited for this practical step toward bringing the traditional financial system onchain," said Mark Greenberg, Global Head of Consumer at Kraken. For Canadians, QCAD offers a compliant way to stay anchored to the Canadian dollar while gaining access to global liquidity, 24/7 markets, and new financial use cases that do not exist in traditional systems. "Connecting Canadians to the global digital economy is core to our mission; this listing serves as a significant vote of confidence in QCAD and Canada as a whole," said Kesem Frank, CEO of Stablecorp. "Meeting Kraken's rigorous listing standards further validates QCAD as an important crypto instrument, capable of supporting global flows; we are excited to expand the 'Global Gateway' for Canadians, allowing them to seamlessly benefit from the opportunities of global digital markets."

Bringing Canadian Dollars Onchain

The availability of QCAD on Kraken unlocks new opportunities for institutional clients and individual Canadian customers alike:

* Seamless Global Participation: Canadians can hold and move Canadian Dollar-denominated value on-chain, enabling easier access to global crypto markets without depending on legacy payment rails.
* Stronger Canadian Dollar Trading Experience: Deeper liquidity and improved Canadian Dollar-denominated trading pairs help deliver tighter spreads and better price discovery for major crypto assets like BTC, ETH, and USDC.
* 24/7 Access: Unlike traditional foreign exchange markets, crypto markets do not close. With QCAD trading pairs on Kraken, customers can manage exposure, deploy capital and respond to market conditions in real time.

Despite these serious allegations, M token's price soared by around 3% in the last 24 hours after recognition from Grayscale. On April 20, on-chain sleuth ZachXBT raised serious questions about cryptocurrency exchange Kraken that shook the crypto community, asking how the exchange came to list MemeCore's M token for spot trading on July 3, 2025. In a detailed post on X, ZachXBT pointed to suspicious movements of millions of dollars' worth of M tokens right around the time of the listing. He suggested that insiders may have used the listing event to push the token's price much higher. The token reached a market capitalization of $6 billion and a fully diluted valuation of $18 billion during this event. ZachXBT warned that while insiders benefit, normal traders could be left holding the bag once the price corrects. The on-chain investigator broke down his findings in simple terms for regular traders. He first pointed to big withdrawals from the Kraken exchange: about $7.9 million worth of M tokens was withdrawn to 18 different new wallet addresses shortly after, or around the time of, the listing. Those wallets now hold around 11.7 million M tokens, currently worth around $39.8 million. ZachXBT also flagged insider deposits made on the very day of the listing. A wallet suspected to belong to the MemeCore team, which received 200 million M tokens at launch, sent 5.3 million tokens directly to two Kraken deposit addresses on July 3, 2025, the exact day of the listing. This timing raised questions about whether the team had advance knowledge of, or special access around, the listing event. Kraken is one of the biggest cryptocurrency exchanges for spot trading and one of the few major exchanges that offers spot trading for M, which ZachXBT says has helped enable price manipulation.
The availability of spot trading on a major platform like Kraken gives the token a level of legitimacy and access that many smaller tokens lack. ZachXBT has also pointed out a major gap between the project's public updates and its market valuation. The MemeCore team's recent updates mostly highlight $66 million in trading volume on their launchpad and thousands of users gained through paid incentive campaigns. According to him, the team has not shown much real product progress that would normally justify such a huge market capitalization. In his latest post on X, the on-chain sleuth posed a direct question to Kraken: why did the exchange decide to list M tokens for spot trading, and how exactly did the token pass the exchange's due diligence checks? Despite such serious allegations of insider trading on the leading crypto exchange, MemeCore's price has soared by around 3%, rising to $3.54 with a market capitalization of $4.58 billion, according to CoinMarketCap. MemeCore is a Layer 1 blockchain made for "Meme 2.0." While traditional memecoins rely on temporary hype, MemeCore wants to change that by turning internet memes into real communities with economic value. Its native token, M, is used to pay fees, stake, vote, and earn rewards. The project also offers MemeX, an easy no-code launchpad for new meme tokens, and recently received recognition from Grayscale.
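The gap ZachXBT highlighted between the $6 billion market capitalization and the $18 billion fully diluted valuation comes from the two figures using different supply bases: circulating supply for market cap, maximum possible supply for FDV. A back-of-envelope sketch (the supply figures below are illustrative assumptions chosen to match the reported numbers, not MemeCore's actual supplies):

```python
def market_cap(price: float, circulating_supply: float) -> float:
    # Value of the tokens actually in circulation
    return price * circulating_supply

def fully_diluted_valuation(price: float, max_supply: float) -> float:
    # Value if every token that can ever exist were circulating
    return price * max_supply

price = 3.54  # USD, the M price reported by CoinMarketCap

# ~1.294B circulating tokens reproduces the reported ~$4.58B market cap
print(f"Market cap: ${market_cap(price, 1_294_000_000):,.0f}")
print(f"FDV:        ${fully_diluted_valuation(price, 10_000_000_000):,.0f}")
```

A large FDV-to-market-cap ratio is exactly the dynamic the investigator warned about: most of the supply is not yet circulating, so later unlocks can dilute holders who bought at the post-listing price.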

John Oliver used Sunday's "Last Week Tonight" to call prediction markets gambling sites with better lawyers, adding the loudest voice yet to a backlash already being waged by state AGs, Congress and the sportsbook industry.

What Oliver Said

Oliver framed Kalshi and Polymarket as gambling sites operating in states where gambling remains illegal, arguing their hedging pitch strains credibility when most users trade sports or political word markets. He cited one analysis showing more than two-thirds of all winnings on Polymarket have been captured by just 740 accounts, despite the platform having over two million users. Oliver also went after CNN, Fox News, CNBC and Wall Street Journal parent Dow Jones for paid partnerships that put prediction market odds on air during news coverage, singled out Donald Trump Jr.'s advisory roles at both companies, and criticized CFTC Chairman Michael Selig for suing three states rather than letting courts decide whether prediction markets constitute gambling. He also flagged suspiciously timed Polymarket trades around the U.S. strikes on Iran and the Nicolás Maduro seizure.

What It Means For HOOD And COIN

Robinhood Markets (NASDAQ:HOOD), which distributes Kalshi contracts in all 50 states, is the clearest public-market proxy. Bernstein last week projected industry volumes could reach $1 trillion by 2030. Coinbase Global (NASDAQ:COIN), which distributes Kalshi and is building its own prediction markets product, was directly name-checked. Oliver played a clip of Coinbase CEO Brian Armstrong on a recent earnings call rattling off "Bitcoin, Ethereum, blockchain, staking and web 3" to move a prediction market tracking which words he would say on the call. Intercontinental Exchange (NYSE:ICE), parent of the NYSE, committed up to $2 billion to Polymarket in a two-stage deal completed last month.
The Backlash Hits A Sensitive Moment

More than a dozen bills targeting prediction markets are live in Congress, including the BETS OFF Act from Sen. Chris Murphy (D-Conn.), Rep. Mike Levin's (D-Calif.) Death Bets Act and Sen. Adam Schiff's (D-Calif.) sports contracts bill. TD Cowen analysts said last month that none are likely to pass this session. The courts are the bigger threat. The Third Circuit sided with Kalshi earlier this month, ruling New Jersey cannot enforce gambling laws against the platform. The Ninth Circuit went the other way on Nevada days later, teeing up a circuit split that gaming attorneys expect to land at the Supreme Court. Oliver's segment arrives in the middle of that fight.

Rival prediction market Kalshi saw its valuation double to $22 billion following a $1 billion raise last month. Prediction markets platform Polymarket is in talks to raise $400 million at a valuation of around $15 billion, according to The Information. The prediction market firm is reportedly looking to add strategic investors beyond New York Stock Exchange parent company Intercontinental Exchange to the round, which could total $1 billion. The new funding round follows a $600 million investment in Polymarket by Intercontinental Exchange last month, bringing its total investment in the prediction market firm to $1.6 billion. At the time, ICE announced that it would purchase up to $40 million worth of Polymarket securities from existing holders, fulfilling its commitment to invest $2 billion in the firm in an October 2025 deal that valued the company at $9 billion. ICE's relationship with Polymarket has deepened over the past six months. As part of its October deal, the exchange operator became the exclusive global distributor of Polymarket's event-driven data to institutional capital markets. In February it launched the Polymarket Signals and Sentiment tool, integrating prediction market data into its existing financial infrastructure offerings. The institutional backing marks a turning point for prediction markets, which have evolved from crypto-native experiments into mainstream financial instruments amid growing institutional interest. Earlier this year, Polymarket's rival Kalshi raised $1 billion to reach a $22 billion valuation, while the likes of Charles Schwab and Nasdaq are making moves in the space. Nevertheless, prediction markets face regulatory challenges, with states and federal authorities at odds over whether their offerings constitute gambling or federally regulated event contracts.
Last month, Nevada became the first state to ban Kalshi from operating within its borders, while Arizona has filed criminal charges against Kalshi for allegedly operating an illegal unlicensed gambling business. Meanwhile, an appeals court ruling this month found that the firm's sports-related markets should be federally regulated, while the Justice Department and the CFTC have jointly filed lawsuits against Illinois, Arizona, and Connecticut over who has the right to regulate prediction markets. Earlier this month, CFTC Chairman Michael Selig raised concerns that driving prediction markets offshore into unregulated space could cause FTX-style "implosions," arguing that, "We've got to make sure these exchanges come and register here in the United States and that our rules are set up to facilitate fair markets, markets that have investor protections, customer protections, and have real guardrails and rules."

According to a report by Axios, the National Security Agency is using Anthropic's new Mythos Preview, and one source said the model was being used more widely within the department. The model was introduced at the beginning of April and described as a general-purpose language model that is "strikingly capable at computer security tasks." The report comes days after Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and other officials, reportedly to discuss Mythos. The White House later said the meeting on Friday was "productive and constructive," though President Trump said he had "no idea" about it when asked by reporters. This comes despite Anthropic's designation as a supply chain risk following disagreements between the company and the Pentagon. READ: Anthropic Mythos sparks concerns among the finance community over security risks (April 17, 2026) While it is not clear how the NSA is using the model, other organizations with access are using it predominantly to scan their own environments for exploitable security vulnerabilities. Anthropic has restricted access to Mythos to around 40 organizations, saying that its offensive cyber capabilities are too dangerous for a wider release. Anthropic has only announced 12 of those organizations, and a source said the NSA was among the unnamed agencies with access. The NSA's UK counterparts have said they have access to the model through the country's AI Security Institute. READ: Trump administration may be pushing banks to trial Anthropic Mythos (April 13, 2026) Anthropic is still involved in a legal battle with the U.S. government after the company filed lawsuits against the Department of Defense in two courts in March, after it was deemed a supply chain risk by the Trump administration.
While one court granted Anthropic a preliminary injunction temporarily blocking the designation, federal judges in the other denied its motion to lift the label. Finance ministers, bankers, and financiers have recently expressed concerns about the Mythos model over its potential to undermine the security of financial systems. Experts say the model has a potentially unprecedented ability to identify and exploit cybersecurity weaknesses, though others caution that further testing is needed to properly understand its capabilities. The Trump administration recently summoned bank executives for a meeting where U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell encouraged the use of Anthropic's latest model to detect vulnerabilities, according to Bloomberg.

Deutsche Bank CEO Christian Sewing said on Monday that banks were in close contact with European watchdogs about Anthropic's Mythos as regulators rush to examine the cybersecurity risks the new artificial intelligence model raises and how prepared financial firms are to tackle them. Mythos is viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems, prompting a series of warnings from regulators and policymakers gathered at last week's International Monetary Fund spring meeting in Washington. "It's certainly not something that's causing panic or setting off any alarm bells on our end right now, but it's definitely something we need to keep in mind in our day-to-day risk management -- and that's exactly what we're doing," Sewing, who is chief executive of Germany's biggest bank, told journalists. "The banks are prepared for this and have their own responses. So this is something we have to live with, and of course everyone is trying to gain access, but I also think it's right that access is limited for now," he said, adding that a German banking association would meet to discuss the issue on Monday. Anthropic has so far restricted access to the model to partners in its Project Glasswing initiative and about 40 additional organisations that build or maintain critical software infrastructure. JPMorgan, which is part of Glasswing, is the only bank Anthropic has publicly said has access. Multiple senior banking and regulatory sources in Europe told Reuters they were not aware of any European financial institution with access to Mythos yet.
Anthropic did not immediately respond to a Reuters request for comment on if and when it would grant banks access.

Substantially more capable at cyber offence

The British government sent an open letter to company leaders on April 15 warning that testing by its AI Security Institute (AISI) had shown Mythos to be "substantially more capable at cyber offence than any model we have previously assessed." Some Asian regulators said on Monday they were monitoring the development. South Korea's Financial Supervisory Service (FSS) said it held a meeting with information security officials from financial firms last week to review Mythos-related risks. Mythos was a key topic on the sidelines of the IMF meetings last week. European regulators are not yet overly concerned and for now are assessing it through their existing cyber resilience processes, two European supervisory sources told Reuters. One banking source said that the ECB and other regulators have been in contact with European banks to assess their preparedness for new cybersecurity risks. Supervisors have asked about banks' awareness of the threat and their ability to respond, the source said. The vast capabilities of Mythos to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities, experts say, prompting greater scrutiny from regulators globally. Barclays CEO C S Venkatakrishnan said on Friday in Washington that Mythos was a serious threat to the global banking system and likely to be followed by similar, more powerful cyberthreats.
Anthropic and the US government have not been on the best of terms in recent months, as the US Department of Defense deemed the company a "supply chain risk" after its refusal to remove safeguards designed to prevent its products being used for autonomous weapons and mass surveillance purposes. The company has since sued, but that hasn't stopped Anthropic CEO Dario Amodei from attending a meeting at the White House after the announcement of its new Claude Mythos AI model earlier this month. Mythos may have caused a stir all the way to the very top, as it's said to be capable of identifying thousands of high-severity cybersecurity vulnerabilities and writing its own exploits to demonstrate them. The model is currently in the preview stage, with a few dozen companies given access to its capabilities. The Anthropic chief spoke to US Treasury Secretary Scott Bessent and White House Chief of Staff Susie Wiles last Friday, according to BBC News, in a meeting that was described by the White House as "productive and constructive". "We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology," the White House said in a reported statement. "The conversation also explored the balance between advancing innovation and ensuring safety." This friendly-sounding chat stands in stark contrast to comments made by US President Donald Trump about Anthropic in the middle of its dispute with the Department of Defense in February. Trump called it a "radical, left, woke company", and claimed that the US government "will not do business with them again." Despite these remarks, Claude Mythos appears to have prompted a cooler-headed approach from the US government toward Anthropic's AI developments.
Certainly, an AI model with these sorts of capabilities would be a serious security risk in the wrong hands, and a powerful tool for any company, or government, to wield. For Anthropic's part, a spokesperson for the company said: "The meeting reflected Anthropic's ongoing commitment to engaging with the US government on the development of responsible AI. We are grateful for their time and are looking forward to continuing these discussions."

Piraeus (ATHEX: TPEIR) and Accenture (NYSE: ACN) announced a significant expansion of their long-standing collaboration with the launch of a dedicated AI Hub, supported by Anthropic and designed to accelerate Piraeus' enterprise-wide AI transformation and set a benchmark for AI-driven banking in Greece. The AI Hub will act as a central engine for designing, developing and scaling advanced AI capabilities across Piraeus' full value chain. By bringing together Accenture's industry and AI expertise, including its Data & AI Center of Excellence in Athens, with Piraeus' strategic AI roadmap, the Hub will drive the reinvention of banking processes across operations, customer experience, risk, and compliance, and modernize the technology backbone. In parallel, the Hub will strengthen Piraeus' long-term AI capabilities by attracting, developing and upskilling specialized talent through targeted recruitment and structured learning programs, including Udacity, Accenture's AI-native learning and training platform. This approach supports the Bank's ambition to embed AI skills and ways of working deeply across the organization. A key focus of the collaboration will be the development of secure, responsible and human-centric AI solutions, designed to autonomously support decision making, streamline complex processes and enhance both customer and employee experiences. Piraeus and Accenture, through its newly formed Anthropic Business Group, will leverage the power of Anthropic AI models and platforms and its deep grounding in ethical AI principles to drive innovation in a responsible manner, ensuring that advanced AI solutions are aligned with the bank's values and regulatory requirements. This approach will support the development of secure, trustworthy, and scalable AI applications to elevate human performance and the quality of banking services. "The AI Hub represents a strategic inflection point for Piraeus," said Harry Margaritis, Group Chief Operating Officer, Piraeus.
"We are advancing from individual AI deployments to a unified, enterprise-level capability that is deeply embedded in how the Bank operates. Our collaboration with Accenture, together with the integration of Anthropic's AI technology, enables us to scale advanced AI responsibly, anchored in strong governance, transparency and human control. This initiative empowers our people, reinforces trust with our customers and regulators, and builds a resilient, future-ready foundation for banking in Greece." "This collaboration reflects the deep and longstanding relationship between Piraeus and Accenture, built on trust, value creation and shared ambition," stated George Pallioudis, Financial Services Lead at Accenture. "It's a testament to Piraeus' leadership commitment to AI adoption and a recognition of Accenture's leading role in AI-powered reinvention at scale." Thomas Remy, Head of Southern Europe, Middle East & Africa for Anthropic , commented: "AI is transforming how banks operate, and it's vital that modern AI systems meet strong governance and regulatory requirements. Claude is built with the safety, reliability and transparency that highly regulated industries like banking demand. In partnering with Anthropic to power a new AI hub for Greek banking, Piraeus and Accenture have underscored our shared commitment to safe, responsible AI deployment." The AI Hub builds on Piraeus' successful collaboration with Accenture to adopt a cloud first operating model, which has already accelerated digital service delivery, enhanced security and compliance, improved operational efficiency and supported the Bank's broader sustainability and modernization objectives. About Piraeus Piraeus, established in 1916, is the leading financial institution in Greece, in terms of market shares in loans, deposits, and branch presence. 
The Bank provides a comprehensive range of financial products and services, with recognized leadership in SME banking, retail banking, digital banking, and capital markets. Headquartered in Athens and listed on the Athens Stock Exchange, Piraeus employs approximately 8,100 professionals and operates a nationwide network of 368 branches. As of 31 December 2025, Piraeus Group reported total assets of €91 billion. Piraeus is committed to supporting the country's economic development and delivering long-term value for customers, shareholders, and society. Through disciplined execution, innovation, and sustainable banking principles, Piraeus aims to drive growth and resilience across its operations.

About Accenture

Accenture is a leading solutions and services company that helps the world's leading enterprises reinvent by building their digital core and unleashing the power of AI to create value at speed across the enterprise, bringing together the talent of our approximately 786,000 people, our proprietary assets and platforms, and deep ecosystem relationships. Our strategy is to be the reinvention partner of choice for our clients and to be the most client-focused, AI-enabled, great place to work in the world. Through our Reinvention Services we bring together our capabilities across strategy, consulting, technology, operations, Song and Industry X with our deep industry expertise to create and deliver solutions and services for our clients. Our purpose is to deliver on the promise of technology and human ingenuity, and we measure our success by the 360° value we create for all our stakeholders. Visit us at accenture.com.

About Anthropic

Anthropic is an AI research and development company that creates reliable, interpretable, and steerable AI systems. Anthropic's flagship product is Claude, a large language model trusted by millions of users worldwide. Learn more about Anthropic and Claude at anthropic.com.
Forward-Looking Statements

Except for the historical information and discussions contained herein, statements in this news release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Words such as "may," "will," "should," "likely," "anticipates," "aspires," "expects," "intends," "plans," "projects," "believes," "estimates," "positioned," "outlook," "goal," "target" and similar expressions are used to identify these forward-looking statements. These statements are not guarantees of future performance nor promises that goals or targets will be met, and involve a number of risks, uncertainties and other factors that are difficult to predict and could cause actual results to differ materially from those expressed or implied. These risks include, without limitation, that the partnership might not achieve its anticipated benefits and risks and uncertainties related to the development and use of AI, including advanced AI, could harm our business, damage our reputation or give rise to legal or regulatory action, as well as the risks, uncertainties and other factors discussed under the "Risk Factors" heading in Accenture plc's most recent Annual Report on Form 10-K and other documents filed with or furnished to the Securities and Exchange Commission. Statements in this news release speak only as of the date they were made, and Accenture undertakes no duty to update any forward-looking statements made in this news release or to conform such statements to actual results or changes in Accenture's expectations.

Anthropic released Claude Opus 4.7 this week with genuine improvements for enterprise automation, but the findings from its unreleased model should be on every IT leader's desk before anything else.

Last week, Anthropic gathered twelve of the world's largest technology companies to share an uncomfortable finding. Its most powerful AI model had spent several weeks autonomously identifying security flaws in widely used software, including vulnerabilities that had gone undetected for nearly three decades. That disclosure came alongside the general release of Claude Opus 4.7. Anthropic is using the newer model to test the security controls it needs before it can responsibly release the more capable one. For enterprise buyers, both developments matter. Research from Gravitee, published in February 2026, found that 81% of enterprise teams have moved past the planning phase for AI agents. Yet only 14.4% have full security or IT approval for the agents they run. That governance gap looks considerably more serious in light of what Anthropic disclosed this week. The core problem with running AI agents at scale has always been reliability. Models that drop context between sessions, stall on complex tasks, or need supervision at every step eat up more time than they save. Opus 4.7 addresses several of those issues. It checks its own outputs before reporting back, retains context across sessions, and follows instructions more precisely than its predecessor. For teams running multi-day workflows, that context retention matters most. Re-establishing background at the start of each session is a real operational cost that most productivity assessments overlook. Enterprise testers reported measurable gains. Notion saw a 14% improvement on complex multi-step workflows with a third fewer tool errors. They also said it was the first model to pass their implicit-need tests, where the model works out requirements without explicit instruction.
Ramp found it needed far less step-by-step guidance across tasks spanning multiple tools and codebases. Image resolution has increased to more than three times that of previous Claude models. That makes document processing and dense interface work more practical. Those running Claude within Microsoft 365 will see that improvement across Teams, Outlook, and OneDrive workflows. Pricing stays at $5 per million input tokens and $25 per million output tokens.

Using the Claude Mythos preview, Anthropic's model autonomously found thousands of critical zero-day vulnerabilities. These spanned every major operating system and web browser. One was a 27-year-old flaw in OpenBSD that let attackers remotely crash machines. Another was a bug in FFmpeg that automated testing tools had run five million times without flagging. Maintainers have now fixed all of them. As UC Today covered separately this week, the significance is not the individual bugs. It is that a capable AI model can now find serious vulnerabilities at scale, autonomously, and faster than any existing testing process. The average cost of a data breach stands at $4.4 million. Unified communications environments, built on browsers, shared media libraries, APIs, and virtualised infrastructure, sit squarely in scope. Project Glasswing, Anthropic's response, brings together AWS, Cisco, CrowdStrike, Google, Microsoft, Palo Alto Networks, and others. The group committed $100M in model credits to scanning and hardening critical software infrastructure. They also directed a further $4M to open-source security organisations. Microsoft, which has been building its own AI security agent infrastructure in parallel, joined as a founding member. Opus 4.7 is the first Claude model to ship with automated safeguards that block high-risk cybersecurity uses. Anthropic describes it as a test bed for the controls needed before Mythos-class models can reach a wider audience.
Security professionals with legitimate requirements can apply through the new Cyber Verification Programme. Deloitte's 2026 enterprise AI report found that only one in five companies has a mature governance model for autonomous AI agents. For IT and security leads, that figure and this week's news belong in the same conversation.

The post Vercel Breach Explained: OAuth Risk in AI + SaaS Environment appeared first on Grip Security Blog. For years, security teams have worried about perimeter breaches, endpoint compromise, and phishing. But the latest incident involving Vercel highlights something far more systemic, and far more dangerous: Your SaaS ecosystem is now your attack surface. And AI is accelerating the problem. At a high level, this breach wasn't a traditional exploit; it was inherited access abuse through SaaS integration. This is not just a "Vercel problem." It's a blueprint for how modern breaches happen. This wasn't malware. It wasn't a zero-day. It was trusted access doing exactly what it was designed to do. Once Context.ai was compromised, the attacker didn't need to break in. This breach exposes two massive, converging risks, and we've now seen similar patterns across multiple incidents. The pattern is consistent: one compromised SaaS app quickly cascades into dozens of connected systems. Context.ai isn't just another SaaS tool. It represents a rapidly growing category: AI agents that require deep integration to function. Shadow AI is not just about usage. It's about uncontrolled access at scale. Even if Vercel's direct exposure is contained, the implications are massive. This is the part most organizations miss: most AI + SaaS breaches won't trigger an alert. They'll trigger a headline. If you're a security leader, assume exposure and act accordingly. If a user connected Context.ai, treat it as a potential compromise path. This is exactly where traditional security models break down, and where identity-driven AI + SaaS security becomes critical. Grip continuously monitors OAuth grants across your environment. This is core to Identity Threat Detection and Response (ITDR) for SaaS. And you definitely can't secure what you implicitly trust. Grip extends detection beyond login, because in SaaS the attack starts after authentication. Every new integration is a new attack path.
Every AI agent is a new identity. Do we actually understand the access we've already granted? Reach out if you want a walkthrough of your exposure, your risk, and how to fix it fast.

FRANKFURT, April 20: Deutsche Bank CEO Christian Sewing said on Monday that banks were in close contact with European watchdogs about Anthropic's Mythos as regulators rush to examine the cybersecurity risks the new artificial intelligence model raises and how prepared financial firms are to tackle them. Mythos is viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems, prompting a series of warnings from regulators and policymakers gathered at last week's International Monetary Fund spring meeting in Washington. "It's certainly not something that's causing panic or setting off any alarm bells on our end right now, but it's definitely something we need to keep in mind in our day-to-day risk management - and that's exactly what we're doing," Sewing, who is chief executive of Germany's biggest bank, told journalists. "The banks are prepared for this and have their own responses. So this is something we have to live with, and of course everyone is trying to gain access, but I also think it's right that access is limited for now," he said, adding that a German banking association would meet to discuss the issue on Monday. Anthropic has so far restricted access to the model to partners in its Project Glasswing initiative and about 40 additional organisations that build or maintain critical software infrastructure. JPMorgan, which is part of Glasswing, is the only bank Anthropic has publicly said has access. Multiple senior banking and regulatory sources in Europe told Reuters they were not aware of any European financial institution with access to Mythos yet. Anthropic did not immediately respond to a Reuters request for comment on whether and when it would grant banks access.
"SUBSTANTIALLY MORE CAPABLE AT CYBER OFFENCE" The British government sent an open letter to company leaders on April 15 warning that testing by its AI Security Institute (AISI) had shown Mythos to be "substantially more capable at cyber offence than any model we have previously assessed." Some Asian regulators said on Monday they were monitoring the development. South Korea's Financial Supervisory Service (FSS) said it held a meeting with information security officials from financial firms last week to review Mythos-related risks. Mythos was a key topic on the sidelines of the IMF meetings last week. European regulators are not yet overly concerned and for now are assessing it through their existing cyber resilience process, two European supervisory sources told Reuters. One banking source said that the ECB and other regulators have been in contact with European banks to assess their preparedness for new cybersecurity risks. Supervisors have asked about banks' awareness of the threat and their ability to respond, the source said. The vast capabilities of Mythos to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities, experts say, prompting greater scrutiny from regulators globally. Barclays CEO C. S. Venkatakrishnan said on Friday in Washington that Mythos was a serious threat to the global banking system and likely to be followed by similar, more powerful cyberthreats.
The move comes after the vendor forged significant deals with OpenAI and AWS earlier this year. AI chipmaker Cerebras Systems is targeting an initial public offering. The startup, founded in 2015, says it is building the "fastest AI infrastructure in the world" and filed on April 17 with the Securities and Exchange Commission to go public on the Nasdaq exchange. The filing does not disclose the size of the offering or the share price. However, it is understood that the IPO is likely to go ahead soon, possibly as early as next month. The stock will be listed under the ticker symbol CBRS, with Morgan Stanley, Citigroup, Barclays and UBS Investment Bank as lead underwriters. Mizuho and TD Cowen, meanwhile, will act as bookrunners. The filing provides insight into the figures that prompted Cerebras' decision. It showed that in 2025, the vendor recorded $510 million in revenue, representing 76% year-over-year growth. Net income was $237.8 million in 2025, as opposed to a net loss of $481.6 million in 2024. This upward trajectory is being seen as vindication of the company's decision to operate its chips in data centers rather than sell them directly to companies. In February, Cerebras raised $1.1 billion in Series H funding at a valuation of about $23 billion. This followed a $1.1 billion Series G round in September. Cerebras claims its Wafer-Scale Engine 3 is the world's largest and fastest commercialized AI processor. According to Cerebras, it is nearly 60 times bigger than Nvidia's B200 chip but uses a fraction of the power per unit of compute, while delivering inference up to 15 times faster. Earlier this year, Cerebras agreed to a deal to supply 750 megawatts of its wafer-scale systems to OpenAI, with the generative AI vendor saying it would use the compute for real-time responses for coding, inference, image generation and complex reasoning. This year also saw a deal with AWS to use Cerebras chips in Amazon data centers.
However, this is not the first time Cerebras has eyed an IPO. It scrapped plans in October last year without providing an explanation at the time.

Vercel, a web infrastructure provider, confirmed its data has been breached. According to the company's statement, the breach occurred through the compromise of a third-party artificial intelligence (AI) tool deployed by an employee. The tool, Context.ai, enabled the attacker to access the employee's company Google Workspace account. From there, the attacker accessed some additional company environments. The company claims that environments labelled as "sensitive" show no evidence of being accessed. To fully understand the scope of the issue, the company is collaborating with outside security firms and law enforcement. At this time, it is suspected that the attacker is "highly sophisticated." The company states that a limited number of customers were impacted. Those customers have been contacted. For this subset of customers, compromised Vercel credentials are a possibility. While at this time the data breach appears limited, the organization is still investigating to determine exactly what data has been compromised. Though the company's statement does not say this was a case of shadow AI specifically, security leaders can nevertheless see the risks of AI tools -- known or unknown -- in the enterprise. "The Vercel breach this week is a useful case study in a risk that's easy to overlook: what happens when an employee signs up for a consumer AI tool with their enterprise credentials," states Giuseppe Trovato, Head of Research, Geordie AI. Trovato further breaks down the issue for security leaders, dividing it into three main takeaways. Trovato advises security leaders to approach "agent permissions like service account permissions: audit them, minimize them, make sure you can revoke them fast."
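That last piece of advice can be made concrete. Below is a minimal sketch of what an OAuth-grant audit might look like, assuming you can export each app's granted scopes; the app names, scope strings, and risk markers here are hypothetical illustrations, not details from the Vercel incident.

```python
# Hypothetical sketch: audit third-party OAuth grants against a minimal
# allowlist of scopes, flagging over-broad grants for review or revocation.

# Scopes a lightweight consumer integration might legitimately need.
ALLOWED_SCOPES = {"openid", "email", "profile"}

# Substrings that suggest broad data access and warrant immediate action.
HIGH_RISK_MARKERS = ("drive", "gmail", "admin", "directory")

def audit_grant(app_name: str, scopes: list[str]) -> dict:
    """Classify one OAuth grant by the scopes it was given."""
    excessive = [s for s in scopes if s not in ALLOWED_SCOPES]
    high_risk = [s for s in excessive
                 if any(m in s.lower() for m in HIGH_RISK_MARKERS)]
    if high_risk:
        action = "revoke"
    elif excessive:
        action = "review"
    else:
        action = "ok"
    return {"app": app_name, "excessive": excessive,
            "high_risk": high_risk, "action": action}

if __name__ == "__main__":
    grants = [
        ("calendar-helper", ["openid", "email"]),
        ("ai-office-suite", ["openid",
                             "https://www.googleapis.com/auth/drive",
                             "https://www.googleapis.com/auth/gmail.readonly"]),
    ]
    for app, scopes in grants:
        report = audit_grant(app, scopes)
        # A broad "allow all" style grant ends up in the revoke bucket.
        print(f"{report['app']}: {report['action']}")
```

In practice the grant inventory would come from your identity provider's admin API rather than a hard-coded list, but the triage logic stays the same: anything beyond the minimum is reviewed, and anything touching mail, files, or directory data is revoked first.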

Space has become a big business and space stocks are riding that trend higher. They may not be as hot as artificial intelligence stocks were in 2024 and 2025, but that mania may only be a matter of time. That's because SpaceX, Elon Musk's space company, is going public with an IPO date of sometime in June 2026. Retail and institutional investors are expected to have significant interest in this public offering. But buying shares around an IPO is tricky, and many retail investors have been caught on the wrong side of volatile price action. A different way to profit from the SpaceX IPO is to invest in companies that serve as proxies for the company. Investors have many names to pick from. However, these three names stand out for different reasons. Each stock has also posted significant gains in 2026 that are expected to continue.

That may sound bold, but Rocket Lab is, perhaps, the most legitimate operational proxy for SpaceX. The company is the second most active launcher in the United States and the global leader among publicly traded space companies. In 2025, that translated to over $600 million in sales, a 39% year-over-year gain. Rocket Lab's business model mirrors SpaceX's ambitions at a smaller scale: launch services, satellite manufacturing, and in-orbit operations. Its backlog now exceeds $2 billion and is anchored by an $816 million Space Development Agency contract to build 18 satellites. The catalyst coming in late 2026 is the company's Neutron rocket, scheduled for its inaugural launch in Q4 2026. It's designed to go head-to-head with SpaceX's workhorse Falcon 9 in the medium-lift segment. Investors seem to believe in the bull case. RKLB has soared over 300% in the last 12 months and over 20% in 2026. That said, the stock is currently trading above its consensus price target of $79.85 and may need a boost (no pun intended) to sustain a significant move higher. AST SpaceMobile occupies a unique position as it relates to SpaceX.
The company competes with SpaceX's Starlink division, yet it still stands to benefit directly from the IPO. The SpaceX S-1 prospectus, due sometime in May, will put hard numbers on the satellite broadband market for the first time. Right now, ASTS is arguably the most direct public-market expression of that opportunity. The company is building a space-based cellular network that connects standard smartphones to broadband internet without specialized hardware. Partnerships with AT&T and Verizon give it an enviable distribution that's showing up on the top line. Q4 2025 revenue came in at $54 million, beating estimates by nearly 29%, and analysts project full-year 2026 revenue could exceed $180 million on its way to over $785 million in 2027. The company is targeting 45 to 60 satellites in orbit by year-end. That said, ASTS has already had a remarkable run, up more than 3,000% since its commercial pivot in mid-2024. That growth hasn't come without volatility. But with $2.8 billion in cash, over $1.2 billion in contracted telecom commitments, and the SpaceX prospectus as a potential catalyst that could reframe how investors price satellite connectivity, it's difficult to bet against the bull case. Momentus Inc. may look like an outlier compared to Rocket Lab and AST SpaceMobile, but that's part of the opportunity. With a market cap of just $43.72 million, this is a micro-cap space infrastructure company with revenue that reflects that market cap. However, early-stage businesses aren't expected to generate significant revenue. And for risk-tolerant investors, the time to invest in MNTS may be before the SpaceX IPO. That's because Momentus specializes in satellite technology, in-space transportation, and orbital services. These are the picks-and-shovels layer of the space economy. It's boring, but vital as satellite constellations scale. 
Its Vigoride Orbital Service Vehicle successfully launched aboard SpaceX's Transporter-16 rideshare mission in late March 2026, hosting 10 government and commercial payloads for customers, including DARPA and SpaceWERX. Vigoride 8 is already scheduled to launch in early 2027. Adding to the bull case, Momentus holds active contracts with NASA, DARPA, and the U.S. Air Force Research Laboratory, and recently expanded into a 61,000-square-foot R&D and manufacturing facility in San Jose. That said, there are real concerns that investors shouldn't ignore. These include going concern commentary and a 2025 reverse stock split. There's a reason the company has just 9% institutional ownership. But if the SpaceX IPO rerates how the market values the broader space infrastructure sector, Momentus could be a tiny company that captures outsized attention.

The Information reported that Polymarket is in talks to raise about $400 million at a valuation of roughly $15 billion, including the new money. The report cited people familiar with the talks. The potential round would affect Polymarket and its existing backers because it would add to the $600 million already invested by Intercontinental Exchange and could push the broader financing effort to about $1 billion if additional strategic investors join. For now, the main point to watch is whether the discussions turn into a completed raise. Polymarket did not immediately respond to a request for comment, leaving the proposed terms unconfirmed beyond The Information's report. Source: The Information. Disclaimer: Crypto Economy Flash News are based on verified public and official sources. Their purpose is to provide fast, factual updates about relevant events in the crypto and blockchain ecosystem. This information does not constitute financial advice or an investment recommendation. Readers are encouraged to verify all details through official project channels before making any related decisions.

The EU's new Entry/Exit System (EES) has fully come into effect for UK passengers after its October rollout. It's a biometric system (including a photo and/or fingerprints) that registers non-EU nationals every time they make a short stay in Schengen countries. The EU's site says it's designed to eventually replace passport stamps and offer a more "efficient" version of EU check-ins. But so far, there have been early hiccups: EES has been blamed for border delays that left passengers behind and "hours-long queues". In response, airlines like TUI, Jet2, and easyJet have shared advice. The advice reads, "You should allow extra time to register your biometric details, such as fingerprints and a photo, the first time you enter the EU. There is no cost for EES registration, and your digital record will last three years before you need to register again." And responding to an X post by a passenger, the company added: "We ask customers travelling on our European short-haul flights to be there two hours prior to departure. It would be three hours if you're travelling on a long-haul flight and one if you're travelling on a domestic flight within the UK." In a travel alert, they said: "At some airports, you might still find longer queues, particularly at busy travel periods." They added, "To help your journey run as smoothly as possible, please allow a little extra time when passing through border control. Keep any essential medication in your hand luggage in case of delays, and when departing the EU, head straight to passport control after dropping your bags to avoid hold‑ups. Bringing some extra water for comfort is also a good idea." The company shared, "There may be longer wait times at Border Control at some EU Airports, especially at busy times. Once you start your EES registration, it should take around 1-2 minutes per person to complete. "There may be longer wait times than usual when you arrive in destination and before your flight back to the UK.
Unfortunately, this is outside of our control. But remember, there's nothing you can prep before you travel." The airline added, "You'll also need to pass through EES when leaving the EU in the same way you do on arrival. Depending on how busy the airport is, this may result in longer wait times at passport control before boarding your flight to the UK. After checking in for your flight, please head straight to security and passport control in order to arrive at your gate in plenty of time." The airline pointed out that while kids under 12 are exempt from fingerprinting, passengers "may experience longer waiting times on arrival, so allow extra time and factor this in when planning onward travel, including trains, taxis, or flight transfers". Plan your journey, arrive early, use Bag Drop as soon as possible if you're availing of the service, get through security as fast as possible, and "be aware that there may be further checks at passport control after security and before reaching your gate," they said. They warned that queues might be longer as airports adjust to the system. "Have your passport ready and follow EES signs," they wrote. "We recommend arriving at the airport with extra time to allow for these additional checks, especially during busy travel periods."

Anthropic PBC's new artificial intelligence (AI) model, Claude Mythos, has drawn significant attention in the cybersecurity field following its preview release to a small group of companies and organizations. According to Anthropic, the latest model is capable of advanced reasoning, which enables it to identify software vulnerabilities. Based on publicly available information, the preview found Mythos detected high-severity bugs and software defects that had gone undiscovered for decades and are difficult to detect with traditional tools, with vulnerabilities identified across major operating systems and Web browsers. Mythos was not specifically trained for cybersecurity purposes, but developed as a general-purpose language model offering high performance across a range of uses. However, the preview showed its ability to identify security vulnerabilities and develop exploits -- entirely autonomously and at machine speed. This has raised concerns that malicious actors might exploit such a frontier model to destabilize markets and financial systems. It is not a question of if, but when hackers will use models like Mythos at the expense of others. Barclays PLC chief executive officer C. S. Venkatakrishnan on Friday warned at a meeting of the G30 consultancy group on the sidelines of the IMF's spring meeting in Washington that Mythos poses a serious threat to the global banking system and is likely to be followed by even more powerful cyberthreats, Reuters reported. It followed a surprise meeting earlier this month between US Secretary of the Treasury Scott Bessent, US Federal Reserve Chairman Jerome Powell and several major US bank CEOs to discuss risks posed by Anthropic's latest AI model, Bloomberg News reported. Mythos' limited release to about 40 tech companies and selected organizations has drawn attention, as it allows firms to deploy the model, patch vulnerabilities and bolster defenses while limiting misuse.
It is the first time in the past few years that a leading AI company has withheld a public release due to safety concerns, despite skeptics viewing it as hype ahead of Anthropic's initial public offering or a marketing strategy. However, the meeting between Bessent, Powell and bank CEOs -- rather than technical or compliance teams -- suggested US regulators are taking Mythos seriously and extending AI risk governance into financial supervision. The preview comes at a time when cyberattack concerns are already a major global issue, as software used in banking systems, medical records, logistics networks and power grids contains vulnerabilities that can be exploited, threatening national security and businesses. While AI-related cybersecurity risks are not new, Mythos has drawn greater attention due to perceived qualitative changes in AI capability and growing caution about the technology. While most companies and cybersecurity experts have not yet tested Mythos and cannot fully assess its performance, AI development is clearly advancing faster than expected. There could be a Mythos 2 and Mythos 3 in the pipeline, as new models emerge at an accelerating pace. Taiwanese authorities and local firms, especially financial institutions, have enhanced defenses through AI-driven detection, vulnerability assessments and upgraded monitoring systems, but traditional cybersecurity measures alone are no longer sufficient. Addressing these threats requires close coordination between the public and private sectors to harden the domestic cybersecurity ecosystem. Taiwan also needs to engage with like-minded partners on regulatory standards and coordinated countermeasures. Raising public awareness and improving national preparedness are essential to countering AI-linked cyberattacks, while continuously improving the ability to detect anomalies early, identify threats quickly and respond before incidents escalate remains critical.

A mass shooting disrupted the peace at a North Carolina park following a planned confrontation involving youths. The situation escalated at Leinbach Park near Jefferson Middle School around 10 am, according to the Winston-Salem police. The incident involved an exchange of gunfire, though the exact number of victims remains unconfirmed, with suspects currently evading capture. Jefferson Middle School was placed on lockdown until the area was deemed safe by law enforcement, as confirmed by the North Carolina State Bureau of Investigation. Police authorities have identified some victims and suspects, including juveniles, but are actively working to ensure all individuals involved are accounted for. The investigation continues as the community grapples with the unsettling events.

An employee using a consumer app was breached after granting too many permissions. Vercel, a cloud development platform, said that some of its internal systems were accessed after a third-party tool called Context.ai was compromised while being used by one of Vercel's employees, according to a blog post released Sunday. Vercel is widely known as the creator of Next.js, which is the open-source framework for React. The attacker was able to take over the employee's Vercel Google Workspace account and access certain company "environments and environment variables" that were not designated as "sensitive." Vercel said that a limited number of customers had their credentials compromised during the attack, and that they have been notified. They were urged to immediately rotate credentials. The company said it believes the attacker is highly sophisticated, based on an assessment of their "operational velocity and detailed understanding of Vercel's systems." Vercel is working with Mandiant, the incident response unit of Google, as well as other outside companies and law enforcement. Context on Sunday said there was an attack in March where a hacker gained access to the company's Amazon Web Services environment, according to a blog post. The hacker appears to have compromised OAuth tokens for some of Context's consumer users. At least one employee at Vercel signed up for AI Office Suite, a Context product that allows consumers to work with AI agents to build presentations and other documents. Context said that Vercel is not one of its enterprise customers, but at least one of its employees used their Vercel corporate email to sign up for the AI Office Suite product. The employee granted "allow all" permissions, which opened wide access to Vercel's Google Workspace environment. Context has been working with those who were impacted and is coordinating with CrowdStrike to validate its containment efforts. 
Context, which said the consumer product was separate from its enterprise product, shut down the AWS environment. Jeff Pollard, vice president and principal analyst at Forrester, said the attack is a reminder about concerns about third-party risk management and permissions related to AI. "This definitely highlights that as AI-related tools spread through an environment, OAuth will remain one of the key elements of the attack surface," Pollard told Cybersecurity Dive. "That isn't about the inherent security flaws of AI applications, it's more about AI tools requiring permissions to be as valuable as possible."
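The distinction between flagged and unflagged variables matters operationally: per Vercel, only variables designated as sensitive remained encrypted, and customers were told to rotate anything left unflagged. Below is a minimal sketch of how a team might sweep a project for secret-looking variables that were never flagged, assuming a simple name-to-flag inventory; the variable names and hint patterns are hypothetical illustrations, not Vercel's actual schema.

```python
# Hypothetical sketch: sweep a project's environment variables for names
# that look secret-bearing but were never flagged as sensitive.

# Substrings that commonly indicate a secret; illustrative, not exhaustive.
SECRET_HINTS = ("key", "secret", "token", "password", "private")

def find_unflagged_secrets(env_vars: dict[str, bool]) -> list[str]:
    """env_vars maps variable name -> whether it is flagged sensitive.

    Returns the names that look like secrets but carry no sensitive flag,
    i.e. the ones to rotate and re-flag first after an incident like this.
    """
    return sorted(
        name for name, flagged in env_vars.items()
        if not flagged and any(h in name.lower() for h in SECRET_HINTS)
    )

if __name__ == "__main__":
    project_env = {
        "NEXT_PUBLIC_API_URL": False,  # genuinely public, no flag needed
        "DATABASE_PASSWORD": True,     # correctly flagged as sensitive
        "STRIPE_API_KEY": False,       # secret left unflagged: exposed
        "WEBHOOK_TOKEN": False,        # secret left unflagged: exposed
    }
    print(find_unflagged_secrets(project_env))
    # -> ['STRIPE_API_KEY', 'WEBHOOK_TOKEN']
```

A name-based heuristic like this will miss oddly named secrets and flag some false positives, so it is a triage aid for deciding what to rotate first, not a substitute for classifying variables correctly when they are created.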
