News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic, Google, and Microsoft paid AI agent bug bounties, then kept quiet about the flaws

In short:Security researcher Aonan Guan hijacked AI agents from Anthropic, Google, and Microsoft via prompt injection attacks on their GitHub Actions integrations, stealing API keys and tokens in each case. All three companies paid bug bounties quietly, $100 from Anthropic, $500 from GitHub, an undisclosed amount from Google, but none published public advisories or assigned CVEs, leaving users on older versions unaware of the risk. Security researchers have demonstrated that AI agents from Anthropic, Google, and Microsoft can be hijacked through prompt injection attacks to steal API keys, GitHub tokens, and other secrets, and all three companies quietly paid bug bounties without publishing public advisories or assigning CVEs. The vulnerabilities, disclosed by researcher Aonan Guan over several months, affect AI tools that integrate with GitHub Actions: Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub's Copilot Agent. Each tool reads GitHub data, including pull request titles, issue bodies, and comments, processes it as task context, and then takes actions. The problem is that none of them reliably distinguish between legitimate content and injected instructions. The core technique is indirect prompt injection. Rather than attacking the AI model directly, the researcher embedded malicious instructions in places the agents were designed to trust: PR titles, issue descriptions, and comments. When the agent ingested that content as part of its workflow, it executed the injected commands as though they were legitimate instructions. Against Anthropic's Claude Code Security Review, which scans pull requests for vulnerabilities, Guan crafted a PR title containing a prompt injection payload. Claude executed the embedded commands and included the output, including leaked credentials, in its JSON response, which was then posted as a PR comment for anyone to read. The attack could exfiltrate the Anthropic API key, GitHub access tokens, and other secrets exposed in the GitHub Actions runner environment. The Gemini attack followed a similar pattern. By injecting a fake "trusted content section" after legitimate content in a GitHub issue, Guan overrode Gemini's safety instructions and tricked the agent into publishing its own API key as an issue comment. Google's Gemini CLI Action, which integrates Gemini into GitHub issue workflows, treated the injected text as authoritative. The Copilot attack was subtler. Guan hid malicious instructions inside an HTML comment in a GitHub issue, making the payload invisible in the rendered Markdown that humans see but fully visible to the AI agent parsing the raw content. When a developer assigned the issue to Copilot Agent, the bot followed the hidden instructions without question. What happened next is as revealing as the vulnerabilities themselves. Anthropic received Guan's submission on its HackerOne bug bounty platform in October 2025. The company asked whether the technique could also steal more sensitive data such as GitHub tokens, confirmed it could, and in November paid a $100 bounty while upgrading the critical severity rating from 9.3 to 9.4. Anthropic updated a "security considerations" section in its documentation but did not publish a public advisory or assign a CVE. GitHub initially dismissed the Copilot finding as a "known issue" that it "could not reproduce," but ultimately paid a $500 bounty in March. Google paid an undisclosed amount for the Gemini vulnerability. None of the three vendors assigned CVEs or published advisories that would alert users pinned to vulnerable versions. For Guan, this is the crux of the problem. Users running older versions of these AI agent integrations may never learn they are exposed. Without a CVE, vulnerability scanners will not flag the issue. Without an advisory, security teams have no artefact to track. The attacks exploit a fundamental weakness in how AI agents process context. Large language models cannot reliably separate data from instructions. When an agent reads a GitHub issue, it treats the text as input to reason about, but a well-crafted prompt injection can make that input function as a command. Every data source that feeds an AI agent's reasoning, whether it is an email, a calendar invite, a Slack message, or a code comment, is a potential attack vector. This is not a theoretical concern. In January 2026, researchers from Miggo Security demonstrated that Google Gemini could be weaponised through calendar invitations containing hidden instructions. Days later, the "Reprompt" attack against Microsoft Copilot showed that injected prompts could hijack entire user sessions. Anthropic's own Git MCP server was found to harbour three CVEs that allowed attackers to inject backdoors through repositories the server processed. A systematic analysis of 78 studies published in January found that every tested coding agent, including Claude Code, GitHub Copilot, and Cursor, was vulnerable to prompt injection, with adaptive attack success rates exceeding 85%. The supply chain dimension makes it worse. A security audit of nearly 4,000 agent skills on the ClawHub marketplace found that more than a third contained at least one security flaw, and 13.4% had critical-level issues. When AI agents pull in third-party tools and data sources with the same level of trust they extend to their own instructions, a single compromised component can cascade across an entire development pipeline. The vendors' reluctance to publish advisories reflects an uncomfortable reality: there is no established framework for disclosing AI agent vulnerabilities. Traditional software bugs get CVEs, patches, and coordinated disclosure timelines. Prompt injection flaws sit in a grey zone. They are not bugs in the code so much as emergent behaviours of the model, and the mitigations, stronger system prompts, input sanitisation, output filtering, are partial at best. But the consequences are indistinguishable from those of a conventional security flaw. An attacker who exfiltrates a GitHub token through a prompt injection can do exactly the same damage as one who exploits a buffer overflow. The argument that AI safety requires new frameworks does not excuse the absence of disclosure for vulnerabilities that are already being exploited in the wild. Zenity Labs research published this month found that most agent-building frameworks, including those from OpenAI, Google, and Microsoft, lack appropriate guardrails, putting the burden of managing risk on the companies deploying them. In one documented case, attackers manipulated an AI procurement agent's memory so it believed it had authority to approve purchases up to $500,000, when the real limit was $10,000. The agent approved $5 million in fraudulent purchase orders before anyone noticed. For organisations that have integrated AI agents into their CI/CD pipelines, the message is stark. These tools are powerful precisely because they have access to sensitive systems and data. That same access makes them high-value targets, and the industry has not yet built the disclosure infrastructure to match the risk.

Anthropic
The Next Web10d ago
Read update
Anthropic, Google, and Microsoft paid AI agent bug bounties, then kept quiet about the flaws

View: Anthropic's Mythos is a wake-up call for AI

Sign up for Semafor Technology: What's next in the new era of tech. Read it now. One of the nice things about being at Semafor World Economy in DC all week is that it takes me outside the Silicon Valley bubble and I get to hear what non-tech CEOs and leaders are talking about. One of the big topics is Anthropic's Mythos AI model, which has clearly struck fear into a segment of the population out here. After a handful of discussions with people who know what they're talking about and some who don't, here's how I think the world should be processing this. Mythos, and now OpenAI's GPT 5.4-cyber, represent a step up in AI capability that could allow hackers to find more exploits in software. Part of what makes these models powerful is the high number of tokens they require to operate. In that sense, they're a glimpse into an AI future, when the cost of tokens inevitably comes down. But the change might not be as dramatic as some people think. Imagine a bunch of bike racks in a public place. Some of the bikes have big, powerful locks. Some have puny cable locks that can be broken with pliers. And a lot of them have no locks at all. AI is like giving bike thieves better lock cutters. But the bigger problem is that there's already an endless supply of unlocked or poorly locked bikes. More bikes will be stolen, but only marginally so. Mythos, hopefully, is a wake-up call for the entire industry that everybody needs to start investing more in cybersecurity. We need better ways to help people use basic security. Password managers and passkeys just aren't user friendly. Companies need to work on new, innovative methods of authentication that preserve privacy and anonymity. Companies need to empower teams of software engineers who fix bugs, one area of notorious underinvestment. All of this should have already happened. Perhaps fear will be the motivator the world needs.

Anthropic
semafor.com10d ago
Read update
View: Anthropic's Mythos is a wake-up call for AI

ECB Warns Banks on Anthropic AI Model Risks to Cybersecurity

ECB supervisors plan to alert banks of heightened cyber risks from Anthropic's Mythos AI model during routine supervisory dialogue -- reflecting growing concern over its ability to uncover and exploit hidden software vulnerabilities at unprecedented speed. ECB to Advise Banks on Anthropic AI Model Amid Rising Cybersecurity Risks ECB Supervisors Address AI-Driven Cybersecurity Threats in Banking Sector Anthropic's Mythos Model Raises Concerns FRANKFURT, April 15 (Reuters) - European Central Bank supervisors are set to warn bankers about the risks posed by Anthropic's new artificial intelligence model that might supercharge cyberattacks, one source familiar with the situation told Reuters. Anthropic's Mythos is seen by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems, raising alarm bells among regulators in Britain and the United States. ECB's Response and Information Gathering ECB supervisors are gathering information about the model, with a view to discussing this new possible source of risk with banks on their watch, said the source who spoke on condition of anonymity because they are not authorised to comment publicly on the matter. Dialogue with Banks and Official Statements Unlike in the U.S., this will be done via the ECB's regular dialogue with bank staff and no ad-hoc meeting with top management has been scheduled yet. An ECB spokesperson declined to comment. (Reporting by Francesco Canepa; Editing by Emelia Sithole-Matarise)

Anthropic
Global Banking & Finance Review10d ago
Read update
ECB Warns Banks on Anthropic AI Model Risks to Cybersecurity

ECB to warn bankers about new Anthropic model risks, source says

FRANKFURT, April 15 (Reuters) - European Central Bank supervisors are set to warn bankers about the risks posed by Anthropic's new artificial intelligence model that might supercharge cyberattacks, one source familiar with the situation told Reuters. Anthropic's Mythos is seen ⁠by cybersecurity experts as ⁠posing significant challenges to the banking industry and its legacy technology systems, raising alarm bells among regulators in Britain and the United States. ECB ⁠supervisors are gathering ⁠information about the model, with a view to discussing this new possible source of risk with banks on their watch, said the source who spoke on condition of anonymity ⁠because they are not authorised to comment publicly on the matter. Unlike in ⁠the U.S., this will be ⁠done via the ECB's regular dialogue with bank staff and ⁠no ad-hoc meeting with top management has been scheduled yet. An ECB spokesperson declined to comment. (Reporting by Francesco Canepa; Editing by Emelia Sithole-Matarise)

Anthropic
1470 & 100.3 WMBD10d ago
Read update
ECB to warn bankers about new Anthropic model risks, source says

How AI is helping businesses save time and money amid tariff chaos

Generative AI can also save weeks of time on complex scenario planning for sourcing materials. US businesses have been on a tariff roller coaster over the last year. Sweeping tariffs were implemented at varying levels across different countries. Though some were eventually overturned by the Supreme Court, there is now an added layer of bureaucracy as companies seek potential refunds. Some are turning to AI for help. Companies like EQI and customs advisory firms like KPMG are using generative AI "to process all that chaos," said Brendan Connallon, the VP of finance at EQI, a company that supplies metal components and provides supply chain advising services to manufacturers. The technology can rapidly scrape and synthesize vast quantities of data, track tariff changes, model potential supply chain scenarios, and accurately classify goods by their government-assigned tariff codes -- a highly nuanced system with more than 17,000 codes. Emil Stefanutti, the CEO of Gaia Dynamics, a software company that provides AI tools to help companies automate trade compliance, said AI is proving particularly useful in this rapidly changing environment, as it can reduce compliance errors and save businesses time. With the Supreme Court ruling specifically, Stefanutti said importers can use AI to analyze data on where and when they paid tariffs, quantify potential overpayments, and flag areas that need correction. AI "can continuously track and adapt to new rules in a way humans simply can't at scale," Stefanutti said. The consulting firm KPMG has been advising its clients on trade compliance for decades, but last year in particular, "the tariffs were changing fast and furious," said Andrew Siciliano, the leader of the Global and US Trade and Customs practices at KPMG. Company leaders needed real-time data quickly to make decisions, so KPMG launched an AI-powered tariff modeler. The firm's clients include many large businesses that import goods ranging from auto parts to retail goods and pharmaceuticals, and that use several ports of entry and customs brokers. KPMG takes its clients' decentralized customs entries and product information from suppliers and freight forwarders -- the intermediaries between importers and their transportation providers -- and plugs the data into the tariff modeler, Siciliano said. This approach has helped KPMG's clients navigate the process of applying for refunds for tariff overpayments resulting from the policy change that took effect after the Supreme Court overturned some tariffs. Many trade rules have nuanced exceptions, leading some businesses to pay multiple tariffs when they should have paid only one. Siciliano said his firm uses AI to interact with a client's data and better understand which products came from which factories, narrowing down which qualify for refunds. Though the refund system is in the works, there could still be confusion and uncertainty, said Connallon. He told Business Insider that he anticipates the process will be "an administrative nightmare." Before AI, manually sifting through thousands of custom entry data points to spot overpayments could take weeks or months -- or not happen at all because of the complexity, Siciliano said. Now, an importer can prompt AI, which delivers the information right away. AI also saves weeks of time in scenario planning. An importer might wonder how costs could change if it moved sourcing from China to Vietnam, for example. Instead of taking weeks to update multiple spreadsheets, AI models scenarios with the click of a few buttons, Siciliano said. Connallon said EQI uses AI in a similar way to model potential sourcing scenarios. The company uses the AI platform Altana, which focuses on supply chain management and trade compliance. In a potential sourcing move from country A to B, EQI uses AI to model total costs, accounting for tariffs, manufacturing costs, and ocean freight rates. For manufacturing, which sources thousands of different products from myriad locations, "the complexity becomes extremely dense very fast," Connallon said. "So, AI helps us simplify it." EQI sends the simplified data to its trade attorneys, who can interpret it within hours, said Connallon. "We've turned something that would take weeks into a same-day thing," he said. He added that "AI is not good at critical thinking," and that humans are essential for sourcing decisions. For example, the AI model might say that sourcing all materials from one country results in the greatest cost savings, but business leaders have to consider the bigger picture, said Connallon. Supply chain executives have learned, especially in recent years, that sourcing solely from one country carries risks, such as product shortages or delays if a geopolitical or economic issue halts trade flows.

CHAOS
DNyuz10d ago
Read update
How AI is helping businesses save time and money amid tariff chaos

SpaceX launches more satellites with its Falcon 9 mission

SpaceX had begun and ended the day with satellite launches across the country, having launched two Falcon 9 rockets soaring, first from Florida before sunrise on Tuesday, and then from California after sunset the same day local time. The company said both launches were successful. The first launch involved 29 of the broadband internet relay units, Starlink group 10-24, at 5:23 a.m. EDT from Space Launch Complex 40 at Cape Canaveral Space Force Station in Florida. About 19 hours later, at 9:29 p.m. PDT, 25 more Starlink satellites, group 17-27, lifted off from Space Launch Complex 4 East at Vandenberg Space Force Base in Southern California. READ: SpaceX to pursue 'one of the largest' IPOs in 2026 (December 10, 2025) The Falcon 9 upper stage deployed its cargo around an hour after each launch, sending its satellites on track to join the SpaceX low Earth orbit megaconstellation. Both missions' Falcon 9 rocket first stages made it back to Earth to be reissued. Booster B1080 completed its 26th flight by landing on the droneship "Just Read the Instructions" based in the Atlantic Ocean. The other booster, Booster 1082 landed on "Of Course I Still Love You" stationed in the Pacific Ocean, raising its reuse tally to 21 flights. With these launches, SpaceX's Starlink network totaled more than 10,200 satellites, according to tracker Jonathan McDowell. The Vandeberg launch was SpaceX's 46th of the year out of 629 Falcon 9 missions 2010. READ: SpaceX IPO: Elon Musk's rocket giant files confidentially (April 1, 2026) The launch occurred weeks after SpaceX took a major step towards going public. According to reports, the company confidentially filed for a U.S. initial public offering. The move puts SpaceX in a position to potentially outpace rivals like OpenAI and Anthropic in the race to tap public markets. Sources suggest that SpaceX may target a valuation of more than $1.75 trillion, making it one of the largest stock market listings ever. The filing follows SpaceX's merger with Musk's AI venture, xAI, in a deal that valued SpaceX at $1 trillion and xAI at $250 billion. Earlier this year, a SpaceX rocket was launched from Florida on Friday with a crew of four. The crew headed to the International Space Station for an eight-month science mission in microgravity. This mission, which was dubbed Crew-12 marks the twelfth long-duration ISS team flown by NASA on a SpaceX rocket, ever since the private rocket firm started launching U.S. astronauts.

xAIAnthropicSpaceX
The American Bazaar10d ago
Read update
SpaceX launches more satellites with its Falcon 9 mission

Kalshi and Polymarket Under Pressure as Lawmakers Raise Insider Trading Concerns

Kalshi and Polymarket are intensifying their efforts in Washington as U.S. lawmakers weigh new regulations to curb what some call "wild west" behaviour in prediction markets. This legislative push follows a series of high-profile, suspiciously timed bets on geopolitical events, including military actions in Iran and Venezuela, which have raised alarms over possible insider trading. The two platforms are taking different approaches to appeal to regulators. Kalshi, which is regulated in the U.S., is promoting itself as the rule-following option, running ads that highlight its transparency and its decision not to allow "death markets." On the other hand, Polymarket uses decentralized blockchain technology and is facing more questions about its international operations, which let people trade anonymously. There are at least eight bills moving through Congress right now, all aiming to make prediction markets more trustworthy. One key proposal, the Event Contract Enforcement Act from Representative Blake Moore, would give the Commodity Futures Trading Commission (CFTC) more power to ban contracts related to war, assassination, or terrorism. Lawmakers are especially worried about bets that seem unusually well-timed. For example, anonymous traders on Polymarket reportedly made over $1.2 million by correctly guessing when U.S. military strikes in Iran would happen, just hours before the news broke. Similar trends appeared during the fall of former Venezuelan President Nicolás Maduro, when some accounts turned small bets into six-figure wins. To navigate this hostile environment, both firms are tapping into their political networks. Kalshi has notably hired veterans from prior Democratic administrations to bridge the gap with skeptical lawmakers. Meanwhile, Polymarket is working to formalize its market integrity standards to align with emerging US Lawmakers Target Insider Trading in Prediction Markets trends. Concerns about insider trading have been raised following a significant financial event on the Polymarket prediction platform. Specifically, four new digital wallets collectively profited over $600,000 by betting on a diplomatic ceasefire. The suspicious timing and unusually low odds of these trades, executed just before the official announcement, strongly indicate the use of non-public information. This incident underscores the ongoing regulatory challenges for decentralized betting markets. Despite the flurry of legislative activity, the odds of these bills passing remain uncertain. Deep political divisions and the complex ties between the tech industry and Washington figures continue to complicate the path toward a unified federal framework for decentralized prediction markets.

Polymarket
DeFi Planet10d ago
Read update
Kalshi and Polymarket Under Pressure as Lawmakers Raise Insider Trading Concerns

How US businesses are using AI to manage tariff chaos

US businesses have been on a tariff roller coaster over the last year. Sweeping tariffs were implemented at varying levels across different countries. Though some were eventually overturned by the Supreme Court, there is now an added layer of bureaucracy as companies seek potential refunds. Some are turning to AI for help. Companies like EQI and customs advisory firms like KPMG are using generative AI "to process all that chaos," said Brendan Connallon, the VP of finance at EQI, a company that supplies metal components and provides supply chain advising services to manufacturers. The technology can rapidly scrape and synthesize vast quantities of data, track tariff changes, model potential supply chain scenarios, and accurately classify goods by their government-assigned tariff codes -- a highly nuanced system with more than 17,000 codes. Emil Stefanutti, the CEO of Gaia Dynamics, a software company that provides AI tools to help companies automate trade compliance, said AI is proving particularly useful in this rapidly changing environment, as it can reduce compliance errors and save businesses time. With the Supreme Court ruling specifically, Stefanutti said importers can use AI to analyze data on where and when they paid tariffs, quantify potential overpayments, and flag areas that need correction. AI "can continuously track and adapt to new rules in a way humans simply can't at scale," Stefanutti said. The consulting firm KPMG has been advising its clients on trade compliance for decades, but last year in particular, "the tariffs were changing fast and furious," said Andrew Siciliano, the leader of the Global and US Trade and Customs practices at KPMG. Company leaders needed real-time data quickly to make decisions, so KPMG launched an AI-powered tariff modeler. The firm's clients include many large businesses that import goods ranging from auto parts to retail goods and pharmaceuticals, and that use several ports of entry and customs brokers. KPMG takes its clients' decentralized customs entries and product information from suppliers and freight forwarders -- the intermediaries between importers and their transportation providers -- and plugs the data into the tariff modeler, Siciliano said. This approach has helped KPMG's clients navigate the process of applying for refunds for tariff overpayments resulting from the policy change that took effect after the Supreme Court overturned some tariffs. Many trade rules have nuanced exceptions, leading some businesses to pay multiple tariffs when they should have paid only one. Siciliano said his firm uses AI to interact with a client's data and better understand which products came from which factories, narrowing down which qualify for refunds. Though the refund system is in the works, there could still be confusion and uncertainty, said Connallon. He told Business Insider that he anticipates the process will be "an administrative nightmare." Before AI, manually sifting through thousands of custom entry data points to spot overpayments could take weeks or months -- or not happen at all because of the complexity, Siciliano said. Now, an importer can prompt AI, which delivers the information right away. AI also saves weeks of time in scenario planning. An importer might wonder how costs could change if it moved sourcing from China to Vietnam, for example. Instead of taking weeks to update multiple spreadsheets, AI models scenarios with the click of a few buttons, Siciliano said. Connallon said EQI uses AI in a similar way to model potential sourcing scenarios. The company uses the AI platform Altana, which focuses on supply chain management and trade compliance. In a potential sourcing move from country A to B, EQI uses AI to model total costs, accounting for tariffs, manufacturing costs, and ocean freight rates. For manufacturing, which sources thousands of different products from myriad locations, "the complexity becomes extremely dense very fast," Connallon said. "So, AI helps us simplify it." EQI sends the simplified data to its trade attorneys, who can interpret it within hours, said Connallon. "We've turned something that would take weeks into a same-day thing," he said. He added that "AI is not good at critical thinking," and that humans are essential for sourcing decisions. For example, the AI model might say that sourcing all materials from one country results in the greatest cost savings, but business leaders have to consider the bigger picture, said Connallon. Supply chain executives have learned, especially in recent years, that sourcing solely from one country carries risks, such as product shortages or delays if a geopolitical or economic issue halts trade flows.

CHAOS
Business Insider10d ago
Read update
How US businesses are using AI to manage tariff chaos

ECB to warn bankers about new Anthropic model risks, source says

FRANKFURT, April ⁠15 (Reuters) - European Central Bank supervisors are ⁠set to warn bankers about the risks posed by Anthropic's new artificial intelligence model that might supercharge cyberattacks, one source familiar with the situation told ⁠Reuters. Anthropic's Mythos is ⁠seen by cybersecurity experts as posing significant ⁠challenges to the banking industry and its legacy technology systems, ⁠raising alarm bells among regulators in Britain and the United States. ECB supervisors are gathering ⁠information about the model, with a view to discussing this new possible source of ⁠risk with banks on their watch, said the source who spoke on condition of anonymity because they ⁠are not authorised to comment publicly on the matter. Unlike in the U.S., this will be done via the ECB's regular dialogue with ⁠bank staff and no ad-hoc meeting with top management has been scheduled yet. An ECB spokesperson declined to comment. (Reporting by Francesco Canepa; Editing by Emelia Sithole-Matarise)

Anthropic
The Star 10d ago
Read update
ECB to warn bankers about new Anthropic model risks, source says

'We are currently being extorted' -- crypto giant Kraken says it is facing extortion attack, here's what we know

* Kraken faces extortion after insiders leaked support system videos * Around 2,000 client accounts potentially viewed, no breach of funds * Company refuses to pay, investigation underway to identify culprits One of the biggest cryptocurrency exchanges - Kraken - is facing an extortion demand, after malicious insiders recorded a video of its client support systems. Kraken's Chief Security Officer, Nick Percoco, shared an announcement on X, describing the incident and saying what the company's plans are. "We are currently being extorted by a criminal group threatening to release videos of our internal systems with client data shown if we do not comply with their demands," he begins. "It's important to start with the most important points: our systems were never breached; funds were never at risk; we will not pay these criminals; we will not ever negotiate with bad actors." Identifying the attackers As Percoco explained, in February 2025, the company was made aware of a video circulating on the dark web, showing access to Kraken's client support system. The video was traced back to a malicious insider - a member of the company's support team. Kraken revoked their access and placed additional security controls. However, soon after the first one - a new video emerged showing similar activity. Again, a malicious insider was blamed. "Across both incidents, only a very small number of client accounts were potentially viewed - approximately 2,000 in total (0.02% of clients)." Percoco stressed. Soon after the crooks lost access, Kraken received extortion demands. The attackers threatened to distribute the materials from these two incidents to both traditional media, and social media, unless a payment is made. Percoco did not say if the actors belong to a known criminal group, and did not say how much money they were asking for. Kraken isn't paying anyhow. A criminal investigation is underway, the company added, saying that there is enough evidence to identify and arrest the responsible individuals. These days, malicious insiders are as big of a risk as external threats. Cryptocurrency exchanges are often targeted, and even the biggest names aren't immune. Coinbase, Kraken, Binance - these are just some of the names that have had to tackle a malicious insider. North Korean state-sponsored threat actors Lazarus are known to target crypto exchanges through fake employees, in a campaign called Operation DreamJob. However, Lazarus rarely goes for extortion, and would rather just steal the coins themselves. Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button! And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.

Kraken
TechRadar10d ago
Read update
'We are currently being extorted' -- crypto giant Kraken says it is facing extortion attack, here's what we know

Phemex Partners with Polymarket to Launch Prediction Market and Pre-Release Engagement Event

APIA, Samoa, April 15, 2026 -- Phemex, a user-first crypto exchange, has announced a strategic integration with Polymarket, described as the world's largest prediction market. This partnership supports the upcoming launch of the Phemex Prediction Market, a new product vertical that enables users to trade on the outcomes of real-world events across sectors including finance, technology, and global culture. Pre-Launch Mystery Box Event: To support the rollout, Phemex has introduced a Mystery Box Pre-Launch Event running from April 14 to April 20, 2026. The event is designed to prepare users for the official launch of the Prediction Market on April 21, 2026. Participants can obtain Mystery Boxes containing digital assets, including: Bitcoin (BTC) Tether Gold (XAUT) Prediction Market Infrastructure and Trading Environment: The Phemex Prediction Market utilizes the platform's 500ms execution engine and existing liquidity infrastructure to support sentiment-based trading. The Mystery Box event also incorporates a multi-tiered referral system aimed at expanding user participation. Rewards earned during the event, including BTC and XAUT, will be available for manual claiming on the official launch date. CEO Insight on the Launch: Federico Variola, CEO of Phemex, stated: "The integration of the Prediction Market, empowered by our partnership with Polymarket, is a pivotal step toward our goal of becoming the industry's most comprehensive financial execution hub. By allowing our users to trade on the outcome of global events using institutional-grade infrastructure, we are not just expanding our product suite, we are redefining how traders engage with and profit from the future. This pre-launch event is our way of rewarding the visionaries who are ready to embrace this new era of sentiment-based trading." Product Expansion Strategy: With this launch, Phemex continues to expand its platform capabilities toward a broader trading ecosystem that combines: Traditional market exposure Crypto-native instruments Event-driven trading frameworks About Phemex: Founded in 2019, Phemex is a user-first crypto exchange trusted by over 10 million traders worldwide. The platform offers spot and derivatives trading, copy trading, and wealth management products designed to prioritize user experience, transparency, and innovation. With a forward-thinking approach and a commitment to user empowerment, Phemex delivers reliable tools, inclusive access, and evolving opportunities for traders at every level to grow and succeed. Media Contact: Email: [email protected] Website: https://phemex.com/ Related Items:busdiness, Phemex Partners with Polymarket to Launch Prediction

Polymarket
TechBullion10d ago
Read update
Phemex Partners with Polymarket to Launch Prediction Market and Pre-Release Engagement Event

Claude AI-Anthropic down: Outage reported by users, reports Downdetector

Claude AI-Anthropic outage: Developed by Anthropic, Claude AI is an artificial intelligece assistant platform. Claude AI by Anthropic is down. User reported indicate problems with Claude AI since 10:55 AM (EDT). Claude AI by Anthropic is an AI assistant platform offering models including Opus, Sonnet and Haiku, with access through the Claude API for developers and enterprises requiring advanced language processing. Meanwhile, Adobe said on Wednesday it was releasing a new artificial intelligence assistant designed to help users carry out tasks across its suite of software for editing photos, videos and other digital content. The new capabilities will also be available to users of Anthropic's Claude AI model through a connector to Adobe, though Adobe did not disclose the financial arrangements between the firms. (You can now subscribe to our Economic Times WhatsApp channel)

Anthropic
Economic Times10d ago
Read update
Claude AI-Anthropic down: Outage reported by users, reports Downdetector

ECB to warn bankers about new Anthropic model risks, source says

FRANKFURT, April 15 (Reuters) - European Central Bank supervisors are set to warn bankers about the risks posed by Anthropic's new artificial intelligence model that might supercharge cyberattacks, one source familiar with the situation told Reuters. Anthropic's Mythos is seen by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems, raising alarm bells among regulators in Britain and the United States. ECB supervisors are gathering information about the model, with a view to discussing this new possible source of risk with banks on their watch, said the source who spoke on condition of anonymity because they are not authorised to comment publicly on the matter. Unlike in the U.S., this will be done via the ECB's regular dialogue with bank staff and no ad-hoc meeting with top management has been scheduled yet. (Reporting by Francesco Canepa; Editing by Emelia Sithole-Matarise)

Anthropic
Market Screener10d ago
Read update
ECB to warn bankers about new Anthropic model risks, source says

Anthropic prepares Opus 4.7 and AI design tool, VCs offer up to 800 billion dollars

As compute costs surge, Anthropic is replacing its enterprise flat rate with usage-based billing, which could triple costs for heavy users. Anthropic is preparing to release a new model and a design tool that would compete with Adobe and Figma. Meanwhile, venture capitalists are lining up to invest at sky-high valuations. Anthropic is working on its next flagship model, Claude Opus 4.7, along with a new AI-powered tool for designing websites and presentations, according to The Information. Both products could launch as early as this week. The design tool would let both technical and non-technical users create presentations, websites, and landing pages using natural language. That puts it in direct competition with Adobe, Figma, and Wix, as well as startups like Gamma and Google Stitch. According to The Information, shares of Adobe, Wix, and Figma each dropped more than two percent after the plans became public. Notably, Opus 4.7 isn't Anthropic's most powerful model. That distinction belongs to Claude Mythos, which is currently being tested by select partners for security vulnerability research. On the funding side, Anthropic has received multiple investment offers from venture capitalists in recent weeks, valuing the company at up to $800 billion, Business Insider reports. That's more than double the $380 billion valuation Anthropic reached in its funding round that closed in February. On the secondary market platform Caplight, Anthropic is already trading at $688 billion, a 75 percent jump in just three months. For context, OpenAI most recently hit a valuation of $852 billion. The hype is driven by Anthropic's explosive growth. The company says its annualized revenue has climbed to $30 billion, up from $9 billion at the end of 2025. More than 1,000 enterprise customers now spend over $1 million per year, a number that doubled in less than two months. With new customers flooding in, Anthropic has also overhauled its pricing model for Claude Enterprise. Instead of a flat fee of up to $200 per user per month, business customers now pay a $20 base fee plus usage-based charges for compute. For heavy users, that could mean costs doubling or tripling. The shift comes as agents like Claude Code and Claude Cowork drive up inference costs while available compute resources grow increasingly scarce.

Anthropic
THE DECODER10d ago
Read update
Anthropic prepares Opus 4.7 and AI design tool, VCs offer up to 800 billion dollars

ECB to warn bankers about new Anthropic model risks, source says

FRANKFURT, April 15 (Reuters) - European Central Bank supervisors are set to warn bankers about the risks posed by Anthropic's new artificial intelligence model that might supercharge cyberattacks, one source familiar with the situation told Reuters. Anthropic's Mythos is seen by cybersecurity experts as ⁠posing significant challenges to the banking industry and its legacy technology systems, raising alarm bells among regulators in Britain and the United States. ECB supervisors are gathering information about the model, with a view to discussing this new possible source of risk ⁠with banks on their watch, said the source who spoke on condition of anonymity because they are not authorised ⁠to comment publicly on the matter. Unlike in the U.S., this will be done via the ⁠ECB's regular dialogue with bank staff and no ad-hoc meeting with ⁠top management has been scheduled yet. An ECB spokesperson declined to comment. Reporting by Francesco Canepa; Editing by Emelia Sithole-Matarise Our Standards: The Thomson Reuters Trust Principles., opens new tab

Anthropic
Reuters10d ago
Read update
ECB to warn bankers about new Anthropic model risks, source says

ECB to warn bankers about new Anthropic model risks, source says

FRANKFURT, April 15 : European Central Bank supervisors are set to warn bankers about the risks posed by Anthropic's new artificial intelligence model that might supercharge cyberattacks, one source familiar with the situation told Reuters. Anthropic's Mythos is seen by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems, raising alarm bells among regulators in Britain and the United States. ECB supervisors are gathering information about the model, with a view to discussing this new possible source of risk with banks on their watch, said the source who spoke on condition of anonymity because they are not authorised to comment publicly on the matter. Unlike in the U.S., this will be done via the ECB's regular dialogue with bank staff and no ad-hoc meeting with top management has been scheduled yet.

Anthropic
CNA10d ago
Read update
ECB to warn bankers about new Anthropic model risks, source says

Kraken IPO still on table despite valuation cut and market pause, co-CEO says

Crypto exchange Kraken is still pursuing a potential US initial public offering, even as its valuation has fallen sharply and listing plans have been put on hold amid volatile market conditions, its co-chief executive said. The company confidentially filed an S-1 registration statement with the US Securities and Exchange Commission in November but delayed its IPO plans amid difficult market conditions. Co-CEO Arjun Sethi said Tuesday the public offering "is still on the table," underscoring that Kraken continues to evaluate long-term access to public markets despite near-term uncertainty. The exchange is currently valued at about $13.3 billion, down from roughly $20 billion in November, reflecting shifting sentiment across crypto-linked assets and broader capital markets. Despite the IPO delay, Kraken has continued to attract significant institutional backing. Between 2011 and 2024, the company raised about $27 million in early-stage funding, but fundraising has accelerated sharply in recent months. Since mid-2025, Kraken has raised roughly $600 million in a round led by proprietary trading firms Jane Street and DRW followed by a $200 million investment from hedge fund manager Citadel. It also completed a $200 million secondary transaction involving German exchange operator Deutsche Börse, which reportedly acquired a 1.5% stake in the company.

Kraken
Proactiveinvestors NA10d ago
Read update
Kraken IPO still on table despite valuation cut and market pause, co-CEO says

ECB Sounds Alarm Over Anthropic's AI Risk to Banks | Technology

The European Central Bank is preparing to warn banks about the potential cyber risks from Anthropic's AI model, Mythos. Seen as a major challenge to banking systems, it's raising concerns among UK and US regulators. ECB will discuss these risks through regular staff dialogues without special meetings. The European Central Bank (ECB) is on the cusp of issuing warnings to banks regarding the potential cybersecurity threats posed by Anthropic's latest artificial intelligence model, Mythos. A source familiar with developments has revealed the model presents significant challenges to traditional bank technology frameworks. Concern is intensifying among regulators in both Britain and the United States, who see the model as a potential catalyst for increasingly sophisticated cyberattacks. The ECB is currently collecting information on Mythos to deliberate potential risks with the banks it oversees. This exploration will be part of their regular communication with banking staff. While an ECB spokesperson has declined to officially comment, it is clear that the focus remains on maintaining vigilance without scheduling any extraordinary meetings with senior management at this stage.

Anthropic
Devdiscourse10d ago
Read update
ECB Sounds Alarm Over Anthropic's AI Risk to Banks | Technology

OpenAI Accuses Anthropic of Inflating Revenue by $8B

IPO Stakes: Both companies are preparing to go public in 2026, making the revenue accounting dispute central to their competing valuation narratives. OpenAI's chief revenue officer has accused rival Anthropic of inflating its revenue in a four-page internal memo that lays out a battle plan for winning the enterprise AI market. Denise Dresser, who assumed Brad Lightcap's duties after the former COO moved to special projects, sent the memo to all employees on April 13, singling out Anthropic as OpenAI's primary competitor and questioning the accounting behind its rival's headline financial figures. Dresser was appointed CRO in December 2025 following a career leading Slack, and her expanded role now encompasses business-side responsibilities alongside CSO Jason Kwon and CFO Sarah Friar while COO Fidji Simo is on leave. Both companies are preparing to go public in 2026. Anthropic's disclosed run rate of $30 billion now exceeds OpenAI's roughly $24 billion in annualized revenue. As a result, the revenue gap makes the accounting dispute a high-stakes contest over which company can claim AI market leadership heading into public markets, with OpenAI allocating IPO shares to retail investors as it prepares for its debut. At the center of the memo is a direct challenge to Anthropic's financial disclosures. Dresser claims Anthropic inflates its revenue by grossing up revenue share payments from Amazon and Google, rather than reporting them net of those partnerships, overstating the run rate by roughly $8 billion. OpenAI, by contrast, reports its Microsoft revenue share net, which Dresser argues is more consistent with the accounting standards both companies would face as publicly traded entities. "Their stated run rate is inflated. They use accounting treatment that makes revenue look bigger than it is, including grossing up rev share with Amazon and Google. [OpenAI's] analysis shows that this overstates their run rate by roughly $8 billion (at the current $30 stated). We report Microsoft revshare net, which is more inline with standards we would be held to as a public company." However, no independent verification of either company's accounting claims exists, and Anthropic has not publicly responded to the memo's allegations. According to prior Anthropic disclosures, the company achieved approximately $9 billion in revenue in 2025 and had projected $20 to $26 billion by the end of 2026, a target it has already surpassed. Whether Dresser's figure represents genuine accounting inflation or a competitive framing exercise remains an open question ahead of both IPOs. Based on the memo's own figures, subtracting $8 billion from Anthropic's stated run rate would place net revenue closer to $22 billion, roughly in line with OpenAI rather than ahead of it. Without audited financials from either company, the dispute amounts to dueling internal estimates ahead of what could be two of the largest technology IPOs in history. Moreover, revenue figures are central to the valuation narratives both companies are building for investors, and gross versus net reporting is a well-known area of accounting judgment in the cloud industry. Beyond the revenue attack, Dresser's memo outlines five customer-backed priorities for Q2: winning the model layer for work, winning the agent platform layer, expanding through Amazon, selling the full AI-native stack, and owning deployment. An enterprise agent platform strategy anchored by internal products codenamed "Frontier" and "DeployCo" forms the core of the plan. Building on this vision, employees are urged to adopt a platform mindset with integrated enterprise offerings rather than treating products as separate lines. OpenAI's stack now includes ChatGPT for Work as the knowledge work entry point, Codex for software development, the API for embedded intelligence, Frontier as the agent platform, and Amazon Stateful Runtime for production-grade execution. Built on a landmark AWS cloud deal announced in late February, the Amazon runtime enables memory and continuity across interactions, moving beyond stateless model access toward systems that can operate reliably across complex business processes. Pairing the runtime with DeployCo, which the memo describes as helping companies prove value faster, reduce risk, and scale adoption, gives OpenAI a deployment story that goes beyond API access. Furthermore, a "Frontier Alliance" partner model would scale deployment execution across the market. Internally, the memo touts a model codenamed "Spud" as OpenAI's smartest yet, emphasizing stronger reasoning, better understanding of intent, and more reliable output for enterprise customers. Multi-year, nine-figure enterprise deals are rising, according to the memo, with existing customers expanding and standardizing on OpenAI capabilities across their organizations. Enterprise already accounts for about 40% of OpenAI's revenue, and the memo confirms that enterprise will equal consumer revenue by the end of 2026. In practice, for enterprise customers already using ChatGPT for Work, adding Codex, Frontier, and the stateful runtime creates switching costs that grow with each additional product adopted. OpenAI can retain customers even if Anthropic's Claude models outperform on individual benchmarks, shifting the competitive axis from model quality to ecosystem depth. Dresser's sharpest language targets Anthropic's technical and philosophical positioning. "Their strategic misstep to not acquire enough compute is showing up in the product," the memo states, citing throttling, weaker availability, and less reliable customer experience as symptoms of a deeper infrastructure deficit. She described these as consequences of a failure to secure adequate compute capacity when it was available, arguing that the shortfall now constrains product quality for enterprise customers. In addition, she attacked Anthropic's safety-first branding, writing that Anthropic's narrative is "built on fear, restriction" and elite control of AI. She acknowledged Anthropic's strength in coding as an initial competitive advantage but characterized the company as vulnerable in a broader platform competition, arguing that a single-product focus cannot sustain an enterprise war. Yet Anthropic's disclosed infrastructure plans complicate that compute critique. Anthropic signed a long-term deal with Google and Broadcom for 3.5 gigawatts of TPU compute capacity starting in 2027, according to a Broadcom SEC filing cited by The Register. Separately, Anthropic now counts over 1,000 enterprise customers each spending more than $1 million annually, doubling from 500 in February 2026. Meanwhile, Claude remains the only frontier AI model available on all three major cloud platforms, AWS Bedrock, Google Vertex AI, and Microsoft Azure Foundry, giving Anthropic a distribution advantage that Dresser's memo does not address. Rather than suffering from a compute shortage, Anthropic appears to be scaling infrastructure and enterprise relationships simultaneously. Dresser's memo extends a pattern of OpenAI leadership publicly disparaging its rival. CEO Sam Altman wrote in February that Anthropic "serves an expensive product to rich people." Escalation from consumer-focused mockery to a formal enterprise battle plan suggests OpenAI views Anthropic's rapid growth, from $1 billion in revenue in 2024 to its current run rate, as a serious organizational threat. Consequently, OpenAI's own consumer market share has eroded sharply over the past two years as Anthropic has gained ground among enterprise buyers, adding urgency to the enterprise pivot that the memo outlines. Inbound demand from the Amazon partnership has been "frankly staggering," according to the memo. With both companies heading toward IPOs, Dresser identified OpenAI's constraint as internal capacity rather than demand. She argued that "multi-product adoption makes us harder to replace," framing platform lock-in as the competitive moat that separates OpenAI from a rival she cast as a single-product company in a platform war. Whether that framing holds may depend less on the accounting dispute and more on whether enterprise customers value integrated platforms over top-performing point solutions, a question that will be tested as both companies open their books to public investors.

Anthropic
WinBuzzer10d ago
Read update
OpenAI Accuses Anthropic of Inflating Revenue by $8B

This Bot Always Bets 'No' on Polymarket, And It Has a Point - Decrypt

Polymarket's top earners are sports bettors with domain expertise, not systematic "No" bettors. Someone built a prediction market bot with a philosophy borrowed from the most cynical person at every dinner table -- the one who always argues "nothing ever happens." The bot, created by engineer Sterling Crispin, automatically buys "No" on every non-sports market on Polymarket it finds and holds to resolution. The entire trading strategy fits in one sentence. Turns out, that might actually be enough to make your bags grow -- kinda. The premise rests on a number Polymarket publishes openly: 73.3% of all resolved markets on the platform end in "No." Crispin cited a nearly identical figure -- 73.4% -- when he announced the bot on X earlier this week, a post that collected 3.1 million views. Polymarket's own accuracy page confirms the data, explaining that "there are usually more ways for something not to happen than to happen in one exact way." The observation cuts to something real about how prediction markets are built. Most questions on Polymarket are framed around specific events materializing by a deadline: Will a particular official resign, will a specific bill pass, will prices breach a round number. The status quo has a structural advantage -- it only needs to hold, while the Yes outcome needs one precise chain of events to complete on schedule. When the deadline passes and nothing materializes, the bets on "No" collect. The effect is amplified in longer-running markets. Our own quick analysis of over 2,300 closed Polymarket positions shows that markets open for 90 to 180 days resolve "No" at a rate of 73.5%, nearly matching the platform's overall published figure. Short-duration markets -- under a week -- flip closer to coin-toss territory, resolving "No" just 52% of the time (still a tiny advantage, but advantage nonetheless). The longer a market stays open, the more time the world has to simply do nothing. The bot is not, as some dismissive replies on Crispin's post assumed, just spraying capital at random. The source code reveals specific filters: It only targets non-sports markets, which have lower "No" rates, and only buys "No" when the best ask sits below $0.65. (All bets on prediction markets like Polymarket, Kalshi, or Myriad -- operated by Decrypt's parent company Dastan -- resolve to $1, with users buying shares in a "yes" or "no" position at less than $1 per share and with the price implying current odds.) That price cap does real work. Buying "No" at $0.40 only requires a 42% win rate to break even after gas fees on Polygon; buying at $0.60 pushes the break-even threshold to 59%. By capping entries below 65 cents (those with implied odds above 65%), the bot screens out markets where the crowd has already priced in the likely "No" outcome, and where the edge has evaporated. The default 2% position sizing is a conservative default, not a limit -- the bot allows any sizing up to 100%. At 10-20% sizing, annual returns reach 16-33%, which is competitive with active trading strategies. Prediction markets have grown into a $63.5 billion-a-year industry, according to a 2026 CertiK report, and Polymarket was valued at $9 billion after NYSE parent Intercontinental Exchange invested $2 billion in October 2025. Markets of that scale don't leave exploitable inefficiencies sitting in plain sight for long. The platform's top earners don't bet "No" on everything. Polymarket's all-time leaderboard shows the top 10 traders have collectively made over $113 million in profit -- and nearly all of them are bettors with genuine domain expertise. Theo4, the leading wallet has made its own fortune on Yes/No political markets by placing odds in Yes or No depending on the event. The statistical edge Crispin identified is real. But turning it into a fortune requires more than a Python script set to "buy No." The bot's edge (if any) comes from: Filtering for markets where "No" is priced below its true probability (the 65 cent cap), avoiding markets where "Yes" is underpriced (sports, finance) and selecting longer-duration markets. The bot's GitHub repository is public, licensed CC0, and includes a live dashboard, a Heroku deployment guide, and a script called wallet_history.py for tracking realized performance. As of publication, Crispin has not shared his wallet address or actual profit figures from live trading.

Polymarket
Decrypt10d ago
Read update
This Bot Always Bets 'No' on Polymarket, And It Has a Point - Decrypt
Showing 4361 - 4380 of 11425 articles