News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic will put AI risks 'on the table' with Mythos model - Kuwait Times

PARIS: American AI developer Anthropic plans to "lay the risks out on the table" even as it restricts deployment of a new model dubbed Mythos, whose powerful cybersecurity capabilities raise stark questions for companies and governments.

"We have a model that's beginning to outstrip human capabilities in the cyber world," Anthropic's Paris-based chief of relations with startups and tech firms, Guillaume Princen, told AFP in an interview. Mythos is "capable of spotting security holes that have existed for decades, in systems tested by both human experts and automated tools, that have never been discovered before," he added.

Anthropic has delayed a general release of Mythos, sharing it first with a few dozen key American tech and financial services players - such as Nvidia, Amazon, Apple and JP Morgan Chase - to allow them to test and improve their security infrastructure.

But the company has also been accused of overhyping the powers of a technology which is its stock in trade - and the subject of fierce competition with rival OpenAI. The Mythos news broke as rumours grow that Anthropic will list on the stock market this year.

"We prefer to be transparent and lay these risks out on the table," Princen said, adding that AI safety concerns are "central to Anthropic's DNA". "We don't have all the answers; this has to be a conversation between tech actors like us who have the data, the academic world, the political world and the world of economists," he added.

Mythos' reported capabilities have unsettled the American financial sector and the European Union, which requested more information from Anthropic. In an open letter to businesses, the British government said that Mythos "highlights the speed at which AI capabilities are increasing and the threats they potentially pose".

No European company is part of Anthropic's "Project Glasswing" consortium for shoring up cyber defences using Mythos' findings.
That has raised questions about how prepared the rest of the world will be for the offensive capabilities of US-owned AI. Mythos is "certainly not a model that will soon be opened to the public at large, for obvious reasons," Princen said. Anthropic is nevertheless "thinking about the next waves of opening up," he added.

Europe is the region where Anthropic sees the fastest growth. Its Claude Code software development tool generates around $2.5 billion in annualized revenue - a figure based on extrapolating from a few recent weeks of sales. Much of that expansion comes from "European firms riding the wave" of AI, Princen said.

The company has opened offices in Dublin, London, Paris and Munich, and wants to keep investing across the continent. "We go where the demand is," Princen said, pointing to partnerships with European firms like Swedish coding startup Lovable or Danish pharma company Novo Nordisk.

Relatively unknown to the wider public until recently, Anthropic was founded in 2021 by former OpenAI staff and makes around 80 percent of its revenue from business-to-business sales. The company and its Claude chatbot surged in prominence in late February, when bosses refused to allow its AI tools to be used by the Pentagon for mass surveillance of American citizens or fully autonomous weapons.

The Trump administration responded by designating Anthropic a so-called "supply chain risk" to national security -- a decision being contested in multiple legal cases. In legal documents seen by AFP, Anthropic finance chief Krishna Rao warned that Washington's move could cost the firm multiple billions in revenue this year. On the other hand, "there are a lot of people who started using Claude precisely because of the position we took on that question," Princen said.

Anthropic said in early April that it had tripled its annualized revenues quarter-on-quarter to over $30 billion -- outpacing OpenAI for the first time. - AFP

Anthropic · Kuwait Times · 3d ago

Anthropic CEO makes shocking admission about AI

The CEO of one of the world's most powerful AI companies is warning that the technology his firm builds could destroy a massive share of entry-level white-collar jobs. And he says most people in government and business are not ready for it.

Anthropic CEO Dario Amodei told Axios that AI could eliminate half of all entry-level white-collar jobs within five years, a shift he said could push U.S. unemployment to between 10% and 20%.

What Amodei actually said about the impact of AI on jobs

Amodei did not speak in vague terms. He named specific fields and a specific window. "Entry-level jobs will be replaced by AI systems," he told Fox News. "We may indeed have a serious employment crisis on our hands." When asked about timing, he said, "I would not be surprised if somewhere between one and five years we start to see big effects here."

The industries most at risk, Amodei explained, are finance, consulting, law, and tech. Those are exactly the fields in which junior roles involve research, document review, data analysis, and report preparation - tasks that AI systems are rapidly learning to handle.

He was equally blunt about the lack of awareness. "Most of them are unaware that this is about to happen. It sounds crazy, and people just don't believe it," he told Fox News.

Amodei also painted a stark picture of what a positive AI scenario could look like, and why it should still alarm people. "Cancer is cured, the economy grows at 10% a year, the budget is balanced, and 20% of people don't have jobs," he said, according to Axios.

Why this AI employment warning is different

Amodei's concern is not just about job volume. It is about the breadth of AI's reach. At Davos in January 2026, he warned that AI's "cognitive breadth" means it will not disrupt one industry at a time. It could simultaneously affect finance, consulting, law, and tech, leaving workers with fewer options to switch fields, according to CNBC.

"The technology is not replacing a single job but acting as a general labor substitute for humans," he said. That makes this different from previous waves of automation. Factory workers displaced by robots could, in theory, move into service or office jobs. If AI is moving into office jobs at the same time, there is no obvious lane to switch into.

Entry-level job cuts damage the career ladder

One of the sharpest responses to Amodei's warning came from Emily Galvin-Almanza, an attorney and executive director of the nonprofit Partners for Justice. "I don't get how people are planning to sidestep the very basic problem that if you don't have junior hires right now, you won't have experienced people 5 or 10 years later," she wrote on X (the former Twitter) in response to Amodei's Fox interview, according to The Cool Down.

That is the career pipeline risk. Entry-level jobs are not just a starting point. They are how professionals gain the experience needed to move up. If AI eliminates those roles, it does not just hurt recent graduates. It eventually hollows out the senior talent bench behind them.

Hiring data already show AI effects

Amodei's warning is not purely hypothetical. Several data points suggest the shift is already underway. Big Tech's new graduate hiring has fallen nearly 50% from pre-pandemic levels, according to a SignalFire report. AI was cited as the reason for nearly 55,000 U.S. layoffs in 2025, according to Challenger, Gray & Christmas data cited by CNBC. A Massachusetts Institute of Technology study found AI can already perform the work of 11.7% of the U.S. labor market, saving up to $1.2 trillion in wages across finance, health care, and professional services, according to CNBC. Mercer's Global Talent Trends 2026 report, which surveyed 12,000 people worldwide, found 40% of employees feared losing their jobs to AI - up from 28% in 2024, according to CNBC.

Key figures behind Amodei's AI-jobs warning:

* Estimated share of entry-level white-collar jobs at risk: up to 50%, according to Axios
* Projected unemployment spike: 10% to 20%, Axios noted
* Timeline: one to five years, Fox News reported
* Most exposed industries: finance, consulting, law, and tech
* Big Tech new graduate hiring decline: nearly 50% from pre-pandemic levels, according to SignalFire
* AI-related U.S. layoffs in 2025: nearly 55,000, CNBC indicated
* Share of U.S. labor market AI can already replace: 11.7%, based on an MIT study cited by CNBC
* Employees who fear losing jobs to AI: 40%, up from 28% in 2024, according to Mercer via CNBC

Not everyone agrees with the AI doom scenario

Not everyone agrees that a massive AI-induced employment disruption is imminent. Yale University's Budget Lab published a report in October 2025 concluding that AI had not yet caused widespread job losses, based on U.S. labor market data from 2022 to 2025. The share of workers in different jobs had not changed significantly since ChatGPT's debut in late 2022, according to CNBC.

Deutsche Bank analysts also warned in a 2026 note that "AI redundancy washing will be a significant feature of 2026," meaning some companies may be using AI as a cover for job cuts that have other causes, CNBC noted.

But Amodei anticipated the skepticism. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," he said, according to Axios. "I don't think this is on people's radar."

This story was originally published April 20, 2026 at 11:33 AM.

Anthropic · MyrtleBeachOnline · 3d ago

Anthropic's Mythos Leads Global Bank Regulators to Call For Increased Vigilance | PYMNTS.com

The latest concerns come from the Asia-Pacific region, Reuters reported Monday (April 20), where regulators said they were tracking the development and potential implications of Mythos.

Anthropic said earlier this month that Mythos had uncovered thousands of high-severity vulnerabilities, including flaws in major operating systems and web browsers. Initially, the startup limited access to about 40 companies, including Amazon, Apple and J.P. Morgan Chase, so they could experiment with the model and address weaknesses in their systems. As Reuters noted, the model's capabilities for high-level coding could grant it a potentially unprecedented ability to spot cybersecurity vulnerabilities.

A spokesperson for the Australian Securities and Investments Commission (ASIC) told Reuters that it was closely monitoring the use of Mythos along with other regulators to determine possible implications for the Australian market. "ASIC engages closely with other regulators, government agencies and the financial sector to understand and respond to changing technologies," the spokesperson said. The commission added that it expects financial services licensees to "be on the front foot" to protect customers and clients.

The Australian Prudential Regulation Authority (APRA), which regulates the country's banks, said it would "continue to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system."

Meanwhile, South Korea's Financial Supervisory Service (FSS) said it met with information security officials from financial companies last week to discuss Mythos-related risks.

Elsewhere in Asia, Singapore's central bank, the Monetary Authority of Singapore (MAS), said advances in AI could accelerate the discovery and exploitation of software vulnerabilities in information technology systems. "Financial institutions need to redouble efforts to strengthen their security defences, proactively identify and close vulnerabilities, and raise vigilance on cyber hygiene, including timely security patching," it said. MAS added that it was coordinating with the Cyber Security Agency of Singapore to protect critical infrastructure operators.

These statements follow similar warnings in Europe, Great Britain and the U.S., where the Treasury Department has sought access to Mythos. As PYMNTS wrote last week, statements such as these show the "split-screen reality" around Anthropic in the wake of Mythos' release. "The company is gaining traction fast in the enterprise market even as regulators and banks scramble to understand the risks that come with more powerful AI tools," that report said.

Anthropic · PYMNTS.com · 3d ago

News Explorer -- Polymarket Seeks $400 Million in Funding at $15 Billion Valuation: Report

Polymarket is in discussions to secure $400 million at a $15 billion valuation, with plans to attract more investors. Intercontinental Exchange invested $600 million last month, bringing its total investment to $1.6 billion and strengthening the partnership. The prediction market industry is gaining institutional support despite regulatory challenges.

Polymarket · Decrypt · 3d ago

NSA is reportedly using Anthropic's Mythos model despite Pentagon blacklist

The reported use points to a growing split inside the US government as officials weigh Anthropic's cyber capabilities against an ongoing DoD crackdown.

The National Security Agency is using Anthropic's Mythos Preview despite the Pentagon's supply chain risk designation against the company, according to an Axios report. Axios said two sources told the outlet the NSA was using the model, while one source said usage had also expanded more broadly within the department.

The report highlights a widening contradiction inside the US government. In March, the Pentagon formally designated Anthropic a supply chain risk, a step that limited the use of the company's technology in military contracts after a dispute over Anthropic's refusal to loosen safeguards related to autonomous weapons and domestic surveillance.

Axios said it remains unclear how the NSA is using Mythos, though organizations with access to the model have used it to scan their own systems for exploitable security vulnerabilities. Anthropic limited Mythos access to roughly 40 organizations because of the model's offensive cyber capabilities.

The timing is notable because the White House appears to be exploring a path forward with Anthropic even as the court fight continues. Axios reported last week that Anthropic CEO Dario Amodei met White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss government use of Mythos and the company's broader security practices.

Anthropic · Crypto Briefing · 3d ago

Red, White and Blue Ash to add fences, security and age rules after last year's chaos

There will be new rules and safety measures in place for this year's Red, White and Blue Ash after last year's event ended in chaos with multiple teens arrested.

Body camera footage from the Fourth of July celebration showed the aftermath after a group of around 400 young people set off their own fireworks, causing panic. Afterward, police arrested multiple people, including a teenager who was facing several charges. Those charges were later dropped.

The fireworks display organized by Red, White and Blue Ash finished just one minute before police body camera footage began. Reports indicated rogue fireworks were set off prior to the finale. Blue Ash police said one officer was burned on his leg because of the fireworks but has recovered.

This year, the city announced new rules and changes. One of the biggest changes is that no one under 18 will be admitted without a parent or guardian present. The event will also now have a fenced perimeter with "four designated entry and exit points" staffed with security and camera surveillance.

There will also be road closures: Glendale Milford Road will be closed from E. Lake Forest Dr. to the MadTree property located at 4321 Glendale Milford Rd., beginning at 3:00 p.m. Additional closures will be implemented along Glendale Milford Rd. as the night progresses.

Other new rules and safety measures include:

* Event entrances will open at 3:30 p.m. Attendees will not be allowed to enter early.
* No chair drop-off prior to entrances opening.
* No umbrellas.
* All bags must be clear and not exceed 16" x 16" x 8". Small clutches (no larger than 4.5" x 6.5") are permitted.
* All bags and carry-in items will be subject to search upon entry or at any time while inside the event.
* Businesses within the festival perimeter may be accessed via the entrance on Summit Parkway. Contact the individual businesses for the most up-to-date July 4th hours. The event rules will apply at this entrance throughout the entire day.

As with previous events, organizers say foldable chairs and blankets are permitted; however, no tents, stakes, tarps, canopies, or other coverings will be allowed. No coolers, outside alcohol or pets will be allowed.

"These are significant modifications to the way this event has been carried out in the past," said Blue Ash Communications Coordinator Rachel Murray. "Following an incident at last year's event, we took a close look at ways to enhance both safety and overall guest experience. These updates reflect our commitment to providing a safe, enjoyable experience for all attendees and align with best practices used at major events across the region."

CHAOS · WLWT5 · 3d ago

What is Anthropic Claude Design? Tool launched to rival Figma

Anthropic has dropped a new AI tool called Claude Design. It can generate polished visual assets, ranging from interactive prototypes to pitch decks, from just a text prompt.

The launch has already rattled the design software market, with Adobe shares dropping 1.5% and Figma sliding 7% in secondary trading. The announcement comes shortly after Anthropic's Chief Product Officer resigned from Figma's board.

Claude Design is powered by Anthropic's latest Claude Opus 4.7 model and is designed to help both professional designers and non-designers create and refine visual work efficiently.

What is Claude Design?

Claude Design allows users to start projects by uploading images, documents (DOCX, PPTX, XLSX), or even pointing the AI at an existing codebase. The tool generates a first draft, which can then be refined using inline comments, direct edits, or custom sliders.

Anthropic claims that Claude Design can be used for:

* Interactive prototypes: turning static mockups into shareable prototypes for user testing.
* Pitch decks: transforming rough outlines into complete, on-brand presentations.
* Frontier design: creating complex prototypes with voice, video, 3D elements, and shaders.
* Marketing collateral: generating landing pages, social media assets, and campaign visuals.

How to Use Claude Design

1. Open the Claude app or website.
2. Create a project and upload context (DOCX, PPTX, XLSX, screenshots).
3. Review the AI-generated draft on the canvas.
4. Refine using chat prompts, inline comments, or edits.
5. Export the finished design to Canva, PDF, PPTX, or HTML files.

Claude Design also integrates with Claude Code, allowing users to hand off completed designs for backend development.

Availability

Claude Design is available at no extra cost for Pro, Max, Team, and Enterprise subscribers. With this launch, Anthropic is positioning itself as a direct competitor to design platforms like Figma and Adobe.

Anthropic · Mashable ME · 3d ago

iOS 27 Update: Apple to End iPhone Home Screen Chaos With New 'Undo' Button

New iOS 27 features aim to enhance user control over iPhone Home Screen customisation.

iPhone users who often encounter issues with their Home Screen after accidental changes are likely to welcome Apple's latest potential enhancement. The Cupertino company appears set to introduce new 'undo' and 'redo' buttons with the launch of iOS 27, helping users avoid unwanted changes.

According to Mark Gurman in his latest Power On newsletter, Apple is planning to add these options to the iPhone Home Screen customisation menu in iOS 27. The update would expand on the current options available when long-pressing the Home Screen. At present, a long press brings up four options: Add Widget, Customise, Edit Wallpaper and Edit Pages. 'Apple is looking at adding "undo" and "redo" buttons in that same menu to make reversing or redoing changes easier,' Gurman wrote.

New Feature Improves Home Screen Control

The changes may not be major, although they address the frustration some iPhone users face after accidentally altering their Home Screen. Some explore and experiment with different features but end up struggling to revert to the original layout. With the two added buttons, users would be able to review their current Home Screen and compare it with previous versions. This would allow them to move back and forth and select the layout they prefer.

Aside from the undo button in iOS 27, other potential changes are also reportedly in development, including a Liquid Glass adjustment slider, Apple Intelligence upgrades, and performance and battery life improvements, according to AppleInsider.

Compatibility and Rollout Expectations

The changes in iOS 27 should be welcome news for iPhone users dealing with issues on their devices. However, the downside is that not all iPhone models are expected to be supported. According to leaker Instant Digital, iOS 27 will allegedly be compatible only with iPhone 12 models and later, meaning earlier devices will no longer be supported. This would exclude the iPhone 11, iPhone 11 Pro, iPhone 11 Pro Max and iPhone SE (second generation). However, it is claimed that these devices will continue to receive iOS 26 security updates for at least a few years, according to MacRumors.

Furthermore, those eager to try Apple Intelligence should note that it is expected to be available only on iPhone 15 Pro models or later. While this remains a rumour, Instant Digital has previously provided accurate information, including the yellow iPhone 14 and iPhone 14 Plus.

Apple is expected to announce iOS 27 on 8 June during WWDC 2026. The first developer beta is likely to be released later that day, while the public beta may follow in July. The final release is expected in September, the same month the Cupertino company is anticipated to announce the iPhone 18 line-up. iOS 27 has been compared to Mac OS X Snow Leopard and is expected to focus on security patches and fixes. With time still remaining before launch, it remains to be seen whether further enhancements will be added.

CHAOS · International Business Times UK · 3d ago

NSA Is Using Anthropic's Powerful Claude Mythos AI as CEO Meets With White House: Report - Decrypt

An administration source told Axios that every federal agency except the Department of Defense wants access to Anthropic's AI tools.

The National Security Agency is running Anthropic's Claude Mythos Preview inside its classified networks, according to two sources cited by Axios -- a surprising development given that the NSA falls under the Department of Defense, which declared Anthropic a supply-chain risk in March and is currently fighting the company in federal court.

Claude Mythos is not a standard enterprise tool. When Anthropic unveiled the model earlier this month, it restricted access to a handful of vetted organizations, arguing that the model poses serious offensive security risks. Anthropic's own technical documentation found that Mythos was able to identify critical vulnerabilities in every widely used operating system and web browser. The company judged it too dangerous for open release.

Most organizations with access are using the model defensively, scanning their own infrastructure for weaknesses before adversaries do. The initiative, branded Project Glasswing, includes Microsoft, Google, Apple, Amazon Web Services, JPMorgan Chase, and Nvidia. What the NSA is doing with Mythos is less clear, though the agency's mission is not purely defensive. A third source told Axios the model is being used more broadly within the intelligence department.

The Pentagon's hostility toward Anthropic traces to negotiations that went bad. In July 2025, the two sides signed an agreement making Claude the first frontier AI model cleared for use on classified networks. Talks soured when the Pentagon sought to renegotiate, demanding the military be allowed to use Claude "for all lawful purposes" without restriction. Anthropic refused, drawing two firm lines: no autonomous weapons, and no domestic mass surveillance.

When negotiations collapsed, Defense Secretary Pete Hegseth declared Anthropic a supply-chain risk in late February -- an unprecedented designation, and the first ever applied to an American company. A California federal judge blocked the move, but then a D.C. appeals court denied Anthropic's separate bid to halt the blacklisting while litigation plays out. The two sides remain in court.

While the legal fight grinds on, the rest of the administration is moving in a different direction. On April 17, Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. Anthropic described the session as "productive", Reuters reported. The White House said the parties "discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology."

President Trump, asked by reporters about the meeting, said he had "no idea" Amodei had been at the White House, after he previously ordered the administration not to use Anthropic's "woke" models. Bessent and Federal Reserve Chair Jerome Powell have separately been encouraging major bank CEOs to test Mythos and be prepared for security threats, and an administration source told Axios that every federal agency except the Defense Department wants access to Anthropic's tools.

The NSA's reported use of Mythos comes as questions mount about whether the model's capabilities can be contained at all. Decrypt reported last week that researchers at Vidoc Security reproduced several of Mythos's most alarming cybersecurity findings using publicly available models -- including OpenAI's GPT-5.4 and Anthropic's own Claude Opus 4.6 -- without any special access to Mythos itself.

Anthropic did not immediately respond to a request for comment by Decrypt.

Anthropic · Decrypt · 3d ago

Google builds elite team to close the coding gap with Anthropic

To close the gap, Google is increasingly training its AI models on internal code while also tracking employee usage of internal coding tools and, in some teams, making AI training mandatory.

Google is doubling down on AI coding, using more AI internally and aiming for models that can eventually improve themselves. Google DeepMind has put together a specialized team of researchers and engineers to sharpen the programming chops of its Gemini models, The Information reports. The group is led by DeepMind engineer Sebastian Borgeaud, who previously ran pre-training for the company's models. The team is focused on complex, long-horizon programming tasks like writing new software from scratch - work that requires models to read files and figure out what the user actually wants.

Part of the motivation: Google researchers think Anthropic's coding tools are better. Coding has become the battleground for every major AI lab this year, with OpenAI and Google both scrambling to catch up to Anthropic. OpenAI recently pulled the plug on its Sora video generator to free up compute for training and running other AI models.

Google co-founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu are directly involved in the effort. "To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers" of code, Brin wrote in an internal memo. He also required every Gemini engineer to use internal agents for complex, multi-step tasks. Brin told employees that stronger coding skills are a stepping stone toward AI that can improve itself. A sophisticated coding agent, paired with AI that handles math problems and experiments, could eventually automate much of the work done by AI researchers and engineers.

Internally, Google tracks how much its coding tool "Jetski" gets used and ranks teams accordingly - a setup similar to Meta, which tracks token usage as its metric. Some teams outside DeepMind also require engineers to attend AI training sessions.

According to The Information's sources, Google is leaning more heavily on models trained on its internal code. Google's internal codebase looks very different from the public code typically used to train general-purpose coding agents, so these internally trained models can't be released publicly. They could, however, help Google build better models that eventually ship to users, while also speeding up internal development.

Anthropic
THE DECODER3d ago
Read update
Google builds elite team to close the coding gap with Anthropic

Cloud Platform Vercel Reports Unauthorized Access to Internal Systems

Vercel breach exposes frontend risks as non-sensitive variables and AI integrations create new crypto attack vectors. Security concerns have surfaced around cloud infrastructure provider Vercel following an internal systems breach. The incident has raised questions about potential exposure for crypto projects that rely on the platform. While services remain active, the situation has drawn attention due to possible risks tied to environment variables and integrations. Ongoing investigations continue to assess the scope and impact across affected users. Vercel disclosed that attackers gained entry through a compromised employee account linked to a third-party AI service. According to CEO Guillermo Rauch, the intrusion originated from an OAuth breach involving an AI tool connected to Google Workspace. That external compromise allowed attackers to pivot into Vercel's internal systems and escalate access. Rauch explained that sensitive customer environment variables remain encrypted at rest. However, attackers reportedly accessed variables marked as non-sensitive. That distinction has become a focal point, especially for developers who may have stored important keys without encryption flags. External cybersecurity teams, including Mandiant, are assisting with the response. Vercel has also contacted Context.ai to better understand the breach's origin and broader exposure. Authorities have been notified as part of the response process. Reports from BleepingComputer pointed to a post on BreachForums where a seller linked to ShinyHunters offered alleged Vercel data for $2 million. Claims included access to internal credentials, source code, and employee records. No independent verification has confirmed the authenticity of those claims. A sample shared online reportedly included hundreds of employee entries. Details listed names, email addresses, and activity logs. Vercel has not confirmed any ransom negotiations publicly. 
Developer Theo Browne noted that internal integrations with GitHub and Linear may have been heavily affected. His comments align with Vercel's advice that users rotate environment variables, especially those not flagged as sensitive. Crypto projects face notable exposure due to common reliance on Vercel for frontend hosting. Many decentralized applications run interfaces, dashboards, and wallet connections through such infrastructure. Any project storing private API keys or RPC endpoints without proper safeguards could face risk. Frontend attacks already pose recurring threats across Web3. Recent incidents show how attackers target infrastructure layers rather than core protocols. In many cases, users interact with compromised interfaces without realizing it. Several recent events reflect that trend, as CoW Swap paused trading after a domain hijack. Aerodrome and Velodrome faced DNS-based attacks months earlier. Meanwhile, EasyDNS admitted involvement in the hijack of eth.limo. Those incidents typically redirect users to malicious interfaces. Attackers clone legitimate platforms and drain wallets once users connect. In contrast, a hosting-layer breach introduces a deeper risk. Direct access to build outputs could allow attackers to alter live applications. For crypto teams, the central security question is whether any live deployments were modified during the breach; that remains uncertain. Vercel has not reported confirmed cases of tampered customer applications. However, caution remains necessary given the nature of the access described. No major crypto project has publicly confirmed being contacted by Vercel at the time of writing. Still, many teams are likely reviewing internal setups and rotating credentials as a precaution. Further updates are expected as investigations continue. For now, the incident serves as a reminder of how interconnected tools, integrations, and infrastructure can introduce unexpected risks across the crypto sector.
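Vercel's guidance amounts to treating every variable that was not stored with a sensitive flag as potentially exposed. A minimal sketch of that triage, assuming a hypothetical record shape for illustration (the `type` field and its values here are assumptions, not Vercel's actual API schema):

```python
# Triage environment variables after a breach: anything not stored
# with a sensitive/encrypted type is treated as potentially exposed
# and queued for rotation. Record shape is illustrative only.

SAFE_TYPES = {"sensitive", "encrypted"}  # assumed type labels

def rotation_candidates(env_vars):
    """Return the keys that should be rotated first."""
    return [v["key"] for v in env_vars if v.get("type") not in SAFE_TYPES]

env_vars = [
    {"key": "DATABASE_URL", "type": "encrypted"},
    {"key": "NEXT_PUBLIC_API_BASE", "type": "plain"},
    {"key": "THIRD_PARTY_RPC_KEY", "type": "plain"},  # stored without a flag
]

print(rotation_candidates(env_vars))
# → ['NEXT_PUBLIC_API_BASE', 'THIRD_PARTY_RPC_KEY']
```

The point of the exercise is the default: a key is rotated unless it was explicitly marked sensitive, not the other way around.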

Vercel
Live Bitcoin News3d ago
Read update
Cloud Platform Vercel Reports Unauthorized Access to Internal Systems

NSA Spies Are Reportedly Using Anthropic's Mythos, Despite Pentagon Feud

The National Security Agency is said to be using Mythos Preview, Anthropic's recently announced model that it withheld from public release, Axios reports. The news comes weeks after the NSA's parent agency, the Department of Defense, branded Anthropic a "supply chain risk" after the company refused to give Pentagon officials unrestricted access to its model's full capabilities. Anthropic announced Mythos earlier this month as a frontier model designed for cybersecurity tasks, but said the model was too capable of offensive cyberattacks to be released publicly. As a result, the AI firm limited access to Mythos to about 40 organizations, of which it has publicly named only a dozen. The NSA appears to be among the undisclosed recipients, and is said to be using Mythos chiefly to scan environments for exploitable vulnerabilities. The UK's AI Security Institute has also confirmed it has access to Mythos. The U.S. military's expanding use of Anthropic's tools comes as it simultaneously argues in court that those tools could threaten national security. The Pentagon's dispute began when Anthropic refused to make Claude available for mass domestic surveillance and autonomous weapons development. The NSA's access to Mythos comes as Anthropic's relationship with the Trump administration appears to be thawing. Last Friday, Anthropic chief executive Dario Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent. The White House reportedly called the meeting productive. 
TechCrunch has reached out to the NSA for comment. Anthropic declined to comment.

Anthropic
Beritaja3d ago
Read update
NSA Spies Are Reportedly Using Anthropic's Mythos, Despite Pentagon Feud

Vercel Traces Customer Data Theft to Agentic AI Tool Breach

Attacker First Compromised AI Tool Used by Vercel Employee, Platform Provider Finds
Cloud platform provider Vercel said an attacker stole customer data after compromising a third-party agentic artificial intelligence tool used by an employee. San Francisco-based Vercel runs a widely used frontend cloud platform, and created and maintains the popular Next.js framework for React, the JavaScript library used to build web applications. Next.js provides full stack development - referring to both backend and frontend components. "We've identified a security incident that involved unauthorized access to certain internal Vercel systems," the company first warned customers on Sunday. The company said that it's brought in outside cybersecurity firms to help investigate, including Google's Mandiant incident response group. The company said the incident began with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. "The attacker used that access to take over the employee's Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as 'sensitive.'" Vercel said it's notifying affected customers, which it said amounts to a "quite limited" number. "We've reached out with utmost priority to the ones we have concerns about," said Vercel CEO Guillermo Rauch in a Sunday post to social platform X. The company said all stored sensitive data is fully encrypted and doesn't appear to have been exposed. Data customers typically designate as "sensitive" include everything from API keys and tokens to database credentials and signing keys. Vercel recommends all customers review the Vercel activity log for suspicious activity, as well as review environment variables. Any not marked as being sensitive "should be treated as potentially exposed and rotated as a priority," it said. 
"If your organization relies on their infrastructure, I strongly recommend you start looking into this immediately," said Austin Larsen, principal threat analyst for Google Threat Intelligence Group, in a Sunday post to LinkedIn. Who perpetrated the attack against Vercel and what all they stole remains unclear. "A group claiming to be ShinyHunters has taken responsibility for this breach. However, it is likely this is an imposter attempting to use an established name to inflate their notoriety," Larsen said (see: Latest BreachForums Reboot Tied to Fake ShinyHunters Admin). Vercel also recommends customers rotate bypass tokens they've created for testing deployments, as well as "investigate recent deployments for unexpected or suspicious looking deployments" and "delete any deployments in question" if there is any question as to their authenticity. That risk ties to an attacker potentially having backdoored or otherwise altered a customer's software. Cybersecurity firm Hudson Rock said the purported attacker on Sunday began listing for sale on a cybercrime forum stolen "access key / source code / database" from Vercel. Hudson Rock said it's found evidence that a Context.ai employee fell victim to Lumma information stealing malware on Feb. 17. The infostealer appeared to harvest valid Context.ai corporate credentials for Google Workspace, Supabase, Datadog and Authkit, as well as for the account, it said. "The exposure of these developer and administrative tools provided the exact leverage needed to escalate privileges, bypass initial security perimeters and successfully pivot into Vercel's infrastructure," it said. Vercel published Sunday an indicator of compromise for a malicious app used by the attacker. "We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately" in their Google Admin Console's API Controls section, it said. 
Context.ai Confirms Breach
Context.ai on Sunday confirmed that it was breached, saying that an attacker gained unauthorized access to its Amazon Web Services environment in March. The company hired CrowdStrike to investigate, said the breach involved a product designed to be run onsite by customers - since deprecated - and appeared to only result in the breach of a single customer's environment. In the wake of Vercel getting breached and further internal investigation, Context.ai on Sunday revised its conclusions, saying that the attacker "also likely compromised OAuth tokens for some of our consumer users," including one that allowed them "to access Vercel's Google Workspace," by using the OAuth token in what's known as a replay attack to gain unauthorized access to the service. While Vercel isn't a corporate customer, "it appears at least one Vercel employee signed up for the AI Office Suite using their Vercel enterprise account and granted 'Allow All' permissions," and "Vercel's internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel's enterprise Google Workspace," Context.ai said. After being breached in March, and in conjunction with CrowdStrike, Context.ai said it better locked down its primary AWS environment, including implementing better "encryption, segmentation, authentication and monitoring controls." How many other Context.ai users might also have been breached isn't clear. Vercel said that it could involve "hundreds of users across many organizations."
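The "replay attack" Context.ai describes works because a bearer token is self-authorizing: a service that validates only the token's signature and expiry cannot distinguish the legitimate client from an attacker who stole the token. A toy illustration of that gap (the token format and checks below are simplified assumptions, not how Google's OAuth validation actually works):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"  # stands in for the token issuer's key

def issue_token(subject, ttl=3600):
    """Mint a signed bearer token (toy format, not a real OAuth token)."""
    payload = json.dumps({"sub": subject, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def accept(token):
    """Validate signature and expiry only. Note what is missing: nothing
    binds the token to the client presenting it -- exactly what a
    replay attack exploits."""
    body, sig = token.rsplit(".", 1)
    payload = base64.b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["exp"] > time.time()

token = issue_token("employee@example.com")  # minted for the real user
assert accept(token)                          # legitimate client: accepted
stolen = token                                # attacker exfiltrates the token
assert accept(stolen)                         # replayed from anywhere: still accepted
```

Mitigations like sender-constrained tokens (mTLS or DPoP binding), short lifetimes, and token revocation all target the same weakness: making a stolen token worthless outside the client it was issued to.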

Vercel
DataBreachToday3d ago
Read update
Vercel Traces Customer Data Theft to Agentic AI Tool Breach

The 14 Executives Now Driving Anthropic's Future After Its Labs Buildout

Mike Krieger, Ami Vora and Rahul Patil take on expanded roles as Anthropic builds Labs and eyes a high-stakes IPO. Anthropic began restructuring its leadership and organization in early 2026 as it prepares for a potential IPO. The company, which could go public as early as October at a valuation of up to $630 billion, has recently pulled ahead of competitors in revenue. Its annualized run rate (a prediction of annual revenue based on current quarterly performance) surpassed $30 billion in April, more than triple the figure at the end of 2025 and ahead of OpenAI's reported $25 billion. Headcount has also surged, with roughly 2,300 employees at the end of last year, more than double its size just months earlier. Together, these changes reflect a growing emphasis on both rapid experimentation and commercial scale, while maintaining safety as a core differentiator. A key part of this restructuring is the creation of Anthropic Labs in January, a research and development unit focused on incubating experimental products at the frontier of Claude's capabilities. One of its first major decisions was to withhold public release of its most advanced model, Claude Mythos, due to its ability to identify and exploit software vulnerabilities. Instead, the company is pursuing a limited rollout, known as Project Glasswing, granting access to more than 50 organizations, including Microsoft and Nvidia, to help strengthen cyber defenses. These structural changes are closely tied to a broader leadership reshuffle, including the promotion of former chief product officer Mike Krieger to co-lead Anthropic Labs. 
"Krieger is someone who knows how to build products that stick," Div Garg, founder and CEO of on-device superintelligence company AGI, Inc., told Observer. "At this stage, the constraint isn't model quality. The constraint is turning capability into things people actually use and pay for, repeatedly, at scale."

Mike Krieger and Ben Mann co-lead Anthropic Labs
Krieger, Anthropic's former chief product officer and a co-founder of Instagram, transitioned to co-lead Anthropic Labs at its launch earlier this year. At Instagram, he served as chief technology officer, and later co-founded the news app Artifact, which he sold to Yahoo in 2024. A native of Brazil, Krieger is a Stanford University alumnus. Ben Mann, an Anthropic co-founder, previously helped architect GPT-3 at OpenAI and worked as a software engineer at Google. Before moving to Labs, he served as Anthropic's lead product engineer, focusing on A.I. alignment and harm mitigation. Mann graduated from Columbia University. At Anthropic Labs, Krieger and Mann oversee a range of high-stakes initiatives, including the controlled rollout and governance of Claude Mythos, the company's most advanced model. While Mythos can significantly strengthen cybersecurity by identifying and exploiting software vulnerabilities, it also poses risks if misused. To manage that tension, Anthropic has opted for a limited release under Project Glasswing. With Labs' creation and their appointment, Anthropic has "the right structure in place to support the most critical motions for our product organization -- discovering experimental products at the frontier of Claude's capabilities and scaling them responsibly," Anthropic president Daniela Amodei wrote in a release.

Ami Vora replaces Mike Krieger as CPO
Following Krieger's move to Anthropic Labs, Ami Vora has taken on the role of chief product officer. She joined the company in December 2025 as head of product and was quickly promoted. Vora previously spent 15 years at Meta, where she held leadership roles, including vice president of product at Facebook and vice president of product and design at WhatsApp. She began her career at Microsoft and remains on the board of cloud monitoring platform Datadog. As CPO, Vora works closely with chief technology officer Rahul Patil to scale Claude beyond experimentation and expand Anthropic's market presence.

Rahul Patil stays on as CTO, with an expanded role
Anthropic's broader leadership bench remains deep, with all seven co-founders still at the company. Rahul Patil, who became CTO in October, succeeded Sam McCandlish, now chief architect. Patil previously served as CTO of Stripe and has led engineering teams at Microsoft, AWS and Oracle. Now working in close coordination with Vora, Patil is focused on bridging the gap between technical research and production-ready products. As Anthropic moves closer to a potential IPO, that alignment is increasingly critical. "Anthropic has decided the frontier lab model only gets you so far," said AGI, Inc.'s Garg. "My read is that Anthropic is preparing for a more competitive commercial phase, probably regardless of IPO timing."

Other executives shaping Anthropic's future
Dario Amodei, CEO and co-founder: Dario Amodei previously served as vice president of research at OpenAI. He founded Anthropic in 2021 with his sister, Daniela, and other former OpenAI colleagues.
Daniela Amodei, president and co-founder: Daniela Amodei, who previously served as vice president of safety and policy at OpenAI, oversees Anthropic's core operations, including chief technology officer Rahul Patil and chief architect Sam McCandlish.
Jared Kaplan, chief science officer and co-founder: Anthropic co-founder and former OpenAI researcher Jared Kaplan serves as chief science officer. Since 2024, he has also served as the company's responsible scaling officer, helping guide safety-related decisions.
Jan Leike, alignment science lead: Jan Leike, who co-led OpenAI's superalignment team, has served as Anthropic's alignment science lead since 2024.
Sam McCandlish, chief architect and co-founder: Another former OpenAI employee, Sam McCandlish focuses on model training and large-scale systems development. He previously served as Anthropic's CTO.
Tom Brown, chief compute officer and co-founder: Former OpenAI GPT-3 researcher Tom Brown oversees Anthropic's compute infrastructure.
Vitaly Gudanets, CISO: Vitaly Gudanets has served as Anthropic's chief information security officer since September. He previously led security efforts at Netflix.
Jack Clark, head of policy and co-founder: A former OpenAI policy director and technology journalist, Jack Clark leads Anthropic's policy work.
Krishna Rao, CFO: Krishna Rao joined Anthropic as chief financial officer in 2024 after previously leading finance at Airbnb.
Christopher Olah, interpretability research lead and co-founder: Christopher Olah, a former interpretability lead at OpenAI, heads Anthropic's interpretability research, focusing on model transparency and A.I. safety.

Anthropic's board and trust
In February, Anthropic appointed Chris Liddell, a former deputy White House chief of staff and former CFO at Microsoft and General Motors, to its board of directors. Daniela Amodei said Liddell has "a track record of helping organizations get [technology, public service and governance] right when the stakes are highest." With his addition, the board comprises Dario Amodei, Daniela Amodei, Yasmin Razavi, Jay Kreps, Reed Hastings and Chris Liddell. The Long-Term Benefit Trust recently removed Kanika Bahl and Zach Robinson and added Mariano-Florentino Cuéllar; Neil Buddy Shah remains on the trust board.

Anthropic
Observer3d ago
Read update
The 14 Executives Now Driving Anthropic's Future After Its Labs Buildout

Vercel Breach Explained: OAuth Risk in AI + SaaS Environment - IT Security News


Vercel
IT Security News - cybersecurity, infosecurity news3d ago
Read update
Vercel Breach Explained: OAuth Risk in AI + SaaS Environment - IT Security News

Cloud development platform Vercel confirms breach.

Insurance carriers quietly back away from covering AI outputs (CSO Online) Many insurers have begun to exempt AI workloads from cybersecurity and errors and omissions coverage, saying their outputs are too unpredictable to write policies around. For a complete running list of events, please visit the Event Tracker. Newly Noted Events CSA Agentic AI Security Summit 2026 (Virtual, Apr 29 - 30, 2026) Securing the Future of Autonomous Intelligence. Welcome to the Agentic Wild Kingdom -- where autonomous AI agents don't just assist... they act, collaborate, compete, and evolve. Over two action-packed days, we'll explore the explosive growth of agentic ecosystems and the new reality they create: a dynamic, unpredictable environment where agents interact across tools, data, and each other -- often beyond direct human control. This is not just the future of AI. It's a whole new operational paradigm. At the center of it all is a critical challenge: securing the agentic control plane. CyberArk IMPACT 2026 (Austin, Texas, USA, May 11 - 13, 2026) At CyberArk IMPACT, cybersecurity practitioners and leaders will come together to explore today's #1 attack vector - identity. 
Attendees will acquire a deep understanding of the latest developments in identity-based cyberattacks, including sophisticated attacker techniques that leverage AI and other methods. And most importantly, they will learn how to shut out attackers and prevent unauthorized access by securing every identity across the organization with the right level of privilege controls. KB4-CON 2026 (Orlando, Florida, USA, May 12 - 14, 2026) Since its inception in 2018, KB4-CON has evolved from a user-focused gathering to the must-attend event for anyone serious about staying ahead in the security landscape. It is the human risk management industry's premier event, bringing together KnowBe4 customers, channel partners, prospects, plus security advocates and industry professionals. This is your exclusive opportunity to engage with and learn from the industry's most influential players, all under one roof. ASCEND 2026 (Washington, DC, USA, May 19 - 21, 2026) ASCEND connects the civil, commercial, and national security space sectors, along with adjacent industries, to embrace the opportunities and address the challenges that come with increased activity in space. Building our sustainable off-world future requires long-term thinking. Strategic planning, innovation, scientific exploration, and effective regulations and standards will help us preserve space for future generations. ASCEND will enable the technical exchanges, debates, and collaboration that will help forge a sustainable off-world future for all.

Vercel
The CyberWire3d ago
Read update
Cloud development platform Vercel confirms breach.

Inside Cerebras' IPO filing

Cerebras' aim is to eventually hit a $250 billion valuation, per its new prospectus. Behind the scenes: CEO Andrew Feldman and CTO Sean Lie are set for additional share payouts should the company reach $75 billion, $150 billion, and $250 billion in average valuation within nine years. Zoom out: Investors worried that Cerebras' revenue was heavily concentrated in Abu Dhabi when it previously filed. The new filing appears to address that, lining up contracts with OpenAI and Amazon's AWS. * The $20 billion OpenAI deal could dwarf Cerebras' current annual revenue (nearly $510 million last year). Between the lines: OpenAI doesn't need to spend the $20 billion in its agreement to receive a good chunk of a potential 10% stake in the company in return. * OpenAI already has access to about a sixth of that stake, as part of an agreement for it to lend $1 billion to Cerebras. * Another 17% or so of the Sam Altman-led company's shares in the chipmaker will vest if Cerebras maintains a $40 billion valuation on average for a month. That's not far off from the $35 billion Cerebras is seeking in the IPO -- especially without three mega AI IPOs to dry up markets for other AI bets. * OpenAI will have access to the rest of the stake if it fully buys up the 2GW of AI inference compute capacity. Flashback: OpenAI inked a similar deal with AMD last year that also gave OpenAI an up to 10% stake in the chipmaker, also dependent on the delivery of certain GPU products and AMD's share price eventually hitting $600. * That deal was widely seen as a way for OpenAI to finance its chip buying. Context: The cap table of Coreweave, a cloud data center competitor, was dominated by crossover investors in its IPO last year. * Cerebras' largest investors include Alpha Wave Ventures, Benchmark, Eclipse Ventures, Fidelity, and Foundation Capital. The bottom line: The AI world's entanglements aren't going away.
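The tranche arithmetic behind those figures can be made concrete. Treating the reported numbers as the article's approximations rather than prospectus terms (up to 10% of Cerebras overall, "about a sixth" of that tied to the $1 billion loan, "another 17% or so" of the stake on the $40 billion valuation condition):

```python
# Back-of-the-envelope on the reported Cerebras/OpenAI stake tranches.
# All percentages are the article's approximations, not filing terms.

total_stake = 0.10                    # up to 10% of Cerebras

loan_tranche = total_stake / 6        # "about a sixth", tied to the $1B loan
vesting_tranche = total_stake * 0.17  # "another 17% or so", on the $40B condition
remainder = total_stake - loan_tranche - vesting_tranche  # tied to the full 2GW purchase

print(f"loan tranche:      {loan_tranche:.2%} of Cerebras")
print(f"valuation tranche: {vesting_tranche:.2%} of Cerebras")
print(f"remaining:         {remainder:.2%} of Cerebras")
```

In other words, roughly 1.7% of the company is already accessible, a similar slice vests on the valuation condition, and about two-thirds of the stake still hinges on OpenAI actually buying the full 2GW of inference capacity.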

Cerebras
Axios3d ago
Read update
Inside Cerebras' IPO filing

Deutsche Bank CEO says 'everyone' trying to access Anthropic's Mythos as global regulators review risks

* Banks, regulators assess Mythos cybersecurity risks and industry preparedness * Access to Mythos restricted, with JPMorgan the only bank confirmed so far * Deutsche Bank CEO says everyone is trying to gain access to Mythos FRANKFURT, April 20 (Reuters) - Deutsche Bank CEO Christian Sewing said on Monday that banks were in close contact with European watchdogs about Anthropic's Mythos as regulators rush to examine the cybersecurity risks the new artificial intelligence model raises and how prepared financial firms are to tackle them. Mythos is viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems, prompting a series of warnings from regulators and policymakers gathered at last week's International Monetary Fund spring meeting in Washington. "It's certainly not something that's causing panic or setting off any alarm bells on our end right now, but it's definitely something we need to keep in mind in our day-to-day risk management -- and that's exactly what we're doing," Sewing, who is chief executive of Germany's biggest bank, told journalists. "The banks are prepared for this and have their own responses. So this is something we have to live with, and of course everyone is trying to gain access, but I also think it's right that access is limited for now," he said, adding that a German banking association would meet to discuss the issue on Monday. Anthropic has so far restricted access to the model to partners in its Project Glasswing initiative and about 40 additional organisations that build or maintain critical software infrastructure. JPMorgan, which is part of Glasswing, is the only bank Anthropic has publicly said has access. Multiple senior banking and regulatory sources in Europe told Reuters they were not aware of any European financial institution with access to Mythos yet. 
Anthropic did not immediately respond to a request from Reuters for comment on whether and when it would grant banks access.
"SUBSTANTIALLY MORE CAPABLE AT CYBER OFFENCE"
The British government sent an open letter to company leaders on April 15 warning that testing by its AI Security Institute (AISI) had shown Mythos to be "substantially more capable at cyber offence than any model we have previously assessed." Some Asian regulators said on Monday they were monitoring the development. South Korea's Financial Supervisory Service (FSS) said it held a meeting with information security officials from financial firms last week to review Mythos-related risks. Mythos was a key topic on the sidelines of the IMF meetings last week. European regulators are not yet overly concerned and for now are assessing it through their existing cyber resilience process, two European supervisory sources told Reuters. One banking source said that the ECB and other regulators have been in contact with European banks to assess their preparedness for new cybersecurity risks. Supervisors have asked about banks' awareness of the threat and their ability to respond, the source said. The vast capabilities of Mythos to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities, experts say, prompting greater scrutiny from regulators globally. Barclays CEO C. S. Venkatakrishnan said on Friday in Washington that Mythos was a serious threat to the global banking system and likely to be followed by similar, more powerful cyberthreats. (Reporting by Tom Sims, Jesus Aguado; Additional reporting by Francesco Canepa and Balazs Koranyi in Frankfurt. Writing by Tommy Reggiori Wilkes; Editing by Thomas Seythal, Elisa Martinuzzi and Alexander Smith )

Anthropic
London South East3d ago
Read update
Deutsche Bank CEO says 'everyone' trying to access Anthropic's Mythos as global regulators review risks

Stablecorp / QCAD Digital Trust: QCAD Digital Trust Announces QCAD Listing on Kraken, Bringing Canadian Dollars Onchain at Global Scale

Toronto, Ontario--(Newsfile Corp. - April 20, 2026) - Stablecorp Digital Currencies Inc. ("Stablecorp"), a Canadian digital asset infrastructure company acting on behalf of QCAD Digital Trust (the "Trust") in its capacity as servicer of the Trust, today announced the listing of the QCAD digital token ("QCAD") on the Kraken crypto asset trading platform. Kraken is a registered Restricted Dealer in Canada and one of the longest-standing crypto asset trading platforms globally. Under this listing, QCAD will be available for trading on the Kraken crypto asset trading platform, facilitating the settlement of digital asset transactions in a Canadian Dollar-denominated instrument. Empowering Canadians in a Global Digital Economy "Canadians are leading digital asset adoption because they want more financial choice. Listing QCAD on Kraken helps meet that demand with a compliant, Canadian Dollar-denominated stablecoin - and it makes it easier for global participants to engage with Canada's digital economy. We are excited for this practical step toward bringing the traditional financial system onchain," said Mark Greenberg, Global Head of Consumer at Kraken. For Canadians, QCAD offers a compliant way to stay anchored to the Canadian dollar while gaining access to global liquidity, 24/7 markets, and new financial use cases that do not exist in traditional systems. "Connecting Canadians to the global digital economy is core to our mission; this listing serves as a significant vote of confidence in QCAD and Canada as a whole," said Kesem Frank, CEO of Stablecorp. "Meeting Kraken's rigorous listing standards further validates QCAD as an important crypto instrument, capable of supporting global flows; we are excited to expand the 'Global Gateway' for Canadians, allowing them to seamlessly benefit from the opportunities of global digital markets." 
Bringing Canadian Dollars Onchain

The availability of QCAD on Kraken unlocks new opportunities for institutional clients and individual Canadian customers alike:

Seamless Global Participation: Canadians can hold and move Canadian Dollar-denominated value on-chain, enabling easier access to global crypto markets without depending on legacy payment rails.

Stronger Canadian Dollar Trading Experience: Deeper liquidity and improved Canadian Dollar-denominated trading pairs help deliver tighter spreads and better price discovery for major crypto assets like BTC, ETH, and USDC.

24/7 Access: Unlike traditional foreign exchange markets, crypto markets do not close. With QCAD trading pairs on Kraken, customers can manage exposure, deploy capital and respond to market conditions in real time.

About QCAD Digital Trust and Stablecorp

The Trust is an Ontario trust that holds the reserve assets on behalf of holders of QCAD. Stablecorp is one of Canada's leading digital asset infrastructure companies, focused on building professional-grade blockchain solutions. In partnership with industry leaders, Stablecorp creates refined, scalable and compliant products, such as QCAD, that serve as the foundation for the next generation of financial services. Further information about QCAD, including the reserve assets and the terms and conditions associated with the QCAD program, can be found on Stablecorp's website (www.stablecorp.ca) and under the Trust's profile on SEDAR+ at www.sedarplus.ca.

About Kraken

Founded in 2011, Kraken is one of the world's longest-standing crypto platforms. Kraken clients trade more than 600 digital assets, traditional assets such as U.S. futures and U.S.-listed stocks and ETFs, and six national currencies: GBP, EUR, USD, CAD, CHF, and AUD. Trusted by millions of institutions, professional traders and consumers, Kraken is one of the fastest, most liquid and performant trading platforms available.
Kraken's suite of products and services includes the Kraken App, Kraken Pro, the Krak App, Kraken Institutional, Kraken's onchain offerings and the NinjaTrader retail trading platform. Across these offerings, clients can buy, sell, stake, earn rewards, send and receive assets, custody holdings, and access advanced trading, derivatives, and portfolio management tools. Kraken has set the industry standard for transparency and client trust, and it was the first crypto platform to conduct Proof of Reserves. It complies with regulations and laws applicable to its business, while actively protecting client privacy and maintaining the highest security standards. For more information about Kraken, please visit www.kraken.com.

Media Contact

Kesem Frank, CEO of Stablecorp, [email protected], 647-931-4922

Forward-Looking Statements

This news release includes certain forward-looking statements as well as Stablecorp's objectives, strategies, beliefs and intentions. Forward-looking statements are frequently identified by such words as "may", "will", "plan", "expect", "anticipate", "estimate", "intend" and similar words referring to future events and results. Forward-looking statements are based on the current opinions and expectations of management of Stablecorp. All forward-looking information is inherently uncertain and subject to a variety of assumptions, risks and uncertainties, as described in more detail in our securities filings available at www.sedarplus.ca. Actual events or results may differ materially from those projected in the forward-looking statements and we caution against placing undue reliance thereon. We assume no obligation to revise or update these forward-looking statements except as required by applicable law. To view the source version of this press release, please visit https://www.newsfilecorp.com/release/293401 Source: Stablecorp / QCAD Digital Trust © 2026 Newsfile Corp.


The Vercel Breach: The Steps To Take Now to Protect Your Organization

On April 19, 2026, Vercel -- the cloud platform used by hundreds of thousands of organizations to deploy and host web applications -- disclosed a security breach of its internal systems. The attack began in Context.ai, a small AI productivity tool used by a Vercel employee. The tool was compromised, and the attacker used it as a stepping stone into Vercel's internal systems. The threat actor -- believed to be ShinyHunters, a known cybercriminal group -- is selling the stolen data for $2 million on underground forums.

Vercel stores the operational secrets of every application it deploys. If your organization uses Vercel, there is a significant chance that credentials stored in your Vercel environment were exposed. These credentials typically include cloud keys, database credentials, signing secrets, and source control tokens. Critically, this is not just a Vercel problem. If any of these credentials were stolen, an attacker could use them to access your systems -- completely independently of Vercel. A stolen AWS key, for example, works against your AWS account regardless of how it was obtained.

The larger trend is clear: AI productivity tools are the new supply chain attack vector. These tools require broad access to email, documents, and identity systems to function -- and most organizations have not established governance programs to track or control those permissions. A compromise at a small AI vendor can cascade into breaches at many enterprises.

The Vercel incident highlights a high-impact risk pattern: organizations increasingly rely on platforms like Vercel to orchestrate the entire software delivery lifecycle -- builds, CI/CD pipelines, preview environments, and production deployments. When employees connect third-party AI tools into corporate identity and productivity suites, they extend the trust boundary to that vendor. If that AI vendor (or its OAuth tokens) is compromised, the attacker can use the stolen access to pivot into the very systems that control how code is built and shipped.
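One practical way to shrink that trust boundary is to audit the OAuth scopes granted to third-party apps against a least-privilege allowlist. The sketch below is illustrative only: the scope names, app names, and grant records are invented, and in practice the grant data would come from your identity provider's audit API.

```python
# Illustrative sketch: flag third-party OAuth grants whose scopes exceed
# a least-privilege allowlist. All scope and app names are hypothetical.

# Scopes we consider acceptable for a small AI productivity tool.
ALLOWED_SCOPES = {"email.read", "calendar.read"}

def overbroad_grants(grants):
    """Return (app, risky_scopes) pairs for grants outside the allowlist."""
    findings = []
    for grant in grants:
        risky = set(grant["scopes"]) - ALLOWED_SCOPES
        if risky:
            findings.append((grant["app"], sorted(risky)))
    return findings

grants = [
    {"app": "ai-notetaker", "scopes": ["email.read"]},
    {"app": "ai-assistant", "scopes": ["email.read", "mail.send", "files.readwrite"]},
]
print(overbroad_grants(grants))
# -> [('ai-assistant', ['files.readwrite', 'mail.send'])]
```

Running such a check on a schedule turns "monitor OAuth grants" from a policy statement into a concrete, reviewable report.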
That matters because a compromise of a deployment platform is rarely contained. From Vercel (or any similar orchestration layer), an attacker may be able to read or modify build settings, add malicious build steps, trigger deployments, and extract environment variables -- which commonly include cloud keys, database credentials, signing secrets, and source control tokens. In other words, a third-party AI tool compromise can become an end-to-end supply-chain attack: from OAuth access, to CI/CD control, to production infrastructure and data.

The takeaway: treat AI app integrations as potential entry points to your delivery pipeline, enforce least-privilege scopes, monitor OAuth grants continuously, and be ready to rotate the secrets your CI/CD platform can access.

Varonis monitors GitHub, AWS, Azure, GCP, and other platforms in real time. When a stolen credential is used anomalously -- from an unexpected location, accessing unusual data -- Varonis alerts immediately and shows exactly what data was accessed, enabling rapid response and accurate breach scoping. In addition, our MDDR specialists are monitoring your environments 24/7 and will proactively alert if something suspicious happens.
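The detection idea described above -- alerting when a credential is used from a location never seen before -- can be sketched in a few lines. This is a minimal, generic illustration of the technique, not Varonis's implementation; credential IDs and locations are invented, and a production system would draw them from cloud audit logs (e.g. CloudTrail) and use richer baselines than location alone.

```python
# Minimal sketch of anomalous-credential-use detection: alert when a
# credential is used from a location absent from its usage history.
# Hypothetical IDs and locations; real events would come from audit logs.
from collections import defaultdict

class CredentialMonitor:
    def __init__(self):
        self.seen = defaultdict(set)  # credential id -> known locations

    def observe(self, cred_id, location):
        """Record a use; return True if the location is anomalous (new)."""
        history = self.seen[cred_id]
        anomalous = bool(history) and location not in history
        history.add(location)
        return anomalous

monitor = CredentialMonitor()
monitor.observe("aws-key-1", "us-east-1")          # baseline use
monitor.observe("aws-key-1", "us-east-1")          # known location
print(monitor.observe("aws-key-1", "unseen-vpn"))  # True: never-seen location
```

The first sighting of each credential seeds the baseline rather than alerting, which keeps the sketch quiet on fresh keys; a real deployment would add a warm-up window and score multiple signals (location, data accessed, time of day) together.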
