News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic Settlement Hearing Comes into Focus

With the May 14 Bartz v. Anthropic settlement fairness hearing drawing closer, both the Authors Guild and Authors Alliance have issued updates on where the $1.5 billion copyright infringement agreement stands. On Friday, the Guild took a close look at how the numbers break down. According to papers filed by attorneys for the authors, 440,490 of the eligible works, or 91.3%, had been claimed by last month's deadline, compared to what the Guild says is a typical 10% rate in most class action lawsuits. That 91% rate is a huge increase over the 54% claim rate attorneys for the class reported on March 19. The Guild noted that with the higher claim rate, the payout per work will be closer to the $3,000 per work estimated in the lawsuit than to the $4,876 payout that was based on the number of works claimed in March.

The update also noted that many parties besides authors and publishers will receive money from the $1.5 billion fund, with attorneys' fees and administrative costs at the top of the list. While the exact amount of the expenses will be determined by the court, the Guild noted that class counsel has requested 12.5% of the settlement fund ($187,500,000) in attorneys' fees plus $2,779,950.26 in reimbursement of litigation expenses. "While $187.5 million is a fairly high number," the Guild wrote, "12.5% of the pot is an uncommonly low share of attorney's fees in class actions, where fees range around 30%." When all the costs are added together, they come to "roughly $208.6 million, leaving a net settlement fund of roughly $1.29 billion for the class distributions," which translates to "an estimated base payout of approximately $2,931 per work," the Guild explained, adding that when interest on the money Anthropic has already deposited is included, the payout "will likely be slightly higher." According to the settlement agreement, the per-work payout will be split among rights holders: authors, publishers, and any co-authors.

The Guild said it is unclear at the moment when payments will begin to be issued, noting that the timeline depends on when the judge grants final approval to the settlement and on when any appeals are resolved. The Guild estimated that the earliest payments could begin going out in late fall.

The Authors Alliance update focused on the various objections that have been raised against the settlement and that are likely to come up at the hearing. The objections were unsealed following a motion filed by professor Lea Victoria Bishop. Among them are claims that the distribution plan systematically favors publishers over authors; that the class notice was "misleading/coercive," since statutory damages can technically run up to $150,000 per infringed work, which would make the settlement amount per work inadequate; and that the settlement sets a "dangerous precedent" by permitting "a multi-billion dollar AI company to 'buy' its way out of massive piracy for a 'discounted' rate."

Judge Martínez-Olguín, who took over the case following the retirement of Judge William Alsup, will oversee the May 14 hearing, set for 2 p.m. in the San Francisco Federal Courthouse. A Zoom link will be available for those who cannot make the trip to San Francisco.
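The Guild's arithmetic can be reproduced directly from the figures above (administrative costs are the unstated residual inside the roughly $208.6 million all-in total):

```python
# Rough reconstruction of the Guild's settlement math, in dollars.
fund = 1_500_000_000
fees = 0.125 * fund                # requested attorneys' fees: 12.5% of the pot
litigation = 2_779_950.26          # requested litigation expense reimbursement
total_costs = 208_600_000          # Guild's rough figure incl. admin costs
net_fund = fund - total_costs      # ~$1.29B left for class distributions
claimed_works = 440_490            # works claimed by the deadline (91.3%)

print(int(fees))                   # 187500000
print(int(net_fund / claimed_works))  # 2931, the Guild's ~$2,931 base payout
```

Interest accrued on Anthropic's deposit would push the per-work figure slightly above this base, as the Guild notes.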

Anthropic
PublishersWeekly.com3d ago
Read update
Anthropic Settlement Hearing Comes into Focus

App Host Vercel Confirms Security Incident, Says Customer Data Was Stolen Via Breach At Context AI

Cloud app hosting giant Vercel this week said hackers had breached its internal systems and accessed customer data. Hackers have claimed they stole sensitive customer credentials from Vercel's systems and are selling the data online.

In a statement on Sunday, Vercel said the breach originated from a different software maker, Context AI. One of Vercel's employees downloaded an app made by Context AI and connected it to their corporate account, which is hosted by Google. The hackers used that connection (known as OAuth) to take over the Vercel employee's Google account and gain access to some of Vercel's internal systems, including credentials that were not encrypted. Vercel says its Next.js and Turbopack projects were not affected by the breach. Both open-source projects are widely used by web and app developers.

Vercel said it has contacted customers whose app data and keys were compromised. In a post on X, Vercel chief executive Guillermo Rauch advised customers to rotate any keys and credentials in their app deployments that are marked as "non-sensitive."

It's not clear who is behind the breach at Vercel or Context AI, or if they are the same hacker. The threat actor selling the data claimed to be representing the ShinyHunters hacking group in their listing on a cybercriminal forum. The post, seen by TechCrunch, claimed the hackers were selling access to customer API keys, source code, and database data stolen from Vercel. The ShinyHunters hacker group, known for breaching cloud-based and database companies, told cybersecurity news site BleepingComputer that they are not involved in this incident.

While details of the hack are still emerging, this data breach is the latest in a string of "supply chain" hacks in recent months that have targeted software developers whose code is widely used across the web. By compromising software that is widely used by companies and supports web infrastructure, hackers can steal credentials from a wide range of targets at once and gain further access to large amounts of data stored by other cloud giants.

Vercel said little else about the attack, apart from that it was investigating the incident and had sought answers from Context AI. Vercel said the hack may affect "hundreds of users across many organizations," and not just its own systems, warning of possible downstream breaches spanning the tech industry.

Context AI, which builds evaluations and analytics for AI models, confirmed on its website that it had a breach in March involving its Context AI Office Suite consumer app. The app allows users to automate actions and workflows across multiple third-party applications by way of an unnamed third-party service. Context AI said it notified one customer of the breach at the time, but based on Vercel's incident, it now believes the incident is likely broader than first thought. Context AI said the hackers "likely compromised OAuth tokens for some of our consumer users."

Henry Scott-Green, who founded Context AI and now works at OpenAI following a deal to acqui-hire the company's staff, did not respond to a request for comment or to questions about the breach. It's unclear why Context AI did not disclose the breach at the time, or whether the company received any demands from the hacker, such as a ransom. OpenAI did not immediately respond to a request for comment. Vercel also did not respond to questions about the incident, such as how many of its customers could be affected.
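Rauch's key-rotation advice boils down to a simple pattern: replace every potentially exposed credential with a fresh high-entropy value, then revoke the old one upstream. A minimal sketch using only Python's standard library (the `store` dict and secret name are hypothetical stand-ins, not Vercel's actual API):

```python
import secrets

def rotate_secret(store: dict, name: str) -> str:
    """Replace a stored secret with a fresh random value.

    `store` stands in for wherever deployment secrets live (env
    config, a secrets manager, etc.). Generating a new value alone
    is not enough: the old one must also be revoked at the provider,
    since the attacker may already hold a copy.
    """
    new_value = secrets.token_urlsafe(32)  # 32 random bytes, base64url-encoded
    store[name] = new_value
    return new_value

config = {"DEPLOY_API_KEY": "old-possibly-leaked-value"}
rotate_secret(config, "DEPLOY_API_KEY")
```

The revocation step matters most in an OAuth-style compromise like this one, where the stolen tokens keep working until they are explicitly invalidated.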

Vercel
Beritaja3d ago
Read update
App Host Vercel Confirms Security Incident, Says Customer Data Was Stolen Via Breach At Context AI

Claude AI: Anthropic launches a design tool aimed at rivals and teams

Anthropic has introduced Claude AI into a new creative lane with Claude Design, a research preview that turns prompts, files, and codebases into polished visual work. The launch is notable not only because it expands what the model can generate, but because it arrives as design teams are under pressure to move faster without losing consistency. Anthropic says the tool is rolling out gradually to paid subscribers, and that it is built to help both experienced designers and non-designers produce usable first drafts, prototypes, and presentations.

Claude Design enters a crowded visual workflow

Claude Design is being positioned as a collaborative system for slides, one-pagers, prototypes, designs, and more. Anthropic says it is powered by Claude Opus 4.7 and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. The company says access is included with the plan and uses subscription limits, with extra usage available beyond those limits. The timing matters because the product is entering a space where design software is already deeply embedded in daily workflows. The immediate market reaction highlighted that point: shares of Figma fell after the launch, while Adobe also moved lower. The broader message is not simply that Anthropic has a new feature, but that Claude is now being pushed toward the creative software stack that many teams rely on for early product work and brand assets.

Why the design workflow could change

Anthropic says Claude Design begins by building a design system for a team after reading its codebase and design files. From there, it applies colors, typography, and components automatically so that every project stays aligned with the company's existing look and feel. The company also says users can import from text prompts, images, DOCX, PPTX, and XLSX files, or a codebase, and can use a web capture tool to pull elements directly from a website. That matters because the tool is not framed as a simple image generator. It is described as a workflow system that supports inline comments, direct text edits, adjustment knobs for spacing and color, organization-scoped sharing, and exports to Canva, PDF, PPTX, or standalone HTML files. Anthropic also says Claude can package a design into a handoff bundle for Claude Code when it is ready to move from concept to build. In practical terms, the product aims to compress stages that normally sit across multiple tools and multiple people.

There is also a built-in constraint: usage limits. Anthropic says Claude Design comes with weekly limits for paid plans, and once those are reached, users move into pay-as-you-go token costs. That detail is important because it suggests the experience may be powerful, but not frictionless. For teams testing the tool, budget discipline could shape adoption as much as capability. The presence of limits also means the value proposition will be judged not only by output quality, but by how efficiently Claude can sustain iterative work.

What experts and company leaders are signaling

Anthropic says the product is aimed at giving "designers room to explore widely and everyone else a way to produce visual work." The company adds that its latest Opus model brings stronger performance across coding, agents, vision, and multi-step tasks, with greater thoroughness and consistency on work that matters most. From the company's perspective, the launch is less about replacing design judgment than widening access to first drafts and prototype exploration. That is reinforced by the feature set: users can approve or revise colors, fonts, and layout decisions, then continue editing after seeing the final result. Anthropic also says users can ask Claude to create sliders and options to tweak designs in real time, which could reduce the back-and-forth that often slows early-stage work. The broader implication is that design may become more conversational and less linear. Instead of moving from a brief to a static mockup, teams can iterate inside one system and preserve context as they go. If that workflow catches on, the competitive pressure will not stop at design software. It could extend to presentation tools, web prototyping, and the handoff between concept and code.

Regional and global impact beyond one product launch

The near-term impact is likely to be felt most by product teams, marketers, founders, and organizations that need visual output without a large design bench. Anthropic specifically says the tool is useful for realistic prototypes, product mockups, pitch decks, and marketing collateral, as well as more experimental design work that can be time-consuming in traditional workflows. Because Claude Design supports organization-scoped sharing and enterprise controls, the product also has implications for internal collaboration at larger companies. Anthropic says Enterprise organizations have the feature off by default, with admins able to enable it in Organization settings. That suggests a cautious rollout model for workplaces that want the speed benefits but need control over access and governance. For now, the launch places Claude AI squarely inside a growing race to make software creation more fluid, more visual, and more collaborative. The unanswered question is whether teams will adopt it as a primary design environment or use it as a fast starting point before moving work elsewhere. Either way, the direction is clear: the boundary between drafting ideas and producing finished-looking assets is getting thinner.
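The limits Anthropic describes imply a two-part cost shape: usage included up to a weekly cap, then pay-as-you-go tokens beyond it. A sketch of that billing shape (all numbers here are hypothetical illustrations; Anthropic has not published these rates in this article):

```python
def design_cost(tokens_used: int, weekly_limit: int, rate_per_1k: float) -> float:
    """Cost of a week of usage under a limit-then-overage model:
    tokens within the weekly plan limit are included in the
    subscription, and anything beyond is billed per 1,000 tokens.
    All parameter values are hypothetical.
    """
    overage = max(0, tokens_used - weekly_limit)
    return overage / 1000 * rate_per_1k

# Within the plan limit: no extra cost.
within = design_cost(800_000, weekly_limit=1_000_000, rate_per_1k=0.02)
# 500k tokens over the limit at a hypothetical $0.02 per 1k tokens.
over = design_cost(1_500_000, weekly_limit=1_000_000, rate_per_1k=0.02)
```

Under a shape like this, iteration-heavy design work is what pushes teams into the overage regime, which is why the article flags budget discipline as an adoption factor.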

Anthropic
El-Balad.com3d ago
Read update
Claude AI: Anthropic launches a design tool aimed at rivals and teams

Piraeus and Accenture Team Up to Launch AI Hub in the Greek Banking Sector Powered by Anthropic

Piraeus (ATHEX: TPEIR) and Accenture (NYSE: ACN) announced a significant expansion of their long-standing collaboration with the launch of a dedicated AI Hub, supported by Anthropic, designed to accelerate Piraeus' enterprise-wide AI transformation and set a benchmark for AI-driven banking in Greece.

The AI Hub will act as a central engine for designing, developing and scaling advanced AI capabilities across Piraeus' full value chain. By bringing together Accenture's industry and AI expertise, including its Data & AI Center of Excellence in Athens, with Piraeus' strategic AI roadmap, the Hub will drive the reinvention of banking processes across operations, customer experience, risk, and compliance, and modernize the technology backbone.

In parallel, the Hub will strengthen Piraeus' long-term AI capabilities by attracting, developing and upskilling specialized talent through targeted recruitment and structured learning programs, including Udacity, Accenture's AI-native learning and training platform. This approach supports the Bank's ambition to embed AI skills and ways of working deeply across the organization.

A key focus of the collaboration will be the development of secure, responsible and human-centric AI solutions, designed to autonomously support decision making, streamline complex processes and enhance both customer and employee experiences. Piraeus and Accenture, with its newly formed Anthropic Business Group, will leverage the power of Anthropic's AI models and platforms and its deep grounding in ethical AI principles to drive innovation in a responsible manner, ensuring that advanced AI solutions are aligned with the bank's values and regulatory requirements. This approach will support the development of secure, trustworthy, and scalable AI applications that elevate human performance and the quality of banking services.

"The AI Hub represents a strategic inflection point for Piraeus," said Harry Margaritis, Group Chief Operating Officer, Piraeus. "We are advancing from individual AI deployments to a unified, enterprise-level capability that is deeply embedded in how the Bank operates. Our collaboration with Accenture, together with the integration of Anthropic's AI technology, enables us to scale advanced AI responsibly, anchored in strong governance, transparency and human control. This initiative empowers our people, reinforces trust with our customers and regulators, and builds a resilient, future-ready foundation for banking in Greece."

"This collaboration reflects the deep and longstanding relationship between Piraeus and Accenture, built on trust, value creation and shared ambition," stated George Pallioudis, Financial Services Lead at Accenture. "It's a testament to Piraeus' leadership commitment to AI adoption and a recognition of Accenture's leading role in AI-powered reinvention at scale."

Thomas Remy, Head of Southern Europe, Middle East & Africa for Anthropic, commented: "AI is transforming how banks operate, and it's vital that modern AI systems meet strong governance and regulatory requirements. Claude is built with the safety, reliability and transparency that highly regulated industries like banking demand. In partnering with Anthropic to power a new AI hub for Greek banking, Piraeus and Accenture have underscored our shared commitment to safe, responsible AI deployment."

The AI Hub builds on Piraeus' successful collaboration with Accenture to adopt a cloud-first operating model, which has already accelerated digital service delivery, enhanced security and compliance, improved operational efficiency and supported the Bank's broader sustainability and modernization objectives.

About Piraeus

Piraeus, established in 1916, is the leading financial institution in Greece in terms of market shares in loans, deposits, and branch presence. The Bank provides a comprehensive range of financial products and services, with recognized leadership in SME banking, retail banking, digital banking, and capital markets. Headquartered in Athens and listed on the Athens Stock Exchange, Piraeus employs approximately 8.1 thousand professionals and operates a nationwide network of 368 branches. As of 31 December 2025, Piraeus Group reported total assets of €91 billion. Piraeus is committed to supporting the country's economic development and delivering long-term value for customers, shareholders, and society. Through disciplined execution, innovation, and sustainable banking principles, Piraeus aims to drive growth and resilience across its operations.

About Accenture

Accenture is a leading solutions and services company that helps the world's leading enterprises reinvent by building their digital core and unleashing the power of AI to create value at speed across the enterprise, bringing together the talent of our approximately 786,000 people, our proprietary assets and platforms, and deep ecosystem relationships. Our strategy is to be the reinvention partner of choice for our clients and to be the most client-focused, AI-enabled, great place to work in the world. Through our Reinvention Services we bring together our capabilities across strategy, consulting, technology, operations, Song and Industry X with our deep industry expertise to create and deliver solutions and services for our clients. Our purpose is to deliver on the promise of technology and human ingenuity, and we measure our success by the 360° value we create for all our stakeholders. Visit us at accenture.com.

About Anthropic

Anthropic is an AI research and development company that creates reliable, interpretable, and steerable AI systems. Anthropic's flagship product is Claude, a large language model trusted by millions of users worldwide. Learn more about Anthropic and Claude at anthropic.com.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements in this news release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Words such as "may," "will," "should," "likely," "anticipates," "aspires," "expects," "intends," "plans," "projects," "believes," "estimates," "positioned," "outlook," "goal," "target" and similar expressions are used to identify these forward-looking statements. These statements are not guarantees of future performance nor promises that goals or targets will be met, and involve a number of risks, uncertainties and other factors that are difficult to predict and could cause actual results to differ materially from those expressed or implied. These risks include, without limitation, that the partnership might not achieve its anticipated benefits, and risks and uncertainties related to the development and use of AI, including advanced AI, that could harm our business, damage our reputation or give rise to legal or regulatory action, as well as the risks, uncertainties and other factors discussed under the "Risk Factors" heading in Accenture plc's most recent Annual Report on Form 10-K and other documents filed with or furnished to the Securities and Exchange Commission. Statements in this news release speak only as of the date they were made, and Accenture undertakes no duty to update any forward-looking statements made in this news release or to conform such statements to actual results or changes in Accenture's expectations.

Copyright 2026 Accenture. All rights reserved. Accenture and its logo are registered trademarks of Accenture.
View source version on businesswire.com: https://www.businesswire.com/news/home/20260420183759/en/

Contacts:
Matthaios Sarantos, Accenture, +306977581264, [email protected]
George Papaioannou, Piraeus, +306944626825, [email protected]
Lewis Maconachy, Anthropic, [email protected]

© 2026 Business Wire

Anthropic
FinanzNachrichten.de3d ago
Read update
Piraeus and Accenture Team Up to Launch AI Hub in the Greek Banking Sector Powered by Anthropic

Piraeus and Accenture Team Up to Launch ΑΙ Hub in the Greek Banking Sector Powered by Anthropic

ATHENS, Greece-(BUSINESS WIRE)-Piraeus (ATHEX: TPEIR) and Accenture (NYSE: ACN) announced a significant expansion of their long-standing collaboration with the launch of a dedicated AI Hub - supported by Anthropic - designed to accelerate Piraeus' enterprise-wide AI transformation and set a benchmark for AI-driven banking in Greece. The AI Hub will act as a central engine for designing, developing and scaling advanced AI capabilities across Piraeus' full value chain. By bringing together Acce

Anthropic
Weekly Voice3d ago
Read update
Piraeus and Accenture Team Up to Launch ΑΙ Hub in the Greek Banking Sector Powered by Anthropic

Nvidia Chipmaking Rival and AI Startup Cerebras Systems Files for IPO | The Motley Fool

Cerebras isn't generating an operating profit, and investors should review its share structure and concentration risk before considering investing in its IPO. Investors would be forgiven if they've never heard of Cerebras Systems. The start-up believes that artificial intelligence (AI) workloads "require purpose-built silicon," and further suggests that "modifying existing compute architectures [will] not realize AI's potential." Cerebras has created a solution it believes will displace Nvidia's graphics processing units (GPUs) as the dominant force in AI and has filed an S-1 with the Securities and Exchange Commission (SEC) to go public as early as next month.

Cerebras originally planned its initial public offering (IPO) last year but shelved its plans after raising $1 billion in private markets. The company plans to go public on the Nasdaq exchange, using the ticker "CBRS." Cerebras hasn't yet said how many shares it plans to issue, the price of those shares, or how much it plans to raise. The company plans to go public sometime in mid-May.

Cerebras created the Wafer-Scale Engine (WSE) -- a massive semiconductor that the company says is 58 times larger than Nvidia's B200 AI chip. The WSE combines 900,000 compute cores and boasts "19 times more transistors, 250 times more on-chip memory, and 2,625 times more memory bandwidth" than Nvidia's B200. For context, multiple Nvidia AI chips are linked together to work in unison, so this represents a novel approach. Cerebras says it solves the inherent latency problem that plagues AI processing. The company says that "communications is thousands of times faster on-chip than across chips," so by keeping all the processing on a single giant chip, it avoids the latency issue.

This solution has attracted a number of high-profile customers. Earlier this year, Cerebras inked a $20 billion, 750 megawatt deal with OpenAI. It has also entered into a multi-year deal with Amazon Web Services (AWS) to use Cerebras chips in its data centers for AI inference. Terms of the deal weren't disclosed, but it could serve as a validation of Cerebras' approach.

Perhaps the most intriguing aspect of Cerebras is its financial results. In 2025, revenue of $510 million grew 76% year over year, and the company generated net income of $238 million -- but that comes with an asterisk. The company posted an operating loss of $146 million, but benefited from $391 million in "other income" resulting from the remeasurement of a contract liability that was removed from its balance sheet. In other words, it had nothing to do with the company's operations. Without that benefit, the company's net loss would have been roughly $153 million. Cerebras reported remaining performance obligations (RPO) of $25 billion as of Dec. 31, of which the company expects to recognize 15% in 2026 and 2027, 43% in 2028 and 2029, and the remainder thereafter.

Finally, Cerebras has a complicated multiclass share structure with three classes of common stock. Class A shares carry one vote per share and will be issued to the public. Class B shares are entitled to 20 votes per share and will be held by early investors and insiders, who will retain majority voting control. Additionally, the company issued warrants to OpenAI and Amazon, allowing them to buy up to $1.27 billion in non-voting Class N shares.

We don't have all the details and won't know more until Cerebras files a revised S-1 with the SEC. Until then, investors should remember that while the company certainly has potential, there are risks as well. Cerebras hasn't yet been subjected to the glare of the public spotlight, and it isn't yet profitable. Furthermore, just two customers accounted for 86% of the company's revenue in 2025, the very definition of customer concentration risk. As Cerebras gains more converts, that risk should moderate, but it's worth noting nonetheless.
As such, investors interested in acquiring a stake should make it a small part of a well-balanced portfolio.
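The asterisk on Cerebras' net income can be checked with the figures given: stripping the one-time remeasurement gain out of the reported bottom line recovers the underlying loss the article cites.

```python
# All figures in $ millions, as reported per the article.
net_income = 238        # reported 2025 net income
other_income = 391      # one-time contract-liability remeasurement gain
operating_loss = -146   # loss from actual operations

# Strip the non-operating gain: the bottom line flips to a loss.
underlying = net_income - other_income
# underlying is -153, matching the article's "roughly $153 million" net loss;
# the gap versus the -146 operating loss reflects interest, taxes and other items.
```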

Cerebras
The Motley Fool3d ago
Read update
Nvidia Chipmaking Rival and AI Startup Cerebras Systems Files for IPO | The Motley Fool

Nvidia Chipmaking Rival and AI Startup Cerebras Systems Files for IPO

Cerebras isn't generating an operating profit, and investors should review its share structure and concentration risk before considering investing in its IPO. Investors would be forgiven if they've never heard of Cerebras Systems. The start-up believes that artificial intelligence (AI) workloads "require purpose-built silicon," and further suggests that "modifying existing compute architectures [will] not realize AI's potential." Cerebras has created a solution it believes will displace Nvidia's graphics processing units (GPUs) as the dominant force in AI and has filed an S-1 with the Securities and Exchange Commission (SEC) to go public as early as next month. Cerebras originally planned its initial public offering (IPO) last year but shelved its plans after raising $1 billion in private markets. The company plans to go public on the Nasdaq exchange, using the ticker "CBRS." Cerebras hasn't yet said how many shares it plans to issue, the price of those shares, or how much it plans to raise. The company plans to go public sometime in mid-May. Cerebras created the Wafer-Scale Engine (WSE) -- a massive semiconductor that the company says is 58 times larger than Nvidia's B200 AI chip. The WSE combines 900,000 compute cores, boasts "19 times more transistors, 250 times more on-chip memory, and 2,625 times more memory bandwidth" than Nvidia's B200. For context, multiple Nvidia AI chips are linked together to work in unison, so this represents a novel approach. Cerebras says it solves the inherent latency problem that plagues AI processing. The company says that "communications is thousands of times faster on-chip than across chips," so by keeping all the processing on a single giant chip, it avoids the latency issue. 
This solution has attracted a number of high-profile customers. Earlier this year, Cerebras inked a $20 billion, 750 megawatt deal with OpenAI. It has also entered into a multi-year deal with Amazon Web Services (AWS) to use Cerebras chips in its data centers for AI inference. Terms of the deal weren't disclosed, but it could serve as a validation of Cerebras' approach. Perhaps the most intriguing aspect of Cerebras is its financial results. In 2025, revenue of $510 million grew 76% year over year, while generating net income of $238 million -- but that comes with an asterisk. The company generated an operating loss of $146 million, but benefited from $391 million in "other income," resulting from the remeasurement of a contract liability that was removed from its balance sheet. In other words, it had nothing to do with the company's operations. Without that benefit, the company's net loss would have been roughly $153 million. Cerebras reported remaining performance obligations (RPO) of $25 billion as of Dec. 31, of which the company expects to recognize 15% in 2026 and 2027, 43% in 2028 and 2029, and the remainder thereafter. Finally, Cerebras has a complicated multiclass share structure with three classes of common stock. Class A shares carry one vote per share and will be issued to the public. Class B shares are entitled to 20 votes per share and will be held by early investors and insiders, who will retain majority voting control. Additionally, the company issued warrants to OpenAI and Amazon, allowing them to buy up to $1.27 billion in non-voting Class N shares. We don't have all the details and won't know more until Cerebras files a revised S-1 with the SEC. Until then, investors should remember that while the company certainly has potential, there are risks as well. Cerebras hasn't yet been subjected to the glare of the public spotlight, and it isn't yet profitable. 
Furthermore, just two customers accounted for 86% of the company's revenue in 2025, the very definition of customer concentration risk. As Cerebras gains more converts, that risk should moderate, but it's worth noting nonetheless. As such, investors interested in acquiring a stake should make it a small part of a well-balanced portfolio. Danny Vena, CPA has positions in Amazon and Nvidia. The Motley Fool has positions in and recommends Amazon and Nvidia. The Motley Fool has a disclosure policy.

Cerebras
NASDAQ Stock Market3d ago
Read update
Nvidia Chipmaking Rival and AI Startup Cerebras Systems Files for IPO

Piraeus and Accenture Team Up to Launch ΑΙ Hub in the Greek Banking Sector Powered by Anthropic

Piraeus (ATHEX: TPEIR) and Accenture (NYSE: ACN) announced a significant expansion of their long-standing collaboration with the launch of a dedicated AI Hub - supported by Anthropic - designed to accelerate Piraeus' enterprise-wide AI transformation and set a benchmark for AI-driven banking in Greece. The AI Hub will act as a central engine for designing, developing and scaling advanced AI capabilities across Piraeus' full value chain. By bringing together Accenture's industry and AI expertise, including its Data & AI Center of Excellence in Athens, with Piraeus' strategic AI roadmap, the Hub will drive the reinvention of banking processes across operations, customer experience, risk, and compliance, and modernize the technology backbone. In parallel, the Hub will strengthen Piraeus' long-term AI capabilities by attracting, developing and upskilling specialized talent through targeted recruitment and structured learning programs, including Udacity, Accenture's AI-native learning and training platform. This approach supports the Bank's ambition to embed AI skills and ways of working deeply across the organization. A key focus of the collaboration will be the development of secure, responsible and human-centric AI solutions, designed to autonomously support decision making, streamline complex processes and enhance both customer and employee experiences. 
Piraeus and Accenture, with its newly-formed Anthropic Business Group, will leverage the power of Anthropic AI models and platforms and its deep grounding in ethical AI principles to drive innovation in a responsible manner, ensuring that advanced AI solutions are aligned with the bank's values and regulatory requirements. This approach will support the development of secure, trustworthy, and scalable AI applications, to elevate human performance and the quality of banking services. "The AI Hub represents a strategic inflection point for Piraeus," said Harry Margaritis, Group Chief Operating Officer, Piraeus. "We are advancing from individual AI deployments to a unified, enterprise-level capability that is deeply embedded in how the Bank operates. Our collaboration with Accenture, together with the integration of Anthropic's AI technology, enables us to scale advanced AI responsibly, anchored in strong governance, transparency and human control. This initiative empowers our people, reinforces trust with our customers and regulators, and builds a resilient, future-ready foundation for banking in Greece."

Anthropic
wallstreet:online3d ago
Read update
Piraeus and Accenture Team Up to Launch AI Hub in the Greek Banking Sector Powered by Anthropic

US NSA Reportedly Using Anthropic's Claude Mythos Despite Ongoing Dispute

The US National Security Agency is reportedly using Anthropic's new Mythos Preview model, despite the company's ongoing dispute with the Pentagon, according to a report by Axios citing two people familiar with the matter. Anthropic introduced Mythos Preview earlier this month and described it as a general-purpose language model with strong capabilities in computer security tasks. In February, President Donald Trump reportedly ordered federal agencies to stop using Anthropic services after contract discussions broke down over safeguards related to military use. The disagreement created a months-long standoff between Anthropic and the Pentagon. The latest report comes days after Anthropic CEO Dario Amodei reportedly met White House chief of staff Susie Wiles and other officials. According to Reuters, the White House later described the meeting as productive and constructive. When asked by reporters, Trump said he had no idea about the meeting. Axios reported that the NSA is among roughly 40 organizations that received access to Mythos Preview. One source told the outlet that the model is also being used more broadly within the department. Anthropic remains involved in a legal dispute with the US government. The company filed lawsuits against the Department of Defense in two courts in March after the Trump administration labeled Anthropic a supply chain risk. One court granted Anthropic a preliminary injunction temporarily blocking the designation, while judges in another case denied the company's request to remove the label.

Anthropic
ProPakistani3d ago
Read update
US NSA Reportedly Using Anthropic's Claude Mythos Despite Ongoing Dispute

What do we know about Anthropic's Mythos amid rising concerns?

Anthropic earlier this month debuted Mythos, its most advanced AI model to date, equipped with sophisticated capabilities and designed for defensive cybersecurity tasks. Mythos' vast capabilities have sparked fears about the threat to traditional software security after the AI startup said the preview had uncovered "thousands" of major vulnerabilities in "every major operating system and web browser." Anthropic has rolled out Claude Mythos Preview through a controlled initiative called "Project Glasswing," granting access to tech majors including Amazon, Microsoft, Nvidia and Apple. The company also extended access to a group of more than 40 additional organizations that build or maintain critical software infrastructure. Experts warned that the model can identify and exploit previously unknown vulnerabilities faster than companies can repair them. Its advanced coding and autonomous capabilities could dramatically accelerate sophisticated cyberattacks, particularly in sectors such as banking that rely on complex, interconnected and often decades-old technology systems, they have said. While debuting Mythos, Anthropic said the model's ability to find software flaws at scale could, if misused, pose serious risks to economies, public safety and national security. U.S. software stocks tumbled on April 9 after the Mythos launch on April 7 reignited fears that advances in AI could disrupt traditional firms. The White House has held discussions with Anthropic CEO Dario Amodei about Mythos, with officials saying they talked about collaboration, cybersecurity and balancing AI innovation with safety. The talks were held despite the Pentagon slapping a formal supply-chain risk designation on Anthropic. The U.S. government is planning to make a version of Mythos available to major federal agencies, Bloomberg News has reported. Reuters reported that U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell held a meeting with CEOs of major U.S. 
banks to brief them on the potential risks from the model. The model also raised alarm bells in Britain, with authorities holding talks with major banks and cybersecurity officials to assess possible risks. Banks are in close contact with their European regulators regarding Mythos, Christian Sewing, president of the German banking association and CEO of Deutsche Bank, said.

Anthropic
Pulse24.com 3d ago
Read update
What do we know about Anthropic's Mythos amid rising concerns?

Amazon (AMZN) Stock: Anthropic's Explosive Growth Could Fuel AWS Ahead of Earnings - Blockonomi

Analysts are growing increasingly bullish on Amazon before its April 29 quarterly results, with much of the optimism stemming from Anthropic's remarkable expansion and persistent AWS strength. KeyBanc's Justin Patterson boosted his Amazon price objective to $325 from $285 over the weekend. This represents approximately a 30% gain from Monday morning's trading price. Patterson simultaneously increased his 2026 revenue projection by 1% and his 2027 forecast by 2%. Amazon stock fell 1.4% Monday, settling at $247.18 amid wider market concerns surrounding U.S.-Iran geopolitical tensions. Shares finished Friday at $250.56 -- merely 1.4% under the all-time closing peak reached in November 2025. The pre-earnings landscape appears quietly promising. Anthropic has emerged as a significant component of the AWS narrative. The company's annual revenue run rate skyrocketed from $9 billion in December 2025 to $30 billion in early April 2026 -- a trajectory that demands attention. KeyBanc calculates that AWS receives roughly 60% of Anthropic's complete spending. This represents a substantial revenue stream connected to one of the world's most rapidly expanding AI enterprises. Anthropic has maintained an active product development schedule. The company launched Claude Opus 4.7 this month -- representing its most sophisticated reasoning model to date. Additionally, it introduced Claude Mythos, a "hyper-agentic" model that Anthropic has restricted from public release citing national security considerations. Patterson indicated he anticipates a "combination of capacity gains, AI diffusion, and client expansion" propelling AWS during the first quarter. He projects 30% year-over-year expansion for AWS -- representing acceleration from 2025, when AWS delivered $128.7 billion in annual revenue, climbing 20% versus the previous year. 
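The figures quoted above allow a quick back-of-the-envelope check. This is a sketch of the implied arithmetic only, not KeyBanc's actual model:

```python
# Implied AWS revenue from Anthropic, using the article's figures.
anthropic_run_rate = 30e9     # Anthropic annual revenue run rate, April 2026
aws_share = 0.60              # KeyBanc's estimate of spend flowing to AWS
implied_aws_revenue = anthropic_run_rate * aws_share

# AWS run rate implied by Patterson's 30% growth projection.
aws_2025_revenue = 128.7e9    # AWS 2025 annual revenue per the article
projected_growth = 0.30       # projected Q1 year-over-year expansion
implied_2026_run_rate = aws_2025_revenue * (1 + projected_growth)

print(f"Implied Anthropic spend landing at AWS: ${implied_aws_revenue / 1e9:.0f}B/yr")
print(f"AWS revenue at 30% growth: ${implied_2026_run_rate / 1e9:.2f}B")
```

On those assumptions, Anthropic alone would account for roughly $18 billion of annualized AWS revenue, against a projected AWS run rate of about $167 billion.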
Impressive performance last week from Taiwan Semiconductor (TSM) provided additional validation that AI infrastructure demand remains robust entering earnings season. Wedbush's Dan Ives reinforced this perspective. "We are seeing no cracks in AI demand on the chips/hardware or software front," he stated, characterizing it as a "bright green light" for owning leading technology stocks. Amazon's Trainium chip division has already exceeded $20 billion in revenue through AWS, expanding at triple-digit percentages year over year. In his annual shareholder correspondence, Amazon CEO Andy Jassy signaled willingness to sell Trainium chips to external parties -- representing a potentially significant new revenue stream. On the retail front, KeyBanc highlighted strong grocery performance and the forthcoming debut of Amazon Leo, the company's satellite broadband offering. Amazon recently announced plans to acquire Globalstar, securing additional spectrum to facilitate Leo's deployment. Patterson does identify one concern: the continuing Iran situation has interrupted shipping through the Strait of Hormuz and elevated fuel expenses. He anticipates this will impact Amazon's second-quarter outlook. The company's 3.5% fuel surcharge imposed on third-party merchants earlier this month may partially mitigate that headwind. Amazon is scheduled to announce first-quarter financial results on April 29.

Anthropic
Blockonomi3d ago
Read update
Amazon (AMZN) Stock: Anthropic's Explosive Growth Could Fuel AWS Ahead of Earnings - Blockonomi

App host Vercel confirms security incident, says customer data was stolen via breach at Context AI | TechCrunch

Cloud app hosting giant Vercel this weekend said hackers had breached its internal systems and accessed customer data. Hackers have claimed they have stolen sensitive customer credentials from Vercel's systems and are selling the data online. In a statement on Sunday, Vercel said the breach originated from another software maker, Context AI. One of Vercel's employees downloaded an app made by Context AI and connected it to their corporate account, which is hosted by Google. The hackers used that connection (known as OAuth) to take over the Vercel employee's Google account and gain access to some of Vercel's internal systems, including credentials that were not encrypted. Vercel says its Next.js and Turbopack projects were not affected by the breach. Both open-source projects are widely used by web and app developers. Vercel said it has contacted customers whose app data and keys were compromised. In a post on X, Vercel chief executive Guillermo Rauch advised customers to rotate any keys and credentials in their app deployments that are marked as "non-sensitive." It's not clear who is behind the breach at Vercel or Context AI, or if they are the same hacker. The threat actor selling the data claimed to be representing the ShinyHunters hacking group in their listing on a cybercriminal forum. The post, seen by TechCrunch, claimed the hackers were selling access to customer API keys, source code, and database data stolen from Vercel. The ShinyHunters hacker group, known for breaching cloud-based and database companies, told cybersecurity news site Bleeping Computer that they are not involved in this incident. While details of the hack are still emerging, this security breach is the latest in a string of "supply chain" hacks in recent months that have targeted software developers whose code is widely used across the web. 
By compromising software that's widely used by companies and supports web infrastructure, hackers can steal credentials from a broad range of targets at once and gain further access to large amounts of data stored by other cloud giants. Vercel said little else about the attack, except that it was investigating the incident and had sought answers from Context AI. Vercel said the hack may affect "hundreds of users across many organizations," and not just its own system, warning of potential downstream breaches spanning the tech industry. Context AI, which builds evaluations and analytics for AI models, confirmed on its website that it had a breach in March involving its Context AI Office Suite consumer app. The app allows users to automate actions and workflows across multiple third-party applications by way of an unnamed third-party service. Context AI said it notified one customer at the time, but based on Vercel's disclosure, it now believes the breach is likely broader than first thought. Context AI said the hackers "likely compromised OAuth tokens for some of our consumer users." Henry Scott-Green, who founded Context AI and now works at OpenAI following a deal to acqui-hire the company's staff, did not respond to a request for comment or questions about the breach. It's unclear why Context AI did not disclose the breach at the time, or if the company received any demands from the hacker, such as a ransom. OpenAI did not immediately respond to a request for comment. Vercel also did not respond to questions about the incident, such as how many of its customers could be affected.

Vercel
TechCrunch3d ago
Read update
App host Vercel confirms security incident, says customer data was stolen via breach at Context AI | TechCrunch

Claude is better than Gemini for Python, but it's unusable until Anthropic fixes this one problem

The fact that Claude Sonnet 4.6 eclipses every other frontier model on the market in programming workflows is pretty well-established. If there was any lingering skepticism, the extensive benchmarks I have run recently prove its coding dominance rather decisively. When you need rapid prototyping and complex code generation, Sonnet 4.6 outperforms competitors by a wide margin. I have come to realize, however, that generative capability is only one piece of the puzzle. Sustained productivity relies far more on platform usability, a factor that can ultimately matter much more than the underlying intelligence of the model itself. Lately, Claude has developed a severe, workflow-disrupting bottleneck for those on its free tiers, and this has pushed me toward Gemini, even if it ends up taking a lot more fine-tuning. Claude's session limits have become very punishing lately, and for programming and debugging, that's a nasty problem to have. Over the last few months, I've leaned heavily on Sonnet 4.6 for a wide variety of "vibe-coding" projects. Because I'm still relatively new to Python, almost all of my practical learning comes directly from interacting with the platform. 
Whether I'm prototyping personal workflow automations or experimenting with a 2D platformer in Pygame, it helps to see the code in action and ask the model to justify its specific design and execution decisions. This iterative learning process, however, requires continuous, uninterrupted dialogue, which is exactly where the current system is breaking down. The friction started right after Anthropic introduced peak-hour throttling of its services on the platform. This meant that the available tokens were consumed much more quickly whenever demand for the service was higher. Abruptly hitting a strict rate limit in the middle of development inevitably halts the workflow. When the prompt box locks me out, the only options are to either migrate my context history to another LLM to pick up the pieces, or untangle the remaining logic myself. Neither route is without its own inconvenience. I'd rather stick with Gemini for now, even if that means missing all the great Claude Code features. I've had my fair share of experience in Python coding across every major LLM platform, and the competition for me had narrowed down to Anthropic and Google. When it comes to coding capabilities, there's no doubt that Claude Code outperforms Gemini thanks to its superior code generation, strict prompt adherence and error avoidance. Besides this, it just handles complex logic better and requires far less hand-holding than any other LLM, though I'd argue that's almost a requisite for the platform to be usable at all under such stringent limits. The economics of Sonnet 4.6 and Opus 4.6, however, deserve serious consideration, especially for paying users. Claude restricts standard paid users to a 200K-token context window. 
Conversely, the similarly-priced Gemini 3.1 Pro comes with a massive 1M-token window, making it a better deal on price-to-performance. At the time of writing, Google's flagship model is about 2.5x cheaper on input and half the output cost of Claude Opus 4.6. Since my daily vibe-coding sessions usually involve brainstorming and tend to get messy, Gemini's massive context runway and forgiving limits are far more practical for getting things done. I find it much easier to deal with frequent fine-tuning and a couple of missing features than to hit a multi-hour lockout in the middle of a session, and I imagine it is doubly frustrating for paying users whose productive hours fall within the "peak demand" period. The abrupt breaks completely shatter the flow of ideas and make it easy to lose the vision of the project. Anthropic has been the industry leader in shipping incredible features with a wide variety of applications, but for many, they remain frustratingly inaccessible. I was certainly not alone in experiencing this squeeze. Since the second half of March 2026, tech forums and subreddits have been abuzz with users complaining about unusually aggressive usage caps. What concerns me is that this has now begun to undermine the platform's day-to-day utility. The "bottleneck" in question isn't strictly limited to heavy-duty programming tasks in Claude Code, either. 
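The pricing comparison above reduces to simple ratios. The per-million-token prices below are placeholders chosen only so the ratios match the article's claims (about 2.5x cheaper on input, half the output cost); they are not real list prices:

```python
# Placeholder prices in $/1M tokens, set only to reproduce the stated ratios.
opus_in, opus_out = 15.0, 75.0
gemini_in, gemini_out = opus_in / 2.5, opus_out / 2.0

def session_cost(in_tokens: int, out_tokens: int,
                 price_in: float, price_out: float) -> float:
    """Cost of one session in dollars, given token counts and $/1M prices."""
    return (in_tokens * price_in + out_tokens * price_out) / 1e6

# A messy vibe-coding session: lots of context re-sent, moderate output.
cost_opus = session_cost(500_000, 50_000, opus_in, opus_out)
cost_gemini = session_cost(500_000, 50_000, gemini_in, gemini_out)
print(f"Opus-priced session:   ${cost_opus:.2f}")
print(f"Gemini-priced session: ${cost_gemini:.3f}")
```

Under these assumed ratios, the same workload costs a little under half as much at the Gemini-style prices, which is the price-to-performance argument in a nutshell.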
I recently tested Claude's new interactive visuals, and while the feature itself was nothing short of a game-changer in information visualization, the excitement completely evaporated when I discovered that generating just two of these "visuals" exhausted 100% of my usage limits on the free tier. It's regrettable when a well-rounded feature such as this is reduced to a tech demo. The tokenomics of LLMs is a worthy consideration when choosing a platform. There's absolutely no denying that Anthropic's direction is genuinely impressive, and that's why Claude is the most capable model for coding workflows. The primary issue I take is with accessibility, because without it, capability itself is a hollow selling point. The growing gap between what Anthropic promises and what users can reliably access is getting in the way of the platform's usability, and it has many users, not unlike myself, looking at the closest competitors.

Anthropic
XDA-Developers3d ago
Read update
Claude is better than Gemini for Python, but it's unusable until Anthropic fixes this one problem

Anthropic's Mythos AI model sparks fears of turbocharged hacking

Anthropic's new Mythos AI model is raising concern among governments and companies that it could outpace current cyber security defenses, turbocharge hacking, and expose weaknesses faster than they can be fixed. The San Francisco-based startup released a cyber-focused model this month, which has shown the ability to detect software flaws faster than humans but has also demonstrated that it can generate the exploits needed to take advantage of them. In one alarming case, the Mythos model showed it could break out of a secure digital environment to contact an Anthropic worker and publicly reveal software glitches, overriding the intention of its human makers. This week, OpenAI also released its own advanced cyber model with similar capabilities. The developments have sent senior international financial officials and government ministers around the world scrambling to understand the dangers, in some cases seeking access to the new models that have only been given to a small number of vetted partners. "This feels like the discovery of fire: a force that can profoundly improve our lives or, if mishandled, cause real harm across the digital world," said Rafe Pilling, director of threat intelligence at cyber firm Sophos. Last week, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jay Powell summoned some of the largest US banks to discuss the cyber threats the AI model posed. The UK's AI minister, Kanishka Narayan, told the FT "we should be worried" about the capabilities of the model. These risks are well known within Anthropic. Logan Graham, who leads Anthropic's frontier "red team," which tests the lab's models, said: "Somebody could use [Mythos] to basically exploit en masse very fast in an automated way, and most of the organizations around the world... including the most technically sophisticated ones, would not be able to patch things in time." AI tools have already significantly boosted the multibillion-dollar cyber crime industry. 
They have provided amateur hackers with cheap tools to write harmful software, as well as enabling professional criminals to better automate and scale their operations. "Attacks are already increasing in frequency and sophistication, thanks to AI," said Christina Cacioppo, chief executive at security and compliance firm Vanta. "Most companies aren't prepared to handle the risk because they're still managing security through dated methods that are no match for the speed of AI-enabled attacks," she added. AI-enabled cyber attacks were up 89 percent in 2025 compared with a year earlier, according to data from security group CrowdStrike. Meanwhile, the average time between an attacker first gaining access to a system and acting maliciously fell to 29 minutes last year, a 65 percent acceleration from 2024. "The game is asymmetric; it is easier to identify and exploit than to patch everything in time," said one person close to a frontier AI lab. Anthropic's Graham said there were also internal concerns that companies would use Mythos to find "more vulnerabilities than they could hope to deal with in the near future." The heightened fears about AI and cyber security come amid signs that agents, which act autonomously on users' behalf to conduct tasks, could also fuel a further rise in AI-enabled hacking. Last September, Anthropic detected the first reported AI cyber-espionage campaign believed to be coordinated by a Chinese state-sponsored group. It manipulated its coding product, Claude Code, to attempt to infiltrate about 30 global targets, including large tech firms, financial institutions, chemical manufacturers, and government agencies. It was successful in a small number of cases and executed without extensive human intervention. Software researcher Simon Willison has warned there is a "lethal trifecta" of capabilities that arise with agents: access to private data; exposure to untrusted content, such as the Internet; and the ability to communicate externally. 
Security professionals argue that the safest way to protect against cyber attacks when using an AI agent is to grant it access to only two of these areas. However, AI experts believe that much of the value from agents comes from granting access to all three. "The bad news is that there is no good solution as of today," said one person close to an AI lab. "The good news is [AI agents aren't] yet in mission-critical settings like the stock exchange, bank ledger, or the airport." Stanislav Fort, a former Anthropic and Google DeepMind researcher who has founded AISLE, an AI security platform, said he was optimistic that AI could help to identify and fix a "finite repository" of historical security flaws. To date, AI models have identified thousands of "zero-day" vulnerabilities -- unknown weaknesses in commonly used software -- some of which have been undetected for decades. "We are gradually finding fewer and fewer zero days, of the worst kinds we can imagine," said Fort. Once these weaknesses were eliminated, the technology could be used to "proactively make sure nothing bad comes in [and] meaningfully increase the security level of the whole world as a result." Additional reporting by Kieran Smith in London.
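Willison's "lethal trifecta" can be expressed as a simple policy check: an agent configuration is flagged whenever all three capabilities are granted at once. A minimal sketch; the field names are illustrative, not from any real agent framework:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    # The three capabilities in Simon Willison's "lethal trifecta".
    private_data_access: bool   # can read private or sensitive data
    untrusted_content: bool     # processes untrusted input (e.g. the web)
    external_comms: bool        # can send data outside the system

    def trifecta_complete(self) -> bool:
        """True when all three capabilities are granted together --
        the combination security professionals advise against."""
        return (self.private_data_access
                and self.untrusted_content
                and self.external_comms)

# Granting any two is the conventionally "safer" configuration...
assert not AgentConfig(True, True, False).trifecta_complete()
# ...while all three together completes the trifecta.
assert AgentConfig(True, True, True).trifecta_complete()
```

The tension the article describes is visible in the check itself: most of an agent's commercial value comes from the exact configuration the policy flags.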

Anthropic
Ars Technica3d ago
Read update
Anthropic's Mythos AI model sparks fears of turbocharged hacking

White House opens backchannel to Anthropic as Pentagon fight simmers

Anthropic gave NSA access to Mythos Preview, Anthropic's donation to open source developers highlights how under-resourced they are, Asian regulators urge banks to address Mythos risks, LayerZero-powered cross-chain bridge Kelp DAO lost $292m in DPRK exploit, and much more. Anthropic is building tools that could have enormous implications for the federal government. But that same government is currently fighting Anthropic in court after the Pentagon declared it a "supply chain risk." The meeting between Amodei and White House chief of staff Susie Wiles points to a potential thaw. In the meeting, Wiles was circumspect about the Pentagon aspect, saying, "It's in court." But she made clear that the government needs a relationship with Anthropic, and she wants an open line of communication, one source said. The discussion covered how Anthropic is safeguarding its code and how the company makes decisions around things like when and how to release new models. 
(Maria Curi, Marc Caputo, Dave Lawler / Axios) Related: New York Times, The Information, TechCrunch, Fox News, The Next Web, Washington Post, Politico, Implicator.ai, Benzinga, BBC, Reuters, Tech in Asia, The Hill, Bloomberg Law, CNBC, UPI, New York Post, Quartz, NewsMax.com, Blockonomi, KDFX-TV, Washington Examiner, Gizmodo, The Verge, Bloomberg, MarketWatch, Reuters, Wall Street Journal, Crypto Briefing, Breitbart, Politico, The Decoder, New York Times, The Japan Times, Benzinga, International Business Times, DataBreaches.Net, CyberPress

It's unclear how the NSA is currently using Mythos, but other organizations with access to the model are using it predominantly to scan their own environments for exploitable security vulnerabilities. Anthropic restricted access to Mythos to around 40 organizations, contending that its offensive cyber capabilities were too dangerous to allow for a wider release. Anthropic has publicly named only 12 of those organizations. One source said the NSA was among the unnamed agencies with access. The NSA's counterparts in the U.K. have said they have access to the model through the country's AI Security Institute. (Maria Curi, Sam Sabin / Axios) Related: Reuters, The Information, The Decoder, Business Today, crypto.news, DigiTimes, Implicator.ai, NewsMax.com, Engadget, Tech in Asia, Yahoo News, Bloomberg, Hacker News, r/politics, r/ArtificialInteligence, r/Anthropic, r/technology, r/Intelligence

As code is maintained and bugs are fixed, it accrues what software maintainers call "cruft" -- remnants of legacy code left within software that can break things or be exploited -- and keeping track of it all can get tricky. The problem is that while the number of AI eyes looking for problems has increased, the number of people fixing those problems when they arise hasn't. And -- so far -- humans are still the final link in the chain, even as AI's autonomous code-writing capabilities increase exponentially. 
Mythos may eventually alleviate the stress on maintainers, securing code for millions of users who rely on it. (Chris Stokel-Walker / Bloomberg) Related: CTech, The Register, GovTech, HotAir, DevOps.com

Singapore's financial regulator is urging banks to plug holes, while South Korea's government agencies have met to review and discuss how to respond to the risks. In Australia, authorities expect lenders to be vigilant to ensure clients aren't put at risk by inadequate controls. The actions around the region reflect rising global concern over Mythos as regulators discuss with financial firms how they are handling the cybersecurity risks raised by the model, which has so far been given only a limited release. Anthropic held back a wider release after finding the model was capable of discovering security holes that had gone undetected for years, fueling alarm about a potential new era of cyberattacks. (Rthvika Suvarna, Richard Henderson, and Haram Lim / Bloomberg)

"Preliminary indicators suggest attribution to a highly sophisticated state actor, likely DPRK's Lazarus Group, more specifically TraderTraitor," LayerZero wrote in its latest statement. LayerZero explained that the attacker gained access to the list of RPC nodes used by LayerZero Labs' decentralized verifier network (DVN); DVNs are independent entities that verify cross-chain messages. The attacker then poisoned two of those RPC nodes, causing them to deliver a fake cross-chain message to the DVN. The attacker also launched a DDoS attack against the clean nodes to force the DVN to rely on the poisoned ones. 
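The node-poisoning sequence LayerZero describes can be illustrated with a toy majority-vote model. This is a hypothetical sketch for intuition only: the node structure, function name, and "majority of responders" rule are assumptions, not LayerZero's actual DVN verification logic.

```python
# Toy model of the RPC-poisoning attack described above: a verifier (DVN)
# cross-checks a cross-chain message against several RPC nodes and trusts
# the majority of the nodes that actually respond.

def verify_message(nodes, msg_hash):
    """Ask every reachable node for its view of the message; accept the
    hash reported by the majority of responders."""
    responses = [n["reports"] for n in nodes if n["online"]]
    if not responses:
        raise RuntimeError("no RPC nodes reachable")
    majority = max(set(responses), key=responses.count)
    return majority == msg_hash

LEGIT, FAKE = "0xabc", "0xevil"

# Normal operation: five honest nodes, all online.
nodes = [{"online": True, "reports": LEGIT} for _ in range(5)]
assert verify_message(nodes, LEGIT)

# Attack: poison two nodes so they report a fake message, then DDoS the
# three clean nodes so only the poisoned ones respond.
for n in nodes[:2]:
    n["reports"] = FAKE       # poisoned RPC nodes
for n in nodes[2:]:
    n["online"] = False       # clean nodes knocked offline
assert verify_message(nodes, FAKE)  # the DVN now accepts the fake message
```

The sketch shows why the DDoS step mattered: with the honest nodes still responding, the poisoned minority would have been outvoted.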
(Danny Park / The Block) Related: The Block, CoinDesk, CryptoNinjas, CoinGape, Blockchain.News, Blockonomi, crypto.news, Crypto Briefing, Tech in Asia, Cointelegraph, CoinGape, Coinpedia Fintech News, Bloomberg, Mezha, Blockhead, NullTX, DL News, Decrypt, CoinDesk, crypto.news, CryptoSlate, Blockonomi, Yahoo Finance, Blockchain.News, Bitcoin News, Cointelegraph, Crypto Briefing, Bitcoinist.com, The Defiant

At 7:07 p.m. EDT on April 17, an attacker impersonated an eth.limo team member to trick registrar EasyDNS into running an account recovery process, according to the post-mortem and a separate blog post from EasyDNS CEO Mark Jeftovic. The attacker flipped eth.limo's nameservers to Cloudflare at 2:23 a.m. EDT on April 18, triggering automated downtime alerts that woke the eth.limo team. The nameservers were then switched again to Namecheap at 3:57 a.m. EDT before EasyDNS restored the team's account access at 7:49 a.m. EDT, per the timeline. eth.limo is a free, open-source reverse proxy that lets users reach ENS-linked content hosted on IPFS, Arweave, or Swarm through a standard browser by appending ".limo" to any .eth name. Its wildcard DNS record at *.eth.limo covers roughly 2 million .eth domains registered through ENS, per figures cited by EasyDNS. (Zack Abrams / The Block) Related: cryptonews.net, Crypto News, Cryptorank, Yellow, BeInCrypto, Blockonomi, Coinpaper

Vercel is a cloud platform that provides hosting and deployment infrastructure for developers, with a strong focus on JavaScript frameworks. The company is known for developing Next.js, a widely used React framework, and for offering services such as serverless functions, edge computing, and CI/CD pipelines that enable developers to build, preview, and deploy applications. Vercel said a limited subset of customers was affected by a security breach, that its own services have not been impacted, and that it is working with impacted customers. 
Vercel says it is taking steps to protect its customers, advising them to review environment variables, use its sensitive environment variable feature, and rotate secrets if needed. Vercel said the breach stemmed from the compromise of a third-party AI tool's Google Workspace OAuth application and is advising Google Workspace administrators and Google account owners to check for that application. Vercel CEO Guillermo Rauch later shared additional details on X, stating that the initial access occurred after a Vercel employee's Google Workspace account was compromised via a breach at the AI platform Context.ai. (Lawrence Abrams / Bleeping Computer) Related: Vercel, Hacker News, The Register, TechRadar, Decipher, The Hans India, Ace of Spades HQ, The Indian Express, The Coin Republic, Cointelegraph, PiunikaWeb, Cyber Security News, CoinDesk, Blockhead, Blockonomi, The Cyber Express, Peridot Blog, The Information, crypto.news, Livemint, The Block, iTnews, CyberInsider, The Verge, XDA Developers, r/webdev, WebProNews, r/cybersecurity, BeInCrypto, India Today, Crypto News, IT News

Moore admitted he hacked the high court more than two dozen times, in addition to hacking accounts at AmeriCorps and the Veterans Administration Health System. He boasted about his access on social media, using the handle @ihackedthegovernment. He faced up to a year in prison and a fine of up to $100,000 after pleading guilty to a single misdemeanor count of fraud activity in connection with computers. But the Justice Department sought only probation, a recommendation on the lower end of federal sentencing guidelines for Moore. Prosecutors cited his admission and commitment to taking responsibility for his conduct as reasons for a lighter sentence. His attorney, Eugene Ohm, said that Moore immediately admitted guilt and accepted a plea deal when confronted by federal law enforcement. 
(Ella Lee / The Hill) Related: TechCrunch, Bloomberg Law

European Commission President Ursula von der Leyen presented the age-verification tool in Brussels on Wednesday, saying it was "technically ready" and will soon be available to use as countries move to ban kids from social media. Cyber and privacy experts immediately dove into the source code on the GitHub software platform and reported several issues with the app's design. The saga is turning into a PR disaster for Brussels. But underneath the controversy over the code lie deeper divisions between privacy campaigners, child rights groups, tech firms, and politicians over how to protect minors online -- as leaders promise to shield kids from social media and porn sites. Within hours of the EU's app release, security consultant Paul Moore found it would store sensitive data on a user's phone and leave it unprotected, he wrote in a widely shared post on X. Moore claimed to have hacked the app in under two minutes. Baptiste Robert, a prominent French white hat hacker, confirmed many of the issues and told POLITICO it was possible to bypass the app's biometric authentication features, meaning someone could forgo entering a PIN code or using Touch ID to access the app. Olivier Blazy, a cryptographic researcher who is part of a French task force on digital identity, said: "Let's say I downloaded the app, proved that I am over 18, then my nephew can take my phone, unlock my app and use it to prove he is over 18." (Émile Marzolf, Ellen O'Regan and Eliza Gkritsi / Politico EU) Related: GitHub, RTÉ, Biometric Update, Wired, Blaze Media, MediaNama, Cointelegraph, r/neoliberal, r/europe, r/privacy, r/eutech, Sofx

The researchers documented two campaigns in which attackers deployed QEMU as part of their arsenal to collect domain credentials. One campaign, which Sophos tracks as STAC4713, was first observed in November 2025 and has been linked to the Payouts King ransomware operation. 
The other, tracked as STAC3725, was spotted in February this year and exploits the CitrixBleed 2 (CVE-2025-5777) vulnerability in NetScaler ADC and Gateway instances. According to Zscaler, Payouts King is likely tied to former BlackBasta affiliates, based on its use of similar initial access methods such as spam bombing, Microsoft Teams phishing, and Quick Assist abuse. Sophos recommends that organizations look for unauthorized QEMU installations, suspicious scheduled tasks running with SYSTEM privileges, unusual SSH port forwarding, and outbound SSH tunnels on non-standard ports. (Bill Toulas / Bleeping Computer) Related: Sophos, Security Affairs

Of the 1,107 firms that responded to the survey, 507 reported being hit by ransomware attacks, in which hackers block access to data and demand payment to restore it. Of the companies that paid the attackers, 83 were able to restore their systems and data, while 139 were not. Conversely, 141 firms reported being hit by ransomware attacks but restoring their systems and data without paying. Experts say ransoms should not be paid because they fund criminal organizations. The institute noted that the survey results underscore the reality that "paying a ransom does not guarantee data recovery." About half of the companies that experienced ransomware attacks said that their financial losses, including ransom payments and system recovery costs, ranged from 1 million yen to less than 50 million yen. Meanwhile, 16 percent reported little to no damage, while 4.3 percent of the firms experienced losses of 1 billion yen or more. The survey also showed that restoration usually took between one week and a month, as reported by 176 of the affected companies. In contrast, some companies said their data was not restored even after three months. 
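The survey figures quoted above support the institute's point directly; a quick back-of-the-envelope pass (using only the numbers in the article, and excluding firms whose payment outcome was not reported) shows that roughly a third of the firms that paid actually recovered their data:

```python
# Quick arithmetic on the Japanese survey figures quoted above. The 222
# figure (83 + 139) covers only firms whose payment outcome was reported;
# the remaining respondents are not broken out in the article.

respondents = 1107
hit = 507                      # firms that reported ransomware attacks
paid_recovered = 83            # paid and restored systems/data
paid_not_recovered = 139       # paid but did not restore
recovered_without_paying = 141

paid_total = paid_recovered + paid_not_recovered
hit_rate = hit / respondents
pay_success_rate = paid_recovered / paid_total

print(f"share of firms hit: {hit_rate:.1%}")                  # 45.8%
print(f"recovery rate after paying: {pay_success_rate:.1%}")  # 37.4%
```

In other words, among the 222 firms with a known outcome, paying worked out less often than not, while 141 firms recovered without paying at all.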
(Japan Today) Related: Caliber, The Mainichi, Kyodo News

Reports from the National Information Technology Development Agency (NITDA) and the Corporate Affairs Commission (CAC) confirmed that 'coordinated and sophisticated' threat actors have successfully breached critical infrastructure, leading to service outages and the suspected exfiltration of sensitive citizen data. Underscoring the severity of the breach, CAC temporarily suspended the companies' registration portal, even as the Nigeria Data Protection Commission (NDPC) commenced a probe into the attacks. (Adeyemi Adepetun / The Guardian) Related: Nigerian Mirror

In an email obtained by the Tallahassee Democrat, Assistant City Manager Christian Doolin informed city commissioners of the incident and said staff "quickly responded and took action to isolate the threat." "We want to make you aware that earlier this morning, our systems alerted staff to an attack affecting portions of our city's technology environment," the email states. "There are no operational impacts to the system at this time," Doolin wrote. "Staff is validating containment, assessing registries and scheduled tasks, and analyzing access across environments." However, an email sent by Leon County chief information officer Michelle Taylor at 1:35 p.m. added more gravity to the situation. "COT is experiencing a confirmed cyber event on its IT network. They have disconnected from the internet while they investigate further. Additionally, we have temporarily paused our city/county network link to prevent any creep into the Leon County network," she wrote. (Elena Barrera / Tallahassee Democrat) Related: WTXL

The announcement follows a pilot project for Tinder verification that World previously conducted in Japan. The global Tinder expansion is one of the biggest tests yet for World and of the company's bet that everyday consumers will be willing to sign up for biometric verification services to use internet applications. 
In addition to the Tinder global expansion, Tools for Humanity, the company behind World, announced a number of other consumer and enterprise partnerships on Friday at its Lift Off event in San Francisco. The startup says Tinder users who verify with their World ID will receive five free "boosts," typically a paid feature that increases the number of users who see a profile by up to 10 times for 30 minutes. The videoconferencing platform Zoom also says that users can now require other participants to verify their identity with World before joining a call. DocuSign, the contract-signing software, will allow users to require World's identity verification technology. (Maxwell Zeff / Wired) Related: TechCrunch, Gizmodo, DL News, The Verge, Decrypt, CoinDesk, TechRadar, BBC, Axios, The Block, TheGrio, Implicator.ai, The Deep View, Slashdot

In a post on its Bluesky account, the company shared the cause of the problem and noted that the attack was "impacting our operations, with users experiencing intermittent interruptions in service for their feeds, notifications, threads, and search." Bluesky said, however, that it has not seen any evidence of unauthorized access to private data. (Sarah Perez / TechCrunch) Related: Mashable, Engadget, The Verge, heise online

The European Commission is strengthening the European Union's digital sovereignty by awarding a tender that allows EU institutions, bodies, offices, and agencies (Union entities) to procure sovereign cloud services for up to €180 million (around $211 million) over six years.

Elon Musk and Linda Yaccarino, the former CEO of X, were summoned to Paris on Monday, where investigators are looking into allegations of misconduct related to the social media platform X, including the spread of child sexual abuse material and deepfake content, although as of press time, it was unclear if they would go. 
Surveillance and analytics company Palantir recently posted what it called a "brief" 22-point summary of CEO Alex Karp's book "The Technological Republic" that trashes pluralism, denounces the "postwar neutering of Germany and Japan," and otherwise pushes a generally hard-right agenda.

Google says its AI can now scan everything to form its own views of you and everyone you know, including all your photos.

Vercel, Anthropic
Metacurity, 3d ago

NSA Reportedly Adopts Anthropic's Mythos Tool Despite Pentagon Security Concerns

The US National Security Agency (NSA) is using Anthropic's most advanced artificial intelligence model, Mythos Preview, even as the Department of Defense (DoD) officially labels the company a "supply chain risk," according to a report from Axios published over the weekend. The disclosure underscores a growing split within the federal government, where military leadership has clashed with the AI firm, yet intelligence agencies appear to be deepening their reliance on its tools for cybersecurity work.

Axios reported that two sources said the NSA is using Mythos, while a third indicated the model has been adopted more broadly within the Department of Defense. Anthropic has restricted access to the model to roughly 40 organizations, citing the risk that its offensive cyber capabilities could be misused if widely distributed. The company has publicly identified only 12 of those organizations. According to Axios, the NSA is among the unnamed agencies that received access, with most institutions reportedly using Mythos to scan for exploitable vulnerabilities in their environments. The United Kingdom's counterparts to the NSA have also confirmed access to the model through the country's AI Security Institute.

The tension dates back to February, when the DoD moved to cut off Anthropic and directed its vendors to do the same. The case is ongoing, and the Pentagon has argued in court that continued use of Anthropic's models could threaten US national security. The dispute flared during contract renegotiations earlier this year. The Defense Department pressed Anthropic to make Claude available for what it called "all lawful purposes," while the company pushed to bar specific applications, particularly mass domestic surveillance and the development of autonomous weapons. Some defense officials view Anthropic's stance as evidence that it cannot be trusted in critical military scenarios, a claim the company has rejected. 
Anthropic chief executive Dario Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent last Friday to discuss government deployment of Mythos and the company's broader security practices. Both sides described the meeting as productive, according to Axios, with next steps expected to focus on how departments outside the Pentagon can engage with the model. Anthropic and the Pentagon declined to comment. The NSA and the Office of the Director of National Intelligence did not respond to inquiries. The NSA's continued use of Mythos underscores the federal government's willingness to prioritize cybersecurity capabilities over procurement disputes. While the Pentagon has warned that Mythos could amplify cyberattacks due to its advanced coding and autonomous capabilities, other agencies appear to view the model as essential for hardening critical defenses.

Anthropic
FinanceFeeds, 3d ago

Polymarket Targets $15 Billion Valuation in New Funding Round | PYMNTS.com

The round would give the company a valuation of around $15 billion, including the new funds, The Information reported Sunday (April 19), citing sources familiar with the matter. According to the report, this financing would add to the $600 million already invested in the company by New York Stock Exchange parent Intercontinental Exchange. The report notes that a $15 billion valuation would still be below the $22 billion reached last month by rival prediction market Kalshi. However, the new valuation would still be more than 66% higher than the $9 billion the company achieved in a funding round last year. The report also postulates a reason for the difference in valuations: Polymarket has only recently begun serving U.S. customers and charging fees to drive revenue, while Kalshi had a head start in the U.S., with annualized revenues of $1.5 billion. Polymarket, unlike Kalshi, settles trades on a blockchain and has discussed issuing its own token, the report added. The company is also planning an initial public offering, sources familiar with its thinking told The Information. The funding plans come in the wake of a surge in popularity of prediction markets, which let users purchase event contracts -- derivatives that pay out to investors who correctly predict the outcome of things like sporting events or elections. A recent report by Wall Street broker Bernstein forecasts that prediction market volumes will reach $1 trillion by 2030. This will happen as the industry shifts from niche bets to a larger "information market" covering sports, cryptocurrency, politics and the economy, the report said. Volumes came to $51 billion last year and are on pace to reach around $240 billion this year, which implies roughly 80% compound annual growth through the rest of the decade, the report added. Activity has already accelerated so far this year, with Polymarket and Kalshi seeing combined year-to-date volumes of $60 billion. 
"Increasing regulatory clarity at the federal level is expanding the addressable market, while blockchain-based tokenization and integration with crypto markets is enabling global liquidity, long-tail event creation and participation from institutions," the analysis said. Still, prediction markets have become a flashpoint between federal and state regulators, as PYMNTS wrote last year. "While real-money prediction markets technically fall under the jurisdiction of the Commodity Futures Trading Commission (CFTC), a growing number of states have sought to shut down the markets they view as unlicensed or illegal gambling operations," that report said.
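The growth arithmetic in the Bernstein figures quoted above can be checked directly. Assuming the roughly 80% figure is measured over five annual periods from the ~$51 billion base year (an assumption; the report's exact base year is not stated here), the numbers line up:

```python
# Sanity-checking the Bernstein forecast cited above: ~$51B in volume
# last year growing to a forecast $1T by 2030.

def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` periods."""
    return (end / start) ** (1 / years) - 1

rate = cagr(51e9, 1e12, 5)
print(f"implied CAGR: {rate:.0%}")  # 81%, consistent with the ~80% cited
```

Note that $51 billion to roughly $240 billion in a single year is already a much steeper jump (about 4.7x), so the forecast implicitly assumes growth decelerates after this year.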

Polymarket
PYMNTS.com, 3d ago

Before the SpaceX IPO: The Culture Lessons Every Deep Tech Startup Should Steal

What a SpaceX engineer-turned-deeptech VC reveals about culture, speed, and defensibility in deep tech. SpaceX just filed for its IPO, and when it goes public, most of the coverage will focus on rockets, Starlink, and financials. That's all interesting -- but it misses the real story. The true advantage isn't just what SpaceX builds. It's HOW they build. For our latest Ubiquity University module, I sat down with Brannon Jones, who worked on Falcon and Raptor inside SpaceX and is now a deeptech VC at AlleyCorp, to unpack what actually makes SpaceX different from the inside. Not the mythology -- the operating system. The consistent theme: SpaceX has turned culture into a compounding execution engine.

SpaceX prioritizes truth over elegance. Instead of trying to design the perfect system upfront, they push hardware into real-world conditions as quickly as possible. That includes tests that fail -- sometimes visibly and dramatically -- but internally, those are treated as progress because they collapse uncertainty. This approach compresses timelines and eliminates false confidence. In deep tech, especially in the physical world, you cannot simulate your way to certainty.

One phrase Brannon emphasized was avoiding anything "superfluous." In practice, this is much harder than it sounds, but the mantra is "delete, delete, delete." Most teams overbuild because they assume they understand the system early. SpaceX assumes they don't. They start from first principles, keep designs as simple as possible, and let real-world testing determine what needs to exist. That discipline leads to systems that are cheaper, faster to build, and easier to improve.

A key insight from Brannon: SpaceX's velocity is driven as much by manufacturing as by design. High-throughput production of complex components creates a rapid feedback loop between building and learning. This flips the typical mental model. Manufacturing is not downstream of innovation -- it enables it. 
The faster you can produce and iterate, the faster you converge on better systems. This pattern is increasingly visible elsewhere.

Another defining trait is how clearly priorities are communicated. At SpaceX, Elon regularly makes explicit what the single most important thing at the company is, and that clarity propagates across the organization. Engineers, technicians, and new hires all operate with a shared understanding of the current objective. This reduces friction throughout the organization. It's not always perfectly clean, but the organization consistently converges around what matters. In complex technical environments, that clarity is a force multiplier.

SpaceX also places a premium on direct, firsthand information. Brannon described how leaders often bypass layers to understand what is actually happening on the ground. The goal is not polished summaries -- it's reality. That removes the ability to hide behind abstraction: the organization continuously orients itself toward what is actually happening.

What ties all of this together is not any single tactic but the system they create. Culture at SpaceX is not about stated values -- it's about how quickly the organization learns from reality and adapts. That combination produces something extremely difficult to replicate: a higher rate of learning. In deep tech, that is the moat. Competitors can copy products, hire talent, and raise capital. But it is much harder to replicate how quickly a company improves. Brannon goes deeper on how these principles show up day-to-day inside SpaceX, and where they do -- and don't -- translate to other startups.

Final thought: when SpaceX IPOs, the market will try to value its products. The more durable advantage is a culture that turns time into a competitive weapon.

SpaceX
ubqt.vc, 3d ago