News & Updates

The latest news and updates from companies in the WLTH portfolio.

Daily Dividend Report: PEBO, SMG, SO, NRG, CMS

Scotts Miracle-Gro, the leading marketer of branded consumer lawn and garden products in North America, announced that its Board of Directors has approved the payment of a cash dividend of $0.66 per share. The dividend is payable on Friday, June 5, 2026, to shareholders of record as of Friday, May 22, 2026.

Southern today announced a regular quarterly dividend of 76 cents per share on the company's common stock, payable June 8, 2026 to shareholders of record as of May 18, 2026. In doing so, the company is increasing its dividend by 8 cents per share on an annualized basis to a rate of $3.04 per share. Every quarter for 79 consecutive years, Southern Company has paid a dividend to its shareholders that is equal to or greater than the previous quarter.

NRG Energy today announced that its Board of Directors declared a quarterly dividend on the Company's common stock of $0.475 per share, or $1.90 per share on an annualized basis. The dividend is payable on May 15, 2026, to stockholders of record as of May 1, 2026.

The Board of Directors of CMS Energy has declared a quarterly dividend on the company's common stock. The dividend for the common stock is 57 cents per share. It is payable May 29, 2026, to shareholders of record on May 8, 2026.
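The annualized figures quoted in these announcements are simple multiples of the quarterly amounts. A minimal arithmetic cross-check, using only the per-share values stated in the report above:

```python
# Annualized dividend rate = quarterly dividend x 4 payments per year.
def annualize(quarterly: float) -> float:
    return round(quarterly * 4, 2)

# Southern: 76 cents per quarter -> $3.04 per year; the announced 8-cent
# annualized increase corresponds to a 2-cent bump per quarter.
assert annualize(0.76) == 3.04
assert round(0.08 / 4, 2) == 0.02

# NRG Energy: $0.475 per quarter -> $1.90 per year, as announced.
assert annualize(0.475) == 1.90
```

By the same rule, CMS Energy's 57-cent quarterly payout annualizes to $2.28.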

NRG
NASDAQ Stock Market, 2d ago

Emids Unveils Healthcare Agentic AI Suite Integrated With Anthropic at the Emids Healthcare Summit

NASHVILLE, Tenn.-(BUSINESS WIRE)-Emids, a leading provider of digital engineering, AI and platform solutions to the healthcare and life sciences industry, unveiled a library of production-ready agentic workflows built on Anthropic's Claude models, alongside Pacca AI Agent Builder, Emids' proprietary platform that lets enterprises design, deploy, and govern AI agents without building infrastructure from scratch. This positions Emids at the forefront of healthcare AI execution.

Anthropic
Weekly Voice, 2d ago

Anthropic Intros Opus 4.7 AI Model, Focusing on Coding, Visual Tasks, and Cybersecurity Guardrails -- THE Journal

Anthropic has unveiled Claude Opus 4.7, an updated large language model that it says outperforms its predecessor on software engineering tasks, image analysis, and multi-step autonomous work, while maintaining pricing at $5 per million input tokens and $25 per million output tokens. The model is now generally available across Anthropic's own products and through its API, as well as on Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry.

Anthropic said the upgrade delivers the most pronounced gains on demanding coding tasks. Users report being able to hand off difficult coding work that previously required close supervision, with the new model handling complex, long-running tasks with greater consistency and paying closer attention to instructions. The company also said the model can verify its own outputs before reporting results to users, a behavior it described as new relative to earlier versions.

On vision, Opus 4.7 can now accept images up to 2,576 pixels on the long edge, roughly 3.75 megapixels, more than three times the resolution supported by prior Claude models. Anthropic said this expands the model's usefulness for tasks requiring fine visual detail, including reading dense screenshots and extracting data from complex diagrams.

Perhaps the most notable aspect of the release is its role in Anthropic's broader safety rollout strategy. The company recently announced Project Glasswing, which highlighted both the risks and potential benefits of AI for cybersecurity, and stated that it would keep its more powerful Claude Mythos Preview model restricted while testing new cyber safeguards on less-capable systems first. Opus 4.7 is the first such model. Anthropic said it experimented during training by selectively reducing Opus 4.7's cybersecurity capabilities and is releasing the model with automatic safeguards designed to detect and block requests that indicate prohibited or high-risk cybersecurity uses. The company added that findings from this deployment will inform its eventual broader release of what it calls "Mythos-class" models. Security professionals seeking to use the new model for legitimate purposes, such as vulnerability research or penetration testing, can apply through a new Cyber Verification Program.

Regarding alignment, Anthropic's evaluations show that Opus 4.7 exhibits low rates of concerning behavior, such as deception, sycophancy, and cooperation with misuse, and performs better than its predecessor in honesty and resistance to malicious prompt-injection attacks. However, the company acknowledged the model is modestly weaker in some areas, including a tendency to give overly detailed harm-reduction advice on controlled substances. Anthropic's internal alignment assessment described the model as "largely well-aligned and trustworthy, though not fully ideal in its behavior," and noted that Mythos Preview remains the best-aligned model the company has trained.

Developers upgrading from Opus 4.6 should account for two cost-related changes. Opus 4.7 uses an updated tokenizer that can map the same input to roughly 1.0 to 1.35 times as many tokens, depending on content type. The model also produces more output tokens at higher effort levels, particularly in later turns of agentic tasks, because it engages in more reasoning. Anthropic said users can manage token consumption through an effort parameter, task budgets, or by prompting the model to be more concise.

Alongside the model release, Anthropic introduced a new "xhigh" effort level, sitting between the existing "high" and "max" settings, giving developers finer control over the tradeoff between reasoning depth and latency. In Claude Code, the default effort level has been raised to "xhigh" for all plans. The company also launched task budgets in public beta on its API platform, and added a new "/ultrareview" command in Claude Code that reads through code changes and flags bugs and design issues.
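The cost changes described above can be turned into a rough budgeting envelope. A minimal sketch: the 1.0x-1.35x tokenizer multiplier and the $5 per-million-input-token price come from the article, while the 10-million-token workload is purely illustrative:

```python
# Rough input-cost envelope when moving a workload from Opus 4.6 to 4.7,
# applying the article's 1.0x-1.35x tokenizer expansion range.
INPUT_PRICE_PER_M = 5.0  # dollars per million input tokens (from the article)

def input_cost_range(opus46_input_tokens: int) -> tuple:
    """Best/worst-case input spend in dollars after the tokenizer change."""
    low = round(opus46_input_tokens * 1.00 / 1e6 * INPUT_PRICE_PER_M, 2)
    high = round(opus46_input_tokens * 1.35 / 1e6 * INPUT_PRICE_PER_M, 2)
    return low, high

# Illustrative: 10M input tokens on Opus 4.6 becomes $50.00-$67.50 of
# input spend on Opus 4.7, before any output-token growth at high effort.
assert input_cost_range(10_000_000) == (50.0, 67.5)
```

Output spend scales the same way at the $25 per-million rate, but the article suggests output growth depends on the chosen effort level rather than a fixed multiplier, so it resists a simple envelope like this.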

Anthropic
THE Journal, 2d ago

SEALSQ Advances Post-Quantum Cryptography in Silicon to Counter AI-Driven Threats Following Anthropic's Mythos Breakthrough

SEALSQ Corp (NASDAQ: LAES) ("SEALSQ" or "Company"), a company that focuses on developing and selling Semiconductors, PKI, and Post-Quantum technology hardware and software products, today provides an update to its previous communication on the evolving intersection of artificial intelligence and quantum-era cybersecurity risks, highlighting how recent breakthroughs in AI are reinforcing the urgent need for Post-Quantum Cryptography (PQC) and other secure algorithms to be embedded directly into semiconductor infrastructure.

Since Anthropic unveiled Claude Mythos Preview on April 7, 2026, the model has drawn significant attention in cybersecurity for its advanced capabilities in coding, reasoning, and vulnerability discovery. In response to the risks, Anthropic has restricted access to Mythos and launched "Project Glasswing," a cross-industry initiative to secure critical software. Mythos significantly outperforms previous models, uncovering serious long-standing flaws in widely used software. As AI agents become fully autonomous cybersecurity actors, they accelerate both offensive and defensive operations, systematically exploring attack paths and using sophisticated evasion techniques. This evolution also amplifies the quantum threat by enabling faster identification of cryptographic weaknesses and shortening the effective lifespan of classical encryption through "harvest now, decrypt later" strategies.

In this new environment, software-based security alone is no longer sufficient. SEALSQ emphasizes that the most effective long-term mitigation is the integration of Post-Quantum Cryptography and other secure algorithms directly into silicon. By embedding PQC into secure microcontrollers and semiconductor components, cryptographic operations are executed within tamper-resistant hardware environments, creating an immutable root of trust that cannot be altered, bypassed, or extracted -- even by AI-driven attacks.
This hardware-based approach fundamentally reduces the attack surface by mitigating vulnerabilities linked to software layers and misconfigurations. It ensures that cryptographic keys are generated, stored, and used entirely within secure environments, protecting them from unauthorized access or exfiltration. As a result, even highly autonomous AI agents would be unable to escalate privileges or compromise protected systems. While no system can claim absolute theoretical immunity, embedding PQC in silicon raises the barrier to attack to a level that is practically infeasible. Unlike software defenses, which can be continuously probed and exploited by AI, hardware-enforced cryptographic protections are inherently resistant to observation, manipulation, and iterative attack strategies. The convergence of advanced AI and emerging quantum computing capabilities is creating a new cybersecurity reality where hardware-rooted, quantum-resilient infrastructures can provide sustainable protection. SEALSQ remains at the forefront of this transformation, developing next-generation semiconductors designed to secure digital ecosystems against both AI-driven cyber threats and future quantum attacks.

About SEALSQ: SEALSQ is a leading innovator in Post-Quantum Technology hardware and software solutions. Our technology seamlessly integrates Semiconductors, PKI (Public Key Infrastructure), and Provisioning Services, with a strategic emphasis on developing state-of-the-art Quantum Resistant Cryptography and Semiconductors designed to address the urgent security challenges posed by quantum computing. As quantum computers advance, traditional cryptographic methods like RSA and Elliptic Curve Cryptography (ECC) are increasingly vulnerable.
SEALSQ is pioneering the development of Post-Quantum Semiconductors that provide robust, future-proof protection for sensitive data across a wide range of applications, including Multi-Factor Authentication tokens, Smart Energy, Medical and Healthcare Systems, Defense, IT Network Infrastructure, Automotive, and Industrial Automation and Control Systems. By embedding Post-Quantum Cryptography into our semiconductor solutions, SEALSQ ensures that organizations stay protected against quantum threats. Our products are engineered to safeguard critical systems, enhancing resilience and security across diverse industries. For more information on our Post-Quantum Semiconductors and security solutions, please visit www.sealsq.com.

Forward-Looking Statements: This communication expressly or implicitly contains certain forward-looking statements concerning SEALSQ Corp and its businesses. Forward-looking statements include statements regarding our business strategy, financial performance, results of operations, market data, events or developments that we expect or anticipate will occur in the future, as well as any other statements which are not historical facts. Although we believe that the expectations reflected in such forward-looking statements are reasonable, no assurance can be given that such expectations will prove to have been correct. These statements involve known and unknown risks and are based upon a number of assumptions and estimates which are inherently subject to significant uncertainties and contingencies, many of which are beyond our control. Actual results may differ materially from those expressed or implied by such forward-looking statements. Important factors that, in our view, could cause actual results to differ materially from those discussed in the forward-looking statements include SEALSQ's ability to continue beneficial transactions with material parties, including a limited number of significant customers; market demand and semiconductor industry conditions; and the risks discussed in SEALSQ's filings with the SEC. Risks and uncertainties are further described in reports filed by SEALSQ with the SEC. SEALSQ Corp is providing this communication as of this date and does not undertake to update any forward-looking statements contained herein as a result of new information, future events or otherwise.

Anthropic
Investing News Network, 2d ago

Amazon invests another $5bn in Anthropic - Kuwait Times

SAN FRANCISCO: Amazon on Monday said it pumped another $5 billion into Anthropic as it ramps up its collaboration with the startup behind the Claude artificial intelligence models. The e-commerce and cloud computing colossus noted that the investment builds on $8 billion it had already invested in Anthropic, according to the companies. Amazon added that it could invest $20 billion more in Anthropic, provided the startup meets certain performance goals.

For its part, San Francisco-based Anthropic said it has committed to spending more than $100 billion on Amazon Web Services (AWS) technology to power AI in the coming decade. "We need to build the infrastructure to keep pace with rapidly growing demand," Anthropic chief executive Dario Amodei said in a release. "Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers." Anthropic said in early April that it had tripled its annualized revenues quarter-on-quarter to over $30 billion - outpacing OpenAI for the first time.

Amodei visited US officials last week at the White House, where they struck a different tone from the dispute that erupted in February, when the AI startup infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. "We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology," a White House spokesperson told AFP. The rhetoric marks a departure from months earlier, when President Donald Trump instructed the US government to "immediately cease" using Anthropic's technology after the company refused to allow the Pentagon unconditional use of its Claude AI models. Anthropic has challenged the Trump administration in court, as well as Hegseth's move to add the company to a list of firms that pose a "supply chain risk."
Earlier this month, Anthropic announced its newest AI model Mythos, withholding it from public release due to its potential cybersecurity risks. -- AFP

Anthropic
Kuwait Times, 2d ago

Musk bought $1.4 billion worth of SpaceX shares in 2025, report reveals

A report by The Information revealed that Elon Musk increased his stake in SpaceX last year by buying $1.4 billion worth of stock from current and former employees. The secondary stock purchase, made through Musk's trust, was disclosed in a draft of SpaceX's confidential IPO prospectus, according to the report.

SpaceX also approved a plan that would award the billionaire CEO 60 million additional shares if the company's market capitalization climbs from $1.1 trillion to as high as $6.6 trillion and the company completes an ambitious plan of building data centers in space to supply compute for AI developers. The stock would vest as SpaceX increases its market cap in $500 billion increments.

SpaceX, which recently confidentially filed for an IPO, generated about $8 billion in profit last year on revenue of $15 billion to $16 billion, according to Reuters. With its IPO, SpaceX's potential debut could be historic. Sources suggest the company may target a valuation of more than $1.75 trillion, making it one of the largest stock market listings ever. The filing follows SpaceX's merger with Musk's AI venture, xAI, in a deal that valued SpaceX at $1 trillion and xAI at $250 billion.

According to reports, SpaceX plans to strengthen Musk's control over the company, granting him and a small group of insiders super-voting shares that will outweigh those of other investors. The prospectus, which was filed confidentially this month, provides fresh details of the company's financials and corporate governance. Musk will remain Chief Executive Officer and Chief Technical Officer upon the completion of the offering. He will also serve as Chairman of SpaceX's nine-member board of directors. After the company's stock market debut, Musk is expected to gain billions in equity.
The company is targeting a listing valuation of roughly $1.75 trillion, with a $75 billion raise, making it one of the largest initial public offerings in history. The IPO is being led by executives who will host three days of meetings this week for Wall Street analysts. The meetings will commence with a tour and briefings at SpaceX's Starbase launch facility in Boca Chica, Texas.

According to filing excerpts reviewed by Reuters, SpaceX will use a dual-class equity structure, giving Class B shareholders 10 votes each, concentrating power with Musk and a handful of other insiders. Class A shares, which will be sold to public investors, will carry a single vote each. The report also highlights provisions that could limit shareholders' ability to influence board elections or pursue certain legal claims, forcing disputes into arbitration and restricting where they can be brought.
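The dual-class mechanics in the filing excerpts are easy to make concrete. A minimal sketch: the 10-votes-per-Class-B-share and 1-vote-per-Class-A-share figures come from the report, while the share counts below are entirely hypothetical, chosen only to show how votes and economic ownership diverge:

```python
# Dual-class voting: Class B carries 10 votes per share, Class A carries 1.
def insider_vote_fraction(class_b_insider: int, class_b_total: int,
                          class_a_total: int) -> float:
    """Fraction of total voting power held by Class B insiders."""
    total_votes = class_b_total * 10 + class_a_total
    return class_b_insider * 10 / total_votes

# Hypothetical cap table: insiders hold all 400M Class B shares and the
# public holds 1B Class A shares -> insiders cast 80% of the votes while
# owning under 29% of the shares outstanding.
assert insider_vote_fraction(400_000_000, 400_000_000, 1_000_000_000) == 0.8
```

The point of the structure is exactly this wedge: voting control stays concentrated even as the economic float sold to the public grows.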

SpaceX, xAI
The American Bazaar, 2d ago

Wall Street's IPO Stampede Is A Race To Beat SpaceX: Here's When Polymarket Thinks Elon Musk Will List

Companies are rushing to price initial public offerings before Elon Musk's SpaceX lands a potentially $2 trillion deal on Wall Street and sucks up the oxygen. As much as $17.3 billion in US listings could hit the tape this month alone, the busiest stretch since December. "If I'm any company and I want to have a chance of attracting investors that invest in IPOs, I'd probably rather do it before that deal comes," Bob Doll, chief executive of Crossmark Global Investments, told Bloomberg. Bankers think they have until June. Polymarket traders agree.

Polymarket Traders Are Pricing June

Polymarket's main contract on SpaceX's IPO month gives June a 65% probability, with July at 12% and a "no IPO before 2027" tail at 6%. SpaceX reportedly filed confidentially with the SEC on April 1, targeting a $1.75 trillion valuation and a $75 billion raise. That would be the largest IPO in history, bigger than Saudi Aramco. Polymarket traders think there is a 91% chance SpaceX is the largest IPO this year, with Anthropic a distant second at 4%. Polymarket thinks the IPO's market cap is most likely to land between $1.5 trillion and $2 trillion.

What The Stampede Looks Like

Bill Ackman's Pershing Square closed-end fund is leading this week's slate, seeking up to $10 billion ahead of an April 28 pricing. It's a real test of retail appetite for the kind of vehicle that usually only draws institutional money. The class is pricing well, which is part of why the pipeline is rushing. The weighted-average return for 2026 US IPOs, excluding SPACs and closed-end funds, has jumped to 21%. The S&P 500 is up 4.2% over the same stretch.

Morgan Stanley Leads The Fee Race

At the 2% gross spread estimated by Jay Ritter, the University of Florida academic known in the industry as "Mr. IPO," the syndicate stands to split around $1.5 billion in fees on a $75 billion raise. Chamath Palihapitiya warned IPO hopefuls this month to get out ahead of SpaceX, saying that SpaceX could eat up a lot of demand.
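Jay Ritter's fee estimate quoted above is a one-line calculation. A minimal cross-check, using the 2% gross spread and $75 billion raise from the article:

```python
# Syndicate fee pool = gross spread x gross proceeds.
def fee_pool(raise_usd: float, gross_spread: float) -> float:
    return raise_usd * gross_spread

# 2% of a $75B raise -> about $1.5B for the underwriting syndicate to split.
assert fee_pool(75e9, 0.02) == 1.5e9
```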

SpaceX, Polymarket, Anthropic
Benzinga, 2d ago

Near miss in Nashville and bomb scares spark weekend travel chaos - 41NBC News | WMGT-DT

(NBC)- A series of alarming aviation incidents is raising concerns about air travel safety, including a close call at Nashville International Airport and multiple bomb scare disruptions across the country.

Air traffic control audio reveals confusion during a near miss involving two Southwest Airlines flights Saturday. Controllers instructed an arriving flight to perform a "go-around," placing it on a potential collision course with a departing plane. Pilots in both aircraft took evasive action after receiving onboard alerts, avoiding what could have been a catastrophic accident. The Federal Aviation Administration says both crews responded appropriately, and the arriving flight ultimately landed safely despite gusty winds.

Meanwhile, a wave of bomb threats triggered panic at airports from Pittsburgh to Denver. Passengers were seen evacuating planes, some using emergency slides, as law enforcement conducted thorough searches. In one incident at Denver International Airport, a flight was halted after a bomb threat was reported. The FBI says no dangerous materials were found on board. "It was terrifying," said one passenger. "They put us on buses and didn't take us far from the plane."

A similar scare occurred mid-air on a United Airlines flight from Chicago to New York, when the pilot reported a potential onboard threat. The aircraft was diverted to Pittsburgh, where authorities investigated the situation. No injuries were reported. Airlines say safety protocols were followed in each case, but the incidents highlight growing anxiety for travelers amid a series of close calls and security scares.

CHAOS
41NBC News | WMGT-DT, 2d ago

Anthropic's latest AI model is sparking fears from cybersecurity experts and the banking sector. Here's why. | CBC News

Anthropic earlier this month debuted Mythos, its most advanced AI model to date, equipped with sophisticated capabilities and designed for defensive cybersecurity tasks. Mythos's vast capabilities have sparked fears about the threat to traditional software security after the AI startup said the preview had uncovered "thousands" of major vulnerabilities in "every major operating system and web browser." And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI.

Anthropic has rolled out Claude Mythos Preview through a controlled initiative called "Project Glasswing," granting access to tech majors including Amazon, Microsoft, Nvidia and Apple. The company also extended access to a group of more than 40 additional organizations that build or maintain critical software infrastructure. Experts warned that the model can identify and exploit previously unknown vulnerabilities faster than companies can repair them. Its advanced coding and autonomous capabilities could dramatically accelerate sophisticated cyberattacks, particularly in sectors such as banking that rely on complex, interconnected and often decades-old technology systems, they have said.

While debuting Mythos, Anthropic said the model's ability to find software flaws at scale could, if misused, pose serious risks to economies, public safety and national security. U.S. software stocks tumbled on April 9 after the Mythos launch on April 7 reignited fears that AI advances could disrupt traditional firms.
Global financial systems need to "come to grips" with the risks posed by rapid advances in artificial intelligence models like Mythos, Bank of Canada Governor Tiff Macklem said earlier this month. Mythos was discussed at a meeting last week of the Bank of Canada's financial sector resiliency group, which includes representatives from the finance department and major Canadian banks. U.S. officials have reportedly convened similar roundtables.

Macklem told reporters during a call from the sidelines of the International Monetary Fund's spring meetings that there has been a fair amount of discussion about the model at the forum. He confirmed he spoke to Federal Reserve chair Jerome Powell about the U.S. approach. "I don't think anybody knows the full implications at this point. That's precisely what everybody's trying to get to the bottom of," Macklem said. He said policy-makers and financial institutions are still in "early discussions" about what Mythos means for the integrity of the global financial system.

But Macklem emphasized that Mythos is not a one-off event and the nature of AI development means firms, regulators and policy-makers need to put plans in place to grapple with this rapidly evolving technology. Whether it's Mythos or another AI model, Macklem said, the ability of these new technologies to both expose and exploit vulnerabilities "puts a premium" on having strong cybersecurity protections in place. "We're going to need to come to grips with how we're going to manage this on an ongoing basis," he said. "The world's moving quickly. We need to keep up." Finance Minister François-Philippe Champagne said Mythos is a "test case" for how governments prepare for and react to new technologies. Meanwhile, the White House has held discussions with Anthropic CEO Dario Amodei about Mythos, with officials saying they talked about collaboration, cybersecurity and balancing AI innovation with safety.
The talks were held despite the Pentagon slapping a formal supply-chain risk designation on Anthropic. The U.S. government is planning to make a version of Mythos available to major federal agencies, Bloomberg News has reported. The model also raised alarm bells in Britain, with authorities holding talks with major banks and cybersecurity officials to assess possible risks. Banks are in close contact with their European regulators regarding Mythos, said Christian Sewing, president of the German banking association and CEO of Deutsche Bank.

One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask: 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the All-In podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." He said it "makes sense" that as the coding models become more capable, they'll find more bugs, which means they'll become more capable of finding vulnerabilities. "That means they're more capable at stringing together multiple vulnerabilities and creating an exploit," he said.

Anthropic
CBC News, 2d ago

Security Leaders Discuss the Vercel Breach

Following the news of the Vercel data breach, security experts are discussing implications, sharing their insights, and weighing in on what this incident suggests about the future of attack patterns.

Incidents like this are never fun, and living through one in real time is stressful for everyone involved, no matter how prepared your team thinks they are. Vercel has a massive footprint in the dev community, particularly for modern web apps and CI/CD workflows, so even when only a slice of customers are affected, people are going to notice and talk about it. That said, from what's been shared publicly, this doesn't look like a sweeping supply chain attack. It reads more like a targeted account takeover: someone found a foothold through a third-party AI tool and worked their way into internal systems from there. The bigger concern is the exposure of environment variables and tokens, which can open doors to follow-on access if teams don't move quickly to lock things down.

One thing that really stands out here is the timeline. By the time Vercel got ahead of the story publicly, the attacker had already disclosed it. That's a tough spot to be in, and it's a good reminder of why comms teams need a seat at the table during incident response tabletop exercises, not just the engineers. When there's a gap between what's being reported and what the company is saying, the narrative fills itself, usually without the full picture.

To Vercel's credit, they've been upfront about what happened and given customers concrete steps to take -- audit your environment variables, use sensitive variable protections, check your deployments, rotate your tokens. That kind of clear, actionable guidance matters a lot when customers are trying to figure out if they're exposed. The bigger takeaway here isn't really about Vercel specifically.
It's about the fact that third-party integrations, especially newer AI tools that connect into identity systems like Google Workspace, are quietly becoming a serious attack surface, even for organizations that have otherwise done a lot of things right.

Calling this a full-scale supply chain attack would be a gross overstatement. What we are seeing in the Vercel incident is a third-party compromise with supply chain characteristics, but not a systemic, cascading supply chain failure similar to the SolarWinds attack. The threat actor leveraged a compromised third-party AI tool integrated via a Google Workspace OAuth application, which then enabled unauthorized access into internal systems. That is a trust and authentication boundary failure, not a compromised software distribution pipeline. In a true supply chain attack, the adversary weaponizes the vendor's product itself to propagate downstream at scale. Here, the blast radius appears constrained to a subset of customers, with no evidence of malicious code being distributed through Vercel's platform to its tenants.

The more accurate framing is that this is an identity-centric supply chain exposure. The OAuth trust model became the attack vector. This is not about code integrity but rather about delegated access and over-permissioned integrations. The takeaway is more concerning than the public disclosure: the modern supply chain is no longer just installed software. It is based on identities, APIs, and AI tooling created by third parties, open source, and sovereign installations. That is where control was lost and the breach occurred.

The question of whether this is a supply chain attack is the wrong frame. Supply chain is becoming a catch-all term that often generates more heat than clarity.
The question every CISO, security team, and engineering leader should be asking right now is how many third-party AI tools in their environment have OAuth access to systems that hold production secrets, and when that access was last reviewed. This is a governance and program design problem, and no amount of platform hardening fixes it if the access decisions themselves were never rigorously made. The breach vector is the signal: a third-party AI tool's OAuth credentials were compromised and used to reach internal Vercel systems. This is the new attack pattern that security teams are not yet fully pricing into their risk models. AI tools are being onboarded at machine speed, and the access governance frameworks designed to evaluate those integrations are running at human speed. Until that gap closes, every OAuth token granted to an AI productivity tool is a potential pivot point into something much more critical.

This incident is the latest in a growing pattern of OAuth 2.0-based supply chain attacks. From the Chrome extension breaches in late 2024 to the Entra ID consent injection attacks, attackers are increasingly targeting the trust relationships built into OAuth 2.0 rather than breaking through traditional perimeters.

The initial compromise was an infostealer, not a sophisticated exploit. A Context.ai employee with administrative privileges -- using the [email protected] account, described as belonging to a "core member" of the team -- was infected with Lumma Stealer in February 2026. According to Hudson Rock, the employee had been downloading malicious Roblox "auto-farm" scripts. The malware exfiltrated browser credentials, session cookies, and OAuth tokens, including credentials for Google Workspace, Supabase, Datadog, and Authkit. The attacker used a compromised OAuth token to access Vercel's Google Workspace, gaining entry to certain internal systems and environment variables that were not marked as "sensitive."
The OAuth application involved has been identified by its client ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. The application's Chrome extension was removed from the Chrome Web Store on March 27, and Google subsequently deleted the account. Hudson Rock had possessed the compromised credential data over a month before Vercel confirmed the breach, highlighting the detection gap that allowed the supply chain escalation to succeed. The stolen data is now being sold by the ShinyHunters group. Regardless of what tooling you use, the Vercel incident points to several important practices. It is a clear example of how identity infrastructure, in this case OAuth 2.0 trust relationships, has become a primary attack vector. The attacker didn't exploit a zero-day or brute-force a password. They compromised a third-party app and inherited the trust that employees had already granted. This pattern will continue. As organizations adopt more SaaS tools, AI assistants, and third-party integrations, the sprawl of OAuth grants grows. Defending against these threats requires continuous visibility into your OAuth app landscape, automated detection of risky scopes, and the ability to revoke access at speed.
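That closing recommendation, continuous visibility into OAuth grants plus automated detection of risky scopes, can be sketched in a few lines. The scope list, app names, and inventory format below are illustrative assumptions, not details from the incident; in practice the inventory would be exported from an identity provider such as the Google Workspace Directory API's token listing.

```python
# Minimal triage of OAuth grants: flag apps holding high-risk scopes.
# The scope set and sample inventory are illustrative, not from the incident.

HIGH_RISK_SCOPES = {
    "https://www.googleapis.com/auth/admin.directory.user",  # manage users
    "https://mail.google.com/",                              # full mailbox access
    "https://www.googleapis.com/auth/drive",                 # full Drive access
}

def flag_risky_grants(grants):
    """Return (app_name, risky_scopes) pairs for grants touching high-risk scopes."""
    findings = []
    for grant in grants:
        risky = sorted(set(grant["scopes"]) & HIGH_RISK_SCOPES)
        if risky:
            findings.append((grant["app"], risky))
    return findings

# Hypothetical inventory: two integrations, one over-permissioned.
inventory = [
    {"app": "notes-sync", "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
    {"app": "ai-assistant", "scopes": ["https://mail.google.com/",
                                       "https://www.googleapis.com/auth/drive"]},
]

for app, scopes in flag_risky_grants(inventory):
    print(f"REVIEW: {app} -> {', '.join(scopes)}")
```

Scope-based flagging is only a starting point; pairing it with last-reviewed timestamps and automated revocation of stale grants is what closes the "machine speed versus human speed" gap the commentary describes.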

Vercel
Security Magazine 2d ago
Security Leaders Discuss the Vercel Breach

Amazon Deepens AI Push With $25 Billion Cloud Investment in Anthropic - Tekedia

Amazon has unveiled plans to invest up to $25 billion in Anthropic, tightening its grip on one of the fastest-growing artificial intelligence firms while locking in a long-term cloud partnership that could reshape the economics of the AI infrastructure race. The agreement is structured in phases, with Amazon committing $5 billion upfront and up to $20 billion more tied to commercial milestones. The latest move builds on roughly $8 billion already invested, bringing Amazon's total potential exposure to Anthropic close to $33 billion. In return, Anthropic has committed to spending more than $100 billion over the next decade on Amazon's cloud technologies. This pledge effectively secures a major anchor tenant for Amazon Web Services (AWS) at a time when demand for AI computing capacity is surging. The deal marks a pivot. While Amazon has struggled to generate significant traction around its in-house AI models, such as Nova, it has doubled down on its role as a foundational infrastructure provider powering the broader AI ecosystem. The company expects to spend about $200 billion in capital expenditure this year alone, largely directed toward expanding data centers, chips, and networking capacity to meet AI demand. Chief executive Andy Jassy framed the partnership as validation of Amazon's investment in custom silicon. "Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it's in such hot demand," Jassy said in the announcement. Anthropic's decision to build on Amazon-designed Trainium chips, including the upcoming Trainium2 and Trainium3, "reflects the progress we've made together on custom silicon," he added. Anthropic said it expects to deploy roughly one gigawatt of compute capacity using these chips by the end of the year, with longer-term ambitions of scaling to five gigawatts.
That level of infrastructure is comparable to the energy footprint of large industrial facilities, highlighting the growing intensity of AI model training and deployment. The partnership is mutually reinforcing: it gives Anthropic access to vast, dedicated computing resources at a time when competition for chips and data center capacity is a key constraint in AI development. Amazon, in turn, secures long-term utilization of its cloud infrastructure and strengthens its position against rivals in the high-stakes battle for AI workloads. The move also points to a broader pattern among Big Tech firms, which are increasingly pairing large equity investments with cloud commitments to lock in strategic relationships. Earlier this year, Amazon said it would invest up to $50 billion in OpenAI, the developer of ChatGPT, signaling a willingness to back multiple players rather than rely solely on internal capabilities. For Anthropic, the funding arrives at a critical juncture. The company, known for its Claude models, is pushing aggressively into advanced applications such as coding and design, areas where performance gains can translate directly into enterprise adoption. Securing reliable, scalable compute is essential to maintaining that momentum. The scale of the agreement also highlights the shifting economics of AI. Training and running frontier models now require billions of dollars in infrastructure, pushing startups to align closely with cloud providers. These partnerships blur the line between customer and investor, creating ecosystems where capital, compute, and software development are tightly integrated. Amazon's strategy is clear: even if its proprietary models lag competitors in visibility, it can still capture a significant share of value by supplying the infrastructure that underpins the entire industry.
By promoting its Trainium chips as a cost-effective alternative to more established options, Amazon is attempting to differentiate itself in a market dominated by a small number of hardware providers. The deal also intensifies competition with other cloud giants, each vying to secure exclusive or semi-exclusive relationships with leading AI developers. Control over these partnerships can influence not just revenue growth but also the direction of technological innovation, as model developers optimize their systems around specific hardware and cloud environments. Amazon shares rose about 2.7% in extended trading following the announcement, reflecting investor confidence in the company's infrastructure-led approach to AI. What further emerges from the agreement is a clearer picture of how the AI race is being financed and built. It is no longer just about developing the most advanced models, but about securing the capital, compute, and partnerships required to sustain them at scale. In that equation, Amazon is positioning itself as an indispensable backbone, even as others compete for the spotlight.

Anthropic
Tekedia 2d ago

AI "Circle Jerk" Returns: Anthropic To Spend $100 Billion On AWS In Amazon Deal

Circular AI vendor financing is back, and in a big way... As we noted last fall, when we walked readers through the stunning math behind what we called the AI "circle jerk," this latest iteration centers on Amazon and Anthropic, with the left-leaning AI company now committing to spend more than $100 billion over the next decade on AWS infrastructure. Monday evening's announcement covers multiple generations of Trainium chips and tens of millions of Graviton cores. Amazon plans to invest $5 billion in Anthropic and up to an additional $20 billion in the future. "Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI," Amazon CEO Andy Jassy said in a statement. Anthropic's Claude Platform will be directly available in AWS accounts. Over 100,000 customers already run Claude models on AWS, and the companies are continuing to collaborate on Project Rainier, a massive AI compute cluster built around nearly half a million Trainium2 chips. The bigger message here is that both companies are locking in long-term deals for chips, cloud infrastructure, and AI deployment. Anthropic said it will bring online nearly 1 gigawatt of combined Trainium2 and Trainium3 capacity by year's end. The company also noted a "sharp rise" in enterprise and developer usage of Claude, which has led to "inevitable strain" on its infrastructure, impacting reliability and performance, and said the Amazon deal will quickly expand its available capacity. "Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand," Anthropic CEO Dario Amodei said in a statement.
"Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS." We return to the circular AI vendor-financing scheme among a small cluster of firms, including Nvidia, AMD, Broadcom, Microsoft, Oracle, CoreWeave, and OpenAI, which we previously called a "circle jerk." Now the pattern is reappearing in the Amazon-Anthropic deal. Separate but related, President Trump told CNBC earlier today that he had a meeting with Anthropic: "They came to the White House a few days ago, and we had some very good talks with them, and I think they're shaping up. They're very smart... I think we'll get along with them just fine." Trump was referring to the fallout between the Pentagon and Anthropic over the use of AI models for warfare.

Anthropic
Zero Hedge 2d ago

Amazon and Anthropic expand ties with a new $5 billion investment

Amazon and Anthropic have strengthened their partnership with a new $5 billion investment. This is part of a broader agreement that will see Amazon invest a total of $25 billion in Anthropic, including an additional $20 billion tied to commercial milestones. This comes in addition to the roughly $8 billion Amazon has already committed to Anthropic. Aside from this, Anthropic has agreed to spend more than $100 billion over the next decade on Amazon Web Services (AWS), making AWS its primary cloud and training partner for large-scale AI workloads. The companies have been teaming up since 2023, with more than 100,000 customers currently running Anthropic's Claude models on AWS. Claude is also one of the most widely used model families on Amazon Bedrock, AWS's platform for accessing third-party and proprietary AI models. The expanded collaboration centers on infrastructure, with Anthropic using multiple generations of Amazon's custom AI chips -- Trainium2, Trainium3, Trainium4, and future versions -- along with tens of millions of Graviton CPU cores to train and deploy its models. As part of the agreement, Anthropic will secure up to 5 gigawatts of compute capacity. The companies are also jointly working on Project Rainier, an AI compute cluster built using nearly half a million Trainium2 chips, which is currently being used to train and run Claude models and future versions. "Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it's in such hot demand," said Andy Jassy, CEO of Amazon. "Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI."
"Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand," said Dario Amodei, CEO and co-founder of Anthropic. "Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS." The deal comes amid increasing competition among Big Tech companies to secure AI models and the infrastructure that powers them. Microsoft has committed more than $10 billion to OpenAI, integrating its models across Azure, while Google has also invested in Anthropic and is developing its own models. Anthropic's commitment to AWS secures long-term infrastructure demand for Amazon, as AI companies scale compute usage. The deal also includes expansion of inference capabilities across Asia and Europe to support growing international demand for Claude models.

Anthropic
The American Bazaar 2d ago

Trump says Anthropic is 'shaping up,' open to deal with Pentagon

Washington - U.S. President Donald Trump said on Tuesday that Anthropic was "shaping up" in the eyes of his administration, opening the door for the AI company to reverse its blacklisting at the Pentagon. Trump directed the government in February to stop working with Anthropic. The Pentagon followed up by declaring the firm a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown over guardrails for how the military could use its AI tools. The company disputes that characterization and filed suit against the Defense Department in March over the determination. Anthropic CEO Dario Amodei met with White House officials last week to attempt to repair the relationship. The White House called the meeting productive and constructive. "They came to the White House a few days ago, and we had some very good talks with them," Trump told CNBC's "Squawk Box" on Tuesday. "And I think they're shaping up. They're very smart, and I think they can be of great use. I like smart people ... I think we'll get along with them just fine." When asked if a deal was on the horizon with the Pentagon, Trump said, "It's possible. We want the smartest people." Anthropic, asked for comment, referred to its Friday statement describing its White House meeting as productive and focused on how the two "can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The apparent rapprochement comes weeks after Anthropic unveiled Mythos, its most advanced AI tool, with a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts have said. Anthropic has said Claude Mythos Preview will not be made generally available. Instead, the company announced Project Glasswing, in which it invited major tech companies, cybersecurity vendors and U.S. 
bank JPMorgan Chase, along with several dozen other organizations, to privately evaluate the model and prepare defenses accordingly. Anthropic Co-founder Jack Clark said last week the firm was discussing its frontier AI model Mythos with the Trump administration without providing details. ___ Reporting by Jacob Bogage and Alexandra Alper This article originally appeared on The Detroit News: Trump says Anthropic is 'shaping up,' open to deal with Pentagon

Anthropic
Yahoo 2d ago

Trump Is Warming Up to Anthropic Again, Says the Company Could 'Be of Great Use'

On Tuesday morning, the President told CNBC that his administration had "some very good talks" with Anthropic and that a new deal allowing the Pentagon to use Anthropic's models again could be "possible." "I think they're shaping up, they're very smart, and I think they can be of great use," Trump said of Anthropic. "I think we will get along with them just fine." Reports detailing various stages of talks between the Administration and Anthropic have been rolling out over the past week, to the surprise of many, considering the very public fallout just a month prior. The Department of Defense officially designated Anthropic a supply chain risk in early March, after Anthropic refused to agree to the Pentagon's demands during contract renegotiations. The administration and the AI giant couldn't see eye to eye on terms regarding the use of AI in mass domestic surveillance and autonomous weapons. The talks fell through just hours before the United States began striking Iran. The designation was unprecedented: It was the first time an American company was deemed a risk to national security and effectively banned from the federal government. What finally convinced the Trump administration to back down from its attack on Anthropic may have been Mythos, the company's buzzy, mysterious new AI model. Mythos was first unveiled in a leak in late March, in which it was deemed too powerful to release to the public. Shortly after, the company confirmed the leak and its allegedly unparalleled cyber capabilities. It announced that the model would not be made public immediately, fearing its potential for abuse by hackers. The model can allegedly identify and exploit software vulnerabilities at an unprecedented scale. Instead of a public rollout, some financial and tech world titans and governments would get a first look via a limited preview, under an initiative the company is calling Project Glasswing. 
Most of the organizations that were granted access remain unnamed by Anthropic, but a limited list includes Nvidia, Google, JPMorgan Chase, and Amazon. After reports detailed European governments reacting in fear to the preview, many were left wondering if or when the U.S. would chime in. According to a Bloomberg report from last week, Anthropic briefed senior U.S. officials on the offensive and defensive cyber applications of Mythos before it launched its limited release to the rest of its corporate and government partners. The report also claimed that the Office of Management and Budget was setting up protections in Mythos and would allow agencies, including the Department of Defense, to begin using a version of the model in the next couple of weeks. That report was preceded by a Reuters dispatch claiming that Treasury Secretary Scott Bessent and Fed Chair Jerome Powell were briefing major American financial institutions on the potential risks of Mythos, and a Politico report that the Commerce Department's Center for AI Standards and Innovation had already begun actively testing Mythos' abilities, even before Anthropic confirmed the model's existence. Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent on Friday. Axios also cited an unnamed source saying that every agency except for the Pentagon was eager to use Anthropic's tools. But a subsequent Axios report from Sunday claimed that the National Security Agency, which is overseen by the Department of Defense, is already using Mythos. One source even said that the model was being used more widely throughout the entire DoD. While Trump initially took a strict stance against Anthropic, he has so far been more a friend than a foe to the AI world. 
Under Trump's second presidency, the federal government has collaborated closely with Silicon Valley, with its tech overlords inking lucrative deals with the Administration and accompanying the President on foreign trips. In his interview with CNBC, Trump also called his posse of American tech executives "the smartest people in the world," and name-dropped OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, and now-former Apple CEO Tim Cook as examples of the geniuses with whom he surrounds himself.

Anthropic
Gizmodo 2d ago

Anthropic Launches Opus 4.7 AI Model, Focusing on Coding, Visual Tasks, and Cybersecurity Guardrails -- Campus Technology

Anthropic has introduced Claude Opus 4.7, an updated large language model that it says outperforms its predecessor on software engineering tasks, image analysis, and multi-step autonomous work, while maintaining pricing at $5 per million input tokens and $25 per million output tokens. The model is now generally available across Anthropic's own products and through its API, as well as on Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. Anthropic said the upgrade delivers the most pronounced gains on demanding coding tasks. Users report being able to hand off difficult coding work that previously required close supervision, with the new model handling complex, long-running tasks with greater consistency and paying closer attention to instructions. The company also said the model can verify its own outputs before reporting results to users, a behavior it described as new relative to earlier versions. On vision, Opus 4.7 can now accept images up to 2,576 pixels on the long edge, roughly 3.75 megapixels, more than three times the resolution supported by prior Claude models. Anthropic said this expands the model's usefulness for tasks requiring fine visual detail, including reading dense screenshots and extracting data from complex diagrams. Perhaps the most notable aspect of the release is its role in Anthropic's broader safety rollout strategy. The company recently announced Project Glasswing, which highlighted both the risks and potential benefits of AI for cybersecurity, and stated that it would keep its more powerful Claude Mythos Preview model restricted while testing new cyber safeguards on less-capable systems first. Opus 4.7 is the first such model. Anthropic said it experimented during training by selectively reducing Opus 4.7's cybersecurity capabilities and is releasing the model with automatic safeguards designed to detect and block requests that indicate prohibited or high-risk cybersecurity uses. 
The company added that findings from this deployment will inform its eventual broader release of what it calls "Mythos-class" models. Security professionals seeking to use the new model for legitimate purposes, such as vulnerability research or penetration testing, can apply through a new Cyber Verification Program. Regarding alignment, Anthropic's evaluations show that Opus 4.7 exhibits low rates of concerning behavior, such as deception, sycophancy, and cooperation with misuse, and performs better than its predecessor in honesty and resistance to malicious prompt-injection attacks. However, the company acknowledged the model is modestly weaker in some areas, including a tendency to give overly detailed harm-reduction advice on controlled substances. Anthropic's internal alignment assessment described the model as "largely well-aligned and trustworthy, though not fully ideal in its behavior," and noted that Mythos Preview remains the best-aligned model the company has trained. Developers upgrading from Opus 4.6 should account for two cost-related changes. Opus 4.7 uses an updated tokenizer that can map the same input to roughly 1.0 to 1.35 times as many tokens, depending on content type. The model also produces more output tokens at higher effort levels, particularly in later turns of agentic tasks, because it engages in more reasoning. Anthropic said users can manage token consumption through an effort parameter, task budgets, or by prompting the model to be more concise. Alongside the model release, Anthropic introduced a new "xhigh" effort level, sitting between the existing "high" and "max" settings, giving developers finer control over the tradeoff between reasoning depth and latency. In Claude Code, the default effort level has been raised to "xhigh" for all plans. The company also launched task budgets in public beta on its API platform, and added a new "/ultrareview" command in Claude Code that reads through code changes and flags bugs and design issues.
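The cost implications of the tokenizer change can be made concrete with back-of-the-envelope arithmetic using the article's published prices and its stated worst-case 1.35x token factor; the request sizes in the sketch are invented for illustration.

```python
# Rough cost impact of the Opus 4.7 tokenizer change described in the article.
# Prices per the article: $5 per million input tokens, $25 per million output tokens.

PRICE_IN = 5.00 / 1_000_000    # dollars per input token
PRICE_OUT = 25.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens, output_tokens, tokenizer_factor=1.0):
    """Estimate one request's cost, scaling input tokens by the tokenizer factor
    (1.0 to 1.35 per the article, depending on content type)."""
    return input_tokens * tokenizer_factor * PRICE_IN + output_tokens * PRICE_OUT

# Hypothetical request: 100k input tokens (counted with the old tokenizer), 8k output.
baseline = request_cost(100_000, 8_000)     # same tokenizer behavior as before
worst = request_cost(100_000, 8_000, 1.35)  # worst-case remapping of the same input
print(f"baseline ${baseline:.3f} vs worst case ${worst:.3f}")
# prints: baseline $0.700 vs worst case $0.875
```

Note this only models the input-side remapping; the article's other warning, that higher effort levels produce more output tokens in later agentic turns, would raise the output term as well.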

Anthropic
Campus Technology 2d ago

Turmoil Within: Exploring the Science of Chaos in the Heart and Brain

A groundbreaking study from Kyoto University has unveiled a novel approach to understanding the intricate connection between brain activity and heart function by focusing on the chaotic fluctuations present in heartbeat variability. Unlike traditional measures of heart rate variability (HRV), which have long been used to gauge autonomic nervous system performance, this research spotlights chaos theory and nonlinear dynamics to decode subtle signs of cognitive engagement embedded in cardiac rhythms. This innovative work holds promise for revolutionizing how we non-invasively monitor brain-heart interactions during mental tasks. Heart rate variability has historically been studied through time-domain and frequency-domain methods, capturing the intervals between heartbeats to infer autonomic regulation. However, these linear approaches often fall short in reflecting the complex, higher-order processes of the central nervous system, especially during cognitive exertion. The Kyoto team hypothesized that the underlying chaotic dynamics within heartbeat sequences, often dismissed as noise, may instead carry physiologically meaningful signatures indicative of brain activity under cognitive load. To test this hypothesis, the researchers subjected human participants to a series of cognitive challenges designed to activate executive functions and mental processing. Heartbeat data collected during these tasks were meticulously analyzed both by conventional HRV indices and by chaos-based metrics that quantify the unpredictability and nonlinear patterns inherent in the heart's rhythm. The contrast in findings was striking: conventional measures remained largely unchanged or inconsistent, while chaos quantifiers exhibited robust, reproducible shifts closely tied to cognitive task engagement. These results suggest that chaotic fluctuations within heartbeat variability serve as a sensitive and reliable window into the brain's influence on cardiac function. 
This phenomenon, known as brain-heart coupling, reflects the bidirectional communication pathways between the central nervous system and cardiovascular system, mediated through complex neural, hormonal, and autonomic mechanisms. The Kyoto study establishes chaotic dynamics not merely as a random artifact but as a purposeful physiological marker woven into the fabric of systemic integration. The nonlinear analytical tools applied in this research stem from chaos theory, a branch of theoretical physics dedicated to the study of dynamical systems that appear random yet follow deterministic laws. By leveraging metrics that capture the fractal and entropic properties of heartbeat intervals, the team quantified how cognitive efforts systematically modulate cardiovascular control. This approach unearths layers of regulation unseen by traditional HRV metrics, which commonly rely on assumptions of stationarity and linearity. One of the most compelling implications of this research is its potential to enrich clinical and applied neuroscience fields. Continuous, non-invasive monitoring of chaotic heartbeat dynamics could one day provide real-time insights into an individual's cognitive state, mental workload, or emotional stress without necessitating cumbersome neuroimaging or invasive procedures. This may open doors for improved mental health diagnostics, stress management, neurorehabilitation, and even enhancements in human-machine interfacing where adaptive systems respond to subtle physiological cues. Collaboration with Toshiba Information Systems Corporation was pivotal in this project, bringing to bear advanced signal processing and data analysis expertise to detect minute nonlinear patterns in physiological datasets. This interdisciplinary fusion underscores the growing convergence of engineering principles and life sciences in tackling complex biomedical problems. 
It also demonstrates the value of integrating computational sophistication with experimental physiology to push the boundaries of scientific discovery. Looking beyond the laboratory, there is enormous appeal in validating these chaotic heartbeat signatures across broader populations and varied clinical contexts. The Kyoto research team is actively pursuing international partnerships to explore the utility of their findings within intensive care units, neurological disorder management, and psychiatric treatment frameworks. Such collaborations aim to cement chaos-based heart rate variability as a universal biomarker bridging brain and body function. From a theoretical standpoint, this study challenges prevailing notions regarding the origin and interpretation of variability in heartbeat intervals. Rather than relegating the observed fluctuations to random noise or external disturbances, the data reframes them as integral components of a complex adaptive system. This reconceptualization invites new perspectives on how physiological networks self-organize and maintain homeostasis under cognitive demands. In sum, this pioneering work from Kyoto University not only advances the frontier of heart rate variability research but also provides a transformative lens through which to view the synchronous dance of mind and heart. By harnessing chaos theory's analytical power, researchers have unveiled a quantitative marker that captures the essence of mental exertion as it resonates through the cardiovascular system. This breakthrough paves the way for innovative applications that monitor and interpret the ever-changing landscape of human cognition and physiology. The implications for personalized medicine are profound. Imagine wearable devices capable of detecting cognitive strain or emotional upheaval through changes in heartbeat chaotic patterns, alerting users to take preventative action before symptoms escalate. 
Such technology could revolutionize stress monitoring and cognitive workload management, and ultimately enhance quality of life by aligning physiological signals with mental well-being. Moreover, this study enriches our understanding of neurocardiology and the integrative biology of human function. The heart, long symbolic of emotion and vitality, reveals itself here as a dynamic organ whose rhythm is finely tuned by the brain's cognitive states. Unraveling this complexity through chaos enables a more nuanced appreciation of health, disease, and the continuity between mind and body. As science continues to navigate the fine line between order and disorder, findings like these illuminate that what appears as chaotic may hold the key to deeper biological truths. The Kyoto University team's seminal research underscores the breathtaking complexity of physiological regulation and heralds a new era in system-level biomedical investigations.

Subject of Research: People
Article Title: Chaotic fluctuations mark the sign of mental activity in task-based heart rate variability
News Publication Date: 24-Mar-2026
Web References: http://dx.doi.org/10.1038/s41598-026-43385-z
References: Chaotic fluctuations mark the sign of mental activity in task-based heart rate variability, Scientific Reports, 2026, DOI: 10.1038/s41598-026-43385-z
Image Credits: KyotoU / Toshiba Information Systems Japan Corporation
Keywords: Chaos theory, Heart rate, Cognitive function, Biomedical engineering
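The "chaos quantifiers" the article describes only generically can be illustrated with sample entropy, one widely used nonlinear HRV metric. This toy sketch and its made-up RR intervals (in milliseconds) are illustrative assumptions; the study's actual metrics, parameters, and data are not specified in this summary.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m templates that
    match within tolerance r*std, and A counts length-(m+1) matches."""
    n = len(series)
    mean = sum(series) / n
    tol = r * (sum((x - mean) ** 2 for x in series) / n) ** 0.5

    def count_matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol
        )

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly alternating rhythm is more predictable (lower entropy) than an
# irregular one, mirroring the regular-vs-chaotic contrast such metrics capture.
regular = [800, 810, 800, 810, 800, 810, 800, 810, 800, 810]
irregular = [800, 730, 865, 790, 940, 705, 850, 760, 910, 780]
print(sample_entropy(regular) < sample_entropy(irregular))  # prints: True
```

Real analyses run metrics like this over thousands of beats and pair them with fractal measures; the point here is only that such quantifiers respond to irregularity that linear HRV statistics can miss.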

SynchronCHAOS
Scienmag: Latest Science and Health News 2d ago

Trump hints at ending Anthropic blacklisting, calls AI company 'very smart' after White House meeting | Today News

The remarks mark a notable shift after the administration in February directed federal agencies to halt work with Anthropic. The Pentagon subsequently labelled the firm a supply-chain risk, effectively cutting it off from defense-related engagements. US President Donald Trump on Tuesday (April 21) indicated that artificial intelligence firm Anthropic could be back in favor with his administration, suggesting a potential reversal of its earlier blacklisting by the Pentagon. Trump, speaking with CNBC, said the company was "shaping up" following recent discussions with White House officials. "They came to the White House a few days ago, and we had some very good talks with them," Trump said. "And I think they're shaping up. They're very smart, and I think they can be of great use. I like smart people ... I think we'll get along with them just fine." The move followed a dispute over safeguards governing how the military could deploy Anthropic's AI tools. The company has strongly contested the designation and filed a lawsuit against the Defense Department in March. The thaw comes shortly after Anthropic unveiled its most advanced AI system, Mythos, which experts say has an unprecedented ability to detect cybersecurity vulnerabilities -- and potentially exploit them. The company has limited access to the tool, launching it under a controlled initiative called Project Glasswing. The programme involves select partners, including JPMorgan Chase, as well as major tech firms and cybersecurity organisations, to test the system and build defensive measures.

Anthropic
mint2d ago

Amazon deepens AI partnership with Anthropic with $5B investment

Amazon.com Inc (NASDAQ:AMZN) has announced a $5 billion investment in AI company Anthropic, alongside plans for up to an additional $20 billion in future funding tied to commercial milestones. The funding builds on an earlier investment of approximately $8 billion and is part of a broader deepening of the companies' strategic partnership focused on large-scale artificial intelligence development and infrastructure. Alongside the investment, Anthropic will expand its use of Amazon's cloud and custom AI hardware, including access to up to 5 gigawatts of capacity based on current and next-generation Trainium chips. The chips are designed by Amazon specifically for high-performance AI training and inference workloads. The agreement also includes broader use of Amazon's Graviton processors, which are widely deployed across cloud computing applications. Anthropic's Claude AI models will continue to be available through Amazon Web Services (AWS), including via Amazon Bedrock, Amazon's managed service that provides access to foundation models. According to the companies, more than 100,000 customers already use Claude models through AWS, making it one of the most widely adopted model families on the platform. As part of the expanded collaboration, Anthropic will also integrate its Claude developer experience more directly into AWS, allowing customers to access Claude tools using their existing AWS accounts and infrastructure controls. This is intended to simplify deployment by eliminating the need for separate credentials or billing arrangements. The companies also highlighted ongoing joint infrastructure efforts, including "Project Rainier," a large-scale AI compute cluster built with nearly half a million Trainium2 chips. The system is being used to train and deploy Claude models and is expected to support future iterations as demand for generative AI increases. 
Anthropic has separately committed to spending more than $100 billion over the next decade on AWS infrastructure, including current and future generations of Trainium chips and large-scale CPU capacity. The companies also plan to expand international availability of AI inference services across regions including Europe and Asia. Shares of Amazon traded up 1.3% at about $251 following the announcement.

Anthropic
Proactiveinvestors NA2d ago

Trump says Anthropic is 'shaping up,' open to deal with Pentagon

Anthropic CEO Dario Amodei is to meet the White House chief of staff on April 17, amid the AI startup's dispute with the Pentagon, Axios reported. Washington - U.S. President Donald Trump said on Tuesday Anthropic was "shaping up" in the eyes of his administration, opening the door to a reversal of the AI company's blacklisting at the Pentagon. Trump directed the government in February to stop working with Anthropic. The Pentagon followed up by declaring the firm a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown over guardrails for how the military could use its AI tools. The company disputes that characterization and filed suit against the Defense Department in March over the determination. Anthropic CEO Dario Amodei met with White House officials last week to attempt to repair the relationship. The White House called the meeting productive and constructive. "They came to the White House a few days ago, and we had some very good talks with them," Trump told CNBC's "Squawk Box" on Tuesday. "And I think they're shaping up. They're very smart, and I think they can be of great use. I like smart people ... I think we'll get along with them just fine." Asked whether a deal with the Pentagon was on the horizon, Trump said, "It's possible. We want the smartest people." Anthropic, asked for comment, referred to its Friday statement describing its White House meeting as productive and focused on how the two "can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The apparent rapprochement comes weeks after Anthropic unveiled Mythos, its most advanced AI tool, which experts say has a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them. Anthropic has said Claude Mythos Preview will not be made generally available.
Instead, the company announced Project Glasswing, in which it invited major tech companies, cybersecurity vendors and U.S. bank JPMorgan Chase, along with several dozen other organizations, to privately evaluate the model and prepare defenses accordingly. Anthropic co-founder Jack Clark said last week, without providing details, that the firm was discussing its frontier AI model Mythos with the Trump administration.
Reporting by Jacob Bogage and Alexandra Alper

Anthropic
The Detroit News2d ago