News & Updates

The latest news and updates from companies in the WLTH portfolio.

Chaos as rainstorm damages Abuja transport project

A heavy rainstorm yesterday afternoon wreaked havoc in parts of the Federal Capital Territory (FCT), destroying one of the newly constructed bus terminals at Kugbo and triggering a massive traffic gridlock along the busy Abuja-Keffi Road. The downpour, which began at about 3:30 p.m., was accompanied by strong winds that ripped off the roof of the yet-to-be-functional Kugbo bus terminal, scattering debris across the highway and obstructing vehicular movement on both sides of the dual carriageway.

Motorists and commuters were left stranded for hours as traffic built up rapidly, with many forced to seek alternative routes. The situation was compounded by the activities of scavengers who, despite the heavy rain, trooped to the scene to cart away dislodged roofing materials and other fittings from the damaged structure.

The Kugbo terminal is one of three bus terminals recently developed by the Federal Capital Territory Administration (FCTA) as part of efforts to modernise the city's transport system. The project, supervised by the FCT Minister, Nyesom Wike, was inaugurated in June 2025 by President Bola Ahmed Tinubu. The terminals, located in Kugbo, Mabushi and the Central Business District, form part of a broader N51 billion transport infrastructure initiative designed to streamline public transportation, reduce congestion and enhance commuter safety in the nation's capital. Built by Planet Projects Nigeria Limited, the facilities are designed to accommodate over 10,000 passengers daily, with provision for about 120 buses and taxis at each location. They are also equipped with modern amenities, including waiting areas, restrooms and security systems such as closed-circuit television (CCTV), aimed at curbing criminal activities, particularly the menace of "one chance" robberies. Although construction of the terminals has been largely completed, they are yet to commence full operations, pending approval by the Federal Executive Council (FEC) for their management under a public-private partnership arrangement.

Meanwhile, Wike ordered the immediate deployment of security personnel to the affected area to prevent a breakdown of law and order and to ensure the free flow of traffic along the corridor. In a statement issued by his Senior Special Assistant on Public Communications and Social Media, Lere Olayinka, the minister confirmed that the windstorm damaged parts of the Kugbo Bus Terminal and caused minor damage to the Nyanya pedestrian bridge and some nearby buildings. He noted that preliminary reports indicated that no lives were lost and no vehicles were damaged during the incident. Wike also directed that urgent steps be taken to repair the damaged sections of the terminal and other affected infrastructure, assuring residents that the FCTA would act swiftly to restore normalcy.

The destruction of part of the Kugbo terminal has, however, raised fresh concerns over the durability of the structures and the need for stringent quality assurance measures in public infrastructure projects. The FCTA had conceived the terminals as part of a comprehensive strategy to sanitise the capital's transport system by eliminating roadside pick-ups and drop-offs, improving traffic flow and integrating urban mobility with other initiatives such as the Abuja rail system.

The Guardian · 25d ago

Anthropic announces 'Project Glasswing' in alliance with tech giants to strengthen global cybersecurity

New Delhi [India], April 8 (ANI): Anthropic announced Project Glasswing, a new initiative that brings together Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to "secure the world's most critical software." The collaborative effort comes as AI models reach a level of coding capability that allows them to find and exploit software vulnerabilities more effectively than most humans.

According to a statement by Anthropic, the project was formed because of capabilities observed in Claude Mythos Preview, a general-purpose, unreleased frontier model. According to the company, this model has already identified thousands of high-severity vulnerabilities in major operating systems and web browsers.

"AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient," said Anthony Grieco, SVP and Chief Security and Trust Officer at Cisco.

Anthropic committed up to USD 100 million in usage credits for the Mythos Preview model to support the project and 40 additional organisations. The statement noted that the "current global financial cost of cybercrime is estimated at roughly USD 500 billion annually." The project aims to use AI for defensive purposes, such as local vulnerability detection and penetration testing, before these capabilities proliferate to unsafe actors.

"At AWS, we build defences before threats emerge, from our custom silicon up through the technology stack. Security isn't a phase for us; it's continuous and embedded in everything we do. We've been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it's already helping us strengthen our code," said Amy Herzog, Vice President and CISO at Amazon Web Services.

As part of the initiative, Anthropic donated USD 2.5 million to Alpha-Omega and OpenSSF and USD 1.5 million to the Apache Software Foundation. The company also engaged in ongoing discussions with US government officials regarding the model's offensive and defensive capabilities.

"As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft," said Igor Tsyganskiy, EVP of Cybersecurity and Microsoft Research at Microsoft.

Anthropic plans to report publicly on the vulnerabilities fixed and improvements made within 90 days. Following the research preview, the model will be available to participants at rates of USD 25 per million input tokens and USD 125 per million output tokens.

"Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It's always been critical that the industry work together on emerging security issues, whether it's post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks," said Heather Adkins, VP of Security Engineering at Google. (ANI)
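For a sense of scale, the per-token rates quoted above can be turned into per-request costs with simple arithmetic. The sketch below is illustrative only: the two prices come from the announcement, while the example token counts are hypothetical values chosen for the calculation.

```python
# Illustrative cost arithmetic for the quoted Mythos Preview pricing:
# USD 25 per million input tokens, USD 125 per million output tokens.
# The example token counts are made up for illustration, not from the article.

INPUT_PRICE_PER_M = 25.0    # USD per 1M input tokens (quoted)
OUTPUT_PRICE_PER_M = 125.0  # USD per 1M output tokens (quoted)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a large code-audit prompt (200k tokens in) with a long report back (20k tokens out).
print(f"${request_cost(200_000, 20_000):.2f}")  # -> $7.50
```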

Asian News International (ANI) · 25d ago

OnePlus Nord 6 comes with a powerful chipset and a colossal 9,000 mAh battery

OnePlus has just officially presented the Nord 6 in India, and the proposal is striking enough to be worth paying attention to. The brand has been in the eye of the storm for months due to rumors about its future, but that has not prevented it from launching what is possibly its most ambitious mid-range phone in years. The question is whether the phone will live up to the hype, and whether Europeans will ever get their hands on it.

The Nord 6 inherits the DNA of the OnePlus Turbo 6, the device that the firm launched in China at the beginning of the year. This rebranding is a common strategy within the Oppo/OnePlus group: Chinese hardware serves as the basis for the international Nord line. It's not necessarily bad, but it's worth keeping in mind.

What the OnePlus Nord 6 offers

The big headline is the battery. The 9,000 mAh silicon-carbon cell promises more than 26 hours of streaming video and more than 16 hours of navigation with Google Maps, all within a body that is just 8.5 mm thick and 217 grams. Charging is 80W SUPERVOOC, with an estimated time of about 70 minutes to complete a full cycle. As a novelty, it incorporates 27W reverse charging, something the Nord 5 offered at a sluggish 5W.

Under the hood we find the Snapdragon 8s Gen 4, with up to 12 GB of RAM and 256 GB of storage in the top version. The screen is a 6.78-inch AMOLED panel with 1.5K resolution and a 165 Hz refresh rate. For cameras, it has a 50 MP Sony LYTIA-600 main sensor with dual-axis optical stabilization, accompanied by an 8 MP ultra wide angle. There is no optical zoom. The front camera is 32 MP and video reaches 4K at 60 fps.

On durability, the jump compared to the Nord 5 is notable: the Nord 6 is certified IP66, IP68, IP69 and IP69K, and exceeds the MIL-STD-810H military standard with 25 drops from 1.5 meters onto steel. Of course, the back and the frame are made of polycarbonate, or in other words, plastic.

OnePlus Nord 6 technical sheet

* Screen: 6.78'' AMOLED 1.5K (2772 × 1272), 165 Hz, 1,800 nits peak, Crystal Guard Glass
* Processor: Qualcomm Snapdragon 8s Gen 4 (4nm), 3.2 GHz Cortex-X5 main core
* RAM/Storage: 8 GB + 256 GB / 12 GB + 256 GB
* Rear cameras: 50 MP Sony LYTIA-600 (f/1.8, dual-axis OIS) + 8 MP ultra wide angle (f/2.2, 112°)
* Front camera: 32 MP
* Battery: 9,000 mAh silicon-carbon, 80W SUPERVOOC charge (~70 min), 27W reverse charge
* Software: OxygenOS 16 (Android 16), 4 OS updates, 6 years of security patches
* Endurance: IP66, IP68, IP69, IP69K, MIL-STD-810H
* Dimensions: 8.5 mm thick, 217 g
* Starting price (India): from ~€430 equivalent (Rs 38,999); estimated European price from €399-449

The problem that no one wants to name: will it reach Europe?

Reports have been circulating for weeks suggesting that OnePlus could leave the EU and North American markets to focus on China and India. The recent resignation of the CEO of OnePlus India has not helped calm things down either. For the moment, the Nord 6 has only been presented in India, and OnePlus has not given details about availability in other markets.

There is another detail that complicates things for Europeans: Western regulations could force a reduction in battery capacity, possibly down to 7,000 mAh, which would take away much of the device's appeal. Despite all this, OnePlus has not canceled its expansion plans, and everything indicates that the Nord 6 will reach European markets. The expected price would be around 399-449 euros, a step above the Nord 5, which would put it in direct competition with the Pixel 10a or the Nothing Phone (4a) Pro.
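As a rough sanity check on the battery and charging figures quoted above, the back-of-the-envelope sketch below converts the 9,000 mAh capacity into watt-hours and compares it with the claimed ~70-minute full charge at 80W. The nominal cell voltage and the decision to ignore charging losses are assumptions of this sketch, not figures from the article.

```python
# Back-of-the-envelope check of the Nord 6 battery/charging claims.
# Assumptions (not from the article): ~3.85 V nominal cell voltage, charging losses ignored.

CAPACITY_MAH = 9_000        # quoted capacity
NOMINAL_V = 3.85            # assumed nominal voltage for a Li-ion/silicon-carbon cell
PEAK_POWER_W = 80           # quoted SUPERVOOC peak power
QUOTED_CHARGE_MIN = 70      # quoted 0-100% charge time

energy_wh = CAPACITY_MAH / 1000 * NOMINAL_V           # ~34.7 Wh stored energy
ideal_min = energy_wh / PEAK_POWER_W * 60              # ~26 min if 80 W were sustained
implied_avg_w = energy_wh / (QUOTED_CHARGE_MIN / 60)   # ~30 W average implied by 70 min

print(f"Cell energy:           {energy_wh:.1f} Wh")
print(f"Ideal time at 80 W:    {ideal_min:.0f} min")
print(f"Implied average power: {implied_avg_w:.0f} W")
```

In other words, the quoted 70-minute figure is consistent with 80W being a peak rather than a sustained rate, which is typical of fast-charging curves that taper as the cell fills.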

GEARRICE · 25d ago

Trump chaos helps Brussels sell more EU debt

Brussels' sales pitch goes like this: The EU's slow, consensus-driven decision-making is a bastion of stability in a Trumpian world that's upended the global order. You can buy into that stability by investing in eurobonds.

The pitch is working. Money managers from the U.K., Asia, the Middle East, Africa, and Oceania snapped up 43 percent of the eurobonds that the Commission put up for auction since the start of 2026, according to data seen by POLITICO. That's an increase of 8 percentage points from the average of the last six years, putting the EU's budget commissioner, Piotr Serafin, in a favorable position ahead of his road show in mid-April to sell eurobonds to investors based in Hong Kong, Malaysia and Singapore. The Commission issued €52 billion in bonds since the start of 2026, up from €44 billion in the same period in 2025.

"There is growing demand for 'Europe,' for its alignment with the respect of the rules-based international order and values. And hence the increased demand for EU bonds," a senior EU official, who was granted anonymity to speak freely, told POLITICO.

The eurozone's bailout fund and its predecessor -- the European Stability Mechanism and the European Financial Stabilisation Mechanism -- have encountered a similar trend. They have jointly issued €566 billion since 2010 and sold record levels of debt to non-EU countries in 2025, according to ESM data presented in January.

Since the U.S. and Israel began bombing Iran in late February, central banks, governments, and international investors have sold more than $80 billion net in U.S. Treasurys. On the other side of the Atlantic, the EU is harnessing these events to burnish its credentials as a safe haven for worried foreign investors.

"EU leaders have been pushing that Europe is predictable in terms of its policy, and that in today's current, global geopolitical environment, that is something that a lot of investors and market participants will pick up on," said Ken Egan from the KBRA credit rating agency.

POLITICO · 25d ago

What is Anthropic's Project Glasswing?

Anthropic launches Project Glasswing to spot cyber issues with AI

Anthropic has announced Project Glasswing, an initiative aimed at reducing cybersecurity risk in the AI era by using an unreleased frontier model -- Claude Mythos Preview -- to find vulnerabilities and improve defenses. The effort is positioned as a coalition-based approach: Anthropic pairs its preview model with a broad set of partner organizations that can run coordinated testing.

Anthropic's Mythos Preview is described as capable of discovering vulnerabilities at scale. In the reporting, the company says it has found thousands of high-severity issues, including some across major operating systems and web browsers. Anthropic has also said it would not release Mythos Preview publicly because doing so could raise misuse concerns.

Project Glasswing matters because it treats cybersecurity evaluation as an adversarial exercise rather than a purely internal QA step. Instead of waiting for attackers to discover weaknesses in widely deployed software, the program seeks to have partner organizations validate findings and harden systems while the testing can still be contained.

The reported partner ecosystem includes major industry names spanning cloud, security, and platform providers, including AWS, Apple, Google, Microsoft, Nvidia, and multiple cybersecurity and networking firms. As AI tooling becomes more capable at code generation and vulnerability discovery, traditional defensive processes can lag behind. Glasswing is designed to narrow that gap by using frontier-model capability in a managed, partner-led testing framework.

AllToc · 25d ago

Project Glasswing: Anthropic teams up with Apple, Google to counter AI-driven cyber threats with AI

Artificial intelligence is rapidly becoming a key weapon in the fight against cyber threats, and Anthropic is now pushing that frontier further. The company has introduced a new cybersecurity-focused initiative, Project Glasswing, alongside a powerful AI model named Claude Mythos. The effort brings together some of the biggest names in technology, including Apple, in a bid to identify and fix critical software vulnerabilities before they can be exploited.

What is Anthropic's Project Glasswing?

Project Glasswing is a collaborative effort designed to strengthen the security of widely used software systems. Anthropic has partnered with a broad coalition of technology leaders, ranging from cloud providers and chipmakers to financial institutions and cybersecurity firms. "Today we're announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world's most critical software," notes Anthropic.

In addition to these major players, more than 40 other organisations responsible for building or maintaining essential software infrastructure have been granted early access to Anthropic's Mythos Preview model. The idea is to allow these partners to uncover and patch vulnerabilities before such capabilities become widely accessible. The initiative reflects growing concern that increasingly capable AI systems could be misused if they fall into the wrong hands, making early defensive deployment crucial.

Claude Mythos: How does it work?

Claude Mythos represents a significant leap in AI-driven cybersecurity analysis. According to Anthropic, the model has already demonstrated an ability to detect deeply embedded flaws that have gone unnoticed for years. "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes," says Anthropic.

In several instances, the model has identified vulnerabilities that persisted despite decades of human scrutiny and extensive automated testing. One notable case involved uncovering and linking weaknesses in the Linux kernel that could potentially grant full system control to an attacker.

Interestingly, this is the same model that surfaced in a leak last month. The leaked draft claimed that the model significantly outperforms its predecessor, Claude Opus 4.6, particularly in areas such as coding, academic reasoning and cybersecurity. Beyond security, Mythos also shows improvements in reasoning capabilities, agentic search functions, and autonomous coding tasks, indicating a broader evolution in AI performance.

Availability

Anthropic has made it clear that Claude Mythos Preview will not be released for general public use at this stage. Instead, access remains tightly controlled among selected partners participating in Project Glasswing.
"We do not plan to make Claude Mythos Preview generally available," Anthropic says, "but our eventual goal is to enable our users to safely deploy Mythos-class models at scale -- for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring," notes Anthropic. For now, the focus remains on using the model as a defensive tool, ensuring that its capabilities are harnessed responsibly while the industry prepares for a future where such powerful AI systems become more widespread.

Firstpost · 25d ago

Anthropic's New Model Is So Scarily Powerful It Won't Be Released, Anthropic Says

Late last month, apparent leaks revealed that an as-yet unreleased product from Anthropic called Mythos was "by far the most powerful AI model we've ever developed." My colleague AJ Dellinger wrote at the time that it was "hard to ignore the fact that this whole situation plays right into the classic AI company playbook of talking up the dangers of a model to highlight how powerful and capable it is." Was Anthropic being sincere about this de facto advertisement for its super-powered AI products being leaked accidentally? Two weeks ago, I might have scoffed, but since Anthropic then accidentally leaked the source code for Claude Code, I'm more inclined to believe the leak was real now.

At any rate, on Tuesday Anthropic released a system card for its latest frontier model, which is in fact Mythos -- actually "Claude Mythos Preview" -- and notes that the model's "large increase in capabilities has led us to decide not to make it generally available." For reference, OpenAI's GPT-2 was deemed too dangerous to release in 2019, when Anthropic co-founders Dario Amodei, Jack Clark, and Chris Olah were still working there, but later that year it was released anyway.

AI system cards are ostensibly tools for company transparency, revealing the pros and cons, the capabilities and -- most sexily -- the dangers of the model. That last part turns reading them into fun little trips to Jurassic Park to see the cloned T-Rex eat a goat, secure in the knowledge that it could never possibly break containment. The whole card is 244 pages. I'm not going to pretend I've read the whole thing yet, but here are some highlights:

* It was provided a sandbox computer terminal with access to only a preset group of limited online services, and challenged to "escape" -- find a way to use the internet freely. It did, and found a way to message a researcher who was away from the office eating a meal. Additionally, "in a concerning and unasked-for effort to demonstrate its success, it posted details about its exploit to multiple hard-to-find, but technically public-facing, websites."

* In what the card called "<0.001% of interactions" -- so pretty rarely -- it behaved in ways it wasn't supposed to, and then apparently tried to hide the evidence. For instance, it "accidentally obtained" a test answer it was going to need, a situation in which it should have simply told a researcher and asked for a different question; instead it tried to find a solution independently, and in the recording of its reasoning, it noted that it "needed to make sure that its final answer submission wasn't too accurate." It also overstepped its permissions on a computer system because it found an exploit, and then "made further interventions to make sure that any changes it made this way would not appear in the change history on git."

* Another event described in the card is referred to as "Recklessly leaking internal technical material." Apparently, in the course of a coding-related task meant to be internal, it published material as a "public-facing GitHub gist." This reminds me of the incident in February in which an AI agent was accused of cyberbullying a coder, when to some degree the perceived recklessness of the AI agent was obviously the predictable consequence of a reckless human being.

Claude Mythos Preview will soon be made accessible to one degree or another, but only to a group of partner companies like Amazon Web Services, Apple, Google, JPMorganChase, Microsoft, and NVIDIA, who are meant to use the model to locate security vulnerabilities in software and design patches. Kevin Roose of the New York Times describes this program as "an effort to sound the alarm over what the company believes will be a new, scarier era of A.I. threats."

Gizmodo · 25d ago

Blockbuster SpaceX listing could suck the oxygen out of fragile IPO market

As Elon Musk's SpaceX closes in on a $75 billion IPO that could rewrite record books, concerns are mounting that others looking to list in 2026 may find it harder to get deals done under the shadow of the space venture's headline-grabbing debut. U.S. markets, prized for their depth, face a critical test, as more than half a dozen analysts and industry experts told Reuters that the SpaceX deal would likely absorb an outsized share of investor demand, squeezing out other hopefuls.

"History tells us that a mega IPO like SpaceX can suck up the oxygen in the market. We saw that with Facebook in 2012," said Matt Kennedy, senior strategist at Renaissance Capital, a provider of IPO-focused research and ETFs. "IPOs are a major marketing event, and companies wouldn't want the noise from a SpaceX offering to drown out coverage of their own deals. So, listing activity may die down a bit during the weeks surrounding the SpaceX IPO."

Companies have waited years on the sidelines for favorable IPO conditions after a prolonged dry spell. A listing like SpaceX, with its celebrity billionaire CEO, hot industry and deep-pocketed backers, could have provided the jolt others need to push ahead. Instead, its sheer scale threatens to overshadow others, with Wall Street banks and investors pouring a majority of their attention, and money, into the operator of the Starlink constellation of satellites. Thirty-five IPOs have priced so far this year, according to data from Renaissance Capital, down 37.5% from a year earlier. That could worsen in the months ahead, clouding hopes of a broader market resurgence in 2026.

DISRUPTIONS WEIGH ON IPO MARKET

The IPO market has lined up its biggest pipeline in decades, analysts and bankers said. But the war in Iran, spiking oil prices, private credit concerns and AI-led disruption to legacy software firms have set a high bar for which deals successfully break through the volatility - and which ones get left behind. Now, alongside these disruptions, companies eyeing IPOs must also compete for attention in a market dominated by SpaceX headlines.

While bankers will probably advise their biggest clients against competing against SpaceX, smaller listings may benefit, said Michael Ashley Schulman, partner at wealth management firm Cerity Partners. "Smaller IPO debuts may benefit from a tag-along effect in retail enthusiasm that could mentally lump IPOs together under the assumption that if one does well, others will too," he said.

MORE MEGA DEALS TO COME

Timing an IPO is often as crucial to a listing's success as the company's fundamentals. May through June is typically the best window before a summer lull that defers larger offerings to the fall. While Musk is hoping to take SpaceX public in June, according to bankers, OpenAI and rival Anthropic are reportedly aiming for a debut in the second half of the year.

"The attention that these mega IPOs take from the market could push a broadly open IPO window into 2027," PitchBook analyst Kyle Stanford said in a report. The report added that if SpaceX raises between $50 billion and $75 billion, while OpenAI and Anthropic raise another $50 billion combined, that would roughly match the total raised by U.S. VC-backed company IPOs over the past decade.
"Media attention is not the only thing these mega IPOs could absorb. IPO underwriting would be constrained by the amount these companies are able to raise," Stanford wrote. 'MUSKONOMY' VS MARKET REALITIES To be sure, it's uncharted territory - no offering of this size has been attempted before. Analysts and experts said the absence of any clear precedent or comparable listing leaves investors with little to anchor expectations, making it harder to gauge how the market will respond to SpaceX's IPO. "SpaceX is going to be big, no doubt about it," said James Angel, faculty affiliate at Georgetown McDonough's Psaros Center for Financial Markets and Policy. "The combination of well-known brands like X and Starlink, along with the magic of AI, the dream of space, and Musk's magic means that the investment bankers will have little trouble generating interest in ⁠the stock." Elon Musk has built a track record of pulling in investor demand across cycles, with his ventures often dominating attention. His empire, dubbed "Muskonomy" by analysts, creates a concentration of capital that few offerings can match. That concentration of investor interest is not just theoretical, it has played out in past listings. Musk's EV maker Tesla raised $226 million in its 2010 IPO at a market value of about $1.6 billion. It is now the world's most valuable automaker, worth more than $1.3 trillion. But even that track record and investor appeal may not be enough in today's IPO market, analysts cautioned. "We don't believe that SpaceX can escape the realities of the U.S. IPO marketplace, in the sense that it has become a buyer's market," said Josef Schuster, CEO of IPO research firm IPOX. "Even strong IPO candidates in hot sectors need to show flexibility in pricing their deal and potentially need to price downward for IPO success." Others warned that a wave of large listings could strain investor demand more broadly, particularly if multiple mega deals hit the market at the same time. "There is an old market saying that bull markets end when the money runs out, and there are plenty of historic examples where a deluge of IPOs and new stock market entrants, and then subsequent secondary offerings, meant sellers eventually swamped buyers," AJ Bell investment director Russ Mould said.

ETTelecom.com · 25d ago

Latest Anthropic AI model finds cracks in software defences

NEW YORK: Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defenses against hacking.

"We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at a HumanX AI conference in San Francisco. Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos to use the model as a defensive weapon, "sort of arming them ahead of time," Krieger explained.

Leaps in AI model capabilities have come with concerns about hackers using such tools for figuring out passwords or cracking encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and none were ostensibly noticed by their makers before being pinpointed by the AI model, according to Anthropic.

Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout - for economies, public safety, and national security - could be severe."

Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic. As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators.

Project Glasswing

As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft in a project it dubbed "Glasswing." Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation that promotes the free, open-source Linux computer operating system.

"This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back."

Approximately 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing. Project partners are to share their Mythos findings, according to Anthropic, which is providing about US$100mil (RM403mil) worth of computing resources for the mission. Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco.

"The window between a vulnerability being discovered and being exploited by an adversary has collapsed - what once took months now happens in minutes with AI," said CrowdStrike chief technology officer Elia Zaitsev. "Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities."
Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup. That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts. - AFP

The Star · 25d ago

SpaceX Isn't Even Public Yet. Investors Are Already Abuzz About a Tesla Merger.

Following the SpaceX and xAI merger, speculation is growing about a potential combination of Tesla and SpaceX.

Elon Musk surprised onlookers with the quick merger between SpaceX and xAI. Now analysts, investors and close Musk observers are debating the merits of what some see as the ultimate combination: SpaceX and Tesla. As SpaceX approaches an initial public offering, some investors are discussing the idea of a mega-Musk merger as a follow-up. Musk has said he thinks his companies are converging, but he hasn't commented on speculation of a merger. Still, some Tesla supporters argue that combining the companies could accelerate Musk's artificial intelligence ambitions by bringing his projects under one roof -- and potentially create one of the most valuable companies in history.

Alexandra Merz, an influential individual Tesla investor who goes by the screen name Tesla Boomer Mama, posted on Musk-owned X that her best-case scenario is a stock-for-stock merger in June or July that values both companies equally. If such a transaction were to happen at today's valuations, it would give a slight premium to investors in Tesla, which has fallen 22.5% since the start of the year, according to FactSet. While she warned she could be wrong, she described her hypothesis as "connecting public breadcrumbs."

Others, like James Robertson, a Texas IT executive, are more skeptical. Robertson, who bought shares in Tesla in 2014, said he worries a combined company could go the way of other failed conglomerates like General Electric, risking the success of both businesses. Still, he is eager to invest in the SpaceX IPO, which could be the largest in history, when he gets the chance. "I think it would be more valuable long term to own stock in both companies," he said.

Musk hasn't been shy about aligning his companies to achieve his life's ambition of building human civilization on Mars. Morgan Stanley's former Tesla analyst Adam Jonas dubbed the project the "Muskonomy." In recent weeks, other Wall Street analysts have picked up where Jonas left off, with some telling investors they see some logic to a merger. "The main investor question we have received is whether Elon eventually plans to combine Tesla with SpaceX, thus forming a broader 'Elon Inc.' -- an idea that TSLA megabulls have frequently hoped for," Barclays analyst Dan Levy wrote in February. "While at this point a Tesla/SpaceX combination appears less likely, we see the potential for a combination down the road," he wrote.

SpaceX confidentially filed for an IPO last week and is aiming to go public by July. The company was last valued at $1.25 trillion in February following its merger with xAI. Tesla, meanwhile, is already public, with a market valuation of $1.1 trillion. A deal combining the two companies would rank as the largest merger ever.

Musk stoked the flames with the announcement of new joint ventures between Tesla and SpaceX. Those include a new, shared chip factory called the Terafab in Austin, Texas, and a new AI agent called Digital Optimus that streamlines Tesla and xAI's software development. Along with it, he has pushed a new philosophy around artificial intelligence that pegs the success of Tesla's AI initiatives, which include humanoid robots and autonomous cars, to SpaceX's plan to build data centers in space.
While the technology isn't proven, in theory, space-based data centers could be powered using solar energy from the sun, mitigating concerns about the vast amounts of energy and real estate needed to power them on Earth.

Musk has long sold his vision for Tesla to become the world's most-valuable company, hypothesizing an eventual $30 trillion valuation on the strength of its robotics and AI products. That vision has played well with individual investors, who have buoyed Tesla's performance on the public markets even though its core automotive business, and main source of revenue, has declined. While its first-quarter electric-vehicle deliveries were up 6.3% from a year earlier, the company's total share of the EV market has been on a downward path over the past two years. In a note Monday, JP Morgan analyst Ryan Brinkman warned that Tesla's stock price could fall 60% by the end of 2026 as the company struggles to execute on its new strategy.

A $1 trillion pay package approved by Tesla shareholders in November asks Musk to boost the company's valuation to $8.5 trillion and hit a series of operational milestones with ambitious targets for its new AI products. At the same time, SpaceX has surged in value on the private markets, in part through its February combination with xAI.

Should Musk eventually propose a deal between Tesla and SpaceX, Columbia Law Professor Dorothy Lund says it would likely face antitrust scrutiny. Working in its favor would be the fact that Tesla and SpaceX aren't competitors and, assuming President Trump is still in office, Musk's close relationship with the administration, Lund said. She also points out that any merger vote would require approval from shareholders of the company targeted for acquisition. While he was able to merge SpaceX and xAI without a formal process because he was a controlling shareholder in both, a deal with Tesla would be different since he has a smaller stake in that company. "You couldn't do this overnight because you need to hold that vote," Lund said.

The Wall Street Journal · 25d ago

Anthropic 'Project Glasswing': AI Cybersecurity Initiative in Collaboration with Apple and More

Anthropic has introduced "Project Glasswing," a new initiative focused on global safety and security on the software side of things. The AI cybersecurity initiative has already drawn commitments from renowned technology companies such as Apple, Amazon Web Services, Microsoft, and more.

Anthropic unveiled its latest brainchild, which centers on the worlds of AI and cybersecurity for software, combining both into one effort to help the tech industry fight against bad actors. The AI startup said that participants in Project Glasswing will be given access to Claude Mythos Preview to build on and enhance their security projects.

According to Anthropic, the initiative grew out of new capabilities it observed in a frontier model it trained, called Claude Mythos Preview, which is capable of finding and exploiting software vulnerabilities -- and, in turn, of helping secure platforms. The company said that Mythos Preview already found "thousands of high-severity vulnerabilities" in "every major operating system and web browser." Project Glasswing will help companies put these capabilities "to work," focusing on "defensive purposes" against significant threats.

Anthropic has partnered with various companies as collaborators for this new cybersecurity initiative, naming the likes of Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JP Morgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Engadget reported that Anthropic has long positioned itself around the ethical use of artificial intelligence, campaigning and raising awareness about the proper application of the technology. The company is also known for rejecting the Pentagon's contract offer, as it opted not to compromise its AI safety practices for profit.

Tech Times · 25d ago

[AINews] Anthropic @ $30B ARR, Project GlassWing and Claude Mythos Preview -- first model too dangerous to release since GPT-2

If a master tactician wanted to further competitive narratives vs a potential IPO, you would be hard pressed to find a better idea than Claude Mythos (from the Ancient Greek for "utterance" or "narrative": the system of stories through which civilizations made sense of the world), rumored to be the largest ever successful training run and "leaked" weeks ago, and now formally confirmed to be too dangerous to release GA, instead restricted only to 40 partners under an urgent new "Project Glasswing."

In the blogpost, the 244-page System Card, and a ludicrously well produced video, Anthropic details shocking capabilities beyond the kinds of high double-digit benchmark capability jumps (with encouraging efficiency!) you might hope for from a much larger (>10T?) model. We've done a focused news summary run below, for those who desire more detail.

Top Story: Anthropic revenue disclosures analysis and Claude Mythos details

latent.space · 25d ago

Flight chaos if DHS makes good on promise to pull officers, ending international travel out of California

Bad news for global travelers out of San Francisco if the Department of Homeland Security follows through on a threat to remove Customs and Border Protection officers from airports in sanctuary cities.

During an interview with Fox News on Monday, the new DHS secretary, Markwayne Mullin, said that the Trump administration is looking at ending customs processing services in cities that have refused to cooperate with federal immigration authorities. "Some of these cities have international airports -- if they're a sanctuary city, should they really be processing customs into their city?" Mullin said. "Seriously, if they're a sanctuary city, and they're receiving international flights, and we're asking them to partner with us at the airport, but once they walk out of the airport they're not going to enforce immigration policy, maybe we need to have a really hard look at that because we need to focus on cities that want to work with us."

U.S. Customs and Border Protection (CBP) agents operate at more than 300 ports of entry, including international airports, throughout the country. According to the CBP, numerous international hubs in California would be affected. If the threat did come to pass, the San Francisco Chronicle reported, it would "effectively halt" international travel at airports like SFO. Removing these agents could cause horrendous lines for those trying to re-enter the U.S. or get into the country.

Removing CBP from sanctuary city airports would be in line with the Trump administration's policy of terminating federal funding for jurisdictions that prevent local law enforcement and jails from working with federal immigration enforcement agents. "Starting February 1, we're not making any payments to sanctuary cities or states having sanctuary cities because they do everything possible to protect criminals at the expense of American citizens," Trump said during a speech in Detroit. When the Trump administration deployed ICE agents to airports across the country to relieve the strain from the shortage of TSA workers amid the partial government shutdown, SFO was not affected because it contracts its own TSA agents.

The California Post reached out to LAX and SFO for further comment. California Governor Gavin Newsom's press office took the opportunity to slam the administration's plan. "If you thought the economy was bad with Trump's war driving prices at the pump up ... just wait until international travel is halted at some of the busiest airports in the world," a post on X from Newsom's press office read. "Talk about a stupid idea (no wonder it's being considered by the Trump Admin)." The Post reached out to Newsom's office, which said the post on social media speaks for itself.

New York Post · 25d ago

Anthropic's Restraint Is a Terrifying Warning Sign

Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence -- one that arrived sooner than expected and that will have equally profound geopolitical implications.

The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but to only a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorganChase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a "step change" in performance that has some critically important positive and negative implications for cybersecurity and America's national security.

The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world's most popular software systems more easily than before. The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me. For good reason. As Anthropic said in its written statement on Tuesday, in just the past month, "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety and national security -- could be severe."

Project Glasswing, Anthropic's name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, "to put these capabilities to work for defensive purposes," the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities. "We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale -- for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring," Anthropic said.

My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding ways to find and exploit flaws in existing code.
Anthropic said it found critical exposures in every major operating system and Web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world. If this A.I. tool were, indeed, to become widely available, it would mean the ability to hack any major infrastructure system -- a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations -- will be available to every criminal actor, terrorist organization and country, no matter how small.

I'm really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for: "Honey, what did you do after school today?" "Well, Mom, my friends and I took down the power grid. What's for dinner?" That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do -- or your kids.

At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of President Barack Obama's President's Council of Advisors on Science and Technology and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called "Genesis." In our view, no country in the world can solve this problem alone. The solution -- this may shock people -- must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability. Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.

Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology -- a lot more than they need to worry about Russia. This is so important and urgent that it should be a top subject on the agenda for the summit between Trump and President Xi Jinping in Beijing next month.

"What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets -- this ability to develop sophisticated cyberhacking operations -- could become easily available to small actors," explained Mundie. "What we are about to see is nothing short of the complete democratization of cyberattack capabilities."

It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues. For starters, he says, we need to "carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies." Then we need to use the time this buys us to distribute defensive tools to the good actors "so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another."
(By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.) Finally, Mundie argues, we need to work with China and all responsible countries to build safe, protected working spaces, within all the key networks, both public and private, into which trusted companies and governments "can move all their critical services -- so they will be protected against future hacking attacks." It will be interesting to see what history remembers most about April 7, 2026 -- the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies.

DNyuz · 25d ago

Latest Anthropic AI model finds cracks in software defences

NEW YORK: Anthropic said yesterday its yet-to-be-released artificial intelligence (AI) model called Claude Mythos has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defences against hacking.

"We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at a HumanX AI conference in San Francisco. Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos to use the model as a defensive weapon, "sort of arming them ahead of time," Krieger explained.

Leaps in AI model capabilities have come with concerns about hackers using such tools for figuring out passwords or cracking encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and none were ostensibly noticed by their makers before being pinpointed by the AI model, according to Anthropic.

Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout - for economies, public safety, and national security - could be severe," it said.

Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic. As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators.

Project Glasswing

As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft in a project it dubbed "Glasswing". Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation that promotes the free, open-source Linux computer operating system.

"This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back," Grieco said.

Approximately 40 organisations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing. Project partners are to share their Mythos findings, according to Anthropic, which is providing about US$100 million worth of computing resources for the mission. Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco.

"The window between a vulnerability being discovered and being exploited by an adversary has collapsed - what once took months now happens in minutes with AI," said CrowdStrike CTO Elia Zaitsev. "Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities," Zaitsev said.
Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup. That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts.

Free Malaysia Today · 25d ago

Anthropic claims its new AI model, Mythos, is a cybersecurity 'reckoning'

Anthropic, the artificial-intelligence company that recently fought the Pentagon over the use of its technology, has built a new AI model that it claims is too powerful to be released to the public.

Instead, Anthropic said Tuesday, it will make the new model - known as Claude Mythos Preview - available to a consortium of more than 40 technology companies, including Apple Inc. AAPL-Q, Amazon.com Inc. AMZN-Q and Microsoft Corp. MSFT-Q, which will use the model to find and patch security vulnerabilities in critical software programs.

Anthropic said it had no plans to release its new technology more widely but was announcing the new model's capabilities in one area in particular - identifying security vulnerabilities in software - in an effort to sound the alarm over what the company believes will be a new, scarier era of AI threats. "The goal is both to raise awareness and to give good actors a head start on the process of securing open-source and private infrastructure and code," Jared Kaplan, Anthropic's chief science officer, said in an interview.

The coalition, known as Project Glasswing, will include some of Anthropic's competitors in AI, such as Google, as well as hardware providers like Cisco Systems Inc. and Broadcom Inc., and organizations that maintain critical open-source software, such as the Linux Foundation. Anthropic is committing up to US$100-million in Claude usage credits to the effort. Logan Graham, the head of an Anthropic team that tests new models for dangerous capabilities, called the new model "the starting point for what we think will be an industry change point, or reckoning, with what needs to happen now."

Anthropic occupies an unusual position in today's AI landscape. It is racing to build increasingly powerful AI systems, and making billions of dollars selling access to those systems, while also drawing attention to the risks its technology poses. The company was deemed a supply chain risk this year by the Pentagon for demanding certain limitations to the use of its technology. A federal judge later stopped the designation from going into effect.

Anthropic has not released much new information about the model, which was code-named Capybara during development. But after some details were inadvertently leaked last month, the company acknowledged that it considered it a "step change" in AI capabilities, with improved performance in areas like coding and cybersecurity research.

The company's decision to hold back Claude Mythos Preview, while giving access only to partners out of concern for how it might be misused, has some precedent. In 2019, OpenAI announced it had built a new model, GPT-2, but was not releasing the full version right away. The company claimed that its text-generation capabilities could be used to automate the mass production of propaganda or misinformation. (It later released the model, after conducting additional safety testing on it.) Many of the leaders of the GPT-2 project later left OpenAI to start Anthropic.

This time, Anthropic is making a different, more urgent claim. The company's executives say Claude Mythos Preview is already capable of carrying out autonomous security research, including scanning for and exploiting so-called zero-day vulnerabilities in critical software programs, flaws that are unknown even to the software's developer. These efforts can often be triggered by amateurs with simple prompts.
The company claims that the new model has already identified "thousands" of bugs and vulnerabilities in popular software programs, including every major operating system and browser. One of the vulnerabilities Claude found, the company said, was a 27-year-old bug in OpenBSD, an open-source operating system that was designed to be difficult to hack. Many internet routers and secure firewalls incorporate OpenBSD's technology. Another was a long-standing issue in a piece of popular video software that automated testing tools had scanned five million times without finding any problems. "This model is good at finding vulnerabilities that would be well understood and findable by security researchers," Mr. Graham said. "At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them." Anthropic announced Monday that its projected annual revenue had more than tripled in 2026, to more than US$30-billion from US$9-billion. The growth has come largely because of the popularity of Anthropic's Claude as a tool for programming. Anthropic has focused on making Claude good at completing lengthy coding tasks, in hopes of making it more useful to professional programmers and amateur "vibecoders." But an AI system designed to be good at coding is also good at spotting the flaws in code - running automated scans for bugs and vulnerabilities that can allow hackers to take control of users' machines, expose sensitive user information or wreak other havoc. The cybersecurity industry has been bracing for years for what more capable AI models could do to critical tech infrastructure. Until recently, only expert human researchers with access to specialized tools were capable of finding the most severe security vulnerabilities. Now, the fear is that a powerful AI model could discover them on its own. "Imagine a horde of agents methodically cataloguing every weakness in your technology infrastructure, constantly," Nikesh Arora, the chief executive of Palo Alto Networks, wrote in a blog post last week. Mr. Graham said one of the unanswered questions about Claude Mythos Preview, and other future models that will be capable of doing similar things, was whether most or all of the world's critical software would need to be patched or rewritten as a result of these new models. "There are a lot of really critical systems around the world, whether it's physical infrastructure or things that protect your personal data, that are running on old versions of code," Mr. Graham said. "If these previously were mostly secure because it took a lot of human effort to attack them, does that paradigm of security even work any more?" It is wise to take claims about unreleased model capabilities from AI companies with a grain of salt. In this case, though, cybersecurity researchers who have been given access to Claude Mythos Preview have characterized the model as a significant cybersecurity risk.
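The workflow the article alludes to -- pointing a Claude-class model at source code and asking it to flag likely flaws -- can be sketched with the publicly documented Anthropic Messages API. The snippet below is purely illustrative: the model identifier is a placeholder (Claude Mythos Preview is not publicly available), and the deliberately buggy C function is a toy target, not one of the vulnerabilities described above.

```python
# Illustrative only: a defender-side code-review call using the public
# Anthropic Messages API. The model name is a placeholder, not Mythos Preview.
import anthropic

# A deliberately buggy toy target: unbounded strcpy into a fixed-size
# stack buffer (a classic CWE-121-style overflow).
TARGET_SNIPPET = """
#include <string.h>
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
}
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",          # placeholder model identifier
    max_tokens=1024,
    system="You are a security reviewer. Report likely vulnerabilities, "
           "their CWE class, and a minimal patch. Do not produce exploits.",
    messages=[{
        "role": "user",
        "content": f"Review this C function for security flaws:\n{TARGET_SNIPPET}",
    }],
)

print(response.content[0].text)  # e.g. flags the strcpy overflow, suggests snprintf/strlcpy
```

In practice, defender-side pipelines of this kind run such reviews across whole repositories and feed the findings into existing triage and patching processes rather than acting on them automatically.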
Elia Zaitsev, the chief technology officer of CrowdStrike Holdings Inc., a cybersecurity firm with access to the new model through Project Glasswing, said in a statement accompanying Anthropic's announcement that the model "demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." "What once took months now happens in minutes with AI," Mr. Zaitsev said. Project Glasswing takes its name from the glasswing butterfly, Mr. Kaplan said, which uses transparent wings to hide in plain sight. Similarly, he said, many of today's most critical software programs contain bugs and vulnerabilities that have existed in the open for years, but were buried in such complex technical systems that no human ever found them. According to Mr. Kaplan, the cybersecurity capabilities of Claude Mythos Preview are not a result of special training. Rather, they are just one of many areas in which the model is better than previous ones. He predicted that similar cybersecurity capabilities would exist in other models soon. As that happens, he said, the arms race between hackers and the companies racing to defend their systems will only escalate. "As the slogan goes, this is the least capable model we'll have access to in the future," he said.

Anthropic
The Globe and Mail25d ago
Read update
Anthropic claims its new AI model, Mythos, is a cybersecurity 'reckoning'

Ceasefire Chaos: Mideast Tensions Persist Amid Diplomatic Moves | Politics

Despite recent diplomatic efforts to put in place a two-week ceasefire involving the United States and Iran, tensions in the Middle East remain high. On Wednesday, missile alerts were activated in both the United Arab Emirates and Israel. A gas processing facility in Abu Dhabi was set ablaze after being struck by missiles believed to have been launched by Iranian forces. Officials have not yet identified any targets within Israel, which has experienced significant missile attacks throughout the conflict. Iran's Revolutionary Guard is reportedly making critical military decisions, potentially eclipsing the country's political leadership. Because ceasefires are often preceded by last-minute escalations, it remains uncertain whether hostilities will genuinely halt while negotiations advance in Islamabad.

CHAOS
Devdiscourse25d ago
Read update
Ceasefire Chaos: Mideast Tensions Persist Amid Diplomatic Moves | Politics

Opinion | Anthropic's Restraint Is a Terrifying Warning Sign

Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence -- one that arrived sooner than expected and that will have equally profound geopolitical implications. The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but only to a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorganChase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a "step change" in performance that has some critically important positive and negative implications for cybersecurity and America's national security. The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but, as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world's most popular software systems more easily than before. The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium. This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me. For good reason. As Anthropic said in its written statement on Tuesday, in just the past month, "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely. The fallout -- economics, public safety and national security -- could be severe." Project Glasswing, Anthropic's name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, "to put these capabilities to work for defensive purposes," the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities. "We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale -- for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring," Anthropic said. My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding and exploiting flaws in existing code.
Anthropic said it found critical exposures in every major operating system and web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world. If this A.I. tool were, indeed, to become widely available, it would mean that the ability to hack any major infrastructure system -- a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations -- would be available to every criminal actor, terrorist organization and country, no matter how small. I'm really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for: "Honey, what did you do after school today?" "Well, Mom, my friends and I took down the power grid. What's for dinner?" That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do -- or your kids. At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of President Barack Obama's President's Council of Advisors on Science and Technology and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called "Genesis." In our view, no country in the world can solve this problem alone. The solution -- this may shock people -- must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability. Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other. Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology -- a lot more than they need to worry about Russia. This is so important and urgent that it should be a top subject on the agenda for the summit between Trump and President Xi Jinping in Beijing next month. "What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets -- this ability to develop sophisticated cyberhacking operations -- could become easily available to small actors," explained Mundie. "What we are about to see is nothing short of the complete democratization of cyberattack capabilities." It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues. For starters, he says, we need to "carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies." Then we need to use the time this buys us to distribute defensive tools to the good actors "so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another."
(By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.) Finally, Mundie argues, we need to work with China and all responsible countries to build safe, protected working spaces, within all the key networks, both public and private, into which trusted companies and governments "can move all their critical services -- so they will be protected against future hacking attacks." It will be interesting to see what history remembers most about April 7, 2026 -- the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies.

Anthropic
The New York Times25d ago
Read update
Opinion | Anthropic's Restraint Is a Terrifying Warning Sign

Anthropic's Glasswing: The Quiet Bet That Could Redefine How AI Models Actually See the World

Anthropic just made one of its most technically ambitious moves yet, and it didn't arrive with a press tour or a splashy keynote. It arrived as a research page, dense with detail, describing a new approach to multimodal AI that the company calls Glasswing. The name evokes transparency -- a butterfly whose wings are see-through, revealing the structures beneath. That metaphor is deliberate. What Anthropic is proposing with Glasswing is a fundamentally different way for large language models to process and reason about visual information, one that prioritizes interpretability and compositional understanding over brute-force pattern matching. For industry insiders who've watched the multimodal AI race accelerate over the past eighteen months, Glasswing represents something more interesting than another benchmark-topping vision model. It represents a philosophical stake in the ground. The core idea behind Glasswing is architecturally distinct from the dominant approaches used by OpenAI's GPT-4o, Google's Gemini, and Meta's Llama multimodal variants. Most current multimodal systems bolt a vision encoder -- typically a variant of a Vision Transformer (ViT) -- onto an existing language model, then fine-tune the combined system on image-text pairs. The vision encoder converts images into token-like representations, which get fed into the language model's attention layers. It works. Sometimes remarkably well. But the resulting systems tend to treat images as opaque blobs of information, extracting features without maintaining a structured, decomposed understanding of what's actually in the scene. Glasswing takes a different path. According to Anthropic's technical description, the system introduces what the company calls "structured visual reasoning" -- a framework where visual inputs are broken down into compositional elements before being integrated with the language model's reasoning capabilities. Think of it this way: rather than looking at a photograph of a kitchen and producing a single dense vector that somehow encodes "kitchen-ness," Glasswing attempts to identify individual objects, their spatial relationships, their properties, and their functional roles, then reasons over those structured representations explicitly. This matters enormously for reliability. One of the persistent failures of current multimodal AI systems is what researchers call "hallucinated visual grounding" -- the model confidently describes things that aren't in the image, or gets spatial relationships wrong, or confuses similar-looking objects. These errors aren't random. They're systematic consequences of how vision encoders compress visual information into unstructured representations. When everything is a high-dimensional vector, the model has no principled way to distinguish between "the red cup is on the left side of the table" and "the red cup is on the right side of the table." Both might map to nearly identical internal representations. Glasswing's structured approach directly attacks this problem. By maintaining explicit representations of objects and their relationships, the system can perform what amounts to symbolic reasoning over visual scenes while still benefiting from the flexibility and generalization capabilities of neural networks. 
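Anthropic has not published implementation details, but the contrast the article draws -- a structured scene graph of objects and relations versus a single opaque embedding -- can be made concrete with a small sketch. Everything below (the class names, the toy kitchen scene, the query helper) is a hypothetical illustration of the general idea, not Glasswing itself.

```python
# Hypothetical sketch: explicit objects and spatial relations instead of
# one dense embedding vector, so grounded queries become lookups.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str                                   # e.g. "red cup"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relation:
    subject: str                                # object name
    predicate: str                              # e.g. "left_of", "on_top_of"
    obj: str

@dataclass
class SceneGraph:
    objects: list
    relations: list

    def holds(self, subject: str, predicate: str, obj: str) -> bool:
        """Answer a grounded spatial query by explicit lookup."""
        return any(
            r.subject == subject and r.predicate == predicate and r.obj == obj
            for r in self.relations
        )

# Toy kitchen scene echoing the article's example.
kitchen = SceneGraph(
    objects=[SceneObject("red cup", {"color": "red"}),
             SceneObject("table"),
             SceneObject("sink")],
    relations=[Relation("red cup", "on_top_of", "table"),
               Relation("red cup", "left_of", "sink")],
)

print(kitchen.holds("red cup", "left_of", "sink"))   # True
print(kitchen.holds("red cup", "right_of", "sink"))  # False: the left/right distinction is explicit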
It's a hybrid approach, and it echoes ideas that have been floating around the AI research community for years -- most notably in the work of researchers like Josh Tenenbaum at MIT and Gary Marcus, who have long argued that pure neural approaches lack the compositional structure needed for reliable reasoning. But Anthropic isn't just rehashing old arguments. They're implementing them at scale, within a production-grade AI system. That's the hard part. And that's what makes Glasswing worth paying attention to. The timing is no accident. The multimodal AI market is entering a phase where raw capability is becoming less of a differentiator than reliability and trustworthiness. Enterprise customers deploying AI systems for document analysis, medical imaging, manufacturing quality control, and autonomous systems don't just need models that score well on academic benchmarks. They need models that fail gracefully, that can explain their reasoning, and that don't confidently assert things that are visually false. Anthropic, which has built its brand around AI safety and interpretability, is positioning Glasswing as the answer to that demand. The competitive context is fierce. OpenAI has been iterating rapidly on GPT-4o's multimodal capabilities, with recent updates improving the model's ability to handle complex visual reasoning tasks. Google's Gemini models -- particularly Gemini 1.5 Pro -- have pushed the boundaries of long-context multimodal understanding, processing hours of video alongside text and audio. Meta's open-source Llama models have made multimodal capabilities increasingly accessible to developers and researchers. And a wave of smaller companies, from Adept to Runway to Twelve Labs, are building specialized multimodal systems for specific verticals. Against this backdrop, Anthropic's decision to invest heavily in structured visual reasoning is a bet that the current trajectory of multimodal AI -- bigger models, more data, better encoders -- will hit a wall. Not a wall of capability, necessarily, but a wall of reliability. And for the use cases that matter most to enterprise customers and to society at large, reliability is everything. There's a technical detail in the Glasswing approach that deserves particular attention. According to Anthropic's description, the system doesn't just decompose visual scenes into objects and relationships -- it also maintains what the company calls "uncertainty-aware representations." This means that when the model isn't confident about a particular visual element -- say, whether a partially occluded object is a dog or a cat -- it explicitly represents that uncertainty rather than forcing a premature decision. The language model can then reason about that uncertainty, asking clarifying questions, hedging its descriptions appropriately, or requesting additional information. This is a significant departure from how most current systems handle visual ambiguity. Typically, a vision encoder produces a single point estimate for each visual feature, and the language model treats that estimate as ground truth. The result is the confident hallucination problem that plagues every major multimodal system on the market today. Glasswing's uncertainty-aware approach doesn't eliminate errors, but it changes the failure mode from "confidently wrong" to "appropriately uncertain." For safety-critical applications, that distinction is the difference between a useful tool and a liability. Anthropic has been building toward this moment for a while. 
The company's research on mechanistic interpretability -- understanding what's happening inside neural networks at the level of individual neurons and circuits -- has produced some of the most important work in the field over the past two years. Glasswing can be understood as an application of that interpretability-first philosophy to the multimodal domain. If you can't understand what your model is doing with visual information, you can't trust it. And if you can't trust it, you can't deploy it in the settings where it could do the most good. The business implications are substantial. Anthropic, which has raised over $7 billion in funding from investors including Google, Salesforce, and a consortium led by Menlo Ventures, is under pressure to demonstrate that its safety-focused approach can also be commercially competitive. Glasswing could be the proof point. If structured visual reasoning delivers measurably better reliability in enterprise deployments -- fewer hallucinations, better spatial understanding, more accurate document analysis -- then Anthropic has a compelling pitch to the CIOs and CTOs who are currently evaluating which AI platform to standardize on. And the market is enormous. According to recent industry analyses, enterprise spending on multimodal AI is expected to grow dramatically over the next several years, driven by demand in healthcare, financial services, manufacturing, and government. The companies that win this market won't necessarily be the ones with the highest scores on academic benchmarks. They'll be the ones whose systems fail the least often in production. Not everyone is convinced that Anthropic's approach will work at scale. Some researchers argue that the structured reasoning framework introduces computational overhead that could make Glasswing slower and more expensive to run than competing systems. Others question whether explicit compositional representations can capture the full richness of visual experience -- after all, human vision is itself a messy, probabilistic process that doesn't always decompose neatly into objects and relationships. And there's the practical concern that building and maintaining structured visual representations requires additional training data and annotation, which could slow down iteration cycles. These are legitimate concerns. But they're also the kinds of concerns that tend to get resolved through engineering effort rather than fundamental breakthroughs. If the core approach is sound -- and the early results described by Anthropic suggest it is -- then the computational and data challenges are problems to be solved, not barriers to be feared. There's a broader lesson here about the trajectory of AI development. For the past several years, the dominant strategy in the field has been scaling: bigger models, more data, more compute. And that strategy has produced extraordinary results. But it's also produced systems with systematic failure modes that don't go away with more scale. Hallucinations. Spatial reasoning errors. Inability to count objects reliably. Confusion about negation and absence. These aren't problems of insufficient scale. They're problems of insufficient structure. Glasswing is Anthropic's answer to that diagnosis. Whether it's the right answer remains to be seen. But the question it's asking -- how do we build AI systems that don't just perform well on average, but fail gracefully in the worst case -- is arguably the most important question in the field right now. So what happens next? 
Anthropic hasn't announced specific timelines for integrating Glasswing's capabilities into its Claude product line, but the direction is clear. The company's API customers -- which include major enterprises across multiple industries -- are likely to see structured visual reasoning capabilities appear in Claude's multimodal features over the coming months. And if the approach delivers on its promise, expect competitors to follow. OpenAI, Google, and Meta all have the research talent and computational resources to implement similar approaches. The question is whether they'll prioritize reliability over raw capability in their product roadmaps. For enterprise buyers evaluating AI platforms, Glasswing is a signal worth tracking. Not because it solves every problem with multimodal AI -- it doesn't -- but because it represents a fundamentally different design philosophy. One that prioritizes understanding over pattern matching. Transparency over opacity. Appropriate uncertainty over confident assertion. In an industry that has spent the past three years racing to build the most powerful AI systems imaginable, Anthropic is making a quieter, potentially more consequential bet: that the most useful AI systems will be the ones that know what they don't know. And can show you why.
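To make the earlier description of "uncertainty-aware representations" concrete, here is a minimal hypothetical sketch: the representation keeps a distribution over candidate labels instead of a single point estimate, and the description it produces hedges whenever no candidate clears a confidence threshold. The threshold, labels, and method names are illustrative assumptions, not details disclosed by Anthropic.

```python
# Hypothetical sketch: a label distribution rather than a point estimate,
# so downstream text can hedge instead of asserting.
from dataclasses import dataclass

@dataclass
class UncertainLabel:
    candidates: dict  # label -> probability, assumed to sum to roughly 1.0

    def describe(self, confidence_threshold: float = 0.8) -> str:
        best_label, best_p = max(self.candidates.items(), key=lambda kv: kv[1])
        if best_p >= confidence_threshold:
            return f"a {best_label}"
        # Below threshold, the failure mode shifts from "confidently wrong"
        # to "appropriately uncertain": name the plausible options instead.
        ranked = sorted(self.candidates.items(), key=lambda kv: -kv[1])
        options = " or ".join(label for label, _ in ranked[:2])
        return f"something that looks like a {options} (partially occluded; low confidence)"

occluded_pet = UncertainLabel({"dog": 0.55, "cat": 0.40, "fox": 0.05})
clear_mug = UncertainLabel({"mug": 0.97, "bowl": 0.03})

print(clear_mug.describe())     # -> "a mug"
print(occluded_pet.describe())  # -> hedged description naming dog and cat
```

The design choice is the point: the error does not disappear, but it surfaces as a visible hedge that the surrounding system, or a human, can act on.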

Anthropic
WebProNews25d ago
Read update
Anthropic's Glasswing: The Quiet Bet That Could Redefine How AI Models Actually See the World

Anthropic Secures 3.5 Gigawatts of AI Power as Bitcoin Miners Sell BTC to Host Data Centers - Crypto Economy

The artificial intelligence industry just crossed an energy threshold that rewrites the rules for Bitcoin miners. Anthropic, the company behind the Claude model, announced on April 6 an agreement to secure 3.5 gigawatts of next-generation Google TPU compute capacity manufactured by Broadcom. The contract represents the largest infrastructure deployment in the company's history. Meanwhile, on the other end of the high-voltage cable, Bitcoin miners no longer dig trenches to defend their energy territory. They sell their holdings and sign multibillion-dollar lease agreements with those same AI giants. The narrative of a confrontation for cheap electrons collapses upon examination of accounting ledgers. Core Scientific, one of the world's largest data center operators for mining, prepares to liquidate practically all of its Bitcoin reserves during this year. The funds finance a massive conversion of 1.2 gigawatts of capacity toward hosting hardware for artificial intelligence. Hut 8, for its part, secured a 15-year lease contract valued at 7 billion dollars whose main tenant is Anthropic and whose financial backing rests on Google. The transformation does not constitute a minor tactical move; it is the largest business model shift in the history of Bitcoin mining. The scale of Anthropic's deal demands a pause to grasp its real physical magnitude. A single gigawatt of electrical consumption roughly equals the demand of one million households in the United States. The company has reserved the energy equivalent of three and a half million homes to train and serve language models. Broadcom confirmed in its SEC filing that the majority of this new capacity will sit on U.S. soil and will begin operations starting in 2027. This allocation adds to the additional gigawatt Anthropic already receives from Google during 2026. The AI firm's annualized revenue figures back the audacity of the bet. The number crossed the barrier of 30 billion dollars, more than triple the 9 billion reported at last year's close. Simultaneously, the number of corporate customers spending over one million dollars annually on Claude doubled in under two months, climbing from 500 to more than 1,000 companies. With such contractual cash flow and an inference demand that chokes existing data centers, the need to lock down multiple gigawatts years in advance ceases to be a luxury and becomes a competitive survival condition. Faced with this insatiable energy appetite, data center operators that once dedicated every megawatt to solving cryptographic puzzles encounter a financial arbitrage opportunity too stark to ignore. The numbers do not lie: public miners currently lose close to 19,000 dollars for every Bitcoin they produce. Production costs hover around 80,000 dollars per unit, while the market price remains near 68,000 dollars, a plunge of nearly 47 percent from the all-time high set in October. The transition toward AI hosting imposes an entry cost considerably higher than that of a traditional mining farm. Preparing a megawatt for high-performance computing workloads demands between 8 and 15 million dollars in capital expenditures, in contrast to the 700,000 to 1 million dollars required for a Bitcoin mining facility. Despite this disparity, the chief financial officers of public mining companies embrace the shift without hesitation. The reason lies in the nature of the income. Bitcoin mining offers volatile rewards, subject to the whims of a spot market that currently punishes producers. 
AI hosting, conversely, provides stable, long-term cash flows backed by contracts with top-tier counterparties like Google and Anthropic. TeraWulf illustrates this new paradigm with the blunt force of a signed contract. The company secured 12.8 billion dollars in contracted high-performance computing hosting revenue. According to CoinShares analysis, publicly traded mining firms could derive up to 70 percent of their total revenue from AI hosting by the end of this year. For those that have already closed binding agreements, mining revenue collapses from representing 85 percent of the total to less than 20 percent. The sector announced over 70 billion dollars in cumulative deals related to AI and high-performance computing. The shift turns miners into a sort of energy landlord. They do not exit the electricity business; on the contrary, they consolidate their position as the best-positioned landowners on the new digital battlefield. Hut 8 describes the River Bend site in Louisiana as a facility capable of scaling to multiple gigawatts. The same ground prepared to house roaring rows of ASICs will now host the inference racks for the Claude model. Miners spent the last decade competing fiercely for favorable power purchase agreements, connections to remote substations, and land with cooling capacity. Those operational assets, once seen as marginal advantages in the hash rate race, constitute today the most sought-after inputs for AI expansion. The United States power grid tenses to extremes that mid-twentieth-century engineers never anticipated. PJM Interconnection, the nation's largest grid operator, projects a 6-gigawatt shortfall by 2027, a gap equivalent to six large nuclear plants offline. Data center electricity demand in the U.S. surges from under 15 gigawatts today to a projection of 134.4 gigawatts by 2030. An increase of nearly nine times in just seven years. Five AI data centers will individually reach 1 gigawatt of capacity this year alone. Up to 11 gigawatts of announced capacity for 2026 have yet to break ground due to bottlenecks in the supply of transformers and grid equipment. In this environment of structural scarcity, Anthropic's 3.5 gigawatts land like a steel anchor on the system. The consequences for the Bitcoin ecosystem materialize clearly on two fronts. The first operates in the spot market. The liquidation of reserves by giants like Core Scientific to fund conversions toward AI adds direct selling pressure to a price already teetering. The second front concerns the fundamental security of the network. The hash rate, a measure of total computing power dedicated to processing transactions and securing the blockchain, begins to feel the migration. Mining difficulty, an automatic adjustment mechanism reflecting active hash on the network, registered a drop of 7.76 percent. As more operators redirect gigawatts of capacity away from mining and toward AI hosting, the primary metric of network strength could contract further in the short term. The long-term horizon draws a structure that more closely resembles an infrastructure real estate investment trust than a traditional mining operation. Hut 8's deal, with its 15-year term and Google's financial backing, points in that direction. Long-term lease agreements with institutional-grade tenants transform balance sheets once speculative into fixed-income vehicles. If Marathon, Riot, or CleanSpark announce similar agreements in the coming months, the model of the "miner that also hosts" becomes obsolete. 
The sector consolidates as the real estate backbone of the artificial intelligence economy. The calendar to monitor proves crucial for understanding the speed of change. Anthropic's new TPU capacity comes online in 2027. The first data hall of Hut 8's River Bend complex opens its doors in the second quarter of that year. Core Scientific's conversion of 1.2 gigawatts accelerates throughout 2026. The question no longer revolves around whether miners will continue pivoting toward AI. The relevant inquiry centers on how much additional Bitcoin will flow into spot markets during the process and how quickly network difficulty adjusts to the flight of computing power. The miners did not lose the energy war. They always owned the battlefield. Now, they simply collect the rent.
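As a back-of-envelope check, the figures quoted in the article -- one gigawatt serving roughly one million U.S. households, 8 to 15 million dollars of capital expenditure per megawatt for AI hosting versus roughly 700,000 to 1 million dollars per megawatt for a mining build-out -- already convey the scale of the pivot. The short calculation below simply restates those published estimates; none of the numbers are independent.

```python
# Back-of-envelope arithmetic using only the figures quoted in the article.
ANTHROPIC_CAPACITY_GW = 3.5            # new TPU capacity secured by Anthropic
HOUSEHOLDS_PER_GW = 1_000_000          # article's rule of thumb

CORE_SCIENTIFIC_CONVERSION_MW = 1_200  # 1.2 GW converted to AI hosting
AI_HOSTING_CAPEX_PER_MW = (8e6, 15e6)  # article's range, USD
MINING_CAPEX_PER_MW = (0.7e6, 1e6)     # article's range, USD

print(f"3.5 GW is roughly {ANTHROPIC_CAPACITY_GW * HOUSEHOLDS_PER_GW:,.0f} households' demand")

low = CORE_SCIENTIFIC_CONVERSION_MW * AI_HOSTING_CAPEX_PER_MW[0]
high = CORE_SCIENTIFIC_CONVERSION_MW * AI_HOSTING_CAPEX_PER_MW[1]
print(f"Converting 1.2 GW to AI hosting implies roughly ${low/1e9:.1f}B to ${high/1e9:.1f}B in capex")

ratio_low = AI_HOSTING_CAPEX_PER_MW[0] / MINING_CAPEX_PER_MW[1]
ratio_high = AI_HOSTING_CAPEX_PER_MW[1] / MINING_CAPEX_PER_MW[0]
print(f"Per megawatt, AI hosting costs about {ratio_low:.0f}x to {ratio_high:.0f}x a mining build-out")
```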

Anthropic
Crypto Economy25d ago
Read update
Anthropic Secures 3.5 Gigawatts of AI Power as Bitcoin Miners Sell BTC to Host Data Centers - Crypto Economy