The latest news and updates from companies in the WLTH portfolio.
The federal government summoned financial leaders to an emergency meeting over Claude Mythos last week.

Anthropic put the entire tech world on notice with an unprecedented announcement: it had made an AI model so advanced that it was too dangerous to release to the public. Anthropic said the new frontier language model, Claude Mythos Preview, would "reshape cybersecurity." The company also announced the formation of Project Glasswing, an invite-only group of organizations -- including some of Anthropic's biggest competitors -- that will test Claude Mythos Preview and secure their infrastructure. Anthropic said that Claude Mythos Preview "found thousands of high-severity vulnerabilities, including some in every major operating system and web browser." (Emphasis in original.) The company said Project Glasswing was necessary "to help secure the world's most critical software."

By Friday, CNBC reported that Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent had summoned the high priests of finance (aka banking CEOs) for an emergency meeting about the new model. New York Times writer Thomas Friedman fretted over a "terrifying" future in which any teenager armed with Claude could hack the local power grid.

The reaction to Claude Mythos Preview quickly split along predictable lines. AI boosters hailed the new model as proof that artificial general intelligence (AGI) was nigh, praising Anthropic for rolling it out so responsibly. Critics and AI skeptics called Project Glasswing a big publicity stunt. So, which is it? To find out, Mashable has been reviewing Anthropic's claims and talking to AI and cybersecurity experts.

What is Claude Mythos Preview?

Claude Mythos is a new large language model that Anthropic says performs significantly better than Claude Opus 4.6 -- widely considered one of the best AI models in the world -- especially in cybersecurity.
"In our testing, Claude Mythos Preview demonstrated a striking leap in cyber capabilities relative to prior models, including the ability to autonomously discover and exploit zero-day vulnerabilities in major operating systems and web browsers," reads the Claude Mythos system card. Is Claude Mythos a sign of AGI? Artificial general intelligence refers to superintelligent AI that can perform better than humans across a wide range of tasks. It's not an exaggeration to say that our entire economy has been organized around the quest for AGI, as Anthropic, Google, Meta, xAI, and OpenAI pour hundreds of billions of dollars into a new arms race. If Claude Mythos is as capable as Anthropic says, would it be an example of AGI? The model card addresses this question, and Anthropic does seem to think it's close to AGI. Any major platform rollout in this era is going to look different to different audiences depending on their fluency and their fear tolerance. What I care about is whether the intent is real, and the evidence I've seen from Anthropic suggests it mostly is. Howie Xu, Gen, Chief AI & Innovation Officer In a section about Claude Mythos safety risks, Antropic writes: "Current risks remain low. But we see warning signs that keeping them low could be a major challenge if capabilities continue advancing rapidly (e.g., to the point of strongly superhuman AI systems)." Of course, Anthropic has a strong financial incentive to promote this belief. Ultimately, the model card for Claude Mythos is more conservative than the reaction online would suggest. For example, while the Claude Mythos model card does show that this model performs above the trend line for previous Anthropic models, Anthropic says it does not show evidence of self-improvement or recursive growth. 
("Importantly, though we're observing a slope change with Claude Mythos Preview, we do not know if this trend will continue with future models...The gains we can identify are confidently attributable to human research, not AI assistance.") Reasons to think Project Glasswing is a publicity stunt Don't make me tap my sign: "[When] an AI salesman tells you that AI is an unstoppable world-changing technology on the order of the agricultural revolution...you should take this prediction for what it is: a sales pitch." I wrote those words of caution in response to an essay by Anthropic CEO Dario Amodei, which warned about the potentially cataclysmic dangers of AI. Anthropic also has a history of issuing dire warnings about its AI models. You may remember the story of the Anthropic model that tried to "blackmail" a company CEO to prevent it from being turned off. In reality, Anthropic designed a test environment where blackmail was a potential outcome. This may be more akin to digital entrapment than genuine model misbehavior. So, is Claude Mythos the latest example of the industry's Chicken Little problem? On X, AI safety engineer Heidy Khlaaf listed a number of open questions that cast doubt on Anthropic's claims. Anthropic said the Claude Mythos preview found thousands of zero-day vulnerabilities. But Khlaaf says Anthropic left out key facts needed to assess this claim -- the rate of false positives, how Claude Mythos compares to existing cybersecurity tools, and exactly how much manual human review was required. "Releasing a marketing post with purposely vague language that clearly obscures evidence needed to substantiate Anthropic's claims brings into question if they are trying to garner further investment," Khlaaf told Mashable. "It also serves their 'safety first' image as they're able to frame the lack of public release, even a limited one for independent evaluation, as a public service when it simply obscures even experts' abilities to validate their claims." 
We reached out to Anthropic repeatedly about these concerns, but the company did not respond. We will update this article if they do. In the Claude Mythos system card, Anthropic wrote that more data will be released in the coming weeks as the bugs Mythos found are patched and fixed. Gary Marcus, an AI expert, scientist, author, and noted critic of the LLM hype machine, initially told Mashable that it was too soon to know whether Claude Mythos represented a new type of threat. But Marcus has grown more skeptical since we spoke to him, and he recently wrote on X that Mythos was "nowhere near as scary" as it first seemed. "Folks, you can relax. Mythos is not some off-trend exponential gain," he wrote. Cybersecurity experts told Mashable it's also very unlikely Claude Mythos could be used to "turn off the lights" or bring down critical infrastructure. "Claims about catastrophic uses of Mythos also significantly misunderstand threat models, cybersecurity risks, and the ability to propagate said risks in a way that could actually lead to safety-critical incidents," Khlaaf told us. "It's not as simple as asking a model 'hack this system,' with Anthropic's own technical blog post demonstrating a requisite of expertise that Anthropic downplays in their marketing posts." Other experts expressed skepticism, while also acknowledging that Mythos does represent a genuine risk, which Marcus has also said. "You could argue it didn't need a public announcement," said Div Garg, a Stanford AI researcher and founder of AGI, Inc. "However, ultimately, the decision to limit access to only those who develop and maintain critical software is precisely what you want a business to do in such a scenario...It's easy to criticize the limited access, but worse outcomes would arise if they released it unchecked." Tal Kollender, Founder and CEO of cybersecurity firm Remedio, told Mashable that tools like Claude Mythos are dangerous because they can automate exploit discovery. 
"It's brilliant corporate theater," Kolender said. "Labeling a model 'too dangerous to release to the public' is certainly a marketing flex because it immediately creates mystique and signals immense power to investors. But beneath the PR stunt, there is a very real, very mundane truth...The cybersecurity industry doesn't actually have a 'finding' problem. We are already drowning in tools that detect vulnerabilities. What Mythos does is automate that discovery process at an unprecedented scale." TL;DR: A week after revealing Claude Mythos Preview, some of Anthropic's biggest claims about the model look a lot sketchier, experts say. However, they also acknowledge that Claude Mythos poses a real risk. Still, there are plenty of very valid reasons to be nervous about the new frontier model. Reasons to think Claude Mythos Preview is a genuine threat to global cybersecurity In the New York Times, author Thomas Friedman conjures a scenario straight out of War Games, where a teenager hacks the local power grid after school. That scenario seems even more far-fetched a week later. But here's a much more likely scenario: A sophisticated group of hackers uses a tool like Claude Mythos to find zero-day vulnerabilities in our digital infrastructure, launching attacks faster than organizations can respond. And that scenario should worry you. If Claude Mythos isn't the tool that can do it, most experts agree such a tool isn't far off. And some of the world's leading cybersecurity experts certainly seem worried. "I've found more bugs in the last couple of weeks [with Claude Mythos] than in the rest of my entire life combined," said Nicholas Carlini, a research scientist affiliated with Anthropic and Google DeepMind, in a video on the Project Glasswing website. "On Linux, we found a number of vulnerabilities where, as a user with no permissions, I can elevate myself to the administrator by just running some binary on my machine," Carlini said. 
This week, the AI Security Institute published its findings on Claude Mythos's capabilities, and its report provides some independent verification that the model does represent a genuine leap forward. Claude Mythos passed AISI cybersecurity tests that no other model had ever completed, scoring higher than any other frontier model on virtually every test. "Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," AISI concluded. AISI also identified some limitations with Claude Mythos that would impair its effectiveness in real-world scenarios. So, was Anthropic's rollout of Mythos responsible AI stewardship or self-serving marketing? Experts I talked to said these options aren't mutually exclusive. "I'd say it's both, and that's not a criticism," said Howie Xu, Gen's Chief AI & Innovation Officer. "Any major platform rollout in this era is going to look different to different audiences depending on their fluency and their fear tolerance. What I care about is whether the intent is real, and the evidence I've seen from Anthropic suggests it mostly is." As is often the case with fear-inducing AI headlines, the reality turned out to be more complicated. "Personally, I don't go to bed worrying about a kid with Mythos hacking the power grid, but that doesn't mean the concern is fictional," said Xu. "We're at an inflection point where the creative and collaborative upside of these tools is massive, and the security infrastructure hasn't caught up. That gap is exactly what keeps me busy. Even a fractional probability of a serious incident is too much, which is why building a trust and security layer into the agentic era is my extreme focus." Finally, as Anthropic stresses in the Claude Mythos model card, tools like this will likely benefit cybersecurity defenders more than hackers in the long term.
And in the short term, a more cautious approach -- like the one being modeled with Project Glasswing -- may be warranted.

TL;DR: Claude Mythos has formidable cybersecurity coding abilities, and it does represent a genuine threat. However, if hackers have access to AI tools like Claude Mythos, so will the organizations defending against such attacks.

UPDATE: Apr. 14, 2026, 9:40 PM EDT This article has been updated with additional information about some of the cited experts.

By Hailey Konnath (April 14, 2026, 11:54 PM EDT) -- The NAACP sued Elon Musk's artificial intelligence company, xAI, Tuesday in Mississippi federal court over a Memphis, Tennessee-area gas power plant powering its data center, claiming it failed to secure permits for the plant, which emits "dangerous pollutants" affecting communities with "significant Black populations."...

April 14 (Reuters) - Federal agencies and government officials are quietly sidestepping U.S. President Donald Trump's ban on working with Anthropic, Politico reported on Tuesday. The Commerce Department's Center for AI Standards and Innovation is actively testing Anthropic's frontier AI model Mythos' hacking prowess, the report said. Reuters could not immediately confirm the report. Anthropic, the White House and the Commerce Department did not immediately respond to a request for comment. Staff on at least three congressional committees held or requested briefings from the company to learn about Mythos' cyber scanning capabilities over the past week, the report added. Anthropic's co-founder Jack Clark said at the Semafor World Economy event on Monday that the company is discussing Mythos with the Trump administration even after the Pentagon cut off business with the U.S. AI company following a contract dispute. The nature and details of Anthropic's talks with the U.S. government, including which agencies are involved, were not immediately clear. Mythos, announced on April 7, is Anthropic's "most capable yet for coding and agentic tasks," the company said in a blog post, referring to the model's ability to act autonomously. (Reporting by Gnaneshwar Rajan in Bengaluru; Editing by Muralikumar Anantharaman and Raju Gopalakrishnan)
Easter is usually a time for reunions, chocolate eggs, and the excitement of a spring getaway. But for 90,000 travelers across Europe this week, the "hop" home has turned into a long, frustrating wait. As the 2026 post-Easter travel rush reached its peak, a massive walkout by Lufthansa cabin crew effectively paralyzed the airline's operations. With over 500 flights grounded at Germany's primary hubs -- Frankfurt and Munich -- the "shambles" (to borrow a phrase from the industry) has left travelers sleeping on terminal floors and scrambling for expensive train tickets. The strike, organized by the UFO independent flight attendants' union, comes at a time of high tension between staff and management. Despite record profits reported in the previous fiscal year, the union argues that the "front-line heroes" of the airline -- the men and women in the aisles -- have been left behind by inflation and grueling schedules. The union is demanding a 15% pay rise and a "one-off" inflation compensation payment of €3,000 for its members. While Lufthansa management has offered a staggered increase, the UFO union has branded the offer "insufficient," leading to this week's dramatic 24-hour industrial action. While the number "90,000" looks impressive in a headline, it represents a sea of individual human frustrations. At Frankfurt Airport, the atmosphere shifted from the typical bustle of an international hub to a somber, stagnant wait. Families returning from Mediterranean holidays found themselves stuck at gates with no clear information. Business travelers missed vital meetings, and students heading back to university found their budget-friendly flights replaced by exorbitant last-minute alternatives. "We just wanted to get home for school on Monday," said one traveler interviewed at the terminal. "Now we're told the next available flight isn't until Thursday. The communication has been non-existent." 
Lufthansa's "hub and spoke" model means that when the hubs stop, the whole system collapses. The strike didn't just affect Lufthansa-branded flights; it rippled through the Lufthansa Group, impacting CityLine operations and causing "knock-on" delays for Star Alliance partners like United and Air Canada. The timing of the strike was tactical. By striking immediately after the Easter weekend, the union hit the airline during its most vulnerable "return-to-work" window. With flights already booked to near-capacity for the holiday rush, there was zero "slack" in the system. When a flight is cancelled during a normal Tuesday in November, you can usually rebook for the next day. When it happens on the Monday after Easter, every other flight is already full. This has left many passengers with no choice but to take the Deutsche Bahn (German Rail), which has struggled to cope with the sudden influx of thousands of extra passengers. If you are one of the 90,000 caught in this "Post-Easter Shambles," it is vital to know your rights under EU Regulation 261/2004. The UFO union has made it clear that this "warning strike" is just that -- a warning. If negotiations do not improve by the end of April 2026, further walkouts are expected during the lucrative Pentecost and early summer travel windows. For Lufthansa, the stakes are high. Not only is the airline losing millions in daily revenue, but the "brand damage" is mounting. In a competitive European market, travelers are increasingly looking toward carriers with more stable industrial relations. The 2026 Lufthansa strike is a stark reminder of the fragile nature of modern travel. We take for granted the seamless ballet of planes, pilots, and crew until the music stops. As you plan your next trip through Germany, the advice is simple: Check your flight status constantly, keep your receipts for every sandwich and hotel room, and perhaps pack a little extra patience in your carry-on. 
The skies might be empty today, but the battle for the future of the airline industry is just taking off.
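For travelers working out what EU Regulation 261/2004 actually entitles them to, the cash-compensation tiers are distance-based. Here is a minimal sketch of those tiers (simplified: it assumes compensation is due at all, and it ignores edge cases such as short-notice rerouting that can halve or waive the payout):

```python
def eu261_compensation(distance_km: float, intra_eu: bool) -> int:
    """Simplified EU 261/2004 cash-compensation tiers, in euros.

    Assumes the passenger qualifies for compensation in the first place
    (e.g., the cancellation was not caused by 'extraordinary circumstances').
    """
    if distance_km <= 1500:
        return 250                      # short flights
    if intra_eu or distance_km <= 3500:
        return 400                      # intra-EU over 1,500 km, or medium-haul
    return 600                          # long-haul outside the EU

print(eu261_compensation(1200, intra_eu=True))   # 250: a short intra-EU hop
print(eu261_compensation(2000, intra_eu=True))   # 400: a longer intra-EU flight
print(eu261_compensation(6000, intra_eu=False))  # 600: long-haul
```

The duty of care (meals, refreshments, and hotel accommodation where necessary) and the refund-or-rerouting choice apply regardless of which tier you fall into, which is why keeping receipts matters.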

Deutsche Börse Group recently invested in the platform at a valuation of $13.3 billion, down from its peak $20 billion valuation. American crypto exchange giant Kraken confidentially filed for an initial public offering (IPO) late last year, its co-CEO, Arjun Sethi, revealed during the Semafor World Economy event in Washington, DC. However, the specifics of the IPO, including valuation and offer size, remain unknown. The exchange also raised $200 million from Deutsche Börse Group, Bloomberg reported, at a valuation of $13.3 billion, below the $20 billion peak valuation it achieved in late 2025. In exchange for the investment, the German exchange group received a 1.5 per cent fully diluted stake in the crypto exchange. The parent company of Kraken has been mulling going public for some time now. It first submitted a confidential draft Form S-1 to the Securities and Exchange Commission (SEC) in November 2025. The exchange raised $800 million from investors like Jane Street and Citadel Securities at its peak valuation after that confidential draft filing. However, the company's IPO plans did not materialise, as reports suggested it paused going public around March due to market conditions. The latest statement from Sethi indicates that the exchange never shelved the plan. Meanwhile, the revenue of the crypto exchange jumped 33 per cent in 2025 to more than $2.2 billion, which, according to the company, was driven by "a broad-based performance across trading and asset-based businesses." Of the total revenue, about 47 per cent came from trading activities. Crypto trading activity also increased on the US-based platform, with total transaction volume reaching $2 trillion, a 34 per cent increase. Assets on the platform rose by 11 per cent to $48.2 billion.
Although headquartered in the US, Kraken also appears to be expanding globally. It obtained a MiFID II licence by acquiring a Cyprus-based broker last year and launched crypto perpetual contracts through the entity for European users. The expansion in products, as well as the offering of tokenised stocks, shows a strong focus on that area. Within months of launch, tokenised stocks on its platform reached more than $5 billion across both centralised and decentralised venues, and the number of users passed 37,000. Both figures are likely higher now.



U.S. Treasury Secretary Scott Bessent hailed Anthropic PBC's Mythos as a revolutionary step that will keep America ahead of China in AI, endorsing an industry leader that's clashed with Washington over its role in military endeavors. Bessent, speaking Tuesday at a Wall Street Journal event in Washington, dismissed a question suggesting China was rapidly catching up in AI technology, though he said American artificial intelligence stood just three to six months ahead. He singled out Mythos -- a model Anthropic says is highly adept at finding vulnerabilities in software and computer systems that's being released to a very limited number of carefully-chosen parties. The Treasury Secretary's comments emerged just days after he and Federal Reserve Chair Jerome Powell summoned Wall Street banks to an urgent meeting on concerns that Anthropic's latest model will usher in an era of greater cyber risk. "This Anthropic mythos model was a step function change in abilities, learning capabilities," he told the audience. "It's all logarithmic. You go from x to the 10th power to x to the 12th and then it's very difficult to catch up." Still, Anthropic has run afoul of some agencies in Washington.
The Pentagon this year declared the company a threat to the U.S. supply chain, under an authority normally reserved for foreign adversaries. The company won a court order last month blocking a ban on government use of the technology, after Anthropic argued the move could cost it billions of dollars in lost revenue. Founded in 2021 by former OpenAI staffers including Chief Executive Officer Dario Amodei, Anthropic has aimed to be a more responsible AI steward than its competitors. Claude and its underlying technology have gained traction with enterprise customers in sectors like finance and health care, as well as with developers. Anthropic has pledged to spend $50 billion to build custom data centers in the U.S. On Tuesday, Bessent also called out America's lead in AI computing -- the enormous data centers that hyperscalers from Meta Platforms Inc. to Google are spending hundreds of billions of dollars to build out. "I've seen studies that say that in a few years, the U.S. is going to have 70 or 80% of the global computing power," he said. "We were in the 30s. Now, I think we're in the 50s, and we're well, well on our way."


The largest US civil rights group on Tuesday sued xAI and a subsidiary, claiming they illegally operated more than two dozen gas turbines in Mississippi to power its Colossus 2 data center, posing a health risk to local residents. The NAACP, represented by Earthjustice and the Southern Environmental Law Center, sued xAI and subsidiary MZX Tech, charging they violated the federal Clean Air Act by running 27 gas-fired turbines before getting necessary air permits for the massive data center that powers xAI's Grok chatbot. Elon Musk's artificial intelligence startup xAI has invested more than $20 billion to build the data center in Southaven with the full backing of Mississippi Governor Tate Reeves, but the facility, as well as Colossus 1 just over the border in Memphis, Tennessee, has met heavy opposition from local communities due to its effect on local air and environmental quality. "By looking to evade clean air laws to operate dirty turbines that emit pollution and known carcinogens, these companies are following a shameful, familiar pattern: asking Black and frontline communities to bear the toxic brunt of 'innovation,'" said Abre' Conner, director of the Center for Environmental and Climate Justice at the NAACP. The NAACP announced its intention to sue xAI and MZX in February because the Clean Air Act requires 60 days of notice ahead of filing a lawsuit. Mississippi regulators held one public hearing that month about permits for those turbines after just a few days of public notice for the hearing, and subsequently approved the permits. xAI was not immediately available for comment. Earthjustice said that xAI's Southaven power plant has the potential to emit more than 1,700 tons of smog-causing nitrogen oxides (NOx) each year, a major source of smog in the greater Memphis area. The turbines are also estimated to emit 180 tons of fine particulate matter, 500 tons of carbon monoxide, and 19 tons of cancer-causing formaldehyde.
Mythos, a new artificial intelligence model that Anthropic PBC has teased as too dangerous to release, looked at first like a problem for banks. Days after the company announced the new technology, US Treasury Secretary Scott Bessent summoned Wall Street leaders to make sure they were taking precautions to defend their systems, creating invaluable publicity for Anthropic and raising questions about who gets an exclusive peek at its threatening progeny. The Treasury is now pushing for access to Mythos. One organization that already has it is the UK's AI Security Institute, which has become the world's top neutral arbiter of what counts as safe and secure AI. It found that some of the hype around Mythos is warranted. It is indeed more capable of being used for complex cyberattacks than other AI tools such as OpenAI's ChatGPT or Google's Gemini. But it is most perilous for "weakly defended" or simplified systems. Large banks have some of the most secure IT in the world, and while Mythos and other powerful AI pose a threat in the wrong hands, it's the much broader array of small and medium-sized companies that look most vulnerable to hackers and bad actors using the tools. Cyber specialists have long complained that companies treat security as an afterthought, and the result is online services and software that are riddled with bugs, handing hackers a possible way to infiltrate a computer system. Tech companies have an approach for dealing with this, called "responsible disclosure." Once a flaw is found in their software, they'll announce it to the world with a suggested fix, giving their customers time to make the patch and move on with their lives. Microsoft Corp.'s version of this is Patch Tuesday, which despite its name refers to a monthly disclosure of flaws the company has found in Office 365, Windows and other products. IT staff at banks like Barclays Plc and Wells Fargo & Co.
will take those suggested patches, test them to make sure they don't break any existing systems, get sign-off from management, and then deploy them. That takes weeks or months. Until the advent of generative AI, the process worked just fine, because it would typically take bad actors even longer to find a way to attack a system based on the flaw that had been disclosed. They'd have to study the bug and also experiment with different methods for exploiting it.

Artificial intelligence tools have changed all that. Even two years ago, hackers could take the details of a disclosure and paste it into ChatGPT, then tell the bot to scan a public database of source code such as GitHub for other patterns that could then be exploited. Say, for instance, that Microsoft announced its researchers had found a flaw in how Office 365 handles a file. A chatbot could not only suggest how to exploit it but quickly find other software, like Microsoft Outlook or Teams, with similar deficiencies.

This has all become even easier for hackers in the last few months, as AI companies have imbued their models with "agentic" capabilities, effectively giving them the power to act independently. Anthropic's Claude Cowork, released in January, can now carry out tasks like sending emails and making calendar appointments. For those who want to break into software, such tools won't just find weaknesses; they'll try different ways to hammer at them automatically until one method works. Mythos can even "chain" a software bug into multi-step attacks, something only highly skilled human hackers had been able to do previously. It's the equivalent of a burglar planning a series of steps for a break-in: finding that first open window, using it to unlock a door from the inside, and then disabling the alarm. Each step alone isn't enough, but together they grant full access.

Until now, generative AI's impact on cybersecurity has been amorphous.
There was no single tool that could launch devastating new attacks, but large language models were still harnessed to supercharge old tricks of the trade. Hackers have used chatbots to polish up emails for phishing attacks to look more credible, or real-time avatar generators to create deepfake video calls that trick people into thinking a man in his living room is a young woman.

But agentic AI is set to fuel the act of hacking itself, which has long been an opportunistic pursuit for the unscrupulous. So-called black hats tend not to go after banks because they're so secure. Instead they scan the web to spot vulnerabilities, be it a hospital they can infiltrate to make ransomware demands or a mom-and-pop shop. The recent advances in AI are a problem for these organizations because the moment a flaw is disclosed by a software provider, they now have precious little time to update and patch their systems. According to zerodayclock.com, the average time between a software flaw being made public and a working attack being built has collapsed from 771 days in 2018 to less than four hours today.

Anthropic's disclosure of Mythos certainly benefits its own publicity efforts ahead of an initial public offering, adding to the mystique around the potency of its technology. But it's also forcing a much-needed reckoning over how the window of time between published IT flaws and their exploitation has effectively vanished.
That raises questions over whether "responsible disclosure" is such a smart idea in the first place, and whether the process of patching flaws over weeks and months is now futile. Even Wall Street can't answer these questions yet, but banks at least have the staffing and the money to work out the difficult structural changes needed to eventually apply patches in near-real time. The bigger problem will be for smaller firms that need to move just as fast, and that will require technical and regulatory help the market can't yet provide.
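The arithmetic behind that vanishing window is easy to sketch. In this toy Python illustration, the disclosure-to-exploit figures are the zerodayclock.com numbers cited above; the six-week patch cycle is an assumed stand-in for the "weeks or months" enterprise process described earlier, not a figure from the article.

```python
from datetime import timedelta

# Disclosure-to-working-exploit times reported by zerodayclock.com.
exploit_window_2018 = timedelta(days=771)
exploit_window_now = timedelta(hours=4)

# Assumed figure: a six-week test/sign-off/deploy cycle, standing in
# for the "weeks or months" a typical enterprise patch process takes.
patch_cycle = timedelta(weeks=6)

def patched_in_time(patch_time: timedelta, exploit_time: timedelta) -> bool:
    """Defenders win only if the patch lands before a working exploit exists."""
    return patch_time < exploit_time

print(patched_in_time(patch_cycle, exploit_window_2018))  # prints True: 42 days < 771 days
print(patched_in_time(patch_cycle, exploit_window_now))   # prints False: 42 days > 4 hours
```

Under those assumptions, the same patch process that comfortably beat attackers in 2018 now loses the race by a wide margin, which is the structural problem the column describes.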

Anthropic's rise is giving some OpenAI investors second thoughts. OpenAI's $852 billion valuation is facing skepticism from some of its own investors as the company scrambles to reorient itself around enterprise customers and fend off Anthropic, according to the Financial Times. Anthropic's annualized revenue jumped from $9 billion at the end of 2025 to $30 billion by the end of March, driven largely by demand for its coding tools.

One investor who has backed both companies told the FT that justifying OpenAI's valuation required assuming an IPO valuation of $1.2 trillion or more, making Anthropic's current $380 billion valuation look like the comparative bargain. The secondary market tells a similar story right now: demand for Anthropic shares has grown almost insatiable while OpenAI shares are trading at a discount.

Altman has been here before. During his tenure leading Y Combinator, aggressive valuation inflation left some portfolio companies financially stranded, while others proved worth every penny and then some.

Iconiq Capital partner Roy Luo, whose firm has invested over $1 billion in Anthropic while holding a smaller stake in OpenAI, told the FT where he stood. "There's room for both, but there is fundamentally a number 1 and a number 2 dynamic, and the number 1 will win disproportionately," he said. "We picked."

OpenAI CFO Sarah Friar pushed back, telling the FT that the company's $122 billion raise, the largest private fundraising in history, was evidence of continued investor confidence.

Anthropic may not have burnt all its bridges with the Trump administration. According to a report by the news agency Reuters, the co-founder of the company behind the Claude chatbot said that Anthropic is in talks with the US government about its latest artificial intelligence (AI) model, Mythos. This comes after the US Department of War cut ties with the Google- and Amazon-backed AI startup. Last month, a dispute between Anthropic and the DoW over guardrails governing the military's use of its AI tools led the agency to label the company a supply-chain risk and bar its use by the Pentagon and its contractors.

"We have a narrow contracting dispute, but I don't want that to get in the way of the fact that we care deeply about national security. Our position is the government has to know about this stuff ... So absolutely, we're talking to them about Mythos, and we'll talk to them about the next models as well," Anthropic co-founder Jack Clark told Reuters. However, the report didn't provide any further details about the company's talks with the US government, including which agencies were involved.

Last week, a Washington DC federal appeals court declined to block the Pentagon's national security blacklisting, even as another appeals court reached a different conclusion in a separate challenge. Despite the dispute, Anthropic said it continues to engage with the government on national security matters.

A few days ago, Anthropic revealed its latest artificial intelligence technology, Mythos, capable of writing code and executing tasks autonomously. According to the company, the technology can spot loopholes in cybersecurity systems and exploit them. Nevertheless, many developers and experts are questioning how relevant the technology really is. George Hotz believes that identifying such loopholes is relatively straightforward and that people do not exploit them for legal and monetary reasons.
As a result, he suggested that many other people could perform the task without AI. Similarly, other experts, such as Gary Marcus and Yann LeCun, expressed scepticism about Anthropic's claims. An AI security company called Aisle has also pushed back on those claims, demonstrating that a cheaper model can perform the same function. On the other hand, some experts maintain that Mythos has made substantial improvements in chaining exploits.
WASHINGTON: US Treasury Secretary Scott Bessent hailed Anthropic PBC's Mythos as a revolutionary step that will keep America ahead of China in AI, endorsing an industry leader that has clashed with Washington over its role in military endeavours.

Bessent, speaking Tuesday at a Wall Street Journal event in Washington, dismissed a question suggesting China was rapidly catching up in AI technology, though he said American artificial intelligence stood just three to six months ahead. He singled out Mythos, a model Anthropic says is highly adept at finding vulnerabilities in software and computer systems, which is being released to a very limited number of carefully chosen parties. The Treasury Secretary's comments came just days after he and Federal Reserve Chair Jerome Powell summoned Wall Street banks to an urgent meeting over concerns that Anthropic's latest model will usher in an era of greater cyber risk.

"This Anthropic Mythos model was a step function change in abilities, learning capabilities," he told the audience. "It's all logarithmic. You go from x to the 10th power to x to the 12th and then it's very difficult to catch up."

Still, Anthropic has run afoul of some agencies in Washington. The Pentagon this year declared the company a threat to the US supply chain, under an authority normally reserved for foreign adversaries. The company won a court order last month blocking a ban on government use of its technology, after Anthropic argued the move could cost it billions of dollars in lost revenue.

Founded in 2021 by former OpenAI staffers including CEO Dario Amodei, Anthropic has aimed to be a more responsible AI steward than its competitors. Claude and its underlying technology have gained traction with enterprise customers in sectors like finance and health care, as well as with developers. Anthropic has pledged to spend US$50 billion building custom data centres in the US.
On Tuesday, Bessent also pointed to America's lead in AI computing: the enormous data centres that hyperscalers from Meta Platforms Inc to Google are spending hundreds of billions of dollars to build out. "I've seen studies that say that in a few years, the US is going to have 70 or 80% of the global computing power," he said. "We were in the 30s. Now, I think we're in the 50s, and we're well, well on our way."
