News & Updates

The latest news and updates from companies in the WLTH portfolio.

IMF Chief warns Anthropic AI model poses major cybersecurity risks

WASHINGTON, 13th April, 2026 (WAM) -- Kristalina Georgieva, Managing Director of the International Monetary Fund, said she is concerned about a powerful new AI model from Anthropic that poses major cybersecurity risks. In a statement, she said that the world does not have the ability to protect the international monetary system against massive cyber risks. "The risks have been growing exponentially," Georgieva said. "Yes, we are concerned. We are very keen to see more attention to the guardrails that are necessary to protect financial stability in the world of AI."

Anthropic
Northern Ireland News 13d ago

UK regulators rush to assess risks of latest Anthropic AI model, FT reports

April 12 (Reuters) - British financial regulators are holding urgent talks with the government's cyber security agency and major banks to assess risks posed by the latest artificial intelligence model from Anthropic, the Financial Times reported on Sunday. Bank of England, Financial Conduct Authority and Treasury officials are in talks with the National Cyber Security Centre to examine potential vulnerabilities in critical IT systems highlighted by Anthropic's latest AI model, the FT said, citing two people briefed on the talks. Anthropic did not respond to a Reuters request for comment. The BoE, FCA and NCSC declined to comment. The UK Treasury was not immediately available for comment. Representatives from major British banks, insurers and exchanges are expected to be briefed on the cyber security risks posed by the model, Claude Mythos Preview, at a meeting with regulators in the next fortnight, the newspaper said. Reuters could not immediately verify details of the report. Reuters reported on Friday, citing two sources, that U.S. Treasury Secretary Scott Bessent had called a meeting with major Wall Street banks on the model's cyber risk potential. AI startup Anthropic has said the model is being deployed as part of "Project Glasswing", a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview model for defensive cyber security purposes. In a blog post earlier this month, Anthropic said the model had already identified thousands of major vulnerabilities across operating systems, web browsers and other widely used software. (Reporting by Mihika Sharma in Bengaluru; Editing by Bernadette Baum, Christina Fincher and Alexander Smith)

Anthropic
The Star 13d ago

easyJet passengers 'vomiting and passing out' in 3-hour border control chaos

easyJet passengers were 'close to passing out' after being left in sweltering conditions as their flight departed for Manchester without them amid border control chaos. Around 100 travellers were left stranded in Milan on Sunday, April 12, after enduring queues of up to three hours at Milan's Linate airport because of new border control procedures. Alongside worries about getting home, numerous passengers were left vomiting and fainting in the scorching heat, according to the BBC. easyJet stated it was working to assist passengers but that the circumstances were "outside of our control". Massive queues developed at the international airport, reports the Mirror, and images and footage posted online showed scenes of chaos as they built up. The disorder follows the UK government revising its advice to people travelling to the European Schengen zone, who may now need to register biometric information upon arrival. The EU entry and exit system (EES) is a digital system that replaces the physical stamping of passports at border control. The carrier stated that it delayed the aircraft for nearly an hour extra, but ultimately had to take off due to crew working regulations. Passengers were left frustrated after reaching the airport with ample time only to face enormous delays getting home. Emily Benn, from Grimsby, was travelling with five others on the 11am service. Her replacement flight will now head to Gatwick instead of Manchester, resulting in a £400 taxi bill upon landing. She told the M.E.N: "We arrived at the airport at 8am and our flight was scheduled to depart Milan Linate at 11am. As soon as our gate appeared on the board, we headed straight there and there was already a massive queue. "The queue covered three different flights, and there were hundreds of travellers all attempting to get through. 
The new EES wasn't functioning, so we all had to be processed by two staff members at passport control. "It reached 11:20am and we were informed the flight had departed without us. They placed us all on a shuttle bus and returned us to the arrivals area, where we had to go back to the easyJet desk. "We were instructed to rebook flights, so have booked to Gatwick and will then pay £400 for a taxi back to Manchester as that's where our car is parked. We are a group of five adults and one child, who is due to have spinal surgery in a few days." Other travellers shared the 'nightmare' ordeal on social media. One wrote: "What a nightmare! "You abandoned me and 122 other passengers in Milan. You flew to Manchester with 34 onboard. "We queued for three hours and all the time the flight info remained at 'boarding'. We were then told the delayed flight had left." A spokesperson for easyJet stated: "We are aware that some passengers departing from Milan Linate today experienced longer than usual waiting times at passport control and we advised customers due to fly to allow additional time to make their way through the airport. "We held flight EJU5420 from Milan to Manchester for nearly an hour to give passengers extra time but it had to then depart due to crew reaching their safety regulated operating hours. Customers who missed the flight have been offered a free flight transfer. "We continue to urge border authorities to ensure they make full and effective use of the permitted flexibilities for as long as needed while EES is implemented, to avoid these unacceptable border delays for our customers. While this is outside of our control, we are sorry for any inconvenience caused."

CHAOS
Daily Star 13d ago

The White House Wants Banks to Let Anthropic's AI Inside the Vault -- and Wall Street Is Listening

Senior officials in the Trump administration have been quietly nudging major U.S. banks to pilot Anthropic's newest artificial intelligence model, Mythos, in what amounts to an unusual marriage of government influence and private-sector AI adoption. The effort, first reported by TechCrunch, raises pointed questions about the boundaries between federal policy and corporate technology procurement -- and about who stands to benefit when the government puts its thumb on the scale in the AI race. The push isn't subtle. According to TechCrunch's reporting, administration officials have held private conversations with executives at several of the nation's largest financial institutions, encouraging them to integrate Mythos into compliance workflows, fraud detection pipelines, and customer-facing operations. The conversations have reportedly involved staff from both the Treasury Department and the Office of Science and Technology Policy, suggesting coordination at multiple levels of government. Why Anthropic? And why now? Mythos, which Anthropic released in early 2026, represents the San Francisco-based company's most commercially ambitious model to date. Unlike its predecessor Claude models, which were positioned primarily as general-purpose assistants, Mythos was built with enterprise-grade features tailored to regulated industries. It includes enhanced auditability, detailed chain-of-thought logging, and what Anthropic describes as "constitutional guardrails" specifically tuned for financial services use cases. The model can process and reason over large volumes of regulatory text, flag suspicious transaction patterns, and generate compliance reports -- tasks that currently consume thousands of analyst hours at major banks. Anthropic has been aggressively courting the financial sector for months. But government officials actively steering banks toward a specific vendor is something different entirely. 
It blurs lines that the banking industry, already subject to intense regulatory scrutiny, has historically tried to keep clean. The timing is no accident. The Trump administration has spent the past year positioning the United States as the global leader in AI development and deployment, rolling back Biden-era executive orders on AI safety and replacing them with a lighter regulatory framework that emphasizes speed and commercial adoption. In January, the administration launched its "AI Acceleration Initiative," a broad policy directive encouraging federal agencies and private industries to adopt American-made AI systems. Banking was identified as a priority sector. So the Mythos push fits a pattern. But the specificity -- one company, one model, directed at one industry -- has raised eyebrows among policy analysts and competitors alike. "There's a difference between saying 'American companies should adopt AI' and saying 'You should adopt this particular company's AI,'" said a senior executive at a competing AI firm who spoke on condition of anonymity. "The first is policy. The second starts to look like favoritism." Anthropic has denied any direct coordination with the White House on bank outreach. A spokesperson told TechCrunch that the company "welcomes interest from any sector" and that Mythos "was designed to meet the rigorous demands of highly regulated industries." The spokesperson declined to comment on specific government conversations. The Treasury Department did not respond to requests for comment. For the banks themselves, the proposition is complicated. On one hand, AI adoption in financial services has been accelerating rapidly, and institutions that fall behind risk competitive disadvantage. JPMorgan Chase, Goldman Sachs, and Bank of America have all publicly discussed expanding their AI capabilities in recent earnings calls. JPMorgan CEO Jamie Dimon has called AI "as consequential as the printing press" for the financial industry. 
On the other hand, adopting a model at the apparent suggestion of government officials creates a different kind of risk -- reputational, legal, and political. Consider the regulatory angle. Banks operate under a web of oversight from the OCC, the FDIC, the Federal Reserve, and the SEC. If an institution adopts an AI model because government officials encouraged it, and that model later produces errors -- misclassifying transactions, generating flawed compliance reports, or making biased lending recommendations -- the liability questions become extraordinarily tangled. Did the bank exercise independent judgment? Was there implicit pressure? Could regulators who encouraged adoption then turn around and penalize the bank for failures in that same system? These aren't hypothetical concerns. The banking industry's experience with technology mandates and suggestions from Washington has historically been fraught. The 2008 financial crisis was fueled in part by risk models that institutions adopted with insufficient scrutiny. More recently, banks that rushed to implement pandemic-era PPP loan processing systems faced fraud losses and regulatory actions when those systems proved inadequate. "Any time the government says 'you should use this,' a compliance officer's first instinct should be to ask why," said Karen Petrou, managing partner of Federal Financial Analytics, a Washington-based consultancy. "The question isn't whether the technology is good. It's whether the process of adoption is sound." And yet the appeal is real. Mythos has posted impressive benchmark results. In Anthropic's own testing, the model outperformed GPT-5 and Google's Gemini Ultra on financial reasoning tasks by margins of 8 to 12 percent, depending on the benchmark. 
Independent evaluations from the Stanford Center for Research on Foundation Models have largely corroborated these results, though researchers noted that benchmark performance doesn't always translate to real-world reliability in high-stakes environments. The model's auditability features are particularly attractive to compliance teams. Traditional AI models operate as black boxes -- they produce outputs but can't explain their reasoning in ways that satisfy regulators. Mythos addresses this with what Anthropic calls "transparent reasoning traces," essentially detailed logs of every step the model takes to reach a conclusion. For a bank trying to demonstrate to examiners that its AI-driven decisions are sound, this is a significant selling point. But competitors aren't standing still. OpenAI has its own financial services offering in development. Google DeepMind has partnered with several European banks on compliance automation. And a crop of specialized fintech AI companies -- firms like Resistant AI, Hawk AI, and Hummingbird -- argue that purpose-built models outperform general-purpose ones in specific financial applications, regardless of benchmark scores. The competitive dynamics add another layer to the controversy. If the White House is effectively endorsing Anthropic, it disadvantages every other company in the market. That's a concern not just for OpenAI and Google but for the smaller firms that lack the resources to compete with a government-backed incumbent. "This is industrial policy by other means," said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies AI policy. "The U.S. has traditionally been skeptical of governments picking technology winners. This feels like a departure." Not everyone sees it that way. Supporters of the administration's approach argue that the AI race with China demands a more active government role in accelerating adoption. 
They point to China's own efforts to push domestic AI models into its banking sector, including mandates that state-owned banks adopt systems from Baidu, Alibaba, and other Chinese tech giants. In this framing, encouraging American banks to use American AI isn't favoritism -- it's national security. "We're in a technology competition with adversaries who don't play by market rules," said a senior administration official who spoke on background. "If we wait for the market to sort this out on its own timeline, we lose." That argument carries weight in Washington right now. Bipartisan concern about China's AI capabilities has created unusual political space for interventionist technology policy. The CHIPS Act, passed under Biden with broad Republican support, established the precedent that the federal government could direct resources toward specific technology sectors. The question is whether that precedent extends to the government directing specific companies toward specific customers. Legal experts are divided. Federal procurement law contains detailed rules about how the government selects technology vendors for its own use. But there's no equivalent framework governing government recommendations to private companies. "It's a gray area," said Daniel Ho, a professor at Stanford Law School who specializes in AI regulation. "There's no law that says a government official can't suggest a product to a bank executive. But the power dynamics make it more than a casual suggestion." The banks, for their part, appear to be proceeding cautiously. According to people familiar with the discussions, at least three major institutions have agreed to conduct limited pilots of Mythos in sandbox environments -- testing the model on historical data sets rather than deploying it in live operations. This approach lets them evaluate the technology without committing to it, and without appearing to reject a suggestion from powerful government officials. It's a classic Wall Street hedge. 
The broader implications extend well beyond banking. If the administration's approach succeeds -- if Mythos gains a foothold in major financial institutions partly through government encouragement -- it establishes a template that could be replicated in healthcare, energy, defense contracting, and other regulated industries. The AI market, already dominated by a handful of large players, could become even more concentrated if government influence consistently tips the scales toward preferred vendors. Anthropic's own positioning makes this particularly interesting. The company has built its brand around AI safety, arguing that its "constitutional AI" approach produces models that are more aligned with human values and less prone to harmful outputs. It has cultivated relationships with policymakers on both sides of the aisle, and its leadership -- including CEO Dario Amodei and president Daniela Amodei -- has been vocal about the need for responsible AI development. That reputation may be part of why the administration chose Anthropic for this push. It's an easier sell politically than, say, OpenAI, which has faced criticism over its corporate governance and its relationship with Microsoft. But safety branding and actual safety are different things. No AI model operating in financial services has been tested at the scale the administration envisions. The potential failure modes -- hallucinated regulatory citations, incorrect risk assessments, biased credit decisions -- could have consequences measured in billions of dollars and millions of affected consumers. The Office of the Comptroller of the Currency, which supervises national banks, issued guidance in late 2025 stating that financial institutions remain fully responsible for any decisions made with AI assistance, regardless of the model's provenance or who recommended it. That guidance hasn't changed. Banks that adopt Mythos will own the outcomes, whether those outcomes are good or catastrophic. 
For now, the situation remains fluid. Congressional Democrats have begun asking questions. Senator Elizabeth Warren sent a letter to Treasury Secretary Scott Bessent last week requesting documentation of any communications between Treasury staff and Anthropic or bank executives regarding AI adoption. The letter, first reported by Politico, called the reported outreach "deeply troubling" and demanded a response within 30 days. Republicans have been largely silent on the matter, though some have privately expressed discomfort with the specificity of the administration's approach. "Promoting American AI is one thing," a Republican Senate aide told reporters. "Promoting one American AI company is another." Anthropic's stock -- the company went public in late 2025 -- rose 4.3 percent on the day TechCrunch published its report. It has gained another 2.1 percent since. Investors, at least, seem to view government backing as unambiguously positive. The rest of the industry isn't so sure. What's unfolding is a test case for how AI adoption will proceed in America's most consequential industries. Will it be driven by market competition, regulatory mandate, or something murkier -- a phone call from a government official suggesting that a particular model deserves a closer look? The answer will shape not just the banking sector but the entire trajectory of AI commercialization in the United States. And right now, nobody in Washington or on Wall Street seems entirely comfortable with where this is heading.

Anthropic
WebProNews 13d ago

The Democratic Chaos of Language vs. the Curated Precision of Science

The entry for geranium lake in Webster's Third New International Dictionary describes it as "a vivid red that is lighter and slightly yellower and stronger than apple red, yellower, lighter, and stronger than carmine, and bluer, lighter, and stronger than scarlet." Another entry defines geranium red as being "slightly lighter than Goya." How did color definitions this complex and weird end up in dictionaries? Lexicographer Kory Stamper's new book True Color: The Strange and Spectacular Quest to Define Color from Azure to Zinc Pink answers that question, tracing the collision between what she calls the democratic chaos of language and the curated precision of science, plus the challenge of lexicographers struggling to write about color in an era of mass production, military supply chains, and increasingly sophisticated colorimetry.

CHAOS
waywordradio.org 13d ago

Traffic chaos as serious crash closes Tonkin Highway

Emergency crews are responding to a serious crash involving a car and motorbike in Perth's east. The crash is causing heavy congestion for southbound traffic on Tonkin Highway at Great Eastern Highway in Bayswater. Motorists are being urged to exercise extreme caution and avoid the area. The right lane is closed. Police, St John Ambulance and towing services are on the scene.

CHAOS
Perth Now 13d ago

AI models like Anthropic's Mythos pose disruption risks to India's IT services growth: Kotak

New Delhi [India], April 12 (ANI): AI models such as Anthropic's Mythos could pose disruption risks to the growth of India's IT services sector, according to a report by Kotak Institutional Equities. The report said the model 'exhibits a step-jump in benchmark performance across software engineering tasks' and added that it 'raises near- to medium-term disruption risks for IT services,' particularly for companies with higher exposure to application services. The brokerage noted that improvements in AI-driven coding could translate into real business impact. 'The realization of similar improvements in real-world scenarios risks turning our estimate of a 3-3.5% annual growth headwind for the industry... from prudent to practical,' the report said. It added that such advancements could also increase downside risks if rapid capability gains continue in future AI models. Kotak further said the model could 'increase efficiencies across all IT services segments' but warned that gains may not be evenly distributed. Stronger automation in coding could widen productivity differences, especially impacting application development services more than other segments. At the same time, the report flagged pricing pressure risks. It said the development 'could pressurize the valuation multiples of IT services companies' and compound 'near-term deflation risks for services.' However, Kotak also pointed to emerging opportunities for AI adoption. It expects 'an acceleration of opportunities, such as the modernization of legacy systems and data foundations, which will partially offset the revenue deflation impact.' The report added that once such models are widely deployed, they could 'accelerate GenAI-driven business use cases, providing large new opportunities to Indian IT.' The report further noted that AI-driven changes could reshape the sector's trajectory, adding, 'We expect Mythos to increase efficiencies across all IT services segments.' (ANI)

Anthropic
India Gazette 13d ago

Anthropic Gains On OpenAI Amid Rising Adoption Among Enterprises

Close to a third of American businesses paid for Anthropic's artificial intelligence offerings last month, the Financial Times (FT) reported Saturday (April 11), citing data from payments company Ramp. That number marks an increase of more than 6 percentage points from the prior month. Meanwhile, OpenAI remains in the lead, but business adoption of its tools was flat at 35%, the Ramp data showed. Ramp's findings were based on $100 billion in annual card and invoice spending from 50,000 customers, the FT added. OpenAI has enjoyed a strong lead in the consumer space since it introduced ChatGPT in November 2022, the report added. The company has said it has around 900 million weekly active users, with a little more than 5% paying for its services. The FT argued that this early growth is proving tough to maintain. The report, citing data from market researchers Sensor Tower, added that downloads of ChatGPT rose 5% in the month leading to March, while downloads of Anthropic's Claude chatbot tripled to 21 million during the same time frame. Also in March, ChatGPT's weekly active users in the U.S. declined from month to month for the first time in around two years, the FT said, using data from Apptopia. OpenAI told the FT it did not recognize the data it had cited, adding that its AI coding agent Codex now has 3 million weekly users, up from 2 million last month. "Our APIs now process more than 15 billion tokens per minute. ChatGPT has six times the monthly web visits and mobile sessions of the next largest AI app," the company said. "Our ads pilot reached $100 million run rate in six weeks. That's growth." 
Writing about AI adoption last week, PYMNTS CEO Karen Webster argued that tools like Claude are able to gain ground as consumers are introduced to new AI models through work. "ChatGPT expands outward from the consumer, earning trust in low-stakes, high-frequency tasks and carrying that trust into the workplace. The habit comes first; the enterprise follows," Webster wrote.

Anthropic
PYMNTS.com 13d ago

Is X-Energy a Millionaire-Maker Stock?

The company designs and builds small modular nuclear reactors. Shares of nuclear power company X-Energy aren't available yet. Still, the company recently filed a draft registration statement with the Securities and Exchange Commission (SEC) for an initial public offering (IPO) under the NASDAQ ticker XE. The nuclear energy company's shares could go radioactive, making investors millions, or they could bust: the company presents a high-risk, high-reward profile. On the negative side, X-Energy lost money last year, and as a private company, there isn't much information yet about its finances. On the positive side, its IPO prospectus states that the market for small modular reactors (SMRs) could be worth $2.3 trillion by 2050. Here are three reasons why X-Energy could be a millionaire-maker stock. X-Energy has attracted funding from Amazon, Dow, the Climate Pledge Fund, Segra Capital Management, Jane Street, and Ares Management. The first two are the most important because they are customers for the company's nuclear SMRs. The company is working with Dow in Texas to build a four-SMR-unit plant under the U.S. Department of Energy's Advanced Reactor Demonstration program. It also has at least 5 GW of projects planned with Amazon by 2039, and it has a commitment from British energy and services company Centrica for 6 GW of SMRs. The primary driver for owning X-Energy is its proprietary Xe-100 reactor technology. Unlike the light-water reactors that dominate the current grid, the Xe-100 is a high-temperature helium-cooled reactor designed for more than just electricity. While the HTR-PM helium-cooled reactor is already operating in China, the Xe-100 is still awaiting approval from the Nuclear Regulatory Commission (NRC) in the U.S. According to X-Energy, the Xe-100 produces high-temperature steam that can directly decarbonize heavy industrial processes, such as chemical manufacturing and hydrogen production. 
According to X-Energy, the Xe-100 is also considered safer than other designs because it relies on nature rather than machinery or human intervention to prevent a meltdown. The reactor's physics act like an automatic brake: if it gets too hot, the nuclear reaction naturally slows and stops on its own. Because the reactor doesn't produce an overwhelming amount of heat in one small spot, X-Energy says that leftover heat simply drifts away into the air; it doesn't need extra water or pumps to stay cool. The energy company's business model also benefits from its own TRISO-X fuel. According to X-Energy, each grain of uranium is enveloped by layers of ceramic that the company says are designed to prevent the release of radioactive materials even under extreme temperatures. By controlling its SMRs' fuel supply, X-Energy creates a recurring revenue stream that would continue for the 40-to-60-year lifespan of the SMRs it sells. IPOs can create millionaires, but because most companies that go public are in their early stages and rarely profitable, there are significant risks. There are plenty of competitors lining up in the SMR space, and if a better design emerges, X-Energy's profits and share price would likely suffer. However, the company is well-financed and may be better suited to thrive than other small SMR companies. While the risks of regulatory delays and capital intensity remain, the combination of a diversified industrial client base, a proprietary fuel monopoly, and elite tech partnerships makes X-Energy a compelling candidate for a growth-oriented portfolio.

X-energy
The Motley Fool 13d ago

Suspect in Molotov attack on Sam Altman's home linked to AI Discord server

PauseAI said the suspect had "no role" in the group and condemned the attack. The 20-year-old suspect arrested after the Molotov cocktail attack on Sam Altman's $27 million home participated in a Discord server critical of AI development. PauseAI said the suspect, identified by several outlets as Daniel Alejandro Moreno-Gama, joined its server two years ago. The nonprofit organization is focused on temporarily pausing the development of frontier AI models -- like OpenAI's GPT-5.4 -- and mitigating the risks they pose. "Violence against anyone is antithetical to everything we stand for," the organization said in a statement on its website. The organization said Moreno-Gama posted a total of 34 messages in the Discord server, though none of those messages "contained explicit calls to violence." "He had no role in PauseAI, participated in no campaigns, attended no events, and received no support from us," the statement said. "Following news of the attack, we banned him from our server." A moderator began removing his messages following news of the attack, but stopped when they realized they could be relevant to the investigation. "Our Discord server is a public community space. As with any open forum, people can join freely," the statement said. "We actively moderate our channels and take enforcement seriously, as our warning to this individual demonstrates. But we cannot predict or control the actions of every person who joins a public server; no organisation can." Authorities in San Francisco booked Moreno-Gama at 12:47 p.m. on Friday, and he's facing several charges. The charges include arson of an inhabited structure or property and attempted murder. A San Francisco Police Department spokesperson said officers responded to a North Beach residence just after 4 a.m. local time to investigate a fire. The fire was contained to the exterior gate, and there were no injuries, according to the spokesperson. The suspect fled the scene on foot. 
His description was later broadcast to all SFPD officers. Around 5:07 a.m., officers responded to OpenAI's office, where they said a man was threatening to burn down the building. The spokesperson said officers realized the man matched the description of the person who threw the Molotov cocktail at Altman's house.

An OpenAI spokesperson confirmed the attack in a statement. "Early this morning, someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt. We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe," the OpenAI spokesperson said.

Altman responded to the incident in a blog post on Friday, featuring a photo of his husband, Oliver Mulherin, and their son. "I empathize with anti-technology sentiments and clearly technology isn't always good for everyone. But overall, I believe technological progress can make the future unbelievably good, for your family and mine," he wrote. "While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally."

The incident occurred about five months after OpenAI told employees at its headquarters to shelter in place due to a reported threat, which prompted a police response. The threat came from an individual linked to an anti-AI activist group.

OpenAI has become a major player in the tech scene, thrusting Altman into the global spotlight. The billionaire has met with many global leaders, including President Donald Trump, and has become an industry titan. However, public opinion on AI has shifted since the company released ChatGPT in 2022. Concerns over how AI will affect the workforce have caused protests, and OpenAI's deal with the Pentagon drew criticism. Tech companies spend millions of dollars to protect their founders.
An SEC filing said Tesla incurred about $2.4 million in expenses for such security services in 2023 and about $0.5 million through February 2024, representing a portion of the total cost of security services related to Elon Musk. Facebook spent $20.4 million protecting CEO Mark Zuckerberg in 2019. It's unclear how much OpenAI spends on Altman's security.

Discord
DNyuz · 13d ago

Anthropic's Mythos AI can spot weaknesses in almost every computer on earth. Uh-oh.

Aimee Picchi is the associate managing editor for CBS MoneyWatch, where she covers business and personal finance. She previously worked at Bloomberg News and has written for national news outlets including USA Today and Consumer Reports.

Anthropic's latest AI technology, called Mythos, is so powerful at revealing software vulnerabilities that the company is afraid to release the model publicly lest it fall into the hands of bad actors. The company, the developer behind the Claude AI chatbot, said in a post on its website this week that the new tool has already uncovered thousands of weak points in "every major operating system and web browser." Although that capability could prove to be a boon for protecting critical systems, it is also stirring concerns that hackers could exploit Mythos to attack the IT infrastructure at banks, hospitals, government systems and many other organizations.

Rather than releasing Mythos to the public, Anthropic is sharing the tech with a select group of major companies, including Amazon, Apple, Cisco, JPMorgan Chase and Nvidia, so they can test the model and strengthen their own systems against cyberattacks. Called Project Glasswing, the effort is aimed at helping key companies harden their defenses before hackers get access to Mythos or similar AI models, according to Anthropic.

At the same time, security experts said, the concerns around Mythos attest to the dangers of AI if it is weaponized for harm. "What we need to do is look at this as a wake-up call to say, the storm isn't coming -- the storm is here," Alissa Valentina Knight, CEO of cybersecurity AI company Assail, told CBS News. "We need to prepare ourselves, because we couldn't keep up with the bad guys when it was humans hacking into our networks. We certainly can't keep up now if they're using AI because it's so much devastatingly faster and more capable."

Mythos' capabilities are also sparking concern among federal officials.
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with top bank CEOs in a closed-door meeting on Tuesday to discuss Mythos and other emerging cybersecurity risks stemming from AI. Anthropic also briefed senior U.S. government officials and key industry stakeholders on Mythos's capabilities, CBS News has learned.

Separately, IMF Managing Director Kristalina Georgieva said in an interview set to air Sunday on "Face the Nation with Margaret Brennan" that the world does not have the ability "to protect the international monetary system against massive cyber risks." "The risks have been growing exponentially," Georgieva said. "Yes, we are concerned. We are very keen to see more attention to the guardrails that are necessary to protect financial stability in the world of AI."

Anthropic didn't return a request for comment. In its post, however, the company underscored the risks of misusing tools like Mythos. "The fallout -- for economies, public safety, and national security -- could be severe," the company said.

Such stark warnings mask another troubling reality: Hackers already have access to advanced AI models and are using them for a range of malign purposes, including to create autonomous "agents" capable of carrying out attacks without human intervention. Such attacks range from spreading malware and executing identity theft scams to producing deepfake videos and launching ransomware attacks, according to cybersecurity experts. "AI-enabled tooling has empowered even low-skilled threat actors to execute high-speed, high-volume operations, whilst advanced adversaries are using AI to sharpen precision, scale automation and compress attack timelines," PwC said in a recent report.
"The time between the public release of a new capability by an AI company and its weaponization by threat actors shrank dramatically [in 2025], a trend we assess will likely accelerate in 2026," the management consulting firm added.

Other AI tools, while not yet as effective as Mythos at exposing the soft underbelly of software, are already amplifying the risks to consumers, businesses and governments. For instance, hackers are tapping AI to sharpen so-called phishing attacks aimed at prying loose confidential information, said Zach Lewis, the chief information officer at the University of Health Sciences and Pharmacy in St. Louis. "It's been used to really script those dialogues, those conversations, those phishing emails, to specific people -- and really customize them to make them a lot more difficult to detect and identify if these are fake or not," he told CBS News.

"Once [Mythos] drops, we're going to see a lot more vulnerabilities, probably a lot more attacks," Lewis said. "Cyberattacks are definitely going to increase until we get to a point where we're patching up all those vulnerabilities almost in real time."

AI is more effective than humans at finding software bugs because it can quickly scan thousands of lines of code and detect problems, something people are not necessarily good at, Knight explained. "Humans are the weakest link in security," Knight noted. "Humans have the ability to make mistakes when we're writing code. It's possible for vulnerabilities in source code to have never been found by humans."

Some security experts questioned the motives behind Anthropic's incremental approach to rolling out Mythos, speculating that the limited release could be aimed at stirring interest from other prospective customers.
Meanwhile, both Anthropic and rival OpenAI are expected to launch initial public offerings by the end of the year, according to the Wall Street Journal -- a possible incentive to drum up headlines, said Peter Garraghan, founder and Chief Science Officer at Mindgard, an AI security platform. "I suspect Anthropic may be using this as a marketing ploy, perhaps towards IPO," he said.

Anthropic has sought to distinguish its brand from OpenAI and other rivals by publicly emphasizing AI safety, highlighting its guardrails for keeping the technology in line. Anthropic's decision to hold off on releasing Mythos and launching Project Glasswing aligns with that image, noted Columbia Business School marketing lecturer Malek Ben Sliman. "When facing the tough decisions, Anthropic has actually been true to its values," he said. Curating the release of Mythos "does allow them to look to be the protectors of this responsible AI, but it also is a great marketing and advertising tool."

Anthropic
CBS News · 13d ago

Trump's incendiary rhetoric fuels 'chaos and confusion', driving NATO allies toward self-reliance

Oliver Farry is pleased to welcome Alexandre Vautravers, Associate Fellow in Leadership in Conflict Management at the Geneva Centre for Security Policy (GCSP). According to Vautravers, amid Trump's fiery rhetoric, the real challenges lie in the technical, logistical, and strategic underpinnings of the alliance and interoperability across US and European forces. He says Europe is entering a phase of cautious autonomisation in response to an unpredictable US ally that has increasingly adopted highly confrontational policies towards allies and adversaries across the globe.

CHAOS
France 24 · 13d ago

Ahmedabad's first brush with electoral politics: Liquor, bribes and utter chaos

Ahmedabad: On Aug 15, 1885 -- 62 years before India won Independence -- Ahmedabad carried out Gujarat's first experiment with democracy. The municipality elections that year followed more than a decade of resistance from the British administration. In 1874, collector Alexander Alfred Borradaile dismissed public demands for elections, calling the process a "farce". It was only the passage of the 1884 Act under Lord Ripon that finally permitted half the municipal body to have elected members.

A historian in the city uncovered the episode in a file titled "Municipal Election -- Alleged Irregularities" from the Maharashtra State Archives last year. The documents describe an election that was momentous, but also unruly. The contest covered 14 seats across seven wards, drawing 54 candidates for an electorate of just 1,914 valid voters. With no electoral code and no restrictions on campaigning, the process descended into what officials recorded as "a spectacle of bribery and chaos". Candidates rolled barrels of liquor into the streets and handed out cash, grain and ghee. Several distributed exactly Rs 5 to voters on the eve of polling. Dining halls were opened, prompting the file to observe that "wise voters" made the rounds, sampling meals from multiple candidates.

"Polling day itself was violent. Horse-drawn carriages ferried supporters to booths as stone-pelting and street brawls erupted across neighbourhoods, leaving both voters and police officers injured," says Rizwan Kadri, member of the Prime Ministers' Museum and Library (PMML) Society.

The election threw up several surprises. Social reformer Mahipatram Rupram contested four seats across three wards. He won Jamalpur with 101 votes but lost the remaining three and went on to file election petitions against the municipal president, mill owner Ranchhodlal Chhotalal. Elsewhere, industrialist Bechardas Ambaidas Lashkari was returned unopposed from Shahpur-1. Ratanlal Trambaklal emerged as the most popular candidate, polling 131 votes in Kalupur. Other winners included Vrajlal Sakarlal, Kasturchand Premchand and Chimanlal Kapurchand in Khadia; Maganlal Sarupchand and Cowasji Mancherji, the latter securing victories in both Kalupur and Jamalpur; Abdul Narmavala and Narbheram Rughnathdas in Dariapur; Madhavlal Ranchhodlal in Shahpur-2; and Navroji Pestanji and Farmanji Pestonji in Raikhad and Saraspur respectively. Eight candidates failed to attract a single vote, while six others polled just one.

After the elections, the municipal body combined 30 government-appointed commissioners, including figures such as Maganbhai Karamchand, Jashingbhai Hathising and Mancherji Sorabji, with a partially elected board. The Municipal Board was constituted on Sep 15, 1885.

CHAOS
The Times of India · 13d ago

Trump officials may be encouraging banks to test Anthropic's Mythos model - RocketNews

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned bank executives for a meeting this week where they encouraged the executives to use Anthropic's new Mythos model to detect vulnerabilities, according to Bloomberg. Indeed, while JPMorgan Chase was the only bank listed as one of the initial partner organizations with access to the model, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are reportedly testing Mythos as well. Anthropic announced the model this week but said it would be limiting access for now, in part because Mythos -- despite not being trained specifically for cybersecurity -- is too good at finding security vulnerabilities. (Others suggested this was hype or simply a smart enterprise sales strategy.) The report is particularly surprising since Anthropic is currently battling the Trump administration in court over the Department of Defense's designation of Anthropic as a supply-chain risk; that designation came after negotiations fell apart over the company's efforts to limit how its AI models can be used by the government. Meanwhile, the Financial Times reports that U.K. financial regulators are also discussing the risk posed by Mythos.

Anthropic
RocketNews · 13d ago

Chaos at airport as 105 passengers stranded after flight leaves without them

"We're being told that Tuesday is the earliest we can get back, and that we have to fly to Gatwick. We've had to pay out of pocket for an Airbnb." Vicky and her family were among 105 passengers left behind when the Manchester-bound flight eventually departed. "There were only about 30 people got on the plane, and about 100 people didn't."

Adam Lomas, 33, from Wakefield, who was travelling with his wife Katy and their four-month-old daughter, said some passengers had booked hotels while others travelled to different airports, including Pisa, in a bid to get home. He said: "We are trying to find a hotel and we are going to have to book a flight to London and then get from London to Manchester because our daughter's babyseat is there. "The airport and easyJet have spent hours arguing with each other about who is to blame."

The disruption comes just days after the European Union introduced its new Entry/Exit System (EES), which requires some travellers entering or leaving the Schengen area to provide biometric details including fingerprints and photographs. The UK Government has warned travellers that the new system could cause significantly longer waits at border control. Foreign Office advice states: "EES may take each passenger extra time to complete so be prepared to wait longer than usual at the border."

In a statement, easyJet said: "We have been doing all possible to minimise the impact of the airport queues, holding flights to allow customers extra time and providing free flight transfers for any customers who may have missed their flight including EJU5420 to Manchester. We continue to urge border authorities to ensure they make full and effective use of the permitted flexibilities for as long as needed while EES is implemented, to avoid these unacceptable border delays for our customers. While this is outside of our control, we are sorry for any inconvenience caused."

CHAOS
EXPRESS · 13d ago