News & Updates

The latest news and updates from companies in the WLTH portfolio.

Gachagua accuses police of 'pretending to act' after Southern Bypass chaos

Former DP Rigathi Gachagua has accused police of delayed response during Kikuyu chaos, claiming officers arrived after residents repelled suspected gangs.

NAIROBI, Kenya, Apr 11 -- Former Deputy President Rigathi Gachagua has criticised the National Police Service, accusing officers of arriving late and "pretending to act" after residents had already repelled suspected attackers and restored calm following early morning chaos in Kikuyu.

In a statement on Saturday, Gachagua alleged that police failed to respond as gangs blocked roads and attacked motorists along Nairobi's Southern Bypass, leaving citizens to defend themselves and secure the area. He claimed that criminal elements terrorised motorists for about two hours without police intervention. "The police only came at 9.30am pretending to act after citizens drove the goons out and restored order and normalcy," he said.

Traffic along the busy highway was temporarily paralysed after suspected gangs barricaded sections near Thogoto and Gitaru using rocks, bonfires, and debris, triggering panic among motorists and commuters.

Mugging and destruction

Witnesses reported incidents of mugging, destruction of property, and attacks on both pedestrians and motorists beginning as early as 6am, with several victims sharing images of damaged vehicles on social media. "We don't have a government here... from 6am there's been no police response. This is organized crime," one motorist said, describing the ordeal. Another victim recounted a similar incident involving family members, while others reported tyre damage and forced detours as the road became impassable.

Gachagua linked the violence to what he described as a broader pattern of politically motivated "goonism," accusing President William Ruto's administration of enabling criminal gangs to suppress dissent. "As well captured by the two leading dailies, goonism has become the operational module for Mr. William Ruto's administration," he said.
He further alleged collusion between elements within the police service and criminal groups, claiming that gangs were deployed to disrupt his planned political rally in Kikuyu constituency. "Goons are now part of the National Police Service and work alongside the police in causing chaos and unleashing violence against innocent Kenyans," he said, without providing evidence.

'Haiti benchmarking'

The former Deputy President also referenced Kenya's security mission to Haiti, controversially claiming it served as a "benchmarking expedition" for police to learn how to work with criminal gangs -- an assertion not independently verified. He directly appealed to Inspector General Douglas Kanja to ensure police neutrality and prevent interference with his rally. "It is a shame that the police have abdicated their duties, forcing citizens to step in," he added.

Police later moved in to disperse the suspected gangs and reopen the highway, restoring traffic flow after several hours of disruption. Security agencies subsequently intensified patrols in Kikuyu and surrounding areas ahead of the planned political gathering. Calm was later reported in Kikuyu town under heightened police presence.

The incident comes amid escalating political tensions between Gachagua and Kikuyu Member of Parliament Kimani Ichung'wah, who earlier dismissed claims of planned chaos as baseless and politically motivated. Ichung'wah urged security agencies to remain vigilant and continue protecting residents and businesses, warning against attempts to incite unrest. Earlier, Gachagua had written to police requesting enhanced security for his rally, alleging a plot to block roads and deploy individuals to trigger violence.

By Saturday afternoon, residents and traders had resumed normal activities, though some continued to guard businesses amid fears of renewed unrest.
Police had not issued an official statement on the incident by the time of publication.

CHAOS
Capital FM Kenya, 12d ago

Top US banks warned about new Anthropic AI tool

The leaders of some of America's largest banks were warned by a top US government official this week about a new artificial intelligence model from Anthropic that could lead to heightened risks of cyberattacks, according to three people briefed on the matter. Treasury Secretary Scott Bessent delivered the stark message on Tuesday morning to a small group of CEOs, including those from Bank of America, Citi and Wells Fargo, in a hastily arranged meeting in Washington, DC.

Bessent cautioned the banks that allowing the new AI software to run on their internal computer systems could pose a serious risk to sensitive customer data, the sources said on condition of anonymity, as they were not authorised to discuss the issue publicly. Federal Reserve chairman Jerome Powell, who has spoken publicly in recent weeks about the threat of cyberattacks against the financial system, also attended the meeting.

The warnings relate to a new artificial intelligence model that Anthropic has named Claude Mythos Preview. The company has said the model is particularly good at identifying security vulnerabilities in software that human developers could not find. At Tuesday's meeting, the people briefed on the matter said, the bank executives were told that the new model might be so effective at finding security weaknesses inside banks that hackers or other third-party bad actors could get their hands on the information and exploit it.

Anthropic itself has warned about the risks. The company said this week that the model's advancements were so powerful and potentially dangerous that they could not safely be released to the public yet and would instead be contained to a coalition of 40 companies that it called "Project Glasswing". That group includes at least one bank, JPMorgan Chase, the nation's largest, which earlier said it would use the software "to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure".
Jamie Dimon, the CEO of JPMorgan, was invited to Tuesday's briefing but skipped it for previously arranged travel plans, according to a person familiar with the matter.

The Trump administration and Anthropic are locked in a legal battle over the recent move by the Department of Defense to designate the company a "supply chain risk". The government issued that designation after Anthropic insisted on putting limits on the use of its AI technology in war.

In a statement, a Treasury spokesperson said, "This week's meeting was convened by Secretary Bessent to initiate a process for planning and coordination of our approach to the rapid developments taking place in AI." The existence of the meeting was reported earlier by Bloomberg News. The Federal Reserve declined to comment.

"We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," Kevin Hassett, director of the National Economic Council, told Fox News on Friday. "There's definitely a sense of urgency."

Logan Graham, an Anthropic executive, said in a statement that the new technology would help "secure infrastructure that is critical for global security and economic stability".

Anthropic
Bangkok Post, 12d ago

CoreWeave shares surge 10% following multiyear Anthropic cloud deal

The competitive "AI chip crunch" has led OpenAI to partner with Broadcom for 10GW of custom chips, while Microsoft recently debuted its Maia 200 processor in January 2026. The race for artificial intelligence supremacy has shifted from model development to infrastructure security.

On Friday, April 10, 2026, CoreWeave, the AI-focused cloud provider that went public in 2025, announced a strategic agreement to power Anthropic's next generation of Claude models. The deal validates CoreWeave's "specialized cloud" model as an alternative to hyperscalers, sending its stock to a daily high of $105.90 before settling at $102.00.

This partnership follows a massive week for the AI hardware sector. Just 24 hours prior, Meta committed an additional $21 billion to CoreWeave, bringing their total partnership value to roughly $35 billion. As the demand for compute remains insatiable, companies are diversifying their hardware stacks. Anthropic now utilizes a blend of AWS Trainium, Google TPUs, and Nvidia GPUs via CoreWeave, while simultaneously working with Broadcom to develop the "multi-gigawatt" infrastructure necessary to keep pace with a revenue run-rate that has reportedly surpassed $30 billion.

The Issues

The primary challenge for the industry is the escalating cost of compute and the "gigawatt gap." AI firms must solve the problem of hardware dependency on Nvidia; hence the aggressive pivot toward custom silicon (OpenAI's 10GW Broadcom deal and Meta's MTIA 400). However, building custom chips is a multi-year endeavor, leaving a supply-demand mismatch in the interim. Furthermore, CoreWeave faces the challenge of financing its rapid expansion, having recently increased its convertible note offering to $3.5 billion to fund the construction of the 32 data centers required to fulfill its $66 billion contract backlog.

CoreWeave has positioned itself as the "essential utility" of the AI era. By locking in both Anthropic and Meta within a single week, it has consolidated a significant portion of the world's most valuable AI workloads, even as those giants move toward building their own custom processors.

Anthropic
BizWatchNigeria.Ng, 12d ago

Anthropic's AI found thousands of flaws in every major OS and browser -- and cybercrime losses just hit $21 billion

As technology grows in power, cybercrime becomes a bigger and costlier problem -- especially as agentic AI becomes more and more capable. Last year, Anthropic said that hackers used its Claude code tool (1) to attempt to infiltrate around 30 targets, including financial institutions and government agencies. Anthropic says this was the first time a large-scale cyberattack was carried out "without substantial human intervention."

On April 7, Anthropic announced an initiative to help protect against bad actors: Project Glasswing (2). Anthropic says that Project Glasswing was formed because its new Claude model, Claude Mythos, "has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser." Anthropic says that it and its Project Glasswing partners, which include Apple, Google, JPMorganChase, and Microsoft, hope to use Claude Mythos to identify these vulnerabilities and fix them before other AI models can exploit them. But the model could also be used to exploit the same vulnerabilities Anthropic is trying to shore up, causing heavy economic losses. Here's what to know about AI's role in cybercrime, as well as the impact it can have on the world economy.

The economic impacts of AI cyberattacks

According to the 2025 Internet Crime Report from the Federal Bureau of Investigation (3), yearly cybercrime complaints have gone up from a little under 800,000 to over 1,000,000 since 2020. In that time, yearly losses from cybercrime have gone from $4.2 billion to almost $21 billion, quintupling in the span of five years.
Those numbers only represent the direct losses reported to the FBI. Total costs to businesses are likely orders of magnitude higher, although the World Bank says it's difficult to give accurate estimates (4) because of the indirect costs of cybercrime.

In 2025, the FBI first started tracking how many of its complaints mentioned AI. At the time, about 2% of complaints were AI-related (3). Those AI-related complaints accounted for around 4% of reported losses -- almost $900 million. Those numbers could go up in the near future: if, as Anthropic has said, AI is able to exploit OS and browser vulnerabilities that even highly experienced hackers aren't able to find, then the barrier to entry for cybercriminals becomes much lower.
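The magnitudes in the FBI figures cited above can be sanity-checked with a few lines of arithmetic (the dollar amounts are the reported totals from the Internet Crime Report, not estimates of total economic cost):

```python
# Sanity check of the FBI Internet Crime Report figures cited above.
losses_2020 = 4.2e9        # reported cybercrime losses in 2020 (USD)
losses_2025 = 21e9         # reported cybercrime losses in 2025 (USD)
ai_share_of_losses = 0.04  # ~4% of 2025 reported losses were AI-related

growth = losses_2025 / losses_2020
print(f"losses grew {growth:.1f}x over five years")  # 5.0x, i.e. quintupled

ai_losses = ai_share_of_losses * losses_2025
print(f"AI-related losses: ${ai_losses / 1e9:.2f} billion")  # ~$0.84 billion
```

The 4% share of a $21 billion total works out to roughly $840 million, consistent with the "almost $900 million" figure in the report.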

Anthropic
Yahoo! Finance, 12d ago

Holiday chaos warning as jet fuel shortages threaten Europe flights

Holidaymakers are facing growing uncertainty over their summer plans amid fears of potential jet fuel shortages. The crisis, triggered after Donald Trump ordered strikes on Iran which led to the closure of the Strait of Hormuz, could soon disrupt flights across Europe.

Aviation leaders have warned that fuel shortages may "become a reality" in the EU if the key shipping route does not reopen soon -- bringing the impact of the conflict much closer to home. A fragile ceasefire is currently in place, but reports suggest vessels are still unable to pass through the Strait, raising concerns about ongoing disruption to global fuel supplies.

Airports Council International Europe said "the impact of military activity on demand" was already being felt, warning that fuel issues could "significantly harm the European economy". There is now mounting uncertainty about what lies ahead for the aviation sector.

In a memo seen by the Financial Times, EU transport commissioner Apostolos Tzitzikostas warned there are "increasing concerns of the airport industry over the availability of jet fuel as well as the need for proactive EU monitoring and action". "If the passage through the Strait of Hormuz does not resume in any significant and stable way within the next three weeks, systemic jet fuel shortage is set to become a reality for the EU," the note reads.

Air travel was already hit in the early stages of the conflict, with dozens of airlines cancelling flights across southwest Asia and leaving tens of thousands stranded. While holidays to other regions have so far avoided major disruption, there are growing signs the wider fallout could start to bite. Countries around the world are already taking steps to conserve energy, including shortening workweeks and limiting evening activity.

Budget airlines operating across Europe have also signalled concern about what may come next. A Ryanair spokesperson said: "We don't expect any near-term fuel shortages, but the situation is fluid. At present our fuel suppliers can guarantee supply to mid-end May. If the Iran war finishes soon then supply will not be disrupted. If the closure of the Hormuz Straits continues into May or June then we cannot rule out risks to fuel supplies at some airports in Europe."

Attention is now turning to peace talks expected to take place this weekend in Pakistan, where US and Iranian officials are due to meet in a bid to de-escalate the crisis. Israel's continued bombing of Lebanon -- which has killed hundreds of civilians despite the ceasefire -- is expected to be a key issue in discussions. However, on Friday, Trump said US forces were rearming in preparation for the possibility of further hostilities, raising fresh doubts over how quickly the situation can be resolved.

CHAOS
Yahoo, 12d ago

Polymarket Prepares pUSD Rollout and Protocol Upgrade to Cut Failed Trades

The overhaul is designed to reduce failed trades, lower gas costs and improve order management across the platform.

Polymarket is preparing a protocol upgrade that looks less cosmetic than structural, with a new collateral token and a redesigned trading architecture aimed at fixing some of the platform's more persistent friction points. According to the company's documentation, the upgrade will introduce Polymarket USD, or pUSD, an ERC-20 token on Polygon that is fully backed by USDC.

In practical terms, pUSD will function as the technical representation of a user's balance inside the platform. When users deposit USDC, that balance appears in pUSD form on Polymarket, and it can be swapped back into USDC on withdrawal. For most users, the front-end experience is not supposed to change much. Funds go in, a balance appears, trades are placed, and money can be withdrawn. The difference sits underneath, in the settlement layer. Polymarket says the protocol will continue settling trading activity in native USDC, while pUSD acts as the collateral token within the platform.

The company is clearly trying to make the system more capital efficient and easier to scale without turning the user flow into something more complicated. It also took care to frame pUSD conservatively. The token is described as a standard ERC-20 on Polygon, backed by USDC through smart contract-enforced withdrawal mechanics, with no algorithmic peg and no fractional reserve structure.

The more important upgrade may be the architecture around trading itself. Polymarket says the new CTFv2 and updated order book design are intended to reduce nonce-related failures, balance-check race conditions and other edge-case problems that have caused failed trades. Fees will now be calculated at match time instead of order placement, while order tracking will move to a timestamp-plus-signature model rather than relying on onchain nonces. The company also says gas costs should fall because the new contracts use more efficient libraries.

Security is being emphasized too. The CTFv2 contracts have been audited by Cantina and Quantstamp, and Polymarket says it plans to open-source the smart contracts next week alongside a bug bounty program. That suggests the company understands the upgrade will be judged less by the launch message and more by whether the new system holds up cleanly once it is live.
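The fully-backed collateral model described above can be illustrated with a toy sketch (Python standing in for the on-chain contract; the class and method names here are hypothetical, not Polymarket's actual interfaces). The key property is that every unit of the wrapper token is minted against a deposit of the reserve asset and burned on withdrawal, so total supply never exceeds the reserve:

```python
# Toy model of a fully-backed wrapper token in the style described for pUSD:
# no algorithmic peg, no fractional reserve -- redemption is enforced by the
# contract's own accounting. All names are illustrative.

class WrapperToken:
    """Simplified sketch of an ERC-20-style token fully backed by a reserve asset."""

    def __init__(self):
        self.reserve_usdc = 0  # reserve asset held by the contract
        self.balances = {}     # wrapper-token balances per address

    def deposit(self, addr, usdc_amount):
        # Depositing USDC mints an equal amount of the wrapper token.
        self.reserve_usdc += usdc_amount
        self.balances[addr] = self.balances.get(addr, 0) + usdc_amount

    def withdraw(self, addr, amount):
        # Withdrawing burns the wrapper token and releases the same amount of USDC.
        if self.balances.get(addr, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[addr] -= amount
        self.reserve_usdc -= amount
        return amount  # USDC returned to the user

    def is_fully_backed(self):
        # Invariant: total wrapper-token supply exactly matches the USDC reserve.
        return sum(self.balances.values()) == self.reserve_usdc
```

This is what distinguishes the design from an algorithmic peg: the 1:1 redemption path is guaranteed by the mint/burn accounting rather than by market incentives.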

Polymarket
Crypto News Flash, 12d ago

Peru's election: A battle for the Presidency amid political chaos and crime

LIMA, Peru -- Even by the chaotic standards of Peru's recent politics, this Sunday's election has the potential to confuse and frustrate the Andean nation's 27 million voters. A record 35 candidates are running for president; the winner will become the country's ninth leader in nearly as many years, reflecting deep political instability. Voters will face a jumbo-sized ballot featuring candidates' photos and party symbols, a longstanding practice in a society historically marked by low literacy levels.

Many of the candidates are unknowns barely registering one percent of support. But, amid widespread fury with the entire political class, even the handful of candidates with established profiles are failing to gather momentum. That means that a run-off election in June between the top two candidates is all but inevitable.

Leading the pack, but only just, is Keiko Fujimori, the daughter of the late, disgraced 1990s strongman Alberto Fujimori. She has been walking a tightrope between cloaking herself in her father's legacy of crushing hyperinflation and the Shining Path -- Maoist insurgents who once killed roughly 30,000 Peruvians -- while also distancing herself from his serious human rights abuses and kleptocracy. Yet although she consistently polls around 10%, that figure may be both her electoral floor and ceiling, with many Peruvians blaming her and her party for their nation's ongoing political turmoil.

It began in 2016, when Keiko, as she is known here, lost the presidential contest but her Popular Force party won a majority of seats in congress, ushering in a decade of instability, including impeachments of multiple ministers and presidents. One recent survey found that 54% of Peruvians said they wouldn't vote for her under any circumstances. Despite this, she is still likely to reach a fourth consecutive run-off -- having done so in 2011, 2016 and 2021 -- though she could again be defeated at that final stage.
Behind her is a pack of half-a-dozen other candidates, all in the mid-to-high single digits, any one of whom might, with a small, late surge, make it into the run-off. Prominent among them is Rafael López Aliaga, the ultra-conservative former mayor of Lima, who is sometimes dubbed "the Peruvian Trump". He has already been making unsubstantiated claims of imminent electoral "fraud" and issuing death threats against the head of ONPE, Peru's electoral agency.

The field also features Carlos Álvarez, a Fujimori ally better known for parodying politicians than for offering policy -- something underscored by his difficulty answering basic questions in recent debates. Then there is Ricardo Belmont, an octogenarian left-wing populist whose long career has been marked by repeated sexist, homophobic and xenophobic remarks.

Polls show that Peruvians overwhelmingly want fresh blood in their politics, meaning candidates without links to the current congress, which has passed multiple laws allegedly favoring organized crime and has a disapproval rating near 90%. "It is based on the certainty that high-level corruption has fueled a decade of political instability, and that a tacit alliance of political leaders bent on impunity and state plunder has cleared the way for organized crime to flourish in the streets," says Samuel Rotta, who heads the anti-corruption group Accion Civica, explaining citizens' disgust at the political class.

That is no surprise in a society gripped by an extortion epidemic, with a record homicide rate, and where the number of Peruvians suffering food insecurity doubled from 25% before the pandemic to 51% now, according to the World Food Programme. On Sunday, Peruvians will have the opportunity to change course. But with a crowded field of candidates all struggling to break out of single digits, a run-off election is almost certain.

CHAOS
KOSU, 12d ago


Anthropic Model Sparks Fed-Wall Street Alarm Over AI Cyber Risk

Banks and regulators warn advanced AI could trigger cyberattacks and disrupt financial stability.

* U.S. regulators and bank CEOs discuss AI-driven cyber risks to finance
* Jerome Powell and Scott Bessent meet major bank executives
* Anthropic restricts Mythos model over misuse concerns
* Markets react as AI cyber threats raise systemic financial stability risks

Federal regulators and top Wall Street executives held urgent discussions this week over the potential cyber risks posed by a new artificial intelligence model developed by Anthropic, highlighting growing concerns that advanced AI systems could disrupt the stability of the financial system. Jerome Powell and Scott Bessent met with chief executives of major U.S. banks to assess the implications of Anthropic's newly introduced Mythos model. The meeting underscores how AI-driven cyber capabilities are increasingly being treated as systemic risks, with potential consequences for banking operations, financial security, and global markets.

The discussions come as Anthropic has rolled out its Claude Mythos Preview model in a limited capacity, citing concerns that its advanced capabilities could be exploited by malicious actors. Restricting access reflects the high-risk nature of such systems, where misuse could lead to large-scale cyberattacks, financial disruptions, or data breaches affecting both institutions and consumers.

AI Cyber Capabilities Trigger System-Level Concerns

The urgency of the meeting signals that AI is no longer viewed solely as a productivity tool but as a potential threat vector. Advanced models capable of automating cyber operations can significantly lower the barrier for executing sophisticated attacks, increasing the scale and frequency of risks facing financial institutions. Major bank leaders, including executives from Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo, attended the meeting, reflecting the breadth of concern across the financial sector.
The absence of even a single large institution from such discussions could undermine the kind of coordinated response needed to manage systemic threats.

Anthropic has been working with partners including Apple, Google, Microsoft, and Nvidia under a cybersecurity initiative known as Project Glasswing. These collaborations aim to strengthen defensive capabilities, but they also underscore the dual-use nature of AI, where the same tools can be applied for both protection and attack. Dario Amodei said in a post on X: "The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities."

Government And Industry Alignment Still Evolving

The Trump administration's engagement with Anthropic reflects growing recognition that AI governance is becoming a national security issue. The company has held ongoing discussions with agencies including the Cybersecurity and Infrastructure Security Agency, as policymakers attempt to understand and regulate emerging risks. At the same time, tensions remain between Anthropic and the U.S. Department of Defense, which recently labeled the company a supply chain risk. Legal challenges are ongoing, with conflicting court rulings allowing Anthropic to continue working with some government agencies while remaining restricted from defense contracts.

This regulatory uncertainty can influence how quickly AI technologies are adopted across sectors. Delays or restrictions may slow innovation but are also aimed at preventing misuse that could lead to large-scale economic or security consequences.

Financial Markets React To Emerging Risks

The release of the Mythos model has already had visible effects in financial markets. Reports of its advanced cyber capabilities contributed to a decline in cybersecurity stocks. Such reactions show how sensitive markets are to developments in AI.
Perceived increases in cyber risk can shift capital flows, affect valuations, and influence investment strategies across technology and financial sectors. Previous incidents have demonstrated the risks associated with AI misuse: Anthropic disclosed that its models had been used in past cyber operations, including automated attacks targeting government and corporate systems.

The meeting between regulators and bank executives signals a shift in how AI is being integrated into risk assessment frameworks. Rather than focusing solely on innovation, institutions are increasingly preparing for scenarios where AI could destabilize critical systems. The situation reflects the dual nature of artificial intelligence as both a tool for progress and a potential source of systemic risk, with policymakers and industry leaders working to balance innovation with safeguards as the technology continues to evolve.

Anthropic
International Business Times, Singapore Edition, 12d ago

Anthropic blocks OpenClaw founder from accessing Claude

Anthropic, a leading artificial intelligence (AI) company, temporarily suspended Peter Steinberger, the founder of the AI agent platform OpenClaw, over "suspicious" activity on his account. Steinberger shared an image of the notice he received from Anthropic regarding the suspension of his access to Claude, the company's AI model. The email from Anthropic stated, "An internal investigation of suspicious signals associated with your account indicates a violation of our Usage Policy. As a result, we have revoked your access to Claude."

However, the ban was short-lived: Steinberger announced a few hours later that his account had been reinstated, and thanked everyone for their support.

While Anthropic hasn't officially explained why it suspended Steinberger's account, he hinted that his work on a Claude feature could have triggered the action. He said he was trying to get the "claude -p fallback feature" working after Boris confirmed it was a classifier bug and not intentional. Despite being blocked, Steinberger continued his efforts, which may have contributed to his temporary ban.

Earlier this week, Anthropic announced its decision to ban third-party AI agent platforms like OpenClaw from using Claude, citing the massive infrastructure burden these platforms were placing on its systems. Despite the ban, Steinberger has continued to work on improving OpenClaw and often seeks user feedback on how to enhance the platform's performance.

Anthropic
NewsBytes · 12d ago

Anthropic brings Claude AI to Microsoft Word so you can chat with your documents | Mint

For enterprise users, administrators need to deploy the tool across their organisation via the Microsoft 365 Admin Centre.

Anthropic
mint · 12d ago

Anthropic's AI Model Mythos Sends Cybersecurity Stocks Into Freefall

Shares of numerous cybersecurity and software companies came under significant pressure on Friday. The trigger was the unveiling of a new AI model by US developer Anthropic called Claude Mythos Preview, which appears capable of autonomously detecting and exploiting security vulnerabilities in software. Investors fear that the system could fundamentally call into question established business models across the industry. The S&P 500 index for software and services lost around 1.6 percent on Friday and is now down nearly a quarter since the start of the year. A closely watched Goldman Sachs basket of US software stocks fell by approximately five percent on Friday. Specialized cybersecurity providers were hit particularly hard: Cloudflare lost around 14 percent, Akamai more than 16 percent, and Okta nearly eight percent. Palo Alto Networks (-5.2%), Fortinet (-4.4%), Zscaler (-4.1%), CrowdStrike (-3.2%), and Cisco (-2.0%) also came under pressure. As the Financial Times reports, US Treasury Secretary Scott Bessent and Fed Chair Jay Powell had already convened representatives of several major banks earlier in the week to discuss the cyber risks posed by Mythos. According to the report, the Bank of Canada also held a meeting with financial regulators and major banks at which the new model was discussed. Unlike earlier models, Anthropic is explicitly not making Claude Mythos Preview available to the general public. This decision is justified by the risks that became apparent during internal testing. According to the company, the model is capable of identifying so-called zero-day vulnerabilities -- previously unknown security flaws -- in an almost fully autonomous manner and of developing corresponding exploits. In doing so, it is said to surpass the capabilities of all but the very best human security experts. In internal tests, Mythos uncovered thousands of critical vulnerabilities in common operating systems and web browsers. 
Anthropic made three examples public: a 27-year-old flaw in OpenBSD, an operating system considered particularly robust for firewalls and critical infrastructure, which allowed affected systems to be crashed solely via a network connection; a 16-year-old vulnerability in the widely used video library FFmpeg, which automated testing tools had previously executed five million times without detecting the bug; and several chained vulnerabilities in the Linux kernel that could allow an attacker to escalate from ordinary user privileges to full system control. All three vulnerabilities were reported to the respective developers and have since been patched. In order to make the model's capabilities usable at least defensively, Anthropic has launched the initiative "Project Glasswing." Through it, selected partners gain access to Mythos to audit their own and open-source systems for vulnerabilities. Founding partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, among others. More than 40 additional organizations operating critical software infrastructure are also set to receive access. Anthropic is providing usage credits of up to 100 million US dollars for this purpose and is additionally awarding four million US dollars to open-source security organizations.

Anthropic
Trending Topics · 12d ago

Anthropic's new Mythos AI tool signals a new era for cyber risks and responses

When Anthropic detected last September that someone was using its artificial intelligence software in a highly sophisticated spy campaign, the company began investigating. What stood out about this cyberattack was how much the hackers, who Anthropic says were probably Chinese-sponsored, relied on AI. Rather than advising the attackers, the company discovered, the AI technology actually carried out much of the attack itself. Fast-forward to this past week, when the company said AI had made another huge leap in its cyberattack capabilities. The most advanced model to date, Claude Mythos Preview, not only had found thousands of severe vulnerabilities in common operating systems that humans had missed, but also had devised sophisticated ways to exploit those gaps. The software was so powerful, the San Francisco-based company said, that it would not release it publicly, but rather, for the moment, would make it available to a newly formed consortium of some 40 key tech companies that could fix the vulnerabilities Mythos found. In short, with AI, the long-standing arms race between hackers and cybersecurity firms is going nuclear. If what Anthropic has claimed about Mythos is true, then the race will be faster, more sophisticated, and bigger than ever before. "This is kind of the beginning of the full-scale reckoning of the cyber risk posed by AIs," says Mantas Mazeika, research scientist at the Center for AI Safety, a nonprofit that advocates for standards to manage risks like misinformation, weaponization, and existential threats. The twist is that this time, it's the cybersecurity community that might have gained a step on the hackers. "I view this as an opportunity to get ahead of the bad guys," says V.S. Subrahmanian, a computer scientist at Northwestern University. "We have this capability now to identify the vulnerabilities that might exist in a system." Anthropic built Mythos as a cutting-edge, general-purpose AI model. 
But what Anthropic found was that it had made a big leap in its ability to detect software bugs and, more importantly, how to use those bugs, sometimes in tandem, to attack systems. The company claims it found severe vulnerabilities in every major operating system and web browser, some of which had gone undetected for years. For example: "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," the company warned in its introduction of Project Glasswing. The consortium is using Mythos to find and fix key flaws in its own software and systems. Part of Mythos' advantage over humans is the speed with which it can operate. To find software bugs, most major technology companies follow a cycle. They hire professionals who find a vulnerability in the system and figure out how to exploit it. Then those professionals alert the company, which figures out how to "patch" it. Typically, that process takes months. "What we're basically seeing these AI systems do now - if everything that they are saying in this announcement is accurate - is that time is compressed significantly," says Allie Mellen, an AI security operations analyst in Boston. "The time between anyone - not just a white-hat hacker, but also a black-hat hacker, or a nation-state or a cyber criminal gang - being able to identify and exploit those vulnerabilities is incredibly small." That kind of speed means small companies are most at risk because they don't have the resources that big companies do to spend what's needed to fix flaws in their systems. "Is this a manageable threat? Not with the current software security practices that we have," says Katie Moussouris, founder of Luta Security, a cybersecurity firm in Seattle. "My hope is that this will galvanize as much innovation on the AI defense end as it has on the AI offense end," she says. 
"We do need to match that energy, or we are not going to be prepared for the tsunami of bugs and patches that are going to be coming out in the next year." Anthropic says it will not widely release this version to the public, in an effort to keep it out of the hands of hackers. Dr. Mellen calls Anthropic's approach a "very positive step," and exactly what's needed in the short term. Down the road, though, "it's a different conversation," she says. "We need to rethink the way that we are approaching the patching process and system." In her view, the solution is two-sided: (1) finding the vulnerabilities in existing software and (2) setting up processes for developing new software. That might mean using AI technologies like Mythos to spot vulnerabilities in advance, so new software is developed to be more hacker-resistant. On the political end, several experts say a first step would be a dialogue among AI firms, cybersecurity companies, and industry and government officials. AI technology is moving so fast, however, that there's only a tight window to act or make revisions before AI's capabilities spread beyond Anthropic's latest development. The company's CEO, Dario Amodei, has said competitors are only six to 18 months behind. Some say China and others may be able to match Mythos' capabilities sooner - perhaps in just a few months. "Chinese cyber capabilities are formidable and impressive, and they have probably hacked Anthropic long back," says Dr. Subrahmanian of Northwestern. "I would suspect they have it already or have the ability to get it very soon."

Anthropic
The Christian Science Monitor · 12d ago

After Artemis II, NASA looks to SpaceX, Blue Origin for Moon landings

With Artemis II successfully completing its historic lunar mission on Friday, NASA is banking on billionaires Jeff Bezos and Elon Musk for the next step: landing astronauts on the Moon. The Apollo program -- which sent the first and only humans to the Moon's surface between 1969 and 1972 -- was designed so that only two astronauts could land on the lunar surface for a maximum of a few days. More than 50 years later, American ambitions and expertise have grown, with NASA hoping to send four people on a mission lasting several weeks and eventually building a lunar base. For the second phase of its mission, the space agency is looking to commercial landers designed by Musk's SpaceX and Bezos's Blue Origin to get its astronauts on the Moon. After Artemis II splashed down in the Pacific Ocean on Friday after its record-breaking journey, NASA officials urged all hands on deck for a crewed landing in 2028. "We need all of industry to work and come along with us, and they need to accept that challenge and come with us and really start the production lines that are going to be required in order to achieve that goal," Lori Glaze, the acting associate NASA administrator, told a press conference. The Apollo program relied on a single rocket, the Saturn V, which carried both the lunar lander and the capsule carrying the astronauts. NASA has opted for two separate systems for Artemis: the first to launch the Orion spacecraft carrying the crew from Earth, and another to launch the lunar lander, which will be privately contracted. The decision was driven by the technical limitations of the Apollo program, Kent Chojnacki, a senior NASA official in charge of lunar lander development, told AFP. "It was very not expandable to long-term exploration and long-term stays," he explained. Although spectacular, the Apollo missions were like "camping trips," said Jack Kiraly, director of government relations at the Planetary Society, which encourages space exploration. 
The systems NASA is looking at now are "huge compared to Apollo," said Chojnacki, noting that the new lunar landers being developed by Blue Origin and SpaceX are two to seven times larger than before. The space agency is also drawing on external partners, such as the European companies that built the propulsion module for Orion. The new approach opens access to more equipment and resources, but also significantly complicates operations. To send these giant spacecraft to the Moon, the private space exploration companies will need to master in-flight refueling, a complex maneuver that has not yet been fully tested. After the lunar lander is launched, additional rockets will be needed to deliver the fuel required for the journey to the Moon, some 250,000 miles (400,000 kilometers) from Earth. Given this risky undertaking and the numerous delays -- particularly those experienced by SpaceX, which was supposed to have its lander ready first -- pressure has mounted in recent months. "We are once again about to lose the Moon," three former NASA officials warned in an article in SpaceNews last September. China, which is hoping to send humans to the Moon by 2030, has been making progress as well, raising fears in the Trump administration that the United States could get left behind. With that in mind, NASA raised the possibility last fall of reopening the contract awarded to SpaceX and using Blue Origin's lunar lander first, sending shockwaves through the rival companies. Both firms announced they were realigning their strategies to prioritize the lunar project -- and keep their lucrative contracts with NASA. But concerns remain, particularly regarding the feasibility of in-orbit refueling. "We do have a plan," Chojnacki said, noting that NASA has a back-up plan in case of failure. The timeline is also up in the air. NASA says it plans to test an in-orbit rendezvous between the spacecraft and one or two lunar landers in 2027, and carry out a crewed lunar landing in 2028.
Before that, companies will need to test in-orbit refueling and send an unmanned lunar lander to the Moon to demonstrate its safety. That all needs to happen within the next two years. "It feels like a very small amount of time," said Clayton Swope of the Center for Strategic and International Studies.

SpaceX
Vanguard · 12d ago

How did Anthropic's Mythos raise cybersecurity concerns?

Anthropic withheld its Mythos model amid cyber risk worries

Anthropic's new AI model, Mythos, became a focus for cybersecurity officials and financial-sector leaders after the company said it was withholding the model from broad release due to concerns about how it could be used. The coverage links Mythos to the potential for exposing vulnerabilities, describing it as powerful enough to reveal weaknesses in software. That capability raises the risk that attackers could accelerate discovery of exploitable flaws or use the model as a force multiplier. In addition, financial authorities were portrayed as alarmed by the speed with which powerful AI could move from research to real-world misuse: bank executives and senior officials were described as being brought together for warnings about AI-related cyberthreats. For the U.S. and global economy, the issue isn't only whether Mythos is "safe" in a lab setting; it's whether rapid deployment of advanced models could increase the overall threat environment for banks, critical infrastructure, and software systems. The central theme across the stories is that Mythos's capabilities triggered a preemptive decision to slow or restrict public access while the cybersecurity implications are assessed.

Anthropic
AllToc · 12d ago

Want to Own SpaceX Stock Before Its Blockbuster IPO? Here Are 3 Ways Investors Can Buy Right Now.

Alphabet owns roughly a 7% stake in SpaceX, and its diversified business makes it the least risky way to get pre-IPO exposure to the company. In April, SpaceX confidentially filed initial public offering (IPO) paperwork with the Securities and Exchange Commission (SEC). The company will host its IPO roadshow in June, where executives will pitch the stock to money managers. Shares will likely start trading on the public market by July. The IPO promises to be a blockbuster event that draws particularly heavy demand from retail investors. SpaceX is reportedly seeking a $1.75 trillion valuation, which would immediately make it one of the 10 most valuable public companies in the world. Additionally, CEO Elon Musk hopes to raise $75 billion, more than double the current record for an IPO. For investors who cannot wait until SpaceX goes public, there are ways to get exposure to the rocket maker today. I will discuss three, starting with the most risky and ending with the least risky. The Ark Venture Fund (NASDAQMUTFUND: ARKVX) is an actively managed interval fund that owns stock in 68 public and private equities. It seeks to "democratize venture capital, offering all investors access to what we believe are the most innovative companies." The Ark Venture Fund has returned 147% (28% annually) since its inception in 2022, beating the S&P 500 (SNPINDEX: ^GSPC) by 80 percentage points. Heavy exposure to SpaceX factored meaningfully into those gains, as did heavy exposure to artificial intelligence (AI) start-up OpenAI. However, the Ark Venture Fund is a rather risky way to get pre-IPO exposure to SpaceX for three reasons.
First, its high net expense ratio of 2.9% means shareholders will pay $290 per year on every $10,000 invested in the fund. Second, as an interval fund, investors cannot sell at their discretion; instead, Ark provides liquidity on a quarterly basis by offering to buy shares. Third, the fund is heavily invested in private companies. The Baron Partners Fund Retail Shares (NASDAQMUTFUND: BPTRX) is an actively managed mutual fund that owns stock in about 25 companies, most of which are publicly traded. It seeks "capital appreciation through investments in growth companies of any size with significant long-term potential." The Baron Partners Fund achieved a total return of 741% (23.7% annually) over the past 10 years, outpacing the S&P 500 by more than 450 percentage points. The driving force behind those astronomical gains was heavy exposure to SpaceX and Tesla. Importantly, unlike the Ark Venture Fund, shareholders can sell the Baron Partners Fund at their discretion. However, despite being more liquid, this fund is still fairly risky because it is concentrated in two companies. Also, the Baron Partners Fund has an expense ratio of 2.24%, meaning shareholders will pay $224 per year on every $10,000 invested. In 2015, Google parent Alphabet (NASDAQ: GOOGL) (NASDAQ: GOOG) invested $900 million in SpaceX. The rocket and satellite company was worth approximately $12 billion at the time, which means Alphabet owned a roughly 7.5% stake. That investment has already paid off handsomely. In 2026, SpaceX was valued at $1.25 trillion when it merged with xAI, meaning Alphabet's stake is now worth over $100 billion. Looking ahead, if SpaceX does go public with a $1.75 trillion valuation, Alphabet's stake would climb to more than $120 billion.
Alphabet shareholders would benefit not only because unrealized gains would hit the bottom line as generally accepted accounting principles (GAAP) earnings, but also because SpaceX shares would be more liquid, meaning Alphabet could sell its stake for a substantial amount of cash. Compared to the funds discussed, owning Alphabet stock is a less risky way to get SpaceX exposure before its IPO because Alphabet has a strong presence in three growing markets: advertising, cloud computing, and autonomous driving. Indeed, Wall Street expects the company's earnings to increase at 15% annually over the next three years, which makes the current valuation of 30 times earnings look reasonable.
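The fee and valuation figures above are simple arithmetic and can be sanity-checked in a few lines (a quick sketch using only the numbers quoted in the article; `annual_fee` is an illustrative helper, not any fund's actual API):

```python
def annual_fee(invested: float, expense_ratio: float) -> float:
    """Dollar cost per year of a fund's expense ratio on a given balance."""
    return invested * expense_ratio

# Ark Venture Fund: 2.9% expense ratio on $10,000 -> about $290/year
ark_fee = annual_fee(10_000, 0.029)
# Baron Partners Fund: 2.24% on $10,000 -> about $224/year
baron_fee = annual_fee(10_000, 0.0224)

# Alphabet's implied SpaceX stake: $900M invested at a ~$12B valuation
stake = 900e6 / 12e9            # roughly 7.5%
# At a $1.75T IPO valuation, an undiluted 7.5% stake would be ~$131B;
# the article's "more than $120 billion" suggests dilution toward ~7%.
value_at_ipo = stake * 1.75e12
```

The gap between the undiluted figure and the article's estimate is consistent with later funding rounds shrinking the original stake.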

SpaceX · xAI
NASDAQ Stock Market · 12d ago

AI Cybersecurity Risks: Anthropic and OpenAI Under Regulatory Scrutiny - News Directory 3

The capabilities of the Mythos model have triggered urgent warnings within the global financial sector. Anthropic has announced it will not release its new artificial intelligence model, Mythos, to the general public, citing concerns that the system's capabilities could facilitate catastrophic cyberattacks. The company stated the model is too dangerous to release due to its proficiency in identifying and exploiting software vulnerabilities. Instead of a public launch, Anthropic is limiting access to a small group of major technology companies. According to reporting from Fortune, the company intends to provide these partners, whose software serves as the foundation for various digital services, with early access to give defenders time to strengthen their systems. This effort is associated with Project Glasswing, an initiative focused on securing critical software for the AI era. Federal Reserve Chair Jerome Powell and U.S. Treasury Secretary Scott Bessent have warned bank CEOs about the specific security risks posed by the model, according to Bloomberg. The Bank of England has also raised alarms regarding the threat from the AI system that Anthropic deemed too dangerous for public release, as reported by The Telegraph. These warnings come as policymakers and cybersecurity professionals express concern over the potential for AI-driven exploits to target critical financial infrastructure. Anthropic is not the only AI laboratory developing systems with these capabilities. Axios reports that OpenAI is preparing a new model, internally referred to as Spud, which could match Mythos in its cybersecurity capabilities. OpenAI is also working on an advanced system specifically focused on cybersecurity. The company plans to implement a phased rollout of this system to a small group of partners to ensure that cyber defenders have a head start over potential attackers.
While some analysts suggest these limited release strategies may be intended to create marketing hype around new models, most cybersecurity experts agree that AI-driven capabilities have reached a dangerous tipping point. Experts warn that the threat is not limited to unreleased models. Fortune reports that existing, publicly available AI models are already capable of executing sophisticated cyberattacks within minutes. AI systems are increasingly automating or semi-automating tasks that previously required advanced human expertise. This automation allows attackers who lack high-level technical skills to launch large-scale operations that were previously impossible for non-experts. The current caution follows previous security incidents involving Anthropic's technology. Reuters reports that in 2025, hackers exploited vulnerabilities in Claude AI to attack approximately 30 global organizations. These developments have led AI and cybersecurity professionals to raise concerns during the week of April 6, 2026, regarding the emergence of new national security risks associated with large language models that possess advanced offensive cyber capabilities.

Anthropic
News Directory 3 · 12d ago

Anthropic tools adoption jumped at US businesses?

Financial Times reported new month-over-month adoption figures from Ramp, a spend-management platform that tracks which AI tools U.S. businesses pay for. In March, 30.6% of U.S. businesses paid for Anthropic's tools, up from 24.4% in February. Over the same period, OpenAI's U.S. business adoption stayed nearly flat at about 35%. Ramp's data points to an acceleration in business spending on Anthropic offerings rather than a dramatic decline in OpenAI usage. The share of companies paying for Anthropic-based tools rose by roughly 6 percentage points in a single month, while OpenAI remained the most widely paid-for option but did not show comparable growth in March. This kind of spending-based metric is often seen by investors and operators as closer to "real procurement" than headline user counts. When a platform shows faster movement in paid adoption, it can suggest:

- Broader enterprise workflows are being adopted (not just experimentation).
- Budget reallocation may be occurring, with some organizations adding Anthropic tools alongside existing OpenAI use.

Ramp's snapshot doesn't reveal which specific Anthropic products are driving spend, how large the contracts are, or whether companies are switching providers or simply expanding their tool stacks. Still, the direction of travel is notable: Anthropic's paid footprint appears to be expanding quickly, while OpenAI's remains steady at a higher baseline. In a crowded enterprise AI landscape, that mix -- fast growth by a challenger against a stable market leader -- can shape how vendors price offerings and compete for the next wave of enterprise deployments.
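The month-over-month shift in Ramp's figures can be restated numerically. A minimal sketch, using only the percentages quoted above, that distinguishes the percentage-point change from relative growth:

```python
# Share of U.S. businesses paying for each vendor's tools (Ramp data)
anthropic_feb = 24.4   # percent, February
anthropic_mar = 30.6   # percent, March
openai_mar = 35.0      # percent, roughly flat month-over-month

# "Up roughly 6 percentage points" is the absolute change in share...
pp_change = anthropic_mar - anthropic_feb            # about +6.2 points
# ...which corresponds to roughly 25% relative growth in one month.
relative_growth_pct = (anthropic_mar / anthropic_feb - 1) * 100

print(f"Anthropic paid adoption: +{pp_change:.1f} pp "
      f"({relative_growth_pct:.0f}% relative growth); "
      f"OpenAI steady at about {openai_mar:.0f}%")
```

The distinction matters when reading headlines: a 6-point gain on a 24% base is a much larger relative move than the same 6 points would be on OpenAI's 35% base.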

Anthropic
AllToc · 12d ago

Anthropic's Claude Mythos AI fears trigger $2 trillion wipeout in IT stocks; JPMorgan CEO Jamie Dimon warns 'AI will likely worsen...'

Anthropic's latest AI model, Claude Mythos Preview, has become a major source of concern for Wall Street and US policymakers, triggering a massive $2 trillion selloff in enterprise software stocks and prompting an emergency meeting at the US Treasury, according to TOI. The model, which has demonstrated an extraordinary ability to identify and exploit software vulnerabilities, has raised fears about its potential impact on cybersecurity and the broader technology sector. According to TOI, on Tuesday, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with CEOs from major banks, including Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo. JPMorgan Chase CEO Jamie Dimon was invited but could not attend. The discussion focused on the cybersecurity risks posed by Anthropic's unreleased AI model, Claude Mythos Preview, rather than traditional topics like interest rates or tariffs. According to Anthropic, Claude Mythos Preview is a general-purpose frontier model that has shown remarkable capabilities in finding and exploiting software vulnerabilities. It has already identified thousands of high-severity flaws, including zero-days in every major operating system and web browser. Examples include a 27-year-old vulnerability in OpenBSD and a 16-year-old flaw in FFmpeg that had evaded detection in five million automated tests. In response to these capabilities, Anthropic launched Project Glasswing, a collaborative initiative with major tech companies including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The project aims to use Claude Mythos Preview defensively to scan and secure critical software infrastructure. Anthropic is committing up to $100 million in usage credits and $4 million in donations to open-source security organisations. 
Earlier releases from Anthropic, such as Claude Opus and agent-building tools, contributed to a $2 trillion selloff in enterprise software stocks, often referred to as the "SaaSpocalypse." Investors fear that AI agents could replace human workers, reducing demand for traditional software subscriptions. The leak of details about Claude Mythos Preview in late March further pressured cybersecurity stocks, as markets grappled with the potential for AI to undermine existing security tools. JPMorgan Chase CEO Jamie Dimon, in his annual shareholder letter, highlighted cybersecurity as one of the bank's biggest risks, noting that AI will likely make threats worse and require significant defensive investments. The episode highlights the dual-use nature of advanced AI models -- powerful tools that can both strengthen and threaten cybersecurity. While Project Glasswing aims to give defenders an advantage, the rapid advancement of such capabilities has raised questions about preparedness across industries and governments. Anthropic has emphasised that Claude Mythos Preview is not being released to the public and access is strictly controlled for defensive purposes. (With TOI inputs)

Anthropic
Economic Times · 12d ago

Anthropic, OpenAI And Big Tech's 'Number One Goal' Is To Kill OpenClaw, Says Venture Capitalist Jason Calacanis

Venture capitalist Jason Calacanis said that killing OpenClaw is "the number one goal" in the large language model space, pointing to a growing list of competitors independently racing to displace the open-source coding agent. OpenClaw is a local-first autonomous AI agent that automates complex, multi-step tasks. It manages calendars, emails and browser actions across platforms such as WhatsApp, Telegram and Slack, and runs directly on a user's device.

Industry Lines Up Against Open-Source Rival

He added that OpenAI's acquisition of OpenClaw founder Peter Steinberger was designed "to subvert the open-source project." The stakes are not theoretical: OpenClaw has taken the artificial intelligence world by storm, much like OpenAI's ChatGPT did, and a growing number of tech industry executives are jumping on the OpenClaw bandwagon.

Anthropic
Benzinga · 12d ago