SpaceX is developing a next-generation gateway station to boost Starlink speeds after securing approval to upgrade the satellite internet service with gigabit connectivity. On Tuesday, the company filed an application with the Federal Communications Commission about a new ground-based gateway station, called "First of Its Name," an apparent Game of Thrones reference. SpaceX routinely submits regulatory filings to authorize new gateway sites. They're best known for using spherical dome antennas to beam high-speed data to orbiting Starlink satellites, which then relay it to users. The company already has more than 100 gateway stations across the US. But they've usually been designed to transmit data over a swath of radio spectrum in the Ka and E bands, according to past SpaceX filings. The First of Its Name gateway stands out because it'll harness even more radio spectrum in the "Ka-, V-, E-, and W-bands" -- which the FCC greenlit in January as part of SpaceX's proposal to upgrade Starlink with gigabit speeds. The application calls out permission to use the 18.6-18.8GHz, 19.4-19.6GHz, and 29.1-29.5GHz spectrum, along with various higher bands in the V- and W-bands. "This application takes the next step by seeking authority for one of SpaceX's next-generation quad-band gateway earth stations that will connect these satellite systems to the terrestrial internet," the company wrote, noting it's targeting "fiber-like speeds." The gateway station will still use 40 antennas for quad-band access, but each will be a 1.99-meter parabolic dish, slightly larger than the 1.85-meter dishes used at the company's other gateway facilities.
SpaceX adds: "This allows the gateway site to connect to as many independent NGSO [non-geostationary] satellites as possible at any given instant in time, delivering higher data transmission rates and improved customer connectivity to meet the growing consumer demand for high-speed, low-latency satellite broadband and ubiquitous mobile communications." The application also includes specific coordinates indicating the gateway will be located at SpaceX's Starlink factory in Bastrop, Texas, as noted by Tim Belfall, a director at UK-based Starlink installer Westend WiFi. It's unclear when the quad-band antennas will roll out to other stations. But in the filing, SpaceX noted if the FCC grants the authorization to use the 18.6-18.8GHz band for the gateway site, it'll "allow SpaceX to efficiently upgrade its existing hardware to make productive use of the 18.6-18.8GHz band for consumers, since the commission has already authorized SpaceX to use adjacent frequencies above and below 18.6-18.8GHz." The company's main driver for unleashing gigabit speeds is launching next-generation V3 satellites using the upcoming Starship vehicle. SpaceX CEO Elon Musk has mentioned that mass deployment of V3 satellites could start in Q4, but it will depend on progress with Starship, which is slated for another test flight next month. In the meantime, SpaceX adds, "Granting this application will promote the public interest by improving the coverage, quality, reliability, and sustainability of SpaceX's upgraded Gen1 and Gen2 systems for American consumers without causing significant interference problems." The company is asking permission to use the gateway for both fixed and mobile satellite services.
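For a back-of-the-envelope sense of why the slightly larger dishes and the move into higher bands matter, the standard parabolic-antenna gain formula G = η(πD/λ)² can be applied to the dish sizes in the filing. This is an illustrative sketch only: the 65% aperture efficiency and the 29.3GHz sample frequency (a point inside the authorized 29.1-29.5GHz band) are assumptions for the calculation, not figures from SpaceX.

```python
import math

def dish_gain_dbi(diameter_m: float, freq_ghz: float, efficiency: float = 0.65) -> float:
    """Approximate boresight gain of a parabolic dish: G = eta * (pi * D / lambda)^2."""
    wavelength_m = 0.299792458 / freq_ghz  # c / f, with f in GHz, gives lambda in metres
    gain_linear = efficiency * (math.pi * diameter_m / wavelength_m) ** 2
    return 10 * math.log10(gain_linear)  # convert to dBi

# Compare the 1.99 m quad-band dish with the 1.85 m dishes at other gateways,
# at an assumed 29.3 GHz Ka-band uplink frequency.
for d in (1.85, 1.99):
    print(f"{d} m dish at 29.3 GHz: {dish_gain_dbi(d, 29.3):.1f} dBi")
```

The gain difference between the two dish sizes is under a decibel; the larger win comes from the extra spectrum, since gain also scales with the square of frequency.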

A powerful AI kept from public access because of its ability to hack computers with impunity is making headlines around the world. But what is Mythos, does it really represent a risk and might it even be used to improve cybersecurity? The past few weeks have brought apparently alarming news of Mythos, an AI that can identify cybersecurity flaws in a matter of moments, leaving operating systems and software vulnerable to hackers. The cybersecurity community is now beginning to get a better sense of how Mythos may change the face of cybersecurity - and not necessarily for the worse. What is Mythos and why are people concerned by it? Mythos is an AI created by Anthropic. Its existence was accidentally revealed last month when people unearthed content on the company's website, not due for publication, which had been left unsecured for anyone to see. According to Anthropic, there's a good reason the model had been kept behind closed doors: it is - by accident rather than design - extremely good at hacking. It can allegedly discover flaws in virtually any software, if asked, that would allow the user to break in. The company says that Mythos found thousands of high- and critical-severity vulnerabilities in operating systems and other software. Anthropic did not respond to New Scientist's request for comment, but the company said on its website that "the fallout -- for economies, public safety, and national security -- could be severe." The company says it took the responsible step of keeping it hidden. So nobody at all is able to use it? Not quite. Anthropic has decided to make it available to a select group of technology and finance giants like Amazon Web Services, Apple, Google, JPMorganChase, Microsoft and NVIDIA under something called Project Glasswing so that they can uncover any bugs in their own software before someone else does. Members of a private online forum have also managed to gain unauthorised access to the trial. 
Reports suggest that they simply made an "educated guess" about where the model would be hosted online - the same sort of issue that led to the revelation of the existence of Mythos in the first place. Perhaps a company so concerned about cybersecurity should pay more attention to their own. While the model was initially due to be kept under wraps and out of use, it's now gaining huge attention and being tested by some of the world's best cybersecurity experts. Many of those companies are also Anthropic's largest potential customers, of course - and hype about the power of Mythos will certainly do Anthropic no harm. Security expert Davi Ottenheimer summed up the situation in a blog post as "a legitimate technological capability, reframed as civilisational threat, by a party that benefits from the reframing". Kevin Curran at Ulster University, UK, says that the revelation of Mythos and what it might be able to do "triggered alarm across the security industry", although researchers were divided on how serious the threat actually was. "What happens when a machine can do in seconds what a skilled human hacker takes months to accomplish?" he wonders. But there are indications that it isn't time to panic yet. Bobby Holley at Firefox - one of those organisations being given access to Mythos - wrote in a blog post that the model helped his team find 271 vulnerabilities in the web browser, which is certainly quite a haul, but that none were so ingenious, impenetrably complex or sophisticated that a human couldn't have dug them out. "Just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up," wrote Holley. "Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher." The AI Security Institute (AISI) - set up under then-UK Prime Minister Rishi Sunak after the UK's AI Summit in 2023 - has also investigated Mythos. 
In tests, it was found to be capable of attacking only "small, weakly defended and vulnerable enterprise systems" and there was no indication that a really secure bit of software or network would be at risk, although it was a step up in ability from previous models. And AISI did warn that these things are improving fast. AISI did not comment when asked by New Scientist to discuss the threat. Alan Woodward at the University of Surrey, UK, has a pragmatic view of the threat posed by Mythos - and all other AI models in general, which also have the ability to spot cyber vulnerabilities to varying degrees. "The AI is not necessarily capable of finding vulnerabilities that a human wouldn't, but it's just so much faster, thorough and relentless. Hence it's finding vulnerabilities that humans have missed," he says. "AI, as demonstrated by Mythos, is making the attacker's job more efficient and giving them a speed and agility that make defence harder, but not impossible." So it seems that while Mythos can find flaws at scale and speed, it isn't finding anything devastatingly dangerous yet. And there are even reasons to believe that it could actually be a good thing. "The defects are finite, and we are entering a world where we can finally find them all," wrote Holley. In essence, if you make or maintain software then you can also use Mythos to pick apart your own code and patch it - perhaps even before it's released. AI will almost certainly get more capable of finding flaws and malicious attackers will almost certainly benefit from this to some extent. But this will also help software-makers - although those who maintain ageing, clunky government software written decades ago may find keeping up challenging. Even Anthropic believes that hacking AIs will eventually benefit defenders more than attackers - but then again, saying the opposite would make it hard to justify making them. 
Essentially, AI is making - and will continue to make - both hacking and defending from hackers easier, but those who ignore the technology will find themselves at a big disadvantage. "Treat Mythos as the warning shot it is," says Curran. "And assume that within 18 months, comparable capabilities will be in the hands of adversaries. The window to get ahead of this is open, but it is closing fast."

Rishi Sunak said the pressure was being felt particularly in service sectors such as law, accountancy and the creative industries.

Former British Prime Minister Rishi Sunak, now an adviser to Anthropic and Microsoft, warned that artificial intelligence is beginning to flatten the jobs market for young people, especially those seeking entry-level roles. Sunak said concerns among graduates trying to enter the workforce were justified, adding that senior business leaders were privately telling him recruitment trends were changing because of AI. "They're talking about this concept that they think they can continue to grow their businesses without having to significantly increase employment. Flat is the new up," Sunak told the BBC.

Entry-Level Jobs Under Pressure

Sunak said the pressure was being felt particularly in service sectors such as law, accountancy and the creative industries, where many junior roles involve routine analytical or administrative tasks that AI tools can increasingly perform. "There are reasons to be worried and think about the future. But we are able to do something about this," he said. While Sunak described himself as enthusiastic about AI's long-term potential, he said governments should intervene to make hiring people more attractive rather than allowing technology simply to replace workers.

Sunak's Tax Proposal

The former Conservative leader suggested phasing out National Insurance contributions over time and replacing the lost revenue with taxes on corporate profits. He argued that companies benefiting from AI-led productivity improvements would likely generate stronger profits, creating an alternative tax base while reducing the cost of employing staff. "We should be thinking about how do we tip the balance in favour of AI being used in that positive way... to help people do their jobs better," he said.
Regulating Powerful AI

Sunak joined both Anthropic and Microsoft as an adviser last year after leaving office. During his premiership, he made AI regulation a major policy priority and hosted the AI Safety Summit. His comments come after Anthropic unveiled a new AI model called Claude Mythos, which the company said outperformed humans in some cybersecurity and hacking-related tasks. Sunak said the development showed regulators should not depend on companies to "mark their own homework". Despite the warning, Sunak struck an optimistic tone about Britain's place in the global AI race, saying the UK could become the world's most productive user of AI and remained an "AI superpower".

SAN FRANCISCO--(BUSINESS WIRE)--Today, Thumbtack announced a new integration with Anthropic's Claude, bringing its home services marketplace directly into the AI assistant experience. Claude users on Free, Pro, and Max plans can now move from asking home-related questions to finding, comparing, and hiring top-rated local professionals from Thumbtack -- all within the Claude interface. Through the new integration, U.S.-based users can inquire about home maintenance, repairs, or upgrades, and Claude can surface relevant Thumbtack professionals directly in the conversation.

This analysis has uncovered two additional findings: First, we have identified a small number of additional accounts that were compromised as part of this incident. Second, we have uncovered a small number of customer accounts with evidence of prior compromise that is independent of and predates this incident, potentially as a result of social engineering, malware, or other methods.

New Delhi, Apr 23 (PTI) Finance Minister Nirmala Sitharaman on Thursday convened a high-level meeting with heads of banks to assess emerging cybersecurity risks linked to advanced artificial intelligence models, amid global concerns over Anthropic's Claude Mythos system and its potential implications for financial data security. During the meeting, Sitharaman asked banks to take all necessary pre-emptive measures to secure their IT systems, safeguard customer data, and protect monetary resources. "It was advised that a robust mechanism for real-time threat intelligence sharing may be established among banks, @IndianCERT and other relevant agencies so that emerging threats are identified early and disseminated across the ecosystem without delay," the finance ministry said in a post on X. Banks were further advised to immediately report any suspicious activity or cyber incident to the relevant authorities, including the Indian Computer Emergency Response Team (CERT-In), and to maintain close coordination with all agencies concerned, it said. The recommendations came out of the meeting, which Sitharaman chaired along with Minister for Electronics and Information Technology Ashwini Vaishnaw and which brought together banks and key stakeholders to assess the potential impact of emerging threats linked to recent developments in AI models, particularly the possibility of such technologies being misused to weaponise software vulnerabilities. The meeting assumed significance in view of the development of the Claude Mythos AI model by Anthropic, which claims the model has found vulnerabilities in many major operating systems. During the meeting, the finance minister urged the Indian Banks' Association (IBA) to develop a coordinated institutional mechanism to respond swiftly and effectively to any such threats.

She also directed banks to engage the best available cybersecurity professionals and specialised agencies to continuously strengthen banks' defensive and monitoring capabilities. Appreciating the work done by banks so far in strengthening cybersecurity systems and protocols, she emphasised that the nature of the emerging threat from the latest AI model is unprecedented and requires a very high degree of vigilance, preparedness and better coordination across financial institutions and banks. According to a senior finance ministry official, the ministry and the RBI are studying the extent of the risks the Indian financial sector faces from this breach. So far, Indian systems are secure and there is no need for undue worry, the official said, adding that the RBI is also doing due diligence at its end to ensure India's financial sector is secure. According to reports, Anthropic said Mythos can outperform humans at cybersecurity tasks, finding and exploiting thousands of bugs, including 27-year-old vulnerabilities, in major operating systems and web browsers. Anthropic, a US-based artificial intelligence company, said unauthorised access was gained to its new model Mythos, which is deemed too dangerous for public release. Announced on April 7, Mythos is being deployed as part of Anthropic's 'Project Glasswing', a controlled initiative under which select organisations "are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity". Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability "to identify digital security vulnerabilities and potential for misuse". Anthropic chose not to release Mythos publicly, arguing that its capabilities pose unprecedented cybersecurity risks, according to reports. PTI DP TRB

StubHub brings ticketing platform to Claude, changing discoverability and searchability of live events

NEW YORK -- StubHub (NYSE: STUB), the world's leading live event marketplace, today announced an integration that lets fans discover and browse live events inside Claude, Anthropic's AI assistant. The integration connects Claude users to StubHub's global catalog of live events with up-to-the-minute pricing and seat-level availability. The launch builds on StubHub's ChatGPT integration and makes StubHub the only major ticketing platform fans can access across multiple leading AI assistants. StubHub is building a distribution strategy designed to put live events within reach of any AI-powered conversation.

"We built StubHub to be where fans discover live events, and these integrations ensure our marketplace reaches fans wherever they are," said Nayaab Islam, President & Chief Product Officer at StubHub. "Consumer behavior is driving a new era in ticket discovery, with fans increasingly turning to conversation, not just menus and filters, to find live events. With our breadth of catalog and global reach, we're uniquely positioned to be the default destination for live events, wherever fans choose to start their search."

How It Works

The integration is available through Claude's connectors. When a user mentions StubHub, Claude will pull up the StubHub marketplace. Ask Claude something like "Look on StubHub. What concerts are happening in New York this Friday?" The integration returns current inventory with actual pricing, not a list of links to sort through on your own. The conversation builds on itself. A fan might start broad, then get specific: each follow-up refines the results without starting over. When the right tickets surface, Claude sends the fan directly to StubHub to complete the purchase.

What Fans Get

The integration goes beyond what a web search can do. Fans interact with StubHub's live marketplace data, including current seat maps, pricing trends, and section-level recommendations. Every purchase is backed by StubHub's FanProtect Guarantee.

A Multi-Platform AI Strategy

StubHub's approach differs from a typical brand integration: it embeds the marketplace directly into conversational platforms. The Claude launch is the second step in a broader roadmap toward becoming the default platform for discovering live events.

About StubHub

StubHub is a leading global ticketing marketplace for live events. Through StubHub in North America and viagogo internationally, StubHub serves customers in over 200 countries and territories, supporting over 30 languages and accepting payments in over 45 currencies -- from sports to music, comedy to dance, festivals to theater. StubHub offers a safe and convenient way to buy or sell tickets to live events across the world for memorable live experiences.

A private Discord channel dedicated to sniffing out unreleased AI models pulled off the unthinkable. Its members accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor. Bloomberg broke the story, detailing how the group provided screenshots and a live demo as proof, and reported that the breach occurred through a vendor environment. Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch. Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations. The intruders? A handful of enthusiasts in that Discord server. They drew from a Mercor data breach earlier in April, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg. But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet its own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular use without malice -- but the precedent chills. And the fallout? Anthropic's probe continues, no breaches beyond the vendor noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond. Mozilla confirmed early tests found vulns, per TechCrunch snippets. But this slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fix: tighten vendor oversight. Rotate keys. Obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models in reach, hinting at persistent Discord hunts. Irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: If Mythos can't shield itself, what hope for the wild? Expect audits. Partner scrutiny. Maybe Mythos turns inward, probing Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.

Mystery Polymarket traders raked in huge profits after correctly predicting an unusual temperature spike at a weather station in Paris-Charles de Gaulle Airport, prompting French officials to launch an investigation into suspected tampering. Polymarket, a leading prediction market, allows users to place bets on the maximum temperature based on readings at specific locations around the world. On April 15, the temperature in Paris was 18 degrees Celsius and trending downward when it inexplicably spiked to 22 degrees Celsius at 9:30 p.m. - before rapidly plunging again. Just before the spike, Polymarket user "xX25Xx" placed an approximately $120 bet that the day's top temperature in Paris would exceed 18 degrees Celsius - even as 99% of other users predicted it wouldn't, the Wall Street Journal reported, citing data from analytics firm Bubblemaps. The user earned more than $21,000 in profit on the trade. A similar incident reportedly occurred on April 6, when a Polymarket user called "Hoaqin" earned nearly $14,000 after predicting that the temperature in Paris, which had peaked at 18 degrees Celsius late that afternoon, would hit 21 degrees. Météo-France, the country's weather forecasting office, examined its sensors at the airport and ultimately filed a complaint with local police due to concerns that someone had meddled with the system, a spokesperson told the Journal. Tampering with temperature sensors in airports is potentially dangerous because airlines and air traffic controllers rely on accurate data to safely operate flights. Polymarket representatives did not immediately answer a request for comment. Since the wagers, the prediction market has switched to gathering its weather data for Paris from a station at Paris-Le Bourget Airport, according to its website. Polymarket and Kalshi have faced growing scrutiny in recent months due to concerns that users could attempt to rig "prediction" bets in their favor.
Recent incidents include a wave of bets on Polymarket that correctly predicted when the US would begin airstrikes on Iran, as well as a surge in trading activity on oil futures on March 23 just minutes before President Trump announced he wouldn't target Iran's power plants. Earlier this month, the White House warned staffers not to use any inside information to place bets.
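The reported figures show just how long the odds were. In a simplified fee-free binary market where winning shares redeem for $1, a $120 stake returning roughly $21,000 in profit implies the "yes" shares were trading well under one cent on the dollar -- consistent with 99% of users betting the other way. A quick sketch (the fee-free payout model is a simplification of how Polymarket actually settles trades):

```python
def implied_share_price(stake: float, profit: float) -> float:
    """In a binary market where winning shares pay $1: profit = stake/price - stake,
    so the price implied by a known stake and profit is stake / (stake + profit)."""
    return stake / (stake + profit)

def payout_profit(stake: float, price: float) -> float:
    """Profit if the position wins: shares bought = stake/price, each redeems for $1."""
    return stake / price - stake

# Figures from the article: roughly a $120 stake and more than $21,000 in profit.
price = implied_share_price(120, 21_000)
print(f"implied 'yes' price: ${price:.4f} per share")
print(f"profit at that price: ${payout_profit(120, price):,.0f}")
```

Working backwards like this, the $120 stake bought roughly 21,120 shares at about half a cent each, which is why a four-degree blip on one sensor was worth five figures.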

Forty trusted U.S. partners gain exclusive access through Project Glasswing initiative

Your browser just became a potential crime scene. Anthropic, the San Francisco AI company, made an unprecedented decision last month: withhold its newest model, Claude Mythos Preview, from public release. The reason? It demonstrates advanced capabilities in identifying vulnerabilities across the digital infrastructure you rely on daily -- from your banking apps to the operating system running your laptop.

This AI Breaks Into Systems Like a Digital Burglar

Mythos identifies vulnerabilities and chains exploits with five times the precision of previous models. Mythos wasn't trained specifically for cybersecurity, yet it emerged with what Jared Kaplan, Anthropic's Chief Science Officer, calls "very elite-level cybersecurity capabilities." The model excels at identifying high-severity vulnerabilities in major operating systems and browsers, then writing actual exploit code to breach those systems. It can chain multiple vulnerabilities together, creating sophisticated multi-step cyberattacks that would challenge seasoned hackers. Think of it like giving a master locksmith X-ray vision and infinite patience. Mythos can spot weaknesses in digital infrastructure that human experts might miss, then craft precise tools to exploit them. Your smartphone's security updates and browser patches suddenly feel less reassuring once you realize vulnerabilities can potentially be exploited faster than developers can patch them.

The Digital Iron Curtain Descends

Only select U.S. allies get access to defensive preparations while adversaries scramble to catch up. Instead of a public release, Anthropic restricted Mythos access through "Project Glasswing" to roughly 40 trusted partners.
The list reads like a who's who of American tech power:

* Amazon Web Services
* Apple
* Microsoft
* Google
* Nvidia
* JPMorganChase

These companies can now use AI-powered tools to identify and patch vulnerabilities before bad actors exploit them. Global reactions reveal the new AI geopolitics. The U.S. Treasury warned banks while the White House summoned Anthropic's CEO for briefings. The UK's AI Security Institute confirmed the model's advanced cyberattack potential against "weakly defended" systems. China and Russia's notably muted responses highlight just how far behind they've fallen in this particular arms race -- like showing up to a Formula 1 race with a horse and buggy. The shrinking window between vulnerability discovery and exploitation -- from 771 days in 2018 to under four hours today -- means defenders need every advantage they can get. Anthropic predicts similar models from competitors within 18 months, potentially leveling a playing field that currently favors the prepared. Your digital security now depends on whether the good guys can stay one step ahead in an AI-powered game of cyber chess.

Recent conflicts have rewritten the rules of naval warfare. Affordable, scalable unmanned systems now decide outcomes -- and the US Navy needs small USVs that carry flexible payloads exceeding 1,000 lbs, sustain extended operations, and roll off production lines fast. Current domestic offerings fall short. As Navy Secretary John Phelan told the Senate Armed Services Committee: "We will not win the wars of the future with the platforms of the past. Success in modern warfare will require the rapid, scalable production and integration of air, surface, and subsurface unmanned systems." Kraken Technology Group builds the answer. Anduril is partnering with Kraken to bring Kraken's proven family of small, high-performance, mass-producible USVs to the US Navy. Kraken's USVs offer uniquely high performance. With a heritage rooted in competitive offshore racing, Kraken's USVs have set the standard for speed and endurance at sea. They have already proven that performance under the UK's Project Beehive program, where Kraken emerged as the small USV leader for European and partner nation customers. Anduril and Kraken are joining forces to deliver a family of small unmanned surface vehicles to the US Navy. Anduril will build the K5 KRAKEN and K7 SABRE at US facilities, and sustain and support the fleet. Anduril will integrate payloads and Lattice autonomy software on US soil, configuring each vessel for the full range of Navy missions. To meet allied demand, Kraken will continue a parallel production line, designing a distinct hull variant for allied operational requirements. Dominance at sea requires scale. Kraken's platform expertise plus Anduril's autonomy and domestic manufacturing deliver it.

A previously undocumented China-aligned threat actor targeted a Mongolian government entity and used popular communication platforms such as Discord, Slack and Microsoft 365 Outlook to manage its operations and steal data, researchers have found. The group, which researchers at cybersecurity firm ESET named GopherWhisper, has been active since at least November 2023 and was discovered in January 2025 after investigators found a previously unknown backdoor on the network of a Mongolian government institution. The malware, dubbed LaxGopher, was deployed on roughly a dozen systems belonging to the organization, the Slovak cybersecurity firm said in a report on Thursday. Researchers believe the campaign likely affected dozens of additional victims, though they have not identified their locations or sectors. According to ESET, the hackers relied heavily on legitimate online services to conceal their activity, using Discord, Slack and Microsoft 365 Outlook to communicate with compromised machines and manage command-and-control infrastructure. The group deployed a range of custom-built tools written largely in the Go programming language, including loaders, injectors and backdoors designed to maintain access to targeted systems. Among the tools identified were RatGopher, BoxOfFriends, the injector JabGopher, the loader FriendDelivery and a backdoor known as SSLORDoor, researchers said. To remove stolen information from compromised networks, the attackers used a dedicated data exfiltration tool called CompactGopher, which compressed files and uploaded them to the file-sharing service File.io. ESET said the operation appears consistent with cyber espionage activity, though it did not attribute the campaign to a specific entity.

No Anthropic systems were affected by the breach, which has been contained within the third-party vendor environment, according to an Anthropic spokesperson. A handful of individuals were behind the hack of Claude Mythos, the model touted for advancing the discovery and exploitation of software flaws; they used their knowledge of Anthropic's URL formatting conventions and a vendor breach to determine the online location of the AI model, Bloomberg News reported. The group has since reportedly tested unreleased Anthropic AI models discovered following the breach. Acalvio CEO Ram Varadarajan characterized the incident as a supply chain issue commonly downplayed by perimeter-centric security. "Deception infrastructure is what's needed and operates precisely in the post-breach environment. It doesn't assume the perimeter held, it instruments the terrain inside so that when someone wanders in uninvited, their every move becomes a signal," said Varadarajan.

* Vercel expanded its breach investigation, confirming more compromised accounts than initially reported.
* Researchers linked the attack to a Context.ai account infected with Lumma Stealer malware, which was used to access Vercel environments.
* A dark web actor attempted to sell stolen Vercel data, claiming ties to ShinyHunters, though the group denied involvement.

The number of customers affected by the recent breach at Vercel is bigger than initially thought, as the company confirmed finding even more compromised accounts. Earlier this week, the cloud development platform confirmed suffering a cyberattack and losing "non-sensitive" customer data. In the initial report, Vercel said one of its employees used a third-party AI tool called Context.ai, which appears to have served as the entry point. "The incident originated with a compromise of Context.ai," the company said, adding that the attacker used that access to take over the employee's Google Workspace account. Through that, they gained access to some Vercel environments and environment variables "that were not marked as 'sensitive'."

Infected after downloading "game hacks"

During a more thorough investigation, Vercel expanded its list of compromise indicators and, as a result, found even more exposed accounts. It also said it found a "small number" of customer accounts with evidence of actual compromise predating this attack. These, the company believes, are the result of social engineering or malware attacks. It said it notified the affected individuals but declined to say how many people were affected. In its own investigation, security researchers at Hudson Rock found that the Context.ai user was infected with the Lumma Stealer infostealer in February 2026, after searching for exploits for Roblox. "We now understand that the threat actor has been active beyond that startup's compromise," Vercel CEO Guillermo Rauch said on X.
"Threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers." Just a day before Vercel announced the breach, someone tried selling the archive on a dark web forum. "Greetings all. Today I am selling Access Key/Source Code/Database from Vercel," the attacker said. They claimed to be part of the ShinyHunters team, which the group denied. Via The Hacker News

Finance Minister Nirmala Sitharaman on Thursday held a meeting with bank chiefs and officials from the Reserve Bank of India (RBI) to review potential risks arising from Anthropic's AI model Mythos, amid global concerns over cybersecurity vulnerabilities and data breaches in financial systems linked to such technologies, sources aware of the matter told TNIE. Officials from the Department of Financial Services, the Ministry of Electronics and Information Technology, and CERT-In were also present at the meeting. The meeting was held against the backdrop of developments surrounding Anthropic's latest model, Claude Mythos, which has come under global scrutiny. Concerns have been mounting among financial institutions worldwide after Anthropic claimed that the AI model can perform complex hacking tasks, potentially outperforming humans. Officials familiar with the deliberations said the discussions covered both immediate and long-term risks posed by such technologies, along with safeguards required to mitigate them.

A hairdryer was allegedly used to rig Polymarket bets on the weather at Charles de Gaulle airport in Paris. French authorities note that the official temperature readings at the airport spiked twice in the past month, reaching levels much higher than expected. On both occasions, gamblers on Polymarket appear to have walked away with thousands upon thousands of dollars by betting on those temperature fluctuations. The gambling site relies on readings from temperature sensors, and the one at Charles de Gaulle airport is on a public road, making it easy to access. The operating theory is that someone snuck in and used a battery-powered hairdryer to bring the recorded temperature up well beyond the actual heat outside. Meanwhile, the Polymarket page indicated less than a one percent chance of the airport exceeding a particular temperature. Successful bets on these fluctuations netted an unknown user around $34,000. "In view of physical findings on one of our instruments and the analysis of sensor data, Météo-France was indeed led to file a complaint for alteration of the operation of an automated data processing system with the Air Transport Gendarmerie Brigade of Roissy," a spokesperson for France's official weather agency said. There is no indication that Polymarket forced anyone to return their winnings, but the temperature sensor has been moved to a new location. The site still hosts bets on the daily temperature in and around Paris. It sucks that someone potentially tricked a temperature sensor with a hairdryer to scam actual gamblers out of potential winnings. However, this sort of thing should be expected when betting money on real-world scenarios like this. If something can be rigged, and there's money to be made, it'll get rigged. Humans are gonna human. This does, however, shine a light on the types of bets that should be allowed on sites like Polymarket and Kalshi.
Polymarket, for instance, hosts numerous bets on geopolitical questions, including whether or not countries will take particular actions, among many other sensitive topics. What happens when someone uses something much more dangerous than a hairdryer to change the outcome of something for financial gain?
French authorities are investigating potential tampering with critical weather sensors at Paris Charles de Gaulle International Airport that could be tied to prediction market trading. Temperature readings spiked on two evenings this month, exceeding the highs recorded during the daytime. According to Bloomberg, the sensors are critical to aviation operations at the airport and are also relied upon as official data points in prediction markets, where traders can bet money on certain weather trends. On the two days of suspected tampering, money flowed to temperature predictions at the French airport at more than double the usual volume. One account on Polymarket recorded $21,000 in profit betting on temperature at the airport. Meteorologists inspected the sensors, whose readings spiked 39 degrees Fahrenheit on one day and 41 degrees Fahrenheit on the other. After investigating, they filed a report with airport police for tampering with the system, as Bloomberg reports. Hacking weather systems at an airport is especially dangerous, as temperature data is needed for departing and arriving aircraft and is used by air traffic control to plan routes and spacing between airplanes.
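Météo-France says the tampering was caught partly through "the analysis of sensor data." A minimal sketch of how such a check might work, assuming a simple local-median baseline; the function name, window size, and threshold here are illustrative, not the agency's actual method:

```python
# Hypothetical spike detector: flag readings that deviate sharply from the
# median of their neighbors. Illustrative only, not Meteo-France's method.
from statistics import median

def flag_spikes(readings, window=3, threshold=5.0):
    """Return indices whose value differs from the median of up to `window`
    neighbors on each side by more than `threshold` degrees."""
    flagged = []
    for i, value in enumerate(readings):
        neighbors = readings[max(0, i - window):i] + readings[i + 1:i + 1 + window]
        if neighbors and abs(value - median(neighbors)) > threshold:
            flagged.append(i)
    return flagged

# Hourly readings with two implausible spikes of the kind described above
hourly = [12.0, 12.5, 13.0, 13.5, 35.0, 13.0, 12.0, 33.0, 11.5, 11.0]
print(flag_spikes(hourly))  # -> [4, 7]
```

A real deployment would also cross-check against nearby stations, since a hairdryer heats one sensor while the surrounding network stays flat, which makes this kind of localized anomaly stand out even more clearly.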
BENGALURU: Salil Parekh, chief executive of Infosys, pointed to emerging risks in artificial intelligence systems, referring specifically to vulnerabilities linked to Anthropic's Mythos. Speaking during the Q4 earnings press conference, Parekh said that current observations suggest AI systems may be more exposed than previously understood. "It really... is exposing more vulnerabilities than one thought possible previously. However, other models are also exposing vulnerabilities." He did not provide technical details but indicated that as organisations adopt AI at scale, new risks are becoming visible, particularly around system behaviour, security, and reliability. Parekh also described these developments as a potential opportunity for Infosys.

This article first appeared on GuruFocus. Intel (NASDAQ:INTC) shares rose about 3% early Thursday after Elon Musk said during a Wednesday earnings call that Tesla (NASDAQ:TSLA) and SpaceX will use Intel's 14A process for a planned Terafab, giving Intel a fresh sign of interest in its manufacturing turnaround. Intel has been spending heavily to rebuild its chipmaking position and compete with Taiwan Semiconductor Manufacturing Co. (TSM). Musk said the process is still under development, but he expects it to be mature by the time the facility begins operating. Tesla and SpaceX would use the Intel technology in a semiconductor plant meant to support their AI work, Musk said. He added that the Terafab would help address a shortage of advanced chips as demand for AI infrastructure keeps climbing. Intel is due to report earnings on Thursday. The comments add a new customer-facing angle for Intel as it looks to show progress on 14A, while memory chip suppliers including Samsung Electronics, SK Hynix and Micron Technology (NASDAQ:MU) face tight supply conditions.
