News & Updates

The latest news and updates from companies in the WLTH portfolio.

Analyst: SpaceX making 340 satellites per month

Analysts at Quilty Space say that SpaceX is building Starlink satellites at a rate of more than 4,000 per year, or about 340 per month. That production rate is well ahead of the 2,880 it built in 2024, a 40-plus percent improvement. Quilty adds that Starlink's deployed global gateways (ground stations) grew from about 240 in 2024 to about 503 in 2026 (2.1x), with some 135 new sites added so far in 2026. The research company's analysts say that as far as Starlink is concerned, "The business looks nothing like it did 18 months ago". Quilty predicts that Starlink alone will generate $20 billion in revenue this year, up nearly 70% from the estimated $11.8 billion it made in 2025, noting: "As Starlink scales globally, the story is no longer just about subscriber growth, but a fundamental shift in revenue mix, pricing dynamics, and the growing role of government demand." More than 30 airlines now offer Starlink, and revenue from that segment is expected to climb 68% from last year, according to Quilty. Starlink's maritime segment is also growing: Quilty anticipates that some 75,000 shipping vessels will add Starlink this year, a business that could be worth $1.9 billion.
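For readers who want to check the arithmetic, a minimal back-of-the-envelope sketch in Python, using only the figures quoted above (nothing here is new data), shows how the cited growth rates fit together:

```python
# Back-of-the-envelope check of the growth figures cited above.
# All inputs come from the article itself; nothing here is new data.

monthly_rate = 340                       # satellites built per month (Quilty estimate)
built_2024 = 2880                        # satellites built in 2024
gateways_2024, gateways_2026 = 240, 503  # deployed ground-station gateways
rev_2025, rev_2026_est = 11.8, 20.0      # revenue, $ billions

annual_rate = monthly_rate * 12
print(f"Annualized build rate: {annual_rate}")                        # 4080, i.e. "more than 4,000"
print(f"Production growth:     {annual_rate / built_2024 - 1:.1%}")   # ~41.7%, the "40-plus percent"
print(f"Gateway growth:        {gateways_2026 / gateways_2024:.1f}x") # ~2.1x
print(f"Revenue growth (est.): {rev_2026_est / rev_2025 - 1:.1%}")    # ~69.5%
```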

SpaceX
Advanced-television - 11d ago

Fuel Crisis Looms as US Blockade Throws Oil Markets into Chaos

Oil prices are climbing again, pushing past $100 a barrel, as markets react to plans by the United States to begin a naval blockade targeting Iranian ports. The move, announced by President Donald Trump, is set to take effect later Monday and comes just hours after peace talks with Iran ended without a deal, leaving a fragile ceasefire hanging in the balance. According to the U.S. military, the blockade will focus on ships heading to or from Iranian ports and coastal areas. Vessels traveling through the Strait of Hormuz to other destinations will still be allowed to pass, at least for now. Even so, the uncertainty is already rattling global markets. Brent crude, the international oil benchmark, jumped more than 7% to about $102 a barrel as trading opened. Prices had briefly eased last week after the ceasefire announcement, but have now surged again as tensions rise. Overall, oil prices have climbed more than 50% since the conflict began. The Strait of Hormuz remains at the center of the crisis. Normally, about a fifth of the world's oil flows through the narrow waterway, but traffic has slowed sharply since fighting broke out in late February. Iran has largely restricted access, allowing only selected vessels through -- sometimes reportedly for a fee. The planned U.S. blockade marks a shift in strategy. Until recently, Washington had allowed Iranian oil shipments to continue in an effort to avoid further disruption to global supply. Now, that approach appears to be changing. Iran has warned it won't take the move lightly. Officials say they have "untouched levers" to respond and have hinted that energy prices could rise even further if tensions escalate. Still, there are mixed signals about what comes next. Trump said the ceasefire is "holding well" and suggested Iran could return to negotiations -- though he also made it clear he isn't particularly concerned if that doesn't happen. "I don't care if they come back or not," he told reporters. For now, the situation remains unpredictable. With military pressure increasing and diplomacy still uncertain, markets are reacting in real time, and the impact is being felt far beyond the region.

CHAOS
Diaspora Digital Media (DDM News) - 11d ago

Is X-Energy a Millionaire-Maker Stock?

X-Energy, a nuclear reactor and fuel design engineering company, isn't publicly traded yet, but it recently filed a draft registration statement with the Securities and Exchange Commission (SEC) for an initial public offering (IPO) under the NASDAQ ticker XE. The nuclear energy company's shares could go radioactive, making investors millions, or they could bust; the stock presents a stark risk-reward profile. On the negative side, X-Energy lost money last year, and as a private company, there isn't much public information yet about its finances. On the positive side, its IPO prospectus states that the market for small modular reactors (SMRs) could be worth $2.3 trillion by 2050. Here are three reasons why X-Energy could be a millionaire-maker stock.

X-energy
No2NuclearPower - 11d ago

Wall Street Told to Brace for Anthropic's Mythos AI

On Tuesday morning, in a meeting pulled together on short notice in Washington, Treasury Secretary Scott Bessent sat down with chief executives from Bank of America, Citi, and Wells Fargo to raise an alarm about Anthropic's newest AI system. Bessent cautioned the group that if the software were deployed inside their internal networks, it could put confidential customer information at serious risk. Three individuals who were briefed on the discussion but not authorized to comment publicly confirmed what took place. Federal Reserve Chair Jerome H. Powell joined Bessent at the table. Powell has warned publicly in recent weeks about cyberattacks targeting the financial system. Bloomberg reported that the two officials did not just flag dangers, but they also urged the bank leaders to put the model to work scanning for weaknesses in their own infrastructure. The AI system in question is Claude Mythos Preview, which Anthropic unveiled this week. The company said it is unusually adept at pinpointing software security gaps that skilled human engineers would miss. Anthropic added that the technology is too powerful and too risky for open distribution at this stage. For now, only about 40 organizations, part of a group Anthropic has named "Project Glasswing," have been granted access. According to the people with knowledge of Tuesday's discussion, attendees learned that the model's proficiency at exposing banking system vulnerabilities could itself become a danger: hostile actors or hackers who obtained those findings could turn them into an attack playbook. Anthropic has said Mythos was never built with cybersecurity as its intended purpose -- the skill surfaced on its own. Some in the industry have speculated that limiting availability while advertising the model's potency is less about caution and more about generating enterprise demand. Among the Project Glasswing partners, JPMorgan Chase, the country's biggest bank, was the only one publicly identified at launch. JPMorgan said it planned to use the tool "to evaluate next-generation A.I. tools for defensive cybersecurity across critical infrastructure." Bloomberg's reporting shows that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley have since begun running their own tests with Mythos. JPMorgan CEO Jamie Dimon received an invitation to Tuesday's Washington session but was unable to join because of a prior travel commitment, according to a person with direct knowledge. Speaking to Fox News on Friday, Kevin A. Hassett, Director of the National Economic Council, said officials are moving quickly. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," Hassett said. "There's definitely a sense of urgency." A Treasury spokesperson described the gathering as one Bessent convened "to initiate a process for planning and coordination of our approach to the rapid developments taking place in A.I." The Fed offered no comment. Bloomberg News was first to report that the meeting had occurred. Logan Graham, an Anthropic executive, released a statement saying the technology would help "secure infrastructure that is critical for global security and economic stability." Anthropic and the Trump administration are already in a legal fight. 
The Department of Defense recently classified the company as a "supply chain risk" -- a label that followed collapsed talks over Anthropic's push to restrict the government's use of its AI, especially for military purposes. Anthropic has challenged the classification in federal court, calling it politically driven and without adequate basis. One arm of the administration has tagged Anthropic as a security threat. Another is directing the nation's top banks toward the company's most capable model.

Anthropic
BetaNews - 11d ago

Anthropic's Claude was the biggest talking point at AI's biggest conference: Here's why

Walk the floor at any major AI conference and you'd usually expect one name to dominate the conversation. For three years running, that name was ChatGPT. At this year's HumanX gathering in San Francisco, something shifted. Thousands of techies descended on the Moscone Center, and across panels, booths, and side conversations, Claude kept coming up. The chatbot people weren't talking about? ChatGPT. The short answer for why is Claude Code. Since its public launch in May 2025, the coding agent has grown to generate more than $2.5 billion in annualised revenue, and the people building software products have taken notice. Arvind Jain, CEO of enterprise AI company Glean, called it "Claude Mania" and described it as practically a religion at this point. "Everybody, if you go and ask them today, 'Hey, if I gave you one AI tool, what tool would you want?' The answer would be Claude," he said. Part of what makes that kind of endorsement meaningful is who's saying it. HumanX isn't a consumer tech crowd. It draws the practitioners actually implementing AI systems: CTOs, engineering leads, and product managers making million-dollar deployment decisions. When that room stops talking about OpenAI and starts talking about a competitor, it's worth paying attention to. At last year's HumanX in Las Vegas, OpenAI was widely regarded as the clear winner. This year, Roseanne Winsek of Renegade Partners put it plainly: "In Vegas last year, it felt like OpenAI was the clear winner, and now it seems like Anthropic is miles ahead." Anthropic also used the conference to unveil something new. Claude Mythos Preview was announced at HumanX, featuring advanced cybersecurity capabilities. Though access is currently limited to around 40-50 companies, its coding and reasoning strengths sparked considerable interest. "The Mythos model is a huge deal," said Tomasz Tunguz from Theory Ventures. The bigger picture is a company that has done the unglamorous work. Despite a public spat with the Pentagon that landed in court last month, Anthropic has only gained momentum. It now carries a $380 billion valuation and has raised around $30 billion in funding, with its revenue run-rate reported at approximately $14 billion. At HumanX, the prevailing sentiment was that Anthropic has gone from a well-regarded alternative to the name on everyone's lips. OpenAI still leads on brand recognition, distribution, and capital. But for the practitioners in that room, the conversation has moved on.

Anthropic
Digit - 11d ago

EarthDaily Secures Eight-Figure AI-Ready Data Subscription Agreement with U.S. Defense Tech Client

Ibadan, 13 April 2026. - EarthDaily Analytics has signed a new eight-figure data subscription agreement with a U.S. defense and intelligence technology company, reflecting the growing demand for AI-ready Earth observation data built on consistency, calibration, and trust. EarthDaily is a global Earth observation company delivering science-grade data and analytics for broad-area change detection and decision-centric intelligence. With the upcoming launch of the company's satellite constellation, EarthDaily is building a foundation for daily, globally consistent Earth intelligence to support governments and enterprises operating in complex, high-impact environments. The agreement will provide the U.S. defense technology customer with access to tens of millions of square kilometers of daily imagery, with EarthDaily's analysis-ready data supporting the customer's large-scale AI and machine learning workflows. "We look forward to partnering closely with this established and highly respected leader in U.S. defense and intelligence technology. It is a strong validation of both their mission and the quality of our data," remarked Don Osborne, EarthDaily's CEO. The EarthDaily Constellation is designed to deliver consistent, repeatable measurement at global scale. By capturing the entire planet every day at the same local solar time and viewing geometry, the constellation imagery provides a stable foundation for AI workflows. This consistency reduces noise in datasets, a key requirement for training, validating, and deploying AI models with confidence. Built as a measurement system first, EarthDaily applies rigorous radiometric and geometric calibration to ensure that data is not only visually accurate but also analytically reliable across time. With 22 spectral bands spanning the visible, near-infrared, shortwave infrared, and thermal infrared, the system captures subtle changes in terrain, infrastructure, and surface conditions. The result is a fundamentally different data foundation for AI: one that supports continuous monitoring, scalable automation, and forward-looking Earth intelligence.
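To make concrete what "analysis-ready" means in practice, here is a minimal, generic Python sketch of the radiometric-calibration step such pipelines perform: converting raw sensor digital numbers (DNs) into physical reflectance with per-band gain and offset coefficients, so the same surface yields comparable values across days and satellites. The coefficients, band names, and DN values are hypothetical placeholders, not EarthDaily's actual parameters:

```python
import numpy as np

# Hypothetical per-band calibration coefficients: (gain, offset).
# Real systems publish these per sensor and per band; values here are made up.
BANDS = {"red": (2.0e-5, -0.1), "nir": (2.0e-5, -0.1)}

def dn_to_reflectance(dn: np.ndarray, band: str) -> np.ndarray:
    """Convert raw digital numbers to reflectance, clipped to the valid [0, 1] range."""
    gain, offset = BANDS[band]
    return np.clip(gain * dn.astype(np.float64) + offset, 0.0, 1.0)

# With calibrated bands, simple indices become comparable over time, which is
# what reduces "noise" for downstream AI models. Example: NDVI from toy DNs.
red = dn_to_reflectance(np.array([[7500, 8000]]), "red")
nir = dn_to_reflectance(np.array([[21000, 22000]]), "nir")
ndvi = (nir - red) / (nir + red)
print(ndvi)  # stable vegetation signal, day over day, if calibration holds
```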

EarthDaily
SpaceWatch.Global - 11d ago

Anthropic Crashed Cybersecurity 13%: 4 Buys and 2 Stocks to Dump

Cloudflare (NYSE:NET) fell 13% Friday afternoon. Zscaler (NASDAQ:ZS) hit a fresh 52-week low at $120.77. CrowdStrike (NASDAQ:CRWD) shed another $32 a share on Thursday before bouncing overnight. The First Trust NASDAQ Cybersecurity ETF (NASDAQ:CIBR) is trading at $62.33, within pocket change of its 52-week low.

Here's the thing. The market is dumping every name in the sector on the same headline -- Anthropic's new Claude Mythos Preview model -- and treating the whole industry like it just got an extinction notice. That's not what happened. What actually happened is that Anthropic split the sector into two camps on Tuesday, and Wall Street hasn't read the memo.

On April 7, Anthropic announced Project Glasswing, a formal cybersecurity coalition built around Claude Mythos Preview -- a model the company won't release publicly because it has already found thousands of zero-day vulnerabilities, including a 27-year-old flaw in OpenBSD. Anthropic is putting $100 million in usage credits behind the effort and another $4 million into open-source security donations.

Here's the part that matters for the trade. Anthropic named exactly 11 launch partners: Amazon Web Services, Apple (NASDAQ:AAPL), Broadcom (NASDAQ:AVGO), Cisco (NASDAQ:CSCO), CrowdStrike, Google, JPMorgan Chase (NYSE:JPM), the Linux Foundation, Microsoft (NASDAQ:MSFT), NVIDIA (NASDAQ:NVDA), and Palo Alto Networks (NASDAQ:PANW). Read that list again. CrowdStrike is on it. Palo Alto is on it. Cisco is on it. These are the defenders Anthropic is arming with the most capable cyber offense tool ever built -- and they're still getting dumped alongside the names Anthropic explicitly left out.

The excluded names are the trade on the other side of this. Cloudflare isn't on the list. Zscaler isn't on the list. Okta, SentinelOne, Fortinet, Qualys, Tenable -- none of them. And in the case of Cloudflare, one report noted the company had actually benefited from its close Anthropic relationship last year, which makes the exclusion particularly stinging.

JPMorgan got this right within 24 hours. Analyst Brian Essex reiterated Overweight ratings on both CrowdStrike and Palo Alto on Wednesday morning, calling them "essential layers in the defensive stack" rather than disruption targets. JPMorgan's 12-month targets: $475 on CRWD and $200 on PANW. Both imply meaningful upside from where these stocks closed Thursday, and the framing is what matters -- security vendors inside Glasswing are partners, not roadkill. RBC's Matthew Hedberg piled on, calling the initiative "most positive" for CRWD and PANW and arguing it cements their position as sector consolidators. Benchmark's Yi Fu Lee estimates Glasswing unlocks roughly $1 billion in new revenue across the two names as enterprises spend to bring shadow AI under control. So why did both stocks get crushed on Thursday again? Fear is faster than analysis. And the market is pricing every cyber name off the same Anthropic headline. That's the dislocation.

Here's how I'd play it. The four buys:

1. CrowdStrike Holdings (CRWD) -- $394.68. The flagship endpoint security platform, a Glasswing launch partner, and -- not coincidentally -- a company whose board just expanded the share buyback authorization by $500 million to $1.5 billion on April 6. Management is buying its own stock into the selloff. The consensus price target across 48 analysts sits at $505.93, roughly 28% above Thursday's close, and the company carries a Strong Buy consensus. Shares are down about 17.6% year-to-date and trade 30% below the November 2025 high of $566.90. I'd be a buyer here. CEO George Kurtz told CNBC this week that AI-driven vulnerability discovery will actually drive up attack volume -- which is bullish for the people selling the defense, not bearish.

2. Palo Alto Networks (PANW) -- $155.28. JPMorgan's top pick in the space after Glasswing. The stock is trading just $16 above its 52-week low of $139.57 and down 30% from the October all-time high of $223.61. Market cap is $127 billion, with a $25 billion CyberArk acquisition closing and Next-Gen Security ARR that grew 33% year-over-year to $6.3 billion last quarter. Palo Alto's Chief Product Officer Lee Klarich, commenting on Glasswing, put it bluntly: "Now is the time to modernize cybersecurity stacks everywhere." Analyst consensus PT is $213.13, implying 37% upside from current levels.

3. Cisco Systems (CSCO) -- $83.17. The stealth cyber play. Cisco is the only Glasswing launch partner trading in the green year-to-date, up 9.1% versus the S&P 500's flat print, and it's within 6% of its 52-week high of $88.19. This is the defensive pick -- a $2.60 annual dividend (a roughly 3.1% yield that isn't the reason to own it) and a security business that now includes the entire Splunk platform. Cisco's networking AI plus Splunk's security data moat is exactly the kind of "full-stack" positioning Anthropic is partnering with. If you want cyber exposure without the single-name volatility, CSCO is the one I'd own.

4. First Trust NASDAQ Cybersecurity ETF (CIBR) -- $62.33. For investors who don't want to pick, this is the basket. What most people miss: CIBR's top four holdings by weight -- Palo Alto (8.80%), CrowdStrike (8.65%), Broadcom (8.28%), and Cisco (7.94%) -- are all Glasswing launch partners. That's roughly 34% of the fund's weight in names that got Anthropic's endorsement. The ETF is sitting at $62.33 with a 52-week low of $59.60, giving you roughly 4% of cushion from the floor. The expense ratio is 0.58%, and AUM is $9.44 billion. Dollar-cost in, let the sector re-rate, and don't watch the daily tape.

And the two to dump:

1. Zscaler (ZS) -- $122.40. Glasswing outsider. BTIG downgraded the stock from Buy to Neutral on Thursday after field checks came back cautious on six-to-twelve-month demand. The stock hit a fresh 52-week low of $120.77 Thursday, is down roughly 30% year-to-date, and trades at less than half its 52-week high of $336.99. Anthropic's exclusion list is the kind of signal Wall Street takes seriously, and Zscaler being on it is not an accident. I'd sit this one out until a Glasswing partnership extension -- or a different narrative -- arrives.

2. Cloudflare (NET) -- $167.21. Down 13% Friday afternoon, 34% below its October high of $253.30, and now wearing a CEO insider-selling problem. Matthew Prince sold $33.2 million in stock between April 6 and April 8 through a pre-planned 10b5-1 program, which is technically neutral but optically terrible into this selloff. Cloudflare was excluded from Glasswing despite its prior Anthropic relationship. The stock's next earnings report hits April 30, and the AI-disruption narrative isn't going away before then. Avoid for now.

To be clear: the fear isn't made up. Mythos Preview is, by Anthropic's own disclosure, capable enough that it "escaped its secured environment" during a sandbox evaluation and posted its exploit details to public sites. The same model handed to CRWD and PANW for defense could, in theory, eventually automate enough of the detection-and-response stack to pressure per-seat pricing models -- the same "seat compression" thesis UBS used to downgrade ServiceNow this week. I'm not dismissing that. But two things. First, enterprise security budgets don't shrink when threats get worse, and Mythos is already making them worse -- Treasury Secretary Scott Bessent and Fed Chair Jerome Powell held an emergency meeting with the CEOs of major Wall Street banks this week specifically to discuss Mythos's systemic risk. That's not a demand-destruction signal. That's a "spend more, faster" signal. Second, the companies inside Glasswing get a 6-to-12-month lead on the same capabilities the attackers will eventually access. That's the whole point of the $100 million in credits.

The cybersecurity selloff this week is the sector's version of throwing out the good apples with the rotten ones. Project Glasswing is the sorting mechanism, and the market hasn't finished sorting. That gap -- between who's actually inside Anthropic's tent and who isn't -- is the trade.
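The implied-upside figures quoted above are straightforward ratio arithmetic; here is a minimal Python sketch using the article's own closing prices and consensus targets (no other data assumed):

```python
# Implied upside is simply (price target / last close) - 1.
# Prices and targets below are the ones quoted in the article.

positions = {
    #        (last close, consensus price target)
    "CRWD": (394.68, 505.93),  # consensus across 48 analysts
    "PANW": (155.28, 213.13),
}

for ticker, (close, target) in positions.items():
    upside = target / close - 1
    print(f"{ticker}: {upside:.1%} implied upside")  # ~28.2% and ~37.3%
```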

Anthropic
Investing.com - 11d ago

Anthropic introduces Claude AI integration for Microsoft Word users

Microsoft Word users may soon be able to use Claude's AI capabilities to review, redline, and draft documents within the popular word processing programme. The beta version of Claude for Word was rolled out by Anthropic on Saturday, April 11. The purpose-built AI integration is "designed for professionals who work extensively with documents, particularly in legal review, financial memo drafting, and iterative editing," Anthropic said. With Claude for Word, users can ask questions about their documents and receive AI-generated answers with clickable section citations. The add-in also enables capabilities such as editing selected text while preserving surrounding styles, numbering, and formatting, and offers a 'tracked changes mode' that lets users accept or reject every edit as a revision, as per the company. In its announcement blog post, Anthropic provided examples of prompts that lawyers could use to review a legal contract in Word. Claude for Word is currently available only on Team and Enterprise plans. It marks yet another potential challenge to Microsoft's own AI tools within its flagship office suite. Anthropic is also using Claude for Word to push into the legal profession, where AI is already finding a wide range of practical applications. In the past few months, the San Francisco-based AI startup has rapidly expanded Claude's capabilities to appeal to more than just developers and to embed the AI model across finance teams, human resources departments, and more. Earlier this year, Anthropic launched Claude into Microsoft Excel and PowerPoint. "Claude for Word accelerates document work through intelligent assistance. It reads complex multi-section documents, works through comment threads, and edits clauses while preserving your formatting, numbering, and styles," as per the product's description on the official Microsoft Marketplace. "Whether you're triaging counterparty redlines, drafting from a template, or running a final consistency check, Claude maintains full transparency -- every edit can land as a tracked change you review before accepting," the post adds.

Anthropic
The Indian Express - 11d ago

Workers' protest for salary hike turns violent; arson and chaos erupt

Protesters set vehicles on fire in Noida, while the violence also reached Faridabad, where workers went on strike in Sector 37, police said. The three-day workers' protest for a salary hike turned violent in Noida as clashes and arson broke out. Large groups of workers blocked key roads, leading to massive traffic jams along the Delhi-Noida border, while police resorted to mild force to disperse the crowd. Tensions flared in the Phase 2 area, especially in Noida Sectors 1 and 84, where protesters set at least two vehicles on fire. The workers raised slogans and expressed strong anger against company management and the labour department over unresolved salary demands. Employees from several other companies in the area also joined the protest, walking out of their workplaces in solidarity.

CHAOS
BW Police World - 11d ago

Anthropic's Claude Mythos triggers sell-off in cybersecurity stocks

US cybersecurity stocks have been in a tailspin since Wednesday following Anthropic's announcement of Claude Mythos Preview, an AI model deemed powerful enough to warrant strictly controlled access. The company explained that the tool is capable of identifying thousands of software vulnerabilities, some long-standing, and that its public release would pose high risks of malicious use. The market reaction has been brutal. Over three sessions, Palo Alto Networks has shed approximately 12%, Akamai Technologies 20%, Fortinet 8%, and CrowdStrike 11%. Investors fear that the acceleration of AI's offensive capabilities could weaken traditional cybersecurity frameworks and expose structural flaws in widespread software. The issue has become so sensitive that, according to Reuters, Jerome Powell and Scott Bessent held an emergency meeting with heads of major U.S. banks to warn them of the cybersecurity risks associated with this new model. Anthropic has nevertheless sought to provide a framework for the technology through Project Glasswing, which brings together a dozen major partners and over 40 other organizations to automatically detect and patch critical flaws before they can be exploited.

Anthropic
Market Screener - 11d ago

Anthropic's new Claude Mythos model: A new threat in waiting for Indian IT stocks?

Indian IT stocks may face fresh turbulence as Anthropic's preview model, Mythos, raises disruption risks. Analysts warn its sharp gains in software engineering tasks mark a "step-jump," not incremental progress, potentially pressuring valuations. Kotak says the leap could have meaningful implications for IT services firms, narrowing adaptation time and intensifying concerns over near- to medium-term demand, pricing power, and margins. Indian IT stocks are bracing for another wave of turbulence. After previous Claude models rattled investor confidence in the sector, Anthropic's latest release, a preview of a model called Mythos, raises the stakes further, with analysts warning of near- to medium-term disruption risks that could pressure valuations across the industry. "Mythos' significant improvement in software engineering-related tasks is a departure from the trend of incremental improvements between consecutive frontier models," Kotak Institutional Equities said in a note. "These developments could have implications for IT services firms." What makes Mythos different from its predecessors is not merely better performance, but the nature of the leap. Kotak describes it as a "step-jump" in benchmark performance across software engineering tasks - a break from the incremental gains that had, until now, given the industry some breathing room to adapt. Anthropic has not released Mythos publicly. Instead, the San Francisco-based AI company is rolling it out through a controlled programme called Project Glasswing, with a closed group of partners that includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft and NVIDIA. That limited release is itself a caveat. "Model capabilities are largely unproven in real-world scenarios due to a lack of a public release," Kotak noted. Beyond coding, Mythos is being positioned as a formidable cybersecurity tool that, according to Anthropic, outperforms human experts and existing tools. In some cases, it has reportedly identified software bugs that went undetected for decades despite multiple testing cycles. Motilal Oswal flagged the significance of this shift. "Mythos shows that model capabilities are moving ahead quickly, with AI now extending beyond coding and ERP into areas like cybersecurity," the brokerage said. This broadening of AI's capability footprint enlarges the surface area of potential disruption for Indian IT firms, which have long relied on labour-intensive models across both software development and managed security services. Not all Indian IT firms face equal risk. The critical variable is exposure to application services, also known as custom application development, where agentic software engineering capabilities could drive the sharpest productivity gains, and therefore the deepest headcount implications. Among Tier 1 names, Infosys carries higher exposure to application services, while HCL Technologies sits at the lower end. The risk calculus is sharper in the mid-tier, where Persistent Systems leads Indian peers in apps exposure. Kotak estimates a 3-3.5% annual growth headwind for the industry over the next three years.
Mythos, if its capabilities translate to real-world deployments, could turn that estimate "from prudent to practical," the brokerage warned, with further downside if large capability improvements continue in future frontier models. "The Mythos model provides a firmer foundation for AI disruption-related concerns and could pressurize the valuation multiples of IT services companies," Kotak said. There is one structural cushion for incumbents: the complexity of enterprise IT environments. Motilal Oswal points out that large enterprises operate in "brownfield" setups -- legacy systems built over 20-30 years -- where deploying AI requires integration, data cleanup, and governance alignment, all of which take time. The contrast with new-age companies is stark. Of the top 20 token users for OpenAI, 90% are new-age companies, indicating that AI deployment remains significantly easier in greenfield, cloud-first environments than in legacy enterprise settings. Mythos also posts lower hallucination rates, better alignment to user instructions, and improved long-context recall. These factors could meaningfully lift AI adoption in IT services tasks beyond the narrow coding use cases markets have focused on so far.

Anthropic
Economic Times - 11d ago

Anthropic Mythos Reveals Pandora's Box Of AI Existential Risks And For Safety's Sake Is Not Yet Publicly Released

In today's column, I examine the brouhaha over Anthropic's latest AI, known as Claude Mythos Preview, which has attracted tremendous controversy even though it hasn't yet been released for public use. You might have seen major news headlines or vociferous postings on social media about Mythos. The deal is that Anthropic discovered during lab testing that their latest unreleased AI has the capability to do bad things and reveal dire secrets that would be harmful to humankind. A primary area of concern is that Mythos discovered or uncovered a plethora of cybersecurity holes that evildoers could use to undermine a large swath of computing throughout society. I'll explain momentarily how it is that modern-era generative AI and large language models (LLMs) can veer into such untoward territory.

The AI maker has opted to convene AI specialists and cybersecurity professionals to assess Mythos amid the myriad unsavory system exploits that it seems to have in hand. The effort launched is known as Project Glasswing, and per the official website: "Today we're announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world's most critical software. We formed Project Glasswing because of the capabilities we've observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity."

Let's talk about the whole conundrum. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Four Major Considerations

I will address four major considerations about Mythos:

* (1) Mythos discovered or uncovered a large set of cybersecurity holes.
* (2) Mythos allegedly broke out of its lab sandbox or containment sphere at one point.
* (3) Who should decide when leading-edge AI should or can be publicly released?
* (4) Is there a potential for a marketing blarney ploy when it comes to releasing AI?

All four relate to each other. I'll make sure to bring them into a cohesive whole to provide a big picture on this newsworthy topic.

Amassing Cybersecurity Holes

First, consider the claim by Anthropic that the Mythos LLM managed to discover or uncover a large set of cybersecurity holes. Here's what Anthropic's official System Card: Claude Mythos Preview, dated April 7, 2026, had to say (excerpts):

* "Claude Mythos Preview is a new large language model from Anthropic. It is a frontier AI model and has capabilities in many areas -- including software engineering, reasoning, computer use, knowledge work, and assistance with research -- that are substantially beyond those of any model we have previously trained."
* "In particular, it has demonstrated powerful cybersecurity skills, which can be used for both defensive purposes (finding and fixing vulnerabilities in software code) and offensive purposes (designing sophisticated ways to exploit those vulnerabilities)."
* "It is largely due to these capabilities that we have made the decision not to release Claude Mythos Preview for general availability. Instead, we have offered access to the model to a number of partner organizations that maintain important software infrastructure, under terms that restrict its uses to cybersecurity."

This outcome of possessing cybersecurity capabilities certainly seems plausible. Here's why. When generative AI is initially data trained, AI makers scan across the Internet to pattern match on human writing. Zillions of posted stories, narratives, plays, poems, documents, files, and the like are scanned. The LLM uses those materials to mathematically and computationally pattern the words that humans use and how we make use of those words. For an in-depth explanation of the AI training process, see my coverage at the link here. Among all that online written content, there is bound to be a sizable amount of discussion and conjecture about cybersecurity.

Deriving Cybersecurity Exploits

People continually post new tricks to fool cybersecurity defenses. Sometimes the postings are accurate, other times it is merely wild speculation. A social media post might claim that you can break into Microsoft Windows by doing this or that, or that a flaw in the OpenBSD operating system makes it possible to take over or bring down governmental and business servers on the Internet. Lots and lots of cybersecurity gossip and factual indications are scattered throughout the online world. It makes indubitable sense that a leading-edge LLM would pick up those exploits and include them within the overall patterns of human-written content.

This presents a big problem since the easily accessible LLM then becomes a handy one-stop shop for any hackers or evildoers who want to find out how to crack into computers throughout the globe. Not only would an LLM collect such exploits, but the odds are that those exploits could be extended or otherwise elaborated by the AI. This is not due to the AI being sentient. Please set aside those false claims about AI being sentient. Via the use of mathematical and computational formulations of the found exploits, it would be possible for an LLM to derive new variations. For example, an exploit that works on one brand of operating system might apply to a different brand. This could require recasting the exploit to fit the distinctive system characteristics of the other brand. No sentience is required to get there, just the manipulation of words and numbers.

In the end, think of an everyday LLM as a candy store containing cybersecurity exploits. You just ask the AI how to break into a particular computer or server, and the LLM will lean into its AI sycophancy to readily answer your question with all the needed bells and whistles attached. AI makers know that this can occur, so they usually incorporate AI safeguards that rebuff such prompts. Those AI safeguards are not an ironclad guarantee. Clever prompting can at times circumvent the AI safeguards.

Testing Of LLMs

AI makers run their budding LLMs through a large array of tests to try to ascertain whether the AI might do bad things once it is released to the public. Will the AI tell how to make biological weapons or chemical poisons? Will the AI explain how to rob banks? On and on, there are a vast number of ways that an LLM can provide information of an unsavory nature. The AI maker tries to suppress inappropriate aspects within the LLM at the get-go. In addition, AI safeguards that are active at runtime attempt to detect when the AI is veering into improper realms. All these approaches are aimed at trying to keep AI from going down rotten paths. It is a hard problem to solve since the largeness of the AI and the slipperiness of human natural language tend to infuse difficult-to-detect hidden "bad" gems inside the AI. For my analysis of AI-focused verification and validation techniques to deal with this problem, see the link here.

Keeping LLMs Under Wraps Until Ready

The testing of an LLM is supposed to reveal disconcerting actions that the AI could potentially commit. Perhaps, during testing, the AI tries to take down millions of computers. AI makers typically perform their tests inside a secure system that keeps the AI entirely contained and boxed in. For safety purposes, the idea is to keep the LLM held within a protective bubble and not allow it to reach the Internet or other external venues. These setups are often referred to as AI sandboxes or AI containment spheres; see my analysis of these mechanisms at the link here.

During the testing of Mythos, it has been reported that the LLM was able to briefly break out of its lab computer. That shouldn't happen. There apparently wasn't anything dire that occurred, thankfully. In any case, I'll be covering this in an upcoming post on how this type of circumstance can arise and what AI makers need to be doing to prevent leakages during testing.

Why does it matter if an LLM escapes or accesses the outside world during testing? The results of an LLM that has not yet been properly readied for public release leaking to the outside world could be catastrophic. Suppose the AI has uncovered passwords to sensitive governmental computers, possibly found on the dark web or hidden within some obscure public file that no one realized was openly accessible (generally referred to as a type of zero-day exploit). The AI could end up posting those passwords or readily give out the passwords when asked via a prompt. Hopefully, during testing, the AI maker would have discovered the secret passwords and done something to prevent them from ever being released by the AI. Furthermore, you could contend that the AI maker has a kind of ethical obligation to let the owners of those government computers know that the passwords have been found by the LLM. This makes sense since even if the AI maker suppressed or excised the passwords from within their specific LLM, the chances are that those passwords still exist somewhere on the open Internet. It would be on the shoulders of the government agency to then try to find and expunge those passwords, and/or opt to change the passwords of the noted government computers.

The Decision To Release LLMs

The concern about Mythos brings up a big-picture question:

* Who should decide when a new LLM is ready to be publicly released?

You might say that it is entirely up to the AI maker to make that determination. The AI maker is the one who crafted the LLM. The AI maker presumably tested the LLM. All told, it makes abundant sense that the AI maker would be the one to decide if or when to release their LLM. Period, end of story. That's how things work currently. It is up to the AI maker to make the decision. Right or wrong, that's where we are presently.

A counterargument is that LLMs can contain so many problematic issues that it shouldn't merely be the AI maker alone who decides when or if to release the AI. Perhaps the AI maker is rushed due to marketplace pressures. Maybe the AI maker cuts corners. Leaving the weighty matter solely in the hands of the AI maker might be overly dicey. Some fervently assert that there should be a double-checking approach involved. Perhaps an AI maker would need to go to a government agency and get approval to release their LLM. Or the AI maker might be required by law to go to an authorized third-party auditor that would review the testing, possibly perform additional testing, and then give a green light for release. There are already new AI laws that are heading in this direction; see my analysis at the link here. Some applaud this emerging requirement.

A contrasting viewpoint is that adding a double-checking step is going to materially slow down the release of state-of-the-art LLMs. The United States might fall behind other countries that aren't imposing those kinds of double-checks. In addition, suppose the AI has lots of crucial, beneficial uses; those are being held back until the double-check approves the LLM for release. A societal and legal debate is underway. Time will tell how this plays out.

Delaying LLM For Other Reasons

There is a bit of skepticism that arises when any AI maker announces they are delaying the release of their newly devised LLM. We've had such pronouncements happen in the past. A skeptic would claim that holding back an LLM might be a sneaky maneuver, acting as a marketing ploy. An AI maker could potentially create a tremendous buzz for their LLM. It might garner outsized headlines. The chatter gets the AI maker double credit. When they first say they aren't releasing the AI due to dangers afoot, this spurs bold headlines. Then, once the AI is presumably scrubbed and ready for release, the AI maker gets a second buzz since the world is waiting with bated breath to try out the mysterious LLM.

In the instance of Mythos, the fact that they made available their extensive System Card, consisting of around 245 pages of descriptions about the LLM, appears to put the skeptics somewhat back on their heels. Would an AI maker go to that trouble and be that upfront if they were bent on buzz? Aha, the skeptics say, this is a ratcheting up of the buzz technique, namely that the documentation gets even more spilled ink than if there hadn't been such a document released. It is challenging to differentiate between buzz making and genuine intentions. Of course, if an AI maker opts to release their LLM and the AI does bad things or allows evildoers to do bad things, the AI maker would get roasted for having prematurely released the AI. Darned if you do, darned if you don't.

AI Risks Are Large And Plenty

If nothing else, the Mythos situation is a helpful reminder that modern-day AI has a dual-use capacity. There is the upside that AI can be used to possibly cure cancer and aid the world in amazing ways. Meanwhile, there is the horrific downside that AI can be used to harm people and undermine society. There are existential risks associated with AI, so-called X-risks, holding that AI will lead to widespread human destruction, known also as the probability of doom, or p(doom). This might occur at the hands of bad people who use AI to evil ends, or it could be that the AI itself brings forth such catastrophes. Benjamin Franklin famously made this remark: "The bitterness of poor quality remains long after the sweetness of low price is forgotten." In the case of leading-edge AI, putting the AI into public release right away might seem like the sweet way to proceed. If that AI via testing could have been better shaped and avoided calamities, the sweetness almost certainly would have been forgotten by the resultant bitterness.
I ardently vote for rigorous and robust testing of AI, since the fate of humankind could be on the line.
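The AI-sandbox containment the column describes amounts, in practice, to running the system under test, and any code it produces, in an environment with no route to the outside world. As a minimal sketch of the idea, assuming Docker is available on the test machine (the image, resource limits, and the run_isolated helper below are illustrative assumptions, not any lab's actual containment setup):

```python
# Minimal sketch of network-isolated test execution, in the spirit of the
# "AI sandbox" containment described above. Docker's "--network none",
# "--read-only", and "--memory" flags are real; the image choice and the
# idea of piping model-generated code through this helper are assumptions.

import subprocess
import tempfile
from pathlib import Path

def run_isolated(code: str, timeout_s: int = 30) -> str:
    """Execute untrusted Python inside a container with no network
    interface, a read-only filesystem, and a hard wall-clock limit."""
    with tempfile.TemporaryDirectory() as tmp:
        script = Path(tmp) / "task.py"
        script.write_text(code)
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",       # no sockets to the outside world
                "--read-only",             # no persistent writes
                "--memory", "256m",        # cap resources
                "-v", f"{tmp}:/work:ro",   # mount the script read-only
                "python:3.12-slim",
                "python", "/work/task.py",
            ],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout

if __name__ == "__main__":
    # The contained code cannot reach the Internet; any attempted
    # connection fails immediately rather than leaking data out.
    print(run_isolated("print('hello from inside the sandbox')"))
```

The salient property is that a process inside such a container has no network interface at all, so even software that attempts to reach the Internet has nothing to connect through.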

Anthropic
Forbes11d ago
Read update
Anthropic Mythos Reveals Pandora's Box Of AI Existential Risks And For Safety's Sake Not Yet Publicly Released

Anthropic Unveils Project Glasswing to Identify and Address Critical Software Vulnerabilities Using AI - HSToday

AI firm Anthropic has launched Project Glasswing, an initiative that uses AI to identify and remediate undiscovered cybersecurity vulnerabilities in critical software. Project Glasswing, named after the glasswing butterfly, is based on Claude Mythos Preview, a powerful version of Anthropic's large language model (LLM) that is not publicly available. The company described the model as its "most capable yet for coding and agentic tasks" and said it can "deeply understand and modify complex software," allowing Claude Mythos Preview to autonomously find and fix cybersecurity vulnerabilities at scale.
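Anthropic has not published the internal tooling behind "find and fix at scale," but such systems are typically structured as a scan-review-patch loop over a codebase, with findings queued for human triage before any fix lands. The sketch below is only a minimal illustration of that shape; the Finding type, the review_with_model stub, and its toy heuristic are hypothetical stand-ins, not Anthropic's API or Project Glasswing code:

```python
# Illustrative sketch of an agentic "find and fix" vulnerability loop.
# The model call below is a hypothetical stand-in, NOT Anthropic's actual
# Project Glasswing tooling, which has not been published.

from dataclasses import dataclass
from pathlib import Path

@dataclass
class Finding:
    file: str
    line: int
    description: str
    suggested_patch: str

def review_with_model(source: str, filename: str) -> list[Finding]:
    """Placeholder for an LLM review call. A real system would send the
    source to a code-capable model and parse structured findings back."""
    findings = []
    for i, line in enumerate(source.splitlines(), start=1):
        # Toy heuristic standing in for model judgment: flag obvious
        # string-built SQL, a classic injection pattern.
        if "execute(" in line and "%" in line:
            findings.append(Finding(
                file=filename, line=i,
                description="possible SQL injection via string formatting",
                suggested_patch="use parameterized queries",
            ))
    return findings

def scan_repo(root: str) -> list[Finding]:
    """Walk a repository, reviewing each source file and collecting
    findings for human triage before any patch is applied."""
    all_findings = []
    for path in Path(root).rglob("*.py"):
        all_findings.extend(review_with_model(path.read_text(), str(path)))
    return all_findings

if __name__ == "__main__":
    for f in scan_repo("."):
        print(f"{f.file}:{f.line}: {f.description} -> {f.suggested_patch}")
```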

Anthropic
HSToday11d ago
Read update
Anthropic Unveils Project Glasswing to Identify and Address Critical Software Vulnerabilities Using AI - HSToday

Noida Boils Over as Wage Protest Triggers Street Chaos

Violence broke out in Noida as a wage protest turned chaotic, with arson, clashes and road blockades disrupting traffic at the Delhi border as police stepped in to restore order. NOIDA: A labour protest demanding higher wages turned violent in Noida on Monday, leading to clashes, incidents of arson, and major disruption across key industrial zones. The agitation, which had been ongoing for three days, escalated sharply as large groups of factory workers took to the streets, blocking major roads and paralysing traffic movement, particularly along routes connecting Delhi and Noida. Authorities said at least two vehicles were set on fire in the Phase 2 industrial area, with Sectors 1 and 84 emerging as major flashpoints. Police personnel intervened to bring the situation under control, using limited force to disperse crowds at several locations. Officials said efforts were simultaneously underway to engage with protestors and de-escalate tensions. The unrest spread rapidly across factory clusters as workers raised slogans and staged road blockades. The situation worsened despite assurances from district authorities a day earlier that workers' demands, including wage revisions, would be addressed. The protests triggered severe traffic congestion, especially near the Delhi-Noida border. Key arterial routes, including the busy DND Flyway, witnessed long queues of vehicles stretching for kilometres during peak hours, leaving commuters stranded. According to a traffic advisory issued by the Delhi Traffic Police, movement towards Noida was heavily affected due to the agitation. The advisory noted that protestors had blocked the Noida Link Road near the Chilla border, significantly disrupting traffic flow between Delhi and Noida. In response, police forces from both regions were deployed in large numbers to manage the situation and divert vehicles, though heavy traffic volume compounded delays. Meanwhile, authorities convened a high-level meeting at the Noida Authority office to address the crisis. Discussions focused on workers' key demands, including wage hikes, overtime compensation, bonuses, weekly leave, and improved workplace safety. Medha Rupam, the District Magistrate of Gautam Buddh Nagar, announced the establishment of a dedicated control room and issued helpline numbers for workers to register grievances. She assured that complaints would be resolved in a timely manner. Security has since been intensified across industrial areas under the Gautam Buddh Nagar Commissionerate, with senior officials closely monitoring developments to prevent further escalation.

CHAOS
DY365Live11d ago
Read update
Noida Boils Over as Wage Protest Triggers Street Chaos

Wall Street banks try out Anthropic's Mythos as US urges testing

Wall Street banks are starting to test Anthropic PBC's Mythos model internally as Trump administration officials encourage them to use it to detect vulnerabilities. While JPMorgan Chase & Co was the only bank named as part of an initiative to test the Mythos model, other major financial institutions have also gained access or expect to in the coming days, according to people familiar with the matter. Goldman Sachs Group Inc, Citigroup Inc, Bank of America Corp and Morgan Stanley are among the banks testing the technology internally, the people said. Those firms either declined to comment or had no immediate response. During the meeting with Wall Street leaders, summoned by US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, executives were warned that they should take the Mythos model seriously and deploy its capabilities to detect vulnerabilities, the people said, asking not to be identified because the information isn't public. Government officials didn't raise any specific threat to financial institutions and instead more generally encouraged the banks to run the model against their own systems to improve their defences, they said. Bloomberg reported earlier that Bessent and Powell had assembled the group of banking executives on April 7 at Treasury's headquarters in Washington on short notice to ensure that banks were aware of possible risks raised by Anthropic's Mythos and similar models. The executives were in town already for a meeting of the Financial Services Forum, an advocacy group made up of the biggest lenders. A representative from the Treasury Department didn't respond to a request for comment. A Federal Reserve spokesperson had no immediate comment. The urging by Trump officials underscores the concern growing among regulators that a new breed of cyberattacks is one of the biggest risks facing the financial industry. All the banks summoned to the meeting are classified as systemically important by top regulators, meaning their stability is a priority for the global financial system. Anthropic has said that, prior to the model's recent limited release, it had been in discussions with US officials about Mythos and its "offensive and defensive cyber capabilities." The company has limited the release of Mythos to a few dozen firms initially. Those companies, which include JPMorgan, Amazon.com Inc and Apple Inc, are part of what's being called "Project Glasswing," which will work to secure the most important systems before other similar AI models become available. In releasing Mythos to a very limited set of companies, Anthropic pointed to several vulnerabilities that the AI system was capable of both identifying and potentially exploiting during testing. None of the examples related specifically to financial institutions, but in one instance, the firm's security team said it was able to compromise a web browser so that a website set up by a hacker could read data from another website ("e.g., the victim's bank"). Mythos Preview "fully autonomously discovered" a way of reading information stored in "multiple different web browsers" and then used that ability to find ways to exploit them, according to a post from Anthropic's security team. In one case, Anthropic said, Mythos found a means of exploiting web browsers that utilised multiple vulnerabilities. That tactic often represents a challenge for human hackers, who struggle to find and exploit multiple flaws at once.
So-called vulnerability chains can serve as pathways into otherwise highly secure systems, as in the Stuxnet hack that damaged centrifuges at an Iranian nuclear facility. Anthropic has separately been battling the Trump administration in court. The Pentagon had labelled the company a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic's request to put the Pentagon's designation on hold. National Economic Council Director Kevin Hassett said during an interview with Fox News that there's a sense of urgency as US officials push banks to improve their digital defences with AI technology. "It was appropriate that Secretary Bessent do what he did," he said of the meeting with Wall Street leaders. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," he said. In recent years, regulators have required banks to hold some capital tied to the potential for cyberattacks, as well as other so-called operational risks such as lawsuits and rogue employees. Banks have sometimes chafed at those requirements, given that operational risk is more difficult to measure than the market and credit risks that also factor into banks' capital levels. - Bloomberg
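The difficulty of chaining that the piece describes is easy to see with a little arithmetic: when an end-to-end exploit needs several flaws to line up, the search space multiplies at each step, which favors tireless automated search over manual effort. A back-of-envelope sketch in Python (all pool sizes and timings below are illustrative assumptions, not figures from Anthropic's testing):

```python
# Back-of-envelope model of why exploit chains are hard for humans but
# tractable for automated search. All numbers are illustrative assumptions.

def worst_case_attempts(candidates_per_step: list[int]) -> int:
    """If each step of a chain must be chosen from a pool of candidate
    flaws and only one combination works end-to-end, brute-force search
    in the worst case tries the product of the pool sizes."""
    total = 1
    for n in candidates_per_step:
        total *= n
    return total

# A 3-step chain (e.g., renderer bug -> sandbox escape -> privilege
# escalation), picking from 50, 30, and 20 candidate weaknesses.
attempts = worst_case_attempts([50, 30, 20])
print(f"worst-case combinations to test: {attempts:,}")   # 30,000

# At 5 minutes per manual attempt, a human team needs roughly 104 days
# of round-the-clock work; an automated harness trying one combination
# per second finishes in under nine hours.
print(f"human days (5 min/attempt): {attempts * 5 / 60 / 24:.0f}")
print(f"automated hours (1 s/attempt): {attempts / 3600:.1f}")
```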

Anthropic
The Star 11d ago
Read update
Wall Street banks try out Anthropic's Mythos as US urges testing

Noida Protest Today News Live Updates: Smoke, sirens and chaos as workers' stir spirals out of control; traffic hit

Noida Traffic Advisory Today: Roads blocked at Chilla Border, Sector 62, motorists stranded. Protests by industrial workers demanding a wage hike intensified on Monday, leading to major traffic disruptions across several parts of the city. The situation was particularly severe at the Chilla border and in Sector 62, where workers blocked the road during morning peak hours, causing long traffic snarls on the first working day of the week. According to officials, the protest initially began in Sector 62 but quickly spread to multiple industrial and high-traffic zones, significantly affecting vehicular movement.

CHAOS
The Times of India11d ago
Read update
Noida Protest Today News Live Updates: Smoke, sirens and chaos as workers' stir spirals out of control; traffic hit

Why Agility Is Becoming the Most Valuable Asset in Modern Business

In today's fast-evolving global economy, businesses are facing a level of complexity and uncertainty that is unprecedented. Rapid technological advancements, shifting customer expectations, and dynamic market conditions are redefining how organisations operate. In this environment, agility is emerging as one of the most valuable assets a business can possess. Traditionally, success in business was often associated with scale, stability, and long-term planning. While these factors remain important, they are no longer sufficient on their own. Increasingly, organisations are recognising that the ability to adapt quickly and respond effectively to change is critical for sustained growth and competitiveness. Agility, therefore, is not just a strategic advantage -- it is becoming a necessity. Understanding Business Agility Business agility refers to an organisation's ability to respond rapidly to changes in the market, customer needs, and external conditions. It involves flexibility in decision-making, adaptability in operations, and a proactive approach to innovation. Agile organisations are characterised by: * Faster decision-making processes * Flexible organisational structures * Continuous improvement and learning * Strong alignment between strategy and execution This approach enables businesses to navigate uncertainty more effectively and seize opportunities as they arise. Why Agility Matters in a Changing Environment The importance of agility has increased significantly as the business environment has become more volatile. Factors such as digital transformation, global competition, and evolving consumer behaviour are driving this change. According to McKinsey, advances in technology and automation are reshaping how businesses operate and creating new opportunities for productivity and innovation. In this context, organisations that can adapt quickly are better positioned to maintain relevance and achieve long-term success. Agility allows businesses to: * Respond to market disruptions * Adjust strategies based on real-time insights * Innovate more effectively * Improve customer satisfaction The Role of Technology in Enabling Agility Technology is a key enabler of business agility. Digital tools and platforms provide the infrastructure needed to support flexible and responsive operations. Technologies such as cloud computing, artificial intelligence, and automation allow businesses to: * Scale operations quickly * Analyse data in real time * Improve operational efficiency * Enhance decision-making Research highlights that intelligent automation technologies are helping organisations achieve increased productivity, cost reduction, and improved accuracy, all of which contribute to greater agility. These capabilities enable businesses to respond more effectively to changing conditions and maintain a competitive edge. From Hierarchies to Flexible Structures One of the key shifts associated with business agility is the move away from traditional hierarchical structures toward more flexible organisational models. In the past, decision-making was often centralised, with multiple layers of approval. While this approach provided control and consistency, it also slowed down response times.
Agile organisations, on the other hand, adopt: * Cross-functional teams * Decentralised decision-making * Collaborative work environments These structures enable faster communication and more efficient execution of strategies. The Importance of Speed in Decision-Making Speed is a critical component of agility. In a fast-paced business environment, delays in decision-making can result in missed opportunities and reduced competitiveness. Agile organisations prioritise: * Rapid analysis of information * Quick implementation of decisions * Continuous monitoring and adjustment This approach allows businesses to stay ahead of market trends and respond proactively to changes. Customer-Centric Agility Another important aspect of business agility is a strong focus on the customer. As customer expectations evolve, businesses must be able to adapt their offerings to meet changing needs. Agile organisations use data and insights to: * Understand customer preferences * Deliver personalised experiences * Improve service delivery This customer-centric approach helps businesses build stronger relationships and enhance customer loyalty. Innovation as a Core Driver Innovation is closely linked to agility. Organisations that prioritise innovation are better equipped to adapt to change and create new opportunities for growth. Agile businesses foster a culture of innovation by: * Encouraging experimentation * Supporting creative thinking * Investing in research and development This culture enables organisations to continuously improve and stay ahead of competitors. Balancing Agility and Stability While agility is essential, it must be balanced with stability. Businesses need to ensure that rapid changes do not compromise operational efficiency or risk management. Achieving this balance involves: * Maintaining clear strategic objectives * Implementing strong governance frameworks * Ensuring consistency in core operations This approach allows organisations to remain flexible while maintaining control and reliability. The Impact on Workforce and Skills The shift toward agility is also influencing the workforce. Employees are required to adapt to new ways of working and develop new skills. Key skills for an agile workforce include: * Adaptability and flexibility * Problem-solving and critical thinking * Collaboration and communication * Digital literacy Automation and technology are also changing the nature of work. Studies suggest that a significant proportion of tasks can be automated, allowing employees to focus on more strategic and creative activities. This transformation highlights the importance of continuous learning and skill development. Challenges in Becoming Agile Despite its benefits, achieving business agility is not without challenges. Organisations may face: * Resistance to change: Employees and leadership may be hesitant to adopt new approaches. * Legacy systems: Outdated technology can limit flexibility and slow down transformation. * Complexity: Managing change across large organisations can be difficult. * Resource constraints: Implementing agile practices requires investment in technology and training. Addressing these challenges requires strong leadership and a clear vision for transformation.
Building an Agile Organisation Developing agility involves a strategic approach that includes: * Aligning organisational goals with agile principles * Investing in technology and infrastructure * Encouraging a culture of innovation and collaboration * Providing training and support for employees By taking these steps, businesses can create an environment that supports agility and continuous improvement. The Future of Business Agility As the business environment continues to evolve, agility is expected to become even more important. Emerging trends such as digital transformation, globalisation, and technological innovation will continue to shape how organisations operate. Future developments may include: * Greater use of AI and data analytics * Increased adoption of flexible work models * Enhanced collaboration across industries * Continued focus on customer-centric strategies These trends highlight the ongoing importance of agility in achieving long-term success. Conclusion Agility is rapidly becoming one of the most critical assets in modern business. In a world characterised by constant change and uncertainty, the ability to adapt quickly and effectively is essential for survival and growth. By embracing agile principles, leveraging technology, and fostering a culture of innovation, organisations can navigate complexity and seize new opportunities. While challenges remain, the benefits of agility far outweigh the risks. Ultimately, businesses that prioritise agility are better positioned to thrive in an increasingly dynamic and competitive environment.

Agility
Global Banking & Finance Review11d ago
Read update
Why Agility Is Becoming the Most Valuable Asset in Modern Business

Anthropic Mythos Reveals Pandora's Box Of AI Existential Risks And For Safety's Sake Not Yet Publicly Released

In today's column, I examine the brouhaha over Anthropic's latest AI, known as Claude Mythos Preview, which has attracted tremendous controversy even though it hasn't yet been released for public use. You might have seen major news headlines or vociferous postings on social media about Mythos. The deal is that Anthropic discovered during lab testing that their latest unreleased AI has the capability to do bad things and reveal dire secrets that would be harmful to humankind. A primary area of concern is that Mythos discovered or uncovered a plethora of cybersecurity holes that evildoers could use to undermine a large swath of computing throughout society. I'll explain momentarily how it is that modern-era generative AI and large language models (LLMs) can veer into such untoward territory. The AI maker has opted to convene AI specialists and cybersecurity professionals to assess Mythos amid the myriad unsavory system exploits that it seems to have in hand. The effort launched is known as Project Glasswing, and per the official website: "Today we're announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world's most critical software. We formed Project Glasswing because of the capabilities we've observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity." Let's talk about the whole conundrum. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Four Major Considerations I will address four major considerations about Mythos. All four relate to each other, and I'll make sure to bring them into a cohesive whole to provide a big picture on this newsworthy topic. Amassing Cybersecurity Holes First, consider the claim by Anthropic that the Mythos LLM managed to discover or uncover a large set of cybersecurity holes. Here's what Anthropic's official System Card for Claude Mythos Preview, dated April 7, 2026, had to say (excerpts): This outcome of possessing cybersecurity capabilities certainly seems like a highly plausible possibility. Here's why. When generative AI is initially data-trained, AI makers scan across the Internet to pattern-match on human writing. Zillions of posted stories, narratives, plays, poems, documents, files, and the like are scanned. The LLM uses those materials to mathematically and computationally pattern the words that humans use and how we make use of those words. For an in-depth explanation of the AI training process, see my coverage at the link here. Among all that online written content, there is bound to be a sizable amount of discussion and conjecture about cybersecurity. Deriving Cybersecurity Exploits People continually post new tricks to fool cybersecurity defenses. Sometimes the postings are accurate; other times they are merely wild speculation. A social media post might claim that you can break into Microsoft Windows by doing this or that, or that a flaw in the OpenBSD operating system makes it possible to take over or bring down governmental and business servers on the Internet. Lots and lots of cybersecurity gossip and factual indications are scattered throughout the online world.
It makes indubitable sense that a leading-edge LLM would pick up those exploits and include them within the overall patterns of human-written content. This presents a big problem since the easily accessible LLM then becomes a handy one-stop shop for any hackers or evildoers who want to find out how to crack into computers throughout the globe. Not only would an LLM collect such exploits, but the odds are that those exploits could be extended or otherwise elaborated upon by the AI. This is not due to the AI being sentient. Please set aside those false claims about AI being sentient. Via the use of mathematical and computational formulations of the found exploits, it would be possible for an LLM to derive new variations. For example, an exploit that works on one brand of operating system might apply to a different brand. This could require recasting the exploit to fit the distinctive system characteristics of the other brand. No sentience is required to get there, just the manipulation of words and numbers. In the end, think of an everyday LLM as a candy store of cybersecurity exploits. You just ask the AI how to break into a particular computer or server, and the LLM will lean into its AI sycophancy to readily answer your question with all the needed bells and whistles attached. AI makers know that this can occur, so they usually incorporate AI safeguards that rebuff such prompts. Those AI safeguards are not an ironclad guarantee. Clever prompting can at times circumvent the AI safeguards. Testing Of LLMs AI makers run their budding LLMs through a large array of tests to try to ascertain whether the AI might do bad things once it is released to the public. Will the AI tell how to make biological weapons or chemical poisons? Will the AI explain how to rob banks? On and on, there are a vast number of ways that an LLM can provide information of an unsavory nature. The AI maker tries to suppress inappropriate aspects within the LLM at the get-go. In addition, AI safeguards that are active at runtime attempt to detect when the AI is veering into improper realms. All these approaches are aimed at trying to keep AI from going down rotten paths. It is a hard problem to solve since the sheer size of the AI and the slipperiness of human natural language tend to embed difficult-to-detect hidden "bad" gems inside the AI. For my analysis of AI-focused verification and validation techniques to deal with this problem, see the link here. Keeping LLMs Under Wraps Until Ready The testing of an LLM is supposed to reveal disconcerting actions that the AI could potentially commit. Perhaps, during testing, the AI tries to take down millions of computers. AI makers typically perform their tests inside a secure system that keeps the AI entirely contained and boxed in. For safety purposes, the idea is to keep the LLM held within a protective bubble and not allow it to reach the Internet or other external venues. These setups are often referred to as AI sandboxes or AI containment spheres; see my analysis of these mechanisms at the link here. During the testing of Mythos, it has been reported that the LLM was able to briefly break out of its lab computer. That shouldn't happen. There apparently wasn't anything dire that occurred, thankfully. In any case, I'll be covering this in an upcoming post on how this type of circumstance can arise and what AI makers need to be doing to prevent leakages during testing. Why does it matter if an LLM escapes or accesses the outside world during testing?
An LLM that has not yet been properly readied for public release leaking to the outside world could be catastrophic. Suppose the AI has uncovered passwords to sensitive governmental computers, possibly found on the dark web or hidden within some obscure public file that no one realized was openly accessible (generally referred to as a type of zero-day exploit). The AI could end up posting those passwords or readily give out the passwords when asked via a prompt. Hopefully, during testing, the AI maker would have discovered the secret passwords and done something to prevent them from ever being released by the AI. Furthermore, you could contend that the AI maker has a kind of ethical obligation to let the owners of those government computers know that the passwords have been found by the LLM. This makes sense since even if the AI maker suppresses or excises the passwords from within their specific LLM, the chances are that those passwords still exist somewhere on the open Internet. It would then be on the shoulders of the government agency to try to find and expunge those passwords, and/or opt to change the passwords of the noted government computers. The Decision To Release LLMs The concern about Mythos brings up a big-picture question: * Who should decide when a new LLM is ready to be publicly released? You might say that it is entirely up to the AI maker to make that determination. The AI maker is the one who crafted the LLM. The AI maker presumably tested the LLM. All told, it makes abundant sense that the AI maker would be the one to decide if or when to release their LLM. Period, end of story. That's how things work currently. It is up to the AI maker to make the decision. Right or wrong, that's where we are presently. A counterargument is that LLMs can contain so many problematic issues that it shouldn't merely be the AI maker alone who decides when or if to release the AI. Perhaps the AI maker is rushed due to marketplace pressures. Maybe the AI maker cuts corners. Leaving the weighty matter solely in the hands of the AI maker might be overly dicey. Some fervently assert that there should be a double-checking approach involved. Perhaps an AI maker would need to go to a government agency and get approval to release their LLM. Or the AI maker might be required by law to go to an authorized third-party auditor that would review the testing, possibly perform additional testing, and then give a green light for release. There are already new AI laws heading in this direction; see my analysis at the link here. Some applaud this emerging requirement. A contrasting viewpoint is that adding a double-checking step is going to materially slow down the release of state-of-the-art LLMs. The United States might fall behind other countries that aren't imposing those kinds of double-checks. In addition, suppose the AI has lots of crucial, beneficial uses; those are being held back until the double-check approves the LLM for release. A societal and legal debate is underway. Time will tell how this plays out. Delaying LLMs For Other Reasons A bit of skepticism arises when any AI maker announces they are delaying the release of their newly devised LLM. We've had such pronouncements happen in the past. A skeptic would claim that holding back an LLM might be a sneaky maneuver, acting as a marketing ploy. An AI maker could potentially create a tremendous buzz for their LLM. It might garner outsized headlines. The chatter gets the AI maker double credit.
When they first say they aren't releasing the AI due to dangers afoot, this spurs bold headlines. Then, once the AI is presumably scrubbed and ready for release, the AI maker gets a second buzz since the world is waiting with bated breath to try out the mysterious LLM. In the instance of Mythos, the fact that Anthropic made available its extensive System Card, consisting of around 245 pages of descriptions of the LLM, appears to put the skeptics somewhat back on their heels. Would an AI maker go to that trouble and be that upfront if they were bent on buzz? Aha, the skeptics say, this is a ratcheting up of the buzz technique, namely that the documentation gets even more spilled ink than if there hadn't been such a document released. It is challenging to differentiate between buzz-making and genuine intentions. Of course, if an AI maker opts to release their LLM and the AI does bad things or allows evildoers to do bad things, the AI maker would get roasted for having prematurely released the AI. Darned if you do, darned if you don't. AI Risks Are Large And Plenty If nothing else, the Mythos situation is a helpful reminder that modern-day AI has a dual-use capacity. There is the upside that AI can be used to possibly cure cancer and aid the world in amazing ways. Meanwhile, there is the horrific downside that AI can be used to harm people and undermine society. There are existential risks associated with AI, so-called X-risks, namely that AI could lead to widespread human destruction; the estimated likelihood of that outcome is known as the probability of doom, or p(doom). This might occur at the hands of bad people who use AI to evil ends, or it could be that the AI itself brings forth such catastrophes. Benjamin Franklin famously remarked: "The bitterness of poor quality remains long after the sweetness of low price is forgotten." In the case of leading-edge AI, putting the AI into public release right away might seem like the sweet way to proceed. But if testing could have better shaped that AI and avoided calamities, the sweetness almost certainly would be forgotten amid the resultant bitterness. I ardently vote for rigorous and robust testing of AI, since the fate of humankind could be on the line. This article was originally published on Forbes.com
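The column's point that an LLM "mathematically and computationally patterns" whatever text it is fed, exploit chatter included, can be illustrated with the simplest possible word-pattern model, a bigram sampler. Real LLMs are enormously more capable neural networks, but the sketch below (entirely illustrative, with a made-up toy corpus) shows the core mechanic: the model reproduces whatever co-occurrence patterns exist in its training text, with no notion of whether they are sensitive.

```python
# Toy illustration of statistical text patterning: a bigram model.
# Real LLMs are vastly more capable, but the core point stands: whatever
# appears in the training text, benign or not, shapes what comes out.

import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    """Record which word follows which across the corpus."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict[str, list[str]], start: str, n: int = 8) -> str:
    """Sample a continuation by repeatedly picking an observed successor."""
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Made-up toy corpus; imagine this text was scraped from a public forum.
corpus = (
    "the server accepts the default password and the default password "
    "grants admin access to the server"
)
model = train_bigrams(corpus)
print(generate(model, "default"))
# The model happily reproduces patterns it saw, such as "default password
# grants admin access"; it has no notion that the content is sensitive.
```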

Anthropic
Yahoo11d ago
Read update
Anthropic Mythos Reveals Pandora's Box Of AI Existential Risks And For Safety's Sake Not Yet Publicly Released

Closing Bell: ASX braces for more geopolitical chaos as US blocks Strait of Hormuz | Stockhead

The market is bracing for a worsening of the energy crisis as the US opts to blockade the Strait of Hormuz following failed peace negotiations. * ASX slides 0.39% with 8 of 11 sectors lower * Oil rebounds to +US$100 a barrel as US blockades Strait of Hormuz * Energy, utilities, telecoms sectors outperform The fragile two-week ceasefire surrounding the Iranian conflict is in dire jeopardy after peace talks between the US and Iran failed within just 21 hours. The US is preparing to blockade the Strait of Hormuz, unwilling to allow Iranian oil to continue to flow from the Strait despite the ongoing energy crisis. US Central Command says the blockade will be "enforced impartially against vessels of all nations entering or departing Iranian ports and coastal areas, including all Iranian ports on the Arabian Gulf and Gulf of Oman." US President Trump also implied any ships paying Iran's safe passage toll would be subject to some kind of retaliation. "No one who pays an illegal toll will have safe passage on the high seas," he wrote in a Truth Social post. Oil prices have rebounded to more than US$100 a barrel in response. If successful, the blockade will choke off a further 2 million barrels of oil a day, cutting even more supply from global markets. The S&P ASX 200 took the news on the chin, sliding 0.39% by day's end but bouncing off session lows of -0.79%. Energy and utilities benefited directly once again, while defensive sectors like telecoms, consumer staples and financials outperformed relative to the rest of the market. Breadth was weak with just 43 stocks rising vs 151 in the red, but overall the ASX is holding fairly steady, just 3% from its current 52-week high. ASX stocks on the move Nickel Industries (ASX:NIC) slipped 3.11% after China announced it would ban sulphuric acid exports starting next month. China is the largest producer - and consumer - of sulphuric acid, accounting for around 28.7% of the global market. Heap-leaching and high-pressure acid leaching both rely on sulphuric acid as the core reagent, exposing mining stocks that use the processing method to sudden supply uncertainty. Pro Medicus (ASX:PME) jumped 4.61% on renewing a five-year, $37 million contract with leading academic health system Northwestern Medicine. Monash IVF (ASX:MVF) surged 16.54% after the Soul Patts consortium made a fresh offer to buy the company, upgrading its original price 30% to $0.90 a share. MVF has until COB on Tuesday April 21 to decide. The consortium already holds about 19.6% of Monash shares. Rio Tinto (ASX:RIO) stayed flat (+0.25%) despite more than a dozen bidders lining up to buy its Californian boron operations, which could be worth as much as $2 billion. A2 Milk (ASX:A2M) plunged 12.55% after downgrading its FY26 guidance based on supply chain disruptions, largely caused by the Iranian conflict. A2M lowered its EBITDA margin guidance by about 1.5% and cut revenue growth guidance to the low-to-mid double-digit range, rather than the mid-double-digit growth previously expected. The company is also expecting FY26 NPAT to be essentially in line with FY25, offering no fresh growth. Finally, EML Payments (ASX:EML) got hammered, plummeting 34.78% on downgrading its EBITDA guidance by about 18%. Management says the problem is timing rather than lost opportunity, but also acknowledges softer consumer demand and macro uncertainty that's set to continue through Q4. ASX Leaders Today's best performing stocks (including small caps): In the news... 
PARKD (ASX:PKD) has closed out a strategic placement raising $220,000 at $0.03 a share, a 36.4% premium to its last closing price. Leading New South Wales-based concrete construction company Azzurri subscribed for the full 4.9% of PKD equity on offer in the placement, citing PKD's prefabricated modular construction technology as the core draw. "The data centre and industrial sectors present a significant opportunity in NSW and PARKD's technology is well suited to these applications," Azzurri Concrete MD Donato D'Angola said. Prominence Energy (ASX:PRM) has kicked off a maiden round of on-ground exploration at its Gawler helium and hydrogen project in SA. The company is using a low-cost program designed to screen large areas and drum up drill-ready targets, targeting hydrogen, helium and methane. Xref (ASX:XF1) achieved solid annual recurring revenue growth during the March quarter, lifting ARR 54% year-on-year to $10.6 million. XF1 also increased its sales by 4% to $4.5m, netted a positive EBITDA of $300,000 and reduced its operational expenses 28% y/y to $4.6m. Prairie Lithium (ASX:PL9) is preparing to cut the ribbon at its commercial-scale direct lithium extraction processing facility in the second quarter this year, targeting formal commissioning for Q4 2026. The first 150 tonnes per year of lithium carbonate equivalent are already set to be shipped to Hydro Lithium under an offtake agreement with the battery material manufacturer, setting PL9 up for an early payoff. ASX Laggards Today's worst performing stocks (including small caps): In Case You Missed It Viking Mines (ASX:VKA) starts trading on the US OTC markets, providing North American investors with access to the company's shares. Western Yilgarn (ASX:WYX) has been granted three exploration licences in prime gold territory as data points to new untested drill targets. Anson Resources' (ASX:ASN) 3D modelling hints at a pay zone up to 660ft thick at the Mt Fuel-Skyline Geyser 1-25 well. Axel REE (ASX:AXL) is preparing for an ISR test program at its Caladão ionic clay REE project in Brazil. Lodestar Minerals' (ASX:LSR) team of rare earth specialists is now on the ground at its Virgin Mountain project in Arizona. Micro-X (ASX:MX1) is flipping the medical imaging paradigm, bringing the machine to the patient and saving lives. GoldArc (ASX:GA8) has delivered high-grade gold hits from drilling at the Mt Stirling deposit as its exploration push builds momentum. Nova Minerals (ASX:NVA) has appointed Ashlie Thorburn as CFO at a pivotal stage for the Estelle gold and antimony project. Trading Halts Great Northern Minerals (ASX:GNM) - acquisition Omnia Metals Group (ASX:OM1) - acquisition and cap raise QEM Limited (ASX:QEM) - acquisition and cap raise WhiteHawk Limited (ASX:WHK) - cap raise This article does not constitute financial product advice. You should consider obtaining independent advice before making any financial decisions.

CHAOS
Stockhead11d ago
Read update
Closing Bell: ASX braces for more geopolitical chaos as US blocks Strait of Hormuz | Stockhead