The latest news and updates from companies in the WLTH portfolio.
Amid a raging debate over the impact that new AI models will have on cybersecurity, Mozilla said on Tuesday that its Firefox 150 browser release this week includes protections for 271 vulnerabilities identified using early access to Anthropic's Mythos Preview. The Firefox team says that it has taken resources and discipline to adjust to the firehose of bugs that new AI tools can uncover, but that this big lift is necessary for the security of Mozilla's users, given that the capabilities will inevitably be in attackers' hands soon. Both Anthropic and OpenAI have announced new AI models in recent weeks that the companies say have advanced cybersecurity capabilities that could represent a turning point in how defenders -- and, crucially, attackers -- find vulnerabilities and misconfigurations in software systems. With this in mind, the companies have so far only done limited private releases of their new models, and both have also convened industry working groups meant to assess the advances and strategize. In practice, though, cybersecurity experts have a range of views on how consequential the new capabilities will be. Mozilla's experience, at least in the short term, shows that AI tools like Mythos Preview could have a profound impact for vulnerability hunters. "Our belief is that the tools have changed things dramatically, because now we have automated techniques that can cover, as far as we can tell, the full space of vulnerability-inducing bugs," says Bobby Holley, Firefox's chief technology officer. For years, he says, Firefox and other organizations have relied on a combination of automated vulnerability hunting techniques, like software fuzzing, and manual vulnerability hunting by internal and external researchers to find and fix flaws. And attackers have had these same tools and methods at their disposal. 
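The fuzzing Holley mentions can be illustrated with a toy harness. This is a minimal black-box sketch, not Mozilla's actual tooling: `toy_parser` is a hypothetical stand-in with a deliberately planted bug, and production fuzzers (libFuzzer, AFL, and the like) add coverage feedback and input mutation on top of this idea.

```python
import random

def toy_parser(data: bytes) -> int:
    """Hypothetical length-prefixed parser with a deliberate latent bug:
    it never checks for empty input, so data[0] can raise IndexError."""
    n = data[0]
    return len(data[1:1 + n])

def fuzz(target, iterations=1000, seed=42):
    """Throw short random byte strings at `target` and collect every
    input that raises -- the essence of black-box fuzzing."""
    rng = random.Random(seed)
    crashers = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(6)))
        try:
            target(data)
        except Exception:
            crashers.append(data)
    return crashers

crashers = fuzz(toy_parser)
# Every crasher is the empty input, isolating the missing bounds check.
```

Automated techniques like this excel at bugs reachable by random inputs; the categories Holley describes as needing "human analysis" are exactly the ones such harnesses historically missed.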
"There were categories of bugs that you could find with human analysis that you couldn't find with automated analysis and, therefore, it was always possible if you were a threat actor and you were willing to spend many millions of dollars to find a bug -- we tried to drive the price of that as high as possible," Holley says. Holley now says that emerging AI capabilities will create a sort of bootcamp that all software will have to go through one way or the other to find and fix a set of latent vulnerabilities in their code. Companies like Anthropic and OpenAI seem to be trying to get as many major players as possible to go through this overhaul before the capabilities are more widely available. "Every piece of software is going to have to make this transition, because every piece of software has a lot of bugs buried underneath the surface that are now discoverable," Firefox's Holley says. "This is a transitory moment that is difficult and requires coordinated focus and a lot of grit to get through, but I think that it is a finite moment, even as the models become more advanced. Maybe the more advanced models will find a few things here or there, but I believe that, at least on the Firefox side having had a bit of a head start here, that we've rounded the curve." Holley says that the Firefox team gained access to Mythos Preview as part of direct collaboration with Anthropic and that Mozilla is not formally part of its larger consortium, called Project Glasswing. Firefox is open source, a type of software that in general could be particularly impacted by new AI bug hunting capabilities given that many open source projects are widely used and relied upon around the world and yet are often maintained by a very small group of volunteers or just one person. And the effects could be especially consequential for "abandonware" that is no longer maintained at all. 
Raising awareness about the urgency of the issue and the reality of what it takes to secure software in the age of advanced AI vulnerability hunting, both in terms of resources and time, is crucial to getting all hands on deck for open source, Holley says. "I've talked to engineering leaders at very large companies who are saying that they're going to be pulling thousands of engineers off of everything to be working on this for the next six months," he says. "So it is going to be a big challenge for industry, and the concern is for smaller projects and open source. It's difficult for these maintainers to not only have the wherewithal and the access to be able to use these tools, but also to actually do anything with them." In a New York Times Opinion essay last week, Mozilla CTO Raffi Krikorian argued that even with gestures from companies like Anthropic, the arrival of these new AI cybersecurity capabilities will perpetuate dynamics that have played out in software for decades. "The underlying economics haven't changed," Krikorian wrote. "The most valuable software infrastructure in the world continues to be maintained by people working for free, while the companies building fortunes on top of it never had to pay for its upkeep. Now a powerful new capability has arrived -- and as we've seen repeatedly in tech, there's the risk that organizations with resources will receive it first and learn to protect themselves, while others are left vulnerable." For its part, Firefox's Holley says his team has relationships across the open source ecosystem and is working both formally and informally with as many maintainers as it can to share knowledge and tools. "Ultimately the open source stuff is a human problem," Holley says. "There's only so much that you can scale with technology -- there's a lot of the industry and everybody just needing to come together."

OpenAI and Anthropic continue to take swipes at each other. This week, during a podcast appearance, OpenAI CEO Sam Altman called out his competitor's new cybersecurity model, arguing that the company was using fear to make its product sound more impressive than it actually is. Anthropic announced Mythos earlier this month, releasing the model to a small cohort of enterprise customers. The company has claimed that Mythos is too powerful to be released to the public out of concern that cybercriminals will weaponize it. Critics have said this rhetoric is overblown. During an appearance on the podcast "Core Memory," Altman suggested that Anthropic's "fear-based marketing" served to keep AI in the hands of a small and exclusive elite. "There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he said. "You can justify that in a lot of different ways." "It is clearly incredible marketing to say, 'We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,'" he added. Fear-based marketing was not invented by Anthropic. Arguably, much of the AI industry has leveraged scare tactics and hyperbole to make its tools sound powerful. Ongoing rhetoric about how AI may lead to the end of the world hasn't just come from luddite doomer activists; it has also come from the people selling this technology to the public -- Altman included.

Alphabet's early investments in SpaceX and Anthropic have grown enormously in value. Several big-name companies are still privately held, meaning that retail investors can't put their money into them directly. The biggest, which is planning on going public in the near future, is SpaceX. Based on what we know about its initial public offering (IPO) plans, its market cap is estimated to be more than $1 trillion, so its early investors are poised to profit handsomely from their stakes. Another popular market segment is generative artificial intelligence (AI). Companies like OpenAI and Anthropic are generating a ton of buzz, but small investors can't invest in them, either. However, there's a way to gain some exposure to both SpaceX and Anthropic through a single investment right now. How? By investing in Alphabet (GOOG) (GOOGL). Alphabet was a fairly early investor in both SpaceX and Anthropic. Although we won't be able to calculate the exact values of Alphabet's stakes in them until they IPO, it's estimated that Alphabet owns about 6% of SpaceX and about 14% of Anthropic. Depending on what valuations these companies go public at, the combined total could add up to several hundred billion dollars -- far more than the sums Alphabet paid for those stakes. These two are Alphabet's most profitable external investments, but Alphabet also owns Waymo, its self-driving taxi service, outright. While most investors focus on how Alphabet's core business is valued, they tend to forget about its significant side bets and how much money it could raise if it liquidated those investments. Selling would also free up capital for other priorities, such as expanding its own AI computing capacity. And given how much it's laying out on data centers, this could be a good time for the company to have a massive influx of cash. Alphabet is directly competing in the AI arms race with its own model, Gemini.
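The stake arithmetic above is easy to sanity-check. The ownership percentages are the estimates cited in the article; the dollar valuations below are illustrative placeholders I've assumed, not reported figures, so treat the result as a rough order-of-magnitude sketch:

```python
# Back-of-the-envelope value of Alphabet's stakes under ASSUMED valuations.
# Ownership percentages are the estimates cited above; the dollar
# valuations are illustrative placeholders, not reported figures.
stakes = {
    "SpaceX":    {"ownership": 0.06, "assumed_valuation_bn": 1000},  # ~$1T, per IPO chatter
    "Anthropic": {"ownership": 0.14, "assumed_valuation_bn": 1000},  # placeholder
}

total_bn = sum(s["ownership"] * s["assumed_valuation_bn"] for s in stakes.values())
print(f"Combined stake value under these assumptions: ~${total_bn:.0f}B")
```

Under these placeholder numbers the combined stakes land around $200 billion; the actual figure will depend entirely on the valuations at which the two companies eventually go public.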
It is also hosting workloads for several competitors via its cloud computing platform, Google Cloud. While this may seem like a conflict of interest, the reality is that Alphabet is happy to make money however it can from the AI build-out. If it can ensure that it or one of its major clients is a major winner, the massive cloud infrastructure it's putting up will pay off. One of Google Cloud's big advantages is that it can offer access to the custom AI chips that Alphabet designed in partnership with Broadcom (AVGO). Alphabet's Tensor Processing Units have become a viable alternative to GPUs from Nvidia (NVDA), and offer a more cost-effective computing solution as long as the workloads are properly configured. This is driving substantial growth: Google Cloud's revenue rose 48% year over year during Q4. This will be a segment to watch, as accelerating revenue will indicate the broad health of AI spending. In its bid to continue growing, Alphabet is investing a ton in new data center computing capacity, which is far from cheap. However, the payoff on these investments could be massive, which is why Alphabet is choosing to spend so much. If it were to sell its whole SpaceX and Anthropic stakes (which I doubt it would do), the cash freed up would be enough to fund over a year's worth of build-outs. Time will tell what Alphabet decides to do with its investments, but I think it will likely sell some shares. Overall, Alphabet is still one of the best ways to invest in AI. It has a rock-solid base business, a booming cloud computing segment, and several other bets that can pay off even if some of its AI endeavors fail. It's a no-brainer pick in one of the more difficult sectors of the market, and I think it will continue to crush the market over the next decade as it moves into its role as an AI leader and facilitator.

Pretty darn strong. Here's the company's quarterly results breakdown: In the table, we can see greater than 100% growth from the December 2024 quarter to the December 2025 period. We also see gross margin improvement in recent quarters, though Cerebras did manage better gross margin results in early 2024, when it was far smaller. Setting aside quarters distorted by accounting wiggles, Cerebras's most recent two quarters yielded incredibly impressive growth (+31% and +26%, sequentially), with net losses that are more than tolerable for a company as close to the cutting edge of AI compute as Cerebras appears to be. Comparing calendar 2024 with 2025 yields the following metrics: We can see from the numbers that both sides of Cerebras's business are doing well. People want to buy its chips, and the company is seeing quickly rising demand for use of its chips to handle inference. That's a double threat. Wait, but what about customer concentration? Looking backward, Cerebras has not done a brilliant job diversifying its customer base. Looking forward, it has. Let me explain. If we read the S-1 filing regarding 2025 results, the picture is about as bleak as it was back in 2024: A substantial portion of our revenue is driven by a limited number of customers. Group 42 Holding Ltd (together with its affiliates, "G42") accounted for 24.0% and 85.0% of our total revenue for the years ended December 31, 2025 and 2024, respectively, and in the year ended December 31, 2025, Mohamed bin Zayed University of Artificial Intelligence ("MBZUAI") accounted for 62.0% of our total revenue. While I don't want to overstate my knowledge of the inner workings of the Emirati economy, it is worth mentioning that Peng Xiao is both Group CEO of G42 and a member of the MBZUAI board of trustees. Other people also hold roles at both enterprises.
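A quick sanity check on the related-party exposure: summing the S-1 percentages quoted above (and treating G42 and MBZUAI as linked, per the board overlap just noted) shows why the headline improvement is less than it appears.

```python
# Revenue concentration figures quoted from Cerebras's S-1 above.
concentration = {
    2024: {"G42": 85.0},
    2025: {"G42": 24.0, "MBZUAI": 62.0},
}

for year, customers in concentration.items():
    total = sum(customers.values())
    print(f"{year}: {total:.1f}% of revenue from {len(customers)} related customer(s)")

# 85.0% in 2024 vs 86.0% in 2025: the headline G42 number fell, but the
# combined Abu Dhabi-linked share is effectively unchanged.
```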
So when we consider Cerebras's 2024 and 2025, we see revenue that is concentrated not merely in the MENA region, or even the UAE, but in Abu Dhabi industry itself. Looking ahead, the picture changes rapidly. In December of 2025, Cerebras signed a massive deal with OpenAI. Announced publicly in January of this year, "OpenAI and Cerebras have signed a multi-year agreement to deploy 750 megawatts of Cerebras wafer-scale systems to serve OpenAI customers." Per OpenAI, the capacity will come online in tranches. Cerebras also signed a deal in March with Amazon Web Services (AWS), which will see the cloud platform "become the first hyperscaler to deploy Cerebras systems in its data centers." The deal includes the creation of a "co-designed, disaggregated inference-serving solution that will integrate AWS Trainium3 chips with Cerebras CS-3 systems, connected via high-bandwidth networking, to partition inference workloads across Trainium3 and CS-3." Sounds great. If you want to get access to market demand, being present in AWS is a big deal. (Just ask OpenAI!) The OpenAI deal has big bones. The AWS agreement could matter, too. Cerebras notes that it has $24.6 billion worth of remaining performance obligations (RPOs), with a "significant amount of the balance [being] attributable to the Company's obligations pursuant to a master relationship agreement with OpenAI." Does this resolve the revenue concentration concerns? Partially! Deals with OpenAI and AWS certainly make Cerebras less reliant on its historically critical MENA customers. But the proof will come in its revenue diversifying in practice (results), and not merely in theory (forecasts). When will we learn more? Soon, most likely: Cerebras appears to have waited to refile to go public until both its OpenAI and AWS deals were locked in. The company didn't want a repeat of its first run at the public markets.
Thanks to its IPO refiling timing, we can expect Cerebras to provide some information about its Q1 2026 results before it prices. That means newer, fresher information on the OpenAI deal's impact on its results, if any. We'll still be staring at the very first months of the arrangement, meaning we might not see much revenue from it at this juncture. What's the real bet here? That purpose-built chips for handling AI inference become more popular over time. While the venerable GPU has a lot going for it, we're seeing major clouds build their own chips (Amazon, Google, Microsoft, Meta) for a reason. Yes, derisking from a single supplier source is a goal. But so too are chips that are more efficient at a specific AI task, not merely performant for all. The underlying bet to that wager is that demand for AI compute continues to scale. As we discussed this morning, the compute crunch is showing little progress towards loosening. How long the world will prove compute-constrained is up to your judgment, and your interest in snapping up Cerebras shares in its IPO will likely hinge on how bullish you are on future compute demand. All told, Cerebras's bet on big fucking chips is coming good, and the company has a solid shot at real revenue diversification in the coming years. Precisely how to price Cerebras we can leave to the market. But I don't think it will take too long to get Cerebras public, and its backers liquid.

Polymarket says it will launch perpetual contracts, signaling the prediction market platform's planned expansion into crypto derivatives. The announcement marks a notable shift for a company built on event-based betting markets, though key details including timing, supported trading pairs, and leverage terms remain unconfirmed. The core of the story is straightforward: Polymarket has stated it will launch perpetual contracts. The wording points to an announced plan rather than a product that is already live and available for trading. No confirmed details have emerged about a specific launch date, which assets will be supported, what leverage limits will apply, or which jurisdictions will have access. The announcement itself is the only confirmed fact at this stage. The move comes as Polymarket has been actively raising capital. CoinTelegraph reported the platform was seeking a $400 million fundraise, while The Information reported discussions at roughly a $1.5 billion valuation. A derivatives product would represent a significant revenue expansion beyond event contracts. Perpetual contracts are a type of derivatives product that lets traders speculate on the price of an asset without an expiration date. Unlike traditional futures, which settle on a fixed date, perpetuals use a funding rate mechanism to keep the contract price anchored to the spot market. They are the most traded instrument in crypto derivatives markets, generating billions in daily volume across platforms like Binance, Bybit, and dYdX. Traders use them for leveraged directional bets, hedging, and basis trading strategies. For Polymarket, which built its reputation on prediction markets tied to real-world events like elections and policy outcomes, perpetual contracts represent a fundamentally different product category. It moves the platform from event-driven binary outcomes into continuous price speculation, similar to what competitor Kalshi has pursued with its own perpetual crypto futures. 
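The funding-rate and leverage mechanics described above can be sketched in a few lines. This is a deliberately simplified model with assumed parameters: each venue uses its own margin formula and funding interval, and real exchanges add fees, tiered maintenance margins, and insurance funds.

```python
def funding_payment(position_size, mark_price, funding_rate):
    """Periodic payment that tethers a perpetual's price to spot.
    Positive rate: longs pay shorts; negative rate: shorts pay longs."""
    return position_size * mark_price * funding_rate

def liquidation_price(entry, leverage, maintenance_margin=0.005, long=True):
    """Rough isolated-margin liquidation price; ignores fees and funding."""
    if long:
        return entry * (1 - 1 / leverage + maintenance_margin)
    return entry * (1 + 1 / leverage - maintenance_margin)

# A 2x long can absorb a large drawdown; a 100x long sits ~1% from liquidation.
print(liquidation_price(100.0, 2))    # ~50.5
print(liquidation_price(100.0, 100))  # ~99.5
```

The gap between those two liquidation prices is exactly why the undisclosed leverage terms matter so much for assessing Polymarket's announcement.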
A new perpetuals venue could draw trader attention, particularly if Polymarket leverages its existing user base and brand recognition from the prediction markets space. The platform attracted significant traffic during recent election cycles, giving it a distribution advantage that pure derivatives exchanges lack. However, the actual impact depends entirely on execution details that have not been disclosed. Liquidity depth, fee structure, and the range of supported markets will determine whether the platform can compete with established derivatives venues. There is also the question of regulatory positioning. Polymarket has already faced scrutiny from U.S. regulators over its prediction market operations. Adding leveraged derivatives products could intensify that attention, particularly given the evolving regulatory landscape around crypto trading platforms in the United States. For traders, the practical takeaway is to wait for confirmed product specifications before making assumptions about the platform's competitive positioning. A perpetuals announcement without details on margin requirements, liquidation mechanics, and asset coverage is a statement of intent, not a finished product. Several critical questions remain unanswered. Until these are resolved, the significance of the announcement is difficult to assess with precision. These are not minor details. The difference between a perpetuals platform that offers 2x leverage on two assets in select jurisdictions and one that offers 100x leverage on dozens of tokens globally is enormous in terms of both market impact and regulatory risk. Are Polymarket perpetual contracts live yet? No. As of this writing, Polymarket has announced its intention to launch perpetual contracts but has not confirmed that the product is available for trading. What are perpetual contracts? Perpetual contracts are leveraged derivatives that track the price of an underlying asset without an expiration date. 
They are the most popular trading instrument in crypto derivatives markets. How is this different from Polymarket's prediction markets? Prediction markets let users bet on the outcome of specific events. Perpetual contracts allow continuous speculation on asset prices with leverage, making them a fundamentally different product category. What should traders watch for next? Key details to monitor include the launch date, supported trading pairs, leverage limits, geographic availability, and fee structure. These factors will determine whether the platform can meaningfully compete with established crypto trading venues. Disclaimer: This article is for informational purposes only and does not constitute financial or investment advice. Cryptocurrency and digital asset markets carry significant risk. Always do your own research before making decisions.

In this new adaptation, players work together to climb the iconic Shinra Tower, stacking Floor and Wall tiles of varying heights to build upward while maintaining the tower's stability. The higher you go, the more the structure begins to tilt, turning each decision into a careful balance between progress and collapse. The game reimagines Final Fantasy VII's cast through wooden pieces featuring a charming, storybook-inspired art style. Cloud, Tifa, Aerith, and other familiar faces appear alongside the enemies they encounter, giving the experience a fresh visual identity while staying rooted in the original world. Blending cooperative gameplay with a physical dexterity challenge, Ascend the Shinra Tower looks to offer a different kind of tension - one where teamwork and steady hands matter just as much as strategy.

Anthropic is reportedly preparing to roll out its powerful and controversial AI model, Mythos, to European and UK banks within the coming days, according to a Reuters report citing sources familiar with the matter. The move comes as global financial institutions scramble to secure their systems against the threats the new AI has uncovered. Until now, access to Mythos has been largely restricted to a small group of US giants, including tech companies like Google, Apple and Amazon as well as banks such as JPMorgan. This latest expansion is said to be aimed at leveling the playing field as regulators warn that the AI model could expose deep-seated vulnerabilities in the aging technology systems that power the world's banks. The rollout is part of a race to stay ahead of hackers, the report said. While JPMorgan Chase and Bank of America have been testing Mythos internally through Anthropic's "Project Glasswing" initiative, reports have said that the company is planning to offer it to UK banks as well. One source told Reuters that access could be granted to European institutions "within days," while others cautioned that security checks might extend that timeline to a few weeks. "All institutions should have access to this model to keep the playing field even and avoid it being misused," Joachim Nagel, chief of the German central bank, was quoted as saying. Mythos specializes in finding "high-severity vulnerabilities" in software. If hackers use similar AI first, they could dismantle bank security. By giving banks access to Mythos now, Anthropic wants to help them secure their systems and plug holes before they are exploited. Notably, the urgency of the situation was a major topic at last week's International Monetary Fund (IMF) spring meeting in Washington, where policymakers expressed concern over how quickly the banking sector can adapt to this new AI age.
Amazon is deepening its collaboration with the rapidly growing AI company Anthropic through a 25-billion-dollar investment and a cloud services commitment worth over 100 billion dollars. Amazon will immediately invest 5 billion dollars in Anthropic and has committed to investing an additional 20 billion dollars in the company if predefined commercial and technical targets are met. The new funding comes on top of two previous 4-billion-dollar rounds in 2023 and 2024, bringing Amazon's total potential investment from 8 billion to 33 billion dollars - if all targets are achieved. In return, Anthropic commits to using Amazon's cloud services and, in particular, the company's custom Trainium chips to train and run its AI models over the next ten years. As part of the capacity agreement, Anthropic secures up to 5 gigawatts of current and future chip capacity - a scale that the companies compare to the electricity production of five large nuclear power plants. Over the past six months, Anthropic has faced what is often called a positive problem, as the popularity of its AI solutions, especially among enterprise users, has exploded thanks to the growing success of Claude Code. Anthropic has been forced to tighten usage limits even for paying customers to ensure its server capacity can keep up with demand. As part of the new arrangement, the entire Claude platform will be integrated even more tightly into Amazon's cloud. Anthropic's models will henceforth be available directly through the AWS portal, allowing enterprise customers to use Claude's tools with their existing AWS credentials, billing arrangements, and security monitoring. The agreement between Amazon and Anthropic reflects the broader competitive landscape among major cloud providers. Amazon recently agreed to invest up to 50 billion dollars in OpenAI, a rival to Anthropic - and at the same time concluded a 100-billion-dollar cloud deal with the company. 
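The headline figures in the deal reduce to simple sums; here is a quick sanity check of the arithmetic reported above (a sketch of the reported numbers, not Amazon's own accounting):

```python
# Reported figures, in billions of US dollars (from the article above)
prior_rounds = [4, 4]   # two earlier investment rounds, 2023 and 2024
immediate = 5           # invested immediately under the new agreement
conditional = 20        # committed if commercial and technical targets are met

new_commitment = immediate + conditional        # the "25-billion-dollar investment"
total_potential = sum(prior_rounds) + new_commitment

print(new_commitment, total_potential)  # 25 33
```

This matches the article's range: 8 billion dollars invested to date, rising to a potential 33 billion if all targets are achieved.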
Microsoft has also recently followed much the same playbook as Amazon. For a long time, Microsoft mainly partnered with OpenAI, but since then it has also invested over 5 billion dollars in Anthropic and offers the company's AI solutions on its own cloud platform, Azure.

Unlock the full potential of Discord to grow your digital presence. Learn strategies to build communities, improve engagement, and streamline workflows for better efficiency.

In the constantly growing world of online communication, Discord has become more than a mere chat program: it has turned into sophisticated infrastructure for communities, businesses and developers. The platform's potential is vast, whether you are scaling a huge server, running professional support systems, or looking to create a distinctive digital identity. To succeed in this ecosystem, you need to know how to use its tools to their fullest extent. Staying engaged while expanding a user base is a complicated problem for community leaders and developers. When handling thousands of users, manual control is no longer an option. Here, automation can change things. By combining well-built APIs with bespoke systems, community managers can develop a smooth onboarding experience and offer real-time support. The Discord token ecosystem has become crucial to the operation of large-scale servers. These tokens make it easy to automate systems that handle everything from welcome messages to complex task management, so that professional teams can channel their energy into content strategy rather than draining administrative tasks. For developers and agencies constructing high-volume operations, the option to buy Discord tokens provides the basis for complex, multi-layered automation environments, with stability and uptime guaranteed for their professional servers. In the digital era, you are what you make of yourself. In Discord's competitive guilds, Web3 projects, and professional networking spaces, legitimacy generally rests on the history and quality of your profile. Positive participation and special badges build a legacy of trust and authority around a profile.
This is why most community leaders focus on acquiring established accounts with a track record. The economics of digital identity is a growing field. Collectors and professional users know that particular badges and profile histories can raise their status, granting access to exclusive tiers and community privileges that are otherwise hard to achieve. To create an immediate professional presence, whether in QA testing, community leadership, or competitive gaming, a common choice is to opt for cheap Discord accounts. This lets individuals and businesses get around the new-user hurdle, move straight into influential positions, and achieve their community objectives at a rapid pace. As Discord implements further innovations, including more rigorous age verification processes and biometric checks, the space is becoming safer and more reliable for everyone. These security enhancements reflect the organisation's commitment to creating a professional, sustainable platform. By aligning their workflows with these standards, developers and community managers are not only securing their servers; they are making the wider digital ecosystem healthier. Professional operations succeed when they strike a balance between anonymity and authenticity. With trustworthy account assets, community managers can maintain a professional distance, keep their team's personal information safe, and ensure an active presence at all times. This is the future of Discord: a home where bots and humans coexist in harmony, backed by strong security protocols against information theft. The secret to success is using the appropriate resources, whether you are a user with a fresh account or a professional developer maintaining a huge network.
The Discord marketplace offers a wide variety of tools, covering both high-level account assets and token-based automation. With the right choice of assets, including high-quality, aged, or verified accounts that are secure and effective, you can build a foundation that is profitable and safe at the same time. Integrating these tools into your workflow keeps you a step ahead of the curve, able to respond to community needs in real time and manage your digital assets with the professionalism contemporary standards require. Going forward, those who learn to use Discord's advanced ecosystem will lead the next generation of online communities. With innovation, a security-first mindset, and the right foundational assets, you are not merely participating in a social platform; you are creating a professional digital future.

Experts point to management failures behind ongoing crisis

Despite official assurances of adequate fuel stocks, underpinned by Bangladesh Petroleum Corporation (BPC) data, long queues and intermittent supply disruptions continued at filling stations across the country yesterday. While analysts and experts have proposed measures such as an odd-even rationing system and digital tracking to manage demand and ease pressure on pumps, the proposals remain sidelined, leaving motorists to endure hours-long waits and sporadic "no fuel" notices. In response to the strain, the BPC has announced a 10-20% increase in the supply of diesel, petrol and octane, with 13,048 tonnes of diesel, 1,422 tonnes of octane and 1,511 tonnes of petrol being distributed daily through three state-run marketing companies. However, the retail situation has yet to stabilise. On the ground, the supply boost has not fully translated into availability at pumps. While waiting times have eased slightly in parts of Dhaka and Chattogram, motorists across much of the country continue to face delays and uncertainty.

Imports and stock data show no shortage

According to port and BPC sources, between 28 February and 21 April, 823,170 tonnes of fuel arrived at Chattogram port in 26 shipments. Of this, 624,452 tonnes came as diesel in 16 vessels, 124,087 tonnes of furnace oil in six, 53,364 tonnes of octane in two, and 21,266 tonnes of jet fuel in two. A Singapore-flagged vessel, Hafnia Cheeta, carrying 32,000 tonnes of diesel from Malaysia, docked yesterday around noon. Based on an average daily demand of 12,500 tonnes, diesel imports over 53 days could meet around 50 days of demand. With a 12-day opening stock in early March, total availability should have covered about 65 days, indicating no supply shortage. For octane, the country had an 18-day stock at the start of March. Imports of 53,364 tonnes, against a daily demand of 1,200 tonnes, add 45 days of supply.
Local refineries produce around 700 tonnes daily, adding roughly 37,000 tonnes or 30 days' supply. Combined, availability reaches about 93 days. Despite these figures, retail-level disruptions have continued.

Mismanagement, panic and weak oversight

The strain began between 28 February and 6 March, when over 175,000 tonnes of fuel were sold in just seven days - more than double normal demand - rapidly depleting reserves. In response, authorities introduced rationing measures, after which long queues formed across fuel stations nationwide. Many motorists were forced to wait for hours and often returned without fuel. According to Bangladesh Petroleum Corporation (BPC) and port sources, 26 vessels carrying 823,170 tonnes of fuel arrived at Chattogram between 28 February and 21 April. Of this, 624,452 tonnes were diesel, alongside furnace oil, octane and jet fuel shipments. BPC data show that, in theory, the combined stock and imports were sufficient to meet demand for extended periods. Despite this, retail disruptions persisted, with officials announcing a 10-20% increase in daily fuel distribution to ease shortages. Yet filling stations continued to report uneven supply, shortened operating hours and "no fuel" notices. Analysts attribute the crisis to distribution failures rather than supply shortages. They cite irregular withdrawals in early March, panic buying triggered by expectations of price hikes, and weak monitoring across depots and stations as key factors. Some fuel was reportedly hoarded, while portions may have been smuggled due to price gaps with neighbouring countries. Former Eastern Refinery general manager Monjare Khorshed Alam said early excess demand was not contained. "If the excessive fuel supply during the first week had been controlled, the crisis would not have become so severe," he said, adding that expectations of price hikes encouraged stockpiling.
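The days-of-supply figures cited above follow from straightforward arithmetic; here is a quick sketch using the article's own numbers (the 53-day window and the rounding conventions are as reported):

```python
import math

# Figures from the article (tonnes and tonnes per day)
diesel_imported, diesel_daily_demand = 624_452, 12_500
octane_imported, octane_daily_demand = 53_364, 1_200
refinery_daily, window_days = 700, 53  # local octane refining, 28 Feb to 21 Apr

diesel_days = diesel_imported / diesel_daily_demand          # ~50 days of demand
octane_import_days = octane_imported / octane_daily_demand   # ~44.5, reported as 45
refinery_days = refinery_daily * window_days / octane_daily_demand  # ~31, "30 days' supply"

# Opening octane stock (18 days) + imports (45) + local refining (~30) = ~93 days
octane_total_days = 18 + 45 + 30

print(round(diesel_days), math.ceil(octane_import_days), octane_total_days)  # 50 45 93
```

The numbers reconcile with the reported totals, which is the analysts' point: on paper there was no shortage, so the disruptions must have come from distribution rather than supply.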
Energy expert Professor M Tamim pointed to gaps in monitoring and the absence of tracking systems, which allowed irregularities in distribution. He also criticised early signals of price increases, saying they intensified hoarding behaviour. Experts suggest that tools such as app-based fuel tracking and odd-even number plate rationing could have helped stabilise supply and reduce congestion at pumps.

Anduril Industries and Kraken Technology Group have announced a partnership to develop and manufacture unmanned surface vessels for the United States and allied navies, the companies stated. The agreement, unveiled at the Sea-Air-Space exposition in Washington, is intended to support the U.S. Navy's transition towards a more distributed "hybrid fleet" combining crewed ships with autonomous systems. Under the partnership, the two firms will jointly develop and produce a family of small, high-speed unmanned vessels, including the K7 SABRE and K5 KRAKEN platforms. These systems will be manufactured and integrated in the United States under licence, alongside modular payloads designed for a range of missions. The companies said the collaboration is aimed at delivering scalable and rapidly deployable capabilities, with a focus on interoperability across U.S. forces and NATO partners. Mal Crease, Founder and CEO of Kraken Technology Group, said: "This partnership reflects Kraken's commitment to supporting global maritime challenges with hardened operational capabilities at a critical point in history." He added: "Under this agreement Kraken will deliver low-cost, scalable and modular systems that are both reliable and effective." Cory Emmons, General Manager of Surface Dominance at Anduril, said the agreement would expand the company's existing portfolio of autonomous maritime systems. "Kraken is known for their proven, battle-tested platforms. This partnership expands Anduril's family of autonomous surface offerings with small boats carrying mission payloads, adding a complementary capability to larger ASVs and the legacy fleet." Kraken, a UK-based company, has been expanding its presence in the United States in recent years. The announcement follows a $49 million award from U.S. Special Operations Command and additional international contracts, positioning the firm within a growing market for autonomous maritime systems.

The Vercel Breach Started With A Roblox Cheat. It Ended With The Entire AI-Security Thesis.

On a random day in February 2026, an employee at a small AI startup called Context.ai went looking for something on the internet. They were not trying to steal credentials or pivot into a billion-dollar cloud company. They were trying to cheat at Roblox. Specifically, according to Hudson Rock researchers who reverse-engineered the victim's browser history, the employee was searching for and downloading "auto-farm" scripts and game exploit executors, the kind of tool that automates grinding inside an online game. Hidden in one of those downloads was Lumma Stealer, one of the most widely distributed pieces of infostealer malware currently in circulation. What Lumma Stealer does is simple. It waits on the infected machine and quietly exfiltrates every credential the user's browser has ever saved. Google Workspace logins. API keys. Session cookies. OAuth tokens. It does not care which of those belong to a game account and which belong to a company email. It harvests everything and ships it to a criminal marketplace, where it sits until someone figures out what it is worth. For two months, those credentials sat in a database. Then someone noticed the email address belonged to a core engineer at Context.ai, a company that builds AI "Office Suite" agents on top of enterprise Google Workspace accounts. On April 19, 2026, Vercel confirmed that an attacker had used those credentials to breach Context.ai, steal the OAuth tokens of its customers, and pivot into the Google Workspace of a Vercel employee who had signed up for Context.ai's product and granted it "Allow All" permissions on their enterprise account. From there, the attacker moved into Vercel's internal systems and lifted customer environment variables that had not been flagged as sensitive. A threat actor then listed what they claimed was Vercel's internal database for sale on BreachForums at $2 million. One employee.
One bad download. Two months later, a $2 million ransom listing against one of the most important cloud development platforms on the internet. This is what an AI supply-chain attack actually looks like. And this is why intelligence alone is no longer a moat. The part of this story that matters for enterprise software is not the malware. Infostealers have been around for years. The part that matters is the OAuth grant. Here is what happened in plain language. A Vercel employee wanted to try a promising new AI tool. They found Context.ai's "AI Office Suite," clicked the sign-up button with their work Google account, and when the permissions screen asked them to grant the tool access to their files and email, they clicked allow. The permissions box, as configured by Context.ai, requested broad read access to the user's entire Google Workspace environment, including Drive. The employee did what most employees do. They did not read the box. They clicked through. Months later, when the attacker took over Context.ai's infrastructure, that single OAuth grant became the bridge. The attacker did not need to hack Vercel. They needed to hack the AI startup whose software a Vercel employee had already given the keys to. Vercel's own post-incident language is worth reading: "The incident originated from a small, third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting its hundreds of users across many organizations." Vercel has now rotated environment variables and changed the default setting so that new variables are marked "sensitive" by default. They are, in effect, assuming that the employees at their partner companies will continue to click through OAuth consent screens without reading them, because that is what employees do, and the only way to stop the bleeding is to stop trusting the upstream. The real story is not that Context.ai was sloppy. 
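The failure mode described above, an employee approving an over-broad consent screen, is exactly what OAuth scope allowlisting is meant to catch. Below is a minimal illustrative sketch in Python: the scope URLs are real Google OAuth scope identifiers, but the allowlist policy and the helper function are hypothetical, not Vercel's or Google's actual control:

```python
# Minimal sketch of an OAuth scope review: flag third-party apps whose
# requested scopes exceed what the organization has pre-approved.
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/drive.file",  # per-file access only
}

def excessive_scopes(requested: set[str]) -> set[str]:
    """Return the requested scopes that fall outside the allowlist."""
    return requested - ALLOWED_SCOPES

# What an "Allow All"-style grant might request (illustrative):
requested = {
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/drive",       # full Drive access
    "https://mail.google.com/",                    # full Gmail access
}

flagged = excessive_scopes(requested)
print(sorted(flagged))
```

The design point is that the check runs at grant time, in the admin console or an OAuth gateway, so a broad grant like full Drive or Gmail access is escalated to a reviewer instead of being left to the employee's click-through.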
It is that the enterprise AI era has a trust problem that nobody priced in. For most of the last year, the loudest voices in the market have been declaring that SaaS is dying. AI agents will replace the apps. Workflows will collapse. Software budgets will rotate from seat-based licenses to AI compute. The Vercel breach is the data point that argues back. Here is the underlying claim the Vercel timeline makes: That is why Microsoft and Google Workspace are integrating AI directly into the products enterprises already trust, rather than letting a thousand AI startups build OAuth wrappers around the same data. It is why Oracle, whose revenue growth has surprised the market repeatedly over the last two quarters, keeps selling. A Fortune 500 company will always choose a legacy secure wall over a clever open door. The most valuable incumbents are the ones that already own the identity layer. That is what just got reconfirmed. For a pure-play on this thesis, the most obvious beneficiary is the cybersecurity stack that enterprises will now have to put between themselves and every AI tool their employees want to use. Palo Alto Networks is the clearest example. In the company's FY2025 results released August 18: In May 2025, Palo Alto acquired Protect AI and rolled its technology into a new platform called Prisma AIRS, designed specifically to scan AI models, monitor runtime behavior, manage AI agent identities, and govern the exact kind of third-party OAuth grant that caused the Vercel incident. In other words, they built a product for the attack pattern that was happening before they had a name for it. The logic is direct. Every new AI deployment inside a regulated enterprise now has to pass through a security review. Every security review needs a platform that can govern identity, runtime, and data flow across hundreds of third-party AI tools. 
A fragmented security stack of point solutions cannot do this, because the Vercel attack moved laterally across systems in a way a single-point tool would have missed. A platform that treats AI security as one problem, rather than twelve tools duct-taped together, is what the next decade of enterprise AI has to run on top of. Palo Alto is not the only company that will benefit. Microsoft benefits. CrowdStrike benefits. CyberArk, which Palo Alto has announced plans to acquire for identity security, benefits. Anyone who owns a piece of the identity or runtime security layer in the enterprise AI stack benefits. The Vercel breach is the starting gun, not the finish line. There is a useful thing to do with an event like this. Separate the part that is a story from the part that is a thesis. The story is that one bored engineer downloading a Roblox cheat script in February 2026 cost one of the most important cloud platforms on the internet enough data to attract a $2 million ransom demand. That is a great story. It will get retold at security conferences for a decade. The thesis is that every enterprise AI deployment now has to pay a security tax, and most of the market has not priced it in. OAuth grants persist. Infostealers are cheap. The list of third-party AI tools that any enterprise has accumulated in the last 18 months is long, loosely governed, and written in a language most CISOs cannot fully audit. The companies that will survive this era are not the ones with the smartest AI. They are the ones the enterprise already trusts enough to let inside the wall. For everyone else, the price of admission just went up.

US President Donald Trump said on Tuesday Anthropic was "shaping up" in the eyes of his administration, opening the door for the AI company to reverse its blacklisting at the Pentagon. Trump directed the government in February to stop working with Anthropic. The Pentagon followed up by declaring the firm a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown over guardrails for how the military could use its AI tools. The company disputes that characterisation and filed suit against the Defense Department in March over the determination. Anthropic CEO Dario Amodei met with White House officials last week to attempt to repair the relationship. The White House called the meeting productive and constructive. "They came to the White House a few days ago, and we had some very good talks with them," Trump told CNBC's "Squawk Box" on Tuesday. "And I think they're shaping up. They're very smart, and I think they can be of great use. I like smart people ... I think we'll get along with them just fine." When asked if a deal was on the horizon with the Pentagon, Trump said, "It's possible. We want the smartest people." Anthropic, asked for comment, referred to its Friday statement describing its White House meeting as productive and focused on how the two "can work together on key shared priorities such as cybersecurity, America's lead in the AI race, and AI safety." The apparent rapprochement comes weeks after Anthropic unveiled Mythos, its most advanced AI tool, with a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts have said. Anthropic has said Claude Mythos Preview will not be made generally available. 
Instead, the company announced Project Glasswing, in which it invited major tech companies, cybersecurity vendors and US bank JPMorgan Chase, along with several dozen other organizations, to privately evaluate the model and prepare defenses accordingly. Anthropic cofounder Jack Clark said last week the firm was discussing its frontier AI model Mythos with the Trump administration without providing details.
Anthropic is moving closer to restoring ties with the U.S. Department of Defense after President Donald Trump said the artificial intelligence company was improving its standing with his administration, raising the prospect that the Pentagon could revisit its ban. Trump's comments followed a meeting between Anthropic CEO Dario Amodei and White House officials to discuss collaboration and guardrails around advanced AI systems. Anthropic said the discussion centered on how the two "can work together on key shared priorities such as cybersecurity, America's lead in the AI race and AI safety." Reuters reported that Trump told CNBC's "Squawk Box" he believed the company was "shaping up" and suggested an agreement with the Pentagon could be possible. He added, "It's possible. We want the smartest people." Anthropic has pushed back on the Pentagon's designation of the company as a supply-chain risk and sued the Defense Department in March over it. In the background, Anthropic has been drawing attention for its new AI model Mythos, which it has described as its "most advanced model" and which experts have said could spot software security flaws and map out ways to exploit them. The company has said the Claude Mythos Preview won't be broadly released and instead is being tested through Project Glasswing with selected partners, including JPMorgan Chase, Apple, Google and Microsoft, among others. Anthropic co-founder Jack Clark said last week the company was in discussions with the Trump administration about Mythos, without sharing further details.

There appears to be little hope for the timely completion of the four-lane railway overbridge (ROB) near Kakka Kandiala village at the A-25 railway line crossing in Tarn Taran. This comes despite earlier claims by Cabinet Minister Harbhajan Singh ETO that the project would be completed ahead of the prescribed deadline. As a result, residents of the area are likely to face traffic-related hardships for several more months. Commuters have already been dealing with significant inconvenience for over two years. With traffic closed on the Kakka Kandiala road, vehicles are being diverted through the roads of Tarn Taran town, leading to frequent congestion since the beginning of the ROB construction. The foundation stone for the project was laid in January 2024, with an initial completion deadline set for June 2026 at an estimated cost of Rs 82.88 crore. During a site visit in May 2025, Minister Harbhajan Singh ETO announced that the project would instead be completed six months earlier, that is, by December 2025. The minister had also claimed that early completion would reduce the project cost to Rs 70 crore, resulting in savings of Rs 12.88 crore from the initially projected Rs 82.88 crore. However, Simranjit Singh, Executive Engineer (XEN), PWD (B&R), Amritsar, stated that the work is now expected to take another five to six months. Even at the current pace observed at the site, this timeline appears difficult to achieve. The officer maintained that work is progressing on a war footing but acknowledged challenges. He explained that the project involves coordination not only with the state government but also with the Ministry of Railways, which has caused procedural delays. Additionally, issues related to funding have contributed to the slow progress. Due to the closure of the Kakka Kandiala road, both heavy and light traffic has been rerouted through Tarn Taran town, where roads remain frequently congested. Traffic jams have become a routine issue for residents. 
Dr Sukhdev Singh Lauhuka, a former councillor and social worker, has urged the district administration to closely monitor the project and ensure its completion by June, as originally promised at the time of its inauguration. He added that the slow pace of construction is causing inconvenience not only to residents of Tarn Taran but also to commuters travelling by buses to the holy city.
Neutron delay to Q4 FY26 extends cash burn, while early launches are expected to carry negative margins before long-term scale economics improve. As capital floods into the space sector ahead of a potential SpaceX IPO, attention has shifted sharply, almost aggressively, toward the next big thing. Since my last coverage, Rocket Lab's (RKLB) stock performance has been muted.

Artificial intelligence company Anthropic has agreed to commit more than $100 billion to Amazon's AWS cloud platform over the next 10 years to train and run its Claude chatbot. Amazon will invest $5 billion immediately as part of the new agreement announced this week by the companies, and up to another $20 billion in the future. Amazon previously invested $8 billion in Anthropic. The partnership will allow Anthropic to secure up to 5 gigawatts of Amazon's Trainium chips to train and power its artificial intelligence models. "Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it's in such hot demand," said Amazon CEO Andy Jassy. Amazon said AWS customers will be able to access the full Anthropic-native Claude console from within the AWS cloud platform. Earlier this year, privately-held Anthropic said its valuation grew to $380 billion, positioning itself alongside rivals OpenAI and Elon Musk's rocket maker SpaceX, which recently merged with his AI startup xAI, maker of the chatbot Grok. Renaissance Capital, which researches the potential for initial public offerings, counts Anthropic as third among the most valuable private firms, behind SpaceX and ChatGPT maker OpenAI, valued at $500 billion. Anthropic and Amazon have partnered since 2023 to accelerate generative AI adoption for customers to build, deploy, and scale AI applications. Amazon says 100,000 customers run Anthropic Claude models on AWS. In February, the Trump administration ordered all U.S. agencies to stop using Anthropic's artificial intelligence technology and imposed other major penalties after the company refused to allow the U.S. military unrestricted use of its AI technology. In an unusually public clash between the government and the company, President Donald Trump, Defense Secretary Pete Hegseth and other officials took to social media to chastise Anthropic, accusing it of endangering national security. 
Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used in ways that would violate its safeguards. Anthropic said it would challenge what it called an unprecedented and legally unsound action "never before publicly applied to an American company." Earlier this month, a federal appeals court refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic in a decision that differed from the conclusions reached in another judge's ruling on the same issues. Anthropic is not yet profitable but said in February that it's on track for sales of $14 billion over the next year. Anthropic was founded by ex-OpenAI employees in 2021 and released its first version of Claude in 2023, following OpenAI's ChatGPT debut in late 2022.
