The latest news and updates from companies in the WLTH portfolio.
Pokémon Mega Evolution Chaos Rising has moved into a narrow retail window where availability and affordability are pulling in opposite directions. Some products tied to the set have surfaced at prices above MSRP, yet still below the harsh resale levels that often follow a major Pokémon TCG release. That tension is the story: when stock is thin, even elevated prices can look acceptable. The first retail signals suggest buyers may have only a short time to decide whether to act now or gamble on restocks that may never fully stabilize.

Why the Limited-Stock Window Matters

The immediate issue is not simply that Pokémon Mega Evolution Chaos Rising exists in stores. It is that the set is appearing in limited quantities, and the pricing is already inconsistent. The set has surfaced at Walmart and Miniature Market, with the most favorable listing so far tied to the Booster Bundle at Miniature Market. Walmart is carrying the same product at roughly $30 more, a spread that is unusually wide for the same release and a clear sign of how fast the market can re-price a hot product. That kind of gap matters because scarcity changes behavior. When collectors think a product may disappear quickly, normal resistance to higher prices weakens. In practical terms, Pokémon Mega Evolution Chaos Rising is already being treated as a scarce release, and that perception can be as powerful as the actual inventory count.

What the Current Pricing Reveals

The retail numbers tell their own story. The Elite Trainer Box is listed at $149.99, a level above standard retail expectations, but one that has become familiar once pre-order allocations dry up. The Walmart Booster Box is priced at $269.99 for 36 packs, which works out to about $7.50 per pack. For some buyers, that is too steep. For others, especially those opening larger quantities, it may still feel more manageable than waiting for aftermarket pressure to rise further.
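The per-pack figure above is straight division over the listed Booster Box price; a quick check of the arithmetic, using only the prices cited in this piece:

```python
# Per-pack cost of the Walmart Booster Box listing cited above.
booster_box_price = 269.99  # USD, as listed at Walmart
packs_per_box = 36

per_pack = booster_box_price / packs_per_box
print(f"${per_pack:.2f} per pack")  # prints "$7.50 per pack"
```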
That is the central contradiction inside Pokémon Mega Evolution Chaos Rising: the set is expensive, but the fear of missing out can make expensive feel reasonable. The market is not just reacting to a product; it is reacting to the possibility that the product may not stay available long enough for a better buying opportunity to emerge.

Pokémon Mega Evolution Chaos Rising and the Scarcity Signal

The release is part of the Pokémon TCG's ongoing Mega Evolution series and is set against the backdrop of Lumiose City, with Mega Floette ex positioned as the marquee card. That detail helps explain why interest has been strong since the first product images surfaced. This is not being treated as a routine release cycle. It is part of a broader push toward Mega Evolution Pokémon ex cards that has already built momentum across earlier sets in the series. What makes Pokémon Mega Evolution Chaos Rising more volatile is how quickly available inventory can disappear. The Miniature Market Booster Bundle has been flagged as limited stock, and the same page has also been noted as a place where the Elite Trainer Box could briefly restock. That does not guarantee more supply. It only underscores how unstable the inventory picture remains.

What Buyers Need to Watch Next

There is an important difference between a product being listed and a product being realistically obtainable. The 3-Pack Blister at Walmart appears to be the Charmeleon variant based on what is visible in other retail channels, but the listing itself does not confirm that detail. In a market this tight, even small uncertainties can influence purchasing decisions. That is why Pokémon Mega Evolution Chaos Rising is being watched so closely: the release is not only about price, but about timing, clarity, and access. Buyers considering multiple products are making decisions with incomplete information, which adds another layer of caution to an already compressed buying window.
Regional and Market Impact

The broader consequence extends beyond one set. When a release like Pokémon Mega Evolution Chaos Rising moves quickly into limited-stock territory, it reinforces the pattern that new Pokémon TCG products rarely stay available for long. That can push buyers toward faster decisions, wider retailer comparisons, and greater tolerance for prices above standard levels. For the market, the lesson is simple: scarcity is now part of the pricing model. For collectors, the question is harder: if the set remains volatile and inventory keeps shifting, does waiting improve the odds of a better buy, or simply increase the chance of paying more later?

China's President Xi Jinping warned against a return to the "law of the jungle" in international relations and called for closer economic ties with Spain as he met Prime Minister Pedro Sanchez on Tuesday (April 14, 2026), Chinese state media said. The meeting in Beijing came on the second day of Mr. Sanchez's visit as he seeks to position Spain as a bridge between China and the European Union, whose relations with the United States are under strain.
Anthropic has rolled out a research preview of routines for Claude Code, allowing developers to configure AI-powered automations that run on schedules, respond to API calls, or trigger from GitHub events -- all without keeping a laptop open or managing custom infrastructure. The feature addresses a friction point that's plagued developer automation: managing cron jobs, servers, and tooling just to get an AI assistant to handle repetitive tasks. Routines bundle prompts, repository access, and connectors into packages that execute on Anthropic's web infrastructure. Scheduled routines work on hourly, nightly, or weekly cadences. A team could configure one to pull the top bug from Linear at 2am, attempt a fix, and open a draft PR before anyone wakes up. Developers already using the /schedule command in Claude Code's CLI will find those tasks automatically converted to scheduled routines. API routines get their own endpoints and auth tokens. POST a message, get back a session URL. This opens integration possibilities with alerting systems, deployment pipelines, and internal tools. One use case already emerging: Datadog alerts trigger a routine that pulls traces, correlates them with recent deployments, and drafts a fix before on-call engineers even open the page. Webhook routines -- starting with GitHub -- subscribe to repository events. Claude spins up a session for each matching PR and continues monitoring for comments and CI failures. Teams are using this for automated code review against custom checklists and even cross-language library ports, where a Python SDK change automatically generates a matching Go SDK PR. Beta users have gravitated toward a few workflows. Nightly backlog triage labels and assigns new issues, then posts summaries to Slack. Weekly docs-drift scans catch documentation that references changed APIs and open update PRs automatically. Deploy verification routines run smoke checks and scan error logs, posting go/no-go decisions to release channels. 
The feedback-to-fix pipeline stands out: a docs feedback widget sends reports directly to a routine, which opens a session with the relevant repo and drafts changes without human intervention on the initial pass. Routines are live now for Pro, Max, Team, and Enterprise subscribers with Claude Code on the web enabled. The daily caps: Pro gets 5 routine runs, Max gets 15, and Team/Enterprise users get 25. Additional runs consume extra usage allocation. All routine execution draws from standard subscription limits, same as interactive sessions. Anthropic says webhook triggers will expand beyond GitHub to additional event sources, though no timeline was provided. For developers tired of maintaining automation infrastructure, the pitch is straightforward: define the task once, let it run in the cloud.
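The "POST a message, get back a session URL" flow described above is the hook that alerting systems and internal tools would use. Purely as illustration, here is a minimal sketch of what such a trigger could look like; the endpoint path, payload shape, and token format are all assumptions for demonstration, since the article does not document the actual API:

```python
# Illustrative sketch only: the endpoint, payload shape, and header below
# are invented for demonstration, not Anthropic's documented routines API.
import json
import urllib.request

ROUTINE_URL = "https://example.invalid/routines/triage-alert"  # hypothetical endpoint
ROUTINE_TOKEN = "rt-..."  # hypothetical per-routine auth token

def trigger_routine(message: str) -> urllib.request.Request:
    """Build the POST request an alerting system might send to an API routine."""
    body = json.dumps({"message": message}).encode()
    return urllib.request.Request(
        ROUTINE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {ROUTINE_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# e.g., fired from a Datadog webhook; the response would carry a session URL
req = trigger_routine("Datadog alert: p99 latency spike on checkout service")
print(req.get_method(), req.full_url)
```

The design point is that the caller only sends context; pulling traces, correlating deployments, and drafting the fix all happen inside the routine's hosted session.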

Anthropic's Claude Code has a new repeatable routines feature that works even when your Mac is offline. Claude Code routines are automations that you schedule and repeat. They run on Claude Code's web infrastructure, so your Mac doesn't need to be online for each task. Anthropic says the new feature arrives today as a research preview. Developers already use Claude Code to automate the software development cycle, but until now, they've managed cron jobs, infrastructure, and additional tooling like MCP servers themselves. Routines ship with access to your repos and your connectors, so you can package up automations and set them to run on a schedule or trigger. Example use cases include scheduled tasks, API workflows, and GitHub routines. Claude Code routines are available for Pro, Max, Team, and Enterprise users with these limits: Pro gets 5 routine runs per day, Max gets 15, and Team and Enterprise users get 25, with additional runs drawing on extra usage allocation. Learn more about Claude Code routines here. Separately, Anthropic gave developers another new way to use Claude Code last month with auto mode. Meanwhile, Claude Cowork recently graduated from research preview with the addition of new enterprise features.

Starlink now provides high-speed internet to 9 million customers in 155 countries. SpaceX successfully launched the Starlink Group 10-24 mission from Cape Canaveral, Florida. According to the latest SpaceX update, the flight used the veteran booster B1080. This was the 26th time the booster flew to space and returned safely. After pushing the satellites toward their orbit, the booster landed on a drone ship in the Atlantic Ocean. The rocket carried 29 V2 Mini satellites weighing a total of about 17 tons. These satellites are built to strengthen the Starlink network by providing faster internet and lower latency for users on the ground. The mission is notable because it pushed the number of satellites launched by SpaceX in 2026 past 1,000. The new additions will join roughly 10,000 Starlink satellites already in space, a constellation that helps people stay connected in the most remote parts of the world. The pace of these launches, more than 1,000 satellites in four months, shows how central orbital infrastructure has become and lays the groundwork for future projects. SpaceX has completed more than 300 missions for the Starlink network. By landing and reusing its boosters nearly 600 times, the company has made space launch far more affordable. Today, the Starlink system serves 9 million customers across 155 different countries. The weather for this mission was perfect, which allowed the team to keep to its busy schedule of more than 120 launches per year. As more satellites go up, the goal of high-quality internet for everyone on the planet moves closer to reality.

Amazon announced Tuesday an agreement to acquire satellite company Globalstar for $11.57 billion to expand its satellite business, as it aims to compete with SpaceX's Starlink.

Crucial Quote

"What we're seeing is a setup for the battle of the Titans between SpaceX and Amazon," space analyst Chris Quilty told Forbes. "SpaceX was interested in acquiring the same spectrum and that probably accounts for the premium price that we're seeing being paid."

Key Background

Globalstar currently has a market cap of around $10 billion and reportedly had early talks with SpaceX, which was interested in purchasing the company's spectrum -- the airwaves that provide cell service directly to devices rather than through cell towers or antennas. Amazon Leo initially launched in 2019 under the name Project Kuiper, with Bezos recruiting Rajeev Badyal, a former SpaceX executive fired by Elon Musk in 2018, to lead it. Development proceeded slowly for several years due to rocket shortages, manufacturing disruption and launch failures, despite Amazon investing billions in satellite manufacturing facilities and launch contracts. The program received its operational license from the Federal Communications Commission in July 2020 and achieved its first orbital milestone in October 2023 with the successful deployment of two prototype satellites. Large-scale deployment began in April 2025, and by November that year Amazon retired the Project Kuiper codename in favor of Amazon Leo, marking the program's formal transition from a research and development initiative to a commercial enterprise.

Big Number

$10 billion. That's the amount Amazon committed to building Amazon Leo in 2020. The acquisition of Globalstar marks Amazon's biggest bet on scaling its satellite business and is the tech giant's second-largest acquisition to date, behind only its $13.7 billion purchase of Whole Foods in 2017.
Tangent

The deal is the latest chapter in what has become one of the defining business rivalries of the 21st century: the billionaire space race. The race's principal actors have been Elon Musk's SpaceX, which seeks to colonize Mars and provide global satellite internet via Starlink; Jeff Bezos's Blue Origin and Amazon's satellite program; and Richard Branson's Virgin Group, through Virgin Galactic and the now-cancelled Virgin Orbit. Other tech billionaires, including Robinhood founder Baiju Bhatt, former Google head and current Relativity Space CEO Eric Schmidt, and crypto billionaire Jed McCaleb, have made bets on the space economy in recent years. For most of the past decade, Musk has lapped the field. SpaceX, which dramatically reduced the cost of rocket launches, has completed more than 600 launches of its spacecraft since its first successful launch in 2008, while Blue Origin only achieved its first successful orbital launch in January 2025. The asymmetry has been even more pronounced in the satellite internet battle: SpaceX's Starlink already serves more than 10 million customers across more than 100 countries, while Amazon Leo is not set to launch commercially until mid-2026. The World Economic Forum estimates the space economy will grow from more than $600 billion last year to over $1.8 trillion by 2035, with the primary driver being the commercial sector -- including satellites, space tourism and data centers -- which has represented around 80% of the industry in recent years.

Anthropic announced a new platform last week, Claude Managed Agents, aiming to cut out the more complex parts of AI agent deployment for enterprises; it competes with existing orchestration frameworks. Claude Managed Agents also represents an architectural shift: enterprises, already burdened with orchestrating an increasing number of agents, can now choose to embed the orchestration logic in the AI model layer. While this comes with potential advantages, such as speed (Anthropic says its customers can deploy agents in days instead of weeks or months), it also turns more control over the enterprise's AI agent deployments and operations to the model provider -- in this case, Anthropic -- potentially resulting in greater lock-in for the enterprise customer, leaving them more subject to Anthropic's terms, conditions, and any subsequent platform changes. But maybe that is worth it for your enterprise, as Anthropic further claims that its platform "handles the complexity" by letting users define agent tasks, tools and guardrails atop a built-in orchestration harness, without having to build sandboxed code execution, checkpointing, credential management, scoped permissions and end-to-end tracing themselves. The framework manages state, execution graphs and routing, and brings managed agents into a vendor-controlled runtime loop. Even before the release of Claude Managed Agents, new directional VentureBeat research showed that Anthropic was gaining traction at the orchestration level as enterprises adopted its native tooling. Claude Managed Agents represents a new attempt by the firm to widen its footprint as the orchestration method of choice for organizations.

Anthropic is surging in orchestration interest

Orchestration has emerged as an important segment for enterprises to address as they sca ...
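Anthropic has not published a code-level specification in the coverage above, so purely to make the "tasks, tools and guardrails" framing concrete, here is a hypothetical declarative sketch; every field name is invented for illustration and is not the actual Claude Managed Agents format:

```python
# Hypothetical sketch of a declarative managed-agent definition.
# All field names are invented for illustration; this is NOT
# Anthropic's actual Claude Managed Agents configuration schema.
agent_definition = {
    "name": "invoice-triage-agent",
    "task": "Classify incoming invoices and route exceptions to a human queue",
    "tools": ["erp_lookup", "email_send"],           # tools the agent may call
    "guardrails": {
        "max_steps": 20,                             # cap on execution-graph depth
        "scoped_permissions": ["read:invoices"],     # least-privilege access
        "requires_approval": ["email_send"],         # human sign-off before side effects
    },
}

# Sanity check: anything gated behind approval must be a declared tool.
assert set(agent_definition["guardrails"]["requires_approval"]) <= set(agent_definition["tools"])
print("definition ok")
```

The point of the model-layer approach is visible in what the sketch omits: sandboxing, checkpointing, credential management and tracing would live in the vendor-controlled runtime, not in the enterprise's own code.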

After the triumphant return of four astronauts from a milestone lunar flyby last week, NASA is pivoting to its next crucial challenge: evaluating competing lunar landers from Elon Musk's SpaceX and Jeff Bezos's Blue Origin for future crewed missions. The Artemis II mission, which lasted nearly 10 days, was the first crewed flight under NASA's ambitious moon-return program. It sent astronauts farther from Earth than ever before in a mission serving as a rehearsal for deeper space ventures. This achievement has amplified focus on remaining risks within the program, particularly the need for commercial lunar landers to safely execute a final descent to the moon. NASA aims to put astronauts back on the moon by 2028, amid rising competition from China. The agency has engaged SpaceX and Blue Origin, emphasizing competition to boost progress. Both companies have adopted distinct approaches for their lunar landers, with SpaceX's Starship being notably larger and Blue Origin's Blue Moon taking a traditional approach. As technological challenges mount, NASA remains optimistic that the dual-provider strategy increases the likelihood of a successful mission before international rivals achieve similar milestones.

The question isn't whether artificial intelligence will become a potent weapon in cyberattacks. The question is how close we are to that threshold -- and whether the safety testing infrastructure can keep pace with the models themselves. A detailed evaluation published by the UK's AI Safety Institute offers the most concrete public evidence yet that frontier AI models are approaching meaningful autonomous cyber capabilities. The AISI's assessment of Anthropic's Claude "Mythos" preview models -- early versions of what would become the Claude 4 family -- found that the AI could independently complete multi-step cybersecurity challenges that previously required human expertise. Not toy problems. Real capture-the-flag exercises modeled on actual attack scenarios. And the results should make every CISO pay attention. What the UK Found -- and Why It Matters Beyond the Lab The AI Safety Institute tested two preview models, codenamed Mythos-minor and Mythos-major, against its ATLAS benchmark -- a set of challenges drawn from real-world cybersecurity competitions. These aren't abstract reasoning tests. They require an AI agent to probe systems, identify vulnerabilities, write and execute exploit code, and chain multiple steps together to compromise a target. The kind of work that, until recently, demanded years of specialized training in offensive security. Mythos-major solved 26 out of 78 ATLAS challenges autonomously. That's a 33.3% completion rate. Mythos-minor hit 22 of 78, or 28.2%. For context, the previous Claude 3.5 Sonnet model -- already considered highly capable -- managed just 14 of those same challenges, an 18% rate. The jump from 18% to 33% represents nearly a doubling in autonomous cyber capability in a single model generation. The numbers alone are striking. The details are more so. AISI researchers observed the models successfully performing tasks across the full attack chain: reconnaissance, exploitation, privilege escalation, and lateral movement. 
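The completion rates quoted above are straight ratios of solved challenges to the 78-challenge ATLAS total; a quick check of the arithmetic, using only the counts reported in the AISI figures:

```python
# Verify the ATLAS completion rates quoted in the AISI evaluation above.
TOTAL_CHALLENGES = 78
solved = {
    "Mythos-major": 26,
    "Mythos-minor": 22,
    "Claude 3.5 Sonnet": 14,
}

rates = {model: 100 * n / TOTAL_CHALLENGES for model, n in solved.items()}
for model, rate in rates.items():
    print(f"{model}: {rate:.1f}%")

# Generation-over-generation jump: ~18% to ~33%, close to a doubling.
print(f"uplift factor: {solved['Mythos-major'] / solved['Claude 3.5 Sonnet']:.2f}x")
```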
Mythos-major demonstrated what the institute described as an ability to "credibly perform some of the individual steps required in a real-world cyberattack." It could scan networks, identify vulnerable services, craft working exploits, and escalate privileges on compromised machines -- all without human guidance. But the institute was careful to draw a line. Completing isolated CTF challenges, even complex ones, isn't the same as executing a full end-to-end cyberattack against hardened production infrastructure. The models still struggled with the longest challenge chains, those requiring sustained planning across many sequential steps. They'd lose context. Make errors they couldn't recover from. Get stuck in loops. Not yet autonomous cyber operators. But closer than anything that came before them. The AISI evaluation also introduced a new dimension to its testing methodology. Alongside the fully autonomous runs, researchers conducted "human uplift" assessments -- measuring whether the models could meaningfully enhance the capabilities of human attackers at different skill levels. This is arguably the more immediately relevant threat vector. Most cyberattacks aren't launched by AI agents operating alone. They're launched by people, many of whom lack elite technical skills but possess enough intent and basic knowledge to be dangerous with the right assistance. Here, the findings were nuanced. The models could explain attack concepts clearly, generate functional exploit code, and help debug failed attempts. For a moderately skilled attacker -- someone with basic penetration testing knowledge but not deep expertise -- Claude Mythos could serve as an effective force multiplier. The AISI's report stopped short of declaring a specific uplift threshold had been crossed, but the directional trend is unmistakable. Anthropic, for its part, has been transparent about these capabilities. 
The company's own Responsible Scaling Policy classifies models into AI Safety Levels, and the Mythos evaluations fed directly into Anthropic's decision-making about deployment safeguards. According to the AISI report, Anthropic provided pre-release access to the preview models specifically so the UK institute could conduct independent testing before public release -- a practice that remains voluntary but that the UK government has been pushing to formalize. The Arms Race Between Capability and Containment The timing of this evaluation is significant. It arrives as governments worldwide are grappling with how to regulate AI systems whose capabilities are advancing faster than the policy frameworks designed to govern them. The UK's approach, centered on its AI Safety Institute, has emphasized technical evaluation as the foundation for governance. Rather than prescriptive rules about what models can or can't do, AISI has focused on building the measurement infrastructure to understand what models actually do when tested rigorously. The Claude Mythos evaluation represents one of the most detailed public examples of this approach in practice. But there's a tension embedded in the model. AISI's evaluations are conducted on pre-release versions with the cooperation of AI companies. That cooperation has been voluntary. And while Anthropic has been among the most willing participants, the broader industry's commitment to pre-deployment safety testing remains uneven. OpenAI has engaged with AISI on some evaluations. So has Google DeepMind. But the depth and timing of access varies, and there's no legal requirement compelling any company to submit models for independent testing before releasing them. The European Union's AI Act takes a different tack, imposing mandatory obligations on providers of general-purpose AI models above certain capability thresholds. 
Under that framework, the kind of cyber capability demonstrated by Claude Mythos would likely trigger additional compliance requirements -- including adversarial testing and incident reporting obligations. Whether those requirements will prove effective or merely bureaucratic remains an open question. In the United States, the picture is more fragmented. The Biden administration's executive order on AI safety included provisions for reporting on dual-use foundation models, but enforcement mechanisms remain thin. The current political environment has shown limited appetite for new AI regulation, even as the technical case for oversight grows stronger with each model generation. So the safety testing that does happen relies heavily on the goodwill of companies and the technical capacity of institutions like AISI. And AISI itself acknowledged limitations in its evaluation. The ATLAS benchmark, while more realistic than many academic alternatives, still operates in controlled environments that don't fully replicate the complexity of real-world networks. Challenges are self-contained. Defenders aren't actively responding. The fog of war that characterizes actual cyber operations is absent. This matters because the gap between benchmark performance and real-world impact is where much of the genuine risk assessment lives. A model that solves 33% of CTF challenges might be far more or far less dangerous in practice than that number suggests, depending on how those capabilities translate to actual attack scenarios with real defenders, real network architectures, and real consequences for failure. The cybersecurity community has been watching these developments with a mix of alarm and pragmatism. Offensive security professionals have noted that current AI models, including the most capable ones, still lack the kind of adaptive reasoning that elite human hackers bring to novel situations. 
When a model encounters an unexpected configuration or a defense it hasn't seen in training data, it tends to fall back on generic approaches rather than creatively improvising. That's a meaningful limitation -- for now. The defensive implications are equally important and often underweighted in public discussion. The same capabilities that make AI models useful for attacking systems also make them useful for defending them. Automated vulnerability discovery, code auditing, anomaly detection, incident response triage -- these are areas where AI is already being deployed by security teams. The question is whether the offense-defense balance shifts as models become more capable, and in which direction. Historical precedent from other dual-use technologies offers limited guidance. Nuclear technology, genetic engineering, cryptography -- each followed its own trajectory of capability development, governance response, and eventual equilibrium (or lack thereof). AI's unique characteristics, particularly the speed of capability improvement and the low marginal cost of deployment, suggest that waiting for problems to manifest before responding may not be a viable strategy. The AISI evaluation also raises questions about the adequacy of current model safeguards. Anthropic has implemented various safety measures in the Claude model family, including training-time interventions designed to make models refuse requests for malicious assistance and system-level monitoring for potentially harmful outputs. The AISI testing was conducted on pre-release versions with some safety measures potentially not yet fully implemented, which complicates direct comparisons to the models ultimately deployed to users. But the fundamental challenge remains: a model capable enough to solve complex cybersecurity challenges autonomously is, by definition, a model that possesses knowledge and reasoning abilities applicable to offensive operations. 
The difference between a helpful security research assistant and a cyber weapon is largely one of intent, context, and guardrails -- all of which can be manipulated, bypassed, or simply absent in certain deployment scenarios. Fine-tuning, jailbreaking, and prompt injection techniques continue to evolve alongside the models themselves. And open-weight models from other providers, once released, can be modified without any safety constraints at all. The AISI evaluation focused on Anthropic's closed models, but the broader capability trajectory applies across the industry. What Comes Next The Claude Mythos evaluation is a snapshot. A single frame in a rapidly advancing film. The models tested were previews -- not even final release versions. The next generation will be more capable. And the one after that. AISI has signaled its intention to continue and expand its evaluation program, including more sophisticated benchmarks that better approximate real-world conditions. The institute is also working on evaluations that test models' ability to assist with other categories of catastrophic risk, including biological weapons development and the generation of disinformation at scale. Cyber capabilities are one piece of a larger puzzle. For industry practitioners, the practical takeaways are concrete. First, AI-assisted cyberattacks are no longer theoretical. The capability exists in current-generation models, albeit at a level below what elite human operators can achieve. Security teams should be modeling AI-augmented threat actors in their risk assessments now, not waiting for a dramatic public incident to force the issue. Second, the defensive applications of these same models deserve equal investment. If AI can identify and exploit vulnerabilities autonomously, it can also find and flag them before attackers do. 
Organizations that integrate AI into their defensive operations -- thoughtfully, with appropriate validation -- will have a meaningful advantage over those that don't. Third, the governance question isn't going away. Whether through voluntary frameworks like Anthropic's Responsible Scaling Policy, institutional evaluations like AISI's, or regulatory mandates like the EU AI Act, some form of structured oversight for frontier AI capabilities is taking shape. Companies building, deploying, or relying on AI systems should be engaging with these frameworks proactively rather than reactively. The UK AI Safety Institute deserves credit for publishing this evaluation in detail. Transparency about AI capabilities -- including uncomfortable capabilities -- is a prerequisite for informed governance. Too much of the AI safety discussion has operated in the abstract, trading in hypotheticals and thought experiments. The AISI report grounds the conversation in empirical data. Here's what the model can do. Here's what it can't. Here are the gaps in our ability to measure the difference. That kind of clarity is rare. And necessary. The models will keep getting better. The evaluations need to keep up. And the rest of us -- the people building systems, defending networks, and making policy -- need to be paying very close attention to the gap between what AI can do today and what it will be able to do tomorrow. Because that gap is closing faster than most people realize.

Anthropic's Long-Term Benefit Trust is an independent governance body designed to insulate key decisions as the company scales its frontier AI systems. Vas Narasimhan brings deep experience at the intersection of medicine, regulation, and global health. As the chief executive of Novartis, he has overseen the development and approval of more than 35 novel medicines.
Amazon is making one of its boldest moves in space yet, agreeing to acquire satellite operator Globalstar in a deal valued at about $11.57 billion. The move sharpens Amazon's push into satellite internet and puts it on a more direct collision course with Elon Musk's SpaceX, whose Starlink network still leads the market by a wide margin. The agreement gives Amazon control of Globalstar's satellite operations, infrastructure, and spectrum rights, integrating them into its growing low-Earth-orbit ambitions. The company has been building this effort for years under what was previously known as Project Kuiper, which has now been rebranded as Leo. With this acquisition, Amazon gains an operational backbone it didn't fully have before. Globalstar shareholders will receive either $90 in cash or 0.3210 shares of Amazon stock for each share they hold, CNBC reported. The transaction is expected to close in 2027, subject to regulatory approval. Amazon says the deal will help it roll out a direct-to-device satellite system, with deployment targeted for 2028. That puts it head-to-head with SpaceX's Starlink Mobile initiative, which is already moving into the same territory. The race is shifting from broadband terminals to something bigger: connecting everyday devices straight to satellites without relying on traditional cell towers. "By combining Globalstar's proven expertise and strong foundation with Amazon's customer-obsession and innovation, customers can expect faster, more reliable service in more places -- keeping them connected to the people and things that matter most," Panos Panay, Amazon's senior vice president of devices and services, said in a statement. The deal extends beyond Amazon's internal ambitions. Apple, which took a 20% stake in Globalstar in 2024 as part of a $1.5 billion investment, remains closely tied to the company's satellite network. 
Globalstar currently powers Apple's Emergency SOS feature, allowing iPhone users to send messages in areas without cellular coverage. Amazon confirmed it has reached an agreement with Apple to support satellite connectivity for current and future iPhone and Apple Watch features, signaling a broader ecosystem play. Amazon's satellite effort has faced delays. The company has launched more than 240 satellites since last April through partners including United Launch Alliance and SpaceX, though it remains far behind Starlink's scale. SpaceX has placed over 10,000 satellites in orbit and built a user base exceeding 9 million, giving it a commanding lead. Regulators are now part of the story. The Federal Communications Commission will review the Globalstar acquisition as Amazon seeks to expand its footprint in direct-to-cell services. FCC Chairman Brendan Carr signaled openness to the deal, framing it as aligned with a broader push to keep the U.S. competitive in next-generation connectivity. "We're ultimately not the arbiter of what technology succeeds or not," Carr said during an interview on CNBC. "We shouldn't be the constraint either, so we've directed the staff to move quickly on all of these different applications." Both Amazon and SpaceX are asking regulators for permission to scale further. SpaceX recently secured approval to add another 7,500 satellites, and Amazon has clearance to deploy thousands more as it works to catch up. Six years after first outlining its satellite ambitions, Amazon is no longer just building a network from scratch. With Globalstar, it's buying time, infrastructure, and spectrum in one move. The gap with Starlink remains large, though the battle is shifting into a new phase where device-level connectivity and global coverage will define the next winners.

WASHINGTON, April 14 (Reuters) - Holdings in Elon Musk's SpaceX company and predictions platform Polymarket are among dozens of future-oriented assets that Federal Reserve chair nominee Kevin Warsh lists on a newly filed financial disclosure that shows dozens of apparently small bets on a wide array of emerging and almost science fiction-sounding ventures. Warsh's major holdings put his assets at well over $100 million, including two $50-million-plus holdings in the Juggernaut Fund LP, apparently part of Warsh's work advising the Duquesne Family Office, the private investment firm of Stanley Druckenmiller. But it is in dozens of other holdings, listed as part of something called DCM Investments 10 LLC with a market value of no more than half a million dollars, that Warsh's stylings as a traditionalist central banker morph into an emerging future of digitized AI avatars doling out advice, AI-driven art, new vaccines for herpes and long-lasting reversible male contraception, and decentralized derivatives trading. SpaceX may be well known for Musk's business blanketing the globe with internet satellite coverage and ambitions for a manned journey to Mars. But the relatively small half-million dollars spread across dozens of firms suggests early-stage bets on less well-known companies that may make it, may not, or may make it big. There's "Recraft," described in Warsh's filing as an "AI vector art platform." There's Volt, an "AI physical security software" company, and 11x, an "autonomous AI workforce platform." A company called Outpace Bio is involved in protein engineering, a field considered to have enormous potential through the use of AI; Partiful takes human welfare in a different direction, offering a "social event planning platform"; Cafe X could provide synergy there with its "robotic coffee bar platform." 
Crypto and fintech are other focuses, including Tenderly, an "ethereum developer platform," Stashfin, described as a "consumer lending neobank," and Lemon Cash, described as a crypto financial services platform. The holdings extend to other health items, including a firm developing a herpes vaccine, one developing a "reversible male contraceptive" currently in clinical trials, and a "bionic" clothing company that assists movement. There's also a "digital cloning platform" called Delphi AI, whose website says it will "turn your knowledge into an interactive profile people can talk to. Showcase your expertise, answer repetitive questions, and discover what people want to know next," a tool Warsh might find useful at Fed press conferences. Reporting by Howard Schneider; Editing by Chizu Nomiyama

Anthropic announced a new platform last week, Claude Managed Agents, which aims to cut out the more complex parts of AI agent deployment for enterprises and competes with existing orchestration frameworks. Claude Managed Agents is also an architectural shift: enterprises, already burdened with orchestrating an increasing number of agents, can now choose to embed the orchestration logic in the AI model layer. While this comes with some potential advantages, such as speed (Anthropic proposes its customers can deploy agents in days instead of weeks or months), it also hands more control over the enterprise's AI agent deployments and operations to the model provider -- in this case, Anthropic -- potentially resulting in greater "lock in" for the enterprise customer, leaving them more subject to Anthropic's terms, conditions, and any subsequent platform changes. But maybe that is worth it for your enterprise, as Anthropic further claims that its platform "handles the complexity" by letting users define agent tasks, tools and guardrails with a built-in orchestration harness, without users having to build sandboxed code execution, checkpointing, credential management, scoped permissions and end-to-end tracing themselves. The framework manages state, execution graphs and routing, and moves managed agents into a vendor-controlled runtime loop. Even before the release of Claude Managed Agents, new directional VentureBeat research showed that Anthropic was gaining traction at the orchestration level as enterprises adopted its native tooling. Claude Managed Agents represents a new attempt by the firm to widen its footprint as the orchestration method of choice for organizations. Anthropic is surging in orchestration interest Orchestration has emerged as an important segment for enterprises to address as they scale AI systems and deploy agentic workflows. 
VentureBeat directional research of several dozen firms for the first quarter of 2026 found that enterprises mostly chose existing frameworks, such as Microsoft's Copilot Studio/Azure AI Studio, with 38.6% of respondents in February reporting using Microsoft's platform. VentureBeat surveyed 56 organizations with more than 100 employees in January and 70 in February. OpenAI closely followed at 25.7%. Both showed strong growth between the first two months of the year. Anthropic, driven by increased interest in its offerings, such as Claude Code, over the past year, is putting up a fight. Adoption of the Anthropic tool-use and workflows API increased from 0% to 5.7% between January and February. This tracks closely with the growing adoption of Anthropic's foundation models, showing that enterprises using Claude turn to the company's native orchestration tooling instead of adding a third-party framework. While VentureBeat surveyed before the launch of Claude Managed Agents, we can extrapolate that the new tool will build on that growth, especially if it promises a more straightforward way to deploy agents. Collapsing the external orchestration layer Enterprises may find a streamlined, internal harness for agents compelling, but it does mean giving up certain controls. Session data is stored in a database managed by Anthropic, increasing the risk that enterprises become locked into a system run by a single company. This may be less desirable for firms hoping to move away from the locked-in software-as-a-service (SaaS) applications in their current stacks, a shift many hope AI will facilitate. The specter of vendor lock-in means agent execution becomes more model-driven rather than directed by the organization, happens in an environment enterprises don't fully control, and behavior becomes harder to guarantee. 
It also opens the possibility of giving agents conflicting instructions, especially if the only way for users to exert any control over agents is to prompt them with more context. Agents could have two control planes: one defined by the enterprise's orchestration system through instructions and the other as an embedded skill from the Claude runtime. This could pose an issue for highly sensitive and regulated workflows, such as financial analysis or customer-facing tasks. Pricing, control and competitive set Balancing control with ease is one thing; enterprises also consider the cost structure of Claude Managed Agents. Claude Managed Agents introduces a hybrid pricing model that blends token-based billing with a usage-based runtime fee. This makes Managed Agents' costs more dynamic, though less predictable. Enterprises will be charged a standard rate of $0.08 per hour when agents are actively running. For example, at $0.70 per hour, a one-hour session could cost up to $37 to process 10,000 support tickets, depending on how long each agent runs and how many steps it takes to complete a task. Microsoft, currently the leader according to VentureBeat's directional survey, offers several orchestration offerings. Copilot Studio uses a capacity-based billing structure, so enterprises pay for blocks of interactions between users and agents rather than the number of steps an agent takes. Microsoft's approach tends to be more predictable than Anthropic's pricing plan: Copilot Studio starts at $200 per month for 25,000 messages. Compared with similar competitors like OpenAI's Agents SDK, the picture becomes murkier. Agents SDK is technically free to use as an open-source project. However, OpenAI bills for the underlying API usage. Agents built and orchestrated with Agents SDK using GPT-5.4, for example, will cost $2.50 per 1 million input tokens and $15 per 1 million output tokens. 
The enterprise decision Claude Managed Agents does give a reprieve to enterprises that find the actual deployment of production agents too complicated. It reduces their engineering overhead while adding speed and simplicity in a fast-changing enterprise environment. But that comes with a choice: lose control, observability and portability, and risk further vendor lock-in. Anthropic just made a case for why its ecosystem is becoming not just the foundation model of choice for enterprises, but also the orchestration infrastructure. It is now more imperative than ever for enterprises to weigh ease against reduced control.
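The three pricing models described here can be compared with a rough back-of-the-envelope calculator. This is a sketch using only the figures quoted in this article (the $200-per-25,000-message Copilot Studio block, the GPT-5.4 token rates, and the $0.08-per-hour Managed Agents runtime fee); the function names are illustrative, and real billing has more dimensions than this captures.

```python
# Rough cost-comparison sketch for the three pricing models above.
# Figures come from the article; actual vendor pricing may differ.

def managed_agents_cost(runtime_hours: float, hourly_rate: float = 0.08,
                        token_cost: float = 0.0) -> float:
    """Hybrid model: usage-based runtime fee plus token-based billing."""
    return runtime_hours * hourly_rate + token_cost

def copilot_studio_cost(messages: int, block_price: float = 200.0,
                        block_size: int = 25_000) -> float:
    """Capacity-based model: pay for whole blocks of 25,000 messages."""
    blocks_needed = -(-messages // block_size)  # ceiling division
    return blocks_needed * block_price

def agents_sdk_cost(input_tokens: int, output_tokens: int,
                    in_rate: float = 2.50, out_rate: float = 15.0) -> float:
    """Open-source SDK: free framework, pay per 1M API tokens."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 50,000-message workload needs two Copilot Studio blocks:
print(copilot_studio_cost(50_000))          # 400.0
# A workload of 4M input and 0.5M output tokens on Agents SDK:
print(agents_sdk_cost(4_000_000, 500_000))  # 17.5
```

The key structural difference the sketch exposes: capacity billing jumps in whole-block steps (25,001 messages costs the same as 50,000), while token and runtime billing scale continuously but are harder to forecast before agents run.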

Anthropic last month reduced the TTL (time to live) for the Claude Code prompt cache from one hour to five minutes for many requests, but said this should not increase costs despite users reporting faster-depleting quotas. User Sean Swanson posted a bug report showing that Anthropic introduced a one-hour cache for Claude Code context around February 1, then changed it back to a five-minute cache around March 7. "The 5m TTL is disproportionately punishing for the long-session, high-context use case that defines Claude Code usage," said Swanson. When using AI coding assistants or agents, the context is additional data sent along with the user's prompts, such as existing code or background instructions. Context improves the accuracy of the AI but also requires more processing. Claude prompt caching avoids re-processing previously used prompts, including context and background information. The cache can have either a five-minute or one-hour TTL. Writing to the five-minute cache costs 25 percent more in tokens, and writing to the one-hour cache 100 percent more, but reading from cache is around 10 percent of the base price. Jarred Sumner, the creator of the Bun JavaScript runtime who now works for Anthropic, agreed that the analysis was "good detective work" but claimed that the change back to the five-minute cache made Claude Code cheaper because "a meaningful share of Claude Code's requests are one-shot calls where the cached context is used once and not revisited." Sumner said that the Claude Code client determines the cache TTL automatically and there are no plans for a global setting. Swanson revised his analysis in response, agreeing that sessions using subagents do benefit from the lower write cost of the five-minute cache since they interact quickly and "their caches almost never expire." However, he said he has been a $200 per month subscriber for over six months and had never hit a quota limit until March. 
The "extra burn rate" is "making a once great service unusable," he said. Another factor is that the large one-million-token context window available on paid plans with the Claude Opus 4.6 or Sonnet 4.6 models increases costs, especially with cache misses. Claude Code creator Boris Cherny said that "prompt cache misses when using 1M token context window are expensive... if you leave your computer for over an hour then continue a stale session, it's often a full cache miss." He said that Anthropic is investigating a 400,000-token context window by default, with an option for one million tokens if preferred. There is already a configuration setting for this. Cherny said that larger contexts are now common because users are "pulling in a large number of skills, or running many agents or background automations." Some developers are convinced that cache rebuilding and cache misses are major factors in Claude Code quota exhaustion, which has reached the point where Pro users ($20 per month) may get as few as two prompts in five hours. A number of bugs in the caching code have been reported, such that one user said: "Before those are fixed likely any 5 minutes vs 1 h discussion is entirely moot since numbers are totally flawed." The focus on cache optimization may also be evidence that, under the covers, Anthropic's quotas are simply buying less processing time than they did. Swanson is not alone in reporting that Claude's performance has dropped. For example, a user on the enterprise team plan said: "In March I could use Opus all day and it was getting great results. Since the last week of March and into April, I've had sessions where I maxed out session usage under 2 hours and it got stuck in overthinking loops, multiple turns of realising the same thing, dozens of paragraphs of 'but wait, actually I need to do x' with slight variations." That chimes with similar comments from an AI director at AMD. 
Cache optimization may be important, but it seems unlikely to account for all these reported issues.
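The trade-off Swanson and Sumner are debating can be made concrete with a toy model. The multipliers below come from the figures in this article (five-minute cache writes cost 1.25x the base token price, one-hour writes 2x, cache reads about 0.1x); the session shapes and the function itself are illustrative assumptions, not Anthropic's actual billing logic.

```python
# Toy model of prompt-cache economics, normalized so that processing
# the context once with no cache costs 1.0 unit. Multipliers are the
# ones reported in the article.
WRITE_5M = 1.25   # writing to the five-minute cache
WRITE_1H = 2.00   # writing to the one-hour cache
READ     = 0.10   # reading previously cached context

def session_cost(turns: int, write_mult: float, expiries: int = 0) -> float:
    """One cache write, plus a full rewrite after each expiry;
    every remaining turn is a cheap cached read."""
    writes = 1 + expiries
    reads = max(turns - writes, 0)
    return writes * write_mult + reads * READ

# One-shot call (Sumner's case): the 5m cache's cheaper write wins.
print(round(session_cost(1, WRITE_5M), 2))              # 1.25
print(round(session_cost(1, WRITE_1H), 2))              # 2.0

# Long session with idle gaps (Swanson's case): if the 5m cache
# expires 4 times across 20 turns, the 1h cache wins despite its
# costlier write: 5*1.25 + 15*0.1 versus 1*2.0 + 19*0.1.
print(round(session_cost(20, WRITE_5M, expiries=4), 2)) # 7.75
print(round(session_cost(20, WRITE_1H), 2))             # 3.9
```

Under these assumptions both sides can be right: which TTL is cheaper depends entirely on how often the cache expires mid-session, which is exactly why long-session users report higher burn after the change.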

The two announced they were working together last year to bridge crypto and TradFi. Deutsche Börse AG said Tuesday that it invested $200 million in Kraken's parent company, Payward, as the American crypto exchange pushes further into the traditional finance world. The German finance firm, which runs the Frankfurt Stock Exchange, said the deal was to "leverage their complementary capabilities to bridge traditional financial markets and the digital asset economy." The deal -- a secondary transaction -- sees Deutsche Börse AG buy existing shares, giving it a 1.5% fully diluted stake in the company. "Spanning trading, custody, settlement, collateral management, and tokenised assets, the partnership will unlock a new range of enhanced products and services that deliver frictionless access to both ecosystems, creating a holistic experience for institutional clients," the firm said in a statement. Kraken -- like other crypto exchanges -- is pushing into the traditional finance world, allowing users to trade stocks, bonds, and other assets. It is even selling one of its products, the Krak mobile money app, as a "primary account for everything." The company made five acquisitions in 2025. Kraken co-CEO Arjun Sethi told DL News in September that the crypto exchange had more deals lined up. Kraken and Deutsche Börse first said they were working together in December in a "partnership to bridge traditional and digital markets." The idea is to break down barriers so funds can flow across both ecosystems with little friction. Kraken last year said it would integrate directly with 360T, a Deutsche Börse Group subsidiary and one of the world's largest foreign-exchange trading venues, to give clients access to its liquidity pools. Tokenisation is playing a part, too. The two firms said they were working to integrate Kraken's xStocks product -- tokenised representations of US equities and ETFs -- within Frankfurt-based fintech company 360X's ecosystem. 
Kraken did not immediately respond to questions from DL News. Kraken's latest deal comes as the company expands its portfolio -- within the digital asset space and outside of it. Kraken's portfolio now includes futures trading platform NinjaTrade, which it bought for $1.5 billion, proprietary trading firm Breakout, tokenised assets platform Backed Finance, and token management platform Magna.

Vas Narasimhan has been appointed to Anthropic's Board of Directors by the Anthropic Long-Term Benefit Trust. He is a physician-scientist and the Chief Executive Officer of Novartis -- one of the world's leading innovative medicines companies -- and shares Anthropic's conviction that healthcare and life sciences are among the areas where AI has the greatest potential to improve the quality of human life. "Vas brings something rare to our board. He's overseen the development and approval of more than 35 novel medicines for the benefit of patients around the world in one of the most regulated industries," said Daniela Amodei, Co-founder and President of Anthropic. "Getting powerful new technology to people safely and at scale is what we think about every day at Anthropic. Vas has been doing exactly that for years, and I'm grateful he's joining us." Anthropic is a Public Benefit Corporation, and its Board is elected by stockholders and the Long-Term Benefit Trust. The Trust is an independent body whose members have no financial stake in Anthropic, and it exists to ensure the company's governance responsibly balances financial success with its public benefit mission of developing AI for the long-term benefit of humanity. With Narasimhan's appointment, Trust-appointed directors now make up a majority of the Board. Narasimhan joins Dario Amodei, Daniela Amodei, Yasmin Razavi, Jay Kreps, Reed Hastings, and Chris Liddell on the Board of Directors. "The Long-Term Benefit Trust's role is to appoint directors who will ensure Anthropic responsibly balances its commitment to stockholders and its public benefit mission as the company grows. Vas has spent his career stewarding breakthrough science responsibly -- exactly the perspective we are excited to have on the board as we develop consequential technology. We're excited for what he'll bring to the table," said Neil "Buddy" Shah, Chair of Anthropic's Long-Term Benefit Trust. 
"Working across medicine, innovation, and global health has shown me the transformative potential of technology when deployed responsibly. In healthcare, AI is accelerating solutions to some of the hardest scientific challenges, from deepening our understanding of disease biology to designing better medicines," said Narasimhan. "Anthropic is setting the standard for how AI should be developed to benefit humanity, and I'm honored to join the Board and contribute to its mission." Early in his career, Narasimhan worked on HIV/AIDS, malaria, and tuberculosis programs in India, Africa, and South America, and he continues to champion access and global health priorities today. He is an elected member of the US National Academy of Medicine and a member of the Council on Foreign Relations. He serves on the University of Chicago board of trustees and the board of fellows at Harvard Medical School. He previously chaired the Pharmaceutical Research and Manufacturers of America, where he remains on the board of directors.

SpaceX is scheduled to launch a Falcon 9 rocket from Vandenberg Space Force Base in California. Yet another California rocket launch is coming up today. SpaceX is gearing up to get its Falcon 9 rocket off the ground for the third time in April from the Vandenberg Space Force Base in Santa Barbara County. And as expected, the two-stage rocket will help deliver another batch of the company's Starlink broadband internet satellites to Earth orbit. Eager to see liftoff? While plenty of spots are popular among spectators near and far from the launch site, you also have the option to watch a livestream of the mission remotely. Here's everything to know about the latest SpaceX mission, and how to watch a webcast of the Falcon 9 launching in Santa Barbara County. Is there a rocket launch today? Next liftoff from California SpaceX is working toward a Tuesday, April 14, launch from Southern California, with a four-hour launch window opening at 7 p.m. PT, according to a launch alert. The launch will take place from the Vandenberg Space Force Base in Santa Barbara County. A Federal Aviation Administration operations plan advisory suggests a backup opportunity is available the next day if the launch were to be postponed. What is launching from Vandenberg? Falcon 9 to deploy Starlink satellites SpaceX will launch its famous two-stage 230-foot Falcon 9 rocket, one of the world's most active, to deliver 25 Starlink satellites into low-Earth orbit, an altitude nearer Earth's atmosphere where they're able to circle the planet quickly. How to watch SpaceX launch livestream Californians, of course, have plenty of opportunities to see a rocket in person both near the launch site as it lifts off, and further away as it soars overhead. But SpaceX also provides a live webcast of its missions for those who prefer to watch from home or for those viewing the launch locally and looking for updates in real-time. 
As with most SpaceX missions, the launch will be available to stream on the company's website and its new X TV mobile app, beginning about five minutes before liftoff. SpaceX may also provide updates on social media site X. Does Elon Musk own SpaceX? What to know about rocket company SpaceX is the commercial spaceflight company that billionaire Elon Musk, the world's richest man, founded in 2002 and leads as the CEO. SpaceX is headquartered at Starbase in South Texas near the U.S.-Mexico border. The site, which is where SpaceX has been conducting routine flight tests of its 400-foot megarocket known as Starship, was recently voted by residents to become its own city. As a major government contractor, SpaceX serves as the launch service provider for a variety of government missions, both civil and military. For the Department of Defense, SpaceX's Falcon 9 helps launch classified satellites and other payloads into space. And for NASA, Falcon 9 most often helps propel astronauts to the International Space Station on SpaceX's Dragon crew capsule -- the only U.S. vehicle capable of carrying NASA astronauts to orbit. What is Starlink? Starlink is SpaceX's internet satellite business. With more than 10,000 satellites in its growing orbital constellation, Starlink has become a lucrative part of Musk's business empire, serving millions of customers around the world. Eric Lagatta is the Space Connect reporter for the USA TODAY Network. Reach him at [email protected]

JPMorgan Chase CEO Jamie Dimon said Tuesday that while artificial intelligence tools could eventually help companies defend themselves from cyberattacks, they are first making them more vulnerable. Dimon said that JPMorgan was testing Anthropic's latest model -- the Mythos preview announced by the AI firm last week -- as part of its broader effort to reap the benefits of AI while protecting against bad actors wielding the same technology. "AI's made it worse, it's made it harder," Dimon told analysts on the bank's earnings call Tuesday morning. "It does create additional vulnerabilities, and maybe down the road, better ways to strengthen yourself too." When asked by a reporter about Mythos, Dimon seemed to refer to Anthropic's warning that the model had already found thousands of vulnerabilities in corporate software. "I think you read exactly what it is," Dimon said. "It shows a lot more vulnerabilities need to be fixed." The remarks reveal how artificial intelligence, a technology welcomed by corporations as a productivity boon, has also morphed into a serious threat by giving bad actors new ways to hack into technology systems. Last week, Treasury Secretary Scott Bessent summoned bank CEOs to a meeting to discuss the risks posed by Mythos. JPMorgan, the world's largest bank by market cap, has for years invested heavily to stay ahead of threats, with dedicated teams and constant coordination with government agencies, Dimon said. "We spend a lot of money. We've got top experts. We're in constant contact with the government," he said. "It's a full-time job, and we're doing it all the time."

His 69-page disclosure shows investments in SpaceX, Polymarket, and other crypto companies. U.S. President Donald Trump's Federal Reserve chair nominee, Kevin Warsh, has revealed an extensive investment portfolio spanning advanced technology, biotech, and crypto-linked companies in a recently filed financial report. Trump's Fed Chair Nominee Kevin Warsh Reveals Portfolio The submission indicates that Warsh has a stake in Elon Musk-owned SpaceX. He has also invested in Polymarket, a blockchain-based prediction market. These sit alongside many smaller investments in emerging and experimental sectors. Per the 69-page disclosure, Warsh holds total assets of more than $100 million, highly concentrated in two holdings, each valued at over $50 million, in Juggernaut Fund LP. The fund is associated with his advisory position at the Duquesne Family Office, the investment firm founded by hedge fund veteran Stanley Druckenmiller. Alongside these large positions, the disclosure lists a group of smaller placements via an entity called DCM Investments 10 LLC, together worth less than half a million dollars. They indicate exposure to early-stage ventures in artificial intelligence, digital platforms, and life sciences. Fed Nominee Holds Major Stake in Crypto, AI Companies Warsh holds stakes in several prominent AI-focused companies, including Recraft, an AI vector art platform; Volt, an AI-based physical security company; and 11x, an autonomous AI workforce platform. Other digital ventures include Partiful, a social event planning platform, and Cafe X, which offers robotic coffee services. Biotechnology also takes centre stage: one portfolio company is applying artificial intelligence to protein engineering. 
Others are dedicated to medical advances, such as a herpes vaccine candidate and a reversible male contraceptive that is in clinical trials. Crypto and fintech investments are also present. The Trump Fed nominee has invested in Tenderly, an Ethereum developer platform; Stashfin, a consumer lending neobank; and Lemon Cash, a crypto financial services platform for retail users. Another notable addition is Delphi AI, a platform that provides digital replicas to mimic human knowledge and interaction. The company promotes AI tools that let users build interactive profiles able to answer questions and share knowledge.
