The latest news and updates from companies in the WLTH portfolio.
Best-Selling Author, Emmy Recipient, Personal Finance Expert Since 1981

There's a seductive new pitch making the rounds on social media, financial podcasts, and even cable news: Forget boring index funds -- you can "invest" in the actual news by trading contracts on whether there'll be a ceasefire in Iran, who'll win the midterms, or whether a specific leader will still be in power next month. The platforms selling this pitch, chiefly Kalshi and Polymarket, are calling themselves exchanges. They're calling their products event contracts. And they're attracting a wall of money from regular people who think they've found a smarter way to bet on the world. Here's the truth: The house isn't the problem on these sites. You are. The data is in, and it shows that casual users are getting fleeced by a tiny group of sharps, bots, and people who seem to know things they shouldn't. Before you move a dollar from your brokerage account to one of these platforms, read this.

1. The losses for regular users are worse than losses in sports betting

Prediction markets sell themselves as a fairer alternative to sportsbooks because there's no "house edge." That's technically true and practically meaningless. A recent analysis by Citizens JMP Securities, citing data from analytics firm Juice Reel, found that the median return for a prediction market user was -8% from July 2025 through mid-March, compared with -5% for sportsbook users over the same period, according to CoinDesk. That's right -- you're statistically more likely to lose money faster on Kalshi or Polymarket than at DraftKings. And it gets worse the smaller you are. The same research found that users trading less than $100 had a median return of -26.8%. The only cohort that made money? Individuals trading more than $500,000, who generated a median ROI of +2.6%. If you're not moving half a million dollars, you're the mark.

2. A tiny handful of whales win everything

This isn't close.
According to PredictionMarkets.org in April, market-maker Keyrock's tracking of the sector found that $15.2 billion in profits -- more than two-thirds of all money won on Polymarket -- was held by just 740 accounts. That's a tiny fraction of the nearly 2 million users trading on the platform. Blockchain data paints the same picture. One analyst's review of Polymarket's trading history in 2025 found that roughly 70% of Polymarket's 1.7 million trading addresses have recorded realized losses, and fewer than 0.04% of all addresses captured over 70% of total realized profits -- accumulating roughly $3.7 billion in gains. That's not a market. That's a feeding operation.

3. You're playing against bots, professional desks and 'dumb money hunters'

The pros know exactly what this looks like, and they're not hiding it. The Financial Times recently profiled traders who openly describe searching for "dumb money at the tails," primarily from casual bettors playing long-shot wagers on stretched odds. One analytics platform even rolled out an automated counter-trading tool that lets subscribers track losing accounts and bet against whatever those users touch. A 10x Research report put it bluntly: Casual users are trading "dopamine and narrative for discipline and edge," while profit is captured by "a tiny, informed elite who price probability, hedge exposure, and extract premium from retail-driven longshots." When a professional desk with algorithms and real-time data is on the other side of your trade, you don't have a strategy. You have a donation. For more wealth-wrecking habits to avoid, see "13 Dumb Investing Moves -- and How to Avoid Them."

4. The insider trading on war bets is staggering -- and mostly unpunished

This is the part that should stop you cold. When the U.S. went to war with Iran at the end of February, suspiciously well-informed accounts cashed in on an industrial scale.
According to blockchain analytics firm Bubblemaps, six Polymarket accounts made around $1.2 million in profit after successfully betting on the U.S. striking Iran by the end of February. CNN separately reported on one anonymous trader who won a staggering 93% of their five-figure wagers about Iran, even though the events they predicted were unannounced military operations -- pocketing nearly $1 million since 2024. Another account made more than $553,000 on bets about Iran's supreme leader just before an Israeli strike killed him, NPR reported. Israeli authorities have already arrested two people for allegedly making bets on Polymarket using classified military information. And Polymarket's own CEO once bragged to Axios that it was "super cool" that his platform creates a financial incentive for people with inside knowledge to divulge it to the market. Read that again. That's the company line. You're not an investor on this platform -- you're the liquidity that someone with a secret is draining.

5. Your wins aren't guaranteed, even when you're right

You might think: Fine, I'll do my homework and bet on things I actually understand. But being right doesn't mean you get paid. When Ayatollah Khamenei was killed in the February strikes, Kalshi users who had correctly bet on his ouster expected a payout. Instead, trading on the market was paused while the company conducted a "further review of the situation." Kalshi's CEO eventually announced the company would issue only partial refunds, citing an internal "death carveout" rule. One trader on a Kalshi Discord summed it up, telling NPR that getting "rugged" on a correct prediction because of a fine-print carveout was evidence that centralized platforms will always bend to compliance over reality. Sportsbooks don't love paying out either, but at least their rulebook is written before the game starts. On prediction markets, the rules can move after you've already won.

6. The regulatory protection you think exists isn't really there

Here's where things get scary. Most Americans using Polymarket aren't using the regulated U.S. version. They're routing through a VPN to hit the offshore exchange, which sits outside any U.S. regulator's reach. CNN reported that the company's U.S.-facing site isn't fully operational yet, but experts say Americans can easily access the offshore site with a virtual private network. Even Kalshi, which is regulated by the Commodity Futures Trading Commission (CFTC), is a tangled mess right now. Several states, including Connecticut, Arizona, and Illinois, tried to shut down prediction market operators for running what they call unlicensed gambling. The federal government responded by suing all three states, challenging their efforts to regulate platforms like Kalshi and Polymarket. Arizona has even filed criminal charges against Kalshi. The courts haven't settled the basic question -- are these things finance or gambling? -- and until they do, you have no idea which consumer-protection rules actually apply to your account. Meanwhile, members of Congress are openly warning about the mess. A bipartisan group has questioned whether the CFTC has enough authority, or interest, to police insider trading on these platforms at all. Until that's resolved, you're betting real money in a legal gray zone with rules that change based on which court ruled last.

The bottom line

Prediction markets are being marketed as the future of "informed investing." They're not investing. They're a zero-sum game where the best-informed, best-funded, and sometimes best-connected players feast on everyone else. If you wouldn't sit down at a poker table where the other players can see your cards, don't open an account on one of these platforms. If you're looking for a real edge, it's the same one it's always been.
Own a diversified slice of the global economy through low-cost index funds, keep your costs down, and let compounding do the heavy lifting over decades. It's slower. It's less exciting. And it's the reason actual wealthy people stay wealthy. For more pitfalls worth sidestepping, take a look at "5 Dangerous Money Traps Every Investor Should Avoid." Betting on war, disaster, and the death of world leaders isn't an asset class. It's a trap with a slick app.

Good morning. With all the recent hubbub about OpenAI, I've repeatedly seen a particular piece of misinformation about its CEO Sam Altman: how much equity he has in the turbocharged AI company. Wanna take a guess at Sam's stake in OpenAI? I'll spare you the ChatGPT prompt: It's zero. While Altman is surely the consummate capitalist -- see: his multibillion-dollar net worth and his many timely investments in startups with names like Airbnb, Stripe, and Uber -- his arrangement with OpenAI is a $76,000 salary and a "for love of the game" narrative. You know, like MJ in his prime. Today's tech news below. -- Andrew Nusca

The company formerly known as Facebook officially introduced the first AI model from the Alexandr Wang era. It's called Muse Spark. The model promises, in that grand tech industry tradition, results that are smarter, better, faster, and stronger than what came before. Muse Spark is "purpose-built" for Meta's suite of social media services -- a pragmatic note in a symphony of research-driven AI models. Investors, for some time skeptical of Meta's eye-watering AI investments, were impressed. Meta shares were up almost 7% to $612, and further still in after-hours trading. As such, the news is less about what Muse Spark can do (in short: better understand what you see and hear and say, accomplish more complex tasks using agents) and more about what it will improve. The model is already at work with Meta AI and set to expand to WhatsApp, Instagram, Facebook, Messenger, and even Meta's AI glasses "in the coming weeks." Translation: More engagement and more relevant ad targeting for a company that gets about 97% of its annual revenue from advertisements. Incremental revenue? For spring? Groundbreaking. -- AN

Fast-growing AI company Anthropic was in the U.S. military's good graces until it refused to remove key safeguards in its Claude chatbot -- only to be blacklisted as a "supply chain risk" last month.
The designation, which all but bars Anthropic's AI from use by the U.S. military complex, felt rather political for a software provider that was once the government's preferred choice. So Anthropic asked the courts to put a stop to it. On Wednesday, that request was denied. The U.S. Court of Appeals for the District of Columbia Circuit declined to pause Anthropic's supply chain risk designation. It's not a final ruling, and the case is far from over. Anthropic maintains the government's retaliation was unlawful; meanwhile, the military argues that it's the decider. Somewhere, Sam Altman is smiling. -- AN

First, Australia did it. Then Spain. France started talking about it. Denmark, Indonesia, and Malaysia did, too. The latest hammer to fall? The birthplace of democracy. Prime Minister Kyriakos Mitsotakis said this week that Greece will move to ban social media (but not messaging) access for children under age 15 beginning January 1, 2027. He also called for European Union-wide action on the issue. The Greek legislation hasn't yet passed -- that's planned for summer -- but there has been little opposition thus far to the effort, which did not specifically name any tech corporations. Greece is just the latest nation to acknowledge that this social media stuff really can be harmful to developing brains. Today, that view is widely held, though opinions vary on what to do about it. Some tech firms argue that outright bans simply push children to shadow accounts that can't be as easily regulated and monitored. Parents and politicians say that's a convenient excuse. Whatever your take, the devil will be in the details. Stay tuned. -- AN

-- Who is Satoshi Nakamoto? A new investigation points to Blockstream CEO Adam Back (who denies it).
-- OpenAI IPO: Yes, retail investors, there's room for you!
-- Deere settles $99 million antitrust lawsuit. American farmers said their right to repair equipment they owned was infringed upon.
-- Hacker claims to breach China supercomputer. Stolen data for sale? It's a bold strategy, Cotton, let's see if it pays off for 'em.
-- Advanced chip packaging: So hot right now.
-- Alibaba Cloud CTO steps down. Jingren Zhou will become chief AI architect; Feifei Li replaces him in his old role.
-- GoPro cuts 23% of workforce. Nearly 150 pink slips as the once-promising action camera company struggles to turn a profit.
-- Arm CEO Rene Haas may be tasked with running SoftBank's international biz.

Ahmed Balaha is a journalist and copywriter based in Georgia with a growing focus on blockchain technology, DeFi, AI, privacy, digital assets, and fintech innovation.

$153 million in daily volume. $4 billion total. $200 million in the first week alone. Polymarket's 5-minute prediction markets have gone from experimental product to one of the highest-velocity trading venues in DeFi - and Chainlink oracles are the reason any of it works. The volume surge, confirmed by on-chain data shared across crypto analytics channels, represents a roughly 400% increase from earlier baseline figures, with the 3x weekly growth rate still accelerating as of the latest reporting window.

Why 5-Minute Prediction Markets Break Standard Oracle Architecture

Standard oracle infrastructure built for hourly or daily market resolution can tolerate latency. A price feed delayed by 30 seconds is noise when a contract settles in 48 hours. In 5-minute prediction markets, that same 30-second delay is the difference between a valid settlement and a manipulated one - exactly why Polymarket's architecture required a fundamentally different oracle setup. Chainlink's Data Streams integration, deployed on Polygon where Polymarket settles, delivers timestamped price reports at sub-second intervals. Combined with Chainlink Automation handling the on-chain settlement triggers, the system processes the full cycle - price confirmation, contract resolution, USDC payout - without human intervention and without the manipulation vector that centralized price feeds introduce. The oracles provide the official price feeds that trigger contract settlements, removing the need for a centralized authority entirely. The scale of what's now running through this infrastructure is significant.
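That settlement loop is easier to see in miniature. Below is a rough Python sketch - not Polymarket's or Chainlink's actual code, and every name and threshold here is hypothetical - of the one rule a 5-minute market cannot compromise on: reject any price report too old to trust, then resolve the contract automatically.

```python
from dataclasses import dataclass

# Hypothetical tolerance: in a 5-minute market, a report even a few
# seconds old can no longer be trusted for settlement.
STALENESS_LIMIT_SECS = 1.0

@dataclass
class PriceReport:
    price: float      # asset price reported by the oracle network
    timestamp: float  # when the report was signed (seconds since epoch)

def settle_market(report: PriceReport, strike: float, now: float) -> str:
    """Resolve a 'price above strike?' contract from an oracle report.

    A stale report is rejected outright: in short-duration markets a
    delayed feed is a manipulation vector, not background noise.
    """
    if now - report.timestamp > STALENESS_LIMIT_SECS:
        raise ValueError("stale oracle report; refusing to settle")
    return "YES" if report.price > strike else "NO"
```

In the production system, the equivalent of `settle_market` runs on-chain, the trigger comes from Chainlink Automation rather than a caller-supplied clock, and the report's signature is verified before any USDC payout is released - but the staleness check is the part that separates 5-minute markets from feeds built for daily settlement.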
Over 3,000 traders are actively using Chainlink Data Streams across integrated platforms, and the Dashlink dashboard tracking oracle demand shows a direct correlation between the Polymarket volume surge and a decline in LINK exchange reserves - whales are pulling supply off exchanges as network utilization hits new highs for prediction market settlements. Native USDC collateral adoption within these markets has further accelerated institutional participation by improving capital efficiency. The appeal is obvious: a platform already under scrutiny for insider trading patterns on longer-duration markets now offers a format where information asymmetry has a 5-minute shelf life.

The risks are real and shouldn't be buried. Short timeframes amplify volatility, HFT-dominated order flow can crowd out retail, and oracle delays, however rare, carry outsized consequences when resolution windows are measured in minutes. But the volume data doesn't lie: the format is capturing demand that didn't have an instrument before.

Convergence Hackathon Closes - Liquid Chain Takes the Grand Prize on CCIP

Liquid Chain built a Unified Liquidity Layer that aggregates capital across multiple Layer-2 networks using Chainlink's Cross-Chain Interoperability Protocol (CCIP) as the messaging backbone. The core problem it solves is real and expensive - assets stranded on individual L2s require manual bridging, creating slippage, delay, and trust assumptions that institutional allocators won't accept. Liquid Chain's architecture lets users move assets seamlessly across chains without manual bridge interactions, with CCIP handling the verification and message-passing layer beneath the surface. The project has been pitching its Layer-3 DeFi buildout as a credible answer to the fragmentation problem, and the Convergence judges agreed.
Other notable hackathon submissions concentrated on Real-World Asset tokenization and DeFi automation - a consistent signal that Chainlink's developer community is orienting toward institutional-grade infrastructure rather than consumer speculation. The CCIP adoption rate implied by the hackathon submissions validates Chainlink's cross-chain positioning at exactly the moment demand for tamper-proof oracle settlement is breaking records on Polymarket.

The platform pulled a bet on when the American airmen would be extracted from the country after a backlash

Prediction platform Polymarket has apologized for allowing users to place bets on whether American airmen from a downed US fighter jet would be rescued from Iran after facing backlash over questionable ethics. On Friday, a two-seater F-15E Strike Eagle was shot down over Iran, prompting the US to launch a high-risk extraction mission. US officials said that while one crew member was evacuated shortly after the crash, it took the US more than 24 hours to locate and extract the other, an operation that reportedly involved several helicopters and a CIA deception ploy. Iranian officials, however, have denied that the operation was a success, claiming that Tehran destroyed a C-130 military transport plane and two Black Hawk helicopters, with videos circulating on social media showing the aircraft's debris. The uncertainty about the fate of the US service members prompted a now-deleted bet that allowed users to buy 'yes' or 'no' positions on whether the airmen would be recovered by April 3 or April 4, with roughly 63% of traders predicting a Saturday rescue. Democratic congressman Seth Moulton was one of the first to flag the bet, writing on X: "This is DISGUSTING. They could be your neighbor, a friend, a family member. And people are betting on whether or not they'll be saved." Polymarket promptly deleted the bet, replying to Moulton: "We took this market down immediately as it does not meet our integrity standards. It should not have been posted, and we are investigating how this slipped through our internal safeguards." Moulton, however, did not accept the apology as sufficient, saying that Polymarket's "integrity standards are severely lacking," pointing to dozens of other active war bets still visible on the platform. He also took a swing at the Trump administration, recalling that Donald Trump Jr.
"is an investor in this dystopian death market and may have access to intelligence that isn't public yet." The episode is the latest in a string of controversies for Polymarket tied to the Iran war. According to several media reports, six suspected insiders collectively won $1.2 million by betting that the US would strike Iran on February 28 - the exact day coordinated US-Israeli airstrikes began - with all of the accounts funded within hours of the attack. Israeli prosecutors separately filed indictments against an IDF reservist and a civilian for allegedly using classified military intelligence to bet on Polymarket during the Twelve-Day War in June 2025.

Decentralized prediction market platform Polymarket recorded $10.57 billion in trading volume in March 2026, marking the first time the platform has crossed the $10 billion monthly threshold, according to internal data. The figure represents a 33% increase from February 2026 and is roughly 2.5 times higher than volumes seen during the platform's previous peak around the October 2024 U.S. election cycle, underscoring growing demand for event-based trading markets. Total trading volume for Q1 2026 reached approximately $26.2 billion, up more than 90% from the previous quarter, highlighting sustained momentum into 2026.

Growth was also evident in Polymarket US, the firm's U.S.-focused platform launched in the fourth quarter of 2025 under no-action relief from the Commodity Futures Trading Commission (CFTC). The unit generated over $700 million in March 2026 volume, a 167% increase month-on-month. Polymarket US now accounts for 6.6% of total platform activity, more than doubling its share since the start of 2026, despite remaining invite-only and limited to sports-related markets.

The restricted scope means major categories such as geopolitics and cryptocurrency markets, key drivers of global Polymarket activity, are not yet available to U.S. users. Analysts say expanding access to these segments could further accelerate growth, particularly as regulatory clarity evolves and broader participation becomes possible.

The Logitech MX Vertical is one of the best ergonomic mice we've ever tested, and easily recommended at this price.

After working at your desk for hours, your wrists are likely to become fatigued or even strained due to prolonged, repetitive usage of your standard mouse. While you can use wrist exercises and take breaks to dull the pain, one of the best ways to avoid pain altogether is by using Logitech's masterclass in ergonomic design, the MX Vertical Wireless Mouse, now on sale for a much more attractive $74.99 at Amazon.

Why buy the Logitech MX Vertical wireless mouse?

In addition to its vertical design, which helps relieve or even prevent symptoms of repetitive strain injuries and carpal tunnel, the Logitech MX Vertical mouse is about as light as a feather, weighing in at 135g, so you don't feel like you're moving a brick across your desk. Other noteworthy qualities of this mouse include 6 remappable buttons (via Logitech Options software), a battery life of two or even three weeks before needing to recharge, and good performance for productivity tasks. My colleague Sean Endicott praised this mouse highly the last time it was on sale, saying, "Since picking up my Logitech MX Vertical near the end of 2022, I've used the mouse for thousands of hours each year, and it's barely aged at all. The buttons are still clicky, the grooves are still textured, and the mouse is as responsive as ever." Plus, fellow gamer Brendan Lowry gave this mouse a rare perfect score of 5/5 stars, saying that "Ultimately, the Logitech MX Vertical is an absolutely superb vertical mouse with incredible comfort, amazing performance, and a stylish appearance that users will love." Frankly, the only real problem with this mouse (aside from its materials attracting dust and fingerprints more than some mice) is its usually steep MSRP of $119.99. Thankfully, you won't have to worry about that for a while, as Amazon has chopped its price by 38% to $74.99.
So, if you're looking to get work done but you're afraid of hurting your wrists during overtime, the Logitech MX Vertical will keep you safe.

Alternative discounts

If Amazon's stock or discount on the Logitech MX Vertical Advanced Wireless Optical Ergonomic Mouse runs out, here are some alternative discounts you can use as backup options.

* $74.95 at Walmart (was $119.99)
* $79.99 at Best Buy (was $119.99)

Join us on Reddit at r/WindowsCentral to share your insights and discuss our latest news, reviews, and more.

AI agents need accountability before chaos hardens into risk

There's a word that sums up where the software industry is right now: chaos. I was going to write "is heading," but that would have been accurate six months ago. It's here already. AI coding has made it cheap to change any software you want, so everyone has started changing everything at the same time: infrastructure, internal tools, APIs, security models, CI pipelines, even entire product surfaces. The cost of producing code is falling fast, but the cost of understanding what that code does has not. That mismatch is where your AI gremlins live.

For the past couple of years, the loudest AI security conversation has been about employees pasting sensitive data into chatbots. That's a real concern, and it deserves attention, but it's not the problem that will define the next wave of incidents, because the real shift isn't AI that talks. It's AI that acts. Coding assistants now open pull requests, and agents merge branches, file tickets, trigger CI jobs, query databases, and call internal APIs. In a growing number of organizations, these systems are no longer experiments. They are part of how work gets done. That changes the risk category: "shadow AI" stops being a policy issue and starts being a privileged access issue. Once an agent can take actions, the question isn't "Did someone paste the wrong thing into a prompt?" It's "Who did what, using which credentials, and under what authority?" Most organizations still can't answer that cleanly.

The real problem isn't speed. It's bypass

A common framing is that security teams are lagging AI adoption. The more accurate version is that they're being bypassed. AI adoption moves at product speed, while security review moves at organizational speed. When the two collide, the industry's default behavior has been predictable: ship first, govern later.
"Later" usually arrives during incident response, when you discover your logs can tell you that "the bot" did something but can't reliably tell you who initiated it, what policy was evaluated, or what scope limitation was supposed to apply. We are building workflows that can take powerful actions, and then acting surprised when we can't explain those actions afterward. This is the part that should worry leaders, not because AI is mystical, but because it makes old mistakes easier to repeat at scale. We've all spent years trying to eliminate shared credentials and unclear ownership. Agentic workflows have a talent for resurrecting both.

A familiar pattern: the demo works, then security sees it

Here's a pattern I've seen more than once, and it never shows up in a strategy deck. A team prototypes an agent to speed up engineering. It starts innocently: read tickets, propose code changes, open PRs. Someone adds the ability to call internal tools because it's "just one more step," and suddenly the agent can touch GitHub, CI, and deployment. The credentials are whatever is easiest: a shared token, a service account, an API key sitting in a secrets store. It ships. Everyone's happy. Work moves faster. Then someone from security takes a closer look and has the same reaction every experienced security person has when they find a powerful automation running on broad, long-lived credentials: "What are you thinking?"

That moment matters. It's not that security hates AI tools. It's that security understands a basic rule that everyone else is temporarily trying to ignore: actions require accountability. If you cannot say who authorized an action, you cannot convincingly claim you control it. In the best case, the team pauses, routes the agent through a proper access path, scopes its permissions, and adds real attribution. In the worst case, the agent stays wired in "temporarily," which is a word that can mean anything from one day to the heat death of the universe.
Cheap code amplifies sloppy identity

We've seen this movie before. When virtual machines got easy, we got server sprawl. When cloud storage got cheap, we got public buckets. When CI became self-serve, we got pipelines nobody fully understood. Now code is cheap, so integration sprawl is next. Agents are being wired into GitHub, CI, ticketing, databases, and internal APIs using whatever credential is closest at hand. Often that means long-lived tokens stored in environment variables, configuration files, or endpoints. Sometimes those tokens belong to a human. Sometimes they're shared service accounts. Sometimes they're "temporary" keys that have survived three reorganizations.

It works until it doesn't. A continuously running agent with a broadly scoped credential is effectively a privileged insider operating at machine speed. It will do exactly what its permissions allow, and it will do it more consistently than a tired human at 2 a.m. If your access model is sloppy, AI won't fix it. It will scale it. Right now the industry is in a hurry. Roadmaps are being rewritten around "AI-first," and teams are rebuilding workflows because models make it possible, not because it's necessarily wise. Hurry creates activity, but it doesn't create coherence. In a hurry, teams grant broad permissions to get a demo working, drop provider keys onto endpoints because it's convenient, and defer identity design because it feels like plumbing. But plumbing is what keeps the building standing.

One principle that makes the rest survivable

There's a simple principle that should anchor AI governance going forward: if an AI system can take actions, it needs an identity of its own. Not a shared service account, not a copied human API key, and not a static token living in a configuration file. A real, governed identity. That identity should use short-lived credentials, have tightly scoped permissions, and be evaluated against policy at the moment of each tool call.
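To make that concrete, here is a minimal Python sketch of the idea, with entirely hypothetical names and no particular vendor's API: each agent gets its own short-lived, scoped credential, and policy is evaluated (and logged) at the moment of every tool call.

```python
import time
import uuid

# Hypothetical policy table: each agent identity maps to the tools it may call.
AGENT_SCOPES = {"pr-review-agent": {"github.read", "github.open_pr"}}

TOKEN_TTL_SECS = 300  # short-lived credential, minted per session, never shared

def mint_token(agent_id: str) -> dict:
    """Issue a short-lived credential bound to one agent identity."""
    return {
        "agent_id": agent_id,
        "token": uuid.uuid4().hex,
        "expires_at": time.time() + TOKEN_TTL_SECS,
    }

def authorize_tool_call(token: dict, tool: str, initiated_by: str, audit_log: list) -> bool:
    """Evaluate policy at the moment of the call and record who asked for what."""
    allowed = (
        time.time() < token["expires_at"]
        and tool in AGENT_SCOPES.get(token["agent_id"], set())
    )
    audit_log.append({
        "agent": token["agent_id"],
        "tool": tool,
        "initiated_by": initiated_by,  # the human or workload behind the request
        "allowed": allowed,
    })
    return allowed
```

The data structures don't matter; what matters is that the allow/deny decision and the attribution record are produced at the same instant, so the question "who authorized this action, and why did the system allow it?" always has an answer.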
Every action should also be attributable back to a known user or workload intent, so you can apply controls at decision time rather than reconstruct intent in a postmortem. This pushes you toward standardizing how agents reach your systems. Centralize access through one approved path rather than letting ten ad hoc integrations bloom in parallel. Keep provider keys off endpoints as much as possible. Treat tool calls like production changes, because in practice that's what they are. The control plane doesn't go away just because the interface got chatty.

A gut check for the 3 a.m. page

If you remember only one thing, make it this: AI didn't invent a new security problem. It made an old security problem run faster. The old problem is unaccountable power. The habit of scattering credentials across endpoints. The belief that you can clean up later. We've been trying to stamp that out for twenty years, and it keeps coming back whenever a new wave of tooling makes shortcuts feel harmless again. So here's the test you can run the next time someone proposes wiring an agent into production-adjacent systems. Imagine the 3 a.m. page. Something happened. The logs say an agent did it. The business is asking what went wrong. Can you answer, plainly and confidently, who authorized that action and why the system allowed it? If you can't, you don't have an AI program. You have a chaos generator with a polite user interface. Tame the gremlins now, while the integrations are young and the habits are still forming. Retrofitting governance later is possible. It's just the expensive kind of possible.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.

The Artemis II mission marked a significant milestone on Monday night, as four astronauts aboard NASA's Orion spacecraft orbited the moon for the first time in half a century. This mission is part of NASA's broader initiative to return humans to lunar exploration.

Elon Musk and SpaceX's Role in Space Exploration

While NASA achieved this momentous feat, SpaceX, the rocket company founded by Elon Musk, was not involved in the mission. Musk's silence on the success contrasts with his typical engagement on social media, where he frequently shares provocative content and AI-generated imagery.

The Implications of Artemis II

The Artemis II mission serves as a demonstration of NASA's engineering capabilities. It underscores the agency's commitment to returning humans to the moon and eventually sending astronauts to Mars.

* Mission: Artemis II
* Duration Since Last Moon Orbit: 50 years
* Astronaut Crew: Four members
* Spacecraft: Orion

SpaceX's Financial Ventures

Despite the lack of involvement in the Artemis II mission, Musk remains deeply engaged in ambitious financial strategies. His efforts in this area are as remarkable as any lunar endeavor. As SpaceX continues to innovate, the company's approach to financing its projects is attracting attention.

Ultimately, both NASA and SpaceX are contributing to the evolution of space exploration. However, while NASA celebrated a historic achievement, the focus on future ventures, including SpaceX's own plans, remains at the forefront of the conversation in the aerospace community.

Free April 14 live workshop provides business leaders with a structured Answer Engine Optimization framework for improving discoverability inside AI search assistants, conversational platforms, and emerging intent-driven discovery environments with live Q&A and implementation guidance

IMS announced a live executive training event scheduled for April 14, 2026 at 1:00 PM EST designed to help founders, CEOs, and marketing leaders understand how to position their organizations for visibility across leading generative AI platforms such as ChatGPT, Claude, Grok, and Perplexity. The 60-minute session introduces a structured approach to Answer Engine Optimization that helps companies align their websites, authority signals, and content systems with how modern buyers research solutions. ChatGPT is developed by OpenAI, Claude by Anthropic, Grok by xAI, and Perplexity by Perplexity AI. References to these platforms are provided for informational context only and do not imply affiliation or endorsement. Search behavior is shifting from traditional keyword navigation toward AI-assisted evaluation and recommendation environments. Buyers increasingly rely on generative platforms to identify trusted providers before visiting company websites or engaging sales teams. Organizations that are not structured for answer engine interpretation risk losing visibility during early decision stages that influence vendor selection. "Generative AI platforms are rapidly becoming the first place buyers go to evaluate options," said Solomon Thimothy, Founder of IMS. "Organizations that understand how to structure authority signals so they can be interpreted and surfaced by these systems will strengthen their positioning during the earliest moments of customer decision making."
The session introduces the IMS AEO Alignment Framework, a three-part methodology designed to help organizations transition from ranking-dependent visibility toward answer-engine readiness:

* Signal Structuring: preparing websites and digital knowledge assets so AI systems can interpret credibility, expertise, and relevance
* Intent Mapping: translating customer questions, sales conversations, and support insights into structured answer-ready content
* Visibility Activation: aligning AEO, GEO, and modern SEO into a unified discovery strategy across search engines, conversational assistants, and voice environments

As generative search adoption accelerates across industries, executive teams are reassessing how digital authority is established and maintained. Answer platforms increasingly shape which organizations are introduced during research and comparison stages, making structured discoverability a strategic requirement rather than a tactical marketing adjustment. The April 14 session is designed as a practical working workshop rather than a theory presentation. Participants will learn how to identify structural visibility gaps, understand why many websites are not interpreted clearly by answer engines, and apply frameworks that help AI systems recognize their organization as a credible solution source. The session includes a live Q&A segment that allows attendees to evaluate their own visibility challenges and receive direct implementation guidance. This free training reflects a broader IMS initiative to help business leaders understand how AI-mediated discovery is changing competitive positioning across digital channels. By translating answer engine behavior into actionable steps, the workshop provides organizations with a clear entry point for adapting their visibility strategy to emerging search environments.
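In practice, "Signal Structuring" work of the kind described above usually begins with machine-readable authority signals such as schema.org markup embedded in a page. As a hedged illustration (the company name, URL, and profile links are invented, and this is a common industry approach rather than IMS's published method), a JSON-LD block that answer engines can parse might be generated like this:

```python
import json


def organization_jsonld(name: str, url: str, same_as: list[str]) -> str:
    """Build a schema.org Organization JSON-LD block for embedding in a page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # corroborating profiles act as authority signals
    }
    return json.dumps(data, indent=2)


# Hypothetical example company, for illustration only.
snippet = organization_jsonld(
    "Example Analytics Co",
    "https://example.com",
    ["https://www.linkedin.com/company/example-analytics"],
)
```

The emitted string would typically be placed inside a `<script type="application/ld+json">` tag so crawlers and AI systems can read it without parsing the page's visible HTML.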

The second quarter of the year started with arguably the most volatile macro situation since 2020. Brent crude has surged to $141 per barrel, with supply disruptions in the Middle East threatening to prolong the shocks. Inflation risks are reaccelerating, growth expectations are softening, and recession probabilities are climbing. One explanation could be second-quarter capital rotation, as institutional money rebalances the books after the first quarter. However, given the fundamentals, the rally increasingly looks like a speculative hiding spot rather than a reflection of improving market conditions.

Fundamental Friction

The Federal Reserve is trapped between renewed inflation fears (owing to energy volatility) and a rising probability of recession, which Goldman Sachs puts at 30%. Policy rates remain stuck at 3.5%-3.75%, limiting flexibility. This reality is what makes the rally in memory and semiconductor stocks appear fundamentally fragile. Micron and Western Digital benefit from AI-driven demand narratives, particularly around data center expansion and storage needs. However, these same companies are highly exposed to cyclical enterprise spending, which tends to contract when GDP slows. Goldman has already trimmed its U.S. growth forecast to 2.1%, a level that historically has pressured hardware investment. Rising energy costs further squeeze margins across the supply chain. The strong push in Lumentum illustrates the broader momentum, with investors treating anything adjacent to AI infrastructure as an opportunity, and it shows how far capital is willing to chase the trade. While Micron, Western Digital, and SanDisk trade at price-to-sales ratios of 7x to 12x, Lumentum's ratio is 27.7x. Investors are paying nearly $28 for every $1 in revenue the company generates.

AI as a Margin Killer

BCA Research offers a critical counterpoint to the prevailing AI optimism.
In its March report, the firm argues that AI may ultimately erode, not enhance, the profit engines of large technology companies. While productivity gains are likely, history suggests that faster productivity does not guarantee higher profits. BCA sees parallels with the 1995-2005 period, when efficiency surged, but margins didn't rise proportionately. AI threatens traditional competitive moats such as scale, network effects, and proprietary technology. Software could become commoditized, while platforms risk devolving into mere content repositories. With technology stocks now accounting for nearly half of the S&P 500 market capitalization, the implications of such margin compression are significant.
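The price-to-sales comparison above is simple arithmetic: market capitalization divided by trailing revenue. A quick sketch with made-up figures (not any company's actual financials) shows why a 27.7x multiple means paying nearly $28 for every $1 of revenue:

```python
def price_to_sales(market_cap: float, revenue: float) -> float:
    """Price-to-sales ratio: what investors pay per dollar of annual revenue."""
    return market_cap / revenue


# Hypothetical company: $27.7B market cap on $1B of trailing revenue.
ratio = price_to_sales(27.7e9, 1.0e9)   # 27.7x
# At that multiple, each $1 of sales costs investors $27.70 of market value.
```

The same function applied to a 7x name makes the valuation gap concrete: the 27.7x stock must grow revenue roughly four times as much to justify the same price.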

WASHINGTON, April 9 (Reuters) - Small defense industry artificial intelligence startups are suddenly fielding calls from generals, combatant commanders and deep-pocketed investors, after the souring relationship between the Pentagon and its once-favored AI vendor, Anthropic, reinforced the need to diversify and increase the number of AI providers for the military. In the weeks since the Department of Defense's troubled relationship with Anthropic burst into public view and led to the company being kicked out of the U.S. military, new defense-focused AI companies like Smack Technologies and EdgeRunner AI say they have experienced a shift in interest that would have been unimaginable just months ago. They have received a surge of overtures about possible contracts and meeting requests and been approached by investors who previously showed no interest. The Pentagon's growing animosity toward its top AI provider, Anthropic, has opened up opportunities for smaller rivals, who have long sought a foot in the door to the most lucrative government contractor in the world. A defense contract can lead to more business with other branches of the U.S. government, and is a useful signal of trust and safety for potential commercial clients. "We've seen a massive increase in demand from customers and the government to get AI solutions fielded since Anthropic was declared a supply-chain risk," said Tyler Sweatt, CEO of Second Front, a company that helps technology firms meet the requirements needed to operate on secure Pentagon networks. "Our customers are turning to us as the Pentagon turns to them to deploy quickly in the wake of the Anthropic blowup." 
Since the Pentagon deemed Anthropic's products a "supply-chain risk" in March and the two sides became embroiled in a lawsuit, the military has expressed increasing interest in AI startups like Smack Technologies, saying, "We want more, we want demos, let's talk about how we can move faster," said Andrew Markoff, co-founder and chief executive of the 19-person startup based in El Segundo, California. In late March, a judge temporarily blocked the Pentagon's blacklisting of Anthropic. Tyler Saltsman, co-founder and chief executive of EdgeRunner AI, described a similar experience. His company had been waiting more than a year for a Space Force contract to clear the Pentagon's procurement machinery. It was signed within weeks of the Anthropic situation breaking into the open. "I can't prove that the Anthropic drama sped this up," Saltsman said, "but I have a sneaky suspicion it did." "The Pentagon will continue to rapidly deploy frontier AI capabilities to the warfighter through strong industry partnerships across all classification levels," a Pentagon official said. One Pentagon technologist has previously told Reuters that the falling-out with Anthropic, and the realization that the Defense Department was heavily dependent on one AI provider, forced the department to diversify AI providers. SMACK'S MARINE CORPS CONTRACT SPEEDS UP For Smack, the clearest example of the post-Anthropic acceleration involves the Marine Corps. The company won a contract with the Marine Corps in March 2025 and delivered a successful prototype by October -- software that compresses what is normally a months-long operational planning process into roughly 15 minutes. Despite the successful prototype, momentum stalled. Full production had been budgeted for fiscal year 2027 -- meaning October 2027 at the earliest. Through the 2025 holiday period and into early 2026, there was no clear direction. Then the Anthropic uproar occurred. 
Within weeks, Smack was invited to multiple meetings with the Marine Corps focused on a single question: how fast can this move into production this year? Markoff said there was "very specific guidance and movement and energy" toward getting the prototype ready for combat operations in 2026 -- an acceleration of more than a year. The shift extended beyond the Marines. Smack holds contracts with the Navy and Air Force, and Markoff said interest came in nearly immediately from U.S. Special Operations Command, and others. EdgeRunner, which is deploying with the Army Special Forces groups and has received a contract with the Space Force, said the Navy has also dramatically sped up engagement. Meetings that had been biweekly or monthly are now happening multiple times a week. Both EdgeRunner and Smack are now racing to get their systems operating at higher security classification levels -- the gateway to the most operationally significant use cases and the largest military contracts. EdgeRunner said the military has told the company it can get to IL-6, a security designation enabling access to secret and top-secret data, within three months -- a timeline Saltsman described as remarkable, given that the process normally takes 18 months or longer. The acceleration, he said, is being driven partly by pressure from Pentagon leadership to cut through procurement bureaucracy, and partly by the urgency the Anthropic situation has injected into the department's AI strategy.

(Reporting by Mike Stone in Washington; Editing by Chris Sanders and Matthew Lewis)


Late last month, the industry learnt that Anthropic is developing Claude Capybara - also referred to internally as Mythos - a powerful new AI model with substantially improved capabilities in vulnerability discovery, exploit development and multi-step attack reasoning. The details emerged through a data leak rather than a formal launch, but the market response was unmistakable: AI has crossed a critical cybersecurity threshold. Frontier models are accelerating attack life cycles. They will allow attackers to identify and exploit vulnerabilities at a scale, speed and level of sophistication that until recently was the preserve of advanced nation state actors. For security leaders, this is both a warning and a call to action. It crystallises a trend that has been building for some time: the democratisation and industrialisation of cyberattacks. Mythos is the early signal of two profound shifts in the threat landscape. The first is the democratisation of advanced attack capabilities. Techniques that once required elite threat actors or well-funded nation state teams will soon be accessible to low-skilled attackers with AI assistance. The paths are already clear: abuse frontier models directly, as the Chinese state-sponsored group that used Claude Code to infiltrate roughly 30 organisations last year did, or wait for the same capabilities to appear in open-source models where no usage policies or safety layers stand in the way. This fundamentally lowers the barrier to entry for sophisticated attacks. Organisations that once considered themselves safe because they were not obvious targets of nation state activity are now at risk from newly capable criminal groups armed with AI-powered tools. The second shift is the industrialisation of cyberattacks. With continued advances in agentic AI, threat actors will be able to scan legacy and software-as-a-service technologies at unprecedented frequency and scale. 
The result will be a near-continuous flow of novel attack methods targeting enterprise systems, networks and employees. AI allows attackers to move from manual, artisanal operations to repeatable, automated attack pipelines. Attacks are becoming systematic, scalable and reproducible - more like software manufacturing than craft. This is the era of the AI attack factory.

The convergence of these two forces produces a dangerous outcome: more attackers executing more sophisticated attacks, increasing both volume and velocity simultaneously. The time-to-exploit window is collapsing towards zero.

We should all be alarmed by what the Mythos leak revealed, but we should not be surprised. Security researchers have long anticipated that advanced models would eventually demonstrate proficiency in code review, vulnerability discovery and reverse engineering, and that they would integrate with the tools and APIs that enable penetration testing and exploitation. The gap between writing code and analysing code is narrower than many realise. An AI system capable of generating sophisticated software can be trained or prompted to identify vulnerabilities within it. Combine that with exploit development and the ability to chain multi-step attacks, and you have an entirely new threat surface.

In response to this evolving threat landscape, security leaders should conduct a rigorous reassessment of their foundations. This is not only about implementing new tools. It is about ensuring that existing tools are actually tuned for the threat that is now emerging.

The step change in AI models' offensive capabilities has not happened in isolation. It has arrived alongside a sharp increase in open-source software supply-chain attacks, with both signals pointing to the same conclusion: the speed and surface area of attacks are accelerating. Whether an organisation has adopted AI or not is irrelevant. Threat actors have, and they will continue to push these capabilities further.
New models will keep pushing the boundaries of what is possible, for defenders and attackers alike. That is not a surprise - it is the trajectory the industry has been tracking for years. What the recent disclosures make clear is that continuous reassessment is no longer optional.

Anthropic's new "Claude Managed Agents" gives developers a hosted platform for building and running autonomous AI agents. The service runs exclusively on Anthropic's infrastructure and costs $0.08 per session hour on top of standard token prices. Early adopters like Notion and Rakuten are already using the system.

Anthropic has launched Claude Managed Agents as a public beta. The API suite lets developers build and run cloud-hosted AI agents without having to set up their own infrastructure for sandboxing, state management, or tool execution. Until now, teams shipping production-ready agents had to build their own secure containers, permission systems, and agent loops from scratch. Managed Agents handles all of that.

According to Anthropic's documentation, the system provides an orchestration harness that independently calls tools, manages context, and handles errors. Anthropic claims this cuts the time from prototype to production by a factor of ten. Sessions can run autonomously for hours, and results persist even if the connection drops. Built-in tools include bash commands, file operations, web search, and connections to external services via MCP servers.

A separate "Research Preview" that isn't publicly available yet lets agents spin up other agents, coordinate parallel tasks, evaluate outputs, and manage memory. Those interested can sign up for a waitlist to access these features.

Several companies are already using the system, according to Anthropic. Notion lets teams delegate tasks to Claude directly within their workspace. Rakuten built enterprise agents for sales, marketing, and finance that plug into Slack and Teams - each reportedly up and running within a week. Sentry paired its debugging agent with a Claude agent that writes patches and opens pull requests.

Managed Agents is available to all API accounts and requires a specific beta header that the SDK sets automatically. Anthropic notes that behavior may change between releases.
Pricing is usage-based: standard token rates apply, plus $0.08 per session hour. For now, the system runs exclusively on Anthropic's own infrastructure. The announcement doesn't mention whether it will become available through partner platforms like Amazon Bedrock or Google Vertex AI. For companies with multi-cloud strategies, that could be a significant limitation.
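To make the usage-based pricing concrete, the sketch below combines token charges with the $0.08 per session-hour runtime fee described above. The per-million-token rates are placeholder parameters, since the announcement doesn't state them; substitute the published rates for whichever Claude model you use.

```python
def managed_agents_cost(session_hours, input_tokens, output_tokens,
                        input_rate_per_mtok, output_rate_per_mtok,
                        runtime_rate_per_hour=0.08):
    """Estimate a session's cost: standard token rates plus the
    $0.08 per session-hour runtime fee cited in the announcement.

    The token rates are deliberately parameters (assumptions), because
    the article does not state them; only the runtime fee is given.
    """
    token_cost = (input_tokens / 1_000_000) * input_rate_per_mtok \
               + (output_tokens / 1_000_000) * output_rate_per_mtok
    runtime_cost = session_hours * runtime_rate_per_hour
    return token_cost + runtime_cost

# Hypothetical example: a 3-hour session using 2M input and 0.5M output
# tokens at assumed rates of $3 and $15 per million tokens.
cost = managed_agents_cost(3, 2_000_000, 500_000, 3.0, 15.0)
print(f"${cost:.2f}")  # $13.74
```

The point of the split is that long-running but quiet sessions are dominated by the flat hourly fee, while chatty sessions are dominated by token charges.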

The markets have been rocky. But the remarkable thing is how good returns for most investors have been over the past 12 months -- even including the downturn that started when the United States and Israel attacked Iran on Feb. 28.

That's according to final investor returns that have arrived from Morningstar, the financial services company. They show that for the average domestic stock mutual fund or exchange-traded fund, the losses have been relatively small -- a decline for the first quarter of the year of 1.2 percent.

The actual, up-to-the-last-minute returns for many investors improved on Wednesday, when the stock market surged and oil prices dropped with the start of a fragile cease-fire in the Iran war. At the market close, the S&P 500 was down less than 1 percent since the start of the year.

The Morningstar data shows that even with the setback of the war, the average domestic U.S. stock fund gained 16.8 percent over the 12 months through March. And over five years -- a stretch that includes the dismal runaway inflation of 2022 -- the average domestic stock fund returned 8 percent annualized.

This record of resilience adds weight to a compelling -- but far from foolproof -- argument that investors are best off sticking with the stock market, even when they are troubled by the state of the nation and the world. That narrative goes like this: Since World War II, the U.S. stock market has bounced back regularly from seemingly irretrievably destructive developments like wars, pandemics, domestic unrest, recession, inflation and flawed economic policies. Corporate earnings have provided the glue holding investor returns together -- and most forecasts for corporate earnings remain strong, as they have been for the last few years.

But beware. There's no guarantee that the relative equanimity in the markets will persist if the fighting resumes and if oil supplies remain constrained.
Most corporate earnings forecasts have not changed since the start of the war, yet global economic conditions are shifting rapidly, adding costs and increasing the risk that profits won't grow as expected, and may even fall.

Furthermore, the historical record shows that recessions, or runaway inflation, can shatter positive investor returns with little warning. Even when economic conditions seem solid, the markets can plummet at a moment's notice.

I'd add that whatever you may think of President Trump's policies, his behavior is shattering basic humane norms. On Tuesday morning, he threatened Iran with total destruction. "A whole civilization will die tonight, never to be brought back again," the president wrote on his social media platform, Truth Social. "I don't want that to happen, but it probably will."

In a wry response on LinkedIn, Ian Bremmer, the president of the Eurasia Group, an independent geopolitical risk consulting firm, said, "It looks bad for the U.S. president to threaten genocide." Indeed it does.

The United States and its ally Israel did not destroy all of Iranian civilization on Tuesday night. But the situation remains tenuous despite the cease-fire, with Iran retaining the ability to throttle the critical Strait of Hormuz and launch missile strikes of its own, and Israel continuing to attack southern Lebanon.

Even if the Iran war ends soon, the U.S. threats and the Iranian response have already changed the world, in my estimation. The risk of instability has risen substantially. Investing, under these circumstances, is hazardous.

For those who may need to use their money soon -- by which I mean any period of less than five years -- the stock market's wild swings may be too much to endure. Bonds are probably safer, though they can lose money, too, if sold during periods of rising interest rates.
Safe, interest-bearing accounts and Treasury bills are the best bets for those who want to get their hands on their cash quickly, or really can't afford to lose any of it.

Consider two of the biggest obstacles that investors have faced. In April 2025, Mr. Trump imposed the steepest tariffs since the 1930s, and the markets shuddered. While he has adjusted many of them after the Supreme Court ruled that he had exceeded his authority in unilaterally imposing tariffs, his administration is still intent on revamping the entire world trade system. Then, last month, the war with Iran resulted in the worst interruption in global oil supplies in decades, setting off steep price increases in oil and gasoline and, perhaps, the resumption of a disconcertingly high rate of inflation.

The combination of the tariffs and the oil shock might have been enough to derail the economy and the markets. But what's astonishing, in retrospect, is how well investor returns held up.

Morningstar has been providing quarterly mutual fund and exchange-traded fund returns to The New York Times for decades. These funds -- as well as the workplace-based retirement plan trusts based on them -- are how most people in the United States invest. They provide a window on how the stock and bond markets have affected real people. And they suggest that despite all the headlines, most investors have been insulated financially, so far.

The fund returns, for the three months, one year and five years through March 30, included these highlights:

International stock funds did better than U.S. domestic ones, both over the quarter and the 12 months through March. The comparisons were a gain of 0.8 percent for international stock funds over the quarter versus a 1.2 percent loss for domestic funds, and a 26 percent gain for international funds over 12 months versus a 16.8 percent rise for domestic funds.
Over five years, however, domestic stock funds beat international funds, with an annualized return of 8 percent versus 6.1 percent for international funds. Latin America stock funds were standouts, with an average return of 12.2 percent over three months, 49.5 percent over 12 months and 10.4 percent, annualized, over five years.

Domestic bond funds beat domestic stock funds for the quarter, though domestic taxable funds still lost 0.2 percent over three months, while municipal bond funds were flat. Over 12 months, taxable bond funds gained 5.4 percent while municipal bond funds, which are typically exempt from at least some income taxes, returned 3.8 percent, on average. Over five years, domestic taxable funds returned 2.2 percent. Municipal bond funds returned 0.9 percent. Bond funds, in short, did a bit better than stock funds during the stock market decline this year, but stocks beat bonds over longer periods, as they often do.

Within the U.S. stock market, energy funds benefited from soaring oil and natural gas prices, with an average gain of 34.5 percent for the quarter and of 44.6 percent over 12 months. Over five years, their annualized return was 23.3 percent. Precious metal funds, which included those investing in gold and silver producers, returned 7.3 percent for the quarter. But over 12 months, they gained 23.8 percent, and over five years, their annualized return was 104 percent.

Target-date retirement funds generally had small losses for the quarter but solid returns over longer periods. For example, the average 2030 fund -- aimed at those who plan to retire in that year -- declined 0.85 percent for the quarter, but posted a gain of 12.8 percent over 12 months and 5.4 percent annualized over five years. Retirement income funds, which typically contain a high bond allocation and a modest exposure to stocks and are often used by retirees, lost 0.4 percent in the quarter, but gained 9.1 percent over 12 months and 3.6 percent over five years, annualized.
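A note on reading these figures: the five-year numbers are annualized, so they compound rather than add. A small illustration of the conversion, using the 8 percent domestic stock fund figure (the function is mine, for illustration only):

```python
def cumulative_from_annualized(annualized_pct, years):
    """Convert an annualized return (in percent) to the total
    cumulative return over the period, via compounding."""
    return ((1 + annualized_pct / 100) ** years - 1) * 100

# The average domestic stock fund's 8 percent annualized return over
# five years compounds to roughly a 46.9 percent total gain.
print(round(cumulative_from_annualized(8.0, 5), 1))  # 46.9
```

So an 8 percent annualized five-year return is nearly a 47 percent total gain, not 40 percent, which is why small differences in annualized returns matter over long holding periods.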
Holding low-cost index funds that merely track the markets, and staying with them for decades, has been a successful strategy. I'm sticking with it. But it's not without its perils now.

The war, the on-again-off-again tariffs, and the Trump administration's disregard for traditions and for many of the checks and balances in the U.S. government have changed the investing environment. Lighter regulation may help companies churn out profits; it may also cause serious harm. Risks abound.

There are dangers within the markets themselves. Artificial intelligence is a wild card. The technology has propelled market gains and led to the greatest stock market concentration in decades, as I've pointed out. Many market bulls expect A.I. to lead to heightened productivity throughout the economy, and to bigger profits for its leading practitioners. If this doesn't happen, however, it may be difficult to sustain current market valuations. Bullish strategists say that because earnings have increased while stock prices have not kept pace, the overall stock market, and the high-flying tech stocks, are more reasonably priced than they were six months ago.

Can investors count on the U.S. stock market to continue to outperform the others? I've got no crystal ball. The enduring strength of the U.S. economy and markets is a marvel. But the shifts underway in the world worry me. So I'm hedging my bets, as I have for some time. Stocks and bonds from throughout the world in modest allocations, along with safe stashes of cash, help me sleep. Do what it takes to get you through the night.

Anthropic has introduced a new AI tool called Claude Managed Agents, now available in public beta on the Claude platform. The tool is designed to help developers build and run AI agents without managing complex backend systems. Earlier, this process required significant computing infrastructure and security setup, but now developers can focus on defining tasks while Anthropic handles the rest.

The Claude Managed Agents tool can help reduce development time from months to days. It includes features like secure code execution, data tracking, and permission controls. The agents can run for longer periods, continue working even if the connection drops, and support multiple agents collaborating on complex tasks.

How does Claude Managed Agents work? Claude Managed Agents works by handling the entire process of building and running AI agents in the background. Developers give a goal, and the system breaks it into steps, uses tools, and completes tasks automatically. It includes built-in features like memory, secure environments, and tool access to help agents work safely. These agents can read files, run code, and interact with systems while continuing tasks independently in the cloud. This setup removes the need for complex infrastructure and allows developers to focus on what the agent should do instead of how it runs.

To use Claude Managed Agents, you need an API key, beta access, and permission for advanced features like memory and multi-agent coordination.

Claude Managed Agents are useful when developers need AI to handle complex, multi-step tasks over a longer time instead of simple, one-time requests. They are best used when tasks require planning, using tools, or running continuously in the background. For example, they can manage workflows, analyze data, or automate processes without constant input.
They are especially helpful for developers or teams building applications where AI needs to act independently and complete tasks step by step.

What is the pricing of Claude Managed Agents? According to Anthropic, Claude Managed Agents is priced on consumption: standard Claude Platform token rates apply, plus $0.08 per session-hour of active runtime. The tool is accessible through the Claude Console, documentation, and a command-line interface, with support for integration into existing developer workflows.
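The goal-to-steps workflow described above can be sketched as a minimal toy loop. This is a conceptual illustration only, not Anthropic's implementation: the `plan` function stands in for the model's planning step, and the `tools` mapping stands in for the platform's built-in tool access; all names here are invented.

```python
def run_agent(goal, plan, tools):
    """Toy agent loop: decompose a goal into (tool, argument) steps,
    then execute each step with the matching tool, collecting results.

    `plan` and `tools` are stand-ins invented for illustration; in the
    real service the model plans and the platform executes tools.
    """
    results = []
    for tool_name, arg in plan(goal):   # 1. break the goal into steps
        tool = tools[tool_name]         # 2. pick the tool for the step
        results.append(tool(arg))       # 3. execute and record output
    return results

# Hypothetical two-step "analysis" goal with fake tools.
tools = {
    "read": lambda path: f"contents of {path}",
    "summarize": lambda text: text.upper(),
}
plan = lambda goal: [("read", "report.txt"), ("summarize", "q1 revenue grew")]
print(run_agent("summarize the report", plan, tools))
```

The point of the hosted service is that the loop, the sandbox the tools run in, and the persistence of `results` across disconnects are all managed for you; the developer supplies only the goal and the tool permissions.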

WASHINGTON, April 9 (Reuters) - Small defense industry artificial intelligence startups are suddenly fielding calls from generals, combatant commanders and deep-pocketed investors, after the souring relationship between the Pentagon and its once-favored AI vendor, Anthropic, reinforced the military's need to diversify its AI providers.

In the weeks since the Department of Defense's troubled relationship with Anthropic burst into public view and led to the company being kicked out of the U.S. military, new defense-focused AI companies like Smack Technologies and EdgeRunner AI say they have experienced a shift in interest that would have been unimaginable just months ago. They have received a surge of overtures about possible contracts and meeting requests, and been approached by investors who previously showed no interest.

The Pentagon's growing animosity toward its top AI provider, Anthropic, has opened up opportunities for smaller rivals, who have long sought a foot in the door at the world's most lucrative government customer. A defense contract can lead to more business with other branches of the U.S. government, and is a useful signal of trust and safety for potential commercial clients.

"We've seen a massive increase in demand from customers and the government to get AI solutions fielded since Anthropic was declared a supply-chain risk," said Tyler Sweatt, CEO of Second Front, a company that helps technology firms meet the requirements needed to operate on secure Pentagon networks. "Our customers are turning to us as the Pentagon turns to them to deploy quickly in the wake of the Anthropic blowup."
Since the Pentagon deemed Anthropic's products a "supply-chain risk" in March and the two sides became embroiled in a lawsuit, the military has expressed increasing interest in AI startups like Smack Technologies, saying, "We want more, we want demos, let's talk about how we can move faster," said Andrew Markoff, co-founder and chief executive of the 19-person startup based in El Segundo, California. In late March, a judge temporarily blocked the Pentagon's blacklisting of Anthropic.

Tyler Saltsman, co-founder and chief executive of EdgeRunner AI, described a similar experience. His company had been waiting more than a year for a Space Force contract to clear the Pentagon's procurement machinery. It was signed within weeks of the Anthropic situation breaking into the open. "I can't prove that the Anthropic drama sped this up," Saltsman said, "but I have a sneaky suspicion it did."

"The Pentagon will continue to rapidly deploy frontier AI capabilities to the warfighter through strong industry partnerships across all classification levels," a Pentagon official said.

One Pentagon technologist previously told Reuters that the falling-out with Anthropic, and the realization that the Defense Department was heavily dependent on one AI provider, forced the department to diversify its AI providers.

SMACK'S MARINE CORPS CONTRACT SPEEDS UP

For Smack, the clearest example of the post-Anthropic acceleration involves the Marine Corps. The company won a contract with the Marine Corps in March 2025 and delivered a successful prototype by October -- software that compresses what is normally a months-long operational planning process into roughly 15 minutes.

Despite the successful prototype, momentum stalled. Full production had been budgeted for fiscal year 2027 -- meaning October 2027 at the earliest. Through the 2025 holiday period and into early 2026, there was no clear direction. Then the Anthropic uproar occurred.
Within weeks, Smack was invited to multiple meetings with the Marine Corps focused on a single question: how fast can this move into production this year? Markoff said there was "very specific guidance and movement and energy" toward getting the prototype ready for combat operations in 2026 -- an acceleration of more than a year.

The shift extended beyond the Marines. Smack holds contracts with the Navy and Air Force, and Markoff said interest came in almost immediately from U.S. Special Operations Command and others. EdgeRunner, which is deploying with Army Special Forces groups and has received a contract with the Space Force, said the Navy has also dramatically sped up engagement. Meetings that had been biweekly or monthly are now happening multiple times a week.

Both EdgeRunner and Smack are now racing to get their systems operating at higher security classification levels -- the gateway to the most operationally significant use cases and the largest military contracts. EdgeRunner said the military has told the company it can get to IL-6, a security designation enabling access to secret and top-secret data, within three months -- a timeline Saltsman described as remarkable, given that the process normally takes 18 months or longer. The acceleration, he said, is being driven partly by pressure from Pentagon leadership to cut through procurement bureaucracy, and partly by the urgency the Anthropic situation has injected into the department's AI strategy.

(Reporting by Mike Stone in Washington; Editing by Chris Sanders and Matthew Lewis)