The latest news and updates from companies in the WLTH portfolio.
A rocket landing is today's focus as SpaceX plans a Falcon 9 launch from Vandenberg Space Force Base on Friday night. The window runs from 7:39 p.m. to 11:39 p.m. on April 10, with the Starlink 17-21 mission headed to low-Earth orbit. A booster landing attempt is planned after stage separation, adding to the visible spectacle for people watching along the coast.

Launch Window Opens Friday Night

The launch is scheduled to lift off from Vandenberg Space Force Base, with the rocket carrying the Starlink 17-21 mission. Starlink is SpaceX's high-speed broadband satellite internet service designed to reach rural and remote communities. That puts the rocket landing squarely in the spotlight for anyone tracking the mission from Southern California. SpaceX will aim to land the rocket's first-stage booster on the "Of Course I Still Love You" droneship stationed in the Pacific Ocean so it can potentially be used again.

What Viewers Should Know

A live webcast is set to begin about five minutes before liftoff. The launch is being watched closely because Vandenberg missions often draw attention from communities up and down the coast, where the white plume can be visible from far beyond the base. The mission is one of the clearest examples yet of a rocket landing becoming both a technical target and a public viewing event. For nearby observers, the countdown is as much about the sky show as it is about the launch itself.

Why the Pacific Landing Matters

The booster landing attempt is an important part of the mission plan because the first stage is intended to come back for possible reuse. That step comes after stage separation and depends on the recovery effort in the Pacific Ocean. This is why the landing attempt is drawing such close attention: the launch is only part of the story, and the landing adds another layer of suspense after liftoff.
Local Interest Keeps Building

Interest in Vandenberg launches remains high because rockets from the base are often visible across Southern California. People in the region regularly watch launches from beaches and other vantage points, treating them as a recurring night-sky event. For this mission, the key details are simple: Friday night, Vandenberg, a Falcon 9 rocket, and a planned booster landing on a droneship in the Pacific. If conditions hold, the rocket landing will be the part of the mission that keeps eyes fixed on the sky even after the rocket climbs out of view.

Anthropic's announcement about its powerful new AI model this week sparked a wave of warnings and dire predictions, but not everyone is buying into the hype. Anthropic said Tuesday it was not releasing Mythos, its next-generation AI model, due to cybersecurity concerns. The company said Mythos was so powerful that non-experts could use it to exploit vulnerabilities in major operating systems. Instead of a wide release, Anthropic said it was making Claude Mythos Preview available to 11 external organizations, including Google, Microsoft, Amazon Web Services, JPMorganChase, and Nvidia, as part of "Project Glasswing." Anthropic's claims about what Mythos was capable of quickly sparked concern, as well as a meeting between Fed Chair Jerome Powell, Treasury Secretary Scott Bessent, and the heads of major US banks. Some AI commentators warned about the cybersecurity implications, while others cast doubt on the significance of the Anthropic announcement, saying Mythos didn't appear to be leaps and bounds ahead of other models and that it was more likely a matter of good PR. Should Mythos have security execs quaking in their boots? Is Anthropic simply a master at marketing its models? We rounded up what smart people are saying as the internet debates the latest AI development.
NEW YORK -- Calls are increasing inside Congress for investigations into the prediction market platform Polymarket after the latest instance in which groups of anonymous traders made strategic, well-timed bets on a major geopolitical event hours before it occurred. On Wednesday, The Associated Press reported that at least 50 brand-new accounts on Polymarket placed substantial bets on a U.S.-Iran ceasefire in the hours, even minutes, before President Donald Trump announced the ceasefire late Tuesday on social media. These were the sole bets made on Polymarket through these accounts. In January, an anonymous Polymarket user made a $400,000 profit by betting that Venezuelan leader Nicolas Maduro would be out of office, hours before Maduro was captured. In the hours before the start of the Iran war, another account made roughly $550,000 in a series of trades effectively betting that the U.S. would strike Iran and that Ayatollah Ali Khamenei would be removed from office. Such prescient wagers have raised eyebrows -- and accusations that prediction markets are ripe for insider trading. And the issue goes beyond these three geopolitical events, according to at least one report. Researchers at Harvard University released a paper last month in which, using public blockchain data, they estimated that $143 million in profits have been made on Polymarket by individuals who potentially had insider information about events ranging from Taylor Swift's engagement to the awarding of the Nobel Peace Prize last year. Rep. Ritchie Torres, D-N.Y., who sits on the House Financial Services Committee as well as the subcommittee on digital assets and financial technology, sent a letter Thursday to the Commodity Futures Trading Commission demanding the regulator review and investigate these well-timed trades. The CFTC regulates the derivatives markets, which include prediction markets.
"This pattern raises serious concerns that certain market participants may have had access to material nonpublic information regarding a market-moving geopolitical event," Torres wrote. The letter was shared exclusively with The AP. "What is the statistical likelihood of anyone other than an insider trader placing a winning bet 12 minutes before a market-moving presidential announcement?" Torres said in an interview with the AP. "There are two answers: God, or an insider trader. And something tells me that God is not placing bets around Donald Trump's posts on Truth Social." Prediction market platforms like Kalshi and Polymarket allow their users to bet on everything from whether it will rain in Phoenix, Arizona, next week to whether the Federal Reserve will raise or lower interest rates. At this time, U.S. residents have limited access to Polymarket, which was banned from the U.S. in 2022. The company has moved to reenter the country by acquiring a CFTC-licensed exchange and clearinghouse, giving it a legal pathway to start offering contracts domestically, and has begun a limited rollout in the U.S. Polymarket also operates a separate, crypto-based platform offshore that remains outside U.S. jurisdiction; that platform accounts for most of its activity. Sen. Richard Blumenthal, D-Conn., sent a letter to Polymarket on Thursday demanding the company explain why it continues to allow trades on war and violence, as well as whether the company is making any efforts to keep insiders from trading on the platform. "Polymarket has become an illicit market to sell and exploit national security secrets unlike any in history, and by extension a potential honeypot for foreign intelligence services watching for those same suspicious bets and wagers," Blumenthal wrote. Republicans have also criticized these platforms and called for bans on these sorts of bets.
There are at least two bipartisan bills pending in Congress, one in the House and one in the Senate. "We don't want to imagine a world where America's adversaries use prediction markets to anticipate our next move," said Rep. Blake Moore, R-Utah, after the release of the AP's findings on the ceasefire wagers. Polymarket did not immediately reply to a request for comment. The stakes are high for both Kalshi and Polymarket as they seek approval to operate nationwide in the U.S., particularly in the lucrative sports betting market. Kalshi, which is already regulated in the U.S., and its executives have a goal of making the company the nation's dominant prediction market. Kalshi has also leaned heavily into sports, which critics say effectively makes it a sports betting platform that dabbles in event-based contracts on the side. Both companies have announced partnerships with sports teams and even news organizations to broaden their reach. The AP has an agreement to sell U.S. elections data to Kalshi. The competition also carries political overtones: Donald Trump Jr. is an investor in Polymarket through his venture capital firm, 1789 Capital, and separately serves as a paid strategic adviser to Kalshi.

Perplexity AI's monthly revenue jumped approximately 50% in one month, pushing its estimated annual recurring revenue (ARR) above $450 million as of March 2026. The surge followed Perplexity's strategic pivot from its core AI-powered search and chatbot experience toward autonomous AI agents that can perform complex tasks on behalf of users (e.g., executing workflows rather than just answering questions). A major catalyst was the launch of Perplexity Computer, an agentic tool, combined with a shift to usage-based pricing that charges for heavy usage beyond subscription credits. This model appears to have unlocked significantly higher monetization from power users and enterprises. Perplexity had been scaling rapidly but at a more measured pace, with estimates of around $100-200M ARR earlier in 2025 and some projections of ~$232M for 2025 overall. The 50% monthly jump represents one of its sharpest accelerations to date, moving it into a much higher league for an AI startup. This development highlights a broader trend in the AI industry: the shift from chatbots and search, which compete heavily with free tools like Google or basic LLMs, to agentic systems that deliver tangible productivity gains and justify premium, usage-tied pricing. Users seem willing to pay more when AI doesn't just inform but acts. Perplexity isn't alone -- similar momentum is visible elsewhere, with venture funding tilting heavily toward agent-related technologies. However, Perplexity still faces challenges, including ongoing publisher lawsuits over how its search features handle content and competition from bigger players. The numbers come primarily from a Financial Times report citing internal figures, and they've been widely corroborated across tech outlets. It's an impressive short-term validation of the "agents are the future" thesis, though sustaining that velocity will depend on execution, retention, and how well the agents perform in real-world use.
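The usage-based pricing mechanics described above (a flat subscription with included credits, plus metered charges for heavy usage beyond them) can be sketched in a few lines. The fee, credit, and per-credit rate figures below are invented for illustration and are not Perplexity's actual pricing.

```python
# Hypothetical sketch of usage-based pricing layered on subscription credits.
# All dollar amounts and credit quantities here are made up for the example.

def monthly_bill(base_fee: float, included_credits: float,
                 credits_used: float, overage_rate: float) -> float:
    """Return the total charge: flat subscription plus metered overage."""
    overage = max(0.0, credits_used - included_credits)
    return base_fee + overage * overage_rate

# A subscriber on a $20 plan with 1,000 included credits who burns 4,000
# credits at $0.01 per extra credit pays $20 + 3,000 * $0.01 = $50.
print(monthly_bill(20.0, 1000, 4000, 0.01))
```

The point of the structure is the one the article makes: light users pay a predictable flat fee, while power users and enterprises generate revenue roughly in proportion to the work the agents do for them.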
* Low adoption (e.g., fewer than 20 PRs per month per developer) leads to poor returns.
* Some teams see gains in velocity but struggle to translate them into overall delivery metrics without proper tooling and telemetry.
* One RCT on experienced open-source developers found that AI (including Claude) increased completion time by 19% in some setups, possibly due to review overhead or slop code requiring rework.
* Costs can escalate: Opus-heavy usage burns tokens faster, so optimization (routing simple tasks to Sonnet/Haiku, prompt caching, model switching) is essential for positive ROI. Some power users report high personal compute value, but enterprise bills require governance.
* Concerns persist around technical debt, deskilling, code maintainability, and reduced job satisfaction.
* Gains are often strongest in debugging and understanding codebases rather than pure generation.
* Measurement is hard: "feels faster" isn't enough -- teams need observability for cost-to-value ratios, PR impact, etc.
* Use Opus for complex reasoning and planning, Sonnet for efficient execution.
* Agentic features (multi-step workflows, persistent context via CLAUDE.md, large context windows) amplify gains over basic autocomplete.
* Adopt analytics for usage vs. outcomes; focus on high-value tasks.
* Early high-value coding use cases evolve; pair with training for best results.
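The cost-optimization idea above (route simple tasks to cheaper models, reserve Opus for complex reasoning) can be sketched as a small dispatcher. The model names mirror the text, but the complexity heuristic and the price-per-token figures are illustrative assumptions, not Anthropic's actual pricing or routing logic.

```python
# Illustrative sketch of cost-aware model routing. The prices and the crude
# keyword/length heuristic are invented for the example; check current vendor
# pricing and use a real task classifier before relying on this.

PRICES = {"opus": 15.0, "sonnet": 3.0, "haiku": 0.8}  # assumed $ per 1M input tokens

def route_model(task: str) -> str:
    """Pick a model tier from a crude complexity heuristic."""
    hard_markers = ("architecture", "refactor", "plan", "design")
    if any(m in task.lower() for m in hard_markers):
        return "opus"      # complex reasoning and planning
    if len(task) > 200:
        return "sonnet"    # substantial but routine execution
    return "haiku"         # short, simple tasks

def estimated_cost(task: str, input_tokens: int) -> float:
    """Estimate input-token cost for the tier the task routes to."""
    return PRICES[route_model(task)] * input_tokens / 1_000_000

print(route_model("plan the service architecture"))  # routes to opus
print(route_model("fix typo in README"))             # routes to haiku
```

Even a heuristic this crude illustrates the economics: sending short mechanical tasks to the cheapest tier cuts their token cost by more than an order of magnitude relative to running everything on the top model.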

April 10 (Reuters) - U.S. Vice President JD Vance and Treasury Secretary Scott Bessent questioned leading tech CEOs about AI model security and how to respond to cyber attacks a week before Anthropic released its new Mythos model, CNBC reported on Friday. Anthropic's Dario Amodei, Alphabet's Sundar Pichai, OpenAI's Sam Altman, Microsoft's Satya Nadella and the heads of Palo Alto Networks and CrowdStrike were on the call, according to the report. Anthropic declined to comment, while Alphabet, OpenAI, Microsoft, Palo Alto and CrowdStrike did not immediately respond to Reuters' requests for comment. Earlier this week, Anthropic launched a powerful AI model but held off on releasing it widely over concerns that it could expose hidden cybersecurity vulnerabilities. Only a group of around 40 tech heavyweights, including Microsoft and Google, would have access to Anthropic's "Claude Mythos" model. The startup had said it had been in ongoing discussions with the U.S. government about the model's capabilities. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Leroy Leo)

Anthropic's most powerful AI model to date -- Claude Mythos -- has triggered a rare emergency meeting between U.S. bank CEOs, Federal Reserve Chair Jerome Powell, and Treasury Secretary Scott Bessent. The extraordinary gathering this week reflects how seriously the financial sector is taking the cybersecurity risks tied to Mythos and the company's new initiative, Project Glasswing.

What Is Claude Mythos, and Why Is It Not Public?

Anthropic released Claude Mythos Preview to a limited group of tech companies, citing the potential damage a wider public release could cause. The system card for Mythos Preview states that its large increase in capabilities led Anthropic to decide against making it generally available. Anthropic has privately warned top government officials that Mythos makes large-scale cyberattacks significantly more likely this year. The company acknowledges that the same capabilities that can strengthen cyber defenses can also be weaponized by attackers. Mythos Preview is a general-purpose model, yet in pre-release testing, Anthropic found its cybersecurity capabilities were surprisingly advanced compared with previous models. Logan Graham, who leads offensive cyber research at Anthropic, confirmed the model is advanced enough not only to identify undisclosed software vulnerabilities but also to weaponize them.

Project Glasswing: AI Fighting AI at Scale

Anthropic formed a coalition called Project Glasswing that includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The companies use Mythos in preview mode to detect software bugs and fix them before hackers gain access to the technology. As part of Project Glasswing, Anthropic is providing over 50 tech organizations access to Mythos Preview, along with over $100 million in usage credits, to find and fix vulnerabilities in foundational systems representing a large portion of the world's shared cyberattack surface.
In just the past few weeks, Claude Mythos Preview identified thousands of zero-day vulnerabilities -- many of them critical -- across every major operating system and every major web browser, along with a range of other important pieces of software.

Bank CEOs and the White House Emergency Meeting

Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with major U.S. bank CEOs this week to discuss the possible cyber risks raised by Anthropic's Mythos model. The bank heads were already in Washington for a Financial Services Forum board meeting when a special gathering was called to discuss Mythos. The CEOs of Bank of America, Citi, Goldman Sachs, Morgan Stanley, and Wells Fargo attended the Anthropic meeting; JPMorgan's Jamie Dimon was the only major banking CEO who could not attend. TD Securities analyst Jaret Seiberg warned that if Claude Mythos helps bad actors find coding vulnerabilities faster than banks can fix them, it could destabilize a major bank and quickly become a systemic threat if it shatters confidence in the ability to store wealth and transact using financial institutions.

What Anthropic's Mythos Means for Cybersecurity Stocks

JPMorgan reiterated overweight ratings on CrowdStrike Holdings and Palo Alto Networks following Anthropic's Project Glasswing announcement, setting a 12-month price target of $475 on CrowdStrike and $200 on Palo Alto Networks. JPMorgan analyst Brian Essex described both firms as essential layers in the defensive stack rather than competitive targets. Cybersecurity shares surged after Anthropic unveiled Project Glasswing, reflecting market confidence that the initiative will drive sustained demand for advanced security solutions across enterprise and critical infrastructure sectors.
Anthropic's Claude Mythos and Project Glasswing represent a defining moment in AI development -- one where the industry's most capable model is being deployed not for public use, but as a shield against the very threats it could unleash.

The leaders of some of America's largest banks were warned by a top government official this week about a new artificial intelligence model from Anthropic that could lead to heightened risks of cyberattacks, according to three people briefed on the matter but not permitted to speak publicly. The stark message was delivered on Tuesday morning by Treasury Secretary Scott Bessent to a small group of chief executives, including those from Bank of America, Citi and Wells Fargo, in a hastily arranged meeting in Washington, D.C. Mr. Bessent, the people said, cautioned the banks that allowing the new A.I. software to run through their internal computer systems could pose a serious risk to sensitive customer data. The Federal Reserve chair, Jerome H. Powell, who has spoken publicly in recent weeks about the threat of cyberattacks against the financial system, also attended Tuesday's meeting with the bank leaders. The warnings relate to a new intelligence model that Anthropic named Claude Mythos Preview. Anthropic has said the model is particularly good at identifying security vulnerabilities in software that human developers could not find. At Tuesday's meeting, the people briefed on the matter said, the bank executives were told that the new model might be so effective at finding security weaknesses inside banks that hackers or other so-called third-party bad actors could get their hands on the information and exploit it. Anthropic itself has warned about the risks. The company said this week that the model's advancements were so powerful and potentially dangerous that they could not safely be released to the public yet and would instead be contained to a coalition of 40 companies that it called "Project Glasswing." That group includes at least one bank, JPMorgan, the nation's largest, which earlier said it would use the software "to evaluate next-generation A.I. tools for defensive cybersecurity across critical infrastructure." 
The Trump administration and Anthropic are locked in a legal battle over the Defense Department's recent designation of the company as a "supply chain risk." The government issued that designation after Anthropic insisted on putting limits on the use of its A.I. technology in war. In a statement, a Treasury spokesperson said, "This week's meeting was convened by Secretary Bessent to initiate a process for planning and coordination of our approach to the rapid developments taking place in A.I." The existence of the meeting was reported earlier by Bloomberg News. The Fed declined to comment. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," Kevin A. Hassett, director of the National Economic Council, told Fox News on Friday. "There's definitely a sense of urgency." Logan Graham, an Anthropic executive, said in a statement that the new technology would help "secure infrastructure that is critical for global security and economic stability." Rob Copeland is a finance reporter for The Times, writing about Wall Street and the banking industry.

Anthropic's Project Glasswing found decades-old bugs in minutes. Discover why traditional patching is no longer enough and how to contain threats at the browser *** This is a Security Bloggers Network syndicated blog from Menlo Security Blog authored by Menlo Security Blog. Read the original post at: https://www.menlosecurity.com/blog/the-ai-arms-race-just-went-public-what-anthropics-project-glasswing-means-for-every-security-team

The latest trading session saw NRG Energy (NRG) ending at $102.68, denoting a +0.23% adjustment from its last day's close. The stock's performance was ahead of the S&P 500's daily loss of 0.76%. Meanwhile, the Dow registered a loss of 0.28%, and the technology-centric Nasdaq decreased by 1.2%. The power company's stock has climbed by 4% in the past month, exceeding the Utilities sector's gain of 1.37% and the S&P 500's gain of 2.71%. The upcoming earnings release of NRG Energy will be of great interest to investors. In that report, analysts expect NRG Energy to post earnings of $1.05 per share, which would mark a year-over-year decline of 7.89%. Investors should also note any recent changes to analyst estimates for NRG Energy. Such recent modifications usually signify the changing landscape of near-term business trends. Consequently, upward revisions in estimates express analysts' positivity towards the company's business operations and its ability to generate profits. Our research demonstrates that these adjustments in estimates directly associate with imminent stock price performance. To exploit this, we've formed the Zacks Rank, a quantitative model that includes these estimate changes and presents a viable rating system. The Zacks Rank system, ranging from #1 (Strong Buy) to #5 (Strong Sell), has an impressive, externally audited track record of outperformance, with #1 stocks returning an average annual gain of +25% since 1988. Over the past month, there's been a 0.4% fall in the Zacks Consensus EPS estimate. Currently, NRG Energy is carrying a Zacks Rank of #3 (Hold). In terms of valuation, NRG Energy is currently trading at a Forward P/E ratio of 13.65. This represents a discount compared to its industry's average Forward P/E of 16.72. One should further note that NRG currently holds a PEG ratio of 1.21. Unlike the widely used P/E ratio, the PEG ratio also accounts for the company's projected earnings growth.
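As a quick check on the numbers above: PEG is the forward P/E divided by the projected earnings growth rate, so the quoted ratios imply an expected growth rate of roughly 11% for NRG. A minimal sketch using only the article's reported figures (the implied growth rate is derived, not reported):

```python
# PEG ratio arithmetic using the figures quoted in the article.
forward_pe = 13.65   # NRG's forward P/E
peg = 1.21           # NRG's PEG ratio

# PEG = (P/E) / growth  =>  growth = (P/E) / PEG
implied_growth = forward_pe / peg
print(f"Implied projected EPS growth: {implied_growth:.2f}%")  # ~11.28%
```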
The Utility - Electric Power industry was holding an average PEG ratio of 2.54 at yesterday's closing price. The Utility - Electric Power industry is part of the Utilities sector. This industry currently has a Zacks Industry Rank of 140, which puts it in the bottom 45% of all 250+ industries. The Zacks Industry Rank gauges the strength of our industry groups by measuring the average Zacks Rank of the individual stocks within the groups. Our research shows that the top 50% of rated industries outperform the bottom half by a factor of 2 to 1. Remember to use Zacks.com to follow these and more stock-moving metrics during the upcoming trading sessions.


Lawmakers are grappling with the crypto-powered betting industry as it explodes in popularity. Polymarket has defended how it fights insider trading after a US senator wrote a letter to the crypto-powered betting market's CEO, accusing it of being an "illicit market." US Senator Richard Blumenthal on Thursday accused the platform of letting users profit off of national security secrets and slammed it for opening a market to bet on the rescue of a US soldier stranded in Iran. "Polymarket operates in full compliance with applicable law, and our insider trading rules are the exact lines that the CFTC and courts draw for derivatives markets," Olivia Chalos, Polymarket's deputy chief legal officer, wrote on X. She added that the platform shares Blumenthal's "commitment to national security and market integrity." The clash comes as regulators and lawmakers grapple with prediction markets' skyrocketing popularity. Politicians from both parties have spoken out against the industry, but it has a powerful ally in Michael Selig, the chair of the Commodity Futures Trading Commission. Blumenthal's letter -- the latest from US lawmakers -- claims that prediction markets turn a blind eye to or "encourage the use of secrets." The lawmaker demanded Polymarket founder Shayne Coplan explain why the company allows bets on deaths of heads of state and military events, and whether it reports suspicious accounts to the government. Blumenthal was particularly concerned with the fact that Polymarket let users place bets on when the US might rescue warplane crew members who were shot down in Iran. Polymarket took down the market after other lawmakers flagged it earlier this week. "Polymarket's new rules on insider trading are paltry, inadequate, and late," added Blumenthal. In response to DL News' request for comment, Polymarket pointed to new rules it released last month aimed at combatting bettors' use of stolen information or illegal tips. 
Polymarket exploded in popularity and relevance in 2024 when users bet billions of dollars on the US presidential election and, in favoring Donald Trump, outperformed traditional pollsters, who had pegged Kamala Harris as the likely winner. The platform allows users to deposit USDC on the Polygon blockchain to make bets. Since its growth, major companies like trading app Robinhood and crypto exchange Coinbase have started offering prediction market services. Lawmakers this year have introduced a number of bills to try to regulate betting platforms -- with one even attempting to block federal officials from making bets on the site.

Anthropic dropped a sobering assessment this week: within two years, AI models will uncover vast numbers of software vulnerabilities that have sat unnoticed in code for years -- and chain them into working exploits. The company's security teams released detailed defensive recommendations alongside Project Glasswing, their initiative to deploy Claude Mythos Preview's capabilities for cyber defense. The math here isn't complicated. If attackers can use frontier models to automate vulnerability discovery and exploit generation, the window between a patch dropping and a working exploit appearing shrinks dramatically. Anthropic's security engineers have watched this happen in their own testing. According to Anthropic's technical findings, AI models excel at recognizing signatures of known vulnerabilities in unpatched systems. Reversing a patch into a working exploit -- exactly the kind of mechanical analysis these models handle well -- used to require specialized skills. Now it's becoming automated. The company noted that publicly available models below Mythos capability levels can already find serious vulnerabilities that traditional code reviews missed for extended periods. Mozilla Firefox vulnerabilities discovered through AI scanning serve as one documented example. Anthropic's recommendations prioritize controls that hold even against attackers with unlimited patience and AI assistance. Friction-based security measures -- extra pivot hops, rate limits, non-standard ports -- lose effectiveness when adversaries can grind through tedious steps automatically. Their top priorities:

* Patch velocity matters more than ever. Internet-facing applications should receive patches within 24 hours of an exploit becoming available. The CISA Known Exploited Vulnerabilities catalog should be treated as an emergency queue. Anthropic recommends using EPSS (Exploit Prediction Scoring System) for prioritizing everything else.
* Prepare for 10x vulnerability report volume. Over the next two years, intake and triage processes will face pressure they've never experienced. Organizations still running weekly spreadsheet meetings won't keep pace.
* Scan your own code with frontier models before attackers do. This was Anthropic's single most emphasized recommendation. Legacy code that predates current review practices -- especially code whose original authors have moved on -- represents the highest-value target for proactive scanning.

The guidance pushes hard toward hardware-bound credentials and identity-based service isolation. A compromised build server shouldn't reach production databases. A compromised laptop shouldn't touch build infrastructure. Static API keys, embedded credentials, and shared service-account passwords are described as "among the first things an attacker with model-assisted code analysis will find." Organizations without dedicated security teams got specific advice: enable automatic updates everywhere, prefer managed services over self-hosting, use passkeys or hardware security keys, and turn on free security tooling from code hosts like GitHub's Dependabot and CodeQL. Open-source maintainers should expect increased vulnerability report volume -- some valuable, some automated noise. Publishing a SECURITY.md with clear intake processes helps separate signal from spam. Anthropic committed to updating this guidance as Project Glasswing progresses. For enterprises tracking SOC 2 and ISO 27001 compliance, most recommendations map directly to existing controls. The difference now is urgency.
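The patch-prioritization advice above (treat the CISA KEV catalog as an emergency queue, then sort the remainder by EPSS score) can be sketched as a simple triage function. The CVE IDs and scores below are made-up sample data, and this is an illustrative policy sketch, not Anthropic's tooling or a real feed.

```python
# Sketch of the triage policy described above: KEV-listed vulnerabilities jump
# the queue, everything else sorts by descending EPSS score. Sample data only.

def triage(vulns, kev_ids):
    """Order vulnerabilities: KEV entries first, then by descending EPSS."""
    return sorted(vulns, key=lambda v: (v["id"] not in kev_ids, -v["epss"]))

vulns = [
    {"id": "CVE-0000-0001", "epss": 0.02},
    {"id": "CVE-0000-0002", "epss": 0.91},
    {"id": "CVE-0000-0003", "epss": 0.10},
]
kev = {"CVE-0000-0003"}  # known-exploited: handle as the emergency queue

for v in triage(vulns, kev):
    print(v["id"])  # 0003 first (KEV), then 0002, then 0001
```

In practice the inputs would come from the published KEV catalog and daily EPSS feed rather than hand-built lists, but the ordering rule is the whole policy.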

* Elon Musk's entrepreneurial approach focuses on solving unique problems rather than financial gain.
* Tackling unique challenges can lead to greater success by fostering innovation and societal value.
* The primary constraint in building civilization is not capital but the availability of excellent engineers.
* Hiring young, unproven engineers and granting them responsibility can boost a company's innovation rate.
* Musk's productivity strategy involves complete control over his time and resource allocation.
* The aerospace industry lags behind automotive practices in production efficiency.
* Identifying and addressing the biggest limiting factor can exponentially improve productivity.
* SpaceX's success is attributed to its unique culture, habits, and problem-solving approach.
* Musk's commitment to his missions distinguishes him from other entrepreneurs.
* A maniacal devotion to a mission can drive extraordinary commitment and innovation.
* Musk's risk-taking is driven by ideological and philosophical motivations.
* Engineering talent is crucial for innovation and company success.
* SpaceX's organizational dynamics contribute significantly to its achievements.

Guest intro

Eric Jorgenson is the CEO of Scribe Media. He compiled the writings, podcasts, and interviews of Naval Ravikant into The Almanack of Naval Ravikant, which spread virally and has been read by millions for free. His latest work, The Book of Elon, assembles decades of Elon Musk's interviews into the most complete portrait of his thinking.

Elon Musk's unique approach to entrepreneurship

* Elon Musk's approach to entrepreneurship is fundamentally about solving unique problems rather than chasing financial gain. -- Eric Jorgenson
* Musk chooses to work on projects others deem too crazy, like space exploration.
* He literally says nobody else is crazy enough to try space so that's the company I have to go build because nobody else is working on it. -- Eric Jorgenson
* Focusing on unique problems encourages innovation and societal value.
* The thing that you can do that nobody else is working on is actually more likely to be successful... you're solving a problem you're adding a capability to humanity. -- Eric Jorgenson
* Musk's philosophy contrasts with typical business motivations that focus on financial metrics.
* Tackling unique challenges can yield better outcomes in entrepreneurship.
* Musk's approach is about adding capabilities to humanity, not just financial success.

The importance of engineering talent

* The fundamental constraint on building and growing civilization is not capital, but truly excellent engineers. -- Eric Jorgenson
* Musk emphasizes the need for engineering talent over capital in innovation.
* He has all the money he needs; the constraint is truly excellent engineers. -- Eric Jorgenson
* Hiring young, unproven engineers can accelerate a company's iteration rate.
* He really biases towards hiring young unproven engineers and then giving them like a shocking amount of accountability and responsibility. -- Eric Jorgenson
* This approach leads to faster innovation and development.
* Engineering talent is crucial for the growth of companies and civilization.
* Musk's strategy focuses on nurturing engineering talent to drive progress.

Musk's productivity strategies

* Musk's productivity involves complete control over his time and quick resource pivoting.
* He fired his scheduler because he's like I want complete perfect control of my time I wanna be able to like work on the most important thing immediately I want to be able to move whenever I want. -- Eric Jorgenson
* This strategy enhances productivity in his companies.
* Musk's operational style is crucial for understanding his leadership approach.
* The aerospace industry lags behind automotive practices in production efficiency.
* The aerospace best practice even at SpaceX was like way way way way behind how they thought about... we can there's so much room for improvement because of what they learned on model three production. -- Eric Jorgenson
* Focusing on the biggest limiting factor can lead to exponential productivity improvements.
* Always identify and attack the biggest limiter... laser in on a single constraint that if removed would unlock everything downstream. -- Eric Jorgenson

SpaceX's unique culture and success

* SpaceX's success is attributed to its unique culture, habits, and problem-solving approach.
* The answer the deepest answer is like the culture and the habits and the routines the selection of the people and then the memes that spread through the organization how they work and how they attack problems. -- Eric Jorgenson
* The organizational dynamics at SpaceX differ from other aerospace companies.
* Musk's commitment to his missions sets him apart from other entrepreneurs.
* So few entrepreneurs I think would make the bets that Elon has made because they're not truly purely as ideologically and philosophically motivated as he is. -- Eric Jorgenson
* SpaceX's culture fosters innovation and success in the aerospace industry.
* Musk's risk-taking is driven by ideological and philosophical motivations.
* The company's culture and habits contribute significantly to its achievements.

The mindset required for high-stakes ventures

The Los Angeles neighbors of Sean "Diddy" Combs, who is currently serving 50 months on prostitution-related charges, are reportedly worried about his potential return if a New York federal appeals court sides with the disgraced mogul and frees him ahead of his projected 2028 release date.

Those living in the Holmby Hills enclave -- home to many A-listers, including movie director Ridley Scott, and a slew of tech bigwigs -- along with realtors showing nearby homes, are concerned about the area's tie to the Bad Boy Records founder, 56, insiders with direct knowledge told TMZ Friday. Should Combs be freed and return to his L.A. home, neighbors fear he could, as TMZ put it, sow "the same chaos that led to [his] arrest."

Snapchat cofounder Evan Spiegel and his Victoria's Secret Angel wife Miranda Kerr also live in the tony 'hood, as do Napster cofounder and onetime Facebook president Sean Parker and socialite Alexandra von Fürstenberg. Combs' home there is reportedly such a concern that realtors are warning interested buyers about it, lest they stay mum and ultimately get sued for failure to disclose.

News of the neighborhood's handwringing comes a day after lawyers for Combs presented his case to the Second Circuit Court of Appeals, which is tasked with determining whether to overturn his July 2025 conviction and reduce his more than four-year sentence. The legal team for Combs, who is serving his 50 months at FCI Fort Dix in New Jersey, argues his sentence was unjustly influenced by the charges of which he was acquitted, namely federal sex trafficking. He was also tried at the time for racketeering. Both charges could have resulted in a life sentence.

Currently, though, Combs -- who was taken into custody in September 2024 -- is slated for release on April 15, 2028. Last month, the Bureau of Prisons' inmate locator put the date at April 25, 2028, down from the prior June 4, 2028 release date.

CoreWeave Inc. today announced that it has won a multiyear contract to supply Anthropic PBC with cloud infrastructure. The company's shares closed 10.8% higher on the news.

The data center capacity commissioned by the AI developer will start coming online later this year. CoreWeave stated that the infrastructure will "support the development and deployment of Anthropic's Claude." That suggests Anthropic plans to run both model training and inference workloads on the platform.

CoreWeave didn't specify the dollar value of the deal. However, chief executive Michael Intrator did tell Bloomberg that Anthropic will use a "variety" of Nvidia Corp. chips deployed in U.S. data centers.

The most advanced Nvidia graphics card that the company currently offers is the Blackwell Ultra. It comprises two 4-nanometer dies linked together by a custom interconnect called the NV-HBI. The technology streams data between the modules at a rate of 10 terabytes per second.

The Blackwell Ultra includes not only matrix multiplication units, the core building blocks of AI accelerators, but also more specialized circuits. There's so-called warp-synchronous memory that speeds up the task of sharing data between an AI model's components. Additionally, cores called Special Function Units speed up mathematical operations that involve transcendental numbers such as pi.

The disclosure of the Anthropic partnership comes a day after CoreWeave expanded an existing infrastructure deal with Meta Platforms Inc. The revised contract covers "initial deployments" of Vera Rubin, the successor to the Blackwell Ultra. If those deployments come online in the near future, it's possible Anthropic will also use the new chip to power its CoreWeave-hosted workloads.

The cloud provider operated more than 43 data centers with about 850 megawatts of capacity as of February. CoreWeave provides infrastructure to its largest customers through so-called Dedicated Access AZs.
Those are single-tenant clusters, meaning each is used by only one organization; cloud providers usually run workloads on infrastructure shared by multiple customers.

CoreWeave has incorporated multiple custom elements into its data centers. According to the company, its facilities can predict spikes in workloads' hardware usage and optimize their cooling equipment accordingly. CoreWeave also reconfigures the associated power distribution hardware.

The company sells its AI infrastructure alongside other offerings. CoreWeave provides instances powered solely by central processing units that can be used to run general-purpose workloads. Additionally, it offers services that ease tasks such as fixing AI training errors.

According to CoreWeave, its cloud platform is used by 9 of the world's 10 top AI model providers. That group includes not only Anthropic but also rival OpenAI Group PBC. Last year, the latter company agreed to rent $22.4 billion worth of infrastructure from CoreWeave.

Anthropic's latest AI model, Claude Mythos, will break the operational models the cybersecurity industry uses for vulnerability management. Mythos is so good at discovering and building viable exploits that it is currently being rolled out in a controlled manner under "Project Glasswing". Cybersecurity companies with early access are attesting to the blazing speed and accuracy of the model, and have declared that the traditional processes the industry uses to manage vulnerabilities in their systems are no longer viable.

The Problem is Twofold

First, new AI models like Mythos are incredibly proficient at identifying weaknesses in code that could be leveraged by cyber attackers. Mythos has found over 2,000 high-severity vulnerabilities, including in every major operating system and web browser.

The second issue is how fast workable exploits can be created to take advantage of discovered vulnerabilities. The latest AI models are highly proficient at quickly figuring out how to leverage weaknesses and chain them together across multiple vulnerabilities to gain unprecedented access to targeted systems and infrastructures.

The speed of discovery and exploitation of vulnerabilities is now well beyond what defenders can address. Currently, organizations must become aware of vulnerabilities through industry announcements, direct notification by researchers, or in rare cases through self-discovery efforts. They must then verify the vulnerability and understand its potential applicability to their environment. The vulnerability gets rated, and based upon that rating, resources are committed to develop a patch. The patch must be tested and then scheduled for roll-out in a way that it can be withdrawn if something unforeseen occurs. This takes time and may incur downtime for impacted systems.

Legacy Patching Fails

Most organizations have a cadence for addressing vulnerabilities of different severities. A patch calendar may bundle fixes to control the disruption and prioritize the most urgent fixes.
High-risk vulnerabilities may be fixed in weeks or a month, medium in several months, and low perhaps yearly, if they are fixed at all. The goal is simply to fix the vulnerabilities before attackers can create and deploy an exploit in the wild, which has typically taken months.

No longer. What took months will take minutes with Mythos and other AI models. That breaks the entire vulnerability management system that protects our digital world. For those who read my annual cybersecurity predictions (video version), we can check off prediction number 2, which outlined how AI acceleration would shrink the time-to-patch window dramatically, beyond what is currently possible for cybersecurity teams.

Predicted Strategic Outcomes

First, organizations will cut corners to speed up patch releases for the impactful vulnerabilities most likely to be exploited. This will shrink the patch window a little, but not enough, and will introduce errors in patches that have undesired impacts on users. Essentially, the number of 'bad patches' will increase.

Secondly, the increased attack velocity will drive software developers to commit much more heavily to AI tools that proactively detect and resolve vulnerabilities prior to product release. This should have happened long ago, but in the race to market, security vetting often gets deferred. The outcome will be slower product release timelines from responsible vendors. Haphazard companies will want to take advantage and continue to push vulnerable code to get to market faster. But that will eventually have consequences.

Third, there will be a massive shift for cybersecurity teams to adopt these AI tools to compete with attackers, trying to detect and address vulnerabilities before the hackers do. The tools, processes, and operating models will need to be entirely redrawn. The window of exposure will be the metric that must shrink, from months to hours.
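The window-of-exposure metric is simple to state: the time a system stays attackable between a working exploit existing and a patch landing. A minimal sketch, using hypothetical dates, shows why a monthly patch cycle that was safe under month-long exploit timelines collapses under AI-speed exploit generation:

```python
from datetime import datetime, timedelta

def exposure_window(exploit_available, patch_deployed):
    """Time a system stays attackable: from working exploit to deployed patch."""
    return max(patch_deployed - exploit_available, timedelta(0))

disclosure = datetime(2026, 4, 1, 9, 0)          # hypothetical disclosure time
monthly_patch = disclosure + timedelta(days=30)  # legacy 30-day patch cycle

# Old assumption: weaponizing a disclosure took months, so a monthly
# cadence closed the hole before an exploit even existed.
old_exploit = disclosure + timedelta(days=90)
print(exposure_window(old_exploit, monthly_patch))    # 0:00:00

# Model-assisted attacker: exploit within the hour. The same cadence now
# leaves nearly a month of exposure; only same-day patching closes it.
ai_exploit = disclosure + timedelta(hours=1)
print(exposure_window(ai_exploit, monthly_patch))     # 29 days, 23:00:00
same_day_patch = disclosure + timedelta(hours=8)
print(exposure_window(ai_exploit, same_day_patch))    # 7:00:00
```

The arithmetic makes the strategic point concrete: nothing about the patch process changed between the two scenarios, only the attacker's speed, yet exposure went from zero to nearly a month.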
Adaptation Required

The latest AI tools will compress the vulnerability lifecycle from discovery to exploitation at a pace that challenges the foundations of today's security operations. Organizations that continue to rely on legacy processes will find themselves operating outside the window of safety. Defenders can no longer rely on traditional disclosure cycles, patch cadences, or reactive security models when intelligent systems can discover and weaponize weaknesses in hours. To survive this new era, organizations must reinvent their processes around AI-driven velocity. The signals are clear; it is time to radically adapt vulnerability management or be victimized.

Now part of SpaceX, xAI wants to build the power plant to provide electricity for its data center in nearby Memphis, Tennessee, and another one coming in Southaven, Mississippi.

Elon Musk's xAI, now owned by SpaceX, is facing a new legal challenge from environmental groups in Mississippi, where the company plans to build a massive, methane gas-burning power plant in the town of Southaven.

Nonprofits including the NAACP, Young, Gifted & Green, and the Safe and Sound Coalition want Mississippi to revoke the permit that the state's environmental regulator granted to xAI last month allowing it to build the plant. Members of the groups live near xAI's local operations.

The power plant will "worsen the region's ongoing ozone problem," the groups' lawyers wrote in a petition filed to the state on Thursday, and result in "significant increases of pollutants like nitrogen dioxide and, relatedly, fine particulate matter," that would harm air quality and threaten the health of residents.

Musk's company obtained the permit from the Mississippi Department of Environmental Quality on March 10, enabling it to install 41 natural gas-burning turbines permanently in DeSoto County, Mississippi, in order to power its nearby data centers. XAI currently operates a data center called Colossus 2 in Memphis, Tennessee, just across the state line, and is building out a new facility dubbed Macrohardrrr in Southaven.

Musk, the world's richest person, is counting on the Memphis area to serve as the backbone for xAI's buildout, as he tries to compete with OpenAI, Anthropic and Google in the booming AI market. SpaceX acquired xAI in February in a transaction that values the combined entity at $1.25 trillion, ahead of what's expected to be a record IPO in the coming months.
Across the U.S., communities have grown concerned about the financial and environmental risks associated with the buildout of the power-intensive infrastructure that underpins AI models and the apps and services that work on top of them.

Represented by the Southern Environmental Law Center, the groups opposing xAI's development argue that the company, via its local subsidiary MZX Tech LLC, and the state's regulator didn't use accurate pollution estimates while considering the power plant. They also say xAI wasn't required to use the cleanest possible turbines or purchase environmental offsets, and that local stakeholders were cut out of key meetings, while government emails revealed the regulator was rushing the process under pressure from xAI.

The authorization that xAI obtained is known as a Prevention of Significant Deterioration (PSD) permit -- a federal air quality standard that applies to significant sources of pollution like utility-sized power plants. Such permits are typically granted after years of correspondence between the Environmental Protection Agency, state regulators and the public.

Representatives for xAI didn't respond to a request for comment. The MDEQ told CNBC by email on Friday that it had received the groups' "request for an evidentiary hearing regarding the permit," and that xAI would have the chance to join the proceeding as a party.

Bessent, Powell warned bank CEOs about Anthropic risks, sources say U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank CEOs this week to warn of cyber risks posed by Anthropic's latest AI model, two sources familiar with the matter said on Thursday. Rachel Faber reports.
Internal memos used by the Department of Defense to justify its decision to blacklist artificial intelligence firm Anthropic said the firm's models could not be reliably controlled for military use.

The documents, posted Friday in San Francisco federal district court, provide the most detailed explanation to date as to why the Pentagon designated Anthropic a "supply-chain risk." The memos focus on the company's refusal to support certain government uses of its technology - and its public fight with the department over those uses (see: Anthropic Fight Lays Bare How Fundamental AI Is to the DOD).

Defense officials argued that Anthropic - the only developer currently allowed in some of the military's sensitive networks - retains the exclusive ability to modify, restrict or override how its models function once deployed in Pentagon environments. "Anthropic's ability to unilaterally alter system guardrails and model weights without [Department of War] consent could fundamentally change the system's function and creates a significant operational risk," wrote Emil Michael, a former Uber executive who is now undersecretary of defense for research and engineering.

Michael also accused the AI mainstay in a March memo of using "ongoing good faith negotiations for Anthropic's own public relations." "A vendor that raises the prospect of disallowing its software to function in critical military operations, and treats its negotiations [with] the DoW primarily as tools for brand-building cannot be trusted, particularly when that marketing campaign is openly hostile to the DoW and duplicitous," he wrote.
Michael acknowledged that the military accepted a baseline level of risk by bringing an AI system into its network and accepting Anthropic's role as the software's maintainer. That risk became intolerable when "Anthropic asserted in the negotiations that it have an approval role in the operational decision chain," he said.

That, combined with what Michael terms a hostile public posture, "represents a fully mature supply-chain risk - including increased potential for model poisoning, insider threat risk, data exfiltration and denial of service - posing a direct, intolerable and material risk to our warfighting capability which warrants the designation of Anthropic as a supply-chain risk."

A third-party assessment conducted by Exiger Diligence and commissioned by the Pentagon rated Anthropic's overall risk as "medium" across cyber, operational and compliance categories.

Anthropic's dispute with the government is unfolding across multiple courts. A federal appeals court in Washington recently declined to block the supply-chain risk designation, allowing Defense to continue enforcing the blacklist (see: Court Backs Pentagon Anthropic Ban, but Fight Continues). A separate federal judge in California has taken a more limited approach, granting Anthropic partial relief that constrains how broadly the government can apply the designation. Anthropic has argued in court filings that the designation is factually and procedurally flawed. The designation threatens hundreds of millions of dollars in near-term federal revenue and potentially far more if the policy expands across government and commercial partners.

Slayyyter, stylized as "$layyyter," released her third studio album, "Wor$t Girl in America," on March 27, 2026. The project isn't just another pop release, but an adventure in storytelling, both visually and sonically. The album marks a drastic and unexpected shift from her second studio album, "Starfucker," which leaned heavily into glamour, and "Troubled Paradise," which played like a hyperpop trip. Here, $layyyter trades polish and heavy autotune for something more natural, loud and personal.

Beginning the era, $layyyter released "Beat Up Chanel$," a blend of old and new Slayyyter. The track feels transitional, balancing her glamorous pop roots with a rougher, more chaotic edge. Next came "Cannibalism," which introduced a unique mix of 1950s jive cadence with hyperpop production, creating something theatrical and unexpected. Then came "Crank," a loud, club-pop electronic track that leaned fully into chaos. After that was "Dance," which felt like a more straightforward pop song compared to the others. Lastly, "Old Technology" arrived, indie sleaze-inspired and gritty, giving listeners a clearer idea of the album's direction.

The album unfolds by placing the singles first, followed by the new material. It opens with a dreamy loop in "Dance..." that still holds onto Slayyyter's older glamour before transitioning into "Beat Up Chanel$," which begins ramping up the intensity. "Cannibalism" shifts the album further, introducing sharper, more unconventional production. "Old Technology" pushes the sound into grittier territory, while "Crank" delivers the explosive moment the album had been building toward.

"Gas Station" marks the first brand-new track. It's vulnerable and restrained, something Slayyyter isn't typically known for. The shift contrasts sharply with the chaos that opens the album. "Yes Goddd" brings the energy back with an aggressive electronic sound reminiscent of early 2010s Skrillex, adding another unexpected texture.
"Unknown Loverz" returns to dreamy nostalgia, while "Old Fling$" blends into dark pop. "I'm Actually Kinda Famous" leans self-aware and sarcastic, exploring insecurity about her place in stardom while keeping the tone playful. "$T. Loser" continues the gritty direction.

Listen closely and the messier, louder songs often appear in all caps or include dollar signs, subtly separating the new Slayyyter from her earlier sound. That use of dollar signs becomes a key detail throughout the album. It signals where Slayyyter is heading while still referencing the sounds that shaped her. Kesha's early 2010s aesthetic comes to mind, where dollar signs matched a dirty, trashy, gritty sound. Slayyyter adapts that concept, using it to reinforce the album's chaotic, experimental tone. The dollar signs create a rough, unpolished aesthetic that fits the project.

The final stretch turns introspective. "What Is It Like, To Be Liked?" creates an emotional shift before "Prayer" transitions into "Brittany Murphy," which closes the album with vulnerability. The ending feels distant from the explosive opening.

Visually, the album is just as deliberate. The smudged name across covers, dollar signs and individual visuals for each track reinforce the concept. Even teasers used VHS-style clips, adding to the gritty aesthetic. Slayyyter returns with a cohesive, conceptual and chaotic sound. "Wor$t Girl in America" is more than an album. It's visual storytelling, loud, messy and intentionally imperfect.
