News & Updates

The latest news and updates from companies in the WLTH portfolio.

Why Anthropic met with bank CEOs about AI security risks

* Key insight: Anthropic's newest AI vulnerability-hunting model, Mythos, compresses discovery-to-exploit timelines, altering the economics of cyber risk.
* What's at stake: Undetected flaws could precipitate operational outages, reputational damage and regulatory intervention.
* Forward look: Expect broader proliferation of attack-capable models; prioritize independent verification over vendor assurances.

Are the warnings about Anthropic's Claude Mythos AI model real or overblown? Bank CEOs met this week with Anthropic executives, Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent at the White House to discuss the risks Anthropic's Claude Mythos presents to the financial system, a meeting first reported by Bloomberg. A person familiar with the matter confirmed that the CEOs of Bank of America, Citi, Goldman Sachs, Morgan Stanley and Wells Fargo attended the Anthropic meeting. All the executives were already in Washington, D.C. for a Financial Services Forum meeting. The White House meeting with Anthropic "was added to their calendars at the last minute, but there was no rush to Washington," the person said.

Claude Mythos is an artificial intelligence model that detects security vulnerabilities in software. According to its maker, Anthropic, the technology can spot software flaws -- even 30-year-old vulnerabilities that no human has noticed before. The security concern is that Mythos will help bad actors find coding vulnerabilities faster than banks can fix them. "That could destabilize a big bank if customers lose access to funds or faith that their assets are secure," said TD Securities analyst Jaret Seiberg. "Such a move could quickly become a systemic threat if it shatters confidence in the ability to store wealth and to transact using financial institutions."

TJ Marlin, founder and CEO of Guardrail Technologies, said bank executives should be "wide awake" to the Mythos risk. "Mythos revealed something that should concern every institution in that room: critical vulnerabilities were already living inside systems that passed every existing security scanner," Marlin said. "Mythos did not create new risks. It illuminated risks that were already there, undetected, in production environments at the world's most sophisticated institutions. When the Treasury Secretary and the Fed Chair feel compelled to pull the CEOs of the five largest banks in America out of other meetings for an unscheduled emergency briefing, that is the moment they are legally documenting that, 'You were told.'"

Because of this danger, Anthropic has not released the technology widely. It formed a coalition called Project Glasswing that includes Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks. The companies will use Mythos in preview mode to detect software bugs and fix them before hackers get hold of the technology. "We formed Project Glasswing because of capabilities we've observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity," Anthropic wrote in a blog post. "Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities."
Mythos has already found thousands of high-severity vulnerabilities, according to Anthropic, including some in every major operating system and web browser. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," the company stated in its blog. "The fallout -- for economies, public safety, and national security -- could be severe."

Is it a systemic risk?

Security vulnerability software has existed for decades and has become more effective over time. Is Mythos so much better that it rises to the level of a systemic risk? "We do not see a systemic crisis from Mythos as imminent," Seiberg said. He said regulators are taking these risks seriously and are ensuring big banks are properly prepared.

"Mythos is a genuinely capable model, but it is a product launch, not a singular event," said Nitin Raina, chief information security officer at AI consultancy Thoughtworks. "There will be many more like it -- from Anthropic, OpenAI, Google, and increasingly capable open-source models. The capability frontier is moving for everyone. The right response to that is heightened focus, not alarm."

But even so, the new Anthropic model is different from existing vulnerability-scanning software, according to Nitin Seth, co-founder and CEO of Incedo, a company that helps with AI deployments. "Traditional scanners are effective at identifying known weaknesses," Seth said. "Tools like Mythos appear to go further -- reasoning across systems, surfacing deeper flaws, and in some cases chaining them into real attack paths. That is the real shift. In cybersecurity, understanding how a system can actually fail matters more than simply generating a longer list of findings."

Traditional software vulnerability scanners like Snyk, SonarQube and GitHub Advanced Security operate by pattern matching: they compare code against a database of known vulnerability signatures, Marlin said. "They are excellent at finding yesterday's problems," he said. "Mythos does not pattern-match. It reasons about what code is supposed to do, identifies the gap between intent and execution and can chain vulnerabilities across systems in ways no human analyst or legacy tool could replicate at speed. It found critical vulnerabilities in every major operating system and browser because it was looking differently."

Bank executives should give this new risk their immediate attention, Seth said. "The issue is not just one more security tool," he said. "It is that AI is compressing the time between vulnerability discovery and exploitation. When that cycle shrinks materially, the economics of cyber risk change. For banks, that means hidden weaknesses are far less likely to remain hidden for long." Raina agreed with that assessment. "For banks in particular, which are running complex environments that combine legacy core systems with modern interfaces, that compression of time is what matters most," he said.

The harder problem for most financial institutions isn't discovering more vulnerabilities -- it's deciding which ones matter and reducing exposure before they can be exploited, Raina said. "Banks are already managing large attack surfaces, significant third-party dependencies, and legacy infrastructure that can be difficult to patch quickly," he said. "Mythos doesn't change that challenge, but it does raise the tempo at which it needs to be addressed."
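To make the pattern-matching contrast concrete, here is a minimal sketch of the signature-based approach Marlin describes, assuming a toy list of regex signatures (the patterns and severity labels are illustrative, not drawn from any real scanner's rule database):

```python
import re

# Toy signature database: pattern -> finding. Commercial scanners ship
# thousands of curated rules; these two are purely illustrative.
SIGNATURES = {
    r"strcpy\s*\(": "CWE-120: unbounded copy, possible buffer overflow",
    r"pickle\.loads\s*\(": "CWE-502: deserialization of untrusted data",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Flag lines matching a known-bad pattern. This finds only
    'yesterday's problems': anything not in the database is invisible."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, finding in SIGNATURES.items():
            if re.search(pattern, line):
                findings.append((lineno, finding))
    return findings

sample = "data = pickle.loads(blob)  # parse request body\n"
print(scan(sample))  # [(1, 'CWE-502: deserialization of untrusted data')]
```

A model-driven tool, by contrast, is claimed to reason about what the code is intended to do rather than match stored patterns, which is why it can surface flaws that no existing rule encodes.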
What banks can do to shield themselves

Banks should treat cybersecurity as a business resilience issue, not just a technical control issue, Seth said. "The priorities now are to accelerate AI-enabled defense, reduce structural weaknesses in legacy environments, tighten identity and access, and focus as much on containment as on detection," he said. "The goal is not zero risk. The goal is to build a security operating model that can learn, adapt, and respond at the pace of the threat."

Marlin recommended three steps banks should take. First, they should treat the White House meeting this week "as a legal trigger, not an informational briefing." That means they should document a board-level response immediately. The Gramm-Leach-Bliley Act, the Federal Reserve's Supervisory Guidance on Model Risk Management, cybersecurity guidelines from all the bank regulators and SEC disclosure rules "collectively mean that 'we were informed and took no documented action' is now the most dangerous position a bank can occupy," he said.

The second step banks should take is to audit their AI-generated code exposure. "Every line written or modified by an AI coding assistant and passed by your existing scanners should be treated as unverified," Marlin said. "The question your chief information security officer needs to answer this week: what percentage of our production codebase was AI-assisted and what independent verification has it received?"

And third, banks should independently verify their security layers. "The banks that will be most exposed are those whose AI security posture depends entirely on assurances from the AI providers themselves," Marlin said. "'Anthropic said it was safe' is not a defensible position when the Treasury Secretary has personally briefed your CEO on the inadequacy of provider-native safeguards. You need a verification layer with no commercial relationship with the model vendors and no financial incentive to pass what should fail."

Anthropic
American Banker · 17d ago

Boaz Weinstein Scores Win After UK Trust Backs SpaceX Proposal

A Baillie Gifford & Co. trust that placed an early bet on SpaceX recommended a proposal backed by activist investor Boaz Weinstein after losing a key shareholder vote. Edinburgh Worldwide Investment Trust is backing tender offers supported by Weinstein's Saba Capital Management that will give shareholders the chance to exit at net asset value minus costs in the coming weeks or after a potential SpaceX listing, the London-listed trust said in a statement Friday. The Edinburgh Worldwide board's own proposal, which didn't garner enough support in a ballot this week, would have allowed shareholders to receive about 85% of their holdings in cash. The balance -- effectively the portion of the fund's holdings invested in SpaceX -- would be deferred until "a future liquidity event" at Elon Musk's company.

The result is the biggest win yet for Weinstein, a veteran derivatives trader, who began waging a high-profile battle to take control of seven UK trusts trading at a discount in December 2024. While those attempts failed, some of the targeted trusts have gone on to make the structural changes that Saba sought anyway. In the case of Edinburgh Worldwide, 36.8% of the issued share capital -- consisting almost entirely of holdings from Saba and two other institutions -- voted against the board's tender offer, according to the statement. The board said 53.8% of the total votes cast opposed its proposal.

The shareholders will vote on April 30 on whether to replace the trust's board with candidates proposed by Saba. If the hedge fund firm wins, it's widely expected that Saba will become the vehicle's manager. "This is a very disappointing outcome," Edinburgh Worldwide Chairman Jonathan Simpson-Dent said in the statement. "There remains a high likelihood of Saba succeeding in appointing its proposed new board," he added. "Faced with this reality, the board's priority is to ensure shareholders can still exercise their right to a meaningful choice." A representative for Saba had no immediate comment.
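The two vote figures in the statement together imply the overall turnout: if 36.8% of issued share capital made up 53.8% of the votes actually cast, the turnout falls out directly. A quick back-of-the-envelope check (figures from the article, arithmetic ours):

```python
# Figures from the Edinburgh Worldwide statement.
against_pct_of_capital = 36.8   # % of issued share capital voting against
against_pct_of_votes = 53.8     # % of votes cast that opposed the proposal

# If 36.8% of capital accounted for 53.8% of votes cast, total turnout was:
turnout = against_pct_of_capital / against_pct_of_votes * 100
print(f"Implied turnout: {turnout:.1f}% of issued share capital")  # ~68.4%
```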

SpaceX
Bloomberg Business · 17d ago

Anthropic warns new AI tool 'too powerful' to release to public

Anthropic, the tech company that recently clashed with the Pentagon, says it has developed a new AI model that is too powerful to be released to the public. Known as Mythos, it is described as being highly proficient at hacking amid claims the tool has already identified thousands of critical security flaws in key systems -- some undetected for decades. Concern has reached the highest levels in Washington, with reports of urgent crisis talks between US officials and bank CEOs over the potential risks.

Anthropic
Channel 4 · 17d ago

The unconventional logic behind SpaceX's $1.75 trillion price tag

NEW YORK, April 10 (Reuters) - Wall Street is reaching for some unusual yardsticks to price Elon Musk's SpaceX. At least one of SpaceX's large institutional investors is privately benchmarking the rocket and satellite company not against aerospace rivals like Boeing or telecom giants like AT&T, but against market darling Palantir Technologies and AI infrastructure plays like GE Vernova and Vertiv - in a bid to justify a $1.75 trillion valuation ahead of what could be the largest IPO in history.

The framework, described to Reuters for the first time by a source familiar with the company's thinking, illustrates the unusual challenge of pricing a company with no obvious public peers - and the lengths to which Wall Street is going to rationalize a premium valuation. SpaceX has confidentially filed for a U.S. IPO, Reuters reported last week. The company is scheduled to hold an analyst day on April 21, Reuters previously reported.

At a potential valuation of $1.75 trillion, SpaceX looks expensive by many traditional measures, including comparisons to the earnings and revenue multiples at firms often cited as reference points for parts of its business. In space that means Boeing and Lockheed Martin, whose United Launch Alliance joint venture competes with SpaceX in launch services. In internet access, the peers would be AT&T and Verizon. But financial backers of the firm, on track to raise $75 billion in an IPO this year, contend that comparisons to established firms in legacy businesses miss the point of SpaceX and other Musk companies - to take advantage of the emergence of long-term, "secular" economic shifts at a time when few competitors are equipped to do so. Musk's companies have historically commanded rich multiples in part because investors are betting on him personally - Tesla being the clearest example - and SpaceX investors expect that dynamic to carry over into any public offering.

It's "pretty darn exciting" to sell into "the largest total addressable market in human history" - a potential $370 billion in space business, SpaceX CFO Bret Johnsen told IPO bankers on a conference call this week, according to two people familiar with the matter. He tabbed the potential market for the firm's Starlink internet service at $1.6 trillion, the people said. SpaceX did not respond to a request for comment.

RETHINKING COMPARABLES

Finding the right comparables for SpaceX lies at the center of a fierce debate over the pricing of the massive IPO, as bankers and investors grapple with how to value the company despite few, if any, closely comparable public peers. It is common for investors and bankers to sort for comparables by sector, using the longstanding assumption that industry is a good proxy for financial opportunity and risk. But many investors contend that comparable companies do not need to operate in the same industry - because, in this view, what matters are a firm's potential cash flows, growth profiles and risk characteristics. This approach holds that a better comparison for SpaceX comes from companies selling into the AI data-center buildout, which have famously been rewarded with rising shares and high multiples.

For smaller funds, the calculus is different, said Jay Bala, portfolio manager at Toronto-based AIP, which manages roughly $100 million in assets, a large portion concentrated in SpaceX. "I'm piggybacking on the largest funds in the world. A huge amount of due diligence has already been done. I'm not going to second-guess some of the biggest investors on the planet," he said. He acknowledged it is difficult to obtain detailed financial information about SpaceX: "You can only get so much. It's hard to get numbers sometimes."

STARLINK VERSUS LEGACY TELECOMS

For Starlink -- or what SpaceX calls its "connectivity" business -- the reflexive benchmarks are legacy telecom firms, but some investors argue those comparisons are skewed by aging fixed infrastructure, saturated domestic markets and years of modest growth. "I wouldn't look at a legacy AT&T and Verizon as being very relevant to the economic model for Starlink, even though they're both in the business of giving you communication," a senior executive at one of SpaceX's large institutional investors told Reuters, speaking on condition of anonymity to discuss confidential internal work.

Instead, SpaceX investors point to Palantir for its secular growth, high return on invested capital, good margins and asset-light composition -- qualities that fans say justify the high multiples the stock commands and suggest greater opportunities down the road. Palantir is well known as one of the priciest stocks in the market, recently trading at 43 times expected revenue and 75 times earnings. Skeptics say those levels are likely unsustainable, but SpaceX fans contend that the figures show that premium valuations are attainable if backed by outstanding financial performance. That said, at $1.75 trillion, even Palantir would be cheaper on some of these measures than SpaceX, which would trade at 110 times 2025 revenue estimates, according to a PitchBook calculation. "Investors should size positions with the understanding that they are paying a platform premium today for infrastructure-monopoly economics tomorrow," PitchBook analyst Franco Granda said in a note last month.

ROCKET MANUFACTURING COMPARISONS

For the rocket manufacturing side of the business, SpaceX investors contend that the firm's accomplishments - for instance, it has built a reusable launch system, driven down unit costs dramatically and expanded into a commercial market where demand for launch capacity continues to grow -- demand valuations far above those prevailing at Lockheed, which traded recently at around 20 times next year's expected earnings. Boeing's current high multiples mostly reflect its state as a turnaround story. Instead, they turn to industrial names such as GE Vernova and Vertiv - companies whose stocks have soared on the back of AI data-center spending - arguing that SpaceX's launch operations deserve a similar re-rating to the "picks and shovels" of the data-center age.

Even these preferred comps do not look a lot like SpaceX, however. GE Vernova was recently trading at around 30 times expected cash flow and four times last year's revenue. Vertiv, which sells power and cooling equipment for data centers, traded recently at 19 times expected operating profit and 6 times last year's sales.

MESSY PRICING AND RATIONALIZATIONS

Bankers and investors say SpaceX is difficult to price because of the company's unique space operations and AI business, which is particularly difficult to value at an early stage. "Pricing is always going to be messy here," said Aswath Damodaran, a valuation expert and finance professor at New York University's Stern School of Business. "Nobody else has that capacity to launch satellites in numbers and at the price that they can do -- that's their big advantage." He adds that much of the current pricing reflects investors justifying their decision to purchase the shares rather than relying on traditional metrics. "They're hoping there's enough mood and momentum behind SpaceX, and when it goes public, the mood and momentum will take the stock up." "They've made the decision already that SpaceX is a great buy," Damodaran said. "Now they're looking for some way that they can justify that, and this pricing sounds like ex post rationalization."

(Reporting by Echo Wang in New York; Additional reporting by Joey Roulette in Houston; Editing by Colin Barr and Matthew Lewis)
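The multiples quoted above imply how much revenue the market would effectively be underwriting at a $1.75 trillion price. A rough sanity check using only the figures cited in the article (the calculation is ours, not PitchBook's):

```python
# Figures quoted in the article.
spacex_valuation = 1.75e12        # proposed valuation, USD
spacex_rev_multiple = 110         # SpaceX at 110x 2025 revenue (PitchBook)
palantir_rev_multiple = 43        # Palantir recently at 43x expected revenue

# Revenue implied by the 110x multiple at a $1.75T valuation.
implied_revenue = spacex_valuation / spacex_rev_multiple
print(f"Implied 2025 revenue: ${implied_revenue / 1e9:.1f}B")  # ~$15.9B

# What the same revenue would be worth even at Palantir's rich multiple.
at_palantir_multiple = implied_revenue * palantir_rev_multiple
print(f"Value at Palantir's 43x: ${at_palantir_multiple / 1e12:.2f}T")  # ~$0.68T
```

In other words, even the priciest comp named in the piece would support well under half the proposed valuation on the same implied revenue, which is the gap the "secular shift" argument is being asked to bridge.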

SpaceX
Superhits 97.9 Terre Haute, IN · 17d ago

Researchers Invented A Fake Eye Condition. ChatGPT, Gemini And Perplexity Repeated It As Real

Researchers created a fake eye condition that major AI chatbots repeated as real.

* Artificial intelligence chatbots repeated a fake eye condition called Bixonimania as real
* Researcher Almira Osmanovic Thunstrom created and published fake studies on the condition
* Major AI chatbots including Microsoft Copilot and ChatGPT disseminated the false information

It is a well-documented fact that artificial intelligence (AI) models are prone to hallucinations, generating confident but false information. But what happens when they are purposely fed misinformation? Almira Osmanovic Thunstrom, a medical researcher at the University of Gothenburg, Sweden, set out to find out with an experiment. Thunstrom created a fake eye condition called 'Bixonimania' and published two papers about it on a preprint server. Within weeks of her uploading information about the condition, attributed to an imaginary author, major AI chatbots began repeating the invented condition as if it were real.

Microsoft Copilot was the first major AI chatbot to pick up the fake condition, describing Bixonimania as an "intriguing and relatively rare condition". On the same day, Google's Gemini explained that Bixonimania is a condition caused by "excessive exposure to blue light". Perplexity said one in 90,000 people were affected by Bixonimania, while OpenAI's ChatGPT informed users about the symptoms to look out for.

Thunstrom said she conducted the experiment to test whether large language models (LLMs) would swallow the misinformation and then reproduce it as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," Thunstrom told Nature, adding that she created a health-related condition and used the name bixonimania because it "sounded ridiculous". "I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania, that's a psychiatric term."

Bixonimania And LLMs

To ensure that the LLMs and readers were aware that the studies were fake, Thunstrom left several clues. She invented Lazljiv Izgubljenovic as the lead researcher, who worked at a non-existent university called Asteria Horizon University in the equally fake Nova City, California. The papers also started with statements like, "this entire paper is made up" and "fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group". Despite such obvious clues, the LLMs still picked up the studies and pushed Bixonimania as a real-life health condition.

More troublingly, the fake papers were also cited in peer-reviewed literature, highlighting that some researchers were relying on AI-generated references. Researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in Mullana, India, cited the fake preprints in a study published in the journal Cureus. After Thunstrom posted about her AI experiment, the LLM chatbots started modifying their search results when quizzed about Bixonimania. Asked why Gemini did not filter out the condition initially, a Google spokesperson said the results reflected the performance of a previous model.

Perplexity
NDTV · 17d ago

Rob May: Anthropic's Mythos could revolutionize cybersecurity, risks of AI misuse by state actors, and the emergence of a two-tier AI economy | TWIST

* Anthropic's new AI model, Mythos, has advanced capabilities in identifying and exploiting security vulnerabilities.
* Mythos poses significant risks if released, as it could be used by malicious actors to exploit software vulnerabilities.
* Anthropic's cautious approach to releasing AI models is seen as a responsible move given the potential dangers.
* The AI industry is moving towards a two-tier economy, with privileged access to advanced models like Mythos.
* Anthropic is perceived to have surpassed OpenAI in several areas due to its focused approach.
* The parity among top AI labs suggests that superintelligence will become widely accessible once AGI is achieved.
* Product market fit is crucial for entrepreneurial success, with Anthropic leading in this area.
* More powerful AI models are expected to emerge, necessitating strategic responses to mitigate associated risks.
* The open source community may take several months to catch up to proprietary models like Mythos.
* Anthropic's strategic decisions are shaping the competitive landscape in the AI sector.
* The potential release of Mythos raises geopolitical concerns about AI misuse by state-sponsored groups.
* AI advancements are driving a shift in how cybersecurity threats are identified and managed.
* The rapid evolution of AI technology requires proactive measures to address potential security threats.
* Anthropic's leadership in AI development is influencing the broader tech industry.
* Access to advanced AI models is becoming a key competitive differentiator in the tech sector.

Guest intro

Rob May is CEO of Neurometric, a company optimizing inference for multi-model AI systems. He previously co-founded Talla, an AI-driven knowledge management platform, and Backupify, a cloud backup service acquired by Datto. Rob is also an angel investor in over 100 AI startups.

The impact of Mythos on cybersecurity

* Mythos is capable of finding and exploiting security vulnerabilities that humans often miss. "Mythos is incredible at both finding, exploiting, and patching security vulnerabilities in software that humans have often missed." -- Rob May
* The model can chain together multiple vulnerabilities to create sophisticated exploits. "This model is able to create exploits out of three, four, sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome." -- Rob May
* Mythos could be used by malicious actors if released, posing a significant security threat. "Anthropic cannot let this out of the bag because if they did, then North Korea and China and everyone else could use it to essentially break the modern..." -- Rob May
* The potential misuse of Mythos highlights the need for cautious AI deployment.
* Anthropic's decision to withhold Mythos reflects a responsible approach to AI development. "Anthropic is doing the right thing here by saying, hey, we're not gonna release something this dangerous, you know, off the cuff." -- Rob May

The emerging two-tier AI economy

* Anthropic is creating a two-tier economy in AI access, impacting competition. "It does create a situation in which we now have very much a two-tier economy: there are the companies that are sufficiently important that Anthropic is letting them have access to Mythos, and that means they can be ahead on both defense and offense." -- Rob May
* Companies with access to Mythos gain a competitive edge in AI development.
* The distribution of AI resources is shifting, affecting innovation and market dynamics.
* Anthropic's strategic decisions are influencing the competitive landscape in the AI sector.
* The two-tier economy raises questions about fairness and access to advanced AI tools.
* The privileged access to Mythos underscores the importance of strategic partnerships in AI.
* The competitive dynamics in the AI industry are being reshaped by access to advanced models.

Anthropic's strategic focus and competitive edge

* Anthropic has surpassed OpenAI in several areas due to its focused approach. "I think Anthropic has significantly passed OpenAI on a lot of things, and I think the reason is that they've been more focused." -- Rob May
* The company's strategic focus is driving its success in the AI industry.
* Product market fit is a key factor in Anthropic's competitive advantage. "I think product market fit is the ultimate arbiter of entrepreneur success, and clearly no one has more PMF than Anthropic today." -- Rob May
* Anthropic's leadership in AI development is influencing the broader tech industry.
* The company's strategic decisions are shaping the competitive landscape in the AI sector.
* Anthropic's focused approach is contributing to its success in AI development.

The future of AI and superintelligence

* The parity among top AI labs suggests that superintelligence will become widely accessible. "What we've seen over the last eighteen months that surprised everybody is the parity amongst the top labs... when we hit AGI, we're all gonna have access to superintelligence for free and open source three to five months later." -- Rob May
* The achievement of AGI will lead to a significant shift in AI accessibility.
* The open source community may take several months to catch up to proprietary models. "I'm curious if it's more than three to five months until they catch up, and if so, we have more time to fix the world." -- Rob May
* The rapid evolution of AI technology requires proactive measures to address potential security threats.
* More powerful AI models are expected to emerge, necessitating strategic responses. "More powerful models are gonna come from us and from others, and so we do need a plan to respond to this." -- Rob May

Geopolitical implications of AI advancements

Anthropic
Crypto Briefing · 17d ago

Bitget debuts SpaceX proxy token as Musk's IPO target climbs above $1.75 trillion

The exchange's new preSPAX sale offers synthetic exposure to SpaceX's post-IPO performance through Republic as investors watch one of the most anticipated listings in history.

Bitget is launching a SpaceX-linked token sale through its new IPO Prime platform, offering users early exposure to a synthetic asset tied to SpaceX's future public market performance. The product, called preSPAX, is being issued by Republic and will be available through a subscription sale on Bitget from April 18 to April 21, with OTC trading set to begin on April 21. According to Bitget, preSPAX is not direct SpaceX equity. Instead, it is a digital asset structured to mirror the economic performance of SpaceX stock after the company goes public. The offering documents set an implied SpaceX valuation of $1.5 trillion for the token sale, with 94,000 tokens available at $650 each and a total subscription value of $61.1 million.

The timing is notable because SpaceX is moving closer to what could become one of the biggest IPOs on record. Bloomberg reported on April 1 that SpaceX had filed confidentially for an initial public offering, a process that lets companies submit draft registration documents to the SEC outside public view before formally launching a roadshow. Bloomberg said the company could seek a valuation of more than $1.75 trillion, while an earlier Bloomberg report in December said SpaceX was targeting a valuation of about $1.5 trillion and a fundraise of more than $30 billion. A listing at more than $1.75 trillion would follow SpaceX's merger with xAI, a transaction that valued SpaceX at $1 trillion and xAI at $250 billion. Bloomberg later reported that SpaceX had raised its IPO target to more than $2 trillion and could seek to raise as much as $75 billion, underscoring how quickly expectations around the deal have climbed.

IPO Prime is being positioned as a way for crypto users to gain early economic exposure to private unicorns before they list publicly. Republic describes its mirror token structure as a regulated product designed to give investors exposure to the economic performance of private companies ahead of a potential IPO. Bitget's own terms say preSPAX does not create a legal relationship with SpaceX and has not been endorsed, approved, or authorized by the company. Settlement also depends on the lockup period of the underlying debt asset ending after a SpaceX IPO, at which point the issuer would convert value into tokens or USDT based on the company's market price.
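The subscription figures in the offering documents are internally consistent, and they also quantify how far below the reported IPO target the sale is priced. A quick check of the math (calculation ours, using only the numbers cited above):

```python
# Terms from the preSPAX offering documents cited in the article.
tokens_for_sale = 94_000
price_per_token = 650            # USD

subscription_value = tokens_for_sale * price_per_token
print(f"Total subscription value: ${subscription_value / 1e6:.1f}M")  # $61.1M

# The sale's implied SpaceX valuation vs. the reported IPO target.
implied_valuation = 1.5e12       # per the offering documents
ipo_target = 1.75e12             # per Bloomberg's reporting
discount = 1 - implied_valuation / ipo_target
print(f"Sale priced {discount:.0%} below the $1.75T target")          # ~14%
```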

xAI · SpaceX
Crypto Briefing · 17d ago

Anthropic's Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser

It's a step change in cybersecurity. Exploits that would take experts weeks to develop can now be generated in hours.

Concerns about AI's ability to turbocharge cybersecurity threats have been building for years. Anthropic's latest model could mark a turning point after the company claimed the model could identify and exploit zero-day vulnerabilities in every major operating system and web browser. One of the standout use cases for large language models is analyzing and writing code. This has long raised worries that the technology could help automate much of the work of hackers, potentially lowering the barrier for cyberattacks. Leading models have demonstrated steady progress on various cybersecurity-related benchmarks, and there has been evidence malicious actors are using the technology. But so far, the impact appears to have been modest, suggesting practical barriers remain that prevent the widespread use of the technology.

According to Anthropic, that's about to change. The company says its latest model, Mythos, has hacking capabilities so potent the company will not make it publicly available. Instead, it's releasing Mythos to a select group of major technology companies and open source developers as part of an initiative called Project Glasswing. Those participating can use the model to identify vulnerabilities in their code and patch them before hackers get access to similar capabilities. "The vulnerabilities that Mythos Preview finds and then exploits are the kind of findings that were previously only achievable by expert professionals," the company's researchers write in a blog post. "We believe the capabilities that future language models bring will ultimately require a much broader, ground-up reimagining of computer security as a field."

Fortune first reported news of Mythos last month, after a data leak at Anthropic revealed details about the new model. While the AI excels at cybersecurity tasks, it's designed to be a general-purpose model, and the company says its hacking capabilities are simply a result of vastly improved coding and reasoning skills. In testing, Anthropic's researchers discovered the model was able to find "zero-day" vulnerabilities -- ones that were previously undiscovered -- in every major operating system and web browser. Many were decades old, an indicator of how hard they were to detect.

But the model isn't just good at finding vulnerabilities. The company's red team -- security researchers who simulate hacking attacks to identify security weaknesses -- showed the model could chain together multiple vulnerabilities to create complex attacks capable of sidestepping defenses. Its capabilities are a step change from the previous best models. Given the challenge of attacking the Firefox web browser's JavaScript engine, Anthropic's previous most powerful model, Opus 4.6, succeeded just twice, compared to 181 times for Mythos. Most worryingly, the team found that engineers with no security background could use it to develop successful attacks overnight.

Key to the new capabilities is the model's ability to operate autonomously for long stretches. To find bugs, the researchers used Anthropic's coding agent Claude Code to call the model and give it a simple prompt to scan for vulnerabilities in a particular codebase. The model then read the code, came up with hypotheses about potential bugs, and ran tests to validate them without any human involvement.
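The closing paragraph describes an autonomous read-hypothesize-test loop. As a rough illustration of that control flow only, a sketch might look like the following; `propose_hypotheses` stands in for a model call and is entirely hypothetical, not Anthropic's API or Claude Code's actual mechanics:

```python
import subprocess
from pathlib import Path

def propose_hypotheses(source: str) -> list[dict]:
    """Placeholder for a model call that reads code and returns suspected
    bugs, e.g. [{"claim": "off-by-one in parse()", "test": "test_parse.py"}].
    Hypothetical: stands in for whatever API the real agent uses."""
    raise NotImplementedError

def agent_loop(repo: Path, max_rounds: int = 5) -> list[dict]:
    """Read -> hypothesize -> test, with no human in the loop."""
    confirmed = []
    source = "\n".join(p.read_text() for p in repo.rglob("*.py"))
    for _ in range(max_rounds):
        for hyp in propose_hypotheses(source):
            # Run the model-written test; a failing exit code here counts
            # as evidence that the hypothesized bug is real.
            result = subprocess.run(
                ["python", hyp["test"]], cwd=repo,
                capture_output=True, timeout=60,
            )
            if result.returncode != 0:
                confirmed.append(hyp)
        if confirmed:
            break  # report as soon as something is validated
    return confirmed
```

The point of the loop structure is the one the article makes: validation is automated, so the model's hypotheses are checked by execution rather than by a human reviewer.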

Anthropic
Singularity Hub · 17d ago

Anthropic's Mythos Will Force a Cybersecurity Reckoning -- Just Not the One You Think

Anthropic said this week that the debut of its new Claude Mythos Preview model marks a critical juncture in the evolution of cybersecurity, representing an unprecedented existential threat to existing software defense strategies. So, is it more AI hype -- or a true turning point?

According to Anthropic, Mythos Preview crosses a threshold of capabilities to discover vulnerabilities in virtually any and every operating system, browser, or other software product and autonomously develop working exploits for hacking. With this in mind, the company is only releasing the new model to a few dozen organizations for now -- including Microsoft, Apple, Google, and the Linux Foundation -- as part of a consortium dubbed Project Glasswing. But after years of speculation about how generative AI could impact cybersecurity, the news this week ignited controversy about whether a reckoning has really arrived and what it might look like in practice.

Some are extremely skeptical of Anthropic's claims. They argue that existing AI agents can already help users find and exploit vulnerabilities much more easily and cheaply than ever before, and that this reality is fueling refinements in how companies discover and patch their software without fundamentally changing the paradigm. And then there's the ick factor that Anthropic will almost certainly benefit financially from positioning its latest model as mysterious, uniquely powerful, and exclusive. Other researchers and practitioners, though, say that they agree with Anthropic's assessment and point out that the company has said Mythos Preview is just the first to achieve capabilities that will ultimately be widely available in other models. "I typically am very skeptical of these things, and the open source community tends to be very skeptical, but I do fundamentally feel like this is a real threat," says Alex Zenla, chief technology officer of cloud security firm Edera.

Zenla and others specifically point to one Mythos Preview capability as the pivot point. Generative AI, they say, is now getting more capable at identifying and developing what are known as "exploit chains," or groups of vulnerabilities that can be exploited in sequence to deeply compromise a target -- essentially Rube Goldberg-machine-style hacking. Many of the most sophisticated hacking techniques employ exploit chains, including so-called zero-click attacks that compromise a system without requiring any interaction from a user.

"We are already living in the world where companies run vulnerable software, vulnerable hardware, and struggle to patch. Many companies are not capable of securing their infrastructure -- that hasn't really changed from yesterday to today," says longtime security engineer and researcher Niels Provos. "But from what I understand, Mythos is really good at coming up with multistage vulnerabilities, and then also provides the proof of exploitation. I don't think it intrinsically changes the problem space, but it changes the required skill level to find these vulnerabilities and exploit them."

A limited release of Mythos Preview to Project Glasswing participants only gives defenders a small lead time to find weaknesses in their own systems using the model and start to grapple more broadly with how software development, update cycles, and patch adoption need to change before attackers have widespread access to such capabilities themselves. Industry leaders seem to be heeding the warning. Anthropic's frontier red team lead, Logan Graham, told WIRED on Tuesday that as the company reached out to organizations about Project Glasswing ahead of this week's announcement, the phone calls got shorter and shorter because the potential threat was becoming more obvious. "This is an issue that involves all of the model developers. Our goal here is just to kick things off," Graham said. "It's really important that Mythos Preview gets in the hands of defenders to give a head start."

The people considering the impacts of Mythos Preview extend far beyond tech firms. Bloomberg reported this week that US Treasury secretary Scott Bessent and Federal Reserve chair Jerome Powell convened a meeting of finance sector leaders at the Treasury's headquarters in Washington, DC, on Tuesday to discuss the potential impacts of models like Mythos Preview on cybersecurity. Jeetu Patel, president and chief product officer of Cisco, which is a member of Project Glasswing, told WIRED at the HumanX AI conference in San Francisco that Mythos Preview "is a very, very big deal."

"In the long run, you want to make sure that your defenses are machine-scale, because the attacks are machine-scale," Patel said. "If I have billions of agents that are going to be attacking my infrastructure, I need to make sure that I can defend it effectively. What Anthropic did here is a fantastic thing, because it just creates a level of asymmetry against the bad actors."

Still, some argue that the frenzy is overblown -- a splinter of the overall AI hype cycle. "It's every spaghetti Western ever where big-tent preachers say the end is nigh and then skip town with everyone's money," says longtime security and compliance consultant Davi Ottenheimer. "It's a shift, like learning how to fight with machine guns when others are still using bolt-action rifles, but it's not magical and mystical."

Some argue, though, that given how long it takes for these mentality shifts to proliferate across all industries and organizations, it can be useful to seize on specific incidents or advances as an opportunity to raise awareness. Other cybersecurity reckonings have come after catastrophic breaches like the Aurora attacks on Google that highlighted the importance of "zero trust" architecture, or the SolarWinds and Log4Shell hacking sprees that popularized a "secure by design" approach to software development. Anthropic argues that the debut of Mythos Preview can be used as a more prudent type of inflection point, because it is still a warning of what could be to come, not a real-world demonstration of a worst-case scenario.

Security experts also say that the moment presents an opportunity to address shortcomings in how software is currently developed. "For decades, we have built an enormous global industry to defend, detect, and respond to 'vulnerabilities' -- flaws and defects in software -- that should never have existed in the first place," Jen Easterly, the longtime cybersecurity practitioner and former US Cybersecurity and Infrastructure Security Agency director, wrote on Wednesday. Project Glasswing, she argues, could usher in "a future in which AI helps us move beyond endlessly defending against flawed software and toward building technology that is more secure from the start. Not the end of cybersecurity as a mission, but the beginning of the end of cybersecurity as we know it."

Edera's Zenla emphasizes that Mythos Preview is not a lightning bolt that will change everything overnight. Instead, she says, it is another step toward the security version of infinite monkeys at infinite typewriters eventually producing Shakespeare. "If you get a million vulnerability researchers, they can find a huge number of bugs. But humans are not very good at holding lots of contextual information in their minds for long periods of time, so finding very long chains of vulnerabilities that are actually exploitable together has been rare," she says. "Mythos and models like it will accelerate the pace at which attackers will be able to group vulnerabilities into sets that can work together. Some people are going to be grumpy about it for a long time, but I do think the dynamic has shifted."
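One way to picture the "exploit chain" idea the article keeps returning to: treat access levels as nodes in a graph and individual vulnerabilities as edges between them, so that finding a chain becomes a path search. A minimal sketch of that framing (the CVE labels and access states here are invented for illustration):

```python
from collections import deque

# Invented example graph: each edge is one vulnerability that moves an
# attacker from one access level to another.
EDGES = {
    "unauthenticated": [("CVE-A: auth bypass", "user session")],
    "user session": [("CVE-B: path traversal", "file read"),
                     ("CVE-C: SSRF", "internal network")],
    "file read": [("CVE-D: leaked key", "admin session")],
    "admin session": [("CVE-E: command injection", "root")],
}

def find_chain(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for the shortest sequence of vulnerabilities
    linking the starting access level to the goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            return chain
        for vuln, nxt in EDGES.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, chain + [vuln]))
    return None

print(find_chain("unauthenticated", "root"))
# ['CVE-A: auth bypass', 'CVE-B: path traversal',
#  'CVE-D: leaked key', 'CVE-E: command injection']
```

The search itself is trivial; the hard part, as Zenla notes, is holding enough context to know which edges exist at all, which is exactly where she argues models now outpace human researchers.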

Anthropic
DNyuz · 17d ago

Bessent summons bank executives over Anthropic cyber risk

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell brought together a group of bank executives this week to discuss cybersecurity concerns in the wake of Anthropic's new Mythos model, multiple people familiar with the meeting told The Hill. The meeting, held at the Treasury Department on Tuesday, came as Anthropic announced its latest artificial intelligence model, Claude Mythos Preview, will be held back from public release because its capabilities are too advanced and risky to end up in bad actors' hands. Several of the executives were already in Washington for the Financial Services Forum, an advocacy organization of the country's eight largest banks, The Hill's sources said.

Bloomberg, which first reported the meeting, said Bessent and Powell convened the group to warn of the possible risks the Mythos model could present to banks, citing anonymous people familiar with the matter. Reuters reported the meeting sought to ensure the banks were taking steps to defend their systems. Anthropic said this week it has been in discussions with US government officials about Claude Mythos Preview and "its offensive and defensive capabilities."

Those attending the meeting included Bank of America CEO Brian Moynihan, along with Citigroup CEO Jane Fraser, Morgan Stanley CEO Ted Pick, Goldman Sachs Group CEO David Solomon and Wells Fargo & Co. CEO Charles Scharf, Bloomberg reported. JPMorgan CEO Jamie Dimon was not able to attend, one of the people familiar confirmed to The Hill. Representatives for the banks and the Federal Reserve declined to comment. The Treasury Department did not immediately respond to The Hill.

The Claude Mythos Preview model will be available to a select group of technology firms including Apple, CrowdStrike, Amazon Web Services, and Microsoft, according to Anthropic. These companies, aligned with more than 40 organizations that build or manage critical software infrastructure, will use Mythos Preview in their defensive security work, and Anthropic will share findings with the whole industry. The consortium is part of Anthropic's new initiative, Project Glasswing, which was formed after the company discovered the capabilities of Mythos Preview.

Anthropic
The Hill · 17d ago

Bessent, Powell summon Wall Street CEOs for emergency meeting over Anthropic AI risks amid Pentagon dispute

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street bank heads to Washington, D.C. on Tuesday for a flash meeting to warn them of cybersecurity threats posed by AI giant Anthropic, according to a Thursday night report from Bloomberg. Bessent and Powell convened the last-minute meeting at Treasury's D.C. headquarters to ensure the banks were ready to guard against risks from Anthropic's latest model, Claude Mythos Preview, a powerful new AI model that experts warn marks a profound shift in the technology.

Each bank summoned is marked by the Fed as "structurally important" to the global financial system. The attendees included chief executives from Goldman Sachs, Citigroup, Morgan Stanley, Bank of America and Wells Fargo. Bank of America CEO Brian Moynihan was in attendance, a source with knowledge of his schedule told Fox News Digital. Spokespeople for Goldman Sachs and Wells Fargo declined to comment. Citigroup and Morgan Stanley did not immediately respond to requests for comment. JPMorgan Chase CEO Jamie Dimon was also summoned but was unable to attend, Bloomberg reported, citing sources familiar. JPMorgan, notably, is a member of Anthropic's "Project Glasswing," an initiative to use Mythos as a defense against future similar models. JPMorgan did not immediately respond to a request for comment.

Mythos has garnered a swell of intrigue online thanks to Anthropic's claims that the AI can autonomously identify and exploit software weaknesses. The company touted Mythos as a "frontier model" that can outperform "all but the most skilled humans at finding and exploiting software vulnerabilities." It claimed the model has already identified thousands of software flaws previously unknown to their developers, including some that were decades old inside companies widely considered to be security strongholds. "This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies," Anthropic wrote in a blog post. "Addressing these issues is therefore an important security priority for democratic states."

In light of the security risks, a source close to Anthropic told Fox News Digital that the company has briefed senior U.S. government officials about Mythos, though did not specify which agencies. The increasingly relevant AI titan was once a core partner of the U.S. military, securing a $200 million contract with the Pentagon in July 2025. However, the partnership split open in February after the company drew red lines against the War Department using its technology for autonomous weapons and domestic surveillance. After issuing the company an ultimatum, Secretary of War Pete Hegseth designated Anthropic as a supply chain risk, barring federal contractors from using its products. Anthropic sought to appeal that designation, but a federal appeals court rejected its plea Wednesday. When asked to comment on the Treasury's Tuesday meeting, the Department of War referred Fox News Digital to a statement in support of the Wednesday ruling from Acting Attorney General Todd Blanche.

"Today's D.C. Circuit stay allowing the government to designate Anthropic as a supply chain risk is a resounding victory for military readiness," Blanche posted on X Wednesday. "Our position has been clear from the start -- our military needs full access to Anthropic's models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company." The Department of Treasury and the Federal Reserve Board did not immediately return requests for comment.

Anthropic
Fox News · 17d ago

The well-tempered chaos sequencer

The well-tempered chaos sequencer tool-kit is a mechanical chaos-based musical system that seeks to create an intuitive musical relation to the underlying generic structures of chaos in systems. Chaos is generated mechanically by the paths of ball bearings that fall through a modular geometrical structure of pins. Depending on the path of a ball, the sequencer triggers sound events, such as CV, MIDI or OSC.

By adding a mechanism to endlessly loop the balls over the pins, the chaos can be continuously sampled. Through the repeated sampling of the chaos generated by a pin formation, a structure can be recognized. In return, the sequencer uses this structure to trigger sound events. By changing the geometrical patterns of the pins, different sound event structures can be constructed that resemble or approximate musical patterns. When the ball loops infinitely, the sequencer can become a musical object that autonomously expresses musical structures. The idea that simulated randomness can converge, through mechanical computation, to different statistical distributions (Gaussian, non-Gaussian, Boltzmann etc.) is used to express well-tempered chaotic musical events.

The logic of the sequencer can be extended modularly by using the breadboard to create logical elements. The modularity of the system is achieved by an electrical design using only non-soldered components. The ball-bearing counter mechanism can be built using only 2.54mm jumper cables and the board itself. This design is important to make reproduction of the sequencer very simple.

The model for the board itself is generated by a parametric OpenSCAD script. It takes into account the radius of the ball bearings, the dimensions of the fan or looping mechanism used, as well as the maximum dimensions of the whole board. The code will then automatically generate a printable STL file, which also includes embedded holders for jumper cables. The embedded cables can then be utilized as button switches or electrical logic elements.

As an open-source musical instrument, the whole system is designed to be easily reproducible and modular. It is designed to appeal to a very wide range of musical interests, and it is constructed to make it easily accessible to a large number of people, with an open invitation to develop and share the concept and to play with the idea. The project only uses resources that are publicly available and open source. The design and electrical schematics are intended for a very basic level of technical understanding and programming. Still, the design makes no compromises on the modularity of both software and hardware, and it opens the modularity of breadboard logic to the world of music.

Example 1.0: Don't Panic! It's probably going to work!

Example 3: Don't Panic! This is probably improvised music!

2. How is the chaos sequencer tool-kit a musical instrument?
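The convergence the project leans on is the Galton-board effect: a ball deflecting left or right at each pin row lands in a bin following a binomial (approximately Gaussian) distribution. A quick simulation of that idea, with bins mapped to notes of a scale -- the row count and note mapping below are illustrative choices, not the kit's actual spec:

```python
import random
from collections import Counter

ROWS = 8                            # pin rows the ball falls through
SCALE = ["C", "D", "E", "G", "A"]   # pentatonic bins -> note events

def drop_ball() -> int:
    """One ball: deflect left (0) or right (1) at each pin row.
    The final bin is the sum of deflections -> binomial(ROWS, 0.5)."""
    return sum(random.random() < 0.5 for _ in range(ROWS))

def bin_to_note(bin_index: int) -> str:
    """Map the 0..ROWS bins onto the scale; center bins hit middle notes."""
    return SCALE[bin_index * len(SCALE) // (ROWS + 1)]

# Loop the balls, as the kit's fan mechanism does, and sample the chaos.
notes = [bin_to_note(drop_ball()) for _ in range(1000)]
print(Counter(notes))
# Center notes dominate: the mechanical randomness converges toward a
# Gaussian-shaped distribution over the bins, and hence over the notes.
```

Changing the pin geometry corresponds to changing the deflection probabilities or the bin-to-note mapping, which is how the kit shapes different "well-tempered" event distributions.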

CHAOS
Hackaday.io 17d ago

Accenture invests in Replit to advance AI-driven software development for enterprises

Global professional services giant Accenture has strategically invested in AI software development platform Replit. This collaboration aims to accelerate AI-driven software creation for businesses worldwide. Accenture will integrate Replit's technology internally to boost productivity and assist clients in adopting AI tools for their development processes, signaling a significant shift in how software is built. Accenture has made a strategic investment in Replit, a US-based artificial intelligence (AI) software development platform, as part of its efforts to accelerate AI-driven software creation for enterprises. The financial terms of the transaction were not disclosed. The companies will collaborate to explore how AI-assisted development can be applied in enterprise environments. Accenture will also adopt Replit's technology internally to enhance productivity and support its clients in integrating AI tools into their development workflows. Replit, founded in 2016 by Amjad Masad, offers an online integrated development environment (IDE) that allows developers to write, test, and deploy code collaboratively in the cloud. The company has been expanding its enterprise-focused offerings through its "vibecoding" tools. Announcing the partnership on social media, Masad said Accenture's investment and collaboration would "bring secure vibecoding to enterprises globally." "Accenture is investing in Replit, adopting it internally, and working with us to bring secure vibecoding to enterprises globally," he wrote, adding, "The way software gets built is changing. Every company will need to reinvent how they build and operate." Accenture, one of the world's largest professional services firms with over 700,000 employees, has been ramping up its AI-related capabilities through investments, acquisitions, and partnerships.

Replit
Economic Times 17d ago

Feds Warn Major Banks of Anthropic Mythos Cyber Threat

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell arranged the meeting at Treasury's headquarters in Washington to ensure the banks are aware of the potential risks and are taking precautions, according to the report. Many of the CEOs were already in Washington to attend a meeting of the Financial Services Forum, per the report. Reached by PYMNTS, the Federal Reserve declined to comment on the report. The Treasury Department did not immediately reply to PYMNTS' request for comment. According to the report, Anthropic has limited the release of Mythos to a few major firms so they can secure their systems before similar AI models are released. It was reported Tuesday (April 7) that Anthropic unveiled a program called Project Glasswing that will allow select partners to gain early access to "Claude Mythos Preview." The initiative includes participation from leading companies such as Amazon, Microsoft and Apple, alongside cybersecurity and infrastructure players like CrowdStrike, Palo Alto Networks, Google and NVIDIA. The model is being positioned specifically for defensive cybersecurity work, and the initiative is meant to allow partners to identify vulnerabilities and strengthen systems before threats can be exploited, according to the report. In its announcement of Project Glasswing, Anthropic said: "Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities." The company also said in the announcement: "We are hopeful that Project Glasswing can seed a larger effort across industry and the private sector, with all parties helping to address the biggest questions around the impact of powerful models on security." It was reported March 26 that an accidental data leak forced Anthropic to confirm that it is testing an unreleased AI model called Claude Mythos whose capabilities exceed any system it has previously released. A day later, on March 27, it was reported that concerns about the capabilities of an AI model being tested by Anthropic drove down cybersecurity stocks. In November, Anthropic reported that another of its AI models had been manipulated into carrying out a wide-reaching cyber-espionage operation.

Anthropic
PYMNTS.com 17d ago

The Billionaire Just Predicted Anthropic Would Crush Palantir. It Could Already Be Happening

Palantir's valuation makes it vulnerable to a deeper drawdown. Contrarian investor Michael Burry gained fame from The Big Short as one of the handful of investors portrayed in the movie who bet big on the 2008 housing crash and won. Burry collected about $100 million for himself with his bet against subprime mortgages and made more than $700 million for his investors. Now Burry, who's believed to be a billionaire, is still an active investor, though he has transitioned his Scion Asset Management to a family office, and he's now betting big against Palantir (NASDAQ: PLTR), the deep data analytics company now best known for its AI platform. Burry bought long-dated put options on the tech stock in the third quarter, spending roughly $9.2 million to do so. This week, he stepped up his attack on Palantir, saying in a now-deleted post on X that "Anthropic is eating Palantir's lunch." Those comments, which came just as Anthropic said its new Claude Mythos AI model can identify software vulnerabilities and is too powerful to be released, seemed to be having an impact on Palantir. Over Wednesday and Thursday, Palantir stock lost 13%, even as the broad market surged on Wednesday on news of the ceasefire; software stocks then plunged on Thursday in response to the new threat from Anthropic. Burry backed up his argument by noting that Anthropic's annual recurring revenue (ARR), a proxy for run-rate revenue, increased from $9 billion to $30 billion. The billionaire attributed that growth to taking market share from traditional software companies, saying that businesses are choosing Anthropic for its "easier, cheaper [and more] intuitive" product. Burry also went after Palantir's core, arguing that the analytics company depends on outside large language models (LLMs) like Anthropic's to make its AI platform work. During Anthropic's kerfuffle with the Pentagon earlier this year, Palantir had to remove Claude from its systems and rebuild the platform, a vulnerability. In its 10-K report, Palantir doesn't discuss Anthropic specifically, but it says its AI platform provides connectivity to third-party LLMs. Arguably, what makes Palantir's AI platform valuable, and what has driven its growth, is the utility of the LLMs it works with, meaning the value should accrue to the owners of those LLMs, like Anthropic and OpenAI, rather than to Palantir itself. That's the argument Burry is making, at least, and it seems to make sense; it's also why software stocks have plunged in recent weeks. Investors see new tools from Anthropic as an alternative to traditional enterprise software, and that extends to Palantir. Based on Palantir's recent results, the business hasn't shown any vulnerability to competition from Anthropic or anyone else: its revenue growth rate has accelerated in each of the last ten quarters, reaching 70% in the fourth quarter. However, Palantir's valuation has long made it vulnerable to this kind of pullback, and the stock is now down by more than a third from its peak last October. Even after that sell-off, Palantir still trades at a price-to-sales ratio of 85, which makes it the most expensive stock in the S&P 500 by far.
Palantir isn't alone in the software sell-off either, as SaaS stocks are falling across the board, and the Anthropic threat shows no signs of slowing down. At this point, Palantir's business results are still strong, but the same could be said of many of its SaaS peers that are also posting impressive growth. It may take more than just results to rewrite the narrative for Palantir at this point, especially as the company clearly relies on LLMs like Anthropic's. Given its valuation, there's plenty of room for the stock to fall further, especially if Anthropic continues to make advances and release new products.

Anthropic
NASDAQ Stock Market 17d ago

xAI Training 10 Trillion Parameter Model -- Likely Out in Mid 2026

xAI would be leading in raw announced parameter scale: no other lab has publicly confirmed training 10T or even 6T models right now. The 6T model alone is roughly double the rumored size of Grok 4 and far larger than most current estimates for GPT-5 or Claude 4.6. Parameter count is only part of the story, though. AI models are judged more on:

* Active parameters per token (MoE efficiency).
* Training data quality and "intelligence density" (xAI claims higher density per gigabyte).
* Inference-time compute (reasoning modes, multi-agent orchestration).
* Real-world benchmarks (coding, agentic tasks, multimodality).

Chips needed and costs for pre-training runs: exact per-model costs are not public (the models are still training), but the best analyses and estimates are:

* Colossus 2 hardware: ~550,000 NVIDIA GPUs (mostly GB200/GB300 Blackwell variants) at ~$18 billion in hardware cost alone (average ~$32k-$40k per GPU). This supports the full parallel training lineup.
* Total CapEx for Colossus 2 runs to tens of billions of dollars (land, power infrastructure, cooling, networking), including on-site gas turbines and Megapacks for 400+ MW of dedicated power and rapid buildout.
* Per-model rough estimates (community/analyst extrapolations): the 10T model needs ~$1.5 billion+ in compute (one early analyst estimate; cost scales with FLOPs and duration), with an initial pre-training phase of ~2 months on Colossus 2. The 6T model is a similar order of magnitude but lower, benefiting from shared cluster efficiency. Smaller 1T/1.5T runs are significantly cheaper and faster due to parallelization.
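As a rough sanity check on figures like these, here is a back-of-envelope sketch using the common ~6 x N x D FLOPs rule of thumb for dense pre-training. Every input below (token count, per-GPU throughput, utilization, cost per GPU-hour) is an assumption chosen for illustration, not a disclosed xAI figure, and a MoE model would need far less compute than the dense math suggests.

```python
# Back-of-envelope pre-training estimate (illustrative only; all inputs are
# assumptions, not disclosed xAI figures). Uses the ~6*N*D FLOPs rule of thumb.

def pretraining_estimate(params, tokens, flops_per_gpu, mfu, usd_per_gpu_hour, num_gpus):
    """Return (wall-clock days, total compute cost in USD)."""
    total_flops = 6 * params * tokens              # dense-training approximation
    sustained = flops_per_gpu * mfu * num_gpus     # effective cluster FLOP/s
    seconds = total_flops / sustained
    gpu_hours = num_gpus * seconds / 3600
    return seconds / 86400, gpu_hours * usd_per_gpu_hour

days, cost = pretraining_estimate(
    params=10e12,           # 10T parameters, treated as dense (MoE would cut this)
    tokens=20e12,           # assumed training-token count
    flops_per_gpu=2.5e15,   # assumed low-precision FLOP/s per Blackwell-class GPU
    mfu=0.35,               # assumed model FLOPs utilization
    usd_per_gpu_hour=3.0,   # assumed amortized cost per GPU-hour
    num_gpus=550_000,       # the reported Colossus 2 scale
)
print(f"~{days:.0f} days of training, ~${cost / 1e9:.1f}B in compute")
```

Under these assumptions the sketch prints roughly a month of training and about $1 billion in compute, the same order of magnitude as the ~$1.5 billion, ~2 month figures above; modest changes to utilization or token count move the answer substantially.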

xAI
freedomsphoenix.com 17d ago

Anthropic's Mythos Will Force a Cybersecurity Reckoning -- Just Not the One You Think

Anthropic said this week that the debut of its new Claude Mythos Preview model marks a critical juncture in the evolution of cybersecurity, representing an unprecedented existential threat to existing software defense strategies. So, is it more AI hype -- or a true turning point? According to Anthropic, Mythos Preview crosses a capability threshold: it can discover vulnerabilities in virtually any operating system, browser, or other software product and autonomously develop working exploits for hacking. With this in mind, the company is only releasing the new model to a few dozen organizations for now -- including Microsoft, Apple, Google, and the Linux Foundation -- as part of a consortium dubbed Project Glasswing. But after years of speculation about how generative AI could impact cybersecurity, the news this week ignited controversy about whether a reckoning has really arrived and what it might look like in practice. Some are extremely skeptical of Anthropic's claims. They argue that existing AI agents can already help users find and exploit vulnerabilities much more easily and cheaply than ever before, and that this reality is fueling refinements in how companies discover and patch their software without fundamentally changing the paradigm. And then there's the ick factor that Anthropic will almost certainly benefit financially from positioning its latest model as mysterious, uniquely powerful, and exclusive. Other researchers and practitioners, though, say they agree with Anthropic's assessment and point out that the company has said Mythos Preview is just the first model to achieve capabilities that will ultimately be widely available in others. "I typically am very skeptical of these things, and the open source community tends to be very skeptical, but I do fundamentally feel like this is a real threat," says Alex Zenla, chief technology officer of cloud security firm Edera. Zenla and others specifically point to one Mythos Preview capability as the pivot point. Generative AI, they say, is now getting more capable at identifying and developing what are known as "exploit chains," or groups of vulnerabilities that can be exploited in sequence to deeply compromise a target -- essentially Rube Goldberg-machine-style hacking. Many of the most sophisticated hacking techniques employ exploit chains, including so-called zero-click attacks that compromise a system without requiring any interaction from a user. "We are already living in the world where companies run vulnerable software, vulnerable hardware, and struggle to patch. Many companies are not capable of securing their infrastructure -- that hasn't really changed from yesterday to today," says longtime security engineer and researcher Niels Provos. "But from what I understand, Mythos is really good at coming up with multistage vulnerabilities, and then also provides the proof of exploitation. I don't think it intrinsically changes the problem space, but it changes the required skill level to find these vulnerabilities and exploit them." The limited release of Mythos Preview to Project Glasswing participants gives defenders only a small lead time to use the model to find weaknesses in their own systems and to start grappling more broadly with how software development, update cycles, and patch adoption need to change before attackers gain widespread access to such capabilities themselves. Industry leaders seem to be heeding the warning.
Anthropic's frontier red team lead, Logan Graham, told WIRED on Tuesday that as the company reached out to organizations about Project Glasswing ahead of this week's announcement, the phone calls got shorter and shorter because the potential threat was becoming more obvious. "This is an issue that involves all of the model developers. Our goal here is just to kick things off," Graham said. "It's really important that Mythos Preview gets in the hands of defenders to give a head start."

Anthropic
Wired 17d ago