News & Updates

The latest news and updates from companies in the WLTH portfolio.

US Regulators Reportedly Warn Top Bank CEOs Over Anthropic AI Cyber Risk In Urgent Briefing

US financial regulators have raised fresh concerns about advanced artificial intelligence and its potential impact on banking cybersecurity, holding an urgent meeting with top Wall Street executives to discuss risks linked to a new frontier AI system developed by Anthropic, according to a new report. Specifically, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened a closed-door session with chief executives of major US banks, warning them to strengthen cyber defenses against emerging AI-driven threats. The meeting, as reported by Bloomberg News, focused on concerns that Anthropic's latest advanced model could significantly lower the barrier to sophisticated cyberattacks, particularly by helping attackers identify vulnerabilities in widely used software systems. Bank executives were reportedly urged to reassess their cybersecurity frameworks and prepare for scenarios in which AI systems could be used to automate or scale intrusion attempts against financial infrastructure.

Channel News Asia added that the discussions were triggered by assessments that Anthropic's newest model demonstrated unusually strong capability in identifying software weaknesses, raising fears that such tools could be misused if they fall into the wrong hands. While details of the model's internal capabilities have not been publicly disclosed, the concern among regulators is that frontier AI systems are rapidly improving in areas such as code analysis, vulnerability detection and automated reasoning, skills that could be weaponized in cyber warfare or financial crime.

The urgency of the briefing reflects a broader shift in how US authorities view artificial intelligence risk. Rather than treating it solely as a technology-sector issue, regulators are increasingly framing advanced AI as a potential systemic financial stability risk, similar to major shocks in cybersecurity or market infrastructure.
The Straits Times also noted that officials are pushing banks to coordinate more closely with regulators and AI developers to ensure safeguards are built in before such systems are widely deployed in sensitive environments. The concerns come amid growing global debate over how to regulate rapidly advancing AI systems. The meeting highlighted fears that frontier models are becoming more capable of autonomous planning, coding assistance and vulnerability discovery, raising questions about oversight, containment and responsible release practices.

Anthropic
International Business Times · 17d ago

CoreWeave signs multi-year cloud deal with Anthropic to power Claude

CoreWeave has signed a multi-year cloud deal with AI startup Anthropic. The agreement will provide Anthropic with compute capacity for its Claude model family starting later this year. Financial details were not disclosed. CoreWeave's stock rose more than 5 percent in premarket trading. The buildout will happen in phases, with the option to expand later. For CoreWeave, the Anthropic deal is part of a string of major contracts: last year, the company signed an $11.9 billion deal with OpenAI, followed by a $6.3 billion order with Nvidia in September, and, just the day before, an expanded $21 billion deal with Meta. The Anthropic contract helps CoreWeave diversify its revenue; until now, around 67 percent of its income came from Microsoft. CoreWeave's stock is up about 29 percent year to date.

Anthropic
THE DECODER · 17d ago

Palantir stock rebounds after Trump endorsement amid Anthropic competition

Investing.com -- Palantir Technologies (NASDAQ:PLTR) shares reversed off session lows Friday after President Trump mentioned the company in a social media post, though the stock remained down 2% after falling as much as 6% earlier in the session. Trump wrote, "Palantir Technologies (PLTR) has proven to have great war fighting capabilities and equipment. Just ask our enemies!!!" The endorsement came as the stock faced selling pressure amid concerns about competitive threats from Anthropic. Palantir shares had declined over 15% in the past five days before Trump's mention today. The recent weakness followed Anthropic's release of a new product around multi-agent orchestration, which raised questions about potential headwinds for the software sector. Wedbush analyst Dan Ives earlier reiterated an Outperform rating and $230.00 price target on Palantir, addressing the competitive concerns. The analyst commented, "Palantir has been under pressure over the past few days, including down 7% today, after Anthropic released a new product around multi-agent orchestration, which continues to add more headwinds to the software sector. While Anthropic is hitting a new scale with the company now at $30 billion ARR, up from $9 billion at the start of the year, we believe this is not at the expense of PLTR's business as the company continues to accelerate both its US commercial and government businesses with US Commercial growing 137% YoY and US government accelerating 66% YoY." Ives suggested the company's growth trajectory remains intact despite competitive developments in the AI sector.

Anthropic
Yahoo7 Finance · 17d ago

The "SaaS-Pocalypse" Continues: Cloudflare, ServiceNow, CrowdStrike Under Fire as Anthropic Rewrites the Rules

The so-called "SaaS-pocalypse" is back, and it's hitting harder today than it did yesterday. Anthropic's release of "Claude Code Security," an AI-driven security product, has rattled investors across the enterprise software and cybersecurity space, sending shares of Cloudflare (NYSE:NET), ServiceNow (NYSE:NOW), and CrowdStrike (NASDAQ:CRWD) sharply lower in Friday morning trading. This follows Thursday's brutal sector repricing, when Anthropic's Managed Agents release triggered a broad software selloff. Today, the market is asking a harder question: if Anthropic can build AI-native security tools, what exactly are incumbents selling? The fear isn't subtle. Investors are pricing in the possibility that AI agents and large language models could systematically commoditize the enterprise software subscription moats that companies like Cloudflare, ServiceNow, and CrowdStrike have spent years building. That's the SaaS-pocalypse thesis, and right now, it's winning. Cloudflare shares fell roughly 12% Friday morning, to around $170 from a prior close of $193.05, and have continued to slide. That makes a second consecutive day of double-digit losses: NET closed at $211.25 on Wednesday, April 8, fell to $193.05 on Thursday, and was trading at $164.60 later on Friday. That's a staggering two-day collapse for a stock that was still up significantly year-to-date heading into this week. The irony here is real. Cloudflare's leadership has argued publicly that the rise of AI agents makes its network infrastructure more essential, not less. CEO Matthew Prince has positioned the company as "the platform AI agents run on and through." The bull case is that Cloudflare sits at the exact layer where agentic computing operates, making it indispensable rather than displaceable.
The bear case, now loudly winning in the market, is that Anthropic's expanding product suite threatens to bypass or commoditize that layer entirely. Reddit sentiment for NET stock is deeply divided, with infrastructure bulls viewing the two-day rout as a buying opportunity. That's a debate that won't resolve today. ServiceNow shares are down 8% today to $82, from a prior close of $89.81. The stock is now down 46% year-to-date and hitting fresh 52-week lows. ServiceNow's workflow automation platform is perhaps the most directly threatened by the SaaS-pocalypse thesis. AI agents can automate IT service management and enterprise workflow functions, which is precisely what ServiceNow charges premium subscription prices to deliver. UBS cut ServiceNow stock to Neutral with a $100 target today, while TD Cowen also cut to Hold with a $129 target yesterday, compounding the selling pressure. The contrarian argument centers on ServiceNow's depth of enterprise integration and its own AI product updates. CEO Bill McDermott has said that no AI company in the enterprise is "better positioned for sustainable profitable revenue growth." ServiceNow also holds a direct partnership with Anthropic, integrating Claude models into its AI Platform. That partnership hasn't stopped the selling, but it does complicate the pure-disruption narrative. Reddit sentiment for NOW stock shifted from bearish across April 8 and 9 to neutral on Friday morning, suggesting some contrarian interest is emerging. CrowdStrike shares are down 5% today to $377, from a prior close of $394.68. That's a meaningful decline, but it looks almost tame next to Cloudflare's 12% drop. CrowdStrike is still up 2% over the past year, and analysts remain broadly constructive, with 42 buy ratings and zero sell ratings on the stock. The core bull argument for CrowdStrike is that AI proliferation actually increases cybersecurity demand.
More AI agents mean a larger attack surface, which means more need for endpoint and threat protection. CrowdStrike recently expanded its collaboration with IBM and launched new AI-driven security tools, which have helped stabilize investor sentiment relative to peers. Anthropic's Claude Code Security is seen as a more direct threat to legacy security platforms than to CrowdStrike's modern AI-native Falcon platform. If you're watching this sector closely, the case for bottom-fishing in CrowdStrike rests on exactly that logic. The analyst consensus price target of $489.86 implies more than 30% upside from current levels. That's a thesis worth watching carefully. No matter how you slice it, today is a sector-wide reckoning. Watch for whether any of these three names find support into the close, and keep an eye on whether Anthropic makes further product announcements that could extend the SaaS-pocalypse into next week.

Anthropic
24/7 Wall St. · 17d ago

Anthropic's $12 Billion CoreWeave Deal Signals a New Arms Race for AI Infrastructure

Anthropic just locked in one of the largest infrastructure agreements in artificial intelligence history. And it didn't build a single data center to do it. The Claude developer signed a five-year, approximately $12 billion agreement with CoreWeave, the Nvidia-backed cloud computing company that has rapidly become the go-to infrastructure provider for AI firms unwilling -- or unable -- to build out their own server farms. The deal, first reported by Yahoo Finance, gives Anthropic dedicated access to massive GPU clusters necessary to train and deploy its next generation of large language models. The numbers alone are staggering. But the strategic implications run deeper. This agreement represents a fundamental bet by Anthropic that outsourcing compute to a specialized provider is more efficient than vertically integrating -- a direct contrast to the approach taken by rivals like xAI and Meta, which are pouring tens of billions into proprietary data center buildouts. It also marks a defining moment for CoreWeave, which went public in late March 2025 in one of the year's most closely watched IPOs. The company's stock has been volatile since its debut, but deals of this magnitude provide exactly the kind of contracted revenue visibility that Wall Street craves.

The Compute Hunger Is Insatiable

The AI industry's appetite for computing power has entered a phase that even optimistic projections from two years ago didn't fully anticipate. Training frontier models -- the most capable systems from companies like OpenAI, Google DeepMind, and Anthropic -- now requires clusters of tens of thousands of GPUs running for months at a time. Inference, the process of actually running these models for end users, demands its own enormous and growing share of compute. Anthropic's deal with CoreWeave is structured around access to Nvidia's latest GPU hardware, which remains the gold standard for AI training workloads.
The five-year term and $12 billion price tag suggest Anthropic is securing not just current-generation chips but future hardware as well, likely including Nvidia's Blackwell and successor architectures as they become available. This isn't Anthropic's first major infrastructure commitment. The company has previously secured compute through deals with Amazon Web Services, which has invested billions in Anthropic and offers Claude through its Bedrock platform. Google Cloud has also been a significant partner. But the CoreWeave agreement represents something different: a dedicated, large-scale compute arrangement with a provider whose entire business model is built around serving AI workloads, not general enterprise cloud customers. The distinction matters. AWS and Google Cloud serve millions of customers across every industry. CoreWeave was purpose-built for GPU-intensive computing. That specialization means Anthropic can potentially get better performance, more predictable access, and pricing structures tailored to its specific needs. So why not just build its own infrastructure? Cost and speed. Constructing data centers from scratch takes years and requires expertise in real estate, power procurement, cooling systems, and hardware logistics -- none of which are core competencies for a research lab. CoreWeave handles all of that. The trade-off is dependency. Anthropic is committing $12 billion over five years to a single provider, creating significant concentration risk. If CoreWeave faces operational issues, supply chain disruptions, or financial difficulties, Anthropic's model training and deployment schedules could be directly impacted. But Anthropic appears to have calculated that the risk is manageable, especially given CoreWeave's deep relationship with Nvidia and its rapidly expanding data center footprint across the United States and internationally. 
CoreWeave's Meteoric Rise -- and the Questions That Follow

CoreWeave's trajectory has been nothing short of extraordinary. Founded in 2017 as a cryptocurrency mining operation, the company pivoted to GPU cloud computing as AI demand exploded. It raised over $15 billion in debt and equity financing before going public, built relationships with every major AI lab, and positioned itself as the anti-hyperscaler -- a cloud provider laser-focused on GPU compute rather than trying to be everything to everyone. The Anthropic deal adds to a contract backlog that already includes commitments from Microsoft, which signed a reported $10 billion-plus agreement with CoreWeave for Azure AI infrastructure. These mega-deals have given CoreWeave a revenue profile that looks more like an infrastructure utility than a typical startup, with long-term contracted cash flows that support its heavy capital expenditure. But questions persist. CoreWeave carries substantial debt. Its capital expenditure requirements are enormous and ongoing, as each new generation of Nvidia GPUs demands fresh investment. And the company's valuation, both public and in prior private rounds, assumes sustained, exponential growth in AI compute demand -- an assumption that, while currently well-supported, is not without risk. The broader chip deal market has exploded in parallel. According to Yahoo Finance, the total value of AI infrastructure agreements signed in 2024 and early 2025 dwarfs anything seen in previous technology cycles. Nvidia's data center revenue has surged past $100 billion on an annualized basis, and every major cloud provider is racing to secure allocation of its most advanced chips. This dynamic has created a seller's market for GPU access. Companies that locked in supply early -- or that have direct relationships with Nvidia -- hold significant advantages.
CoreWeave's status as one of Nvidia's largest customers -- and Nvidia's early backing of its GPU cloud vision -- has been a critical competitive moat. For Anthropic specifically, the timing of this deal coincides with intensifying competition in the foundation model space. OpenAI continues to push ahead with GPT-5 and beyond. Google is investing heavily in Gemini. Meta's Llama models are gaining traction in the open-source community. And newer entrants like xAI, backed by Elon Musk's considerable resources, are building hundred-thousand-GPU supercomputers in Memphis. Anthropic can't afford to fall behind on compute. Its Claude models have earned a reputation for strong performance on reasoning, coding, and safety benchmarks, but maintaining that position requires continuous investment in training larger, more capable systems. The CoreWeave deal is essentially an insurance policy against being outgunned on infrastructure. There's a financial dimension too. Anthropic was last valued at approximately $61.5 billion in a funding round earlier this year, according to multiple reports. The company has raised billions from investors including Google, Spark Capital, and Salesforce Ventures. Committing $12 billion to infrastructure signals confidence that revenue -- from API access, enterprise contracts, and consumer products -- will scale fast enough to justify the expenditure. Whether that confidence proves warranted depends on how quickly the AI market matures from a capital-intensive buildout phase into a sustainable, revenue-generating business. Right now, the industry is spending far more on infrastructure than it's generating in direct AI revenue. That gap will need to close.

What This Means for the Broader AI Industry

The Anthropic-CoreWeave deal is part of a larger pattern reshaping the technology sector.
The traditional cloud computing model -- dominated by AWS, Microsoft Azure, and Google Cloud -- is being supplemented, and in some cases challenged, by specialized GPU cloud providers. CoreWeave is the most prominent, but companies like Lambda, Crusoe Energy, and Applied Digital are also carving out positions in this market. This fragmentation creates opportunities and risks. AI companies get more options for sourcing compute, which can drive better pricing and performance. But it also introduces complexity. Managing workloads across multiple providers, ensuring data security and model integrity, and negotiating contracts worth billions of dollars all require sophisticated operational capabilities. For investors, these mega-deals offer a clearer picture of where AI spending is actually going. It's not just software. It's physical infrastructure -- data centers, power plants, cooling systems, networking equipment, and above all, GPUs. The companies that control these assets, or that have locked in long-term access to them, are positioned to capture enormous value. Nvidia remains the ultimate beneficiary. Every CoreWeave deal, every hyperscaler buildout, every AI startup securing compute -- they all flow back to Nvidia's data center business. The company's dominance in AI training chips is virtually unchallenged, and its software platform, CUDA, has created deep lock-in across the industry. AMD and Intel are working to close the gap, and custom chip efforts from Google (TPUs), Amazon (Trainium), and Microsoft (Maia) represent longer-term competitive threats. But for now, Nvidia's position is secure, and deals like the Anthropic-CoreWeave agreement only reinforce it. The $12 billion figure will grab headlines. It should. But the real story is what it reveals about the current state of AI: a technology sector betting tens of billions of dollars that artificial intelligence will become as fundamental to the global economy as electricity or the internet. 
The infrastructure is being built at a pace not seen since the fiber-optic boom of the late 1990s. Whether this era ends differently than that one did remains the trillion-dollar question.

Crusoe, xAI, Anthropic
WebProNews · 17d ago

OpenAI challenges Anthropic's Claude Max with $100 Pro plan for Codex - The Economic Times

The new plan offers five times the Codex usage of the $20 Plus tier, making it better suited for longer, more intensive coding sessions. This development comes on the back of the AI major's announcement that third-party integrations, including OpenClaw, will no longer be covered under standard subscription limits, and that usage through such tools will move to a separate pay-as-you-go model.

Anthropic
Economic Times · 17d ago

Anthropic's Mythos is a wake-up call, but experts say the era of AI-driven hacking is already here | Fortune

Anthropic's new AI model, Mythos, is causing a stir among cybersecurity experts and policymakers. The company says its new model is so skilled at finding and exploiting software vulnerabilities that it's too dangerous to release. Instead, it is limiting access to a small group of major technology companies whose software is the foundation for many other digital services, hoping to give defenders time to strengthen their systems. Anthropic is not the only AI lab producing models with these kinds of capabilities, or considering similar release strategies to try to ensure cyber defenders have access to these systems before hackers do. OpenAI is reportedly preparing a new model -- internally known as "Spud" -- that could match Mythos in cybersecurity capabilities. According to a report from Axios, the company is also working on an advanced cybersecurity-focused system that it plans to release in a phased rollout to a small group of partners, again to try to give defenders a head start. Some analysts have dismissed these cautious, limited releases as more about marketing and creating hype around new models, rather than purely safety-driven decisions. But most agree that AI-driven cyber capabilities have reached a dangerous tipping point. Even without the powerful new model, they say existing, publicly available AI models can already carry out sophisticated cyberattacks -- sometimes in minutes. Researchers are concerned about both the scale and accessibility of AI-enabled attacks. Tasks that once required advanced expertise -- like scanning code for vulnerabilities or running attacks that require chaining multiple exploits together -- are increasingly being automated or semi-automated by AI systems. Attackers, even those without advanced technical skills, can now launch highly automated attacks across thousands of systems at once in a massive, coordinated assault.
In practical terms, that raises questions both for enterprises and policymakers about how to protect critical infrastructure in a world where these advanced AI capabilities will soon be in the hands of bad actors and hostile nation states. Unless government and industry harden defenses, the world could see a wave of devastating cyberattacks taking down banking systems, power grids, hospitals, or water systems. It is exactly such a nightmare scenario that Anthropic says it is hoping to head off by limiting Mythos' release. Some researchers say it is not clear, however, how much the new models increase the chances of this kind of cyber-Armageddon. But the reason for their skepticism is not reassuring: they say that much of what Mythos can do may already be possible with smaller, cheaper, openly available models. Recent research from the AI security firm AISLE suggests that several of the vulnerabilities Anthropic highlighted in its announcement -- including decades-old bugs -- could have been detected by openly available models that anyone can download and run for free. There are a couple of caveats: Rather than simply pointing an AI model at an entire software application or a complete code base and asking the AI model to find a way to hack it -- as Anthropic appears to have done with Mythos -- the AISLE researchers already knew which segments of code contained the bugs and fed the models these code chunks. Smaller models generally have narrower context windows, meaning they can't take in an entire large code base at once. But it is possible to imagine a pipeline in which a large code base is broken into smaller pieces, each of which is fed in turn to a small AI model, allowing it to examine each segment for possible exploits, experts said.
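The chunk-and-scan pipeline the experts describe could be sketched roughly as follows. This is an illustrative outline only, not any lab's actual tooling: the function names are hypothetical, and the `analyze` callback stands in for a real model API call that would return suspected issues for each code segment.

```python
from typing import Callable, Iterator

def chunk_lines(source: str, window: int = 60, overlap: int = 10) -> Iterator[tuple[int, str]]:
    """Split source code into overlapping windows of lines so each chunk
    fits inside a small model's context. Yields (start_line, chunk_text)."""
    lines = source.splitlines()
    step = window - overlap
    for start in range(0, max(len(lines), 1), step):
        chunk = lines[start : start + window]
        if not chunk:
            break
        yield start + 1, "\n".join(chunk)
        if start + window >= len(lines):
            break

def scan_codebase(files: dict[str, str],
                  analyze: Callable[[str], list[str]]) -> list[tuple[str, int, str]]:
    """Feed each chunk of each file to `analyze` (a stand-in for a model
    call) and collect any reported issues with their approximate location."""
    findings = []
    for path, source in files.items():
        for start_line, chunk in chunk_lines(source):
            for issue in analyze(chunk):
                findings.append((path, start_line, issue))
    return findings
```

In a real pipeline, `analyze` would wrap a call to a locally hosted model and the overlap would be tuned so that a vulnerability spanning a chunk boundary still appears whole in at least one window.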
According to Spencer Whitman, chief product officer at AI security firm Gray Swan, the hard part of what researchers achieved with Mythos was autonomously finding the vulnerabilities within large codebases and then testing those exploits. "Finding vulnerabilities is hard because it requires locating weak points buried within millions of lines of code and verifying that these targets result in a real exploit," he told Fortune. "Mythos claims it autonomously completed both steps." "The fact that some of these vulnerabilities sat undetected in codebases for decades underscores just how hard the first step actually is -- and why automating it is significant," he added. Smaller models may be able to achieve comparable results to Mythos, according to Charlie Eriksen, a security researcher at Aikido Security, but they require more technical skill, careful prompting, and better-designed tooling to get there. Models like Mythos, however, may make it considerably easier for even those with less technical skills to carry out sophisticated and devastating cyber attacks. "This technology is moving so fast that it's naive to assume others aren't able to easily replicate similar results, if not already, at least very soon," he said. "Anybody with a computer can develop very powerful offensive cyber capabilities in a short amount of time, without needing a lot of expertise in cybersecurity." Anthropic's decision to limit Mythos' release is also putting unusual power in the hands of a single company. Even though Anthropic says it is consulting with the U.S. government on Mythos' capabilities and the vulnerabilities it is uncovering (and there are calls for it to work with other allied governments too), the company is effectively deciding who gets access to one of the most advanced cyber capabilities ever developed. 
Some security experts and software developers -- especially those committed to open-source software, which is publicly accessible and often free to use -- argue the world would be safer if Mythos were released so that every defender, not just Anthropic's chosen partners, could use it to find and patch vulnerabilities. "Whatever the right judgment call is, the most striking aspect of this situation is how reliant we are on the judgment of a handful of private actors who aren't accountable to the public," Jonathan Iwry, a fellow at the Wharton Accountable AI Lab, said. Anthropic did loop in the government early. According to reporting from Axios, the company actively warned U.S. government officials about a new, powerful model that significantly increased the risk of cyberattacks at least a month ago. Anthropic, in a blog post announcing Project Glasswing, later said briefing the government on what the model could do, where the risks were, and how it was managing them was a "priority from the start." Despite these efforts, there's also a growing "governance gap," according to Hamza Chaudhry, AI and National Security Lead at the Future of Life Institute. These systems are being integrated into offensive cyber operations faster than policymakers can build the frameworks to govern how these capabilities are used or secured. In the past, even cyber capabilities developed by and for the use of government, particularly hacking tools developed by the U.S. National Security Agency, have ended up in the hands of bad actors. For example, in 2016, a hacking group called the Shadow Brokers published a cache of hacking tools and exploits used against major software systems -- including Microsoft Windows -- that were widely believed to have been developed by the NSA. Some of the leaked NSA exploit code was later used in WannaCry, while NotPetya also relied on the NSA-linked EternalBlue exploit, helping make both attacks among the most damaging in recent history.
The cyber abilities of AI models such as Mythos pose completely new governance challenges, too. With previous hacking tools, a human had to deliberately choose to deploy those exploits. But, according to Anthropic, in safety tests, Mythos would sometimes use its hacking abilities to accomplish some other goal in ways that surprised its creators. The safety issue is often not the AI model's coding skills, per se, but its autonomous capabilities, Chaudhry said. As AI systems become more agentic, they are able to set sub-goals, adapt their approach, and continue operating without direct human instruction at every step. The concern is that an AI system might pursue an objective in ways that extend beyond what its operator explicitly intended. "The agent... pursues its objective function through whatever pathways its intelligence and autonomy identify as optimal," he said. "An adversary state or non-state actor deploying an autonomous AI agent... is no longer directing actions so much as initiating a process whose specific trajectory they cannot fully predict." Whether companies have access to Mythos or not, experts say those not currently using AI to secure their systems may already be falling behind. Even with Anthropic limiting widespread access to its new models, AI-driven offensive capabilities are out there in less powerful forms, for those who know how to use them. Most security teams operate on the assumption that time is somewhat on their side -- that there's at least a gap between a vulnerability existing and an attacker finding it, and another gap between finding it and being able to use it. For most of recent history, that was roughly true. But advanced AI models are collapsing both gaps at once, according to Emanuel Salmona, co-founder and CEO of Nagomi Security. "Mythos found critical vulnerabilities across every major operating system and browser -- some of them decades old -- in weeks," he said. 
"When that capability is broadly available, and Anthropic's own people are saying six to eighteen months, the organizations that were already behind [on security] don't just fall further back. The model they built their programs around stops working entirely."

Anthropic
Fortune · 17d ago

SpaceX begins installing equipment at Texas facility, eyes year-end production, sources say

TAIPEI, April 10 (Reuters) - SpaceX has begun installing equipment at its advanced chip packaging facility in Bastrop, Texas, as the satellite and rocket company aims to begin production there by the end of this year, two sources familiar with the matter told Reuters. One of the sources said the timeline had seen some delays, but the company was still targeting a start of production before the year-end. The facility will package radio frequency (RF) chips used in products related to SpaceX's satellite-based internet system Starlink, the sources said, declining to be named as the information is not public. The RF chips to be packaged in Bastrop are currently packaged by external providers, but SpaceX plans to bring at least part of the packaging process in-house once the facility is ready, according to one of the sources and a third source. SpaceX did not immediately respond to a request for comment. In 2025, Texas Governor Greg Abbott said that over the next three years, SpaceX's Bastrop facility would expand by 1 million square feet to produce Starlink kits and related components, including advanced packaged silicon products. The expansion is expected to cost more than $280 million, Abbott said. Elon Musk has been building up the space company's semiconductor capabilities and unveiled a plan last month to build advanced chip factories at a sprawling facility in Austin, Texas. (Reporting by Wen-Yee Lee; Editing by Janane Venkatraman)

SpaceX
Superhits 97.9 Terre Haute, IN17d ago
Read update
SpaceX begins installing equipment at Texas facility, eyes year-end production, sources say

After a Recent Deal With X-Energy, Is Fluor Becoming the Ultimate Nuclear Pick-and-Shovel Play?

It was an early investor in NuScale Power and recently agreed to a contract to help X-Energy develop its project in Texas. The nuclear energy buildout is very real, and the White House has affirmed its stance with multiple executive orders and multibillion-dollar investments to grow its nuclear fleet and accelerate the deployment of next-generation nuclear technology. As part of its ambitious goals, the United States will look to quadruple its new nuclear energy capacity to 400 gigawatts (GW), have 10 large reactors under construction by 2030, and provide grants and pilot programs for companies developing nuclear fuel, small modular reactors (SMRs), and microreactor technology. One company emerging as a key partner in achieving this buildout is Fluor (NYSE: FLR), the global engineering, procurement, and construction (EPC) company with expertise in managing these massive projects. The firm is strengthening its nuclear portfolio through a new deal with X-Energy, a developer of advanced reactor technology. Fluor is making major moves and flexing its muscle as a trusted EPC firm for nuclear energy companies. It was an early investor in NuScale Power in 2011 and is helping NuScale develop its RoPower project in Romania; it recently opened an office in Bucharest to support its projects. This month, the EPC firm came to an agreement with X-Energy to work on its advanced nuclear project at Dow's UCC Seadrift Operations in Texas. As part of the contract, Fluor will provide Front-End Loading Stage 2 services, which include strategic planning, feasibility assessments, and cost control and risk mitigation. 
X-Energy is a private company specializing in Generation IV high-temperature gas-cooled reactors and nuclear fuel. It counts Dow and Amazon among its partners and has recently submitted a draft registration statement with the Securities and Exchange Commission to go public. The project is part of the U.S. Department of Energy's Advanced Reactor Demonstration Program, which aims to accelerate the commercialization of advanced nuclear technology through cost-shared partnerships. As part of this project, X-Energy will deploy four 80-megawatt small modular reactor units. Fluor is solidifying its position as a trusted partner for nuclear energy companies, including breakthrough companies that could change how nuclear energy is deployed, such as NuScale Power and X-Energy. In February, it was selected as the EPC contractor on Centrus Energy's uranium enrichment facility in Piketon, Ohio. Investors seeking exposure to nuclear energy while diversifying away from miners or fuel producers may find Fluor a compelling choice. Fluor leverages a specialized workforce to develop these projects and is highly involved in the planning phases that determine if these major projects are possible. As a result, the company generates revenue early in the nuclear buildout, years before any plants come online. This makes Fluor an excellent pick-and-shovel play for investors looking to capitalize on the nuclear energy buildout in the decades ahead.

X-energy
NASDAQ Stock Market17d ago
Read update
After a Recent Deal With X-Energy, Is Fluor Becoming the Ultimate Nuclear Pick-and-Shovel Play?

US Banks Warn of Risks After Kraken Wins Federal Reserve Account - FinanceFeeds

Crypto exchange Kraken has secured a Federal Reserve master account through its Wyoming-based banking arm, marking the first time a crypto-native firm has gained direct access to the central bank's payment infrastructure. The account, granted by the Kansas City Fed, comes with a "limited-purpose" structure and is initially approved for one year. Master accounts are typically reserved for banks, allowing direct access to the Fed's payment rails. For Kraken, this means it can move funds via Fedwire and hold balances at the central bank, bypassing traditional banking intermediaries. The result is faster settlement and potentially lower transaction costs, particularly for institutional clients. However, the account is not equivalent to full banking access. Kraken cannot earn interest on reserves, access emergency lending facilities, or use systems such as FedNow or ACH. These restrictions reflect an attempt by regulators to contain risk while still allowing limited integration into the financial system. The approval has triggered scrutiny from lawmakers and banks, who argue that granting central bank access to crypto firms introduces new risks. Critics point to the opacity of the approval process and question whether standard Federal Reserve protocols were followed. Representative Maxine Waters has formally requested additional disclosures from the Kansas City Fed, focusing on the conditions attached to the account and the rationale behind the decision. The concern extends beyond Kraken itself, as the approval could set a precedent for other crypto firms seeking similar access. From a banking perspective, the issue is both competitive and structural. Direct access to Fed infrastructure allows crypto firms to operate more independently of traditional banks, potentially eroding their role in payments and custody. At the same time, it raises questions about how non-bank entities should be supervised when connected to core financial infrastructure. 
The restrictions placed on Kraken's account are designed to limit its exposure to the broader financial system. The firm cannot access central bank credit or key retail payment systems, and its ability to hold balances is constrained. Even with these limits, the account allows Kraken to streamline operations for wholesale clients, particularly in high-value transactions. By settling directly through Fedwire, the firm can reduce counterparty risk and eliminate delays tied to intermediary banks. Kraken plans to use the account initially for institutional clients, with potential expansion into additional services over time. "We look at this as a great testament to regulatory rigor and cooperation. It promotes principles of both safety and soundness, and innovation," said Jonathan Jachym, the firm's global head of policy. The structure suggests a cautious regulatory approach: enabling access while restricting the most sensitive functions of central banking. The decision signals a gradual shift in how digital asset firms connect to traditional financial infrastructure. Under a more crypto-friendly policy environment, regulators appear willing to explore new frameworks that allow participation without granting full banking privileges. At the same time, the move highlights unresolved tensions. Crypto firms gaining direct access to central bank systems could alter liquidity flows, payment dynamics, and the role of intermediaries. The lack of transparency around approval criteria adds another layer of uncertainty for both markets and regulators. If additional firms secure similar access, the distinction between crypto platforms and regulated financial institutions may continue to narrow. Whether that transition strengthens efficiency or introduces systemic vulnerabilities will depend on how these accounts are structured, monitored, and scaled over time.

Kraken
FinanceFeeds17d ago
Read update
US Banks Warn of Risks After Kraken Wins Federal Reserve Account - FinanceFeeds

Anthropic Model Scare Sparks Urgent Bessent, Powell Warning To Bank CEOs

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to an urgent meeting on concerns that the latest artificial intelligence model from Anthropic PBC will usher in an era of greater cyber risk. Bessent and Powell assembled the group at Treasury's headquarters in Washington on Tuesday to make sure banks are aware of possible future risks raised by Anthropic's Mythos and potential similar models, and are taking precautions to defend their systems, according to people familiar with the matter who asked not to be identified citing the private discussions. Many of the executives were in town already for a meeting of the Financial Services Forum, an advocacy group made up of the biggest lenders. A representative for the Treasury didn't immediately respond to a request for comment. A spokesperson for the Fed declined to comment. The previously unreported meeting, arranged on short notice, is another sign that regulators consider the possibility of a new breed of cyber attacks as one of the biggest risks facing the financial industry. All the banks summoned to the meeting are classified as systemically important by top regulators, meaning their stability is a priority for the global financial system. Powell's participation in the meeting signaled that the concern was one of systemic risk, and not tied to the Trump administration's previous clashes with Anthropic, said some of the people. The Fed, with its network of examiners, is also deeply familiar with banking operations. Anthropic's Mythos is a more powerful system that the AI firm has said is capable of identifying and then exploiting vulnerabilities in every major operating system and web browser when directed by a user to do so. Regulators' caution about the power of the model in hackers' hands echoes Anthropic's own prudence. Anthropic has limited the release of it to just a few major technology and finance firms at first. 
Those companies, which include Amazon.com Inc. and Apple Inc. as well as JPMorgan Chase & Co., are part of "Project Glasswing," which will work to secure the most important systems before other similar AI models become available. Anthropic has said that it has been in discussions prior to its recent release with US officials about Mythos and its "offensive and defensive cyber capabilities." In releasing Mythos to a very limited set of companies, Anthropic pointed to several vulnerabilities that the AI system was capable of both identifying and potentially exploiting during testing. None of the examples related specifically to financial institutions, but in one instance, the firm's security team said it was able to compromise a web browser so that a website set up by a hacker could read data from another website "e.g., the victim's bank." Mythos Preview "fully autonomously discovered" a way of reading information stored in "multiple different web browsers" and then used that ability to find ways to exploit them, according to a post from Anthropic's security team. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company as a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic's request that it put a pause to the Pentagon's designation. Chief executive officers summoned to the meeting with the Fed and Treasury include Citigroup Inc.'s Jane Fraser, Morgan Stanley's Ted Pick, Bank of America Corp.'s Brian Moynihan, Wells Fargo & Co.'s Charlie Scharf, and Goldman Sachs Group Inc.'s David Solomon, said the people. JPMorgan's Jamie Dimon was unable to attend, the people said. Spokespeople for the banks declined to comment. A representative for Anthropic had no immediate comment. 
In recent years, regulators have required banks to hold some capital tied to the potential for cyber attacks, as well as other so-called operational risks such as lawsuits and rogue employees. Banks have sometimes chafed at those requirements, given that operational risk is more difficult to measure than the market and credit risks that also factor into banks' capital levels.

Anthropic
FA Magazine17d ago
Read update
Anthropic Model Scare Sparks Urgent Bessent, Powell Warning To Bank CEOs

Ghanaian man's trip home ends in chaos after airport incident in Netherlands

A Ghanaian man based in the Netherlands saw his planned trip back home abruptly cut short after an unexpected incident at the airport caused major disruption. According to reports circulating online, the unidentified man was preparing to travel to Ghana when he suddenly began acting erratically for reasons that remain unclear. Eyewitnesses described the situation as shocking, as the man reportedly started "misbehaving" without any apparent provocation. The incident was captured in a TikTok video by another Ghanaian who had accompanied him to the airport to see him off. In the video, the individual recording can be heard expressing concern over the sudden turn of events, noting that everything seemed normal prior to the episode. The unusual behavior quickly attracted attention within the airport, prompting authorities to intervene. Several police officers were reportedly called to the scene to manage the situation and ensure the safety of other travelers and staff. The man was eventually escorted away by the officers, effectively halting his journey to Ghana. It remains unclear what may have triggered the incident or whether he received medical attention afterward.

CHAOS
Modern Ghana Media Communication Ltd.17d ago
Read update
Ghanaian man's trip home ends in chaos after airport incident in Netherlands

Bessent, Powell warned bank CEOs about Anthropic model risks

(Reuters) -- U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank CEOs this week to warn of cyber risks posed by Anthropic's latest AI model, two sources familiar with the matter said on Thursday. Anthropic launched the powerful Mythos ...

Anthropic
Nikkei Asia17d ago
Read update
Bessent, Powell warned bank CEOs about Anthropic model risks

BAFTA Review Exposes Shocking Failures Behind On Air Chaos

The fallout from the BAFTA Film Awards controversy continues to ripple through the entertainment world, as a damning independent review reveals what really went wrong. What should have been a celebratory night quickly spiraled into chaos, leaving organizers, broadcasters, and audiences grappling with uncomfortable truths about planning, accountability, and oversight. BAFTA Review Reveals Structural Weaknesses An independent investigation into the awards ceremony uncovered serious cracks in the system that contributed to the widely criticized on-air moment. The review, commissioned by the BAFTA board and conducted by RISE Associates, highlighted "a number of structural weaknesses" in planning, escalation procedures, and crisis coordination. While the findings were critical, the board emphasized that "it did not find evidence of malicious intent on the part of those involved in delivering the event. We accept its conclusions in full." Still, the absence of ill intent did little to soften the broader implications. The report, made available to The Hollywood Reporter, clarified that BAFTA's internal systems were not robust enough to handle unexpected incidents, particularly in a live broadcast environment where timing and responsiveness are crucial. BAFTA Apology Acknowledges Deep Impact In the wake of the controversy, BAFTA issued a sweeping apology that addressed multiple communities affected by the incident. The organization stated, "We apologize unreservedly to the Black community, for whom the racist language used carries real pain, brutality, and trauma; to the disability community, including people with Tourette Syndrome, for whom this incident has led to unfair judgement, stigma, and distress; and to all our members, guests at the ceremony and those watching at home. What was supposed to be a moment of celebration was diminished and overshadowed." The statement didn't stop there. 
BAFTA also confirmed, "We have written to those directly impacted on the night to apologize." These acknowledgments showed the wide-reaching consequences of the incident, which went beyond a single moment on stage and ignited broader conversations about responsibility and sensitivity in global broadcasts. BBC Faces Backlash Over Broadcast Failure The controversy didn't just stop with BAFTA. The BBC, which aired the ceremony, came under intense scrutiny for allowing the offensive language to make it to viewers despite a built-in delay. Following its own investigation, the broadcaster's Executive Complaints Unit delivered a blunt verdict earlier this week. As The Blast reported, the unit stated, "The ECU found that the inclusion of the n-word in the broadcast (which was also streamed live on iPlayer) was highly offensive, had no editorial justification and represented a breach of the BBC's editorial standards, but that the breach was unintentional." This conclusion placed the BBC in a delicate position, acknowledging the seriousness of the mistake while maintaining that it was not deliberate. Former director-general Tim Davie had earlier described the incident as "a genuine error," attributing the oversight to confusion during the editing process. Despite these explanations, the backlash highlighted growing expectations for broadcasters to exercise tighter control over live and near-live content. BAFTA Editing Breakdown Explained By BBC Further details about the mishap revealed a breakdown not in policy, but in execution. BBC chief content officer Kate Phillips shed light on how the moment slipped through the cracks. She explained that the production team "did not hear the n-word at the time it was said and therefore no decision was taken to leave the word within the broadcast. The ECU accepted this was a genuine mistake." 
Phillips also pointed out that the team had successfully edited out another instance of the same word, adding that this was done "especially as the team did correctly identify and edit out a subsequent use of the same word, in line with the protocols that were agreed in advance of the event regarding offensive and unacceptable language." This explanation painted a picture of a system that, while theoretically sound, failed at a critical moment when vigilance mattered most. BAFTA Streaming Delay Fuels Outrage If the initial broadcast sparked concern, the delayed removal of the footage only intensified public anger. The unedited version remained accessible online longer than expected, compounding the damage. The ECU did not mince words, calling the delay a "serious mistake" and noting that "The fact that the unedited recording remained available for so long aggravated the offense caused by the inadvertent inclusion of the n-word in the broadcast." Kate Phillips addressed this issue as well, explaining, "There was a lack of clarity among the team present at the event as to whether the word was audible on the recording. This resulted in there being a delay before the decision was taken to remove the recording from iPlayer." The prolonged availability of the clip raised serious concerns about internal communication and crisis management, exposing gaps that extended beyond the initial mistake. Ultimately, the review makes one thing clear: while the incident may not have been intentional, it exposed critical vulnerabilities in both BAFTA's event planning and the BBC's broadcast processes. As both organizations promise reforms, the spotlight remains firmly fixed on whether meaningful change will follow or if history could repeat itself.

CHAOS
The Blast17d ago
Read update
BAFTA Review Exposes Shocking Failures Behind On Air Chaos

Weekly Funding Roundup: Sarvam AI Nears $350M, Perplexity Launches $1M Challenge

Startup News Today: AI Boom Fuels Record $300B Venture Funding Surge in Q1 2026
1. Has Sarvam AI officially closed its funding round? No, Sarvam AI has not officially closed its funding round yet. The company is currently in the advanced stages of raising $300-350 million, and the deal is expected to close soon. The round already has strong investor interest, which signals high confidence in its business model.
2. What is the latest AI startup funding news? This week saw multiple AI funding deals across sectors. Sarvam AI is nearing a $300 million round. Modus raised $85 million, Trent AI secured $13 million, and Atlas raised $6 million. Perplexity also launched a $1 million funding challenge.
3. What is Perplexity's Billion Dollar Build Challenge? Perplexity launched the "Billion Dollar Build" challenge, where participants use its AI platform to create a startup with a path to a $1 billion valuation. Winners can receive up to $1 million in funding and an equal amount in platform credits. The challenge runs for eight weeks and targets early-stage founders.
4. How much funding has Trent AI raised? Trent AI has raised $13 million in a seed funding round. The round was led by LocalGlobe and Cambridge Innovation Capital, with additional participation from angel investors linked to major tech firms.
5. Which companies have invested in Sarvam AI? Sarvam AI is expected to receive investment from Bessemer Venture Partners, along with participation from global tech players like NVIDIA and Amazon. Their involvement highlights strong global backing for India's AI ecosystem.

Perplexity
Analytics Insight17d ago
Read update
Weekly Funding Roundup: Sarvam AI Nears $350M, Perplexity Launches $1M Challenge

XAI Training 10 Trillion Parameter Model -- Likely Out in Mid 2026

xAI would be leading in raw announced scale of parameters. No other lab has publicly confirmed training 10T or even 6T models right now. The 6T model alone is roughly double the rumored size of Grok 4 and far larger than most current estimates for GPT-5 or Claude 4.6. Parameter count is only part of the story. AI models are judged more on: active parameters per token (MoE efficiency); training data quality and "intelligence density" (xAI claims higher density per gigabyte); inference-time compute (reasoning modes, multi-agent orchestration); and real-world benchmarks (coding, agentic tasks, multimodality). Chips needed and costs for pre-training runs: exact per-model costs are not public (models are still training), but here are the best analyses and estimates. Colossus 2 hardware: ~550,000 NVIDIA GPUs (mostly GB200/GB300 Blackwell variants) at ~$18 billion in hardware cost alone (average ~$32k-$40k per GPU). This supports the full parallel training lineup. Total CapEx is tens of billions of dollars for Colossus 2 (land, power infrastructure, cooling, networking), including on-site gas turbines and Megapacks for 400+ MW of dedicated power and a rapid buildout. Per-model rough estimates (community/analyst extrapolations): the 10T model needs ~$1.5 billion+ in compute (one early analyst call; scales with FLOPs and duration), with an initial pre-training phase of ~2 months on Colossus 2; the 6T model needs a similar order of magnitude but less, benefiting from shared cluster efficiency; smaller 1T/1.5T runs are significantly cheaper and faster due to parallelization.
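The hardware figure above is easy to sanity-check. A minimal back-of-envelope sketch, using only the article's rough estimates (~550,000 GPUs at an assumed $32k-$40k apiece), confirms that the quoted ~$18 billion sits inside the implied range:

```python
# Back-of-envelope check of the Colossus 2 hardware figures quoted above.
# All inputs are the article's rough estimates, not confirmed numbers.
gpu_count = 550_000                    # ~550k NVIDIA GB200/GB300-class GPUs
cost_low, cost_high = 32_000, 40_000   # assumed per-GPU price range (USD)

hw_low = gpu_count * cost_low / 1e9    # total hardware cost, in billions USD
hw_high = gpu_count * cost_high / 1e9

print(f"Hardware cost range: ${hw_low:.1f}B - ${hw_high:.1f}B")
# prints: Hardware cost range: $17.6B - $22.0B
```

The ~$18 billion quoted for hardware alone lands near the low end of this range, consistent with volume pricing on a half-million-unit order.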

xAI
freedomsphoenix.com17d ago
Read update
XAI Training 10 Trillion Parameter Model -- Likely Out in Mid 2026

Well-timed bets on Polymarket tied to the Iran war draw calls for investigations from lawmakers

Calls are increasing inside Congress for investigations into the prediction market platform Polymarket after the latest instance where groups of anonymous traders made strategic, well-timed bets on a major geopolitical event hours before it occurred. On Wednesday, The Associated Press reported that at least 50 brand new accounts on Polymarket placed substantial bets on a U.S.-Iran ceasefire in the hours, even minutes, before President Donald Trump announced the ceasefire late Tuesday on social media. These were the sole bets made on Polymarket through these accounts. In January, an anonymous Polymarket user made a $400,000 profit by betting that Venezuelan leader Nicolas Maduro would be out of office, hours before Maduro was captured. In the hours before the start of the Iran war, another account made roughly $550,000 in a series of trades effectively betting that the U.S. would strike Iran and that Ayatollah Ali Khamenei would be removed from office. Such prescient wagers have raised eyebrows -- and accusations that prediction markets are ripe for insider trading. And the issue goes beyond these three geopolitical events, according to at least one report. Researchers at Harvard University released a paper last month where, using public blockchain data, they estimated that $143 million in profits have been made on Polymarket by individuals who potentially had insider information about events ranging from Taylor Swift's engagement to the awarding of the Nobel Peace Prize last year. Rep. Ritchie Torres, D-N.Y., who sits on the House Financial Services Committee as well as the subcommittee on digital assets and financial technology, sent a letter Thursday to the Commodity Futures Trading Commission demanding the regulator review and investigate these well-timed trades. The CFTC regulates the derivatives markets, which includes prediction markets. 
"This pattern raises serious concerns that certain market participants may have had access to material nonpublic information regarding a market-moving geopolitical event," Torres wrote. The letter was shared exclusively with The AP. "What is the statistical likelihood of anyone other than an insider trader placing a winning bet 12 minutes before a market-moving presidential announcement," Torres said in an interview with AP. "There are two answers: God, or an insider trader. And something tells me that God is not placing bets around Donald Trump's posts on Truth Social." Prediction market platforms like Kalshi and Polymarket allow their users to bet on everything from whether it will rain in Phoenix, Arizona next week to whether the Federal Reserve will raise or lower interest rates. At this time, U.S. residents have limited access to Polymarket, which was banned from the U.S. in 2022. The company has moved to reenter the country by acquiring a CFTC-licensed exchange and clearinghouse, giving it a legal pathway to start offering contracts domestically. The company has begun a limited rollout in the U.S. Polymarket also operates a separate, crypto-based platform offshore that remains outside U.S. jurisdiction. That platform accounts for most of its activity. Sen. Richard Blumenthal, D-Connecticut, sent a letter to Polymarket on Thursday demanding the company explain why it continues to allow trades on war and violence as well as whether the company is making any efforts to keep insiders from trading on the platform. "Polymarket has become an illicit market to sell and exploit national security secrets unlike any in history, and by extension a potential honeypot for foreign intelligence services watching for those same suspicious bets and wagers," Blumenthal wrote. Republicans have also criticized these platforms and called for bans on these sorts of bets. 
There are at least two bills pending in Congress co-signed by both parties, one in the House and one in the Senate. "We don't want to imagine a world where America's adversaries use prediction markets to anticipate our next move," said Rep. Blake Moore, R-Utah, after the release of the AP's findings on the ceasefire wagers. Polymarket did not immediately reply to a request for comment. The stakes are high for both Kalshi and Polymarket as they seek approval to operate in the U.S. and nationwide, particularly in the lucrative sports betting market. Kalshi, which is already regulated in the U.S., and its executives have a goal of making the company the nation's dominant prediction market. Kalshi has also leaned heavily into sports, which critics have said effectively makes it a sports betting platform that dabbles in event-based contracts on the side. Both companies have also announced partnerships with sports teams and even news organizations to broaden their reach as well. The AP has an agreement to sell U.S. elections data to Kalshi. The competition also carries political overtones. Donald Trump Jr. is an investor in Polymarket through his venture capital firm, 1789 Capital, and separately serves as a paid strategic adviser to Kalshi.

Polymarket
opb · 17d ago
Well-timed bets on Polymarket tied to the Iran war draw calls for investigations from lawmakers

XAI Training 10 Trillion Parameter Model – Likely Out in Mid 2026

xAI would be leading in raw announced scale of parameters. No other lab has publicly confirmed training 10T or even 6T models right now. The 6T model alone is roughly double the rumored size of Grok 4 and far larger than most current estimates for GPT-5 or Claude 4.6. Parameter count is only part of the story, though. AI models are judged more on active parameters per token (MoE efficiency), training data quality and "intelligence density" (xAI claims higher density per gigabyte), inference-time compute (reasoning modes, multi-agent orchestration), and real-world benchmarks (coding, agentic tasks, multimodality).

Chips Needed & Costs for Pre-Training Runs

Exact per-model costs are not public (the models are still training), but here are the best analyses and estimates. Colossus 2 hardware: ~550,000 NVIDIA GPUs (mostly GB200/GB300 Blackwell variants) at ~$18 billion in hardware cost alone (average ~$32k-$40k per GPU). This supports the full parallel training lineup. Total CapEx for Colossus 2 runs to tens of billions of dollars (land, power infrastructure, cooling, networking), including on-site gas turbines and Megapacks for 400+ MW of dedicated power and a rapid buildout. Per-model rough estimates (community/analyst extrapolations): the 10T model needs roughly $1.5 billion or more in compute (one early analyst estimate; the figure scales with FLOPs and training duration), with an initial pre-training phase of about two months on Colossus 2. The 6T model is a similar order of magnitude but somewhat lower, since it benefits from shared cluster efficiency. The smaller 1T/1.5T runs are significantly cheaper and faster due to parallelization.
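As a back-of-envelope sanity check on the figures above, the cited GPU count and per-unit price range can be multiplied out directly. This is a minimal sketch using only the article's own estimates (550,000 GPUs at $32k-$40k each), none of which are confirmed numbers:

```python
# Rough cross-check of the cited Colossus 2 hardware cost.
# Inputs are the article's public estimates, not confirmed figures.

gpus = 550_000                    # reported Colossus 2 GPU count
cost_per_gpu = (32_000, 40_000)   # cited per-GPU price range, USD

low = gpus * cost_per_gpu[0] / 1e9    # total, billions USD
high = gpus * cost_per_gpu[1] / 1e9

print(f"Hardware cost range: ${low:.1f}B - ${high:.1f}B")
# prints: Hardware cost range: $17.6B - $22.0B
```

The low end of that range lands near the "~$18 billion hardware cost" figure cited above, which suggests the per-GPU average behind that figure sits toward the bottom of the quoted price band.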

xAI
freedomsphoenix.com · 17d ago

Bessent Urgently Summons Bank CEOs Over Anthropic's New AI (1)

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to an urgent meeting on concerns that the latest artificial intelligence model from Anthropic PBC will usher in an era of greater cyber risk. Bessent and Powell assembled the group at Treasury's headquarters in Washington on Tuesday to make sure banks are aware of possible future risks raised by Anthropic's Mythos and potential similar models, and are taking precautions to defend their systems, according to people familiar with the matter who asked not to be identified, citing the private discussions. Many of the executives were in town already for a meeting of the Financial Services Forum, an advocacy group made up of the biggest lenders. A representative for the Treasury didn't immediately respond to a request for comment. A spokesperson for the Fed declined to comment. The previously unreported meeting, arranged on short notice, is another sign that regulators consider the possibility of a new breed of cyber attacks to be one of the biggest risks facing the financial industry. All the banks summoned to the meeting are classified as systemically important by top regulators, meaning their stability is a priority for the global financial system. Powell's participation in the meeting signaled that the concern was one of systemic risk, and not tied to the Trump administration's previous clashes with Anthropic, said some of the people. The Fed, with its network of examiners, is also deeply familiar with banking operations. Anthropic's Mythos is a more powerful system that the AI firm has said is capable of identifying and then exploiting vulnerabilities in every major operating system and web browser when directed by a user to do so. Regulators' caution about the power of the model in hackers' hands echoes Anthropic's own prudence. Anthropic has limited the model's release to just a few major technology and finance firms at first. Those companies, which include Amazon.com Inc. and Apple Inc., among others, are part of "Project Glasswing," which will work to secure the most important systems before other similar AI models become available. Anthropic has said that it had been in discussions with US officials about Mythos and its "offensive and defensive cyber capabilities" prior to its recent release. In releasing Mythos to a very limited set of companies, Anthropic pointed to several vulnerabilities that the AI system was capable of both identifying and potentially exploiting during testing. None of the examples related specifically to financial institutions, but in one instance, the firm's security team said it was able to compromise a web browser so that a website set up by a hacker could read data from another website "e.g., the victim's bank." Mythos Preview "fully autonomously discovered" a way of reading information stored in "multiple different web browsers" and then used that ability to find ways to exploit them, according to a post from Anthropic's security team. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic's request that it pause the Pentagon's designation. Chief executive officers from several of the largest US banks were summoned to the meeting with the Fed and Treasury, said the people. JPMorgan's chief executive was unable to attend, the people said. Spokespeople for the banks declined to comment. A representative for Anthropic had no immediate comment. In recent years, regulators have required banks to hold some capital tied to the potential for cyber attacks, as well as other so-called operational risks such as lawsuits and rogue employees. Banks have sometimes chafed at those requirements, given that operational risk is more difficult to measure than the market and credit risks that also factor into banks' capital levels.
© 2026 Bloomberg L.P. All rights reserved. Used with permission.

Anthropic
news.bloomberglaw.com · 17d ago