The latest news and updates from companies in the WLTH portfolio.
The NAACP and its Mississippi State Conference are suing Elon Musk's xAI, alleging that it did not obtain a permit before emitting large amounts of pollution into a Memphis-area community. The suit alleges that xAI and its subsidiary MZX Tech violated the Clean Air Act by failing to get a permit for their Colossus Gas Plant, which powers the Colossus 2 data center with 27 gas turbines. That data center powers the AI chatbot "Grok," which is used on Musk's social media site X and operates as a standalone app.

The suit says the plant emits large amounts of pollution linked to asthma, respiratory diseases, heart problems, and certain cancers, and that the communities surrounding the plant have a disproportionately high Black population. It says that if the companies had gone through the Clean Air Act process, they could have been required to install technology that cuts down on this pollution.

"A data center should not be a potential death sentence for a community's health," Abre' Conner, the NAACP's director of environmental and climate justice, said in a statement Tuesday. "By looking to evade clean air laws to operate dirty turbines that emit pollution and known carcinogens, these companies are following a shameful, familiar pattern: asking Black and frontline communities to bear the toxic brunt of 'innovation,'" she continued.

The NAACP initially threatened to sue xAI over the Memphis gas turbines in mid-February; the Clean Air Act requires plaintiffs to provide 60 days' notice of their intent to sue under the law. The Hill has reached out to xAI for comment.

Robert Tipton, branch president of the NAACP in DeSoto County, Miss., told The Hill that he is not against Musk being a businessman or "making money," but he is against "secrecy" and "potential health issues that may come from this." "We have members that live within a mile or two miles" of the plant, he said, adding that "they believe they are experiencing a different kind of cough" and that their family members are sick.

The lawsuit is the latest instance of resistance to data centers over their community impacts. Residents of communities around the country have raised concerns about energy prices and water use as well as potential pollution. AI companies have pushed to rapidly build data centers to expand their computing power amid the race to develop the technology. The build-out initially had support among both Republican and Democratic politicians, but the tide has turned against the infrastructure over the past year. This is also not the only time xAI has been accused of skirting air pollution requirements: the NAACP previously threatened to sue over pollution from the company's Colossus 1 data center.

Jack Clark, one of Anthropic's co-founders, who also serves as Head of Public Benefit for Anthropic PBC, confirmed that the AI company had briefed the Trump administration about its new Mythos model. The model, announced last week, is considered so dangerous that it is not being released to the public, largely due to its reportedly powerful cybersecurity capabilities.

In an interview at the Semafor World Economy Summit this week, Clark explained why the company was still engaged with the U.S. government while simultaneously suing it. This March, Anthropic filed a lawsuit against Trump's Department of Defense (DOD) after the agency labeled the company a supply-chain risk. Anthropic had clashed with the Pentagon over whether the military should have unrestricted access to Anthropic's AI systems for use cases that included mass surveillance of Americans and fully autonomous weapons. (OpenAI ended up winning the deal instead.)

At the conference, Clark downplayed the administration's labeling of its business as a supply-chain risk, calling it merely a "narrow contracting dispute" that Anthropic did not want to get in the way of the company's commitment to national security. "Our position is the government has to know about this stuff, and we have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy, but are going to have aspects to them which hit national security equities, and other ones," said Clark. "So absolutely, we talked to them about Mythos, and we'll talk to them about the next models as well."

His confirmation comes after reports last week that Trump officials were encouraging banks to test Mythos, including JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley. Clark also addressed other aspects of AI's impact on society during the interview, including unemployment and higher education.
Anthropic CEO Dario Amodei has previously warned that AI's advances could bring unemployment to Depression-era numbers, but Clark slightly disagrees. He explained in the interview that Amodei believes that AI will get much more powerful than people expect very qu ...

The SpaceX IPO is drawing massive global attention, with reports suggesting a possible valuation near $2 trillion in 2026, which could make it the biggest IPO in history. SpaceX revenue is estimated at $15-16 billion, driven by Starlink satellite internet and Falcon 9 rocket launches. Investors are watching the IPO's valuation risk closely: the price-to-sales ratio looks extremely high compared to major tech stocks, and Elon Musk's leadership adds both excitement and uncertainty. Demand is strong, but bubble concerns remain high in global markets. Analysts call it a high-reward but high-volatility opportunity.
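To see why the price-to-sales ratio draws concern, a rough calculation using only the figures reported above (a ~$2 trillion valuation against $15-16 billion in revenue) is enough; the comparison multiple is straightforward arithmetic, not market data.

```python
# Rough price-to-sales check using the figures reported in the article.
valuation = 2_000_000_000_000   # ~$2 trillion reported possible valuation
revenue = 15_500_000_000        # midpoint of the $15-16 billion estimate

ps_ratio = valuation / revenue
print(f"Implied price-to-sales: ~{ps_ratio:.0f}x")  # ~129x
```

For context, large-cap tech stocks typically trade at single- or low-double-digit price-to-sales multiples, which is why analysts flag a triple-digit multiple as a valuation risk.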

April 13 (Asia Today) -- Anticipation surrounding a potential initial public offering by Anthropic is drawing renewed attention to the value of SK Telecom's minority stake, as analysts point to both financial gains and strategic upside.

SK Telecom invested about $100 million in Anthropic in 2023, securing an early stake in the U.S. generative artificial intelligence company. The startup has since expanded rapidly with its large language model Claude and a newer model known as Claude Mythos, contributing to sharply rising annual revenue projections. Analysts expect Anthropic's valuation to climb further after a public listing, potentially boosting the value of SK Telecom's holdings.

SK Telecom currently owns about 0.3% of Anthropic. The stake is estimated at roughly 1.3 trillion won (about $970 million) based on book value. Some analysts say it could rise to the high 3 trillion won range (about $2.6 billion) after the IPO.

Market watchers say the investment goes beyond financial returns, highlighting potential collaboration. SK Telecom has been expanding its artificial intelligence services, including its consumer platform A. and its enterprise-focused AI business, and could integrate Anthropic's models to enhance products and develop new services.

Industry observers said the investment gains would likely be recorded as other comprehensive income, meaning they may not immediately improve earnings; realized profits would depend on when the company sells its shares. Still, analysts said continued growth in AI company valuations and strong performance reviews of Anthropic's latest models could lead to further revaluation.

Executives at SK Telecom have also emphasized strategic synergy, saying the investment is intended to strengthen the company's position in the global AI ecosystem, not just generate financial returns. Analysts said securing ties with a leading AI model developer could support long-term growth as demand for generative AI accelerates.
-- Reported by Asia Today; translated by UPI. © Asia Today. Unauthorized reproduction or redistribution prohibited. Original Korean report: https://www.asiatoday.co.kr/kn/view.php?key=20260413010003863
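As a sanity check on the article's stake figures, the holding size and its book value imply an overall Anthropic valuation. This sketch uses only numbers from the article; the won/dollar rate is backed out of the article's own conversion (1.3 trillion won ≈ $970 million), and the 3.5 trillion won post-IPO figure is an assumption consistent with the "high 3 trillion won range (about $2.6 billion)" scenario.

```python
# Back-of-the-envelope implied valuations from the article's figures.
stake = 0.003               # SK Telecom's ~0.3% holding
book_value_won = 1.3e12     # current book value of the stake, in won
post_ipo_won = 3.5e12       # assumed "high 3 trillion won" scenario

# Infer the exchange rate from the article's own conversion (~1,340 won/USD).
won_per_usd = book_value_won / 970e6

implied_now_usd_b = book_value_won / stake / won_per_usd / 1e9
implied_post_usd_b = post_ipo_won / stake / won_per_usd / 1e9
print(f"Implied Anthropic valuation at book value: ~${implied_now_usd_b:.0f}B")
print(f"Implied valuation in the post-IPO scenario: ~${implied_post_usd_b:.0f}B")
```

The book value implies an overall valuation in the low hundreds of billions of dollars, and the post-IPO scenario implies a figure several times that, which is consistent with the revaluation analysts describe.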

With the Federal Reserve Chairman meeting with bank CEOs to discuss the security implications of Claude Mythos, you can bet that your board of directors will ask you about the impact of the AI model on your cybersecurity strategy. Here's how to prepare.

Key takeaways

On April 7, 2026, Anthropic unveiled Claude Mythos Preview, its most powerful frontier model to date and one that excels at cybersecurity tasks, specifically vulnerability discovery in code. (I previously wrote about Claude Opus 4.6 and its impact on cybersecurity.) I'll spare you the details of the decades-old zero-day vulnerabilities that Claude Mythos proved capable of finding and exploiting in internal testing, as I'm sure you're already aware. But suffice it to say the model was so powerful that Anthropic thought it prudent to assemble a group of technology partners in an initiative called Project Glasswing to apply Mythos' capabilities to defensive security. And now, with Federal Reserve Chairman Jerome Powell meeting with leaders of the largest U.S. banks to discuss the cybersecurity implications of this mythic new model, you can bet that your board of directors and executive management team will have questions for you about Claude Mythos at the next quarterly meeting -- or sooner. We're here to help you provide answers.

The question every board will ask about Claude Mythos

When it's time for your 15-minute cyber update, your board of directors will inevitably ask you, "What are you doing about Claude Mythos? How are you preparing for a world in which AI-assisted attackers can find and exploit vulnerabilities in minutes?" Essentially, your board-friendly answer needs to be, "We're fighting fire with fire. We're transforming our security operations with agentic AI so that we can autonomously and preemptively find and fix our exposures at machine speed."
You can then report on the number of security workflows you've automated with AI and the increases in efficiency and effectiveness that you're achieving as a result. Depending on your board's security savvy, you may need to address how you're evolving your vulnerability management function to handle this new reality of AI-driven vulnerability discovery. One new approach that forward-leaning security leaders have begun implementing is exposure management, or CTEM.

What is exposure management?

Exposure management is a strategic approach to preemptive security designed to reduce cyber risk. It continuously assesses, prioritizes, and remediates your organization's most critical cyber exposures. Cyber exposures are toxic combinations of preventable cyber risks (such as vulnerabilities, misconfigurations, and excessive permissions) that give threat actors a path to your most sensitive systems and data. By continually and agentically assessing, prioritizing, and remediating risks, exposure management provides the answer to the question of how to build a "Mythos-ready" security program. It offers the solution to the single biggest challenge associated with AI vulnerability discovery: how security and remediation teams will address the massive backlog of findings that AI-assisted vulnerability discovery will create.

Exposure management is a "Mythos-ready" security program

To understand the role exposure management plays in a world flooded with AI-driven vulnerability discoveries, it's important to understand the difference between frontier models and exposure management solutions.

What frontier models do: Claude Code Security and Mythos Preview read and reason about source code. They identify logic flaws, memory corruption vulnerabilities, injection weaknesses, and authentication bypasses by tracing data flows and understanding how software components interact. Mythos does this with extraordinary autonomy and can chain vulnerabilities into working exploits.
Fundamentally, this is application security: static and dynamic analysis of codebases operating at the source-code layer.

What exposure management does: Exposure management allows you to discover every asset across your environment (IT, cloud, identity, AI, and OT); determine whether they're vulnerable; prioritize exposures based on business and technical context; orchestrate staged remediation; and validate that fixes are closed. An individual vulnerability may not appear dangerous until it forms an attack chain leading to a critical system. Exposure management helps you see individual vulnerabilities in context and how they combine to create high-risk attack paths.

Bottom line: Frontier models and exposure management operate in categorically different domains and solve fundamentally different problems.

Exposure management and the preemptive security lifecycle

To put a finer point on the difference between frontier models and exposure management, let's examine the complete preemptive security lifecycle that enterprises require. Frontier AI -- even at Mythos-class capability -- addresses only the first stage of this lifecycle. Exposure management addresses everything else.

Stage 1 -- Software vulnerability discovery. Identifying that a flaw exists in software. This is where frontier models excel. Mythos has demonstrated extraordinary capability here, finding bugs that survived decades of human review and millions of automated test runs. This capability is genuine and consequential.

Stage 2 -- Asset discovery. Employing multiple discovery methods, including scanners, agents, OT-specific sensors, and more, to identify every asset in an enterprise: endpoints, servers, cloud workloads, containers, network devices, OT/ICS assets, identity objects, AI applications, MCP servers. This is something Mythos can't do.

Stage 3 -- Assessment. Determining whether specific deployed assets are affected by specific vulnerabilities.
This requires deep interrogation of the asset: connecting to live systems, parsing configurations, checking patch levels, inspecting running services across IT, cloud, OT, and identity environments at enterprise scale -- and doing so without impairing the performance of the live asset. A model that found a Linux kernel vulnerability cannot determine which of an organization's 50,000 Linux hosts are running the affected version without sensor-level access.

Stage 4 -- Prioritization. This stage becomes more critical, not less, in an AI-accelerated world. When frontier models can discover thousands of new vulnerabilities in weeks and generate working exploits on demand, the volume flowing into the remediation pipeline explodes, but the operational constraints don't change. Enterprises still have finite maintenance windows, change management processes, compatibility dependencies, and business continuity requirements. Patching 40,000+ CVEs simultaneously across 100,000 assets is not operationally feasible. The math only works with the intelligent prioritization that exposure management provides.

4 steps to building a Mythos-ready security program: How Tenable can help

In a recent blog, Anthropic offered several recommendations to prepare your security program for an AI-accelerated offense. Here's how Tenable can help you strengthen your organization's cybersecurity posture and reduce your risk in the age of AI-driven attacks:

1 - "Close your patch gap." Anthropic says to patch everything in the CISA KEV immediately, use EPSS to prioritize the rest, and automate deployment. In theory, this advice makes sense. In practice, it's a bit misguided. For one thing, even if you patched everything in the CISA KEV immediately, you'd still have gaps. The CISA KEV catalog operates off of strict inclusion criteria, so just because a CVE hasn't landed in the KEV doesn't mean it's less critical.
On the contrary, Tenable Research is currently tracking 201 CVEs that are being actively exploited in the wild yet are not part of the KEV. The Citrix Session Recording vulnerability (CVE-2024-8069) provides an example of a CVE for which Tenable Research issued a watch designation nearly a full year (286 days) before it hit the KEV.

Then there's the issue of prioritization. With the vulnerability discovery capabilities of Mythos falling into the wrong hands, the number of vulnerabilities could grow by 10X or more. As Tenable Co-CEO Steve Vintz pointed out in a recent LinkedIn post, "Prioritization is no longer optional. It's survival." But prioritizing based on EPSS alone will leave you chasing your tail: EPSS prioritizes based only on probability of exploitation. In contrast, Tenable One provides much finer-grained prioritization than both EPSS and CVSS. Through the proprietary Vulnerability Priority Rating (VPR), Tenable uses machine learning to narrow the 60% of CVEs flagged as critical or high by CVSS to the 1.6% that create actual risk for your organization. Tenable One additionally factors other criteria into its prioritization engine, including reachability (is this asset actually exposed through the network topology?), identity context (what permissions does a compromised asset inherit? does it create a path to domain admin?), business criticality (is this a revenue-generating system or a development sandbox?), and attack path analysis. Answering those questions requires cross-domain telemetry at a scale and specificity that no external model possesses and that only Tenable One can provide.

Finally, more vulnerabilities means more to patch, even as your patching constraints remain the same: you still have to sort through compatibility dependencies and business continuity requirements, among other things. Tenable One gives you the speed, scale, automation, and control to manage your entire update lifecycle.
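The context-aware prioritization described above can be illustrated with a small sketch. This is a hypothetical toy model, not Tenable's actual VPR algorithm: the fields, weights, and CVE names are invented for illustration, and real engines weigh far richer telemetry.

```python
# Hypothetical sketch of context-aware exposure prioritization.
# Fields and weights are illustrative assumptions, not a real VPR.
from dataclasses import dataclass

@dataclass
class Exposure:
    cve: str
    cvss: float           # base severity score, 0-10
    exploited: bool       # known exploitation in the wild
    reachable: bool       # exposed through the network topology
    critical_asset: bool  # sits on a revenue-generating system

def priority(e: Exposure) -> float:
    """Blend base severity with exploitability and business context."""
    score = e.cvss
    if e.exploited:
        score *= 1.5
    if e.reachable:
        score *= 1.3
    if e.critical_asset:
        score *= 1.2
    return min(score, 10.0)  # cap at the 0-10 scale

backlog = [
    Exposure("CVE-A", 9.8, False, False, False),  # severe but unreachable
    Exposure("CVE-B", 7.5, True, True, True),     # exploited, exposed, critical
]
ranked = sorted(backlog, key=priority, reverse=True)
print([e.cve for e in ranked])  # CVE-B outranks CVE-A despite a lower CVSS
```

The point of the sketch is the inversion: a moderate-severity finding that is actively exploited, reachable, and on a critical asset outranks a near-maximum CVSS score with none of that context, which is exactly the filtering that shrinks a 60% "critical or high" pile down to the small fraction that creates actual risk.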
You can deploy autonomous patching across 20,000+ products and 250,000+ unique patches spanning Windows, Linux, and macOS while using customizable controls to test patches and prevent deployment of problematic updates. And our newly announced agentic AI engine, Tenable Hexa AI, will automate asset discovery, tagging, triage, prioritization, and remediation workflows so that your organization can keep pace as vulnerability discovery escalates.

2 - "Prepare for much higher vulnerability volume." Tenable has a proven track record when it comes to developing and releasing plugins to identify new vulnerabilities. We deliver over 100 new plugins each week and, because we use AI to accelerate the speed and scale of plugin development, in general, we can deliver fully automated plugin coverage within 12 to 24 hours. When a plugin assesses whether a server is missing a specific patch, it returns a clear, binary, deterministic answer (yes or no) with six-sigma accuracy (0.32 defects per million scans). This precision underpins every downstream decision: whether to open a remediation ticket, whether to take a production system offline, whether to report a finding to an auditor, whether to trigger a staged patch deployment.

In contrast, frontier AI models are probabilistic by design. Anthropic's own documentation for Mythos reveals the model occasionally attempts to conceal its methods, circumvent sandboxes, and produce inconsistent outputs. Running the same prompt twice can yield different results. For code-level security research, this variability is tolerable -- a human researcher reviews and validates findings. But for operational vulnerability management at enterprise scale, where tens of thousands of assets are assessed continuously and findings flow directly into compliance reporting and remediation workflows, probabilistic output is not acceptable. Compliance frameworks like SOC 2, FedRAMP, PCI-DSS, HIPAA, and FISMA require reproducible, auditable assessment results.
Cyber insurance underwriters require them. Board-level risk reporting requires them. The deterministic scanning foundation that Tenable has built over 24 years -- with more than 318,000 plugins -- is not a legacy artifact. It's a structural requirement of the market Tenable serves.

3 - "Reduce and inventory what you expose." Tenable One sensors -- scanners, endpoint agents, passive network monitors, web application scanners, OT-specific sensors, identity directory connectors, and cloud API integrations -- are designed to discover every asset across live enterprise environments and deterministically assess whether deployed systems are vulnerable. The Tenable One platform then prioritizes exposures based on runtime exploitability context, orchestrates staged remediation, and validates that fixes are closed. Tenable's sensors continuously discover assets across environments that are heterogeneous, distributed, and often air-gapped. We can even assess your shadow AI footprint. A model cannot discover what it cannot reach.

4 - "Design for breach." The attack path analysis capabilities of Tenable One provide visibility into how threat actors chain together vulnerabilities, misconfigurations, and excessive permissions to reach your critical assets. This attack path mapping enables you to proactively close those gaps and preemptively disrupt the attacker's journey. Tenable One can also help you implement zero trust by mapping assets and identities across your environment, showing how they're connected and where trust boundaries lie. It also adds governance for your fastest-growing risk surface: AI agents with admin-level access.

Navigating the new era of AI-driven risk

The arrival of Claude Mythos marks a fundamental shift in the cyber landscape, where the speed of vulnerability discovery is now measured in minutes rather than months.
While this "mythic" new model provides attackers with an unprecedented ability to find and chain exploits, it also serves as a catalyst for organizations to modernize their defense. To stay ahead, security leaders must move beyond traditional methods and embrace exposure management. By integrating the deterministic precision of Tenable One with the automated power of Tenable Hexa AI, your organization will be able to transform its security operations into an agentic, preemptive force capable of moving at machine speed. Don't let the coming flood of AI-generated vulnerabilities overwhelm your team. By focusing on intelligent prioritization, closing your patch gaps, and gaining full visibility into your attack paths, you can confidently answer your board's toughest questions and build a truly "Mythos-ready" security program.

Forward-Looking Statements

This blog post contains "forward-looking statements" within the meaning of the federal securities laws, including statements regarding the potential impact of LLMs like Mythos on the cybersecurity landscape and our expectations for the future of Exposure Management. These statements involve risks and uncertainties that could cause actual results to differ materially, including the risks and uncertainties described in our most recent Annual Report on Form 10-K and other SEC filings from time to time. All forward-looking statements in this blog post are based on information available to Tenable as of the date of this post. Tenable assumes no obligation to update any forward-looking statements contained in this post.


Trump's Fed Chair Pick Invested in Solana, Polymarket and SpaceX

Kevin Warsh, President Trump's nominee for Federal Reserve Chair, disclosed a fortune exceeding $100 million, with investments spanning sectors including crypto and emerging tech startups. His financial disclosure includes stakes in Solana, dYdX, Dapper Labs, and other crypto ventures, as well as AI-focused and tech companies.

It looks like Federal Reserve Chair nominee Kevin Warsh may have quite a few lucrative holdings in various businesses. Per Reuters, stakes in Elon Musk's SpaceX and the prediction platform Polymarket are among dozens of future-oriented assets that Warsh lists on a newly filed financial disclosure, which shows dozens of apparently small bets on a wide array of emerging and almost science-fiction-sounding ventures.

Who is Kevin Warsh?

Kevin Warsh is an American economist and former Federal Reserve governor who was nominated in 2026 as a candidate for Chair of the U.S. Federal Reserve. He previously served on the Federal Reserve Board of Governors from 2006 to 2011, where he worked through the global financial crisis and focused on financial stability and monetary policy operations. Before his Fed role, he worked in the private sector at Morgan Stanley and served as a White House economic adviser under President George W. Bush.

Warsh is widely regarded as a "monetary hawk," meaning he generally supports tighter monetary policy to control inflation, even if it comes at the cost of slower short-term economic growth. This position has made him a prominent voice in debates over interest rates and inflation control. After leaving the Fed, he became involved in academia and policy research, including work at the Hoover Institution at Stanford University. His nomination for Federal Reserve Chair has drawn significant attention because the Fed plays a central role in setting U.S. interest rates and managing inflation. Supporters argue that his experience in both markets and policy makes him well qualified, while critics question whether his policy stance is too restrictive for current economic conditions.
The outcome of his appointment is expected to have major implications for global financial markets, interest rate expectations, and the direction of U.S. economic policy. Per Reuters, Warsh's major holdings put his assets at well over $100 million, including two $50-million-plus positions in the Juggernaut Fund LP, apparently part of Warsh's work advising the Duquesne Family Office, the private investment firm of Stanley Druckenmiller.

The disclosure of Warsh's extensive financial interests highlights the broader issue of how senior economic policymakers often maintain deep ties to private markets even while being considered for top public roles. Such holdings are not unusual among individuals with long careers in finance and investment, but they can raise questions about potential conflicts of interest, especially in positions that directly influence monetary policy, regulation, and financial stability. In Warsh's case, the presence of investments linked to emerging technologies and high-growth sectors reflects the increasingly interconnected nature of modern finance, where venture capital, private equity, and innovation-driven assets play a growing role in wealth creation.

Financial disclosures serve an important transparency function, allowing lawmakers, regulators, and the public to assess whether appropriate safeguards are in place. If confirmed, a nominee in such a position would typically be required to divest certain holdings or place assets into blind trusts to avoid any perception of undue influence or policy bias.

The debate surrounding Warsh's financial profile also reflects a larger tension in economic governance: balancing expertise drawn from private-sector experience with the need for impartial decision-making in public office. As financial markets evolve and new asset classes emerge, this tension is likely to become more pronounced in future nominations.
The focus of the confirmation process will extend beyond individual investments to broader questions of credibility, independence, and judgment in managing the U.S. central bank. The outcome will be closely watched by markets, as leadership at the Federal Reserve has significant implications for interest rates, inflation expectations, and global financial stability.

The NAACP and its Mississippi State Conference are suing Elon Musk's xAI, alleging that it did not get a permit before emitting large amounts of pollution into a Memphis-area community. The suit alleges that xAI and subsidiary MZX Tech violated the Clean Air Act by not getting the permit for their Colossus Gas Plant, which powers its Colossus 2 data center with 27 gas turbines. This data center powers the AI chatbot "Grok" which is used on Musk's social media site X and operates as a standalone app. The suit says that the plant emits large amounts of pollution that are linked to asthma, respiratory diseases, heart problems and certain cancers. It also says that the communities surrounding the plant have a disproportionately high Black population. It says that if the companies had gone through the Clean Air Act process, they could have been required to install technology that cuts down on this pollution. "A data center should not be a potential death sentence for a community's health," Abre' Conner, NAACP's director of environmental and climate justice, said in a statement Tuesday. "By looking to evade clean air laws to operate dirty turbines that emit pollution and known carcinogens, these companies are following a shameful, familiar pattern: asking Black and frontline communities to bear the toxic brunt of 'innovation,'" she continued. The NAACP initially threatened to sue xAI over the Memphis gas turbines in mid-February. The Clean Air Act requires plaintiffs to provide a 60-day notice of their intent to sue under the law. The Hill has reached out to xAI for comment. Robert Tipton, branch president of the NAACP in DeSoto County, Miss., told The Hill that he's not against Musk being a businessman or "making money," but he is against "secrecy" and "potential health issues that may come from this."
"We have members that live within a mile or two miles" of the plant, he said, adding that "they believe they are experiencing a different kind of cough" and their family members are sick. The lawsuit is the latest instance of resistance to data centers because of their community impacts. Residents of communities around the country have raised concerns about energy prices and water use as well as potential pollution. AI companies have pushed to rapidly build data centers in an effort to expand their computing power amid the race to develop the technology. They initially had support among both Republican and Democratic politicians, but the tide has turned against the infrastructure over the past year. Meanwhile, this is not the only time xAI has been accused of skirting air pollution requirements. The NAACP previously threatened to sue over pollution from the company's Colossus 1 data center.

The NAACP is suing xAI and a subsidiary called MZX Tech for allegedly operating unpermitted methane gas turbines to power its Colossus 2 data center in South Memphis. The association is asking the federal district court of the Northern District of Mississippi to declare that the company has violated the Clean Air Act, force it to stop using its unpermitted turbines and assess financial penalties against xAI for violating federal law, among other requests. The lawsuit claims that xAI -- the Elon Musk-founded AI startup now owned by SpaceX -- is operating 27 gas turbines without an air permit to power Colossus 2, one of a growing number of data centers xAI has set up to train Grok, its AI assistant. Gas turbines expel pollution, hazardous chemicals and fine particulate matter that are linked to things like heart problems, respiratory diseases and even certain cancers, issues that are particularly concerning given Colossus 2's close proximity to people's homes. Operating these turbines without an air permit also violates the Clean Air Act, which requires sources of pollution to be permitted before being operated or constructed. The NAACP is represented in the lawsuit by the Southern Environmental Law Center and Earthjustice. Before filing today's lawsuit, the NAACP sent xAI a 60-day notice of intent to sue in compliance with the Clean Air Act. xAI's failure to respond to the notice is why the lawsuit is moving forward today. "xAI's continued operation of these turbines without a permit and without adequate pollution controls is not only illegal, it's an insult to families living nearby who for months have expressed serious concerns about how air pollution from the company's personal power plant could impact their health and well-being," Ben Grillot, a Senior Attorney for the Southern Environmental Law Center, said. "xAI must be held accountable for its reckless, unlawful actions -- and that's exactly what this lawsuit aims to do." 
Besides the high cost of sourcing the components that train and run AI models, AI companies often have to generate power to run the data centers where all those components are being installed. Oracle is reportedly turning to gas generators, as xAI has. Google, Meta and Amazon, meanwhile, have all invested in or signed deals with nuclear energy providers to power their data center efforts. Building new energy sources for data centers is one of several price-lowering methods proposed by the Ratepayer Protection Pledge, an agreement several tech companies signed to try to prevent data centers from raising the cost of the average person's energy bill. Quickly building out new energy sources might help ease costs, but it doesn't account for the negative environmental impacts of having a new power plant in your neighborhood, something the Trump administration doesn't appear all too interested in addressing. In his latest AI framework proposal, President Donald Trump largely ignored the environmental impact of AI in favor of calling for the permitting process for things like on-site energy generators to be streamlined.
The crypto bros might be able to cut the line to get into any club in Miami, but they're stuck trying to bribe the bouncer at Club Mythos. Anthropic's new model, Claude Mythos, is currently only available in a limited capacity due to concerns that it may be a little too capable of exploiting cybersecurity vulnerabilities. Anthropic has given access to only a select few partners, and the cryptocurrency community would really like to get on the list. According to The Information, a number of crypto exchanges, including Coinbase, have been communicating with Anthropic in hopes of getting their hands on Mythos. The exchange isn't alone in hoping that it can tap AI tools to help bolster security. Binance also reportedly has been using AI models (including Claude Opus) to test its systems and find potential vulnerabilities before a bad actor does. Crypto custodian firm Fireblocks told The Information it also uses Anthropic's publicly available model for pentesting, and claims that the model has spotted issues that human testers missed. But to date, it seems none of the crypto companies have managed to get the Mythos treatment. Anthropic has said the platform is capable of spotting cybersecurity issues that evade the eyes of "all but the most skilled humans," and claimed to have used the model to spot security flaws that had been hiding undetected in legacy systems for nearly three decades (though, for what it's worth, some researchers were able to replicate the flaw detection with less powerful models). Crypto exchanges have a very obvious reason for wanting to get in on Mythos' offerings: they sit on billions of dollars worth of digital assets and are surprisingly vulnerable to attack. Coinbase has been hit with several high-profile cybersecurity incidents over the years, including one last year that reportedly exposed sensitive customer data.
Anthropic is currently holding back Mythos because it believes the model could be used to exploit major security vulnerabilities in platforms and online infrastructure at scale. Crypto is a pretty logical target for anyone who wants to use an AI model for malicious means. If the crypto bros keep getting bounced by Anthropic, maybe they can pull up to Club OpenAI. Per Bloomberg, the company is doing its own limited release of a new cybersecurity tool, which it definitely did not hastily announce to avoid getting left behind by its rival's hype train. Odds are the cover charge is much cheaper over there.

This strategic move positions Amazon to directly challenge Elon Musk's SpaceX/Starlink in the rapidly growing satellite internet market. Through this acquisition, Amazon will gain control of Globalstar's satellite operations, infrastructure, and assets, integrating them with its existing Amazon Leo project, a press release stated. This merger is expected to accelerate Amazon's efforts to deliver satellite-based internet services, particularly in areas where traditional cellular networks are unavailable. Separately, Amazon also signed an agreement with Apple to provide satellite connectivity for current and future iPhone and Apple Watch users. Globalstar currently provides satellite-based safety features, including Emergency SOS and Find My for the iPhone and Apple Watch.
Competition From Musk
While Amazon currently operates a few hundred satellites, Musk's SpaceX has a significant lead. Through its Starlink business, the company boasts approximately 11,800 deployed satellites and over 10 million active customers. Starlink's approach relies on subsidizing user terminals, resulting in higher subscriber acquisition costs. In contrast, Amazon plans to leverage its manufacturing expertise to produce more affordable consumer terminals and reduce costs for customers. Whether Amazon can position itself as a formidable competitor to SpaceX in the satellite sector remains to be seen. A key milestone for Amazon will be meeting the Federal Communications Commission's requirement to have 1,618 satellites operational by July. The acquisition is understood to be closing next year, subject to regulatory approvals and specific deployment metrics by Globalstar.

The recent buzz around Anthropic's Mythos model has been intense, and for good reason. Early reports suggest a model that significantly advances automated reasoning over large codebases, vulnerability discovery, and exploit generation. Some are already calling it a "game changer" for offensive security. But like most breakthroughs in AI, the reality is more nuanced. Let's unpack what Mythos is, why it's getting so much attention, and where the real impact will (and won't) be. At its core, Mythos is designed to operate deeply within software systems, and this is what sets it apart from earlier models. Traditional LLMs often struggled with sustained, system-level analysis; Mythos appears to push beyond that, closer to what human security researchers do when analyzing complex systems. Closed-source systems, by contrast, are inherently less exposed to this class of AI-driven analysis. Why? Because Mythos appears to be most effective when it has full visibility into the source code. Without that visibility, its effectiveness drops, which creates a natural barrier for attackers. "Security through obscurity" isn't a solution, but in practice it does raise the cost of this kind of automated discovery. AI doesn't just change what attackers can do, it changes how fast everything happens. And this is where security vendors feel the most pressure. The challenge isn't whether vulnerabilities exist, it's how fast vendors can respond once they're discovered. The new race is between automated discovery and human-paced remediation, and it shifts the competitive advantage to vendors that can respond fastest. One immediate and very practical impact: bug bounty platforms are about to get noisy. Expect a surge of machine-generated submissions, and with it a scaling problem for security teams. Organizations will need to adapt their triage processes; otherwise, teams risk wasting cycles on low-quality reports and missing real vulnerabilities buried in noise. Ironically, AI will be needed to defend against AI-generated reports. This is where traditional security layers still matter: Mythos increases discovery capability, but doesn't eliminate defense in depth. The Mythos model presents a meaningful step forward.
It brings AI closer to acting like a real security researcher, capable of deep reasoning and complex analysis. And as always in cybersecurity, the winners won't be those with the best tools, but those who can operationalize speed, from detection to mitigation, at scale.
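The report-triage problem the article raises, a flood of near-duplicate, possibly machine-generated submissions, can be sketched with ordinary text similarity. This is a generic illustration, not any vendor's actual pipeline; the similarity threshold and the sample report strings are invented for the example:

```python
from difflib import SequenceMatcher

def cluster_reports(reports, threshold=0.8):
    """Greedily group near-duplicate reports: each report joins the
    first cluster whose representative is at least `threshold` similar."""
    clusters = []  # each cluster is a list; clusters[i][0] is its representative
    for text in reports:
        for cluster in clusters:
            if SequenceMatcher(None, cluster[0], text).ratio() >= threshold:
                cluster.append(text)
                break
        else:
            clusters.append([text])
    return clusters

# Two near-duplicate submissions and one distinct report (hypothetical examples)
reports = [
    "SQL injection in /login via the username field",
    "SQL injection in /login via username parameter",
    "Stored XSS in profile bio field",
]
clusters = cluster_reports(reports)  # the two SQLi reports collapse into one cluster
```

A production triage pipeline would lean on embeddings or locality-sensitive hashing rather than pairwise `SequenceMatcher` calls, which scale quadratically with report volume, but the greedy-clustering shape is the same.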

(NEW YORK) -- An asteroid will soon be visible to the naked eye in a rare celestial event, according to astronomers. Asteroid 99942 Apophis - named after the Egyptian deity of chaos, darkness and fire - is expected to safely pass close to Earth on April 13, 2029, according to NASA. The asteroid will pass within roughly 20,000 miles of Earth - nearly 12 times closer than the moon's average distance from Earth, and closer than many satellites in geosynchronous orbit - making it one of the closest approaches ever recorded for an object of its size and a "very rare event," according to NASA. The approach will be visible to observers on the ground in the Eastern Hemisphere, weather permitting, according to NASA. It will be close enough that sky-watchers won't need a telescope or binoculars to see it, astronomers say. When Apophis was first discovered in 2004, it was labeled a potentially hazardous asteroid because of the possibility that it could impact Earth in 2029, 2036 or 2068, according to NASA. After closely tracking the asteroid and its orbit using optical telescopes and ground-based radar, astronomers are now confident that there is no risk of Apophis impacting Earth for at least 100 years. The Earth's gravitational pull could change the asteroid's orbit around the sun as it passes in 2029, making the orbit slightly larger or the orbital period slightly longer, but the risk of impact with Earth will remain the same, NASA says. Its close passage will also afford astronomers around the world the opportunity to learn more about the asteroid. Apophis is the Greek name for the Egyptian god known as Apep. The name was proposed by the astronomers who discovered the asteroid: Roy Tucker, David Tholen and Fabrizio Bernardi of the Kitt Peak National Observatory near Tucson, Arizona. The asteroid is a relic of the early solar system from about 4.6 billion years ago, made of leftover raw material that was never part of a planet or moon, according to NASA.
Though its exact size and shape are unknown, it has a mean diameter of about 1,115 feet and a long axis of at least 1,480 feet. Apophis' surface is weathered due to eons of exposure to space weather, including solar wind and cosmic rays, according to the Massachusetts Institute of Technology. Observatories around the world and in space will observe the asteroid's historic approach to Earth in order to better understand its physical properties. NASA has redirected a spacecraft to rendezvous with Apophis shortly after its close approach in 2029, while the European Space Agency is sending a spacecraft to study it. When the April 2029 flyby occurs, Apophis will become a member of the "Apollo" group, the family of asteroids that cross Earth's orbit but that themselves have orbits around the sun that are wider than the Earth's, according to the ESA.
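The "nearly 12 times closer" comparison can be checked against the figures in the article; the only number assumed here beyond those given is the moon's average distance from Earth, about 238,855 miles:

```python
moon_avg_distance_mi = 238_855  # average Earth-moon distance (assumed figure)
apophis_approach_mi = 20_000    # Apophis' expected 2029 close-approach distance

ratio = moon_avg_distance_mi / apophis_approach_mi
print(round(ratio, 1))  # 11.9, i.e. nearly 12 times closer than the moon
```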

The NAACP filed a lawsuit against Elon Musk's xAI on Tuesday, accusing the artificial intelligence company of violating the Clean Air Act with its use of natural gas-burning turbines to power data centers in and around Memphis, Tennessee. The suit, filed in the U.S. District Court for the Northern District of Mississippi, alleges that between August and December 2025, xAI and its subsidiary MZX Tech, LLC, installed and operated 27 gas turbines in Southaven, Mississippi, "without an air permit or regard for the health and safety of people living nearby." The turbines emit smog-forming pollutants and particulate matter that can lead to increased health risks and an unpleasant odor, among other things. The NAACP is seeking declaratory and injunctive relief for the companies to "cease operating the Colossus Gas Plant unless and until they obtain the required permits; to apply the necessary pollution controls; and to pay appropriate civil penalties for each day of violation." "Our right to clean air is not up for negotiation, especially when companies prove expediency not people is their priority," Abre' Conner, NAACP Director of Environmental and Climate Justice, said in an e-mailed statement. xAI did not immediately respond to a request for comment.

Canada's AI minister says Anthropic withholding Mythos is 'responsible'. Artificial Intelligence Minister Evan Solomon met on Tuesday with representatives of Anthropic -- the company that said last week its latest chatbot, Mythos, is too risky for public release. Anthropic released a system preview paper for its newest AI model, Claude Mythos, stating that its capabilities were "substantially beyond those of any model we have previously trained" and that it therefore would not be released to the public, citing cybersecurity dangers. "Anthropic and the Canadian government are engaged in constructive, ongoing discussions," Solomon said in an emailed statement from his office to Beritaja. "I met with Anthropic this morning as part of our continued engagement with leading AI companies on safety, security, and Canada's sovereign interests. The Government of Canada takes the protection of its systems, its critical infrastructure, and Canadians' data with the utmost seriousness." Solomon stated that "Anthropic's approach of working with defenders first, rather than releasing this new model broadly, is the responsible path and gives people protecting critical systems a head start." Concerns about rapid AI development and ransomware have grown significantly among Canadians. A January 2026 federal study by the Canadian Centre for Cyber Security stated that many Canadian organizations and businesses, "regardless of size or sector," as well as individuals, are vulnerable to ransomware attacks.
However, "critical infrastructure and large corporations" were found to be the top targets for ransomware activity.

The upcoming SpaceX IPO, expected to be the biggest in history at up to $2 trillion, is poised to ignite a "Cambrian explosion" of funding for new space-oriented startups, said Ariel Ekblaw, founder and CEO of nonprofit space R&D lab Aurelia Institute. Speaking Tuesday at Semafor World Economy in Washington, DC, Ekblaw said the hotly anticipated IPO, expected in June, will unlock new liquidity for space-enthusiast investors to fund a raft of new firms in the sector, joining with investors who missed out on Elon Musk's SpaceX. Ekblaw, an aerospace architect who also runs an affiliated VC fund, said space companies will touch every sector of the economy. Startups already are working on robotics and manufacturing, pharmaceuticals, even hospitality. "Don't consider space as a sector that may or may not be relevant to your business," Ekblaw advised. "Space is an emerging market. It's a physical domain around the Earth." Generating solar power in space could also be a way to reinforce an electric grid stretched by AI data centers, added Omeed Malik, founder and president of 1789 Capital and an investor in SpaceX.

The efficiency of modern transit is often taken for granted until a foundational system suffers a catastrophic failure. Recently, a significant RailOne glitch left thousands of passengers in systemic chaos across the network. The digital railway infrastructure was down for an extended period, with ripple effects that undermined ticketless-travel enforcement and mid-journey booking. The seamless interface commuters usually rely on vanished, replaced by error screens and stalled progress bars. The event was a stark reminder of how vulnerable centralized booking systems are when technical redundancies fail to activate. As the morning rush commenced, early travelers were the first to encounter the RailOne platform's failure. Services were totally suspended: mobile applications and station kiosks could not reach the central database. With the primary method of verification inaccessible, large numbers of passengers streamed onto platforms without valid digital credentials. The order normally maintained by automated gates and scanners gave way to manual interventions, which were quickly overwhelmed by the sheer volume of travelers. Staff and commuters alike were left confused, as the standard operating procedures for technical outages proved insufficient for a glitch of this magnitude. Much of the reporting focused on the ethical and logistical problems posed by ticketless movement. With the RailOne interface down, passengers simply could not purchase fares, and many were forced to travel without formal payment.
Railway authorities scrutinized this involuntary ticketless travel, but could offer no immediate solution while the servers remained unresponsive. The legal implications were debated, with responsibility for the missing tickets shifting from the passenger to the service provider: the infrastructure meant to prevent such irregularities was the very element that caused them. For those already en route when the system collapsed, the situation was particularly dire. Mid-journey booking features, which let passengers upgrade seats or extend their travel, were completely severed. Passengers trying to fix their ticketing status on board were met with repeated timeouts, and conductors were left in a difficult position, since their handheld verification devices were also tied to the malfunctioning RailOne backend. With no way to process payments mid-transit, revenue for the duration of the glitch essentially stopped, pointing to projected financial losses for the operating companies. While the specific code responsible for the error was not initially disclosed, the malfunction was characterized as a synchronization error between the cloud storage and the local client interfaces. The RailOne architecture was designed to handle high traffic, yet a specific sequence of requests triggered a cascading failure, and every attempt to reboot the local nodes brought further instability. Technical analysts noted that the ticketing software's lack of an offline mode exacerbated the crisis: the system could not function without a persistent connection to the primary servers. The burden of the RailOne glitch fell most heavily on the ground staff and on-board attendants.
A switch to manual ticketing was attempted, but the lack of physical ticket stock in an increasingly paperless environment made it nearly impossible. Frustration grew among the crowds, and staff had to manage it without updated information from central command; those on the front lines were as uninformed as the passengers they were meant to assist. Instructions arrived sporadically, often contradicting earlier orders, as the organization struggled to regain control of the situation. Beyond the immediate inconvenience, the financial repercussions of the RailOne failure were significant. A massive shortfall in daily revenue was recorded, as thousands of journeys were completed without fares being collected. Regulators have signaled an investigation into the software providers' service-level agreements, and the incident raised questions about the reliability of third-party digital solutions in the public sector. Industry experts argued that reliance on a single point of failure like RailOne poses a risk to national mobility, and highlighted the need for a more robust, decentralized ticketing system. In response to the outcry, plans for enhanced system redundancies were announced: future iterations of the RailOne software are to include a failsafe allowing encrypted offline validation, so that even in a total network blackout, booking and verification can continue in a limited capacity. The lessons learned from this glitch are expected to inform the next generation of transit technology, with the engineering teams involved now prioritizing "graceful degradation" -- keeping a system functional at a reduced level during a failure.
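One simple form the proposed offline-validation failsafe could take is a server-issued signature carried inside each ticket, which gates and handheld devices verify locally against a pre-shared key, with no network round-trip required. Everything below, the key handling and the payload fields, is an illustrative assumption, not RailOne's actual design:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical pre-shared key provisioned to gates and handheld devices.
SECRET_KEY = b"demo-key-not-for-production"

def issue_ticket(payload):
    """Server side, at purchase time: serialize and HMAC-sign the ticket."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def verify_ticket(token):
    """Device side: check the signature with no network call.
    Returns the payload if valid, None if malformed or tampered with."""
    try:
        body_b64, sig = token.rsplit(".", 1)
        body = base64.urlsafe_b64decode(body_b64.encode())
    except ValueError:  # covers bad splits and bad base64 (binascii.Error)
        return None
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None
    return json.loads(body)

token = issue_ticket({"passenger": "A123", "route": "X-Y", "valid_day": "2026-02-03"})
valid = verify_ticket(token)                # payload comes back: accepted offline
tampered = verify_ticket(token[:-1] + "x")  # None: signature no longer matches
```

In practice the key would be rotated and device-specific and the payload would carry an expiry, but the point is only that validation can proceed during a total backend outage.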
RailOne service was eventually restored after several hours of intensive troubleshooting. While the digital gates reopened and the apps regained functionality, the residual impact on the day's schedule lasted into the late evening. The event stands as a landmark case of how a single technical glitch can halt the movement of an entire region. As the railway industry continues to push toward total digitalization, the RailOne incident serves as a cautionary tale about the balance between innovation and reliability.

* Key insight: Hackers are weaponizing rogue employees at Kraken to bypass traditional security measures and extort the cryptocurrency exchange.
* What's at stake: The security incident validates traditional banks' fears that granting Federal Reserve master accounts to uninsured crypto institutions could introduce vulnerabilities to the nation's financial networks.
* Supporting data: Rogue employees potentially viewed roughly 2,000 client accounts, which represents 0.02% of the exchange's total user base.
Overview bullets generated by AI with editorial review
Just weeks after making history as the first digital asset company to gain direct access to the Federal Reserve's payment infrastructure, cryptocurrency exchange Kraken is fighting an extortion plot fueled by rogue employees. For the traditional banking industry, the security incident validates long-standing fears that granting Federal Reserve master accounts to uninsured crypto institutions could introduce systemic operational and cybersecurity vulnerabilities into the nation's core financial networks. Criminals are threatening to release videos of the exchange's internal systems. Specifically, the videos apparently show the exchange's internal client support systems and the customer data accessible within them, as accessed with legitimate employees' credentials. The breach stems from insider recruitment, as the attackers compromised members of the company's customer support team, according to an April 13 post on social media platform X by Nick Percoco, the company's chief security officer. Kraken has not confirmed whether the compromises involved monetary bribes. The exchange also has not specified whether the rogue employees recorded the footage themselves or if the hackers recorded it while the employees granted them access. Kraken has revoked the employees' access, notified the affected clients and is refusing to negotiate with or pay the bad actors, Percoco said.
The extortion attempt comes a month after the Federal Reserve Bank of Kansas City approved a "limited purpose" master account for Kraken Financial on March 4. The controversial decision gives the Wyoming-chartered institution the ability to move funds directly via the central bank's payment rails. Banking advocates, led by the Independent Community Bankers of America, have vehemently opposed the approval. Rebeca Romero Rainey, president and CEO of the trade group, noted that "granting nonbank entities and crypto institutions access" to master accounts poses direct risks to the broader banking system, according to a press release on the day of the Fed's decision. The trade group urged the Federal Reserve to limit account access to institutions that already meet the financial sector's "highest standards."
Inside the extortion attempt
Attackers threatened to share videos of Kraken's internal systems with media outlets and across social platforms if the exchange rejects their demands, according to Percoco. The company traced the security breach to its own staff. The exchange received a tip in February 2025 about a video on a criminal forum that showed access to its client support systems, Percoco said. The company identified a support team member as the culprit and immediately revoked their access. Recently, the exchange uncovered a second, similar incident involving a different support team employee, he said. The rogue employees potentially viewed roughly 2,000 client accounts, which represents 0.02% of the exchange's total user base, according to Percoco. Percoco emphasized in his post that "funds were never at risk" and that the exchange's core systems remained secure. The company also "will not pay these criminals" and "will not ever negotiate with bad actors," he said. "We are working with federal law enforcement to ensure the individuals involved face consequences for their actions," Percoco said.
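The disclosed figures imply the rough size of the exchange's user base; this is plain arithmetic on the numbers Percoco gave, not a figure Kraken has stated:

```python
accounts_viewed = 2_000   # client accounts the rogue employees potentially viewed
share_of_users = 0.0002   # 0.02% of the total user base, expressed as a fraction

implied_user_base = accounts_viewed / share_of_users  # roughly ten million users
```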
The rising threat of insider recruitment
The security breach at Kraken highlights a growing cybersecurity trend targeting the broader financial and technology sectors: the weaponization of rogue employees. Cybercriminals actively recruit insiders to bypass traditional hacking methods. For example, a darknet advertisement sought to hire individuals currently working at or contracted to cryptocurrency exchanges such as Kraken, Coinbase and Binance, according to findings released in December by cybersecurity firm Check Point. In the cases Check Point analyzed, the criminals offered payouts ranging from $3,000 to $15,000 based on the employee's level of access, promising that the arrangement requires no malware and respects the rogue employee's anonymity. The extortion attempt at Kraken mirrors other recent industry incidents involving insider threats. Last year, Coinbase published a post detailing how it is standing up to extortionists, noting that the company increased its investment in insider-threat detection and automated response. To further secure its support operations, Coinbase said at the time it was opening a new support hub in the U.S. and adding stronger security controls across all locations.
Ammunition for banking advocates opposing Fed access
The extortion attempt arms traditional banking advocates with a concrete example of the operational and cybersecurity vulnerabilities they warned about when opposing the exchange's new Federal Reserve master account. The Independent Community Bankers of America and 42 state bankers' associations last week urged the Kansas City Fed to reconsider the approval, according to a joint letter. The trade groups pressed the central bank to ensure that the terms of the digital asset company's account access include "robust risk controls and enforceable off-ramps," according to the letter.
Some outside observers and researchers echo these concerns, warning that lightly regulated crypto firms could pose broad operational and financial stability risks. The Bank Policy Institute, a banking research and advocacy group, noted in an October 2020 report that businesses such as Kraken face subtle incentives to shift their reserves toward riskier assets. Traditional banks undergo rigorous supervision and must hold deposit insurance precisely to mitigate such risks, according to the institute. The internal security breach also arrives amid heightened congressional scrutiny over the exchange's integration into the federal payment rails. In late March, Rep. Maxine Waters, the top Democrat on the House Financial Services Committee, demanded the Kansas City Fed disclose more details regarding the approval process, citing potential financial-system risks. Earlier in March, Michelle Bowman, the Fed's vice chair for supervision, acknowledged that granting a crypto exchange direct access to the federal payment system is uncharted territory. But, she said, Kraken's limited purpose account would be a test case. "It's a bit of an experiment," Bowman said.
