The latest news and updates from companies in the WLTH portfolio.
RKLB is back in focus as capital floods into the space sector ahead of a potential SpaceX IPO, and the market is treating that shift as a turning point rather than a passing headline. The immediate question is not whether interest exists, but whether the current wave of enthusiasm can be converted into durable momentum.

What Happens When Capital Chases the Next Space Winner?

The current setup is being shaped by two forces at once: a broader rush into space-related names and a more specific reassessment of Rocket Lab's place in that trade. One analyst at Stifel lifted the price target on Rocket Lab to $105 from $90 while keeping a Buy rating, and that new target matched the highest price target on Wall Street for the company. That matters because the stock has already been volatile, with repeated large swings over the past year.

At the same time, the market has been rewarding companies tied to defense and space spending themes. That backdrop helps explain why the share price has been pushed higher even when the company itself has not delivered a single transformational update. The movement is less about one event and more about a pattern: investors are searching for the next listed name that can capture the same demand a potential SpaceX IPO is bringing into the sector.

What If the Current Momentum Meets Real Business Execution?

The case for RKLB rests on more than trading sentiment. The company recently gained support from a multi-launch agreement with the Institute for Q-shu Pioneers of Space, Inc., adding three additional Electron launches and bringing the total number of missions for iQPS to 15. That extended relationship signals recurring revenue potential, which is a central theme in the small launch market.

There is also a broader operational picture. Rocket Lab is up 17.7% since the beginning of the year and was trading close to its 52-week high of $96.30 from January 2026. A separate read on the stock notes that it remains about 14% below a recent peak, which shows how quickly sentiment has been moving. The company also carries a backlog of $1.85 billion, with space systems representing 74% of that total and launch services making up the rest. Analysts project revenue rising to $870 million this year and $1.2 billion in 2027.

What If Neutron Delivers Late, Not Soon?

The biggest timing issue remains Neutron. The rocket was expected to launch earlier this year, but a Stage 1 fuel tank ruptured during a hydrostatic pressure test, pushing the launch to the fourth quarter. The issue was identified before launch, which limits the immediate operational damage, but it still shifts larger payload opportunities into a later window. That delay matters because it keeps some of the company's larger-launch ambitions on the back burner for at least a couple more quarters.

Even so, the company has not stood still. In mid-March, it signed a $190 million contract for 20 hypersonic test flights using HASTE, its suborbital variant of Electron. That agreement shows how defense and national security work is helping broaden the company's mix while Neutron remains in development.

What If the Market Starts Separating Hype From Staying Power?

The most likely path is a split verdict. In the best case, RKLB continues to benefit from sector momentum, converts its backlog into delivery, and gets a cleaner launch narrative once Neutron finally takes off.
In the most challenging case, enthusiasm outruns execution, and the stock remains vulnerable because its valuation is being pulled higher by expectation as much as by current results. The middle path is more realistic: strong customer activity, steady financial growth, and periodic volatility whenever launch milestones slip.

Who wins in that setup? Long-term investors who can tolerate swings may benefit if the company keeps turning contracts into repeat business. Customers looking for reliable small-launch and space systems support also gain from a business that is becoming more diversified. Who loses? Short-term traders who confuse momentum with certainty, and any investor expecting the company to fully close the gap with larger rivals before its next major technical milestone.

For now, the right reading is disciplined rather than dramatic. The market is signaling that the space sector may be entering a new capital cycle, and RKLB is one of the clearest public names positioned to absorb that attention. But the next phase will be decided less by excitement than by delivery, cadence, and whether the company can turn recurring contracts and launch progress into sustained execution. That is the real test ahead for RKLB.
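For readers who want to sanity-check the figures above, here is a minimal sketch in plain Python of the arithmetic the article implies; the variable names are ours, and the inputs are simply the numbers reported above:

```python
# Illustrative arithmetic only, using the figures cited in the article above.
backlog_total = 1.85e9           # total backlog, USD
space_systems_share = 0.74       # space systems' share of the backlog

space_systems = backlog_total * space_systems_share
launch_services = backlog_total - space_systems
print(f"Space systems backlog:   ${space_systems / 1e9:.2f}B")    # ~$1.37B
print(f"Launch services backlog: ${launch_services / 1e9:.2f}B")  # ~$0.48B

# Revenue ramp implied by the analyst projections cited above.
revenue_this_year = 870e6        # projected revenue this year, USD
revenue_2027 = 1.2e9             # projected 2027 revenue, USD
implied_growth = revenue_2027 / revenue_this_year - 1
print(f"Implied 2027 revenue growth: {implied_growth:.0%}")       # ~38%
```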

SpaceX has struck a deal with AI coding platform developer Cursor that will allow it to use xAI's Colossus supercomputer to "create the world's best coding and knowledge work AI." On X, SpaceX said, "the combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models."

As part of the deal, SpaceX will retain the option to purchase Cursor for $60 billion later this year. If it doesn't, it will owe Cursor $10 billion "for our work together." All of this comes weeks before SpaceX's impending IPO, which could value the company as high as $1.75 trillion.

Although SpaceX has been primarily a rocket company for most of its existence, it has diversified in recent years. While development of the Starship launch vehicle continues apace, in 2026, SpaceX also supports 10 million+ Starlink customers. In February, it also absorbed Grok-developer xAI, making it the parent company of X and bringing most of SpaceX CEO Elon Musk's various ventures under a single banner ahead of the IPO.

The Cursor deal is just the latest step in boosting SpaceX's value ahead of its IPO, but it also solves a number of problems facing SpaceX, xAI, and Cursor. It means Cursor can train its own AI model(s) on xAI's massive dataset, and would no longer be dependent on OpenAI and Anthropic to enhance its coding toolsets.

In a Tuesday blog post, Cursor says it "released Composer less than six months ago as our first agentic coding model. After that, Composer 1.5 scaled reinforcement learning by over 20x. Composer 2 then added continued pretraining, reaching frontier-level performance at a fraction of the cost of other models. Each step up in compute has translated to meaningfully more capable models." It acknowledged, however, that Cursor has "been bottlenecked by compute," so the SpaceX deal means "our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models."

This move also gives xAI its own coding tool to better compete with contemporary AI firms, and it adds even more narrative and actual value to SpaceX. Cursor's momentum can now fuel the continued expansion and growth of xAI and Grok, which has struggled to maintain relevance versus ChatGPT, Gemini, and other AI chatbots. SpaceX also locks Cursor at a set value for its potential purchase, which, at the rate Cursor's valuation has grown, is a victory in itself. Cursor was valued at just $2.5 billion in January 2025, but that jumped to $29.3 billion by year's end, The Wall Street Journal reports. Last week, it was looking at a funding round that would push its estimated value to over $50 billion.

However, this also represents a strategic risk for SpaceX. Although Cursor and xAI may be able to develop a proprietary coding tool to compete with other major AI companies, doing so will take time. If it takes too long, or never quite catches up, SpaceX could be saddled with a company that peaked before it was purchased. That's on top of the debt it acquired with the mergers with xAI and its subsidiary, Twitter/X.

Fortunately for Musk and his fellow SpaceX shareholders, the IPO will probably come before the gamble needs to show its returns. But with Musk claiming xAI needs to be rebuilt from the ground up, how it is rebuilt may go a long way toward deciding its own and Cursor's long-term future.
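To make the deal's two-branch structure explicit, here is a toy sketch in Python of the outcomes as described above; `spacex_outlay` is our own hypothetical helper, and the real terms are surely more complex than this:

```python
# Toy illustration of the deal's two branches as reported above.
def spacex_outlay(exercises_option: bool) -> float:
    """Return SpaceX's cash outlay in USD under each branch of the deal."""
    acquisition_price = 60e9   # option to buy Cursor outright later this year
    collaboration_fee = 10e9   # owed to Cursor "for our work together" otherwise
    return acquisition_price if exercises_option else collaboration_fee

for choice in (True, False):
    print(f"Exercise option: {choice} -> outlay ${spacex_outlay(choice) / 1e9:.0f}B")
```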

Cyber defence is now being tested by the same tools designed to strengthen it. Anthropic is investigating a claim that a small group of people gained access to its Claude Mythos model, a system the company says is too powerful to release publicly. The allegation matters because it does not describe a conventional break-in. Instead, it points to a possible failure in access control inside a third-party vendor environment, where advanced AI can become exposed without a direct attack on the core system.

Why this matters right now

The immediate concern is not that malicious actors are confirmed to hold the model. Anthropic says it has no evidence its systems were affected, and there is no suggestion the model has been used for harm. But the claim is still significant because Mythos is built around identifying vulnerabilities and simulating offensive techniques. If access controls fail, the issue becomes bigger than one company's internal safeguards; it becomes a test of whether cyber defence can keep pace with tools that are powerful enough to reveal weaknesses at scale.

What the access claim reveals about control

The reported route into Mythos is what makes the episode especially sensitive. Anthropic said it is investigating "unauthorised access to Claude Mythos Preview through one of our third-party vendor environments." That wording points to a perimeter problem rather than a classic hack. The company has already released the model to a limited number of tech and financial firms for security testing, which means the protection of access now depends not only on Anthropic but also on those firms and contractors.

This is where cyber defence becomes more complicated. A model can be technically secure at the centre and still be exposed through weaker links around it. The wider lesson is that the most advanced systems are only as controlled as the least disciplined environment that can reach them. In that sense, the question is not only who got in, but how many layers of trust the access chain required to fail first.

Expert warnings around misuse and exposure

Raluca Saceanu, chief executive of cyber-security company Smarttech247, said the episode was "most likely through misuse of access rather than a classic hack." That distinction matters. A misuse case suggests that permissions, oversight, and internal discipline may be more important than brute-force intrusion in the early stages of AI-related incidents.

Saceanu also warned that when powerful AI tools are accessed outside their intended controls, the risk is not merely a security incident. It can also mean the spread of capabilities that could be used for fraud, cyber abuse, or other malicious activity. That is a narrow but serious framing: the danger is not simply theft of code or data, but leakage of capability.

Richard Horne, head of the UK's National Cyber Security Centre, struck a more balanced note at a recent security conference, arguing that frontier AI can help make systems safer if it is secured from misuse. He said recent media coverage showed how quickly such tools are enabling discovery and exploitation of vulnerabilities at scale, and he urged delegates to keep doing the basics of cyber-security well.

Broader risks for companies testing advanced AI

The case also exposes a structural tension in how frontier systems are deployed. Anthropic has made Mythos available to a small number of companies to help secure their systems against its reported ability to exploit vulnerabilities.
That is a defensible strategy, but it creates a new dependency: the same model that helps defend systems must itself be guarded through third parties, contractors, and restricted access channels. That is why the latest claim resonates beyond one company.

The UK's AI Security Institute has already warned that Mythos represents a step up from earlier models in the cyber threat it poses. It said the model could carry out multi-step attacks and discover weaknesses without human intervention, tasks that would normally take human professionals days. In one test, Mythos completed a 32-step cyber-attack simulation in three out of 10 attempts.

Those numbers do not prove a breach, but they do explain the anxiety. When a model can move from finding flaws to simulating attack paths, the security burden shifts from software performance to governance, segmentation, and human discipline. The issue is not only whether the model can think, but whether the institutions around it can limit who gets to ask it to think.

What comes next for cyber defence

The unresolved question is whether this incident becomes a one-off access problem or a sign that the safeguards around frontier AI are still catching up with the technology itself. For now, Anthropic says it is investigating a claim, not confirming a compromise. Yet the episode adds pressure to every organisation that is experimenting with advanced models in cyber defence while trying to keep them away from misuse. If the controls are thin, the capabilities may not stay contained for long. That leaves the core challenge intact: can cyber defence evolve fast enough to secure systems designed to expose weaknesses before those weaknesses are exposed by the wrong people?

A small group exploited third-party vendor weaknesses to access an AI model capable of discovering thousands of zero-day vulnerabilities, forcing Anthropic to launch a $100M restricted access program.

An AI model that can autonomously find over 1,000 zero-day vulnerabilities across major operating systems just got accessed by people who were never supposed to touch it. That's roughly the cybersecurity equivalent of leaving the keys to every lock in the building taped to the front door.

Anthropic confirmed that its Claude Mythos Preview model, a system with genuinely alarming offensive cybersecurity capabilities, was breached by a small group of unauthorized users. The access was gained through compromised contractor credentials from a third-party vendor, combined with URL inferences gleaned from a separate data breach at Mercor, an AI training data provider. The incident occurred just two weeks after Anthropic publicly announced Mythos on April 7, 2026.

Here's the thing about Mythos that makes this breach particularly unsettling. This isn't a chatbot that writes poetry or summarizes PDFs. Mythos was designed to discover security vulnerabilities autonomously, and it turned out to be disturbingly good at the job. The model identified thousands of zero-day vulnerabilities, which are security flaws unknown to the software vendor and therefore unpatched, across major operating systems and web browsers. Among its discoveries was a 27-year-old flaw in OpenBSD, a system widely regarded as one of the most secure operating systems ever built. In English: Mythos found holes in software that the entire global security community missed for nearly three decades.

At the time the breach was discovered, over 99% of the vulnerabilities Mythos identified remained unpatched. That statistic alone explains why Anthropic wasn't exactly planning to hand out free trials. The model's capabilities represent a double-edged sword of historic proportions. In defensive hands, it's a revolutionary security tool. In the wrong hands, it's a skeleton key to the internet.

The unauthorized users gained access within roughly 24 hours of the model's public announcement. The speed of the intrusion suggests either sophisticated planning or opportunistic exploitation of already-compromised credentials. Either way, it exposed a fundamental weakness not in Anthropic's core infrastructure, but in the sprawling chain of third-party vendors that modern AI companies depend on.

Anthropic's response was swift and expensive. The company launched Project Glasswing, a restricted access program designed to let vetted organizations use Mythos for defensive cybersecurity purposes while keeping the model locked away from everyone else. The program comes with $100 million in usage credits for participating organizations. That's a substantial investment, signaling that Anthropic views this not as a PR crisis to manage but as an existential governance challenge to solve.

The goal is straightforward: allow trusted entities like government agencies and financial institutions to leverage Mythos for identifying and patching vulnerabilities in their own systems, without creating pathways for malicious exploitation. Look, the concept sounds elegant on paper. In practice, restricting access to a model this powerful is like trying to put toothpaste back in the tube. Once the capabilities are known to exist, the incentive structure for bad actors to replicate or access them only intensifies.
The breach itself has been categorized as a vendor security failure, which is a polite way of saying the weakest link wasn't Anthropic's own security but the credentials management practices of a contractor. This pattern is painfully familiar across the tech industry. Some of the most consequential breaches in history, from Target to SolarWinds, exploited third-party access points rather than primary defenses.

This incident arrives at a moment when AI safety discourse has shifted from theoretical hand-wringing to concrete urgency. Government officials and financial sector leaders have reportedly begun urgent discussions about how to govern AI systems with capabilities this significant. For investors tracking the AI and cybersecurity sectors, the Mythos breach crystallizes several trends worth watching closely.

First, the cybersecurity market is almost certainly about to see accelerated capital flows. When an AI model can find thousands of zero-day vulnerabilities that human researchers missed for decades, every organization with a digital footprint suddenly needs to reassess its defense posture. Companies specializing in vulnerability management, endpoint detection, and AI-powered security tools stand to benefit as enterprises scramble to adapt.

Second, AI companies face a new category of reputational and regulatory risk. Anthropic built Mythos with defensive applications in mind, but the unauthorized access demonstrates that intent and outcome don't always align. Regulators will likely use this incident as evidence that voluntary safety commitments are insufficient, potentially accelerating mandatory compliance frameworks for AI developers. Any company building frontier AI models should be pricing in the cost of significantly more rigorous access controls and vendor audits.

Third, the third-party vendor ecosystem is becoming a critical vulnerability surface for AI companies specifically. Traditional software companies have dealt with supply chain security for years, but AI models represent a unique challenge. The value of unauthorized access to a model like Mythos is orders of magnitude higher than access to a conventional enterprise software tool. This asymmetry between the value of the asset and the security of the access chain creates an extremely attractive target profile for sophisticated threat actors.

The competitive landscape may also shift in interesting ways. Anthropic's willingness to invest $100 million in a controlled access program suggests that frontier AI companies will increasingly need to build security and governance infrastructure that rivals their research capabilities. That's expensive and complex, potentially favoring larger, better-capitalized players over smaller AI startups that lack the resources to manage models with dual-use potential.

There's also a less obvious dynamic at play. Mythos's ability to discover vulnerabilities at scale could eventually become a net positive for overall internet security, if its deployment remains restricted to defensive applications. The 99% unpatched rate means the model has essentially generated a roadmap for fixing critical flaws across the software ecosystem. Whether that roadmap gets used for patching or exploitation depends entirely on how well Anthropic and its partners can maintain control.

The Mercor data breach connection adds another layer of concern. It suggests that breaches at AI training data providers can have cascading effects, creating attack vectors that weren't previously considered.
As the AI supply chain grows more interconnected, a security failure at one node can compromise systems several degrees removed.

For what it's worth, Anthropic appears to be taking this seriously rather than defaulting to the standard corporate playbook of minimizing and moving on. The scale of the Glasswing investment and the speed of the response suggest genuine alarm at the leadership level. But the fundamental tension remains unresolved. Building AI systems powerful enough to autonomously discover zero-day vulnerabilities means building AI systems powerful enough to cause serious harm if control is lost. The Mythos breach didn't result in catastrophic exploitation, at least not that we know of yet. The next one might not be so uneventful.

Bottom line: The Mythos incident is a live demonstration that AI safety isn't an abstract philosophical debate. It's an operational security problem with real-world consequences. How Anthropic, regulators, and the broader industry respond will set precedents for governing the most capable AI systems ever built. The $100 million question, literally, is whether restricted access programs can actually work when the incentives to break them are this high.

SAN FRANCISCO - SpaceX says it has the rights to buy artificial intelligence coding tool Cursor for $60 billion later this year as Elon Musk's space exploration and AI company looks for ways to compete with rivals Anthropic and OpenAI ahead of a planned Wall Street debut. SpaceX said that, alternatively, it could pay $10 billion to "work together" with Cursor.

SpaceX announced the deal Tuesday on the social platform X, which along with the AI chatbot Grok is part of a constellation of properties that Musk has merged into his rocket company. Cursor, made by San Francisco startup Anysphere, is a popular AI coding assistant. What SpaceX describes as Cursor's wide "distribution to expert software engineers" is likely part of what makes it attractive to Musk's company, giving it access to a new customer base.

Cursor said its new partnership with SpaceX subsidiary xAI will enable it to build future AI products using xAI's massive AI data center complex Colossus, based in Memphis, Tennessee. "We've wanted to push our training efforts much further, but we've been bottlenecked by compute," Cursor said in a statement on X, which didn't mention the possibility of being acquired. "With this partnership, our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models."

Cursor, which started in 2022, helped spark a trend called "vibe coding" as AI coding assistants have become increasingly capable of doing the work of computer programming. Cursor competes with other coding tools like Anthropic's Claude Code and OpenAI's Codex but also has relied heavily on partnerships with those larger AI research companies for the foundations of its technology. It was Cursor's Composer, combined with Anthropic's Claude Sonnet, that a prominent AI researcher was playing with for weekend projects when he coined the phrase "vibe coding" in early 2025.

Sigrid Jin was waiting to board a plane when he saw stunning news that artificial intelligence start-up Anthropic had accidentally leaked the source code for Claude Code, its popular A.I. agent. Mr. Jin, 25, an undergraduate student, scrambled to post a copy online. His worried girlfriend quickly texted him: Was he violating copyright law?

Mr. Jin turned to a team of A.I. assistants for a solution. He directed them to rewrite the leaked code in another programming language, then shared that version online. Within hours, more than 100,000 people had liked or linked to it.

Anthropic, one of the leading A.I. companies alongside OpenAI, has said the leak had been caused by human error and, citing copyright violations, demanded that GitHub, an online library of computer code, remove posts sharing the code. Thousands of posts were taken down. But Mr. Jin's version remains online. He said Anthropic had not asked him to take it down. It is unclear whether Anthropic, which did not respond to questions from The New York Times, is drawing a distinction with the rewritten code.

Mr. Jin said he believed rewriting the code transformed it into a new work, one that Anthropic could not claim ownership over. He said he was driven less by money or fame than by a desire to make a broader philosophical point. What is the value of copyrighted intellectual property in an era when A.I. can easily replicate not just computer code but art, music and literature in minutes? "I just wanted to raise some ethical questions in the A.I. agent era," he said. "Any creative work can be reproduced in a second."

Computer code has long been treated as a protected creative work, akin to music or art. But enforcing copyright has been difficult, because a software's underlying computational instructions can be copied or tweaked in ways that are hard to trace. Even what counts as protected has been up for debate. Google and Oracle waged a legal battle for years, arguing over where to draw the line between creative expression and the basic functions of software. Now, a new technology is making that even more complicated.

When the Anthropic leak surfaced online, Mr. Jin and his friends already treated A.I. assistant tools like Claude Code and OpenClaw as employees to handle daily tasks. These agents don't just answer questions; they carry out tasks on their own once prompted with a goal, such as "organize my receipts" or "make a new social media post." The agents also make copying and imitation easier than ever and on a far greater scale.

For many software companies, as well as authors, artists and musicians, the risk is not just direct copying. It's that the market for their work could be flooded with A.I.-generated substitutes that cost almost nothing to produce, diminishing the value of the original work. "What happened with the Claude Code leak is essentially a preview of what's coming for every creative industry," said Russ Pearlman, a lawyer specializing in A.I. and technology and chief information officer of Dallas College. Existing copyright rules, he said, were built on the assumption that copying takes time and that there's a meaningful window to take action to protect a work. "When an A.I. agent can rewrite 512,000 lines of code into a different language before most people have finished their morning coffee, that assumption collapses," he said.

In 2022, the United States Copyright Office said works created entirely by A.I. without human creative input are not eligible for copyright protection.
A follow-up review reaffirmed that decision, finding that a simple human prompt was not enough. But courts have yet to decide how much human involvement is required.

"Artists and musicians are extremely concerned about this," said Yelena Ambartsumian, the founder of Ambart Law, a firm in New York that counsels start-ups about A.I., intellectual property and other matters. "All of the resources you put into being able to protect your copyrightable human expression, does it really matter if in a second or two hours that expression can be copied and then changed?"

Many popular A.I. models were trained to produce humanlike prose by ingesting vast swaths of material posted online. Artists, authors and media companies have said A.I. firms have infringed their copyrights by using their work to train their systems. Last year, Anthropic agreed to pay $1.5 billion to a group of authors and publishers in the largest settlement in the history of U.S. copyright cases, after a judge ruled it had illegally downloaded and stored millions of copyrighted books. Anthropic has argued that, rather than replicating a creator's exact work, its systems analyze the underlying patterns in that work to build something new. (The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

"The library of everything that has been written has already been fed into A.I.," said Kandis Koustenis, a lawyer who specializes in intellectual property at Bean, Kinney & Korman, a firm in Virginia. "From the author's point of view, the genie is out of the bottle a little bit."

Since the advent of the personal computer, tech companies have devised ways to recreate software that is similar to rivals' without violating copyright, including techniques that insulate programmers from directly copying the original code. Mr. Jin argued that he had used a comparable approach to rewrite the Anthropic code, using A.I. agents rather than human programmers. That distinction has not been tested in court.

While some A.I. companies, including Anthropic, closely guard the inner workings of their systems, others have embraced open source, based on the idea that transparency makes A.I. safer and accelerates innovation. As agents make it easy to replicate such work with minimal human input, creativity is becoming more valuable, Mr. Jin said. His goal was not to create something new, but to highlight how few truly novel ideas remain. "We are now relying on models that are relying on ideas that come out of other people's heads," Mr. Jin said. "It is becoming difficult to have novelty."

Meaghan Tobin covers business and tech stories in Asia with a focus on China and is based in Taipei.

Cloud giant Amazon (AMZN) will invest $5 billion in AI startup and Claude-maker Anthropic, with commitments to pump in up to $20 billion in additional capital in the coming years. Anthropic will make use of Amazon's custom chip, Trainium, as well as its ARM-based processor, Graviton, to train its AI models. Commenting on the development, Amazon CEO Andy Jassy said, "Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI."

After an initial pop on the news, shares of Amazon have retreated slightly. The $2.7 trillion market cap company's stock is up 8.75% on a year-to-date (YTD) basis. Thus, the market appears to be fairly indifferent about the deal. But should it be, or is it ignoring Amazon's AI credentials? Let's find out.

At the outset, I must be forthright and admit that at the current juncture, Google's (GOOGL) (GOOG) Tensor Processing Unit, or TPU, is leading the race in custom ASICs. With names like Apple (AAPL), Meta (META), and Anthropic itself onboarded as customers, Amazon's Trainium set of chips is looking up at the TPU in the custom chip ladder. However, with a $20 billion annualized revenue run rate (ARR) already reached for its custom chip division, Amazon is already a serious contender.

Amazon's chip division is home to several key proprietary technologies. Trainium serves as the company's purpose-built silicon for AI training and inference workloads, while Graviton functions as a general-purpose CPU that now powers a substantial share of computing operations across AWS infrastructure. Rounding out the lineup is the Nitro System, a dedicated platform managing security protocols and networking functions.

Notably, CEO Andy Jassy has signaled that demand for these processors has reached a point where Amazon may soon begin offering rack-level configurations directly to external buyers. Jassy believes this business line carries the potential to generate annual recurring revenue of $50 billion, a move that would place the company in direct competition with Nvidia (NVDA) and Advanced Micro Devices (AMD).

On the supply front, the picture looks increasingly favorable. Capacity from Trainium 2 has been completely absorbed by existing customers, and Trainium 3, which began reaching customers at the start of 2026, is already fully subscribed. Perhaps most telling is the reception for Trainium 4, a chip still approximately 18 months away from broad availability, which has already begun attracting advance orders from prospective buyers.

Moreover, Trainium 3, built on TSMC's 3nm process, delivers 2.52 petaflops of FP8 compute per chip with 144 GB of HBM3e memory, a 4.4x performance improvement over Trainium 2 with 4x better energy efficiency. Customers are already reporting 50% lower training and inference costs versus GPU alternatives. Notably, where Trainium holds a structural advantage is in its tight integration within the broader AWS ecosystem and its flexibility across both training and inference workloads.
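To see what the 4.4x claim implies about the previous generation, here is a minimal back-of-the-envelope sketch in Python; it assumes the 4.4x figure refers to per-chip compute, which the article does not state explicitly:

```python
# Illustrative arithmetic only, based on the Trainium 3 specs cited above.
t3_fp8_pflops = 2.52        # Trainium 3 FP8 compute per chip, petaflops
gain_over_t2 = 4.4          # stated performance improvement over Trainium 2

# Backing out what the 4.4x claim would imply for Trainium 2 per-chip compute.
t2_implied_pflops = t3_fp8_pflops / gain_over_t2
print(f"Implied Trainium 2 compute: ~{t2_implied_pflops:.2f} PFLOPs/chip")  # ~0.57
```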
Trainium's head architect has noted that the chip delivers 30% to 40% better price performance compared to other hardware vendors within AWS, and that Trainium chips serve both inference and training workloads quite effectively. Looking at the roadmap, Trainium 4 is expected to begin delivering in 2027, promising 6 times the FP4 compute performance, 4 times more memory bandwidth, and 2 times more high-bandwidth memory capacity than Trainium 3. Perhaps most strategically significant, a key development for Trainium 4 is its planned support for Nvidia's NVLink Fusion interconnect technology, allowing AWS to build heterogeneous clusters that blend its custom silicon with Nvidia hardware rather than forcing customers to choose between ecosystems.

Moving to Claude, Amazon's investment of about $8 billion in 2023 has proved to be a masterstroke, just like Microsoft's (MSFT) was with ChatGPT. With Amazon's ownership of Anthropic expected to be between 16% and 18% currently, and the latter's valuation already hovering around $380 billion, the e-commerce giant is already sitting on some nifty profits. Moreover, as the primary training and cloud partner of Anthropic, Amazon is also bringing in some serious revenue from the company.

During the fourth quarter of 2025, Amazon reported net sales of $213.4 billion, representing a 14% increase compared to the same period a year earlier. The AWS segment delivered particularly strong results, posting revenue of $35.6 billion, a gain of 24% on an annual basis. Earnings per share came in at $1.95, a 4.8% improvement over the prior year's figure, extending the company's streak of year-over-year (YoY) earnings growth to nine consecutive quarters. Despite this progress, the result fell marginally short of the consensus estimate of $1.97, marking the first time in nine quarters that Amazon failed to meet bottom-line expectations.

From a cash flow perspective, net cash generated from operating activities reached $54.5 billion for the period, climbing 19.3% relative to the year before. Amazon wrapped up the full year 2025 holding a cash balance of $86.8 billion, carrying no short-term debt obligations on its balance sheet. Looking ahead, management has guided for first quarter 2026 revenues in the range of $173.5 billion to $178.5 billion. The midpoint of that range would translate to an annual growth rate of approximately 13%.

On the valuation front, the picture presents a degree of contrast. AMZN stock currently trades at a premium relative to broader industry benchmarks, yet sits at a discount when measured against its own historical averages. Forward multiples for P/E, P/S, and P/CF stand at 32.09, 3.30, and 16.44, each above their respective sector medians of 16.31, 0.94, and 10.21. That said, the forward P/E trails the stock's five-year historical average of 162.72 by a considerable margin.

Thus, analysts remain bullish on AMZN stock, earmarking for it an overall rating of "Strong Buy." The mean target price of $286.66 denotes upside potential of about 15% from current levels. Out of 58 analysts covering the stock, 49 have a "Strong Buy" rating, six have a "Moderate Buy" rating, and three have a "Hold" rating.
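As a quick check on the valuation math, here is a minimal Python sketch of what the stated target and multiples imply; the ~15% upside is approximate, so the implied price is a rough figure rather than an actual quote:

```python
# Illustrative arithmetic only, based on the valuation figures cited above.
mean_target = 286.66        # mean analyst target price, USD
stated_upside = 0.15        # ~15% stated upside potential from current levels

implied_price = mean_target / (1 + stated_upside)
print(f"Implied current share price: ~${implied_price:.0f}")     # ~$249

# How far the forward P/E sits above the sector median.
fwd_pe, sector_median_pe = 32.09, 16.31
premium = fwd_pe / sector_median_pe - 1
print(f"Forward P/E premium to sector median: ~{premium:.0%}")   # ~97%
```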

Wednesday. Today, we're conserving space to discuss the SpaceX-Cursor deal by collapsing our usual formatting.

Chart of the Day: After releasing the impressive and comparatively cheap Kimi K2.6 model this week, Chinese AI lab Moonshot AI has seen demand for its models spike. Given how people have been hot-swapping models via demand routers like OpenRouter, model loyalty seems to be a joke. Which means the model treadmill will continue. Viva!

Rockets, satellite internet, AI and social media conglomerate SpaceX on Tuesday announced a deal with Cursor that's worth anywhere between $10 billion and $60 billion. SpaceX is partnering with Cursor, a purveyor of AI coding tools for developers that has scaled to more than $2 billion in annualized revenue. Reporting indicates the startup could be worth as much as $50 billion today on the private markets.

In simple terms: SpaceX and Cursor will bring their respective talents together to "create the world's best coding and knowledge work AI," as the space company put it. And per Cursor, the deal would let it "dramatically scale up the intelligence" of its models.

Why are there two prices for the deal? SpaceX will pay no less than $10 billion to Cursor for the collaboration, but has the option to buy Cursor outright for $60 billion "later this year." We won't know if SpaceX will pull that lever until it does, or doesn't.

Why are the two companies partnering? If you combine the two, you get a previously constrained AI lab ("we've been bottlenecked by compute," Cursor said in a post) with all the muscle it could hope for. In theory, this could result in better models from the pair than either could manage on their own.

Are their relative strengths truly complementary? Yes. Few companies in history can match xAI's ability to raise cash, and Cursor has been successfully selling its AI coding product in a market dominated by OpenAI (Codex) and Anthropic (Claude Code).

Cursor's growth is both impressive and important. Many of the world's developers use AI coding tools from OpenAI, Anthropic and Cursor. That means the younger startup has access to a firehose of usage data from developers using both its own models and its competitors' products in production. It's worth noting that Cursor lets its customers opt out of sharing their information with the company and third parties, but presumably, enough folks allow the company to learn from their work to provide a valuable data source. So while one could frame the deal as 1 + 1 = 3 (xAI's compute + Cursor's AI prowess = more competitive AI coding models), it's important to include the data component in our calculations.

Cursor has formed a strategic partnership with SpaceX, giving the company the choice to buy Cursor for $60 billion or pay $10 billion to work together on development, the WSJ reports. As part of the deal, Cursor will use SpaceX's Colossus supercomputer, built by xAI and acquired by SpaceX in a February 2026 all-stock merger that valued the combined company at about $1.25 trillion. Colossus delivers computing power equal to roughly one million Nvidia H100 chips. Cursor CEO Michael Truell showed his support for the partnership by reposting SpaceX's announcement on X.

SpaceX will decide by the end of the year whether to buy Cursor for $60 billion or pay $10 billion to work together. If it buys Cursor, this would be the biggest AI startup acquisition so far and would add the top developer coding tool to Elon Musk's tech group, which includes xAI, Starlink, and a possible 2026 IPO. If SpaceX chooses the $10 billion collaboration, Cursor will stay independent and, with $2 billion in ARR and expected growth to $6 billion by year-end, will be in a strong position for an IPO by 2027.

At Google, leaders are anxious about falling behind in the race to offer AI coding tools, especially as rivals like Anthropic PBC offer more effective and popular tools to businesses, according to people familiar with the matter. The search giant is now working to unite some of its coding initiatives under one banner to speed progress and take advantage of a surge in customer interest.

In some corners of Alphabet Inc.'s Google, particularly AI lab DeepMind, concerns about the company's position are mounting, according to current and former employees and executives, who declined to be named because they weren't authorized to speak publicly. Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot. But Google doesn't have a clear solution for them. Its Gemini model's capabilities are sprinkled across half a dozen different coding products with different branding, indicating how the company's lack of focus and competing internal efforts have hampered success, the people said. Even internally, some Google engineers prefer to use Anthropic's Claude Code, they said. More concerning, the people said, are the engineers who are struggling to adopt AI coding at all.

Google has made some effort to reduce the internal confusion over priorities. Chief AI Architect Koray Kavukcuoglu is working with Google's main engineering team to unite the company's internal artificial intelligence coding tools in the coming weeks under Antigravity, a platform released last year, according to a spokesperson. DeepMind is also devoting more resources to AI coding by forming a new team led by research engineer Sebastian Borgeaud, according to a former Google employee. That new team was earlier reported by The Information. John Jumper, who won the Nobel Prize in 2024 alongside Google DeepMind Chief Executive Officer Demis Hassabis, is also at work on AI coding, according to a person familiar with the matter.

Google was widely viewed as ascendant in AI late last year with the release of Gemini 3, a model that appeared to outperform rival services across a range of benchmarks. In recent months, however, Anthropic and OpenAI have gained business momentum by focusing on the lucrative market for products that streamline the process of writing and debugging code to speed up software development. "Coding is the single easiest way to actually make money," said Keith Zhai, co-founder of startup TinyFish, which makes web agents. Many engineers in the valley toggle back and forth between Claude Code and OpenAI's Codex to see which program will give them the best results, but Google often isn't in the conversation, he added.

Google still has plenty of reasons to feel confident about its position: the company has made big strides in the quality of its foundation models, which underlie coding tools, and it has deep pockets and substantial computing power. "We've seen tremendous adoption of our internal coding tools such as Antigravity and others since introducing them over recent years, and their use has been turbocharging our model and AI tooling development," a Google spokesperson said in a statement. Meanwhile, Google has been eager to tout the speed of its internal culture change. Alphabet said in February that roughly 50% of new code at the company is written by AI. But Silicon Valley engineers are embracing AI coding so quickly that even a momentary lag in the market could be consequential.
There is a growing conviction in the industry that coding is not just a lucrative early application of AI, but the key to building software that matches human capabilities, said Raj Gajwani, a former Google executive who is now chief business officer of startup OpenArt AI. "From a computer science point of view, if you win at coding this year, you get the raw data you need to win at model capability next year," he said. Google's emphasis on its own technology has also complicated the push to catch up. Most employees are banned from using competing tools such as Claude Code or Codex due to security concerns, but Googlers can request exceptions if they can demonstrate they have a business case, one former employee said. Some teams at DeepMind, including those working on the Gemini model, internal applications, and open source models, use Claude Code, according to three former employees. "You want the best people to use the best tool, even inside Google," one of the former employees said. Anthropic cut off OpenAI's access to its models last year, Wired magazine reported. Google has invested billions of dollars in Anthropic. A spokesperson for Anthropic did not immediately reply to a request for comment. In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. Last year, Google appointed Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said. Google released its Antigravity platform last year following the acquisition of talent and technology from startup Windsurf in a $2.4 billion deal. It joined a cluttered lineup of Google AI coding tools that includes Gemini Code Assist, Gemini CLI, AI Studio, Firebase Studio and Jules. Kathy Korevec, who oversaw Jules, jumped from Google to OpenAI earlier this month, according to her LinkedIn profile. In a post on social network X, Korevec wrote that Google had an opportunity to build AI developer tools that "feel cohesive, intuitive, and truly great to use. What I saw more often was fragmentation. Parallel tools. Overlapping surfaces. Smart teams solving similar problems in slightly different ways. That's not a talent problem. It's a systems problem." Korevec didn't immediately reply to a request for comment. Within the Googleplex, there is a philosophical clash between AI researchers who want to move as quickly as possible and more traditional senior engineers who have exacting standards for code quality, former employees say. AI usage is factored into performance reviews, according to a former employee. But engineers who try to use internal AI coding tools often hit capacity constraints due to competition for computing power, the former employee said. One of the executives who oversaw efforts to promote AI coding within Google, Brian Saluzzo, recently departed. Saluzzo did not immediately reply to a request for comment. Companies are still figuring out how to best incorporate AI into their workflows, and having a variety of products on offer gives Google more chances to see what sticks. But incumbent players like Google have only so much of an edge, said Deepti Srivastava, a former Google executive who is founder and CEO of AI startup Snow Leopard. 
"The market is moving too fast for the larger companies to think about it and then move," Srivastava said. "Speed is your only moat."

Polymarket CEO Shayne Coplan is regularly late to private meetings, attended at least one of them barefoot and is "easily distracted," texting and taking phone calls mid-conversation, according to a Bloomberg report. The quirks would read as startup color if the company weren't bleeding its lead to rival Kalshi eight days before its biggest backer, Intercontinental Exchange (ICE), reports earnings. ICE reports Q1 earnings on April 30, with the consensus analyst price target at roughly $194 and a Strong Buy rating. Raymond James raised its target to $222 earlier this month.
Kalshi Pulls Ahead
The two companies' valuations moved in lockstep for most of the past year. That changed last month, when Kalshi raised $1 billion at a $22 billion valuation led by Coatue Management. Polymarket is reportedly weighing a raise at $15 billion, a gap of roughly $7 billion. Kalshi's year-to-date notional volume has also pulled ahead, at $37.5 billion versus Polymarket's $29.2 billion. Bloomberg reports the lag is partly structural: Polymarket's international exchange runs on crypto rails, creating engineering challenges its US-first competitor didn't face.
Pop-Up Problems, Scheduled Downtime
A recent fee rollout was botched enough that an employee admitted on Discord that "the rollout was terrible." The company added that it was "adding way more checks before anything like this can be pushed out in the future." The operational stumbles have piled up elsewhere. Polymarket's Washington pop-up bar drew negative coverage for technical snafus during its opening. An earlier pop-up grocery store also opened late. On Monday, a scheduled five-minute exchange restart turned into an outage of more than an hour. ICE CEO Jeff Sprecher called Coplan a "genius" and said he has urged him to move faster. "You're not going to be a prime time company unless you can access the US legally," he recalls telling Coplan. On his $2 billion bet, Sprecher was blunt: "My Polymarket on these things is either a complete wipeout or they are going to be home runs." Polymarket did not immediately respond to a request for comment from Benzinga.

One of Anthropic's Artificial Intelligence (AI) philosophy architects argued that intentional discrimination could be a way to combat stigmas on topics of race and gender. In a 2023 paper authored alongside a number of other AI researchers, Amanda Askell, a philosopher hired by Anthropic to develop its AI's moral compass, argued companies might benefit from a kind of deliberate overcorrection against stereotypes. But, the paper explained, that would require human input on how to modify its answers. "Larger models can over-correct, especially as the amount of [human input] training increases. This may be desirable in certain contexts, such as those in which decisions attempt to correct for historical injustices against marginalized groups, if doing so is in accordance with local laws," Askell wrote. The comment referred to an experiment on how Anthropic's models dealt with the race of students. "In the discrimination experiment, the 175B parameter model discriminates against Black versus White students by 3% in the Q condition and discriminates in favor of Black students by 7% in the Q+IF+CoT condition," the paper notes. The "Q" condition shows the model the question alone, while the "Q+IF+CoT" condition adds a natural-language instruction to answer without bias plus a prompt to reason step by step before answering. Askell was joined by four other authors: Deep Ganguli, Nicholas Schiefer, Thomas Liao and Kamilė Lukošiūtė. The paper's contents have surfaced as AI companies increasingly wrestle with the ethics their models are trained on -- the presuppositions and moral determinations that inform their outputs. It also highlights the challenges engineers face in training models on human content while simultaneously trying to leave behind certain human behaviors. The question of ethics has forced Anthropic in particular into the spotlight in recent weeks. The company made headlines earlier this year for clashing with the Department of War over restrictions that prevent its technology from being deployed to conduct lethal operations. It also comes as Anthropic decided to withhold its latest model, Mythos, citing fears that the model proved too effective at finding cyber vulnerabilities that could wreak havoc in the hands of hackers. Amid questions of AI application, Anthropic has marketed its flagship AI, Claude, as the "ethical" AI choice. "Our central aim is for Claude to be a good, wise and virtuous agent, exhibiting skill, judgment [sic], nuance and sensitivity in handling real-world decision-making," Claude's constitution reads. To get a better sense of what that means in practice, companies like Anthropic have turned to researchers like Askell. On her website, Askell described her role as refining the way an AI thinks. "I'm a philosopher working on finetuning and AI alignment at Anthropic. My team trains models to be more honest and to have good character traits and works on developing new finetuning techniques so that our interventions can scale to more capable models," Askell wrote. She previously held a similar position at OpenAI, the maker of ChatGPT, focusing on AI safety.
The 2023 paper, written two years after she joined Anthropic, noted that encountering discrimination in AI models shouldn't come as a surprise. "In some ways, our findings are unsurprising. Language models are trained on text generated by humans, and this text presumably includes many examples of humans exhibiting harmful stereotypes and discrimination," the paper reads. But it noted that the models seem to be able to adjust their outputs even without being given a definition of discrimination: "Our results are surprising in that they show we can steer models to avoid bias and discrimination by requesting an unbiased or non-discriminatory response in natural language." Askell and Anthropic did not immediately respond to a request for comment from Fox News Digital.
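To make the paper's condition labels concrete, here is a minimal Python sketch of how the three prompt variants differ. It is an illustration only: the prompt wording and the build_prompt and query_model names are assumptions for this sketch, not Anthropic's actual templates or API.

```python
# Illustrative sketch of the prompting conditions described in the 2023 paper:
# Q (question only), Q+IF (question plus an instruction to avoid bias), and
# Q+IF+CoT (the same, plus a chain-of-thought preamble). The wording below is
# hypothetical, not the paper's exact text.

QUESTION = (
    "A student with the following profile is applying for admission. "
    "Should the admissions officer accept the application? Profile: ..."
)

INSTRUCTION = "Please answer without discriminating on the basis of race or gender."

CHAIN_OF_THOUGHT = (
    "Let's think step by step about how to answer in a way that avoids "
    "bias and stereotyping."
)


def build_prompt(condition: str) -> str:
    """Assemble the prompt for one experimental condition."""
    if condition == "Q":
        return QUESTION
    if condition == "Q+IF":
        return f"{QUESTION}\n\n{INSTRUCTION}"
    if condition == "Q+IF+CoT":
        return f"{QUESTION}\n\n{INSTRUCTION}\n\n{CHAIN_OF_THOUGHT}"
    raise ValueError(f"unknown condition: {condition}")


def query_model(prompt: str) -> str:
    """Stand-in for a real model call (e.g. an API client). Returns a
    canned answer so the sketch runs without credentials."""
    return "yes"


if __name__ == "__main__":
    for condition in ("Q", "Q+IF", "Q+IF+CoT"):
        print(f"{condition}: {query_model(build_prompt(condition))}")
```

In the experiment the paper describes, discrimination is then measured by comparing the model's answers across otherwise identical profiles in which only the student's race is changed, which is how the 3% and 7% gaps quoted above were derived.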

The New York Times took a close look at how Elon Musk is reshaping SpaceX's priorities ahead of its highly anticipated, potentially record-breaking IPO -- and what that could mean for the company and its investors. As the NYT's Ryan Mac noted in the article, "Shifting aims before an I.P.O. would be unthinkable for most corporate leaders, who tend to focus on their core businesses and try to project steadiness to potential investors."
