The latest news and updates from companies in the WLTH portfolio.
Chaos finally over at Malaga airport? New English-language video guides hope to slash queues this Easter. PASSENGERS travelling through Malaga airport this Easter are being promised an end to the passport control headaches that have plagued journeys since the EU introduced the EES last October. Authorities have taken note of the constant queues, chaos and confusion and introduced a new English-language campaign to ease the problems. The scheme uses video guides to help speed up document checks for both European Union and third-country citizens such as British holidaymakers. The initiative kicks off this weekend just in time to manage the influx of tourists and expats arriving for the Semana Santa holidays. The campaign provides specific guidance for British expats and other non-EU nationals regarding the use of residence permits, visas and passports. But most importantly for worried expats and tourists travelling to and from the Costa del Sol, it also clarifies the precise documentation requirements for families travelling with minors based on their age. Simple instructions delivered by a police spokesperson will be broadcast on information screens across both the arrivals and departures halls. Helpfully subtitled, the videos aim to direct travellers to the correct security checkpoints to avoid the confusion that often causes severe bottlenecks. The campaign video features a female police officer actively encouraging eligible passengers to use the automated biometric e-gates 'for faster processing'. Meanwhile, clear infographics warn that all European citizens under the age of 18 are prohibited from using the automated scanners.
Instead, minor travellers are instructed to bypass the e-gates and report directly to a physical police desk to show their passports to an officer. Authorities hope the Malaga pilot will be a success and eventually be rolled out to other airports across Spain.

27th March 2026 - (Hong Kong) Evening rush-hour travel descended into gridlock after a man suffered severe electrocution injuries on the MTR East Rail Line between Kowloon Tong and Tai Wai on Thursday, forcing a lengthy suspension and leaving stations and platforms packed with stranded passengers. Emergency teams located the injured man on the track within the tunnel section shortly after 5pm, following reports that the rear cab's emergency exit ramp on a southbound train had been opened in transit. He was removed from the scene at around 6.30pm and taken to Queen Elizabeth Hospital for urgent treatment before later being transferred to Prince of Wales Hospital, where he remains in a critical condition. Police said the 35-year-old, who had entered the track area via the rear cab emergency door, sustained extensive burns consistent with electrocution. The case is being handled as a person falling onto the track. Sha Tin District Crime Squad is investigating the circumstances; no arrests have been made. The incident triggered extensive disruption across the corridor. Services between Mong Kok East and Tai Wai were halted for search and rescue operations involving MTR staff, police and firefighters. Two trains were held within the affected section for around two hours before being cleared to move to Tai Wai and Kowloon Tong respectively, allowing passengers to disembark. Full line services were gradually restored from approximately 6.56pm, though frequencies remained adjusted elsewhere. Severe crowding hit multiple interchanges. Tai Wai station was jammed, long taxi queues formed at Tai Po Market, and Kowloon Tong saw large numbers waiting for onward travel. To ease congestion, MTR deployed additional staff, operated free shuttle buses between Kowloon Tong and Tai Wai, and boosted Tuen Ma Line services.
During the prolonged stoppage in the tunnel, some passengers reported feeling unwell, while others faced acute needs within carriages given the lack of facilities. MTR said the train's safety systems functioned as designed, stopping the train and alerting the driver immediately after the rear cab emergency access was opened. The operator confirmed the affected section's safety before resuming services.

A federal judge on Thursday temporarily blocked the U.S. government's decision to label Anthropic a national security risk. The order gives the AI firm an early win in its legal battle with the Department of Defense, allowing it to continue federal contracts and temporarily avoid being labeled a "supply chain risk." The fight stems from a disagreement between Anthropic and the Pentagon earlier this year over how its AI could be used in battle. While negotiating a $200 million contract, Anthropic wanted to put limits on how the Pentagon could use its AI for surveillance and autonomous weapons. The Department of Defense argued Anthropic couldn't dictate how it used the systems. The disagreement led the Pentagon to declare Anthropic a "supply chain risk," effectively blacklisting it from contracting for federal government work. "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," wrote Judge Rita F. Lin in her ruling, as reported by The New York Times. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." While the case continues, the injunction gives Anthropic breathing room to continue working with federal contractors for now. It also doesn't require the Department of Defense to continue working with Anthropic. The government has seven days to respond before the order goes into effect. "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," an Anthropic spokesperson told Business Insider in a statement. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." 
The ultimate result of the case will have wide-reaching impacts for the rest of the AI industry and its work with the federal government. Microsoft filed an amicus brief in the case in support of Anthropic, as did employees from OpenAI and Google. After dropping Anthropic earlier this year, the Pentagon reached a deal with OpenAI to use its AI models.
The proposal does not include any of Democrats' demands to limit the tactics of federal immigration officers. The U.S. Senate voted early Friday morning on a measure to fund much of the Department of Homeland Security (DHS), but the proposal must get through the House before the partial government shutdown can end. The Senate bill would fund TSA and the rest of DHS except for Immigration and Customs Enforcement (ICE) and Border Patrol, according to the Washington Post. It does not include any of Democrats' demands to limit the tactics of federal immigration officers. If the legislation is approved by the House, President Donald Trump can sign it into law. It could happen as early as Friday, but Republican House leadership did not indicate if they would put the measure up for a vote. House Speaker Mike Johnson told reporters that Republicans are meeting this morning to "decide next steps," per CBS News. The effort to reopen these parts of the government comes on the 42nd day of the partial shutdown. In mid-February, Congress failed to pass a measure funding DHS. The lapse in funding has since forced thousands of federal employees, including TSA agents, to work without pay. Across the country, travelers have seen hours-long wait times at airports as staffing shortages continue to affect security checkpoints. In a statement shared with PEOPLE, the DHS said more than 450 TSA officers have left the workforce and thousands more are calling out "because they can't afford necessities like gas, childcare, food or rent." The efforts come as Trump announced on Thursday he would sign an executive order to pay TSA officers.
On Truth Social, he said he would direct newly sworn-in Homeland Security Secretary Markwayne Mullin to "immediately pay our TSA Agents in order to address this Emergency Situation."
ICE deepens Polymarket investment, signaling strong institutional conviction in prediction markets despite scrutiny. Intercontinental Exchange (ICE) is pushing deeper into prediction markets, sending another $600 million to Polymarket. The investment extends a previously agreed funding plan and follows an initial $1 billion placement in October 2025, with a total target of $2 billion. ICE said the transaction forms part of that staged commitment as Polymarket completes a broader fundraising round. In a Friday announcement, ICE also indicated it may buy up to $40 million in Polymarket securities from existing holders. That option would increase ICE's exposure while the company completes its participation in the platform's capital raise. Financial terms, including valuation, were not disclosed; ICE said those details will be shared once fundraising closes. ICE added that the deal should not materially affect its financial results or capital return plans. The company's move signals growing confidence from traditional market operators in event-based trading platforms such as Polymarket and Kalshi. These venues let users trade on real-world outcomes, including elections and geopolitical developments. Polymarket has become a focal point for the sector, drawing rising trading activity alongside intensified scrutiny over market integrity. Regulators and lawmakers have questioned whether prediction markets face risks of manipulation or insider trading, especially when markets connect to sensitive events. ICE's backing gives Polymarket more than funding, according to industry observers. It also ties the company to a major name in global markets as Polymarket expands its position. Rival Kalshi recently raised more than $1 billion at a $22 billion valuation, roughly double its prior mark, and reported substantial annual revenue, reflecting strong demand for contracts tied to news and public events. Polymarket has also taken steps to address oversight pressures.
Earlier this year, it acquired a licensed exchange and clearinghouse. It later announced a partnership with Palantir and TWG AI to build a surveillance system designed to flag suspicious trading and manipulation in its sports prediction markets. As approval and regulation evolve, prediction markets may end up alongside stocks and futures as another venue for expressing views on upcoming developments. ICE's additional investment suggests that large incumbents plan to stay involved as the sector faces both growth and regulation.

The files, ranging from draft blog posts to images and documents, could be accessed by anyone who knew how to request them. Thousands of internal files tied to Anthropic's website were briefly left exposed due to a configuration mistake in the company's content management system. Cybersecurity researcher Alexandre Pauwels of the University of Cambridge reviewed the data and estimated that nearly 3,000 unpublished assets linked to the company's blog were accessible in the cache. Many of them had never appeared on the company's public news or research pages. The issue wasn't a hack in the traditional sense. The system storing Anthropic's website content could respond to external requests and return files if someone queried it correctly. Because certain assets were set to "public" by default, draft materials sitting in the system's storage layer remained visible even though they were never meant to be published. Once alerted, Anthropic quickly restricted access. A spokesperson told Fortune Magazine that the exposure resulted from "human error in the CMS configuration" and said the issue had no connection to the company's AI models or internal infrastructure. "An issue with one of our external CMS tools led to draft content being accessible," the spokesperson said. Most of the files appear relatively harmless -- unused graphics, banners, and draft pages. But some documents pointed to things the company hadn't yet announced. Among them were references to an unreleased AI model that internal materials described as the most capable system Anthropic has trained so far. The company later confirmed it is testing a new model with select customers, saying the system represents a "step change" in performance across areas like reasoning, coding, and cybersecurity. Other files included details about a private executive retreat in the U.K.
that Anthropic CEO Dario Amodei is expected to attend with leaders from major European companies. Situations like this happen more often than tech companies like to admit. Even firms known for tight security have stumbled over similar mistakes. Apple accidentally revealed upcoming iPhone names through its own website in 2018. Gaming giants such as Epic Games and Nintendo have also exposed unreleased assets through misconfigured servers. Anthropic says none of the files involved customer data, AI models, or internal security architecture. "These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture," the spokesperson said.
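The failure pattern described above, where a CMS delivery layer serves any asset whose storage-level access flag is "public" regardless of editorial status, can be illustrated with a minimal sketch. The field names and helper below are hypothetical, chosen for illustration; they are not Anthropic's actual CMS schema.

```python
from dataclasses import dataclass

# Hypothetical CMS asset record; field names are illustrative only.
@dataclass
class Asset:
    name: str
    published: bool  # has the asset appeared on a public page?
    access: str      # storage-layer ACL: "public" or "private"

def externally_fetchable(assets):
    """Return assets a delivery API would serve to any requester.

    The delivery layer checks only the storage ACL, not editorial
    status -- so drafts created with a default-public ACL leak."""
    return [a.name for a in assets if a.access == "public"]

assets = [
    Asset("launch-banner.png", published=True, access="public"),
    Asset("draft-model-announcement.md", published=False, access="public"),
    Asset("retreat-agenda.pdf", published=False, access="private"),
]

# The unpublished draft is served because its ACL defaulted to public.
leaked = externally_fetchable(assets)

# A safer default couples external visibility to editorial status:
safe = [a.name for a in assets if a.access == "public" and a.published]
```

Here `leaked` includes the unpublished draft, while `safe` contains only the asset that was actually published, which is the behavior a correctly configured delivery layer should exhibit.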
As companies continue to burn through billions of dollars by running massively resource-hungry AI models -- and only passing on a fraction of the costs to consumers and enterprise clients -- the AI race shows no signs of slowing down. On Thursday, a data leak caused by a major security lapse in its public-facing content management system revealed that Anthropic is working on a powerful new model release. The company has since officially acknowledged the new project, dubbed "Claude Mythos," with a spokesperson describing it to Fortune as a "step change" in AI proficiencies and the "most capable we've built to date." The spokesperson said it's a "general purpose model with meaningful advances in reasoning, coding, and cybersecurity." In an enormously ironic twist, a draft blog obtained by Fortune, which was "available in an unsecured and publicly-searchable data store," claimed that the new model "poses unprecedented cybersecurity risks." In other words, let's hope the new model wasn't responsible for the security of Anthropic's company blog. It's a major test for the company, which has received significant media attention as of late for its Claude Code and Claude Cowork tools, the successes of which appear to have rattled Anthropic's competitors, including OpenAI, to their core. The leaks also revealed a "new tier" of AI models, dubbed Capybara. Mythos appears to be part of this new tier, but how Capybara fits in with Anthropic's existing tiers -- Opus, Sonnet, and Haiku, in decreasing size, capability, and cost -- remains to be seen. "Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others," the leaked blog reads, as quoted by Fortune. While it may score higher in cybersecurity tests, it could simultaneously represent a major challenge for existing cybersecurity defenses, the company warned. 
"In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses -- even beyond what we learn in our own testing," the company wrote in the leaked blog post. "In particular, we want to understand the model's potential near-term risks in the realm of cybersecurity -- and share the results to help cyber defenders prepare." The model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders," Anthropic boasted. The risks appear to have been real enough for cybersecurity stocks to plunge on Friday, following the latest news. Anthropic has also previously admitted that hackers used its Claude AI model to automate cybercrimes targeting banks and governments. According to the company's November blog post, a Chinese state-sponsored group exploited the AI's agentic capabilities to infiltrate "roughly thirty global targets and succeeded in a small number of cases" by "pretending to work for legitimate security-testing organizations" to sidestep Anthropic's AI guardrails. Reality check: a frontier AI company claiming its next model is more capable than anything that's come before is pretty standard fare, and it remains to be seen whether Claude Mythos will actually represent a major "step change" in practice, outside of a carefully curated testing environment. Case in point, OpenAI's long-awaited GPT-5 model turned out to be a major letdown when it was released in August, falling well short of the company's lofty promises.

Serial tech founder Brett Adcock has launched a new startup, Hark, which aims to build a family of artificial intelligence devices. The Figure AI founder and CEO is jumping into an emerging segment with a lot of buzz, as OpenAI, Apple, Meta, and Google are all planning to launch hardware specifically for AI usage. While the launch date for these devices has not been disclosed, Adcock told Bloomberg that they would arrive shortly after the startup's large language model (LLM), which is scheduled for this summer. According to Adcock, this LLM will focus on speech and memory, with improved capabilities to anticipate user needs. "We do believe that there's more than one device to rule the world here," said Adcock. "We're working on a family of AI devices both for yourself and for the home." Form factor of the AI device The form factor for the AI device has not been settled. Some see an AI pendant, such as the Humane AI Pin, as one potential option, with users speaking to the chatbot and allowing it to see what's in front of them. Another form factor could be a connected smart hub, able to communicate across several devices in the home and provide answers through text and voice. OpenAI is reportedly exploring a smart speaker with similar connected capabilities. Smart glasses appear to be where Meta, Google, and Apple are focusing, with all three reportedly working on glasses that place AI at the center of the experience. Meta already leads the smart glasses market and has accelerated its timeline with a $3.5 billion investment in eyewear designer EssilorLuxottica. Apple has also reportedly put other wearable plans on hold to prioritize its smart glasses launch, which could arrive this year. Sunk cost to beat the smartphone? In some ways, the smartphone is already a well-equipped device for interacting with AI.
It remains a far better form factor than a pin or smart glasses for text-based AI usage, which is how the vast majority of people communicate with ChatGPT and other chatbots. At the same time, a top-end smartphone has excellent front and rear cameras, significant processing power for high-intensity AI tasks, and it is far more natural for users to speak to their phone than to their glasses or a pin. There is a concern, especially given the level of investment already going into AI infrastructure, that this could become a sunk cost. Meta sold seven million smart glasses in 2025, a notable achievement but still a small fraction of the 1.2 billion smartphones sold that year. That is, before considering potential backlash against smart glasses, pins, and other always-on devices that raise privacy concerns in public or professional settings. Misuse by bad actors is already an issue, which could limit adoption and push users back toward their smartphones. Related reading: Want more on AI hardware? Rumors suggest OpenAI is developing a smart speaker with a built-in camera, expected in 2026.

Joe Walsh is a senior editor for digital politics at CBS News. Joe previously covered breaking news for Forbes and local news in Boston. A judge has blocked the Trump administration from labeling Anthropic a "supply chain risk" and cutting off all federal work with the artificial intelligence firm, an early win for Anthropic in its bitter feud with the government over AI guardrails. U.S. District Judge Rita Lin on Thursday ruled in favor of Anthropic, which sued the federal government earlier this month for taking actions that it called an "unprecedented and unlawful" attempt to punish the company for First Amendment-protected speech. Lin's ruling in the case prevents the government from enforcing its supply chain risk designation against Anthropic, a move that aimed to stop private government contractors from using the company's powerful Claude AI model. It also halts an order by President Trump for every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology." In the ruling, she called the administration's moves "Orwellian" and said they could "cripple" the company. "At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them," she wrote. The dispute revolves around Anthropic's push to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons. The Defense Department has said it needs to maintain the authority to use AI for "all lawful purposes," and that there are already restrictions in place against those particular uses. The judge wrote that her ruling does not stop the Trump administration from taking "lawful actions" that were allowed beforehand, so it is free to choose a different AI provider instead of Anthropic. Lin stayed her order for seven days, giving the government an opportunity to appeal. 
In a statement after the ruling, a spokesperson for Anthropic said, "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." The Justice Department and Pentagon did not immediately respond to requests for comment. In an often-scathing 43-page ruling, Lin wrote that the government's moves against the company "appear designed to punish Anthropic." She said the Pentagon can choose to use whatever AI products it wants, but that the government "went further." "The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," she wrote. "...Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation." She pointed to some officials' heated comments about Anthropic, including a post by Defense Secretary Pete Hegseth that called the company "sanctimonious" and said it "delivered a master class in arrogance." The judge also took issue with the Trump administration's labeling of Anthropic a "supply chain risk," a formal designation that federal law defines as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a national security system. Lin wrote that the government hadn't shown why Anthropic posed that kind of risk and hadn't followed the required legal processes for determining that an entity is a supply chain risk. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Lin said. 
She said Anthropic's due process rights were likely violated because the company didn't have an opportunity to respond to the government's moves against it. She said Mr. Trump's order for federal agencies to stop using Anthropic immediately was essentially a form of "debarment," or a ban on a company contracting with the government -- but usually, firms that face debarment have the ability to oppose that measure. And she called the government's actions "arbitrary and capricious," pointing to cordial contract negotiation emails between Pentagon Chief Technology Officer Emil Michael and Anthropic CEO Dario Amodei even as the military called Anthropic a serious threat. After the administration took action against Anthropic, Lin noted, federal agencies aside from the Pentagon quickly terminated their use of Claude, endangering its lucrative public sector business. And Anthropic has said some government contractors are worried that they could run afoul of the president's order if they use Claude, wrote Lin. "One of the amicus briefs described these measures as 'attempted corporate murder,'" Lin wrote. "They might not be murder, but the evidence shows that they would cripple Anthropic." Lin also formally rejected a social media post by Hegseth that said military contractors must cut off all "commercial activity" with Anthropic -- which she said seemed to illegally require companies to stop using Claude on non-military work. During a hearing in San Francisco earlier this week, Justice Department attorney Eric Hamilton conceded that a supply chain risk designation would only stop government contractors from using Anthropic's technology for military-related work, not their other business. Anthropic argued that Hegseth's post still caused damage to the company. The dispute between Anthropic and the Pentagon highlights a broader debate over how to deal with the potential risks posed by AI. 
Anthropic has long been vocal about the possible dangers of unconstrained AI, and has called for governments to enact safety and transparency rules. Meanwhile, the Trump administration has argued that strict AI regulations could stifle innovation, and has accused some AI models of being ideologically skewed or "woke." The recent feud revolves around a set of mass surveillance and autonomous weapon-related "red lines" set by Anthropic, the only company whose AI model was deployed on the military's classified systems. The showdown comes as the U.S. military uses Claude in its war with Iran. Anthropic has said it isn't looking to second-guess the military's decisions. But it argues that without guardrails to block AI-powered mass surveillance on Americans or weapons that can strike without human input, there's a risk of Claude making fatal mistakes or operating in a way that clashes with democratic values. Amodei told CBS News in a late February interview: "I think we are a good judge of what our models can do reliably and what they cannot do reliably." The Pentagon has balked at Anthropic's push for guardrails. The military says mass surveillance of Americans and fully autonomous weapons are already barred by federal law and internal Pentagon policies, respectively. "But we do have to be prepared for the future," Michael said in a CBS News interview last month. "So we'll never say that we're not going to be able to defend ourselves in writing to a company." As talks between the two sides broke down last month, administration officials publicly lashed out at Anthropic, accusing the company of trying to police the military and impose its own values onto the government. Michael said Amodei has a "God-complex," and Mr. Trump called Anthropic a "radical left, woke company." Last month, Mr. Trump ordered federal agencies to stop using Anthropic, though he gave the military six months to phase out the service, and Hegseth said Anthropic would be labeled a supply chain risk. 
Anthropic quickly sued. Lawyers for the two sides faced off in person during this week's hearing in San Francisco federal court. The Justice Department's lawyer, Hamilton, argued that labeling Anthropic a supply chain risk was warranted because the tense negotiations between the Pentagon and Anthropic had made the military fear that the company could "manipulate" its software or install a "kill switch." He said the designation was based on a "risk of future sabotage." Lin appeared unconvinced, and said the government appeared to be saying that a company can be designated a supply chain risk because it is "stubborn" and "asks annoying questions." Anthropic's lawyer, Michael Mongan, argued that if Anthropic posed such a serious risk, it doesn't make sense that the government appeared open to striking a deal until the very end. "A saboteur is not going to get into a public spat," Mongan said. "They're just going to accept the contractual term proposed by the government and then go and do ... nefarious things."
The Italian competition authority, AGCM, has fined the Danish-founded review platform Trustpilot four million euros for inadequate measures to ensure review authenticity, including reviews marked as verified. AGCM labeled the practice of allowing companies to select which customers receive review invitations as "misleading" and criticized the lack of transparency regarding the platform's operations. Trustpilot's CEO, [...]

Anthropic is testing a new AI model called "Claude Mythos," the company confirmed after Fortune reported that draft materials about the in-development system had been left in an unprotected, publicly accessible data store on the company's website. The company's spokesperson characterized the new system as "a step change" in AI performance, adding that it was "the most capable we've built to date." The company said it was moving carefully with the rollout because of the model's power, and that a small group of customers currently has early access. Fortune reporter Bea Nolan identified the exposed data. Two cybersecurity researchers -- Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge -- separately assessed the documents at Fortune's request. Pauwels counted nearly 3,000 files tied to Anthropic's blog that had not previously appeared on any of the company's public-facing pages. Anthropic attributed the exposure to a configuration error in one of its external content management tools, calling it "human error." After Fortune contacted the company, Anthropic restricted access to the data store. The draft blog post applied both names -- "Claude Mythos" and "Capybara" -- to what the document indicated was a single model. According to the document, the system outperformed Claude Opus 4.6 across several benchmarks -- including cybersecurity, software coding, and academic reasoning -- and would occupy a new tier above Opus in Anthropic's model lineup. The document also flagged the model's high running costs and said a public launch had not yet been scheduled. The documents outlined significant cybersecurity risks associated with the model. The draft blog characterized the system as more advanced in cybersecurity tasks than any competing AI model and warned that it could allow attacks to scale faster than defenders could counter them. 
Anthropic said the early-access rollout would focus on cyber defense organizations, giving them time to reinforce their systems ahead of a broader release. The exposed files also contained details of a planned private summit for European business leaders, to be held at a U.K. country manor. Anthropic CEO Dario Amodei is set to attend. Anthropic confirmed the event in a statement, saying it was one in a series of gatherings the company has held for business leaders over the past year.

Tesla and SpaceX are two of Elon Musk's most popular and notable companies, but a new note from one Wall Street analyst claims the two companies will become one in 2027. In a bold new research note, Wedbush analyst Dan Ives has reaffirmed his long-standing prediction: Tesla and SpaceX will merge in 2027. The move, Ives argues, is no longer a distant possibility but a logical next step, fueled by deepening operational ties, shared AI ambitions, and Elon Musk's vision for dominating the next era of technology. He writes: "Still Expect Tesla and SpaceX to Merge in 2027. We continue to believe that SpaceX and Tesla will eventually merge into one company in 2027 with the groundwork already in place for both operations to become one organization. Tesla already owns a stake in SpaceX after the company's $2 billion investment in xAI got converted to SpaceX shares following SpaceX's acquisition of xAI earlier this year initially tying both of Musk's ventures closer together but still represents <1% of SpaceX's expected valuation. The recent announcement of a joint Terafab facility between SpaceX and Tesla further ties both operations together making it more feasible to merge operations given the now existing overlap being built out across the two with this the first step." The groundwork is already being laid. Earlier this year, SpaceX acquired xAI, converting Tesla's $2 billion investment in the AI startup into a small equity stake, less than 1 percent, in SpaceX. Regulatory filings cleared the transaction in March 2026, formally linking the two Musk-led companies financially for the first time.
Then came the announcement of a joint Terafab facility in Austin, Texas: two advanced chip factories, one dedicated to Tesla's AI needs for vehicles and Optimus robots, the other targeting space-based data centers. Ives calls Terafab the "first step" toward full operational integration. SpaceX's impending IPO, expected as soon as mid-June 2026, will turbocharge these plans. The company aims to raise approximately $75 billion at a roughly $1.75 trillion valuation, far exceeding earlier estimates. Proceeds will fund Starship rocket flights, a NASA-contracted lunar base, expanded Starlink services across maritime, aviation, and direct-to-mobile applications, and, crucially, orbital AI infrastructure. A major driver is the exploding demand for AI compute. U.S. data centers are projected to consume 470 TWh of electricity by 2030, constrained by power grids and land. SpaceX's strategy of launching millions of solar-powered satellites to host data centers in orbit bypasses Earth's energy bottlenecks. Solar energy captured in space avoids atmospheric losses and day-night cycles, offering a scalable solution for AI training and inference. The xAI acquisition ties directly into this vision, positioning the combined entity as a leader in extraterrestrial computing. The merger would create a formidable conglomerate spanning electric vehicles, robotics, satellite communications, human spaceflight, and defense. Ives highlights SpaceX's role in the Trump administration's "Golden Dome" missile defense shield, which would leverage Starlink satellites for tracking. For Tesla, access to SpaceX's launch cadence and orbital assets could accelerate autonomous driving, Robotaxi fleets, and Optimus deployment.
Musk, who has signaled his desire to own roughly 25 percent of Tesla to steer its AI future, views the combination as essential to overcoming fragmented regulatory scrutiny from the FTC and DOJ. Challenges remain. Antitrust hurdles could delay or reshape the deal, and shareholder approvals on both sides would be required. Yet Ives remains bullish, maintaining an Outperform rating on Tesla with a $600 price target, implying substantial upside from current levels. The analyst sees the merger as the "holy grail" for consolidating Musk's disruptive tech empire.

The AI data center firm Crusoe announced today that Microsoft (MSFT) will lease 900 MW of capacity from its campus in Abilene, Texas, which also serves as the flagship site of the Stargate Project. The lease will expand Microsoft's AI infrastructure, enabling support for growing AI workloads and providing access to large-scale, reliable infrastructure. The development is expected to boost Abilene's property tax revenue by 32% and Taylor County's by 25%, contributing long-term value to the local community. OpenAI and Oracle decided not to increase their site footprint from 1.2 GW to 2 GW, leaving capacity available for other hyperscale clients such as Microsoft Azure.

As companies continue to burn through billions of dollars by running massively resource-hungry AI models -- and only passing on a fraction of the costs to consumers and enterprise clients -- the AI race shows no signs of slowing down. On Thursday, a data leak caused by a major security lapse in Anthropic's public-facing content management system revealed that the company is working on a powerful new model release. The company has since officially acknowledged the new project, dubbed "Claude Mythos," with a spokesperson describing it to Fortune as a "step change" in AI proficiencies and the "most capable we've built to date." The spokesperson said it's a "general purpose model with meaningful advances in reasoning, coding, and cybersecurity." In an enormously ironic twist, a draft blog obtained by Fortune, which was "available in an unsecured and publicly-searchable data store," claimed that the new model "poses unprecedented cybersecurity risks." In other words, let's hope the new model wasn't responsible for the security of Anthropic's company blog. It's a major test for the company, which has received significant media attention as of late for its Claude Code and Claude Cowork tools, the successes of which appear to have rattled Anthropic's competitors, including OpenAI, to their core. The leaks also revealed a "new tier" of AI models, dubbed Capybara. Mythos appears to be part of this new tier, but how Capybara fits in with Anthropic's existing tiers -- Opus, Sonnet, and Haiku, in decreasing size, capability, and cost -- remains to be seen. "Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others," the leaked blog reads, as quoted by Fortune.
While it may score higher in cybersecurity tests, it could simultaneously represent a major challenge for existing cybersecurity defenses, the company warned. "In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses -- even beyond what we learn in our own testing," the company wrote in the leaked blog post. "In particular, we want to understand the model's potential near-term risks in the realm of cybersecurity -- and share the results to help cyber defenders prepare." The model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders," Anthropic boasted. The risks appear to have been real enough for cybersecurity stocks to plunge on Friday following the news. Anthropic has also previously admitted that hackers used its Claude AI model to automate cybercrimes targeting banks and governments. According to the company's November blog post, a Chinese state-sponsored group exploited the AI's agentic capabilities to infiltrate "roughly thirty global targets and succeeded in a small number of cases" by "pretending to work for legitimate security-testing organizations" to sidestep Anthropic's AI guardrails. Reality check: a frontier AI company claiming that its in-development model is more capable than anything that's come before is pretty standard fare, and it remains to be seen whether Claude Mythos will actually represent a major "step change" in practice, outside of a carefully curated testing environment. Case in point: OpenAI's long-awaited GPT-5 model turned out to be a major letdown when it was released in August, falling well short of the company's lofty promises.

Senate Passes DHS Funding Bill To End 40-Day Shutdown, Airport Chaos At 2:22 a.m. EST, the Senate unanimously passed a spending bill to fund the Department of Homeland Security after a 40-day shutdown that disrupted airport security and sparked travel chaos for millions of Americans. The bill, which excludes funding for Immigration and Customs Enforcement and Customs and Border Protection, still needs House approval and President Trump's signature. The overnight breakthrough came as airport TSA lines worsened nationwide this week, with TSA agents calling out sick or quitting due to missed paychecks.

Anthropic, founded by ex-OpenAI staff, prioritizes secure, ethical AI, attracting institutional focus. Confidential details about Anthropic's next-generation artificial intelligence model have surfaced online after the company's internal materials were inadvertently made public. Known for its conversational AI bot Claude, Anthropic was revealed to be testing a system described as even more advanced than its current lineup.

Key Capabilities and Potential Consequences of the New Model

Anthropic describes this under-development AI model as the most powerful system it has created to date. Although the tool remains undisclosed to the public and is only accessible to select early partners, both its performance metrics and potential risks are under active review. According to company-distributed documents, the system achieves a significant breakthrough, outpacing previous generations in both power and capability. Leaked files refer to a model dubbed "Claude Mythos," highlighting its possible ability to not just identify but also exploit software vulnerabilities. The suggestion that such powerful technology could autonomously discover and leverage vulnerabilities has triggered debate in cybersecurity circles. While precise information about the model's capabilities is still lacking, the potential security risks have become a focal point among experts evaluating the system's impact. Anthropic's existing product suite includes the AI models Opus, Sonnet, and Haiku, each designed to offer varying balances of scale, abilities, and pricing. According to the leaked documents, a new tier named "Capybara" -- anticipated to be more robust and sizable than even Opus -- is also in the pipeline, signaling the company's ambitions to push the boundaries of AI development.
Immediate Reactions from Technology Companies and Markets

In the wake of the leak, shares of several U.S.-based cybersecurity and software giants saw a steep decline. Notable names like Palo Alto Networks, CrowdStrike, and Fortinet experienced stock value drops ranging from four to six points within a short span. The effects rippled across broader technology indices, bringing a general downturn to the software and cybersecurity sectors. Alongside temporary swings in the tech sector, digital asset markets also reacted sharply. Bitcoin, for instance, saw its valuation dip swiftly following news of the leak, slipping below the peak it had achieved just hours prior. This volatility highlighted the interconnectedness of emerging technologies and the sensitive nature of market sentiment. An initial investigation revealed that the leak stemmed from roughly three thousand company blog posts and associated materials being accidentally uploaded to a publicly accessible data repository. The company confirmed that these documents included unpublished announcements and confidential internal communications, inadvertently exposing corporate strategies and future project details. Founded in 2021 by former OpenAI employees, Anthropic has carved out a niche by emphasizing ethical AI development, secure systems, and scalable artificial intelligence solutions. By closely following innovations in the technology sector, Anthropic has positioned itself as a company of interest to major organizations and regulatory agencies. Its latest advancements continue to draw attention and scrutiny in equal measure.

Intercontinental Exchange, the parent company of the New York Stock Exchange, has completed a $600 million direct cash investment in prediction market platform Polymarket as part of a broader equity fundraising round, according to a company announcement. The new investment follows ICE's previously disclosed $1 billion commitment made in October 2025. With the latest infusion, ICE says it has now fulfilled its obligations under the investment agreement, which also includes plans to purchase up to $40 million in additional Polymarket securities from existing holders. Polymarket, a blockchain-based prediction market platform that allows users to trade on the outcomes of real-world events, has drawn increasing attention from institutional investors amid growing interest in event-driven data markets and decentralized financial infrastructure. Polymarket has support for bitcoin deposits, giving users a direct way to fund their accounts with BTC alongside other existing crypto options. ICE stated that the investment is not expected to materially impact its financial results or capital return plans. Final valuation details of the latest transaction are expected to be disclosed once the fundraising round is fully completed. The move further signals traditional financial market infrastructure firms expanding into alternative data and crypto-adjacent platforms. ICE, which operates major exchanges including the NYSE, continues to diversify into digital markets, data services, and fintech infrastructure. Polymarket has become one of the most prominent prediction market platforms globally, leveraging blockchain rails to facilitate trading on political, economic, and cultural outcomes. The companies emphasized that the announcement does not constitute an offer to sell or solicit securities. Market observers say the scale of ICE's investment underscores rising institutional interest in prediction markets as both a trading venue and a data source.
Polymarket's embrace by TradFi

In the past year, the relationship between the crypto-native prediction market and traditional financial powerhouse Intercontinental Exchange (ICE) has become one of the most closely watched intersections of decentralized markets and institutional capital. Polymarket, launched in 2020 by founder Shayne Coplan, has grown into one of the largest blockchain-based prediction platforms, where users trade shares on the outcomes of future events -- from elections to economic indicators and geopolitical developments -- using cryptocurrency rails. In late 2025, Polymarket re-entered the U.S. market under full Commodity Futures Trading Commission (CFTC) regulation after previously being blocked amid enforcement actions, marking a significant shift from its earlier status as an offshore, lightly regulated venue. In December 2025, Polymarket launched its U.S.-focused app after the CFTC approval, restoring American access to its prediction markets and initially offering sports betting with plans to expand into other categories like propositions and elections.

Users of Claude, the AI chatbot made by Anthropic, will have their service restricted amid a surge in demand for the ChatGPT rival. The new rate limits follow a series of outages for Claude this month, with a record number of people joining the service following a high-profile dispute between Anthropic and the US Department of War. The feud stemmed from Anthropic's refusal to allow the Pentagon to use its artificial intelligence technology for domestic surveillance and autonomous weapons. The AI startup was subsequently blacklisted from all federal agencies, with the Department of War officially informing Anthropic that it would deem it a "supply chain risk to national security" - a label previously reserved for foreign adversaries. Anthropic's stance received widespread public backing, fueled further by efforts from ChatGPT creator OpenAI to make a deal with the US military. Claude subsequently overtook ChatGPT to top the app charts earlier this month, with the company's chief product officer revealing that more than a million people per day were signing up for the AI bot. The latest session limits are designed to prevent Claude from crashing or slowing to a crawl during busy hours by reducing the power of the service for heavy users. "To manage growing demand for Claude we're adjusting our five hour session limits for free/Pro/Max subs during peak hours," Thariq Shihipar, a member of Anthropic's technical team, wrote in a post to X. "I know this was frustrating. We're continuing to invest in scaling efficiently. I'll keep you posted on progress." Mr Shihipar claimed that the restrictions would only impact around 7 per cent of users based on current usage. It is not the first time that Anthropic has introduced rate limits, with the company setting up similar limitations for paying customers last August. "Some of the biggest Claude Code fans are running it continuously in the background 24/7," Anthropic said. "These uses are remarkable and we want to enable them. 
But a few outlying cases are very costly to support. For example, one user consumed tens of thousands in model usage on a $200 plan."

The highly anticipated initial public offering of SpaceX is quickly becoming one of the most closely watched financial events in the markets. With estimates suggesting a valuation of around $1.5 trillion, the company's debut would not only rank among the largest IPOs ever but also mark a pivotal moment for the space sector at large. Founded by Elon Musk, SpaceX has evolved from a disruptive launch provider into a space and communications company. Among its premier products are the Falcon 9, which showcases its reusable rocket technology, and Starlink, the satellite network that has created a fast-growing, global internet business. Together, these innovations have positioned SpaceX less as a traditional aerospace firm and more as a hybrid of infrastructure, telecom, and tech platform.

Impact on the Space Sector

One reason for this interest is that the public debut of a dominant private company is often seen by investors as validation for an entire sector. For many investors, SpaceX's IPO represents a gateway into space, leading both institutional and retail money to seek exposure across similar companies. In addition, SpaceX's business model presents a challenge to many space-focused companies. Its products, including rockets, satellites, and end-user connectivity, span multiple sectors, giving it a significant competitive edge and upside over its peers within the space sector. SpaceX won't be just another participant in the space industry; it is becoming the platform around which the industry is being built. Companies must increasingly decide whether to compete with SpaceX by investing in products similar to SpaceX's offerings, or to align themselves as potential partners within the sector.

Moving Forward

The potential IPO of SpaceX represents more than just a milestone for the company; it will be a critical moment for the space sector at large. While it promises to bring unprecedented attention and investment into the sector, it also presents risks related to competition and investor capital concentration. Benzinga Disclaimer: This article is from an unpaid external contributor. It does not represent Benzinga's reporting and has not been edited for content or accuracy.