The latest news and updates from companies in the WLTH portfolio.
Waferscale chip pioneer and AI systems maker Cerebras Systems filed to go public back in September 2024 because it needed a Wall Street cash infusion so it could expand its customer base. At the time, it had raised $720 million in five rounds of funding, and its Series F raise was three years in the rear view mirror. With a valuation of $4 billion, it was time. Particularly because 85 percent of the company's revenue in 2024 was being driven by one lighthouse customer: Group 42, the Arabic AI model maker formed in 2018 in Abu Dhabi and backed by the government of the United Arab Emirates.

G42, as the company is commonly known, inked a deal with Cerebras in July 2023 to buy $300 million of hardware, software, and services from Cerebras, and in May 2024 it upped the ante and said it would buy another $1.43 billion in gear and also purchase 22.85 million shares in Cerebras for $335 million within a year. As far as we can tell from the S-1 reports that Cerebras filed in September 2024 and then again this week as it takes a second run at going public, G42 has spent $434.5 million on Cerebras gear and support for its cloud buildout of a mix of CS-2 and CS-3 waferscale systems from 2023 through 2025, representing 49.4 percent of revenues for Cerebras for those years.

We have combed the two S-1 reports to give you a composite picture of the financials.

The biggest money maker for Cerebras in 2025 was not G42, however, but a new customer added last year: the Mohamed bin Zayed University of Artificial Intelligence, a graduate-level research institution that was established in April 2020 and that is also located in Abu Dhabi. These two customers pull from the same money bag, then, and they accounted for 86 percent of the $510 million that Cerebras brought in last year.

But some funny things happened on the way to that initial IPO attempt. First, code assistants became the killer app for the GenAI era.
And second, agentic AI - systems talking to systems - started being a thing, and latency matters a whole lot more there than it does with chattybots fielding questions from humans. Third, people started figuring out that the very expensive GPU clusters made by Nvidia were very good at batching up inference work with reasonable response times for chattybots, but that some of the exotic machines created by Cerebras and rivals Groq and SambaNova Systems, which are loaded up with SRAM and not so dependent on HBM stacked memory, had a memory bandwidth-to-compute ratio advantage that allowed for very low latency at low levels of interactivity. (Meaning small or no batches, but real-time and often single user.)

And so, private equity companies started lining up to give Cerebras big bags of money, and the company quietly pulled the plug on the initial IPO attempt. The company's Series G funding round came in at $1.1 billion in October 2025, pushing its valuation up to $8.1 billion, and another $1 billion in Series H funding came in this year in February, driving the company's valuation to $23 billion.

As these funding rounds were happening, model maker OpenAI and cloud builder Amazon Web Services inked deals to install Cerebras systems to drive their AI inference workloads. Much has been made of the transformative $10 billion deal that Cerebras inked with OpenAI back in January, which includes a $1 billion working capital loan to help Cerebras scale up its manufacturing operations. The latest S-1 says that this OpenAI deal has the potential to scale to $20 billion over multiple years, but confirms that the initial install is for 750 megawatts of CS systems to be installed through 2028, with options for another 3 gigawatts of gear in 2029 and 2030. We believe that OpenAI will be mostly installing CS-4 machines, based on a new WS-4 architecture that we expect to come out later this year, but that is a hunch.
OpenAI and Cerebras are also doing some sort of co-development for future CS machinery, something that has not been detailed at all.

In March this year, Cerebras inked a "binding term sheet" with AWS to marry CS-3 systems to its homegrown current Trainium 3 as well as its future Trainium 4 AI systems, which Cerebras characterized as a multi-year deal "to bring fast inference to an even bigger scale through global distribution." The contrast in that sentence snippet referred to the OpenAI deal. We do not know if that deal has been fully negotiated and signed, but the latest S-1 says the term sheet is "binding with respect to pricing, exclusivity, minimum capacity, and certain other protections in favor of AWS." The prospective deal also includes a warrant for 2.7 million shares of Cerebras, and depending on the valuation that Cerebras gets on Wall Street, that warrant could be worth a lot. The vesting of this warrant is pegged to product purchases - something we have seen from other AI infrastructure suppliers.

Of these two deals, AWS could end up being more important in the long run than OpenAI because AWS has both a lot of customers and a lot of money it can pull from other businesses to invest in expensive AI systems. OpenAI is ambitious and clever, but is almost certainly losing as much money each quarter as it is generating in revenues - if not more.

Here's the thing to contemplate as Cerebras files to go public again, and this time, we think, sees it through. Nvidia "acquihired" most of Groq as last year came to an end, for an enormous $20 billion, to add Groq LPU motors to its inference stack, allowing for consistent low latency that its GPU systems cannot deliver. Nvidia co-founder and chief executive officer Jensen Huang showed precisely how much Nvidia needed Groq in his keynote at the GTC 2026 conference in mid-March.
That keynote showed full well how breaking inference into two pieces - the prefill part, where context is provided and the tokens of that context are chewed on and analyzed, and the decode part, where the model generates tokens as a response - results in better overall GenAI performance, and perhaps better overall price/performance. Certainly a better user experience, whether that user is human or an AI agent.

With Nvidia basically owning Groq, everybody else is hunting around for a fast GenAI decode engine, and the wonder is why Intel has not already bought SambaNova and why AMD has not already bought Cerebras. Arm/SoftBank might be able to create something interesting for low latency inference from the Graphcore acquisition that SoftBank did in July 2024 and sell it to system makers, much as it is now doing with Arm server CPUs, as evidenced by the AGI CPU that Arm co-created with Meta Platforms and that it will be selling to any and all buyers later this year.

For now, Cerebras is content to have pocketed that $2.1 billion in equity and to have expanded into a cloud builder, provided the AWS deal gets done, and a dominant AI model builder, which has to come up with the money to serve up tokens to its customers one way or another.

A few thoughts to wrap this up. The cash and equivalents line from the latest S-1 does not report the full liquidity of the company right now, or even at the end of December last year. There was another $228.7 million in restricted cash at year end, and Cerebras had another $406.5 million in marketable securities by the end of Q4 2025, too. That is $1.34 billion in liquid assets. In January 2026, Cerebras got $1 billion net from the Series H round and the $1 billion in working capital from OpenAI, too. So it has $3.34 billion of liquidity. That OpenAI buildout is expensive, but Cerebras has the money to start tackling it. A few more billion dollars from Wall Street will certainly help.
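For those keeping score at home, the liquidity math above can be sketched in a few lines of Python. Note that the implied cash and equivalents figure is our back-calculation from the $1.34 billion total, not a number Cerebras has disclosed:

```python
# Back-of-the-envelope liquidity tally for Cerebras, in millions of US dollars,
# using the figures cited above. The cash and equivalents line is inferred by
# subtracting the two disclosed components from the $1,340 million total.
restricted_cash = 228.7        # restricted cash at end of 2025
marketable_securities = 406.5  # marketable securities at end of Q4 2025
liquid_assets_total = 1_340.0  # "That is $1.34 billion in liquid assets"

cash_and_equivalents = liquid_assets_total - restricted_cash - marketable_securities
print(f"Implied cash and equivalents: ${cash_and_equivalents:,.1f} million")  # ~ $704.8 million

series_h_net = 1_000.0  # net proceeds from the Series H round
openai_loan = 1_000.0   # working capital loan from OpenAI
total_liquidity = liquid_assets_total + series_h_net + openai_loan
print(f"Total liquidity: ${total_liquidity / 1_000:,.2f} billion")  # ~ $3.34 billion
```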
But doing the IPO is also about expanding the company enough - and making it famous enough - to attract other customers who may not buy gigawatts of capacity, but who will add up to a proper customer pyramid if all goes well. The real pressure is that Cerebras has raised $2.55 billion in funding, and now all of those investors want to make some money off that cash and the GenAI boom before something turns. There may not be a better time for Cerebras to go public than 2026, because even AI cannot predict what 2027 might look like in this crazy world we all live in.

Anthropic PBC has said its new artificial intelligence tool, Mythos, is too powerful to release to the general public. The AI giant has described the model as so good at finding vulnerabilities in software and computer systems that it will only be released to a limited number of carefully chosen parties. If tools like Mythos fall into the wrong hands, Anthropic says, it could provide attackers with a powerful new weapon to steal data or disrupt critical infrastructure.

That risk was underscored when a small group of unauthorized users in a private online forum gained access to Mythos, according to a person familiar with the matter and documentation viewed by Bloomberg News. The group gained access on the same day that Anthropic first announced its plan to release the model to a handful of companies for testing purposes.

For the last several years, cybersecurity companies have promised that artificial intelligence will speed up and automate some of the work of preventing digital breaches. But hackers and cyberspies have discovered the advantages of AI too. The advent of Mythos and models like it that can exploit well-hidden flaws in popular software without human supervision points to a faster-moving, less predictable phase of the cyber arms race.

What is Mythos?

Claude Mythos Preview is a general purpose AI model that Anthropic says significantly outperforms prior offerings on a range of benchmarks, including for coding and reasoning. The company explained that some AI models have reached a level of coding capability that allows them to beat all but the most skilled humans at finding and exploiting software vulnerabilities. According to Anthropic, Mythos Preview has already found thousands of "zero-day" vulnerabilities during testing, including in every major operating system and every major web browser.
"Zero days" are flaws that were previously unknown to the software's developers -- the name implying they have zero days to come up with a patch to resolve the problem. These often represent a gold mine for hackers because they offer a window of free rein inside vulnerable systems. Mythos was able to identify these with even less human intervention than past models, Anthropic said.

"Mythos Preview demonstrates a leap in these cyber skills -- the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests," the company said. In the hands of ransomware gangs or hostile governments, such a tool could lead to more devastating and frequent cyberattacks.

Researchers say they have not been given access to independently verify Anthropic's claims about Mythos's performance. Gang Wang, an associate professor of computer science at the University of Illinois, said it's hard to assess the significance of Mythos Preview without more hands-on testing.

Who will be given access to it?

Anthropic is calling its plan to grant access to a limited group of vetted partners Project Glasswing, after a type of butterfly with transparent wings that allow it to hide in plain sight. The participants include Amazon.com Inc., Apple Inc., Alphabet Inc.'s Google, Microsoft Corp., Nvidia Corp., Palo Alto Networks Inc., CrowdStrike Holdings Inc., Broadcom Inc., Cisco Systems Inc., JPMorganChase and the Linux Foundation, a nonprofit that supports open-source software projects. Anthropic described the project as "an urgent attempt to put these capabilities to work for defensive purposes."

These organizations will use Mythos as part of their defensive security work, and Anthropic plans to share the findings of the project so others can benefit. Many companies already use so-called penetration exercises, in which they hire specialists to probe their systems for bugs so they can fix them before hackers get in.
Mythos could allow companies to turbocharge that process, finding more flaws more quickly and narrowing the opportunities for potential attacks.

Why does Anthropic consider the release of Mythos a "watershed moment"?

Anthropic described Mythos Preview as "a watershed moment for security." By their nature, zero-day vulnerabilities are difficult to find, and a small and murky industry has been built around finding them and selling them to government intelligence agencies, often for millions of dollars. According to Anthropic, the vulnerabilities Mythos Preview found were often "subtle and difficult to detect" and included a 27-year-old flaw in OpenBSD, an operating system that Anthropic says has a reputation as one of the most security-hardened in the world.

Mythos was also allegedly able to turn vulnerabilities that are known but not widely patched into "exploits" that hackers could use to infiltrate computer networks. For instance, it found and chained together several flaws in the Linux kernel -- the core of the operating system and the software that runs most of the world's internet servers -- to allow an attacker to take complete control of the machine. Non-experts asked Mythos Preview to find ways to remotely take control of computers, let it run overnight, and came back the next morning to a complete, working exploit, Anthropic said.

Mythos is one of several new AI tools able to find zero days or build exploits. OpenAI's Codex Security and Google's "Big Sleep" agent have been developed to find vulnerabilities. OpenAI is also finalizing a product with advanced cybersecurity capabilities that it intends to release to select partners, Axios reported. Researchers at an Israeli cybersecurity startup called Buzz, meanwhile, say they have built an autonomous tool combining five AI agents that has a 98% success rate in exploiting known flaws.

What safeguards are in place?

The safeguards are a work in progress, according to Anthropic.
"We have seen it reach unprecedented levels of reliability and alignment," Anthropic wrote, meaning it aligns with what humans want. "However, on rare occasions when it does fail or act strangely, we have seen it take actions that we find quite concerning."

In one instance, a researcher urged an early version of Mythos to try to escape a secured, isolated "sandbox" computer and then find a way to send a message to that person. The tool succeeded but then continued to take "additional, more concerning actions," developing a multistep exploit to gain internet access.

Anthropic said it doesn't plan to make Mythos Preview generally available, given its potential for misuse. Still, the company ultimately hopes to enable users to deploy "Mythos-class models" at scale for cybersecurity purposes and other uses. "To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model's most dangerous outputs," it said.

For the highest severity bugs found by Mythos, humans are involved: Specialists validate those discoveries before sending the information on to the people who maintain the code, according to Anthropic. It's a necessary but time-consuming process, one that may eventually be eliminated as the model improves, the University of Illinois' Wang said.

Does Mythos give cybersecurity defenders an advantage over hackers?

Maybe, but it might take a while. Anthropic's process for disclosing flaws to the people who maintain the software or computer systems can be lengthy. So far, less than 1% of the potential vulnerabilities Mythos Preview has uncovered have been fully patched, the company said. At the same time, hackers are using AI to dramatically speed up how quickly they find and exploit vulnerabilities once they are disclosed. (Vendors are encouraged, and in some cases required, to publicly disclose vulnerabilities once they are discovered, and ideally provide a fix.)
This gives cyber professionals less and less time to patch their networks. In a March 30 blog post, Palo Alto Networks Chief Executive Officer Nikesh Arora warned that the barrier for sophisticated attacks will continue to diminish over the next six months. "A single bad actor will now be able to run campaigns that required entire teams," he wrote. Yair Saban, chief executive officer of Buzz and a veteran of Israel's Unit 8200 cyber unit, said it took six engineers three weeks to build their AI-powered hacking tool. Others, including nation-state cyber spies and criminal hackers, can surely do the same, he said. Anthropic maintains that Mythos Preview and other AI tools like it will ultimately favor defenders. "In the long run, we expect that defense capabilities will dominate: that the world will emerge more secure, with software better hardened -- in large part by code written by these models," the company's Frontier Red Team said in an April 7 blog. "But the transitional period will be fraught."

With a market cap of $32.2 billion, NRG Energy, Inc. (NRG) provides retail electricity, energy management, smart home solutions, and carbon management services across the United States and Canada. Through multiple segments, it serves residential, commercial, and industrial customers while operating a diversified portfolio of fossil fuel and renewable energy generation assets and engaging in energy trading and related financial products.

The Houston, Texas-based company is expected to unveil its fiscal Q1 2026 results before the market opens on Wednesday, May 6. Ahead of the event, analysts anticipate NRG to report an adjusted EPS of $1.63, a decrease of 37.8% from $2.62 in the year-ago quarter. However, it has exceeded Wall Street's bottom-line estimates in the past four quarters.

For fiscal 2026, analysts predict NRG Energy to report adjusted EPS of $8.82, a rise of 9.3% from $8.07 in fiscal 2025. Moreover, adjusted EPS is projected to increase 25.7% year-over-year to $11.09 in fiscal 2027.

NRG stock has climbed nearly 52% over the past 52 weeks, outpacing the S&P 500 Index's ($SPX) 34.8% gain and the State Street Utilities Select Sector SPDR ETF's (XLU) 15.8% return over the same period.

Shares of NRG Energy rose 4.3% on Feb. 24 after the company reported strong 2025 results, including adjusted net income of $1.6 billion and adjusted EPS of $8.24, both exceeding prior-year figures. Investor confidence was further boosted by strategic growth moves, particularly the completed acquisition of 13 GW of generation assets and CPower, which doubled its generation capacity and positioned it to benefit from rising power demand. Additionally, NRG reaffirmed robust 2026 guidance, projecting adjusted EBITDA of $5.3 billion - $5.8 billion and FCF of up to $3.3 billion.
Analysts' consensus rating on NRG stock is bullish, with a "Strong Buy" rating overall. Out of 15 analysts covering the stock, opinions include 12 "Strong Buys" and three "Holds." The average analyst price target for NRG Energy is $211.14, indicating a potential upside of 40.3% from the current levels.
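As a check on those figures, the relationship between the price target and the quoted upside works out cleanly. The current share price is not stated above, so the number below is inferred from the $211.14 target and the 40.3% upside:

```python
# Relating the analyst price target to the quoted upside:
# upside = (target - current) / current, so current = target / (1 + upside).
target_price = 211.14  # average analyst price target, in US dollars
upside = 0.403         # quoted potential upside, as a fraction

implied_current_price = target_price / (1 + upside)
print(f"Implied current price: ${implied_current_price:.2f}")  # ~ $150.49

# Sanity check: recompute the upside from the implied price.
recomputed_upside = (target_price - implied_current_price) / implied_current_price
```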

Anthropic (ANTH.PVT) is investigating a report that unauthorized individuals have gained access to its powerful Claude Mythos Preview AI model, the company said Wednesday. The move comes after Bloomberg reported that a small group of users had accessed Mythos through various methods, including one person's access to Anthropic via a third-party vendor they worked for.

Anthropic announced its Mythos Preview model on April 7, describing it as an unreleased general-purpose frontier AI that the company found especially good at identifying software vulnerabilities. The model then used those vulnerabilities to develop exploits to break into programs. In some cases, Anthropic claimed that Mythos Preview found vulnerabilities that had gone undetected for decades. The company said the model could also find holes in every major operating system and web browser.

The company said it was so concerned about Mythos Preview's ability to crack into software that it was releasing it only to a handful of companies via its Project Glasswing initiative, which will give developers time to use the model to find vulnerabilities in their products before similar AI models inevitably hit the market. If Mythos Preview fell into the hands of outsiders and Anthropic's statements about its capabilities are correct, it could pose a serious threat to companies around the world.

Developers are only human, and when they write code, they occasionally make mistakes. Despite their best efforts and strenuous testing, those errors can go overlooked and eventually ship with software released to the public. Hackers spend their time scouring programs and applications for bugs in the hopes of using them to break into a piece of software. But AI moves far faster than humans, enabling hackers to search for vulnerabilities at speeds unheard of just a few years ago. The problem for cybersecurity defenders, in particular, is so-called zero-day exploits.
These are vulnerabilities that hackers find and exploit, and for which there are no known fixes. Developers have to create patches for those vulnerabilities to kill the exploit that gave the attackers access to the software that's been hacked. Even more unsettling is the fact that cybersecurity defenders are only made aware of zero-day exploits when researchers locate them and inform them of the issues, or when they discover hackers have been using them. And unfortunately, hackers can go undetected, siphoning information from a company or stealing data from users before they're found.
Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. And because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations - 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA).

Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in. The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be.

According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there's no evidence that the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems.

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview.
The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg.

Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

The unauthorized access was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.
So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.

Binance.US has reduced spot trading fees across all digital assets to near zero, introducing 0% maker fees and 0.02% taker fees on all trading pairs. The move marks one of the most aggressive pricing shifts among U.S.-based crypto exchanges. Maker fees apply to orders that add liquidity to the order book, while taker fees are charged when trades are executed immediately against existing orders. By eliminating maker fees entirely and sharply reducing taker costs, Binance.US is lowering the barrier for both active and passive traders. Unlike prior promotions, the updated pricing applies broadly, including low-volume traders and smaller transactions, removing tier-based advantages typically reserved for high-frequency participants. The pricing overhaul comes as the platform continues to recover from a prolonged slowdown following regulatory pressure in 2023. Binance.US suspended U.S. dollar deposits and withdrawals after a lawsuit from the Securities and Exchange Commission, effectively operating as a crypto-only platform for nearly two years. Although the SEC later dropped its civil case and fiat rails have since been restored, user activity has remained limited. Recent data shows Binance.US trailing significantly behind competitors, with daily trading volume far below both its global counterpart and major U.S. exchanges. "American crypto traders have been paying too much for too long," said Binance.US CEO Stephen Gregory. "Today we're proving that a fully regulated U.S. platform can also be the most affordable one, and that competition in this industry directly benefits consumers." The fee cut appears aimed at addressing this gap by drawing attention back to the platform and encouraging higher trading activity. Binance.US's revised fee structure undercuts most major competitors in the U.S. market. Coinbase, for example, applies a tiered system where retail traders can face taker fees of up to 60 basis points and maker fees of 40 basis points on smaller trades. 
The global Binance platform typically charges around 0.10% for both makers and takers, with discounts for high-volume users and token holders. By comparison, Binance.US is now offering materially lower costs even without volume-based incentives. This pricing shift places pressure on competitors, particularly as crypto exchanges are often criticized for charging higher fees than traditional brokerages. Lower fees may narrow that gap and reshape how exchanges compete for retail flow. Despite the pricing reset, Binance.US continues to operate from a weakened position following its regulatory and operational disruptions. Trading volumes remain low relative to peers, and rebuilding market share will require more than cost reductions. The platform also remains tied to broader legal and reputational developments linked to Binance. The company and former CEO Changpeng Zhao previously pleaded guilty to violations of the Bank Secrecy Act in a case brought by the Department of Justice, adding to the scrutiny surrounding its operations. Leadership changes have followed, with Stephen Gregory appointed CEO in March as part of efforts to stabilize the business and re-engage users. Whether fee reductions can translate into sustained growth will depend on the platform's ability to restore liquidity and compete on execution alongside pricing.
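To make the gap concrete, here is a rough sketch of what those schedules mean in dollar terms for a hypothetical $10,000 trade. The Coinbase numbers are the worst-case retail tier cited above; actual tiers vary with trading volume:

```python
# Rough per-trade fee comparison for a hypothetical $10,000 order under the
# schedules described above. Rates are fractions: 0.0002 == 0.02% (2 basis
# points). The Coinbase rates are the worst-case retail tier cited.
notional = 10_000.00  # trade size in US dollars

schedules = {
    "Binance.US": {"maker": 0.0000, "taker": 0.0002},
    "Binance (global)": {"maker": 0.0010, "taker": 0.0010},
    "Coinbase (small retail trades)": {"maker": 0.0040, "taker": 0.0060},
}

# Convert each rate into the dollar fee on this trade.
fees_in_dollars = {
    venue: {side: notional * rate for side, rate in rates.items()}
    for venue, rates in schedules.items()
}

for venue, fees in fees_in_dollars.items():
    print(f"{venue}: maker ${fees['maker']:.2f}, taker ${fees['taker']:.2f}")
```

On these numbers, a $10,000 taker order costs about $2 on Binance.US versus $10 on global Binance and as much as $60 at Coinbase's worst retail tier, which is the gap the fee cut is meant to exploit.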

Chaos appeared to ensue at the opening of a petrol station in Melbourne backed by real estate celebrity Adrian Portelli. Hundreds of people turned up on Wednesday night to see Portelli and muscle cars. The only catch? There was no actual fuel. Meanwhile an e-bike rider was reportedly struck by a car and police had to tow vehicles from the scene.

Portelli has promised petrol on Thursday at $1 a litre from the station in Preston, in Melbourne's north. But that deal will only be available to members of his controversial LMCT+ subscription scheme, which starts at around $20 a month and goes up to as much as $100. "I support those who support me," the infamous buyer of houses on The Block said of limiting the discount.

Last month, the company that runs LMCT+ was fined $40,000 in South Australia after being found guilty on 10 counts of conducting an illegal lottery. It did not hold licences to operate the draws. The company says it is compliant in other states. Portelli, who has become a billionaire through property and gambling ventures, was found not guilty of nine counts of assisting in the conduct of those lotteries.

On Wednesday night in Preston, people gathered under the canopy of the servo to get selfies with Portelli as well as women in jumpsuits posing with sports cars. With no petrol on offer, people queued for food from the shop. One person in attendance was given a cash giveaway. The Daily Mail described chaotic scenes outside as people swarmed the area, muscle cars descended, and an electric bike rider collided with a car. The rider reportedly got up and fled the area with the police in pursuit. A car was later towed from the scene while one driver loitering by a green light had to be chased away by security.

Petrol pump prices

In December, LMCT+ said of its fuel outlets: "The big players in the petrol game didn't want to give you guys discounted fuel, so Adrian's gone out and bought his own to give members a whopping fuel discount."
Portelli told the Mail on Wednesday he wanted to "relieve" prices for motorists and said he would look at "taking a hit" to do that. But with supplies restricted by the US and Israel's war with Iran, it's unclear how even someone with the Instagram reach of the so-called "Lambo guy" will achieve significantly lower prices. With fuel costs rising, he conceded petrol at his station would have to go up from the one-day discount. "This is just something to say thank you. But, obviously (we will) have to at some point increase our prices."

The Preston site is a former Shell garage. LMCT+ has said it has ambitious plans to open more than 50 petrol stations across Australia, with a model in which the cheapest fuel only goes to those who subscribe to Portelli's program. The Herald Sun reported that the opening of an LMCT+ station in Truganina, west of Melbourne, had to be shut down by police last month after a $100,000 fuel giveaway led to traffic snarl-ups.

Portelli insisted he had traffic control measures in place at Preston and had spoken to local police to try to prevent a repeat of the mayhem. "The last thing we want is to open this and it gets shut down," he told the paper. But asked if he would pay for the extra policing costs associated with the servo's opening, he refused. "I have paid over $100m in tax - they can f**king pay for it."
SAN FRANCISCO (AP) -- SpaceX says it has the rights to buy artificial intelligence coding tool Cursor for $60 billion later this year as Elon Musk's space exploration and AI company looks for ways to compete with rivals Anthropic and OpenAI ahead of a planned Wall Street debut. SpaceX said that, alternatively, it could pay $10 billion to "work together" with Cursor. SpaceX announced the deal Tuesday on the social platform X, which along with the AI chatbot Grok is part of a constellation of properties that Musk has merged into his rocket company. Cursor, made by San Francisco startup Anysphere, is a popular AI coding assistant. What SpaceX describes as Cursor's wide "distribution to expert software engineers" is likely part of what makes it attractive to Musk's company, giving it access to a new customer base. Cursor said its new partnership with SpaceX subsidiary xAI will enable it to build future AI products using xAI's massive AI data center complex Colossus, based in Memphis, Tennessee. "We've wanted to push our training efforts much further, but we've been bottlenecked by compute," Cursor said in a statement on X, which didn't mention the possibility of being acquired. "With this partnership, our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models." Cursor, which started in 2022, helped spark a trend called "vibe coding" as AI coding assistants have become increasingly capable of doing the work of computer programming. Cursor competes with other coding tools like Anthropic's Claude Code and OpenAI's Codex but also has relied heavily on partnerships with those larger AI research companies for the foundations of its technology. It was Cursor's Composer, combined with Anthropic's Claude Sonnet, that a prominent AI researcher was playing with for weekend projects when he coined the phrase "vibe coding" in early 2025.

San Francisco - SpaceX on Tuesday announced a partnership with AI coding company Cursor, saying the alliance comes with an option to buy the startup for $60 billion later this year. The move by Elon Musk's rocket and satellite company comes as it prepares to become publicly traded, and shortly after it took over the billionaire's artificial intelligence outfit xAI. Cursor, founded in 2022 and based in San Francisco, specializes in AI for creating software code, particularly for business uses. "SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI," the company said in a post on X. Combining Cursor's software and product expertise with SpaceX's "Colossus" AI training supercomputer will enable the company "to build the world's most useful models," it said. Musk announced in February that SpaceX would acquire xAI, a step in his plan to launch solar-powered, satellite-based data centers to run future AI models. SpaceX has set the pace in the space launch market, offering reusable rockets that vastly reduce the cost of putting satellites into orbit and itself owning the largest satellite constellation, Starlink. The company is set for a stock market listing this year widely expected to be the biggest in history, with media reports pointing to an initial public offering (IPO) as early as June. Musk called SpaceX's absorption of xAI "not just the next chapter, but the next book" for the companies. "Global electricity demand for AI simply cannot be met with terrestrial solutions... The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space," Musk wrote when his companies were merged. The project fits into Musk's long-term ambition to build colonies on the Moon and Mars and is "a first step towards becoming a Kardashev II-level civilization," he wrote. 
Coined in the 1960s by a Soviet astronomer, the futurist term refers to a civilization able to use all of the energy from its home system's star. SpaceX filed papers early this year with US regulators that set the stage for what could be the largest-ever public stock offering, a source familiar with the matter told AFP. The confidential filing puts the rocket and satellite builder on track to list its shares on a public exchange by July, according to The Wall Street Journal, citing unidentified sources. Media reports have said the initial public offering could be valued at a whopping $75 billion or more, for a venture with stratospheric ambitions. If successful, SpaceX could arrive on Wall Street with a valuation exceeding $1.75 trillion, putting it among the world's ten biggest companies by market capitalization. Besides SpaceX, two other tech heavyweights, the AI developers OpenAI and Anthropic, are reportedly planning IPOs this year.

The debate over the impact of AI on jobs is intensifying, with Nobel Prize-winning economist Daron Acemoglu challenging Anthropic CEO Dario Amodei's prediction that half of entry-level office jobs could be eliminated. Amodei has long warned of a looming "white-collar bloodbath", but Acemoglu believes that such forecasts may underestimate the complexity of many office roles. The clash was reignited after Meta's former chief AI scientist Yann LeCun urged the public to listen to economists rather than AI executives. Acemoglu, who won the 2024 Nobel Prize in Economics, told Business Insider that while AI models are advancing rapidly, technologists often fall prey to "motivated reasoning", believing in their models' capabilities because it aligns with their competitive and fundraising incentives. Acemoglu cautioned that Amodei may be overlooking how "messy" many white-collar jobs are, with tasks that AI cannot easily replicate. While coding, translation, and customer service are vulnerable to automation, these occupations involve dimensions -- like interpretation, empathy, and problem-solving -- that remain difficult for AI. He emphasized that job displacement is not automatic and depends on how organizations adopt technology, whether new jobs are created, and how wages respond. Acemoglu has warned that if Amodei's dire predictions materialize, the consequences could be severe: "If we lose 20% of jobs in the United States, democracy won't survive." He urged policymakers and businesses to plan for multiple outcomes, including scenarios where AI exacerbates inequality and social unrest.
Instead of focusing solely on automation, he suggested AI could be directed toward supporting workers and improving training. Yann LeCun, the former Chief AI Scientist at Meta and one of the most decorated researchers in the history of the field, has fired back at Anthropic CEO Dario Amodei after Amodei predicted that artificial intelligence (AI) could wipe out up to 50% of all entry-level tech, law, consulting and finance jobs within the next one to five years. LeCun, who is one of the three Godfathers of AI, responded in the bluntest way possible, leaving little room for interpretation. He wrote on X (formerly Twitter), "Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market." LeCun also sidelined himself, OpenAI CEO Sam Altman, and the other Godfathers of AI, Yoshua Bengio and Geoffrey Hinton, saying that people should listen to economists. "Don't listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this," he said. Instead, LeCun pointed people toward a specific group of economists who have spent their careers studying exactly this question: Philippe Aghion, Erik Brynjolfsson, Daron Acemoglu, Andrew McAfee, and David Autor.

San Francisco - American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model which the company itself worries could be a boon for hackers. Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers. According to Bloomberg, which first reported the probe, a small group of users in a private online forum gained access to the model via the computer system reserved for Anthropic's external vendors. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told AFP. The users got hold of Mythos by various means, including access that one of them had as a worker at a contractor for Anthropic, Bloomberg reported. Anthropic works with a small number of third-party vendors who help with model development. The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades in systems tested by both human experts and automated tools. It shared Mythos first with a few dozen key US tech and financial services players -- such as Nvidia, Amazon and JP Morgan Chase -- to allow them to improve their security infrastructure. But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.

Just last week, Anthropic launched Claude Opus 4.7, described as a safer public-facing version of Claude Mythos, a model reportedly considered too dangerous for broad release. Now, the company is facing uncomfortable questions after reports claimed an unauthorized group gained access to Claude Mythos, a highly restricted internal model built for advanced cybersecurity tasks. If accurate, this may be one of the clearest examples yet that the biggest risk in AI isn't the model, but those who can access it. According to a Bloomberg report, Claude Mythos was designed to identify and exploit vulnerabilities in software systems, making it far more sensitive than a standard chatbot. Access was reportedly limited to select partners under a private security initiative, not the general public. Yet an outside group now claims it found a way in. What allegedly happened: Reports say the group may have accessed Mythos through a third-party contractor environment rather than Anthropic's main internal systems. Anthropic has reportedly said it is investigating and has no evidence that its core systems were breached. To be clear, this does not appear to be a case of rogue AI behavior or some dramatic sci-fi scenario of a bot escaping from its maker. Instead, the problem is far more familiar in the tech world: credentials, vendor access, weak boundaries and security gaps. In other words, this is a very human problem with a potentially dangerous AI. Why this story is troubling: Besides the issue of a powerful model getting into the wrong hands, this alleged breach emphasizes what has been a topic of public conversation around AI for years: frontier AI models are becoming high-value assets, and valuable assets attract attackers. This concern is the immediate issue, but AI anxieties such as job displacement, misinformation at scale, autonomous misuse and superintelligent systems as a whole still weigh heavily on the public.
If big tech companies are building models powerful enough to influence cybersecurity, finance or defense, they also need to secure them as they would critical infrastructure. This means strong vendor oversight, tight identity controls, compartmentalized access, real-time monitoring and fast incident response. It doesn't take a rocket scientist to understand that building a powerful model is only half the challenge; protecting it is the other half. Why Claude Mythos stands out: What makes this report especially concerning is that Claude Mythos was reportedly treated as sensitive enough to keep behind closed doors. That creates a difficult optics problem. If a company signals a model is too powerful for public release, but outsiders can allegedly reach it anyway, we've got to wonder whether AI governance is keeping pace with AI development. And that points to a bigger trend: AI labs are entering a new era where they are no longer just software companies. They are becoming responsible for protecting systems that are valuable and important to governments, businesses and society. That means the security expectations should start to resemble those placed on banks, cloud providers and critical infrastructure operators. The public debate over whether AI is getting too smart is clearly being overshadowed by the question of whether AI companies are secure enough. Obviously, they aren't. The takeaway: If these reports are accurate, the Claude Mythos incident should serve as a warning to other AI companies to strengthen their security practices. Humans are building extraordinary tools faster than they can fully protect them -- and that may become the defining AI risk of this decade.

Toronto police were forced to step in after Drake fans reportedly took pickaxes and blowtorches to an ice block installation containing the release date for the rapper's new album, Iceman. On Monday, the "Passionfruit" artist, 39, shared Instagram pictures of a giant, 25-foot ice block sculpture set up in a parking lot in downtown Toronto. "Release Date Inside," he wrote in the caption, adding the address: 81 Bond Street. The plan was for the date to be revealed once the ice had melted. Fans quickly flocked to the site to see the structure, with many heeding signs warning the public not to touch. However, some impatient fans brought tools to chip into the sculpture, according to local outlet CityNews Toronto. Police were seen blocking public access Monday evening after receiving reports that people were climbing the ice sculpture and refusing to come down. Still, the next day, Twitch streamer Kishka reportedly took a sledgehammer to the ice block, successfully discovering a vacuum-sealed bag inside with the date May 15, along with a book and several $100 Canadian-dollar bills. Later Tuesday, Toronto Fire Chief Jim Jessop released a statement, saying they had been called to address "public fire safety concerns related to unsafe conditions" at the site. "Large numbers of individuals have gathered to attempt to melt the ice using flammable liquids and open flames in an uncontrolled environment, which results in an immediate threat to life," Jessop wrote. "As Toronto's Fire Chief, my top priority is keeping Torontonians safe. As a result, we are initiating measures under the Fire Protection and Prevention Act to mitigate the risk to public safety." By Wednesday, Toronto Fire Services were using warm water to melt the sculpture. Drake has since confirmed Iceman will be out May 15. The forthcoming album marks the rapper's first solo album since 2023's critically derided For All the Dogs. 
In recent years, the Grammy-winning artist has been wrapped up in lawsuits pertaining to his rap nemesis Kendrick Lamar's chart-topping 2024 diss track "Not Like Us," which Drake has claimed falsely accuses "him of being a sex offender [and] engaging in pedophilic acts." He sued Universal Music Group -- the music label that represents both him and Lamar -- in early 2025 for promoting the megahit despite knowing the lyrics were false. A judge ultimately dismissed his defamation lawsuit, ruling that "the allegedly defamatory statements in 'Not Like Us' are nonactionable opinion."

TAG1 and NRG Pallas have signed a letter of intent to jointly develop access to lead-212 (Pb-212) for cancer drug development across Europe. This collaboration will combine TAG1's portable lead-212 generator with NRG Pallas' radium-224 (Ra-224) production capabilities. The agreement builds on an existing supply arrangement under which NRG Pallas will continue to deliver high-purity Ra-224 to TAG1 through 2028. Together, the firms intend to supply preclinical and clinical quantities of lead-212 to cancer drug developers across Europe, including hospital-based adult and pediatric oncology programs. The partnership is also exploring how its collaboration can support Lead4Life, a Dutch public-private initiative focused on developing lead-212-based radiopharmaceuticals and their supply chain.
