The latest news and updates from companies in the WLTH portfolio.
Perth's international airport terminal is in lockdown this afternoon over the discovery of an unattended item, throwing passenger travel plans into chaos. WA Police and Australian Federal Police officers locked down part of Terminal 1, the international wing of Perth Airport, and have established an exclusion zone while the item in the forecourt is investigated. Passengers returning from overseas have faced long, stagnant queues since the middle of the afternoon as they wait to clear customs. One told The West Australian they had been waiting close to two hours since their flight from Bali landed. "The line was long but was moving," they said. "At perhaps 4pm the line stopped moving - zero announcements on PA, zero communication from any staff until perhaps 4.50pm. "Then about 5pm some staff member finally came and told us there was an AFP incident out front and they're not letting us out until it's cleared." A Perth Airport spokesperson confirmed the eastern end of the forecourt has been locked down, but said travellers can exit using the western end. Traffic outside is being directed to the Terminal 2 short-term carpark, as the exclusion zone has made the regular area unusable. "Check in areas at Terminal 1 International are currently closed to passengers and staff, and the Terminal 1 short term car park has also been closed, with traffic redirected to the Terminal 2 short term car park," the spokesperson said in a statement. "Arriving international passengers will be escorted from the terminal via alternative doors. "Perth Airport is working closely with the AFP and WAPOL to maintain the exclusion zone while the investigation continues." More to come.


For years, SpaceX's mission was clear: Get humans to Mars. "The most powerful thing we could do is establish a second, self-sustaining civilization outside of Earth," Elon Musk, SpaceX's chief executive, told Forbes in 2003, a year after founding the company. "And the only place that's really feasible is Mars." As a reminder of that goal, SpaceX has a mural in a cafe at its Hawthorne, Calif., campus featuring the progression of human settlement on the Red Planet. The company also sells "Occupy Mars" T-shirts, which Mr. Musk has regularly worn in public. But over the last six months, Mr. Musk has shifted SpaceX's priorities. Though the tech mogul once forecast that humans would take off for Mars as early as 2024, he has de-emphasized reaching the planet. Instead, SpaceX on Tuesday said it had struck a deal with the artificial intelligence start-up Cursor that could result in its acquiring the young company for $60 billion. And Mr. Musk, 54, has proposed other moonshots that could drive more attention and investment to SpaceX as it prepares for one of the largest-ever initial public offerings. Among his pronouncements are A.I. data centers that could orbit Earth, moon-based factories and an A.I. chip manufacturing plant, all of which will contribute to a utopian future where humans never have to work, he has said. This week, some investors and fund managers are expected to get a closer view of those plans when they visit SpaceX's facilities in Texas and Tennessee before the I.P.O., one person who was invited said. Some investors were also scheduled to visit SpaceX's Hawthorne campus next week, the person said. The changing goals have caused whiplash. "It's a hallucinogenic business plan," said Ross Gerber, the chief executive of Gerber Kawasaki, an investment firm that owns SpaceX shares. He added that Mr. Musk "has lost his mind" as he tries to drum up excitement for the public offering. Shifting aims before an I.P.O. 
would be unthinkable for most corporate leaders, who tend to focus on their core businesses and try to project steadiness to potential investors. Mr. Musk's new goals for SpaceX raise questions about how much shareholders can rely on his word, corporate governance experts said. Yet the billionaire has an uncanny ability to bring investors along for the ride, they said. "In most other corporations where the C.E.O. makes promises that do not prove out, investors tend to react in an adverse way, and they usually do not last long," said Brian Quinn, a law professor at Boston College. But with Mr. Musk, he said, "people believe him or want to believe him." In online posts, Mr. Musk has acknowledged SpaceX's "priority shift." But he has said the new goals do not take away from the Mars plan and are steppingstones to making humans a multiplanetary species. "The capabilities we unlock by making space-based data centers a reality will fund and enable self-growing bases on the moon, an entire civilization on Mars and ultimately expansion to the universe," Mr. Musk wrote in a February letter to SpaceX employees. Mr. Musk has a history of making bold predictions that do not materialize. But while his timelines can be imprecise, his long-term visions have delivered huge opportunities, his supporters said. "Elon is always directionally correct," said Peter Diamandis, a SpaceX investor and the founder of the XPrize Foundation, a nonprofit that supports technological development. "His time frames may be off, but he'll eventually get there." Mr. Musk and a SpaceX spokesman did not respond to requests for comment. Over the years, Mr. Musk has acknowledged his lack of business plans and his reliance on gut instinct. Eight former SpaceX executives and employees, speaking on the condition of anonymity because they feared retribution, told The New York Times that during their times at the company, they had become accustomed to Mr. 
Musk's whipsaw directives and his use of social media to make announcements or product changes. In 2014, Mr. Musk announced on Twitter, now known as X, that SpaceX would hold an event to unveil the second version of its Dragon capsule, a spacecraft meant to ferry passengers and cargo from orbit, two former employees said. The vehicle was not near completion, so his team scrambled to pull together a full design and event, the former employees said. "We want to take a big step in technology and really create something that was a step change in spacecraft technology," Mr. Musk said at the event, where he unveiled a vehicle that could land anywhere on Earth using jet propulsion. (SpaceX later scrapped the idea in favor of parachute-based landing after Mr. Musk determined that Dragon's jet propulsion wasn't practical, three of the people told The Times.) That same year, Mr. Musk became interested in satellite-based internet and began meeting with Greg Wyler, the founder of OneWeb, a satellite start-up, said two people familiar with the discussions, who requested anonymity out of fear of retribution. The relationship never came to fruition, and Mr. Musk set out on his own, opening a SpaceX engineering office in Redmond, Wash., in 2015 to develop internet satellites. The resulting service, Starlink, underwent layoffs as SpaceX invested in research and development. But the bet paid off: Starlink now has 10 million subscribers and generated $8 billion in sales in 2024, according to documents obtained by The Times. Now Mr. Musk appears to be trying to replicate the Starlink playbook, but with data centers in space. SpaceX had not previously focused on A.I., much less on orbital data centers, three of the former SpaceX executives said. But after Google and others began discussing orbital data centers last year, Mr. Musk declared in October that "SpaceX will be doing this." 
In January, SpaceX filed paperwork with the Federal Communications Commission to potentially launch one million satellites for an "orbital data center system." A week later, it announced a merger with xAI, Mr. Musk's A.I. start-up. "In 36 months, but probably closer to 30 months, the most economically compelling place to put A.I. will be in space," Mr. Musk said in a recent podcast appearance. This year, more than 20 engineers and researchers have left xAI, whose products have lagged behind those of OpenAI, Anthropic and Google in use. Mr. Musk appears eager to push SpaceX further into A.I. In the deal with Cursor announced Tuesday, SpaceX said the combination with the young A.I. company, which makes code-writing software, would "allow us to build the world's most useful" A.I. models. Another new goal is the moon. While two of the former SpaceX executives said Mr. Musk had previously dismissed landing on the moon because it was not a new achievement, he said in February that the company had "shifted focus to building a self-growing city on the moon." With the success of NASA's recent Artemis II mission and the agency's commitment to further moon exploration, Mr. Musk may see an immediate financial opportunity, the former SpaceX executives said. SpaceX will "strive to build a Mars city and begin doing so in about 5 to 7 years, but the overriding priority is securing the future of civilization and the moon is faster," Mr. Musk posted on Feb. 8. That month, he also spoke to some SpaceX employees about building lunar A.I. satellite factories and launching those satellites into orbit using a space catapult, according to a recording of the employee meeting obtained by The Times. Mr. Musk mentioned Mars only once. Susan C. Beachy contributed research.

Leveraged positions to remain available, fueling high-octane trading behaviors. Prediction market apps Kalshi and Polymarket have formally announced plans to launch perpetual futures markets for their US-facing customers. Starting as soon as next week, traders will be able to open and close positions in "perps," a commonly used term for contracts that never expire. Speculation about the new product grew rampant after an April 14th "teaser" video released by Kalshi, called "Timeless." Then, on Tuesday, tech news site The Information confirmed that perpetual futures markets on cryptocurrencies such as Bitcoin will launch for Kalshi's US traders on April 27th. A few hours later, Polymarket confirmed that "perps are coming" to its own platform. The promotional video post on X invites followers to sign up for early access, and shows a sample user interface opening positions on crypto, precious metals, and stocks with up to 10x leverage. What are perpetual futures markets? Perpetual futures markets are prediction contracts that do not have a fixed settlement date. Their popularity has increased in large part because traders want to avoid the hassle of manually closing and re-opening positions as traditional contracts reach maturity. With "perps," Kalshi and Polymarket users will soon be able to hold positions indefinitely (if they choose to do so). Perps markets are already available internationally, but next week will mark the first time United States traders will have access to these products via the Kalshi and Polymarket apps. Funding fees for these types of contracts are expected to max out at around 11% APR, with settlement windows ranging from 5 minutes to 8 hours depending on the asset and trading volume. For Kalshi, the launch of perpetual futures markets follows CFTC regulatory approval for its affiliate company, Kinetic Markets, to allow margin trading through its role as a Futures Commission Merchant (FCM).
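The funding-fee mechanics described above can be sketched with a little arithmetic. The snippet below is a minimal illustration assuming simple pro-rata accrual of an annualized rate over each settlement window; the function name and parameters are hypothetical and not part of Kalshi's or Polymarket's actual APIs.

```python
# Hypothetical sketch: pro-rata funding fee for one settlement window,
# using the figures cited above (an APR cap of roughly 11% and windows
# of 5 minutes to 8 hours). Not an actual Kalshi or Polymarket API.

HOURS_PER_YEAR = 24 * 365

def funding_payment(notional: float, apr: float, window_hours: float) -> float:
    """Funding fee accrued on a position over one settlement window."""
    return notional * apr * (window_hours / HOURS_PER_YEAR)

# A $10,000 position at the 11% APR cap:
print(funding_payment(10_000, 0.11, 8))       # 8-hour window: about a dollar
print(funding_payment(10_000, 0.11, 5 / 60))  # 5-minute window: about a cent
```

At the cited cap, each window's fee is small, but it compounds for positions held indefinitely, which is the trade-off perps make for never expiring.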
New products will offer margin trading and leveraged positions Polymarket's social media video shows positions in the upcoming perpetual futures markets leveraged to at least 10 times a user's actual crypto-native stablecoin balance. Upon scrutiny, there appears to be more "room" for the leverage slider to go higher when the 30-second video is frozen at the 0:05 mark. There's also a menu button toggle option to exceed 10x leverage. SOURCE: Polymarket X account - April 21, 2026 (freeze frame at 0:05 timestamp) Margin trading enjoys its own niche among new and seasoned traders alike. It enables positions that would otherwise be "out of reach" beyond the confines of one's actual account balance. The pros and cons of leveraged positions A $1,000 USD or stablecoin position that's leveraged 10 times controls a notional $10,000 on any existing contract. If the corresponding asset increases in value by 1%, the trader's position has improved by $100 (instead of the $10 improvement one would expect when not participating in margin trading). But the leverage multiplier works both ways. If the asset in question drops in value by 1%, the investor loses 10% of the original account balance value. In cases where a 10x leveraged asset drops by 9.5% or more, the entire balance is lost, as prediction market apps typically force contract closures once the trade's equity dips below a minimum maintenance level. As a general rule, the higher the leverage, the greater the equity volatility. The cross-margin systems that Kalshi and Polymarket use for customers who want to use their existing account balances to "buffer" positions can result in cascading liquidations for margin traders if an asset experiences sudden spikes or flash crashes. For now, Polymarket is offering a waiting list that doesn't include a specific launch date for perpetual futures markets. For Kalshi traders in the US, "perps" will launch on Monday, April 27th.
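The worked example above can be sketched directly in code. This is a minimal illustration of the leverage arithmetic; the 5% maintenance fraction is an assumption chosen to reproduce the article's 9.5% wipe-out figure, not a documented platform parameter.

```python
def leveraged_pnl(balance: float, leverage: float, price_move: float) -> float:
    """Gain or loss on a position controlling balance * leverage of notional."""
    return balance * leverage * price_move

def is_liquidated(balance: float, leverage: float, price_drop: float,
                  maintenance: float = 0.05) -> bool:
    """Forced closure once remaining equity falls to the maintenance floor.
    The 5% default is an assumed figure, not a documented parameter."""
    equity = balance + leveraged_pnl(balance, leverage, -price_drop)
    return equity <= balance * maintenance

# $1,000 at 10x: a 1% rise gains $100, ten times the unleveraged $10
print(leveraged_pnl(1_000, 10, 0.01))    # 100.0
# A 9.5% drop leaves only the maintenance floor, triggering liquidation
print(is_liquidated(1_000, 10, 0.095))   # True
print(is_liquidated(1_000, 10, 0.09))    # False
```

The same functions show why high leverage amplifies equity volatility: at 10x, every 1% move in the underlying swings account equity by 10%.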


OpenAI CEO Sam Altman has 'linked' the recent attack on his San Francisco home to remarks from rival Anthropic CEO Dario Amodei. During a recent interview with podcaster Ashlee Vance on the "Core Memory" podcast, Altman suggested that comments from rival AI companies may have played a role in the attack. His comments come after a Molotov cocktail attack on his residence earlier this month, which authorities say was carried out by a suspect who had travelled across states with the intent to harm him. "I think the doomerism talk hasn't helped. I think the way certain other labs talk about us hasn't helped," Altman said, adding, "I think the way Anthropic talks about OpenAI doesn't help." Authorities said Daniel Moreno-Gama travelled from Texas to California and threw a Molotov cocktail at Altman's home before being arrested near OpenAI's headquarters on April 10. He is facing state and federal charges, including attempted murder. The FBI said it found an "anti-AI" document on him listing names of multiple AI CEOs, though those names have not been disclosed. Altman said the incident had a personal impact on him. "I was just, like, you know, there's gonna be more stuff like this, and it's incredibly disheartening. I went through a real depressive cycle about it. But it's very scary," he added. Altman's remarks come amid ongoing tensions between OpenAI and Anthropic, a company founded by former OpenAI employees who had raised concerns about safety approaches. While it is not clear which specific comments Altman was referring to, the two companies have exchanged public and private criticism in recent months. Anthropic CEO Dario Amodei had earlier criticised industry messaging in an internal memo, reportedly calling OpenAI's approach "safety theatre." Amodei wrote, "Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support." Altman, for his part, has also criticised Anthropic's business positioning. Earlier, he said "Anthropic serves an expensive product to rich people." The rivalry has extended to product strategies as well. Commenting on Anthropic's decision not to publicly release its Claude Mythos model, Altman described it as "fear-based marketing." "It is clearly incredible marketing to say, 'We have built a bomb. We were about to drop it on your head. We will sell you a bomb shelter for $100 million to run across all your stuff, but only if we pick you as a customer,'" Altman added. The competition between the two companies has also played out in public forums, including industry events where both leaders declined to present a unified front. Despite the tensions, Altman expressed hope that the broader conversation around AI would shift. "I hope cooler heads prevail," he said.
Firefox developer Mozilla revealed that an early version of Anthropic's Claude Mythos AI identified 271 vulnerabilities in the Firefox browser during internal testing, all of which were patched this week. The findings point to how advanced AI systems are starting to scan large codebases at a scale that once depended on long hours of manual work by cybersecurity researchers. Mozilla said even hardened software targets could now be examined more deeply in a shorter time. "As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus," Mozilla wrote. "For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up." Earlier testing using another Anthropic model had uncovered 22 security-sensitive bugs in a previous Firefox release. Despite that progress, Mozilla noted that eliminating software exploits entirely has long been considered unrealistic. "Until now, the industry has largely fought security to a draw," the company wrote. "Vendors of critical internet-exposed software like Firefox take security extremely seriously and have teams of people who get out of bed every morning thinking about how to keep users safe." Mozilla said the new system can review source code and flag weaknesses in ways that previously required highly specialized human expertise. Internal results showed the model did not uncover bugs beyond the reach of top-tier researchers. "Some commentators predict that future AI models will unearth entirely new forms of vulnerabilities that defy our current comprehension, but we don't think so," the company said. "Software like Firefox is designed in a modular way for humans to be able to reason about its correctness. It is complex, but not arbitrarily complex." 
Launched in March, Claude Mythos is described by Anthropic as its most advanced model for reasoning, coding, and cybersecurity tasks, positioned above its earlier Opus series. Pre-release testing suggested it could identify thousands of unknown vulnerabilities across operating systems and browsers. Access to the system remains limited through a restricted initiative known as Project Glasswing, which allows select firms, including Amazon, Apple, and Microsoft, to scan software for security flaws. Security researchers warn that the same capability could be used offensively. AI tools that can analyze code at scale may also automate the discovery of exploitable bugs across widely used software systems. Testing by the U.K.'s AI Security Institute showed the model could carry out complex cyber operations on its own, including completing a multi-stage corporate network attack simulation without human input. Those results have drawn attention from governments and intelligence agencies. Despite earlier tensions with Donald Trump's administration over the use of Anthropic's technology, the National Security Agency has deployed Claude Mythos Preview on classified networks, according to people familiar with the matter. The move signals growing interest among U.S. agencies in AI tools that can detect critical software vulnerabilities. Anthropic has also acknowledged that current cybersecurity benchmarks are struggling to keep pace with its latest models, raising questions about how to measure AI performance in this field. Mozilla said the results suggest a possible turning point, where defenders may begin to narrow the long-standing gap with attackers. "We are extremely proud of how our team rose to meet this challenge, and others will too," the company wrote. "Our work isn't finished, but we've turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively."

SpaceX filed a complaint with the FCC on April 20th setting out a list of grievances, starting by saying "Amazon simply refuses to take 'yes' for an answer. Amazon consistently rejects any offer to resolve its self-inflicted inability to meet the FCC's deployment requirements, insisting instead on inflicting maximum harm on the millions of Americans that rely on competing systems". This segment refers to Amazon's FCC obligation to launch at least 50 per cent (1,616) of its planned LEO fleet by July. As SpaceX helpfully points out, Amazon remains over 1,300 satellites short of this milestone. Not mentioned in the filing is the additional problem caused by the April 19th failure of a Jeff Bezos Blue Origin rocket to successfully place an AST SpaceMobile satellite into orbit. The US authorities will now expect a comprehensive investigation into the second stage of the New Glenn rocket, and this will likely take months, not weeks. Amazon Leo's launch plans will be severely impacted by this stoppage. SpaceX's letter to the FCC further alleges that Amazon has made a never-ending cascade of extension requests "seeking different deadlines for a different number of satellites". SpaceX cites the long-standing 'Teledesic precedent', used in the cases of Amazon, O3b and Telesat, operators that could not meet their deployment milestones but whose delays were foreseeable. SpaceX summarises its position by saying that it does not oppose Amazon's desire to deploy additional satellites into its first-generation system "but that any satellites it deploys after the milestone date must be deferred to a subsequent processing round".

What if your breath is silently controlling how your entire day unfolds? In this insightful conversation, Acharya Raj Mishra explains how breathing patterns influence your emotions, focus, and energy. Why do most people ignore something so powerful yet so basic? Can changing your breath actually change your mindset and daily experience? This discussion reveals simple yet impactful awareness techniques. Watch till the end to learn how to take control of your day through your breath.#breathing #mindfulness #acharyarajmishra #selfawareness #mindset #spirituality #peace
A supply chain attack originating from a third-party AI assistant has exposed customer credentials at one of the web's most critical infrastructure providers -- and no one saw it coming. On the morning of April 19, 2026, engineers across the internet refreshed their dashboards to find an unsettling message from Vercel, the cloud deployment platform that quietly underpins millions of websites, serverless functions, and frontend applications. The company had been breached. Hackers had found their way inside not through some zero-day exploit or brute-force attack against Vercel's own perimeter, but through something far more mundane and far more dangerous: a single employee's AI productivity tool. In less than 48 hours, a forum post on BreachForums claimed access to Vercel's source code, API keys, GitHub tokens, and NPM tokens: enough, the threat actor boasted, to mount "the largest supply chain attack ever." The asking price: $2 million in Bitcoin. This is the full story of how it happened, why it matters, and what every developer should do right now. What is Vercel, and Why Should You Care? If you have deployed a React app, a Next.js site, or virtually any modern JavaScript frontend in the last few years, there is a very good chance you have used Vercel. The company was founded in 2015, originally as ZEIT, and has since become the dominant platform for frontend deployment, a cloud layer sitting between your code repository and the open internet. Vercel is the official steward of Next.js, the React framework with over 520 million NPM downloads in 2025 alone. It runs serverless functions, edge compute, CI/CD pipelines, and preview deployments for companies ranging from scrappy startups to...

Anthropic is investigating potential "unauthorized access" to its Claude Mythos model, which has been touted for its ability to find cybersecurity flaws, the company told Bloomberg. A group gained access to the model through a third-party contractor portal and by using internet sleuthing tools, according to the report. However, the group is only interested in trying the models, not using them maliciously, according to a person familiar with the matter. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," Anthropic said in a statement. Claude Mythos Preview arrived earlier this month as part of "Project Glasswing" with significant fanfare. Anthropic limited the preview release to a small number of trusted test companies including Amazon, Microsoft, Apple and Cisco. Another was Mozilla, which said the model helped it find and patch 271 Firefox vulnerabilities. A growing number of banks and government agencies have been seeking access as well in order to safeguard their own systems. However, several unauthorized users (who reportedly have a private chat on Discord) supposedly gained access to Mythos through a developer portal and by making an educated guess as to where the model might be located. That same group may also have access to other unreleased Anthropic models, according to the report. The new Mythos model has gained notoriety of late for its supposed ability to sniff out security flaws in operating systems and internet browsers. This has prompted some skepticism among security researchers but also fear that AI-generated cyber attacks could become a "real threat," Alex Zenla, CTO of cloud security firm Edera, recently told Wired. Anthropic was recently designated a "supply chain risk" by the US Department of Defense, but has been in talks with the Trump administration of late to have that label removed.
SYDNEY -- The central banks of Australia and New Zealand said on Wednesday they were monitoring the release of Anthropic's advanced Mythos artificial intelligence model, joining authorities around the world in expressing concerns about the new cybersecurity risks it poses. Designed for defensive cybersecurity tasks, Mythos' vast capabilities have sparked fears about the threat to traditional software security, after Anthropic said a preview had uncovered "thousands" of major vulnerabilities in "every major operating system and web browser." Experts have also warned that the model can identify and exploit previously unknown vulnerabilities faster than companies can fix them. The Reserve Bank of Australia said in a statement it was closely monitoring the development and was "engaging with peer regulators, government and regulated entities." The Reserve Bank of New Zealand said it was also in contact with other regulators both domestically and in Australia over what it called the "developing risk" from Mythos. On Tuesday, Bundesbank President Joachim Nagel called the model a double-edged sword, saying: "it could be used not only to improve digital security systems, but also to leverage their vulnerabilities for malicious purposes." Anthropic has introduced Claude Mythos Preview through a tightly controlled program called Project Glasswing. Access has been granted to major technology companies including Amazon, Microsoft, Nvidia, and Apple. The company has also expanded access to more than 40 additional organizations that build or maintain critical software infrastructure.

The AI developer Anthropic has confirmed it is investigating a report that unauthorised users have gained access to its Mythos model, which it has warned poses risks to cybersecurity. The US startup made the statement after Bloomberg reported on Wednesday that a small group of people had accessed the model, which has not been released to the public because of its ability to enable cyber-attacks. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," said Anthropic. Bloomberg said a "handful" of users in a private online forum gained access to Mythos on the same day Anthropic said it was being released to a small number of companies including Apple and Goldman Sachs for testing purposes. It reported that the unnamed users got to Mythos through access that one of them had as a worker at a third-party contractor for Anthropic and by deploying methods used by cybersecurity researchers. The group has not run cybersecurity prompts on the model and is more interested in "playing around" with the technology than causing trouble, according to Bloomberg, which corroborated the claims via screenshots and a live demonstration of the model. Nonetheless, news of the potential breach will alarm authorities who have raised concerns about Mythos's potential to wreak havoc and will raise questions about how potentially damaging technology can be kept out of the wrong hands. Kanishka Narayan, the UK's AI minister, has said UK businesses "should be worried" about the model's ability to spot flaws in IT systems - which hackers could then act upon. The model has been vetted by the world's leading safety authority for the technology, the UK's AI Security Institute (AISI), which warned last week that Mythos was a "step up" from previous models in terms of the cyber-threat it posed. 
AISI said Mythos could carry out attacks that required multiple actions and discover weaknesses in IT systems without human intervention. It said these tasks would normally take human professionals days to carry out. Mythos was the first AI model to successfully complete a 32-step simulation of a cyber-attack created by AISI, solving the challenge in three out of its 10 attempts.
OpenAI CEO Sam Altman criticized rival company Anthropic's marketing strategy for its newly launched cybersecurity product, Claude Mythos. "You can justify that in a lot of different ways," said Altman. He likened Anthropic's approach to selling a bomb shelter while threatening to drop a bomb: "We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million," he added.
Anthropic AI Raises Cybersecurity Fears
The following week, Anthropic unveiled Claude Opus 4.7 to test new cyber capabilities, saying it is less advanced than Mythos Preview as part of a phased safety rollout.
Last week, Barclays CEO Venkatakrishnan flagged Mythos as a potential catalyst for cyberattacks on global banks. He called it a "serious issue" and warned that Mythos was just the beginning, with more advanced systems likely to emerge rapidly.

Anthropic's new cybersecurity AI model, Mythos, which was supposed to be accessible only to a small group of partner companies, has reportedly been accessed by a group of unauthorized users. That's according to Bloomberg, which reported that the group managed to get early access to the model via a third-party vendor.
Unauthorized users reportedly accessed Mythos early
Per the report, a private online forum is behind the unauthorized access. Speaking with TechCrunch, a spokesperson for Anthropic confirmed that the company is investigating the claim, saying the issue relates to its Claude Mythos Preview: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."
As of now, the company says there is no evidence that its internal systems were directly affected. The unauthorized access itself, however, is what stands out. The group is said to have used connections linked to a contractor working with Anthropic, and from there tried multiple methods before successfully getting into the system.
Discord group used model after launch
Members of the group are reportedly part of a Discord community focused on tracking unreleased AI tools. They allegedly gained access on the same day Mythos was announced and have been using it since. According to the report, the group figured out where the model might be hosted based on patterns from earlier Anthropic deployments, then tested that assumption until it worked.
Interestingly, the intent does not appear malicious, at least for now. The group is said to be more interested in exploring new models than in causing damage.
At the time Mythos was announced, Anthropic said it had been shared with a small group of partners, including companies like Apple, under Project Glasswing. The company wanted to keep access limited, warning of the potential for misuse if the model fell into the wrong hands.
Well, that plan now looks a bit shaky: even if Anthropic's core systems remain untouched, third-party access points are tough to lock down.

In times of uncertainty and rapid change, it can feel as though everything around us is spinning out of control. Deadlines, expectations and the emotions of others can easily pull us into the chaos. Yet calm is not something we find outside ourselves - it is a state we cultivate within. By learning to pause, reflect and reconnect with what truly matters, we can navigate even the most turbulent moments with clarity and balance.
Here are seven simple practices to help you stay calm, centred and focused when everything around you feels uncertain:
Breathe: Slow, deep breathing relaxes the body and quiets the mind. It creates space for clarity and helps you respond, rather than react.
Mental traffic control: Build the habit of taking short mental breaks. Step back, observe your surroundings, and gently slow the stream of thoughts. You might focus on a calming image, listen to soothing music, or simply take in the world around you. This practice strengthens your ability to stay present.
Self-reflect: Set aside time to reflect on your day. Consider what you've learned, what you're grateful for, and where your actions can better align with your values. Reflection helps you process experiences and stay focused on what matters most.
Focus: In times of chaos, it's easy to absorb other people's urgency and lose sight of your own priorities. Stay grounded in your beliefs and values. Be empathetic to others, but not at the expense of your own well-being.
Stay centred: You don't have to fix everything by becoming part of the chaos. True clarity comes from remaining calm and maintaining a balanced perspective. When you are centred, you can see situations more clearly and respond with intention.
Be flexible: Like bamboo in a storm, flexibility allows you to adapt and grow. Staying open to new ideas and perspectives can lead to better outcomes than rigidly holding onto plans, while still honouring your core principles.
Maintain your sense of humour: Don't take everything too seriously. Find lightness wherever you can, and approach challenges with a fresh perspective. View responsibilities not as burdens, but as opportunities to contribute with purpose.
Ultimately, it's about embracing the best of each moment and letting go of what you cannot control.

American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model which the company itself worries could be a boon for hackers.
Anthropic said earlier this month it had restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers.
According to Bloomberg, which first reported the probe, a small group of users in a private online forum gained access to the model via the computer system reserved for Anthropic's external vendors.
"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told AFP.
The users got hold of Mythos by various means, including using access one of them had as a worker at a contractor for Anthropic, Bloomberg reported. Anthropic works with a small number of third-party vendors who help with model development.
The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades in systems tested by both human experts and automated tools. It shared Mythos first with a few dozen key US tech and financial services players -- such as Nvidia, Amazon and JPMorgan Chase -- to allow them to improve their security infrastructure.
But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.

CISA, America's top cyber defense agency, has no access to Anthropic's Mythos
Anthropic is marketing its new cybersecurity-focused AI model Mythos as too dangerous for public release, but hasn't even provided the US Cybersecurity and Infrastructure Security Agency (CISA) with access to it. This seems bizarre.
On Tuesday, reports appeared that Anthropic had opened an investigation after discovering that a small group of Discord users had gained unauthorized access to Mythos. That indeed sounds dangerous, since the AI company has been marketing Mythos as one of the most powerful vulnerability-hunting AI models currently being tested. Experts fear Mythos could also empower attackers.
The model's supposed capabilities might, of course, just be part of the hype - Anthropic needs the big and the rich to buy and use its product, and the idea that machines will do a better job than costly human analysts can certainly sound tempting. Still, the firm deems Mythos too dangerous for public release. It has provided only limited access to more than 40 companies and organizations that are testing and using it to shore up their systems.
The risk is deemed urgent. The US Treasury and the European Central Bank have raised the issue with major banks, and financial analysts in the United Kingdom and Germany have also been examining risks around Mythos.
That's why it's so bizarre that, according to Axios sources, Anthropic hasn't provided CISA - America's top cyber defense agency, tasked with helping secure those very same banks as well as critical infrastructure - with access to Mythos. Essentially, the agency has to sit on the outside looking in.
Axios sources say that Anthropic only briefed CISA and the US Commerce Department on Mythos' capabilities. However, unlike CISA, the Commerce Department's Center for AI Standards and Innovation has been testing Mythos - as has the National Security Agency, despite the Department of War having previously declared Anthropic a "supply chain risk."
CISA has struggled under the second Trump administration, which has spent the last year reducing capacity at the agency; it now has less money, fewer employees, and fewer resources generally. Still, the decision to cut CISA out appears to have been Anthropic's, and the company's reasoning is unclear. What is clear, at least so far, is that security teams at critical infrastructure organizations have often followed CISA's guidance on dealing with cyber threats.
Of course, Anthropic - still arguing with the Pentagon - might simply trust the private sector more, since it's not so dependent on constantly shifting political winds.
The Mozilla Foundation, one of the organizations Anthropic shared an early version of Mythos with, said this week that, thanks to the AI model, the new release of Firefox 150 includes fixes for 271 vulnerabilities identified during initial evaluation. "Defenders finally have a chance to win, decisively," the Mozilla Foundation wrote in a blog post ambitiously titled "The zero-days are numbered."

Elon Musk calls space-based AI a "no-brainer." The prospectus his company just handed to prospective shareholders strikes a considerably quieter note.
Key Takeaways
* SpaceX's S-1 filing, obtained by Reuters, warns that orbital AI compute and lunar or interplanetary projects are early-stage and may never turn a profit.
* The company is aiming for a valuation near $1.75 trillion with a $75 billion raise -- which would be the biggest IPO ever recorded.
* Starship, the reusable heavy-lift rocket tied to almost every growth plan, has hit repeated delays and test failures, and the filing concedes that matters.
The private pitch is louder than the paperwork. In the pre-IPO filing SpaceX has prepared for what may become the largest stock debut in history, the company tells potential investors something Elon Musk has not been saying on stage: the plan to build artificial intelligence data centers in orbit, along with outposts on the moon and Mars, leans on technology that has not been proven and could fail to make money.
The risk factors inside the S-1, details of which Reuters reviewed and had not been previously reported, paint a picture of the rocket builder's future that is far more guarded than Musk's recent public pronouncements. U.S. securities law requires these disclosures. They exist to warn buyers about what could go wrong, and to give the company a legal buffer if any of it does.
"Our initiatives to develop orbital AI compute and in-orbit, lunar, and interplanetary industrialization are in early stages, involve significant technical complexity and unproven technologies, and may not achieve commercial viability." -- SpaceX S-1 filing excerpt
Another passage is blunter still about the environment those data centers would live in. Any orbital AI hardware, the document states, would run "in the harsh and unpredictable environment of space, exposing them to a wide and unique range of space-related risks that could cause them to malfunction or fail."
Musk's Public Script vs. The Paperwork
An S-1 is the document a company files to lay out its finances and its hazards before listing shares. SpaceX is targeting a debut in the coming months at roughly $1.75 trillion, with a $75 billion raise attached. That combination would put it ahead of every previous IPO on record.
At the World Economic Forum in January, Musk argued that placing AI data centers in orbit was "a no-brainer," predicting space would become the cheapest home for AI inside two to three years. A month later, after confirming a merger between SpaceX and his social media and AI firm xAI, he declared that "space-based AI is obviously the only way to scale." The filing chooses different words. SpaceX did not immediately respond to requests for additional comment on the filing.
Everything Rides on Starship
The prospectus also concedes, in language investors will recognize as unusually direct, how much of the company's growth story depends on a single vehicle: Starship, the fully reusable next-generation rocket that has endured several delays and test-flight failures.
"Any failure or delay in the development of Starship at scale or in achieving the required launch cadence, reusability and capabilities thereof would delay or limit our ability to execute our growth strategy." -- SpaceX S-1 filing excerpt
Starship is built to haul payloads far heavier than SpaceX's Falcon 9 workhorse, with the goal of slashing launch prices for Starlink satellites, orbital data centers, and crewed missions to the moon. If the rocket does not deliver on cadence, reusability, and scale, much of the company's stated ambition sits in the slow lane.
That is the tension buyers of SpaceX stock will have to price in: the loudest promises about humanity's future in space come from a founder known for loud promises, while the filing drafted for their protection is careful to say, in writing, that none of it is guaranteed to work.

