News & Updates

The latest news and updates from companies in the WLTH portfolio.

SpaceX Offers AI Coding Startup Cursor $60B to Save Elon Musk's Failing xAI Software

By poaching top engineers and integrating Cursor's elite tech, SpaceX is positioning itself as the leader of the 'vibe coding' era.

Elon Musk's aerospace giant has reportedly made a massive financial play to rescue its struggling software division. Sources suggest SpaceX is prepared to pay a staggering sum to integrate Cursor's advanced technology before a potential public offering. This high-stakes move in California marks a desperate attempt to bridge the gap as Musk's internal AI projects fall behind industry rivals.

SpaceX has locked in an agreement with Cursor that provides a clear path to ownership, valuing the AI newcomer at $60 billion (£44.42 billion) by the end of the year. Under the terms, the rocket company can either choose a full buyout or hand over $10 billion (£7.40 billion) to fund their joint projects. 'SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI,' the firm announced on Tuesday. This social media update arrived moments before a New York Times report claimed SpaceX had settled on a $50 billion (£37.02 billion) price tag to buy Cursor, based on information from two anonymous sources. The news outlet later adjusted its article to include the official statement released by SpaceX.

In a post on X, Cursor chief Michael Truell expressed that he is 'excited to partner with the SpaceX team to scale up Composer,' highlighting the startup's flagship AI model. Truell described the deal as 'a meaningful step on our path to build the best place to code with AI.'

The agreement serves as a direct reaction to the growing strain on Musk's goals for the sector. He had openly admitted that xAI -- the venture he started and later folded into SpaceX this February -- was falling behind competitors when it came to programming power. After that confession, he moved to cut staff numbers at xAI through a wave of lay-offs.
At the same time, he launched a bold recruitment drive to snatch top developers from rival firms, including two of Cursor's own experts, Andrew Milich and Jason Ginsberg. In February, Musk merged the reusable rocket firm with his AI venture, xAI, in a deal he priced at $1.25 trillion (£0.93 trillion). He is now set to list the unified business on the stock market in what is expected to be a historic public offering.

This alliance provides SpaceX with immediate entry to one of the most profitable AI programming tools currently available. Since its debut in 2023, the startup's digital assistant has supported developers in creating, checking, and refining code across large projects. It is now a primary fixture in what experts describe as the 'vibe coding' movement -- a term used for the AI-driven methods that have quickly reshaped the way software is produced.

Oskar Schulz, the president of Cursor, highlighted why the deal makes sense for his firm: 'The SpaceX team has an enormous amount of compute, and we think together we can scale up our model efforts, and we're really excited about it. We really like their team.'

Before SpaceX entered the picture, Cursor was already in the final stages of a deal to raise roughly $2 billion from investors. This funding would have valued the startup at over $50 billion. The investment round was expected to be led by Andreessen Horowitz, with support from Nvidia and Thrive Capital -- a notable connection, as both Andreessen and Nvidia are already major backers of xAI.

xAI, SpaceX
International Business Times UK, 1d ago

Anthropic Just Announced Huge News for Alphabet and Broadcom | The Motley Fool

Alphabet and Broadcom's TPUs are starting to attract more clients.

In the world of generative artificial intelligence (AI), few companies generate as much buzz as Anthropic. For example, its Claude platform is often the leading platform for assisting coders, and its latest model, Mythos, couldn't even be released to the general public due to its potential threat to cybersecurity. It's at the top of the food chain right now, and any company partnering with Anthropic is often seen as a leader. If their equipment is good enough for Anthropic, the thinking goes, it's likely about the best available.

Recently, Anthropic made an announcement regarding its usage of Tensor Processing Units (TPUs), which were created through a joint venture between Broadcom (AVGO) and Alphabet (GOOG, GOOGL). Both of these companies stand to benefit from increased TPU usage, and each looks like a phenomenal investment.

Anthropic announced that starting in 2027, it will be using multiple gigawatts of computing power from next-generation TPUs. Long-term commitments like these help give investors clarity about what to expect from companies in the years ahead. This partnership, alongside several others that Broadcom has announced, will help it continue to deliver impressive revenue growth. By the end of 2027, Broadcom expects its custom AI chip business to generate more than $100 billion annually. Its AI semiconductor division (which includes other products outside of custom AI chips) generated $8.4 billion in revenue last quarter (up 106% year over year). There's a ton of growth ahead, and growing partnerships with AI leaders like Anthropic are a good sign for the future.

Alphabet is recognizing TPU revenue through its Google Cloud division, and this segment is catching fire. Last quarter, its revenue rose 48% year over year.
That's rapid growth for a legacy company, and expanded TPU partnerships with Anthropic and others will lead to continued strong growth for this important segment within Alphabet. Both of these stocks are fantastic investment options in the AI space. Instead of being in the leadership position like Nvidia, they are comfortably in the challenger role and only looking to take market share, which they appear to be doing. Anthropic will continue using Nvidia hardware as well, but Nvidia no longer has this massive AI growth segment to itself. Still, I think a well-balanced approach of owning several of these companies is the best call for most AI investors.
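As a rough sanity check on the Broadcom figures cited above, a few lines of arithmetic show the gap between the current run rate and the 2027 target. This is a back-of-envelope sketch: the roughly two-year horizon is my assumption, and the article's quarterly figure covers the broader AI semiconductor division rather than custom chips alone.

```python
# Back-of-envelope on the figures above: what annual run rate does
# $8.4B in quarterly AI revenue imply, and what yearly growth would it
# take to reach the $100B-a-year target by the end of 2027?
quarterly_ai_revenue_b = 8.4
annual_run_rate_b = quarterly_ai_revenue_b * 4      # ~$33.6B/year

target_b = 100.0
years = 2.0   # assumption: roughly two years to the end-of-2027 target
required_cagr = (target_b / annual_run_rate_b) ** (1 / years) - 1

print(f"run rate ${annual_run_rate_b:.1f}B, required growth {required_cagr:.0%}/yr")
```

On these assumptions the business would need to roughly triple in two years, i.e. grow on the order of 70% per year, which gives a sense of how aggressive the target is.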

Anthropic
The Motley Fool, 1d ago

"Their Chaos is Our Peace": Fighting Zionist Repression in Texas and Beyond

Last month, on March 24, Idris Robinson, a philosophy professor at Texas State University, filed a lawsuit against the university for wrongful termination and for violating his right to free speech. The University had decided not to renew Robinson's tenure-track contract - despite his stellar academic reviews - after Zionists pressured the school. The Zionists were playing the same broken record: Robinson was "antisemitic," and a glorifier of "terrorism," for supporting the Palestinian liberation struggle.

Robinson became a target after giving a talk entitled "Strategic Lessons from the Palestinian resistance" at a public library in Asheville, NC on June 29, 2024 (as part of an anarchist book fair). While Robinson spoke, a confrontation started between Zionist agitators, who came to film the event, and some audience members. It later turned into a scuffle. Robinson wasn't present for the scuffle (he had been escorted from the room, as noted in the lawsuit). He also did not speak at the event as a Texas State employee. Yet one year later, Zionist agitators managed to spin a narrative and get Robinson fired. As we will see, a closer look at the evidence - including newly released security camera footage - shows that they were the real aggressors.

Robinson's story fits into a sadly familiar pattern of people losing their academic jobs over Palestine. Faculty members at several universities have been fired, suspended, or pushed out for supporting Palestinian liberation, including Steven Salaita at the University of Illinois, Urbana-Champaign (2014), Maura Finkelstein at Muhlenberg College (2024), Jodi Dean at Hobart and William Smith Colleges (2024), Jairo Fúnez-Flores at Texas Tech University (2024), Lara Sheehi at George Washington University (2024), Eric Cheyfitz at Cornell University (2025), and Sang Hea Kil at San José State University (2025), among others.
Students, who are often in a more precarious position than faculty, have also been expelled, censured, and in some instances kidnapped and jailed by ICE. All these cases are strikingly similar, with university administrators eagerly punishing those who challenge Zionism (or leaving them to be punished by other authorities).

Taken together, these cases are about much more than fighting censorship. They are also a test of whether our own networks of solidarity can support those facing repression. The systematic nature of the repression should also make us rethink the usual defenses of targeted individuals based on liberal ideals of free speech and "academic freedom." As colonial and white supremacist institutions, universities were never meant to allow inquiry that is aligned with liberation. Anticolonial speech has always had a cost, and some are paying it.

A Scuffle at the Library

Idris Robinson is a philosopher whose work is informed by, and meant to enhance, liberation struggles. Since 2022 he has been teaching at Texas State University (where he is the only Black philosophy professor). Robinson's talk in Asheville focused on lessons to be learned from the achievements of the Palestinian resistance. This discussion took place at the height of a still-ongoing genocide in Gaza, as Palestinian fighters resisted the Israeli army with bravery and tactical brilliance.

As Robinson spoke, audience members identified three Zionist agitators (two of whom are Jewish) who were filming the event. The audience confronted them, and a scuffle broke out. The crowd moved from the library conference room to the hallway and eventually outside. Robinson left the room before things escalated, and none of those present witnessed him participating in any physical confrontation. The Asheville incident has been widely framed as a violent "antisemitic" attack.
The Anti-Defamation League, a Zionist counterinsurgency organization, included the incident in its bogus "antisemitism map" with the description: "Jewish individuals were harassed and assaulted at an anti-Israel event at a library." Multiple media outlets also incorrectly reported that one of the Zionist agitators is a "Holocaust survivor" (one of the Zionists, David Moritz, only claims that his father was a survivor). The Zionist Organization of America declared that the whole anarchist book fair was "antisemitic." Asheville Police pursued charges of "ethnic intimidation" against some of the event attendees. Given the unfavorable political climate and draconian legal system, four attendees ended up pleading guilty to assault - a fact paraded by Fox News and other right-wing media. All this helped create the false narrative that the Zionists were the victims.

Playing the Victim While Being the Aggressor

The Asheville incident is a good example of Zionists playing the victims while being the aggressors. The three Zionist agitators in the library - David Moritz, Monica Buckley, and David Campbell - are well-known in Asheville for their racist provocations. A report about the library event from the Asheville Blade details their "long records of open bigotry and harassment." Moritz, a real estate investor who is currently running for Asheville City Council, has been spreading anti-Arab and anti-Muslim propaganda that would have made the editors of Der Stürmer proud. Buckley, a realtor and yoga instructor, has spread her Zionist message in Asheville City Hall, and her social media is similarly filled with racist content. She openly supports the US reboot of Betar - a Zionist terrorist group historically inspired by Mussolini's fascist militias - which recently has been stalking Palestinian organizers and calling for "blood in Gaza." The third agitator, Campbell, is a Christian Zionist who describes himself as a "MAGA extremist."
On social media, he posts racist, homophobic, and anti-trans content, along with pictures of himself with armed Zionist Americans and Israeli soldiers. These agitators have a history of going to public spaces looking for a fight. Moritz has taunted students in the Gaza solidarity encampments at both UNCA and UCLA (at UCLA he was caught on camera putting his hands on the neck of a university security guard). Buckley is known to get into people's faces and then play the victim; one Asheville resident summarized her behavior at protests: "Monica [Buckley] gets as close as possible with a sign or a flag in your face...If you move the flag, they start screaming assault."

At the Asheville library, Moritz and Buckley were violent and even bragged about it. Security camera footage from that day shows Moritz kicking a person, unprovoked. His kick sent the person flying, falling to the ground on their back. Moritz was so proud of himself that he uploaded a video of his kick, on repeat, to social media (with the title "Jews Fight Back," which is the slogan of Betar). Moritz was also captured on camera kicking a second person, outside the library, as event attendees were attempting to responsibly escort the agitator out to prevent further escalation. Again unprovoked, Moritz kicks - yet even after this second kick, no one in the footage hits Moritz back.

Like Moritz, Buckley has also boasted on social media about her violence at the library event. In one Instagram video (which she has since deleted) Buckley claims that someone took her phone and says, "So I jumped her, to get my phone back." Buckley alleges that she was then surrounded by violent audience members and adds, "I held on and I fought hard and I didn't stop fighting the whole time, and those little fuckers can fuck off." All this information has been either ignored by mainstream media or somehow twisted to fit the agitators' narrative.
Knowing that corporate media will cherry-pick in their favor, Moritz and Buckley also posted videos where they present themselves as "peaceful," innocent attendees who were victimized by an "antisemitic mob." In one joint video posted on Buckley's Instagram page, Moritz even declares that anti-Zionism "is worse than antisemitism, it is genocidal antisemitism."

Texas State University Sides with the Zionists

For an entire year, the summer 2024 Asheville incident was of no concern to Texas State University (TXST). Idris Robinson was teaching at the University as usual, in good standing with colleagues. In an internal review, the chair of the philosophy department commented that "Idris is making excellent progress on the tenure-track." But on June 5, 2025, David Moritz wrote an Instagram post smearing Robinson in connection to Asheville - saying he "glorifies terrorism" and incites violence - and identifying him as a TXST professor. The post listed the contact information for TXST President Kelly Damphousse and encouraged people to act. The following day, Robinson was informed that he had been placed on "administrative leave." The University did not give specific reasons, only mentioning "multiple complaints and allegations" concerning the summer 2024 event.

It is worth dwelling on this sequence. A random real estate schmuck in North Carolina posts something on the internet, and the next day a brilliant philosopher in Texas is suspended from his job. What exactly happened behind the scenes, of course, remains unclear. (TXST has so far not complied with a public information request for material regarding Robinson's termination, explaining that given the lawsuit, "the University will seek to withhold any [relevant] information.") But what's clear is that unlike in some other cases of Palestine-related firings, there was no public campaign against Robinson by major Zionist organizations. All it took, apparently, was a social media post.
This level of precarity should not be surprising given how the University operates. Tom Alter, a tenured history professor, was fired by TXST in September 2025 after a fascist influencer smeared him on social media for things said in an off-campus talk. TXST was forced to reinstate Alter after a judge's injunction, but did not allow him to teach, and soon fired him again. In the same month, TXST effectively expelled Devion Canty Jr., a Black undergraduate student who mocked the death of white supremacist Charlie Kirk near a Turning Point USA memorial on campus.

All these decisions were overseen by TXST President Damphousse. Damphousse studied "law enforcement" and as a young man had aspired to become a police officer, before settling for a job as a prison guard. In his academic research on "terrorism," funded in part by the Department of Homeland Security, Damphousse compares "left-wing terrorism" (in which he includes Puerto Rican independence activists, environmental activists, and Black liberation activists) to "right-wing terrorism" - all in hopes of improving counterinsurgency strategies. Could such a person be expected to defend employees and students who challenge capitalism, US imperialism, and white supremacy? And given that US universities are deeply invested in the genocide - through partnerships, endowments, and a shared US-Israeli imperialist agenda - why would their administrations defend those who get in the way?

"Our Collective Has to Catch Us"

Many are outraged by the repression on campus. Students, faculty, and staff at Texas State have expressed solidarity with Idris Robinson and Tom Alter. At March 2026 campus protests, signs read "Justice for Idris & Tom." The local employees' union (TSEU CWA 6186) condemned the firings of both professors as well as the forced withdrawal of Canty.
A union spokesperson noted that Robinson's case "is nearly identical to Tom Alter's and shows once again that Texas State University President Kelly Damphousse would rather cede to political pressure than defend faculty."

Eric Cheyfitz, a tenured Cornell University professor who was suspended in 2025 over Palestine, argued that labor power is key to fighting back in cases like Robinson's. "The only way to stop anything is general strikes," Cheyfitz said. "If we strike whenever they do this to one of us, that would change the face of things, because the university couldn't hire scab labor like you do in an auto factory. If faculty had the courage to organize and walk out, that would have made a difference."

Maura Finkelstein, a Jewish anti-Zionist anthropologist who was fired from her tenured position at Muhlenberg College in 2024, also emphasized the need to organize against this systematic repression. She considers Robinson's case to be essentially identical to her own and to all other Palestine-related firings. "All of these stories are the same," Finkelstein said, "and I think it's really important to refuse to be scared by the particulars." In every case, the university - a paradigmatic liberal institution - readily disposed of those who went against the dominant, genocidal agenda. In every case, we saw that if you scratch a liberal, a fascist bleeds.

This is why Finkelstein wishes for networks of support that can nourish our movements without being dependent on such institutions. "We have to take risks," Finkelstein said, "and then our collective has to catch us." Losing a job in the US means losing health insurance, she explained, and this is obviously designed to keep people in line. To truly "catch" people would mean collectively ensuring they have what they need after getting fired; it must go beyond individual fundraisers to cover legal fees.
We must also address the fact that those who have been fired over Palestine usually cannot get rehired in the US, no matter how much they had been wronged by their employers. Building collective support that enables risk-taking is urgent.

In his recent book The Revolt Eclipses Whatever the World Has to Offer (2025), Robinson describes the US as on a trajectory toward another civil war. Tension with fascist forces is mounting; those forces use all means available in their assaults, and they have control of the major institutions. Appealing to such institutions for help will not fix our problems nor stop the US-backed genocide. Robinson gives us a better way to relate to these institutions: "their chaos is our peace, their confusion is our sanity..."

Relevant fundraisers:
- Idris Robinson (sign petition)
- Tom Alter
- Devion Canty Jr.
- Sameer Project and lifeline4gaza

CHAOS
CounterPunch, 1d ago

SpaceX Secures Right To Acquire AI Start-Up Cursor for USD 60 Billion Ahead of IPO | LatestLY

SpaceX has struck an ambitious agreement to potentially acquire the AI coding start-up Cursor for USD 60 billion, marking a significant escalation in Elon Musk's efforts to consolidate his technology empire around artificial intelligence. According to a report by the Financial Times, the deal grants the rocket manufacturer the right to buy Cursor later this year or, alternatively, pay a USD 10 billion fee to advance a strategic partnership -- effectively one of the largest termination fees in corporate history.

The collaboration aims to combine Cursor's widely used AI code-editing tools with the massive computing power of "Colossus", the world's largest AI training supercomputer operated by xAI. SpaceX, which absorbed Musk's AI venture xAI in February, confirmed the partnership in a statement on X, noting the two companies are "now working closely together to create the world's best coding and knowledge work AI".

For Cursor, the deal provides access to an unprecedented level of compute resources. "The SpaceX team has an enormous amount of compute, and we think together we can scale up our model efforts," said Cursor president Oskar Schulz. The start-up, which had been in talks to raise independent funding at a valuation exceeding USD 50 billion, has reportedly halted those discussions to focus on the SpaceX integration.

The move is widely seen as a tactical play to bolster Musk's AI capabilities against competitors like OpenAI and Anthropic. Despite the rapid growth of xAI, Musk has previously admitted that the lab trailed rivals in specialised software coding tasks. By securing Cursor -- a leader in the "vibe coding" movement, where AI handles the bulk of software development -- SpaceX gains immediate access to a premier product used by thousands of professional engineers.
This acquisition drive follows a series of internal restructurings in which Musk has merged disparate entities -- including X (formerly Twitter) and xAI -- into the SpaceX corporate umbrella. The goal is to create a unified AI and aerospace conglomerate capable of supporting long-term goals such as autonomous lunar infrastructure.

The timing of the deal is critical as SpaceX prepares for an initial public offering (IPO) expected as early as this summer. The flotation is projected to value the combined entity at USD 1.75 trillion, making it the largest public debut in history. Analysts suggest that the structured nature of the deal -- an option to buy rather than an immediate merger -- may be designed to avoid delaying the IPO. A full acquisition at this stage would require extensive regulatory filings and financial updates, whereas a partnership option allows SpaceX to maintain its listing momentum while signalling its dominance in the AI sector to potential investors.

The USD 60 billion valuation marks a meteoric rise for Cursor, which was valued at just USD 29 billion in late 2025. It also highlights the intensifying "war for talent" in Silicon Valley; SpaceX has already successfully recruited top engineering heads from Cursor to lead its internal AI projects. However, the deal may face scrutiny. OpenAI was an early investor in Cursor, and industry experts have noted that existing change-of-control clauses in software licensing agreements could complicate a final takeover by a direct competitor.

Anthropic, xAI, SpaceX
LatestLY, 1d ago

Anthropic's London Knowledge Quarter office marks its 'most important' hub outside of the US -- TFN

Anthropic's new office will be in London's Knowledge Quarter in King's Cross, following in the footsteps of rival OpenAI, as the AI company deepens its relationship with the UK amid its feud with the US government.

Silicon Valley heavyweight Anthropic secured a new 158,000 square foot office in the Knowledge Quarter at Regent's Place, with a capacity of around 800 people. The company's existing London team, which consists of more than 200 people, including 60 AI safety researchers, is one of its most significant hubs outside the US. The expansion will build on that headcount, swelling the company's ranks. Anthropic is offering up to £630,000 per year for London engineers, which will force European founders to reassess their compensation packages to fight for talent, investors told Tech Funding News.

OpenAI has also leased space in London's Knowledge Quarter, marking its first permanent UK office. The one-mile radius of King's Cross is home to 100 academic, research, and commercial organisations. Big Tech firms like Google DeepMind and Meta are based in the area, as are buzzy AI names such as video startup Synthesia and autonomous car company Wayve.

Anthropic's annual run-rate revenue surpassed $30 billion, according to an April statement, up from $9 billion at the end of 2025. The company forecast a cash burn rate of around a third of revenue in 2026, per reports. It expects that to drop to 9% by 2027 and to break even by 2028. In the UK, its chatbot Claude is used by hedge fund Man Group, the London Stock Exchange Group, and even GOV.UK in a Department for Science, Innovation and Technology pilot. Still, Claude usage in the UK is not as high as in other countries such as Singapore, Australia and Switzerland.

"Europe's largest businesses and fastest-growing startups are choosing Claude, and we're scaling to match. London is already one of our most important research and commercial hubs outside the US, and our expansion in the Knowledge Quarter gives us the room to grow into," Pip White, who leads northern Europe at Anthropic, tells TFN. "The UK combines ambitious enterprises and institutions that understand what's at stake with AI safety with an exceptional pool of AI talent -- we want to be where all of that comes together."

Earlier this month, Anthropic released a preview of its most advanced model to date, Mythos, which caused widespread security concerns. The US National Security Agency is reportedly using the system to detect software vulnerabilities, despite the fact that the Department of Defense (DoD) in March designated Anthropic a "supply-chain risk," according to Axios. The designation came after CEO Dario Amodei refused to lift Claude's safety barriers for US military applications, kickstarting a public spat between Anthropic and the Trump Administration. The company argues that its system was not developed for "fully lethal autonomous weapons," according to court filings, which is one of its red lines. Anthropic sees Mythos as a tool for securing software, not breaking it.

The UK is generally more cautious with AI than the US, which is perhaps why it makes sense that Anthropic would deepen its ties with the country, as its ethical red lines could be seen as an asset rather than an obstacle. The UK is positioning itself as a leader in AI safety, and politicians back strong governance ideas. For example, the government-backed AI Security Institute this week published an evaluation of Mythos that urged companies to invest in cybersecurity. "Future frontier models will be more capable still, so investment now in cyber defence is vital. AI cyber capabilities are dual-use; while they pose security challenges, they can also help deliver game-changing improvements in defence," it stated.
Anthropic is granting access to Mythos in stages via a coalition including Apple, Nvidia, Google and others in what it is calling Project Glasswing. However, the UK's own red lines aren't so clear: the government has still pursued controversial contracts with US defence tech giant Palantir for the NHS, which has faced fierce criticism and calls for a cancellation.
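The burn trajectory described above can be sketched with simple arithmetic. This takes the article's figures at face value; holding run-rate revenue flat at $30 billion is my simplifying assumption, so the dollar amounts are illustrative only.

```python
# Sketch of the reported burn trajectory: burn of roughly one third of
# revenue in 2026, falling to 9% in 2027 and to zero (break even) in 2028.
revenue_b = 30.0  # assumption: run-rate revenue held flat for simplicity
burn_ratio = {2026: 1 / 3, 2027: 0.09, 2028: 0.0}

# Implied cash burn in billions of dollars for each year.
burn_b = {year: revenue_b * ratio for year, ratio in burn_ratio.items()}

for year, b in burn_b.items():
    print(f"{year}: ~${b:.1f}B cash burn")
```

Even on this flat-revenue simplification, the forecast implies burn shrinking from roughly $10 billion to under $3 billion in a single year, which is the scale of the turnaround the company is promising.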

Anthropic
Tech Funding News, 1d ago

West Bengal Election 2026 Polymarket: BJP 52% vs TMC 47%, $2 Million Volume -- Prediction Market Data

West Bengal's 2026 Legislative Assembly Election has become the most traded Indian state election on Polymarket, the global prediction market platform, with total trading volume crossing $2,040,589 -- more than $2 million -- ahead of Phase 1 voting on April 23. The market currently prices BJP as the favourite to win the election at 52%, with AITC (All India Trinamool Congress) at 47% and CPI at less than 1%.

The volume figure is a striking data point. West Bengal 2026 has attracted more prediction market liquidity than any Indian state election recorded on the platform -- a reflection of both the genuine uncertainty of the outcome and the extraordinary national and international interest in whether Mamata Banerjee's TMC can survive its most serious electoral challenge since the party came to power in 2011.

The current odds tell a tight but clear story. BJP is priced at a 52% implied probability of winning -- meaning the market believes the BJP-led NDA has a marginally better than even chance of winning enough seats to form a government in the 294-member assembly. TMC is at 47%, meaning the market gives Mamata Banerjee's party a near-even but slightly lower probability of retaining power.

The buy and sell prices for each outcome reveal the market's confidence level. BJP "Yes" is trading at 53.5 cents and "No" at 50.4 cents -- a tight spread that reflects genuine uncertainty. TMC "Yes" is at 46.9 cents and "No" at 53.9 cents -- also a tight spread, but tilted the other way. The narrowness of the spreads across both outcomes confirms this is a market that genuinely does not know who will win, with bettors on both sides keeping prices in close equilibrium. CPI's less-than-1% probability and the 0.1-cent buy price for "Yes" reflect what every political analyst in India agrees on -- the Left's return to power in Bengal is not a realistic outcome in this election cycle.
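The arithmetic behind "implied probability" is simple enough to sketch. The snippet below uses the "Yes" buy prices quoted above; the normalization step, which divides out the small overround (prices across outcomes summing to slightly more than a dollar), is a standard prediction-market adjustment rather than something the platform reports directly.

```python
# Convert Polymarket "Yes" buy prices (in cents) to implied win
# probabilities, using the figures quoted in the article.
buy_price_cents = {"BJP": 53.5, "TMC": 46.9, "CPI": 0.1}

# Raw implied probability: a 53.5-cent contract pays $1 on a win,
# so the market-implied chance of that outcome is 0.535.
raw = {party: cents / 100 for party, cents in buy_price_cents.items()}

# Prices across outcomes sum to slightly more than 1 (the overround);
# normalizing makes the probabilities comparable and sum to exactly 1.
total = sum(raw.values())
implied = {party: p / total for party, p in raw.items()}

for party, p in implied.items():
    print(f"{party}: {p:.1%}")
```

After normalization the three outcomes still land at roughly 53%, 47%, and under 1%, matching the headline figures the market displays.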
Polymarket is a decentralised prediction market where participants use real money to bet on binary outcomes. The $2,040,589 trading volume on West Bengal 2026 makes it the highest-volume Indian state election on the platform -- surpassing previous high-profile contests including recent Rajasthan, Madhya Pradesh and Telangana elections.

The volume breakdown is also interesting. BJP has attracted $180,823 in contract volume, TMC has $130,897 -- and CPI has generated $800,228 in volume despite a near-zero probability. That last number reflects the mechanics of prediction markets, where "No" contracts on long-shot outcomes are close to a sure thing: with CPI "Yes" priced at 0.1 cents, "No" trades near a full dollar, so buying it locks in a small but near-certain return.

The Polymarket data is worth placing alongside the Phalodi Satta Bazaar prediction that has been circulating -- which gives TMC 158-161 seats and BJP 127-130, a TMC majority. The two informal markets are pointing in opposite directions on the winner question. Polymarket -- a global crypto-based prediction market drawing liquidity from international participants -- is pricing BJP at 52%. Phalodi -- an India-based informal betting network drawing on ground-level political intelligence from within the state -- is pricing TMC as the majority winner. The divergence is itself a data point. International capital is betting on change. Domestic intelligence is betting on continuity. Which reads the room better will be known on May 4.

It bears repeating that both Polymarket and the Phalodi Satta Bazaar are informal and unregulated markets with no official standing. Betting on elections is illegal in India, and election results are determined solely by voting and counting -- not by prediction markets of any kind. These figures are reported purely as a data point on market sentiment ahead of voting.

West Bengal is a 294-seat assembly where a majority requires 148 seats. In 2021, TMC won 213 seats and BJP won 77.
The Polymarket market's near-even pricing reflects the magnitude of the shift that would need to happen for BJP to win -- the party would need to add approximately 70 seats from its 2021 tally while TMC would need to lose approximately 65. That is a large swing by any measure, but the market -- with $2 million of real money behind it -- is saying it considers that swing roughly as likely as its opposite. West Bengal votes on April 23 and April 29. Counting is May 4. The most traded Indian election prediction market in Polymarket history will have its answer in 12 days.
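The arithmetic behind those figures -- prices read as probabilities, the long-shot "No" trade, and the seat swing -- can be sketched in a few lines of Python. This is an illustrative sketch using the prices quoted above; the function names are ours, not Polymarket's, and none of this is trading advice.

```python
# Illustrative sketch of the prediction-market arithmetic described above,
# using the prices quoted in the article. Function names are hypothetical.

def implied_probability(yes_price_cents: float) -> float:
    """A binary contract pays $1 (100 cents) if the outcome occurs, so the
    'Yes' price in cents approximates the market's implied probability."""
    return yes_price_cents / 100.0

def no_contract_return(yes_price_cents: float) -> float:
    """Fractional return from buying 'No' on a long shot: the 'No' side
    costs roughly (100 - yes_price) cents and pays 100 cents if the
    long shot indeed loses."""
    cost_cents = 100.0 - yes_price_cents
    return (100.0 - cost_cents) / cost_cents

print(implied_probability(53.5))          # BJP "Yes" at 53.5 cents -> 0.535

# CPI "Yes" at 0.1 cents: buying "No" costs ~99.9 cents and returns only
# ~0.1% if CPI does not win -- near-certain, but hardly free money
print(round(no_contract_return(0.1), 4))

# Seat swing implied by a BJP win: 77 seats in 2021, 148 needed for a majority
print(148 - 77)                           # 71
```

The "No" trade ties up nearly a full dollar for a fraction-of-a-percent payoff, which is why the $800,228 of CPI volume reads as mechanical arbitrage rather than belief in a Left revival.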

Polymarket
Business Upturn1d ago
Read update
West Bengal Election 2026 Polymarket: BJP 52% vs TMC 47%, $2 Million Volume  --  Prediction Market Data

SpaceX says unproven AI space data centres may not be commercially viable

SpaceX warned investors that its ambitions to build space-based artificial intelligence data centres, as well as human settlements on the moon and Mars, rely on unproven technologies and may not become commercially viable, according to a company filing. The business risks laid out in SpaceX's pre-IPO filing, which have not been previously reported, present a far more cautious assessment of the rocket maker's future than the vision laid out publicly by billionaire CEO Elon Musk in recent weeks, as the company gears up for what could be the largest initial public offering in history. Risk factors in a prospectus are required by U.S. securities law and are designed to inform investors of potential pitfalls while also shielding companies from future legal liability.

SpaceX
The Hindu1d ago
Read update
SpaceX says unproven AI space data centres may not be commercially viable

SpaceX may acquire Cursor in $60 billion deal amid AI push The Mainstream

In a major development in the AI and coding space, Cursor has entered into a strategic partnership with SpaceX that could lead to a potential acquisition later this year. As part of the agreement, SpaceX has secured an option to acquire the AI coding platform for $60 billion. If the acquisition does not go through, the aerospace company will pay $10 billion for joint development efforts. The collaboration is focused on building advanced AI models for coding. Cursor stated, "With this partnership, our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models." SpaceX highlighted the combined strength of both companies, saying, "The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models." Colossus, located in Memphis, Tennessee, is currently considered the world's largest AI training cluster. It was initially built by xAI in 122 days and is now being expanded to support the computational equivalent of 1 million GPUs. SpaceX recently acquired xAI, bringing the facility under its control. The partnership comes as SpaceX prepares for a potential public listing. Earlier in April, reports suggested the company filed for an initial public offering targeting a valuation of $1.75 trillion. Regulatory filings also showed a consolidated loss of $4.94 billion last year, largely due to increased spending on AI infrastructure after the xAI acquisition. Meanwhile, Cursor is also in talks to raise at least $2 billion at a $50 billion valuation, a sharp rise from its current $29 billion valuation. Reports earlier this year indicated that the company reached $2 billion in annualised revenue, with projections suggesting it could exceed $6 billion by the end of the year. 

SpaceXxAI
CIO News1d ago
Read update
SpaceX may acquire Cursor in $60 billion deal amid AI push The Mainstream

Anthropic's Mythos model accessed by unauthorized users, report says

Anthropic has opened an investigation after discovering that a small group of users gained unauthorized access to the AI company's powerful new Mythos model, Bloomberg News reported on Tuesday. The "small group of unauthorized users" was said to have accessed the advanced Mythos AI model the same day Anthropic began rolling out a preview of the model to a limited group of approved companies for testing in late February. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," Anthropic said in a statement. With the model publicly introduced on April 7th, the incident is raising fresh concerns over how tightly the high-end cybersecurity tool is being controlled. Anthropic has touted its Claude Mythos Preview model as "so powerful that it could enable dangerous cyberattacks," according to a person familiar with the matter and documentation reviewed by the media outlet. The San Francisco-based company said there was no evidence the unauthorized access impacted any of Anthropic's systems or went beyond the third-party vendor's environment, Bloomberg reported. Still, Anthropic has not publicly confirmed the full scope of the incident, and it remains unclear whether any vulnerabilities were identified or exploited by the unauthorized users.

Access traced to private online group

The users were said to be part of a private Discord forum that managed to gain entry despite the model being restricted to select organizations under the newly launched Project Glasswing initiative. Project Glasswing - limited to 40 technology and infrastructure organizations, including Amazon, Google, Microsoft, Apple, and Cisco - has granted those companies permission to test Mythos' extraordinary vulnerability-detection mechanisms and autonomous security patching on their own systems.
According to the person familiar with the matter, the users "relied on a mix of tactics" to break into the system, but there was no direct breach of Anthropic's core systems. The Discord channel at the center of the incident is alleged to focus on finding information about unreleased models, often using bots to scour the internet, including sites like GitHub, for details that AI companies and industry insiders have posted online. One method of access was via a single worker at the unnamed third-party contractor used by Anthropic, while another tactic included "trying commonly used internet sleuthing tools often employed by cybersecurity researchers," the person told Bloomberg.

Built to find and exploit vulnerabilities

The Mythos rollout has already drawn scrutiny from regulators and policymakers, after internal testing and external evaluations showed the model can uncover serious flaws in operating systems, browsers, and other foundational software. This has triggered warnings across the board that the frontier model could be misused to accelerate cyberattacks or expose critical weaknesses in widely used systems. Anthropic itself has categorized Mythos as "too dangerous" for public consumption, a characterization that stoked fears after its preview model uncovered "thousands" of major vulnerabilities and zero days in "every major operating system and web browser." Security experts are also warning that the advanced AI tool - capable of autonomously identifying and exploiting vulnerabilities within a matter of hours - could easily outpace existing cybersecurity defenses. Anthropic has been slowly expanding the model's availability beyond select corporate entities to government users, including financial institutions and US federal agencies, prompting the Trump administration to call a meeting with Anthropic's CEO, Dario Amodei, to discuss its blacklist of the AI start-up.
Earlier on Tuesday, financial regulators across Australia and South Korea raised concerns about the AI model, arguing it could destabilize entire banking systems, joining earlier warnings from regulators in several EU nations.

AnthropicDiscord
Cybernews1d ago
Read update
Anthropic's Mythos model accessed by unauthorized users, report says

AI startup Anthropic commits US$100 billion to Amazon's AWS over next 10 years

Artificial intelligence company Anthropic has agreed to commit more than US$100 billion to Amazon's AWS cloud platform over the next 10 years to train and run its Claude chatbot. Amazon will invest US$5 billion immediately as part of the new agreement announced this week by the companies, and up to another US$20 billion in the future. Amazon previously invested US$8 billion in Anthropic. The partnership will allow Anthropic to secure up to 5 gigawatts of Amazon's Trainium chips to train and power its artificial intelligence models. "Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it's in such hot demand," said Amazon CEO Andy Jassy. Amazon said AWS customers will be able to access the full Anthropic-native Claude console from within the AWS cloud platform. Earlier this year, privately held Anthropic said its valuation grew to US$380 billion, positioning itself alongside rivals OpenAI and Elon Musk's rocket maker SpaceX, which recently merged with his AI startup xAI, maker of the chatbot Grok. Renaissance Capital, which researches the potential for initial public offerings, counts Anthropic as third among the most valuable private firms, behind SpaceX and ChatGPT maker OpenAI, valued at US$500 billion. Anthropic and Amazon have partnered since 2023 to accelerate generative AI adoption, helping customers build, deploy, and scale AI applications. Amazon says 100,000 customers run Anthropic Claude models on AWS. In February, the Trump administration ordered all US agencies to stop using Anthropic's artificial intelligence technology and imposed other major penalties after the company refused to allow the US military unrestricted use of its AI technology. In an unusually public clash between the government and the company, President Donald Trump, Defense Secretary Pete Hegseth and other officials took to social media to chastise Anthropic, accusing it of endangering national security.
Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used in ways that would violate its safeguards. Anthropic said it would challenge what it called an unprecedented and legally unsound action "never before publicly applied to an American company." Earlier this month, a federal appeals court refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic in a decision that differed from the conclusions reached in another judge's ruling on the same issues. Anthropic is not yet profitable but said in February that it's on track for sales of US$14 billion over the next year. Anthropic was founded by ex-OpenAI employees in 2021 and released its first version of Claude in 2023, following OpenAI's ChatGPT debut in late 2022.

AnthropicSpaceXxAI
Jamaica Gleaner1d ago
Read update
AI startup Anthropic commits US$100 billion to Amazon's AWS over next 10 years

Investors in Edinburgh Worldwide Advised to Defend Against Saba's Latest £165M SpaceX Proposal - Internewscast Journal

Shareholders are being called to action to prevent a US hedge fund from taking over a key investor in Elon Musk's SpaceX. The Edinburgh Worldwide Investment Trust (EWIT) faces a takeover attempt by Saba Capital, a New York-based hedge fund led by financier Boaz Weinstein, known for his skills in poker. Weinstein aims to replace the current board with his three selected candidates. EWIT, which holds a significant stake in Musk's space exploration company valued at approximately £165 million, is set to conduct its annual general meeting later this month. During this meeting, all board members will be up for re-election, presenting a crucial moment for the trust's future. This marks Weinstein's third attempt to overthrow the board, following two previous defeats. His earlier efforts -- one last year and another in January -- were blocked by a strong turnout of EWIT's more than 20,000 shareholders, many of whom are small investors opposing his takeover strategy. However, EWIT's leadership has raised concerns this weekend, suggesting that Saba Capital is banking on shareholder apathy to decrease voter participation at the AGM. Such a scenario could allow Weinstein to secure the necessary votes to gain control, which would enable the hedge fund to acquire EWIT's valuable SpaceX investment at a minimal cost. To counter this threat, EWIT's management is urging small investors to actively vote against Weinstein's nominees and to support the current board members. Investors using online platforms are particularly encouraged to cast their votes early, as their submission deadlines might fall up to a week before the meeting.
EWIT investors using the Fidelity platform will need to cast their votes by this Friday (April 24) while for customers of Hargreaves Lansdown, Interactive Investor and AJ Bell the cut-off date is on Monday (April 27). Investors can also vote at the AGM on the day if they attend. EWIT chairman Jonathan Simpson-Dent told The Mail on Sunday: 'If investors turn out in significant numbers, as they did in January, Saba can be defeated and shareholders can protect access to high-growth companies like SpaceX.' Last week, shareholder advisory firms PIRC and ISS recommended investors reject Saba's nominees. PIRC said it had 'concerns' the three candidates could undermine the board's independence. ISS said Saba had 'not presented a compelling case for change in control'. Baroness Altmann, a former government pensions minister and shareholder rights' campaigner, said: 'Saba has cynically relied on weak shareholder protections so far but previous rounds of this battle have shown the power ordinary shareholders have to defend their own interests.' The tussle over the trust has taken on renewed urgency after reports emerged that SpaceX is planning to list later this year, in what is likely to be one of the biggest stock market floats in history. It is estimated that the firm could be worth as much as £1.3 trillion when it goes public, meaning EWIT's stake could surge, generating hefty returns for investors. Richard Stone, head of industry body the Association of Investment Companies, said: 'If shareholders don't come out in force, Saba will be able to grab the steering wheel of Edinburgh Worldwide with its valuable SpaceX flotation around the corner.' The trust has estimated that at least 75 per cent of its investors would need to vote for it to be in with a chance of defeating Saba, which is its largest shareholder and controls around 30 per cent of the business. 
This is slightly higher than the record 70 per cent turnout the trust recorded in January when Saba last attempted to take over the board. Saba scored a victory earlier this month when it defeated proposals put forward by EWIT's board that would have allowed shareholders to cash out before it takes control of the business. The sector suffered a blow on Thursday when investors in Impax Environmental Markets, another UK firm targeted by Saba, approved an exit offer that would effectively dismantle the trust, despite shareholders voting to continue the business last year. Trusts have demanded City watchdog, the Financial Conduct Authority, intervene to stop minority investors such as Weinstein calling repeated votes to force their agenda on companies. But the regulator's head of markets Simon Walls previously said such events were part of the 'rough and tumble' of finance.

SpaceX
Internewscast Journal1d ago
Read update
Investors in Edinburgh Worldwide Advised to Defend Against Saba's Latest £165M SpaceX Proposal - Internewscast Journal

New AI lab Core Automation 'nerdsniped' researchers from Anthropic, Google DeepMind

From day one, the startup has branded itself as "the world's most automated AI lab." A new AI startup called Core Automation, founded by an ex-OpenAI researcher, is snatching top talent from Anthropic and Google DeepMind. On Tuesday, Core Automation wrote in its first X post that it is "building the world's most automated AI lab." "Our objective: systems that optimize and automate work, starting with research itself," the company wrote in the post. Jerry Tworek, a former vice president at OpenAI, lists himself as the CEO and cofounder of Core Automation on his X bio. In an X post on Tuesday, Anthropic researcher Rohan Anil said he left the company after Tworek "nerdsniped" him. "Ok I did leave anthropic, a few weeks ago, it was one of the best places to work for a researcher," Anil, who also worked in Google DeepMind, wrote. "Jerry Tworek nerdsniped me into starting this with him and others." Anmol Gulati, a research scientist from Google DeepMind working on Gemini, said in a post that he was "starting something new with some exceptional people." "I've increasingly felt that the current research paradigm -- scaling models, data, and static deployment won't get us all the way," Gulati wrote. "We believe the next phase comes from something different: new learning algorithms, architectures beyond today's stack, and systems that automate the process of building itself." On its website, Core Automation wrote that its team consists of people who have "helped build frontier models" and "influential architectures." It's not the first time top AI researchers have left big labs for startups. Yann LeCun, formerly Meta's chief AI scientist, left the company to start his AI startup Advanced Machine Intelligence Labs, also known as AMI Labs. The startup focuses on developing world models -- AI systems that better understand and reflect the real world. AMI Labs' approach departs from Meta's focus on commercially driven model development and scaling. 
Last year, tech giants were battling over top AI talent, offering multibillion-dollar acquihires and massive pay packages. Startups were also active players in the talent war, offering competitive salaries and equity packages, as well as the unique impact and ownership that come with working at a smaller company. Shawn Thorne, managing director at executive search firm True Search, told Business Insider last year that base salaries at startups rose rapidly as they competed to attract AI talent. Equity is "the big factor" helping offset the "opportunity cost" for top researchers or engineers who might otherwise choose to start their own ventures, he said. To sweeten the deal, startups also offer additional incentives such as cofounder titles, access to compute, and time for independent research, Thorne added.

Anthropic
DNyuz1d ago
Read update
New AI lab Core Automation 'nerdsniped' researchers from Anthropic, Google DeepMind

Exclusive-SpaceX says unproven AI space data centers may not be commercially viable, filing shows

NEW YORK, April 21 (Reuters) - SpaceX warned investors that its ambitions to build space-based artificial intelligence data centers, as well as human settlements on the moon and Mars, rely on unproven technologies and may not become commercially viable, according to a company filing. The business risks laid out in SpaceX's pre-IPO filing, which have not been previously reported, present a far more cautious assessment of the rocket maker's future than the vision laid out publicly by billionaire CEO Elon Musk in recent weeks, as the company gears up for what could be the largest initial public offering in history. Risk factors in a prospectus are required by U.S. securities law and are designed to inform investors of potential pitfalls while also shielding companies from future legal liability. "Our initiatives to develop orbital AI compute and in-orbit, lunar, and interplanetary industrialization are in early stages, involve significant technical complexity and unproven technologies, and may not achieve commercial viability," SpaceX said in an excerpt from the S-1 filing, which was seen by Reuters. Any future AI orbital data centers will operate "in the harsh and unpredictable environment of space, exposing them to a wide and unique range of space-related risks that could cause them to malfunction or fail," the document said.

MUSK SAYS AI IN SPACE IS A 'NO-BRAINER'

Companies use the S-1 registration document to disclose their finances and risks before going public. SpaceX is targeting a listing in the coming months at a valuation of roughly $1.75 trillion with a $75 billion raise, which would make it the largest initial public offering in history. Musk said at the World Economic Forum in January that building AI data centers in space was "a no-brainer" and that it would be the cheapest place to put AI within two to three years.

In February, after announcing a merger between SpaceX and his social media and artificial intelligence firm xAI, he said "space-based AI is obviously the only way to scale". SpaceX did not immediately respond to a request for further comment. SpaceX also highlighted its heavy dependence on Starship, its next-generation fully reusable rocket, which has suffered several delays and testing failures. "Any failure or delay in the development of Starship at scale or in achieving the required launch cadence, reusability and capabilities thereof would delay or limit our ability to execute our growth strategy," the filing said. Starship is designed to loft far larger payloads than SpaceX's workhorse Falcon 9 rocket, aiming to dramatically reduce launch costs for Starlink satellites, space-based data centers and human missions to the moon. (Reporting by Echo Wang; Writing by Joe Brock; Editing by Nick Zieminski)

SpaceXxAI
Superhits 97.9 Terre Haute, IN1d ago
Read update
Exclusive-SpaceX says unproven AI space data centers may not be commercially viable, filing shows

Sam Altman Says Anthropic Is Scaring People To Promote Its Mythos AI

Anthropic has stated that Mythos is extremely powerful and could be used dangerously across several sectors. The company restricted the model to a limited set of enterprises to prevent its misuse in cyberattacks. AI researcher Nicholas Carlini tested the model and reported striking results, according to Bloomberg: Mythos can identify weaknesses in software and exploit them on its own, and it is capable of building its own hacking tools and targeting platforms such as Linux.

Anthropic
TimesNow1d ago
Read update
Sam Altman Says Anthropic Is Scaring People To Promote Its Mythos AI

Cut chaos with weekly kitchen reset

Americans spend more than half their food budget eating out or grabbing takeout; this is significant given that food prices at grocery stores and restaurants continue to increase. While convenience makes ordering to-go seem like a good plan, eating at home is both more economical and better for you. But when the week gets busy, good intentions slip away, produce wilts in the drawer and everyone forgets about that leftover pork chop lingering in the back of the fridge.

CHAOS
TimesDaily1d ago
Read update
Cut chaos with weekly kitchen reset

iTWire - TrendAI partners with Anthropic to extend leadership in AI security

Trend Micro's enterprise business accelerates its transformation as AI security category leader

TrendAI, the enterprise cybersecurity business from Trend Micro Incorporated (TYO: 4704; TSE: 4704), today announced a strategic engagement with Anthropic, embedding Claude models across its platform to power agentic workflows, automation, and AI-native security operations, and to develop threat research that identifies vulnerabilities in AI systems and infrastructure. TrendAI™ will use Claude to advance vulnerability discovery while ensuring coordinated action in real-world risk reduction. Rachel Jin, Chief Platform and Business Officer, Head of TrendAI™: "We launched TrendAI™ to define the AI security category. This next phase is about scaling that vision globally, with leading partners like Anthropic. Our broad, strategic collaboration across research, defence, and innovation will define how AI is secured moving forward." TrendAI™'s use of Claude spans threat research, real-world risk reduction, platform innovation, and global go-to-market execution, operating across the full AI security lifecycle, from vulnerability discovery to automated defence and AI-native operations. Ash Alhashim, Head of Cybersecurity GTM at Anthropic: "For 35 years, TrendAI™ has been at the forefront of cybersecurity. By using Claude to power TrendAI Vision One™ and initiatives like TrendAI™ Zero Day Initiative™ (ZDI) and Pwn2Own, TrendAI™ is advancing the next iteration of vulnerability discovery and reporting -- and tilting the scales toward defenders." Focus areas include: Advancing AI Threat Research: TrendAI™ is scaling its threat research to address the growing attack surface of AI, building on proven programs like Pwn2Own Berlin under TrendAI™ ZDI. This approach brings real-world vulnerability discoveries into AI systems, helping identify and address critical weaknesses before they reach production environments.
Driving AI-Native Innovation: Anthropic's Claude models will help power TrendAI™'s platform innovation, enhancing agentic workflows, automation, and AI-native security operations. This enables organisations to reduce noise, act faster, and scale security alongside AI adoption. The announcement comes as TrendAI™ prepares to welcome over 600 cybersecurity leaders to its Spark Leadership Exchange in Phoenix, Arizona in May. Anthropic will join TrendAI™ on stage at the event alongside other industry leaders, reinforcing a shared commitment to shaping the future of AI security and engaging directly with global enterprise leaders. To learn more about the Spark Leadership Exchange, visit: https://resources.trendmicro.com/spark-leadership-exchange.html

Anthropic
itwire.com1d ago
Read update
iTWire - TrendAI partners with Anthropic to extend leadership in AI security

ETU Prioritizes Respect on ANZAC Day Amid QR Chaos

The Electrical Trades Union Queensland & Northern Territory Branch confirm that no industrial action will be taken on ANZAC Day to any extent that would impact members of the public and affect services running. Any disruptions to services on ANZAC Day will be at the hands of Queensland Rail (QR) and the LNP State Government. The ETU and its members hold a deep and sincere respect for the sanctity of ANZAC Day and the profound sacrifice made by those who served, so that Australians may enjoy the freedoms and way of life we have today. This is not a position taken lightly - it is a value shared by our Delegates, Officials, and membership. ETU President Jason Young says, "It has always been the ETU's intention to withdraw any industrial action on ANZAC Day. Our Delegates and Officials considered this well in advance of today's correspondence, and the decision to not proceed with action on this day was already settled." Speaking at a press conference this morning ETU Rail Organiser Darren Wood said, "Our membership does not want to affect any of the services running on ANZAC day, we understand the sanctity of ANZAC day, and we want to do our best to lift what action is necessary in order to allow services to run for people to be able to pay their respects." It has now been two weeks since the ETU requested that Queensland Rail and the LNP State Government seriously consider the union's proposal for an Electrical-only Enterprise Agreement - a proposal made in good faith, at no cost to the employer, and one that would have avoided the current dispute entirely by allowing both parties to focus on reaching an agreement and moving forward. To date, the ETU has received no response to this request by Queensland Rail and the Government. The ETU again calls on Queensland Rail and the LNP State Government to respond to this proposal without further delay. Industrial relations require good faith from all parties at the table. We have demonstrated ours. 
We now ask that QR and the Government demonstrate theirs.

CHAOS
Mirage News1d ago
Read update
ETU Prioritizes Respect on ANZAC Day Amid QR Chaos

Dow Jones Top Company Headlines at 1 AM ET: SpaceX Secures Option to Buy AI Startup Cursor for $60 Billion | OpenAI ...

SpaceX Secures Option to Buy AI Startup Cursor for $60 Billion The rocket company said close work in a coding partnership could lead to a combination. ---- OpenAI Under Criminal Probe in Florida Over Mass Shooter's ChatGPT Use The investigation seeks to determine responsibility for an attack that killed two people. The chatbot advised the suspect on the weapon and timing, Florida's attorney general says. ---- T-Mobile and Germany's Deutsche Telekom Weigh Combination The big German carrier is already T-Mobile's largest shareholder. ---- BHP Upgrades Copper Guidance, Signs China Iron-Ore Deal Like a number of its rivals, BHP has put copper at the heart of its growth plans, forecasting rising demand for the industrial metal. ---- United Airlines to Reduce Capacity as Fuel Costs Soar The carrier said it has made schedule adjustments for the rest of the year to account for volatile prices. ---- Capital One Financial First-Quarter Revenue Rises The McLean, Va., bank logged higher revenue in its latest quarter as provision for credit losses declined. ---- Amazon.com Launches Program for GLP-1 Weight-Loss Drugs With One Medical Amazon One Medical members can get Novo Nordisk's Wegovy and Eli Lilly's Foundayo GLP-1 pills in a program that integrates primary-care services with the technology company's pharmacy business. ---- New York Sues Coinbase, Gemini Over Crypto Exchanges' Prediction Markets Federal and state regulators have been sparring over who has oversight of the platforms, which have surged in popularity ---- Casey Wasserman's Talent Agency Draws Takeover Interest A handful of private-equity firms and United Talent Agency are among the early bidders for the power broker's firm. 
---- Chip-Equipment Supplier ASM International Logs Higher Sales on Booming AI Demand The Dutch group posted strong sales for the first quarter as chip makers continue to invest in tools to make increasingly sophisticated semiconductors in a bid to satisfy booming demand for artificial intelligence. ---- Airlines cut flights as fuel costs surge - an economic fallout from the Iran war that markets may be missing For travelers, the disappearing flights are translating into fewer route and connecting options, and of course higher fare prices. ---- Deutsche Lufthansa to Cancel 20,000 Short-Haul Flights to Save Jet Fuel The German group said that the flights canceled are operated by several of its airlines, mostly its regional carrier CityLine, and will equal a 1% reduction in its passenger capacity. ---- Spirit Airlines in Talks With Trump Administration on Government Investment Florida-based Spirit has been working to sell some planes and refocus operations on core cities.

SpaceX
Morningstar, 1 day ago
Dow Jones Top Company Headlines at 1 AM ET: SpaceX Secures Option to Buy AI Startup Cursor for $60 Billion | OpenAI ...

SpaceX targets Cursor in $60B AI coding push | News.az

SpaceX said it has secured an option to either acquire code-generation startup Cursor for $60 billion later this year, or pay $10 billion to proceed with a new partnership, as it expands further into the fast-growing market for AI developer tools, News.Az reports, citing Reuters.

Alongside OpenAI and Anthropic, Cursor is among several Silicon Valley startups attracting large numbers of developers by using artificial intelligence to automate coding, an area where AI firms have already found strong commercial momentum. The potential deal could strengthen the position of xAI, the maker of the Grok chatbot that SpaceX merged with in February, giving it a firmer foothold in the AI coding segment, where it has trailed competitors. It would also give Cursor expanded computing resources to advance its AI models.

"The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models," SpaceX said in a post on X on Tuesday. Colossus is xAI's Memphis-based supercomputer cluster, which the company has described as the largest in the world. The firm has been investing billions of dollars in AI infrastructure.

The announcement comes ahead of SpaceX's widely anticipated public debut, expected in the coming months. The company is reportedly targeting a valuation of around $1.75 trillion and planning a $75 billion fundraising that could become the largest IPO ever.

In March, two product engineering leaders at Cursor, Andrew Milich and Jason Ginsberg, said they had joined SpaceX to work on the company's lunar initiatives and xAI. Elon Musk welcomed the hires, stating, "Orbital space centers and mass drivers on the Moon will be incredible."

Anthropic, SpaceX, xAI
News.az, 1 day ago

Anthropic Mythos Breach: How a private Discord group accessed the powerful Cyber AI

New Delhi: Anthropic teased its new AI tool, Mythos, a couple of weeks back. The tool is supposed to help big companies find weak spots in their systems before hackers do, but the model's public release was held back over concerns that its abilities could be exploited by threat actors. Now a Bloomberg report says some people who should not have it got in anyway. They slipped through a third-party vendor on the very day Anthropic announced the limited release.

The whole thing happened fast. A small group in a private Discord channel figured out how to reach the preview version, using access from someone working at a contractor for Anthropic plus some smart guesses about where the model lives online. The group even showed screenshots and did a live demo for reporters. They say they just wanted to play around with the new tech, not cause trouble. Still, it leaves you wondering how safe these controlled releases really are.

How the Claude Mythos unauthorised access happened

As per the report, Anthropic's main systems were not compromised; the group got in through a vendor environment. One member works at a third-party contractor and had some login rights. They combined that with guesses based on how Anthropic sets up other models. The group belongs to a Discord server that hunts for details on unreleased AI products. They started using Mythos right away, mostly for simple tasks like building basic websites, to stay under the radar.

Anthropic responded quickly. A spokesperson told reporters, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." So far the company says it has found no sign that its own systems were touched. That is a relief, but it still highlights how tricky supply chains can get in tech.

What this means for AI security releases

I remember chatting with a friend who works in enterprise security last month. He joked that every "limited release" plan sounds solid until the first leak. This Mythos case proves his point. Anthropic set up Project Glasswing to share the tool only with trusted names like Apple and a few others. The goal was to let them test it for defence while keeping it away from bad actors. Now that plan has a visible crack.

The tool itself can spot vulnerabilities in major operating systems and browsers better than most humans, according to Anthropic. That power is exactly why the company kept access so tight. But when access slips out, even curious users create risks; someone else with worse intentions could follow the same path later.

Experts keep warning about third-party vendors being the weak link. Companies lean on contractors more than ever, yet those extra connections open new doors for trouble. In this case the breach stayed outside Anthropic's walls, but it still raises questions about how well everyone vets their partners.

Anthropic designed Mythos to strengthen corporate defences, knowing from the start that it could flip into a hacking helper if it fell into the wrong hands. That is why the controlled rollout mattered so much. This incident might push other AI firms to tighten their own release processes even more.

Anthropic, Discord
News9live, 1 day ago