News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic quietly targets Microsoft's most popular app

Anthropic just made a massive move that goes directly at Microsoft's (MSFT) home turf. The AI bellwether just launched a beta version of Claude for Word, layering its robust model onto arguably the most popular piece of software in history, according to Business Insider.

Word isn't just another app. Having personally used the product for decades, I know it has been the go-to for negotiating contracts, writing memos, and shaping deals. With Claude inside the software's workflow, Anthropic's goal is to remove that friction by sitting exactly where the work is happening. As a result, Anthropic is becoming a critical intelligence layer within the software, especially after it recently outperformed many major large language models. And that's a much more serious challenge to Microsoft than might be obvious.

Anthropic takes aim at Microsoft's moat

Anthropic's move into Word has everything to do with control. Office work has been synonymous with Microsoft's word processor over the years, particularly for consultants, students, finance teams, and lawyers. Now, with the add-in, the dynamic has switched up in a big way. Users don't need to leave the platform; Anthropic meets them at the exact point of work. Claude then stops being an outside assistant and becomes more like a native document layer.

There's also a unique sales pitch layered into the arrangement. Essentially, if users can run the plug-in through Bedrock, Vertex AI, or any internal LLM gateway, they can sidestep the need to buy into Microsoft's AI stack. For lawyers in particular, the development could be a game-changer: the add-in can flag risks, preserve formatting, and survive tracked changes. Here are some of the prompts that can potentially be used.

Setting up Claude for Word

* Works with Microsoft Word on the web, Word for Windows with Microsoft 365, and Word for Mac on supported versions.
* Individuals can install Claude for Word from the Microsoft Marketplace by selecting Get it now.
* Following the installation, open Word, launch the add-in, and sign in with your Claude account.
* Admins can deploy it through the Microsoft 365 Admin Center by enabling Office Store access, adding the app, and assigning it to users or teams.

Source: Claude Support

Claude's bigger workflow grab

* Word is the newest entry in Claude's Microsoft foothold, currently in beta for Team and Enterprise users.
* Before that, Claude spread its tentacles into Excel for spreadsheet work and PowerPoint for slide creation and editing.
* It connects to Microsoft 365 and pulls context from SharePoint, OneDrive, Outlook, and Teams.
* Anthropic went deeper into Microsoft's stack through Copilot Studio, the Researcher agent, and Agent Mode in Excel.
* Outside Microsoft, Claude is embedded in Slack workflows.
* It can also plug into Google Workspace (Gmail, Calendar, and Drive).
* New enterprise plug-ins extend Claude into DocuSign, FactSet, and LSEG workflows, too.

What Claude does better than the average chatbot

Anthropic's Claude is a family of AI models, and from the start, its edge has been clear. Instead of being flashy, it has always aimed to be a reliable coworker who actually understands the brief. It debuted on March 14, 2023, and since then has quickly risen through the ranks to become a major work platform, offering quick-response modes, deeper adaptive thinking for tougher problems, and Projects and Artifacts for drafting. In addition, its Sonnet 4.6 model now stretches to a 1 million-token context window in beta, a massive flex that means Claude can offer far more organized, context-specific responses on some of the most challenging tasks.

"I've only subscribed to it for a week or so now, but so far I like the tone way better," said Reddit commenter tulobanana, responding to a Claude versus ChatGPT debate in the r/ChatGPT subreddit. "It's night and day. Claude sounds a lot more natural and most importantly, doesn't do any of the 'let's look at this calmly' and 'you're not overreacting, you're not being dramatic' and 'it's nothing mystical' type of s**t. At least not yet, knock on wood."

Here are some standout benchmark stats.

* Coding is Claude's biggest selling point. Claude Opus 4.1 posted a head-turning 74.5% on SWE-bench Verified, its best published score on the software-engineering benchmark.
* Claude stood out in complicated "research and finish the task" work. The outside group Artificial Analysis reported that Claude Opus 4.6 scored 1606 Elo on these tasks, roughly 150 points higher than GPT-5.2 (xhigh). Put simply, Claude beats its rival about seven out of 10 times in head-to-head tests.
* Claude stands apart when workloads get huge. In one Anthropic test, Opus 4.6 scored 76% on a task that essentially measured how effectively the AI could find the correct information buried inside 1 million tokens of text.

Why Anthropic is suddenly everywhere

Claude has had a noisy stretch of late. In the past few weeks, it has been at the heart of a Pentagon blow-up, Washington sounding the alarm over its latest model, and a growing tussle over who controls the next layer of enterprise AI.

* Anthropic's Pentagon fight is still a headache, the BBC noted. Following a dispute over guardrails on military usage, the Defense Department called the company a "supply-chain risk" and cut off Pentagon business.
* Funnily enough, Anthropic is still communicating with Washington. Co-founder Jack Clark said the company is still discussing Mythos with the Trump administration, Forbes reported, undercutting the "Dario the Ethical" narrative.
* Mythos is Anthropic's most capable model, tailor-made for coding and agentic tasks, according to Reuters. It can identify and exploit weaknesses in major operating systems and browsers. The development was serious enough that Treasury Secretary Scott Bessent and Fed Chair Jerome Powell called an emergency meeting with the CEOs of major banks.
* Anthropic is also deepening its push into enterprise software, creating headaches for rivals such as Palantir after recently rolling out 10 new business plug-ins, Reuters confirmed.

This story was originally published April 14, 2026 at 2:33 PM.
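For context on the Elo gap cited in the benchmark list above: under the standard logistic Elo expected-score formula, a gap of roughly 150 rating points corresponds to about a 70% head-to-head win rate, which is where the "seven out of 10" figure comes from. A minimal sketch, using only the 150-point gap quoted above:

```python
# Convert an Elo rating gap into an expected head-to-head win probability
# using the standard logistic Elo formula: P(win) = 1 / (1 + 10 ** (-gap / 400)).
def elo_win_probability(rating_gap: float) -> float:
    return 1.0 / (1.0 + 10 ** (-rating_gap / 400))

# The roughly 150-point gap reported by Artificial Analysis (Opus 4.6 vs. GPT-5.2 xhigh).
print(round(elo_win_probability(150), 3))  # ~0.703, i.e. about seven wins in ten
```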

Anthropic | The Island Packet | 10d ago

Anthropic Valuation Faces New Questions After Claude Mythos Findings

In Claude news today, the UK's AI Safety Institute has confirmed that Anthropic's Claude Mythos Preview can autonomously execute sophisticated multi-stage cyberattacks at success rates no prior AI model has approached, completing a 32-step corporate network intrusion simulation that had never been finished by any AI system. These findings reframe AI safety assessments from theoretical benchmarks into operational risk disclosures, with direct implications for enterprise deployment decisions and the financial sector's security posture.

Anthropic confirmed the model's existence on April 13, 2026, weeks after its presence was first surfaced via a late-March website leak, and announced it would not release Claude Mythos Preview publicly, citing the model's autonomous offensive capability rather than regulatory or safety-threshold constraints. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly warned bank executives about the threat posed by the model in the days preceding the institute's formal disclosure.

The UK AI Safety Institute recently evaluated Mythos Preview against The Last Ones (TLO), a complex corporate network attack simulation. Mythos Preview succeeded in 3 out of 10 attempts, completing an average of 22 out of 32 steps, outperforming Claude Opus 4.6, which averaged only 16 steps. The evaluation required significant computational resources, with sessions consuming up to 100 million tokens, and performance improved with additional computing power.

Beyond simulation tasks, Mythos Preview autonomously identified thousands of zero-day vulnerabilities across major operating systems, including long-standing flaws that had remained undetected for years. Anthropic's internal tests found that engineers could direct the model to find remote code execution flaws overnight, yielding complete exploits by morning. In tests targeting privilege escalation, over half of the 40 vulnerabilities resulted in successful exploit chains without human intervention. However, experts have cautioned against framing Mythos as a "super-hacker," noting that while its capabilities are real, the confirmed number of severe vulnerabilities is much smaller, emphasizing the importance of accurate risk assessment in enterprise AI governance.

The Claude news came as Anthropic, a private company, is now valued at about $61.5 billion, offering no direct equity exposure for public investors. Its valuation nonetheless influences public companies' AI credibility. Palantir Technologies, a competitor for AI contracts, has faced selloff pressure due to AI safety concerns, but the decision to withhold Claude Mythos removes a competitive threat. However, increased scrutiny raises regulatory hurdles for all AI vendors, including Palantir's AIP platform. In contrast, CrowdStrike Holdings and Palo Alto Networks, as partners in Project Glasswing, stand to gain from enhanced vulnerability detection through Mythos. Companies like Amazon and Google, which support Anthropic's computing needs, will see neutral to positive effects from the Mythos disclosure, indicating ongoing compute demand.

Investors must consider the impact of evolving AI regulations on stock valuations as formal oversight is expected to develop through 2026. Monitor the patch velocity for vulnerabilities identified by Mythos Preview, as Anthropic reports over 99% remain unpatched. The timeline for coordinated disclosure with Project Glasswing partners will influence whether the narrative remains focused on defense or shifts to an AI-enabled security incident if an exploit occurs. Any incident involving a Glasswing partner could impact the stock prices of CRWD, PANW, or the affected operator.

On the regulatory side, look for comments from the EU AI Office on whether Mythos Preview's capabilities require mandatory conformity assessments under the AI Act, which could set a precedent for frontier cybersecurity models. The upcoming earnings reports for AMZN, GOOGL, and MSFT in late April and early May will be key for insights into the economics of the Glasswing partnership and Mythos-related compute demand. A critical uncertainty remains how quickly hostile state actors can reach the capability threshold demonstrated by Mythos Preview, an issue that will not be resolved before the next annual capability review in 2027.

Anthropic | Investing.com | 10d ago

Anthropic Mythos And Embracing The AI 'Bugmageddon'

This is the online edition of The Wiretap newsletter, your weekly digest of cybersecurity, internet privacy and surveillance news.

Anthropic was blunt in its assessment. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," the $380 billion company proclaimed in a blog post last week. It had just released its Mythos model, which it said was able to find and exploit bugs in "every major operating system and web browser" available, all on its own. One worrying example: Mythos discovered a previously unknown flaw in the widely used OpenBSD open source operating system that had been sitting there for 27 years.

Anthropic isn't the only one to have come to the conclusion that AI has reached its "singularity" moment in cybersecurity, where machines outclass people at hacking. Last month, when Israeli startup Tenzai's AI tools competed in a series of elite hacking competitions, the models outperformed over 99% of human competitors. Andrea Michi, a former DeepMind researcher and cofounder of AI startup Depthfirst, warned last month that AI was close to acquiring "superhuman" hacking skills.

The fear that a wave of AI-powered cyberattacks is coming has spawned hyperbolic neologisms like "bugmageddon" and "vulnpocalypse." But in the cyber community, there's a growing consensus that the prowess of AI hackers should be celebrated. For the period in which non-malicious companies are the ones in control of powerful AI, however long that lasts, they can start patching as many flaws as AI discovers. That's one reason Anthropic set up Project Glasswing, giving Mythos to a select group of 40 major tech companies so they could find vulnerabilities before cybercriminals do. "That's what excites me most," says Daniel Cuthbert, a cybersecurity expert and senior fellow at the U.K.-based defense think tank RUSI. "If we see vendors finally start fixing stuff with this being a threat, then the internet is better off."

There's no doubt that companies need to act. On Friday, Anthropic said organizations should prepare for bigger backlogs of vulnerabilities that need addressing and advised using AI to find as many bugs as possible in a product before shipping it. The Cloud Security Alliance, a nonprofit that surveyed 250 cybersecurity executives about how to protect against AI hacking, said that companies should start using AI to do regular automated scans for vulnerabilities and "introduce AI agents to the cyber workforce across the board." Many have already started, including the White House, which is reportedly leading a review of critical infrastructure systems that could be affected by the new AI models' capabilities, according to the Wall Street Journal.

Some believe the most pertinent near-term impact will be to increase the workload for already overloaded IT teams, according to Jeremiah Grossman, CEO of Root Evidence, a cybersecurity company that helps clients determine the importance of a given vulnerability. He tells Forbes that companies are already struggling to adequately prioritize patching bugs, leading to significant backlogs. A massive influx of AI-identified vulnerabilities will only "add insult to injury," he says. "Now we're going to be even more buried? Cool." They may also be a distraction, given that Grossman claims only 10-20% of real-world attacks begin with an exploit of a software weakness. Most use phishing or social engineering to breach a computer network.

It's still not clear just how good Mythos will be in real-world scenarios. So far, Anthropic hasn't presented any findings that indicate the AI can break into well-defended systems, as noted by the AI Security Institute, a research organisation within the U.K. government's Department for Science, Innovation and Technology. It wrote in an assessment published Monday that it had tested Mythos and found that while it "can exploit systems with weak security posture," it hadn't been tasked with breaking through security protections like firewalls. Grossman predicted the vast majority of security teams won't see an impact in the next few years. That should be enough time for defenders to get a head start on attackers.

Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.

THE BIG STORY

GTA Developer Rockstar Games Hacked

For the second time in three years, hackers have stolen data from Rockstar Games, the developer of the hugely popular Grand Theft Auto series. The hackers threatened to release Rockstar information if a ransom wasn't paid, though that hasn't happened so far. The company sought to downplay the severity of the breach, saying only "non-material company information" was stolen.

Stories You Have To Read Today

After myriad lawsuits and reports of child grooming on its platform, Roblox announced two new types of age-based accounts for younger users: Roblox Kids for children between five and eight, and Roblox Select for those between nine and 15. "When they roll out in early June, these accounts will more closely align content access, communication settings and parental controls with a user's age," the company wrote in a blog post.

The Trump administration is trying to unmask a Reddit user who criticized ICE and has ordered Reddit to appear before a grand jury in Washington D.C. to provide information on the person's identity, according to The Intercept. Prosecutors use such grand jury subpoenas to compel witnesses or companies to provide information so they can determine if there is probable cause to charge someone with a crime.

The FBI can find deleted Signal messages on people's iPhones because of the way Apple's operating system stores notification information, 404 Media reports.

Winner of the Week

The Department of Justice got a court order to remove Russian spies from compromised home and business routers made by the Chinese company TP-Link, allowing the FBI to send an update to affected devices to change settings and harden security. According to the DOJ, Russian agents working for the GRU intelligence agency abused their illegal access to the routers by intercepting emails and passwords for a range of targets. They included "individuals in the military, government, and critical infrastructure sectors."

Loser of the Week

A group of civil liberties organizations has called on Meta to abandon plans to introduce facial recognition into its Ray-Ban and Oakley smart glasses, warning it will embolden stalkers. Previous reports in the New York Times said Meta had been testing the feature, though the company told Wired, "If we were to release such a feature, we would take a very thoughtful approach before rolling anything out."

Anthropic | Forbes | 10d ago

BoE's Bailey sees major cybersecurity risks in new Anthropic model

By William Schomberg and Andy Bruce

April 14 (Reuters) - Central banks and financial regulators must quickly understand the implications of a new artificial intelligence model that could pose major cybersecurity dangers, Bank of England Governor Andrew Bailey said on Tuesday.

"It would be reasonable to think that the events in the Gulf are the most recent challenge to us in this world, until, I think it was last Friday, you wake up to find that Anthropic may have found a way to crack the whole cyber risk world open," Bailey said at an event at Columbia University in New York.

Anthropic's Mythos product has drawn warnings from cyber experts about its potential to supercharge complex cyberattacks, which could challenge the banking industry and its existing technology systems. Regulators wanted to "work out what this actually means," Bailey said. "The issue is: to what extent is this new version of the product going to be able to, in a sense, identify vulnerabilities in other systems which can be exploited for cyberattack purposes."

He said cyber risks had risen up the list of regulators' concerns most rapidly in recent years. "It's the one that never goes away. You have to keep mitigating it, but the threat actors will move on, so we have to deal with it," Bailey said.

He dedicated most of Tuesday's event to discussing the issue of central banks' operational independence, which he said was "not robust enough" when it came to matters of financial stability. Bailey argued that monetary and financial stability policy - often depicted as separate issues or sometimes even at odds with each other - should be viewed together within an overarching objective of protecting the value of money. While monetary policy is defined by numerical inflation targets, financial stability is harder to grasp, leading to a distinction between the two, Bailey said. "This is important because independence in respect of financial stability is otherwise not as robust, and I would argue not robust enough," Bailey said in his speech.

His remarks come as central banks on both sides of the Atlantic face increasing levels of political pressure, albeit to differing degrees. In the United States, President Donald Trump has called for lower interest rates and has repeatedly chastised Fed Chair Jerome Powell. In Britain, finance minister Rachel Reeves has pushed regulators including the BoE to give greater weight to economic growth when making decisions. Bailey said financial stability cuts across private interests in the financial system, as well as governments seeking to boost economic growth by loosening regulation to increase lending - particularly when memories of past crises fade.

Much as monetary policy aims to protect the real value of money, Bailey said financial stability policy protects trust in money, and that the two should be seen as complementary. "I see merit in creating a single overarching narrative with a strong focus on the value of money. It would remove descriptions of financial stability such as 'tangential' or 'in conflict'," Bailey said.

(Additional reporting by Suban Abdulla; Editing by Andrea Ricci)

Anthropic | 1470 & 100.3 WMBD | 10d ago

Crypto Markets Predict Anthropic's Valuation Ahead of IPO

Synthetic tokens offer price exposure only, with no ownership rights.

Tokenized pre-IPO Anthropic shares trading on Jupiter now imply a market capitalization of $851 billion, more than double the company's last official funding valuation. The synthetic tokens, launched via PreStocks on the Solana-based DEX aggregator, climbed from roughly $122 per share in October 2025 to approximately $900 by April 14, 2026.

Secondary Markets Price Anthropic Far Above Its Last Funding Round

Anthropic closed a $30 billion Series G round in February 2026 at a $380 billion post-money valuation. The gap between that figure and the $851 billion implied on Jupiter reflects aggressive investor positioning ahead of a potential IPO. Traditional secondary platforms echo the trend: shares on Hiive, a major pre-IPO marketplace, traded above $849 on April 14, closely matching the on-chain price.

The PreStocks tokens are structured instruments backed 1:1 by SPV exposure to actual Anthropic shares. Holders gain price exposure but receive no voting rights, dividends, or legal ownership in the company, much like what Bitget is doing with SpaceX pre-IPO exposure.

AI IPO Wave Looms Over Public Markets

Anthropic is reportedly in discussions for a Q4 2026 listing that could raise over $60 billion. Goldman Sachs and JPMorgan Chase are among the banks competing for underwriting roles. It is not the only AI giant approaching public markets. SpaceX filed confidentially with the SEC in early April, targeting a valuation above $1.7 trillion. OpenAI is also preparing a listing at roughly $1 trillion. Combined, these three debuts could introduce more than $3 trillion in new market capitalization, a volume that would dwarf total US IPO proceeds over the past decade.
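As a rough check on the figures above, all of which come from the article itself, here is an illustrative back-of-the-envelope comparison of the on-chain implied valuation with the February Series G round:

```python
# Rough check of the valuation gap described above; all inputs are the article's figures.
series_g_valuation_bn = 380         # February 2026 Series G post-money, $bn
implied_onchain_valuation_bn = 851  # implied by PreStocks tokens on Jupiter, $bn

token_price_oct_2025 = 122
token_price_apr_2026 = 900
hiive_price_apr_2026 = 849

print(f"Premium over Series G: {implied_onchain_valuation_bn / series_g_valuation_bn:.2f}x")  # ~2.24x, "more than double"
print(f"Token gain since October: {token_price_apr_2026 / token_price_oct_2025 - 1:.0%}")      # ~638%
print(f"On-chain vs. Hiive spread: {(token_price_apr_2026 - hiive_price_apr_2026) / hiive_price_apr_2026:.1%}")  # ~6%
```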

SpaceX, Anthropic | BeInCrypto | 10d ago

Figma and Wix shares tumble as Anthropic targets AI web design market

Investing.com -- Figma (NYSE:FIG) fell 6% Tuesday after The Information reported that Anthropic is preparing to launch an AI-powered tool for designing websites and presentations, potentially as soon as this week. Adobe Systems (NASDAQ:ADBE) fell 2.7%, Wix dropped 4.7%, and GoDaddy declined 3% on the same report.

According to The Information, Anthropic's upcoming design tool aims to help both technical and non-technical users create presentations, websites, landing pages and products using natural language prompts. A person with knowledge of the products said the tool would pose a threat to startups like presentation-maker Gamma and AI design tool Google Stitch.

The report also indicated that Anthropic is preparing its next flagship model, Claude Opus 4.7, which could be released alongside the design tool. However, Opus 4.7 is not Anthropic's most advanced model. That distinction belongs to Claude Mythos.

Anthropic | Yahoo7 Finance | 10d ago

SpaceX Could Be the Biggest IPO in History. Here's What Happened to the Last 5 Mega-IPOs.

At 108 times sales, SpaceX would debut at nearly four times Meta's IPO valuation, despite growing more slowly and losing $5 billion last year.

SpaceX is reportedly targeting a valuation potentially exceeding $2 trillion and a roughly $75 billion raise when it goes public later this year -- about 2.5 times the current record set by Saudi Aramco's $29.4 billion offering in 2019. The company dominates global space launch, and its Starlink segment is generating nearly $11 billion in annual revenue. This is a real, cash-generating business with a genuine moat.

So should you invest? Let's look at what happened to the last five "mega-IPOs" that have raised $15 billion or more. The track record isn't great. And that's putting it lightly. The only stock to outperform the market over the long haul was Meta Platforms. And even that was down considerably at the one-year mark.

There's another issue: valuation. SpaceX would be by far the most expensive stock to make a debut on this scale. If the targeted $2 trillion-plus market cap is achieved, the company's $18.5 billion in revenue would imply a price-to-sales (P/S) ratio of 108 -- almost four times as expensive as Meta shares when they hit the market. Consider that at the height of the AI boom in late 2023 -- a time when Nvidia was tripling its revenue year over year -- Nvidia shares topped out at a P/S around 40.

The historical record is pretty clear: Most IPOs of this scale just don't pan out for investors -- short-term and long. Considering this and the fact that the stock will trade with such an extreme multiple, I can't recommend you buy SpaceX at IPO. If the share price falls considerably after the IPO, I might consider it. But I would not jump in at the beginning.
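As a quick sanity check on the multiples quoted above, using only the article's own figures (a $2 trillion target market cap, $18.5 billion in revenue, and a $75 billion raise against Aramco's $29.4 billion record), a back-of-the-envelope calculation looks like this:

```python
# Back-of-the-envelope check on the multiples quoted above; inputs are the article's
# figures, expressed in billions of dollars.
target_market_cap = 2_000   # reported $2 trillion-plus target valuation
annual_revenue = 18.5       # reported annual revenue
planned_raise = 75          # reported size of the offering
aramco_record = 29.4        # Saudi Aramco's 2019 record offering

print(f"Implied price-to-sales: {target_market_cap / annual_revenue:.0f}x")  # ~108x
print(f"Raise vs. Aramco's record: {planned_raise / aramco_record:.2f}x")    # ~2.55x, "about 2.5 times"
```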

SpaceX | The Motley Fool | 10d ago

Top Unconventional Tips for Saving Money on Gas

High gas prices often trigger a wave of articles offering tips to save money at the pump. However, many just recycle well-known advice. Tips like driving at consistent speeds or combining errands are often reiterated. Readers seek innovative and unconventional strategies to reduce fuel costs.

Unconventional Tips for Saving Money on Gas

At El-Balad, we believe it's time to explore unique approaches to fuel savings. Below are some unconventional yet intriguing ideas to consider.

1. Home Distillation for Fuel

Recently, a U.S. appeals court ruled that the centuries-old ban on home distillation is unconstitutional. This change opens up new possibilities for saving on gas expenses.

* Making homemade liquor is now legal.
* With proper mechanical adjustments, your vehicle could potentially run on ethanol derived from home-distilled spirits.

2. DIY Biofuel Production

Many gasolines already contain ethanol. By distilling your own spirits, you might convert them into biofuel. Here's how to get started:

* Purchase a still online for approximately $100.
* Gather necessary ingredients like corn and yeast.
* Ensure access to clean water for the distillation process.

While the initial investment may seem high, doing the work yourself can lead to substantial savings on fuel costs.

3. Rethink Your Fuel Source

Another unconventional suggestion is to explore electric vehicles (EVs). With rising fuel prices, many individuals are considering EVs as a long-term investment. Factors influencing this decision include:

* Current and projected gas prices.
* Maintenance costs associated with traditional vehicles.
* Incentives and rebates for EV purchases.

4. The Power of Community

Engaging with a community of car enthusiasts can provide unstructured advice. Connect with others who share their experiential tips for managing fuel consumption. This could include:

* Sharing insights about fuel-efficient driving habits.
* Recommending alternative routes to minimize mileage.
* Hosting vehicle maintenance workshops for better fuel efficiency.

5. Fuel-Saving Technologies

Look into technology that can optimize fuel efficiency. From mobile applications to smart gadgets, the right tools can help track your driving habits and suggest improvements.

The quest for unconventional methods to save money on gas encourages a creative mindset. By exploring alternative fuel sources, investing in technology, and engaging with communities, drivers can find new ways to combat high fuel prices. It's time to rethink how we approach fuel savings, moving beyond the standard advice available today.

Unconventional | El-Balad.com | 10d ago

Polymarket Probes Startups Offering Copy Trading Tools Linked to Insider Activity

Polymarket has launched an audit of third-party startups building "copy-trading" applications on its platform, following concerns that these tools may enable users to replicate trades based on nonpublic information. The move comes as the prediction market operator faces growing scrutiny over potential insider activity.

The startups under review develop tools that allow users to track and mirror the trading behavior of high-performing accounts. In some cases, these apps flag unusually large or well-timed bets that could indicate access to privileged information, raising questions about market fairness and transparency. The audit reflects mounting pressure on Polymarket to enforce clearer rules after previously supporting external developers through its Builders Program. That initiative encouraged startups to build on top of its infrastructure, but some of those products now sit at the center of compliance concerns.

Copy-trading apps aggregate data from active traders and present curated lists of accounts with strong performance or suspicious activity patterns. Users can then either manually replicate trades or automate the process through bots, effectively outsourcing decision-making to observed market participants. According to reporting, these tools often highlight traders with consistent winning streaks or identify trades placed at moments that suggest informational advantage. The apps generate revenue by charging subscription or access fees for these insights and automation features.

The model has contributed to a sharp increase in activity on Polymarket, with copy-trading tools reportedly adding hundreds of millions of dollars in trading volume. While this boosts liquidity, it also raises the risk that information asymmetry becomes amplified across the platform. "These copy-trading apps give their customers lists of Polymarket traders with good winning streaks, or flag unusually large or oddly timed bets that may be based on confidential information," The Information reported.

Some of the startups involved have taken an aggressive approach to positioning their services. Polycool, one of the audited projects, advertises a "guide to Polymarket insider trading" on its website, framing prediction markets as structurally different from traditional financial markets. "This isn't the stock market, where using nonpublic information will land you in jail," Polycool states. "The rules for decentralized prediction markets are a completely different game." Another startup, Kreo, promotes tools designed to help users "find insiders before the rest." These messaging strategies highlight a broader issue: the absence of clearly enforced norms around information use in decentralized or quasi-regulated trading environments.

Both startups were part of Polymarket's Builders Program, which launched in November to expand the platform's ecosystem. The program enabled third-party developers to build applications on top of Polymarket's data and execution layer, but oversight of these tools appears to be tightening.

Polymarket and its main competitor Kalshi have both faced scrutiny over insider trading practices, particularly as volumes and visibility increase. In response, Polymarket introduced clearer rules and enforcement measures last month, signaling a shift toward tighter governance. The current audit suggests that internal policies alone may not be sufficient if third-party tools enable indirect circumvention.
As prediction markets evolve, regulators and platforms are likely to focus more closely on how information flows through the ecosystem, not just on direct trading behavior. The outcome of the audit could influence how prediction market platforms structure developer access going forward. Stricter controls, revised data permissions, or limits on copy-trading functionality may emerge as platforms attempt to align growth with compliance expectations.
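The flagging behavior described above (unusually large positions, oddly timed bets, accounts on long winning streaks) boils down to a few simple heuristics. The sketch below is purely illustrative; it is not code from Polycool, Kreo, or any other named startup, and the thresholds are invented for the example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Trade:
    account: str
    market: str
    size_usd: float
    placed_at: datetime
    resolves_at: datetime

# Thresholds are invented for illustration; a real tool would tune these empirically.
LARGE_TRADE_USD = 50_000
LATE_WINDOW = timedelta(hours=6)
STREAK_WIN_RATE = 0.90

def flag_reasons(trade: Trade, account_win_rate: float) -> list[str]:
    """Return the reasons a copy-trading tool might surface this trade to subscribers."""
    reasons = []
    if trade.size_usd >= LARGE_TRADE_USD:
        reasons.append("unusually large position")
    if trade.resolves_at - trade.placed_at <= LATE_WINDOW:
        reasons.append("placed shortly before resolution")
    if account_win_rate >= STREAK_WIN_RATE:
        reasons.append("account has a long winning streak")
    return reasons
```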

Polymarket | FinanceFeeds | 10d ago

The OpenAI-Anthropic Cold War Comes to Illinois

Despite its best efforts, the Trump administration has been unable to implement a moratorium preventing states from passing laws regulating AI companies. Thus far, most states have used their authority to create guardrails that AI firms must comply with. But in Illinois, OpenAI has thrown its weight (and lobbying budget) behind a bill that would grant it legal protection from large-scale harm. Unfortunately for it, another frontier AI lab has put its thumb on the other side of the scale. According to a report from Wired, Anthropic has also decided to get involved in local politics and is lobbying against the bill that OpenAI has been pushing for.

The bill at the center of the power struggle between AI giants is Senate Bill 3444, the Artificial Intelligence Safety Act. The legislation was authored by Democratic Senator Bill Cunningham, and while the incredibly generic name would make one think that the goal is to establish safety standards for AI, the law would actually offer safety to AI companies that might face litigation. Effectively, it would offer frontier AI companies a legal shield preventing them from being held responsible for large-scale harms caused by their AI models, including death or serious injury of 100 or more people or at least $1 billion in property damage.

OpenAI has been trying to get out in front of laws that would create any additional burden on AI companies -- a policy that has almost certainly been hastened by the fact that the company has been subject to several wrongful death lawsuits from families who lost a family member to suicide following conversations with ChatGPT. The company also publicly backed a piece of AI safety legislation in California that, while it added transparency requirements for frontier model makers, did not implement any liability laws that the companies could face. The legislation in Illinois goes a step further than just not establishing liability risk, but actually shields companies from it.

Per Wired, Anthropic has taken issue with that approach and has been working behind the scenes to either alter or kill the bill entirely. "We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability," Cesar Fernandez, Anthropic's head of US state and local government relations, told the publication.

Anthropic has been much more aggressive than OpenAI in advocating for stricter safety standards for AI companies. The two companies were previously on opposite ends of an AI safety bill in California (OpenAI eventually offered its support for that law, but only after it was pretty clear it was going to pass). Anthropic is backing a competing AI safety bill in Illinois, SB 3261, that would, among other things, require AI firms to create public safety and child protection plans that could be audited to determine their effectiveness.

While some of Anthropic's pro-safety positions come down to marketing, the idea that AI companies should at least be subject to some scrutiny if someone were to, let's say, use an AI model to develop a chemical weapon, does not exactly seem like a radical act of self-flagellation.
It seems like a pretty reasonable expectation of accountability, and it seems particularly wild that a company like OpenAI would express concerns over the existential threats posed by the development of its technology while also pushing to not be liable should any of those doomsday outcomes come to fruition. We've moved beyond the "At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus" stage of AI to the "we are issuing our support for the No One Is Responsible For The Harms Of The Torment Nexus Act."

Anthropic | Gizmodo | 10d ago

A Bug, a Bounty, and a Shakedown: How Kraken's $3 Million Crypto Theft Exposed the Dark Side of White-Hat Hacking

A security researcher found a critical flaw in one of the world's largest cryptocurrency exchanges. What should have been a routine bug bounty report turned into something far more troubling -- an alleged extortion scheme involving millions of dollars in stolen digital assets, a standoff between a major exchange and a blockchain security firm, and a public reckoning over where ethical hacking ends and criminal behavior begins.

The story, which first erupted in mid-2024 and has continued to reverberate through the crypto industry, centers on Kraken, the San Francisco-based exchange, and CertiK, a prominent blockchain security company. It's a case that has forced uncomfortable questions about trust, disclosure norms, and the increasingly blurred line between finding vulnerabilities and exploiting them. Here's what happened.

In early June 2024, Kraken received what appeared to be a legitimate bug bounty submission. A security researcher reported a critical vulnerability that allowed users to artificially inflate their balances on the platform. The flaw was real. And it was serious. According to Kraken's chief security officer, Nick Percoco, the bug enabled an attacker to initiate a deposit, receive credit for funds before the transaction was fully confirmed on the blockchain, and then withdraw real money against that phantom balance. As Percoco described it on X (formerly Twitter), the vulnerability could let "any attacker" print assets on Kraken at will, as reported by Yahoo Finance.

Kraken's security team moved fast. Within 47 minutes of confirming the bug, they had a fix deployed. No client funds were at risk, the company said. That should have been the end of it. It wasn't.

When Kraken investigated further, they discovered that the researcher who reported the bug hadn't simply tested the flaw with a minimal proof of concept, which is standard practice in responsible disclosure. Instead, three accounts linked to the researcher had exploited the vulnerability over several days, withdrawing approximately $3 million worth of cryptocurrency from Kraken's treasury. Percoco said the initial researcher had shared the bug with two other individuals, who then used it to extract far larger sums. The first test transaction involved just $4 in crypto -- enough to demonstrate the flaw. But the subsequent withdrawals dwarfed that figure by orders of magnitude.

When Kraken asked the researchers to return the funds and provide details of their activity so the exchange could conduct a proper accounting, the response was startling. According to Percoco, the researchers refused to return the crypto unless Kraken agreed to pay a bounty equivalent to what they claimed the bug could have cost the exchange if fully exploited. That demand, Kraken said, amounted to extortion -- not a bug bounty negotiation. "This is not white-hat hacking, it is extortion," Percoco wrote on X, in a post that quickly went viral across crypto circles.

The identity of the firm behind the researchers didn't stay secret for long. CertiK, a well-known blockchain auditing and security company, publicly identified itself as the party involved. In a statement posted to X, CertiK pushed back forcefully on Kraken's characterization, framing the withdrawals as necessary testing to determine the full scope of the vulnerability. CertiK said it had found that Kraken's system allowed millions of dollars in fabricated deposits to be made and converted into real crypto, and that the exchange's internal controls failed to catch any of this activity over a multi-day period.

CertiK also alleged that Kraken had threatened its employees. "Kraken's security team has THREATENED individual CertiK employees to repay a MISMATCHED amount of crypto in an UNREASONABLE time without providing repayment addresses," the firm wrote, with emphasis in the original post. CertiK said it would transfer the funds to "an account that Kraken can access" and rejected any suggestion that its actions constituted extortion.

The public spat was extraordinary. Two established players in the crypto world -- one an exchange handling billions in daily volume, the other a security firm that has audited hundreds of blockchain projects -- were essentially accusing each other of bad faith in front of the entire industry.

Bug bounty programs exist for a reason. They create a structured, legal framework for security researchers to report vulnerabilities in exchange for financial rewards, rather than exploiting those flaws for personal gain or selling them to malicious actors. Kraken operates one of the more generous programs in the crypto space, with payouts ranging from $500 for minor issues to $1.5 million for critical vulnerabilities. The program, like most, comes with clear rules: test with minimal amounts, don't withdraw more than necessary to demonstrate the bug, return any extracted funds, and cooperate with the company's security team.

By Kraken's account, CertiK violated all of those norms. By CertiK's account, the extended testing was justified because the vulnerability was so severe that a limited proof of concept wouldn't have captured its true impact.

The crypto industry largely sided with Kraken. Security researchers and industry commentators noted that withdrawing $3 million goes well beyond what any reasonable interpretation of responsible disclosure would permit. Several pointed out that if a traditional penetration tester extracted millions from a bank's system during a security audit, they'd face criminal charges regardless of whether they eventually returned the money.

And the legal implications are real. Kraken said it treated the incident as a criminal matter and referred it to law enforcement. Under U.S. law, unauthorized access to computer systems -- even when a vulnerability is discovered in the course of legitimate research -- can trigger charges under the Computer Fraud and Abuse Act if the researcher exceeds the scope of authorized testing. The distinction between finding a bug and exploiting it is not a gray area in most legal frameworks. It's a bright line.

The incident also raised pointed questions about CertiK's own business model. The firm has built a significant reputation as an auditor of smart contracts and blockchain protocols, charging projects substantial fees to review their code for vulnerabilities. If a security firm's researchers are willing to extract millions from an exchange during what they describe as testing, what does that say about the firm's internal controls and ethical standards? The question hung over the entire episode.

CertiK, for its part, maintained that it acted in good faith throughout and that all funds were eventually returned. The firm said its researchers identified a vulnerability that could have resulted in hundreds of millions in losses had it been discovered by a genuinely malicious actor, and that Kraken should have been grateful for the disclosure rather than hostile. But gratitude is hard to muster when $3 million walks out the door.

The fallout extended beyond the two companies. The incident prompted renewed discussion across the crypto industry about the adequacy of existing bug bounty frameworks. Some researchers argued that bounty payouts are often insultingly low relative to the severity of the vulnerabilities discovered, creating perverse incentives for researchers to seek alternative ways to monetize their findings. Others countered that the solution to low bounties isn't theft -- it's negotiation, public pressure, or simply walking away.

More recently, the broader question of exchange security has remained in the spotlight. Kraken has continued to invest in its security infrastructure, and the exchange has not reported any subsequent incidents of this nature. But the episode served as a reminder that even major platforms can harbor critical flaws -- and that the people who find those flaws don't always have the purest intentions.

The crypto industry's relationship with security researchers has always been complicated. The decentralized ethos that animates much of the space prizes independence, skepticism of authority, and a certain hacker mentality. Bug bounties are supposed to channel those impulses productively. When they work, everyone benefits: the company patches a flaw, the researcher gets paid, and users are protected. When they don't work -- when the boundaries of acceptable conduct are crossed -- the result is a mess that damages trust on all sides.

Kraken's experience is a cautionary tale for exchanges and security firms alike. For exchanges, it underscores the need for airtight deposit verification systems and real-time monitoring that can catch anomalous activity before millions disappear. For security firms, it's a stark reminder that the line between hero and villain in cybersecurity is thinner than it looks -- and that crossing it, even with good intentions, can carry severe consequences.

So where does this leave the industry? The legal proceedings, if any materialize publicly, could set important precedents for how bug bounty disputes are handled in the crypto world. The informal norms that have governed responsible disclosure for decades are under strain, and the Kraken-CertiK episode exposed just how fragile those norms can be when real money -- not just reputation -- is on the line.

For now, the $3 million has been returned. The bug has been fixed. But the questions the incident raised haven't gone away. And they won't anytime soon.
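For readers who want the flaw described above in concrete terms, the bug class is "crediting a deposit before the underlying transaction is confirmed on-chain." The sketch below is entirely hypothetical; Kraken's code has never been published, and the class and function names are invented. It simply contrasts the flawed pattern with the obvious fix:

```python
# Hypothetical illustration of the "credit before confirmation" bug class described
# above. Names and structure are invented; this is not Kraken's actual code.

class Ledger:
    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def credit(self, account: str, amount: float) -> None:
        self.balances[account] = self.balances.get(account, 0.0) + amount

def handle_deposit_buggy(ledger: Ledger, account: str, amount: float, confirmations: int) -> None:
    # BUG: the balance is credited immediately, before the deposit has any on-chain
    # confirmations, so an attacker can withdraw real funds against a deposit that never settles.
    ledger.credit(account, amount)

def handle_deposit_fixed(ledger: Ledger, account: str, amount: float, confirmations: int,
                         required_confirmations: int = 6) -> None:
    # Fix: credit the balance only once the transaction is sufficiently confirmed on-chain.
    if confirmations >= required_confirmations:
        ledger.credit(account, amount)
```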

Kraken | WebProNews | 10d ago

Anthropic Brings Repeatable Routines to Claude Code, Turning AI Into a Persistent Engineering Teammate

For months, developers using Anthropic's Claude Code have operated in a familiar loop: open a terminal, type a natural-language instruction, watch the AI execute, then repeat. Every session started from scratch. Every workflow had to be re-explained. That's about to change.

Anthropic has introduced a feature called Routines to Claude Code, its command-line coding agent, giving developers the ability to define repeatable, multi-step workflows that the AI can execute on command. Think of it as saved procedures for an AI pair programmer -- scripts written in plain English rather than Bash, but with the flexibility to adapt to context each time they run. The feature, first detailed by 9to5Mac, represents a quiet but significant shift in how AI coding assistants are designed. Rather than optimizing solely for one-off queries, Anthropic is building infrastructure for persistent, structured collaboration between developers and AI agents. The implications stretch well beyond convenience.

How Routines Work -- and Why They Matter

At its core, the Routines feature lets developers create markdown files that describe a sequence of steps Claude Code should follow. These files live inside a project's directory structure, making them version-controllable, shareable across teams, and portable between machines. A routine might instruct Claude to pull the latest changes from a Git branch, run a test suite, identify failing tests, propose fixes, and then open a pull request -- all triggered by a single command.

The syntax is deliberately simple. Anthropic opted for natural language with lightweight structural conventions rather than inventing a new DSL. This matters. It means any developer can write a routine without learning a new programming language, and any developer can read one and immediately understand what it does.

But here's where it gets interesting. Routines aren't rigid scripts. Claude Code interprets them with contextual awareness. If a step says "fix any type errors in the changed files," the AI determines which files changed, what the type errors are, and how to resolve them -- dynamically, each time. The routine provides the skeleton. Claude provides the intelligence.

This sits at an important intersection. Traditional automation tools like Makefiles, shell scripts, and CI/CD pipelines are deterministic: they do exactly what you tell them, every time. AI agents are flexible but unpredictable: they interpret intent but may drift. Routines attempt to bridge that gap -- giving developers the repeatability of automation with the adaptability of an AI agent.

For engineering teams managing large codebases, the appeal is obvious. Code review checklists, deployment preparation steps, onboarding procedures for new contributors, bug triage workflows -- all of these involve repetitive sequences of tasks that require judgment at each step. Routines formalize the sequence while delegating the judgment.

And they're composable. One routine can reference another. A "prepare release" routine might invoke a "run full test suite" routine, then an "update changelog" routine, then a "tag and push" routine. Nesting keeps individual routines focused and reusable.

Anthropic has also built in safeguards. Routines can include explicit checkpoints where Claude Code pauses and asks the developer for confirmation before proceeding. This is critical for high-stakes operations -- deploying to production, modifying database schemas, or making bulk changes across hundreds of files. The developer stays in the loop without having to babysit every step.
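To make that concrete, here is a purely illustrative sketch of what a composed routine with a checkpoint might look like. Anthropic has not published an exact routine syntax, so the file path, layout, and step wording below are invented; only the general shape (a plain-English markdown file of numbered steps, references to other routines, and an explicit confirmation checkpoint) reflects what is described above.

```markdown
<!-- Hypothetical file: .claude/routines/prepare-release.md -->
<!-- Illustrative only: the path, layout, and steps are invented for this example. -->

# Routine: prepare release

1. Run the "run full test suite" routine and summarize any failures.
2. If any tests fail, propose fixes and wait for my confirmation before applying them.
3. Run the "update changelog" routine for all changes since the last release tag.
4. Checkpoint: show me the changelog diff and the proposed version bump, and do not
   continue until I approve.
5. Tag the release and push the tag to the remote.
```

Even in a toy file like this, the properties the article emphasizes are visible: plain English, version control alongside the code, composition of smaller routines, and a human checkpoint before the high-stakes step.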
The developer stays in the loop without having to babysit every step. The feature ships as part of Claude Code's standard tooling, available to all subscribers on Anthropic's Max plan. No separate API costs for routine execution beyond normal token usage. The Competitive Context: Coding Agents Are Becoming Workflow Engines Anthropic isn't operating in a vacuum. The race to build the dominant AI coding assistant has intensified dramatically in 2026, and the competitive field is crowded. GitHub Copilot, backed by Microsoft and OpenAI, remains the most widely adopted tool, with deep integration into VS Code and GitHub's broader platform. Copilot has been expanding its own agent capabilities, including the ability to handle multi-file edits and respond to GitHub Issues autonomously. Google's Gemini Code Assist has similarly pushed into agentic territory, with tight integration into Google Cloud's developer tools. Then there are the startups. Cursor, the AI-native code editor, has attracted a devoted following among individual developers and small teams. Devin, from Cognition, positions itself as a fully autonomous software engineer. Poolside and Magic are building foundation models specifically for code generation. What distinguishes Anthropic's approach with Routines is the emphasis on developer control. Where some competitors lean toward full autonomy -- "just tell the AI what you want and walk away" -- Anthropic is betting that professional developers want structured collaboration. They want to define the process. They want checkpoints. They want to version-control their AI workflows the same way they version-control their code. This philosophy aligns with how Anthropic has positioned Claude more broadly: capable but controllable, powerful but transparent. The Routines feature is that philosophy made concrete in a developer tool. It also reflects a growing recognition across the industry that the value of AI coding tools isn't just in generating code. It's in managing the entire software development workflow. Writing code is perhaps 30% of what developers actually do. The rest is reading code, reviewing code, debugging, testing, deploying, documenting, communicating. Tools that only help with generation miss most of the job. Routines address this directly. A routine doesn't have to generate a single line of code. It could analyze a codebase for security vulnerabilities, summarize recent changes for a standup meeting, or audit dependency versions. The feature reframes Claude Code from "AI that writes code" to "AI that participates in engineering processes." So where does this go next? The logical extension is integration with external systems. Routines that can interact with Jira, Slack, Linear, or PagerDuty. Routines triggered automatically by webhooks -- a new PR opens, and Claude runs a predefined review routine. Routines that execute on a schedule, like a nightly codebase health check. Anthropic hasn't announced these capabilities yet. But the architecture of Routines -- markdown files, composable steps, checkpoint-based control flow -- is clearly designed to accommodate them. The foundation is being laid for something more ambitious than a coding assistant. It's the scaffolding for an AI-powered engineering operations layer. For now, the immediate impact will be felt by teams already using Claude Code in their daily workflows. 
The ability to encode institutional knowledge -- "here's how we do deployments," "here's our code review checklist," "here's how we handle hotfixes" -- into executable, AI-powered routines is genuinely useful. It reduces onboarding friction. It enforces consistency. And it frees senior engineers from repeatedly explaining processes that can be documented once and executed indefinitely.

Whether Routines become a standard feature that every AI coding tool eventually copies, or a differentiator that pulls developers toward Claude Code specifically, depends on execution. The concept is sound. The implementation, based on early reports, is clean and intuitive. But adoption will hinge on reliability -- on whether developers trust Claude Code to follow their routines accurately, consistently, and without unexpected behavior. Trust is the currency here. Not tokens.

Anthropic appears to understand that. The checkpoint system, the use of plain markdown, the decision to keep routines inside the project repository -- all of these are trust-building design choices. They give developers visibility and control. They make the AI's behavior auditable and reproducible. For an industry that's spent the last three years oscillating between AI hype and AI skepticism, that's exactly the right instinct.

Anthropic
WebProNews · 10d ago
Read update
Anthropic Brings Repeatable Routines to Claude Code, Turning AI Into a Persistent Engineering Teammate

SpaceX Could Be the Biggest IPO in History. Here's What Happened to the Last 5 Mega-IPOs.

At 108 times sales, SpaceX would debut at nearly four times Meta's IPO valuation, despite growing more slowly and losing $5 billion last year.

SpaceX is reportedly targeting a valuation potentially exceeding $2 trillion and a roughly $75 billion raise when it goes public later this year -- about 2.5 times the current record set by Saudi Aramco's $29.4 billion offering in 2019. The company dominates global space launch, and its Starlink segment is generating nearly $11 billion in annual revenue. This is a real, cash-generating business with a genuine moat.

So should you invest? Let's look at what happened to the last five "mega-IPOs" that raised $15 billion or more. The track record isn't great. And that's putting it lightly. The only stock to outperform the market over the long haul was Meta Platforms. And even that was down considerably at the one-year mark.

There's another issue: valuation. SpaceX would be by far the most expensive stock to make a debut on this scale. If the targeted $2 trillion-plus market cap is achieved, the company's $18.5 billion in revenue would imply a price-to-sales (P/S) ratio of 108 -- almost four times as expensive as Meta shares when they hit the market (a quick worked version of this math appears at the end of this article). Consider that at the height of the AI boom in late 2023 -- a time when Nvidia was tripling its revenue year over year -- Nvidia shares topped out at a P/S of around 40.

The historical record is pretty clear: most IPOs of this scale just don't pan out for investors, in either the short term or the long term. Considering this, and the fact that the stock would trade at such an extreme multiple, I can't recommend buying SpaceX at the IPO. If the share price falls considerably after the IPO, I might consider it. But I would not jump in at the beginning.

Johnny Rice has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Meta Platforms and Nvidia. The Motley Fool recommends General Motors. The Motley Fool has a disclosure policy.
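For readers who want to sanity-check the valuation comparison above, here is a quick back-of-the-envelope sketch. The SpaceX figures come from the reporting above; Meta's 2012 IPO valuation and trailing revenue are approximate public figures used only for illustration, not numbers from this article.

```python
# Back-of-the-envelope price-to-sales (P/S) comparison behind the
# "almost four times as expensive as Meta" claim. Meta's IPO figures
# below are approximate public numbers, not taken from this article.

spacex_market_cap_b = 2_000    # targeted valuation, in billions of dollars
spacex_revenue_b = 18.5        # reported revenue, in billions of dollars

meta_ipo_market_cap_b = 104    # Facebook's 2012 IPO valuation, approx., billions
meta_ipo_revenue_b = 3.7       # Facebook's 2011 revenue, approx., billions

spacex_ps = spacex_market_cap_b / spacex_revenue_b    # ~108
meta_ps = meta_ipo_market_cap_b / meta_ipo_revenue_b  # ~28

print(f"SpaceX implied P/S: {spacex_ps:.0f}")
print(f"Meta IPO P/S (approx.): {meta_ps:.0f}")
print(f"SpaceX vs. Meta: {spacex_ps / meta_ps:.1f}x")  # roughly 3.8x
```

None of this changes the argument; it simply makes the multiple-versus-Meta comparison explicit.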

SpaceX
NASDAQ Stock Market · 10d ago
Read update
SpaceX Could Be the Biggest IPO in History. Here's What Happened to the Last 5 Mega-IPOs.

Design tool stocks fall on Anthropic AI product rumor By Investing.com

Anthropic
Investing.com · 10d ago
Read update
Design tool stocks fall on Anthropic AI product rumor By Investing.com

Will Trump Greenlight Anthropic's Mythos After The Pentagon Fight? - Amazon.com (NASDAQ:AMZN)

Anthropic co-founder Jack Clark said the firm is discussing its next frontier model with the Trump administration despite a Pentagon contract dispute that led the agency to label the company as a supply chain risk.

Mythos Preview is an AI model that exposes software vulnerabilities, helping companies protect themselves from threats. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic noted.

A court ruling in the contract dispute handed the Trump administration a key interim victory in a closely watched legal fight over artificial intelligence and military use. The ruling allows the supply chain risk designation to remain in effect while litigation continues, though it does not resolve the case on its merits.

"We have a narrow contracting dispute, but I don't want that to get in the way of the fact that we care deeply about national security," Clark said at the Semafor World Economy event in Washington on Monday. "Our position is the government has to know about this stuff, and we have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy, but are going to have aspects to them which hit national security, equities, and other ones. So absolutely, we're talking to them about Mythos, and we'll talk to them about the next models as well," he continued.

Specific details regarding the discussions with the U.S. government -- including which agencies are involved -- have not been disclosed, according to Reuters.

Anthropic
Benzinga · 10d ago
Read update
Will Trump Greenlight Anthropic's Mythos After The Pentagon Fight? - Amazon.com (NASDAQ:AMZN)

SpaceX rocket launch lights up the sky over CSRA

AUGUSTA, Ga. (WRDW/WAGT) - A SpaceX rocket was visible across the CSRA on Tuesday morning after taking off in Florida. The rocket launched at 5:23 a.m. from Cape Canaveral to deploy 29 Starlink satellites in low Earth orbit. Starlink delivers internet service through a network of satellites. The launch created a classic glowing parabola shape frequently seen in SpaceX launches.

SpaceX
https://www.wrdw.com · 10d ago
Read update
SpaceX rocket launch lights up the sky over CSRA