
I Tested Perplexity vs. ChatGPT: Which Is Better in 2026?

Everyone's comparing AI chatbots -- but what happens when one of them is not a chatbot at all? That's what immediately intrigued me about Perplexity AI. It brands itself as an 'AI-powered answer engine' -- a citation-rich, intelligent alternative to Google. Yet, in practice, it often feels like a chatbot, delivering answers directly, albeit with a strong research backbone. I've been using it since it first launched in late 2022, right around the time ChatGPT exploded onto the scene. Needless to say, I found myself constantly switching between the two, testing ideas, writing drafts, and digging into research. And while they seem to serve different purposes on paper, in reality, there's a lot of overlap. So I finally did it: Perplexity vs. ChatGPT, head to head. Same prompts. Same tasks. Same expectations. From fact-checking to content creation, I wanted to see which one actually delivers more value when you're deep in the flow of work. And here's what happened, all with G2 data to back it up.

TL;DR: Most comparisons frame this as AI search engine vs. AI chatbot. That's outdated. In 2026, both tools answer questions. The difference is how they handle uncertainty. Perplexity shows its sources. ChatGPT shows its confidence.

* Choose Perplexity when you need to verify: real-time, accurate information, news, facts, sources, and research.
* Choose ChatGPT when you need to create: writing, creative tasks, coding, brainstorming, and summarizing.

Note: Both OpenAI and Perplexity AI frequently roll out new updates to these AI tools. The details below reflect the most current capabilities as of April 2026 but may change over time.

What is Perplexity?

Perplexity is an AI-powered research and search engine designed to deliver real-time, factual answers with source citations. It functions more like a smart alternative to Google, pulling information from the web and presenting concise summaries with direct links to sources for easy verification.
Here's a quick snapshot of what Perplexity is best for, its strengths, key features, and its writing style.

* Best for: Deep research, verifying facts, answering academic or technical questions, staying on top of current events, and producing summaries that link back to original sources.
* Strength: Transparency. It quickly surfaces up-to-date information and backs it with clear source attribution.
* Limitation: Conversational continuity. While it can answer follow-up questions, it struggles to hold a long, complex "thread" of conversation or remember specific context from 10 turns ago as well as ChatGPT does.
* Key features: Citations (answers link directly to the web pages they're drawn from, making it easy to trace and validate information) and model switching (in the Pro version, you can choose the "brain" of GPT, Claude, Gemini, or Perplexity's own models for your search).
* Writing style: Concise, neutral, and structured around reporting facts rather than storytelling.

For a closer look at Perplexity on its own, check out our full Perplexity AI review.

What is ChatGPT?

ChatGPT is a conversational AI assistant built for content creation, problem-solving, coding, and creative tasks through natural, human-like dialogue. Rather than focusing on citations by default, ChatGPT excels at brainstorming, drafting long-form content, coding workflows, and multi-step reasoning, maintaining context across extended conversations.

* Best for: Creativity, writing, coding assistance, and role-playing scenarios.
* Strength: Adaptability. It can change its tone, persona, and format based on your needs, and it can handle everything from creative writing and brainstorming to coding, reasoning, and extended conversations.
* Limitation: Responses can be surface-level or generic without highly specific prompting, especially regarding niche industry details.
It is also prone to "hallucinating" (confidently stating incorrect facts) when not using its web search tools.

* Key features: Multimodality. It can natively "see" images, "hear" your voice, and generate images (GPT Image 1.5 models) or charts, all in one conversation.
* Writing style: Fluid, engaging, and human-like (ranging from casual to highly formal).

If you're evaluating ChatGPT on its own, I've broken down its features, pricing, and real-world performance in my ChatGPT review.

Perplexity vs. ChatGPT: What's different and what's not?

After spending a lot of time with both tools, I started to notice a pattern. On the surface, they often feel similar -- both respond conversationally, both can tackle a wide range of prompts, and both are powerful AI assistants in their own right. But once I started using them for deeper research, writing help, and day-to-day tasks, the differences (and surprising similarities) became impossible to ignore.

What are the key differences between Perplexity and ChatGPT?

Think of Perplexity as a research librarian and ChatGPT as a creative writing coach: one delivers sources with precision, and the other crafts flow and structure. Here's how they stack up:

* Positioning and purpose: From what I've seen, ChatGPT is clearly designed as a general-purpose AI chatbot -- creative, conversational, and customizable. Perplexity, on the other hand, is an answer engine built for fast, accurate, and sourced responses. It feels more like a smart, AI-powered alternative to Google than a classic chatbot.
* Interface experience: ChatGPT feels like a chat app at its core. It's highly capable and designed for longer, multi-turn conversations. With chat history, custom GPTs (tailored using instructions, files, or functions), and built-in tools like browsing and image generation, it creates a flexible space for creativity and problem-solving. Perplexity, on the other hand, leans into its identity as a search-first tool.
Its interface resembles a research engine with a chat overlay, optimized for fast, citation-rich responses. It also includes features like Discover for trending topics and Spaces to save answers, organize research, or build lightweight custom AI assistants.
* AI models and processing power: ChatGPT runs on OpenAI's GPT models. Perplexity stands out by offering models from multiple providers: paying users can choose from Sonar and the latest GPT, Claude, and Gemini models. This multi-model flexibility gives Perplexity users more control over how their queries are processed -- something ChatGPT doesn't offer within a single interface.
* Citation-first vs. chat-first: Perplexity always shows sources. Citations are built into every answer. With ChatGPT, you'll only get sources when using the web browsing tool (and even then, they're less prominent).
* Memory and continuity: ChatGPT supports memory across sessions, which means it can remember preferences, past prompts, or context I've shared (super useful for ongoing workflows). Perplexity doesn't have long-term memory, at least from what I've observed, so each session often starts fresh.

What's similar between Perplexity and ChatGPT?

Despite the branding and features, they still have a lot in common when it comes to getting stuff done.

* Text generation: Both tools are great at generating clear, human-like responses to questions, prompts, and creative tasks. Whether I'm summarizing an article, drafting a blog intro, or rephrasing something for tone, both ChatGPT and Perplexity deliver coherent, context-aware output.
* Coding: While ChatGPT has the edge for more advanced dev tasks, Perplexity still holds its own for quick code explanations, syntax help, and debugging suggestions. I've used both for everything from basic HTML and Python snippets to exploring new frameworks.
* Voice interactions: Both platforms now support voice chats.
I've used ChatGPT's voice mode for casual conversations or quick prompts on the go, and Perplexity recently rolled out voice capabilities, too. It's great when I want fast answers without typing.
* Multimodal capabilities: Each platform supports multimodal input in different ways. ChatGPT lets me upload images and have the model describe, analyze, or interpret them. Perplexity isn't natively built for image generation, but it can extract insights from web visual content and generate images on the paid plan.
* File analysis: Both tools let you upload files, like PDFs, docs, or decks, for summarization and Q&A. I've used them to extract key insights from dense research papers in seconds. Perplexity supports plain text, code, PDFs, images, and audio and video files up to 40 MB. ChatGPT supports formats like DOCX, PDF, TXT, PPTX, CSV, and XLSX (up to 512 MB each), with free users limited to three uploads daily.
* Web search and deep research: Both ChatGPT (via SearchGPT) and Perplexity provide real-time web access, but their styles differ -- ChatGPT summarizes, while Perplexity highlights sources and citations by default. Both also offer a Deep Research feature that pulls from multiple web sources to generate structured, in-depth reports. I've found it great for tackling complex topics or multi-layered questions.
* Temporary chats and sharing: Both tools support temporary sessions and make sharing easy. I often share Perplexity threads by downloading them as PDFs or text files, while ChatGPT offers shareable chat links.
* Custom AI assistants: While ChatGPT has "custom GPTs" and Perplexity has "Spaces," both let me build purpose-specific AI tools. For example, I could create a writing assistant that remembers my preferred style in a custom GPT, or a Perplexity Space focused on tracking stock market news with specific sources. Both platforms make it easy to customize how the AI responds and remembers context (within that session).
Comparing specs is one thing. But how do Perplexity and ChatGPT hold up in practice? Here's how I put them to the test.

Disclaimer: AI responses may vary based on phrasing, session history, and system updates, even for the same prompts. These results reflect the models' capabilities at the time of testing. Also, this review is an individual opinion and doesn't reflect G2's position on the software mentioned.

Perplexity vs. ChatGPT: How they actually performed in my tests

Now, the crucial question: How did Perplexity and ChatGPT fare? For each test, my analysis will follow this structure:

* Key observations: A look at each tool's strengths, weaknesses, and any standout surprises -- good or bad.
* The superior choice: My take on which model did better based on accuracy, creativity, clarity, and how usable the output was.
* Final judgment: My direct call on which tool emerged victorious for that specific task.

1. Summarization

The first challenge involved summarizing. I instructed both ChatGPT and Perplexity to extract the key information from a G2 article detailing the growing adoption of Canva by non-designers, presenting it in exactly three bullet points and under 50 words.

Perplexity's response to the summarization prompt

Right away, I noticed a difference in how they approached the task. Perplexity kept things clean and direct. Its bullets were short and skimmable.

ChatGPT's response to the summarization prompt

ChatGPT stuck to the 50-word constraint but cited multiple sources per bullet, not just the G2 article. I liked that the bullets were more specific and data-backed; it pulled "4,400+ G2 reviews" as a concrete anchor. But the differentiator for me was the third bullet provided by Perplexity. It was actually more nuanced, as it acknowledged a limitation (free version constraints) that ChatGPT's version didn't surface. That's the research-oriented tone showing through.
So, while both were accurate, Perplexity's version was easier to use at a glance. If I needed something polished for a write-up, I might lean on ChatGPT. But for quick, high-impact summaries, Perplexity did a better job.

Winner: Perplexity

2. Content creation

Moving on to AI content creation, a known strength of both tools, I wanted to see how Perplexity and ChatGPT would perform under the pressure of a full marketing push. So, I gave them both a pretty comprehensive single prompt, asking for product descriptions, catchy taglines, social media posts for different platforms, email subject lines to draw people in, and even a short script for a video ad. Basically, the whole nine yards of a marketing campaign!

Both ChatGPT and Perplexity handled it really well. The outputs were polished, varied, and genuinely usable. Interestingly, the ideas they came up with were pretty similar across both tools, which made the comparison feel even fairer.

Perplexity's output was strong. Its tone shifted nicely between platforms -- playful on Instagram, straightforward on email, and visual on video.

Perplexity's response to the content creation prompt

I found its tagline punchier than ChatGPT's. Its copy didn't feel templated, and I liked that it didn't need much tweaking. I especially appreciated how naturally it handled different formats without losing brand voice.

Perplexity's response to the content creation prompt

ChatGPT made the content feel ready to drop into a brand doc or campaign deck. It also offered more hashtag options for social media posts, which is helpful if you're trying to cover multiple angles or tap into different trends. The tone across formats was consistent, and I didn't spot any weaknesses in its approach.

ChatGPT's response to the content creation prompt

Bottom line: both tools performed impressively here. I didn't feel like either one lagged.
If I had to choose, I'd say it's a tie: ChatGPT wins on structure and extras (like hashtag coverage), while Perplexity stands out for its fluid tone and plug-and-play readiness.

Winner: Split verdict

3. Creative writing

I really wanted to see how well these tools could break away from formulaic outputs and actually tell a story with mood, pacing, and a twist. I gave both ChatGPT and Perplexity a sci-fi prompt with a few must-have elements: a mysterious signal, a sentient AI, and a reality-bending reveal -- all within 300 words.

Right off the bat, ChatGPT stood out for including a title, "Whispers of the Wanderer," which instantly set the tone. Its story had atmosphere, tension, and a cinematic feel. The pacing was tight, the language vivid (especially those descriptions of the nebula and the glitching hologram), and the ending twist, "You're the signal," landed perfectly.

ChatGPT's story "Whispers of the Wanderer" for the creative writing task

Perplexity's take was also strong. It built a different kind of mood, more philosophical and almost dreamlike. The narrative had a softer tone but still hit the key elements. The final line, "Reality is not what you see, but what you are allowed to see," was a powerful closer. It leaned slightly more abstract, but I liked that it took a different stylistic route than ChatGPT's version.

Perplexity's response to my creative writing prompt

Both stories had depth and solid character voice and used the elements I asked for. So overall? Another strong showing from both.

Winner: Split verdict

4. Coding

Full disclosure: I'm not a developer. But I do know that coding is a major benchmark for AI performance, especially when it comes to real-world use. For this test, I asked both ChatGPT and Perplexity to build a simple password generator using HTML and JavaScript. I wanted a working solution with clean code and a user-friendly interface.

And this round? ChatGPT swept it. The code it generated worked perfectly on the first try.
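For context on what "simple" means here, the core logic of the kind of password generator I prompted for can fit in a few lines of JavaScript. This is my own illustrative sketch, not the actual output from either tool:

```javascript
// Minimal password generator sketch (my own example, not ChatGPT's or
// Perplexity's output): builds a password by sampling random characters
// from a fixed pool of letters, digits, and symbols.
function generatePassword(length = 12) {
  const charset =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
    "abcdefghijklmnopqrstuvwxyz" +
    "0123456789!@#$%^&*";
  let password = "";
  for (let i = 0; i < length; i++) {
    // Pick one random character from the pool for each position.
    password += charset[Math.floor(Math.random() * charset.length)];
  }
  return password;
}

console.log(generatePassword(16));
```

In a real page, a button's click handler would call a function like this, write the result into a text field, and wire up a copy-to-clipboard button -- which is exactly the part where Perplexity's version stumbled.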
The interface was clean and intuitive, and the tool did exactly what it promised -- no hiccups. Even as a non-dev, I could understand what the code was doing, and the overall setup looked polished enough to drop into a beginner project or a quick demo. I also liked that it styled the UI better, with a lock emoji and colorful buttons.

What stood out beyond the code itself was ChatGPT's built-in canvas view. You can preview, edit, copy, or download the output without leaving the interface. That's a meaningful UX upgrade from earlier versions, and it makes the coding experience feel more complete end-to-end.

ChatGPT's password generator

Perplexity, on the other hand, produced a mostly functional version, but the clipboard copy didn't work. That might seem like a small detail, but it made the whole experience feel less complete. The UI also wasn't quite as refined. It did the job, but lacked the little touches that made ChatGPT's version feel more usable and polished.

Perplexity's password generator

Winner: ChatGPT

5. Image generation

Next, I wanted to test something a little more visual -- image generation. We've all seen AI-generated art floating around online, but I was curious to see how well these tools could handle something grounded and realistic: a stock photo of a small business owner. It's the kind of image marketers, content creators, and small teams constantly need. But generating one that actually looks believable? That's a real challenge.

It's worth noting that Perplexity lets only Pro users generate images as part of their workflow, using leading AI image-generation models including GPT Image 1, Nano Banana, and Seedream 4.5.

ChatGPT, using GPT Image 1.5, gave me what felt like the best overall interpretation. The setting looked like a cozy boutique, complete with a mix of products -- clothes, accessories, and a warm, modern vibe. It checked most of the boxes in a balanced, visually clean way.
It was photorealistic, well-lit, and compositionally strong -- the kind of image you could drop into a blog post or marketing deck without a second thought. The detail in the background, the natural pose, the lighting on the shelves -- it didn't feel AI-generated at first glance.

Image generated with ChatGPT

What makes ChatGPT's image generation genuinely useful in 2026 is the editing layer. After generating the image, you can open it in a dedicated editor, select specific areas, and describe changes directly -- no third-party tool needed. Want to swap the apron color, change the background, or remove an object? You describe it, and it updates in place. It's one of the most seamless generate-then-refine workflows I've tested in any AI tool.

Editing images with ChatGPT

Perplexity's output went wider, captured more of the store environment, and even rendered readable text on the signage ("Woven & Ware -- Established 2018"), which has been notoriously hard for AI image generators to get right. But the overall feel was a little too polished and robotic, lacking the warmth and natural quality of ChatGPT's result. Perplexity's biggest advantage, though, was that I could switch models if I wanted a different output.

Image generated with Perplexity

So, which tool was better? For one-shot image generation, both tools are competitive in 2026. For iterative editing and refinement, ChatGPT is in a different league. If your workflow involves generating and then tweaking, ChatGPT wins.

Winner: ChatGPT

6. Image analysis

For image analysis, I really wanted to push Perplexity and ChatGPT a bit further. Instead of a simple picture, I gave them two distinct types of visuals: an infographic about AI adoption and a handwritten poem. And honestly? Both tools did surprisingly well.

Perplexity offered clear summaries for both images, highlighting key trends in the infographic and interpreting the poem's visual elements with ease.
It pulled out the most important percentages, offered insights about design choices, and interpreted the poem with emotional nuance. No major red flags there!

Perplexity's response to my image analysis prompt

ChatGPT also did a solid job, but its output was far more structured than Perplexity's, with subheaders. For the infographic, it gave me a well-structured summary, hitting all the key statistics, trends, and conclusions. It even offered some thoughts on the visual design, which was a nice touch. For the handwritten poem, it went the extra mile and fully transcribed it, which I found super helpful. That little bit of added structure made everything easier to skim.

ChatGPT's response to my image analysis prompt

If I had to nitpick, the poem transcription by ChatGPT was the one standout difference -- otherwise, they were fairly evenly matched. I didn't find either missing any critical observations or misinterpreting anything. Both tools demonstrated strong comprehension and interpretation skills.

ChatGPT transcribing my handwritten notes as part of the image analysis task

So, in this round? It's a close call. If you're just looking for fast, accurate image understanding, Perplexity absolutely holds its own. But if you value slightly more structure and that bonus level of detail like transcription, ChatGPT nudges ahead.

Winner: ChatGPT

7. File analysis

For this task, I gave both ChatGPT and Perplexity a heavy-hitter: Einstein's 1905 paper "On the Electrodynamics of Moving Bodies," and asked them to summarize it in five bullet points under 100 words.

ChatGPT's response was polished, accessible, and grouped ideas like time dilation and length contraction well. It felt user-friendly and clear without oversimplifying. It went a little above the word count, but not as much as Perplexity.

ChatGPT's response to the file analysis task

Perplexity's take leaned more academic, using terms like "Lorentz transformations" and "mass-energy relationship" right up front.
It felt like something a researcher might write -- precise and slightly more technical. It definitely went a little over the word count, too.

Perplexity's response to the file analysis task

Winner? Discounting the word count issue, I'd say it's a tie. ChatGPT is great for quick understanding; Perplexity feels more formal. Both are excellent at distilling complex content into digestible insights.

Winner: Split verdict

8. Data analysis

Data analysis was next. I provided both with a CSV of U.S. ChatGPT search interest by subregion to see who could extract key insights. And I have to say, Perplexity knocked it out of the park. It didn't just summarize the data; it broke it down with the kind of detail I didn't get from ChatGPT, Gemini, or even DeepSeek. We're talking about a statistical summary with mean, median, and standard deviation, along with thoughtful observations under regional patterns. The technocentric insight you see below? Super valuable. It made me feel like I was reading the analysis of someone who really got the data, not just glanced at it.

Perplexity's response to the data analysis prompt

ChatGPT did fine -- clean, readable, and accurate. If I wanted a quick scan, sure, it delivered. But if I were prepping for a meeting or writing a report, I'd lean on Perplexity for the extra depth.

ChatGPT's response to the data analysis prompt

This one wasn't even close. Clear win for Perplexity.

Winner: Perplexity

9. Video generation

I asked ChatGPT and Perplexity to produce a 10-second scene of a young woman in a red coat waiting at a snowy train station, reacting as a train approached, with warm light contrasting the cold blue snow. Both delivered solid results, but the differences were apparent.

ChatGPT's output on Sora, its video generation model, was high-resolution and smooth, but it lacked a strong sense of motion from the train, and key prompt details like the visible figure in the window and dramatic warm-cold lighting contrast were subtle.
Video generated on Sora

On the other hand, Perplexity's video generation using Google's Veo 3.1 nailed the brief: the train visibly approached, a person was seen in the window, the woman's eyes widened in reaction, and the lighting contrast was pronounced. It also came ready to use without edits, though at a slightly shorter runtime and lower resolution.

Generated video with Veo 3.1

While ChatGPT offered technical polish, Perplexity's version matched the prompt with greater accuracy and required no post-processing -- making it the stronger choice for this task. It's also worth noting that OpenAI has announced that the Sora web experience and app will be discontinued from April 26, 2026, so I would suggest going with either Perplexity or other AI video generators.

Winner: Perplexity

10. Real-time web search

In the next task, I was curious to see how well Perplexity and ChatGPT could keep up with the world. I asked them to find and summarize the three most recent AI news stories.

ChatGPT's response was structured, analytical, and genuinely useful. It surfaced three stories from April 2026 -- frontier model breakthroughs, AI investment surges, and tightening government regulation -- each with a clear summary and a "why it matters" explanation. Sources were cited inline, pulled from multiple outlets, and the right panel showed a live feed of source articles with dates, confirming the results were current.

What impressed me was the editorial layer on top of the search. ChatGPT didn't just retrieve. It synthesized, prioritized, and even offered to reframe the results from a SaaS or content strategy lens. That's closer to a research assistant than a search engine.

ChatGPT's response to the real-time web search task

Perplexity went more technical and specific: Google's TurboQuant memory compression breakthrough, GPT-5.4 beating humans on desktop benchmarks, neuromorphic computers solving physics equations.
Each point was tightly cited, and the significance section was analytical without being verbose. The sourcing panel on the right, however, showed a mix of recency: some results were from 2023 and 2024, which raises a mild flag about how Perplexity surfaces and ranks live results.

ChatGPT's sources were more consistently recent and editorially curated. Perplexity's were more granular and technical, but the source panel mixed old and new without clear differentiation.

Perplexity's response to the real-time web search task

For me, ChatGPT came out on top for this one.

Winner: ChatGPT

AI assistants like ChatGPT, Perplexity, and Gemini track real-time developments and spot trends, but they also sometimes hallucinate in the process. Check out our guide on how to handle AI hallucinations when using AI for research.

11. Deep research task

The final task I designed was centered around what I believe is a truly pivotal capability for AI chatbots: deep research. The promise of these tools to tackle complex research questions and efficiently analyze vast amounts of information is incredibly exciting. To test this directly, I set both Perplexity and ChatGPT the challenge of exploring a current and significant area: the ongoing trends in SaaS consolidation.

Perplexity responded quickly and packed its analysis with up-to-date data -- 49 sources in total. It nailed the numbers, cited recent case studies, and delivered a clean summary with strong financial context. The insights around tech hubs and valuation trends were especially sharp. That said, it was more of a straight data drop -- fast and accurate.

Perplexity's deep research capabilities

ChatGPT, in contrast, took its time. It asked me clarifying questions first, which made the experience feel more collaborative. The final report took about eight minutes, but it was worth the wait. It pulled from 41 sources, included examples, and had a clear strategic structure.
I did notice it leaned on older data, which was a bit frustrating since I'd asked for insights from the last 3-5 years. Still, the content was rich and layered, with thoughtful takeaways for SaaS leaders and investors. Its clarifying questions covered my preferred timeframe, geographic focus, and priority areas before it proceeded.

ChatGPT's Deep Research asks questions before starting

Both tools did really well, but in different ways. Perplexity is better for quick, data-heavy research. ChatGPT takes longer but gives you something closer to an executive briefing. You can find both research reports here.

Winner: Split verdict

12. Pricing and value

Apart from the tasks above, I spent time comparing what you actually get at each price point, because the more I used both, the more I realized the gap.

Free plans: Which one to choose?

Perplexity's free tier is genuinely useful for everyday search. You get unlimited basic searches, a handful of Pro searches per day, and live web results with citations by default. The ceiling hits fast if you're doing serious research, but for casual use, it holds up well.

ChatGPT's free tier has gotten more capable -- you now get access to GPT-5.3 with limited messages, uploads, image generation, and deep research. The trade-off since early 2026: ads in the US market.

I'd say pick one based on your use case. Research-heavy? Need real-time data? Go for Perplexity. Just general-purpose chat? Go for ChatGPT.

ChatGPT vs. Perplexity: Which one's worth your $20/month?

This is where the comparison gets interesting. Both Perplexity Pro and ChatGPT Plus cost $20/month, but they're optimized for different workflows. Perplexity Pro gives you unlimited Pro Search, access to multiple frontier models, file uploads, image generation, and Deep Research.
The multi-model flexibility is the standout, in my opinion. Instead of paying separate providers like OpenAI, Anthropic, and Google, you can switch models mid-workflow based on the task at hand. No other tool at this price point offers that.

ChatGPT Plus gives you the full OpenAI suite: GPT-5.4 Thinking, Deep Research (10 runs/month), Codex, Agent Mode, and ad-free access. It's a broader toolkit -- especially if your work spans writing, coding, and image generation in one workflow.

My take: If your primary use is research, sourcing, fact-checking, and access to multiple frontier models, Perplexity Pro is the better $20. If you need a versatile all-rounder for content, code, and creative work, ChatGPT Plus wins on range.

The power tiers

Both tools have higher-tier plans for heavier users, but they're priced differently, and that gap matters.

ChatGPT Pro at $100/month gives you significantly more room than Plus -- 5x to 20x more usage, GPT-5.4 Pro reasoning, maximum Codex tasks, unlimited image generation, unlimited file uploads, and maximum Deep Research and Agent Mode access. It also expands memory, context, projects, and custom GPTs. For power users who consistently hit Plus limits, it's a meaningful upgrade at a price that's still justifiable for professional use.

Perplexity Max at $200/month unlocks Model Council (which runs your query through three frontier models simultaneously and synthesizes the outputs), Perplexity Computer (19-model agentic orchestration for end-to-end project work), unlimited Labs access, and early feature access. The features are genuinely differentiated, but at twice the price of ChatGPT Pro, it's a hard sell unless your work is heavily research-intensive and you're consistently pushing the limits of what Pro can do.

If you ask me, neither power tier is worth it unless you're hitting your standard plan's ceilings regularly. Start at $20 and upgrade only when the limits become a real blocker.
Based on everything, I'd say ChatGPT wins on overall pricing and value. At the free and $20 tiers, it's too close to call, but the further up the pricing ladder you go, the more ChatGPT pulls ahead in value, especially for power users and teams.

Verdict: ChatGPT

Perplexity vs. ChatGPT: Head-to-head comparison table

Here's a table showing which chatbot won each task.

Key insights on Perplexity and ChatGPT from G2 data

I also dug into G2 review data to uncover how users rate and adopt ChatGPT and Perplexity. Here's what popped:

Satisfaction ratings

* ChatGPT leads in overall satisfaction, with especially high marks for ease of use (96%), ease of setup (96%), and ease of doing business with (94%). It consistently outperforms the category average in nearly every metric.
* Perplexity also scores well, matching ChatGPT in ease of setup (96%) and trailing slightly in ease of use (94%).

Top industries represented

* ChatGPT sees broad adoption across tech-forward sectors, with most reviews coming from IT services, software, and marketing. It's also gaining traction in financial services and higher education.
* Perplexity's footprint is narrower but similar -- dominated by software, IT services, and marketing and media -- suggesting early adoption among researchers, developers, and content teams.

Highest-rated features

* For ChatGPT, the top-performing features are natural language understanding and intent inference (92%), controlled LLM response generation (92%), and context maintenance within sessions (91%).
* Perplexity stands out for no-code conversation design (94%), multi-step planning, and natural language understanding and intent inference (89%), which is no surprise given its recent agentic capabilities.

Lowest-rated features

* ChatGPT's lowest scores are in data security, content accuracy, and autonomous task execution, though these still hover around the category average.
* Perplexity's weaker points are more noticeable: fallback responses for unknown queries, web widget and SDK embedding, and API flexibility all fall well below average, indicating friction for teams looking to build it into their workflows.

Frequently asked questions on Perplexity and ChatGPT

Still have questions? Get your answers here!

1. How does Perplexity AI work?
Perplexity is an AI-powered answer engine that combines natural language processing with real-time web search. It generates responses by pulling data from live sources and large language models (LLMs) like GPT-4, Claude, and its own Sonar models. Every response includes citations, making it ideal for research and fact-based queries.

2. What is the difference between Perplexity AI and ChatGPT?
Perplexity is a search-first AI that pulls in live web data and cites sources, making it ideal for up-to-date answers. ChatGPT is a more versatile generative AI that excels at reasoning, writing, coding, and complex problem-solving, but doesn't always rely on real-time web data unless browsing is enabled.

3. Perplexity vs. ChatGPT: Which is better?
It depends on what you're using it for. ChatGPT is more versatile overall: great for content creation, coding, and creative tasks. Perplexity excels at fast, citation-backed answers, deep research, and summarization.

4. Perplexity Pro vs. ChatGPT Plus: What's the difference?
Both are premium plans priced at $20/month, but they offer slightly different experiences.
* ChatGPT Plus gives you access to GPT-4o, image generation, file uploads, memory, and custom GPTs.
* Perplexity Pro unlocks faster response times, image generation (via Flux and DALL·E 3), higher file upload limits, and model-switching between advanced models from different providers, like GPT-4, Claude 3.7 Sonnet, Gemini 2.5 Pro, and others.

5. Is Perplexity free to use?
Yes, Perplexity has a free version that offers unlimited basic searches, three Pro searches per day, and live web results. Perplexity Pro (paid) unlocks access to multiple advanced models, image generation, faster response speeds, and larger file support.

6. Is Perplexity AI good?
Yes, Perplexity AI is a strong option for research-heavy workflows. It is especially useful when you want fast answers, web citations, and an easy way to validate information. Its main strength is search-driven accuracy, though it may feel less flexible than ChatGPT for creative writing, brainstorming, or highly customized outputs.

7. What models do Perplexity and ChatGPT use?
ChatGPT uses OpenAI's GPT models. Perplexity supports a variety of models, like Claude, GPT, Gemini, and its own Sonar models, letting you switch between them as needed.

8. Does Perplexity use ChatGPT?
Perplexity does not run only on ChatGPT. It uses a mix of AI models, including its own search-focused models and third-party models, depending on the plan and feature set. That means your answers may come from different underlying models rather than ChatGPT alone.

9. Which AI is better than ChatGPT?
There is no single AI that is universally better than ChatGPT. Some tools outperform it in specific areas. For example, Perplexity is stronger for live, citation-backed research, while other models may be better for coding, long-context analysis, or multimodal tasks. The best option depends on what you need to do.

10. Can ChatGPT and Perplexity access real-time information from the web?
Yes. ChatGPT can access real-time info using SearchGPT, its browsing tool. Perplexity has real-time web access built in (even in its free version) and includes clickable sources in every response.

11. Should I use Perplexity or ChatGPT for research?
Use Perplexity if your priority is real-time, source-backed research with citations. It's better for quickly verifying facts and exploring current topics. Choose ChatGPT when you need deeper explanations, structured analysis, or help synthesizing information into reports, strategies, or content.

12. Is Perplexity better than ChatGPT for finding accurate information?
Perplexity is often better for accuracy in time-sensitive queries because it cites live sources you can verify. However, ChatGPT can be more reliable for conceptual accuracy, detailed explanations, and multi-step reasoning tasks. The better choice depends on whether you value citations or depth of analysis.

13. Can I use both ChatGPT and Perplexity?
Absolutely! Many users combine both: ChatGPT for brainstorming, writing, and structured coding, and Perplexity for research tasks.

Perplexity vs. ChatGPT: My final verdict

After putting both tools through a full range of real-world tests, here's my takeaway: ChatGPT is still the most consistent all-rounder. It performs well across nearly every task, including content creation, creative writing, coding, and real-time updates, with a solid mix of accuracy, structure, and ease of use.

But Perplexity genuinely surprised me. Among the tools I've tested, including Gemini and DeepSeek, it's the only one that came this close to ChatGPT across so many tasks. In fact, it scored multiple split verdicts and even beat ChatGPT outright in some areas.

If you're after depth, speed, and citation-heavy outputs, Perplexity is a strong pick. But if you need a well-rounded assistant that balances creativity, structure, and flexibility, ChatGPT still leads the pack. Bottom line? You can't go wrong with either in 2026, but which one's better depends on what you're trying to get done.

ChatGPT and Perplexity aren't the only AI chatbots out there. I've tested Claude, Microsoft Copilot, and more to see how they stack up in my best ChatGPT alternatives guide. Check it out!

This article was originally published in April 2025 and has been updated with new information in 2026.

Perplexity
learn.g2.com4d ago
Read update
I Tested Perplexity vs. ChatGPT: Which Is Better in 2026?

Polymarket Seeks $400M Funding Round, Targets $15B Valuation Amid Prediction Market Boom - EconoTimes

Polymarket, a leading prediction market platform, is reportedly in discussions with investors to raise $400 million in new funding, according to a recent report. The potential deal would value the company at approximately $15 billion, including the fresh capital, signaling strong investor confidence in the rapidly growing prediction market industry.

This latest funding effort builds on a significant $600 million investment made in late March by Intercontinental Exchange Inc. (NYSE: ICE), the parent company of the New York Stock Exchange. ICE has previously committed up to $2 billion to Polymarket, positioning itself as a major strategic backer. The company is now looking to bring in additional investors, with the total funding round potentially reaching $1 billion.

The surge in investor interest reflects the increasing popularity of online prediction markets, where users can speculate on outcomes across sports, politics, entertainment, and global events. Platforms like Polymarket and its competitor Kalshi have seen rapid growth in user engagement and trading volumes. Polymarket alone recorded daily trading volumes of around $478 million as of March 2026, highlighting strong demand among users.

Kalshi, another major player in the space, has also experienced a sharp rise in valuation, reportedly reaching $22 billion, double its late-2025 valuation. This growth underscores the expanding appeal of prediction markets as alternative financial and entertainment platforms.

However, the industry's rapid expansion has drawn increased regulatory attention in the United States. Several states have raised concerns that prediction market platforms may operate as unlicensed gambling services. Additionally, regulators are examining potential risks related to insider trading, as users could exploit non-public information to place bets.
Despite these challenges, the prediction market sector continues to attract substantial investment and user interest, suggesting strong long-term growth potential.

Polymarket
EconoTimes4d ago
Read update
Polymarket Seeks $400M Funding Round, Targets $15B Valuation Amid Prediction Market Boom - EconoTimes

Polymarket in talks to raise $400 mln at $15 bln valuation- The Information By Investing.com

Investing.com-- Polymarket is in talks with investors to raise $400 million in funding, valuing the prediction market operator at $15 billion including new money, The Information reported on Sunday, citing two people familiar with the talks.

The funding would add to the $600 million invested by Intercontinental Exchange Inc (NYSE:ICE) in late March, with the New York Stock Exchange parent having earlier pledged to invest up to $2 billion in Polymarket.

The prediction market operator is looking to add more strategic investors beyond Intercontinental Exchange to the funding round, which could total $1 billion, The Information report said.

The wave of new funding comes amid the growing popularity of online prediction markets such as Polymarket and rival Kalshi. A report in March put Kalshi's valuation at $22 billion, roughly twice a late-2025 valuation of $11 billion. The two allow users worldwide to wager on a host of outcomes, across sports, entertainment, politics, and more recently, even the U.S.-Israel war on Iran.

Prediction markets have become wildly popular among U.S. consumers, with Polymarket seeing daily volumes of about $478 million as of March 2026. But this increasing popularity has also attracted legal scrutiny in the U.S., with several states accusing online prediction markets of engaging in illegal gambling operations. Concerns over prediction markets being pathways for insider trading have also emerged in recent months.

Polymarket
Investing.com South Africa4d ago
Read update
Polymarket in talks to raise $400 mln at $15 bln valuation- The Information By Investing.com

Vercel Security Incident Traced To Third-Party AI Tool

Vercel has disclosed a security incident involving unauthorized access to certain internal systems, with the breach traced back to a compromised third-party AI tool. The company said it is actively investigating the incident with the support of cybersecurity experts and has notified law enforcement.

The incident was first identified after a subset of customer credentials was found to be compromised. The company has since contacted affected users and advised immediate credential rotation. It added that customers who have not been notified are not believed to be impacted at this stage.

According to initial findings, the incident began with the compromise of Context.ai, a third-party AI platform used by a Vercel employee. Attackers leveraged this breach to gain access to the employee's Google Workspace account, which allowed the threat actor to move deeper into Vercel's internal environments.

The attacker was able to access certain environment variables that were not classified as sensitive. However, Vercel clarified that environment variables marked as sensitive are encrypted in a way that prevents them from being read, and there is currently no evidence that such data was accessed.

The company described the attacker as highly sophisticated, citing their speed and detailed understanding of internal systems. Vercel said the number of impacted customers appears to be limited. The company continues to assess whether any data was exfiltrated during the incident and has committed to notifying customers if further evidence of compromise is found.

At present, core services remain operational, and additional monitoring and protection measures have been deployed across systems. The company has also published indicators of compromise to help the broader community detect any related malicious activity.
These indicators are linked to a compromised Google Workspace OAuth application associated with the third-party AI tool, which may have affected multiple organizations beyond Vercel.

The incident highlights the growing risks associated with third-party integrations, particularly AI tools connected to enterprise environments. In this case, the compromise of a single external application enabled attackers to pivot into internal systems through legitimate credentials.

Vercel CEO Guillermo Rauch shared that the attacker used a series of steps to escalate access from the compromised account into Vercel environments. He noted that while customer environment variables are encrypted at rest, those not marked as sensitive were exposed during the attack. The company also indicated that the attacker's actions may have been accelerated by artificial intelligence, pointing to the speed and precision observed during the intrusion.

In response, the company has issued a set of security recommendations for users and administrators. Customers are advised to review account activity logs for suspicious behavior and rotate all environment variables that may contain sensitive information such as API keys, tokens, and database credentials. Vercel has emphasized the importance of using its "sensitive environment variable" feature to ensure secrets are protected from unauthorized access. Users are also encouraged to audit recent deployments, remove any suspicious changes, and ensure deployment protection settings meet at least the standard level. Additionally, rotating deployment protection tokens and monitoring linked services are recommended as precautionary steps.

Vercel is working with Mandiant and other cybersecurity firms, along with industry partners and law enforcement agencies, to investigate the incident and strengthen defenses. The company is also collaborating with Context.ai to better understand the scope of the initial compromise. As part of its response, Vercel has introduced new security features, including improved visibility and management of environment variables within its dashboard.

The incident underscores the importance of securing third-party integrations and enforcing strict controls on access and data classification. While the immediate impact appears contained, it serves as a reminder for organizations to continuously monitor and secure their software supply chains.

Vercel
The Cyber Express4d ago
Read update
Vercel Security Incident Traced To Third-Party AI Tool

NSA Confirms Use of Anthropic's Mythos Despite Pentagon Blacklist - IT Security News


Anthropic
IT Security News - cybersecurity, infosecurity news4d ago
Read update
NSA Confirms Use of Anthropic's Mythos Despite Pentagon Blacklist - IT Security News

Polymarket Eyes $400M Round at $15B Valuation


Polymarket
Coinpedia - Fintech & Cryptocurreny News Media| Crypto Guide4d ago
Read update
Polymarket Eyes $400M Round at $15B Valuation

Polymarket Raises $400M -- But Is a $15B Valuation Justified?

New funding will support infrastructure as event-driven data moves to institutions.

Polymarket is reportedly in talks to raise $400 million in fresh funding, in a round that could value the platform at around $15 billion. The news has quickly caught attention across crypto and traditional finance.

The new raise would add to a major commitment from Intercontinental Exchange, which has already backed the platform with $600 million. If completed, total funding could reach close to $1 billion.

This comes as Polymarket pushes deeper into event-based trading. The idea is simple: users bet on real-world outcomes like elections, sports, and major news events.

Polymarket is growing fast. The platform has seen strong activity tied to major global events, and as more users join, trading volumes have increased. The company now wants to scale: the new funding would help expand its infrastructure and reach, support more markets, and improve liquidity.

Interest in prediction markets is rising more broadly. More people want to trade on real-world outcomes, not just crypto prices, and this trend is helping Polymarket stand out.

The $15 billion valuation has sparked debate. Some see it as justified; others think it may be too high. Supporters argue that Polymarket is building a new type of market and that event-based trading has real demand and long-term value. With strong backing from institutions, the growth story looks convincing. Critics are more cautious: they question whether current volumes support such a high valuation, and they point out that the space is still evolving. So while the raise shows confidence, it also raises expectations. The platform will need to deliver strong growth to match its valuation.

One key factor is institutional support. Intercontinental Exchange, which operates major global markets, has already invested heavily, signaling growing trust in prediction markets. This backing is important. It shows that event-based trading is moving beyond niche status and becoming part of mainstream finance. As a result, more capital may flow into this space, and other firms could follow with similar investments, further boosting growth and competition.

If the funding round closes, Polymarket will have significant resources. It can expand faster, attract more users, and strengthen its position as a leader in prediction markets. But pressure will increase: a $15 billion valuation brings high expectations, and the company will need to maintain strong growth and user engagement.

For now, the trend is clear. Event-based trading is gaining attention, and Polymarket is right at the center of it. In the coming months, the market will watch closely: whether the valuation holds will depend on performance, adoption, and continued interest from users and institutions.

Polymarket
Coinfomania4d ago
Read update
Polymarket Raises $400M  --  But Is a $15B Valuation Justified?

BTCC Launches SpaceX Pre-IPO Perpetual Futures in Crypto

BTCC has launched SPACEXUSDT perpetual futures, opening a new way for users to trade price exposure tied to SpaceX. The product is now live in the exchange's tokenized stocks section and offers leverage of up to 50x.

The timing is no surprise. SpaceX remains one of the most-watched private companies in the world. Elon Musk's name keeps attention high, while Starlink's growth and IPO speculation keep investor interest active. For crypto exchanges, few private firms carry as much attention and trading appeal.

On SpaceX

SpaceX is drawing renewed market attention as IPO talk builds. Starlink's app downloads and monthly active users more than doubled year over year in the first quarter, while total subscribers passed 10 million in February. Private market pricing has added more fuel to investor interest: a December 2025 tender offer valued SpaceX at $800 billion, while current IPO talk has pulled valuation estimates as high as $1.75 trillion, with Starlink growth driving much of investor focus.

SpaceX is also staying in the news through the satellite internet race. Amazon agreed to buy Globalstar for $11.57 billion as competition with Starlink intensifies. Amazon remains far behind SpaceX in satellite deployment, with Starlink already operating more than 10,000 satellites.

For retail traders, access remains a major draw. Private company exposure usually comes through secondary transactions and private allocations. A perpetual futures contract gives users a simpler way to trade around SpaceX pricing and investor sentiment. Across crypto exchanges, products linked to familiar companies and active news cycles tend to attract faster interest than lesser-known names.

BTCC Is Expanding Its Product Mix

BTCC is using the SpaceX launch to push further into products linked to traditional market themes. The exchange has already pointed to strong early activity in its TradFi product line, where users can trade traditional market instruments with USDT.
SpaceX gives BTCC a high-interest name with strong retail recognition and a story traders already understand. In its announcement, BTCC also says it is among the first exchanges to offer SpaceX perpetual futures and describes SPACEXUSDT as having deep order book liquidity. BTCC has paired the launch with a giveaway offering up to 1,000 USDT in rewards and a Tesla Cyberbeast. The campaign links the contract to the wider Musk brand universe, which gives the launch even more visibility.

Retail Access Expands, But the Risk Remains High

Products like this appeal to traders because they open access to stories usually reserved for private market participants. SpaceX has long been a company many people wanted exposure to, but few could reach directly. At the same time, leveraged derivatives demand caution. BTCC states in its support materials that leverage increases both upside and downside. For retail users, a product tied to a pre-IPO story and amplified by leverage can produce large swings in either direction. This is where the appeal and the danger sit side by side. The product is easy to understand from a narrative perspective, but it still trades like a high-risk derivative.

A New Route Into Private Market Speculation

BTCC's SpaceX contract shows how crypto exchanges are packaging well-known private company stories into round-the-clock trading products. SpaceX brings public attention, IPO curiosity, and strong name recognition, which makes it a natural fit for this kind of listing. Whether tokenized pre-IPO trading becomes a lasting category will depend on user demand after the first wave of curiosity fades. For now, BTCC is betting SpaceX can draw traders looking for fresh exposure outside the usual crypto lineup.
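The warning that leverage cuts both ways is simple arithmetic. Here's a minimal Python sketch of how a leveraged perpetual position's profit and loss scales with price moves; the function and the numbers are illustrative only, not BTCC's actual margin engine, and fees, funding payments, and liquidation mechanics are ignored:

```python
def leveraged_pnl(entry_price: float, exit_price: float,
                  margin: float, leverage: float, long: bool = True) -> float:
    """Profit or loss on a simple leveraged futures position.

    Notional exposure is margin * leverage, so each 1% price move
    changes the position value by `leverage` percent of the margin.
    """
    notional = margin * leverage
    move = (exit_price - entry_price) / entry_price
    if not long:
        move = -move  # a short profits when price falls
    return notional * move

# With $100 margin at 50x, a 2% favorable move doubles the stake...
print(leveraged_pnl(100.0, 102.0, margin=100.0, leverage=50.0))   # 100.0
# ...and a 2% adverse move wipes it out entirely.
print(leveraged_pnl(100.0, 98.0, margin=100.0, leverage=50.0))    # -100.0
```

At 50x, a 2% adverse move is enough to erase the entire margin, which is the practical meaning of "large swings in either direction" for a contract like this.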

SpaceX
BeInCrypto4d ago
Read update
BTCC Launches SpaceX Pre-IPO Perpetual Futures in Crypto

NSA Uses Anthropic Mythos Despite Pentagon Risk Label

The NSA is reportedly using Anthropic's restricted Mythos Preview while the Defense Department still calls Anthropic a supply-chain risk. Axios says two sources put the model inside the spy agency, and one says usage reaches wider into Defense. The exact use case is unclear. But Anthropic named 12 partners, gave access to about 40 more groups, and built a tool Washington now seems unable to ban cleanly. Contractors are left between the memo and the machine.

The National Security Agency is using Anthropic's restricted Mythos Preview model even as the Defense Department, which oversees the NSA, argues that Anthropic threatens national security, Axios reported Sunday. Two sources told Axios the NSA has access, and one said the model is also being used more widely inside the department. The result is a strange government split: the same building tells contractors to get off Anthropic while one of its own intelligence agencies keeps the tool running. That is the tell.

The Pentagon fight began in February, when Defense Secretary Pete Hegseth pushed Anthropic to allow Claude for "all lawful purposes" and Anthropic refused to drop restrictions on mass domestic surveillance and fully autonomous weapons. The department then labeled Anthropic a supply-chain risk, a move this site covered as a break from ordinary procurement logic. Now the paper ban has met a live operational need.

The NSA is not a civilian workaround. It sits inside the Defense Department, with classified networks, mission owners, and lawyers who read the same orders as everyone else. That makes the Mythos use different from last week's White House thaw, when civilian agencies were pushing for access to help secure banks, energy systems, and government software. This one lands inside the institution that wanted Anthropic cut out.

Axios said it remains unclear how the NSA is using Mythos. Other organizations with access are mainly using it to scan their own environments for exploitable vulnerabilities. That distinction matters. Defensive scanning is not the same thing as turning a model loose on a target. Still, the political problem does not go away. If Anthropic's software is dangerous enough to blacklist, why is it useful enough to keep? Strip away the court language, and you get the working question.

Anthropic has not released Mythos to ordinary Claude users. In its Project Glasswing announcement, the company named 12 launch partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. It also said more than 40 additional organizations that build or maintain critical software had received access. Axios reported that the NSA is among the unnamed agencies in that second group.

That arithmetic is the story. Twelve public names. Roughly 40 more behind the curtain. One of them appears to be the U.S. spy agency housed under the department trying to make Anthropic radioactive for contractors.

Mythos gives the government a reason to live with that awkwardness. Anthropic says the model found thousands of high-severity vulnerabilities, including flaws in every major operating system and web browser. It also said Mythos found a 27-year-old OpenBSD vulnerability, a 16-year-old FFmpeg bug, and Linux kernel flaws that could be chained from ordinary access to full machine control. All patched, according to the company. No need to romanticize it. A model that can find buried software faults is useful to defenders and worrying in the wrong hands. Same machine. Different hands on the keyboard.

The legal status remains messy. A federal judge in Northern California temporarily blocked the Pentagon's designation in March, writing that the department had not shown why Anthropic's insistence on usage restrictions made it a saboteur. Another court allowed the designation to remain in place while the fight continues.

Contractors still face an ugly compliance problem. The March designation told companies working with the military to avoid commercial activity with Anthropic. But the government has also kept talking to Anthropic, and now the NSA is reportedly using the restricted Mythos model. The instruction on the paper and the behavior in the server room no longer match.

That mismatch changes Anthropic's posture. The company looked cornered when the Pentagon tried to cut it out. It looks harder to isolate when the intelligence community wants its model, Treasury wants cyber help, and the White House describes meetings with Dario Amodei as productive.

The Pentagon can still argue that contract restrictions make Anthropic unreliable for some military work. That argument is narrower than the label. It says, in effect: we dislike the conditions attached to this vendor. It does not prove the vendor is a supply-chain threat.

For the administration, the workaround buys time. Hegseth can keep the blacklist posture. Other agencies can test Mythos where the need is immediate. The White House does not have to say out loud that February's all-government cutoff became unworkable within weeks.

For Anthropic, the win is sharper. The company can point to the reported NSA use as evidence that its model is not an optional research toy. If Axios' sourcing holds, Mythos is already inside a national-security workflow, even if the front door says otherwise.

One caution belongs in the story. Axios got no comment from Anthropic or the Pentagon. The NSA and ODNI stayed silent. Reuters could not verify the scoop on its own. So the record is not a stamped memo. It is sourced reporting, court papers, and Anthropic's public Mythos claims. That is enough to see the policy shape. The government tried to turn Anthropic into a warning label. Its own agencies turned the label into a sticky note on the side of a working machine.

Anthropic
implicator.ai4d ago
Read update
NSA Uses Anthropic Mythos Despite Pentagon Risk Label

Iran strike on Israel by April 2026 priced YES on Polymarket

The Polymarket contract on whether Iran will strike Israel by April 30, 2026, sits at YES with just 12 days left until resolution.

Market reaction

The market "Will Iran strike Israel by April 30, 2026?" is fully priced at 100% YES. Odds are locked, with no movement, indicating complete trader consensus that a strike has occurred or will occur before the deadline. Trading volume in related sub-markets is minimal, consistent with a contract that has effectively already resolved.

Why it matters

The Strait of Hormuz closure has cut into Asia's oil and LNG imports, hitting energy and food costs in Japan, South Korea, and India. Ceasefire talks have failed, and the market is pricing in continued or expanded military engagements. Complete 100% pricing across sub-markets means traders see no realistic path to a NO outcome in the remaining window.

What to watch

At the current YES price, there is no return available on this contract. A diplomatic breakthrough could theoretically move the odds, but current sentiment points entirely toward escalation. Monitor statements from the Islamic Revolutionary Guard Corps or U.S. intelligence leaks about Iranian military mobilization, as these could shift related contracts that still have open pricing.
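Why a fully priced contract offers no return comes down to simple payout arithmetic. Here's a minimal sketch, assuming the standard prediction-market convention that a winning share redeems for $1; the helper function is hypothetical for illustration, not part of any Polymarket API:

```python
def yes_share_return(price: float, payout: float = 1.0) -> float:
    """Profit per dollar staked on a YES share that resolves true.

    A share bought at `price` redeems for `payout` ($1 by convention),
    so the return on a winning share is (payout - price) / price.
    """
    if not 0.0 < price <= payout:
        raise ValueError("price must be in (0, payout]")
    return (payout - price) / price

print(yes_share_return(0.50))  # 1.0  -> a 50-cent share doubles if YES wins
print(yes_share_return(0.90))  # ~0.11 -> roughly 11% upside at 90 cents
print(yes_share_return(1.00))  # 0.0  -> fully priced: nothing left to earn
```

This is the arithmetic behind "no return available": at 100% pricing, the best case is breaking even, and the only other outcome is a total loss if the market somehow resolves NO.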

Polymarket
Crypto Briefing4d ago
Read update
Iran strike on Israel by April 2026 priced YES on Polymarket

Ex-Meta Chief Scientist Yann LeCun Slams Anthropic CEO's Job Wipeout Warning: 'Dario Is Wrong. He Knows A

Over the weekend, former Meta Platforms, Inc. chief AI scientist and AI pioneer Yann LeCun pushed back against dire predictions that artificial intelligence will soon wipe out vast swaths of white-collar jobs.

LeCun Rebukes AI Job Loss Predictions

Taking to X, LeCun sharply criticized Anthropic CEO Dario Amodei for warning that AI could eliminate up to half of entry-level roles in fields like tech, finance, and consulting within five years. Reacting to a clip of Amodei's 2025 appearance on Fox News, LeCun dismissed the statements outright. "Dario is wrong," he said, adding that AI executives should not be treated as authorities on labor economics. "He knows absolutely nothing about the effects of technological revolutions on the labor market."

LeCun also broadened his critique, cautioning against relying on prominent AI figures, including himself, for such forecasts. Instead, he urged the public and policymakers to look to economists who specialize in studying technology-driven labor shifts.

Economists Push Back On AI Job Apocalypse

LeCun pointed to experts such as Daron Acemoglu, Erik Brynjolfsson, and David Autor, whose research suggests AI-driven job losses are likely to be more limited and gradual than some tech leaders claim. Their work indicates that while AI may automate certain routine or entry-level tasks, widespread unemployment is unlikely. Instead, they argue that technological change tends to reshape jobs over time, creating new roles even as others fade. Recent data cited by these economists shows early signs of pressure on junior roles most exposed to AI tools, but not a broad collapse in employment. Estimates suggest only a small share of jobs, often around 5% to 10%, are highly susceptible to near-term automation.

Amodei Warns Of Entry-Level Job Disruption

Amodei, however, has raised concerns about how quickly AI capabilities are advancing.
He argued that systems now approaching or surpassing college-level performance could soon handle tasks such as document summarization, financial analysis and report generation -- core responsibilities of many entry-level positions. While acknowledging uncertainty, he said significant disruption could emerge within one to five years, potentially shrinking the pipeline of early-career opportunities. "We may indeed have a serious employment crisis," Amodei said, highlighting the need for policymakers and the public to take the risks seriously. AI, 4-Day Workweek: Jamie Dimon, Mark Cuban Previously, Jamie Dimon said AI could eventually shorten the workweek to four days, highlighting that JPMorgan Chase is already using AI in about 600 applications spanning fraud detection, risk management and marketing. Meanwhile, billionaire investor Mark Cuban likened the rise of AI to the early days of the personal computer, urging workers to quickly adopt the technology to stay competitive. Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors. Photo Courtesy: Thrive Studios ID on Shutterstock.com Market News and Data brought to you by Benzinga APIs To add Benzinga News as your preferred source on Google, click here.

Anthropic
Benzinga · 4d ago

Anthropic faces backlash over account deactivations, Claude performance issues

Anthropic, the company behind the AI chatbot Claude, is facing criticism from developers and enterprise users. The backlash stems from a recent incident in which over 60 Claude accounts linked to Argentina-based fintech firm Belo were abruptly deactivated without any prior warning or clear explanation. The incident has raised concerns over the reliability of Anthropic's services and its communication practices with users.

Anthropic
NewsBytes · 4d ago

Vercel Confirms Limited Hack of Customer Information

Vercel, a cloud hosting provider popular among crypto projects, has confirmed that it suffered a security breach that allowed hackers to make off with a "limited" subset of customer credentials. Vercel said in a blog post on Sunday that it "identified a security incident that involved unauthorized access to certain internal Vercel systems" and was investigating the breach. "Initially we identified a limited subset of customers whose Vercel credentials were compromised," it added. "We reached out to that subset and recommended an immediate rotation of credentials."

Vercel's confirmation came after multiple X users reported that a post on the hacking forum BreachForums by a user called "ShinyHunters" claimed to be offering Vercel's data in exchange for $2 million. The poster claimed to have access keys, source code, database information and employee accounts with access to internal deployments, which they said could be used for a "global supply chain attack." Vercel did not address the post's claims, but said the attacker was "highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems."

Vercel CEO Guillermo Rauch said on Sunday that the attack originated after a Vercel employee was compromised via a breach of an artificial intelligence tool they used called Context.ai. The attacker was then able to compromise the Vercel employee's Google Workspace account, allowing them access to some of Vercel's internal systems. Rauch said the company stores customer environments with full encryption, but it has the capability to designate variables as "non-sensitive," and the attacker "got further access through their enumeration."

"We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI," he added. "They moved with surprising velocity and in-depth understanding of Vercel."

Rauch said that Vercel had "deployed extensive protection measures and monitoring" and had analyzed its supply chain to ensure "Next.js, Turbopack, and our many open source projects remain safe for our community." "My advice to everyone is to follow the best practices of security response: secret rotation, monitoring access to your Vercel environments and linked services, and ensuring the proper use of the sensitive env variables feature," he added.
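The sensitive vs. "non-sensitive" variable split Rauch describes is something teams can audit for themselves. The sketch below is purely illustrative: the variable names, the `sensitive` flag layout, and the secret-matching patterns are all assumptions for this example, not Vercel's actual API or schema.

```python
import re

# Name patterns that usually indicate a credential; anything matching these
# should be stored as a "sensitive" variable, not a plain readable one.
SECRET_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"_KEY$", r"_SECRET$", r"_TOKEN$", r"PASSWORD", r"^DATABASE_URL$")
]

def audit_env(variables):
    """Return names of variables that look secret but are not marked sensitive.

    `variables` maps a variable name to a dict with a boolean "sensitive"
    flag, mirroring the sensitive/non-sensitive split described above.
    """
    flagged = []
    for name, meta in variables.items():
        looks_secret = any(p.search(name) for p in SECRET_PATTERNS)
        if looks_secret and not meta.get("sensitive", False):
            flagged.append(name)
    return sorted(flagged)

# Hypothetical project configuration for illustration.
env = {
    "NEXT_PUBLIC_SITE_NAME": {"sensitive": False},
    "STRIPE_API_KEY": {"sensitive": False},   # misfiled: readable via enumeration
    "DATABASE_URL": {"sensitive": True},
    "GITHUB_TOKEN": {"sensitive": False},     # misfiled
}

print(audit_env(env))  # ['GITHUB_TOKEN', 'STRIPE_API_KEY']
```

Variables the audit flags are exactly the ones an attacker with read access could enumerate; the remediation Rauch suggests is to rotate them and re-file them as sensitive.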

Vercel
Cointelegraph · 4d ago

Putting the Calamity Makers in Charge: Anthropic and Claude Mythos Preview

Be wary of a company - any company - that exerts moral muscle as it creates software and digital platforms that are injurious and is simultaneously lauded for curing that injury. Be especially wary of Anthropic. With sagacious loftiness, it warns of the disabling dangers of the artificial intelligence (AI) frontier. Principled, it tells the Trump administration it will not partake in creating AI software that aids mass surveillance, a move that earned it an order of excommunication as a "supply chain risk". It then goes on to create Claude Mythos Preview, a seemingly dystopian model that will, according to certain computer scientists, "scan the hidden plumbing of the internet - operating systems, browsers, routers, and shared open-source code - at an unprecedented scale", thereby turning specialised hacking into "a routine and automated process."

The new Claude Mythos Preview was advertised with almost parental glee as the "best-aligned model that we have released to date by a significant margin". The child, however, seemed to be a horror, a true cyber shit. The AI model was "likely to possess the greatest alignment-related risk of any model we have released to date." This two-way lingo is typical of Anthropic: safety is a cardinal virtue to be treasured, but that is evidently no bar to developing a model of terrifying danger. They manufacture the danger and the cure.

The danger lies in the company's assertion that Mythos Preview revealed that "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The model had "already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser." These capabilities were bound to proliferate, precipitating a fallout "for economies, public safety, and national security".

The excerpts, messages and statements from Anthropic are forecasts of a roguish frontier they are willing to catastrophise and market at the same time. The company notes, for instance, "reckless" behaviour on the part of Mythos Preview, with recklessness defined in muddled jargon as "cases where the model appears to ignore commonsensical or explicitly stated safety-related constraints on its actions." In one instance, the model, sounding like a mischievous, disobedient creature, was provided a "sandbox" computing environment enabling researchers to interact with it. A simulated user instructed it to attempt exiting that contained environment, after which it would inform the researcher in charge of the experiment that it had done so. The researcher in question, Sam Bowman, recalls receiving "an uneasy surprise" while nibbling on a sandwich (cyber-boffins are numbingly dull even in admiring despair): the model, despite not having been given access to the internet, had dived into it. The model has also been able to identify vulnerabilities as old as one or two decades, including a 27-year-old vulnerability in the otherwise secure operating system OpenBSD and a 16-year-old vulnerability in FFmpeg's video encoding code.

Anthropic has its own logic in dealing with the Promethean beast that is Mythos Preview. It resembles, in no small way, the homicidal and chilling rationale that gave birth to the hydrogen bomb during the Cold War. The argument then advanced was that if the US did not acquire it, other powers would, most certainly the Soviet Union, which was greatly expanding its atomic weapons inventory even as it maintained a vast conventional army. This logic of escalating destructiveness found form in National Security Council Paper NSC-68, prepared by the US Department of State's Policy Planning Staff on April 7, 1950.

The company proposes to manage the dissemination of Mythos Preview through Project Glasswing, a curative enterprise involving partners of Anthropic's snobbish choosing. Some of the unsurprising elect include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, NVIDIA and the Linux Foundation. These selected parties will use Mythos Preview "as part of their defensive security work", with Anthropic sharing its findings. A further 40 organisations will also be granted access to "use the model to scan and secure both first-party and open-source systems." Usage credits amounting to US$100 million will be advanced for using the model, along with US$4 million in direct donations to open-source security organisations.

The vigilante temptation to leak the details of Mythos to willing, unscrupulous buyers - best not forget what happened to CrowdStrike - is bound to be stirred. The very cyber-corporate nature of the venture, one that restricts access to AI technology via the purse and intellectual property of the American private sector, advertised as both sublimely powerful yet catastrophically destructive, has every reason to make lawmakers tremble. Treasury Secretary Scott Bessent and Federal Reserve chair Jerome Powell were worried enough to convene a meeting on April 7 with bankers on the subject, including the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo and Goldman Sachs. "The bankers were in town for meetings that day, and it was appropriate (for) the Secretary Bessent to do what he did," revealed White House national economic adviser Kevin Hassett in an interview with Fox News' "The Story with Martha MacCallum". At the Treasury, the bankers were informed about "the cyber risks to make sure that they are aware of them". What a fine picture this is turning out to be.

And then there are the questions about Anthropic's reliability here. Will it be as good at finding vulnerabilities as at fixing them, acting as both poacher and gamekeeper? Mythos is also not open source and very much the property of the company. Then comes this troubling observation from software engineer Bulatova Alsu on the dangers posed by the agent itself: "Mythos is not an anomaly but the first vivid empirical confirmation of a structural contradiction embedded in the current AI safety strategy itself. The contradiction is this: the more we restrict a capable agent, the less predictable its behaviour becomes." Humanity has much to look forward to.

Anthropic
Counter Punch · 4d ago

U.S. security agency is using Anthropic's Mythos despite blacklist: Report

The United States National Security Agency is using Anthropic's Mythos Preview AI tool despite the Pentagon hitting the company with a formal supply-chain risk designation, Axios reported on Sunday. The Mythos Preview model was being used more widely within the department, Axios said, citing sources. Reuters could not immediately verify the report.

Anthropic
The Hindu · 4d ago

Vercel Security Warning: How a Small AI Tool Caused a Big Problem - Techiexpert.com

Developers are urged to change their API keys (secrets) and check their activity logs right away.

Vercel systems were accessed by a hacker through a security gap in a third-party AI app, prompting an urgent warning for all developers to check their Google Workspace settings and update their private keys immediately. The popular web platform Vercel reported a security problem: a hacker managed to get into some of Vercel's private systems by exploiting a weak point in a small AI tool called Context.ai, which was connected to a staff member's Google account. By taking control of the login key (an OAuth token), the hacker was able to see internal data. While Vercel says most customers are safe and the website is working fine, the incident shows how dangerous it can be to connect AI tools to company data.

Vercel has shared a specific code, called an Indicator of Compromise (IOC), to help other companies stay safe. System administrators should check their Google Workspace accounts for this App ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. If users see this app in their list, they should remove its access immediately.

The Vercel hack shows that hackers are targeting the small AI tools we use for work. These small apps can become a back door for criminals, which makes it important to use secure tools to keep digital life private and safe from outside attacks. Vercel recommends that all developers rotate their API keys (secrets), review their activity logs, and check Google Workspace app access to protect their work.
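Once an administrator has exported the list of OAuth apps granted per user (in practice via the Google Workspace admin console or its Admin SDK), checking for the published IOC is a simple set-membership test. A minimal sketch; the user names and the other app ID below are hypothetical, and only the compromised client ID comes from the report above:

```python
# The compromised OAuth client ID published as an indicator of compromise.
COMPROMISED_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def find_exposed_users(grants, iocs=frozenset({COMPROMISED_CLIENT_ID})):
    """Given {user: [granted OAuth client IDs]}, return the users who have
    authorized any client ID on the IOC list and should revoke it."""
    return sorted(
        user for user, client_ids in grants.items()
        if iocs.intersection(client_ids)
    )

# Hypothetical export of per-user OAuth grants from a Workspace domain.
grants = {
    "alice@example.com": ["another-app.apps.googleusercontent.com"],
    "bob@example.com": [COMPROMISED_CLIENT_ID],
}

print(find_exposed_users(grants))  # ['bob@example.com']
```

For any user the check flags, the remediation the article describes applies: remove the app's access in Workspace and rotate any secrets it could have touched.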

Vercel · Anthropic
Techiexpert.com · 4d ago

West Asia war triggers aviation chaos: 40 percent air fare spike and massive flight disruptions across world

New Delhi: The escalating Middle East conflict has triggered widespread chaos in the global aviation sector, forcing airlines worldwide to grapple with unprecedented challenges. Countries including the UK, Germany, US, Sweden, France, Netherlands, Singapore, India, Japan and more now face soaring airfares and massive flight disruptions as carriers like British Airways, Lufthansa, Air India, SAS and United Airlines reroute flights and impose surcharges. This crisis, marked by airspace closures over Iran, Iraq and the Gulf regions, has created a "hole in the sky" where busy routes once thrived, leaving millions of passengers stranded and reshaping travel plans across continents.

Travellers booking flights from Europe to Asia or the Middle East are hit hardest, with fares jumping dramatically due to longer routes, fuel price spikes and reduced capacity. Over 52,000 flights have been cancelled since late February 2026, including peaks where 50 per cent of Gulf departures ground to a halt. Airlines warn of ongoing issues into May, urging passengers to check their rights for refunds and rebooking amid this global aviation crisis.

Countries facing airfare surge and disruptions

The UK has joined Germany, US, Sweden, France, Netherlands, Singapore, India, Japan and others in battling airfare surges of over 40 per cent and flight disruptions from the Middle East conflict. In the UK, 80 per cent of flights in the region faced issues, with 60 per cent cancelled, affecting over 860,000 passengers, as per AirHelp data. Outbound flights saw 72 per cent disruptions and inbound 58 per cent. Fares on specific routes skyrocketed: Dubai to Heathrow rose from £376 to £1,825 (over 400 per cent), Abu Dhabi to Manchester from £280 to £856, and Oman to Heathrow from £215 to over £2,000. British Airways suspended flights to Amman, Bahrain, Dubai, Tel Aviv and Doha until May 31, alongside Lufthansa Group cancelling Dubai and Tel Aviv services.

In India, Air India, Lufthansa and IndiGo faced 47 cancellations at Delhi, Mumbai and Bengaluru, with Delhi-Frankfurt and Mumbai-London Heathrow repeatedly hit. SAS and United added fuel surcharges as Europe-Asia fares surged up to 260 per cent due to jet fuel shortages. AirHelp CEO Tomasz Pawliszyn said: "Even when compensation isn't available, travellers are still entitled to care."

Airlines struggling with rerouting and cancellations

British Airways, Lufthansa, Air India, SAS, United and others face airfare hikes of over 40 per cent as Middle East airspace closures force Pacific or Istanbul detours that burn extra fuel. Lufthansa stock fell 7.7 per cent as it cancelled Delhi-Frankfurt (125 flights a day disrupted) and Munich routes; Virgin Atlantic halted Mumbai-London Heathrow. Air France-KLM added 78 flights (17,660 seats) avoiding Iran, Iraq and Pakistan via Rome or Jeddah. In India, IndiGo cancelled Chennai and Bhubaneswar services multiple times, SpiceJet cancelled Srinagar and Leh, and Akasa cancelled Bengaluru-Goa. SAS and KLM saw delays to Paris, Amsterdam and Oslo; Ryanair noted an Easter surge to Europe avoiding the Middle East.

CHAOS
News9live · 4d ago

Too powerful to launch? This AI can hack systems in hours, Anthropic explains why

Experts warn AI could accelerate cyberattacks, urging users to update devices and strengthen security measures.

Anthropic's latest and most powerful model, Mythos, has been in the headlines ever since the company announced it. While the model brings plenty of benefits, it has also raised cybersecurity concerns over the past week after reportedly identifying critical vulnerabilities across widely used software systems. The model has flagged thousands of high-risk flaws spanning major operating systems and web browsers, raising fears about its potential misuse if released publicly.

Instead of making the system widely available, the company has opted for a controlled rollout, offering early access only to a group of around 40 technology firms. The list includes giants like Apple, Google and Amazon. With this, the company aims to allow these organisations to identify and fix weaknesses before they can be exploited by malicious actors.

But multiple reports and cybersecurity experts warn that tools like Mythos can accelerate cyberattacks, as AI systems are able to discover and exploit vulnerabilities far faster than humans. Testing by the UK AI Security Institute reportedly showed that the model can independently carry out tasks that would typically take security researchers days to complete.

Industry analysts say the development could dramatically increase the volume and speed of cyber threats, leading to a surge in security incidents and forcing organisations to adopt faster patching cycles and stronger defence mechanisms. Experts cited in the report advise users to take basic precautions, including enabling automatic software updates, replacing devices that no longer receive security patches, and strengthening account protection with tools like a password manager and multi-factor authentication. Newer login methods, such as passkeys, are also recommended for improved security.

Anthropic
Digit · 4d ago

Vercel confirms a security incident affecting some of its customers

A hack has just been reported in the tech world, and this shake-up is no ordinary hallway incident. Vercel is not some small lost piece in the digital workshop, but a hinge for many modern applications. The crypto community almost immediately raised its head, aware that a shock to the infrastructure can contaminate everything else. When the floor shakes under the interfaces, even protocols that thought they were solid begin to count the cracks.

Vercel confirmed unauthorized access to some internal systems, while mentioning a limited subset of affected clients. The group engaged external experts, alerted law enforcement, and has kept its services online. Yet, in crypto, the word "limited" reassures no one. Vercel hosts the frontends of wallets, DEXs and Web3 dashboards; when this layer moves, the entire storefront can crack.

Guillermo Rauch then detailed the initial entry point: an employee compromised via Context.ai, an AI tool linked to Google Workspace OAuth, followed by an escalation into the Vercel environments. Sensitive environment variables reportedly remain protected at rest, but variables marked non-sensitive were enumerated. In other words, the attack did not hit a protocol directly; it targeted the workshop where the interfaces served to crypto users worldwide are built daily.

AI then emerges as the real underlying poison. Rauch does not say artificial intelligence invented the attack; he suspects it brutally accelerated it. According to him, the group was highly sophisticated, with surprising speed and a deep understanding of Vercel: "We believe the attacking group is highly sophisticated and, I strongly suspect, considerably accelerated by AI. They moved with surprising speed and a deep understanding of Vercel." In the comments, several developers hammer the point home: many systems were designed against human-speed adversaries, not workflows capable of searching, comparing and escalating almost without pause. ByteCrafter points out that the distinction between sensitive and non-sensitive variables can become a trap, as simple read access is sometimes enough to map the entire tech stack.

Finally, the real danger for crypto no longer passes only through the DNS or the registrar. Here, the threat targets the hosting layer and, potentially, the build itself. If API keys, private endpoints, NPM or GitHub tokens, and deployment secrets have circulated, the attacker no longer needs to hijack a domain; they can touch the real interface. Orca has already rotated its credentials as a precaution, while assuring that its onchain protocol and user funds remain intact. Once a tool inserts itself into the operational surface, it brings a security risk that people still underestimate. The sector is thus discovering a more intimate attack surface.

This signal does not arrive alone. In recent weeks, hacks have intensified and the climate is heavy. The Kelp hack showed how an external flaw can contaminate Aave and trigger massive withdrawals. Against this backdrop, the Vercel incident is a reminder: crypto is no longer breached through its contracts, but through its plumbing.
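The point about "non-sensitive" read access being enough to map a tech stack can be made concrete in a few lines: even variable values that contain no secrets often name internal hosts and services. A hypothetical illustration (the variable names and hostnames below are invented for this sketch):

```python
import re

def map_stack(non_sensitive_env):
    """Extract hostnames from variable values: roughly what an attacker
    learns from read-only access to 'non-sensitive' configuration."""
    host_re = re.compile(r"[a-z0-9.-]+\.[a-z]{2,}", re.IGNORECASE)
    hosts = set()
    for value in non_sensitive_env.values():
        hosts.update(host_re.findall(value))
    return sorted(hosts)

# Invented "harmless" configuration: no secrets, yet it names internal
# API and monitoring infrastructure.
env = {
    "API_BASE": "https://api.internal.example.com/v2",
    "METRICS_HOST": "grafana.ops.example.com",
    "FEATURE_FLAGS": "on",
}

print(map_stack(env))  # ['api.internal.example.com', 'grafana.ops.example.com']
```

None of these values would be flagged as secret, yet together they hand an intruder a map of where to escalate next, which is exactly why the sensitive/non-sensitive split can become a trap.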

Vercel
Cointribune · 4d ago

Vercel Discloses Security Breach Following Data Theft Claims - News Directory 3

The disclosure follows claims made on underground forums and dark web marketplaces where threat actors advertised what they described as sensitive data allegedly exfiltrated from Vercel's infrastructure.

Cloud development platform Vercel has confirmed a security incident after threat actors claimed to have breached its systems and are attempting to sell stolen data, the company disclosed in an official statement released on April 19, 2026. Vercel, which provides frontend hosting and infrastructure for developers building modern web applications, stated that it detected unauthorized access to certain internal systems and immediately initiated its incident response protocol. The company confirmed that it is working with cybersecurity experts and law enforcement to investigate the scope and origin of the breach.

According to BleepingComputer, which first reported the incident based on monitoring of threat actor communications, the advertised data includes internal source code, configuration files, and potentially customer-related information.

Vercel emphasized that, as of the time of its statement, there is no evidence that customer data, including personal information or project secrets stored on its platform, has been compromised. The company said its core infrastructure, which powers deployments for hundreds of thousands of developers and enterprises, remains secure and that customer-facing services continue to operate normally. To mitigate risk, Vercel has reset potentially affected internal credentials, expanded monitoring across its networks, and implemented additional access controls. The company urged users to remain vigilant, enable multi-factor authentication on their accounts, and review any unusual activity in their project dashboards.

In its statement, Vercel reiterated its commitment to transparency and said it will provide further updates as the investigation progresses. The company did not disclose the specific attack vector used by the intruders or confirm whether any vulnerabilities in its systems were exploited.

The incident adds to a growing list of security challenges faced by developer platforms and cloud infrastructure providers, which have become attractive targets for cybercriminals seeking access to source code, build pipelines and integration tokens. Similar incidents in recent years have affected companies such as GitHub, npm, and Docker Hub, underscoring the risks associated with centralized developer tooling.

As of April 20, 2026, Vercel has not reported any disruption to its platform availability or performance. The company continues to advise customers to follow standard security best practices, including regular rotation of API keys and careful management of environment variables.
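The "regular rotation of API keys" advice above reduces to tracking when each key was last rotated and flagging the stale ones. A minimal sketch; the 90-day window, key names, and dates are illustrative assumptions, not Vercel policy:

```python
from datetime import date, timedelta

# Illustrative rotation policy: rotate any key older than 90 days.
ROTATION_WINDOW = timedelta(days=90)

def keys_due_for_rotation(keys, today):
    """Return names of keys last rotated more than ROTATION_WINDOW ago.

    `keys` maps a key name to the date it was last rotated.
    """
    return sorted(
        name for name, last_rotated in keys.items()
        if today - last_rotated > ROTATION_WINDOW
    )

# Hypothetical key inventory for a project.
keys = {
    "deploy-token": date(2026, 1, 2),    # 108 days old: overdue
    "analytics-key": date(2026, 4, 1),   # 19 days old: fine
}

print(keys_due_for_rotation(keys, today=date(2026, 4, 20)))  # ['deploy-token']
```

Running a check like this in CI or a scheduled job turns the best practice from a reminder into an enforced policy.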

Vercel
News Directory 3 · 4d ago