For just $50, cybercriminals can now access AI capabilities that were once reserved for elite hackers. Welcome to the era of GhostGPT, where sophisticated cyber-attacks are no longer limited to the most dangerous fraudsters.
A January 23, 2025 report by Abnormal Security, highlighted by Forbes contributor Davey Winder, confirms the growing threat of GhostGPT. This tool is part of a larger trend of AI-driven security threats affecting various sectors, including Gmail users, bank customers, and individuals targeted through smartphone calls and messages.

Unlike traditional AI models with ethical guardrails, GhostGPT provides direct, unfiltered answers to harmful queries that would typically be blocked or flagged. Experts emphasize that GhostGPT is not a future concern but a "real and present danger" when used by malicious actors.
That $50 buys a subscription to a highly sophisticated artificial intelligence model, putting capabilities that only the most dangerous fraudsters used to have at their disposal within anyone's reach.
Similar to WormGPT and FraudGPT, GhostGPT is altered to bypass the typical safeguards and ethical constraints present in LLMs, allowing even the lowest-skilled fraudsters to generate malicious code and malware and to create alarmingly convincing phishing emails. For example, during a test run by researchers, the model produced a very convincing DocuSign phishing email, complete with a fraudulent link. Further complicating matters, it also boasts a "no-logs" policy, meaning the program keeps no records of conversations.
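On the defensive side, one telltale trait of lures like the DocuSign example is a link whose visible text names a trusted domain while its href points elsewhere. The sketch below illustrates that check; the regex-based HTML parsing and the sample email are simplifying assumptions for illustration, not part of any product described in this article.

```python
# Minimal sketch: flag links whose visible text claims one domain
# but whose href resolves to a different host -- a common trait of
# phishing emails. Regex parsing is a simplification; real mail
# scanners use full HTML parsers.
import re
from urllib.parse import urlparse

ANCHOR_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

def suspicious_links(html):
    """Return hrefs whose visible text mentions a domain that
    does not match the href's actual host."""
    flagged = []
    for href, text in ANCHOR_RE.findall(html):
        host = urlparse(href).hostname or ""
        # Domains the visible link text appears to claim
        claimed = re.findall(r'([a-z0-9-]+\.[a-z]{2,})', text.lower())
        if claimed and not any(host.endswith(c) for c in claimed):
            flagged.append(href)
    return flagged

# Hypothetical lure: text says docusign.com, href goes elsewhere
email = '<p>Review: <a href="http://evil.example.net/sign">docusign.com/review</a></p>'
print(suspicious_links(email))  # ['http://evil.example.net/sign']
```

A legitimate link whose host matches its visible text (e.g. an href on `www.docusign.com` displayed as `docusign.com`) passes the check unflagged.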
GhostGPT was first discovered in mid-November, being sold through a Telegram channel. Its authors offer three packages for the LLM: one week for $50, one month for $150, and three months for $300. These are minuscule sums compared to what a single successful attack could yield.
This is not the first of its kind. As noted earlier, malicious chatbots WormGPT and FraudGPT were found in 2023. Both models were cause for alarm within the cybersecurity community due to how much they lowered the bar for cybercriminals to execute successful, sophisticated attacks.
GhostGPT represents a significant leap in AI-powered cybercrime tools. Unlike mainstream AI models, it operates without ethical constraints, enabling unrestricted generation of malicious content.
• Architecture: Likely uses a wrapper connecting to a jailbroken ChatGPT or an open-source LLM
• Uncensored Processing: Generates harmful content without typical AI safeguards
• No-Logs Policy: Operates without keeping conversation records, enhancing user anonymity
• Telegram Integration: Easily accessible as a Telegram bot
• Rapid malware and exploit development
• Highly convincing phishing email generation
• Automated social engineering
• Potential for creating polymorphic malware
Its tiered pricing makes it accessible to a wide range of cybercriminals, potentially leading to an increase in both the volume and sophistication of cyberattacks.
This trend, alongside similar tools like WormGPT and FraudGPT, poses significant challenges for cybersecurity professionals and necessitates more advanced defensive solutions.
Malicious AI models like GhostGPT will keep emerging and growing more advanced. Their increasing sophistication will make attacks harder to detect while lowering the bar for would-be fraudsters, driving up the volume of daily attacks.
Traditional cybersecurity methods and training will struggle against the sheer volume of sophisticated attacks. Organizations that fail to take these threats seriously face massive financial losses and potential legal repercussions.
To combat these threats, businesses must implement modern, comprehensive security platforms that provide them with end-to-end visibility across the entire payment process.
Behavioral AI solutions that can integrate with existing ERP systems enable organizations to autonomously detect and flag anomalies, preventing potential financial disaster.
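As a minimal illustration of the kind of anomaly flagging described above, the sketch below applies a simple z-score test to a vendor's payment history. The payment figures and the 3-sigma threshold are illustrative assumptions, not the method of any specific vendor's product; production behavioral AI systems use far richer features than amount alone.

```python
# Minimal sketch: flag a payment amount that deviates sharply from
# a vendor's historical pattern using a z-score test.
from statistics import mean, stdev

def flag_anomaly(history, candidate, threshold=3.0):
    """Return True if `candidate` deviates more than `threshold`
    standard deviations from the mean of `history`."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return candidate != mu
    return abs(candidate - mu) / sigma > threshold

# Hypothetical monthly payments to one vendor
payments = [1020.0, 980.5, 1005.0, 995.0, 1010.0]
print(flag_anomaly(payments, 1001.0))    # False: typical amount
print(flag_anomaly(payments, 25000.0))   # True: suspicious outlier
```

A real deployment would combine signals like payee bank-account changes, invoice timing, and sender reputation rather than relying on a single statistic.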