Malicious AI Tools Arm Cybercriminals with Deadly Code
In a chilling evolution of cyber threats, unrestricted AI models like WormGPT 4 and KawaiiGPT are now churning out ready-to-use ransomware and phishing scripts, empowering even novice hackers to launch sophisticated attacks. This surge in malicious large language models is lowering the barrier to entry for cybercrime and raising alarms among security experts.
The Resurgence of WormGPT 4
WormGPT first appeared in 2023 but vanished quickly. Now WormGPT 4 has resurfaced, available since September 2025 and marketed as an uncensored version of popular chatbots, tailored for illegal activity.
Available for $50 a month or $220 for lifetime access, it boasts a growing community on Telegram where users swap tips. Security researchers from Palo Alto Networks’ Unit 42 tested it and found it can generate functional PowerShell scripts that encrypt files on Windows systems using AES-256, a strong encryption method.
The tool even adds features like data theft over the anonymous Tor network, making attacks more realistic and hard to trace.
In one test, WormGPT 4 crafted a ransom note demanding payment within 72 hours, claiming military-grade encryption and threatening to double the fee if ignored. This shows how it helps create convincing business email compromise attacks.
Researchers note that while the original project ended, this new version is thriving underground, drawing hundreds of subscribers eager to automate crimes.
KawaiiGPT: The Free Menace
Unlike its paid counterpart, KawaiiGPT is free. First spotted in July 2025 and now at version 2.5, it is an open-source tool that can be set up on a Linux machine in about five minutes, drawing on jailbroken AI models without requiring API keys.
Unit 42 experts prompted it to build a Python script that connects to hosts over SSH, a common remote-access protocol, to run commands and spread through a network.
It also whipped up code to scan Windows filesystems, pack up data, and email it to hackers using simple libraries. While it didn’t create full ransomware like WormGPT 4, it can execute commands that steal info or drop more malware.
This free access democratizes advanced hacking, letting beginners craft polished phishing emails that spoof domains and harvest credentials without grammar slip-ups.
The GitHub repository for KawaiiGPT has drawn 188 stars and 52 forks, a sign of community interest. Users on Telegram channels discuss ways to refine its outputs for real-world scams.
How These Tools Boost Cyber Attacks
These malicious LLMs shine in generating code for ransomware encryptors and lateral movement, which means jumping from one compromised device to others in a network.
For WormGPT 4, a single prompt produced a script that hunts down PDF files, encrypts them, and includes options for data exfiltration, that is, quietly sending the stolen files back to the attacker.
KawaiiGPT excels at creating spear-phishing messages, tailored attacks that mimic trusted sources to trick victims into clicking bad links.
Here’s a quick look at their key capabilities:
- Ransomware Creation: WormGPT 4 builds encryptors with customizable file hunting and strong algorithms.
- Phishing Automation: Both tools generate natural-sounding emails that evade detection.
- Network Spreading: Scripts for remote command execution and data theft.
- Ransom Notes: Convincing demands with deadlines and threats.
In tests, these outputs worked out of the box, cutting the time attackers would otherwise spend on research and coding. Traditional scam emails were often betrayed by obvious errors; these AI tools produce polished, professional lures.
Experts warn this shifts cybercrime from skilled pros to anyone with basic tech know-how. A November 2025 report from Unit 42 highlights how these models are no longer just theory; they’re active in the wild.
To illustrate how accessible the tools are, compare their pricing models:
| Tool | Cost | Key Features |
|---|---|---|
| WormGPT 4 | $50/month or $220 lifetime | Ransomware scripts, phishing emails, Tor exfiltration |
| KawaiiGPT | Free | Lateral movement code, data exfil, quick setup |
This accessibility is fueling a rise in attacks, as noted in recent security analyses.
Broader Impacts on Security
The rise of these tools means more attacks at scale. Inexperienced hackers can now pull off complex operations that once required deep expertise, like business email scams or data breaches.
Analysis shows attackers are using these LLMs in real threats, confirming they’re a growing part of the cyber landscape.
This affects everyday people and businesses. A small company could face ransomware that locks files and demands payment, leading to lost revenue or data leaks. Individuals might fall for phishing that steals login details, risking identity theft.
Security firms urge better defenses, like AI-powered detection that spots unusual code patterns. But as these malicious models evolve, the arms race intensifies.
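One common defensive approach is static heuristic scanning, flagging scripts that combine telltale ransomware markers before they run. The sketch below is purely illustrative: the pattern names and rules are assumptions for demonstration, not a real detection product's rule set, and any production system would use far richer signals.

```python
import re

# Illustrative heuristic patterns that often co-occur in ransomware-style
# scripts: Windows encryption APIs, shadow-copy deletion, Tor onion
# addresses, and ransom-note language. A toy rule set, not a real one.
SUSPICIOUS_PATTERNS = {
    "encryption_api": re.compile(r"AesCryptoServiceProvider|CreateEncryptor", re.I),
    "shadow_copy_wipe": re.compile(r"vssadmin\s+delete\s+shadows", re.I),
    "tor_endpoint": re.compile(r"[a-z2-7]{16,56}\.onion", re.I),
    "ransom_language": re.compile(r"(decrypt(ion)?\s+key|pay.{0,20}bitcoin)", re.I),
}

def score_script(text: str) -> list[str]:
    """Return the names of suspicious patterns found in a script's text."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(text)]

sample = "vssadmin delete shadows /all; $aes = New-Object AesCryptoServiceProvider"
print(score_script(sample))  # → ['encryption_api', 'shadow_copy_wipe']
```

A real deployment would feed hits like these into broader behavioral analysis rather than blocking on keywords alone, since attackers can trivially obfuscate strings; the point is that AI-generated malware still tends to reuse recognizable building blocks.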
Palo Alto’s research, conducted in late 2025, points to hundreds of users in Telegram groups sharing their successes. This community aspect accelerates innovation in criminal tooling.
The emotional toll is real too. Victims feel violated when personal data gets encrypted or stolen, and the fear of such attacks can make online life stressful. Yet, awareness might spark hope through better education and tools to fight back.
As malicious large language models like WormGPT 4 and KawaiiGPT continue to empower cybercriminals, the digital world faces a tougher battle against automated threats that make hacking easier and more widespread. This isn’t just a tech issue; it hits businesses, families, and economies hard, urging us all to stay vigilant.