Most of today’s generative AI tools come with strong guardrails. They won’t teach you how to make explosives or walk you through committing digital fraud. These rules usually work well, and tools like Grok, Claude, or Gemini will shut you down when you try to use them for anything nefarious.
Unfortunately, cybercriminals often won't take “no” for an answer. While some hackers try to jailbreak mainstream tools with clever prompts, others have taken a different route: they’re building their own unrestricted large language models designed specifically for malicious activity. These chatbots for cybercrime aren’t just experimental side projects; they’re quickly becoming part of the modern cybercriminal toolkit.
Researchers Found AI Models Built Only for Attacks
Cybersecurity researchers from Palo Alto Networks’ Unit 42 recently analyzed two of these underground systems to determine what these tools can do and the risks they pose to everyday businesses.
The findings were unsettling.
These malicious AI models aren’t clunky prototypes. They can write believable phishing emails, assist in malware creation, generate automated attack scripts, and even walk inexperienced hackers through each step needed to commit their crimes. In other words, they make it easier for low-skilled actors to launch high-impact attacks.
Why Criminals Are Turning to Rogue AI
Given the restrictions on legitimate generative AI tools, it’s easy to see why a cybercriminal would build a rogue chatbot.
Chatbots for cybercrime are trained on stolen code, leaked datasets, and past malware samples, which allows them to:
- Operate without the guardrails built into legitimate tools
- Provide detailed instructions for illegal actions, including phishing attacks
- Automate tedious tasks like reconnaissance and scripting
These tools are essentially “hacker copilots.” And, as with any automation, they scale quickly, compounding an already challenging cybersecurity landscape. Attackers no longer need deep technical skills; they just need access to the right chatbot to develop malware faster, exploit vulnerabilities automatically, send more convincing phishing emails, and launch more attacks at once.
Practical Steps To Stay Ahead of Cybersecurity Threats
You can’t stop criminals from building malicious AI, but you can make yourself a harder target. Here are a few best practices:
- Train your employees on modern phishing tactics: Cybercriminals are using AI to create highly personalized, mistake-free messages. Teach your staff to verify unexpected requests, especially anything involving money, credentials, or account access.
- Use advanced email filtering and endpoint security: Many security platforms now include AI-powered detection that can identify unusual writing patterns or malicious attachments. A simple illustrative check appears after this list.
- Keep your software and devices fully updated: Attacks powered by hacker automation commonly target known vulnerabilities, and prompt patching closes the holes those automated attacks rely on.
- Harden access with MFA: Even if a phishing attempt succeeds, multi-factor authentication (MFA) can stop attackers from logging in.
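To make the filtering idea concrete, here is a minimal, hypothetical Python sketch. It is not any vendor’s actual detection engine; commercial platforms combine far richer signals and machine-learning models. The sketch only illustrates two common phishing tells a filter might look for: links whose visible text points to a different domain than the real destination, and urgent money or credential language in the message body.

```python
import re
from urllib.parse import urlparse

# Hypothetical example message; real filters operate on fully parsed emails.
HTML_BODY = """
<p>Your account will be suspended. Verify now:
<a href="http://login-update.example-phish.ru/verify">https://www.yourbank.com/login</a></p>
"""

# A tiny sample of urgency phrases commonly seen in phishing lures.
URGENT_PHRASES = [
    "verify now",
    "account will be suspended",
    "wire transfer",
    "reset your password immediately",
]

def domain(url: str) -> str:
    """Return the host portion of a URL (simplified)."""
    return (urlparse(url).hostname or "").lower()

def flags(html: str) -> list[str]:
    findings = []
    # 1. Mismatched links: the visible text looks like one site,
    #    but the href actually points somewhere else.
    for href, text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', html, re.I):
        shown = domain(text.strip()) if text.strip().startswith("http") else ""
        if shown and shown != domain(href):
            findings.append(f"link text shows {shown} but points to {domain(href)}")
    # 2. Urgency / credential language commonly used to rush the reader.
    lowered = html.lower()
    for phrase in URGENT_PHRASES:
        if phrase in lowered:
            findings.append(f"urgent phrase: '{phrase}'")
    return findings

if __name__ == "__main__":
    for finding in flags(HTML_BODY):
        print("FLAG:", finding)
```

A rule-based check like this only scratches the surface; the point is that even simple pattern matching can catch the mismatch between what an email says and where it actually sends you, which is exactly the kind of signal modern filters automate at scale.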
Keep Up With Threats To Defend Your Company
AI helps criminals almost as much as it helps your business. As chatbots for cybercrime evolve, you need to stay alert and maintain strong defenses.
The threat is real, but with smart security habits and up-to-date protections, you can stay a step ahead of malicious AI and the hackers using it. Tighten your security controls, stay suspicious, and assume that a hacker is always looking for a way to attack.
