
FraudGPT: New AI Tool for Cybercrime Emerges

FraudGPT recently emerged as a new cybercrime AI tool that can carry out malicious activities by bypassing ChatGPT's filters. Threat actors have been actively advertising the bot on dark web marketplaces and Telegram channels.
A robot hacker AI-generated image, courtesy of arafat92
July 31, 2023 2:25 pm

An emerging cybercrime generative AI tool dubbed 'FraudGPT' is being heavily advertised by threat actors across various digital channels, notably dark web marketplaces and Telegram channels.

The tool was showcased as a "bot without limitations, rules, boundaries," one that is exclusively "designed for fraudsters, hackers, spammers, and like-minded individuals," according to a dark web user known as "Canadiankingpin."

A screenshot that surfaced online showed more than 3,000 confirmed sales and reviews of the tool. The promoter [Canadiankingpin] also listed subscription fees ranging from $200 to $1,700, depending on the subscription length.


Unconstrained by ethical boundaries, FraudGPT lets users bend the bot to their advantage and carry out whatever they ask of it. It is promoted as a "cutting edge tool" with a long list of harmful capabilities.

These include creating hacking tools, phishing pages, and undetectable malware, writing malicious code and scam letters, finding leaks and vulnerabilities, and more.

In a recent report, Rakesh Krishnan, a security researcher at Netenrich, asserted that the AI bot is designed exclusively for offensive purposes.

He elaborated on the threats the chatbot poses, saying it will help threat actors mount attacks against their targets, including business email compromise (BEC), phishing campaigns, and other frauds.

“Criminals will not stop innovating – so neither can we,” Rakesh Krishnan emphasized. 

Amid the recent wave of harmful AI bots, FraudGPT joins ChaosGPT and WormGPT as an allegedly even more dangerous tool, adding to the darker side of generative AI systems.

The emergence of these threatening AI bots raises serious cybersecurity concerns and directly undermines online safety. It also casts a shadow over legitimate AI systems, no matter how useful they are.

No wonder countries are eagerly pushing for AI regulation. The alarming side of AI and its boundless potential to endanger users is gradually coming into view, and it clearly calls for tighter restrictions and regulations.

Written by Justin Gluska
Justin is the founder of Gold Penguin, a business technology blog providing the latest news and tools in the artificial intelligence, business, and SaaS world. If it can help you make more money or save you time, he will write about it!