FraudGPT: New AI Tool for Cybercrime Emerges

FraudGPT recently emerged as a new cybercrime AI tool that carries out malicious activities by bypassing ChatGPT's filters. Threat actors have been actively advertising the bot on dark web marketplaces and Telegram channels.

Justin Gluska

Updated March 21, 2024

Reading Time: 2 minutes

An emerging cybercrime generative AI tool dubbed 'FraudGPT' has been rampantly advertised by threat actors across various digital channels, notably dark web marketplaces and Telegram channels.

The cybercrime tool was showcased as a "bot without limitations, rules, boundaries," exclusively "designed for fraudsters, hackers, spammers, and like-minded individuals," according to a dark web user known as "Canadiankingpin."

A screenshot that surfaced on the web showed more than 3,000 confirmed sales and reviews of the tool. Moreover, its promoter, Canadiankingpin, listed subscription fees ranging from $200 up to $1,700, depending on the subscription length.

Image from https://netenrich.com/blog/fraudgpt-the-villain-avatar-of-chatgpt

With no ethical boundaries, FraudGPT lets users bend the bot to their advantage and have it do whatever is asked of it; it is being promoted as a "cutting edge tool" with plenty of harmful capabilities.

These include creating hacking tools, phishing pages, and undetectable malware, writing malicious code and scam letters, finding leaks and vulnerabilities, and more.

In a recent report, Rakesh Krishnan, a Netenrich security researcher, asserted that the AI bot is designed exclusively for offensive purposes.

He elaborated on the threats arising from the chatbot, saying it will help threat actors move against their targets through business email compromise (BEC), phishing campaigns, and other frauds.

“Criminals will not stop innovating – so neither can we,” Rakesh Krishnan emphasized. 

Amid the recent wave of malicious AI bots, FraudGPT is allegedly an even more threatening tool than predecessors like ChaosGPT and WormGPT, adding to the harmful side of generative AI systems.

The recent development of these threatening AI bots raises serious cybersecurity concerns and undermines cybersafety. It also casts a shadow over the broader progress of AI systems, no matter how useful the legitimate AI generators are.

No wonder countries are eagerly pushing for AI regulation laws. The alarming side of AI and its boundless potential to endanger users is gradually coming into view, and it clearly calls for heightened restrictions and regulations.

Want to Learn Even More?

If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.


Written by Justin Gluska

Justin is the founder of Gold Penguin, a business technology blog that helps people start, grow, and scale their business using AI. The world is changing and he believes it's best to make use of the new technology that is starting to change the world. If it can help you make more money or save you time, he'll write about it!
