An emerging cybercrime generative AI tool dubbed ‘FraudGPT’ is being widely advertised by threat actors across digital channels, notably dark web marketplaces and Telegram.
The tool was showcased as a “bot without limitations, rules, boundaries,” one “designed for fraudsters, hackers, spammers, and like-minded individuals,” according to a dark web user known as “Canadiankingpin.”
Screenshots circulating online show more than 3,000 confirmed sales and reviews of the tool. Canadiankingpin also listed subscription fees ranging from $200 to $1,700, depending on the subscription length.
Promoted as a “cutting edge tool” with plenty of harmful capabilities, FraudGPT operates without ethical boundaries, letting users direct the bot to do whatever is asked of it.
These capabilities include creating hacking tools, phishing pages, and undetectable malware; writing malicious code and scam letters; and finding leaks and vulnerabilities, among others.
In a recent report, Rakesh Krishnan, a security researcher at Netenrich, asserted that the AI bot is aimed exclusively at offensive purposes.
He elaborated on the threats the chatbot poses, saying it will help threat actors mount attacks against their targets, including business email compromise (BEC), phishing campaigns, and other fraud schemes.
“Criminals will not stop innovating – so neither can we,” Rakesh Krishnan emphasized.
FraudGPT joins ChaosGPT and WormGPT in a recent wave of malicious AI bots, and is allegedly the most threatening of the three – adding to the harmful side of generative AI systems.
The emergence of these malicious AI bots undermines cybersecurity and endangers users online. It also casts a shadow over the broader progress of AI, no matter how valuable legitimate AI generators are.
It is little wonder that governments are pushing for AI regulation. The alarming side of AI and its potential to endanger users are becoming increasingly visible, and that clearly calls for tighter restrictions and oversight.