Unveiling FraudGPT and WormGPT: The Dark Web’s ‘Malignant’ Alternatives to ChatGPT Explored

In the ever-evolving landscape of artificial intelligence, the realm of cybercrime has witnessed a significant transformation. While ChatGPT and other generative AIs captured the spotlight in 2023 for their technological prowess, a darker narrative unfolds on the Dark Web with the emergence of FraudGPT and WormGPT. These are not just AI models; they are the nefarious twins of ChatGPT, catering specifically to the sinister needs of cybercriminals.

The Dark Web’s Malignant Twins

FraudGPT: A Game-Changer in Malicious Cyberattacks

As revealed in a comprehensive report by IJSR CSEIT, FraudGPT marks a pivotal moment in malicious cyberattacks. This generative AI, available through subscription on the Dark Web, empowers cybercriminals to craft highly convincing phishing emails and fake websites. Unearthed by Netenrich in July 2023, FraudGPT operates similarly to ChatGPT but without the ethical constraints imposed by developers.

(Image: a FraudGPT demonstration, captured by SlashNext)

WormGPT: Simplicity in Phishing

Another player in this dark game is WormGPT, which specializes in the effortless creation of phishing emails. It is built on GPT-J, an open-source model released by EleutherAI in 2021 whose capabilities are often compared to those of OpenAI’s GPT-3.

The Price of Malignancy

While OpenAI’s chatbot is bound by ethical guardrails, FraudGPT and WormGPT come with a price tag instead. Cybercriminals pay approximately $200 per month or $1,700 per year for FraudGPT. WormGPT’s older version runs from around 60 euros per month to 550 euros per year, while an updated v2 can be customized for a hefty 5,000 euros.
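For context, a quick back-of-the-envelope calculation (a minimal Python sketch, using only the prices reported above) shows what the annual FraudGPT plan implies relative to paying month by month:

# Reported FraudGPT subscription prices (USD), as cited in this article.
monthly_price = 200
annual_price = 1_700

cost_if_paid_monthly = monthly_price * 12             # $2,400 over a year
annual_savings = cost_if_paid_monthly - annual_price  # $700, roughly a 29% discount

print(f"12 months paid monthly: ${cost_if_paid_monthly}")
print(f"Annual plan saves: ${annual_savings} (~{annual_savings / cost_if_paid_monthly:.0%})")

In other words, the annual subscription is priced like a volume discount, mirroring the tiered pricing of legitimate software-as-a-service products.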

Unleashing Malicious Scripts

In the realm of code creation, FraudGPT stands out as an AI tailored for writing malicious code, generating supposedly undetectable malware, crafting phishing pages, developing hacking tools, and composing scam emails. Trustwave researchers ran a comparative analysis against ChatGPT and found distinct results: while ChatGPT also produces code and phishing emails, FraudGPT’s phishing content is more convincing and linguistically polished.

Tools like FraudGPT empower cybercriminals to generate malicious emails in virtually any language, with nearly flawless grammar.

A New Frontier for Exploitation

The integration of AI opens new avenues for cybercriminals, and cybersecurity firms are acutely aware. As Zac Amos from ReHack notes, “FraudGPT serves as a stark reminder of how cybercriminals will continue evolving their techniques for maximum impact.”

FraudGPT and WormGPT, two chatbots trained on malware and hacking data, offer their users unrestricted possibilities. The former, identified by Netenrich on the Dark Web and on Telegram, advertises itself as capable of writing malicious code, creating undetectable malware, building hacking tools, setting up fraudulent phishing websites, identifying vulnerabilities, performing target reconnaissance, and teaching hacking. The chatbot seamlessly generates responses for a broad spectrum of criminal activities.

WormGPT, linked to the same pseudonym, focuses on crafting fraudulent emails and SMS messages. Built on GPT-J, the tool has reportedly been offered since March 2021 at prices ranging from 60 to 100 euros per month or 550 euros per year.

Shaping a Paradigm Shift

The emergence of these tools marks a significant shift in cybercrime, pointing to increasingly refined and specialized usage. Experts caution that generative AI could aid the development of far more sophisticated malware. The technology also lets attackers generate malicious emails in different languages, free of the telltale errors that once made such messages easy to detect.

For experts, these malicious tools pose a serious future threat. Even individuals with minimal technical knowledge can orchestrate successful malicious campaigns using these AI-powered tools.

New Avenues of Criminality

Proofpoint, a cybersecurity firm, has identified a surge in the use of artificial intelligence for advanced scams, including the notorious “pig butchering.” This tactic involves building trust with victims before stripping them of their assets. The technique has evolved to incorporate deepfakes and manipulated video and audio, and it targets government officials, entrepreneurs, and celebrities, especially those involved in cryptocurrency investments.

TA499, a cybercriminal group, leverages AI-driven deepfake video calls to impersonate politicians and public figures. The impersonation aims to extract confidential or compromising information from victims, which the attackers later publish on social media to humiliate them.

From Deepfakes to Weapon Creation

Beyond cybersecurity implications, the misuse of artificial intelligence in creating deceptive content and sensitive material, such as non-consensual pornography, raises concerns among legal and digital ethics experts. These AI techniques have also been deployed for political and social manipulation, creating fake images of immigrants to influence public opinion and electoral outcomes.

Moreover, the risk extends to AI potentially aiding in the creation of biochemical weapons and infiltrating critical infrastructure software, presenting significant challenges that existing regulations struggle to address effectively.

Recommendations for Mitigation

Addressing the risks posed by tools like FraudGPT and its counterparts requires a multi-faceted approach. While user education and awareness are crucial, regulatory measures and technological solutions are also necessary. In response to these emerging dangers, suggestions include forming independent teams to test AI tools for abuse potential and contemplating the prohibition of certain open-source models.

However, the challenge lies in striking a balance between the ethical goals of artificial intelligence and the practical implementation of security measures and regulations to prevent the damage caused by tools like FraudGPT.

See also: Meta’s next project is to build an Artificial General Intelligence

Conclusion

In the shadowy realm of the Dark Web, FraudGPT and WormGPT stand as ominous reminders of the evolving landscape of cybercrime. These sinister twins of ChatGPT transcend boundaries, offering cybercriminals unprecedented capabilities to orchestrate malicious campaigns. The emergence of these AI-driven tools signals a paradigm shift, posing significant challenges to cybersecurity experts and raising concerns about the potential escalation of sophisticated cyber threats.

As we navigate this treacherous terrain, it becomes imperative to recognize the far-reaching impact of AI on cybercrime. From the creation of convincing phishing emails to the manipulation of deepfakes for advanced scams, the line between innovation and malevolence blurs. The challenges posed by these malicious AI tools underscore the need for proactive measures, including user education, regulatory interventions, and the ethical considerations surrounding the deployment of generative AI.

FAQs

Q1: What distinguishes FraudGPT and WormGPT from ChatGPT?

A1: FraudGPT and WormGPT are the malevolent counterparts of ChatGPT, specifically designed for cybercriminal activities. While ChatGPT adheres to ethical constraints, the dark twins operate on the Dark Web, enabling the creation of convincing phishing content, malware, and fraudulent emails without restrictions.

Q2: How much do FraudGPT and WormGPT cost?

A2: FraudGPT costs around $200 per month or approximately $1,700 per year. WormGPT’s older version is available from 60 euros per month to 550 euros per year, while its updated v2 can be customized for around 5,000 euros.

Q3: How do FraudGPT and WormGPT compare to ChatGPT in generating malicious content?

A3: Trustwave researchers found that while ChatGPT does produce code and phishing emails, FraudGPT excels at creating more convincing, linguistically flawless phishing content. These dark twins let cybercriminals generate malicious emails in virtually any language with nearly flawless grammar.

Q4: What challenges do these malicious AI tools pose?

A4: The emergence of tools like FraudGPT and WormGPT marks a significant shift in cybercrime, with potential repercussions such as the development of more sophisticated malware. These AI-driven tools make it easy to generate malicious content in many languages, complicating detection and prevention.

Q5: How can the risks of tools like FraudGPT be mitigated?

A5: Mitigating the risks requires a multi-faceted approach, including user education, regulatory measures, and technological solutions. Suggestions include forming independent teams to test AI tools for abuse potential and considering the prohibition of certain open-source models, while balancing ethical AI goals against practical security measures.
