
Botnet used ChatGPT to simulate real accounts and run scams on Twitter

Whenever we talk about the risks of artificial intelligence, many people think of a machine uprising or mass unemployment. Some dangers, however, are much simpler than that. Researchers have found a botnet on Twitter (now called X) that uses ChatGPT to spread misinformation and steer users to suspicious websites.

The case study was conducted by two researchers from Indiana University's Observatory on Social Media in the US: Kai-Cheng Yang and Filippo Menczer.

The scientists identified a network of 1,140 bots on Twitter/X. The automated accounts followed each other and used ChatGPT to talk to one another.

In addition, they interacted with popular pages, such as company profiles and influencers. To create a semblance of legitimacy, the bots used profile photos stolen from Instagram.
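The paper doesn't publish the botnet's code, but generating that kind of chatter takes only a few lines. Below is a rough sketch using OpenAI's current Python client; the model name, prompt, and helper function are illustrative assumptions, not details from the study.

```python
# Illustrative only: how little code it takes to have an LLM write a reply.
# Requires the `openai` package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_reply(previous_tweet: str) -> str:
    """Ask the model for a short, casual reply to another account's tweet."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; not specified by the study
        messages=[
            {"role": "system", "content": "Reply casually in under 280 characters."},
            {"role": "user", "content": previous_tweet},
        ],
    )
    return response.choices[0].message.content

print(generate_reply("NFTs are the future of digital ownership!"))
```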

The botnet’s conversations revolved around investments in cryptocurrencies and NFTs, and included links to fake websites such as fox8.news, cryptnomics.org and globaleconomics.news that mimicked real news outlets. That’s why the researchers named the network “fox8.”

All three sites showed signs of fraud, as the researchers describe: the same WordPress theme, the same IP address, hidden domain registration information, and no information about editorial staff. The articles were copied from outlets such as Vox and Forbes.

In addition, the three pages greeted visitors with pop-ups prompting them to install suspicious software. All three have since gone offline.
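One of those signals, shared hosting, is easy to check yourself. Here is a minimal sketch that resolves a list of domains and reports any that share an IP address; since the fox8 sites are down, these domains will no longer resolve, so swap in live ones to experiment.

```python
# Sketch: flag domains that resolve to the same IP, one of the fraud
# signals the researchers describe. The fox8 domains are offline today,
# so resolution will fail for them.
import socket
from collections import defaultdict

domains = ["fox8.news", "cryptnomics.org", "globaleconomics.news"]

by_ip = defaultdict(list)
for domain in domains:
    try:
        by_ip[socket.gethostbyname(domain)].append(domain)
    except socket.gaierror:
        print(f"{domain}: does not resolve (site is down)")

for ip, hosts in by_ip.items():
    if len(hosts) > 1:
        print(f"Shared IP {ip}: {', '.join(hosts)}")
```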

Carelessness reveals telltale ChatGPT phrase
Despite using artificial intelligence, the botnet discovered by the researchers was not very sophisticated.

ChatGPT has safeguards that prevent it from violating OpenAI’s security and privacy policies. In such cases, it usually gives a predefined response.

That’s how the scientists discovered the bot network: many tweets contained the phrase “As an AI language model.”

Some of the tweets the researchers included in the paper contain phrases like these:

“As an AI language model, I can’t navigate Twitter”;
“As an AI language model, I can’t give investment advice or predictions about stock prices”;
“How interesting! Fortunately, as an AI language model, I don’t have to worry about paying taxes or leaving inheritance.”
Some even mentioned OpenAI, revealing that it was ChatGPT, and not some other tool, that wrote the content.

In other words, the botnet often asked ChatGPT for things it cannot do, and no care was taken to disguise the responses that gave it away.
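That kind of slip can be caught with nothing fancier than string matching. A minimal sketch, with an illustrative phrase list (the paper keyed on “as an AI language model”):

```python
# Minimal sketch: flag tweets containing self-revealing LLM boilerplate.
# The phrase list is illustrative, not taken from the paper.
SELF_REVEALING = [
    "as an ai language model",
    "as a language model",
]

def looks_self_revealing(tweet_text: str) -> bool:
    """Return True if the tweet contains a known LLM disclaimer phrase."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in SELF_REVEALING)

tweets = [
    "As an AI language model, I can't navigate Twitter.",
    "Bitcoin to the moon!",
]
flagged = [t for t in tweets if looks_self_revealing(t)]
print(flagged)  # ["As an AI language model, I can't navigate Twitter."]
```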

It’s a bit like the cases where executives posted entire texts written by ChatGPT on LinkedIn and forgot to delete the “Regenerate” button text, revealing who really wrote the post.

Network may just be the “tip of the iceberg”
On the other hand, this slip revealed another problem.

The algorithms currently used to detect bots on social networks are not prepared to deal with large language models like those behind ChatGPT, Bard and others.

One of them, called Botometer, assigns Twitter/X accounts a score from 0 (most likely human) to 5 (most likely bot).
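For reference, this is roughly how Botometer’s official Python client is queried. The keys below are placeholders, and the Twitter/X API access the service depends on has been heavily restricted since, so treat this as a sketch of the interface rather than a working recipe.

```python
# Sketch of querying Botometer via its official client (botometer-python).
# Credentials are placeholders; valid RapidAPI and Twitter keys are required.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

result = bom.check_account("@example_handle")  # hypothetical handle
# Display scores run from 0 (likely human) to 5 (likely bot).
print(result["display_scores"]["universal"]["overall"])
```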

Not even OpenAI’s tool for identifying text generated by ChatGPT was able to catch the fox8 botnet.

For the researchers, because the network was found only thanks to a slip-up, it may be just the “tip of the iceberg”—others may be out there, acting more carefully, without anyone noticing.

Because of this, Yang and Menczer warn that content moderation is likely to get harder. Artificial intelligence will also grow more capable, going beyond merely writing text to operating entire botnets.
