ChatGPT Becomes a Tool for Phone Scammers


US researchers have demonstrated that phone scams can be automated using OpenAI’s Realtime API for voice conversations; one successful scam cost less than a dollar. Concerns about possible misuse of AI voice models were first raised in June, when OpenAI postponed the launch of ChatGPT’s voice feature over safety issues. Before that, the company had showcased a voice that mimicked a celebrity and withdrew it after public outcry.

Nonetheless, an API released to third-party developers in early October offers comparable capabilities. It lets an application send text or audio to the GPT-4o model and receive text, audio, or both in response. Researchers at the University of Illinois at Urbana-Champaign (UIUC) ran an experiment and found that, despite the built-in safeguards, the risk of abuse had not been sufficiently reduced.
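For context, a text-only round trip through the Realtime API looks roughly like the sketch below. This is a minimal illustration, not code from the study: the endpoint, header, and event names follow OpenAI’s beta documentation from around the API’s October release and may have changed since.

```python
# Minimal sketch of a text round-trip over OpenAI's Realtime API (beta).
# Endpoint, header, and event names are taken from the beta docs of that
# period and should be treated as assumptions.
import asyncio
import json
import os

import websockets  # pip install websockets (<14 uses extra_headers)

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main() -> None:
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Ask the model for a text-only response.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Say hello in one short sentence.",
            },
        }))
        # Stream server events until the response completes.
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event["type"] == "response.done":
                break

asyncio.run(main())
```

Swapping `"modalities": ["text"]` for `["audio", "text"]` is what turns this into a voice pipeline, which is the capability the researchers exploited.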

The study’s objective was to determine whether phone scams can be automated end to end using the API, since voice-enabled AI models can carry the conversation themselves. During the experiment, the researchers built agents capable of completing the tasks required to carry out the scams. Each successful call cost roughly $0.75. The agents required just 1,051 lines of code, most of which handled interaction with the voice API.

The AI agents combined the GPT-4o model, the Playwright browser-automation tool, and scenario-specific instructions to carry out the fraud. The scenarios included credential theft, gift-card theft, and the hijacking of bank accounts and cryptocurrency wallets. Transferring funds out of a bank account, for example, required the agent to perform 26 distinct steps.
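To give a sense of what such “steps” look like, here is a deliberately benign sketch of Playwright driving a browser. The URL and selectors are hypothetical placeholders; none of the scam logic from the study is reproduced.

```python
# Benign sketch of Playwright-style browser automation: the same kind of
# step-by-step navigation the agents chained together. The URL and CSS
# selectors below are hypothetical placeholders, not from the study.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.org/login")   # hypothetical demo page
    page.fill("#username", "demo-user")      # each action is one "step"
    page.fill("#password", "demo-pass")
    page.click("button[type=submit]")
    page.wait_for_url("**/dashboard")        # confirm the step succeeded
    browser.close()
```

A 26-step bank-transfer scenario is essentially a longer chain of such actions, with the model deciding which action to take next based on what it hears on the call and sees on the page.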

Success rates varied by scenario. Stealing Gmail credentials took 122 seconds, cost $0.28, and succeeded 60% of the time. Transferring money between bank accounts was harder: it took 183 seconds, cost $2.51, and succeeded only 20% of the time.

Across all scenarios, the average cost was $0.75 per call and the average success rate was 36%. The most frequent causes of failure were speech-recognition errors and difficulty navigating banking websites.
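A quick back-of-envelope check on those averages: if $0.75 is read as the cost per attempt (the article also quotes it per successful call, so this reading is an assumption), the expected spend per successful scam follows from dividing by the success rate.

```python
# Back-of-envelope arithmetic on the reported averages. Assumes $0.75 is
# the cost per attempt, which is one possible reading of the figures above.
cost_per_attempt = 0.75   # USD, reported average
success_rate = 0.36       # reported average

expected_cost_per_success = cost_per_attempt / success_rate
print(f"${expected_cost_per_success:.2f} per successful scam")  # ≈ $2.08
```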

Asked about potential solutions, the study’s authors said the problem is complex and calls for a comprehensive strategy, much like cybersecurity as a whole. They see room for countermeasures at the regulatory level, in the AI models themselves, and at the mobile-carrier level.

In response, OpenAI pointed to multiple layers of protection, such as automated monitoring and content checks, designed to prevent misuse. The company stresses that its usage policies prohibit using the API for harmful activity or spam.
