New Study Warns of ChatGPT-4o's Potential for Automated Voice Scams

Researchers have demonstrated that OpenAI's ChatGPT-4o, with its real-time voice API, could be exploited for autonomous scams, achieving success in financial fraud under certain conditions. ChatGPT-4o, the latest model from OpenAI, integrates multiple modes of interaction, including text, voice, and vision, which researchers noted could aid large-scale scams if protections fail. Despite safeguards meant to block misuse, the study by UIUC researchers Richard Fang, Dylan Bowman, and Daniel Kang reveals that cybercriminals could potentially automate scams like bank transfers, credential theft, and cryptocurrency fraud, bypassing built-in restrictions with prompt engineering. For example, by interacting with real sites like Bank of America, researchers confirmed that these AI agents could autonomously complete tasks like fund transfers in a controlled test environment. The study found success rates for these scams varied, from 20% for complex bank transfers to 60% for credential theft on Gmail. The cost of carrying out these scams was notably low, averaging around $0.75 per attempt and up to $2.51 for more complicated transactions, highlighting the potential profitability for cybercriminals. In response, OpenAI highlighted its continuous improvements in safety features, with the latest model, o1, boasting advanced defenses and scoring significantly higher in resisting adversarial prompts. Additionally, OpenAI credits academic research as crucial for enhancing their AI's resilience to misuse, stating that upcoming models will replace older versions that may be less secure. While OpenAI has restricted voice options to pre-approved templates, researchers warn that the risk of misuse with less regulated AI tools remains high, underscoring the damage potential of voice-enabled chatbots in the wrong hands. Read more...

Read More

Got Something To Say?

Your email address will not be published.