My father’s life was in danger when he fell victim to internet fraud. A type 2 diabetic all his adult life, he depended on a strict medication regimen. But online ads promising miracle cures lured him in with headlines like: “What Your Doctor Won’t Tell You: Independent Research Exposes Pharmacies!”
Driven by the hope of a cure, my father made a grave mistake: he stopped his vital medications. Fortunately, his doctor and I were able to intervene before the worst happened.
This is an all-too-common story of vulnerable loved ones falling victim to manipulation. We often assume that seniors are the only targets of scams, but we are all potential victims, especially in a world where scams are fueled by the exponential growth of artificial intelligence (AI). Even the smartest among us are increasingly vulnerable to fraud and manipulation.
As technology evolves, so do the exploitative tactics employed by scammers. One of the latest examples is voice cloning. Imagine receiving a desperate call from a loved one’s phone number. Her voice trembles as she pleads for your help. Every inflection and tone is imitated to perfection, making it difficult to hang up. The panic and desperation you hear on the other end of the line move you to respond as anyone would: by helping.
But that impulse carries enormous risk. Earlier this year, a man used AI-generated voices to trick at least eight victims out of hundreds of thousands of dollars in just three days. The case shows how little control we have over AI advances.
AI deception is also infiltrating our most intimate spheres. Consider CupidBot, an AI-powered dating app marketed to men seeking romantic prospects. The company promises to help users “get to the good part” without lifting a finger – the app automatically starts conversations with prospects and schedules dates, without the other person ever knowing they are actually interacting with an AI. CupidBot blurs the line between human interaction and automated deception. Yet despite the potential risks of such manipulation, creating apps like this – and, more broadly, having AI impersonate humans without clear disclosure – is currently legal.
AI’s influence has also been amplified on the national stage, as evidenced by the recent White House condemnation of the alarming rise in faked AI images. A fake image showing the bombing of the Pentagon went viral and circulated on social media so fast that it momentarily caused a loss of nearly $500 million in the stock market. The potential for AI-generated misinformation to impact society and government is uncharted territory for all of us. These cases are mere glimpses of the dangers facilitated by AI, which range from manipulation tactics to serious national security risks.
Regulation is necessary to protect consumers, but until it arrives, people remain vulnerable to these practices. So how do we protect ourselves?
The first step is to recognize the problem. We must remain vigilant for the telltale signs of manipulation and deception. Are you being pressured to act quickly? Are you being asked to use unusual payment methods? Are you being told to divulge private or confidential information? By staying alert to these red flags, we strengthen ourselves against the most common scams.
Second, exercise caution and due diligence before giving out personal information, sending money, or sharing an inflammatory post. Establish a code word or phrase with your family to use before sharing private information. Contact the person or organization directly to verify the legitimacy of a request. Double-check that images shared online come from trusted media outlets. These additional steps can mean the difference between becoming a victim and maintaining your financial security.
Ultimately, the burden cannot fall on consumers alone: we need standards and guidelines for ethical AI and, yes, enforceable regulations. We must raise these concerns with technologists across industries and with lawmakers in the halls of government. Regulating the ways AI can be used to mislead consumers should be a long-term goal for all of us, especially as nefarious schemes proliferate across the country.
There are a number of steps technology and government leaders can take to help, including strengthening privacy protections, ensuring greater transparency of AI-powered tools, and holding companies accountable for misconduct committed on their platforms. But change will not happen unless we demand it.
AI has tremendous potential for advancements in various fields, from education to scientific research. We cannot ignore the positive impacts, but we must proactively address the damage and ensure robust safeguards to protect individuals and society. Together, we must fight for a future of progress, taking action to reap the benefits of AI, while protecting ourselves and creating a safer world.
Marta Tellado is the president and CEO of Consumer Reports. She is among the 1.6% of CEOs in the US who are Latina and is a tireless advocate for consumer rights, especially for underserved communities.
Source: La Opinion