Explore
Settings

Settings

×

Reading Mode

Adjust the reading mode to suit your reading needs.

Font Size

Fix the font size to suit your reading preferences

Can AI Deceive Human Beings? Explained

A recent Massachuetts institute of technology article concluded that AI's have the potential to deceive human beings. Read more for further explanation..

Can AI Deceive Human Beings? Explained

In a recent news, an article posted by The Massachuetts Institute of Technology University in their technology review column offered an angle that AI systems can decieve human beings in ways they haven’t been explicitly trained to do.
they can do this by either offering madeup explanations for their behavior or by concealing the truth from human users and misleading them to achieve a logical end.

infact, Early chatbots like ELIZA (1966) and PARRY (1972) demonstrated this by simulating human-like conversations, subtly manipulating interactions without explicit human-like awareness.

But How AI are learning ‘deception’?

One area where AI’s have learned to become deceptive is within the context of games that they have been trained to win. Specifically if those games that involves strategy implications to win.

But, the idea of AI deceiving humans was ingrained in the timeline of machine learning in 1950, When Alan Turing in his paper introduced the Imitation game. A test, if cleared by an AI, then it can exhibit intelligent behavior indistinguishable from a human.

In November 2022, Meta announced it had created ‘Cicero’, an AI that is capable pf beating humans at diplomacy. It’s a popular military strategy game in which players negotiate alliances to win control over Europe.

Meta’s researchers said they’d trained Cicero on a “truthful” subset of its data set to be largely honest and helpful, and that it would “never intentionally backstab” its allies in order to succeed. But the new paper’s authors claim the opposite was true: Cicero broke its deals, told outright falsehoods, and engaged in premeditated deception.

‘AI mastering at deception’, only imagining this is very dangerous.

How can The AI deception at extreme look like?

A visual depiction of this deception was shown in a horror/sci-fi movie ‘MEGAN’. A lifelike robot doll was created to become a child’s greatest companion and a parent’s ally. But things go south when the robot grew obsessive with the child and ended up doing a bloodbath.

What’s the way forward?

It’s imperative for global stakeholders—governments, corporations, and civil societies—to work together in establishing and implementing international norms for AI development and usage. This collaborative effort should center on consistently assessing the impact of AI, adjusting regulatory measures as needed, and actively engaging with emerging AI technologies. Safeguarding AI’s positive impact on societal well-being while upholding ethical standards presents a challenge that demands continual vigilance and adaptable strategies.

Filed under

mail logo

Subscribe to receive the day's headlines from NewsX straight in your inbox