Media coverage of Facebook AI malfunction irresponsible, says Indian-origin researcher

An Indian-origin researcher at Facebook's AI Research (FAIR) has criticised media coverage of Facebook shutting down one of its AI systems after chatbots started communicating in their own language, calling such coverage "clickbaity and irresponsible".

Indian-origin researcher at FAIR, Dhruv Batra. Photo courtesy: Facebook

Dhruv Batra, who works as a research scientist at FAIR, wrote on his Facebook page that while the idea of AI agents inventing their own language may sound alarming or unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.

“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximise reward. Analysing the reward function and changing the parameters of an experiment is NOT the same as 'unplugging' or 'shutting down AI',” Batra said in the post late on Tuesday.

“If that were the case, every AI researcher has been 'shutting down AI' every time they kill a job on a machine,” he added.

It was widely reported that the social media giant had to pull the plug on the AI system its researchers were working on ‘because things got out of hand’.

“The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created,” media reports said.

Initially, the AI agents used English to communicate with each other, but they later created a new language that only AI systems could understand, thus defying their purpose.