In its artificial intelligence research, Google has put in place measures designed to help the company move more carefully on potentially sensitive topics (e.g., race and religion). But some speculate that this level of caution could amount to censorship.
According to a Reuters report, Google has added an extra layer of checks to research done by its experts, who must now consult legal, policy, and public relations teams before pursuing sensitive topics such as facial recognition. Researchers have also been advised on several occasions to "be very careful to set a positive tone".
It's not clear exactly when Google began implementing the new policy, but people familiar with the matter say it started in June.
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues," states the document distributed to research staff.
Managers behind the new policy said this did not mean that researchers should "hide from the real challenges" of using AI.
Speaking to Reuters, however, senior scientist Margaret Mitchell warned of the dangers of the policy.
“If we are researching an appropriate thing given our expertise, and we are not allowed to publish it for reasons that are not in line with a high-quality review, then we are entering a serious censorship problem,” she said.
Google has not yet issued an official statement on the matter.