What does Google’s dismissal of researcher Timnit Gebru mean for the ethics of artificial intelligence?

Google caused an uproar earlier this month when it forced out Timnit Gebru, the co-leader of a research team at the company that studies the ethical implications of artificial intelligence. Google claims to have accepted her “resignation,” but Gebru, who is Black, says she was fired for drawing unwanted attention to the lack of diversity in Google’s workforce. She was also at loggerheads with her supervisors over their demand that she withdraw a paper she had co-authored on ethical issues related to certain types of AI models that are central to Google’s business.

On this week’s Trend Lines podcast, WPR’s Elliot Waldman was joined by Karen Hao, senior AI reporter at MIT Technology Review, to discuss Gebru’s ouster and its implications for the increasingly important field of AI ethics.

Listen to the full interview with Karen Hao on the Trend Lines podcast:

If you like what you hear, subscribe to Trend Lines:
Apple Podcasts | Spotify

The following is a partial transcript of the interview. It has been lightly edited for clarity.

World Politics Review: First, can you tell us a bit about Gebru and her standing in the field of AI, given the pioneering research she has conducted, and how she ended up at Google to begin with?

Karen Hao: Timnit Gebru is, one might say, one of the cornerstones of the field of AI ethics. She received her PhD from Stanford, advised by Fei-Fei Li, who is one of the pioneers of the entire field of AI. After receiving her PhD, she joined Microsoft for a postdoc, before moving to Google after being approached on the basis of the impressive work she had done. Google was starting its AI ethics team and thought she would be a great person to co-lead it. One of the studies she is best known for is a paper she co-authored with another Black researcher, Joy Buolamwini, on the algorithmic discrimination that appears in commercial facial recognition systems.

The paper was published in 2018, and at the time its findings were quite shocking, because it audited commercial facial recognition systems that the tech giants were already selling. The paper showed that these systems, which were sold on the assumption that they were highly accurate, were in fact very inaccurate, especially on darker-skinned and female faces. In the two years since the paper was published, a series of events has unfolded that eventually led these tech giants to cancel or suspend sales of their facial recognition products to police. The seeds of those actions were actually planted by the paper that Timnit authored. So she is a really big presence in the field of AI ethics, and she has done a lot of groundbreaking work. She also co-founded a nonprofit organization called Black in AI that advocates for diversity in tech, especially in AI. She is a force of nature and a very well-known name in the field.
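To make the idea of an accuracy disparity concrete, here is a minimal, purely illustrative sketch in Python of the kind of subgroup audit described above. The subgroup labels and numbers below are invented for illustration; they are not the methodology or the figures from the actual paper.

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """Group predictions by demographic subgroup and compute accuracy per group.

    Each record is a dict with hypothetical keys: 'subgroup' (a demographic
    label), plus 'predicted' and 'actual' labels from the system under audit.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        totals[r["subgroup"]] += 1
        if r["predicted"] == r["actual"]:
            correct[r["subgroup"]] += 1
    return {group: correct[group] / totals[group] for group in totals}

# Toy illustration: a system that looks "accurate overall" can still
# perform far worse on one subgroup than another.
sample = (
    [{"subgroup": "lighter-skinned male", "predicted": "male", "actual": "male"}] * 98
    + [{"subgroup": "lighter-skinned male", "predicted": "female", "actual": "male"}] * 2
    + [{"subgroup": "darker-skinned female", "predicted": "female", "actual": "female"}] * 65
    + [{"subgroup": "darker-skinned female", "predicted": "male", "actual": "female"}] * 35
)
print(accuracy_by_subgroup(sample))
# {'lighter-skinned male': 0.98, 'darker-skinned female': 0.65}
```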

We should think about how to develop new AI language systems that don’t rely on this brute-force method of scraping billions and billions of sentences from the internet.

WPR: What exactly were the ethical issues identified by Gebru and her co-authors in the paper that led to her ouster?

Hao: The paper dealt with the risks of large language models, which are basically AI algorithms trained on enormous amounts of text. You can imagine them being trained on everything published on the internet: all the articles, Reddit threads, Twitter and Instagram posts, everything. They try to learn how sentences are constructed in English and how they might then generate sentences in English. One of the reasons Google is very interested in this technology is that it helps power its search engine. For Google to give you relevant results when you search for a query, it has to be able to capture or interpret the context of what you are saying, so that if you type in three random words, it can piece together the intent of what you are looking for.
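As a rough intuition for what “learning how to construct sentences” from scraped text means, here is a minimal sketch of a toy bigram model in Python. It is nothing like the neural language models being discussed, and the tiny corpus is invented, but the underlying idea of predicting a plausible next word from patterns observed in text is the same.

```python
import random
from collections import defaultdict

# A deliberately tiny stand-in for a language model: it just counts which
# word follows which in a small invented corpus, rather than learning
# billions of parameters from scraped web text.
corpus = [
    "the model learns patterns from text",
    "the model generates text from patterns",
    "large models learn patterns from the internet",
]

followers = defaultdict(list)  # word -> words observed to follow it
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)

def generate(start, length=6):
    """Build a short word sequence by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the model learns patterns from text"
```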

What Timnit and her co-authors point out in the paper is that this relatively recent area of research is useful, but it also has quite significant shortcomings that need to be discussed more. One of them is that these models consume an enormous amount of electricity, because they run in really big data centers. And given that we are in a global climate crisis, the field should be thinking about how, in doing this research, it could be exacerbating climate change, with downstream effects that disproportionately hit marginalized communities and developing countries. Another risk they point out is that these models are so large that they are very difficult to audit, and they also ingest large swaths of the internet that are very toxic.

So they end up normalizing a lot of sexist, racist or violent language that we don’t want to perpetuate into the future. But because of the lack of insight into these models, we are not able to fully dissect the kinds of things they learn and then root them out. In the end, the paper’s conclusion is that these systems have great benefits, but also great risks. And as a field, we should spend more time thinking about how we can actually develop new AI language systems that don’t rely so much on this brute-force approach of just training them on billions and billions of sentences scraped from the internet.

WPR: And how did Gebru’s supervisors at Google react to that?

Hao: Interestingly, Timnit has said, and this has been corroborated by her former teammates, that the paper was actually approved for submission to the conference. That is a very standard procedure for her team and within Google’s broader research organization. The whole point of this research is to contribute to academic discourse, and the best way to do that is to submit it to an academic conference. They prepared the paper with some external collaborators and submitted it to one of the leading conferences on AI ethics for next year. It was approved by her manager and other people, but then at the last minute she received notice from superiors above her manager that she needed to withdraw the paper.

Very little was revealed to her as to why she had to pull the paper. She then went on to ask many questions about who had told her to withdraw it, why they had asked her to withdraw it, and whether any changes could be made to make it more acceptable for submission. She was constantly stonewalled and did not receive any further clarification, so just before leaving on vacation for Thanksgiving, she eventually sent an email saying she would not withdraw the paper unless certain conditions were met first.

Silicon Valley has a conception of how the world works that is based on the disproportionate representation of a particular subgroup of the world. These are usually upper-class white men.

She asked who had given the feedback and what the feedback was. She also requested meetings with senior executives to explain what had happened. The way they had treated her and her research was utterly disrespectful and was not the way Google has traditionally treated its researchers, and she wanted an explanation for why they had done it. If they did not meet those conditions, she would then have an honest conversation with them about her last day at Google, so she could create a transition plan, leave the company smoothly and publish the paper outside of Google’s context. She then went on vacation, and in the middle of it, one of her direct reports sent her a message saying they had received an email indicating that Google had accepted her resignation.

WPR: As for the problems raised by Gebru and her co-authors in their paper, what does it mean for the field of AI ethics that there is such a huge degree of moral hazard, in which the communities most vulnerable to the harms they identify, such as the environmental consequences, are marginalized and often lack a voice in the tech space, while the engineers who build these AI models are largely insulated from the risks?

Hao: I think this is at the core of what has been an ongoing debate within this community over the past few years, which is that Silicon Valley has a conception of how the world works that is based on the disproportionate representation of a particular subgroup of the world. These are usually upper-class white men. The values they hold, arising from their particular intersection of lived experiences, have now somehow become the values that everyone is expected to live by. But it doesn’t always work that way.

They make a cost-benefit analysis that it is worth creating these very large language models, and that it is worth spending all that money and electricity to reap the benefits of that kind of research. But that analysis is based on their values and lived experiences, and it may not end up being the same cost-benefit analysis that someone in a developing country would make, someone who would rather not have to deal with the effects of climate change later. This was one of the reasons Timnit was so persistent about ensuring greater diversity at the decision-making table. If you have more people with different lived experiences who can analyze the impact of these technologies through their own lenses and bring their voices into the conversation, then we might end up with more technologies whose benefits are not skewed so heavily toward one group at the expense of others.

Editor’s note: The photo above is available under a CC BY 2.0 license.
