Introduction to Dr. Timnit Gebru
Dr. Timnit Gebru is a highly experienced and credentialed researcher who studies the social consequences of advances in artificial intelligence, and she will be delivering a talk on these issues to students at Middlebury College on April 24th. Dr. Gebru worked at Apple for several years before joining the Stanford University lab of famed computer vision researcher Fei-Fei Li. While there, her focus shifted toward the sociological consequences of AI. After a few years at Stanford, she joined an AI ethics team at Google, ultimately leaving or being fired (accounts differ) for co-authoring a paper that some important people at Google very much did not like. Since then, she has continued to advocate for ethically focused AI. Her specific research focus appears to be combating algorithmic bias. She was the second author on Joy Buolamwini’s famous Gender Shades paper, which exposed how the best commercial facial analysis software at the time performed significantly worse on darker-skinned women than on lighter-skinned men.
Computer vision in practice: who is benefiting and who is being harmed?
Dr. Gebru opens her presentation by commenting on the extreme homogeneity of the machine learning community. She specifically notes the underrepresentation of Black women, recalling that she was often the only Black woman at conferences. She does highlight how the field of machine learning as a whole has improved (thanks in part, of course, to work by her and others), but she also points out that the computer vision subfield still lags behind.
As evidence for the importance of diversity, Dr. Gebru highlights how different people can have vastly different perspectives on the same technology. She provides many examples of computer vision applications that could be discriminatory. She acknowledges that people have increasingly come to recognize the importance of diverse datasets, but she also argues that even unbiased algorithms can still do damage.
She talks about how, in the search for diverse datasets, many people were included in datasets without their consent. She also talks about how people fail to consider that perfect classifiers can be put to dangerous purposes: even if a perfectly unbiased algorithm were possible, it would still be used to discriminate against marginalized groups. She also highlights how people naturally trust algorithms more than the accuracy of the model warrants. She advocates for the creation of a governmental oversight organization that would review AI applications for discrimination. She also stresses that even though AI may be no more capable than a human at an individual task, the scale and efficiency at which it can perform that task make it much more dangerous (for example, police running facial recognition across an entire city versus officers recognizing faces one at a time).
TLDR: The lack of diversity within the field of computer vision, coupled with the dogged pursuit of classification accuracy, has led people to create (and deploy) algorithms that can accidentally (or on purpose) reinforce systemic oppression.
Proposed questions
- What are your thoughts on content recommendation algorithms on apps and sites such as TikTok, Instagram, Twitter, and Reddit?
- Do you think OpenAI’s efforts to control the outputs of ChatGPT have been sufficient?
After-talk reflection
Talk summary
Dr. Gebru started the talk by introducing the definition of eugenics that she would be using. She informed us that eugenics was not an exclusively Nazi ideology, and that it outlasted the Nazis by many decades. She said it continues prominently today, albeit in a different form. Rather than the “negative” first-wave eugenics of the Nazis, today we have “positive” second-wave eugenics. Essentially, first-wave eugenics involved getting rid of “undesirables” to improve the human stock, whereas second-wave eugenics is about elevating those deemed fit. An example of this type of eugenics is encouraging smart people to reproduce (as a brief sidenote, Dr. Gebru also repeatedly singled out IQ tests as a particularly common, and bogus, tool of these second-wave eugenicists).
Once the topic of eugenics had been introduced, she began to explain TESCREAL. She described how it has direct origins in eugenics, and how eugenicist ideas are fundamental pillars of the respective ideologies. TESCREAL is an acronym (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism) that captures many of the main types of people pushing for the creation of artificial general intelligence (AGI). The specifics of the categories don’t matter as much, and she didn’t get into all of them, but a few she defined thoroughly for us are transhumanism, singularitarianism, and cosmism (here is a thread elaborating on TESCREAL from a preeminent expert on the topic). The common theme among TESCREALists is a desire to create a utopia through AGI, be it via AI-enhanced humans or the AGI itself. Dr. Gebru characterizes this effort instead as a desire to create a new ruling class. Dr. Gebru also folds into TESCREAL the people who vehemently oppose AGI for the threat they believe it poses to humanity.
After those definitions, the talk became more focused on what’s happening now, and less on the far-off (if you ask me) AGI future. A core component of Dr. Gebru’s critique of TESCREALists is that they claim to represent the moral future of the world, and yet their current actions are immoral. Essentially, she asks how we can trust the current leaders in AGI to lead us to a utopia when, right now, they are centralizing power for themselves, consuming all of the funding for AI research, exploiting and traumatizing workers, stealing from artists, and destroying the environment with high energy demands. At some point in the talk, she also stressed how the money-focused incentive structure makes beneficial AI research almost impossible.
Dr. Gebru finished her talk with a brief condemnation of the “pause” letter signed by many prominent tech figures, which advocates pausing AGI research so that the risks can be better understood. She said that the signatories aren’t talking about the real, current problems, but are instead distracting from them with far-off future ones. She also said that the letter places the blame for bad outcomes on the AI itself rather than on the humans who are creating it.
From there Dr. Gebru took some questions. The first was from an Effective Altruist (and therefore a TESCREAList), but it was not really about AGI; it was more about what is classified as second-wave eugenics. The next question was about the actions of the mainstream media, and Dr. Gebru pointed out how the press just highlights and publicizes powerful people. She said that when functioning properly, the media should hold the powerful accountable, not just parrot their talking points. Next someone asked a question that caused me to cringe so hard I had to leave the room immediately. When I got back, Dr. Gebru was talking about how cars have downsides and ultimately still benefit those in power while harming everyone else. After someone asked an AGI-related question that I can’t remember the specifics of, Dr. Gebru took the opportunity to comment on the fact that AGI has no real definition and that real science should be tightly scoped. This was also the main theme of a few other later questions. The last question I will touch on asked what AI researchers should do, and Dr. Gebru stressed the importance of collective action.
TLDR: AGI is an undefined entity sought after by TESCREALists who view it as the path to utopia, but like other utopian projects, theirs is, and will continue to be, exclusionary and eugenicist.
My thoughts
I thought the talk introduced some really interesting ideas about the dangers of dreaming of a utopia. Utopia and eugenics traditionally go hand in hand, and Dr. Gebru presented some compelling reasons that AGI utopias are inherently eugenicist. The talk led me to really consider whether certain ideas I hold are eugenicist, as I have dreamed of transhumanesque cybernetic upgrades for myself and others. Ultimately, I think my views would not make me a eugenicist in Dr. Gebru’s eyes, in the same way that she believes not all gene editing is eugenics.
Dr. Gebru seemed to be barely scratching the surface of what she had to say. She came across to me as generally thinking that the industrial revolution was a mistake (although I may be projecting). I believe this because of her comments on the perverse incentive structures of these companies and because of what she said about technology in general always exclusively benefiting the ruling class. In other words, many of her arguments against AGI can also be leveled against almost any other technology. I would love to read more from her on these topics; I think they have the potential to be incredibly interesting.
I wish she had talked more about the effective altruists, since I don’t really know anything about them other than the posters they put up around campus. Based solely on the posters and their general mission statement, I would have thought that Dr. Gebru could be an effective altruist. Her point about how people in Ethiopia lack access to clean water seems to me like it belongs on one of those effective altruism posters. This paper argues that effective altruism has become too longtermist, but to me the movement doesn’t seem rotten in the same way as the utopia seekers of the TESCREAL bundle.