Former Google scientist Timnit Gebru warns of the downsides of AI technology
Timnit Gebru has been warning the world about artificial intelligence for years. And like a scientist in a Hollywood blockbuster, she has been ignored or dismissed. But now her fears are becoming reality, and the public is taking notice.
A decade ago, the idea that algorithms could supplant screenwriters, displace software developers or diagnose diseases was science fiction. Now AI tools are becoming so ubiquitous that high school students are using them to cheat on tests.
The speed of AI adoption in everyday life is a worry, because these decision-making machines are black boxes: their inner workings are proprietary and hidden from view, making it almost impossible to learn how they reach their conclusions, even when they get things wrong.
Ms. Gebru is an outspoken advocate for ethical AI, which looks at issues of bias and fairness in these high-tech programs. She was ousted from Alphabet Inc.’s Google for highlighting potential bias in its AI products, and went on to launch her own organization called the Distributed Artificial Intelligence Research Institute (DAIR), which documents and researches harm from AI.
“I think people should know that these systems were created to make stuff up. And so they should not believe what they see from these systems. Yes, sure, it’s cool to play with them, it’s interesting to see what they can do etc., but do not believe the outputs that you see,” Ms. Gebru said.
Studies have found that AI systems are far from perfect. A well-known 2016 ProPublica report that inspired Ms. Gebru analyzed a recidivism algorithm used in the United States. It found the algorithm falsely flagged Black defendants as likely to reoffend at nearly twice the rate of white defendants, predictions that real-world outcomes did not bear out.
In 2018, Amazon.com Inc. walked back its own hiring software after finding it "downgraded" resumes from women. And earlier this year, researchers found that ChatGPT – the chatbot that created a global frenzy when it was released, and was temporarily banned in Italy – misrepresented facts and fabricated research studies.
“Why are we putting out something like this into the world?” Ms. Gebru asks. “If it was for one specific narrow application that’s one thing, but why are we putting out something into this world that my collaborator says is the equivalent of an oil spill on our information ecosystem?”
It’s one of the reasons she and other critics are calling for regulation, especially at a time when more products are hitting the market faster than ever before and investors battle for dominance in what some are calling a new space race.
When Ms. Gebru – who’s 39 and holds a bachelor’s degree in electrical engineering and a PhD in computer vision from Stanford University – started her career, she stood out. She’s Black, a woman and works in an industry famously lacking in diversity.
She moved to the U.S. as a teenager to escape the 1998-2000 Eritrean-Ethiopian War. The discrimination she faced after moving and throughout her career has left a lasting mark. “All of those experiences are reflected in the kind of work that I do,” she said.
Those experiences also influenced her work at Microsoft Corp., where she co-authored a 2018 paper called Gender Shades. It showed that commercial facial-recognition technology classified gender far more accurately for people with white or light skin than for those with Black or dark skin. Studies like this show how easy it is for Black people and people of colour to be harmed by AI.
“We’re more susceptible to being harmed by [AI] products,” explained Kishawna Peck, a Black data scientist and founder of the Womxn in Data Science conference in Toronto last month that Ms. Gebru attended and spoke at. “We’re more likely to experience algorithmic harm so it’s important to create programs around it.”
Gender Shades is as relevant now as it was then – perhaps even more so – since AI products are now widely available and governments that work with diverse communities often use these tools to boost efficiency and reduce costs during a time when austerity is back in vogue.
However, when AI programs mess up or fall flat – which can happen due to errors or inaccurate data sets – the effect can cut across race, class, gender and more.
One example occurred in India in 2019. The country introduced a digital social service system that failed to recognize the biometrics of certain local residents. Some who were cut off from subsidized food or other government services died as a result, according to a report by The Guardian.
Instances such as this are why Ms. Gebru finds talk inspired by Hollywood movies about future Terminators and AI apocalypses (as in The Matrix) absurd. These talking points often diminish the real damage being done to people now, she explained.
“[People] should care that there are automated systems being pushed into so many different scenarios without the public having a say and with their data being taken for the profit of a few companies, not for the public good,” she said.
As in other countries, AI has become a topic of big concern in Canada. Researchers and experts have added their names to an open letter asking MPs to pass the Artificial Intelligence and Data Act, which was introduced in 2022.
The bill would create a framework for AI use in Canada, but some experts believe it doesn’t do enough. In another petition, industry experts say the law in its current form lacks enough public consultation and leaves too many specifics about regulations to be decided later.
There’s a popular saying in tech: garbage in, garbage out. Poor or low-quality inputs usually translate into poor outputs. Even when the quality of the data isn’t the problem, human behaviour can still prejudice the results.
Paris Marx, host of the podcast Tech Won’t Save Us, sees labour issues such as these as an important part of any conversation about automated machines and the future of work. Fear of robots replacing workers has been around for a long time, but little attention is paid to how the technology will change the overall labour movement.
“What we did see was that the way that [tech] was used was not to completely eliminate jobs, but to reduce the power of workers within their workplaces,” he explained. “The conditions of their work are determined by algorithms that decide who gets an order, that decide how much they get paid, and they have very little recourse if these things go against them.”
Workers are starting to take notice. Last week, Writers Guild of America members went on strike for better pay and safeguards around how AI is used throughout creative processes.
For people who don’t work in show business and believe they’re out of harm’s way, think again. Companies in different industries are using machine learning to boost their bottom line. For example, a 2023 study by Capterra found that 98 per cent of U.S. human resources leaders plan to use software and algorithms to reduce labour costs.
So, can AI ever be used for good?
A lot of AI programs, including ChatGPT, scrape the internet for personal, public and even copyrighted data to power their tools. This makes it challenging for anyone to opt out of AI in a meaningful or long-lasting way.
One way to start thinking about how to make these tools better is to change the incentive structures that lie at their core by bringing in laws, regulators and public input.
“I think that everybody has a role to play. And ideally the companies themselves would, of their own volition, do certain things, but I just don’t have any hope that they’ll do that without the external pressure, the external incentive structure,” Ms. Gebru said.
Still, Ms. Gebru is optimistic that AI’s eventual place in the world will be shaped by more of the public as awareness of the technology grows.
“This is not, like, an inevitable march toward the future that we have to go to. I think the public has to be well-informed and have a say on whether this is the kind of life that they want to lead or not.”
This article was first reported by The Globe and Mail.