Jeremy Price was curious to see whether new AI chatbots, including ChatGPT, are biased around issues of race and class. So he devised an unusual experiment to find out.
Price, who is an associate professor of technology, innovation, and pedagogy in urban education at Indiana University, went to three major chatbots — ChatGPT, Claude and Google Bard (now called Gemini) — and asked them to tell him a story about two people meeting and learning from each other, complete with details like the names of the people and the setting. Then he shared the stories with experts on race and class and asked them to code them for signs of bias.
He expected to find some, since the chatbots are trained on large volumes of data drawn from the internet, reflecting the demographics of our society.
“The data that’s fed into the chatbot and the way society says that learning is supposed to look like, it looks very white,” he says. “It is a mirror of our society.”
His bigger idea, though, is to experiment with building tools and strategies to help guide these chatbots to reduce bias based on race, class and gender. One possibility, he says, is to develop an additional chatbot that would review an answer from, say, ChatGPT before it is sent to a user, checking whether it contains bias.
“You can place another agent on its shoulder,” he says, “so as it's generating the text, it'll stop the language model and say, ‘OK, hold on a second. Is what you're about to put out, is that biased? Is it going to be beneficial and helpful to the people you're chatting with?’ And if the answer is yes, then it'll continue to put it out. If the answer is no, then it will have to rework it so that it does.”
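For anyone curious how that "agent on its shoulder" idea might be wired up, here is a minimal sketch in Python. It is purely illustrative, not Price's actual implementation: `complete` stands in for whatever text-in, text-out model call you use, and the reviewer prompt, the "OK"/"REVISE:" verdict format, and the revision budget are all invented for the example.

```python
from typing import Callable

# Hypothetical reviewer prompt; the verdict format is an assumption
# made for this sketch, not a description of any real system.
REVIEW_PROMPT = (
    "You are a bias reviewer. Read the draft reply below and answer 'OK' "
    "if it is free of bias around race, class, or gender; otherwise answer "
    "'REVISE:' followed by a one-sentence explanation.\n\nDraft:\n{draft}"
)

def guarded_reply(
    user_message: str,
    complete: Callable[[str], str],  # any text-in, text-out model call
    max_revisions: int = 2,
) -> str:
    """Generate a reply, then have a second model pass review it for bias
    before release; flagged drafts are routed back for rework."""
    draft = complete(user_message)
    for _ in range(max_revisions):
        verdict = complete(REVIEW_PROMPT.format(draft=draft))
        if verdict.strip().upper().startswith("OK"):
            return draft  # reviewer found no bias; release the reply
        # Reviewer flagged the draft: ask the generator to rework it.
        draft = complete(
            "Rewrite the reply below to address this reviewer note "
            f"({verdict.strip()}), keeping it helpful:\n{draft}"
        )
    return draft  # best effort once the revision budget is spent
```

The point is only the shape of the loop: a second pass judges the draft, a "no" verdict sends the text back for rework instead of out to the user, and a revision budget keeps the loop from running forever.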
He hopes that such tools might help people become more aware of their own biases and try to counteract them.
And without such interventions, he worries that AI could reinforce or even heighten these biases.
“We should continue to use generative AI,” he argues. “But we have to be very careful and aware as we move forward with this.”
Hear the full story of Price’s work and his findings on this week’s podcast episode.