When ChatGPT and other new generative AI tools emerged in late 2022, the major concern for educators was cheating. After all, students quickly discovered that with a few simple prompts, a chatbot could write an essay or answer a homework assignment in ways that would be hard for teachers to detect.
But these days, when it comes to AI, another concern has come into the spotlight: that the technology could lead to less human interaction in schools and colleges — and that school administrators could one day try to use it to replace teachers.
And it's not just educators who are worried; this is becoming an education policy issue.
Just last week, for instance, a bill sailed through both houses of the California state legislature that aims to make sure that courses at the state’s community colleges are taught by qualified humans, not AI bots.
Sabrina Cervantes, a Democratic member of the California State Assembly, who introduced the legislation, said that the goal of the bill is to “provide guardrails on the integration of AI in classrooms while ensuring that community college students are taught by human faculty.”
To be clear, no one appears to have actually proposed replacing professors at the state’s community colleges with ChatGPT or other generative AI tools. Even the bill’s backers say they can imagine positive uses for AI in teaching, and the bill wouldn’t stop colleges from using generative AI to help with tasks like grading or creating educational materials.
But champions of the bill also say they have reason to worry about the possibility of AI replacing professors in the future. Earlier this year, for example, a dean at Boston University caused a stir among graduate workers who were on strike seeking higher wages when he listed AI as one possible strategy for handling course discussions and other classroom activities that were impacted by the strike. Officials at the university later clarified, though, that they had no intention of replacing any graduate workers with AI software.
While California is the furthest along, it’s not the only state where such measures are being considered. In Minnesota, Rep. Dan Wolgamott, of the Democratic-Farmer-Labor Party, introduced a bill that would forbid campuses in the Minnesota State College and University System from using AI “as the primary instructor for a credit-bearing course.” The measure has stalled for now.
Teachers in K-12 schools are also beginning to push for similar protections against AI replacing educators. The National Education Association, the country’s largest teachers union, recently put out a statement stressing that human educators should “remain at the center of education.”
It’s a sign of the mixed but highly charged mood among many educators — who see both promise and potential threat in generative AI tech.
Careful Language
Even the education leaders pushing for measures to keep AI from displacing educators have gone out of their way to note that the technology could have beneficial applications in education. They're being cautious about the language they use to ensure they're not prohibiting the use of AI altogether.
The bill in California, for instance, faced initial pushback even from some supporters of the concept, out of worry about moving too soon to legislate the fast-changing technology of generative AI, says Wendy Brill-Wynkoop, president of the Faculty Association of California Community Colleges, whose group led the effort to draft the bill.
An early version of the bill explicitly stated that AI “may not be used to replace faculty for purposes of providing instruction to, and regular interaction with students in a course of instruction, and may only be used as a peripheral tool.”
Internal debate almost led leaders to spike the effort, she says. Then Brill-Wynkoop suggested a compromise: remove all explicit references to artificial intelligence from the bill’s language.
“We don’t even need the words AI in the bill, we just need to make sure humans are at the center,” she says. So the final language of the very brief proposed legislation reads: “This bill would explicitly require the instructor of record for a course of instruction to be a person who meets the above-described minimum qualifications to serve as a faculty member teaching credit instruction.”
“Our intent was not to put a giant brick wall in front of AI,” Brill-Wynkoop says. “That’s nuts. It’s a fast-moving train. We’re not against tech, but the question is ‘How do we use it thoughtfully?’”
And she admits that she doesn’t think there’s some “evil mastermind in Sacramento saying, ‘I want to get rid of these nasty faculty members.’” But, she adds, in California “education has been grossly underfunded for years, and with limited budgets, there are several tech companies right there that say, ‘How can we help you with your limited budgets by spurring efficiency.’”
Ethan Mollick, a University of Pennsylvania professor who has become a prominent voice on AI in education, wrote last month that he worries that many businesses and organizations are too focused on efficiency and downsizing as they rush to adopt AI technologies. Instead, he argues, leaders should focus on rethinking how they do things to take advantage of the tasks AI can do well.
He noted in his newsletter that even the companies building these new large language models haven’t yet figured out what real-world tasks they are best suited to do.
“I worry that the lesson of the Industrial Revolution is being lost in AI implementations at companies,” he wrote. “Any efficiency gains must be turned into cost savings, even before anyone in the organization figures out what AI is good for. It is as if, after getting access to the steam engine in the 1700s, every manufacturer decided to keep production and quality the same, and just fire staff in response to new-found efficiency, rather than building world-spanning companies by expanding their outputs.”
The professor wrote that his university’s new AI lab is trying to model the approach he’d like to see: researchers exploring evidence-based uses of AI while working to avoid what he called “downside risks,” meaning the concern that organizations might make ineffective use of AI while pushing out expert employees in the name of cutting costs. And he says the lab is committed to sharing what it learns.
Keeping Humans at the Center
The AI Education Project, a nonprofit focused on AI literacy, surveyed more than 1,000 U.S. educators in 2023 about how they feel AI is influencing the world, and education more specifically. When participants were asked to pick from a list of top concerns about AI, the one that bubbled to the top was that it could lead to “a lack of human interaction.”
That could be in response to recent announcements by major AI developers — including ChatGPT creator OpenAI — about new versions of their tools that can respond to voice commands and see and respond to what students are inputting on their screens. Sal Khan, founder of Khan Academy, recently posted a video demo of himself using a prototype of his organization’s chatbot Khanmigo, which has these features, to tutor his teenage son. The technology shown in the demo is not yet available and, according to Khan, is at least six months to a year away. Even so, the video stirred debate about whether any machine can fill in for a human in something as deeply personal as one-on-one tutoring.
In the meantime, many new features and products released in recent weeks focus on helping educators with administrative tasks or responsibilities like creating lesson plans and other classroom materials. And those are the kinds of behind-the-scenes uses of AI that students may never even know are happening.
That was clear in the exhibit hall of last week’s ISTE Live conference in Denver, which drew more than 15,000 educators and edtech leaders. (EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge ethics and policies and supporters.)
Tiny startups, tech giants and everything in between touted new features that use generative AI to support educators with a range of responsibilities, and some companies showed off tools designed to serve as virtual classroom assistants.
Many teachers at the event weren’t actively worried about being replaced by bots.
“It’s not even on my radar, because what I bring to the classroom is something that AI cannot replicate,” said Lauren Reynolds, a third grade teacher at Riverwood Elementary School in Oklahoma City. “I have that human connection. I’m getting to know my kids on an individual basis. I’m reading more than just what they’re telling me.”
Christina Matasavage, a STEM teacher at Belton Preparatory Academy in South Carolina, said she thinks the COVID shutdowns and emergency pivots to distance learning proved that gadgets can’t step in and replace human instructors. “I think we figured out that teachers are very much needed when COVID happened and we went virtual. People figured out very [quickly] that we cannot be replaced” with tech.