TechGuruindia
Member
As a doctoral student at the University of California, Los Angeles, I was among those who got a recent campus-wide email with an urgent directive: Don’t use AI chatbots like ChatGPT, Bard or Bing, as doing so “is equivalent to receiving assistance from another person.”
Upon reading it, I paused. I’m a former educator now writing my dissertation for a Doctor of Education degree, part of a part-time program I’m completing while working a full-time job at Google. And as a former journalist and editor for EdSurge, I recognize that we should never plagiarize, and that artificially intelligent chatbots are very, very capable of responding to prompts like “Write me a 500-word essay on Shakespeare’s Twelfth Night.”
But as a student, and frankly as a former teacher, I found my university’s approach incredibly short-sighted.
Oftentimes, when it comes to new technology, folks retreat to a “good” or “bad” binary: this is “good,” while that is “bad.” But AI chatbots actually fulfill a really important role on college campuses. If I’m in need of a tutor, or an editor, or a professor’s help, is that not “receiving assistance from another person”? And if those folks aren’t willing or available to help me, why not have a chatbot fulfill that role?
Perhaps we need to reframe the idea of what AI chatbots can do. To that end, here are three use cases I’ve heard from fellow students, along with how higher education can do a better job of incorporating the student perspective into these policies.
AI As On-Campus Tutor
Perhaps one of the most potent examples I’ve heard is that challenging concepts often stop students in their tracks. I mean, what exactly is a Mann-Whitney U test, anyway?
For many students, tools like ChatGPT can be the tutor they need: one that breaks new or complex concepts down into their most basic parts, in a manner that makes sense to the learner (via prompts like “Can you explain XYZ at a tenth-grade level?”). Not only is this tutor available on demand, but it’s also approachable and, most importantly, free.
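To ground that stats example: the Mann-Whitney U test is a nonparametric test that checks whether two independent samples differ, without assuming the data is normally distributed. Here is a minimal Python sketch of the kind of breakdown a chatbot tutor might walk a student through; the scores and group names are made-up illustrations, not real data:

```python
# The Mann-Whitney U test compares two independent samples without
# assuming they are normally distributed. All data here is invented.
from scipy.stats import mannwhitneyu

# Hypothetical quiz scores from two study groups (illustrative only)
group_a = [72, 85, 78, 90, 66, 81]
group_b = [60, 74, 69, 71, 63, 77]

# Two-sided test: do the two groups' score distributions differ?
result = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U statistic: {result.statistic}, p-value: {result.pvalue:.3f}")
```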
Technologists have long dreamed of this vision of a “computer-assisted” tutor. In fact, Patrick Suppes, a Stanford University philosophy professor and pioneer of computerized tutoring, predicted in 1966 that students would someday have access to “the personal services of a tutor as well-informed and responsive as Aristotle.”
Well, that virtual Aristotle is here.
It should be noted, though, that ChatGPT’s responses are not always correct, so students are wise to treat it as just another tool in their arsenal rather than something that will step in and do all the work for them.
An On-Demand Editor
Additionally, many students use ChatGPT as an editor — a free alternative to existing edtech tools like Grammarly.
Many university campuses host “writing centers” where students can book time to get editing help on their written assignments. However, these centers have limited hours, limited help and, sometimes, limits on expertise.
UCLA’s new policy describes using AI chatbots as “receiving assistance from another person.” But by that logic, why have writing-assistance tools like Grammarly been given leeway to flourish? A quick search online shows that acceptable use policies at UCLA and many other institutions don’t explicitly reference such tools. In some cases, quite the contrary: Chris Dew, who has taught at Swinburne University and Teesside University, says he encourages his students to use Grammarly, explaining, “You’re just using a writing assistant to write… better.”
A Sparring Partner in Forging an Argument
A third use case for an AI chatbot is as a “naysayer” or “critic” — not a role that universities explicitly staff, the way they do tutors or editors.
Oftentimes, when writing or describing or designing, it helps to hear feedback, especially when argumentation is involved. AI chatbots trained on vast amounts of information, like ChatGPT, can answer questions like “I’m arguing XYZ. What might be some responses that disagree with that idea?”
A fundamental part of making an argument in an essay or a presentation is understanding the counterpoints, yet few online tools give students a free, instant set of objections they can plan for in advance.
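For the technically inclined, this “naysayer” pattern is simple to script. Here is a minimal sketch using the OpenAI Python SDK (assuming the v1.x client and an API key in the environment); the model name, system prompt and sample thesis are my own illustrative choices, not anything prescribed by the article:

```python
# A sketch of the "devil's advocate" prompt pattern described above.
# Assumes the openai v1.x Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

thesis = "Universities should ban AI chatbots outright."  # illustrative thesis

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever your account offers
    messages=[
        {
            "role": "system",
            "content": "Play devil's advocate. Reply only with counterarguments.",
        },
        {
            "role": "user",
            "content": f"I'm arguing: {thesis} "
                       "What might be some responses that disagree with that idea?",
        },
    ],
)

print(response.choices[0].message.content)
```

The system prompt does the real work here: it pins the model to the critic role, so the reply stays adversarial rather than agreeably restating the student’s own position.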
How Students and Universities Can Co-Develop AI Policies
None of the above examples opens the door to plagiarism, unless a student has an explicit intent to cheat. I should mention, however, that UCLA wasn’t entirely dismissive of AI chatbots in its university-wide email. Toward the end of the message, there was some clarifying information about the role of the professor:
“Individual instructors have the authority to establish course policies for the use of ChatGPT and other AI tools. Acceptable use may vary from one course to another, and indeed from one assignment to another. If you are unsure about whether AI tools may be used for a particular assignment, please ask your instructor for clarification.”
So how can that work?
One example is the approach adopted by an adjunct professor at Villanova University’s graduate department in human resource development.
That instructor, Kyle Ali, took two key steps in his “Diversity in a Global Economy” course. First, he brought together a group of Villanova students to discuss the best use cases and an “acceptable use policy” for AI chatbots in his class. Before a recent Wednesday class session began, Ali spent 45 minutes listening to and learning from his students.
Following that exercise, Ali also appealed to educators and students online, publicly asking for examples and resources that are already out there:
“‘No, not ever’ deprives students access to potentially transformative technology and feels largely unenforceable. Unregulated use comes with legit teaching and learning, ethics and integrity concerns… Students — what are some ways you’re using it responsibly? Teachers — what have been the implications for grading?”
Within two weeks, Ali had enough information to develop an acceptable use policy for his courses, one that other professors are starting to adapt. In his policy, he notes that acceptable use includes “prompts or queries to surface additional learning opportunities or potential citations” and “prompts or queries to confirm or challenge examples or assertions.”
The policy still stipulates that “use of AI technology to generate partial or complete responses to course prompts is not permissible” and will lead to “an academic integrity violation.”
Ultimately, the underlying approach is not new. Faculty and universities cannot sit idly by as this latest tool — artificial intelligence — becomes ever more omnipresent. But instead of dismissing something like an AI chatbot outright, it’s up to higher education leaders to incorporate student voices to better understand why and how these tools get used — and to use what they hear to formulate the best acceptable use policies.