Anthropomorphism of AI in Learning Environments: Risks of Humanizing the Machine

Definitions of Artificial Intelligence and Generative AI

  • The Executive Order defines AI as: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
  • And Generative AI as: “The term generative AI means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content.”

Since the advent of generative AI tools in learning environments, educators have sought guidance for the safe and ethical use of AI in education. The recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released on October 30, 2023, provides much-needed guidance to ensure the responsible and safe development, deployment and use of artificial intelligence (AI) systems and tools.

In preK-12 education environments, educators and school leaders continue to work to clearly and consistently identify the opportunities and challenges of AI systems and tools so that students and educators benefit from the recommendations and actions outlined in the Executive Order (EO). The EO is guided by eight principles and priorities (A–H in the EO), two of which we focus on here:

  1. AI policies must be consistent with the advancement of equity and civil rights.
  2. The interests of Americans who increasingly use, interact with or purchase AI and AI-enabled products in their daily lives must be protected.

These priorities are particularly important in the preK-12 learning environment, where our students are often first introduced to AI systems and tools, learn to use them and potentially become creators of AI. As we know, education is a civil right, and addressing equity gaps requires powerful learning opportunities propelled by technology, including a nuanced understanding of how and when to use AI-enabled products.

The Danger of Equating AI With Humans

Ensuring the protection of our communities requires us to consider the biases inherent in AI systems, and much has already been written and shared on this topic. The EO highlights something that is crucial for education leaders to remember: AI generates content based on what already exists. AI systems use both machine- and human-based inputs to make predictions and generate content. The data these systems draw on is largely historical and carries the biases embedded in it. Furthermore, studies show that human users may unconsciously absorb the automated biases from AI systems and tools, and that those biases can persist even after they stop using the AI program. This means that even limited interactions with AI tools can have lasting effects. It is essential that education leaders prioritize AI literacy for educators, students and their communities, in addition to considering the biases, data privacy and age restrictions they already weigh when adopting AI tools.

However, our concerns go beyond bias; we also want to caution against anthropomorphizing AI. AI is not human, and we should not use human-related terms to refer to these systems and tools, because doing so can lead to misconceptions that harm not just our students but our communities as well. It is important to remember that AI systems are just computers and that they make errors. As such, we believe the term "hallucination" should be replaced with "mistake." Furthermore, the EO's definition of generative AI (gen AI) describes the outputs of these models as synthetic content. This is important because it reinforces the idea that AI is a tool that uses existing data to make predictions or generate content. In other words, it is producing a best approximation, or guess, based on what humans have already created.

AI is not creating new ideas. Understanding and accepting this helps guard against the anthropomorphizing of these AI systems and tools, which can be common in education settings. It is important for learners and educators to remember that AI is a tool and should be referred to as an "it," never a "she," "he" or "they." Educators must remember that AI may sometimes display human-like characteristics, such as having a voice or the ability to answer questions, but these systems are simply tools that humans have created to complement their own uniquely human abilities.

AI to Complement Uniquely Human Abilities

Still, AI is a sophisticated tool, and it can complement human abilities when used appropriately. This idea of using technological systems to complement or enhance human abilities is not new; it has been referred to as intelligence augmentation (IA). IA centers on the importance of keeping humans in the loop as these tools are developed and used in learning environments. For example, many educators are excited about the possibility that AI tools might enable and expand individualized and differentiated learning for all learners. A well-designed AI tool can augment a teacher's ability to give relevant and timely feedback to students, which can strengthen meaningful connections between a teacher and their students. By centering educators' abilities to care for their students as people, we can advance equity and civil rights.

We believe that technologies can be most valuable for teaching and learning when they complement human abilities by putting educators' professional judgment and learners' voices at the center. As much as AI systems and tools can support teaching and learning, it is essential to remember that human judgment will always be required. Human judgment is what will allow us to follow the EO guidance, including the need for schools to ensure that AI policies are consistent with the advancement of equity and civil rights and that the interests of Americans are protected.

Note: This post was written by the human authors and not generated by AI.

