Artificial Intelligence (AI) & Bullying

Source: Adobe Express Stock Images; An image of someone on a laptop, with AI chat software displayed on the screen.

Overview of the module: 

In this module, students will learn about the newer technology of Artificial Intelligence (AI) and how it has been used to exacerbate cyberbullying online. The module explores complex topics such as deepfakes, bots and fake accounts, and misinformation/disinformation, and how these concepts relate to bullying. For a more thorough treatment of mis/disinformation, please take a look at our misinformation and disinformation module. As with any newly introduced technology, there is a learning curve: we are still figuring out how to use AI efficiently and effectively, beginning to identify the issues it presents to our society, and working through the fear and pushback that often come from the larger community. This module aims to inform both students and teachers about how AI can be used as a tool for cyberbullying, but also as a tool for academic and creative purposes. Teachers will also be able to familiarize themselves with AI tools so they can guide their students in using AI safely. AI appears to be a central part of our immediate future, and as educators, we need to equip students with the skills they will need to be successful in their future endeavors.

What will teachers find in this module?

Teachers will find many tools and resources specifically exploring AI usage, along with an introduction to how AI can increase the chances of cyberbullying. These lessons and resources provide a framework for teachers to discuss complex topics with younger audiences. While this module can be taught separately, students should already be familiar with concepts like cyberbullying and misinformation, so please explore those two modules before proceeding with this one. The lessons include reflections, discussions, and engaging activities that explore the dangers of AI and propose strategies for staying safe when using AI tools.

Why is this module important?

As mentioned above, AI is a relatively new technological advancement, which means there will be many questions and a lot of trial and error. We want teachers to be able to make use of tools that help make their lessons more interactive and efficient, while also being aware of some of the problems that have surfaced with AI use, so they can better prepare their students to be safe online.

How does this module connect to bullying?

AI can be used to create deepfakes: believable images, audio, and video of real people that can be used to spread rumors and false information about them. Because this content arrives in visual and auditory forms, it can be hard to dispute and hard to distinguish from what is real. Familiarizing students with these possibilities at a young age can prepare them to get ahead of situations that could otherwise worsen cyberbullying as a whole.

Content

Additional Resources for Teachers

Note: Many of these resources are separate from the ones listed under the materials for each lesson.

Lesson Topics

What is AI?

Lesson Goals: Students will learn about what Artificial Intelligence (AI) is, the pros and cons of AI, and how to navigate this new technology safely. 

Source: Adobe Express Stock Images; A display of a digital menu with AI in the middle, surrounded by icons.

Materials/Resources for the Lesson:

Lesson Structure:

  • Open: Begin with a few slides of images, audio clips, and video clips, some created by humans and some created by AI. As an icebreaker, have students play a short game of “Guess which one” and see how well they recognize the difference between AI-created and human-created content.
  • Body: Three Sections
    • Section 1: Video and Discussion
      • Watch the video (https://www.youtube.com/watch?v=b0KaGBOU4Ys&list=PL8TjVyuBdsCmRrAn8HbmyI5oWkaR9Bqi3)
      • Have some questions prepared for students to pay attention to so that they can follow along with the video. Here are some examples:
        • Who can tell me what AI is?
        • Can you give me some examples?
        • Why is AI like your robot dog?
        • What can be created using AI?
        • Why do we need to be cautious when using AI?
    • Section 2: Ethical Dilemma Debate (For or Against AI?)
      • Split the class into two teams and have each team sit together on its own side of the room. Distribute the article below to all the students.
      • Explain what the Ethical Dilemma Debate is. Students will begin by independently reading the assigned article, then discussing what they read with their teammates. Each team will be given enough time to reflect on the article and brainstorm ideas to use in the debate against the other team.
      • The question being debated is whether the class is for or against AI. This activity not only motivates students to think critically but also teaches them to consider other perspectives, since winning the debate requires it and students may be asked to argue positions that differ from their own.
      • One side will argue for AI and all the positives and possibilities of this new technology. The other side will argue about the dangers of AI, what it’s truly capable of, and what the future could look like without regulations.
      • Article to use: https://www.simplilearn.com/advantages-and-disadvantages-of-artificial-intelligence-article (advantages and disadvantages of AI)
      • Once the students have had a chance to brainstorm, the class will hold a debate, making sure that all the students are offered a chance to speak and share.
    • Section 3: Using AI Safely
      • With all this talk about pros and cons, it is crucial to also mention how to use AI safely and responsibly.
      • https://travasecurity.com/learn-with-trava/blog/6-ways-to-be-safe-while-using-ai (Using AI safely)
      • Have the students read this aloud as a class (popcorn style – each student reads a small paragraph). Once they finish reading, ask a couple of students to summarize the article in their own words.
      • Here are the six safety measures the article explains: 
        • Being careful when choosing AI apps
        • Not sharing personal information (name, phone number, address, photos, etc.)
        • AI has been known to be wrong at times. Don’t blindly trust everything you read from AI. Do your own research to verify information.
        • AI can reproduce content from other sources without giving proper credit. If you use AI-generated material, treat it as a tool rather than something that does the work for you; otherwise, you risk plagiarism.
        • Don’t save the chats/chat threads
        • Look out for any shady activity. If you see something, say something. 
  • Close: Reflection Circle
    • Following the Using AI Safely portion, have a five-minute reflection circle. Have all the students sit in a circle and go around it, giving each student 30 seconds to share a reflection on the lesson. The teacher is also expected to participate. Here are some examples of reflective questions you can ask:
      • “What is one thing you will take away from today’s lesson?”
      • “What do you think about AI? Are you for it or against it based on what you learned here today?” 
      • “What is one thing you will change in your own life, small or big, knowing what you know about AI?”
Fake or For Real?

Lesson Goals: Students will learn more about Artificial Intelligence (AI) and how it is used to create deepfakes online. Students will be able to explore what deepfakes are, how to spot them online, and why they can be dangerous. The “What is AI?” lesson is recommended as a precursor to this lesson. 

Source: Adobe Express Stock Images; An image of a young girl being scanned by facial recognition software

Materials/Resources for the Lesson:

Lesson Structure:

  • Open:
    • Start with an introduction to the concept of deepfakes. Ask students if they have come across videos, images, or audio that they thought were real, but were edited.
    • Then explain what deepfakes are.
      • You can use a video (https://www.youtube.com/watch?v=gLoI9hAX9dw) and discuss afterward, or simply explain what deepfakes are.
      • Deepfakes are videos, audio clips, or images that have been created or altered with AI in order to manipulate the intended audience. Videos can be altered by swapping the subjects’ faces or even changing their voices.
      • You can show some examples of deepfakes (https://www.youtube.com/watch?v=B4jNttRvbpU till 3:15).
      • Generative Adversarial Networks (GANs): a technique that pits two AI models against each other. One model generates fake content while the other tries to detect it; through repeated rounds of trial and error, the generated content becomes realistic enough that the detecting model can no longer tell which version of the initial image is real and which has been manipulated by AI. (For teachers who want a concrete picture of this back-and-forth, a short illustrative sketch appears at the end of this lesson.)
      • Talk about both the good and the bad of deepfakes.
        • Some benefits of deepfake technology include translating films and videos into other languages, digitally de-aging actors so they appear younger on-screen, and helping law enforcement find victims and suspects.
        • However, deepfakes can also be problematic. The public’s trust in videos, audio, and images online may deteriorate because of deepfakes, which means that even when people are telling the truth, they may not be believed. On the other hand, someone can be accused of doing or saying something they did not do or say, which can have wide-ranging consequences, especially if that person is a community or political leader. Doctored images, audio, and videos can be (and have been) presented as evidence in court, which can affect legal systems and their rulings. Of course, deepfakes can also be used to commit crimes online, such as fraud, manipulation, and even abuse.
  • Body:
    • Types of deepfakes
      • https://www.youtube.com/watch?v=B4jNttRvbpU (5:48 – end)
      • There are a few ways that audio, images, and videos can be manipulated (https://www.aiforeducation.io/ai-resources/uncovering-deepfakes).
      • One of these methods, face swapping, replaces the face of a person in an image or video with another person’s face. This is more advanced than using Photoshop to swap someone’s face, and typically it’s very hard to tell when a face has been swapped this way.
      • Another method, voice cloning, uses AI to imitate someone’s voice in an audio recording. These cloned voices can be very realistic and can have far-reaching consequences. In this article, the author explores how a mother was scammed into thinking her daughter had been kidnapped: the scammers used AI to mimic the daughter’s voice crying for help in the background, and it was so believable that the mother thought it was really her daughter until she was able to verify that her daughter was safe with her dad.
      • Another technique that is used to create deepfakes is lip-syncing. People’s lips can be manipulated to match a different audio. All these techniques can be used together to create audio, images, and visuals of people saying or doing things they did not say or do.
      • Reflection question: Why do you think this could be problematic? How does this tie into bullying?
        • Some responses can include cyberbullying, identity theft, misinformation, or disinformation
    • Detecting deepfakes
      • Go over some techniques that can be used to detect deepfakes. The website below provides a guide to deepfake detection.
        • Explore how visual inconsistencies can give away what’s fake and what’s real.
        • Explore how lighting should look in a real image or video, and how to recognize when it appears inconsistent.
        • If the subject is a person, look closely at their features. Do they look natural, or is there something off in the way an arm bends or the way they stretch?
        • Another thing to look at is audio-visual inconsistencies. For example, does the facial expression match what is being said? Are there abrupt cuts in the videos, or prolonged movements that appear unnatural?
        • Students should also pay attention to blink patterns and breathing patterns to determine whether the person is blinking and breathing naturally.
        • Finally, as with other online sources, ask the big question: is the source of the material reliable? Mention (or pose as a reflection question) that deepfakes are getting harder to detect, and that even reputable sources sometimes let manipulated material fall through the cracks.
      • https://www.realitydefender.com/blog/deepfake-detection-guide
      • Play a game to see how many students can correctly guess whether the images and videos are real or deepfakes. https://www.bbc.co.uk/bitesize/articles/zg78239
    • Activity: Fake or For Real?
      • Prepare some deepfake images, voices, and videos ahead of the class period. Have some real ones in there as well, to make things a bit more challenging. Since this will be a competitive challenge, prepare some sort of prize for the winning group. (Pro tip: prepare a few extras, just in case you have more than one winning group). 
      • Divide the class into groups of 4. Provide each group with 2-3 deepfake images/voices/videos to analyze, along with a checklist of how to spot deepfakes. Once each group has had time to analyze the materials, they must present their analysis to the class and vote as a team on whether each item is fake or real (one vote per team). The teacher can then reveal whether the material is a deepfake or real, with a point awarded to the group for each correctly identified item. The group with the most points wins.
  • Close:
    • https://www.spotdeepfakes.org/en-US/result (exit ticket quiz)
    • Go to this website and (either individually or collectively as a class) have students take the quiz on the website. It’s ten questions long, and it serves as a good summary and reflection of the class lesson. 
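
For teachers who are comfortable with a little programming and want a more concrete picture of the adversarial trial-and-error loop behind GANs described above, the short Python sketch below may help. It is a deliberately simplified, non-neural illustration rather than a real deepfake generator: the “real” data are just numbers clustered around an arbitrary value, the “generator” is a single adjustable number, and the “discriminator” is a simple threshold. All of the names and numbers in it are illustrative assumptions.

```python
# A toy illustration of the adversarial loop behind GANs: a "generator"
# keeps adjusting its output until a "discriminator" can no longer tell
# generated numbers apart from "real" ones. Purely illustrative; this is
# not a real deepfake system, and every value below is an arbitrary
# assumption chosen just to make the dynamic visible.
import random

random.seed(0)

REAL_MEAN = 10.0        # where the "real" numbers cluster
generator_mean = 0.0    # the generator's starting guess, far from the real data

def real_batch(n=50):
    """Numbers standing in for genuine content."""
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def fake_batch(mean, n=50):
    """Numbers produced by the 'generator'."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

for round_num in range(1, 41):
    real = real_batch()
    fake = fake_batch(generator_mean)

    # Discriminator step: place a decision boundary halfway between the two
    # batches and label anything above it "real".
    boundary = (sum(real) / len(real) + sum(fake) / len(fake)) / 2.0
    correct = sum(x > boundary for x in real) + sum(x <= boundary for x in fake)
    accuracy = correct / (len(real) + len(fake))

    # Generator step: it never sees the real data directly, only where the
    # discriminator currently draws the line, and it nudges its output
    # toward the side the discriminator labels "real".
    generator_mean += 0.3 * (boundary - generator_mean)

    if round_num % 5 == 0:
        print(f"round {round_num:2d}: generator mean = {generator_mean:5.2f}, "
              f"discriminator accuracy = {accuracy:.2f}")
```

Run over a few dozen rounds, the discriminator’s accuracy should start near 1.0 (it easily spots the fakes) and drift toward 0.5 (no better than guessing) as the generated numbers settle into the real cluster, which is the same dynamic that makes well-made deepfakes so hard to detect.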
Bots and Fake Accounts

Lesson Goals: Students will learn about trolling, AI bots and fake accounts online, how to recognize them, and what sort of impact they have on our society. 

Source: Adobe Express Stock Images; An image of a desk with a laptop, a desk lamp, a pencil holder, and a chatbot conversation displayed on the laptop screen.

Materials/Resources for the Lesson:

Lesson Structure:

  • Open: Discussion
    • Begin with a discussion with the class. Ask students about their social media usage and whether they have encountered any trolls, bots, or fake accounts.
  • Body: Two Sections
    • Section 1: Define and Learn
    • Section 2: Human or Not? Activity
      • https://www.humanornot.ai/ 
      • As a class, play the game together, with the teacher in charge of typing the responses. Students can participate by suggesting messages to send back. The game gives players a two-minute text conversation, after which they must determine (based on the conversation) whether the user on the other end is a human or a bot. This can also be a good way to teach the safe usage of AI, since the teacher guides the responses. For example, if a question comes up asking for personal information or to meet up somewhere, teachers can use that as a teaching moment to show how students can navigate situations like that. Teachers can also talk about the strategies they use to differentiate between human and bot messages.
  • Close:
    • Open up the floor for discussion and questions. Ask some reflection questions such as:
      • “If you are met with some trolling online, what are some strategies you can use to deal with the trolls?”
      • “Were you surprised to find out that there were AI bots bullying users?”
      • “How difficult was it to differentiate between the human and the bot on the online game? As AI becomes more and more advanced, do you think it will be harder to distinguish between human creations and content generated by AI? What sort of impact do you think this will have on our society?”
Digital Trafficking in the Age of AI

Lesson Goals: The digital trafficking lesson under the Social Media Module is recommended before exploring this lesson. This is because students will be expected to be familiar with certain terms and topics about trafficking in general. This lesson is designed to explore how AI has exacerbated the issue of digital trafficking. 

*Disclaimer: This lesson explores mature topics such as online exploitation, grooming, and trafficking. Teachers are advised to take the necessary precautions and prepare for this class accordingly. As always, teachers are encouraged to tailor these lessons to cater to the needs of their students.

Source: Adobe Express Stock Images; In this image, a silhouette of a man wearing headphones can be seen in the foreground with computer screens in the background.

Materials/Resources for the Lesson:

Lesson Structure:

  • Open: Video and Discussion
    • Show this video (https://www.youtube.com/watch?v=7mH93tQ4n8M) to the class and give students some reflection questions to think about as they watch.
    • Examples of discussion questions to pose:
      • “What do you think Jennifer did wrong here?”
      • “How could Jennifer have avoided this situation? What are some specific actions Jennifer took that were problematic?”
      • “What are some safe practices to use online to avoid being in situations like Jennifer’s?”
      • “Have you come across similar situations online? How did you react?”
  • Body: Role Playing Activity
    • Students will be divided into small groups, and each group will be assigned a scenario to analyze, discuss, and present to the whole class, followed by a class discussion of each scenario. Each scenario features the misuse of AI by traffickers online. Students should think about what they would do in these situations and how to be safe online, and discuss how common the scenario is and why it should not be taken lightly. Examples of scenarios are listed below:
      • Scenario 1: You make a new friend online! You start receiving random messages from this account, and they seem friendly enough, so you start talking to them. They ask you for personal information about your name, your parents, your life, etc. You start to trust them, as they are never aggressive with their messages or unkind. Unfortunately, what you don’t know is that this is an AI chatbot created by a trafficker to lure young people like you and gain their trust. They want to collect information about you and eventually be able to meet in person with you without raising any concerns from adults around you. 
      • Scenario 2: You find a video in your DM on Instagram from an account you don’t know, but when you open it up, it’s a video of you spilling something all over your clothes. Except, you don’t remember EVER doing anything like this. Matter of fact, you don’t even own clothes that look like what “you” are wearing in the video. Unfortunately, you receive a message along with the video. It reads, “Looks like you are a clumsy person. This would be so embarrassing if it got out to your classmates. Do you want me to share it with them?” You are upset, but you are confident that this is not you. Still, if this video is shared, others will think it’s you because it looks so real. You type back, “no, please. Don’t share it. What do you want?” They ask you to meet them in person or do something else that would further incriminate you.
      • Scenario 3: You receive a friend request from someone who looks like they could attend your school. You look at their profile and you see that they have a small friend group, but you see they share hobbies and interests similar to others in your age group. You decide to accept their friend request. What you don’t know is that this is a trafficker who has created multiple fake accounts to create a believable backstory for their account. They used deepfakes and AI-generated pictures for their profile. They are using all these accounts they created to reach out to potential victims such as yourself online. 
      • Scenario 4: You see an ad for a traveling opportunity and you’re interested. It promises to pay for your whole trip to any destination you wish, as long as you provide a small recap/reaction video and upload it. It seems too good to be true, but the ad says they are a new travel agency trying to increase their popularity. That seems to make sense, and this is too good an opportunity to pass up, so you contact the number on the ad, ready to get your travel on! Oops! Unfortunately, what you don’t know is that the ad was created by a trafficker, who took advantage of targeted ads and recommendations to reach you, knowing how much you enjoy traveling and that your posts often sound wishful (maybe you don’t have the funds to travel). They may ask you to come fill out some forms as an excuse to meet with you in person. You think this is normal procedure, so you go to meet them.
      • Scenario 5: You receive an ad to complete a survey to see which Disney character you are most like. You think, “Ah let’s see what it says!” and proceed to take the quiz. What you don’t know is that the quiz was not another one of those fun games; it was created by a trafficker, to collect as much information about you as possible without looking suspicious. This information can then be used to further target you based on your interests, and groom you to trust the user on the other end. 
      • Scenario 6: You are going through a bit of a mental health challenge, and you see a group online that looks like a support group for people struggling with situations similar to yours. You decide to join. Unfortunately, what you don’t know is that the support group was created by a trafficker who plans to use the information they collect about you and your peers in the group to target you further. Support groups are a safe space where people are often vulnerable and lower their guard. Certain information may be shared during these group meetings that the trafficker can then use to exploit the victims.
      • You can also use real scenarios that have been on the news. Here are some examples:
      • Once everyone has had a chance to present their scenarios, bring the class together and discuss how AI is being used to combat digital trafficking.
  • Close:
    • Once the students have had a chance to analyze these scenarios, lead a class discussion, encouraging students to make some observations in their own lives.
    • Some questions to keep in mind: 
      • What should you do in situations like these? What would you do (be honest)?
      • Should you share personal information with people online or trust someone you don’t know? 
      • How do you know whether the person on the other end is a human or a bot? Even if you could know they are human, how can you be sure of their motives?
Using AI as a Tool

Lesson Goals: Students will learn how to safely use AI Tools and understand the risks and benefits they may experience while using the tools. This lesson includes AI tools that can be used in academic settings and as inspiration for creativity. 

Source: Adobe Express Stock Images; An image of a computer desk with a monitor, speakers, a game controller, and other devices on the table. AI image-generation software is being used on the monitor.

Materials/Resources for the Lesson:

Lesson Structure:

  • Open: Ask students what they know about AI. Start a discussion about the ways AI can be used in everyday settings. 
  • Body: AI Tools
    • This section focuses on safely using AI tools for academic and creative purposes. Below is a list of such tools. It is recommended that teachers try these tools out for themselves and determine which ones to share with their classes.
      • AI tool to take and organize notes
      • AI tool for tutoring on many subjects and at various levels of study
        • https://studymonkey.ai/ (pricing options available; the basic plan is free but limited, and there are paid plans at $8/month and $10/month with more options)
      • AI tool for writing prompts
      • AI image generation for creative inspiration
      • AI tools that help with editing and creating videos and animations
        • https://www.krikey.ai/ (animation AI tool; a basic plan is available for free but is limited, and other paid options are available).
    • Ethics and Risks
  • Close: Let’s Create a List of AI Norms
    • With the whole class, create a list of AI Norms to set some guidelines on the safe use of AI. Students can choose to write these onto a large poster and hang them up in the classroom. 

Published by

Kala Bhattar

Kala Bhattar earned her BA in Political Science with a double minor in International Studies and Human Rights at the University of Alabama at Birmingham (UAB). She has worked with the Institute for Human Rights (IHR) at UAB since the fall of 2021 as an intern blog writer and has also contributed to some of the IHR’s side projects. As a blog writer, she has written on topics of both domestic and international importance, such as food insecurity and homelessness in the United States, and has also captured the broader struggles of people around the world, especially in her series on Environmental Justice. The IHR has been an important platform for Bhattar to spread awareness about contemporary issues and has introduced her to, and prepared her for, many opportunities that have assisted her in pursuing her academic goals.
