Artificial intelligence writing programs, also known as AI text generators, use machine learning algorithms to produce written content. They are increasingly used in various academic settings for generating essays, research papers, and even scientific articles. These programs have made headlines in recent months thanks to a new tool called ChatGPT.
While these programs can save students and researchers time and effort, they raise concerns about academic integrity and intellectual property. Educators are reacting in various ways to the uptick in AI-generated material, with some welcoming its involvement and collaboration in the classroom and others left unsure about how to handle the change.
So, let’s talk about artificial intelligence in the academic world. We’ll go over what it is, how students and others are using it in ways that could impact education in both positive and negative ways, how to detect it, the implications, and what educators can do.
AI writing is any piece of writing generated by a machine. The quality of the writing produced by artificial intelligence has improved exponentially in the past year, and differentiating between text written by an artificial intelligence program and text written by a human is challenging.
The release of ChatGPT in November 2022 elevated concerns about students having access to powerful AI writing tools. ChatGPT is a free AI writing tool created by OpenAI that allows users to interact with an AI writer. Because the tool is so easily available to students, academic circles are concerned about the impact AI programs will have on classroom instruction and academic integrity.
So, let’s talk about the current state of AI writing and how it is influencing the classroom environment.
AI writing is convincing. Unlike straight plagiarism, it isn’t easy to detect, as it writes brand new content based on what the user inputs and the relevant information it can find on any given topic. One of the concerns about AI writing tools is that they can sidestep plagiarism detectors.
While plagiarism checks can determine if the same text appears elsewhere on the internet, these same tools do not work for AI-generated text because it is usually unique. It is difficult to determine, and even harder to prove, if and when a student has turned in an AI-written assignment.
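The gap between exact-match checking and unique AI text can be illustrated with a toy sketch. This is a simplified stand-in for how overlap-based plagiarism checkers work, not the actual logic of any commercial product: it scores how many word 3-grams a submission shares with a known source, so a verbatim copy is flagged while reworded (or freshly generated) text is not.

```python
# Illustrative sketch: why exact-overlap plagiarism checks miss AI text.
# (A simplified stand-in, not how any real checker is implemented.)

def ngrams(text, n=3):
    """Set of word n-grams in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

source = "the industrial revolution transformed manufacturing across europe"
copied = "the industrial revolution transformed manufacturing across europe"
rephrased = "manufacturing in europe changed dramatically during industrialization"

print(overlap_score(copied, source))     # 1.0 — verbatim copy is flagged
print(overlap_score(rephrased, source))  # 0.0 — reworded text slips through
```

Because AI-generated text is usually phrased in a novel way, it behaves like the rephrased example here: almost no exact overlap with any one source, so this style of check finds nothing.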
There aren’t really any limits regarding what topics AI writing programs can create text about. These programs are trained on a near-limitless amount of information, so if there is information on a given topic, you can expect AI tools to be able to write about it. Whether you want a poem about history, food, or travel or an essay about the origins of tea, expect ChatGPT to be able to write about it.
But if you are writing about a niche topic, such as a person who isn’t well known, it is possible that ChatGPT will not have access to enough information to generate material.
An important thing to remember is that the information provided by ChatGPT will not necessarily be factual, as the tool pulls from the sources it can access. Some recommend treating ChatGPT more like a toy than a tool, expecting it to provide material that lacks the depth and complexity of a human writer's.
The sky seems to be the limit when it comes to what AI writing tools can do. AI writing tools have done everything from crafting student essays to passing high-level exams such as the Wharton MBA final. Recently, AI passed four different law school tests from the University of Minnesota. However, it should be noted that it achieved low passing grades in each instance.
In the content writing world, ChatGPT has created news summaries, product descriptions, blog posts, and website content. AI tools also don’t pull only from nonfiction. They have written short stories, novels, song lyrics, and other creative writing assignments.
Many are using ChatGPT for outside-the-box purposes such as brainstorming, plotting out novels and scripts, telling jokes, finding recipes, creating workout plans, generating questions for a trivia night or a game of truth or dare, finding song lyrics, and planning location-specific date nights.
AI writing programs draw on information from a variety of sources they have been trained on, including books, websites, and scholarly articles. They then pull relevant information on a given topic to formulate a response. The most recent version of ChatGPT was fed 570GB of data, which amounts to approximately 300 billion words.
These programs then mimic the writing of others and speech patterns to draft unique content. With ChatGPT, students can feed information to the chatbot, which will provide an answer in response. Users can request anything from an answer to a question to an original poem or a vacation itinerary. When asked how ChatGPT works, the program responded with:
“ChatGPT is a transformer-based language model developed by OpenAI. It uses deep learning algorithms to generate text based on the input it receives. The model has been trained on a massive data set of text from the internet, allowing it to generate humanlike responses to various types of prompts. When a user inputs a message, ChatGPT uses the input to make predictions about the next word in a sequence, then generates a response based on those predictions. The model uses self-attention mechanisms to consider the context of the input and generate a more informed response.”
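The two mechanisms the chatbot names, self-attention over the input and next-word prediction, can be illustrated with a toy NumPy sketch. This is an illustration of the concepts only, not OpenAI's code; the embeddings and "vocabulary" here are random stand-ins for what a real model learns during training.

```python
# Toy illustration of self-attention and next-word prediction.
# (Conceptual sketch only — not OpenAI's implementation; all values
# are random stand-ins for learned model weights.)
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over token vectors X (seq, dim)."""
    d = X.shape[-1]
    weights = softmax(X @ X.T / np.sqrt(d))  # how much each token attends to the others
    return weights @ X                       # context-aware token representations

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))       # 4 context tokens, 8-dim embeddings
contextual = self_attention(tokens)

# "Next word" = the vocabulary entry whose embedding best matches the last
# contextual vector (a stand-in for the model's learned output layer).
vocab = rng.normal(size=(10, 8))       # 10 hypothetical vocabulary embeddings
scores = vocab @ contextual[-1]
print("predicted token id:", scores.argmax())
```

A real model like ChatGPT repeats this kind of step billions of times with learned weights, many attention heads, and a vocabulary of tens of thousands of tokens, sampling one predicted word at a time to build its response.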
When given the prompt to write a short poem about tigers, the program provided the following:
“With stripes so bold and eyes so bright, The tiger roams with power and might. Through forests deep and mountains high, It stalks its prey beneath the sky.
Majestic creature of grace and strength, The king of beasts at any length. Its roar echoes in the night, Declaring to all its might.
Oh, tiger, in all your glory, Nature’s beauty, wild and hoary. A symbol of power, courage, and flair, You roam the earth without a care.”
Educator reactions to AI writing programs vary widely. Some educators are excited about the technology and figuring out ways to incorporate it into the classroom. Other educators are panicking, worried about how it will impact their classroom, and unsure how to handle this new reality.
Many educators embracing AI writing tools see programs such as ChatGPT as a powerful means of collaboration. Students can use these tools to have simulated conversations with public figures throughout history, learn about various topics, or brainstorm story ideas. When used ethically, they can be a powerful tool for sorting through research information.
Because these AI programs are not going away any time soon, teaching students to use these tools in a way that values academic integrity is a conversation that many educators feel is worthwhile. For instance, some educators advocate for using ChatGPT during the initial research stages but not during test taking or any situation where you need to be sure that the student has retained what they have learned.
In one instance, a teacher used ChatGPT to evaluate student work, and the feedback provided by the app was seen as more useful and detailed than their own. This has raised questions about whether the tool could become a valuable asset for feedback and personalized tutoring.
Some educators have taken a firm stance against AI writing programs and are banning their use in the classroom and becoming vocal about schoolwide or districtwide banning of such tools.
These educators are concerned about the ability of ChatGPT to assist students with cheating and the spread of misinformation, as the information is only as accurate as the databases it is pulled from. There is also trepidation that using AI writing programs could inhibit a student’s ability to learn to write and stifle creativity and problem-solving skills.
Other educators just aren’t sure how to respond to AI-written content. They don’t know how to incorporate it ethically or detect it in the classroom. These educators are left without the knowledge or support to navigate the changing technological landscape with their students.
One concern of those unsure about AI writing in the classroom but not strongly opposed to it is that students do not access the internet only on school devices. Many students have their own phones and laptops that they can use outside of class. Some teachers say that though ChatGPT is frustrating, trying to detect every AI-written assignment is not sustainable, so other options need to be explored.
Schools are reacting in a largely negative fashion to the increased usage of artificial intelligence writing programs and are implementing policies against them. Some schools are enacting schoolwide bans. Public schools in Seattle, Los Angeles, and New York City have banned the use of ChatGPT and other related tools on any devices connected to the district’s network.
However, some institutions, such as Columbia University, recognize that programs such as ChatGPT are here to stay and that it is important to prepare their students for a future with these types of tools in it.
Student reactions seem to be as varied as educators'. Some students think that collaboration with these tools could be a positive experience in the classroom: since AI will be prevalent in the future, they should learn how to work with it. Many students think that the panic regarding ChatGPT is overblown and that it is meant as a toy or for brainstorming and outlining, not for delivering a finished piece of work.
Other students are concerned about the ethics of its usage and the tools’ unreliability. There is concern among students that using ChatGPT could impact their critical thinking skills and creativity, and they worry that if some students use ChatGPT to cheat, it could affect other students in the class. There is also concern about the future of literacy skills, communication skills, and writing as a whole.
Still other students fall in the middle of the spectrum. They want their schools and teachers to offer ChatGPT as a tool for those who would like to use it, and they believe the topic of AI writing should be an ongoing conversation, since forbidding the technology will not prevent its use. In their view, use should be up to students, with ethical parameters in place.
So, now that we know what AI-generated writing is, how it is used, and how the academic world and students are reacting, let’s talk about what can be done about it. There are a few places where educators and institutions can start to limit the use of unwanted AI in their classrooms and encourage only wanted interactions with tools such as ChatGPT.
Making the expectations of AI-written content clear is an excellent first step. Each educator or school must decide what those policies will be. Educators can go into this knowing that it may be an evolving policy, with room for alteration as time goes on and they see what works in their classroom.
Each class syllabus should have an academic integrity section with each teacher’s expectations regarding original work. Educators can go the extra step and bring the conversation into the classroom early on in each course so that students are clear on what the expectations are.
Respect between students and teachers can go a long way toward preventing cheating and plagiarism. Educators can explain to students the negative impacts of using AI-generated material and how it inhibits learning of the subject matter.
If students are interested in using ChatGPT, there are ways to include these students in the process of exploring the negative impacts of overreliance on these tools. Some educators have created lesson plans around fact-checking ChatGPT assignments to show that they are not always accurate. Students could also compare their essays to those written by ChatGPT, paying special attention to the emotion and depth of written assignments.
One of the main ways that educators can balance the use of AI in their classroom is to craft lesson plans with ChatGPT in mind. For instance, students could be allowed to use the tool for outlining or research but must put away their devices when writing final written content or participating in live discussions.
For tests and essays, the focus can be on the information learned during in-class discussions and hands-on labs and assignments. AI cannot generate writing from material that does not exist online, so focusing on discussions and offline material is a great way to create AI-proof assignments.
So, if an educator suspects AI-generated content is being used in their classroom without their consent, how can they determine if they are correct? While none of these options are foolproof, there are a few approaches for when AI content is suspected. Though it may be hard to prove anything, more conversation in the classroom around academic integrity may be in order.
With increasingly large class sizes, this is not always possible. However, if educators can get to know their students' voices through their previous writing and discussion, then seeing an assignment with a distinctly different voice can be a useful early detection alert. Educators can even talk to students about the importance of having their own writing voice.
As AI writing becomes more prolific, you can expect tools that differentiate between human-written and AI-written text to be on the rise. In addition to these, OpenAI is in the process of determining whether watermarking some of its writing is possible. However, even if OpenAI decides to add a digital watermark to its text, there will likely be other programs that will rise in popularity, and it is unlikely all of them will be watermarked.
So, let’s look at two different types of tools on the market: AI detection tools and authorship detection tools.
Two tools that are currently available and claim to be able to detect AI writing are Turnitin and RoBERTa Base OpenAI Detector.
Turnitin focuses on academic writing and provides educators with a dashboard for analyzing student assignments and checking for overlap with content databases. Remember that no tool will be perfect. Even Turnitin can be wrong.
RoBERTa Base OpenAI Detector was created by the same company that created ChatGPT. It claims it can detect AI writing created by its own tools and says it has a 95% accuracy rating for detecting GPT-generated text.
There are ways to fool these tools, such as altering a few words and paraphrasing.
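To see the kind of statistical signal detectors tend to look for, here is a deliberately crude illustration. This is not how Turnitin or the RoBERTa detector actually work; it sketches one weak heuristic sometimes discussed in this space, that human prose tends to vary its sentence lengths more ("burstiness") than machine text does.

```python
# Crude illustrative heuristic — NOT how real detectors are built.
# Human writing often mixes very short and very long sentences;
# low sentence-length variance can be one weak machine-text signal.
import re
import statistics

def burstiness(text):
    """Sample standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. After a long and winding afternoon, the students "
          "finally finished. Silence.")

print(burstiness(uniform))  # 0.0 — every sentence is the same length
print(burstiness(varied))   # much higher — lengths swing from 1 to 10 words
```

A single number like this is easy to game and produces plenty of false positives, which is one reason even commercial detectors caution against treating their scores as proof.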
Authorship detection tools work similarly to AI detection tools but analyze the writing against other work known to be written by the same person. Because of this, most of these tools detect not just AI but also contract cheating and general plagiarism. They simply help you determine whether a specific student wrote a given piece. Two such tools come from Turnitin and Unicheck.
The downside is that educators would need a significant amount of a student’s previous writing to compare against and would have to feel confident that it was original work.
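The comparison idea behind these tools can be sketched with a classic stylometric signal: the relative frequency of common function words, which tends to be stable for a given writer. This is a hedged, minimal illustration; real products like Turnitin and Unicheck use far richer feature sets, and the sample texts here are invented.

```python
# Minimal stylometry sketch (illustrative only — real authorship tools
# use many more features than function-word frequencies).
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "but"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two frequency profiles (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

known = "the essay is about the history of tea and it is long but clear"
submission = "the report is about the growth of trade and it is brief but dense"

similarity = cosine(profile(known), profile(submission))
print(f"style similarity: {similarity:.2f}")  # high — same function-word habits
```

A submission whose profile diverges sharply from a student's known writing would merit a closer look, though, as with AI detectors, this kind of signal is suggestive rather than conclusive.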
A proctored exam has a specific set of rules the student must follow. A proctor, someone trained in supervising tests to ensure students follow the guidelines, observes these types of exams either remotely or in person.
Students taking these exams remotely will typically have to demonstrate that they have nothing in their direct vicinity that they can use during the test other than permitted material. They would do this by turning their cameras on and allowing the proctor to see what is nearby. Often, the exams will be taken as the test-taker shares their screen or via programs that do not allow access to other computer tabs during the testing window.
An in-person proctored or monitored exam would require that the student put away all materials except those needed for the test. Proctors would then also stay in the room and ensure that students keep their eyes on their own paper and remain focused on completing the test.
Another option is to have all work where students may not use AI tools completed in the physical classroom or in a remote setting that is monitored. This would be difficult and stressful to do for all assignments. However, this method can be used when needed to determine if a student is learning what they should and absorbing the information being taught.
This approach takes the concept of a proctored exam but applies it to other assignments as well. Think of the Blue Book essays of the past. In these situations, using AI-written content will not be an option for students.
The consensus, whether one is excited about AI writing tools like ChatGPT or firmly against them, is that the way educators teach will need to shift in response. Communication within educational institutions as a whole and within classrooms is essential. Students, educators, and administration need to be straightforward about policies and expectations and what will happen if a student is suspected of using AI writing software without permission.
These tools will continue to grow and evolve and will be a part of a student’s life in some capacity. Bringing the conversation about AI, if not the tools themselves, into the classroom will help prepare students for a future where these tools are part of their workplace and lives.