In the rapidly evolving landscape of artificial intelligence, ChatGPT has emerged as a groundbreaking development, offering a powerful and versatile tool for natural language understanding and conversation generation. Developed by OpenAI, this variant of the renowned GPT model is poised to redefine human-computer interactions, with implications spanning from chatbots to virtual assistants and beyond.
Could you tell that that paragraph was created by ChatGPT? All I did was type “Write a lede about ChatGPT for my journalism article” in the chat, and within seconds, the system produced this response. Fascinating, isn’t it?
Amid ongoing debates about its use and application, one fact remains indisputable: AI and new tools like ChatGPT are undeniably shaping our future. Within the academic realm, educators grapple with the potential benefits and associated risks of these technologies.
According to Premium Prep College Counseling, an educational service, gaining admission to college has become significantly more difficult for today’s students than it was a generation ago. Under that pressure, and as part of a generation driven by the pursuit of perfection, students may have less appreciation for traditional trial-and-error learning. Consequently, some may be tempted to turn to ChatGPT for precise answers.
“I say in class, ‘Don’t be afraid to be wrong; that’s how you learn.’ But everyone’s really reluctant to be that person who raises their hand and is wrong,” Ms. Holly Hunnewell, English teacher, said.
Using ChatGPT poses an academic integrity risk for students, particularly in terms of plagiarism. Since it can generate text in a human-like manner, students may be tempted to use the tool to produce essays, assignments or other academic work without proper attribution or without fully understanding the content they are submitting.
“You wanna comprehend whatever you’re learning, and most of the time, when students use ChatGPT, they just put text in, copy and paste it, and they’re done. They don’t understand what’s happening, and later on, that comes back to affect them, because it required the information they needed when they just used ChatGPT,” Mary Akhsharumova ’25 said.
Recognizing the potential for students to use ChatGPT dishonestly, Flintridge Sacred Heart has introduced a new classroom-based academic integrity policy this year. This new policy emphasizes that any use of ChatGPT not authorized by the teacher constitutes plagiarism.
“The school is looking for ways to help students responsibly use AI and ChatGPT so that it can be a real assistance, but that it’s not a crutch that gets students to avoid the hard work of critical thinking,” Dr. William Hambleton, Religious Studies teacher, said. “That’s the intent of the policies—to ensure that it’s used as an appropriate tool and not as an opportunity to avoid the cognitive dissonance that comes with learning.”
ChatGPT can be used to cheat, but the tool itself does not create academic dishonesty. Indeed, some argue that students inclined to cheat will do so with or without ChatGPT.
“I do think that there is a rampant culture of academic dishonesty. I think it happens [in general] like death by a thousand papercuts. It’s lots of little things that people do, whether it’s borrowing homework or copying and pasting this. That problem is independent of ChatGPT,” Ms. Christine Orihuela, Visual Arts teacher, said.
When generating content, ChatGPT doesn’t engage in independent thinking or produce genuinely new ideas. Instead, it identifies patterns and connections in the vast body of existing text it was trained on.
“The reason ChatGPT is useful is because it’s finding connections between existing knowledge, it’s not actually creating any new knowledge,” Mr. Ty Buxman, science teacher, said.
Because ChatGPT draws from so many sources, its output blends a range of voices, and at times it lacks the unified tone a person can achieve when writing for themselves.
“I would really like to use it [ChatGPT] to teach nuance and voice. Here, plug this prompt into Chat and see what it comes up with. If you agree with it and think it’s good, consider how you could make it your own. How can you add the nuance or the voice that makes it ‘Madison’s’ essay?” Ms. Hunnewell said. “I think that students can use it to recognize the power and value of identifying what it means to develop their own voice.”
ChatGPT can also be a valuable tool for those facing writer’s block, helping them brainstorm ideas and kickstart the creative process.
“I think it can benefit our generation by helping people who may have some issues with creative thinking. It can help people stimulate their thought process,” Akhsharumova said.
Although ChatGPT can be a valuable brainstorming tool, it’s crucial to use it wisely. Leaning on it too heavily could, over time, erode students’ critical thinking skills and their ability to generate ideas independently.
“I don’t like the idea of it [ChatGPT] generating ideas because that’s what I want you to be doing as students, not having something else do it for you. You [students] have to put in the hard work. Trust me, sometimes I don’t feel like it either,” Ms. Hunnewell said. “During the summer, I read a mystery instead of Madame Bovary because my mind is tired, and I don’t want to engage in that level of critical thinking. But I also know that I’m not going to get any better at thinking by doing that.”
When working with ideas produced by ChatGPT, users must ensure proper credit is given to the original source.
“It’s [ChatGPT] learning how to make images from other images. Sometimes, the images that it produces arguably are plagiarizing the work of other people. So, I think that that is unfortunate, particularly in the digital art space. Digital artists that have been spending a lot of time making their work and that somebody else is just making a derivative of their work without their permission,” Ms. Orihuela said.
Additionally, there are instances when the information AI generates may not be entirely accurate or reliable.
“I tried using ChatGPT for one of my math research projects at my old school because I couldn’t find any data online. It went horrible. It was making up its own data. Also, when I asked for citations, it didn’t actually give me citations. It just gave me where it ‘came from’ but not the year, the date, or when it was published,” Summer Li ’25 said.
ChatGPT is not entirely reliable, and its tendency to produce incorrect information is a reminder not to depend on it too heavily. Users may also want to be careful about what they type into ChatGPT, since the information they enter can be retained and used to train future versions of the model.
“Every time you put something into ChatGPT, it’s keeping it. You gotta think about whether or not you’re okay with that. If somebody cuts and pastes somebody’s paper that’s original, all of a sudden ChatGPT will claim it as its own because it has seen it before,” Ms. Jeannie Finley, Director of Academic Technology, said.
Currently, the legal and copyright issues surrounding ChatGPT, along with questions of personal and school ethics, remain largely uncharted.
“It’s a little messy right now until we can come up with more rules, laws, general social agreements on how we’re gonna function with artificial intelligence,” Ms. Finley said. “Right now, there’s really nothing preventing AI from taking over anything. Both globally and nationally, we don’t have a lot of limitations to it. They’re starting to explore that right now. I think the government is always kind of late to the game when it comes to technology, so I think that’s a problem.”
Despite the uncertainties surrounding it, ChatGPT will continue to evolve.
“I don’t think it’s ever helpful to finger wave and be like ‘It’s bad, don’t use it.’ I feel like there will always be something new—first, it was the radio, then the TV and the internet—and it’s better to learn how to use it responsibly rather than banning it,” Ms. Orihuela said.
In a world of ongoing technological advancement, there’s a clear distinction between older generations, who may be less tech-savvy, and Gen Z, who have essentially grown up using technology.
“For me, it has to be a team approach. One of the challenges of people in my generation is we’re working with people in your generation who are digital natives and who in some ways, take to the new technology more naturally than some of us do, and so I just think that the best way to utilize it is to work in partnership with teachers who help students understand the big picture of things, while students help teachers understand the details of technology,” Dr. Hambleton said.
To use ChatGPT responsibly in education, teachers and students need to be on the same page about ethics: a mutual understanding of, and a joint commitment to, honesty in how the tool is used.
“There are teachers who may not want to put in the work to create original content because, for teachers, there’s a lot of stuff to make it [their job] easier, like generating slides. So, I think both students and teachers have to be honest about what we are learning and why it’s important to learn this. I believe the ‘why’ is more critical than ever because, without the ‘why,’ that usually causes people to take shortcuts and cheat,” Ms. Orihuela said.
Beyond students and teachers cooperating and supporting one another, there also needs to be a shared ethical standard that everyone who uses the tool agrees on.
“I use ChatGPT. I know when I’m using it or why I’m using it,” Mr. Buxman said. “Ms. Finley uses ChatGPT for different reasons that I might think are illegal and she might think, ‘Oh no big deal.’ And so if we have a different ethic of what the proper use is, then that’s when it becomes problematic.”
Ms. Finley and Mr. Buxman agree that AI and ChatGPT represent a huge leap into the future, and that it’s important for people not to disregard their impact.
“Most people don’t understand how their tech works, and people really need to understand how artificial intelligence is learning. It’s helpful to know what it can do, what is happening with it and what is happening behind it,” Ms. Finley said. “It matters how we use it and that we’re aware we’re interacting with it, so I just really wish everybody was learning about it.”