As AI technology continues to advance at a rapid pace, it is seamlessly integrating itself into our daily lives. It seems to be inescapable. You can’t spend much time on social media without consuming AI content of some sort, whether that’s watching a generative AI video with an inconspicuous Sora label, or having an amusing conversation with an AI chatbot. As AI has grown in popularity, it has proven to be more than just an online companion or a device for generating videos – it’s also seen as a valuable tool for students seeking academic aid.
But how often are our students here at Flintridge Sacred Heart Academy using AI in their schoolwork?
On the Hill, there has recently been an overwhelming increase in Honor Code violations, many of which are directly linked to AI use. FSHA Honor Council Moderator and Science Teacher Candace Toogood describes the sudden change.
“For this semester, we have more cases [in Honor Council] than we had all of last year combined,” says Mrs. Toogood. “There has been a significant increase in the amount of referrals, and the majority of the referrals that we are receiving are somehow related to AI usage.”
There isn’t a clear, definitive answer to the question of why FSHA is experiencing this dramatic increase. However, the recent flourishing of the AI industry could offer an explanation.
ChatGPT, a commonly used AI-powered app, reached a milestone of 800 million weekly users in August, doubling its previous count of roughly 400 million weekly users from February 2025.
Sora, a generative AI app, hit over 1 million downloads in under five days after it launched last September. Maybe the recent peak of interest in AI worldwide is being reflected in our own school community, through the increase in AI use and the resulting Honor Code violations.
But not all AI use will earn a student a referral to the Honor Council. Mrs. Toogood believes there are ways to responsibly use AI on schoolwork and remain academically honest.
“If a student is struggling to understand a concept…you may be able to use [AI] to simplify terminology, to break down complex content into words you may understand,” Mrs. Toogood says.
As Mrs. Toogood describes, some students use AI for their schoolwork to better understand key ideas in their assignments. An anonymous Tolog uses AI to aid her with schoolwork, which she says helps her grasp essential concepts.
“I use AI to help me study and understand concepts,” she explains. “For example, if I’m preparing for a biology test I’ll give [AI] the information from the study guide and ask it to quiz me and test me on [the information].”
This method of feeding information into AI to create quiz questions or study guides seems to be a popular one. Other Tologs have commended AI’s ability to help them achieve excellent exam scores and stronger comprehension of course material.
However, despite the easy conceptualization and understanding that AI provides, Mrs. Toogood explains how it can still harm a student’s overall learning experience.
“My personal belief is that [AI] is harming students’ learning processes, because it’s removing the critical thinking component. For instance, something as simple as Grammarly removes the ability for students to understand grammatical error[s],” she elaborates.
Numerous studies support the opinion that heavy reliance on AI can damage a student’s critical thinking skills. Students who continually use AI to complete research and writing tasks display reasoning and argumentation skills inferior to those of students who write and research traditionally.
The negative effect on critical thinking skills that these studies reveal is an important insight into the risk a student takes when deciding to repeatedly use AI to complete tasks they are capable of doing independently. Unfortunately, the shortcut of using AI on assignments is so appealing to stressed high school students that many don’t consider the consequences of frequent AI dependence.
Inappropriate use of AI can also conflict with Veritas, or truth, a value that every FSHA student is encouraged to uphold.
To be aligned with Veritas means many things: staying true to yourself and who you are, letting your actions be guided by true, good intent, and being dedicated to the genuine pursuit of knowledge.
It’s tricky to decipher the line between AI use that aligns with Veritas and AI use that does not.
“Students are able to use AI, but it has to be used correctly,” states Mrs. Toogood.
Which raises the question: What exactly is correct AI use?
When looking at AI with Veritas in mind, it’s evident that whether or not AI use is correct can only be determined by our intentions. If we use AI with the intent to just get an assignment out of the way with a quick copy and paste, we’re throwing away a learning opportunity in exchange for a brief “brain break.”
If we use AI in moderation, with the intent of understanding, learning, and verifying the information it presents to us, we are actively embracing Veritas and the genuine pursuit of knowledge.