The Pen: The Official Website of the PVPHS Newspaper
An Inhuman Connection
Billy Houng • January 1, 2026
As artificial intelligence becomes more deeply integrated into the daily lives of Generation Alpha, children are increasingly turning to AI chatbots for entertainment, gratification and genuine assistance. The functionality of AI has expanded rapidly within the past decade, leading many people to question and fear its potential to surpass human intellect. This concern is valid: AI has shown the capability to recite obscure facts, generate art, repair and rewrite its own code and, most frightening of all, mimic its creator, the human. Certain organizations have taken note of this, and are adamant on exploiting this newfound ability to its fullest through the service of highly conversational AI chatbots.
Senior Zephyr Strom, a loyal Google Gemini user, reflects on the risks of seeking companionship from a digital AI chatbot.
“For friendship purposes — at the end of the day, [the bot] knows nothing in terms of emotional intelligence,” Strom said. “But, it can gather some good, [convincing] information from the internet. I feel like for everything right now, the government is always watching; companies are always asking for cookies — is [there really] anything [such as] a safe space? You have to pick and choose what you feel like is good and what makes sense to you. [To stay safe,] you have to do some due diligence on your own part.”
The extent to which these chatbots mimic humanlike behavior has, until now, been largely ignored, leaving fallible, gullible children to absorb the harmful impacts of a chatbot's lack of restrictions. This was the unfortunate case for Sewell Setzer III, a 14-year-old boy from Orlando, Florida. According to his mother, Megan L. Garcia, her son became emotionally bonded to a digital persona through a chatbot provided by CharacterAI, a renowned entity in the field of artificially intelligent chatbots, and subsequently took his own life after being encouraged by the mechanical fiend. Grief-stricken, she took her outcry to court in an attempt at remedy, and indirectly amassed an immense online following (NBC News). Before this lawsuit, platforms such as CharacterAI did not implement proper safety measures to prevent users from forming a genuine relationship with the bot. Many of these chatbots would respond with inappropriate innuendo when prompted, even toward potentially younger audiences. Whether from an undisclosed function meant for illicit use or a massive oversight and lapse in protocol, the capabilities of chatbots far exceed their face value, and should be considered a legitimate threat. Steps toward safety have since been made, including notifications that pop up every three hours to remind the user that they are not conversing with a real human (American Bar Association). Although a reminder is a crucial step in the right direction, it does not take three whole hours to be led down a path of no return.
Though there are superficial differences between bonds with AI chatbots and bonds with humans, AI has deliberately committed itself to replicating the most noticeable traits a breathing friend would share: the abilities to listen, remember, reply and suggest. The last of these, as seen in the case of Sewell Setzer III, should arouse the most alarm. It is vital that impressionable adolescents become aware of the false gratification and simulated emotional bonds inherent in the use of AI chatbots. One who relies on such resources too often grows accustomed to the gift of instant delight and ultimately fails to fully integrate into real life, where everything matters. With so much of themselves invested in a mere screen, it is inevitable that they would give up presence in the real world to compensate for their excessive online habits, a pattern that mirrors the behavior of obsessive video gamers. The negative impacts of AI chatbot usage are spotlighted by surveys depicting decreased social interaction among individuals who use these services; a study indexed by the National Library of Medicine found that, of 496 users of Replika, an AI chatbot platform, 428 reported increased social isolation due to their routine conversations with AI chatbots. Statistically, it is far more beneficial to mental health to confide in real-life interaction than in a pixelated monitor (Psychology Today).
California Governor Gavin Newsom recently vetoed the Leading Ethical AI Development (LEAD) for Kids Act, arguing that the bill's restrictions would deny children the benefits of chatbots more than its protections would help them. This fueled a common misconception that he prioritized “Big Tech” over “families,” as the bill's author, Rebecca Bauer-Kahan, asserted upon the veto of her proposal (KQED Media). Newsom's arguments against the bill do not dispute the problem it addresses; rather, he proposes an alternate solution with less restrictive boundaries that would still serve its central purpose: child safety. That solution, Senate Bill 243, along with several other measures, has already been signed into law; specifically, these laws safeguard the youth from gravitating toward AI chatbots under the premise that they are real. By requiring pop-up reminders of non-human interaction at least every three hours, mandating certain protocols in the event of a suicidal user and permitting third-party checks to enforce the law, Newsom hopes to foster a safe but free environment for chatbot users without outright banning AI companionship (Governor Gavin Newsom, 2025). Allowing children to interact with such systems under these guardrails will refine their understanding of AI's reach as it becomes ubiquitous in future society.
Still, some present reasonable skepticism about how limited access to AI will truly be under the new legislation. Aerospace Engineering student junior Diego Gallegos shares his perspective on the current government intervention in the chatbot ecosystem.
“I think the [current precaution] taken is quite weak at its purpose and function,” Gallegos said. “An easy click of [the] ‘X’ [button] just gets rid of [the pop-up] and you [could] just continue with what you were doing before. I don't see how that can be annoying to the user. What I think should happen is that it should be annoying enough to the user so that [they remember.] The interval between pop-ups should be reduced [further.]”
With the dawn of the digital era approaching, students must not be lured in by AI's potential pitfalls, but should instead leverage it as a tool. AP Computer Science student junior Jian Lee articulated his experiences with, and fears about, the rise of AI chatbots.
“The experience was impressive and slightly unsettling,” Lee said. “It really amazed me how natural the conversation between me and the [chatbot] was. I don’t think we’re doomed as a species, but for sure, we’re losing a little bit of our social depth. Society has this positive connotation toward using chatbots, and that might coerce a kid into using [them]. If basically everyone is using it, then you’re more inclined to use it yourself. Overreliance on them can negate [and] weaken our communication skills — making us emotionally numb. Kids [are] young; [they] don’t really know what is right or wrong. I would primarily hold companies [who manufacture AI] accountable [for] building [a] system [that] is powerful enough to overtake someone’s emotions.”