NYC Public Schools' Move to Ban ChatGPT Exposes Adults' Failures to Teach Healthy and Responsible Technology Usage
Abstinence will fail to prevent the inevitable presence of AI in education.
This past Tuesday, New York City's Department of Education announced that it was banning the popular chatbot ChatGPT from its schools' devices and networks. It’s easy to understand why. The bot can write essays instantaneously, opening up a Pandora’s box of ways to cheat, right?
I asked ChatGPT, “What were the keys to the Allies winning World War II?” and here’s what it gave me:
There were several key factors that contributed to the Allies' victory in World War II. One of the most important was the superiority of the Allies' military forces and tactics, which allowed them to gain the upper hand over the Axis powers. The Allies also had a significant advantage in terms of industrial production, which allowed them to outproduce the Axis powers in terms of weapons, ammunition, and other supplies. Additionally, the Allies benefited from the fact that they were able to coordinate their efforts more effectively than the Axis powers, thanks in part to strong leadership from figures such as Franklin D. Roosevelt and Winston Churchill. Finally, the Allies were able to secure the support of key countries such as the Soviet Union, which helped to turn the tide of the war in their favor.
That’s a pretty good answer.
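For readers curious how the same question could be posed outside the web interface, here’s a minimal sketch using OpenAI’s public API. This is purely illustrative and not something the article or the DOE references; the model name, setup, and environment variable are assumptions.

```python
# Hypothetical sketch: asking the same essay question via OpenAI's API
# instead of the ChatGPT web interface. Assumes the `openai` Python
# package (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice; any available chat model works
    messages=[
        {
            "role": "user",
            "content": "What were the keys to the Allies winning World War II?",
        }
    ],
)

# Print the model's essay-style answer
print(response.choices[0].message.content)
```

The point isn’t the plumbing; it’s that an answer like the one above is now a few lines of code or a single text box away from any student with a phone.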
The question is whether using this tool to get the answer is harmful to a student’s education. Is technology making it “too easy” and therefore getting in the way of a student actually learning anything?
If a student doesn’t study at all, then pulls out their phone on a history test, pops this essay question in, and copies and pastes the answer, that’s probably not going to be much of an educational experience.
Then again, can students even use phones in the classroom like this while taking tests? That has to be where the issue is, right? Tests? Because if kids are getting essay questions like this for homework, is there much difference between using ChatGPT and just using Google? Google gave me plenty of insights into this same question. Maybe not as efficiently, but the answers were there for the pasting.
I didn’t think kids could have phones on during tests, but not having a school-aged child, I decided to look it up. I was blown away by what I found: Every single NYC public school has its own individual cell phone policy.
That’s insanity.
So, in addition to everything they already do to cater to and advocate for the needs of kids in their own local area, each principal is tasked with creating and implementing guidelines for healthy and responsible technology usage in school.
That’s something they’re not even close to being qualified to do, considering that most adults haven’t quite figured out how to invite technology into their own lives in a productive way. It’s a question that sociologists, technologists, and educators are still debating, especially as studies suggest that kids are often happier the less they use the mobile social applications and networks that dominate their cell usage. If you’re going to have citywide standards for what kids should be learning, and how teachers should be teaching, then surely there should be best practices for the presence and usage of technology in the classroom, particularly when it comes to kids and cell phones.
What does ChatGPT say when you ask if kids should have cell phones?
“On one hand, cell phones can be a useful tool for keeping in touch with children and ensuring their safety. They can also help children to feel connected and included in social activities. On the other hand, cell phones can be a distraction, and it is important for parents to set boundaries and limits around their use. It is also important for parents to teach their children responsible phone usage and to monitor their phone activity to ensure that it is appropriate.”
Pretty diplomatic, but it presupposes that parents know how to “teach their children responsible phone usage” when they haven’t figured it out themselves. The average American spends 5 hours and 24 minutes on their mobile device each day and checks their phone at least 96 times per day, or roughly once every ten waking minutes.
These phone addicts are the people who are going to teach our kids how to manage having both the sum of all human knowledge and an endless supply of sucker-punch videos in the palm of their hand?
Whether it’s the use of Google, ChatGPT, or the consumption of misinformation on the internet, it’s clear that we need a better approach to teaching kids about the unavoidable presence of tech in their lives.
Instead of banning the use of new technology, wouldn’t it be better to double down on the intrinsic value of actually learning something versus just copying and pasting it? There are always going to be ways to cheat the system in life. Do we curtail cheating by making it harder to cheat or do we just instill values that make people not want to take that road because it only hurts them? Let’s put honesty and hard work on a pedestal while making cheating socially unacceptable—not just something hard to get away with.
Or, how about using the tech to teach kids to actually fact-check? ChatGPT told you that the Allies had an industrial production advantage, but is that true? How do you know?
It says the Allies coordinated better. Can you find examples of that and examples of where the Axis powers failed to coordinate?
Are there other factors the bot didn’t choose to include in the answer that you can make a strong case for?
Let’s also double down on focusing the lessons of history on today’s global landscape. After all, what good is memorizing the key dates and milestones around the growth of Fascism in Europe leading up to World War II if you can’t even recognize the growth of Fascism in your own country in the present day?
Instead of trying to prevent the gathering of quick answers in the modern age, we should be teaching students to interpret and evaluate an answer gathered at the speed of light. That’s a far more relevant skill than trudging through firsthand sources to summarize what’s already been researched, summarized, and reprinted countless times before.
This all reminds me of how adults thought it best to teach kids about sex—i.e. basically not at all—for decades. We simultaneously tried to hide information from them while pushing them toward abstinence. The fear was that the more you expose them to it, the more they’ll want to have it and make bad decisions around it.
Not only did that result in lots of misinformation, abuse happening in secret, widespread shame, and, among LGBTQ+ kids in particular, increased suicide rates; the lack of access to proper health care and birth control also made teen pregnancy commonplace.
Turns out that, when it comes to kids, when you arm them with information, communicate openly with them in a non-judgmental way, and provide them with tools that increase their agency and self-determination, instead of trying to pull the wool over their eyes, they actually do pretty well by themselves.
By trying to keep them away from AI and failing to help them discover productive and positive uses for it, we’re going to end up with a whole generation of kids who fall further behind the rest of the world in technological proficiency. These kids should be learning how to train their own AI models. They should be debating the ethics of it. They should be actively discussing what it means for privacy, copyright, and a whole host of other complex issues.
Instead, we’ve got our educational leaders burying their heads in the sand and holding kids down there with them. Figuring out the right place for disruptive technology in education is hard and controversial, so… why bother?
Just let teachers deal with its inevitable leak into the classroom, and give them no thoughtful pathway for handling it realistically.
Great post on an important topic. Analogous to the debate over whether allowing kids to use calculators meant their math ability would deteriorate. Forty years later, more than 20% of people cannot correctly divide 550 by 2 in their heads (check recent polls on media channels, including LinkedIn). Is that better or worse than it would have been before calculators? Would anyone argue that calculators should only be used to fact-check? AI exists and won't go away. The key is to get in front of it, set the tone, and outline best practices and ethical standards. Bans rarely work.
"Turns out that, when it comes to kids, when you arm them with information, communicate openly with them in a non-judgmental way, and provide them with tools that increase their agency and self-determination, instead of trying to pull the wool over their eyes, they actually do pretty well by themselves."
How do we know this?