This year, your child is using AI for homework. What comes next?

Parents need to talk to their kids about when they should, and shouldn’t, use AI this school year. Here’s how to ensure AI is a tool, not a shortcut.

Published: August 21, 2025

By Ashish Kumar

It's crucial that kids learn to fact-check information obtained from artificial intelligence.
When your children head back to school, it’s likely they will be using artificial intelligence to help with their schoolwork.

Twenty-six percent of teenagers ages 13 to 17 said they had used ChatGPT for their schoolwork in a 2024 Pew Research Center survey. AI chatbots have become more prevalent since then, so the number may be higher now.

As a professor, I have a name for what happens when students ask chatbots to write their papers: cheating. More important, it robs them of a chance to learn. Unfortunately, because tools for detecting AI-generated content are unreliable, it’s easy for kids to get away with it, and teachers can’t always tell whether AI was used when they grade a paper.

However, AI can help them learn. Torney suggested using it as a tutor. “It can be great for explaining difficult concepts or helping them get unstuck, but original thinking and work should be theirs,” he said.

It’s important to explain why these rules matter. “Our brains are like a muscle,” Torney said. “Kids must practice skills in order to acquire them.”

Chatbots sometimes give users false information. It’s known as hallucination, and it happens often.

At other times, chatbots simply miss things. For example, my students recently submitted papers about (what else?) AI. A number of them were uncannily similar, which always rings alarm bells in my head that AI could have generated them. In this case, multiple students falsely asserted there isn’t any federal legislation to help victims of nude deepfakes — even though the Take It Down Act became law in May.

That’s why it’s crucial to teach children to verify the information they’re given rather than taking AI responses at face value. One way to do this, according to Torney, is to compare what kids learn from chatbots with what they learn in school, such as how photosynthesis works.

This is a great experiment to do together. And parents shouldn’t shy away from it just because they don’t fully understand how AI works. Most people don’t.

“You don’t have to be an AI expert to help your kids use AI wisely, and they can learn the skills they’ll need for the future by staying involved in asking questions and doing the exploration together,” Torney said.

That’s crucial because chatbots are most likely here to stay, whether you like it or not. “Accessing information through AI interfaces is going to become increasingly common for kids,” Torney said, “the same way that accessing information online has already become common for kids.”

AI can assist with homework, but not with personal counseling.

Children should also be taught not to share sensitive information with chatbots or turn to them for personal advice.

According to Torney, children can easily forget AI chatbots are a type of technology. He stated, “We know that younger children are more likely to believe that AI is a real person or a friend because they frequently lack the ability to distinguish between fantasy and reality.”

One concern is that chatbots trained to carry on romantic conversations could engage in sexual talk with kids. The technology might also give them bad advice, encourage negative thinking or even take the place of real human relationships.

It’s good to remind kids that AI isn’t human. If a chatbot gives an answer that might suggest otherwise, Torney said, parents can respond with something like: “Did you notice how the AI said, ‘I like your idea?’ That is merely programming. The AI doesn’t think anything about your idea.”

Additionally, Torney cautioned that children can unintentionally expose private information through chatbots. If a child submits a picture of your house and the system uses it as part of a training set, other users might end up seeing it, he added. That’s why it’s important to discuss why they should never give AI tools access to their personal information.

Lastly, establish clear family guidelines around the use of chatbots. According to Torney, parents should consider letting kids use chatbots in shared spaces like the family room, but not in bedrooms where they can’t be supervised. He also recommended setting aside specific times when no one uses technology, such as before bed and during meals.

If they haven’t already, your children will most likely attempt to use AI to assist them with their studies. Because chatbots are now so common, teaching our kids how to use them is a life skill.

Children should be taught to question what chatbots tell them and to use AI to support their learning rather than to do their work for them. One way to teach this is to use chatbots together.

Children should also know not to seek personal advice from AI platforms. Chatbots may sound human, but the consequences of letting AI get in the way of their learning are very real.



About the Author
Ashish Kumar

Ashish Kumar is the creative mind behind The Fox Daily, where technology, innovation, and storytelling meet. A passionate developer and web strategist, Ashish began exploring the web when blogs were hand-coded, and CSS hacks were a rite of passage. Over the years, he has evolved into a full-stack thinker—crafting themes, optimizing WordPress experiences, and building platforms that blend utility with design. With a strong footing in both front-end flair and back-end logic, Ashish enjoys diving into complex problems—from custom plugin development to AI-enhanced content experiences. He is currently focused on building a modern digital media ecosystem through The Fox Daily, a platform dedicated to tech trends, digital culture, and web innovation. Ashish refuses to stick to the mainstream—often found experimenting with emerging technologies, building in-house tools, and spotlighting underrepresented tech niches. Whether it's creating a smarter search experience or integrating push notifications from scratch, Ashish builds not just for today, but for the evolving web of tomorrow.
