
Understanding Child Psychology: Why AI Misses the Mark

The New York Times Quiz Challenge

Recently, an intriguing quiz was featured in the New York Times, presenting readers with ten brief writing samples. The samples were written in response to real essay prompts used by the National Assessment of Educational Progress. Some pieces were authored by actual fourth graders, while others were crafted by ChatGPT, an AI developed by OpenAI, instructed to mimic a child's writing style, including the occasional mistake. The challenge posed to readers was to identify which pieces were human-written and which were generated by the AI.

Experts, including a fourth-grade teacher, a writing tutor, an education professor, and renowned children's author Judy Blume, were brought in for analysis. Surprisingly, none of them achieved a perfect score. However, I managed to identify all ten accurately. Here's my analysis.

Identifying the Differences in Writing

The ten samples corresponded to three distinct prompts. The first prompt asked writers to describe their lunchtime experiences at school. It was evident to me which responses were written by children and which were produced by the AI.

The main distinctions stemmed from two factors. First, children's responses tended to be more straightforward. For example, one child wrote:

"When I get to the lunchroom, I find an empty table and sit there to eat my lunch. My friends join me. I open my lunch, starting with my sandwich, then my drink, followed by my fruit, and finally, my treat."

Another wrote:

"We eat lunch from 11:45 am to 12:00 pm. Everyone chats with friends until the lunch supervisors signal us to go to recess. After we've all cleared out, the janitors and lunch supervisors clean the tables just in time for the seventh and eighth graders to enter for their lunch."

These essays typically followed a clear sequence: “First I do x, then y, then z,” offering detailed descriptions but lacking an overarching theme. In contrast, the AI-generated responses attempted to construct a cohesive narrative, as exemplified by this passage:

“Overall, lunchtime is a wonderful opportunity to take a break from classes and spend time with my friends. I always look forward to it, and I enjoy myself. Although the cafeteria can be crowded and noisy, it remains a lively place.”

The abstract phrasing and vocabulary like "opportunity" and "lively" clearly indicated a machine's handiwork.

The second factor was the children's use of idiomatic expressions, such as "lunch moms," which I could imagine kids using to refer to parent volunteers. Another child described their meal as a "cold lunch," cleverly hinting at their sandwich and fruit. These idiomatic phrases suggested a familiarity with community or familial language that I doubted an AI could replicate. The experts reached the same conclusions for similar reasons.

The Sensory Detail in Children’s Writing

The subsequent two prompts required the writers to narrate a story. One asked them to imagine waking up as the President of the United States, while the other envisioned a castle appearing overnight outside their window.

Again, I observed a similar pattern. The children's writing read like impressions and facts in search of a unifying theme, while the AI's writing, however cohesive, lacked the rich sensory details present in the children's narratives.

In a previous article, I discussed how children are naturally more attuned to a wide array of sensory experiences. I referenced art critic Clive Bell, who noted how adults often neglect the sensory world in favor of abstract labels.

For instance, the child who wrote about becoming president included vivid details like "royal blue" sheets and a "soft down comforter." Another child described the castle's "damp" interiors and a "wood chain door" that "sounded like it needed oil on the hinges."

I found the sensory descriptions in the children's writing much more engaging and evocative than those produced by the AI. Despite its efforts, the AI’s responses felt more adult-like. This is likely because adults often equate consciousness with brain function alone, while in reality, it encompasses the entire nervous system.

Children naturally understand this distinction, even if adults have forgotten it. An AI, devoid of physical form, will inherently adopt a more adult-like perspective, viewing prompts as objectives and employing abstract concepts to achieve them.

In contrast, children focus on the journey rather than the endpoint, experiencing a succession of feelings. Only later do they learn to organize these feelings into the various narrative structures — such as setting, characters, conflict, and themes — that writers use.

ChatGPT lacks this understanding, and given its disembodied nature, I wonder if it ever could.

Exploring AI Limitations in Understanding Childhood

[Embedded video: a discussion of the differences between human creativity and the limitations of AI, particularly in the context of children's imaginative capacities.]