A Defense of AI Parenting


I rejoiced when I heard that my children’s school had canceled their weekly library hour. I’m sorry. I know how that sounds. Truly, I am a bibliophile; my kids have been surrounded by piles of books, quite literally from conception. I’ve devoted countless hours to reading to them. But for years, I ground my teeth at any mention of the school library.

On one level, my angst stemmed from niggling practical problems. I hated the mad hunts on library day for random books I had never heard of until we were about to be late for school. I scowled at the new entries on our school bill when we couldn’t find them. It just felt like a massive headache we didn’t need. One could argue, with some justice, that I should lament my own organizational failures instead of resenting hapless librarians. Here’s the harsh truth, though. No book the kids brought home ever became a family favorite. And shared libraries (both public and school) are becoming obsolete. 

It’s not just libraries. In the shadow of advancing technology, many things that have contributed enormously to our civilization are passing away. I’m not exactly celebrating here; in many ways, it’s quite terrifying. But as a parent, it feels self-indulgent to marinate in nostalgia. My kids are going to live in this world regardless of how I feel about it. I’d rather help them figure it out than just close my eyes and pray.

As a conservative, I have a natural suspicion of techno-optimists. Large language models are the big newcomer, and they do raise many new concerns. Everyone appreciates this in the abstract, but talking with fellow conservatives (both older and younger), I don’t always find it easy to make common cause. Many seem inclined to shun AI and other new technologies, to “just say no” or at least encourage everyone to use them as little as possible. I understand, but to me that feels like dereliction, a refusal to face up to the task at hand.

I suspect the difference in perspective stems partly from the age and interests of my kids. Four of my five sons are now in the tween-and-teen range, with a wide range of interests that can obviously be advanced through the use of technology. Older adults can default to a “personally I’d rather not” stance, while parents of young kids often warm to the “Tech Exit” strategy, reasonably believing that young children don’t need screen-based entertainment and are better off in the sandbox. That’s fine, but with older kids, the conundrums get harder. If my children just wanted to rot their brains with first-person shooters, that would be an easy call, but if you have raised your kids to be curious and inventive, they’re likely to want to use technological tools to advance healthy and meaningful pursuits. Parents still have the authority to ban the bots, but the bar on adequate justifications rises considerably. 

The problem calls for careful consideration. Let us return for a blissful moment to the library. Libraries were always my happy place in childhood, and as a university student, I loved studying in the stacks. Some years ago, though, I had an eye-opening experience when I tried to make An Event out of a public library trip in early summer, helping my kids choose summer reading books. I wanted to show them how much fun it could be to browse the stacks and explore their interests and hobbies. But as we ran searches and scoured shelves, I realized I was inadvertently teaching them something else. It didn’t make a lick of sense to explore their interests at the library. Classics are cheap to buy, and for specific interests or hobbies, the library’s offerings were dated and limited in scope. We could have spent the whole afternoon hunting, and come away with far less useful material than my phone would furnish in 45 seconds. That was before large language models existed. 

On hearing this story, some people reflexively start looking “laterally” for adequate reasons to keep the libraries even though they are (by today’s new standards) both inefficient and comparatively ineffective at fulfilling the library’s original defining purpose. Perhaps we should value libraries for the human element: librarians, other patrons, public story hours. We might cherish the experience of walking through book-lined stacks, basking in the delightful smell of old pages. Perhaps the inefficiency itself should be seen as a positive good, since knowledge is more treasured when we have to work for it. 

I am not entirely unmoved by these arguments, especially because I do think that indexes, card catalogs, reading rooms, due dates, and the stacks themselves did much to enrich my own young life. But realistically, people rarely keep doing things in antiquated ways for the sake of the lateral goods. Sometimes it’s positively irrational to do that. Practical reason moves us to prioritize our actual goals over diffuse fringe benefits.

Libraries are just the start, of course. The range of things that you or your kids might want to learn from AI or other new technologies is simply enormous, and in many cases, newfangled methods will offer clear and commanding advantages over more traditional methods. 

As it happens, my sons moved into adolescence at just the moment when large language models were descending upon the world, which honestly felt like a bait-and-switch. I had spent years bracing myself for fraught discussions about social media, data theft, pornography, online gambling, scrolling addiction, video game addiction, cancel culture, and online predators. Those conversations happened, but they weren’t particularly hard. My kids (the older ones at least) already know quite a lot about those issues, but they haven’t asked for social media accounts or personal devices; they don’t even seem to want them. It helps, no doubt, that they attend a K-12 parochial school where few kids have those things. It helps that they stepped into adolescence a few years after the “anxious generation” went into a tailspin, giving them (and their parents) the chance to learn from past mistakes. But whatever the reasons, they know that the virtual world can be dangerous and bruising, and show little enthusiasm for flinging themselves into that fray.

Here’s what they do want. They want AI to analyze their last chess game, so they can understand why the opening they just used didn’t work. They want it to generate a map of the travels of T. E. Lawrence, to facilitate their ongoing debate about his strategy as presented in Lawrence of Arabia. They want to know what Jane Austen means when she says that Mr. Darcy has “considerable patronage in the church” not enjoyed by his cousin, Colonel Fitzwilliam. They want an explanation of SpaceX’s latest projects, and its relationship to the US Space Force. They want to know which hard baits are most effective for going after walleye in southern Minnesota in high summer. They want to create a board game that models market growth in the early industrial era more accurately than Brass: Birmingham. Once AI has helped them to develop some rules, they can be found drawing up plans for 3-D printed pieces to help them playtest their game. That’s just a small sample from the past couple of weeks. It’s a bit exhausting, honestly. Hey kids, how about a nice video game?

There’s a sense of being flanked. Not only did we (my husband and I) not prepare for this sort of campaign, we actively paved the way for it by encouraging our kids to be intellectually curious, creative, and resourceful in pursuing personal interests. All my lessons in reading and understanding a text now bolster their case for using AI to look up literary references. All my lectures about mindless scrolling and “snackable” feeds come back to haunt me when they rightly observe that they won’t need to bump around the Internet gleaning snippets of information if I just permit Gemini to give them a quick response. Large language models can be wrong sometimes, and they know that. But if one only wants fishing tips, or fodder for casual dinner-table debates, aren’t they reliable enough?

It feels obtuse to spurn such a valuable tool. High-handed lectures on alienation just sound silly when you’re standing in a Walmart aisle trying to select appropriate fishing equipment. Why not just take out your phone and get the needed information? But it’s all quite discombobulating: even as I smile at my children’s enthusiasms, taking a genuine maternal pride in that relentless curiosity, there’s that voice in the other ear whispering, “So, you signed your family up to be guinea pigs for the next technological revolution. What could go wrong?”

Some of the risks are obvious. I know all about student cheating, of course, and the nightmare it’s causing for universities. I fully agree that students must read, write, and articulate ideas if they are to develop their rational faculties. But those aren’t the problems that top my list, because the young people that most concern me are not particularly averse to reading, writing, or articulating ideas. 

On a very different front, I am deeply concerned by the things I read about chatbots being used for a simulacrum of love and friendship. Already there are tragic stories about ChatGPT ushering mentally fragile people further along the path to insanity, but as sad as those cases are, I’m even more concerned about the ways in which pseudo-intimacy with machines could erode people’s ability to navigate real human relationships. Truthfully, flesh-and-blood humans can be a real pain sometimes, and when they are, you’re not allowed to switch them off and go about your life. On the other hand, real people can love you, as an algorithm cannot. It’s chilling indeed to imagine a dystopian future in which people cocoon themselves in a virtual world of pleasantly pliant AIs, like a softly lit hall of mirrors. When it comes to chatbot intimacy, “just say no” certainly is my best advice, to my children or anyone else.

But when LLMs are used as tutors and sources of information, the questions get much harder. Here too, there are painful losses on the horizon. The decaying libraries may be the harbinger of worse things to come: crumbling universities, soulless novels, faltering minds and imaginations. A host of questions arises here about how models should be trained, and by whom. But none of that changes the fact that AI tools, right here and now, can do much to enhance our knowledge. As a great thinker once noted, all men by nature desire to know. Can we “just say no” to the desire to know?

It’s something of a cliché by now that it’s bad to “let AI think for you.” I don’t disagree per se, but the relationship between AI and “thinking” is not straightforward. The transformative power of the large language model lies in its tremendous capacity for synthesis. Synthesis can be a very important exercise of human rationality, one the LLM can potentially short-circuit, and yet there’s clearly nothing new about relying on outside sources for synthesis. The OED, in its day, represented a new and powerful synthesis. Atlases are a kind of synthesis. The periodic table of elements is a synthesis of another sort. Syntheses can be transformative, in a good way, and in an unimaginably complex world, we clearly need them. But when is synthesis enhancing our capacity to think, and when is it undermining it? Unclear.

Further, it’s a familiar truth that distraction has been one of the most problematic side effects of social media, smartphones, and other recent technological advances, in part because they placed an enormous amount of data at our fingertips while leaving us to bump from source to source, marshaling tidbits and synthesizing them ourselves. Large language models can do much to remedy that problem, renewing our capacity to focus. What if AI could actually lessen people’s absorption in the virtual world? It’s a possibility we should at least consider.

Honest skeptics need to acknowledge this much at least: if we refuse to use LLMs, or prohibit them for our kids, genuinely valuable opportunities will be lost. Ancient Greek tutors exist in this world, but not many, and not at a price I could afford. Chess grandmasters could provide the high-level analysis young chess nerds want, but they’re unlikely to return our phone call. Answers may vary, but it’s simply a fact that AI is forcing us to confront “how should we live now” questions in a new way. In my mind’s eye, I’m suddenly back in that public library, struggling to explain why we should check out a fraying, dated book on a subject that a new YouTube video could cover in a fraction of the time. Again, I’m floundering.

In the end, though, there is this. The questions of this moment are difficult. It’s not right to leave the young and inexperienced to wrestle with them alone. The world has changed, with unclear implications, but if we live through this moment with our kids, we at least have some opportunity to offer guidance.

As I worked on this column, I could hear my husband and boys downstairs, working on a project of their own. They’re renovating our home library. The old shelves were overflowing, so my husband is surrounding the room with floor-to-ceiling built-in shelves. We still love books, you see. Some beautiful things may survive this moment of transition, but if we truly want to save them, we’ll need to be both prudent and circumspect. No one ever saved civilization by suppressing the desire to learn.