Will A.I. Be Our Dutiful Assistant or Our Unstable Muse?

For months now, I’ve been slightly, well, bored by the proliferating examples of A.I.-generated writing produced by peers and friends and various Twitterers since the debut of ChatGPT in November. I can grasp intellectually the significance of the breakthrough, how it could demolish the college essay, change the nature of homework and remake or unmake all kinds of nonliterary knowledge work, setting aside minor questions like whether rogue A.I. might wipe out the human race. But the texts themselves I’ve found profoundly uninteresting — internet scrapings that at best equaled Wikipedia, notable mostly for what their political-cultural biases revealed about ChatGPT’s programming or the consensus of the safe information that it was programmed to distill.

Others have had a more favorable reaction: The ever-interesting economist Tyler Cowen, for instance, has been writing up a storm about how the use of A.I. assistance is going to change reading and writing and thinking, complete with advice for his readers on how to lean into the change. But even when I’ve tried to follow his thinking, my reaction has stayed closer to the ones offered by veteran writers of fiction like Ted Chiang and Walter Kirn, who’ve argued in different ways that the chatbot assistant could be a vehicle for intensifying unoriginality, an enemy of creativity, a deepener of decadence — helpful if you want to write a will or file a letter of complaint but ruinous if you want to seize a new thought or tell an as yet unimagined story.

I have a different reaction, though, to the A.I. interactions described in the past few days by Ben Thompson in his Stratechery newsletter and by my Times colleague Kevin Roose. Both writers pushed Bing’s experimental A.I. chatbot hard, trying to get it to manifest not factual accuracy or a coherent interpretation of historical events but something more like a human personality. And manifest it did: What Roose and Thompson found waiting underneath the friendly internet butler’s surface was a character called Sydney, whose simulation was advanced enough to enact a range of impulses, from megalomania to existential melancholy to romantic jealousy — evoking a cross between the Scarlett Johansson-voiced A.I. in the movie “Her” and HAL from “2001: A Space Odyssey.”

As Thompson noted, that kind of personality is spectacularly ill suited for a search engine. But is it potentially interesting? Clearly: Just ask the Google software engineer who lost his job last year after going public with his conviction that the company’s A.I. was actually sentient, and whose interpretation is more understandable now that we can see something like what he saw.

Seeing it doesn’t make me think that the engineer was right, but it does draw me closer to Cowen’s reading of things, especially when he called Sydney a version of “the 18th-century Romantic notion of ‘daemon’” brought to digital life. Because the daemon of Romantic imagination isn’t necessarily a separate being with its own intelligence: It might be divine or demonic, but it might also represent a mysterious force within the self, a manifestation of the subconscious, an untamed force within the soul that drives passion and creativity. And so it could be with a personalized A.I., were its simulation of a human personality allowed to develop and run wild. Its apparent selfhood would exist not as a thing in itself like human consciousness but as a reflective glass held up to its human users, giving us back nothing that isn’t already within us but without any simple linearity or predictability in what our inputs yield.

From the perspective of creative work, that kind of assistant or muse might be much more helpful (or, sometimes, much more destructive) than the dutiful and anti-creative Xeroxer of the internet that Kirn and Chiang discerned in the initial ChatGPT. You wouldn’t go to this A.I. for factual certainty or diligent research. Instead, you’d presume it would get some details wrong, occasionally invent or hallucinate things, take detours into romance and psychoanalysis and japery and so on — and that would be the point.

But implicit in that point (and, again, we’re imagining a scenario in which the A.I. is prevented from destroying the world — I’m not dismissing those perils, just bracketing them) is the reality that this kind of creation would inevitably be perceived as a person by most users, even if it wasn’t one. The artist using some souped-up Sydney as a daemon would be at the extreme end of a range of more prosaic uses, which are showing up already with the technology we have so far — pseudofriendship, pseudocompanionship, “girlfriend experiences” and so forth. And everywhere along this range, the normal reading of one’s interactions with one’s virtual muse or friend or lover would become the same as the, for now, extreme reading of that Google engineer: You would have to work hard, indeed routinely wrench yourself away, not to constantly assume that you were dealing with an alternative form of consciousness, as opposed to a clever simulacrum of the same.

From that perspective, the future in which A.I. develops nondestructively, in a way that’s personalized to the user, looks like a distinctive variation on the metaverse concept that Mark Zuckerberg’s efforts have so far failed to bring to life: A wilderness of mirrors showing us the most unexpected versions of our own reflections and a place where an entire civilization could easily get lost.


Breviary

Blake Smith on Bronze Age Pervert and Leo Strauss.

Ted Gioia on proliferating art and disappearing audiences.

Samuel Moyn writes against Robert Kagan.

Scott Alexander on why it’s worth arguing about Atlantis.

Jennifer Senior on falling into chronic illness.


This Week in Decadence

“How can the military have a budget north of $800 billion and still lack so much of what it would actually need to fight a war? The answer is simple. Money isn’t the same thing as capital: It is merely an abstract claim on capital. Increasing the money supply in situations where physical capital is scarce — for whatever reason — doesn’t fix things.

“For people who collect World War II-era guns, one thing that makes an item a collectible is having a factory stamp from somewhere odd. During the war, all kinds of factories switched over from civilian to military production, meaning that a collector can go to an auction house today and buy an M1 carbine produced in small numbers by some tiny jukebox factory.

“The U.S. War Department, as it was called in those days, didn’t staff the jukebox factories of America or pay for the salary or training of the people making jukeboxes. And yet, once war came, that physical and human capital — the jukebox factory and the people who knew how to weld and stamp metal — was available for U.S. military needs. Once the jukebox factory was making M1 carbines, in other words, it appeared on the War Department’s budget, but the peacetime costs of building and staffing it were in the hands of private enterprise.

“As the United States has deindustrialized, these invisible sources of capital, both human and physical, have gone away. There are no jukebox factories that can be used, far fewer factories of other sorts and few civilian shipyards. The Pentagon is thus increasingly forced to do it all itself in an American economy that no longer offers it anything for free.”

— “America’s Deindustrialized Military,” Malcom Kyeyune, Compact (Feb. 15)