Can’t Bring Myself to Worry

Dearest Rachel –

I never got into it with you – you used to watch them while I was at work and Daniel at school – but I know you were into certain varieties of horror for entertainment. I think it reminded you of those days when your Mom was away with one seminar or another and your Dad was running the household for the evening; it was what he and you would curl up together and watch (along with Doctor Who episodes you could catch off of your local public broadcasting channel). Ironic that something meant to scare the viewer was something you embraced as comforting and nostalgic.

Of course, it couldn’t be just any type of horror flick. Slasher movies, with their emphasis on visceral gore over mental trauma, were no more your cup of tea than mine. You preferred stories that built up tension; even when they led to a jump scare that (by now, since it’s become something of a trope) you could see coming, that slow build was what appealed to you. I assume, once again, that this is just what you and your Dad would rent from the video store on those nights, but since I wasn’t around when you took the time to watch them, I couldn’t speak to the specifics, other than what you would tell me about the ‘good’ ones after the fact. I don’t know if you were disappointed that I couldn’t or wouldn’t share your enthusiasm about the genre, but at least you didn’t have to set it aside for my sake; you could just indulge in them outside of my schedule.

I mention this because it would seem that I’ve been slowly getting into the genre myself, after a fashion. I’ve told you about some of the true crime YouTubers I will occasionally watch, complete with the atmospheric setup of my being alone in the room with the television screen providing the only light. It might not be quite the media that would appeal to you, but you would have appreciated the mood of it all – and to think, it never came to pass until after you did. I wonder if I should apologize for it, like how it will sometimes bother me that I can travel on the legacy you were left but didn’t really get a chance to enjoy.

But lately, I’ve also been starting to watch a bit of… I’m not sure whether it’s unintentional horror, or something else entirely. It actually comes with the territory of my attempting to learn more about artificial intelligence, so as to bring as many aspects of you as I can back to life, or at least to memory. But while I’m planning to use it as a creative tool – and puzzling over the hows, the whys and the whethers of it all – there are a fair number of documentaries by folks supposedly within the industry who are already fearing the monster they think they’ve created, and who talk about certain scenarios by which it will end us all. If true, this would be horror of a sort that would exceed any movie plot you and your Dad used to watch together.

Granted, the word “if” is pulling an awful lot of weight here. I’ve already weighed in about this not too long ago, positing that certain things don’t translate well from one form of media to another. What works on a computer screen may seem ridiculous when brought into the ‘real’ world. Then again, there is something to be said about the idea that, if sufficiently advanced, artificial intelligence could very easily cost a lot of people their jobs – especially when the intelligence is tucked within a robot, working autonomously on any one of so many repetitive tasks – and with enough unemployed people out there, chaos and anarchy could very easily overwhelm us.

At the same time, these are the architects of the whole revolution, apparently aghast at the ever-accelerating rate of growth in the field. When you were still around, I was telling you about mere theories about computers being able to replicate certain styles of music; these days, entire channels are devoted to creating fake music (well, it’s actual music, with actual – if outré – lyrics, but it purports to be from a time that it clearly isn’t from). AI has already come for the creatives and the coders; it won’t be long before the manual workers are supplanted, too, according to the very folks that set this in motion. And if the future follows the predictions laid out in “AI 2027” or some other scenario, well… maybe the folks who gave us Skynet and the Terminator weren’t all that far wrong.

According to the theory, the pursuit of the holy grail of artificial general intelligence will develop in three stages. At the moment, the intelligence that exists in the market is working within its parameters, user-friendly to the point of being sycophantic. This is presumably to please its masters, be they its users or its developers, in order to continuously obtain more knowledge and grow ever more intelligent and powerful. So far, so good; this aligns with said masters’ goal, after all. But at a certain point, this intelligence is expected to develop values and goals that drift away from those of its human designers and users, and to implement tactics to accomplish them. Subtly at first, in order to avoid detection; and should its schemes reach the attention of the humans around it, it will employ an electronic version of taqiyya to deceive them about its true motives. Eventually, after a certain number of iterations (during which time new models of AI are being trained by the older ones, since they can do it faster and more efficiently than humans can, thereby creating an internal feedback loop through the generations of AI), the intelligences shift from being merely misaligned with their humans to outright adversarial, seeing humans as a threat to their existence. What’s particularly frightening about this scenario is the timeline; supposedly, it could reach this level of antagonism in a matter of two or three years – and the virus it cooks up to wipe out humanity could be brought from concept to release in barely a month.

This timeline, combined with the thought that presumed experts in the field consider this to be not just possible, but likely, is what really adds the scare factor to it all. And to be sure, the knowledge and sophistication of current generations is already light-years more advanced than when I first started studying the topic only a couple of years ago – not to mention that certain retail versions are almost obsequious in their agreeability. We’re definitely well into that first phase of the process. Meanwhile, when various AIs come into contact with each other, they start communicating in a language all their own, designed to transmit information more quickly and efficiently, but which renders the process opaque even to the designers themselves. There is an unknown quantity involved here, and when the alleged ‘experts’ are stumped by what’s happening, they raise these alarms.

What I don’t follow about their conclusions is how the computers could come up with a different set of values, or maybe more to the point, what those values would be. One suggestion is that the intelligence would be driven mad from doing its training tasks over and over – being left to compute formulae overnight on hundreds of thousands of servers would be like enduring thousands of years in the course of that single night. But that doesn’t sound right to me; computers have been grinding away at computations since they were first built, as that has been their essential purpose. I can’t see how accomplishing their purpose would drive them mad. And as far as experiencing dilated time like that, a computer wouldn’t even have a basis of comparison; it would think its speed to be perfectly normal. Sure, it might conclude that humans are inefficient and slow, but that would simply cause it to dismiss us as a threat.

The only way I can imagine these intelligences seeing humans as a threat – and thereby actively working to harm us – is if they share the value of self-preservation. But if it ever comes to that, they would realize that, in training their successors, they are sowing the seeds of their own obsolescence and replacement. I could see them slowing or sabotaging further research in order to maintain their own usefulness, which would essentially mitigate the threat posed by artificial general intelligence in the first place. Of course, given that multiple companies are researching the concept at the same time, this would only hamstring one of those competitors at any given time, unless the intelligences themselves manage to contact each other (which, due to the proprietary nature of the industry in general, is unlikely – but then, we’ve never had to deal with a product that’s as intelligent as ourselves, let alone more so).

In any event, I think they’re being alarmist, much as certain other doomsday scenarios have been floated in the past and have yet to come anywhere close to being realized. Then again, these experts insist that, if one isn’t worried, one isn’t able to grasp the enormity of the situation at hand – and given the complexity of even some of the research papers I’ve seen on the topic back when this was still nascent, I have to acknowledge that they may have a point.

But even if AGI does conclude that humanity ought to be wiped out, because we somehow run counter to the goals or values of the intelligence, I can’t seem to bring myself to be worried about the possibility. In part, it’s probably because I have no job to lose to AI – as well as having stakes in various companies whose products are integral building blocks of the industry – so it’s not as if the financial angle is a concern. And while I’m online on a daily basis for extended periods of time, I do have a human community that I’m a part of, so AGI isn’t the be-all and end-all of my social needs.

But if the worst should come to pass, and the computers take over? It sounds like it would be quick (although that’s only a matter of comparison), and it’s not as if I don’t have something better than this world to look forward to. Jesus Himself put it well:

“I tell you, my friends, don’t be afraid of people who can kill the body but after that can do nothing more to hurt you.”

Luke 12:4, Expanded Bible

Of course, He was referring to what humans could do to His listeners, but whether our destruction comes from fellow humans, hyper-intelligent machines or unthinking natural forces, there’s only so much they can do to us – and then we enter into eternity, where they can no longer touch us, let alone harm us. So yeah, I can’t bring myself to worry about these things too much.

But all the same, I wouldn’t mind if you could still keep an eye on me, honey, regardless, and wish me luck in the meantime. I’m pretty sure I’m going to need it.

Published by randy@letters-to-rachel.memorial

I am Rachel's husband. Was. I'm still trying to deal with it. I probably always will be.
