Mere Degrees From Artificial

Dearest Rachel –

Well, I know you weren’t as into the high-tech toys that computers quite frankly have always been (and increasingly will continue to be) as I was (and always will be, I suspect), although your computer was your own favorite toy for our last ten or so years together. Still, you did have a thing for them that most women didn’t, and don’t – apart from a stereotypical love for Candy Crush and the like – and you were willing to put up with a certain amount of “in-the-weeds” explanations about my latest acquisitions (or, more to the point, intended acquisitions), so… I hope you don’t mind my talking about artificial intelligence, and what I think it’s good for.

Bear in mind, just about anything I have to say on the topic is basically off the top of my head. I don’t research artificial intelligence nearly as much as I probably ought to in order to properly fathom its capabilities. But to be fair, most of that research would involve going into what it is capable of doing – what kind of programs have been created for it, and what they’re meant to do – as opposed to the philosophical underpinnings of “should” versus “can,” or even whether it truly “can” do something in the first place. Still, unlike my letter about the technical specs (which I’m starting to think will need to be yet another letter to you on the overall subject), much of this is a matter of speculation and opinion, in which case mine is probably no less valid than anyone else’s on the matter. So I think I can get away with speaking “without fear and without research” on this particular aspect of the topic.

For starters, I think most of the fears of the worst that sci-fi writers have come up with are a bit overblown. I’d say we’re a long way from Skynet or any form of what’s referred to as the “Singularity,” let alone any point at which computers autonomously decide that humans are the problem in this world, and take action toward remedying that situation. It’s true that technology has been advancing by leaps and bounds for some time now, but those advances are becoming a little more granular as time goes on. Recent advances have been no less remarkable than past ones – in fact more so, as they’ve begun to emulate human intelligence and creativity on a superficial but detailed level – but each step can appear smaller than the one before to the dispassionate observer as AI gets fine-tuned further and further. It’s applied in more and more ways over time, and the ways in which it’s already used keep being refined to better emulate human ability. I may be viewing it from something of a distance, but it looks to me as though mechanically reproducing human intelligence is something like trying to reach light speed – the closer you get, the more difficult it becomes, and the more energy you have to expend to get there.
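For what it’s worth, the physics behind that analogy is easy enough to spell out (indulge me for a moment): relativistic kinetic energy is E = (γ − 1)mc², where γ = 1/√(1 − v²/c²), and as v approaches c, γ grows without bound – each further increment of speed costs more energy than the one before. I suspect the last stretch toward genuinely human-level intelligence has the same shape.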

Indeed, there are already concerns about how much electricity and processing power need to go into AI systems in order to better replicate human intelligence – especially given the grid infrastructure that we have today – but that’s probably something I need to cover in my ‘technical’ letter, which will have to wait until after my new system arrives and I’ve got it up and running. There are times when I wonder if I should ask my folks whether they’ve seen any increase in their electric bill, although some of that could be attributed to the fact that we’re dealing with inflation the likes of which you never saw in your lifetime. I’d be curious whether the usage has increased since I started ‘working’ there – or whether it will once I get this thing set up.

In any event, I don’t see The Rise of The Machines in the foreseeable future, or even in my lifetime. If I’m wrong, I’ll have time to repent of this opinion then, but progress – such as it is – is a long way from allowing computers to make such moral judgements. Indeed, they aren’t necessarily able to determine the correct number of rocks one should eat daily, or how to keep the cheese from sliding off a pizza; a story has been making the rounds about a certain large language model (used in developing chatbots that, at this point, are otherwise well able to pass the Turing test) that recommended one should consume at least one or two small rocks on a daily basis, and that melted cheese could best be adhered to pizza crust by the expedient of non-toxic (I should hope so!) Elmer’s glue. I should point out that said model drew in part on Reddit, and such recommendations were jokes from various users that the computer took as gospel truth; it’s really just Exhibit 5,384,368 of the old programmers’ axiom of “Garbage In, Garbage Out.” I’d also suggest that, until computers develop an autonomous sense of humor, these sorts of things will continue to trip them up, and preclude them from ever attaining parity with, let alone superiority over, the human race.

At the same time, I also think they may be closer to replicating the average human mind than we might want to admit – uncomfortably so. Not because they’re able to overtake us – apart from their raw speed in making calculations and executing their specific functions – but because they remind us of how limited we are ourselves. It’s argued that the ‘creative’ output that artificial intelligence generates is all derivative – everything it ‘creates’ is in the style of one artist or another (or a combination thereof), or of the techniques behind said artists – which is quite true. As with programming itself, prompting requires setting rules and parameters for the computer to follow in order to generate the desired output. For all the apparent uniqueness of each image created (and let’s stick with AI-generated art for the moment, as it’s what I’m most familiar with), the computer is following the list of rules it’s given – sometimes ignoring a few along the way where one might be incompatible with another – in order to generate its output; so of course it’s going to be derivative. The disconcerting part of this is the realization that the instructions I’m giving it are pretty derivative themselves; I see an AI-generated image (and the instructions behind it), wonder what it would look like if it were you in the picture, and simply hand the same instructions over to my computer, with the only addition being that it use you as the subject rather than some randomly-generated female.
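If I were to put that little routine into code, it would look something like the sketch below – bearing in mind that the prompt is one I’ve invented purely for illustration, and generate_image() is a stand-in I made up, not any real library’s function.

    # A sketch of the "borrowed prompt" routine I just described. The
    # prompt text is invented for illustration, and generate_image() is
    # a placeholder for whatever image model one actually calls.

    def generate_image(prompt: str) -> None:
        # Stand-in: a real system would hand the prompt to a model here.
        print(f"[would render] {prompt}")

    # Someone else's instructions, copied wholesale...
    borrowed_prompt = ("portrait of {subject}, oil on canvas, "
                       "soft window light, in the style of a Dutch master")

    # ...their image, with a randomly generated subject...
    generate_image(borrowed_prompt.format(subject="a randomly generated woman"))

    # ...and mine: the very same list of rules, with exactly one change.
    generate_image(borrowed_prompt.format(subject="Rachel"))

All I bring to the table is that one change.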

Sure, they’re unique in their own way – and different from each other – but they aren’t really different, when you come down to it. In fact, I’m borrowing someone else’s idea and repurposing it.

So, when I think about it, in terms of coming up with ideas, I’m not that much different from the machine I’m instructing. I’m using all these tools, and doing basically the same thing other people have already done. In essence, I’m mere degrees from artificial intelligence myself.

I might argue that I’m not the creative type, and that there are others who have come up with so many other possibilities, so many combinations of pose, composition, technique and style – why not use them? And that’s fine, but the point stands that we (or some of us, at any rate) are every bit as derivative as the computers we hand our instructions to. Even when it comes to art forms that I’m more familiar with, like writing, how often do I find myself quoting someone else? How often do I tacitly admit that others have uttered a certain sentiment I agree with, and done so better than I could? So you see, I’m little better than the machines we accuse of plagiarizing and recombining existing creations today. I don’t know what that says about them, but it doesn’t necessarily speak well of me or my own ability – which is why I lean on them more heavily than most people do, because I can relate to them in this respect.

In any event, honey, I expect I’ll have more for you on the technical side in a couple of days, once this thing arrives at my door and I get it put together. Until then, keep an eye on me, and wish me luck. I’m going to need it.

