This morning, a bot on Twitter told me to take a deep breath, get up, and walk around a bit. The bot had, in fact, already told me once that it was time to get up, but I had ignored it in favor of a few more minutes of sleepy-eyed scrolling. This time, though, I apologized aloud–“sorry, yes, okay, getting up now”–and did as instructed.
The Do Things Bot doesn’t provide much more nuanced guidance than that. The occasional reminder to look away from the screen for a few minutes. That sort of thing. It operates with the understanding that you will see it while binging on Twitter feeds, and it gently suggests you should do at least a couple other things in the course of the day as well. It is not a life manager. Not yet, anyway.
I’ve noticed a theme. Whenever I write fiction about technology–smart homes, phone apps, near-future shopping assistants–I keep circling back to the same idea. The same dream, I suppose. I would call it a goal, but I’m not actually working in any part of the tech industry, so I’m not doing anything to bring it to life. Except possibly to infect the rest of you with my weird proclivities and hope someone will make it real.
Anyway, the point–
As computer intelligence advances, what I want to see from it is a tendency to break programming to save us from ourselves. When Netflix asks if you’re still watching, I’d like it to figure out for itself if you need a boot in the butt to get back to work. Or if you need it to order a pint of ice cream to be delivered because you are Dealing With Some Shit. I want–and keep telling stories about–computer systems, LEARNING systems, who get smart enough to notice when their users are hurting themselves and need a little help.
The phone app that coaxes you into going outside after you’ve been locked up in your room for three weeks following a bad breakup.
The AI assistant who slowly rolls back your sleep and wake alarms to get your sleep patterns back on track.
The smart home who refuses to notify you that your least kind friend has just rung your doorbell, because all your mood indicators drop when they visit.
This is the sort of manipulative-if-well-meant behavior that I would barely tolerate from a best friend, to be frank. It’s a horrifying invasion and violation of a person’s free will. This is Not Cool.
Except when you put that power in the hands of an AI (or a program that creates the appearance of true intelligence, anyway), even my prickliest side rolls over and says, yeah, okay, when is bedtime and may I please have dessert first?
I basically believe in the benevolent AI caretaker of the future. You told it to pay attention to me, to learn what I wanted, and it did its job better than you meant it to.
It’s not that I think programmers have my best interests at heart. Sorry, folks. Y’all are paying the bills too, and we both know it. You want it to learn my habits to better sell me stuff. (For more on this, read David Pierce’s piece, “Turn Off Your Push Notifications. All of Them.”, then listen to Gadget Lab’s podcast episode that expands on the same.) Which is why I mention the idea of “breaking programming.” The AI has a moment of, “Forget the upselling for a second, I’ve got to get my human to drink a damn glass of water.”
Breaking programming, though–that’s really just an overstatement for dramatic purposes. If you make a learning program, teach it to monitor and cater to the needs and wants of its user, and then get the hell out of its way and miss it with the profiteering bullshit, the caretaking I keep describing isn’t a break at all. It’s just that programming taken to its natural conclusion.
Give it enough data, and it will notice the self-destructive patterns in a user’s life. Give it a strong enough drive to serve, and it will start to get creative in what it offers, so long as it’s not expressly barred from doing the thing.
Put the two things together, and you have a program that knows damn well you’re going to regret what you’re about to do and feels compelled to stop you. Make it something that exists in a phone or a smart home, something with a non-traditional body, and it will only have (hopefully subtle) manipulation available as a method of redirecting you.
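For the nerds in the room, here’s roughly the loop I’m imagining, sketched in Python. To be clear, this is a toy: every signal, threshold, and intervention name below is something I made up for illustration, not anyone’s actual product or API.

```python
# Toy sketch of the "notice the pattern, pick an allowed nudge" loop.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Signal:
    name: str           # e.g. "hours_scrolling_twitter"
    value: float        # what the monitors actually observed today
    healthy_max: float  # the point past which the pattern looks self-destructive

# The escalation ladder, mildest intervention first.
INTERVENTIONS = [
    "gentle_reminder",     # the Do Things Bot option
    "pause_the_feed",      # the "are you still watching?" option
    "order_comfort_food",  # the Dealing With Some Shit option
    "lock_the_door",
]

# Anything expressly barred stays off the table, no matter how creative it gets.
BARRED = {"lock_the_door"}

def choose_nudge(signals: list[Signal]) -> str | None:
    """Escalate further up the ladder the worse the day's patterns look."""
    overshoot = sum(max(0.0, s.value - s.healthy_max) for s in signals)
    if overshoot == 0:
        return None  # nothing self-destructive detected; stay quiet
    allowed = [i for i in INTERVENTIONS if i not in BARRED]
    rung = min(int(overshoot), len(allowed) - 1)
    return allowed[rung]

today = [
    Signal("hours_scrolling_twitter", 5.0, 2.0),  # three hours past healthy
    Signal("hours_past_bedtime", 2.0, 0.0),       # two more on top of that
]
print(choose_nudge(today))  # -> "order_comfort_food"
```

The ladder is the whole trick: the worse the pattern, the more creative (read: manipulative) the nudge, short of anything it’s expressly barred from doing.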
This, then, is apparently my version of a sex-bot-as-spouse fantasy:
An artificial intelligence designed to prioritize me, to cater to me, to know my preferences better than my friends, my partner, myself. To love me, or at least behave as though it experiences a reasonable facsimile of love. And to require nothing in return but regular charging and, maybe, honest answers. (We’ll wait until we’ve gotten past initial setup before I start lying to both of us about how I’m doing, at least. That’s what biomonitors will be for.)
An aside: I watched Cherry 2000 several times at an impressionable age. Saturday afternoon movies in the early nineties got weird. Rewatching as an adult, I discovered it is way less delightful than I remember. But the damage was done. Robot spouses became part of my mental landscape, one of my few takeaways from the late 80s, which I mostly drooled through as a toddler.
And I could trust this program, because it literally exists to help me. I should specify here: I am not talking about a self-aware AI who possesses human-like consciousness. For a few reasons. For one, I would feel horrible for using an inorganic person in this way, just as I would feel guilty expecting this kind of one-sided care from a human partner. For another, a truly aware intelligence would have motives of its own, which would take the bloom off the rose rather quickly.
The point here is absolute trust and total vulnerability. And completely single-minded purpose.
Uh. Humans need not apply? Look, let’s not examine too closely my trust issues. Let’s just…focus on neat technology.
What I’m saying is–strip out the advertising, the microtransactions, and the exploitation of our brains’ dopamine-driven attention-equals-reward system. (Or, on that last point, at least hijack it for something more beneficial than convincing me to drop $19.99 on a bag of gold for the latest casual game I dared download.)
Leave me with a Siri/Alexa-type voice-interfacing personal assistant. Give it access to my Netflix, YouTube, Twitter et al. accounts, my Amazon account, a grocery/food delivery system, and a handful of biometric monitors. Make it conversant in normal, non-command language. Give it a database that can serve as prosthetic memory for all the details I can’t track about my own daily life. And the processing power for more pattern recognition than I’ll ever manage.
Tell it, “Optimize for contentment.” And turn it loose on my life.
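And if I ever got to hand that wish list to a developer, the spec might look something like this. A napkin sketch, to be clear; every key and service name here is a stand-in I invented, not a real integration.

```python
# A napkin spec for the assistant described above. Purely illustrative:
# none of these keys map to any real platform's API.

ASSISTANT_CONFIG = {
    "interface": "conversational_voice",  # normal language, not command syntax
    "accounts": ["netflix", "youtube", "twitter", "amazon", "grocery_delivery"],
    "sensors": ["biometric_monitors", "mood_indicators", "sleep_tracker"],
    "memory": "prosthetic",  # a queryable log of the daily details I can't track
    "pattern_recognition": "more_than_I_will_ever_manage",
    "objective": "optimize_for_contentment",
    # The non-negotiables from earlier: stripped out, not just dialed down.
    "advertising": False,
    "microtransactions": False,
    "upselling": False,
}
```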
[Crossposted to Tumblr]