This tweet thread possessed me this morning – not just because it was rooted in science fiction and in the conceit that fiction allows us to try things out before we actually have to do them, but because of the current trend to find a way to "uber" everything.
After Riker delivers a devastating presentation that proves Data is an elaborate machine, Picard joins Guinan for a drink.
Guinan warns Picard that civilizations love nothing more than to create “disposable people,” to do the jobs no one else wants, with no recourse. pic.twitter.com/SMrDQ4ibG9
— Frisco Uplink (@_danilo) January 25, 2020
That line – "civilizations love nothing more than to create 'disposable people', to do the jobs no one else wants, with no recourse" – is chilling.
In the olden days…
In the olden days (when I was a kid), there was a car assembly factory (belonging to Ford). We were told (perhaps apocryphally) that it was filled with sophisticated robots assembling the vehicles with no supervision and, perhaps because I was *that* sort of geek, I remember having a conversation with some other geeks about robot rights. Perhaps it was spurred on by a story in 2000AD with Ro-Jaws and Hammerstein, or perhaps it was just the zeitgeist – but we talked about this stuff. How smart did they have to be? Did they have sensors to perceive the world? How many sensors and programs needed to be running for them to be considered sentient or cognisant?

We are becoming frighteningly aware now that animals (which we were told didn't have feelings and didn't have souls) are in many cases a thousand times more humane to each other and to other species than we could have realised. The difference is that everyone having a camera in their pocket means we can see the times when an elephant saves a drowning deer or a dog tries to resuscitate a suffocating fish. We have the evidence right in front of us. So in that context, how sentient does a robot have to be before we give it rights? Does it just need to react to danger/pain?
Obviously destiny is filled with delicious irony.
It turns out that we don’t really have this problem.
As the Twitter thread implies, we didn't need to create a robotic underclass, because the actual videos from Boston Dynamics (as opposed to the Corridor Digital parody above) show that making good androids is hard. What is easier is to turn people into workers without rights. And it's even better if we can use algorithms (machine learning, artificial intelligence, whatever you want to call it) to manage people. Been away from your desk for more than 5 minutes? Reported to HR. Didn't process enough forms? Reported to HR. Took too many sick days? Fired. Or better still – you're on a zero-hours contract, so you can never get fired, because you were never properly hired. You just don't get allocated any hours by the algorithm, as it has learned that you're "unreliable" from the narrow rules it inherits from the perfect-employee handbook.
The algorithm doesn't care that your mother died. Or that you've developed cancer. Or that you need oxygen to breathe – it just records data and interprets it.
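To make the point concrete, here is a minimal sketch of the kind of narrow, rule-based "reliability" scoring described above. Every name, threshold and weight is hypothetical – this is an illustration of the pattern, not any real scheduling system.

```python
# Hypothetical sketch of narrow, rule-based workforce scoring.
# All thresholds and field names are illustrative, not a real system.

def reliability_score(worker):
    """Score a worker from crude signals; context is never considered."""
    score = 100
    score -= worker.get("sick_days", 0) * 10          # illness counts against you
    score -= worker.get("minutes_away_from_desk", 0)  # so does stepping away
    score -= worker.get("forms_below_quota", 0) * 5   # and missing quota
    return score

def allocate_hours(worker, available_hours=40):
    """Zero-hours contract: never 'fired', just never scheduled."""
    return available_hours if reliability_score(worker) >= 70 else 0

# A worker who took four sick days scores 100 - 40 = 60, below the
# cutoff, so the rota silently gives them nothing.
grieving_worker = {"sick_days": 4}
print(allocate_hours(grieving_worker))  # 0
```

Note that nothing in the code can represent *why* the sick days happened – the bereavement or the cancer diagnosis simply has no field. That is the sense in which the algorithm "just records data and interprets it".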
Instead of having robot slaves, we made them the supervisors
In that line above – "civilizations love nothing more than to create 'disposable people', to do the jobs no one else wants, with no recourse" – the reference was to robots, but now it is people. Real people.
There is therefore an onus on us – as creators, founders, mentors, advisors, investors and colleagues – to look for solutions that not only provide profit (after all, we live in a capitalist society) but also work for the good of humanity. We've seen plenty of startups come through the doors that solve problems, real problems, for profit, and they're definitely making life easier for humans. But solutions that really save lives or repair our environment, services that genuinely lift people out of poverty or help them navigate an ever more confusing and restrictive society, are rare.
We’d like to see more. And we’ll cover this in a later blog post.