
Laying a trap for self-driving cars

We spend a lot of time and words on what autonomous cars can do, but sometimes it’s a more interesting question to ask what they can’t do. The limitations of a technology are at least as important as its capabilities. That’s what this little bit of performance art tells me, anyway.
You can see the nature of “Autonomous trap 001” right away. One of the first and most important things a self-driving system will learn or be taught is how to interpret the markings on the road. This is the edge of a lane, this means it’s for carpools only, and so on.
British (but Athens-living) artist James Bridle illustrates the limits of knowledge without context — an issue we’ll be coming back to a lot in this age of artificial “intelligence.”
A bargain-bin artificial mind would know that one of the most critical rules of the road is never to cross a solid line with a dashed one on the far side. But of course it’s just fine to cross one if the dashes are on the near side.
A circle like this with the line on the inside and dashes on the outside acts, absent any exculpatory logic, like a roach motel for dumb smart cars. (Of course, it’s just a regular car he drives into it for demonstration purposes. It would take too long to catch a real one.)
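The binding logic is easy to see if you write the crossing rule down. Here is a minimal sketch of that naive policy — a hypothetical illustration, not code from any real autonomy stack — showing how a circle with dashes outside and a solid line inside lets a rule-following car in but never out:

```python
# Hypothetical sketch of the naive lane-marking rule the trap exploits.
# Not any real self-driving system's code.

def may_cross(near_marking: str, far_marking: str) -> bool:
    """A vehicle may cross a double line only when the dashed
    component is on its own (near) side of the line."""
    return near_marking == "dashed"

# Approaching the trap from outside: dashes are nearest the car,
# the solid line is on the far side, so entry is permitted.
print(may_cross(near_marking="dashed", far_marking="solid"))   # True

# Once inside the circle, the solid line is now the near side,
# so the same rule forbids leaving: the car is "bound" in place.
print(may_cross(near_marking="solid", far_marking="dashed"))   # False
```

The asymmetry of the rule is the whole trap: a policy that is perfectly correct on an ordinary road becomes a one-way valve when the markings are drawn in a closed loop.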
It’s no coincidence that the trap is drawn with salt (the medium is listed as “salt ritual”); the idea of using salt or ash to create summoning or binding symbols for spirits and demons is an extremely old one. Knowing the words of command or secret workings of these mysterious beings allowed one power over them.
Here too a simple symbol “binds” the target entity in place, where ideally it would remain until its makers got there and… salvaged it? Or until someone broke the magic circle — or until whoever was in the driver’s seat took over control from the AI and hit the gas.
Imagine a distant future in which autonomous systems have taken over the world and knowledge of their creation and internal processes has been lost (or you could just play Horizon Zero Dawn) — this simple trap might appear to our poor debased descendants to be magic.
What other tricks might we devise that inexorably cause a simple-minded AI to stop, pull over, or otherwise disable itself? How will we protect against them? What will the crime against mechanized AIs be — assault, or property damage? Strange days ahead.
Keep an eye on Bridle’s Vimeo or blog — the video above is a temporary one and the performance, like most things, is a “work in progress.”
