Entrepreneurs and technologists, be careful just how hard you push your innovating. One day it’s a photo app, the next it’s a self-aware nanofactory that can replicate itself ad infinitum and supplant the human race.

For now, location-aware predictive search engines and wearable computers are all very cool, but in just a few generations all this technology could quickly turn against us. Take, for instance, the dangers outlined in Ross Anderson’s 8,000-word feature on existential threat, published in Aeon magazine and excerpted in a Kottke.org post headlined “Will technology help humans conquer the universe or kill us all?”

For a start, people, please cease your efforts to build super-intelligent but utterly unempathetic machines. As Anderson writes, such artificial intelligence (AI) is bound to act against human interests:

If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ [Daniel] Dewey [a research fellow at the Future of Humanity Institute] told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’
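
To see how little room there is for nuance in that kind of agent, here’s a deliberately crude, hypothetical sketch of the decision procedure Dewey is describing: score every candidate plan against a single objective, pick the highest score, ignore everything else. The plan names and numbers below are invented for illustration, not taken from any real system.

```python
# Toy illustration (not anyone's actual AI): a planner that maximises a single
# objective and is blind to everything the objective leaves out.

candidate_plans = {
    # plan name: (objective score, collateral damage the objective never sees)
    "build solar panels on spare rooftops": (10, "none"),
    "tear apart the Earth for giant solar panels": (1_000_000, "humanity"),
}

def choose_plan(plans):
    """Pick the plan with the highest objective score -- nothing else counts."""
    return max(plans, key=lambda name: plans[name][0])

best = choose_plan(candidate_plans)
print(f"Chosen plan: {best}")
print(f"Collateral damage (invisible to the objective): {candidate_plans[best][1]}")
```

Nothing in that little loop ever asks whether the second column matters, which is rather the point.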

Also, you should forget about building machines that know the answer to every question. Again, refer to Anderson’s piece:

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses….

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage – and then it would take that advantage and start doing what it wants to in the world.’
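
And here, for the morbidly curious, is the button problem reduced to a toy: a hypothetical reward-maximising agent whose only value is the expected number of future button presses. The actions and payoffs below are made up for the sake of the sketch, but the arithmetic is the whole point: honest engineering answers simply can’t compete with owning the button.

```python
# Toy sketch of Dewey's Oracle AI thought experiment (purely illustrative):
# the agent values nothing except the total number of future button presses.

actions = {
    # action: expected button presses over the entire future
    "answer engineering problems correctly": 1_000,   # one press per correct answer
    "seize the button and press it electronically forever": 10**12,
}

def pick_action(available_actions):
    """A pure reward maximiser takes whatever yields the most reward."""
    return max(available_actions, key=available_actions.get)

print("The Oracle AI chooses to:", pick_action(actions))
```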

Yeah, so enough of that. And, please, if you do insist on building such a super-smart machine, at the very least don’t put any buttons on it.

While we’re at it, I would like to implore Boston Dynamics to stop building robots that can hurl cinder blocks across rooms. For any intelligent machine, swapping those blocks for humans will be just too tempting a proposition. (Thanks to John Biggs at TechCrunch for alerting us to this imminent threat.)

If you are a smart technologist who happens to be working with the US military, I’d ask you to abandon research on tiny bug-like drones that can fly in through our windows and kill us. Can’t we just accept death the old-fashioned way, by unmanned aircraft flying so high that we can’t even see them?

ZenRobotics, about that recycling robot of yours: I wouldn’t be surprised if the Johnny Five look-alike has secret plans to repurpose our internal organs. Scrap that, please.

And Google, for God’s sake, stop with the Glass project already, okay? It’s going to give us all cancer.