There's been a spate of articles (like this one) recently about the potential risks and ethics of lethal AIs: drones that can acquire targets autonomously, and robots, like the Terminator, hunting down and killing people. Whilst I've blogged about this before, I was surprised to suddenly see the web alive with speculation and comment. I think I've tracked down the source: a new Cambridge University research centre, the Centre for the Study of Existential Risk (CSER).
The centre is co-founded by Cambridge philosophy professor Huw Price, astrophysicist Professor Martin Rees and Skype co-founder Jaan Tallinn. Prof Price says, "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology." He adds that as robots and computers become smarter than humans, we could find ourselves at the mercy of "machines that are not malicious, but machines whose interests don't include us".