In an apparent move to feed its smart-hardware ambitions, Google has bought an artificial intelligence startup, DeepMind, for somewhere in the ballpark of $500 million. Considering all of the data Google sifts through, and the fact that it might be getting into robotics, it's not completely absurd that they'd want some software to give a robotic helping hand. (Facebook apparently wanted the company, too, and they've already made moves to wrangle their own sprawling web of information.) But the other part of this story is a little stranger: the deal reportedly came under the condition that Google create an "ethics board" for the project.
What, exactly, does that mean? No idea. It's unclear how the board would be structured, who'd be on it, or when it would be consulted. The London-based DeepMind doesn't seem particularly sinister, either: the company has mostly used its software in fields like e-commerce and gaming. The point is that software like this could eventually be used for work in ethical gray areas, and DeepMind might've wanted to get ahead of those issues.
Which, good. The more decisions we cede to machines, the more we need human oversight of those decisions. A simple "Don't be evil" mantra might not cut it.