Speech recognition software isn't perfect, but it came a little closer to human-level performance this week, as a Microsoft Artificial Intelligence and Research team reached a major milestone in speech-to-text development: the system achieved a historically low word error rate of 5.9 percent, matching the accuracy of a professional (human) transcriptionist. It can discern words as clearly and accurately as two people having a conversation understand one another.
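Word error rate is just the word-level edit distance between the system's transcript and a reference transcript, divided by the length of the reference. A minimal sketch of the computation (this is an illustrative implementation, not Microsoft's evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = min edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

# One dropped word out of six gives a WER of about 16.7 percent,
# so 5.9 percent means roughly one error per seventeen words.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```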
On a clear day, the sky hides nothing from space. For more than half a century, spy satellites have circled the globe, taking pictures of the world below. First launched by the United States and the Soviet Union to keep tabs on each other, these satellites produced photography that progressed from state secret to common mapping tool, with photos taken from space now available to anyone with an internet connection. But what if satellites did more? What if, instead of just showing us what the world looks like from above, they interpreted those images to identify buildings and other objects?
There are plenty of bots working today to produce basic, numbers-driven reports on sports and finance results, but the Olympics showed that one robo-reporter can be quite prolific. A single Chinese bot named Xiaomingbot produced 450 stories about results from the games.
In artificial intelligence research, everyone is talking about style transfer. It takes traits from one piece of art, like the brushstrokes of a painting, and applies them to another image. It's the software behind the popular photo app Prisma, and Twitter bots like the now-defunct DeepForger.
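A common way these systems capture "brushstroke" traits is by matching second-order feature statistics: the Gram matrices of a neural network's activation maps, as in the Gatys et al. approach that inspired apps like Prisma. A minimal NumPy sketch of that style loss, with the feature arrays standing in for real convolutional activations (the function names here are illustrative, not Prisma's or DeepForger's actual code):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of an activation map.

    features: array of shape (channels, height, width), assumed to come
    from some convolutional layer of an image network.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(style_feats, generated_feats):
    """Mean squared difference between the two images' Gram matrices."""
    g_style = gram_matrix(style_feats)
    g_gen = gram_matrix(generated_feats)
    return float(np.mean((g_style - g_gen) ** 2))
```

In a full style-transfer pipeline, this loss is summed over several layers and minimized by gradient descent on the generated image's pixels, while a separate "content" loss keeps the subject of the original photo intact.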
Artificial intelligence lets us offload tasks onto machines—they're beginning to tag our photos, drive our cars, and fly our drones. These A.I. systems occasionally make wrong decisions on their own, as has been speculated in the recent Tesla Autopilot crash and as happens when a voice command is misheard, but new research suggests that hackers with A.I. expertise could deliberately force these algorithms to make wrong and potentially harmful decisions.
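The best-known form of such an attack is the adversarial example: a tiny, often imperceptible perturbation of the input, stepped in the direction of the model's loss gradient, that flips its decision. A hedged sketch of the fast gradient sign method (FGSM) on a toy logistic classifier, with made-up weights and inputs purely for illustration (real attacks apply the same idea to deep networks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast Gradient Sign Method against p(y=1|x) = sigmoid(w.x + b).

    The gradient of the cross-entropy loss with respect to the input x
    is (p - y_true) * w; stepping eps in the sign of that gradient
    increases the loss as much as possible per unit of perturbation.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy example: a point the classifier confidently assigns to class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])  # w.x + b = 1.5, so p(y=1) is about 0.82

# A small sign-following nudge is enough to flip the prediction to class 0.
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)
```

The unsettling part is that the perturbation is chosen by the attacker, not by chance: a photo, road sign, or audio clip can be altered so slightly that a human notices nothing, while the model's answer changes completely.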