Instead of unthinking digital drafting boards, our computers could soon be intelligent partners in design – first, though, they’ll need to figure out how to fold a shirt.
Last month I shook hands with the future. Well, it was more of a pincer than a hand, but it was definitely the future. This pincer belonged to BRETT, the Berkeley Robot for Eliminating Tedious Tasks, created by the UC Berkeley People and Robots Initiative. I interrupted BRETT as it tied knots in a thick piece of red rope. His real trick, though, is doing the laundry.
While laundry may be an annoying chore for us fleshy humans, for a silicon-brained robot, it’s insanely difficult, comprising a complex series of steps in a messy real-world environment. Watching BRETT in action – it takes about ten minutes to fold a single towel – is sure to dispel any fears of a Terminator-style uprising any time soon. But BRETT is part of a new approach to artificial intelligence known as ‘deep learning’, where instead of tasks being pre-programmed, the system is able to train and improve itself based on experience.
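The core idea behind learning from experience rather than pre-programming can be sketched in a few lines. The following toy example (an illustration, not BRETT's actual code, and far simpler than real deep learning) shows a system that starts with no knowledge of the right answer and improves itself after each trial by measuring and reducing its own error:

```python
# A minimal sketch of "learning from experience": a single-parameter
# model is never told the answer directly. It repeatedly tries, measures
# how wrong it was, and nudges itself in the direction that reduces the
# error. (Illustrative only; BRETT's towel-folding system is vastly
# more complex.)

def learn(target, trials=100, lr=0.1):
    guess = 0.0
    for _ in range(trials):
        error = guess - target   # experience: how wrong was the last attempt?
        guess -= lr * error      # update: adjust to reduce that error
    return guess

print(round(learn(5.0), 3))  # the guess converges toward the target
```

Deep learning applies the same trial-and-correction loop to millions of parameters at once, which is what lets a robot gradually improve at a task as messy as folding towels.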
Deep learning, combined with exponential advances in computing power, has ushered in a new age of breakthroughs in artificial intelligence research. Previously intractable problems such as computer vision, speech recognition and natural language processing have been cracked, leading to new technologies we take for granted, such as Siri, and new products on the near horizon, such as driverless cars. […]