Meet Lorek, a new human-interaction robot from Brown University

Researchers from Brown University have created a robot that can deal with uncertainty in human requests. A person can ask the robot for an item, and the robot will try to determine which item the user means. If it is unsure, for example because there are multiple copies of the same item on the table, it will ask for confirmation about which specific copy the user wants.
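As a toy illustration of the general idea (a sketch only, not Brown's actual system), the disambiguation logic might look like this: match the request against the items on the table, and ask a clarifying question whenever more than one candidate fits. All function and field names below are invented.

```python
# Toy sketch of request disambiguation -- illustrative only, not Brown's
# actual system. All function and field names here are invented.

def resolve_request(request, items_on_table):
    """Return (item, question): the item to hand over, or a clarifying question."""
    candidates = [item for item in items_on_table if item["name"] == request]

    if not candidates:
        return None, f"I don't see a {request} on the table."
    if len(candidates) == 1:
        return candidates[0], None  # unambiguous: hand it over

    # Multiple copies of the same item -> ask for confirmation.
    positions = ", ".join(item["position"] for item in candidates)
    return None, f"I see {len(candidates)} of those ({positions}). Which one do you mean?"

items = [
    {"name": "marker", "position": "on the left"},
    {"name": "marker", "position": "on the right"},
    {"name": "bowl", "position": "in the center"},
]
print(resolve_request("marker", items)[1])  # ambiguous -> asks a question
print(resolve_request("bowl", items)[0])    # unambiguous -> returns the bowl
```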

While this may seem trivial, handling uncertainty is a big part of human interaction and a hard problem for computer scientists working on human-robot interaction.

As for our industry, these kinds of advancements really start shining a light on distant possibilities for creating better, richer and more natural experiences between humans and robots.

Source: Wired

A.I. Duet – making music together with artificial intelligence

A new Google A.I. experiment shows a very impressive way to combine artificial intelligence and human creativity to make music together. It builds on several technologies, including tools from the TensorFlow-based "Magenta" project, which focuses on using machine learning to create compelling art and music.
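A.I. Duet itself pairs the user's playing with responses generated by Magenta's neural network models. As a much simpler stand-in for that call-and-response loop, the sketch below continues a phrase with a first-order Markov chain; this is explicitly not how A.I. Duet works internally, just an illustration of a machine responding musically to human input.

```python
import random
from collections import defaultdict

# Simplified stand-in for melody continuation. A.I. Duet uses recurrent
# neural networks from the Magenta project; this first-order Markov chain
# only illustrates the call-and-response idea.

def learn_transitions(melody):
    """Count which MIDI pitch tends to follow which in the user's phrase."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def respond(melody, transitions, length=8):
    """Continue the phrase by sampling the learned transitions."""
    note = melody[-1]
    response = []
    for _ in range(length):
        note = random.choice(transitions.get(note, melody))
        response.append(note)
    return response

user_phrase = [60, 62, 64, 62, 60, 62, 64, 67]  # C D E D C D E G (MIDI pitches)
print(respond(user_phrase, learn_transitions(user_phrase)))
```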

Deep neural networks and machine learning are key components of artificial intelligence. They simulate the basic information processing of the brain and are finding their way into more and more products.
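At its core, each artificial "neuron" in such a network does something very simple: it weights its inputs, sums them up and passes the result through a nonlinearity. Stacked in layers, these units form a deep network. A minimal example of a single neuron:

```python
import numpy as np

# A single artificial neuron: weighted sum of its inputs plus a bias,
# passed through a nonlinear activation (here the logistic sigmoid).
def neuron(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, -1.2, 3.0])   # inputs ("signals from other neurons")
w = np.array([0.4, 0.1, -0.6])   # learned weights
b = 0.2                          # learned bias
print(neuron(x, w, b))           # activation between 0 and 1
```

Training a deep network means adjusting millions of such weights and biases until the stacked layers map inputs to the desired outputs.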

More information: aiexperiments.withgoogle.com

IBM releases new API for their Quantum Computing platform

IBM today released a new API for its Quantum Experience platform that enables developers to build interfaces between its existing five-qubit cloud-based quantum computer and classical computers, without needing a deep background in quantum physics.

They also released an upgraded simulator on the IBM Quantum Experience that can model circuits with up to 20 qubits. In the first half of 2017, IBM plans to release a full SDK on the IBM Quantum Experience for users to build simple quantum software programs.
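The announcement does not include full endpoint details, so the sketch below only illustrates the general shape of such an interface: a classical Python program submits a small circuit (a Bell-state preparation written in QASM) to a hosted quantum backend over HTTP. The URL, authentication and payload format are hypothetical placeholders, not IBM's actual API.

```python
import requests  # generic HTTP client; everything below is a placeholder

# A Bell-state circuit in QASM: Hadamard on q[0], then CNOT, then measure.
BELL_CIRCUIT = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
creg c[2];
h q[0];
cx q[0], q[1];
measure q -> c;
"""

# Hypothetical endpoint and payload -- IBM's real API will differ.
response = requests.post(
    "https://example.com/quantum/v1/jobs",
    headers={"Authorization": "Bearer YOUR-API-TOKEN"},
    json={"qasm": BELL_CIRCUIT, "backend": "5-qubit", "shots": 1024},
)
print(response.json())  # e.g. measurement counts like {"00": 512, "11": 512}
```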

For certain problems, the power of quantum computing is orders of magnitude beyond traditional processing, and as it becomes more accessible to developers we will see huge improvements in software fields like AI, language processing, VR and much more.
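To get a feeling for why the 20-qubit simulator mentioned above is already a serious engineering feat: a classical simulator must store all 2^n complex amplitudes of an n-qubit state, so its memory requirement doubles with every added qubit. A quick back-of-the-envelope calculation:

```python
# Memory needed to hold the full state vector of an n-qubit system on a
# classical machine, assuming 16 bytes per complex amplitude.
def state_vector_gib(n_qubits):
    return (2 ** n_qubits) * 16 / 2**30  # GiB

for n in (20, 30, 40):
    print(f"{n} qubits: {2**n:,} amplitudes, {state_vector_gib(n):,.3f} GiB")
```

Twenty qubits still fit comfortably in memory, but every additional ten qubits multiply the requirement by roughly a thousand, so somewhere around 50 qubits no classical machine can keep up, which is exactly where real quantum hardware becomes interesting.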

More info: IBM News room

New Boston Dynamics wheeled robot unveiled

A robot called "Handle", built by the Google-owned company Boston Dynamics, was unveiled in a leaked investor presentation. It has wheels, is self-balancing and can carry heavy loads. The video also covers a broad range of Boston Dynamics robots, so it gives a nice overview.

Robots will support us in many, even unexpected, tasks and are one of the most expressive ways to bring machine intelligence into our lives.

Nike ID – augmented reality in-store configurator

A Nike store in Paris is showing a very nicely executed augmented reality installation that lets customers design their own personalized pair of shoes. The UX seems straightforward and is very similar to the online configurator. So if you are near the Avenue des Champs-Élysées in Paris, give it a try!

More information: uploadvr.com

Artificial intelligence-powered hearing aid


New AI filter algorithms can greatly improve conventional hearing aids by selecting and amplifying specific sounds. The benefit: a single voice can be separated out of a crowd, for example. Besides people with hearing loss, voice recognition in smartphones, conversations in noisy environments and many other applications could also benefit from this technology.
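The common technical core behind such systems is time-frequency masking: the signal is transformed into a spectrogram, a mask (in real products predicted by a neural network) keeps the bins dominated by the target voice, and the masked spectrogram is transformed back into audio. A runnable sketch with a hand-made mask standing in for the network's output:

```python
import numpy as np
from scipy.signal import stft, istft

# Time-frequency masking, the common core of AI speech-separation systems.
# In a real hearing aid a neural network predicts the mask; here a dummy
# mask stands in so the whole pipeline is runnable.

fs = 16_000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)          # stand-in "speech" tone
noise = 0.5 * np.sin(2 * np.pi * 3000 * t)   # stand-in background noise
mixture = voice + noise

freqs, _, spec = stft(mixture, fs=fs, nperseg=512)

# A network would output one mask value per time-frequency bin; we fake a
# binary mask that keeps everything below 1 kHz, where our "voice" lives.
mask = (freqs < 1000).astype(float)[:, None]

_, separated = istft(spec * mask, fs=fs, nperseg=512)
print("noise power before:", np.mean(noise ** 2))
print("noise power after: ", np.mean((separated[:fs] - voice) ** 2))
```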

More information and audio examples: spectrum.ieee.org

RadarCat gives computers a sense of touch

With the help of a new system from Scotland’s University of St Andrews, a computer or smartphone may soon be able to tell the difference between an apple and an orange, or an empty glass and a full one, just by touching it. The system draws from a database of objects and materials it’s been taught to recognize, and could be used to sort items in warehouses or recycling centers, for self-serve checkouts in stores or to display the names of objects in another language.
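Under the hood this is a classic supervised-classification setup: radar returns become feature vectors that are matched against labeled training examples. The sketch below shows that pattern with scikit-learn and entirely made-up "radar signatures"; RadarCat's real features come from Google's Soli radar chip.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Object recognition from radar signatures, sketched as a standard
# classification problem. All feature values below are made up.

rng = np.random.default_rng(0)
labels = ["apple", "orange", "empty glass", "full glass"]

# Pretend each material has a characteristic 8-dim radar signature,
# observed 20 times each with sensor noise.
prototypes = rng.normal(size=(len(labels), 8))
X = np.vstack([p + 0.1 * rng.normal(size=(20, 8)) for p in prototypes])
y = np.repeat(labels, 20)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A new radar reading close to the "full glass" prototype:
reading = prototypes[3] + 0.1 * rng.normal(size=8)
print(clf.predict([reading])[0])  # -> "full glass"
```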

Read more: theverge.com

Neural network behind Google Translate is (maybe) learning a language of its own


The neural network behind the Google Translate service is suspected of creating a common representation of meaning that is independent of language. Even for the Google engineers it seems hard to understand what exactly is going on, but the system is translating with measurably higher quality than ever before. The network was able to translate from Spanish to Portuguese by itself, without guidance, after it was trained to translate from Portuguese to English and from English to Spanish.
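The trick behind this multilingual setup, as described in the linked research post, is surprisingly small: a single model is trained on several language pairs at once, with an artificial token prepended to each source sentence naming the desired target language. Zero-shot translation then just means requesting a combination the model never saw during training. The example sentences below are our own:

```python
# The multilingual NMT trick: one model for all language pairs, with an
# artificial token in front of each source sentence naming the desired
# target language. Training data (sentence pairs) looks like this:
training_pairs = [
    ("<2en> Como você está?", "How are you?"),   # Portuguese -> English
    ("<2es> How are you?",    "¿Cómo estás?"),   # English -> Spanish
]

# After training, the same model can be asked for a direction it never
# saw as an explicit pair -- zero-shot translation:
zero_shot_input = "<2es> Como você está?"        # Portuguese -> Spanish
```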


More information: research.googleblog.com