Google unveils Daydream 2.0 featuring Chrome VR, YouTube VR and more

One of the major updates slated for later this year is Daydream 2.0 (codename Euphrates), announced by Google during a VR- and AR-focused keynote on day 2 of I/O 2017. Alongside the software update, a standalone VR headset is being developed in partnership with Qualcomm; it will feature ‘WorldSense’ tracking tech as well as the latest Snapdragon 835 processor. It will also include two wide-angle cameras and motion sensors to detect movement, and will most likely ship with a Daydream controller.

Users will be able to use Chrome in Daydream to browse the web while in virtual reality, access WebVR content with full Chrome Sync capabilities, and take a screenshot of, capture or cast any virtual experience to a Chromecast-equipped TV. Separately, Google is also bringing augmented reality features to Chrome on Tango-supported phones. Development will also become much easier with Instant Preview, which lets developers make changes on a computer and see them reflected on a VR headset within seconds.

The new system will be available on all current Daydream devices later this year, including the Galaxy S8 and S8+ and LG’s upcoming flagship device.

Shenzhen: The Silicon Valley of Hardware

Years before any new shiny piece of tech arrives in the West, people in Shenzhen are already bored of it and on the next train. You might not fully realise it, but Shenzhen is truly the Tomorrowland of today’s world.

The proximity to an enormous infrastructure of hardware manufacturers combined with a lot of engineering talent and creativity makes Shenzhen the place to be if you want a glimpse of the future.

This special Wired documentary is just over an hour long, but those who have not seen it yet will not regret the time spent. It gives a fascinating insight into how and where most of the world’s computer hardware is designed and built. If you want to know what’s coming next, the compass points squarely in one direction: Shenzhen.

The next evolution of NFC: electromagnetic emissions sensing

Engineers at Disney Research have already proven that it is possible to accurately identify devices by sensing their electromagnetic emissions.

Now a team of scientists at Carnegie Mellon University’s Future Interfaces Group has created a prototype smartphone fitted with this electromagnetic-sensing capability that can identify and communicate with other devices. While there are still some hurdles to overcome, this is already looking like a promising piece of technology!

If this becomes ubiquitous, then all you need is a smartphone and you have your digital fingerprint of the future; whole new areas for creative development will suddenly open up!
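
To make the idea concrete, here is a deliberately simplified TypeScript sketch of what identification by EM signature boils down to: matching a sensed emission spectrum against a library of known device fingerprints. The real prototypes use trained classifiers over much richer spectral features; the names, data and threshold below are hypothetical.

```typescript
// Illustrative sketch only: match a sensed EM spectrum against known
// device fingerprints via cosine similarity. All data is made up.
type Fingerprint = { device: string; spectrum: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function identifyDevice(sensed: number[], library: Fingerprint[]): string {
  let best = { device: "unknown", score: -Infinity };
  for (const fp of library) {
    const score = cosineSimilarity(sensed, fp.spectrum);
    if (score > best.score) best = { device: fp.device, score };
  }
  // Require a minimum similarity before trusting the match.
  return best.score > 0.9 ? best.device : "unknown";
}

// Hypothetical usage: three-bin spectra standing in for real signatures.
const library: Fingerprint[] = [
  { device: "laptop", spectrum: [0.9, 0.2, 0.1] },
  { device: "doorknob", spectrum: [0.1, 0.8, 0.3] },
];
console.log(identifyDevice([0.85, 0.25, 0.12], library)); // "laptop"
```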


WebVR: Mozilla’s VR for the browser

Mozilla, maker of the popular Firefox browser, now offers immersive room-scale VR through a web browser, with no downloads or installs. Enter WebVR and A-Frame. That’s right: apps on the platform can run in cheap smartphone headsets as well as more powerful ones such as the Oculus Rift and HTC Vive. Both the JavaScript API platform and the HTML framework are open source and require no linear algebra or programming languages like C++ to develop with.
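
For a sense of how low the barrier really is, here is a minimal sketch of an A-Frame scene built from script rather than markup. It assumes only that the open-source A-Frame library (aframe.io) is already loaded on the page; the colours and positions are illustrative.

```typescript
// Minimal A-Frame scene constructed via the DOM. A-Frame registers
// custom elements (a-scene, a-box, ...), so createElement is enough.
const scene = document.createElement("a-scene");

// A box one metre in front of the viewer, at roughly eye height.
const box = document.createElement("a-box");
box.setAttribute("position", "0 1.6 -1");
box.setAttribute("color", "#4CC3D9");
scene.appendChild(box);

// A flat ground plane and a sky colour complete the room.
const plane = document.createElement("a-plane");
plane.setAttribute("rotation", "-90 0 0");
plane.setAttribute("width", "4");
plane.setAttribute("height", "4");
plane.setAttribute("color", "#7BC8A4");
scene.appendChild(plane);

const sky = document.createElement("a-sky");
sky.setAttribute("color", "#ECECEC");
scene.appendChild(sky);

// Appending the scene is all it takes; A-Frame handles rendering,
// the VR entry button and headset tracking.
document.body.appendChild(scene);
```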

“Sharing is as fast and simple as sharing a web page, and it’s open to anybody,” said Sean White, senior vice president of emerging technologies at Mozilla. The new platform is expected to grow considerably over the next five years, providing new VR experiences in the fields of education, creative expression and product development.


Adidas and Carbon Launch First Tailored 3D-Printed Sneakers

Adidas has teamed up with 3D-printing startup Carbon to mass-produce its latest sneaker, the Futurecraft 4D. While 3D printers are generally not designed for manufacturing at scale and lack the production-grade elastomers needed for demanding athletic footwear, Carbon’s rapid product development process enabled Adidas to iterate over 50 different lattices for the midsole before landing on the current design.

This partnership exemplifies how new technologies and materials are paving the way for custom, high-performance products that meet the unique needs of each customer.

Nadia – a chatbot with emotional intelligence

Nadia was developed for the Australian government to improve its National Disability Insurance Scheme, a service for people with disabilities. The bot helps users find information and makes it accessible in a more human way: it can read emotions through the webcam and react to them subtly. Like AI in general, this kind of emotional intelligence (EI) gets better the more it is used.
The technology behind it comes from the company Soul Machines, and the voice is Cate Blanchett’s.

Deep neural networks and machine learning are key building blocks of artificial intelligence. They simulate the brain’s basic information processing and are being used in more and more products.
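
As a toy illustration of that idea, the sketch below wires up a single artificial neuron and stacks three of them into a minimal two-layer network. The weights are made-up placeholders, not a trained model.

```typescript
// One artificial "neuron": like its biological counterpart, it sums
// weighted inputs and fires through a non-linearity (here, a sigmoid).
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

function neuron(inputs: number[], weights: number[], bias: number): number {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
  return sigmoid(sum);
}

// Two hidden neurons feeding one output neuron: a minimal network.
// Deep networks stack thousands of these in many layers.
function tinyNetwork(inputs: number[]): number {
  const h1 = neuron(inputs, [0.5, -0.6], 0.1);
  const h2 = neuron(inputs, [-0.3, 0.8], 0.0);
  return neuron([h1, h2], [1.2, -0.7], 0.2);
}

console.log(tinyNetwork([1.0, 0.5])); // a value between 0 and 1
```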

More information: www.soulmachines.com
Via: thenextweb.com

Real-time facial projection mapping

This collaboration between YABAMBI, the Ishikawa Watanabe Lab at the University of Tokyo, TOKYO, and Nobumichi Asai (WOW) shows a performance in which a 1,000 fps projection system, combined with a super-high-speed sensor, produces an outstanding sense of immersion.

This demo is a nice example of what can be achieved with the latest sensor and projection technology, and the approach could be adapted to many other applications.

The Globe of Economic Complexity

The Globe of Economic Complexity is an interactive 3D map that visualises 15 trillion dollars of world trade. Created by Owen Cornec, who was a data visualisation fellow at Harvard University at the time, it presents an immense amount of data in such a way that any layman can comprehend it.

This is a stunning example of how creativity and technology can be combined to represent complex topics in ways that become more than the sum of their parts.

Source: The Globe of Economic Complexity

Meet Lorek, a new human-interaction robot from Brown University

Researchers at Brown University have created a robot that can deal with uncertainty in human requests. A person can ask the robot for an item and the robot will try to determine which item the user requested. If the robot is unsure, for example because there are multiple copies of the same item on the table, it will ask for confirmation of which specific one the user wants.
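
The underlying pattern can be sketched very simply: score each candidate item against the request, and ask a clarifying question whenever the top candidates are too close to call. The Brown system does this with proper probabilistic inference; the scores, margin and names below are hypothetical stand-ins.

```typescript
// Illustrative sketch: pick the best-scoring candidate outright, or ask
// a clarifying question when the top two are nearly tied.
type Candidate = { id: string; score: number };

function decide(candidates: Candidate[], margin = 0.2):
    { action: "pick" | "ask"; target: Candidate } {
  const ranked = [...candidates].sort((a, b) => b.score - a.score);
  const [best, runnerUp] = ranked;
  // Confident enough: hand over the best match without asking.
  if (!runnerUp || best.score - runnerUp.score >= margin) {
    return { action: "pick", target: best };
  }
  // Ambiguous (e.g. two identical bowls on the table): ask which one.
  return { action: "ask", target: best };
}

// Hypothetical request: "hand me the bowl", with two bowls in view.
const result = decide([
  { id: "bowl-left", score: 0.48 },
  { id: "bowl-right", score: 0.45 },
  { id: "mug", score: 0.07 },
]);
console.log(result.action); // "ask" — the two bowls are too close to call
```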

While this seems trivial, handling uncertainty is a big part of human interaction and a hard problem for computer scientists trying to develop human-robot interaction.

As for our industry, these kinds of advancements really start shining a light on distant possibilities for creating better, richer and more natural experiences between humans and robots.

Source: Wired