Bristol University scientists have presented a method for creating three-dimensional haptic shapes in mid-air using focused ultrasound. The approach applies the principle of acoustic radiation force, whereby the non-linear effects of sound produce forces on the skin strong enough to generate tactile sensations. It is a technology we hope will soon be available in conjunction with Microsoft HoloLens for a perfect holographic tactile experience.
This mid-air haptic feedback eliminates the need for any attachment of actuators or contact with physical devices. The user perceives a discernible haptic shape when the corresponding acoustic interference pattern is generated above a precisely controlled two-dimensional phased array of ultrasound transducers.
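The focusing behind this follows from simple geometry: if each transducer in the array is driven so its wave arrives at a chosen point in phase with all the others, acoustic pressure concentrates there. Here is a minimal sketch of that phase computation, with hypothetical parameter values (40 kHz elements, a 4x4 grid); it illustrates the general phased-array principle, not the Bristol team's actual algorithm:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
FREQUENCY = 40_000.0    # Hz, a common airborne ultrasound transducer frequency

def focus_phases(transducers, focal_point):
    """Compute a drive phase (radians) for each transducer so that all
    waves arrive at focal_point in phase, creating a pressure focus."""
    wavelength = SPEED_OF_SOUND / FREQUENCY
    drive = []
    for tx in transducers:
        d = math.dist(tx, focal_point)                 # path length in metres
        arrival = (2 * math.pi * d / wavelength) % (2 * math.pi)
        # Advance each element by its arrival phase so the waves align;
        # the exact sign convention depends on the driver hardware.
        drive.append((2 * math.pi - arrival) % (2 * math.pi))
    return drive

# Hypothetical 4x4 grid of transducers, 1 cm pitch, in the z = 0 plane
array = [(x * 0.01, y * 0.01, 0.0) for x in range(4) for y in range(4)]
phases = focus_phases(array, (0.015, 0.015, 0.10))  # focus 10 cm above centre
```

Sweeping the focal point over a surface, faster than the skin can resolve, is what lets a pattern of focal points read as a continuous shape.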
Project Tango is an exploration into giving mobile devices a human-scale understanding of space and motion.
The leading brand, if you will, in the coming consumer virtual reality space is Facebook’s Oculus Rift platform. Oculus by itself is straight-up old-school virtual reality. But a special project with Leap Motion, a high-fidelity, in-the-air gestures product already on the market, adds reality to Oculus. By bolting a Leap Motion device to the front of an Oculus Rift headset and mapping hand gestures into a virtual reality scene, users can see their own hands in the virtual space (of course, software can make them gorilla hands, robot hands, you name it), and those hands can manipulate 3D virtual objects in the simulation.
This is the Project Tango idea inside out. Instead of the real world being duplicated to create a hybrid real-virtual environment, the environment is fully simulated and the user is duplicated, or at least parts of the user are.
Edible Packaging, Heads-Up Movement and Haptic Technology—just a few items from our annual list of 100 Things to Watch for the year ahead.
It’s a wide-ranging compilation that reflects developments surfacing across sectors including technology, television, food and spirits, retail, health care and the arts. The list also includes new types of goods or businesses, new behaviors and ideas with the potential to ladder up to bigger trends.
The pre-release C Wear augmented reality glasses from Penny are now available for order, for delivery later in 2014.
Meta, the augmented reality technology company, has captured the attention of the gadget world with the launch of the Meta Pro, the US$3,000 headset that aims to bridge the gap between fully immersive virtual reality tools such as the Oculus Rift and (relatively) more subtle wearable devices such as Google Glass. Meta’s CEO and founder Meron Gribetz shows how the glasses can be used in place of traditional CAD software to design a 3D printed object using hand gestures.
“We are seeing technology-driven networks replacing bureaucratically-driven hierarchies,” says VC and futurist Fred Wilson, speaking on what to expect in the next ten years.
BeAnotherLab, an interdisciplinary group of students at Pompeu Fabra University in Barcelona, has relied on an early version of Oculus Rift as part of an ongoing research project called “The Machine To Be Another.” The concept is just what the name suggests. An early experiment let participants experience the creative process through someone else’s eyes, in real time. The latest undertaking is even wackier. It lets men and women swap bodies. (Note: The video contains nudity.)
Much like its counterparts, K-Glass is designed to offer users an everyday augmented reality (AR) experience. According to the developers, users will be able to walk up to a restaurant and have its name, menu, available tables and a 3D image of different food displayed in front of their eyes.
A point of difference that could distinguish the K-Glass technology from other head-mounted displays, and one emphasized by the researchers, is the approach used to generate the augmented reality experience. Rather than relying on techniques such as facial recognition, motion tracking, barcodes and QR codes to establish and deliver augmented reality, as other head-mounted displays do, K-Glass is designed to replicate the process our brains use to make sense of our surroundings.
This all revolves around an AR processor based on the Visual Attention Model (VAM), which reproduces the human brain’s ability to separate relevant from irrelevant visual data. When we process visual data, we use sets of neurons that, though connected, work independently on different stages of the decision-making. One set of neurons completes part of the process and relays the information on to the next set, before a final set of decider neurons determines what data is required and what can be discarded.
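That staged hand-off is easier to see in code. The toy pipeline below (entirely hypothetical filters and saliency scores, not KAIST's actual VAM design) shows the pattern described: each stage independently discards data the next stage will never need, and a final decider stage picks the winner.

```python
def brightness_filter(regions):
    # Stage 1: drop regions too dark to plausibly hold useful content
    return [r for r in regions if r["brightness"] > 0.3]

def contrast_filter(regions):
    # Stage 2: drop low-contrast regions the decider need not consider
    return [r for r in regions if r["contrast"] > 0.5]

def decider(regions):
    # Final "decider" stage: choose the single most salient survivor
    return max(regions, key=lambda r: r["brightness"] * r["contrast"],
               default=None)

def attention_pipeline(regions):
    for stage in (brightness_filter, contrast_filter):
        regions = stage(regions)    # each stage shrinks the working set
    return decider(regions)

candidates = [
    {"id": 1, "brightness": 0.9, "contrast": 0.8},
    {"id": 2, "brightness": 0.2, "contrast": 0.9},  # culled by stage 1
    {"id": 3, "brightness": 0.8, "contrast": 0.4},  # culled by stage 2
]
best = attention_pipeline(candidates)  # best["id"] == 1
```

Because each stage touches a progressively smaller working set, less data moves through the system overall, which is the congestion and energy argument made below.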
In basing the artificial neural network on the brain’s central nervous system, the team says it was able to compartmentalize the processing of data, resulting in less congestion and significantly improved energy efficiency. According to its creators, K-Glass can deliver 1.22 TOPS (tera-operations per second) while running at 250 MHz, using 778 mW powered by a 1.2V supply. The team says this equates to a 76 percent improvement in power consumption over similar devices.
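As a quick sanity check on those reported figures, the throughput and power numbers work out to roughly 1.57 TOPS per watt:

```python
ops_per_second = 1.22e12   # 1.22 TOPS, as reported by the team
power_watts = 0.778        # 778 mW, as reported

# Efficiency in tera-operations per second per watt
efficiency_tops_per_watt = ops_per_second / power_watts / 1e12
# ~1.57 TOPS/W
```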
“Our processor can work for long hours without sacrificing K-Glass’s high performance, [making it] an ideal mobile gadget or wearable computer, which users can wear for almost the whole day,” says Hoi-Jun Yoo, Professor of Electrical Engineering at KAIST.