“We are seeing technology-driven networks replacing bureaucratically driven hierarchies,” says VC and futurist Fred Wilson, speaking on what to expect in the next ten years.
BeAnotherLab, an interdisciplinary group of students at Pompeu Fabra University in Barcelona, has relied on an early version of Oculus Rift as part of an ongoing research project called “The Machine To Be Another.” The concept is just what the name suggests. An early experiment let participants experience the creative process through someone else’s eyes, in real time. The latest undertaking is even wackier: it lets men and women swap bodies. (Note: The video contains nudity.)
Much like its counterparts, K-Glass is designed to offer users an everyday augmented reality (AR) experience. According to the developers, users will be able to walk up to a restaurant and have its name, menu, available tables and a 3D image of different food displayed in front of their eyes.
A point of difference that could distinguish the K-Glass technology from other head-mounted displays, and one emphasized by the researchers, is the approach used to generate the augmented reality experience. Rather than relying on techniques such as facial recognition, motion tracking, barcodes, and QR codes to build and deliver the augmented view, as other head-mounted displays do, K-Glass is designed to replicate the process our brains use to make sense of our surroundings.
This all revolves around an AR processor based on the Visual Attention Model (VAM), which reproduces the ability of the human brain to separate relevant visual data from irrelevant. When we process visual data, we use sets of neurons that, though connected, work independently on different stages of the decision-making. One set of neurons completes part of the process and relays the information to the next set, until ultimately a set of decider neurons determines what data is required and what can be done away with.
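The staged, filter-as-you-go processing described above can be sketched as a pipeline in which each stage passes only the data it deems relevant on to the next, so later, more expensive stages handle less data. This is a minimal illustration only; the stage functions, scores, and thresholds below are hypothetical, not KAIST’s actual design:

```python
# Minimal sketch of a staged visual-attention pipeline: each stage keeps
# only the regions it scores as salient and hands them to the next stage.
# All region data, scores, and thresholds here are illustrative.

def stage(regions, score, threshold):
    """Keep only regions whose saliency score passes this stage's threshold."""
    return [r for r in regions if score(r) >= threshold]

# Hypothetical image regions: (label, contrast, motion)
regions = [
    ("sky",        0.1, 0.0),
    ("sign",       0.9, 0.1),
    ("pedestrian", 0.7, 0.8),
    ("wall",       0.2, 0.0),
]

# Stage 1: a cheap contrast check discards obviously irrelevant regions.
survivors = stage(regions, score=lambda r: r[1], threshold=0.5)
# Stage 2: a motion check narrows the field further.
survivors = stage(survivors, score=lambda r: r[2], threshold=0.05)
# "Decider" stage: only what remains is passed on to full recognition.
print([r[0] for r in survivors])  # → ['sign', 'pedestrian']
```

Because each stage discards data before the next begins, the costly final stage runs on a fraction of the input, which is the intuition behind the energy savings the team reports.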
In basing the artificial neural network on the brain’s central nervous system, the team says it was able to compartmentalize the processing of data, resulting in less congestion and significantly improved energy efficiency. According to its creators, K-Glass can deliver 1.22 TOPS (tera-operations per second) while running at 250 MHz, consuming 778 mW from a 1.2 V supply. The team says this equates to a 76 percent improvement in power consumption over similar devices.
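As a back-of-the-envelope check on the quoted figures, 1.22 TOPS at 778 mW works out to roughly 1.57 TOPS per watt, and at a 250 MHz clock the chip would be completing on the order of 4,880 operations per cycle:

```python
# Back-of-the-envelope arithmetic on the quoted K-Glass figures.
ops_per_second = 1.22e12   # 1.22 TOPS
power_watts = 0.778        # 778 mW
clock_hz = 250e6           # 250 MHz

tops_per_watt = ops_per_second / 1e12 / power_watts
ops_per_cycle = ops_per_second / clock_hz

print(f"{tops_per_watt:.2f} TOPS/W")      # → 1.57 TOPS/W
print(f"{ops_per_cycle:.0f} ops/cycle")   # → 4880 ops/cycle
```

The high operations-per-cycle figure reflects the massively parallel design of the attention processor rather than a fast clock, which is consistent with the low power draw.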
“Our processor can work for long hours without sacrificing K-Glass’s high performance, an ideal mobile gadget or wearable computer, which users can wear for almost the whole day,” says Hoi-Jun Yoo, Professor of Electrical Engineering at KAIST.
The Q-Warrior is the latest version of BAE’s helmet-mounted display technology based on its Q-Sight range of display systems. Now undergoing field testing by the US military, it looks a bit like something an Apache helicopter pilot might wear, but that’s about as far as the similarity goes. Instead of controlling FLIR cameras and look-and-shoot weapon pods, the Q-Warrior is intended to give foot soldiers and special forces “heads-up, eyes out, finger on the trigger” situational awareness, friend-or-foe identification, and the ability to coordinate a small unit even when away from their vehicles.
The Q-Warrior is built around a large eye-projector screen that combines low power demand, low fatigue, and fast day/night transition with a collimated, high-luminance, high-resolution, see-through color display. It is also designed to tolerate large helmet movements while keeping the display overlay registered on the real world.
BAE says that the Q-Warrior will provide soldiers with their own portable command, control, and communications system in 3D with exact target designation and charting. With the Q-Warrior, a soldier will be able to see the location of friendly warplanes, including their speed, altitude, and payload, as well as being able to designate targets. The display will also show friendly and enemy forces as symbols overlaid on the real-world view, along with navigational waypoints and related data, and visual feeds from drones and other platforms.
MyndPlay is a UK-based biotechnology and media company and the creator of the MyndPlay mind-controlled video platform, which allows viewers to control a video’s narrative, story, and direction using their brainwaves and emotions.
MakeVR is a 3D modeling system for 3D printing, built around a natural two-handed interface. MakeVR features Collaborate3D™, a collaborative mode that allows up to five local or remote users to model together in real time within the same virtual environment. https://www.kickstarter.com/projects/89577853/makevr-3d-modeling-and-printing-for-everyone