The MYO natural user interface armband lets you use the electrical activity in your muscles to wirelessly control your computer, phone, and other digital technologies.
The main difference between Visual SyncAR and other AR technologies is that it plays content back in sync with what’s happening on the bigger screen, and in the video below you can see how it might be used to show animation alongside a music video.
The idea sounds interesting, even if NTT’s early ideas for applications aren’t the most exciting — it suggests using the technology in digital signage to show special English-language content on the cellphones of Japan’s foreign visitors.
Metaio, the Germany-based augmented reality company, announced a deal with ST-Ericsson under which the latter will integrate a specialized AR processor into the next generation of its mobile chipsets.
Metaio says its AREngine processor will dramatically increase the speed and precision of augmented reality tasks on mobile phones, expanding what they can be used for while also consuming less power. The notion of an augmented reality processor integrated into a mobile device certainly distinguishes itself from the downloadable AR apps that have become popular over the last year or so, which are limited in what they can do outside of a singular, often gimmicky, task.
The Meta prototype headset consists of an Epson Moverio BT-100 with a low-latency 3D camera mounted on top. Meta isn’t just using off-the-shelf Moverio headsets, either. The company has inked a deal with Epson to collaborate on augmented reality technologies.
Epson’s existing headset runs for up to six hours, though that’s using a wired remote control unit with a battery pack. To improve on this, Meta and Epson are looking to replace the LCD screens in the existing Moverio with OLED panels from providers such as MicroOLED; that should bring improvements in both visibility and power consumption.
According to the initial promo video, the new headset is being positioned as an ideal accessory for the web-obsessed social media user. News articles can be browsed by sweeping through, and then grabbing, preview bubbles floating in mid-air; webpages can be overlaid on elements of the real world; and a physical “thumbs-up” motion can “Like” a Facebook post.
New features of the Google Glass project have been unveiled, with the company encouraging “creative individuals” to pitch in ideas. A video posted to YouTube shows voice-activated commands such as “take a picture,” as well as people having Skype-like video chats on the displays, which appear just in front of the eyes. The headset also offers sat nav, and the ability to record film, translate words, and pull up pictures when prompted.
You can download it for iPad 2/3 and iPhone 4+ and try it yourself – https://itunes.apple.com/us/app/ar-panoramas-3d-augmented/id583318495
Big Data and Machine Learning
All of the context that is derived from the Internet of Things will generate huge amounts of data (so-called Big Data), and using techniques such as Machine Learning to sift and learn from that data will enable technology to do much more on our behalf and actually begin to anticipate our needs. I’m sure that’ll be fun; maybe one day it will even be useful rather than bloody frustrating.
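At its simplest, that kind of anticipation can be sketched as a frequency model over logged events. The toy below is purely illustrative (every context and action name is invented, and no real IoT product or API is assumed): it counts which action a user most often takes in a given context and guesses that action next time.

```python
from collections import Counter, defaultdict

# Hypothetical event log: (context, action) pairs harvested from connected
# devices. All names here are invented for illustration.
event_log = [
    ("weekday_morning", "brew_coffee"),
    ("weekday_morning", "brew_coffee"),
    ("weekday_morning", "read_news"),
    ("weekend_morning", "read_news"),
    ("weekday_evening", "dim_lights"),
    ("weekday_evening", "play_music"),
    ("weekday_evening", "dim_lights"),
]

# "Learn" by counting how often each action occurs in each context.
model = defaultdict(Counter)
for context, action in event_log:
    model[context][action] += 1

def anticipate(context):
    """Return the most frequent action seen in a context, or None if unseen."""
    actions = model.get(context)
    if not actions:
        return None
    return actions.most_common(1)[0][0]

print(anticipate("weekday_morning"))  # brew_coffee
print(anticipate("weekday_evening"))  # dim_lights
```

Real systems would of course use far richer context and statistical models, but the principle is the same: observed behaviour becomes a prediction about what you will want next.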
Ray discusses his new role at Google, how his research interests connect with his latest book How To Create A Mind, and how technology will advance to produce a “cybernetic friend”.
“The project we plan to do is focused on natural language understanding,” said Kurzweil. “We want to give computers the ability to understand the language that they’re reading.”