Wednesday 21 April 2010

Should we send robots that look like us into space?


Which is exactly what NASA is planning to do. It's called Robonaut 2. For now it will be an assistant to human astronauts on the International Space Station, helping with simple tasks like handling tools. At this stage, Robonaut 2 is built from pre-programmed behaviors - automatic tasks. But in the next stage it will be given the option of making some decisions about its actions.

Now, it makes sense to build a humanoid robot to work with astronauts, because we prefer to interact with a humanoid rather than with a 26-arm robot. We simply prefer the human look to the non-human look - it's very well explained in this Wired article. There is still the issue of the uncanny valley: robots that look very close to, but not exactly like, live humans trigger our repulsion, perhaps because we have higher cognitive expectations of human-like robots. However, the folks at NASA thought about this problem and made the robot look like it came straight out of Star Wars, and we all know that EVERYONE loves Star Wars.


A more interesting question is whether they are going to send Robonauts into deep space, or even to Mars, without a human crew. It might not be rational from a functional perspective, because a 26-arm octopus robot would be able to complete more tasks at any given time than a Robonaut. Hell, it will be much cheaper to build fewer robots rather than more, so one octopus robot might be a much better deal than 10 Robonauts. Still, maybe it would be better to distribute different tasks across a series of neural networks instead of giving them all to one network, so that from the separate interactions we would get some interesting individual differences between the robots, instead of having just one hyper-robo-organism.

At the end of the day, it might only be a question of software and neural network design. Everything will be connected to one neural network, but tasks will be distributed to smaller, specialised networks. There will be a goal like 'maximize the collection of data from any environment, analyze it and adapt accordingly to maximize the survival of the mother-ship'. The mother-ship will actually be one robot, just composed of many different robots for different tasks. And both Robonaut fans and octopus robot fans will be happy :)
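
Purely as a thought experiment, here is a minimal Python sketch of that 'one shared goal, many specialised sub-robots' idea - all class and method names are invented for illustration, not any actual NASA design:

```python
class SubRobot:
    """One specialised module: its own small controller for one kind of task."""
    def __init__(self, name, task):
        self.name = name
        self.task = task  # e.g. "drill samples", "handle tools"

    def run(self, environment):
        # Placeholder for a task-specific controller or small neural network.
        return {"robot": self.name, "task": self.task,
                "data": environment.get(self.task, [])}


class MotherShip:
    """The shared goal lives here: send out sub-robots, collect and pool their data."""
    def __init__(self, sub_robots):
        self.sub_robots = sub_robots
        self.collected = []

    def explore(self, environment):
        for robot in self.sub_robots:
            self.collected.append(robot.run(environment))
        return self.collected


ship = MotherShip([SubRobot("octopus-1", "drill samples"),
                   SubRobot("robonaut-1", "handle tools")])
print(ship.explore({"drill samples": ["rock A"], "handle tools": ["wrench"]}))
```

Each sub-robot keeps its own little controller, while the mother-ship only cares about the pooled data - which is roughly the compromise between the one-octopus camp and the ten-Robonauts camp.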

[image credit: NASA]

Wednesday 14 April 2010

What would my dream tablet look like?

I wake up to the subtle ambient sound played from my tablet. It's different every day. The tablet monitors my body movements during the night, learns how long my REM/NREM sleep stages last, and wakes me up at the optimal time (Sleep Cycle App). When I get up, my tablet has already downloaded articles and news that are relevant to my research and personal interests (Mendeley or Papers), plus blogs, feeds and info that might be interesting for me (YourVersion, or rather some intelligent RSS system that learns my information-sucking habits). During breakfast I scan through the available info and choose which topics I want to explore more, saving them for later (Instapaper) and marking articles that are particularly interesting. On the way to work I get my e-mails, check my calendar for daily goals, and look at social networking sites (Facebook, Twitter, Buzz, MySpace, LinkedIn, YouTube), which are all merged into one app. I quickly and silently dictate some short replies (using something similar to Google Speech Recognition on Android).
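
To give a flavour of how that wake-up trick could work, here is a tiny hypothetical sketch - not how Sleep Cycle actually works, just the general idea: log overnight movement, then ring the alarm at the most restless moment (a rough proxy for light sleep) inside a wake-up window.

```python
from datetime import datetime, timedelta

def pick_wake_time(movement_log, latest_wake, window_minutes=30):
    """movement_log: list of (timestamp, movement_level) sampled overnight.
    Pick the most restless moment inside the window ending at latest_wake."""
    window_start = latest_wake - timedelta(minutes=window_minutes)
    in_window = [(t, m) for t, m in movement_log if window_start <= t <= latest_wake]
    if not in_window:
        return latest_wake  # no data: fall back to the hard deadline
    return max(in_window, key=lambda tm: tm[1])[0]

# Example: wake me by 7:00, preferring the lightest-sleep moment after 6:30.
latest = datetime(2010, 4, 14, 7, 0)
log = [(latest - timedelta(minutes=m), level)
       for m, level in [(50, 0.1), (25, 0.2), (12, 0.8), (5, 0.3)]]
print(pick_wake_time(log, latest))  # -> 2010-04-14 06:48:00
```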

When I get to work, longer journal and magazine articles are already downloaded to my device and integrated with my reference database (Mendeley), with keywords and tags extracted and a visual representation of word frequency (Wordle). Reading is fully tactile, with the ability to make notes and comments on the article text (Skim). If I have an idea, bookmark or snapshot I want to keep, I switch between applications and connect any info to my notebook database in the cloud (Evernote). The screen is easy on the eyes: non-reflective, low-power and with good contrast (next-generation E-Ink). If you look closely at the device, this tablet has a 64GB drive, 5-7 days on a single battery charge, WiFi, 3G, GPS, low weight, a 3D microphone, an accelerometer, a headphone socket and a front camera for smooth video conferences (Skype). On my way back from work I listen to free music from an app that maps my musical preferences and habits (Last.fm) and lets me listen to everything for free (Spotify).
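
Just to illustrate the word-frequency part (the bit Wordle turns into a picture), here is a minimal sketch of pulling the most frequent keywords out of an article; the stop-word list and example text are purely illustrative.

```python
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "on", "for"}

def top_keywords(text, n=10):
    """Return the n most frequent non-stop-words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

article = ("Robonaut will assist astronauts on the space station. "
           "The Robonaut handles tools while astronauts run experiments.")
print(top_keywords(article, 5))
# -> [('robonaut', 2), ('astronauts', 2), ('will', 1), ...]
```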

Consuming media, browsing, creating, communicating - everything is effortless and intuitive with my dream tablet, with multi-touch, Text 2.0 with eye-tracking, voice recognition and algorithms that constantly learn and map my behavior to create a better interface for me.

And what is certain - my dream tablet is closer than you think ;-)

Thursday 8 April 2010

Skinput, Text 2.0 and Atari Pink Floyd


You sit on the sofa, relax and flick your finger - immediately the projector in front of you turns on, and tiny remote-control icons are displayed on the skin of your arm. You tap the icons to switch channels; another tap on the skin and you run some applications in the background to get e-mails and news. Then you just flick your fingers and a nice table is displayed across your entire shoulder, showing the tunes in your music library or a list of movies. You 'click' your skin where the title is displayed...

This is Skinput - a new technology that turns your skin into a control surface. Researchers at Carnegie Mellon, in collaboration with Microsoft Research, designed a sensor that registers the vibrations and sound conducted by your skin. The sensor will also include a simple LED-based projector that displays icons, titles, or whatever images you want on your skin, allowing you to control any device around you. In the simplest form, you will be able to give commands to your devices using nothing but simple finger taps, without displaying anything on your skin at all. And mind you - this is Chris Harrison's PhD project. Neat.
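
The core trick - deciding which spot on the arm was tapped from the vibration signal - boils down to a small classification problem. Here is a toy sketch of that idea (the real Skinput system uses far richer bio-acoustic features and a proper machine-learning classifier; everything below is invented for illustration):

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Crude feature vector: energy of the vibration signal in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

class TapClassifier:
    """Toy nearest-centroid classifier: one centroid per tap location on the arm."""
    def __init__(self):
        self.centroids = {}  # location label -> mean feature vector

    def train(self, examples):
        # examples: {"forearm": [signal, ...], "wrist": [signal, ...], ...}
        for label, signals in examples.items():
            feats = np.array([band_energies(s) for s in signals])
            self.centroids[label] = feats.mean(axis=0)

    def predict(self, signal):
        feats = band_energies(signal)
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(feats - self.centroids[label]))
```

Train it on a few taps per location, and predict() maps each new tap to the closest known spot - which is then enough to fire the matching command (switch channel, open e-mail, and so on).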

Next - tablets! The media's favourite topic for the last couple of months; my technology search feeds are 90% filled with iPad reviews. But one thing came through as potentially very interesting - an eye-tracking tablet with Text 2.0. The idea is that a tiny camera on the front of your tablet registers your eye movements. You will be able to focus on a word and see its definition, but Text 2.0 is something more - it will know when you are skimming the text and will fade out irrelevant words for you. It will learn your reading habits, with the possibility of giving you feedback. Most importantly, it opens up a completely new way of controlling applications just by LOOKING at particular points on the screen, and vast possibilities for visual perception research. Above - an Apple patent for eye-tracking - it's coming ;-)
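
For a rough feel of what sits underneath such a system, here is a tiny hypothetical sketch: map a gaze point to the word under it, and guess skimming from how short the fixations are. The layout data, thresholds and function names are all made up for illustration.

```python
# Word layout: (word, bounding box as x, y, width, height in screen pixels).
WORDS = [("Robonaut", (40, 100, 90, 20)),
         ("tablet",   (140, 100, 60, 20)),
         ("Skinput",  (210, 100, 70, 20))]

def word_at(gx, gy, words=WORDS):
    """Return the word whose bounding box contains the gaze point, if any."""
    for word, (x, y, w, h) in words:
        if x <= gx <= x + w and y <= gy <= y + h:
            return word
    return None

def is_skimming(fixations, threshold_ms=120):
    """Guess that the reader is skimming if the average fixation is short."""
    if not fixations:
        return False
    return sum(f["duration_ms"] for f in fixations) / len(fixations) < threshold_ms

fixations = [{"x": 145, "y": 110, "duration_ms": 95},
             {"x": 215, "y": 108, "duration_ms": 80}]
print(word_at(145, 110))       # -> 'tablet' (look at it, get its definition)
print(is_skimming(fixations))  # -> True (fade out the filler words)
```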

Finally, some music: check out this album by Brad Smith - he made the entire Pink Floyd album 'Dark Side of the Moon' in a chiptune, Atari-sound-like version. Especially listen to side two - awesomeness in 8-bit clothing - it made my day ;-)