
Friday, September 30, 2011

Prototype remote control is a twisted channel-changer

The Leaf Grip Remote Controller is an experimental device that users twist or bend to control their TV






Why change channels by clicking on buttons, when you could do the same thing by twisting your remote? Japan's Murata Manufacturing Company obviously sees advantages in this approach and has created a prototype dubbed the "Leaf Grip Remote Controller" to showcase the idea. Flexing the battery-less device not only changes TV channels, but it also switches inputs, controls the volume, and turns the power on and off.

When a material generates electricity in response to a change in temperature, the phenomenon is known as the pyroelectric effect. This quality can be beneficial in some applications, as the mere touch of a finger can generate a current. The Murata researchers, however, went out of their way to keep this effect out of their remote. That's because the device incorporates twin flexible piezoelectric films, which generate a current when subjected to mechanical stress. Such films are typically also pyroelectric, and the temperature-induced currents get in the way of clearly detecting mechanical stresses - such as being twisted, flexed or shaken.


While Murata isn't disclosing how it eliminated the pyroelectric effect in its experimental remote, the company is at least showing us how it's used. Twisting the remote slowly changes channels, while twisting it rapidly switches inputs. Bending it, on the other hand, turns the volume up or down, while holding it by one end and shaking it turns the TV on or off.

One of the films detects bending and the other detects twisting, while a flexible photovoltaic cell sandwiched between the two transparent films uses ambient light to power the device.
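To make the control scheme concrete, here is a minimal sketch of how readings from the two films might be mapped to TV commands. The thresholds, data structure and function names are illustrative assumptions; Murata has not published how its firmware actually classifies the gestures.

```python
from dataclasses import dataclass

@dataclass
class FilmReading:
    amplitude: float  # signed peak voltage from the piezoelectric film
    rate: float       # how quickly the film was deformed

def classify_gesture(twist: FilmReading, bend: FilmReading, shaken: bool):
    """Map the two film readings to a TV command (hypothetical thresholds)."""
    if shaken:                                     # shaking toggles power
        return "POWER_TOGGLE"
    if abs(twist.amplitude) > 0.2:
        if twist.rate > 5.0:                       # rapid twist switches inputs
            return "SWITCH_INPUT"
        return "CHANNEL_UP" if twist.amplitude > 0 else "CHANNEL_DOWN"
    if abs(bend.amplitude) > 0.2:                  # bending adjusts volume
        return "VOLUME_UP" if bend.amplitude > 0 else "VOLUME_DOWN"
    return None

# Example: a slow clockwise twist steps the channel up
print(classify_gesture(FilmReading(0.5, 1.0), FilmReading(0.0, 0.0), False))
```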

It brings Queen's University's Paperphone to mind, an experimental thin-film mobile phone whose menu is navigated by bending the entire device. Whether either device will ever take off with consumers is questionable, but the technology is certainly fascinating.

Saturday, September 17, 2011

Dialing with Your Thoughts

Think of a number: Numbers oscillate on a screen at different frequencies—an EEG headband picks up on these signals to enable mobile phone input using thought control.
Credit: University of California, San Diego


A new brain-control interface lets users make calls by thinking of the number—research that could prove useful for the severely disabled and beyond.
Researchers in California have created a way to place a call on a cell phone using just your thoughts. Their new brain-computer interface is almost 100 percent accurate for most people after only a brief training period.
The system was developed by Tzyy-Ping Jung, a researcher at the Swartz Center for Computational Neuroscience at the University of California, San Diego, and colleagues. Besides acting as an ultraportable aid for severely disabled people, the system might one day have broader uses, he says. For example, it could create the ultimate hands-free experience for cell-phone users, or be used to detect when drivers or air-traffic controllers are getting drowsy by sensing lapses in concentration.
Like many other such interfaces, Jung's system relies on electroencephalogram (EEG) electrodes on the scalp to analyze electrical activity in the brain. An EEG headband is hooked up to a Bluetooth module that wirelessly sends the signals to a Nokia N73 cell phone, which uses algorithms to process the signals.
Participants were trained on the system via a novel visual feedback method. They were shown images on a computer screen that flashed on and off almost imperceptibly at different speeds. These oscillations can be detected in a part of the brain called the midline occipital region. Jung and his colleagues exploited this by displaying a keypad on a large screen with each number flashing at a slightly different frequency: "1" flashed at 9 hertz, "2" at 9.25 hertz, and so on. Jung says these frequencies can be detected through the EEG, making it possible to tell which number the subject is looking at.
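The decoding step boils down to finding which flicker frequency dominates the EEG recorded over the visual cortex. Below is a minimal sketch of that idea; the sampling rate, window length and exact frequency assignments are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

FS = 256                      # assumed EEG sampling rate in Hz
DIGITS = "1234567890"
# "1" flickers at 9 Hz, "2" at 9.25 Hz, and so on (0.25 Hz steps, as described above)
DIGIT_FREQS = {d: 9.0 + 0.25 * i for i, d in enumerate(DIGITS)}

def decode_digit(eeg_window: np.ndarray) -> str:
    """Pick the digit whose flicker frequency carries the most spectral power."""
    windowed = eeg_window * np.hanning(len(eeg_window))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)

    def power_at(f: float) -> float:
        return spectrum[np.argmin(np.abs(freqs - f))]

    return max(DIGIT_FREQS, key=lambda d: power_at(DIGIT_FREQS[d]))

# Example: a synthetic 4-second EEG window dominated by a 9.25 Hz oscillation
t = np.arange(4 * FS) / FS
fake_eeg = np.sin(2 * np.pi * 9.25 * t) + 0.3 * np.random.randn(len(t))
print(decode_digit(fake_eeg))   # expected to print "2"
```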
"From our experience, anyone can do it. Some people have a higher accuracy than others," says Jung, who himself can only reach around 85 percent accuracy. But in an experiment published in the Journal of Neural Engineering, 10 subjects were asked to input a 10-digit phone number, and seven of them achieved 100 percent accuracy.
In theory, the approach could be used to help severely disabled people communicate, says Jung. But he believes the technology doesn't have to be limited to such applications. "I want to target larger populations," he says.
"It's interesting work," says Rajeev Raizada, a cognitive neuroscientist at Dartmouth College who published work last year on a similar concept called the Neurophone. "People have used this sort of visually evoked response before, but the notion of making it small, cheap, and portable for a cell phone is attractive."
The Neurophone used a brain signal known as the P300. This signal is triggered by a range of different stimuli and is used by other brain-control interfaces to gauge when something has caught a person's attention. But this typically involves a longer training period.
However, Eric Leuthardt, director of the Center for Innovation and Neuroscience Technology at Washington University, is not convinced. "Reducing the size of the processors to a cell phone is a natural step," he says. He says the kind of visually evoked response used in Jung's research has been around for years, but it usually requires a large visual stimulus, which a small cell-phone display is unlikely to provide.

The Invisible iPhone


Point and click: The “imaginary phone” determines which iPhone app a person wants to use by matching his or her finger position to the position of the app on the screen.
Credit: Hasso Plattner Institute

A new interface lets you keep your phone in your pocket and use apps or answer calls by tapping your hand.
Over time, using your smart-phone touch screen becomes second nature, to the point where you can even do some tasks without looking. Researchers in Germany are now working on a system that would let you perform such actions without even holding the phone—instead you'd tap your palm, and the movements would be interpreted by an "imaginary phone" system that would relay the request to your actual phone.
The concept relies on a depth-sensitive camera to pick up the tapping and sliding interactions on a palm,  software to analyze the video, and a wireless radio to send the instructions back to the iPhone. Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, says the imaginary phone prototype "serves as a shortcut that frees users from the necessity to retrieve the actual physical device."
Baudisch and his team envision someone doing dishes when his smart phone rings. Instead of quickly drying his hands and fumbling to answer, the imaginary phone lets him simply slide a finger across his palm to answer it remotely.
The imaginary phone project, developed by Baudisch and his team, which includes Hasso Plattner Institute students Sean Gustafson and Christian Holz, is reminiscent of a gesture-based interface called SixthSense developed by Pattie Maes and Pranav Mistry of MIT, but it differs in a couple of significant ways. First, there are no new gestures to learn—the invisible phone concept simply transfers the iPhone screen onto a hand. Second, there's no feedback, unlike SixthSense, which uses a projector to provide an interface on any surface. Lack of visual feedback limits the imaginary phone, but it isn't intended to completely replace the device, just to make certain interactions more convenient.
Last year, Baudisch and Gustafson developed an interface in which a wearable camera captures gestures that a person makes in the air and translates them to drawings on a screen.



For the current project, the researchers used a depth camera similar to the one used in Microsoft's Kinect for Xbox, but bulkier and positioned on a tripod. (Ultimately, a smaller, wearable depth camera could be used.) The camera "subtracts" the background and tracks the finger position on the palm. It works well in various lighting conditions, including direct sunlight. Software interprets finger positions and movements and correlates them to the positions of icons on a person's iPhone. A Wi-Fi radio transmits these movements to the phone.
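The final correlation step is essentially a lookup: a fingertip position on the palm, normalized to the palm's extent, is mapped onto the phone's icon grid. The sketch below illustrates the idea; the 4 x 5 grid, the app layout and the coordinate convention are assumptions made for the example, not details of the researchers' software.

```python
# Toy sketch: translating a normalized fingertip position on the palm
# into a home-screen icon. The grid layout below is an illustrative
# assumption, not data from the study.

APP_GRID = [
    ["Phone",     "Mail",       "Safari",   "Music"],
    ["Maps",      "Photos",     "Camera",   "Clock"],
    ["Notes",     "Weather",    "Stocks",   "Settings"],
    ["Calendar",  "Calculator", "Compass",  "Voice Memos"],
    ["App Store", "iTunes",     "Messages", "Contacts"],
]

def app_at(x: float, y: float) -> str:
    """x and y are fingertip coordinates on the palm, normalized to [0, 1)."""
    cols, rows = len(APP_GRID[0]), len(APP_GRID)
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return APP_GRID[row][col]

# A tap near the top-left corner of the palm maps to the Phone app
print(app_at(0.1, 0.05))   # -> "Phone"
```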
In a study that has been submitted to the User Interface Software and Technology conference in October, the researchers found that participants could accurately recall the position of about two-thirds of their iPhone apps on a blank phone and with similar accuracy on their palm. The position of apps used more frequently was recalled with up to 80 percent accuracy.


Finger mouse: A depth camera picks up finger position and subtracts the background images to correctly interpret interactions.
Credit: Hasso Plattner Institute
"It's a little bit like learning to touch type on a keyboard, but without any formal system or the benefit of the feel of the keys," says Daniel Vogel, postdoctoral fellow at the University of Waterloo. Vogel wasn't involved in the research. He notes that "it's possible that voice control could serve the same purpose, but the imaginary approach would work in noisy locations and is much more subtle than announcing, 'iPhone, open my e-mail.' "


Touch Vision Interface: smartphone-based touch interaction on multiple screens

Utilizing Augmented Reality technology, the Touch Vision Interface enables seamless touch interaction on multiple separate screens via a smartphone's camera


Developed by the Teehan+Lax Labs team, the Touch Vision Interface is an interesting idea that looks at using a smartphone's camera to manipulate other screens such as LCD monitors, laptops or TVs. The phone displays a live camera feed of the secondary screen, and touches on that feed are translated into coordinates on the remote display and sent to it in real time.
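At its core, a system like this has to map a touch point on the phone's camera view onto a point on the remote display, which is typically done with a perspective (homography) transform once the screen's corners have been located in the camera image. Teehan+Lax hasn't published its implementation, so the following OpenCV sketch, with made-up corner coordinates and a 1920 x 1080 target display, is only one plausible way the mapping could work.

```python
import numpy as np
import cv2

# Corners of the remote display as detected in the phone's camera frame
# (these pixel values are invented for illustration).
corners_in_camera = np.float32([[120, 80], [520, 95], [510, 390], [110, 370]])

# The same corners in the remote display's own coordinate system (1920 x 1080).
corners_on_screen = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

# Homography mapping camera-view coordinates to screen coordinates.
H = cv2.getPerspectiveTransform(corners_in_camera, corners_on_screen)

def touch_to_screen(x: float, y: float):
    """Map a touch on the phone's video feed to a point on the remote display."""
    point = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)[0][0]
    return float(point[0]), float(point[1])

# A tap near the middle of the screen's image lands near the display's center.
print(touch_to_screen(315, 235))
```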
While the ability to interact with multiple devices without interruption looks pretty impressive, it might in fact be difficult to implement this system in everyday life. The developers behind the Touch Vision Interface point to surface discovery and pairing as challenges to be overcome in making the technology viable.
According to the Teehan+Lax Labs team, possible future applications for the Touch Vision Interface include crowd-sourcing with the use of billboard polls, group participation on large installations, or a wall of digital billboards that users seamlessly paint across with a single gesture. Another example would be the enhancing of the collaborative creative process, such as in music production.



New tech makes four-camera 3D shooting possible

A scientist uses STAN to calibrate a four-camera 3D TV system
(Photo: KUK Filmproduktion)


When it comes to producing 3D TV content, the more cameras that are used to simultaneously record one shot, the better. At least two cameras (or one camera with two lenses) are necessary to provide the depth information needed to produce the left- and right-eye images for conventional 3D, but according to researchers at Germany's Fraunhofer Institute for Telecommunications, at least four cameras will be needed if we ever want to achieve glasses-free 3D TV. Calibrating that many cameras to one another could ordinarily take days, however ... which is why Fraunhofer has developed a system that reportedly cuts that time down to 30 to 60 minutes.
The STAN assistance system ensures that the optical axes, focal lengths and focal points are the same for each camera. That way, as the viewer moves their head, the combined shots will all look like one three-dimensional shot.
Objects that are visible in all four shots are identified using a feature detector function. Using these objects as references, STAN then proceeds to calibrate the cameras so that they match one another. Due to slight imperfections in lenses, however, some discrepancies could still remain. In those cases, the system can do things such as electronically zooming in on one shot, to compensate for the flaws. This can be done in real time, so STAN could conceivably even be used for live broadcasts.
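The approach described here is classic multi-view alignment: find features visible in every shot, match them between cameras, and estimate the small shift, rotation and zoom corrections that bring the views into agreement. Fraunhofer hasn't detailed STAN's algorithms, so the OpenCV sketch below only illustrates that general feature-matching idea, with ORB features and a similarity transform standing in for whatever the real system uses.

```python
import cv2
import numpy as np

def estimate_correction(ref_img, cam_img):
    """Estimate a similarity transform (shift, rotation, zoom) that aligns one
    camera's view with a reference camera, using features visible in both."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(ref_img, None)
    kp_cam, des_cam = orb.detectAndCompute(cam_img, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cam), key=lambda m: m.distance)[:200]

    pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    pts_cam = np.float32([kp_cam[m.trainIdx].pt for m in matches])

    # A partial affine (similarity) transform captures the small electronic
    # zoom, rotation and shift that could be applied to cam_img to match
    # the reference view.
    M, _ = cv2.estimateAffinePartial2D(pts_cam, pts_ref, method=cv2.RANSAC)
    return M
```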
The Fraunhofer team is now in the process of developing a video encoding system, to compress all the data into a small enough form that it could be transmitted using the conventional broadcasting infrastructure. The four-camera setup is already in use by members of the MUSCADE project, which is a consortium dedicated to advancing glasses-free 3D TV technology.