
Wednesday, November 30, 2011



The AirMouse wearable mouse

It’s no secret... Studies have shown that excessive mouse usage can cause repetitive stress injuries. Unfortunately for most of us, “excessive” can mean anything more than a few hours a day. Fortunately, however, there are alternative styles of mice out there designed to be easier on the hands and arms. One of the more interesting ones to come along in a while is the AirMouse, made by Canadian firm Deanmark Ltd. What makes it unique is the fact that you wear it like a glove.
Deanmark founders Mark Bajramovic and Oren Tessler met in university, where Mark learned first-hand (no pun intended) what it’s like to OD on mousing. “Half way through our first year, I developed a computer mouse related RSI (Repetitive Stress Injury) and lost the use of my right hand and arm for several weeks,” he tells us. “Numbness, pain, most things that you hear about with RSI’s, I had it.” Later that semester, Mark and Oren heard about an ergonomic mouse being marketed in Europe. While they thought that particular product wasn’t perfect, it got them thinking about designing their own. The AirMouse is the result.
The wireless mouse utilizes an optical laser, and can run for a week without recharging. According to the company website, the clinically-tested product works by aligning itself with the ligaments of your hand and wrist. This lets you keep your hand in a neutral position, and transmits more of your vector force than would be possible with a regular mouse. Not only does this make it easier on your hand, but it increases your mousing speed and accuracy as well. The mouse is also designed to remain inactive until your hand is placed in a neutral, flat position, so you can easily go back and forth between typing and mousing. Other ergonomic designs have strayed from the AirMouse’s style of traditional flat, one-dimensional mousing, but Mark and Oren’s market research indicated that consumers tend to reject such products.
The AirMouse should be available for purchase within the next 6 to 12 months, at a price of US$129.

EXOdesk: 40-inch multitouch desk set for CES debut



ExoPC has posted a video of its new 40-inch multi-touch desk on YouTube


Could multi-touch desks be the wave of the future? ExoPC thinks so - the company has posted a video of its new 40-inch multitouch desk, the EXOdesk, on YouTube, and plans to officially announce it at the Consumer Electronics Show at the beginning of January. The teaser video (below) doesn't offer a ton of information about the computer, but it does show off a widget hub in the corner of the desk that you can use to launch applications, plus the ability to pull down a timeline populated with news, tweets and other alerts from the top corner of the table. Both the widgets and the timeline can be casually swiped away when you're done with them, and the screen and the location of the widgets can be customized to suit your own needs. The EXOdesk also supports full-screen applications - the video shows off an app that instantly turns the desk into an electronic piano.
Multi-touch desk computers aren't really anything new. Samsung, for instance, recently announced the SUR40, a 40-inch, 1080p multitouch table running Microsoft's Surface software. Where the EXOdesk stands out, however, is in its price tag. While the SUR40 and other table computers are designed for businesses (and priced that way - the SUR40 costs US$8,400), the EXOdesk is priced at a comparatively modest $1,299, making it affordable for average consumers.
The Samsung SUR40 is expected to serve as a computer replacement; the EXOdesk, by contrast, appears to be something you would use as a replacement for a traditional desk and a supplement to your actual computer.
We'll certainly be keeping an eye out for the EXOdesk in January. If you have to get your hands on a table PC right now, Samsung has already opened up pre-orders for the SUR40.

Watch Live TV on your Airtel Mobile for Free

Airtel Live TV
Following our last post - Watch Top News Channels Live on your Airtel Mobile for Free - we have come up with new channels that you can stream live, anytime and anywhere, on your GPRS (EDGE)-enabled Airtel mobile at no cost. No subscription charges apply on Airtel. Other operators such as BSNL, Aircel and !dea will charge you for watching these channels, so in that case, please subscribe to a data plan from your service operator before watching live TV on your mobile:

·【STAR PLUS】
·【STAR ONE】
·【SONY TV】
·【SAB TV】
·【UTV BINDASS】
·【UTV MOVIES】
·【TLC】
·【ZOOM TV】
·【FASHION TV】
*Science & Technology*
·【DISCOVERY CHANNEL】
·【DISCOVERY SCIENCE】
·【DISCOVERY TURBO】
·【ANIMAL PLANET】
*Kids' Entertainment*
·【CARTOON NETWORK】
·【DISNEY ENGLISH】
·【DISNEY HINDI】
·【DISNEY XD HINDI】
*Music*
·【CHANNEL V】
·【MAA MUSIC】
·【KUBER MUSIC】
·【RAJ MUSIX】
·【RAJ MUSIX KANNADA】
*Religion / Faith*
·【GURBANI】
·【GOD TV】
·【DHARAM TV】
·【SRISANKARA TV】
*Regional Channels*
·【ETV】
·【ETV 2】
·【ETV URDU】
·【ETV BIHAR】
·【ETV BANGLA】
·【ETV ORIYA】
·【ETV MARATHI】
·【ETV RAJASTHAN】
·【RAJ TV】
·【RAJ DIGITAL PLUS】
·【STAR JALSA】
·【STAR MAJHA】
·【STAR PRAVAH】
·【STAR ANANDA】
·【STAR VIJAY】
·【JAYA TV】
·【JAYA PLUS】
·【TV9】
·【TV9 MUMBAI】
·【TV9 KANNADA】
·【NE TV】
·【NE HIFI】
·【NE BANGLA】
·【ASIA NET】
·【MAKKAL TV】
·【MAA TV】
·【TV1】
·【SVBC】
·【VASANTH TV】
·【SAKSHI TV】
·【FOCUS TV】
·【TIME TV】
·【KAIRALI TV】
·【INDIA VISION】
·【HAMAR TV】
·【HY TV】
·【VISSA TV】


*News*
·【Aaj Tak】
·【Star News】
·【IBN7】
·【CNN IBN】
·【India News】
·【India Tv】
·【CNBC TV18】
·【NDTV India】
·【NDTV 24X7】
·【Headlines Today】
·【TIMESNOW】
·【BBC World】
·【UTV Bloomberg】
·【NDTV Profit】

Friday, September 30, 2011

Prototype remote control is a twisted channel-changer

The Leaf Grip Remote Controller is an experimental device that users twist or bend to control their TV






Why change channels by clicking on buttons, when you could do the same thing by twisting your remote? Japan's Murata Manufacturing Company obviously sees advantages in this approach and has created a prototype dubbed the "Leaf Grip Remote Controller" to showcase the idea. Flexing the battery-less device not only changes TV channels, but it also switches inputs, controls the volume, and turns the power on and off.

When a material generates electricity in response to a change in temperature, the phenomenon is known as the pyroelectric effect. This quality can be beneficial in some applications, as the mere touch of a finger can generate a current. The Murata researchers, however, went out of their way to keep such an effect out of their remote. This is because the device incorporates twin flexible piezoelectric films, which generate a current when subjected to mechanical stress. Such films are typically also subject to the pyroelectric effect, however, which gets in the way of their ability to clearly detect mechanical stresses - such as being twisted, flexed or shaken.


While Murata isn't disclosing how it eliminated the pyroelectric effect in its experimental remote, the company is at least showing us how it's used. Twisting the remote slowly changes channels, while twisting it rapidly switches inputs. Bending it, on the other hand, turns the volume up or down, while holding it by one end and shaking it turns the TV on or off.
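The control scheme above amounts to a small decision table. The sketch below is a toy dispatcher in Python; the gesture names and the twist-rate threshold are my own illustrative assumptions, not Murata's firmware logic.

```python
def remote_action(gesture, rate_hz=0.0):
    """Map a sensed deformation of the remote to a TV command.

    The 2.0 Hz cutoff separating "slow" from "rapid" twisting is an
    assumed value for illustration only.
    """
    if gesture == "twist":
        return "switch input" if rate_hz > 2.0 else "change channel"
    if gesture == "bend_up":
        return "volume up"
    if gesture == "bend_down":
        return "volume down"
    if gesture == "shake":
        return "toggle power"
    return "ignore"

print(remote_action("twist", 0.5))   # slow twist → change channel
print(remote_action("twist", 3.0))   # rapid twist → switch input
print(remote_action("shake"))        # → toggle power
```

In a real device the "gesture" label would itself come from classifying the piezoelectric film signals, which is the part Murata has not disclosed.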

One of the films detects bending and the other detects twisting, while a flexible photovoltaic cell sandwiched between the two transparent films uses ambient light to power the device.

It brings Queen's University's Paperphone to mind - an experimental thin-film mobile phone whose menu is navigated by bending the entire device. Whether either it or the Leaf Grip Remote Controller will ever take off with consumers is questionable, but the technology is certainly fascinating.

Saturday, September 17, 2011

Dialing with Your Thoughts

Think of a number: Numbers oscillate on a screen at different frequencies—an EEG headband picks up on these signals to enable mobile phone input using thought control.
Credit: University of California, San Diego


A new brain-control interface lets users make calls by thinking of the number—research that could prove useful for the severely disabled and beyond.
Researchers in California have created a way to place a call on a cell phone using just your thoughts. Their new brain-computer interface is almost 100 percent accurate for most people after only a brief training period.
The system was developed by Tzyy-Ping Jung, a researcher at the Swartz Center for Computational Neuroscience at the University of California, San Diego, and colleagues. Besides acting as an ultraportable aid for severely disabled people, the system might one day have broader uses, he says. For example, it could create the ultimate hands-free experience for cell-phone users, or be used to detect when drivers or air-traffic controllers are getting drowsy by sensing lapses in concentration.
Like many other such interfaces, Jung's system relies on electroencephalogram (EEG) electrodes on the scalp to analyze electrical activity in the brain. An EEG headband is hooked up to a Bluetooth module that wirelessly sends the signals to a Nokia N73 cell phone, which uses algorithms to process the signals.
Participants were trained on the system via a novel visual feedback system. They were shown images on a computer screen that flashed on and off almost imperceptibly at different speeds. These oscillations can be detected in a part of the brain called the midline occipital area. Jung and his colleagues exploited this by displaying a keypad on a large screen with each number flashing at a slightly different frequency. For instance, "1" flashed at 9 hertz, "2" at 9.25 hertz, and so on. Jung says this frequency can be detected through the EEG, thus making it possible to tell which number the subject is looking at.
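The frequency-tagging idea lends itself to a small numerical sketch. In the Python toy below, the sampling rate, trial length and four-digit keypad are illustrative assumptions (only the 9 Hz / 9.25 Hz spacing comes from the article); it picks out which flicker frequency dominates a simulated EEG trace using the Goertzel algorithm.

```python
import math

# Assumed flicker frequencies for keypad digits "1".."4" (Hz),
# following the article's ~0.25 Hz spacing.
FREQS = {9.0: "1", 9.25: "2", 9.5: "3", 9.75: "4"}

def goertzel_power(samples, fs, freq):
    """Signal power at one frequency bin, via the Goertzel algorithm."""
    n = len(samples)
    w = 2 * math.pi * (freq * n / fs) / n
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode_digit(samples, fs):
    """Return the digit whose flicker frequency dominates the signal."""
    best = max(FREQS, key=lambda f: goertzel_power(samples, fs, f))
    return FREQS[best]

# Simulate 4 s of "EEG" at 256 Hz: a strong 9.5 Hz response (the subject
# is looking at "3") plus a weaker 9.0 Hz distractor.
fs, dur = 256, 4
t = [i / fs for i in range(fs * dur)]
sig = [math.sin(2 * math.pi * 9.5 * x) + 0.3 * math.sin(2 * math.pi * 9.0 * x)
       for x in t]
print(decode_digit(sig, fs))  # → 3
```

Four seconds of data gives 0.25 Hz frequency resolution, which is exactly what is needed to separate keys spaced 0.25 Hz apart - one reason such systems require a short integration window per keypress.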
"From our experience, anyone can do it. Some people have a higher accuracy than others," says Jung, who himself can only reach around 85 percent accuracy. But in an experiment published in the Journal of Neural Engineering, 10 subjects were asked to input a 10-digit phone number, and seven of them achieved 100 percent accuracy.
In theory, the approach could be used to help severely disabled people communicate, says Jung. But he believes the technology doesn't have to be limited to such applications. "I want to target larger populations," he says.
"It's interesting work," says Rajeev Raizada, a cognitive neuroscientist at Dartmouth College who published work last year on a similar concept called the Neurophone. "People have used this sort of visually evoked response before, but the notion of making it small, cheap, and portable for a cell phone is attractive."
The Neurophone used a brain signal known as the P300. This signal is triggered by a range of different stimuli and is used by other brain-control interfaces to gauge when something has caught a person's attention. But this typically involves a longer training period.
However, Eric Leuthardt, director of the Center for Innovation and Neuroscience Technology at Washington University, is not convinced. "Reducing the size of the processors to a cell phone is a natural step," he says. He says the kind of visually evoked response used in Jung's research has been around for years, but it usually requires a large visual stimulus, which small cell phone displays are unlikely to elicit.

The Invisible iPhone


Point and click: The “imaginary phone” determines which iPhone app a person wants to use by matching his or her finger position to the position of the app on the screen.
Credit: Hasso Plattner Institute

A new interface lets you keep your phone in your pocket and use apps or answer calls by tapping your hand.
Over time, using your smart-phone touch screen becomes second nature, to the point where you can even do some tasks without looking. Researchers in Germany are now working on a system that would let you perform such actions without even holding the phone—instead you'd tap your palm, and the movements would be interpreted by an "imaginary phone" system that would relay the request to your actual phone.
The concept relies on a depth-sensitive camera to pick up the tapping and sliding interactions on a palm, software to analyze the video, and a wireless radio to send the instructions back to the iPhone. Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, says the imaginary phone prototype "serves as a shortcut that frees users from the necessity to retrieve the actual physical device."
Baudisch and his team envision someone doing dishes when his smart phone rings. Instead of quickly drying his hands and fumbling to answer, the imaginary phone lets him simply slide a finger across his palm to answer it remotely.
The imaginary phone project, developed by Baudisch and his team, which includes Hasso Plattner Institute students Sean Gustafson and Christian Holz, is reminiscent of a gesture-based interface called SixthSense developed by Pattie Maes and Pranav Mistry of MIT, but it differs in a couple of significant ways. First, there are no new gestures to learn—the invisible phone concept simply transfers the iPhone screen onto a hand. Second, there's no feedback, unlike SixthSense, which uses a projector to provide an interface on any surface. Lack of visual feedback limits the imaginary phone, but it isn't intended to completely replace the device, just to make certain interactions more convenient.
Last year, Baudisch and Gustafson developed an interface in which a wearable camera captures gestures that a person makes in the air and translates them to drawings on a screen.



For the current project, the researchers used a depth camera similar to the one used in Microsoft's Kinect for Xbox, but bulkier and positioned on a tripod. (Ultimately, a smaller, wearable depth camera could be used.) The camera "subtracts" the background and tracks the finger position on the palm. It works well in various lighting conditions, including direct sunlight. Software interprets finger positions and movements and correlates them with the positions of icons on a person's iPhone. A Wi-Fi radio transmits these movements to the phone.
In a study that has been submitted to the User Interface Software and Technology conference in October, the researchers found that participants could accurately recall the position of about two-thirds of their iPhone apps on a blank phone and with similar accuracy on their palm. The position of apps used more frequently was recalled with up to 80 percent accuracy.


Finger mouse: A depth camera picks up finger position and subtracts the background images to correctly interpret interactions.
Credit: Hasso Plattner Institute
"It's a little bit like learning to touch type on a keyboard, but without any formal system or the benefit of the feel of the keys," says Daniel Vogel, postdoctoral fellow at the University of Waterloo. Vogel wasn't involved in the research. He notes that "it's possible that voice control could serve the same purpose, but the imaginary approach would work in noisy locations and is much more subtle than announcing, 'iPhone, open my e-mail.' "


Touch Vision Interface: smartphone-based touch interaction on multiple screens

Utilizing Augmented Reality technology, the Touch Vision Interface enables seamless touch interaction on multiple separate screens via a smartphone's camera


Developed by the Teehan+Lax Labs team, the Touch Vision Interface is an interesting idea that looks at using a smartphone's camera to manipulate other screens such as LCD monitors, laptops or TVs. Using the onboard camera, the system would send touch input coordinates in real time from the smartphone's touchscreen to a video feed displayed on the secondary screen.
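One plausible reading of that coordinate relay, sketched under my own assumptions (Teehan+Lax have not published their implementation): the secondary screen has already been located in the phone's camera frame, and each touch on the phone display is re-expressed in the target screen's pixel space before being sent over.

```python
def relay_touch(touch, screen_rect_in_camera, screen_res):
    """Map a camera-frame touch point into target-screen pixels.

    touch: (x, y) where the user touched, in camera/display pixels.
    screen_rect_in_camera: (x0, y0, x1, y1) bounding box of the detected
        secondary screen within the camera frame (assumed axis-aligned).
    screen_res: (width, height) of the secondary screen in pixels.
    """
    tx, ty = touch
    x0, y0, x1, y1 = screen_rect_in_camera
    w, h = screen_res
    # Normalize within the detected rectangle, then scale to screen pixels.
    u = (tx - x0) / (x1 - x0)
    v = (ty - y0) / (y1 - y0)
    return (round(u * w), round(v * h))

# A touch at the center of a 1920x1080 monitor that occupies camera
# pixels (100, 50)..(500, 275):
print(relay_touch((300, 162.5), (100, 50, 500, 275), (1920, 1080)))  # → (960, 540)
```

A real system would need a full perspective transform (the screen is rarely axis-aligned in the camera view), which is part of why the developers call surface discovery and pairing the hard problems.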
While the ability to interact with multiple devices without interruption looks pretty impressive, it might in fact be difficult to implement this system in everyday life. The developers behind the Touch Vision Interface point to surface discovery and pairing as challenges to be overcome in making the technology viable.
According to the Teehan+Lax Labs team, possible future applications for the Touch Vision Interface include crowd-sourcing with the use of billboard polls, group participation on large installations, or a wall of digital billboards that users seamlessly paint across with a single gesture. Another example would be the enhancing of the collaborative creative process, such as in music production.
Take a look at the following video presenting the Touch Vision Interface:



New tech makes four-camera 3D shooting possible

A scientist uses STAN to calibrate a four-camera 3D TV system
(Photo: KUK Filmproduktion)


When it comes to producing 3D TV content, the more cameras that are used to simultaneously record one shot, the better. At least two cameras (or one camera with two lenses) are necessary to provide the depth information needed to produce the left- and right-eye images for conventional 3D, but according to researchers at Germany's Fraunhofer Institute for Telecommunications, at least four cameras will be needed if we ever want to achieve glasses-free 3D TV. Calibrating that many cameras to one another could ordinarily take days, however ... which is why Fraunhofer has developed a system that reportedly cuts that time down to 30 to 60 minutes.
The STAN assistance system ensures that the optical axes, focal lengths and focal points are the same for each camera. That way, as the viewer moves their head, the combined shots will all look like one three-dimensional shot.
Objects that are visible in all four shots are identified using a feature detector function. Using these objects as references, STAN then proceeds to calibrate the cameras so that they match one another. Due to slight imperfections in lenses, however, some discrepancies could still remain. In those cases, the system can do things such as electronically zooming in on one shot, to compensate for the flaws. This can be done in real time, so STAN could conceivably even be used for live broadcasts.
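The zoom-compensation step can be illustrated with a toy calculation (my simplification, not Fraunhofer's actual algorithm): if a reference object visible in all four shots appears at slightly different sizes in each, every camera's frame gets a digital zoom factor that equalizes the object's size against a chosen reference camera.

```python
def zoom_corrections(object_sizes_px, reference_cam=0):
    """Per-camera digital zoom factors equalizing an object's apparent size.

    object_sizes_px: measured size (in pixels) of the same reference
    object in each camera's shot. A factor > 1 means that camera's frame
    must be zoomed in to match the reference camera.
    """
    ref = object_sizes_px[reference_cam]
    return [ref / size for size in object_sizes_px]

# The object measures 200 px in camera 0 but is slightly off elsewhere
# due to lens imperfections:
print(zoom_corrections([200, 198, 202, 201]))
```

In practice the calibration also has to align optical axes and focal points, not just scale, so the real system solves a much richer geometric problem than this one-dimensional sketch.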
The Fraunhofer team is now in the process of developing a video encoding system, to compress all the data into a small enough form that it could be transmitted using the conventional broadcasting infrastructure. The four-camera setup is already in use by members of the MUSCADE project, which is a consortium dedicated to advancing glasses-free 3D TV technology.

Monday, August 22, 2011

Scientists Extract Images Directly From Brain


Brain reading
Pink Tentacle reports that researchers at Japan’s ATR Computational Neuroscience Laboratories have developed a system that can “reconstruct the images inside a person’s mind and display them on a computer monitor.”
According to the researchers, further development of the technology may soon make it possible to view other people’s dreams while they sleep.
The scientists were able to reconstruct various images viewed by a person by analyzing changes in their cerebral blood flow. Using a functional magnetic resonance imaging (fMRI) machine, the researchers first mapped the blood flow changes that occurred in the cerebral visual cortex as subjects viewed various images held in front of their eyes. Subjects were shown 400 random 10 x 10 pixel black-and-white images for a period of 12 seconds each. While the fMRI machine monitored the changes in brain activity, a computer crunched the data and learned to associate the various changes in brain activity with the different image designs.
Then, when the test subjects were shown a completely new set of images, such as the letters N-E-U-R-O-N, the system was able to reconstruct and display what the test subjects were viewing based solely on their brain activity.
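A scaled-down sketch can make that training-and-decoding loop concrete. The toy below is entirely my simulation, not ATR's model: it uses 4x4 binary images instead of 10x10, fakes "voxel" responses as random linear mixtures of the pixels, trains a correlation decoder on 400 random images, and then reconstructs a novel pattern it never saw.

```python
import random

random.seed(0)

N_PIX, N_VOX, N_TRAIN = 16, 200, 400
# Random "receptive fields": each simulated voxel mixes all pixels.
MIX = [[random.gauss(0, 1) for _ in range(N_PIX)] for _ in range(N_VOX)]

def respond(img):
    """Simulated fMRI pattern: a linear mixture of the pixels plus noise."""
    return [sum(m * p for m, p in zip(row, img)) + random.gauss(0, 0.1)
            for row in MIX]

# Training: show random images and correlate each (centered) pixel value
# with each (centered) voxel response - a stand-in for "learning to
# associate changes in brain activity with image designs".
train = [[random.randint(0, 1) for _ in range(N_PIX)] for _ in range(N_TRAIN)]
acts = [respond(img) for img in train]
a_mean = [sum(a[k] for a in acts) / N_TRAIN for k in range(N_VOX)]
W = [[sum((train[t][p] - 0.5) * (acts[t][k] - a_mean[k])
          for t in range(N_TRAIN))
      for k in range(N_VOX)] for p in range(N_PIX)]

def reconstruct(act):
    """Each pixel is 'on' if its learned correlation score is positive."""
    return [1 if sum(W[p][k] * (act[k] - a_mean[k])
                     for k in range(N_VOX)) > 0 else 0
            for p in range(N_PIX)]

# A novel 4x4 pattern never shown during training:
novel = [1, 1, 1, 1,
         0, 1, 1, 0,
         0, 1, 1, 0,
         0, 1, 1, 0]
decoded = reconstruct(respond(novel))
print(sum(d == n for d, n in zip(decoded, novel)), "of 16 pixels correct")
```

Because the decoder learns one weight vector per pixel rather than memorizing whole images, it generalizes to patterns outside the training set - the same property that let the ATR system reconstruct the unseen letters N-E-U-R-O-N.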
The researchers suggest a future version of this technology could be applied in the fields of art and design — particularly if it becomes possible to quickly and accurately access images existing inside an artist’s head. The technology might also lead to new treatments for conditions such as psychiatric disorders involving hallucinations, by providing doctors a direct window into the mind of the patient.
ATR chief researcher Yukiyasu Kamitani says, “This technology can also be applied to senses other than vision. In the future, it may also become possible to read feelings and complicated emotional states.”


Child Brain reading

Maybe in the future you could use this technology to understand why your kid is crying? ;-)
The research results appear in the December 11 issue of US science journal Neuron.

Friday, July 1, 2011

Advanced robotics - meet the real life C-3PO


It looks like a stripped-down version of Star Wars character C-3PO.
But this robot is science fact not fiction - and one of the most advanced in the world.
Ecci, as it has been named, is the first ever robot to have 'muscles' and 'tendons', as well as the 'bones' they help move - all made of a specially developed plastic.
And most advanced of all, it also has a brain with the ability to correct its mistakes - a trait previously only seen in humans.
World first: Ecci's creators say it is the most advanced robot ever created with tendons, muscles, bones, a brain and the visual capability of a human
Ecci looks like a stripped-down version of Star Wars character C-3PO seen here in a scene from the film with R2D2
Developed by a team of scientists at the University of Zurich, Ecci is short for Eccerobot - ecce being Latin for 'lo' or 'behold'.
The robot uses a series of electric motors to move the joints the tendons are connected to.
And a computer built into the brain of Ecci allows it to learn from its mistakes.


If, for example, a movement causes it to stumble or drop something, the information is studied and analysed to avoid making the same mistake next time.
The creation also has the same visual capacity as humans, despite having only one cyclops-style eye.
The scientists now hope their creation will usher in a whole new generation of robots - and could aid the development of artificial limbs.
Learning from its mistakes: The robot has a brain that allows it to analyse data if a movement causes it to stumble or drop something - thereby ensuring it does not happen again
Rolf Pfeifer, director of the laboratory for artificial intelligence at the university, said: 'It opens up a lot of possibilities but in particular it will help us to understand better how the human moving apparatus works - a complicated task.
'If we can make a robot hand operate like ours then it opens up all sorts of possibilities for artificial limbs. It would also mean a robot that moved like a person could take over some of the jobs done by people where human hands are needed.'
Scientists have worked on the multi-million pound project for three years, with funding provided partially by private enterprise alongside two million euros from EU funds.
The team now plan to present a more complete version of Ecci in two months' time.


Sparsh - A bonus invention by Pranav Mistry helps store data in your body


How often does your work suffer just because you don't have enough gigabytes left in your pen drive to hold and transfer data? This situation crops up for me all the time.
Transferring files and folders from one workstation to another can be a bit of an ordeal. Researcher Pranav Mistry of the Media Lab at the Massachusetts Institute of Technology has cut the knot, easing this pain by making it as simple as picking up an object from one place and putting it down in another, just as we do with physical things.
Pranav Mistry, who attracted global attention for his invention SixthSense and received a Popular Science 2009 Invention Award, has struck again with Sparsh. He has designed the system so that if a user wants to copy a data item from one device to another, all he has to do is touch the item on the screen of the source device - conceptually saving it in his body - and then touch the destination device to paste the copied content.
For instance, suppose you wish to call a long-lost friend and you find that person's phone number on your laptop. Normally you would read the number off the laptop and type it into your smartphone, but if both devices are running Sparsh, you just place a thumb on the phone number on your laptop's screen and then touch your smartphone's keypad. The system automatically recognizes that what you transferred is a phone number, and it then dials it for you. How cool is that?
How does it work? The first touch copies the phone number to a temporary file in a Dropbox or FTP account, and the second touch retrieves the data. This requires both devices to be running the software and the user to be signed in to their Dropbox or FTP account. It works for any type of data, be it a photo, an address or a link to a YouTube clip.
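The two-touch flow is easy to sketch. In the toy below, an in-memory dict stands in for the Dropbox/FTP account that Sparsh actually uses, and all the function names are illustrative, not from Mistry's implementation.

```python
CLOUD = {}  # stand-in for the user's cloud clipboard (Dropbox/FTP in Sparsh)

def touch_copy(user, item):
    """First touch: stash the item under the signed-in user's account."""
    CLOUD[user] = item

def touch_paste(user):
    """Second touch: retrieve whatever that user last copied."""
    return CLOUD.get(user)

def handle_paste(user, device):
    """Dispatch on the data type - e.g. a phone auto-dials a number."""
    item = touch_paste(user)
    if device == "phone" and item and item.replace("-", "").isdigit():
        return f"dialing {item}"
    return f"pasted {item!r}"

touch_copy("alice", "617-555-0123")    # touch the number on the laptop screen
print(handle_paste("alice", "phone"))  # touch the smartphone keypad
```

The key design point is that the user's account, not the device pair, is the channel - which is why both devices only need a network connection and a signed-in user, not any direct link to each other.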
At present Sparsh runs as an application on smartphones, tablets and other computers, although Mistry states that "the ideal home for Sparsh is to be built into an OS, so that it can provide the copy-paste feature across all applications". He says it is currently possible to incorporate this into Google's Android mobile operating system, and that his team has also implemented a browser-based version.
Among his prior works, Pranav has invented Mouseless, an invisible computer mouse; thirdEye, which lets multiple viewers see different things on a single screen; QUICKiES, intelligent sticky notes that can be searched, located, and used to send reminders and messages; inktuitive, an intuitive physical design workspace; a pen that can draw in 3D; invent, the design of a programming language for children; RoadRunner 2.01, a 3D car-racing game; TaPuMa, a public map that acts as a Google of the physical world; and a lot more.
Pranav also won the TR35 Young Innovator Award from Technology Review, and was recently named on the 2010 Creativity 50 list of the most influential and inspiring inventive personalities of 2010. He holds a Master in Media Arts and Sciences from MIT and a Master of Design from IIT Bombay, in addition to a Bachelor's degree in Computer Engineering from Gujarat University. His research interests include ubiquitous computing, gestural and tangible interaction, AI, augmented reality, machine vision, collective intelligence and robotics.

An Invisible Computer Mouse - Yet Another Invention by Pranav Mistry


Pranav Mistry, who earlier made headlines for his invention SixthSense and received a Popular Science 2009 Invention Award for it, has now invented yet another such device - and this time it's invisible: a mouse. Amusingly, its prototype costs just $20 to build.
Perpetual change in computer technology and the web has brought many evolutions - from room-sized CPUs to slim microprogrammed netbooks, from heavy, bulky monitors to thin LCDs, from hard disks of a few MBs to drives holding trillions of bytes - but through all of this, one thing has remained nearly unchanged and un-evolved: the mouse we move around to interact with the computer.
Mouseless is an invisible-computer-mouse project from the MIT Fluid Interfaces Group, headed by Pranav Mistry. It provides the familiar interaction of a physical mouse without actually needing any real mouse hardware - removing the requirement for a physical mouse altogether while preserving the intuitive interaction everyone is used to.
Mouseless consists of (1) an infrared (IR) laser beam and (2) an infrared camera, both embedded in the computer itself. The laser module is modified with a line cap and positioned so that it creates a plane of IR laser light just above the surface the computer sits on. The user cups their hand as if a physical mouse were present underneath, and the laser beam lights up the parts of the hand in contact with the surface. The IR camera detects these bright IR blobs using computer vision. Changes in the position and arrangement of the blobs are interpreted as mouse cursor movement and mouse clicks: as the user moves their hand, the on-screen cursor moves accordingly, and when the user taps their index finger, the size of the blob changes and the camera recognizes the intended mouse click.
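That blob-interpretation step can be sketched as a tiny state machine: successive blob detections become cursor deltas, and a sudden jump in blob size registers as a click. The logic and the click threshold below are my assumptions for illustration, not the actual Mouseless code.

```python
class MouselessTracker:
    CLICK_RATIO = 1.5  # assumed threshold: blob grew this much → finger tap

    def __init__(self):
        self.prev = None  # (x, y, area) of the last detected blob

    def update(self, x, y, area):
        """Process one frame's brightest blob; return (dx, dy, clicked)."""
        if self.prev is None:
            self.prev = (x, y, area)
            return (0, 0, False)
        px, py, parea = self.prev
        clicked = area > parea * self.CLICK_RATIO
        self.prev = (x, y, area)
        return (x - px, y - py, clicked)

t = MouselessTracker()
t.update(100, 200, 40)         # first frame: establish a baseline
print(t.update(110, 205, 42))  # hand moved → (10, 5, False)
print(t.update(110, 205, 90))  # index finger tapped → (0, 0, True)
```

A production system would smooth the deltas and debounce the click signal; this sketch only shows how position and area changes separate movement from tapping.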
However, the possibilities of this mouse go well beyond the capabilities of the modern mouse we use today. As Pranav Mistry states on the project website: "As we improve our computer vision algorithms, an extensive library of gestures could be implemented in addition to mouse movement and mouse clicks. Typical multitouch gestures, such as zooming in and out, as well as novel gestures, such as balling one's fist, are all possible. In addition, the use of multiple laser beams would allow for recognition of a wider range of free hand motions, enabling novel gestures that the hardware mouse cannot support."

Wireless Electricity Is Here (Seriously)



Wireless Electricity

Ryan Tseng, CEO of WiPower, says his system is cheaper and better than rival eCoupled's.
Ryan Tseng holds his wirelessly lit lightbulb 3 inches above its power source.


I'm standing next to a Croatian-born American genius in a half-empty office in Watertown, Massachusetts, and I'm about to be fried to a crisp. Or I'm about to witness the greatest advance in electrical science in a hundred years. Maybe both.
Either way, all I can think of is my electrician, Billy Sullivan. Sullivan has 11 tattoos and a voice marinated in Jack Daniels. During my recent home renovation, he roared at me when I got too close to his open electrical panel: "I'm the Juice Man!" he shouted. "Stay the hell away from my juice!"
He was right. Only gods mess with electrons. Only a fool would shoot them into the air. And yet, I'm in a conference room with a scientist who is going to let 120 volts fly out of the wall, on purpose.
"Don't worry," says the MIT assistant professor and 2008 MacArthur genius-grant winner, Marin Soljacic (pronounced SOLE-ya-cheech), who designed the box he's about to turn on. "You will be okay."
We both shift our gaze to an unplugged Toshiba television set sitting 5 feet away on a folding table. He's got to be kidding: There is no power cord attached to it. It's off. Dark. Silent. "You ready?" he asks.
If Soljacic is correct -- if his free-range electrons can power up this untethered TV from across a room -- he will have performed a feat of physics so subtle and so profound it could change the world. It could also make him a billionaire. I hold my breath and cover my crotch. Soljacic flips the switch.
Soljacic isn't the first man to try to power distant electronic devices by sending electrons through the air. He isn't even the first man from the Balkans to try. Most agree that Serbian inventor Nikola Tesla, who went on to father many of the inventions that define the modern electronic era, was the first to let electrons off their leash, in 1890.
Tesla based his wireless electricity idea on a concept known as electromagnetic induction, which was discovered by Michael Faraday in 1831 and holds that electric current flowing through one wire can induce current to flow in another wire nearby.
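Faraday's principle can be put in numbers: the voltage induced in a nearby coil is the mutual inductance times the rate of change of current in the driving coil. The values below are purely illustrative.

```python
def induced_emf(mutual_inductance_h, di_dt_amps_per_s):
    """EMF (volts) induced in the secondary coil: emf = -M * dI/dt.

    The minus sign is Lenz's law: the induced current opposes the change
    that created it.
    """
    return -mutual_inductance_h * di_dt_amps_per_s

# A 1 mH mutual inductance with the primary current ramping at 1000 A/s:
print(induced_emf(1e-3, 1000.0))  # about -1.0 V
```

This dependence on dI/dt is why all wireless-power schemes, from Tesla's towers to modern resonant systems, drive their coils with rapidly alternating rather than steady current.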
Few believed it could work. And to be fair to the doubters, it didn't, exactly. When Tesla first switched on his 200-foot-tall, 1,000,000-volt Colorado Springs tower, 130-foot-long bolts of electricity shot out of it, sparks leaped up at the toes of passersby, and the grass around the lab glowed blue. It was too much, too soon.
But strap on your rubber boots; Tesla's dream has come true. After more than 100 years of dashed hopes, several companies are coming to market with technologies that can safely transmit power through the air -- a breakthrough that portends the literal and figurative untethering of our electronic age. Until this development, after all, the phrase "mobile electronics" has been a lie: How portable is your laptop if it has to feed every four hours, like an embryo, through a cord? How mobile is your phone if it shuts down after too long away from a plug? And how flexible is your business if your production area can't shift because you can't move the ceiling lights?
The world is about to be cured of its attachment disorder.