Parthi's Cyber Page: 2013 Blogger Tricks

Monday, December 16, 2013

Wireless charging for iPhone

iQi Mobile is designed to let you take advantage of wireless charging using standard soft cases.


While wireless charging capability is built into the latest Nexus handsets, iPhone users looking to unplug are faced with the prospect of shelling out for a wireless-ready (and often bulky) case like the Powermat series. The iQi Mobile looks to slim down the equation with a wireless power solution that works with most iPhone soft cases.
As the name suggests, the iQi Mobile (pronounced "i-chee") uses the Qi standard and is compatible with the iPhone 5, 5C, and 5S and the iPod touch (5G). It consists of a 0.5 mm-thick, credit-card-like receiver unit and a flexible plug for the phone's Lightning connector, which allows it to be squeezed behind a standard soft case. From there it enables charging at a rate faster than USB 2.0 with any Qi-compatible charging pad, including the Koolpuck charger being bundled with the iQi Mobile offering.
iQi Mobile
Funds are being raised to bring the iQi Mobile to market via Indiegogo where the campaign has almost quadrupled its US$30,000 goal with 18 days still left to run.
The company says early backers of the campaign will receive their iQi Mobile perks before Christmas. A full retail launch is slated for 2014 with the iQi Mobile receiver to be priced at $35 and the receiver/Koolpuck charger bundle to cost $85.
The video pitch for the iQi Mobile is below.
Source: Indiegogo

Sunday, December 15, 2013

Attempt To Convert Prof Hawking’s Brainwaves Into Speech

An American scientist is to unveil details of work on the brain patterns of Prof Stephen Hawking which he says could help safeguard the physicist’s ability to communicate.
Prof Philip Low said he eventually hoped to allow Prof Hawking to “write” words with his brain as an alternative to his current speech system which interprets cheek muscle movements.
Prof Low said the innovation would avert the risk of locked-in syndrome.
Intel is working on an alternative.
Prof Hawking was diagnosed with motor neurone disease in 1963. In the 1980s he was able to use slight thumb movements to move a computer cursor to write sentences.
His condition later worsened and he had to switch to a system which detects movements in his right cheek through an infrared sensor attached to his glasses which measures changes in light.
Because the nerves in his face continue to deteriorate, his rate of speech has slowed to about one word a minute, prompting him to look for an alternative.
The fear is that Prof Hawking could ultimately lose the ability to communicate by body movement, leaving his brain effectively “locked in” his body.
In 2011, he allowed Prof Low to scan his brain using the iBrain device developed by the Silicon Valley-based start-up Neurovigil.
Prof Hawking will not attend the consciousness conference in his home town of Cambridge where Prof Low intends to discuss his findings, but a spokesman told the BBC: “Professor Hawking is always interested in supporting research into new technologies to help him communicate.”
Decoding brainwaves
The iBrain is a headset that records brain waves through EEG (electroencephalograph) readings – electrical activity recorded from the user’s scalp.
Prof Low said he had designed computer software which could analyse the data and detect high-frequency signals that had previously been thought lost because of the skull.
"An analogy would be that as you walk away from a concert hall where there's music from a range of instruments," he told the BBC. "As you go further away you will stop hearing high-frequency elements like the violin and viola, but still hear the trombone and the cello. Well, the further you are away from the brain, the more you lose the high-frequency patterns."
The iBrain device collects EEG data which it transfers to a computer
“What we have done is found them and teased them back using the algorithm so they can be used.”
Prof Low said that when Prof Hawking had thought about moving his limbs this had produced a signal which could be detected once his algorithm had been applied to the EEG data.
He said this could act as an “on-off switch” and produce speech if a bridge was built to a similar system already used by the cheek detection system.
Prof Low said further work needed to be done to see if his equipment could distinguish different types of thoughts – such as imagining moving a left hand and a right leg.
If it turns out that this is the case, he said, Prof Hawking could use different combinations to create different types of virtual gestures, speeding up the rate at which he could select words.
To establish whether this is the case, Prof Low plans trials with other patients in the US.
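The speed-up Prof Low describes comes from simple combinatorics: if the system can reliably distinguish n independent imagined movements, their on/off combinations yield 2**n distinct selections rather than n. A minimal illustrative sketch (the channel names are hypothetical, not from the article):

```python
from itertools import product

# Each imagined movement is a binary channel: detected (1) or not (0).
channels = ["left_hand", "right_leg", "right_hand"]

# Every on/off combination of the channels becomes a distinct virtual gesture.
gestures = {combo: f"gesture_{i}"
            for i, combo in enumerate(product([0, 1], repeat=len(channels)))}

print(len(gestures))  # → 8 distinct selections from just 3 binary channels
```

With three distinguishable thoughts a user could address eight options at once, which is why distinguishing even a few thought types matters so much for selection speed.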
Intel’s effort
The US chipmaker Intel announced, in January, that it had also started work to create a new communication system for Prof Hawking after he had asked the firm’s co-founder, Gordon Moore, if it could help him.
It is attempting to develop new 3D facial gesture recognition software to speed up the rate at which Prof Hawking can write.
“These gestures will control a new user interface that takes advantage of the multi-gesture vocabulary and advances in word prediction technologies,” a spokeswoman told the BBC.
“We are working closely with Professor Hawking to understand his needs and design the system accordingly.”
Intel began working with Prof Hawking after he wrote a letter to its co-founder Gordon Moore in 2011

Revolv home automation

Once you start automating your home with electronic locks, lights, switches, and other components, the number of apps on your phone can multiply very quickly. Worse, it seems like you should be able to easily link their behavior together yet can’t. 


Revolv aims to be the one system and app to rule all home automation devices
After all, if you want the lights to turn off when you leave the house, shouldn’t the thermostat also turn down? The Revolv home automation system and associated smart phone app aim to simplify things with one centralized control hub that promises easy setup, no additional support fees, and an evolving lineup of supported devices and features.
Using Revolv is designed to be simple. Once plugged into a central location for optimal Wi-Fi coverage, the unit automatically adds supported devices from the home network, while others need to be added manually via a walkthrough on the Revolv app.
The unit boasts seven wireless radios supporting ten different wireless protocols, with Insteon, Wi-Fi, and Z-Wave currently covered, and more including ZigBee to be rolled out later. What it doesn’t require is an Ethernet connection, creating an account, or monthly fees, with the system registering your phone simply by using the phone’s flash. Only iOS devices are currently supported, but Android, Windows Phone and others will follow.
The Revolv device is controlled through your iOS gadget, with Android support coming
Once devices are enabled, the fun begins. You can design different home "scenarios," such as coming home, vacation, relaxation, movie night, or bedtime, with rules being triggered by the phone’s proximity, motion sensors, or based on time. Currently the GeoSense proximity trigger is only enabled for one phone, though this will be updated early next year. However, multiple phones can control a Revolv system.
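A scenario system like the one described, where rules fire on proximity, motion, or time and then drive several devices at once, can be sketched as a small rule engine. This is a hypothetical illustration, not Revolv's actual software; all names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One home 'scenario': a trigger condition plus the actions it fires."""
    name: str
    trigger: object                              # callable: state -> bool
    actions: list = field(default_factory=list)  # callables acting on devices

class Hub:
    """Central controller that checks every rule against the home's state."""
    def __init__(self):
        self.rules = []
        self.log = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def evaluate(self, state):
        # On each state change (phone arrives, motion sensed, timer fires),
        # run the actions of every rule whose trigger matches.
        for rule in self.rules:
            if rule.trigger(state):
                for action in rule.actions:
                    self.log.append(action(state))

hub = Hub()
hub.add_rule(Rule(
    name="coming home",
    trigger=lambda s: s["phone_nearby"],   # GeoSense-style proximity trigger
    actions=[lambda s: "lights on", lambda s: "thermostat 21C"],
))
hub.evaluate({"phone_nearby": True})
print(hub.log)  # → ['lights on', 'thermostat 21C']
```

The appeal of a hub is visible even in this toy: one trigger fans out to many devices, so the lights and the thermostat react together instead of needing two separate apps.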
The company is also working on more complex conditional rules, as might apply in households where not everyone has a smartphone. Also planned is a feature that recognizes when people are still at home, despite the GeoSense-linked user leaving the house with their phone.
Revolv's app allows users to set their automation scenarios and triggers
Belkin WeMo light switches and electrical outlets, Philips Hue and Insteon lightbulbs, Kwikset and Yale locks, and wireless speakers from Sonos are just a few of the devices currently supported; a regularly updated full list can be found on Revolv's website.
The Revolv system is currently available for US$299.
If you’ve started to automate your home, what would you do with Revolv? Let us know in the comments.
Below is Revolv’s video pitching the “sexiness” of home automation.
Source: Revolv

Saturday, December 14, 2013

Create your own SixthSense device

Hi friends, you all know that Pranav Mistry developed SixthSense, a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.

He has released his SixthSense technology project to the world as open source.



HARDWARE: 

Camera

The camera is the key input device of the SixthSense system. The camera acts as a digital eye of the system. It basically captures the scene the user is looking at. The video stream captured by the camera is passed to mobile computing device which does the appropriate computer vision computation. The major functions of the camera can be listed as:
  • Captures user’s hand movements and gestures (used in recognition of user gestures)
  • Captures the scene in front and objects the user is interacting with (used in object recognition and tracking)
  • Takes a photo of the scene in front when the user performs a ‘framing’ gesture
  • Captures the scene of projected interface (used to correct the alignment, placement and look and feel of the projected interface components)

Projector

The projector is the key output device of the SixthSense system. The projector visually augments surfaces, walls and physical objects the user is interacting with by projecting digital information and graphical user interfaces. The mobile computing device provides the projector with the content to be projected. The projector unit used in the prototype runs on a rechargeable battery. The major functions of the projector can be listed as:
  • Projects graphical user interface of the selected application onto surfaces or walls in front
  • Augments the physical objects the user is interacting with by projecting just-in-time and related information from the Internet
Suggested Products: You can buy either laser (AAXA, Microvision) or LED (3M MPro110) projectors.

Mirror

The mirror reflects the projection coming out of the projector and thus helps in projecting onto the desired locations on walls or surfaces. The user can manually change the tilt of the mirror to change the location of the projection. For example, in an application where the user wants the projection to go on the ground instead of the surface in front, he can change the tilt of the mirror to redirect the projection. Thus, the mirror in SixthSense helps overcome the limitation of the projector's limited projection space.
Suggested Product: Any 1” × 1” first surface mirror

Microphone

The microphone is an optional component of SixthSense. It is required when using paper as a computing interface. When the user wants to use a sheet of paper as an interactive surface, he or she clips the microphone to the paper. The microphone attached this way captures the sound signals of the user touching the paper. This data is passed to the computing device for processing. Later, combined with the tracking information about the user's finger, the system is able to identify precise touch events on the paper. Here, the sound signal captured by the microphone provides the timing information, whereas the camera performs the tracking. The applications enabled by this technique are explained earlier.
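The fusion just described, where the microphone supplies *when* a touch happened and the camera supplies *where* the fingertip was, can be sketched in a few lines. This is a hypothetical illustration of the idea, not SixthSense's actual code; the thresholds and data shapes are invented:

```python
import bisect

def detect_touch_times(samples, rate_hz, threshold=0.8):
    """Return timestamps (s) where the audio amplitude spikes past a threshold,
    i.e. the moments the finger actually struck the paper."""
    return [i / rate_hz for i, a in enumerate(samples) if abs(a) > threshold]

def touch_events(samples, rate_hz, track):
    """Pair each audio spike with the camera-tracked fingertip position.
    track: time-sorted list of (timestamp, (x, y)) from the finger tracker."""
    times = [t for t, _ in track]
    events = []
    for t in detect_touch_times(samples, rate_hz):
        # Find the tracking sample nearest (at or after) the spike time.
        i = min(bisect.bisect_left(times, t), len(track) - 1)
        events.append((t, track[i][1]))
    return events

# Toy data: one spike at sample 2 of a 4 Hz audio stream, while the tracker
# saw the fingertip move from (120, 80) to (200, 150).
audio = [0.0, 0.1, 0.95, 0.05]
track = [(0.0, (120, 80)), (0.5, (200, 150))]
print(touch_events(audio, 4, track))  # → [(0.5, (200, 150))]
```

The design point is the division of labor: audio gives precise timing cheaply, while vision gives position, and neither sensor alone could report a reliable touch event.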

Mobile computing device

The SixthSense system uses a mobile computing device in the user's pocket as the processing device. The software program enabling all the features of the system runs on this computing device. This device can be a mobile phone or a small laptop computer. The camera, the projector and the microphone are connected to this device using wired or wireless connections. The details of the software program that runs on this device are provided in the next section. The mobile computing device is also connected to the Internet via a 3G network or wireless connection.
Suggested Product: Any Windows computer

Now that you have all these pieces, you need a way to combine them. We recommend using Lego strips to form the base. The projector, camera, and mirror assembly can be directly put onto this base. You can also use Velcro to combine the products.

Suggested Hardware Components

These are the basic pieces that you should buy. You can choose any brand; the following list contains the items that worked well for us:
  • A mirror assembly
    1. A first surface mirror is best, 1” × 1” (this can be purchased almost anywhere, e.g. eBay)
  • Laptop Computer: Any Windows computer (this will act as the mobile computing device)

SOFTWARE:

How to run the Software Component of SixthSense

The prototype system runs on the Windows platform and the majority of the code is written in C++ and C#. We will be uploading newer versions as they are developed; this will also include a mobile version.
NOTE: We are moving the new code to Git this week. Meanwhile you can download it at the link below.

WUW v0.1 beta

Download

or you can download the latest release tag from GitHub.

Monday, December 9, 2013

InTouch Technology

How many mobile electronic devices do you have now? A smartphone, a laptop, a tablet, a digital camera, maybe even a smart watch? And how often is it necessary to transfer pictures, documents, or videos between your devices? The inTouch technology developed by researchers at the VTT Technical Research Centre of Finland lets a ring, bracelet, or even a smart fingernail act as a conduit to transfer information between devices simply and securely – even when the devices are owned by different people.
You have probably wished you had such a feature at some point. I was recently at an event where I was working with five friends. We had all taken pictures at the event, and now we wanted to share the pictures with each other. But we all had different devices with different operating systems. We uploaded the pictures – one at a time – to Facebook or another online service, and then downloaded them one at a time to our devices, and then again to our laptops or desktop computers once we got home. But what if there was a better way?
The team at the Smart Interaction Solutions lab at the VTT Technical Research Centre of Finland, led by Dr. Jani Mäntyjärvi, has been experimenting with devices in different forms that can act as a "touch conduit" for information between different devices.
The basic idea is that the user wears a ring, bracelet, watch, or even a "smart fingernail" (i.e. a small chip embedded in an artificial fingernail) that has a small amount of memory and an antenna, but no battery. The system requires the devices to have a special antenna that sends out enough energy to power the ring or other inTouch device, just as an RFID (Radio Frequency Identification) chip works.
When the user touches their device with an inTouch ring, for example, a special icon appears. If the user wants to upload a small amount of information, like a website address, or a small picture, the data is actually stored in the ring. Then when the user touches another device equipped with the same technology, they can initiate a download from the ring back into the device.
Sending a friend a website address is a simple task of touching your smartphone on the web page, selecting the “upload” icon, and then touching the friend’s device and selecting download. The information is wirelessly transferred via the ring.
Because the ring has limited storage capacity, the cloud is used as an intermediary for larger amounts of data, such as video files. The inTouch software uploads the file to a cloud service, so the ring only has to store a link to the data. Then, once the user touches the device to which they wish to send the video file, it knows where to go to download it.
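The routing decision described above, small payloads travel inline on the ring while large ones go through the cloud with only a link on the ring, can be sketched as follows. This is a hypothetical illustration of the scheme; the capacity, URL, and names are invented, not from VTT:

```python
RING_CAPACITY_BYTES = 256  # assumed tiny tag memory on the ring

cloud = {}  # stand-in for a cloud service: key -> payload

def store_on_ring(payload: bytes):
    """Small data rides on the ring itself; large data is uploaded and
    only a link to it is written to the ring."""
    if len(payload) <= RING_CAPACITY_BYTES:
        return {"kind": "inline", "data": payload}
    key = f"blob-{len(cloud)}"
    cloud[key] = payload
    return {"kind": "link", "url": f"https://cloud.example/{key}"}

def retrieve(record):
    """The receiving device reads the ring: inline data is used directly,
    a link tells it where to download the real payload."""
    if record["kind"] == "inline":
        return record["data"]
    key = record["url"].rsplit("/", 1)[-1]
    return cloud[key]

small = store_on_ring(b"https://example.com")   # a URL fits on the ring
video = store_on_ring(b"\x00" * 10_000)         # a video file does not
print(small["kind"], video["kind"])  # → inline link
```

The ring never needs more memory than a short record, yet arbitrarily large transfers still work because the heavy lifting moves to the network the devices already have.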
This technique does require some modifications to the devices involved – a special antenna and transmitter/receiver needs to be incorporated into each smartphone, tablet, or computer in addition to the Wi-Fi, cellular, and Bluetooth wireless technologies that may already be there.
Dr. Mäntyjärvi sees this type of wearable information conduit as enabling quite a few applications besides just simple file sharing. The ring can act as a password or security device that can unlock doors, or perhaps start your car. A bracelet could be used as an ID card to enable industrial equipment, or identify specific operators. The ring could also contain a link to a person's medical information that doctors could access in the emergency room.
The chips and antennas involved are very small, and since they have no batteries, can be placed in any number of wearable accessories, clothing, jewellery, or, as mentioned, a fashionable fake fingernail.
This person uses a fingernail-shaped chip to transfer data between a tablet and a smartphone with just a touch.

This data ring makes the connection between two devices by transferring data via a wireless antenna.

The touch device can also take the shape of a bracelet. You still have to touch the device to confirm the connection.


You can see a demonstration of the technology in the following video.
Source: VTT

Wednesday, July 10, 2013

3D Internet


Also known as virtual worlds, the 3D Internet is a powerful new way for you to reach consumers, business customers, co-workers, partners, and students. It combines the immediacy of television, the versatile content of the Web, and the relationship-building strengths of social networking sites like Facebook. Yet unlike the passive experience of television, the 3D Internet is inherently interactive and engaging. Virtual worlds provide immersive 3D experiences that replicate (and in some cases exceed) real life.
People who take part in virtual worlds stay online longer with a heightened level of interest. To take advantage of that interest, diverse businesses and organizations have claimed an early stake in this fast-growing market. They include technology leaders such as IBM, Microsoft, and Cisco; companies such as BMW, Toyota, Circuit City, Coca-Cola, and Calvin Klein; and scores of universities, including Harvard, Stanford, and Penn State.
Introduction of 3D Internet
The success of 3D communities and mapping applications, combined with the falling costs of producing 3D environments, are leading some analysts to predict that a dramatic shift is taking place in the way people see and navigate the Internet.
The appeal of 3D worlds to consumers and vendors lies in the level of immersion that the programs offer.

The experience of interacting with another character in a 3D environment, as opposed to a screen name or a flat image, adds new appeal to the act of socializing on the Internet.

Advertisements in Microsoft's Virtual Earth 3D mapping application are placed as billboards and signs on top of buildings, blending in with the application's urban landscapes.

3D worlds also hold benefits beyond simple social interactions. Companies that specialize in interior design or furniture showrooms, where users want to view entire rooms from a variety of angles and perspectives, will be able to offer customized models through users' home PCs.

Google representatives report that the company is preparing a new revolutionary product called Google Goggles, an interactive visor that will present Internet content in three dimensions. Apparently the recent rumors of a Google phone refer to a product that is much more innovative than the recent Apple iPhone.
Google's new three dimensional virtual reality :
Anyone putting on "the Goggles", as the insiders call them, will be immersed in a three-dimensional "stereo-vision" virtual reality called 3dLife. 3dLife is a pun referring to the three-dimensional nature of the interface, but also a reference to the increasingly popular Second Life virtual reality.
The "home page" of 3dLife is called "the Library", a virtual room with virtual books categorized according to the Dewey system. Each book presents a knowledge resource within 3dLife or on the regular World Wide Web. If you pick the book for Pandia, Google will open the Pandia Web site within the frame of a virtual painting hanging on the wall in the virtual library. However, Google admits that many users may find this too complicated.
Apparently Google is preparing a new revolutionary product called Google Goggles, an interactive visor which will display Internet content in three dimensions.
A 3D mouse lets you move effortlessly in all dimensions. Move the 3D mouse controller cap to zoom, pan and rotate simultaneously. The 3D mouse is a virtual extension of your body - and the ideal way to navigate virtual worlds like Second Life.
The Space Navigator is designed for precise control over 3D objects in virtual worlds. Move, fly and build effortlessly without having to think about keyboard commands, which makes the experience more lifelike.
Controlling your avatar with this 3D mouse is fluid and effortless. Walk or fly spontaneously, with ease. In fly cam mode you just move the cap in all directions to fly over the landscape and through the virtual world
Hands on: ExitReality. The idea behind ExitReality is that when browsing the web in the old-n-busted 2D version you're undoubtedly using now, you can hit a button to magically transform the site into a 3D environment that you can walk around in and virtually socialize with other users visiting the same site. This shares many of the same goals as Google's Lively (which, so far, doesn't seem so lively), though ExitReality is admittedly attempting a few other tricks.
Installation is performed via an executable file which places ExitReality shortcuts in Quick Launch and on the desktop, but somehow forgets to add the necessary ExitReality button to Firefox's toolbar. After adding the button manually and repeatedly being told our current version was out of date, we were ready to 3D-ify some websites and see just how much of reality we could leave in two-dimensional dust.
ExitReality is designed to offer different kinds of 3D environments that center around spacious rooms that users can explore and customize, but it can also turn some sites like Flickr into virtual museums, hanging photos on virtual walls and halls. Strangely, it treats Ars Technica as an image gallery, presenting it as a malformed 3D gallery.
3D shopping is the most effective way to shop online. 3DInternet dedicated years of research and development and has developed the world's first fully functional, interactive, and collaborative shopping mall, where online users can use 3DInternet's Hyper-Reality technology to navigate and immerse themselves in a virtual shopping environment. Unlike real life, you won't get tired running around a mall looking for that perfect gift; you won't have to worry about your kids getting lost in the crowd; and you can finally say goodbye to waiting in long lines to check out.

Monday, June 10, 2013

Mind-controlled quadcopter flies using imaginary fists

Researchers at the University of Minnesota have done away with all that tedious joystick work by developing a mind-controlled quadcopter. It may seem like the top item of next year’s Christmas list, but it also serves a very practical purpose. Using a skullcap fitted with a Brain Computer Interface (BCI), the University's College of Science and Engineering hopes to develop ways for people suffering from paralysis or neurodegenerative diseases to employ thought to control wheelchairs and other devices.
The aim of the Minnesota team, led by biomedical engineering professor Bin He, is to develop thought-control devices that can work reliably at high speed, without the need for surgical implantation. This means extensive real-world testing, and though spectacular, flying is actually a very simple activity in a three-dimensional environment, without the complications of obstacles and terrain that a ground vehicle encounters. That's one of the reasons why it was possible to make a plane that flies on autopilot decades before a self-driving car was even considered. So for developing mind-controlled devices, something like a quadcopter is an inexpensive option because it is relatively easy to control the variables of the experiment.
The quadcopter used was an AR Drone 1.0 built by Parrot SA of Paris, France. It is configured to fly with forward motion pre-set and the operator is able to use mind control to make it go left or right or up and down. A video camera mounted on the front provides a field of view pointing directly forward, an arrangement designed to promote a sense of embodiment in the operator and enhance feedback.
The EEG skullcap
The key feature of the mind-controlled quadcopter is its non-invasive BCI skullcap. Invasive BCIs are used for controlling robot limbs and have shown some success, but embedding them is a major surgical undertaking with risks of infection and rejection.
Non-invasive versions avoid these problems, and we've already seen projects that use this approach to control wheelchairs and robotic appendages.
Even if the ultimate goal is an implanted interface, a non-invasive BCI can help in the process by allowing the patient to become familiar with a BCI before the procedure, especially for a progressive neurodegenerative disorder, such as amyotrophic lateral sclerosis, where early implantation isn’t warranted.
The skullcap BCI is based on electroencephalography (EEG). Inside the cap are 64 electrodes that record tiny currents of electrical activity in the brain. These are usually very complex and chaotic, which is why you can’t strap someone to an EEG machine and read their mind, but brain activity in the motor centers of the central cortex can be more readily identified. The system uses closed-loop sensing, processing and actuation. The cap picks up the signals, they are conveyed to a computer for processing and the output is in the form of commands to the quadcopter’s control system by WiFi.
"It’s completely noninvasive," says Karl LaFleur, a senior biomedical engineering student involved in the project. "Nobody has to have a chip implanted in their brain to pick up the neuronal activity."
The experiment used five subjects – three women and two men in their twenties. Other subjects manipulated the quadcopter using a conventional keyboard to act as a control group. The test subjects were trained to control the quadcopter by imagining opening or closing their fists. Imagining making a left-hand fist causes the brain to fire activity in the area of the motor cortex controlling the left hand, the system detects this and tells the quadcopter to turn left. Imagining a right-hand fist makes it turn right, and imagining making both fists makes it go up and then down again.
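The mapping just described, where each detected motor-imagery pattern becomes a steering command while the drone's forward motion stays pre-set, can be sketched as a simple lookup. This is an illustrative simplification, not the Minnesota team's actual pipeline; it assumes an upstream EEG classifier has already labeled each window of motor-cortex activity:

```python
# Label -> command table: imagining a left fist fires the left-hand motor
# area, which the system maps to a left turn, and so on.
COMMANDS = {
    "left_fist": "turn_left",
    "right_fist": "turn_right",
    "both_fists": "ascend",
    "rest": "hold_altitude",   # no imagery: keep pre-set forward motion
}

def decode(window_labels):
    """Turn a stream of per-window imagery labels into drone commands,
    treating any unrecognized label as 'rest'."""
    return [COMMANDS.get(label, "hold_altitude") for label in window_labels]

labels = ["rest", "left_fist", "left_fist", "both_fists", "rest"]
print(decode(labels))
# → ['hold_altitude', 'turn_left', 'turn_left', 'ascend', 'hold_altitude']
```

The hard part of the real system is everything upstream of this table: classifying noisy 64-channel EEG into those labels reliably enough to close the control loop over Wi-Fi.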
The mind-controlled quadcopter navigating the course
This is something of a first because it required precise mapping of the brain. "We were the first to use both functional MRI (Magnetic resonance imaging) and EEG imaging to map where in the brain neurons are activated when you imagine movements," LaFleur says. "So now we know where the signals will come from."
The training involved working with simulators that resembled an old Pong game from the 1970s. The subjects had to learn to move a cursor on a screen left and right, then to move it up and down as well. Once they'd mastered this, they were set to controlling a simulated quadcopter in a virtual environment.
In the final experiment, a standard-size university gymnasium was kitted out with two large balloon rings suspended from the ceiling. The object of the exercise was to fly the quadcopter through the rings.
The operators faced away from the area, so they could only see through the quadcopter’s camera as a way of providing feedback on performance. The results showed an over 90 percent success rate in navigating the course once the system had been calibrated and the subjects had familiarized themselves with the layout.
Map of the brain as the subject imagines making one fist, then the other, then both
"Our study shows that for the first time, humans are able to control the flight of flying robots using just their thoughts sensed from a non-invasive skull cap," says Bin He. "It works as good as invasive techniques used in the past.
"We envision that they’ll use this technology to control wheelchairs, artificial limbs or other devices. Our next step is to use the mapping and engineering technology we've developed to help disabled patients interact with the world. It may even help patients with conditions like autism or Alzheimer’s disease or help stroke victims recover. We’re now studying some stroke patients to see if it’ll help rewire brain circuits to bypass damaged areas."
The findings of the team were published in the Journal of Neural Engineering.
The video below shows the mind-controlled quadcopter in action.

Sunday, February 24, 2013

Man With Thought-Controlled Bionic Leg to Climb Chicago Tower


After losing his right leg in a motorcycle accident, Zac Vawter, a 31-year-old software engineer, will put his thought-controlled bionic leg to the ultimate test on Sunday when he attempts to climb 103 flights of stairs to the top of Chicago's Willis Tower, one of the world's tallest skyscrapers. If all goes well, he'll make history with the bionic leg's public debut.
Vawter has signed up to become a research subject, helping to test a trailblazing prosthetic leg that's controlled by his thoughts. The robotic leg responds to electrical impulses from muscles in his hamstring. Vawter will think, "Climb stairs," and the motors, belts and chains in his leg will synchronize the movements of its ankle and knee. Vawter hopes to make it to the top in an hour: longer than it would've taken before his amputation, but less time than it would take with his normal prosthetic leg, or, as he calls it, his "dumb" leg.
A team of researchers will be cheering him on and noting the smart leg's performance. Vawter cannot keep the leg after the experiment; the bionic limb will stay behind in Chicago, where researchers will continue to refine its steering. Taking it to market is still years away.
“Somewhere down the road, it will benefit me and I hope it will benefit a lot of other people as well,” Vawter said about the research at the Rehabilitation Institute of Chicago.
Bionic — or thought-controlled — prosthetic arms have been available for a few years, thanks to pioneering work done at the Rehabilitation Institute. With leg amputees outnumbering people who’ve lost arms and hands, the Chicago researchers are focusing more on lower limbs. Safety is important. If a bionic hand fails, a person drops a glass of water. If a bionic leg fails, a person falls down stairs.

In this Oct. 25, 2012 photo, physical therapist assistant Suzanne Finucane, right, helps Zac Vawter as he practices walking with an experimental "bionic" leg at the Rehabilitation Institute of Chicago. After losing his right leg in a motorcycle accident, the 31-year-old software engineer signed up to become a research subject, helping test a trailblazing prosthetic leg that's controlled by his thoughts. He will put this leg to the ultimate test Sunday, Nov. 4 when he attempts to climb 103 flights of stairs to the top of Chicago's Willis Tower, one of the world's tallest skyscrapers. (AP Photo/Brian Kersey)

The Willis Tower climb will be the bionic leg’s first test in the public eye, said lead researcher Levi Hargrove of the institute’s Center for Bionic Medicine. The climb, called “SkyRise Chicago,” is a fundraiser for the institute with about 2,700 people climbing. This is the first time the climb has played a role in the facility’s research.
To prepare, Vawter and the scientists have spent hours adjusting the leg’s movements. On one recent day, 11 electrodes placed on the skin of Vawter’s thigh fed data to the bionic leg’s microcomputer. The researchers turned over the “steering” to Vawter. He kicked a soccer ball, walked around the room and climbed stairs.
It started with surgery in 2009. When Vawter’s leg was amputated, a surgeon repositioned the residual spaghetti-like nerves that normally would carry signals to the lower leg and sewed them to new spots on his hamstring. That would allow Vawter one day to be able to use a bionic leg, even though the technology was years away. The surgery is called “targeted muscle reinnervation” and it’s like “rewiring the patient,” Hargrove said. “And now when he just thinks about moving his ankle, his hamstring moves and we’re able to tell the prosthesis how to move appropriately.”
Experts not involved in the project say the Chicago research is on the leading edge. Most artificial legs are passive. “They’re basically fancy wooden legs,” said Daniel Ferris of the University of Michigan. Others have motorized or mechanical components but don’t respond to the electrical impulses caused by thought.