Researchers at Disney Research and ETH Zurich have demonstrated that consumer-grade light-emitting diode (LED) bulbs can, with some modifications, do double duty — both illuminating a room and providing a communications link for devices in that room.
This visible light communication (VLC) system would be suitable for connecting the many devices, such as appliances, wearable devices, sensors, toys and utilities, that could comprise the Internet of Things, or IoT, said Stefan Schmid, a Ph.D. student at Disney Research and ETH Zurich.
LEDs can both produce light and serve as light sensors. By having individual LEDs alternate between sending modulated light signals and serving as receivers of signals, it is possible to create a network of bulbs that can send messages to each other and connect to devices, while having no discernible effect on room lighting.
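The alternation described above can be pictured as simple time-division duplexing: an LED spends one slot transmitting a modulated on-off signal and the next slot sensing incoming light. The sketch below is purely illustrative (the function names, on-off keying scheme, and threshold are assumptions, not the researchers' actual protocol):

```python
# Hypothetical sketch of an LED link using on-off keying (OOK):
# a bulb transmits by flickering faster than the eye can see, then
# swaps roles and samples light levels as a receiver.

def encode_ook(byte):
    """Encode one byte as eight light levels (1 = LED on, 0 = LED off)."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def decode_ook(levels, threshold=0.5):
    """Recover a byte from sampled light levels at the receiving LED."""
    byte = 0
    for level in levels:
        bit = 1 if level > threshold else 0
        byte = (byte << 1) | bit
    return byte

# One duplex exchange: bulb A transmits while bulb B senses.
message = ord('A')
pulses = encode_ook(message)   # light pulses, imperceptible to the eye
received = decode_ook(pulses)
```

Because each pulse lasts only a fraction of a millisecond, the average light output, and hence the room's apparent brightness, stays effectively constant.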
Schmid and his colleagues designed and implemented such a VLC system, demonstrating that it is a viable way to interconnect devices within a room.
Disney Research is a network of research laboratories supporting The Walt Disney Company.
Its purpose is to pursue scientific and technological innovation to advance the company’s broad media and entertainment efforts.
It has facilities in Los Angeles, Pittsburgh, Boston and Zurich. Research topics include computer graphics, video processing, computer vision, robotics, radio and antennas, wireless communications, human-computer interaction, displays, data mining, machine learning, and behavioral sciences.
Disney Research is managed by an internal Disney Research Council co-chaired by Disney-Pixar’s Ed Catmull and Walt Disney Imagineering’s Bruce Vaughn, and including the directors of the individual labs.
Learning algorithm rewarded for building confidence over time
Researchers at Disney Research and Boston University have found that a machine learning program can be trained to detect human activity in a video sooner and more accurately than other methods by rewarding the program for gaining confidence in its prediction the longer it observes the activity.
It seems intuitive that the program would grow more confident that it is detecting, say, a person changing a tire, the longer it observes the person loosening lug nuts, jacking up the car and subsequently removing the wheel — but that is not how most computer models have been trained to detect activity, said Leonid Sigal, senior research scientist at Disney Research.
“Most training techniques are happy if the computer model gets 60 percent of the video frames correct, even if the errors occur late in the process, when the activity should actually be more apparent,” Sigal said. “That doesn’t make much sense. If the model predicts a person is making coffee even after it sees the person put pasta into boiling water, it should be penalized more than if it made the same incorrect prediction when the person was still just boiling water.”
Shugao Ma, a Ph.D. student in computer science at Boston University and a former intern at Disney Research, found that this change in training methods resulted in more accurate predictions of activities. The computer was also often able to accurately predict the activity early in the process, even after seeing only 20 to 30 percent of the video. Likewise, the program can detect that an activity is finished if its confidence that it is observing that activity begins to drop.