

Intel backs IU Professor Minje Kim's deep learning project

Feb. 1, 2017

Minje Kim, an assistant professor of intelligent systems engineering at the School of Informatics and Computing at IU Bloomington, has received a gift from Intel to pursue a method of lowering the power and computing cost of deep learning processes in artificial intelligence. Intel sought a portfolio of research projects focused on compelling new advancements in human-computer interaction, an area the company views as being on the precipice of a breakthrough.



As smart devices have become more ubiquitous, advances in deep learning have allowed AI to reach a near-human level. Deep learning allows complicated intelligence jobs -- such as computer vision, near real-time language translation and music recognition -- to be performed quickly, but such computing comes at a cost. Because neural networks store each of the millions of parameters of a computation in formats of up to 64 bits, the computations required are both sizeable and power-hungry.
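A back-of-the-envelope calculation shows how quickly full-precision parameters add up. The 50-million-parameter figure below is purely illustrative, not a number from Kim's project:

```python
# Rough memory footprint of a network's weights at different precisions.
# The parameter count is a hypothetical example for illustration only.
num_params = 50_000_000

bytes_at_64_bit = num_params * 8    # one 64-bit value per parameter
bytes_at_1_bit = num_params // 8    # eight 1-bit parameters per byte

print(f"64-bit weights: {bytes_at_64_bit / 1e6:.0f} MB")   # 400 MB
print(f" 1-bit weights: {bytes_at_1_bit / 1e6:.2f} MB")    # 6.25 MB
```

At 64 bits per parameter, the weights alone occupy hundreds of megabytes; at a single bit each, the same network fits in a few megabytes, a difference that matters on a battery-powered device.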

Such computations aren’t a problem on laptop or desktop computers, but smart devices are limited by battery life and processing power. Offloading the work to the cloud introduces communication delays that slow down the process. If an application drains the battery too quickly or is slow to present results, it will be of limited use.

Kim’s project, Bitwise Deep Recurrent Neural Networks for Efficient Context-Aware Pervasive Systems, focuses on changing the parameters from multi-bit encoding to a single bit while retaining nearly all of the information. The resulting computations would require less power and memory, allowing processes to run locally on smart devices without losing the near human-level performance of deep learning.
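One common way to obtain single-bit parameters, drawn from the general bitwise-network literature rather than from the specifics of Kim's project, is to keep real-valued weights during training and map them to {-1, +1} with a sign function at inference time:

```python
import numpy as np

def binarize(w):
    """Map real-valued weights to {-1, +1} via the sign function.

    A generic binarization sketch; actual bitwise-network methods
    differ in details such as scaling factors and training tricks.
    """
    return np.where(w >= 0, 1.0, -1.0)

# Real-valued weights as maintained during training...
w = np.array([0.8, -0.3, 0.05, -1.2])

# ...and their single-bit stand-ins used for inference.
print(binarize(w))  # [ 1. -1.  1. -1.]
```

Each binarized weight needs only one bit of storage, and arithmetic on {-1, +1} values can be replaced by cheap logical operations.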

The research plans to accomplish three things. First, the work aims to devise a proper training mechanism for Bitwise Deep Recurrent Neural Networks by transplanting recent successful training policies from ordinary bitwise neural networks. Second, it will measure the success of the training method by checking whether performance approaches the upper bound set by state-of-the-art full-precision deep recurrent neural network systems, which tend to require far more resources. Finally, the run-time efficiency of Bitwise Deep Recurrent Neural Network systems will be assessed in a proper mockup hardware implementation.

“The idea is to somehow come up with a way to streamline the procedure while still keeping the high-performance aspect,” Kim said. “If you can rely on a single-bit logical gate for computation (e.g., XNOR), there will be great savings in the amount of power and memory required.”
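The XNOR trick Kim alludes to can be sketched as follows. If both weights and inputs are restricted to {-1, +1} and packed into machine words (bit 1 standing for +1, bit 0 for -1), then a dot product reduces to an XNOR followed by a population count, since matching bits contribute +1 and mismatches -1. This is a minimal illustration of the general technique, not code from the project:

```python
def bitwise_dot(x_bits, w_bits, n):
    """Dot product of two length-n vectors over {-1, +1}, each packed
    into an integer (bit value 1 means +1, bit value 0 means -1)."""
    xnor = ~(x_bits ^ w_bits) & ((1 << n) - 1)   # 1 wherever the bits agree
    matches = bin(xnor).count("1")               # popcount of agreements
    return 2 * matches - n                       # +1 per match, -1 per mismatch

# x = [+1, -1, +1, +1] and w = [+1, +1, -1, +1],
# packed with element i in bit i (least-significant bit first).
x_bits = 0b1101
w_bits = 0b1011
print(bitwise_dot(x_bits, w_bits, 4))  # 0, same as the real-valued dot product
```

On hardware, the XNOR and popcount each cost a handful of gate operations per word, replacing the multiply-accumulate units that dominate the power budget of full-precision networks.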

Intel requested proposals addressing technological areas such as robotics and autonomous machines; virtual and augmented reality; visualization, simulation and modeling; advanced imaging and displays; and wearables and human activity or state monitoring systems. The computing giant also looks to address application domains such as smart spaces, whole-home personal assistants, person-to-person interaction, group conflict resolution and personal coaching.
