Here's a brief overview of an exciting project I did during my first semester at UMN.
Touch is still a key user interface, and with all the sensors in our smartphones we can expect to see more of this technology as devices begin to adopt what might be called the “adaptable interface” (which will eventually give way to invisible interfaces). Gesture recognition might seem a mismatch for phones, since people tend to keep them close at hand, but there are plenty of situations where controlling a handset with motions makes sense.
For example, once a phone is mounted on a car dashboard, a driver could use gestures to answer or ignore an incoming call, activate voice recognition, zoom in or out on a map, and adjust the device’s volume.
My aim was to build a system that leverages a mobile device’s touch screen, built-in inertial sensors, and vibration motor to infer hand postures.
- This includes one- or two-handed interaction and the use of the thumb or index finger.
- Use machine learning tools to classify the sensor data (a rough sketch of this step appears after the list).
- NO use of additional device instrumentation!
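To make the classification step concrete, here is a minimal sketch of what the pipeline might look like, assuming per-touch feature vectors derived from the touch screen and inertial sensors and a small set of posture labels. The feature set, the posture labels, and the choice of a random-forest classifier are illustrative assumptions, not the project's actual implementation.

```python
# A minimal sketch (not the actual project code): classifying hand posture
# from touch and inertial features with scikit-learn. Features, labels, and
# model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for per-touch feature vectors, e.g. touch coordinates and size
# plus accelerometer/gyroscope statistics around the tap. Here they are
# random placeholders, so accuracy will only be around chance level.
n_samples, n_features = 600, 12
X = rng.normal(size=(n_samples, n_features))

# Hypothetical posture labels:
# 0 = one-handed thumb, 1 = two-handed thumb, 2 = index finger.
y = rng.integers(0, 3, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real features extracted from the touch screen and inertial sensors, the same train/evaluate loop would report how well the system separates one-handed, two-handed, and index-finger postures.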