Good afternoon everyone,
Dr. Yang Li is visiting our lab on Wednesday, April 6th, after his TUX talk on Tuesday. I am putting together the demo schedule for him. He will be seeing demos from 10:30 am to 12 pm and from 2:15 pm to 5 pm. If you would like to give a demo, please let me know by Monday night, along with the time you prefer.
Thanks,
Haijun
You can find his bio here:
Yang Li is a Senior Research Scientist in Human Computer Interaction and Mobile Computing at Google, where he leads the Predictive User Interfaces group. He is also an affiliate faculty member in Computer Science & Engineering at the University of Washington. He earned a Ph.D. degree in Computer Science from the Chinese Academy of Sciences (http://english.cas.ac.cn/) and conducted postdoctoral research in EECS (http://www.eecs.berkeley.edu/) at the University of California, Berkeley (http://www.berkeley.edu/). He has published over 50 papers in the field of Human Computer Interaction, including 29 publications at CHI, UIST, and TOCHI. He has regularly served on the program committees of top-tier HCI and mobile computing conferences.
Yang's research focuses on novel tools and methods for creating mobile interaction behaviors, particularly regarding emerging input modalities such as gestures (http://yangl.org/pdf/gesturesearch-uist2010.pdf) and cameras (http://youtu.be/JJSZGdMYV9s), cross-device interaction (https://www.youtube.com/watch?v=xGqn1FQRQPQ, http://googleresearch.blogspot.com/2013/09/projecting-without-projector-sharing.html), and predictive user interfaces (http://dl.acm.org/citation.cfm?id=2647355). Yang wrote Gesture Search (https://play.google.com/store/apps/details?id=com.google.android.apps.gesturesearch&hl=en), a popular Android app for random access of mobile content using gestures. Yang develops software tool support (http://youtu.be/8OXExn29OTE) and recognition methods (http://yangl.org/pdf/protractor-chi2010.pdf) by drawing insights from user behaviors (http://yangl.org/pdf/motiongestures-chi2011.pdf), and leverages techniques such as machine learning, computer vision, and crowdsourcing (http://yangl.org/pdf/crowdlearner.pdf) to make complex tasks simple and intuitive.