Dan Bohus
Senior Principal Researcher

dbohus@microsoft.com
+1-425-706-5880
Microsoft Research
One Microsoft Way
Redmond, WA, 98052



research agenda
My work focuses on the study and development of computational models for multimodal, physically situated interaction. The long-term question that drives my research agenda is: how can we create systems that reason more deeply about their surroundings and seamlessly participate in interactions and collaborations with people in the physical world? Examples include human-robot interactive systems, embodied conversational agents, intelligent spaces, and AR/VR applications.

Physically situated interaction hinges critically on the ability to model and reason about the dynamics of human interactions, including processes such as conversational engagement, turn-taking, grounding, interaction planning, and action coordination. Creating robust solutions that function in the real world brings to the fore numerous AI challenges. Example questions include representation (e.g., what are useful formalisms for creating actionable, robust models for multiparty dialog and interaction?), machine learning methods for multimodal inference from streaming sensory data, predictive modeling, and decision making and planning under uncertainty and temporal constraints. My work aims to address such challenges and create systems that operate and collaborate with people in the physical world.

Prior to joining Microsoft Research, I obtained my PhD degree from the Computer Science Department at Carnegie Mellon University, Pittsburgh, PA. My curriculum vitae is available here.


activities & news
Oct '24: I'm looking forward to attending the XR-SPro workshop at ISMAR this year and giving an invited talk on SIGMA.
Oct '24: We will be presenting two papers at ICMI this year, one on detecting user confusion by leveraging behavioral signals, and the other on efforts towards building more ecologically valid benchmarks for situated collaboration.
April '24: We open-sourced SIGMA, a mixed-reality system for research on physical task assistance. Read more in this blog post and in this arXiv technical report.
Oct '23: I had a great time participating in the UIST XR & AI workshop.
Oct '23: We recently released HoloAssist, an egocentric human interaction dataset useful for developing interactive AI assistants for the physical world. Read more about the dataset and associated challenges in this ICCV paper and MSR Blog post.
Oct '23: Together with Sean Andrist, Zongjian Li, and Mohammad Soleymani, I presented a tutorial on building multimodal interactive applications for research at ICMI'2023.
Jan '23: I presented an invited talk on situated language interaction at the SIVA'23 - Workshop on Socially Interactive Human-like Virtual Agents.
Dec '22: We have released beta version 0.18 of Platform for Situated Intelligence, continuing to refine support for building mixed reality applications and further evolving the debugging and visualization capabilities.
Nov '22: I presented an invited talk on our work on Continual Learning about Objects in the Wild at the NeurIPS 2022 Workshop on Human in the Loop Learning.
Nov '22: I presented our paper on Continual Learning about Objects in the Wild at ICMI'2022.
Aug '22: I presented an invited keynote on challenges and opportunities in physically situated language interaction at the SIRRW Workshop at IEEE RO-MAN.
Jul '21: I presented an invited keynote on situated interaction at the Robotics for People (R4P) workshop at RSS'2021.
Apr '21: We are hosting a virtual workshop on Platform for Situated Intelligence, an open-source framework for building multimodal, integrative-AI systems. For more information and to register, please see the event website.
Mar '21: We published a technical report with an in-depth description of the Platform for Situated Intelligence open-source framework for building multimodal systems.
Sep '20: We released version 0.13 of Platform for Situated Intelligence, an open-source framework for development and research on multimodal, integrative-AI systems. Here's a blog post and a webinar describing the framework.
Oct '19: I was awarded the Community Service Award this year at ICMI'2019, and our 2009 paper on dialog in the open world was declared a Ten-Year Technical Impact Award Runner-up. It's been great being part of this community for the past ten years!
Sep '19: I gave a keynote presentation on Situated Interaction at SigDial'2019 in Stockholm.
Jul '19: The 3rd volume of The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions has been published, containing a chapter I co-authored with Eric Horvitz on Situated Interaction.
Mar '19: We presented a demonstration of Platform for Situated Intelligence at HRI'2019.
May '18: Together with Sean Andrist, Maya Cakmak, and Sid Srinivasa, I am co-organizing the 2018 MSR-UW Summer Institute on Social Robotics this July.
Nov '17: Our demo paper on Platform for Situated Intelligence received the Best Demonstration Award at ICMI'2017.
Oct '17: Our recent papers on scene shaping and on diagnosing failures in physically situated systems deployed in-the-wild were accepted for publication at the AAAI Fall Symposium and ICSR!
Jul '17: We introduced the Platform for Situated Intelligence project in the Integrative-AI session at the MSR Faculty Summit.
Jul '17: I gave an invited talk on physically situated language interaction at MSR Cambridge AI Summer School.
Dec '16: The special issue of AI Magazine on Turn-Taking and Coordination in Human-Machine Interaction that I co-edited has now been published. It's a fun and interesting collection of articles on the topic. A big thanks to the contributors and my co-editors!
Aug '16: Sean Andrist from the University of Wisconsin-Madison has recently joined our group! Check out his great work on gaze and human-robot interaction here. Looking forward to expanding our research in the HRI space!
Jun '16: I have started serving as a member of the steering board for ICMI, the International Conference on Multimodal Interaction.
May '16: I gave an invited talk at the workshop on Designing Speech and Multimodal Interactions for Mobile, Wearable, and Pervasive Applications at CHI 2016 in San Jose.
Dec '15: I gave an invited talk on opportunities and challenges in situated dialog at ASRU 2015 in Scottsdale, AZ.
Apr '15: I am serving as Program Chair for ICMI'2015, to be held this November in Seattle. The paper submission deadline is May 15th.
Mar '15: I co-organized and attended the AAAI Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction at Stanford University.
Nov '14: I attended ICMI'2014 and gave an invited keynote presentation at the co-located 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction.
Sep '14: Zhou Yu started as an intern with Eric Horvitz and myself. Looking forward to a fun and productive fall!
Aug '14: With Sean Andrist, Bilge Mutlu, David Schlangen, and Eric Horvitz, I am organizing an AAAI Spring Symposium on Turn-Taking and Coordination in Human-Machine Interaction.
Aug '14: Two papers, one on generating hesitations based on forecasting models and one on communicating about uncertainty in embodied agents, were recently accepted for presentation at ICMI'2014.
Aug '14: We have deployed a 3rd Directions Robot, on the 4th floor of Building 99. Full coverage!
May '14: I am co-organizing the ICMI'14 workshop on Understanding and Modeling Multiparty, Multimodal Interactions.
Apr '14: A piece on our research was featured on Engadget.

videos
SIGMA: an open-source mixed-reality system for research in procedural task assistance
Platform for Situated Intelligence overview
Situated interaction project overview
Video highlighting work on communicating about uncertainty in embodied agents
Directions Robot video

selected media coverage
- Microsoft teaches robots how to deal with groups and draw from memory, in engadget.com
- Can robots have social intelligence?, in phys.org
- Microsoft Research is building a smart virtual assistant with its Situated Interaction project, in mspoweruser.com
- Could virtual assistants be made to understand us better?, in bbc.com
- Computers learn to listen, and some talk back, in nytimes.com
- Ability to 'see' advances artificial intelligence, in sfgate.com
- Microsoft aims to turn PCs into personal assistants, teachers, in usatoday.com
- Microsoft demos robotic receptionist, in cnet.com