2016: DOLL wins Phase II STTR for its CART project with MIT

In Phase II, DOLL will develop an integrated system that, guided by a PAMELA-based mission plan encoding a detailed repair procedure, will continuously observe the actions of the human performing the repair task. DOLL will develop a machine-learning-based object detector that will dynamically determine the state and position of each item involved in the repair task, along with the state of the human's hands. We will develop a belief state estimator that will make sense of these observations and determine their context and progress within the mission plan for the repair. To provide mixed-initiative guidance, we will develop a voice interaction capability that will inform the user when each major step of the repair is complete, alert the user to any mistakes, and allow the user to ask questions about their progress.
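The belief state estimator described above could be sketched as a simple hidden-Markov-style filter over plan steps: noisy detector observations are used to update a probability distribution over which step of the repair is currently underway. Everything below (the step names, observation labels, and all probabilities) is an illustrative assumption, not the CART system's actual model.

```python
# Hypothetical sketch of a belief state estimator over repair-plan steps.
# Steps, observations, and probabilities are made up for illustration.

STEPS = ["remove_cover", "swap_fuse", "replace_cover"]

# P(observation | step): how likely the object detector is to report
# each scene state while a given plan step is in progress.
OBS_LIKELIHOOD = {
    "cover_off":    [0.7, 0.2, 0.1],
    "fuse_in_hand": [0.1, 0.8, 0.1],
    "cover_on":     [0.2, 0.1, 0.7],
}

# P(step_t | step_{t-1}): steps mostly persist, occasionally advance.
TRANSITION = [
    [0.8, 0.2, 0.0],
    [0.0, 0.8, 0.2],
    [0.0, 0.0, 1.0],
]

def update(belief, obs):
    """One HMM forward step: predict via transitions, then reweight
    by the likelihood of the current observation and renormalize."""
    n = len(STEPS)
    predicted = [
        sum(belief[i] * TRANSITION[i][j] for i in range(n))
        for j in range(n)
    ]
    weighted = [p * OBS_LIKELIHOOD[obs][j] for j, p in enumerate(predicted)]
    total = sum(weighted)
    return [w / total for w in weighted]

belief = [1.0, 0.0, 0.0]  # start certain the first step is underway
for obs in ["cover_off", "fuse_in_hand", "fuse_in_hand", "cover_on"]:
    belief = update(belief, obs)

most_likely = STEPS[max(range(len(STEPS)), key=lambda j: belief[j])]
print(most_likely)
```

A filter like this is what would let the system confirm completion of a major step (belief mass concentrates on the next step) or flag a mistake (observations that are unlikely under every plausible current step).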