In most existing machine perception systems, the perception components are statically configured, so sensor data is processed in the same bottom-up manner every sensing cycle. The parameters of each component are likewise statically tuned to operate optimally under very specific conditions. If higher-level goals, context, or the environment change, the conditions for which the static configuration was designed may no longer hold. Such static systems are therefore prone to error: they cannot adapt to the new conditions because they are too inflexible.
Beyond their inflexibility, existing machine perception systems are often poorly integrated with the autonomous systems to which they provide information. They are unaware of the autonomous system's overall goals and therefore cannot make intelligent observation-prioritization decisions in support of those goals. In particular, it may be neither necessary nor useful for the perception system to be aware of every aspect of a situation, and attempting to be may even be detrimental to the overall goal because of resource contention and time limits.
DOLL addresses these challenges with a dynamic Active Perception approach in which reasoning about context actively and effectively allocates and focuses sensing and action resources. The system reasons about dynamically changing goals, contexts, and conditions, and can therefore intelligently switch to a more appropriate process-flow configuration or to better parameter settings. The Active Perception approach prioritizes the system's overall goals, so that perception and state-changing actions are combined optimally to achieve them. As a result, the approach is more robust to environmental uncertainty.
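The idea of switching configurations as context changes can be sketched in a few lines. The sketch below is illustrative only (the configuration fields, context labels, and function names are hypothetical, not part of DOLL): a context-indexed table of parameter settings replaces the single hard-coded configuration of a static system.

```python
# Minimal sketch (hypothetical names) of context-dependent reconfiguration:
# instead of one statically tuned configuration, the perception system
# selects parameter settings that match the current context.
from dataclasses import dataclass

@dataclass
class PerceptionConfig:
    frame_rate_hz: float        # how often the sensor is sampled
    detection_threshold: float  # alert sensitivity (lower = more sensitive)

# Context-indexed configurations; a static system would hard-code one of these.
CONFIGS = {
    "clear_day":  PerceptionConfig(frame_rate_hz=5.0,  detection_threshold=0.8),
    "night":      PerceptionConfig(frame_rate_hz=15.0, detection_threshold=0.5),
    "high_alert": PerceptionConfig(frame_rate_hz=30.0, detection_threshold=0.3),
}

def select_config(context: str) -> PerceptionConfig:
    """Pick the configuration matching the current context, falling back
    to the most conservative (most sensitive) setting when unsure."""
    return CONFIGS.get(context, CONFIGS["high_alert"])
```

A real system would of course infer the context from models and observations rather than receive it as a label, but the table-lookup structure shows why reconfiguration is cheap once context is known.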
Active perception draws on models to inform context-dependent tuning of sensors, to direct sensors toward the phenomena of greatest interest, to follow up initial alerts from cheap, inaccurate sensors with targeted use of expensive, accurate sensors, and to combine sensor results with context information to produce increasingly accurate results. This model-based approach deploys sensors to build structured interpretations of situations that meet mission-centered decision-making requirements.
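The cheap-then-expensive follow-up pattern mentioned above can be made concrete with a small sketch. All names and data here are hypothetical stand-ins, not DOLL's actual interfaces: a fast, noisy detector screens every cell, and the slow, accurate detector is applied only where the cheap one alerts, conserving the expensive resource.

```python
# Minimal sketch (hypothetical names) of cascaded sensing: cheap, inaccurate
# screening everywhere; expensive, accurate confirmation only on alerts.

def cheap_sensor(cell):
    """Stand-in for a fast, noisy detector: returns a rough score in [0, 1]."""
    return cell.get("rough_score", 0.0)

def expensive_sensor(cell):
    """Stand-in for a slow, accurate detector: returns a confident verdict."""
    return cell.get("true_target", False)

def cascaded_scan(cells, alert_threshold=0.5):
    """Screen all cells cheaply; spend the expensive sensor only on cells
    whose cheap score crosses the alert threshold."""
    confirmed = []
    expensive_calls = 0
    for cell in cells:
        if cheap_sensor(cell) >= alert_threshold:   # initial cheap alert
            expensive_calls += 1
            if expensive_sensor(cell):              # targeted follow-up
                confirmed.append(cell["id"])
    return confirmed, expensive_calls

cells = [
    {"id": 1, "rough_score": 0.9, "true_target": True},   # real target
    {"id": 2, "rough_score": 0.7, "true_target": False},  # false alarm
    {"id": 3, "rough_score": 0.1, "true_target": False},  # quiet cell
]
found, calls = cascaded_scan(cells)
# found == [1]; the expensive sensor ran only twice, not on all three cells.
```

The false alarm from the cheap sensor (cell 2) costs one expensive follow-up but is correctly rejected, while the quiet cell never incurs the expensive cost at all; this trade-off is what makes the cascade attractive under resource contention.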