Dynamic Object Language Labs, Inc. (DOLL) was established in 1993 to promote the synergistic combination of Dynamic and Open Languages, Object-Oriented Programming, and Artificial Intelligence.  We conduct advanced research and development aimed at providing a brighter and safer future through advances in computer-based technology and its application to real-world problems.  DOLL's central theme is to approach complex problems from a domain-specific language perspective. Our research interests include Advanced Languages, Artificial Intelligence, Computer Vision, Planning, Cyber Security, and Expert Systems.  DOLL has a long history of exciting work in Self-Adaptive Software, a software architecture concept that DOLL helped introduce in the 1990s. Self-Adaptive Software has been used to build robust vision systems, intelligent spaces and rooms, intelligent control systems, and cyber security systems.  DOLL is currently performing on multiple projects in the area of cyber security, including self-adaptive and game-theoretic approaches, and has a long history in the areas of vision architectures, robust vision systems, and intelligent perceptual systems.


In most existing machine perception systems, the perception components are statically configured, so that sensor data is processed in the same bottom-up manner on each sensing cycle. The parameters of components in such a system are also statically tuned to operate optimally under very specific conditions. If higher-level goals, context, or the environment change, the specific conditions for which the static configuration is intended may no longer hold. As a result, such static systems are prone to error because they cannot adapt to the new conditions; they are too inflexible.

In addition to their inflexibility, existing machine perception systems are often not well integrated into the autonomous systems to which they provide information. As a result, they are unaware of the autonomous system's overall goals, and therefore cannot make intelligent observation-prioritization decisions in support of those goals. In particular, it may not be necessary or useful for the perception system to be aware of every aspect of a situation, and attempting to do so may even be detrimental to the overall goal due to resource contention and time limits.

DOLL addresses these challenges using a dynamic Active Perception approach in which reasoning about context is used to actively and effectively allocate and focus sensing and action resources. The system reasons about dynamically changing goals, contexts, and conditions, and can therefore switch intelligently to a more appropriate process-flow configuration or to better parameter settings. The Active Perception approach prioritizes the system's overall goals, so that perception and state-changing actions are optimally combined to achieve them. As a result, the approach is more robust to environmental uncertainty.

Active perception draws on models to inform context-dependent tuning of sensors, to direct sensors towards the phenomena of greatest interest, to follow up initial alerts from cheap, inaccurate sensors with targeted use of expensive, accurate sensors, and to intelligently combine results from sensors with context information to produce increasingly accurate results. The model-based approach deploys sensors to build structured interpretations of situations that meet mission-centered decision-making requirements.
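The escalation pattern described above, in which a cheap, noisy sensor triggers targeted use of an expensive, accurate one, can be sketched as follows. This is a minimal illustration only; the sensor functions, thresholds, and cost values are hypothetical assumptions, not part of any DOLL system.

```python
# Hypothetical sketch: run a cheap, inaccurate sensor every cycle and
# spend the expensive, accurate sensor only when the cheap one alerts.

def cheap_sensor(scene):
    # Fast but noisy: returns a rough detection score in [0, 1].
    return scene.get("rough_score", 0.0)

def expensive_sensor(scene):
    # Slow but accurate: returns a confident detection score in [0, 1].
    return scene.get("true_score", 0.0)

def active_perceive(scene, alert_threshold=0.3, confirm_threshold=0.7):
    """Return (detected, cost), escalating only on a cheap-sensor alert."""
    cost = 1  # the cheap sensor always runs
    if cheap_sensor(scene) < alert_threshold:
        return False, cost  # no alert: skip the expensive sensor entirely
    cost += 10  # targeted follow-up with the expensive sensor
    return expensive_sensor(scene) >= confirm_threshold, cost

# A quiet scene costs 1 unit; an alerting scene pays for confirmation.
quiet = {"rough_score": 0.1, "true_score": 0.0}
alert = {"rough_score": 0.8, "true_score": 0.9}
```

The point of the sketch is the asymmetry: most sensing cycles pay only the cheap cost, while the expensive resource is focused where the model of the situation says it is most likely to matter.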


In order to plan for and accomplish a complex set of interdependent activities, it is useful to have a specification of the goals and of the activities that must be accomplished to satisfy them. Motivated by the world of military planning, we use the term Mission Modeling to refer to capturing and representing all of the goals, preferences, and constraints, along with the detailed execution plan that satisfies each of them. DOLL's specification-based mission modeling approach supports top-down generation of detailed interdependent activity plans, provides an execution mechanism that ensures all activities are performed in the correct order, and continuously assesses the status of every subpart of the plan along with the aggregate status of the overall mission. Using this assessment mechanism during plan execution, the contribution of each successful or unsuccessful mission activity can be determined, allowing overall mission success to be maximized. Because assessment is continuous, failures are detected as they occur and the plan can be adapted, yielding mission resiliency. For a resource-constrained problem, these assessments allow the user and/or the system to drop from the plan those activities whose loss has the least negative impact on overall mission success. DOLL initially applied this approach to planning problems in the military cybersecurity domain, but it is applicable to any problem that can be decomposed into a collection of subtasks; for example, it can be applied to commercial domains such as manufacturing, and diagnosis and repair.
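The resource-constrained case can be sketched as a simple selection problem: drop the activities whose loss hurts overall mission success least until the plan fits the budget. The activity names, costs, and impact scores below are illustrative assumptions, not DOLL's actual mission-modeling representation, and the greedy strategy is only one possible policy.

```python
# Hypothetical sketch of resource-constrained activity selection:
# shed the least-impactful activities until the plan fits the budget.

def fit_to_budget(activities, budget):
    """activities: list of (name, cost, impact) tuples.
    Returns (kept_names, dropped_names)."""
    kept = sorted(activities, key=lambda a: a[2])  # least impact first
    total = sum(a[1] for a in kept)
    dropped = []
    for activity in list(kept):
        if total <= budget:
            break  # plan now fits the resource budget
        kept.remove(activity)        # drop the lowest-impact activity
        dropped.append(activity[0])
        total -= activity[1]
    return [a[0] for a in kept], dropped

plan = [("scan", 2, 0.9), ("patch", 3, 0.8), ("log-audit", 2, 0.2)]
# With a budget of 5, "log-audit" (least mission impact) is dropped.
```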


DOLL has developed active cyberdefense systems and has evaluated them on testbeds that simulate a variety of cyberattacks, including denial of service, corruption through malware, exfiltration, and process termination.  DOLL's approach is intelligent and dynamic in that it treats an attack as a battle for which defensive missions must be planned and analysis and compensation resources must be allocated optimally.  Thus, the approach leverages DOLL's Mission Modeling technology.

The system incorporates sensor fusion filters, hypothesis generators, and state estimators to develop mission situation awareness.  The system uses these both to respond tactically to signs of corruption in key components and to look strategically for longer attack plans in progress. The first step in tactical processing is to identify how the detected events affect the health of mission components.  The estimated state includes the level of trust in each component.  Based on hypotheses regarding cyberattack patterns, and subsequent tests to confirm or refute an attack, the system may decide that a particular component can no longer be trusted.
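The hypothesize-test-downgrade cycle over component trust can be sketched as follows. The multiplicative update rule, penalty factor, and threshold are illustrative assumptions, not DOLL's actual state estimator.

```python
# Hypothetical sketch: maintain a trust level per component and lower
# it each time an attack hypothesis about the component is confirmed
# by a follow-up test; below a threshold, the component is distrusted.

def update_trust(trust, component, test_confirmed, penalty=0.5):
    """Return a new trust table after a confirmation test on component."""
    new = dict(trust)
    if test_confirmed:
        # Each confirmed hypothesis halves the component's trust.
        new[component] = new.get(component, 1.0) * penalty
    return new

def is_trusted(trust, component, threshold=0.4):
    # Unknown components start fully trusted in this sketch.
    return trust.get(component, 1.0) >= threshold

trust = {"host-a": 1.0}
trust = update_trust(trust, "host-a", test_confirmed=True)  # trust 0.5
trust = update_trust(trust, "host-a", test_confirmed=True)  # trust 0.25
```

A single confirmed test leaves the component trusted but suspect; repeated confirmations push it below the threshold, at which point the system may decide it can no longer be trusted.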

The system also includes resource allocation capabilities that assign hosts to tasks.  One way of responding to an attack is to reconfigure components, possibly instantiating a component with a particular task on a new host if the old host is thought to be compromised. Information about task constraints and priorities is used to decide the optimal allocation of hosts to component-task combinations.
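A simplified version of this allocation can be sketched as assigning the highest-priority tasks first, while excluding hosts believed to be compromised. The task names, priorities, one-task-per-host constraint, and greedy strategy are all illustrative assumptions; a real allocator would solve a richer constrained-optimization problem.

```python
# Hypothetical sketch: assign tasks (highest priority first) to
# trusted hosts only, so tasks migrate off compromised hosts.

def allocate(tasks, hosts, compromised):
    """tasks: list of (name, priority); hosts: set of host names.
    Returns {task_name: host}, one task per trusted host."""
    free = [h for h in sorted(hosts) if h not in compromised]
    assignment = {}
    for name, _priority in sorted(tasks, key=lambda t: -t[1]):
        if free:
            assignment[name] = free.pop(0)  # give the next trusted host
    return assignment

tasks = [("sensor-fusion", 3), ("logging", 1), ("estimator", 2)]
hosts = {"h1", "h2", "h3"}
# If h1 is compromised, the two highest-priority tasks land on h2 and
# h3, and the lowest-priority task goes unassigned until a host frees up.
```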