KAI Framework

This short whitepaper describes the overall approach and functional capabilities of the Knowledge Analytics (KAI) suite of analytics tools (Tool Suite), framework, and technologies for building complex decision support systems and analytic functionality. These tools and technologies can be applied to multiple domains and to systems that record high volumes of data and can benefit from “automated intelligence”: multivariate rule-based decision support, data mining, and predictive analytics designed to detect “signals” such as associations across instances of large volumes of aggregated data. Typical systems include intrusion detection and security analytics, enterprise-level Electronic Health Records (EHRs) and large data registries in health care, and systems that provide location-aware services, library management, fraud detection, and data validation.

Our approach addresses the high cost of software development that results from error-prone exchanges of information between requirements managers/analysts (e.g. Subject Matter Experts (SMEs)) and software developers. It also addresses the well-known “software brittleness” problem: the increasing difficulty and expense of fixing existing software that may appear reliable but fails badly when presented with unusual data, or when it must be altered in a seemingly minor way. The KAI tools enable SMEs and domain analysts to build and maintain sophisticated intelligent systems without the need for architects, designers, and programmers. In short, the KAI tool suite allows SMEs and analysts to build and iteratively improve the behavior of intelligent systems. These capabilities are provided by an advanced information modeler, a state-of-the-art rule-based system, integration with statistical tools, and an SOA-based architecture.
For the purposes of this paper, we have selected an easy-to-understand application to illustrate how the framework can be used. The application is defined as follows: diabetic patients of a large healthcare system use a Web-based or fat-client application to enter multiple daily glucose readings from a glucose test monitor located in their home, for the purpose of monitoring how actual daily blood sugars taken at specific times of the day (e.g. pre-breakfast, bedtime) compare against guidelines-based recommended daily glucose limits that evidence has shown can minimize the risks of diabetes complications (e.g. heart disease, kidney failure). The results across all patients are asynchronously sent to a server that aggregates the results for each patient and uses the application to perform analytics on the raw data, evaluate patient-specific or population-level data against rules, and execute actions that notify care providers of population-level care quality inferences from the data, or of patient-specific deleterious trends that warrant real-time alerts of needed adjustments to the treatment plan (e.g. recommend dietary changes, add a new medication, or intensify/modify the amount of insulin taken). Example rules could be: 1) “identify patients in the population whose daily glucose 95% confidence intervals indicate their estimated A1c to be > 8%”; or 2) “identify patients whose 30-day average of pre-breakfast glucose is > 180 mg/dL and who are on bedtime insulin”, and send a message to both the primary care provider and the patient about the potential need to increase bedtime insulin. So how is it possible to accomplish these objectives? The following diagram provides an overview of the KAI offering. Our capabilities can be separated into two distinct functional areas: a.
the Tool Suite, which allows SMEs and analysts to define the information about their domain and application in the form of an information model, a behavioral model, and analytical data (Specification Phase); and b. the Framework, which implements the operational system.

Here are the capabilities of the KAI technologies that support such development by SMEs. A Model Editor allows SMEs and analysts to define an information model with “ontological structure” that supports computable expressions of concepts and relationships within the domain of interest (e.g. evidence-based diabetes care pathways/guidelines). The KAI model is compatible with UML and OWL in many important ways and allows users to define subsumption (class/subclass), abstract and persistent data types, aggregation, and assertions of ontological meaning. Since this model forms the basis for other artifacts, including database schemas, instances, GUIs, rules, parsers/messaging, and analytics, it contains more information than is normally included in models developed with UML or OWL technologies. The KAI tool suite provides a model editor in which the model can be collaboratively developed or shared/imported via file transfer. For our example, SMEs such as diabetes educators or endocrinologists could use the KAI tool suite to define abstract and persistent types that provide for all of the functional capabilities required by the application: data entry, data validation, rule processing, analytics, etc. As an example, an abstract base type could be an “Observation” with parameters such as “Observation Name”, “SNOMED CT code”, “result”, “units”, “date/time”, “Observer ID”, and “Reporter ID”, together with a persistent “Glucose Observation” type that extends “Observation”, since “Observation” is defined as an abstract type that could apply to any type of test or reading.
The “Glucose Observation” type could extend the “Observation” parameters with properties such as how the observation relates to a meal (e.g. the ontological assertion that a reading is “pre-breakfast”), meter type, acceptable limits, critical values, etc. SMEs also provide ontological (contextual) meaning to types and parameters using the KAI Model Editor. For our example application, the model would include patient and provider demographics (to allow identification and to inform notification/alerting services) and models of how key concepts (e.g. “diabetes”, “glucose reading”, “bedtime insulin”) are related and can be abstracted at the appropriate level of granularity from healthcare system medical records and daily patient reports (e.g. via semantic associations with standard terminologies such as ICD-9, SNOMED CT, LOINC, and RxNorm). The KAI tool suite provides the capability to compile the information model into executable code. Currently, we translate the model into Java code, where the translated code inherits framework classes with hooks into the KAI framework. These hooks provide parsing, messaging, fact model extraction, validation, and other framework functionality. We will provide translation to other introspective languages such as C# in the future. For example, the “Glucose Observation” type is translated into a Java class called GlucoseObservation.java with “get” and “set” methods for each parameter defined in the type, and then compiled.

The KAI Framework and tool suite provide two types of persistence: database and XML flat files. The persistence mechanism handles instances of the model that are created, deleted, or modified. The decision to use XML or a database has various ramifications and depends on the application being developed. Generally, one would select XML persistence for small to medium-sized systems where a high level of performance is critical, whereas database persistence would be selected for medium to large data-intensive systems.
In both cases, the KAI tool suite automatically generates the database and XML schemas and the access methods used by the persistence framework. The tool suite translates the model into database and XML schemas, so there is a table called “GlucoseObservation” that defines an attribute (column) for each parameter in the model type. At this point, the SMEs and domain analysts have defined their information model and, through the tool suite, have compiled the model and generated the database or XML schemas and access methods. So how are instances generated, transmitted to the web service, validated, and persisted? There are two mechanisms to do this: GUIs and legacy data adaptors.
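As a concrete illustration of the model-to-Java translation described above, the generated classes might take roughly the following shape. This is a minimal sketch only: the real generated code also inherits KAI framework classes with parsing, messaging, and validation hooks (omitted here), and the field names are abbreviated from the example types.

```java
/** Illustrative shape of a generated abstract base type; the actual generated
 *  code would also extend KAI framework classes (omitted in this sketch). */
abstract class Observation {
    private String observationName;
    private String snomedCtCode;
    private double result;
    private String units;
    private String observerId;

    public String getObservationName() { return observationName; }
    public void setObservationName(String v) { observationName = v; }
    public double getResult() { return result; }
    public void setResult(double v) { result = v; }
    public String getUnits() { return units; }
    public void setUnits(String v) { units = v; }
    // ... remaining get/set pairs follow the same pattern
}

/** Persistent subtype extending the abstract base with glucose-specific parameters. */
class GlucoseObservation extends Observation {
    private String mealRelation;  // ontological assertion, e.g. "pre-breakfast"
    private String meterType;
    private double criticalHigh;  // critical-value limit

    public String getMealRelation() { return mealRelation; }
    public void setMealRelation(String v) { mealRelation = v; }
    public double getCriticalHigh() { return criticalHigh; }
    public void setCriticalHigh(double v) { criticalHigh = v; }
}
```

The same type definition drives the database table and XML schema, so a “GlucoseObservation” row or XML element would carry one attribute per field shown here.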


Behavioral Model (Rules and Rule Editor)

The discussion above describes how SMEs can define and compile an information model, provide persistence through a database or XML flat files, and then generate instances of the model through automated GUIs. Instances arrive at the framework web service in XML messages and are then parsed, validated, and persisted, ready for intelligent processing. So how do SMEs define what should happen when instances of the model they define are transmitted to the system? They define a Behavioral Model with executable rules that define the behavior of the system.

Rules are defined in two parts: the conditional statement (when some condition occurs) and the action that should happen if the condition is true. If the condition is true, the rule is said to “fire”. The KAI tool suite provides a Rule Editor that allows SMEs to define rules that are evaluated when events occur within the system (e.g. a new “Glucose Observation” is received). There are three components of rule processing and execution:
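Before turning to those components, the two-part condition/action shape of a rule can be sketched in plain Java. This is illustrative only: it is not the KAI Rule Editor’s own syntax, and the names here are hypothetical.

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

/** Illustrative condition/action pairing; not the KAI Rule Editor's syntax. */
public class RuleSketch {

    /** Evaluate one rule against an event: the rule "fires" (and its action
     *  runs) only when the condition holds. Returns whether it fired. */
    static <T> boolean evaluate(T event, Predicate<T> condition, Consumer<T> action) {
        if (condition.test(event)) {
            action.accept(event);
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // The paper's example rule: 30-day pre-breakfast average above
        // 180 mg/dL for a patient on bedtime insulin.
        double thirtyDayAverage = 195.0;
        boolean onBedtimeInsulin = true;
        evaluate(thirtyDayAverage,
                 avg -> avg > 180.0 && onBedtimeInsulin,              // condition
                 avg -> System.out.println(
                         "notify provider and patient: consider "
                         + "increasing bedtime insulin (avg " + avg + ")")); // action
    }
}
```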

Analytics Engine

The KAI tool suite provides analytical tools that can be used by analysts via the tool suite or by the Rule Engine. For instance, an analyst might want to know the median or mean blood glucose reading taken in the morning over the past month, together with the variance, which could be associated with an A1c. Rules can be defined to use statistical-package plugins that augment the Analytics Engine, for example when a rule uses an expected value or a risk probability in its condition.
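A minimal standalone sketch of the kind of statistics a rule might request is shown below. The KAI analytics plugins themselves are not specified here; these are textbook formulas (sample variance, a normal-approximation 95% confidence interval, and the published ADAG regression relating mean glucose to estimated A1c, eA1c% = (mean mg/dL + 46.7) / 28.7), not the framework’s API.

```java
import java.util.Arrays;

/** Minimal standalone statistics of the kind a rule might request from the
 *  Analytics Engine; illustrative formulas, not the KAI plugin API. */
public class GlucoseStats {

    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(Double.NaN);
    }

    /** Sample variance with the n - 1 denominator. */
    static double variance(double[] xs) {
        double m = mean(xs);
        return Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum() / (xs.length - 1);
    }

    /** Approximate 95% confidence interval for the mean (normal approximation). */
    static double[] ci95(double[] xs) {
        double half = 1.96 * Math.sqrt(variance(xs) / xs.length);
        double m = mean(xs);
        return new double[]{m - half, m + half};
    }

    /** ADAG regression: estimated A1c (%) from mean glucose in mg/dL. */
    static double estimatedA1c(double meanGlucoseMgDl) {
        return (meanGlucoseMgDl + 46.7) / 28.7;
    }
}
```

A rule such as example 1 in the application description could then test whether the confidence interval for a patient’s mean glucose maps to an estimated A1c above 8%.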

The KAI Framework comprises system security, enterprise messaging, framework services, and other capabilities, combined into an Enterprise Service Bus (ESB) where messages and events are received or generated, validated, persisted, and intelligently acted upon in a way that satisfies our users’ needs. Events enter the framework as web service transmissions or internal timer events and are acted upon as defined by rules implemented in the rule engine. Events contain either model-formatted messages or externally defined messages (such as HL7 messages or any other type of message). External messages are transmitted to a framework adaptor that translates each message into an internal model-defined message and places it on the bus. The framework is composed of adaptors, plugins, and services combined in a way that provides horizontal scalability, fault tolerance, and extensibility.

Background

Technologies and principles used in the KAI tools and framework can be traced back to our seminal work on “data fusion” and “targeting” systems for the DoD and intelligence community. These systems were designed to detect “signals” from diverse data sources in the presence of uncertainty and noise. A foundational effort along these lines was the development of a web-based surveillance and targeting system for the intelligence community, designed to use real-time multi-source intelligence to rapidly identify battlefield targets and other possible threats. These threats and targets were then combined with ontological reasoning and rules to help accelerate the decision-making process performed by human intelligence analysts. We developed and deployed complex algorithms designed to perform multi-modal data interpretation processes.
The reasoning and analysis tasks were directed at integrating, or fusing, “imperfect cues” contained in dissimilar ISR sensor data (ELINT, MSI, SAR) so as to establish, with a degree of probabilistic certainty, the presence, identity, and current/future position of a “target”. In a manner analogous to human sensory integration models, automated processes were designed to follow a natural data abstraction and association/correlation hierarchy to find “hidden” patterns or associations in large intelligence data stores. These tasks required the concurrent integration of numerous complex tools and algorithms (e.g. maximum likelihood/MHT, Bayesian accrual, Dempster-Shafer, neural networks) drawn from distinct technical disciplines: probability/estimation theory, image processing/pattern recognition, sensor-specific digital signal processing, vehicle dynamics, and artificial intelligence.

Another related effort applied similar technologies in support of mobile warfighters. In this instance, semantics and rules were used to evaluate sources of intelligence information. Ontologies and semantic reasoners were employed to deduce threat associations, distill meaningful reports from large volumes of raw data (e.g. tactical imagery, SIGINT reports), and transmit compressed warnings/advisories (“actionable intelligence”) to mobile war-zone users via handheld communications and computing devices.

Accomplishments

The KAI Tool Suite, Framework, and Technologies are continually refined and enhanced, with new capabilities added frequently. The following advanced capabilities are already functional:

1. Model and ontological specification (Model Editor)

2. Model translation and compilation (currently to Java)