Friday, July 26, 2013

Business Intelligence Overview: Companies Remove Guess-Work To Make Sound Decisions

Companies seeking to remove the guesswork from important business decisions have turned to business intelligence solutions. With business intelligence, companies can use facts (based on historical data) to determine which promotional campaigns were successful and which were not. Companies can also view product or service performance to understand which products or services are needed most in a given location. And companies can get to know their customers better to build more compelling brands and marketing strategies. Today, even more complex buying and other patterns can be discovered through data mining, which is discussed later in this post.

In short, business intelligence answers business questions to help companies improve their performance and sales. And, in this sluggish economy, more and more businesses are turning to business intelligence to make sure they make the right choices. This post provides an overview of business intelligence and the Microsoft technologies used to support business intelligence solutions.

What Is Business Intelligence?


Business Intelligence (BI) gives organizations the ability to store and use data captured over the years. With a BI system, organizations can use their historical data to view product or service performance. They can also make predictions about who is most likely to buy their product or service, as well as identify the aspects that most likely make a customer return for repeat purchases. Companies can also evaluate the usefulness of their promotional campaigns. For example, sales and marketing managers can view the sales that resulted from a promotional campaign. And, as companies capture the demographic makeup of their customers, they can use that information to know who is buying what. Some companies have even begun to use BI with their eCommerce systems. For example, Market Basket Analysis is used to recommend products to customers based on the items in the customer's shopping cart.

BI systems not only give organizations the ability to make informed decisions; they also provide an opportunity to select key performance indicators (such as sales targets, product performance or sales by salesperson) to monitor an organization's performance. Some companies set targets (i.e., indicate the sales the company should reach by a given period) and use scorecards to monitor the company's performance. These are just a few of the capabilities BI systems bring to the table. The following paragraphs discuss how BI systems are designed and the Microsoft products used to build them.

Database Processing for Business Intelligence Systems



Traditional business applications use Online Transaction Processing (OLTP). With OLTP systems, the data in the database is expected to change frequently. New data is frequently added, existing data is updated, and tables may even change. In addition, traditional relational database concepts are used to relate the tables. This approach makes it easier for the business to make updates and perform queries against the changing data. Analyzing data in an OLTP system is difficult, however, because the data constantly changes due to the transactions performed against it.

On the other hand, an organization that wants to analyze data (as well as gain a better understanding of its business performance, customers, etc.) can implement a data warehouse. Unlike OLTP systems, the data warehouse supports Online Analytical Processing (OLAP) and is designed solely to support analyzing and retrieving data. Transactions are not performed against a data warehouse, and updates are only made to correct errors in the existing data. Other than that, the existing data never changes. Instead, new data is cleansed (by removing partial or corrupt data) and then added to the data warehouse so organizations can use historical data for decision support.

One way a data warehouse can be populated and updated is with Microsoft SQL Server Integration Services (SSIS). A database administrator, or other skilled personnel, can create packages that extract data from source systems, transform it (i.e., convert it into a relational or other target format), and then load it into tables in the data warehouse. This process is commonly referred to as the Extract-Transform-Load (ETL) process.
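The following is a minimal, conceptual sketch of an ETL step written in Python. It is not SSIS (which provides this functionality through graphical packages); the file name, column names and cleansing rule are hypothetical and only illustrate the extract, transform and load phases.

```python
import csv
import sqlite3

# Extract: read raw sales rows from a hypothetical CSV export of the OLTP system.
with open("daily_sales_export.csv", newline="") as f:
    raw_rows = list(csv.DictReader(f))

# Transform: cleanse by dropping partial rows and normalizing the amount to a float.
clean_rows = []
for row in raw_rows:
    if not row.get("customer_key") or not row.get("sales_amount"):
        continue  # remove partial/corrupt records before loading
    clean_rows.append((row["customer_key"], row["product_key"],
                       row["order_date"], float(row["sales_amount"])))

# Load: append the cleansed rows to a fact table in the warehouse (SQLite stands in here).
conn = sqlite3.connect("warehouse.db")
conn.execute("""CREATE TABLE IF NOT EXISTS FactSales
                (customer_key TEXT, product_key TEXT, order_date TEXT, sales_amount REAL)""")
conn.executemany("INSERT INTO FactSales VALUES (?, ?, ?, ?)", clean_rows)
conn.commit()
conn.close()
```

An SSIS package performs the same three phases, but each phase is configured as a data flow task rather than hand-written code.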

BI Database Design & Dimensional Modeling


If a company builds its own BI system using Microsoft products, the Microsoft SQL Server database is used to store the data. SQL Server Business Intelligence Development Studio (BIDS) is used to create the cubes used to retrieve the data from the data warehouse. The client application that displays the data may be built using Microsoft Office SharePoint Server, which has a Business Intelligence site. Microsoft Excel can also be used, or Visual Studio .NET can be used to build custom applications. (Note that information regarding the client applications will be covered in a future post that focuses on BI client applications, scorecards and dashboards.)

Many companies turn to Microsoft for their business intelligence solutions. This is particularly true of companies whose data is already stored in a SQL Server database. To build a business intelligence system, Microsoft SQL Server Analysis Services (SSAS) is installed as part of the SQL Server installation. SQL Server Analysis Services uses a server component implemented as a Microsoft Windows service.



Microsoft SQL Server Analysis Services supports XML for Analysis (XMLA) to share data with client applications.

The database part of the business intelligence system, commonly referred to as a data warehouse, includes dimension tables, fact tables and indexes. Dimension tables capture the descriptive data for each business activity. These tables include the details users will want to study to make important decisions about the business. For example, users may want to view a customer’s age, education, geographic location, marital status, primary language, etc. stored in the Customer dimension table. This information helps marketing managers and others better understand how to market to customers. The primary key of each dimension table is indexed to improve data warehouse performance.

Examples of dimension tables include Customers, Promotions, Employees, Departments, Products, etc. The following picture shows the Customer dimension table included in the AdventureWorks data warehouse, which Microsoft designed for a fictitious company.



The fact table in a data warehouse contains the numeric values (measures) for the data descriptions in the dimension tables. Hence, each dimension table is associated with a fact table. The data in fact tables is key to answering questions about the business. For example, if a company wants to track the effectiveness of a promotion, a Sales fact table is created with fields that capture the amount sold (per sale), quantity sold (per sale), amount the product cost the store (per sale), etc. The fact table is indexed on the primary key built from the foreign keys that link the dimension tables to the fact table. The dimension tables connected to this fact table would include the Product table, the Date table, the Promotion table, etc.



The modeling concept used to design a data warehouse database is called dimensional modeling. To design a database using this concept, tables are grouped into dimensional models. Each dimensional model includes a fact table to manage the facts related to the dimension table(s), as discussed above. The fact tables are then linked to the relevant dimension tables. With dimensional modeling, a model is created to represent the data that supports each business activity. For example, one model captures sales, another captures inventory, another captures employee activity, etc.

Data warehouse database designers apply either a star or a snowflake design. The complexity of the data determines which structure is used to design the model. The star is the simplest design. With the star design, each dimensional model has one or more dimension tables linked directly to a fact table using a primary key/foreign key relationship, as shown in the following example.
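As a rough illustration of the star idea (not the AdventureWorks schema itself; the table and column names below are made up), the following Python sketch joins a small fact table directly to two dimension tables and totals the sales measure by promotion:

```python
import pandas as pd

# Hypothetical dimension tables: descriptive attributes only.
dim_product = pd.DataFrame({"product_key": [1, 2],
                            "product_name": ["Road Bike", "Helmet"]})
dim_promotion = pd.DataFrame({"promotion_key": [10, 11],
                              "promotion_name": ["Summer Sale", "No Promotion"]})

# Hypothetical fact table: foreign keys to the dimensions plus numeric measures.
fact_sales = pd.DataFrame({"product_key": [1, 2, 2],
                           "promotion_key": [10, 10, 11],
                           "sales_amount": [1500.0, 45.0, 50.0]})

# In a star design the fact table joins directly to each dimension table.
report = (fact_sales
          .merge(dim_product, on="product_key")
          .merge(dim_promotion, on="promotion_key")
          .groupby("promotion_name")["sales_amount"].sum())
print(report)
```

In the warehouse itself these would be SQL Server tables related by keys; the sketch simply shows why the descriptive attributes live in the dimensions while the numbers live in the fact table.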


The snowflake structure is more complex than the star. With the snowflake structure, some dimension tables are linked to one another and then to the fact table using primary key/foreign key relationships. For example, a Product table might have a Product Category table related to it, with the fact table related to the Product table. The AdventureWorks data warehouse includes a Product table that has a Product Subcategory table related to it and a Product Category table related to the Product Subcategory table. This means there are three dimension tables related to one another to describe the products. The Product table is linked to the fact table, and other dimension tables, as applicable, are linked to the fact table based on the business activity being modeled. Notice, however, that this structure is more complex than the star, which does not use subsets of data to describe other dimension data. This is just one of several ways to design a snowflake structure.

Cubes & Data Mining

Once the data warehouse is in place, a developer can use SQL Server Business Intelligence Development Studio to build a cube, which is the component that interfaces with the database to retrieve the data. The cube stores the dimensions (from the selected dimension tables) and measures (from the selected fact tables). The dimensions and measures selected are based on a question the business wants to answer, or on information users want to analyze, such as sales performance.
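A cube is queried in SSAS with MDX, but the basic idea of slicing measures by dimensions can be sketched with an ordinary pivot table. The following Python example (hypothetical data, not SSAS) aggregates a sales measure across two dimensions:

```python
import pandas as pd

# Hypothetical detail rows already joined to their dimension attributes.
sales = pd.DataFrame({
    "year":         [2012, 2012, 2013, 2013, 2013],
    "region":       ["East", "West", "East", "West", "West"],
    "sales_amount": [120.0, 80.0, 150.0, 95.0, 60.0],
})

# Dimensions on the rows/columns, the measure aggregated in the cells,
# which is conceptually what a cube pre-computes for fast analysis.
cube_slice = pd.pivot_table(sales, values="sales_amount",
                            index="region", columns="year", aggfunc="sum")
print(cube_slice)
```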


If more complex analysis is desired, a data mining structure can be built from the cube using the Data Mining Wizard.


Data mining provides data warehouse users the ability to see patterns or correlations that exist in the data. However, these patterns would not be discovered without using algorithms to perform additional data processing. The following paragraphs outline a few of the Microsoft algorithms provided by Analysis Services.


Algorithms that Help With Inventory Management, Promotions, etc.



Many grocery stores, online eCommerce sites and other businesses capture and store customer transactions (also referred to as market basket transactions). This data is often analyzed to understand the combinations of items people purchase. For example, in cities that experience snow storms, consumers often buy bread, milk and wine the day before the storm. This information is used to understand consumers' purchasing behaviors to support inventory management and build more effective promotions. The algorithm that makes identifying these associations possible is the Microsoft Association Algorithm. In fact, some eCommerce sites use association rules to recommend additional items a customer might be interested in based on the items placed in the online shopping cart. Association analysis is also used to support medical diagnosis.
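The sketch below is not the Microsoft Association Algorithm itself; it is a small pure-Python illustration (with made-up baskets) of the underlying idea of counting how often item pairs occur together and reporting support and confidence for each rule:

```python
from itertools import combinations
from collections import Counter

# Hypothetical market basket transactions.
baskets = [
    {"bread", "milk", "wine"},
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "wine"},
    {"bread", "milk", "wine", "eggs"},
]

item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(frozenset(pair)
                      for basket in baskets
                      for pair in combinations(sorted(basket), 2))

# Report rules "A -> B": support = P(A and B), confidence = P(B given A).
for pair, count in pair_counts.most_common(5):
    a, b = sorted(pair)
    support = count / len(baskets)
    confidence = count / item_counts[a]
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```

A recommendation engine built on this idea simply looks up the rules whose left-hand side matches the items already in the shopping cart.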




Clustering People into Homogeneous Groups


The Microsoft Clustering algorithm groups cases in a dataset into clusters to reveal conclusions people probably would not reach without the algorithm. The processed data can be used to view groups with similar characteristics; or rules can be applied to omit those with similar characteristics so that only the anomalies remain. For example, the algorithm may present a cluster showing that women who buy paper plates on Friday also buy lipstick.
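Microsoft Clustering runs inside Analysis Services, but the general idea can be sketched with an off-the-shelf clustering routine. The following example (hypothetical customer attributes, not the SSAS algorithm) uses scikit-learn's k-means to group customers into three clusters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer attributes: [age, purchases per year, average basket value].
customers = np.array([
    [23, 4,  35.0],
    [25, 5,  40.0],
    [41, 20, 120.0],
    [44, 18, 110.0],
    [67, 2,  15.0],
    [70, 1,  12.0],
])

# Group the cases into three clusters of similar customers.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(model.labels_)           # cluster assignment for each customer
print(model.cluster_centers_)  # the "profile" of each cluster
```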

Making Predictions


The Microsoft Decision Trees algorithm is a classification and regression algorithm used for predictive modeling. For example, a company would use the Microsoft Decision Trees algorithm if the sales team wanted to predict whether a customer is likely to purchase a particular product or service. This ensures time is only spent marketing to customers who are likely to buy the product or service. The prediction would be based on information stored in the data warehouse, including buying patterns and demographic data.
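As a rough stand-in for the Analysis Services mining model, the scikit-learn sketch below trains a small decision tree on hypothetical demographic data and predicts whether a new customer is likely to buy:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training cases from the warehouse: [age, yearly income (thousands), owns_home].
X = [[25, 40, 0], [32, 55, 1], [47, 90, 1], [51, 85, 1], [23, 30, 0], [60, 45, 0]]
y = [0, 1, 1, 1, 0, 0]  # 1 = bought the product, 0 = did not

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Predict whether a new prospect is likely to buy before spending marketing effort.
new_prospect = [[45, 80, 1]]
print(tree.predict(new_prospect))        # predicted class
print(tree.predict_proba(new_prospect))  # class probabilities
```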

Predictions, Factors and Classifications


The Microsoft Logistic Regression algorithm is a highly flexible algorithm that supports multiple analytical tasks. For example, it can use demographic data to make predictions about a person's, or group of persons', risk for a certain disease. The algorithm may also be used to weigh the factors that contribute to a specific outcome; for example, which factors cause a customer to return to the same store.
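The brief scikit-learn sketch below (again with made-up data, not the SSAS algorithm) fits a logistic regression model and inspects the learned coefficients as a rough indication of how much each factor contributes to the outcome:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical factors per customer: [distance to store (miles), satisfaction score, received coupon].
X = [[1.0, 9, 1], [5.5, 4, 0], [0.5, 8, 1], [7.0, 3, 0], [2.0, 7, 0], [6.0, 5, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = customer returned to the store, 0 = did not

model = LogisticRegression().fit(X, y)

print(model.predict_proba([[1.5, 8, 1]]))  # probability a new customer returns
print(model.coef_)                         # relative weight of each factor
```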

These are just a few of the models available when using the Data Mining Wizard in Microsoft's SQL Server Business Intelligence Development Studio.

Summary


Companies seeking to remove the guesswork from important business decisions have turned to business intelligence solutions. With business intelligence, companies can use facts (based on historical data) to improve their performance and sales.

The database part of the business intelligence system, commonly referred to as a data warehouse, includes dimension tables, fact tables and indexes. Dimension tables capture the descriptive data for each business activity. The fact table contains the numeric values for the data descriptions in the dimension tables. The modeling concept used to design a data warehouse database is called dimensional modeling. Data warehouse database designers apply either a star or a snowflake design; the complexity of the data determines which structure is used.

Once the data warehouse is in place, a developer can use SQL Server Business Intelligence Development Studio to build a cube, which is the component that interfaces with the database to retrieve the data. The cube stores the dimensions (from the selected dimension tables) and measures (from the selected fact tables). If more complex analysis is desired, a data mining structure can be built from the cube using the Data Mining Wizard.

Data mining provides data warehouse users the ability to see patterns or correlations that exist in the data; but these patterns would not be discovered without using algorithms to perform additional data processing. SQL Server Business Intelligence Development Studio includes a number of algorithms to help organizations discover patterns, increase sales and build a more loyal customer base.

To learn more about Business Intelligence visit Microsoft's Business Intelligence home page. To learn more about Microsoft SQL Server Business Intelligence Development Studio visit Introducing Business Intelligence Development Studio.


Sunday, July 7, 2013

Medical Information Technology: Electronic Medical Records

Gone are the days when nurses had to wear a wrist watch, with a second hand, to count the patient's pulse and respirations. And nurses no longer stand grasping an oral thermometer for 60 seconds to take the patient's temperature. These are just a couple of the patient care tasks that are easier to perform thanks to the up-to-date technology used in many hospitals. This article provides a basic overview of why the use of technology has increased in the healthcare industry. It also takes a look at some electronic medical record capabilities. And it discusses interface engines, the technology used to share medical data.

Why Technology Has Become Popular


In March 2010, the 111th Congress passed the Patient Protection and Affordable Care Act (commonly referred to as the Affordable Care Act), and President Obama signed the Act into law on March 23, 2010. The Affordable Care Act "encourages the use of, integration with, and coordination of health information technology (including [the] use of technology that is compatible with electronic medical records and personal health records) ..."

While the Act encouraged the use of technology; the real incentive came through the promise of Meaningful Use incentive payments. The Centers for Medicare & Medicaid Services (a division of the US Department of Health & Human Services) promised two forms of incentive payments: One from Medicare and the other from Medicaid. The Medicare and Medicaid Electronic Health Record (EHR) Incentive Programs promised incentive payments to eligible professionals and medical facilities as they adopt, implement or upgrade and demonstrate meaningful use of certified EHR technology. The Medicare EHR Incentive Program promised up to $44,000; and, the Medicaid EHR Incentive Program promised up to $63,750.

A Look at the Electronic Medical Record Capabilities

45 CFR Part 170 - Health Information Technology Standards, Implementation Specifications, and Certification Criteria and Certification Programs for Health Information Technology defines the capabilities and certification criteria electronic health records must meet. The following section outlines some of the information found in 45 CFR Part 170.

Data Standardization

One key aspect of the Electronic Health Record (also referred to as the electronic medical record) is that patient care information is captured using standardized data. The benefits of the health industry adopting standards to classify diseases, procedures, treatments, etc. are compelling. For example, hospitals can share syndromic surveillance data (used for early detection of biologic terrorism) with the Centers for Disease Control and Prevention (CDC). And the CDC's system is able to store the data received and generate accurate reports even though the data came from different organizations and different systems.

Following is a list of some of the data standards used by electronic medical record systems:
  • ICD-9-CM / ICD-10-CM for diagnoses and inpatient procedures
  • SNOMED CT for clinical problems and findings
  • LOINC for laboratory tests and observations
  • RxNorm for medications
  • CPT-4 / HCPCS for procedures and billing
  • CVX codes for immunizations
End-User Tasks

This section outlines some of the software capabilities found in an electronic health record. It also includes example screenshots from CPRS, the electronic medical records system implemented by the US Department of Veterans Affairs.
  • Real-Time Status Updates: When a user first logs into the system, the user sees the following cover sheet, which provides real-time status updates on the selected patient.
  • Record a Patient's Problem: The ability to document the patient's problem using ICD-10 or other applicable standard is necessary.
  • Electronic prescribing: A facility should be able to electronically transmit a patient's prescription directly to a pharmacy.
  • Electronic submission to public health agencies for surveillance or reporting: Facilities should be able to transmit messages to the Centers for Disease Control and Prevention or other organizations.
  • Capture a Patient's Progress: This capability ensures medical staff can add daily notes about a patient's response to treatment and care.

  • Audit Trail for User Actions: The Electronic Health Record should record actions by documenting the date, time, patient identification, and user identification associated with data that is created, modified, accessed, or deleted.
  • Perform Drug-drug, drug-allergy interaction checks: "The Electronic Health Record should automatically and electronically generate and indicate in real-time, notifications at the point of care for drug-drug and drug-allergy contraindications."
  • Capture, store & manage diagnosis information.
  • Generate clinical reminders: The system should generate clinical reminders based on relevant clinical data to support preventive care. For example, based on a man's race, age and other demographics a reminder may prompt the facility to contact the patient to schedule prostate cancer screening.

  • Order Patient Medications, Labs, X-rays/Images: The electronic health record should provide a way for physicians to order patient medications, labs, and x-rays/images.
The following pictures show additional capabilities provided through CPRS. For example, staff can record a discharge summary.

Staff can capture and manage autopsy results that help determine the cause of death. In CPRS this information is accessed using the Labs tab.

 Nurses can also electronically document a patient's admission assessment.


Note: The US Department of Veterans Affairs also built VistA Web, a web-based user interface that allows practitioners to view a patient's record from any location. The premise was that patients should be able to receive care from any clinic or hospital. Unfortunately, many employees don't know about VistA Web and even fewer know how to use it. This shows that no matter how impressive the features of a system are, they are useless without end-user training.

For additional information on Electronic Health Record capabilities review 45 CFR Part 170.

How Medical Information Technology Is Integrated


For years many hospitals used individual systems for laboratory transactions (i.e., capturing lab results), for admitting/discharging patients and for managing patient accounts and billing. But there was an imbalance: some information was electronic and some was not, and the individual systems could not easily share what they captured. When it was decided that health systems should be able to share data (both internally and externally), someone needed to define a way in which health data could be passed from one system to another.

An organization called Health Level Seven International (HL7) developed a group of health standards that outline the message types, formats, segments, fields, data types and symbols that can be used to share health data. Today these building blocks are used to construct HL7 messages that carry health data from one system to another.

The name HL7 is used for both the standards and the organization because HL7 messages are exchanged at the seventh layer (also called the application layer) of the Open Systems Interconnection (OSI) model.



One might expect an HL7 message to look like an email message, but it does not. Instead, these messages are a combination of data and symbols, and developers typically use an HL7 mapping tool to work with them. Following is an HL7 message displayed using the standard encoding rules. (An XML encoding of HL7 v2 messages also exists, although it is not shown in this article.)

MSH|^~\&|TBHS|The Best Hospital System|SBHS|Second Best Hospital System|20130701143619||ADT^A04|MSG.0|D|2.4
EVN||20040328134602.1234+0600||03
PID|1||A74328~A74333~A74331||Doe^John^Ellie|Doer|20040328134602.1234+0600|M|||3117 Clopper Road&90238^^Atlanta^GA^30076-1455||404-693-3432^^^joe.doe@mybus.com~(404)555-2312^^^Jan.doe@herbus.com~(404)221-7218^^^jane@herbus.com~(404)463-9900|||||A743324335||IN35489768457^G^19980319
PV1|1|C||U|||3472390832^Reinhold^Wesley|3472390832^Simmons^Connor|||||||||||A74324|||||||||||||||||||||||||20040328134602.1234+0600|20040328134602.1234+0600|1243.00|43824.12||3897.00


There are several important things to know about an HL7 message. First, there are various types of HL7 messages. Each message type is identified by its three-letter code and a trigger event code. For example, the above message is an ADT-A04 message. ADT tells us the message is associated with patient admit/discharge/transfer activity. A04 is the trigger event, which is registering a patient. Whenever a patient is registered as part of the patient care admittance or visit process, an ADT-A04 message can be sent to store the patient's identity and other patient registration information. There are a number of HL7 messages that can be sent based on events that occur throughout the patient care process. A few of the HL7 messages are as follows:

ADT-A01 - Admit/Visit Notification
ADT-A02 - Transfer a Patient
ADT-A03 - Discharge/End Visit
ADT-A04 - Register a Patient
ADT-A06 - Change an outpatient to an inpatient
ADT-A07 - Change an inpatient  to an outpatient
ADT-A11 - Cancel Admit/Visit Notification
ADT-A12 - Cancel Transfer
VXU-V04 - Unsolicited Vaccination record update
SSR-U05 - Specimen Status Request

Every HL7 message is made up of segments. Some segments are required; others are not. And the segments included vary from message to message, although the MSH segment is included in every message because it contains important information about the message itself.

Within a segment there are fields that contain information relevant to the associated event. Following is a table that outlines the segments and descriptions included in the ADT-A01 message:

Segment - Description
MSH - Message header
EVN - Event type
PID - Patient identification data
PV1 - Patient visit details
[{OBX}] - Observation/result information
[{AL1}] - Patient allergy details
[{DG1}] - Diagnosis details

The above table shows the segments that make up the ADT-A01 message. As previously mentioned, each segment includes fields that contain data related to the event. For example, the PID segment includes information about the patient such as family name (last name), given name (first name), mother's maiden name, date/time of birth and patient address, along with some demographic data fields. Notice some of the above segments are enclosed in square brackets []. The brackets mean the segment is optional, so you may or may not see a message with the OBX, AL1 and DG1 segments. Curly braces {} mean the segment may repeat, so additional information (for example, multiple allergies) may be included. The following picture shows how the data may look using an HL7 developer mapping tool.


Just as fields in a database table are assigned a data type, fields in an HL7 message are also assigned data types. Some of the common data types include string, date/time, money and numeric. In addition to fields (that hold data), HL7 messages include symbols. For example, fields are separated by a field separator that looks like this: | . If a field does not contain any data, the separator is still included, so an empty field followed by a populated field looks like this: ||field with data|. Adding a field separator, even if no data is included, ensures each field falls in its correct position based on the HL7 standard's definition of the message.

If a field has more than one part, a component separator ^ is used. For example, the field |Doe^John^I^^| includes multiple components such as the patient's last name, first name and middle initial.


The ~ character is used to identify a repeating field. The following image shows an example of this character and how the repeating field works.

HL7 also includes an escape character, the backslash: \ . This character is combined with other characters to ensure message field data is presented in its proper context. For example, if message data needs to include an ampersand (&), the ampersand is written in the message as \T\ so the data is relayed as an ampersand. This is necessary because the & symbol is also an encoding character in HL7 (the subcomponent separator).

As previously mentioned, all fields in a segment are accounted for through the field separators. Within a message, the identity of the segment always comes first. The following partial message shows the MSH segment. After the segment identity, the first position holds the field separator itself. The second position defines the encoding characters ^~\&. The third position includes the name of the sending application (TBHS). The fourth position includes the name of the sending facility (The Best Hospital System), and so on.
 

MSH|^~\&|TBHS|The Best Hospital System|SBHS|Second Best Hospital System|20130701143619||ADT^A04|MSG.0|D|2.4

The MSH segment of every message will always contain the same fields because segments are defined by the standard. Through the standard's definition systems built by different organizations or even different developers can share data because everyone knows the rules regarding what is or is not acceptable.
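To make the field positions concrete, here is a small Python sketch (just string splitting, not a production HL7 parser) that breaks the MSH segment above into its fields and then splits one field into components:

```python
# The MSH segment from the example above.
msh = ("MSH|^~\\&|TBHS|The Best Hospital System|SBHS|"
       "Second Best Hospital System|20130701143619||ADT^A04|MSG.0|D|2.4")

fields = msh.split("|")          # split on the field separator
sending_app      = fields[2]     # "TBHS"
sending_facility = fields[3]     # "The Best Hospital System"
message_type     = fields[8]     # "ADT^A04"

# The message type field itself has components separated by ^.
code, trigger_event = message_type.split("^")
print(sending_app, sending_facility, code, trigger_event)
```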

Defining the Data to Share


Whether two systems in the same medical facility want to share data, or two systems in two different facilities want to share data, the information that will be shared must be defined. Most organizations produce a guide that describes the HL7 messages to be shared and relevant information about those messages. For example, the Centers for Disease Control and Prevention (CDC) developed an HL7 Implementation Guide describing the immunization messages that can be shared. The US Department of Veterans Affairs produced an HL7 Interface Specification, or handbook, that contains the HL7 message information. Regardless of what the guide is called, it must include details about the HL7 messages, segments and fields that will be shared.


These documents typically describe the message types that will be shared, the segments and fields, and which fields may repeat. The guides also define which fields are required. If this information is not captured, sharing data becomes a problem. For example, if a receiving system requires data that the sending system does not include, the receiving system will not accept the message. In addition, the acceptable field lengths are included in the document. And, if Z segments (which are locally defined segments not outlined in the HL7 standards) will be used, these segments are also described in detail along with other HL7 message details.

HL7 Interface (or Integration) Engines


To share data between medical systems (whether internal or external) an interface engine is required. An HL7 Interface Engine is created by installing the applicable interface engine software on a computer running the appropriate operating system.

An HL7 message mapping tool must be used to map the HL7 segment fields to database table fields. This ensures the interface engine knows which database table fields the HL7 segment fields are associated with. A software developer also must build two components (if the interface engine doesn't come with them). These components reside on the interface engine. One component assembles database table data into an HL7 message; the interface engine then sends the message to the receiving system. The second component strips away the HL7 symbols so only data that can be inserted into a database table remains. Interface engines include listeners that watch for incoming HL7 messages. As messages are received they are processed and the data is inserted into the applicable database tables.
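As a very simplified illustration of that second (inbound) component, the Python sketch below strips the HL7 delimiters from a PID segment and inserts the recovered values into a database table. The table and column names are hypothetical, and a real engine would rely on the mapping tool's configuration rather than hard-coded positions:

```python
import sqlite3

pid = "PID|1||A74328||Doe^John^Ellie||19800328|M|||3117 Clopper Road^^Atlanta^GA^30076"

fields = pid.split("|")
family, given, middle = fields[5].split("^")      # PID-5: patient name components
patient_row = {
    "patient_id":    fields[3],                   # PID-3: patient identifier
    "last_name":     family,
    "first_name":    given,
    "date_of_birth": fields[7],                   # PID-7: date/time of birth
    "sex":           fields[8],                   # PID-8: administrative sex
}

conn = sqlite3.connect("clinical.db")
conn.execute("""CREATE TABLE IF NOT EXISTS patients
                (patient_id TEXT, last_name TEXT, first_name TEXT,
                 date_of_birth TEXT, sex TEXT)""")
conn.execute("INSERT INTO patients VALUES (:patient_id, :last_name, :first_name, "
             ":date_of_birth, :sex)", patient_row)
conn.commit()
conn.close()
```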

 

Another message type worth discussing is the acknowledgment, or ACK, message type. Most interface engines can be configured to support acknowledgment messages, which are particularly useful for messages sent between two different local area networks (LANs). When a sending interface engine sends a message, the receiving interface engine can return an acknowledgment message. The acknowledgment includes information identifying the message received. One key purpose of the ACK is to let the sending system know the message sent was received.
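A minimal ACK can be sketched as follows. This Python example (hypothetical application and facility names, not any particular engine's API) echoes the received message control ID back in an MSA segment with the code AA, meaning the message was accepted:

```python
from datetime import datetime

def build_ack(received_message: str) -> str:
    """Build a simple HL7 ACK for the message just received."""
    msh = received_message.splitlines()[0].split("|")
    sending_app, sending_fac = msh[2], msh[3]
    receiving_app, receiving_fac = msh[4], msh[5]
    control_id = msh[9]                      # MSH-10: message control ID
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")

    # Swap sender and receiver, acknowledge with code AA (application accept).
    ack_msh = (f"MSH|^~\\&|{receiving_app}|{receiving_fac}|{sending_app}|"
               f"{sending_fac}|{timestamp}||ACK|{control_id}|D|2.4")
    ack_msa = f"MSA|AA|{control_id}"
    return ack_msh + "\r" + ack_msa

adt = ("MSH|^~\\&|TBHS|The Best Hospital System|SBHS|Second Best Hospital System|"
       "20130701143619||ADT^A04|MSG.0|D|2.4")
print(build_ack(adt))
```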

There are a number of HL7 Interface Engines available on the market, such as Mirth Connect, InterSystems Ensemble, Orion Health Rhapsody, Infor Cloverleaf and Corepoint Integration Engine.


For more information about HL7 you can visit Health Level Seven International.