A data-driven approach to healthcare delivery is the new standard that key stakeholders – providers, payers, pharmaceutical companies and others – rely on to control costs while improving outcomes. Digitization of data is making this possible. From clinical and medical claims, electronic medical records (EMRs), clinical trials, genomics and IoT data to structured and unstructured data generated by patients and devices, digitized data is paving the way for evidence-based practice and personalized medicine. Data assets are becoming a new competitive advantage among healthcare organizations.
Beyond traditional retrospective analytics and quality-measure reporting, healthcare organizations are now investing in advanced analytic solutions such as predictive analytics and data mining, which allow healthcare stakeholders to use digitized information to provide real-time clinical decision support, improve care and control costs.
The five V’s of healthcare data
Today, as healthcare generates data in huge volume, with diverse variety, at high velocity and with improving veracity, big data is becoming the default solution for storing, aggregating and processing health information. The marriage of big data and predictive analytics is what enables the fifth ‘V’ of healthcare data: value.
Big data platforms provide efficient storage, computing and processing, enabling the high-end data crunching required for data mining and predictive analytics. These platforms are well suited to developing complex predictive models, offering fast data read and load capabilities for performance and scalability, and easy integration of traditional and non-traditional databases. Predictive models, combined with complex event processing, can generate real-time alerts and notifications that integrate easily into line-of-business (LOB) healthcare applications for improved outcomes.
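As a minimal illustration of pairing a predictive model with complex event processing, the sketch below scores incoming patient-vitals events and collects alerts when a predicted risk crosses a threshold. The event fields, scoring rules and threshold are all hypothetical; a production system would consume events from a message bus and apply a trained model.

```python
from dataclasses import dataclass

# Hypothetical event schema for a patient-vitals stream; a real LOB
# integration would consume these from a streaming platform.
@dataclass
class VitalsEvent:
    patient_id: str
    heart_rate: int
    systolic_bp: int

def risk_score(e: VitalsEvent) -> float:
    """Placeholder for a pre-trained predictive model's scoring step."""
    score = 0.0
    if e.heart_rate > 120:   # tachycardia (illustrative rule)
        score += 0.5
    if e.systolic_bp < 90:   # hypotension (illustrative rule)
        score += 0.5
    return score

ALERT_THRESHOLD = 0.5  # assumed cut-off; tuned per model in practice

def process(stream):
    """Event-processing loop: score each event, collect alerts."""
    alerts = []
    for event in stream:
        score = risk_score(event)
        if score >= ALERT_THRESHOLD:
            alerts.append((event.patient_id, score))
    return alerts

events = [
    VitalsEvent("p1", 80, 120),
    VitalsEvent("p2", 130, 85),  # tachycardic and hypotensive
]
print(process(events))  # only p2 triggers an alert
```

In practice the `alerts` list would instead be pushed as notifications into the LOB application; the loop structure stays the same.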
Monetizing opportunities with predictive analytics from big data
By leveraging predictive analytics to generate value from big data, healthcare organizations can control operating costs and improve clinical outcomes.
For example, each year one out of three senior citizens has an accidental fall, and one in five falls results in a significant injury such as a broken bone or head trauma. In 2016 alone, the cost of treating accidental falls is estimated at $34 billion, with 67 percent spent on hospitalization. Reducing operating costs and preventing costly inpatient treatments is therefore a significant risk-mitigation opportunity for payers and providers.
Existing data can be mined to determine the patterns that lead to an accidental fall, and those patterns can then be used to identify patients who share a similar medical history but have not yet had a fall. Both groups can be surveyed by their primary care physician using a standard fall-risk assessment, with responses stored in a data lake. Each patient’s responses can then be mapped to their medical history, allowing payers to build a predictive model that stratifies patients by their risk of suffering an accidental fall.
An outreach initiative focusing on high- and medium-risk patients can then be launched. The outreach can include education about preventative measures or the assignment of a personal emergency response system (PERS) device. By mining the data and predicting the risk of an accidental fall, the outreach becomes more focused and efficient, which reduces the cost of the initiative considerably.
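A simplified sketch of the stratification step described above, assuming hypothetical fall-risk features and hand-picked weights; a real model would learn the weights from the mapped survey responses and medical history, and the tier cut-offs would be set with clinicians.

```python
import math

# Hypothetical fall-risk features drawn from a survey mapped to
# medical history; the weights are illustrative, not learned.
WEIGHTS = {"prior_fall": 1.2, "gait_impairment": 0.9,
           "sedative_rx": 0.6, "age_over_80": 0.7}
BIAS = -2.0

def fall_risk(features: dict) -> float:
    """Logistic-style risk score in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] for k, v in features.items() if v)
    return 1 / (1 + math.exp(-z))

def tier(p: float) -> str:
    """Stratify patients into outreach tiers (cut-offs are assumptions)."""
    if p >= 0.5:
        return "high"
    if p >= 0.25:
        return "medium"
    return "low"

patients = {
    "p1": {"prior_fall": True, "gait_impairment": True,
           "sedative_rx": False, "age_over_80": True},
    "p2": {"prior_fall": False, "gait_impairment": False,
           "sedative_rx": True, "age_over_80": False},
}
for pid, feats in patients.items():
    p = fall_risk(feats)
    print(pid, round(p, 2), tier(p))
```

Only the patients landing in the high and medium tiers would receive the education or PERS outreach, which is where the cost savings come from.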
From the provider perspective, a combination of these techniques can be used to predict clinical outcomes and replace expensive inpatient treatments with lower-cost alternatives such as home health or ambulatory care, without compromising quality of care. Predictions can range from hospital readmission rates, patient risk stratification, hospital-acquired infections, disease progression and diagnosis accuracy to claims analytics such as denial management and identification of high-cost patients.
As big data analytics technology matures and big data platforms continue to enable aggregation of a variety of data into data lakes, increasingly complex analytic use cases can be addressed in a reasonably short time.
Roadmap to implement predictive analytics, while leveraging big data
An iterative approach to big data analytics provides stakeholders an opportunity to analyze the results and determine the ROI, while ensuring capabilities are developed to act on the analytics-generated intelligence.
Explore data and define the problem areas. Exploring the existing data allows stakeholders to determine which problem areas are significant – such as the number of patients affected, the cost of care or unfavorable clinical outcomes – and which offer the greatest opportunity, such as cost savings or improved outcomes. Exploration should continue until a clear business objective can be defined.
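This exploration step can start with something as simple as ranking conditions by total spend. The sketch below does this over a handful of hypothetical claims records; a real exploration would query the data lake, but the aggregate-and-rank pattern is the same.

```python
from collections import defaultdict

# Hypothetical claims records; real exploration would query the data lake.
claims = [
    {"condition": "fall_injury", "cost": 18000},
    {"condition": "fall_injury", "cost": 22000},
    {"condition": "flu", "cost": 900},
    {"condition": "readmission", "cost": 15000},
]

totals, counts = defaultdict(float), defaultdict(int)
for c in claims:
    totals[c["condition"]] += c["cost"]
    counts[c["condition"]] += 1

# Rank problem areas by total spend to surface the biggest opportunity.
for cond in sorted(totals, key=totals.get, reverse=True):
    print(cond, counts[cond], totals[cond])
```

Here the ranking would point the team at fall injuries first, which is exactly the kind of signal that turns exploration into a defined business objective.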
Define use cases. The defined problem area should be broken down into smaller components to analyze related data issues or gaps. For example, missing data elements or poor data quality can lower the accuracy of predictive models and ultimately prevent the widespread adoption of analytics. Based on exploratory analysis of the available data sets, a hypothesis is formed to target the use case in question.
Design and deploy the solution. Once the business objective is defined and use cases are identified, the next step is to develop a predictive analytics solution. Development can follow standard industry methodologies such as the Cross-Industry Standard Process for Data Mining (CRISP-DM) or Sample, Explore, Modify, Model and Assess (SEMMA). Both methods are iterative and validate the statistical analysis performed. The outcome of the predictive analytics solution is actionable intelligence – for example, the root causes of accidental falls among seniors or the key factors driving claim denials.
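The iterative model–assess cycle at the heart of both CRISP-DM and SEMMA can be sketched as a loop that tries candidate models, scores each on training data, and validates the winner on held-out data. The toy data and the single tunable parameter (a risk-score cut-off) below are purely illustrative.

```python
# Toy labeled data: (risk_score, had_fall). Illustrative only.
data = [(0.1, 0), (0.3, 0), (0.4, 1), (0.6, 1), (0.7, 0),
        (0.8, 1), (0.9, 1), (0.2, 0), (0.5, 1), (0.85, 1)]
train, holdout = data[:7], data[7:]

def accuracy(threshold, rows):
    """Fraction of rows where (score >= threshold) matches the label."""
    hits = sum((score >= threshold) == bool(label) for score, label in rows)
    return hits / len(rows)

# Iterative "model, assess" cycle: evaluate candidate cut-offs on the
# training data, then validate the winner against held-out data.
candidates = [0.3, 0.4, 0.5, 0.6]
best = max(candidates, key=lambda t: accuracy(t, train))
print("chosen threshold:", best)
print("holdout accuracy:", accuracy(best, holdout))
```

A real project would iterate over model families and features rather than a single threshold, but the loop – model, assess, keep the best, re-validate – is the same.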
Enrich the data lake. The quality and accuracy of predictive models depend on the volume, variety and veracity of the data. Predictive analytics is a dynamic process, and mining the data often reveals additional problem areas. To solve additional use cases, data lakes need to be enriched with a variety of data, including social, device or wearables data and non-healthcare data. Tapping these sources – such as diet and exercise regimens and vital signs collected by patient-wearable technologies – is the next step in enriching the information in the data lake. Predictive models should be continuously monitored for standard errors to ensure they deliver reasonably accurate predictions as the organization’s healthcare data continuously changes.
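The continuous monitoring described above can be as simple as tracking a rolling prediction error and flagging the model for retraining once it drifts past a tolerance. The window size and tolerance below are assumptions; in practice both would be calibrated to the model and the cadence of incoming data.

```python
from collections import deque

class ErrorMonitor:
    """Rolling mean absolute error over the most recent predictions."""
    def __init__(self, window=100, tolerance=0.2):
        self.errors = deque(maxlen=window)  # drops oldest automatically
        self.tolerance = tolerance

    def record(self, predicted: float, actual: float) -> bool:
        """Log one prediction; return True when retraining is warranted."""
        self.errors.append(abs(predicted - actual))
        mae = sum(self.errors) / len(self.errors)
        return mae > self.tolerance

monitor = ErrorMonitor(window=3, tolerance=0.2)
print(monitor.record(0.75, 1.0))  # large miss pushes MAE past tolerance
print(monitor.record(1.0, 1.0))   # exact hit pulls rolling MAE back down
```

As the data lake is enriched with new sources, a monitor like this is what tells the team when the model needs to be refit against the changed data.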
Big data and related technologies have catalyzed predictive analytics in a big way by providing easy, efficient access to a variety of healthcare data and the technologies to speed up the process. An iterative and incremental implementation approach, in which stakeholders define and solve business problems that gradually increase in complexity, enables healthcare organizations and stakeholders to identify real opportunities for monetization and to improve overall health outcomes.