Protecting patient privacy and helping health systems stay compliant with complex security and privacy requirements is a challenge in the best of circumstances, and in today’s ever-changing technological landscape, it can be nearly impossible. One particularly thorny issue the industry has been wrestling with is the emergence of new technologies that leverage artificial intelligence (AI) to help compliance teams detect and remediate threats to privacy. Traditionally outside the province of compliance teams, such platforms are powerful, but often poorly understood, additions to a modern privacy program.
We provide more context and specifics on this topic in the industry standard publication Health Law and Compliance Update (HLCU), edited by John Steiner and published by Wolters Kluwer. The 2018 edition includes a chapter on “How Artificial Intelligence is Transforming Healthcare Privacy, Security and Compliance.”
HLCU has been published annually since 2003, and the 2018 edition marks the first time it has addressed the fascinating issues raised by artificial intelligence as a change agent for the better. While we don’t want to dive too deep into the chapter, as we encourage you to take a look at the full book, we highlight key points from the chapter below, along with some new developments that have arisen in the field even in the past several months.
Determining the identities of patients, providers, staff, and affiliates throughout a healthcare system is an enormous challenge, one that frequently produces false positives and requires substantial effort. In a world of numerous, frequently updated, and poorly reconciled disparate data sources, it is difficult to keep track of individuals and their roles and responsibilities. Yet that is exactly the type of activity that is a routine part of effective compliance.
Fortunately, modern platforms allow for robust identity resolution, enabling health systems to blend together disparate databases, whether in the EHR, HR systems, or Active Directory. In turn, discrepancies can be resolved and a single, unified picture of each user and each patient becomes available. Aided by modern machine learning techniques, these systems achieve high levels of accuracy with minimal human intervention.
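To make identity resolution concrete, here is a minimal sketch of linking EHR user accounts to HR records by fuzzy name matching. This is an illustrative toy under our own assumptions, not Protenus’s implementation: the record fields, the similarity measure, and the match threshold are all invented for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Simple string-similarity score in [0, 1]; real systems use many
    # more signals (department, email, NPI, badge records, etc.).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve_identities(ehr_users, hr_users, threshold=0.85):
    """Link each EHR account to its best-matching HR record, if any."""
    matches = {}
    for ehr in ehr_users:
        best, best_score = None, 0.0
        for hr in hr_users:
            score = similarity(ehr["name"], hr["name"])
            if score > best_score:
                best, best_score = hr, score
        if best is not None and best_score >= threshold:
            matches[ehr["id"]] = best["employee_id"]
    return matches

# Hypothetical records: the same people, spelled inconsistently.
ehr = [{"id": "u1", "name": "Jon Smith"},
       {"id": "u2", "name": "Mary O'Neil"}]
hr = [{"employee_id": "e9", "name": "John Smith"},
      {"employee_id": "e4", "name": "Mary ONeil"}]

print(resolve_identities(ehr, hr))  # links u1 -> e9 and u2 -> e4
```

The payoff of this kind of linkage is a single identifier per person, so downstream monitoring can reason about “this nurse” rather than three unreconciled accounts.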
Clinical Context -> Low False Positives
Detecting inappropriate events in healthcare systems, whether privacy violations or otherwise, usually requires well-honed knowledge of clinical environments. People outside health care often don’t realize that privacy professionals typically have a solid grasp of clinical workflows, knowledge that comes from years of detecting subtle patterns of inappropriate activity in healthcare settings. Unfortunately, most of this work is manual, and even with a traditional, rule-based privacy monitoring system in place, monitoring reports often produce false positives. Creating those reports requires tedious clinical, administrative, and IT sleuthing, all done by hand. This is a perfect task for the automating capacities of AI - able to learn the typical behaviors of clinicians, researchers, admins, billing teams, and the numerous other roles present in hospitals. AI can build a unique profile for every user, telling you what’s normal and flagging when something’s gone awry. This monitoring and reporting capacity nearly eliminates false positives and provides greater assurance that when an alert comes to you, it has been thoroughly scrutinized.
Effective compliance requires the development and sound analysis of facts, and gathering those facts is increasingly efficient and reliable with assistance from AI platforms. Modern AI-based privacy platforms can assemble the facts of a case and provide natural language explanations of why an alert was brought to your attention. This innovation saves time: report-writing is limited to the discretionary and interview-related comments that come from human judgment. AI is finally unburdening privacy teams from the rote work of report-writing so they can focus on what they do best - getting to the bottom of the human subtleties in these cases, appreciating the legal nuances of truly novel situations, and driving culture change throughout their organizations.
Protenus has a next-generation architecture and world-class team that deliver a system designed to put “eyes on” each access to patient data and determine the level of risk associated with that access. By viewing each of the millions of accesses to patient data that occur in an EHR every day, a modern AI platform can assess both the riskiness (the sensitivity of the data viewed) and the suspiciousness (the probability that an access is improper), synthesize those scores, and provide a clear rationale for why a given access was appropriate or inappropriate. In this way, we move our industry from an “audit-based” paradigm to a truly “confidence-based” paradigm.
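The riskiness-plus-suspiciousness idea can be sketched in a few lines. Everything below is an assumption for illustration - the inputs normalized to [0, 1], the multiplicative combination, and the triage thresholds are invented, not Protenus’s actual scoring model.

```python
def score_access(sensitivity: float, suspiciousness: float):
    """Combine data sensitivity with behavioral suspiciousness.

    Both inputs are assumed normalized to [0, 1]. The combination
    rule (a simple product) and the cutoffs are illustrative only.
    """
    risk = sensitivity * suspiciousness
    if risk >= 0.5:
        level = "high"      # escalate to the privacy officer
    elif risk >= 0.1:
        level = "review"    # queue for analyst review
    else:
        level = "low"       # log and move on
    return risk, level

# A clinician viewing a sensitive record during normal care:
print(score_access(sensitivity=0.9, suspiciousness=0.05))  # low risk
# The same record viewed by someone with no treatment relationship:
print(score_access(sensitivity=0.9, suspiciousness=0.8))   # high risk
```

The point of the sketch is the shape of the reasoning: the same sensitive chart can be a non-event or a serious alert, depending entirely on the behavioral context of the access.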
At the end of the day, using AI in privacy monitoring isn’t about a whiz-bang fancy new algorithm (though Protenus certainly spends a lot of time on those); it’s about the tangible ways that advanced tools make your life easier and let you focus on what’s most important to you, in and out of the workplace.
We’re very proud to describe and usher in this new generation of technology, and to help explain this new way of thinking about the solutions available. As compliance responsibilities evolve, both in the eyes of patients and oversight agencies, Protenus will help those who must protect healthcare information be proactive and effective.