A Sorting Hat? The Protenus Classifier Is So Much More
by Erem Kazancıoğlu, Senior Data Scientist on May 4, 2020
At the heart of the Protenus Compliance Analytics platform lies the Artificial Intelligence (AI) engine that monitors millions of accesses to patient data every day and determines how likely each one is to warrant further investigation. This AI engine, or the "classifier" as it's known in data science, is a relatively small piece compared to all the other processing our clients' data go through every day to produce cases, which are narratives explaining why certain interactions have been flagged as suspicious. Just a fraction, less than 2%, of a day's run is spent on classification; the rest goes to pre-processing, data aggregation, feature calculation, and alert and case generation.
The classifier, however, is the end result of a rigorous and ever-repeating cycle of research and development that involves not only data scientists and data engineers (the usual suspects) but also members of our DevOps and customer success teams. This post describes how our team takes a classifier from research all the way to production (and back).
Training dataset
The classifier needs a set of examples of suspicious and appropriate behavior so it can learn from those examples and accurately categorize the new data that we process every day. We have two sources for this training data:
- A set of clients who give us permission to use their case resolutions for research purposes.
- Our experts, who use their years of experience with healthcare workflows to assess the suspiciousness of accesses that did not result in a case.
This second source of labeled data is crucial to the performance of the classifier, as it increases the diversity of access types we learn from and keeps us from relying solely on resolutions of cases that we have sent to our clients. These two types of resolutions, together with a set of access characteristics that our research has found to carry strong information about suspiciousness, which we call "classifier features," constitute the dataset that we use to train the classifier.
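To make this concrete, here is a minimal sketch of how such a training dataset might be assembled. It is only an illustration; the column names, feature names, and toy values are hypothetical placeholders, not our actual schema.

```python
import pandas as pd

# Placeholder stand-ins for the two label sources; in reality these come from
# client case resolutions and expert review (all names here are hypothetical).
client_resolutions = pd.DataFrame(
    {"access_id": [101, 102], "label": [1, 0]}  # 1 = suspicious, 0 = appropriate
)
expert_labels = pd.DataFrame(
    {"access_id": [201, 202], "label": [0, 1]}
)
labels = pd.concat([client_resolutions, expert_labels], ignore_index=True)

# "Classifier features": per-access characteristics found to carry signal.
features = pd.DataFrame(
    {
        "access_id": [101, 102, 201, 202],
        "same_last_name": [1, 0, 0, 1],                 # hypothetical feature
        "minutes_since_last_access": [5.0, 480.0, 30.0, 2.0],  # hypothetical feature
    }
)

# Join the labels onto the features to form the training dataset.
training_data = features.merge(labels, on="access_id", how="inner")
X = training_data.drop(columns=["access_id", "label"])
y = training_data["label"]
```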
Classifier training and performance evaluation
Protenus data scientists conduct the research to find the best-performing classifier using Python in a Jupyter environment, part of the standard data science toolkit. Our team employs a rigorous methodology built on data science best practices, such as hyperparameter tuning and stratified k-fold cross-validation. The team investigates numerous classifier versions using standardized evaluation plots, such as Precision-Recall curves and score density distributions, before picking the classifier that delivers the best performance overall and in each case category.
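Here is a hedged sketch of what this evaluation methodology can look like in scikit-learn. The actual model family and hyperparameter grid we use are not shown here; a gradient-boosting classifier and a tiny grid stand in, and synthetic data replaces the real training set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import GridSearchCV, StratifiedKFold
import matplotlib.pyplot as plt

# Synthetic, imbalanced data standing in for the labeled accesses.
X, y = make_classification(
    n_samples=5_000, n_features=20, weights=[0.9], random_state=0
)

# Stratified k-fold cross-validation preserves the class balance in each fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Hyperparameter tuning over a small illustrative grid.
search = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    scoring="average_precision",
    cv=cv,
)
search.fit(X, y)

# Precision-Recall curve for the best candidate (in practice, on held-out data).
scores = search.best_estimator_.predict_proba(X)[:, 1]
precision, recall, _ = precision_recall_curve(y, scores)
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.show()
```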
Release testing and preparation
A crucial part of the pre-release evaluation is testing the classifier in a production-like environment where we can see how it would have behaved had we already released it. Fortunately, Protenus' DevOps team has set up a development environment that closely mimics our production environment, complete with production data copied over from a few clients who have given us permission to do so. We run a day's analytics in the development environment to make sure that our pipeline runs with the new classifier without any issues. We also check that the distribution of scores across alert categories and across all accesses reflects what we observed in our research. These sanity checks ensure that we don't have any surprises once the classifier is released.
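As one way to picture this kind of sanity check, the sketch below compares the score distribution from a development-environment run against the distribution seen during research using a two-sample Kolmogorov-Smirnov test. The threshold, and the random arrays standing in for real scores, are purely illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def scores_look_consistent(research_scores, dev_scores, max_ks_statistic=0.1):
    """Return True if the dev-run score distribution stays close to research."""
    statistic, _ = ks_2samp(research_scores, dev_scores)
    return statistic <= max_ks_statistic

# Placeholder arrays standing in for real classifier scores.
research_scores = np.random.beta(2, 5, size=10_000)  # offline-evaluation scores
dev_scores = np.random.beta(2, 5, size=10_000)        # dev-environment run scores

if not scores_look_consistent(research_scores, dev_scores):
    print("Score distribution drifted; investigate before release.")
```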
Release
The actual release of the classifier is the most straightforward part of this whole process, thanks to the design of the Alert Generator and the analytics pipeline. We simply change a small configuration parameter to point our analytics to the new classifier, drop the classifier file into a specified production S3 bucket, and, voilà, a new classifier is in production.
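In spirit, the release step boils down to something like the sketch below: upload the serialized classifier to the production bucket and flip the configuration parameter that tells the pipeline which classifier to load. The bucket, key, file names, and the configuration parameter are hypothetical placeholders, not our actual release tooling.

```python
import json
import boto3

def release_classifier(local_path: str, bucket: str, key: str, config_path: str) -> None:
    """Upload a serialized classifier to S3 and point the analytics config at it."""
    # Step 1: drop the classifier file into the specified production bucket.
    boto3.client("s3").upload_file(local_path, bucket, key)

    # Step 2: change the configuration parameter that tells the analytics
    # pipeline which classifier to load ("classifier_s3_key" is hypothetical).
    with open(config_path) as f:
        config = json.load(f)
    config["classifier_s3_key"] = key
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Example invocation (all names are hypothetical):
# release_classifier("classifier_2020_05.pkl", "prod-analytics-models",
#                    "classifiers/classifier_2020_05.pkl", "analytics_config.json")
```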
Post-release performance monitoring
Our job is not done once the classifier is released. We keep a close eye on how it performs, using a standardized set of graphs and metrics to monitor alert volumes, case resolutions, and false-positive rates across all clients and all alert categories and to make sure the classifier is behaving as expected. Protenus' data science team also collaborates with the customer success team in this process: Customer Success Managers systematically collect any and all feedback from clients about alert quality and bring it to our team's attention.
Furthermore, the data science team and the customer success team hold biweekly meetings to discuss customer feedback, false-positive patterns (what did we get wrong?), and false-negative patterns (what did we miss?), and to chart actionable next steps. Even though I summarize performance monitoring as a single step in the release cycle, this process never actually stops. We are always watching how we are doing and always thinking about the feedback from our clients.
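One of the simpler metrics behind those graphs, a false-positive rate broken out by client and alert category, could be computed roughly as in the sketch below. The resolution table, its columns, and the category names are hypothetical placeholders for illustration only.

```python
import pandas as pd

# Toy stand-in for post-release case resolutions (columns are hypothetical).
resolutions = pd.DataFrame(
    {
        "client": ["A", "A", "B", "B", "B"],
        "alert_category": ["family_access", "vip", "vip", "family_access", "vip"],
        "resolution": ["violation", "false_positive", "false_positive",
                       "violation", "violation"],
    }
)

# False-positive rate per client and alert category.
false_positive_rate = (
    resolutions
    .assign(is_false_positive=resolutions["resolution"] == "false_positive")
    .groupby(["client", "alert_category"])["is_false_positive"]
    .mean()
    .rename("false_positive_rate")
)
print(false_positive_rate)
```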
Repeat
New feedback from our clients and new case resolution data arriving every day prompt a new cycle of research-release-monitor, which, like performance monitoring, never really stops. We are currently working toward a roughly monthly classifier release cadence, so that the classifier always reflects recent case resolution data and feedback from our clients is regularly incorporated into our analytics.
I hope this article gives you an overview of the steps involved in the Protenus classifier research and release process. Stay tuned for future blog posts where we will dive deeper into each of these components.
If you’d like to read more about data science at Protenus, please check out my coworker’s articles on Scala with Data Science or visit our website.