How to strike the balance between privacy and personalization in healthcare and beyond

The trade-off between widespread technology adoption and responsible use often comes down to privacy. With technologies fueled by data, such as artificial intelligence (AI), striking the balance between equitable access and inherent risk is even harder. This is felt heavily in the healthcare industry, where regulations around information sharing are generally more stringent than in other verticals.


Because of laws like HIPAA, healthcare has had a head start in changing its approach to handling personally identifiable information (PII) and other sensitive information, while still leveraging technology and working with third parties to streamline processes. Healthcare organizations have figured out how to do this without sharing their valuable data, which contradicts the long-held belief that SaaS companies need customer data in order to improve services and deliver accurate, unbiased insights. It's simply not the case.

Though it sounds implausible that less data sharing could mean more specialization, technologies like natural language processing (NLP) make it achievable. Advances in areas like transfer learning now make it possible to build models centrally and then optimize them locally. This approach not only safeguards patient information, but also makes it possible to deliver highly personalized care.

Transfer learning is a machine learning method in which a model developed for one task is reused as the starting point for a model on a second, related task. Domain experts, like doctors and other clinical professionals, can fine-tune these pretrained models on their organization's own data to localize them so they reflect their patient populations, and they can do it without needing a data scientist or compromising their most valuable asset: their data.
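To make the pattern concrete, here is a minimal sketch of local fine-tuning with PyTorch and Hugging Face Transformers. The checkpoint name, labels, and the tiny in-memory "local_notes" dataset are illustrative placeholders rather than a reference to any specific product; in practice, a pretrained clinical model would be adapted on de-identified records that never leave the organization's infrastructure.

```python
# Illustrative sketch only: the checkpoint, labels, and "local_notes" are
# placeholders standing in for an organization's own de-identified records.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 1. Start from a general, pretrained language model (trained elsewhere,
#    on data that never needs to include this organization's patients).
base_checkpoint = "bert-base-uncased"  # stand-in for any pretrained clinical model
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(base_checkpoint, num_labels=2)

# 2. Freeze the pretrained encoder so only the small task-specific head is
#    updated; this keeps local fine-tuning cheap and data-efficient.
for param in model.base_model.parameters():
    param.requires_grad = False

# 3. Fine-tune on data that stays inside the organization. These two toy
#    sentences exist only to make the sketch runnable.
local_notes = ["patient reports elevated fasting glucose", "routine follow-up, no concerns"]
local_labels = torch.tensor([1, 0])  # e.g., flag-for-review vs. routine

inputs = tokenizer(local_notes, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW([p for p in model.parameters() if p.requires_grad], lr=5e-4)

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**inputs, labels=local_labels)
    outputs.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={outputs.loss.item():.4f}")
```

The key design point is that the expensive pretraining happens elsewhere, while the lightweight adaptation step runs entirely on infrastructure the organization controls, so the raw records never have to be shared.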

When companies share customer information, whether with a partner or a vendor, that information can personally identify individuals; in healthcare, it includes medical records. Its security then rests in the hands of that trusted third party, whether or not they have strong security measures in place. The risk of a data breach, human error, or a leak is very real, especially when all of the data is stored in one place.

On the flip side, access to this data becomes another challenge. Figuring out who within an organization or department should have access, and for what purpose, often takes longer than the window in which the data is even relevant. The point being: this information is risky to store and difficult to access. But most importantly, it simply isn't necessary for yielding the health and business insights that matter.

Beyond privacy, localizing AI and NLP models not only mitigates the risks of storing and accessing data, it also empowers providers to deliver better care. Here's why: most clinical evidence is based on research and clinical trials conducted largely on white males. This is true even for pregnancy-induced conditions, like gestational diabetes, and their treatments. A one-size-fits-all criterion is not applicable in medicine, so it's easy to see why such skewed data is problematic.

Starting with a general AI model enables caregivers to work within the parameters of a certain condition, such as kidney disease, and then fine-tune it to a given patient population. This level of personalization provides many benefits, one of which is better serving marginalized groups. For example, Black patients, who are often underinsured, are underrepresented in general models.

Transfer learning allows for a level of specialization that treats this population more accurately. The same approach can be applied across patients, conditions, and symptoms, and can be taken down to the individual hospital or clinic level.
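As a rough illustration of that last point, the sketch below shows each site adapting its own private copy of one general model on its own records, which never leave the site. The model architecture, site data, and training loop here are hypothetical stand-ins, not a prescribed implementation.

```python
# Hedged sketch of clinic-level localization: each site fine-tunes a private
# copy of the same general model on data it controls. All inputs are synthetic.
import copy
import torch
import torch.nn as nn

# Stand-in for a general pretrained model (e.g., a condition-specific risk model).
general_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

def localize(model: nn.Module, features: torch.Tensor, labels: torch.Tensor) -> nn.Module:
    """Fine-tune a private copy of the general model on one site's data."""
    local = copy.deepcopy(model)            # the shared model itself is never modified
    opt = torch.optim.SGD(local.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(local(features), labels)
        loss.backward()
        opt.step()
    return local

# Each clinic runs localize() on infrastructure it controls; only the adapted
# weights remain on-site, and patient records are never pooled centrally.
clinic_a_model = localize(general_model, torch.randn(64, 16), torch.randint(0, 2, (64,)))
clinic_b_model = localize(general_model, torch.randn(32, 16), torch.randint(0, 2, (32,)))
```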

Factors like age, disease distribution, and social determinants of health play a big role in equitable access to healthcare, and AI is the only realistic way to distill that data into something we can make sense of. But how we responsibly handle this data has to be a big part of the conversation, and making sure it stays in the right hands is a good first step. Better yet, transfer learning makes it possible to glean the same insights, and in some cases better ones, without the risk of mishandling or compromising sensitive data.

Strict regulations are nothing new in healthcare, but as more high-profile data breaches and ransomware attacks make headlines, every industry is going to have to rethink its data privacy hygiene. And as personalized care and customer experiences become paramount to serving different populations, AI and NLP are technologies that modern businesses must get acquainted with.

Striking the balance between privacy and personalization is not easy, but applications in healthcare have proven that it’s possible and can even improve outcomes.
