Building AI That Respects Our Privacy

Until laws can move at the speed of innovation, we'll see a discrepancy between the protections offered and the risks associated with technology.

Arjun Bhatnagar, CEO, Cloaked

January 18, 2024

As a technologist, I experiment for a living. I consider it my job to break things in order to make new ones — daily.

In 2020, it wasn't the tech that was broken, it was me, and I leaned into artificial intelligence (AI) for the fix. I built a crude AI system that aggregated my Fitbit data and healthcare records with my financial information, email, and more. My "databox" started to tell me when to exercise, when to drink less, and when to slow down on spending. I started to feel better. I also started to grasp how powerful having my data in one place could be when, unbeknownst to me, the app had a conversation with my then-girlfriend.

While this experience helped me see the light, it also highlighted the dark places we could find ourselves in if we don't have control over our own data. Four years later, we're having heated debates about the role AI should play in our lives and what we need to do to control its negative ramifications.

How to Integrate Privacy Into AI

As with most things, there is no black-and-white approach to the ethical application of AI. So as the world grapples with its impact and we wait for legislation to protect us, we innovators have a responsibility, and I'd argue an opportunity, to start implementing privacy best practices: the standards we need to protect and uphold basic human rights. This step alone could help us start to envision an AI-centric world that works to our advantage while limiting its negative impact.

Here's where we can start:

  • Model per user: Let's shift from big aggregate models to isolated models, each trained on a single user's data set. When only an individual's data influences a model, that model stays private to that individual (see the sketch after this list).

  • Closed systems: Take this a step further by shifting AI models to closed systems, such as laptops, where each model is trained on a device close to its owner rather than on data held somewhere else.

  • Transparency and tracking across models: Add transparency about which data sets are used to train models. By watermarking data, videos, and images, we can trace originating sources and build AI models that access or ignore certain data.

  • Data removal rights: Tracking gives us the chance to integrate data removal rights. If an individual opts out of having their personal information shared, the AI model can adjust accordingly.

What We Can Do Today

However, until these practices are in place, we need to find ways to work with the current system. Here is what we can do now to take advantage of the benefits of AI while protecting our data:

  • Be aware: Understand how AI platforms collect, store, and use the data you share. Read privacy policies, pay attention to notifications, run a quick Google search, and do your due diligence.

  • Limit sharing: Avoid providing any information to an AI platform that is not absolutely necessary. Do not ask questions or provide responses that contain any personal data. The last thing you want to do is share your banking information with your chatbot only to find that it has been hacked.

  • Understand limitations: While we've done a great job of teaching AI how to interact with us, we still need to remember that assigning human characteristics to machine learning doesn't hold up beyond surface-level use. This is one uncanny valley it's important not to get stuck in.

  • Employ situational awareness: Believe it or not, it's easy to begin interacting with AI without even knowing it. This can occur when chatbots mimic customer service reps, when organizations introduce bots as onboarding guides, or when scammers use them to perpetrate phishing schemes.

Until federal law can move at the speed of innovation, we'll see a discrepancy between the protections offered and the risks associated with technology, leaving each of us to mitigate risk through self-education and ethical responsibility. So let's take up the challenge with a moral compass and a consumer-forward approach, one that can help artificial intelligence work harder for us, faster, while maintaining our fundamental right to privacy.

About the Author(s)

Arjun Bhatnagar

CEO, Cloaked

Arjun Bhatnagar is the CEO of Cloaked, the consumer-first privacy company dedicated to bringing humanity to the Internet. Over the course of his career, Arjun has successfully started two companies, taught coding at MIT, worked as a partner at a venture firm, and founded a nonprofit dedicated to bringing education to underserved communities. In 2016, Arjun and his brother Abhijay Bhatnagar sold their first startup: Hey! HeadsUp. Arjun understands more than 15 coding languages and is dedicated to making the world a better place through people-centric innovation.
