Are You Talking to a Carbon, Silicon, or Artificial Identity?
In the trio of identity types, protecting the identity, privacy, and data of carbon-based forms (humans) is key. Safeguards must be in place as AI becomes more interactive.
ChatGPT and Google's Bard are making news for all kinds of reasons. People are charmed and unnerved by the chatbots' quirky responses and how seemingly sentient they are.
But artificial intelligence (AI) has existed for years, and the issues today aren't just related to a greater capacity for writing term papers and malware. These new tools simply add to an already complex ecosystem of identities: carbon-based, silicon-based, and now, increasingly, artificial identities created by AI.
Identity Trifecta
First, let's define these different identity types:
Carbon-based identity: This one is easy. It's humans. We have names, personally identifiable information, mannerisms, and so much more. And we live, breathe, and interact with carbon, silicon, and artificial identities every day.
Silicon-based identity: A silicon-based identity is something we all interact with but rarely, if ever, think about as an entity: our cars, smartphones, and millions of other devices that rarely act or interact on their own.
Artificial identity: If you've ever read a novel, you've met a whole cast of artificial identities. Each character is a fictional creation with a backstory, internal motivations, and seemingly interactive relationships. The new chatbots can create identities just as detailed in moments.
Why Does Identity Matter?
A silicon-based identity integrates with you and your identity. Take your smartphone, for example. You set it up and customize it so that you can log in with your fingerprint or your face. You use it to set up online banking accounts, social media profiles, and much more. Essentially, you create a link between your carbon identity and your silicon identity, one that lets you interact with your bank and other peer-to-peer connections. When you sell or give away the phone, you know you need to wipe all that data; otherwise, the next user could exploit the connections you authenticated to access your financial, work, and other personal accounts. Even after the data is removed, the phone still exists as a silicon identity, and it will integrate with the next carbon entity that sets it up.
Artificial entities, however, are increasingly easy to create. AI can generate faces that people find trustworthy and have difficulty distinguishing from real ones. Combined with chatbots, some of these entities have LinkedIn profiles and can reach out to prospective customers easily and convincingly, without adding to an organization's staffing costs or training requirements. Other chatbots may be answering your banking questions, responding to fraud alerts, or authorizing transactions. They may help you book a flight, facilitate a product return, or respond to insurance queries.
Who — or What — Are You Talking To?
Knowing which type of entity you're talking to is only going to get more challenging in the coming months and years. Today, most chatbots answer simple questions with canned responses, and most users probably find them frustratingly inadequate and try to interact with a carbon entity instead. But it's getting harder to tell them apart. As these solutions become more available, flexible, and interactive, you need to consider who you want to share protected information with.
It may seem safe to share your banking information with an artificial identity, but if you share your checking account info and authorize it to take money out, you have very little control over when and how that happens. Some financial regulations are in place to help consumers, but they only help after something goes wrong. When your credit or debit card is compromised, you can dispute fraudulent transactions, prevent future charges, and get a new card, which limits the potential fallout.
Health information is far more complicated. If you share information about your carbon entity with an artificial identity, such as the fact that you have diabetes and high blood pressure, that information goes somewhere you can no longer control. If this private health information is disclosed, whether because the artificial entity you shared it with was created for malicious purposes or because the collected data wasn't properly secured and encrypted, that part of your identity is compromised and you cannot make it private again.
Protect Data, Privacy, and Identity
As the different types of identities become increasingly intertwined and complicated, it's important for organizations to make it clear when an identity interfacing directly with people is artificial, why they're using it, and how.
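What that disclosure could look like in practice is sketched below. This is a minimal, hypothetical example; the names (ChatReply, agent_type, disclosure) aren't drawn from any particular product. The point is that the response itself declares which kind of identity the user is dealing with, so the interface can surface it plainly.

```python
# A minimal sketch of artificial-identity disclosure in a chat reply.
# All names here are hypothetical; the idea is that the payload itself
# states which kind of identity the user is talking to.

from dataclasses import dataclass

@dataclass
class ChatReply:
    text: str          # the answer shown to the user
    agent_type: str    # "carbon" (human agent) or "artificial" (chatbot)
    disclosure: str    # plain-language notice of what the user is talking to

def bot_reply(text: str) -> ChatReply:
    return ChatReply(
        text=text,
        agent_type="artificial",
        disclosure="You are chatting with an automated assistant, not a human.",
    )

print(bot_reply("Your return label is on its way.").disclosure)
```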
They also need to protect the identity, privacy, and data of carbon identities, no matter which type of identity gathers it. That means paying attention to how this information is collected, transmitted, stored, and encrypted.
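As one illustration of the encryption point, here is a minimal sketch using the Python cryptography package's Fernet recipe to encrypt a collected record before it is stored. Key management, TLS for transmission, and access controls are deliberately out of scope; this only shows that personal data should never be persisted in the clear.

```python
# Minimal sketch: symmetric encryption of collected personal data at rest,
# using Fernet from the "cryptography" package (pip install cryptography).
# In production the key would come from a secrets manager or HSM and
# never be stored alongside the data it protects.

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # illustration only; load from a secure store
cipher = Fernet(key)

record = b'{"name": "A. Customer", "condition": "diabetes"}'
token = cipher.encrypt(record)  # ciphertext is safe to persist

# Only a holder of the key can recover the original record.
assert cipher.decrypt(token) == record
```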
As we move forward, we must make certain that we protect data, privacy, and identity by challenging and validating access for all these different types of identities. Although artificial identities are becoming interactive and seemingly human, the responsibility to protect carbon identities is paramount.