Handing over your business data to artificial intelligence companies comes with inherent risks.

Kit Merker, CEO, Plainsight Technologies

March 7, 2024

3 Min Read


Artificial intelligence (AI) is challenging our preexisting ideas of what's possible with technology. Its transformative potential could reshape a wide range of tasks and business scenarios, applying computer vision and large vision models (LVMs) to usher in a new age of efficiency and innovation.

Yet, as businesses embrace the promises of AI, they encounter a common peril: Every AI company seems to have an insatiable appetite for the world's precious data. These companies are eager to train their proprietary AI models using any available images and videos, sometimes employing tactics that inconvenience users, like CAPTCHAs making you identify traffic lights. Unfortunately, this clandestine approach has become the standard playbook for many AI providers, enticing customers to unwittingly surrender their data and intellectual contributions, which these companies then monetize.

This isn't an isolated incident confined to a single bad apple in the industry. Even well-known companies such as Dropbox and GitHub have faced accusations of using customer data to train AI models. And while Zoom has since shifted its stance on data privacy, such exceptions merely underscore the norm within the industry.

Risks of Sharing Business Data With AI 

Handing over your business data to AI companies comes with inherent risks. Why should you help train models that may ultimately benefit your competitors? Moreover, in instances where the application of AI could contribute to societal well-being — such as identifying wildfires or enhancing public safety — why should such data be confined to the exclusive benefit of a few tech giants? The potential benefits of freely sharing and collaboratively improving such data should be harnessed by communities worldwide, not sequestered within the vaults of a select few tech corporations.

To address these concerns, transparency is the key. AI companies should be obligated to clearly outline how they intend to use your data and for what specific purposes. This transparency will empower businesses to make informed decisions about the fate of their data and guard against exploitative practices.

In addition, businesses should maintain control over how their data is used. Granting AI companies unrestricted access risks unintended consequences and compromises privacy. Companies must be able to assert their authority in dictating the terms under which their data is used, ensuring alignment with their values and objectives.

Permission should be nonnegotiable. AI companies must seek explicit consent from businesses before utilizing their data. This not only upholds ethical standards but also establishes a foundation of trust between companies and AI providers.

Lastly, businesses aren't just data donors; they are contributors to the development and refinement of AI models. They deserve compensation for the use of their data. A fair and equitable system should be in place, acknowledging the value businesses bring to the further development of AI models.

Safeguard Against Data Exploitation 

The responsibility lies with businesses to safeguard their data and interests. A collective demand for transparency, control, permission, and fair compensation can pave the way for an era in which AI benefits businesses and society at large, fostering collaboration and innovation while safeguarding against the pitfalls of unchecked data exploitation.

Don't surrender your business data blindly — demand a future where AI works for you, not the other way around.

About the Author(s)

Kit Merker

CEO, Plainsight Technologies

A proven technology industry leader with over 20 years of experience building software products, Kit Merker brings in-depth leadership and results-driven expertise in team performance, product development, and business growth. He is CEO of Plainsight Technologies, the comprehensive vision AI factory that brings computer vision to any business. He previously served as Chief Growth Officer at Nobl9, a software reliability platform company helping software teams optimize their delivery to enhance customer satisfaction and ensure sustainable business growth. Merker began his career as a software engineer at Microsoft, and his experience also includes executive roles in operations, M&A, strategic partnerships, product management, and more. He was part of the executive team that grew JFrog (NASDAQ: FROG) to a multibillion-dollar company and was one of the first product managers for Kubernetes and related container initiatives at Google.

