Five worthy reads: The privacy implications of AI

Five worthy reads is a regular column on five noteworthy items we’ve discovered while researching trending and timeless topics. This week, we explore the relationship between AI and data privacy.

 

From smart devices and voice assistants to traffic management and personalized shopping experiences, artificial intelligence (AI) has found widespread application in many aspects of life. By revolutionizing complex problem solving across a wide spectrum of human endeavors, AI has grown in popularity, enticing vendors and other service providers to jump on the AI bandwagon.

A typical AI project starts with problem identification, then moves on to gathering a precious resource: the data that will be used to train the AI model. This training data is often collected from multiple sources, including customer actions on the company’s own websites, blogs, and social media accounts, and, in some cases, from third-party sources. After the data is collected, data scientists and machine learning (ML) specialists step in to identify the right algorithm for solving the problem. Finally, code implementing the algorithm is written and pushed into the development environment.
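
To make that workflow concrete, here is a minimal sketch of such a pipeline in Python using scikit-learn. The dataset file, column names, and choice of algorithm are illustrative assumptions, not details from the article.

```python
# Minimal sketch of the project workflow described above: gather data,
# choose an algorithm, train, and evaluate before pushing to development.
# The CSV file, column names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Gather training data (a hypothetical export of customer actions).
data = pd.read_csv("customer_actions.csv")
X = data.drop(columns=["purchased"])  # features, e.g., page views and clicks
y = data["purchased"]                 # label: did the customer buy?

# 2. Hold out part of the data so the model is evaluated on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 3. Choose and train an algorithm suited to the problem.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 4. Evaluate before the code is pushed into the development environment.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, the quality and provenance of the data behind a file like this matter far more than the choice of model, which is the point the next paragraph makes.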

The most crucial step in the entire process is collecting the right data. The more quality data a model is trained on, the better it can learn the patterns in that data, and the more accurately it can predict which actions should be carried out and when. This is why AI requires large amounts of quality data.

If organizations don’t have quality training data, they often acquire it from third-party sources. However, with data breaches and privacy violations making headlines, consumer advocates have called third-party data collection practices into question. Moreover, users themselves are becoming increasingly concerned about how the data collected from them is used. In response, various countries are establishing their own privacy laws to prohibit the misuse of personally identifiable information (PII) collected from individuals.

With these laws coming into play, organizations focusing on AI projects need to be cautious about how they collect and use training data, and consider the impact of a data privacy law violation. Not only could the organization incur a heavy fine, but it could also lose credibility with its customers.

With that said, here are five interesting reads on how data privacy is closely related to AI projects.

1. AI, privacy and data ethics

Data ethics should be a central consideration for companies and individuals developing or deploying AI. They need to establish policies and processes that ensure data collection and its use in AI projects are legal, proportionate, and just.

2. AI, ML, and data analytics in the age of privacy regulations

Most challenges surrounding AI training data can be mitigated by establishing regulation-compliant, transparent, fair, and secure practices for data collection and usage. These practices include data de-identification, data encryption, and synthetic data generation (a simple de-identification sketch follows the five reads below).

3. Rethinking privacy for the AI era

With the rise of AI, the concept of privacy has become complex. Consumers today face an endless stream of lengthy user agreements, hastily clicking “accept” without realizing what privacy rights they may be giving away. Most data collected from consumers is used to provide helpful services—but it can also carry potential risks.

4. Data privacy regulations’ implications on AI

Everyone wants automation, but the foundation of a successful AI project is quality data. While most companies view data privacy laws as extra overhead, these laws can actually help drive a data quality program that enables more advanced technology.

5. The rise of data and AI ethics

Due to the burgeoning role of data in citizens’ everyday lives, governments are increasingly considering their regulatory responsibilities. For example, the European Union’s GDPR, the California Consumer Privacy Act (CCPA), and other privacy laws have been drafted to rein in the unethical use of AI.
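
As promised under the second read, here is a minimal sketch of one of the practices mentioned there: de-identifying records before they are fed into an AI training pipeline. The field names, salt handling, and hash truncation are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of data de-identification: replace PII fields with salted,
# one-way hashes before records are used as training data.
# Field names and salt handling are illustrative assumptions.
import hashlib
import os

SALT = os.environ.get("DEID_SALT", "change-me")  # keep the real salt secret

def pseudonymize(value: str) -> str:
    """Replace a PII value with a truncated, salted SHA-256 hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def deidentify(record: dict, pii_fields=("name", "email", "phone")) -> dict:
    """Return a copy of the record with its PII fields pseudonymized."""
    return {
        key: pseudonymize(str(value)) if key in pii_fields else value
        for key, value in record.items()
    }

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "email": "jane@example.com", "pages_viewed": 12}
    # The name and email become opaque hashes; pages_viewed is untouched.
    print(deidentify(raw))
```

Salted hashing removes raw identifiers while still allowing records from different sources to be joined on the pseudonymized keys; stronger approaches such as full anonymization or synthetic data generation go further where regulations require it.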

Even though AI and ML show a lot of promise, organizations intending to build AI models need to be prepared for data privacy and security regulations. Setting up guidelines for handling training data will help address many legal and ethical issues. Being prepared will not only reduce the risk of incurring fines, but also improve your organization’s overall security posture.

** Optrics Inc. is an Authorized ManageEngine partner


The original article can be found here:

https://blogs.manageengine.com/corporate/manageengine/2019/12/13/five-worthy-reads-the-privacy-implications-of-ai.html
