Systems must prioritize the privacy of individuals and the rights of data subjects

Today, artificial intelligence (AI) is part of many of the daily activities we carry out. In fact, 84% of people around the world use an AI-enabled device or service today. As more and more people share their data with these systems, trust becomes the cornerstone of interactions with organizations. That trust arises when the devices or services we use do what we expect them to, just as we have come to trust that a banking app will execute transactions accurately.

However, the ways companies use our data or AI to deliver benefits can vary, and so can the associated risks. Not all data is handled the same way, and not all AI technologies are created through the same process. In this context, regulation is fundamental.

If data is the new oil, then refining it yields AI gasoline. It is this combination that can bring value to individuals and businesses in the data economy. However, there is one key element that has yet to be meaningfully addressed in regulatory frameworks: the different risks that data-driven business models pose to people.

At IBM, we believe there are two distinct categories of data-driven business models:

High-risk models, which use people’s data as a source of revenue (monetization of external data). In this model, people rarely understand how their data is accessed and used in the data economy, or the level of risk they take on by providing it.

Low-risk models, which use data to improve operations, products, or services (internal data monetization, or data valuation). In general, people can expect that their data will not leave this relationship, or they can vote with their wallet if they are not satisfied.

This distinction would allow regulation to strike a more appropriate balance. For example, regulators could adjust the regulatory burden to be commensurate with the risks of each data-driven business model, increase the transparency of data resale, and require buyers to verify that data has been processed lawfully and transparently, among other obligations.

From an artificial intelligence perspective, IBM believes these systems should prioritize people’s privacy and the rights of data owners. That is why we have called on the world for precision regulation of AI: stricter controls and policies on the technology’s end uses, where the risk of harm to society is greatest.

Along these lines, it is essential that AI regulation take three principles into account: first, that the purpose of AI is to augment human intelligence, not replace it; second, that data and the insights generated from it belong to their creator; and third, that powerful new technologies like AI must be transparent and explainable, and must mitigate harmful and inappropriate bias.

Our call to action is clear: it is essential to build trust without stifling innovation, while identifying problematic use cases so they can be changed or corrected. Other aspects that should guide the regulation of artificial intelligence are:

- Demand transparency. Organizations must disclose when, where, and how they are using AI and data. For example, if a person is talking to an AI virtual assistant, they should be informed that they are talking to an AI and not a live person.

- Propose different rules for different use cases. Policies should reflect the distinction between high-risk and low-risk applications. For example, the risks posed by a virtual assistant are not the same as those of an autonomous vehicle.

The ways organizations use people’s data and AI are continually evolving. This is not about prematurely imposing new data protection rules or banning the technology, as both data and AI are cross-cutting drivers of innovation. It is about collaborating to address the risks of data monetization, promoting the responsible advancement of technology, and pushing the boundaries of innovation by pooling our resources and expertise to improve our collective and individual well-being.

At IBM, we are optimistic. We believe that by applying science and innovation to real-world problems, we can create a better, more sustainable, equitable, and secure future.
