If a bot screws up, who answers in court? How Brazil wants to regulate AI

If artificial intelligence (AI) makes a wrong decision, who is held responsible: the person who decided to use it or its developer?

The question sounds like something out of science fiction, but Brazil now has a bill (PL 2338/2023) laying out principles and rules for using and applying the technology.

Presented last week, the text is the result of a working group of jurists, specialists, and civil society representatives that drafted a preliminary bill at the request of the Senate in December 2022; the bill is now going through the legislative process.

Artificial intelligence systems should have human oversight, and people have the right to an explanation if they think a decision is wrong (Photo: internet reproduction)

In case you were wondering: the AI developer can be held responsible if it is not transparent about the risks of an action suggested by the system.

In a nutshell, the bill defines a series of rules for companies that develop and deploy artificial intelligence systems – such as how to assess the risks the technology may cause – and establishes people’s rights, such as knowing whether you are dealing with an AI system and, when a decision is automated, understanding what led to it.

WHY IS IT IMPORTANT?

Artificial intelligence is not exactly new – email systems have used AI to filter out spam messages for years, and your favorite streaming service relies on the same kind of technology to recommend movies and series.

The point is that AI has also been used in sensitive areas, for example:

  • Facial recognition used by law enforcement: a person may be stopped on the street because a system has mistaken them for a wanted person. Several experts have already shown this type of technology to be discriminatory (especially against Black people) and ineffective.
  • Retirement claim analysis: someone may have a retirement claim denied by an AI system; in March, an analysis by the TCU (Federal Audit Court) indicated that the INSS (National Social Security Institute) used an artificial intelligence system that tended to deny the right to retirement.

WHAT DOES THE BILL SAY?

The bill’s premise is to create norms for the “development, implementation, and responsible use of artificial intelligence systems in Brazil”.

TRANSPARENCY: “AM I TALKING TO AN AI?”

Inspired by the AI Act, the European legislation on the topic that has not yet been enacted, the Brazilian bill combines a risk-based approach to artificial intelligence with a rights-based logic, according to Marina Garrote, a researcher at the Data Privacy Brazil Research Association.

“The Brazilian bill requires, for example, that if you are talking to a customer service chatbot, there is a clear identification that it is an AI,” she explains.

The concept of transparency extends to several areas.

Even in lower-risk uses, such as a customer service chatbot, under the law people must be told clearly whether decisions made about them were based on artificial intelligence and what criteria were used.

RIGHT TO EXPLANATION: “HEY, WHY DID THIS SYSTEM MAKE THIS DECISION?”

Artificial intelligence systems should have human oversight, and people have the right to an explanation if they think a decision is wrong.

For Luiz Philipe Oliviera, coordinator of Data Protection and Artificial Intelligence at OAB-SP (the São Paulo chapter of the Brazilian Bar Association), this rule is already present in the LGPD (General Data Protection Law) and is now complemented by the bill, which also dictates rules for the public sector.

“In the government’s use of AI, the same principles of public administration continue to apply, such as transparency: the person always has the right to know the reason for a decision,” he said.

PUNISHMENTS AND FINES: ASSESSMENT ACCORDING TO THE DEGREE OF RISK

Artificial intelligence systems will be classified according to their degree of risk, and companies must conduct a prior study to understand the consequences of using the technology.

The bill prohibits, for example, the use of AI to classify and rank people based on social behavior, as well as biometric identification systems in public safety activities.

Among high-risk activities, it lists, for example:

  • management and operation of critical infrastructure, such as traffic control and water and electricity supply networks
  • education and professional training system
  • evaluation for concession or revocation of private and public services considered essential
  • evaluation of people’s debt capacity
  • establishment of priorities for emergency response services
  • autonomous vehicles, when their use may generate risks
  • criminal investigation

In case of a problem with an artificial intelligence system, such as a decision-making tool that affects many people, the company or government body may be required to disclose the infraction, suspend the service (where appropriate), and may even be barred from processing databases.

The simple fine can reach R$50 million (about US$10 million) per infraction, or up to 2% of revenue.

AUTONOMOUS AUTHORITY

The text provides for the creation of an autonomous authority that will ensure that companies comply with the regulation.

The authority will also be able to reclassify artificial intelligence systems according to their degree of risk, and it will be responsible for imposing fines on those who break the rules.

With information from UOL
