Artificial intelligence and machine learning: International regulation from ethical, moral, social and economic aspects

In this article, Paulo Perrotti, vice-president of Services at the Chamber of Commerce Brazil-Canada (CCBC), discusses the impacts of new technologies.


By Paulo Salvador Ribeiro Perrotti*

The first relevant point to note is the practical and tangible application of artificial intelligence (AI) and machine learning in the economic, commercial and financial environment. Since the basis of this technology is to detect logical patterns through massive statistical analysis of data, including consumer habits, and to provide repetitive, automated, customized solutions, one of the main concerns is outliers and the application of conditional biases.

Outliers are data points that differ drastically from all others and fall outside the norm; they can cause anomalies in the results obtained through algorithms and analysis systems. Conditional biases can lead the technology to abstract, impractical and prejudiced conclusions. This happens because the data is processed in a generic, sweeping and repetitive way, setting aside specific or minority data and preferences, which may be abused or treated inadequately when that information is tallied and applied automatically and without filters. In this sense, AI systems can produce biased results that discriminate on factors such as sex, gender and race.
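
To make the outlier problem concrete, here is a minimal sketch, not drawn from any tool discussed in this article, of how an analysis pipeline might flag values that deviate drastically from the rest. It uses the median-based modified z-score; the 3.5 cutoff is a common statistical convention, not a regulatory requirement.

```python
import statistics

def flag_outliers(values, threshold=3.5):
    """Flag points whose modified z-score exceeds the threshold.

    Uses the median and the median absolute deviation (MAD), which,
    unlike the mean and standard deviation, are not themselves
    distorted by the extreme values we are trying to detect. The
    0.6745 constant and the 3.5 cutoff follow a common convention.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # more than half the values are identical
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# One transaction amount drastically unlike the others is flagged.
amounts = [42.0, 39.5, 41.2, 40.8, 43.1, 38.9, 950.0]
print(flag_outliers(amounts))  # -> [950.0]
```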

As a result, a growing number of tools attempt to diagnose a model and identify whether it carries any bias. This issue currently concerns almost all information technology professionals: the recently released Global AI Adoption Index 2021 survey found that 94% of IT professionals report that it is important for their business to be able to explain how an AI arrived at a particular decision, in order to determine whether a conditional bias was at play.
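
Bias-diagnosis tools of the kind mentioned above typically start from simple group-fairness metrics. As an illustrative sketch, assuming hypothetical decision data labelled by a protected attribute, the code below computes the demographic parity gap, i.e. the difference in positive-outcome rates between groups; the metric, data and threshold commentary are illustrative, not taken from any specific tool or law.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Compute the gap in approval rates across groups.

    `decisions` is a list of (group, approved) pairs. A large gap
    in the rate of positive outcomes between groups is one common
    signal that a model may be producing biased results.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions labelled by a protected attribute.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]
gap, rates = demographic_parity_gap(decisions)
print(rates)                    # approval rates: A ~0.67, B ~0.33
print(f"parity gap: {gap:.2f}")  # 0.33 -> worth auditing further
```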

From this same point of view, there is a risk that certain conclusions based on machine learning will not be inclusive, or will treat an individual or even an entire group inappropriately, precisely because they do not fit the patterns of other users and have not conformed to the rules imposed as standard for that particular technological environment.

The Canadian example 

The world depends more and more on digital technology to connect, work and innovate. In my studies on the subject, and especially during my participation in C2 Montreal, one of the world's most representative festivals of creativity and innovation in business, I learned that the government of Canada, a country at the forefront of adopting technological solutions, is implementing a series of measures to ensure that Canadians can benefit from the latest technologies, confident that their personal information is protected and that companies are acting responsibly.

The Minister of Innovation, Science and Industry, François-Philippe Champagne, together with Canada’s Minister of Justice and Attorney General, David Lametti, presented the Digital Charter Implementation Act, 2022, which will significantly strengthen data privacy in the country, as well as create new rules for the responsible development and use of artificial intelligence. 

The Digital Charter Implementation Act, 2022 includes three basic proposals:

  1. Creation of the Consumer Privacy Protection Act;
  2. Establishment of a personal data and information protection court; and
  3. Regulation of AI through the Artificial Intelligence and Data Act (Aida).

Consumer privacy protections will address the needs of Canadians who rely on digital technology to do their jobs. This regulation will ensure that the privacy of Canadians is protected and that innovative companies can benefit from clear rules as technology continues to evolve. It will also cover the following points:

  1. increasing control and transparency when organizations handle Canadians’ personal information;
  2. giving Canadians the freedom to move their data from one organization to another in a secure manner;
  3. ensuring that Canadians can request their information and have it disposed of when it is no longer needed;
  4. establishing stronger protections for minors, including by limiting organizations’ right to collect or use this type of information and holding organizations to a higher standard when dealing with minors’ information;
  5. providing the Privacy Commissioner of Canada (the country’s data privacy commissioner, a non-partisan ombudsman and official of the Canadian Parliament) with broad order-making powers, including the ability to order a company to stop collecting data or using personal information; and
  6. establishing significant fines for non-compliant organizations, with fines of up to 5% of global revenue or $25 million, whichever is greater, for the most serious infractions.

On the other hand, Aida will introduce new rules to strengthen the confidence of Canadians in the development and deployment of AI systems in order to: 

  1. protect citizens by ensuring that high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias;
  2. establish an AI commissioner to support the Minister of Innovation, Science and Industry in fulfilling the portfolio’s responsibilities, including monitoring company compliance, ordering third-party audits, and sharing information with other regulators; and
  3. outline prohibitions and criminal charges in relation to the use of illegally obtained data for AI development, to reckless deployments of AI that cause serious harm, and to cases of fraudulent intent to cause substantial economic loss through its deployment.

The stated purposes of Aida are: (i) to regulate international and interprovincial trade and commerce in AI systems by establishing common requirements applicable across Canada for the design, development and use of such systems; and (ii) to prohibit certain conduct in relation to AI systems that could result in serious harm to individuals or their interests. Aida defines “harm” as (a) physical or psychological harm to an individual, (b) damage to an individual’s property, or (c) economic loss to an individual. 

AI regulation will focus on persons who carry out a “regulated activity,” meaning any entity that performs either of the following activities:

  1. processing or making available for use any data related to human activities for the purpose of designing, developing, or using an AI system; or
  2. designing, developing, or making available for use an AI system or managing its operations.

These definitions are so broad that many AI systems can easily fall within the meaning of a regulated activity. Aida imposes regulatory requirements on AI systems in general and on AI systems specifically designated as “high-impact systems,” a classification that remains subjective and depends on several factors. Impact levels vary based on the system’s effect on the rights, health and welfare of individuals or communities, on economic interests and the sustainability of an ecosystem, and on the reversibility and duration of those effects.

When an AI system meets the definition of “high impact,” the responsible party must: 

  1. establish measures to identify, assess, and mitigate risks of harm or biased output that may result from the use of the system;
  2. establish measures to monitor compliance with those mitigation measures and their effectiveness;
  3. where the system is made available for use, publish on a public website a plain-language description of the system that explains, among other things, how the system is to be used, the types of content it is intended to generate, the types of decisions, recommendations or predictions it is intended to make, and the risk mitigation measures established;
  4. where the operation of the system is being managed, publish a similar plain-language description covering how the system is used, the content it generates, the decisions, recommendations or predictions it makes, and the mitigation measures established; and
  5. notify the responsible authority if use of the system results, or is likely to result, in material harm.

In this regard, the authority may require any person responsible for a high-impact system to stop using it (or making it available for use) when there are reasonable grounds to believe that its use gives rise to a serious risk of imminent harm. Similarly, it may require an audited person to implement any measure specified in an order to address matters raised in an audit report, or require a person to publish certain information, including audit details, on a publicly available website, provided that this does not entail the disclosure of confidential business information.

On the topic of anonymization, under Aida a person who carries out a regulated activity and who processes or makes available for use anonymized data in the course of that activity will be required to establish measures governing (a) the manner in which the data is anonymized, and (b) the use or management of the anonymized data.
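
Aida does not prescribe any particular anonymization technique, so the following is only a minimal sketch of one measure an organization might document: replacing direct identifiers with keyed (salted) hashes and generalizing or dropping other fields before data enters an AI pipeline. The field names and record layout are hypothetical; a real anonymization program would go further (aggregation, k-anonymity, re-identification testing).

```python
import hashlib
import hmac
import secrets

# A secret salt kept separate from the data; without it, the hashes
# cannot be linked back to the original identifiers.
SALT = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym)."""
    return hmac.new(SALT, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Strip or transform fields before the data enters an AI pipeline."""
    return {
        "user": pseudonymize(record["email"]),   # direct identifier -> pseudonym
        "age_band": (record["age"] // 10) * 10,  # generalize exact age to a decade
        "purchase_total": record["purchase_total"],  # non-identifying metric kept
        # free-text notes are dropped entirely: too hard to sanitize reliably
    }

raw = {"email": "ana@example.com", "age": 34,
       "purchase_total": 129.90, "notes": "called support twice"}
print(anonymize_record(raw))
```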

As we well know, AI has numerous application areas, such as facial or speech recognition, autonomous vehicles, chatbots, navigation, targeted marketing, personalized learning, and support for recruiting and candidate selection. One of the main concerns it generates is the potential for misuse of personal information, since large amounts of data are required to develop a machine learning system.  

Therefore, those who violate Aida may be held liable for administrative monetary penalties. The authority may create an enforcement model for these penalties covering: the classification of violations as minor, serious or very serious; investigative inquiries; the right of defense; the scope and amount of the administrative penalties that may be imposed; reviews or appeals of findings that a violation has been committed; the imposition of administrative monetary penalties; and compliance agreements.

Aida draws on notable international collaborations, including the Global Partnership on Artificial Intelligence (GPAI, which has 25 member countries, including Canada), UNESCO’s work on AI, and projects by standards-setting organizations such as the Institute of Electrical and Electronics Engineers (IEEE) and the International Telecommunication Union (ITU).

Regulatory challenges 

There is extensive international consensus on the key legal and regulatory challenges of AI, such as safety (including performance robustness and cybersecurity), transparency, accountability, human control, bias mitigation, and privacy protection.  

There is, however, less consensus on how to address its broad potential and the social and economic effects of widespread AI adoption.  

The most advanced and comprehensive proposal for the “cross-cutting” regulation of AI is the Artificial Intelligence Act proposed by the European Union (EU). As with EU privacy laws such as the General Data Protection Regulation (GDPR), it is expected to become a de facto requirement for businesses internationally, the so-called “Brussels effect.” The bill prohibits some uses of AI, such as “subliminal techniques,” and classifies others, such as biometric identification, as “high-risk.” For these, the legislation requires special measures, including risk management, data governance, documentation, record keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.

The latest draft of the law also includes requirements for general AI systems, which can be used for high-risk applications. The Artificial Intelligence Act is not expected to go into effect until 2024, but companies should start addressing it now, as its entry into force will require the development and implementation of technical solutions that can have a material effect on product and service development.  

However, the scarcity of AI experts makes regulatory work on an international basis difficult. Because AI is applicable across all industries and activities, virtually all laws potentially apply to its use; some, particularly those related to privacy, are already significant considerations.

*Paulo Salvador Ribeiro Perrotti is vice-president of Services at CCBC, an entity in which he held the presidency between 2017 and 2021, CEO of LGPD Solution and an ISO 27001 lead auditor (the international ISO certification related to information security). He is also a professor of Cyber Security in the postgraduate program of Faculdade de Engenharia de Sorocaba (Facens) and of Offensive Cybersecurity with Certification (CEH) at Acadi-TI. He holds a specialization in Canadian and Quebec Law from Université de Québec à Montreal (Uqàm), an MBA from FGV-SP, and specializations in Computer Law (LLM) from Ibmec-SP, in Financial Markets from Instituto Finance and in Social Responsibility from ESPM-SP, and is a Certified Secure Computer User (CSCU) by EC-Council and a member of the Special Commission on International Relations and of the Privacy and Data Protection Commission of OAB-SP. He also holds specialization degrees in Business Intelligence from Dominican College of San Raphael and in Negotiation Techniques from Berkley University. He is responsible for ESG (environmental, social and governance) matters at the Blockchain Research Institute (BRI) in Brazil and writes a regular column on the subject for the portals Procurement Digital and SolutionHub. Contact: [email protected]