As with digital privacy, the European Union is leading the worldwide effort to regulate the use of AI. In April 2021, the European Commission released a draft proposal of the Artificial Intelligence Act (AI Act) that aims to ban certain AI applications outright and to impose strict requirements on those considered high-risk AI systems.
The most interesting aspect of this bill is that it does not set out to limit technology or its development, but to assure EU citizens that AI coming to the market will protect their health, safety and fundamental rights. Here is how the European Commission aims to accomplish these goals.
The draft proposal distinguishes four levels of risk when using algorithms:
- Minimal Risk – A code of conduct is sufficient
- Limited Risk – These applications require transparency
- High Risk – These types of AI need to be assessed before use
- Unacceptable Risk – Use of AI is prohibited
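The tiered structure above can be sketched as a simple lookup. This is a hypothetical illustration of the classification scheme, not an implementation of the Act itself; the example applications are taken from the descriptions in this article:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers of the draft AI Act (illustrative sketch)."""
    MINIMAL = "code of conduct is sufficient"
    LIMITED = "transparency obligations apply"
    HIGH = "conformity assessment required before use"
    UNACCEPTABLE = "use of the AI system is prohibited"

# Example mapping of applications to tiers, as described in the proposal.
EXAMPLES = {
    "spam filter": RiskLevel.MINIMAL,
    "deepfake generator": RiskLevel.LIMITED,
    "CV-sorting software": RiskLevel.HIGH,
    "social scoring system": RiskLevel.UNACCEPTABLE,
}

def obligation(application: str) -> str:
    """Return the obligation attached to an application's risk tier."""
    return EXAMPLES[application].value

print(obligation("CV-sorting software"))
```

The point of the tiers is that obligations scale with risk: most systems fall through with minimal requirements, while a small set is blocked entirely.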
According to the draft proposal, the vast majority of AI systems currently used in the EU are at the ‘minimal risk’ level. Tools like spam filters or AI-enabled video games won’t be subject to new legal requirements, but may be subject to a code of conduct.
The main applications within the limited risk category are synthetic media like deepfakes, AI systems that are intended to interact with people, and AI-powered emotion recognition or biometric categorization systems. The reasoning is that people should have the right to know whether they are watching a deepfake, interacting with a computer, or being subjected to emotion recognition systems like those already in use in many call centers. Systems authorized by law to detect, prevent, investigate or prosecute criminal offenses are exempt from these transparency rules.
High-risk systems
High-risk systems may include:
- Infrastructure that could put the life and health of citizens at risk – such as self-driving technology.
- Educational or vocational training such as scoring of exams.
- Safety components such as robot-assisted surgery.
- Employment assistance such as CV-sorting software.
- Law enforcement such as evaluation of the reliability of evidence.
- Asylum and border control management such as the verification of authenticity of travel documents.
These systems will be subjected to an assessment and must obtain a Conformité Européenne (CE) certification before they can be used within the European Union. Developers must also disclose information about the characteristics, capabilities and limitations of the AI system, and the systems must be designed to be overseen by humans.
The proposal specifically mentions four types of technologies that will be banned from use within the EU: social scoring, dark-pattern AI, manipulation and real-time biometric identification systems.
Surprisingly, it does not propose a ban on autonomous weaponry, like this robot dog with a huge cannon on its back.
What is AI?
The draft proposal acknowledges that AI technology is still evolving rapidly, but it lists a few specific approaches that are currently in use:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods, including deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- Statistical approaches, Bayesian estimation, search and optimization methods.
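To make the second category concrete, a toy forward-chaining inference engine illustrates what the proposal means by "inference and deductive engines" and "expert systems". This is a hypothetical sketch with made-up rules, not anything defined by the Act:

```python
# A toy knowledge base: each rule maps a set of premises to a conclusion.
RULES = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]

def infer(facts: set) -> set:
    """Forward chaining: apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            # Fire the rule if all premises hold and the conclusion is new.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"has_fur", "gives_milk", "eats_meat"}))
```

Unlike the machine learning category, such logic-based systems derive conclusions from explicit, human-readable rules, which is why they are often considered easier to audit.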
Cathelijne Muller of ALLAI has proposed additional requirements for the AI Act to ensure that AI systems developed and used within the European Union are held to standards of quality, performance and trustworthiness. As Muller explained: “Not all decisions can be reduced to ones and zeros.” ALLAI’s message to lawmakers states that the EU should:
- Broaden the ban on social scoring to private actors
- Ban AI-driven biometric recognition by public and private actors
- Avoid normalizing/mainstreaming high-risk AI
- Demand third-party conformity assessments for all high-risk AI
- Arrange for certain decisions to remain the ultimate prerogative of humans
- Include all requirements of the Ethics Guidelines for Trustworthy AI
The act was approved by the European Economic and Social Committee in September 2021 by a large majority: 225 votes in favor, 3 against and 6 abstentions.
Other important regulatory initiatives for organizations
While the Regulatory framework proposal on artificial intelligence and the AI Act are the most notable efforts around regulating machine learning practices, the European Commission has launched other initiatives as well to guide companies and organizations through the digital transformation and the future of data and algorithms.
The first major step the European Commission has taken is the Data Act, first signalled in the February 2020 Data Strategy Communication. The 2021 Consultation and Inception Impact Assessment will shape the EU’s digital economy and ensure that data doesn’t fall into the hands of bad actors. It is vital to understand that machine learning, a fundamental building block of AI, relies on data, and more data will only allow AI to better learn, discern and make decisions.
The Data Act will govern how data is funnelled into this process, providing a solid framework for digital trust that complements the GDPR and allows for the removal of digital borders, the portability of data and the opening up of public sector data. It also advocates a fair distribution of usage rights across the value chain, and calls for legal certainty on whether database rights can cover machine-generated data and on safeguarding intellectual property rights when data is processed by foreign cloud service providers.
The Netherlands, in particular, is asking the Commission to take the lead in encouraging trusted data sharing in Europe. It is also promoting interoperability as defined earlier, and the Dutch government welcomes the initiative and looks forward to the full proposal for the Data Act, which is expected to be detailed at the end of this year.
Data Governance Act
The Data Governance Act (DGA) was first announced in 2020 as a legislative proposal that aims to create a framework to facilitate data sharing. The DGA covers the data of public bodies, private companies and citizens: it defines how to safely enable sharing of sensitive data held by public bodies while also regulating data sharing by private companies. It also allows citizens to exercise their rights over their data under the GDPR. The DGA is essentially divided into four parts:
- Part I: Prohibition of exclusive sharing of protected data by the governments.
- Part II: Creation of neutral data intermediaries, which enables legal and operational separation of data intermediary services from other parties.
- Part III: A European label for data altruistic organisations that serve the general interest.
- Part IV: A European Data Innovation Board composed of public and private parties advising the Commission on standards and actions promoting data sharing.
Digital Services Act
The European Commission has always been ahead of the curve when it comes to governing user data, which forms the basis of new AI services. To strengthen its existing rules governing digital services in the EU, the Commission proposed two legislative initiatives, the first of which is the Digital Services Act (DSA). Digital services, according to the European Commission, cover a large category of online services: whether you run a simple website, a cloud service or an online platform, you are part of digital services.
The Digital Services Act (DSA) aims to better protect consumers and their fundamental rights while using the services mentioned above. It also aims to establish transparency and a clear accountability framework for online platforms. It does so with the idea of fostering innovation, growth and competitiveness within the single market. From intermediary services to very large platforms, the DSA will improve the effectiveness of removing illegal content, protect users’ fundamental rights and promote freedom of speech.
Digital Markets Act
The Digital Markets Act (DMA) is the second piece of this legislative initiative, meant to ensure that large online platforms don’t gain disproportionate power by acting as gatekeepers in the market. Along with the Digital Services Act, the Digital Markets Act forms the basis of Europe’s digital strategy to promote fair behaviour and competitiveness. The DMA defines a gatekeeper as any large online platform that has a strong economic position, serves as an important intermediary between businesses and consumers, and holds a durable, entrenched position in the market.
With this clear definition, the DMA gives business users who rely on these gatekeepers a fair business environment. It will also help startups and innovators compete in the online platform economy. For consumers, this means more and better services and the ability to switch service providers at fair prices. Gatekeepers, meanwhile, will still be able to innovate and offer new services, but not by using unfair practices or by creating an undue advantage in the market.
GAIA-X
While the Data Act aims to govern the use of data, GAIA-X wants to be the data infrastructure built by Europe, for Europe. The project was first presented to the general public at the Digital Summit 2019 in Dortmund, Germany. The primary goal of GAIA-X is to develop a data infrastructure that is efficient, competitive, secure and trustworthy. In other words, it will act as a federated European data infrastructure supported by representatives of business, science and administration from Germany and France.
GAIA-X does not want to be a cloud service provider itself, and is therefore founded as an international non-profit organisation based in Belgium. It aims to bring openness, transparency and European connectivity, resulting in a digital ecosystem unlike any other. It will serve as a backbone to existing technology such as hyperscalers, linking different elements with the help of open interfaces and standards.
The central element of GAIA-X is the use of secure, open technologies and the creation of clearly identifiable GAIA-X nodes. It will also use software components from a common repository and allow for a uniform data and service space. GAIA-X allows companies and citizens of the European Union to collate and share data while keeping control over it: “They should decide what happens to their data, where it is stored, and always retain data sovereignty.”