Artificial Intelligence (AI) raises many concerns about what could go wrong. As the technology becomes increasingly advanced, some people worry that it could become uncontrollable and threaten humanity.
That’s why it’s crucial to have rules and regulations in place for AI. Just like any other technology, AI should be governed by laws that ensure its safe use.
Why does Artificial Intelligence need rules and regulations?
When it comes to AI, there are a lot of different opinions on whether or not rules and regulations are necessary.
Some people believe that AI should be allowed to evolve without interference, while others argue that it needs to be carefully monitored to prevent negative consequences.
Here are a few key reasons artificial intelligence should have rules and regulations.
To ensure safety: With AI becoming increasingly sophisticated, there is a risk that it could become uncontrollable and pose a threat to humans.
By having rules and regulations in place, we can minimise the risks associated with AI.
To prevent abuse: Just like with any technology, there is always the potential for misuse.
For example, an AI system could be designed to exploit other systems’ vulnerabilities or manipulate people for malicious purposes.
By having rules and regulations in place, we can help to prevent such abuse.
To promote ethical behaviour: As AI becomes more powerful, it will play an increasingly important role in our lives and society. Therefore, we must ensure that AI behaves ethically.
By having rules and regulations in place, we can help to ensure that AI always acts in our best interests.
Several different regulatory approaches could be taken, but any framework must take into account the unique nature of AI.
For example, regulation may need to be applied differently to different AI systems, depending on their capabilities and intended use.
It is also important to consider how best to deal with autonomous AI systems capable of making decisions without human input. As these systems become more advanced, it will become increasingly difficult for humans to understand or predict their behaviour.
This could pose a serious risk if these systems were not subject to appropriate regulation.
How is Europe dealing with Artificial Intelligence?
The European Commission has proposed a set of rules to regulate artificial intelligence: the AI Act.
The proposals, which are still being negotiated, would apply to AI systems developed or deployed in the EU and require companies to take measures to ensure their products are safe and comply with ethical principles.
The proposal for the EU AI Act will become law once the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the text.
Once the rules are in place, companies would have to conduct risk assessments of their AI products and take steps to mitigate any identified risks. They would also need to provide customers with clear information about how their AI systems work and what they are designed to do.
The following are key updates from the proposal to the latest developments:
On April 21, 2021, the European Commission published a proposal to regulate artificial intelligence throughout the EU.
On July 20, 2021, the Slovenian Presidency of the Council of the European Union organised a virtual conference on the regulation of artificial intelligence (AI), ethics, and fundamental rights.
On August 6, 2021, the Commission closed the AI Act’s public consultation period, having received 304 submissions.
On August 6, 2021, a study on the ethical and legal implications of biometric techniques commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs was published.
On November 29, 2021, the rotating presidency of the EU Council shared the first compromise text on the AI Act draft. The text included major changes in the areas of social scoring, biometric recognition systems, and high-risk applications.
On December 1, 2021, it was confirmed that the European Parliament’s internal market and civil liberties committees would jointly lead negotiations on the AI Act, with Brando Benifei (S&D, Italy) as lead negotiator for the internal market committee and Dragoş Tudorache (Renew, Romania) for the civil liberties committee. Members of both committees would work together to ensure that the AI Act meets the needs of both businesses and consumers.
On January 25, the European Parliament’s Internal Market and Civil Liberties committees had their first joint meeting to discuss the AI Act proposal.
On February 2, the European Commission outlined its new Standardisation Strategy, which sets out the approach to standards within the Single Market and globally. Standards are essential for the Single Market’s functioning and ensuring that European businesses can compete on a global scale, said the Commission.
On February 3, the French Presidency of the Council circulated compromise texts of Articles 16-29 and 40-52 of the proposed AI Act. These texts cover the obligations of users and providers of high-risk systems as well as harmonised standards, conformity assessments, and transparency obligations for certain AI systems.
On March 2, the European Parliament’s Committee on Legal Affairs published its amendments to the AI Act. The following day, the Parliament’s Committee on Industry, Research and Energy published its draft opinion on the AI Act.
On April 20, Brando Benifei and Dragoș Tudorache, Members of the European Parliament leading the charge on the AI Act in the IMCO and LIBE committees, published their draft report.
On May 13, the French Presidency of the Council published a text proposing the regulation of general-purpose AI systems. These AI systems can complete various tasks, such as understanding images and speech, generating audio and videos, detecting patterns, answering questions, and translating text.
June 1 was the deadline for each political group of the European Parliament to submit amendments to the AI Act. In total, thousands of amendments were submitted.
On June 15, the French Presidency of the Council of the EU circulated its final compromise text before the Czech Republic took over the presidency.
On June 17, the Czech Presidency of the Council of the EU shared a discussion paper with other EU governments, listing the main priorities of the AI Act.
On September 5, the Committee on Legal Affairs (JURI) at the European Parliament adopted its opinion on the AI Act, making it the last committee to do so in the Parliament.
The Commission says that its aim is not to stifle innovation but to create a level playing field for companies developing AI products and to ensure that citizens can trust that these products are safe and ethically sound.