Artificial Intelligence (AI) is making breathtaking advances. Many believe that AI can help address societal issues such as ageing, climate change, food safety, and healthcare.
Like every other sector, government stands to gain exceptional benefits from integrating Artificial Intelligence into its work. However, the use of AI raises ethical issues, since it can be used to manipulate individuals in various scenarios.
According to Dr. Linnet Taylor, there is a growing problem in AI governance: the gap between what people are experiencing from AI technologies and the regulatory and policy discussions about those technologies, as she explains in this interview with the government-affairs platform Binnenlands Bestuur.
Dr. Linnet Taylor is an Associate Professor at the Tilburg Institute for Law, Technology, and Society in the Netherlands. Her research focuses primarily on the use of new sources of digital data in governance, particularly around issues of human and economic development.
Data usage by the Dutch government
The Netherlands is one of the countries best placed to take advantage of the opportunities offered by digitalisation, AI in particular.
In 2019, the Dutch government released its Strategic Action Plan for Artificial Intelligence, presenting a range of policy initiatives to strengthen the Netherlands’ competitiveness in AI on the global market.
“In the Netherlands, the boundaries of what is possible are constantly being explored,” says Taylor in her interview with Binnenlands Bestuur.
The latest technologies are equipping governments with remarkable capabilities to monitor and surveil individual people. Discussing the Dutch government’s use of data, she says, “There are two levels in the Netherlands where public values are involved in data governance: at the top, by the administration, and from the bottom up. There is a gap in the middle.”
She adds, “We try to investigate the level in between: the connecting area between residents’ experiences and what happens at the top, in the administration.”
Lack of accountability
When asked about her recent white paper from the Urban Data Governance Clinic, which highlighted that data governance focuses mainly on personal data while the public’s privacy remains vulnerable, she says, “We asked project staff from urban technology projects to participate in a three-day workshop and asked questions about the techniques, the challenges, and possible resistance to them. We found that there was no accountability structure for the interests of residents.”
European legislation
The European Commission has proposed various legislative initiatives to create a safe digital space in which users’ fundamental rights are protected and to establish a level playing field for businesses.
Talking about European legislation, she says, “The Digital Markets Act, the Data Governance Act, the Artificial Intelligence Act, and a whole range of other European legislation is coming up in the digital field. These build on the GDPR, which in the Netherlands is translated as the AVG, but the AVG is mainly about the data itself, not about how it is used with AI. That layer, the usage layer, is very important. It concerns, for example, facial recognition in public space.”
She continues, “There will be three categories: banned AI, high-risk AI, and other forms. They are divided by risk, so organisations will need to assess that. The GDPR/AVG provided clear testing methods for data; hopefully, those will also be used for AI.”
The Netherlands embraces innovation
The Netherlands is one of the most innovative countries in the world, with an open economy. That, says Taylor, is why clashes occur between the law and the public and private sectors that want to move forward.
“The EU regulates, but before that becomes national law, there must be a debate in all countries. It will get interesting. We try to contribute by looking at what technology should and should not do,” she concludes.