The European Commission presented the outlines of its AI strategy in February 2020. In a statement now published by ITAS, the researchers point out that regulating the risks of AI requires democratic processes of deliberation and decision-making.
How can the European Union harness the innovation potential of AI while effectively regulating the risks of the technology? The European Commission put possible answers to this question up for discussion in its White Paper “On Artificial Intelligence – a European approach to excellence and trust”, presented in February 2020. To contribute to the Europe-wide debate, researchers working at ITAS in the BMBF project “Governance of and by Algorithms” have submitted a statement on the White Paper. In particular, the ITAS researchers point out that numerous normative decisions must be taken for the regulation of AI systems to be considered legitimate. These include, for example, the questions of which basic values are to be taken into account or left out, how these values are to be made concrete for risk assessments, how value conflicts are to be resolved, and how risks to particular population groups are to be dealt with. This requires intensive “political processes of social deliberation, which must be democratically legitimized and controlled”. Science, they argue, can only provide advice.
News item of the institute: http://www.itas.kit.edu/2020_026.php
The statement is available at: https://www.itas.kit.edu/pub/v/2020/orua20a.pdf