The Montaigne Centre for Rule of Law and Administration of Justice at the Utrecht School of Law of the Faculty of Law, Economics and Governance is looking for a PhD candidate to conduct research on the regulation of algorithmic decision-making under the supervision of Prof Nadya Purtova.
Algorithmic decision-making (ADM), i.e. decision-making facilitated by automated means such as machine learning and other computational tools, has a profound influence on life in a digital society. At the same time, it presents many risks. There is a flourishing body of literature addressing the various aspects of unfair processes and outcomes of algorithmic decision-making, such as bias, lack of transparency and accountability, discrimination, the influence of Big Tech, and the impact on autonomy and democracy. In data protection law, the desire to bring automated decision-making within the scope of data protection - and thereby provide legal protection against the risks of ADM where other legal domains might not - is stretching the material scope of the GDPR to the point where everything becomes personal data and the GDPR becomes the law of everything digital. This raises concerns about the effectiveness of the GDPR. Another concern is the actual ability of the GDPR - arguably meant to enable control over personal data - to resolve all digital problems. Finally, the protections that the GDPR provides regarding automated decision-making only concern individual decision-making, i.e. when the ADM is based on personal data AND is directed at a natural person who is individualized or distinguished from a group. The GDPR does not provide any safeguards if a decision is directed at a group whose members are not individualized. Against this background, the potential of other legal domains, such as consumer, constitutional or administrative law, but also the recently proposed AI Act, to tackle the challenges of algorithmic decision-making is increasingly widely recognized.
Despite the flourishing scholarly debate on regulation of algorithmic decision-making and the current focus on ADM regulation under the GDPR and in the AI Act, a number of questions remain underexplored:
- What should we understand by a decision in algorithmic decision-making? Should the design of technology be included?
- Given that to be future-proof law in principle should be technology-neutral and only address specific technologies when they present specific risks, what exactly about algorithmic decision-making - if anything - justifies technology-specific regulatory intervention? Is it automation and involvement of any computational technique, or lack of transparency and other challenges associated with more advanced computational techniques such as machine learning and predictive models?
- What - if anything - justifies regulating individual algorithmic decision-making separately from regulation of algorithmic decision-making pertaining to groups? Should government policymaking facilitated by algorithms be regulated as an instance of algorithmic decision-making?
- To what extent should regulation of ADM be general, and to what extent sectoral?
- What inspiration for regulating ADM can we find in existing legal domains?
- Finally, to address the risks of algorithmic decision-making, is it necessary and sufficient to regulate the technologies used in ADM (as, e.g., the AI Act does), or should the decision-making process itself be regulated?
The objective of this PhD project is to examine whether a general approach to the regulation of ADM can be developed and what this approach could be. The PhD candidate will have the opportunity, and will be expected, to further specify this research objective in consultation with the supervisor and to develop their own research plan.