Sahana: Hi everyone! Welcome to In Limbo Conversations. Today, we have with us Anastasia Siapka. She is a doctoral researcher at the Centre for IT & IP Law at KU Leuven. Her research focuses on the intersection of law, ethics and emerging technologies, especially the automation of work.
Thank you for joining In Limbo Conversations, Anastasia! It is great having you on board!
Anastasia: Hello Sahana. It’s really great to join the In Limbo Conversations. Thank you very much for inviting me and congratulations on launching this initiative!
Sahana: In this conversation, I thought we could focus on two broad themes. First, some points you have explored in your dissertation, titled “The Ethical and Legal Challenges of Artificial Intelligence: The EU response to biased and discriminatory AI”, as they bear on the current pandemic. And second, the infodemic accompanying this pandemic and how virtue epistemology could help us respond to it.
Starting with the points related to your dissertation! In the dissertation, you applied an interdisciplinary research methodology to examine the main ethical and legal challenges that Narrow AI, especially in its data-driven Machine Learning (ML) form, poses in relation to bias and discrimination across the EU.
You mentioned that there could be times when individuals and corporations use the technical particularities of AI to mask their discriminatory intent. In the current pandemic situation, could you share some ways in which this could happen? It would be really helpful if you could share examples (if any) of individuals or corporations that you feel could conceal their discriminatory intent through such technicalities of AI in the current situation.
...It is always hard, though, to discern in abstracto when discriminatory intent indeed exists, as we would need access to the specific agent(s)’ motivational structure. However, especially with respect to the current COVID-19 crisis, there are several examples in which the use of AI might have discriminatory outcomes, although not necessarily discriminatory aims as well. An example I am currently researching is the use of online proctoring services for student examination instead of the invigilation that usually takes place at examinations with physical presence...
Anastasia: Indeed, in my dissertation, after elaborating on a range of ways in which biases can infiltrate AI systems, without their developers necessarily being aware of that possibility, I note that the opacity which is, to varying degrees, inherent in allegedly neutral AI systems could potentially be used as a pretext to conceal violations of law or discriminatory patterns. A (not so) hypothetical example would be if people’s postcodes were included in the training datasets of an AI algorithm as seemingly neutral information but with the aim of implementing digital redlining practices and excluding, for instance, members of a protected group from the services of a financial institution or online retailer. It is always hard, though, to discern in abstracto when discriminatory intent indeed exists, as we would need access to the specific agent(s)’ motivational structure.
However, especially with respect to the current COVID-19 crisis, there are several examples in which the use of AI might have discriminatory outcomes, although not necessarily discriminatory aims as well. An example I am currently researching is the use of online proctoring services for student examinations instead of the invigilation that usually takes place at examinations with physical presence. Such services, often driven by Machine Learning algorithms, demand that students stare at the screen at all times; if they don’t, they are considered suspicious of cheating. However, if we think of students with certain disabilities, ADHD or tics, or students who are also carers of children that need to be watched or breastfed, the demand for uninterrupted staring might pose significant burdens to some of these groups of people. Likewise, such services use facial recognition to authenticate whether the user taking the exam is indeed the student who will be graded. Yet, we know that AI algorithms have a terrible track record in identifying students who are black, Asian or transgender. If the facial recognition system of such services is unable to track or recognise these students, they won’t be able to take the exam at all, which again will pose significant burdens to the members of such groups. In both cases, there is a real risk of discrimination, which is further aggravated if we consider the varying adverse outcomes that a delay in getting one’s degree and entering the job market might have depending on one’s socioeconomic background. Correspondingly, the same flaws might taint the monitoring AI systems used not only in education but also in employment settings, making the discrimination risks posed by AI even more prominent.
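The digital redlining example mentioned above can be made concrete with a small simulation (a purely hypothetical sketch; the groups, postcodes and probabilities are invented for illustration): a decision rule that never sees the protected attribute can still produce starkly different outcomes for two groups, simply because postcode is correlated with group membership.

```python
import random

random.seed(0)

# Hypothetical populations: group membership (the protected attribute) is
# correlated with postcode through residential segregation.
def make_applicant():
    group = random.choice(["A", "B"])
    if group == "B":
        postcode = "Z1" if random.random() < 0.8 else "Z2"
    else:
        postcode = "Z2" if random.random() < 0.8 else "Z1"
    return group, postcode

def model_approves(postcode):
    # A "neutral" rule: the model only ever looks at the postcode.
    return postcode == "Z2"

applicants = [make_applicant() for _ in range(10_000)]

def approval_rate(group):
    members = [p for g, p in applicants if g == group]
    return sum(model_approves(p) for p in members) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"approval rate, group A: {rate_a:.2f}")  # roughly 0.80
print(f"approval rate, group B: {rate_b:.2f}")  # roughly 0.20
```

The rule contains no reference to group membership, yet the approval rates diverge sharply, which is why audits of outcomes (rather than of inputs alone) matter when assessing such systems.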
Sahana: What do you feel would be the central features of a Trustworthy AI that could help to navigate the current pandemic situation? What kind of assistance could such AI offer to us, especially in combating discrimination?
...why, in the first place, do we need to seek assistance from AI, trustworthy or not, to combat discrimination or other societal problems? I think that policy- and decision-makers need to answer this question in a justified manner before resorting to AI solutions. It’s not that AI, as well as other emerging technologies, don’t have a role to play in tackling this pandemic, but it’s both unrealistic and dangerous if such a role is a leading and not a supporting one...
Anastasia: In concluding my dissertation, I refer to the ideal of Trustworthy AI as ‘an ethical and legal compass helping policy-makers navigate between the Scylla of unconditionally accepting efficient but inexplicable, potentially biased AI systems and the Charybdis of guaranteeing maximalist but innovation-strangling protection of individual and collective rights’. This ideal has been specified by the High-Level Expert Group on AI (AI HLEG), a group of experts appointed by the European Commission. In their Ethics Guidelines for Trustworthy AI, they posit that, to be deemed trustworthy, AI systems should incorporate three features: they should be lawful, which refers to legal compliance; ethical, which calls for abiding by ethical principles and values; and robust, which is both a technical and a social requirement. Relatedly, in the industry there’s been a mushrooming of initiatives on ‘fair’ Machine Learning or fairness/anti-discrimination by design. I am quite unconvinced, though, as to how fairness or anti-discrimination can be algorithmically modelled through these approaches, especially when philosophers and other theorists have long been trying to define and explain such rich concepts without reaching consensus.
But, I want to shift this question a little bit and ask: why, in the first place, do we need to seek assistance from AI, trustworthy or not, to combat discrimination or other societal problems? I think that policy- and decision-makers need to answer this question in a justified manner before resorting to AI solutions. Instead, what I personally see unfolding in this pandemic is again a wave of what Morozov calls ‘technological solutionism’, roughly the belief that all complex social problems are computable and can be resolved as long as the right algorithms are in place (see, for example, the number of hackathons launched at its outset). Without first paying due consideration to the analogue solutions available, e.g. to counter the school closures, many took for granted that the responses to the effects of the pandemic should involve digital technologies: across all domains, there’s been a massive, hurried and unplanned influx to such services and tools. I find this particularly worrying because, despite the fact that such digital services and tools are commercially driven, they seem to be gradually replacing society’s core infrastructures. It’s not that AI, as well as other emerging technologies, don’t have a role to play in tackling this pandemic, but it’s both unrealistic and dangerous if such a role is a leading and not a supporting one.
Sahana: You have talked about AI, specifically as it features in EU laws, especially the General Data Protection Regulation (GDPR). Could you share some general principles that you feel could be legally enforced to increase the probability of Trustworthy AI participating in public spaces?
Anastasia: The General Data Protection Regulation posits a list of data quality principles that should be respected when data are being processed, regardless of the exact type of technology involved. These are: lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability. These principles are, of course, welcome in theory. What I tried to establish with my dissertation, though, is that, due to its distinct traits, AI is at odds with their practical implementation. For instance, the fact that certain AI algorithms find patterns and correlations among data on their own means that often the purpose of such data processing becomes clear only after its occurrence: yet, this is inimical to the principle of purpose limitation, which demands the articulation of explicit and specified purposes as a precondition for such processing. Similarly, when AI is designed, developed or deployed, the goal is usually to glean as many of the available data as possible, if not all of them. Exactly because the purpose of the data processing is not definite in advance, such systems are likely to capture as many data as possible on the off-chance they might prove useful. Again, this opposes the principle of data minimisation, which demands that the data processed be restricted to those relevant to and necessary for the purposes of processing. Therefore, I am hesitant as to how the GDPR’s principles can be helpful in that regard.
Recently, though, particularly in relation to COVID-19, a publication from the UK’s Alan Turing Institute, titled Tackling COVID-19 through responsible AI innovation: Five steps in the right direction, has articulated five steps for responsible AI innovation, which I find handy, especially as they move beyond the law and take into account the processes underlying the creation of AI systems and their surrounding environment. Based on their guidance, the first step should be open science and responsible data sharing, which includes opening up research processes as a way to catalyse error detection and correction, but also ensuring that the data shared are optimised for privacy, security and integrity. The second step consists in adopting a self-reflective critical attitude and being sensitive to the values and the context of research and innovation processes. The Institute focuses on the CARE & Act framework in particular, which entails a request to consider the context, anticipate impacts, reflect on purposes, engage inclusively and act responsibly. For the next step, the Institute suggests that establishing a common language, one framed around ethical principles, is key to maintaining an open dialogue when conflicting positions and their underlying values need to be balanced. The Institute’s fourth step seems to encompass and overlap with a few of the GDPR’s principles; in particular, they call for the generation and cultivation of public trust through transparency, accountability and informed consent. In the final step, they stress the importance of striving for innovation that includes and represents all relevant communities, especially those that are vulnerable or socioeconomically disadvantaged, and of paying attention to lived injustice. It seems to me that such an approach better reflects the need to not just make AI systems trustworthy but, most importantly, ensure the trustworthiness of the actors and procedures involved.
Otherwise, focusing too much on the technology itself makes it a convenient scapegoat for avoiding the allocation of human responsibility.
Sahana: That ends the first segment about AI!
Now, coming to the second segment, about the current infodemic and virtue epistemology. In your blog post titled “How to navigate the coronavirus ‘infodemic’”, you drew on the philosophical approach of virtue epistemology to suggest the development of intellectual virtues as a means to counteract the coronavirus ‘infodemic’.
Recently, there have been attempts by organisations like Facebook to combat the infodemic. Do you think the way in which we would evaluate intellectual virtues as practiced by organisations and business companies would differ from the way in which we evaluate them as practiced by individuals? Could you please share a bit about such differences?
...Just as the cultivation of moral virtues depends, among others, on one’s habituation, upbringing and training, the availability of moral exemplars and the peace and prosperity of their environment, similarly the epistemic environment in which individuals find themselves plays a crucial role in determining how praise- or blameworthy one’s behaviour is, as well as the degree of one’s responsibility. For an organisation that has ample resources, it might be easier to act virtuously in the context of the infodemic than it would be for an individual lacking digital literacy or the time to read and challenge all possible views...
Anastasia: Although there’s not much consensus in theory, I believe indeed that virtues and vices can be displayed by and attributed to individuals as well as institutions. But, when evaluating virtues, we need to account for the different situations of an individual versus an organisation or company. And this appears to be the case whether we talk of intellectual virtues, as in excellences of the mind, or moral virtues, in the sense of excellences of the character, or even a fusion of the two, considering that some scholars don’t differentiate between them. An important element of Aristotelian philosophy is the doctrine of the mean, according to which virtue is a mean or moderate state between two extremes: the vice of deficiency and the vice of excess. This mean state is different for everyone according to their circumstances, the nature of the situation at hand and the particular person involved, so it is a relative, not an absolute, state. Indeed, Aristotle acknowledged that circumstances outside our control, such as the availability of material resources, influence our ability to develop a virtuous character. Just as the cultivation of moral virtues depends, among others, on one’s habituation, upbringing and training, the availability of moral exemplars and the peace and prosperity of one’s environment, similarly the epistemic environment in which individuals find themselves plays a crucial role in determining how praise- or blameworthy one’s behaviour is, as well as the degree of one’s responsibility. For an organisation that has ample resources, it might be easier to act virtuously in the context of the infodemic than it would be for an individual lacking digital literacy or the time to read and challenge all possible views.
Also, the opacity of AI systems, which we touched upon in the previous questions, is likewise relevant here: when organisations/companies deploy such systems, they often affect the processes by which individuals can form justified true beliefs. Social media platforms such as Facebook have a direct role in the management and production of information and knowledge, so these remarks are definitely applicable to them. So, I tend to think that there are higher demands for organisations/companies than for individuals and that the former should actually create the conditions that will enable the latter to develop and exercise moral and intellectual virtues. When social media platforms support or even tolerate the circulation of false information, such as in the cases mentioned in my blogpost, they certainly do not create said conditions.
Sahana: I thought I could wrap up our interview with an update on your perspective.
You had written the blog post quite early, I think, when the pandemic had just begun. As the pandemic has developed, have you felt there could be other virtues that are relevant to fighting the infodemic?
...by focusing on the characteristics of the agents/decision-makers themselves, their reasoning process (e.g. how do they weigh competing values and factors), and their responsiveness to context, particularly to the morally salient features of the context, virtue ethics might be a better alternative for evaluating morality and allocating moral praise or blame in times of uncertainty...
Anastasia: Well, my views have evolved since March, and I think that my blogpost remains incomplete as long as there isn’t an equivalent analysis of the virtues that institutions should display apart from individuals. So, your previous question is definitely on-point. Also, my blogpost touches upon virtue epistemology but over the past months I’ve found that virtue ethics more broadly (I follow Linda Zagzebski in understanding virtue epistemology as a branch of virtue ethics) can be a fruitful perspective in light of the pandemic. Because the situation we’re dealing with now is novel and, to a certain extent, unpredictable, judgements about the actual or expected consequences of the different courses of action become more difficult, so the consequentialist ethics often underlying business and policy decisions appears to be a weaker option. Instead, by focusing on the characteristics of the agents/decision-makers themselves, their reasoning process (e.g. how do they weigh competing values and factors), and their responsiveness to context, particularly to the morally salient features of the context, virtue ethics might be a better alternative for evaluating morality and allocating moral praise or blame in times of uncertainty. The virtues that I put forward in my blogpost, namely intellectual courage, intellectual humility, intellectual tenacity, intellectual autonomy and open-mindedness are quite paradigmatic in virtue epistemology. 
But if I were to supplement these with virtues that are particularly pertinent to the pandemic, I would say that honesty, in the sense of truthful communication even of risks or unpleasant information, is definitely among them. Countering the vice of arrogance (or even hubris) matters as well, in the sense that scientific arrogance should be addressed by involving the public in deliberative processes instead of dismissing its input. I would also add the virtue of compassion, including the intention not to harm others and the exercise of care and empathy, especially towards others’ suffering, as well as the role-virtues that stem from relevant professions, such as those of medical professionals, technologists and policy-makers. Where the demands stemming from these virtues conflict, the virtue of practical wisdom, denoting the knowledge and understanding of how to act rightly in different concrete circumstances, especially when different alternatives are involved, can help with weighing the opposing factors. Overall, the emphasis that virtue ethics and epistemology place on practical wisdom seems to me very appealing in our so-called uncertain, ‘post-truth’ era, so I believe its applicability to the COVID-19 pandemic, as well as the infodemic, warrants further examination.
Sahana: Thank you so much, Anastasia, for taking the time to join me and share your perspective!
Anastasia: Thank you very much for this interview, Sahana! It’s been an absolute pleasure.