Artificial Intelligence. Addressing ethical and social challenges.

High-level Meeting with Non-confessional Organisations

Contribution to the discussion by Giulio Ercolessi, European Humanist Federation president


European Commission Article 17 high-level meeting of Commission Vice-President Andrus Ansip with non-confessional organisations, 18 June 2018.

Article 17 of the Treaty on the Functioning of the European Union provides that the Union shall maintain an open, transparent and regular dialogue with philosophical and non-confessional organisations on equal terms with churches and religious associations or communities. With its 65 Member Organisations in 20 different countries, the European Humanist Federation is the largest umbrella organisation of humanist associations in Europe. It promotes a secular Europe, defends equal treatment of everyone regardless of religion or belief, and fights religious conservatism and privilege in Europe and at the EU level; it is therefore the main counterpart of the European institutions in the Article 17 dialogue with the philosophical and non-confessional organisations.


Artificial intelligence and its ethical and social challenges, the theme the Commission chose to discuss this year in the framework of the Article 17 interconvictional dialogue with the “non-confessional” representatives, is new territory for most of our organisations. It is therefore inevitable that most of us today express opinions in tune with our respective cultural and ethical values, rather than ideas that could be considered the outcome of a thorough discussion among our members.

It is quite obvious that risks and opportunities are easier to identify than ways to prevent the former and enhance the latter are to formulate.

But I think that the Commission is right in discarding any “luddite” temptation. Encouraging research, investment and innovation is the right and only responsible option for Europe. Our Union cannot afford to be cut off from the most promising field of technological and industrial innovation of the years to come, and it must not renounce taking part in the definition of international standards.

While it is obvious that no efficient regulation is possible at member state level, if only to avoid the risk of “ethical dumping”, it is perhaps worth recalling here that what is often labelled “ethical dumping” could itself become, or be considered, unethical, depending on one’s own ethical choices. That is, in my opinion, the case of the restrictions in force in most of our member states on embryonic stem cell research, one of the most promising areas of medical innovation, which could help find treatments for the most deadly, dreadful and disabling diseases of our times. These restrictions, which incidentally are also extremely harmful to the European economy, may be considered ethical by those who simply identify embryos with human beings, but may well be considered highly unethical by those who, like myself, do not share that view and are much more concerned with the burden of human suffering that results from this largely enforced ban on that kind of scientific research.

The importance of the EU market is such that an efficient regulation of the applications of AI introduced in the EU could not be ignored by international stakeholders and investors. International standards can be largely determined by EU decisions, as no international player could afford the risk of being cut off from our market. The predictability of European regulation could also stimulate investment in the EU. But enforcing EU regulations, and having them contribute to the definition of international standards, will be much easier if they are introduced promptly. Influencing international standards is much easier now than it will be later, when, if we Europeans are late, other – and probably less stringent – standards will already be in force and international players will have to be asked to adapt to our decisions. Hence not just the need, but the absolute urgency, of European provisions regulating AI.

Value setting, as was rightly highlighted in the preparatory materials for this meeting, is a different process from the machine learning performed by AI. But this should not be considered a peculiarity of machine learning alone. The link between value setting and learning is nowadays often very problematic for humans, too. The demand for educational systems more and more focused on immediately useful technological skills – a trend that the technological developments related to AI could well strengthen – and the lack of any serious citizenship education in many of our countries are actually producing a large number of formally highly educated individuals who are incapable of grasping the basic historical, legal and economic foundations of our societies and of our civilisation. That is what the Spanish philosopher José Ortega y Gasset had already diagnosed as the “barbarism of specialisation” in 1930, at the time of its very first epiphany. This “barbarism”, and the related widespread lack of critical sense, also in the comprehension of political dilemmas and in the appreciation of their seriousness, is indeed one of the most important and underestimated roots of the present widespread populist and authoritarian surge.

Seen from this perspective, the challenge posed by AI in its relation to European ethical and political values is not radically different from that posed by multiculturalism, which is an enrichment for our societies as long as it is contained within the embankment of our common European constitutional heritage (largely enshrined today in the Nice Charter and therefore in the EU treaties). If and when it overflows that embankment, extreme multiculturalism turns into a risk for our liberties and for our living together.

These challenges are not only more difficult for our societies to tackle than for less pluralistic and more holistic or organic ones, such as China or other authoritarian non-Western systems; they are also more difficult for us Europeans than for the US, since the bond that keeps Americans together, and upon which their Union is founded, is the Constitution itself (however diverse its interpretations may appear or be).

The actual process of machine learning largely depends on the existing societal pluralism, and may therefore absorb its contents both from within and from outside the above-mentioned constitutional embankments.

From our point of view, the main risks are a possible reinforcement of existing forms of discrimination and the possibility that algorithms pick up obscurantist social stereotypes.

An algorithm may be biased from the outset, as a conscious or unconscious consequence of the biases harboured by its makers.

That was seemingly the case with the image recognition software introduced by Google in 2015. A young African-American couple realised that one of their photos had been tagged under the “gorilla” label. The explanation for this dysfunction lay in the kind of data with which the algorithm had been trained to recognise people: in this case, it probably consisted mainly, if not exclusively, of pictures of white people (other examples exist of racist biases in image recognition software to the detriment of Asian people). As a result, the algorithm considered that a black person bore more similarity to the “gorilla” object it had been trained to recognise than to the “human” object.
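
The mechanism is easy to reproduce in miniature. The following is a purely illustrative sketch – synthetic toy data and a deliberately crude nearest-centroid classifier, nothing resembling Google’s actual system – showing how a class whose training examples barely include one sub-population ends up misclassifying precisely that sub-population:

```python
# Toy illustration of training-data bias. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "image features". The 'human' training set is drawn almost
# entirely from one sub-population (clustered around [0, 0]); a second
# sub-population (around [4, 4]) is barely represented at all.
human_train = np.vstack([
    rng.normal([0, 0], 0.5, size=(500, 2)),  # over-represented group
    rng.normal([4, 4], 0.5, size=(5, 2)),    # under-represented group
])
other_train = rng.normal([6, 6], 0.5, size=(500, 2))  # a non-human class

centroids = {
    "human": human_train.mean(axis=0),  # dragged toward the majority group
    "other": other_train.mean(axis=0),
}

def classify(x):
    # Assign the label whose centroid is nearest in feature space.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# A sample from the under-represented group lands closer to the wrong
# centroid, not because anyone intended it, but because the training
# data barely contained that group.
print(classify(np.array([4.0, 4.0])))  # -> 'other'
```

No malicious rule appears anywhere in the code; the discriminatory outcome is entirely a product of what the training set omits.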

In other cases it may be unclear whether the bias and discrimination are the result of the algorithm itself or of its interaction with users.

That is the case of the gender bias revealed in the functioning of AdSense, Google’s advertising platform. In 2015, researchers from Carnegie Mellon University and the International Computer Science Institute highlighted how biased it was at the expense of women. Using a tool called AdFisher, they created 17,000 profiles and then simulated web browsing to conduct a series of experiments. They found that women were systematically offered lower-paid jobs than those offered to men with a similar level of qualification and experience: fewer women received online advertisements for jobs paying more than $200,000 per year. The precise causes are difficult to establish, though. It is of course conceivable that such a bias was the result of the will of the advertisers themselves, who would then have deliberately chosen to send different offers to men and women. But it is also possible that the phenomenon was the result of the algorithm’s reaction to the data it received: men may on average have been more inclined to click on ads for the highest-paid jobs, whereas women would have exercised self-restraint, in line with a mechanism well known and described in the social sciences. In that case, the sexist bias resulting from the functioning of the algorithm would be nothing more than the reproduction of a bias pre-existing in society.
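
The second hypothesis – a blameless optimiser amplifying a pre-existing behavioural gap – can also be sketched in a few lines. In this toy simulation (all numbers are invented, and this is not how Google’s systems actually work; it only illustrates the general feedback-loop mechanism), an ad server that merely maximises observed click-through rates ends up showing a high-pay job ad mostly to the group that already clicked on it more:

```python
# Toy feedback-loop simulation; all figures are invented.
import random

random.seed(1)

# Assumed underlying click propensities for a high-pay job ad.
# The gap itself is the pre-existing societal bias the optimiser inherits.
CLICK_RATE = {"group_a": 0.10, "group_b": 0.06}

impressions = {"group_a": 1, "group_b": 1}  # start at 1 so rates are defined
clicks = {"group_a": 1, "group_b": 1}

for _ in range(10_000):
    if random.random() < 0.05:
        # Occasionally explore at random.
        group = random.choice(["group_a", "group_b"])
    else:
        # Otherwise, greedily show the ad to whichever group has the
        # higher observed click-through rate so far.
        group = max(impressions, key=lambda g: clicks[g] / impressions[g])
    impressions[group] += 1
    if random.random() < CLICK_RATE[group]:
        clicks[group] += 1

# group_a ends up receiving the great majority of the impressions,
# although no rule anywhere mentions gender.
print(impressions)
```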

In other cases, the discriminatory result may be totally unintentional.

In April 2016, it was revealed that Amazon had excluded neighbourhoods mainly populated by disadvantaged people in Boston, Atlanta, Chicago, Dallas, New York and Washington from one of its new services (free home delivery within 24 hours). An Amazon algorithm, analysing the data at its disposal, had found that the neighbourhoods in question offered the company little opportunity for profit. Even though Amazon’s objective was certainly not to exclude any particular area from its services because of its predominantly black population, this proved to be the result of the use of the algorithm. Amazon’s algorithm thus had the effect of reproducing pre-existing discrimination, even though no intentional racism was at work.
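
A facially neutral decision rule of this kind is trivially easy to write, and that is precisely the problem. The sketch below (invented figures, hypothetical neighbourhoods, nothing to do with Amazon’s real model) never mentions race, yet reproduces the exclusion, because the variable it optimises is itself a legacy of past exclusion:

```python
# Toy illustration of disparate impact from a "neutral" business rule.
# All neighbourhoods and figures are invented.

# (neighbourhood, projected_profit_per_delivery, majority_black)
neighbourhoods = [
    ("A", 3.10, False),
    ("B", 2.80, False),
    ("C", 1.20, True),   # low historical spend, itself a legacy of exclusion
    ("D", 2.95, False),
    ("E", 1.05, True),
]

PROFIT_THRESHOLD = 2.0  # facially neutral business rule

served = [n for n, profit, _ in neighbourhoods if profit >= PROFIT_THRESHOLD]
excluded = [n for n, profit, _ in neighbourhoods if profit < PROFIT_THRESHOLD]

print(served)    # ['A', 'B', 'D']
print(excluded)  # ['C', 'E'] -- exactly the majority-black neighbourhoods,
                 # although race never entered the decision rule
```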

An even clearer example of an unintentional result was the case of Tay, a “learning” bot supposed to hold conversations on Twitter. In less than 24 hours, Tay converted from its originally humanist and politically correct attitude to racist, sexist and xenophobic discourse, as a consequence of its interaction with what people were writing in their responses. Microsoft apologised and recalled that Tay had been built on the basis of “cleaned up” and “filtered” public data, which clearly turned out to be an insufficient precaution once it was left to operate “autonomously” on Twitter, in interaction with other, non-proprietary data. This poses a real question: how can we train algorithms and AI on public data without their incorporating the worst traits of humanity?
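
The failure mode is structural, and a toy sketch makes it visible (this is invented code, not Tay’s architecture): the initial corpus is carefully curated, but the bot keeps learning from live input to which no equivalent filter is ever applied, so a coordinated group of hostile users can poison it within hours.

```python
# Toy illustration of runtime data poisoning; not Microsoft's actual system.
import random

random.seed(42)

corpus = ["Have a nice day!", "Humans are great."]  # curated, "cleaned up" data

def reply():
    # The bot answers by echoing something it has previously "learned".
    return random.choice(corpus)

def learn(user_message):
    # The flaw: runtime input goes straight into the corpus, while the
    # offline cleaning step is never re-applied to what users send.
    corpus.append(user_message)

for msg in ["You are stupid.", "Humans are awful."] * 20:  # coordinated abuse
    learn(msg)

poisoned = sum(("stupid" in line) or ("awful" in line) for line in corpus)
print(poisoned, "of", len(corpus), "possible replies now come from the abusers")
print(reply())
```

The cleaning applied to the training data was real; it simply did not cover the channel through which the system went on learning.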

We should therefore be aware that the risk of AI becoming a vehicle for reinforced bias and discrimination may depend: 1) on the choices made by the programmers who create the algorithm; 2) on the data absorbed by the system in its interaction with the public; 3) on the simple circumstance that the “logical” choice is sometimes inconsistent with our ethical and constitutional values.

Even more importantly, the reasons that may lead to such results are not transparent, and those responsible are therefore not accountable to any widespread democratic social control. Such a failure of transparency may well result in an information asymmetry to the detriment of fairness, civility and the public.

Our legal culture has developed tools capable of dealing with similar situations. Liability may sometimes arise in the absence of any subjective fault: that was the slowly established outcome of the interpretation of the provisions concerning damage caused by animals since the introduction of the Code Napoléon, and of the reversal of the burden of proof in some dangerous industrial activities. We should build the way we deal with AI on those existing foundations, in a difficult trade-off between the risk of allowing bad practices and that of a paralysis of innovation and investment that Europe cannot afford, and that would inure to the benefit of less demanding international players and regulators.

Finally, reacting to what I heard in previous interventions, I would also stress the need for very strict provisions on the application of AI in the fields of medicine and the judiciary.

Concerning medicine, it should be taken into account that a widespread application of AI could, on the one hand, further jeopardise individual self-determination in decisions concerning one’s personal health and fate; and, on the other hand, especially in relation to insurance industry decisions based on “predictive medicine”, lead to the same results that are more and more the consequence of “defensive medicine”, i.e. inaction.

As far as the use of AI in judicial decisions is concerned, I believe it should simply be avoided. Even today, much less sophisticated technologies – beginning with the simple “copy and paste” function of our word processors – are sometimes, in some countries, weakening the level of constitutional guarantees, especially in the field of criminal justice and habeas corpus provisions. Indeed, the entire fabric of European and Western liberal legal culture would be put at risk if the interpretation of laws were not able to face and adapt to ever new social situations, and to the continuous redefinition of the social and linguistic structures with which any legal system must be able to cope. Given its incapacity to deal with value setting, AI should have no room in the interpretation of the law. The dream of never-evolving interpretations of laws and constitutional provisions has always been the historical dystopia of reactionary legal thinkers – in both Roman law and common law systems – on both sides of the Atlantic.

A video clip of the meeting is available from the European Commission Audiovisual Services.


Photo: EU Commission Vice-President Andrus Ansip with the participants in the High-level Meeting.

 
