“Alicem” is a case in point. Alicem is a smartphone app developed by the State to offer the French people a national identity solution for online administrative procedures. It uses face recognition as a technological solution to activate a user account and allow the person to prove their digital identity in a secure way.
After being authorized by decree on May 13, 2019, followed a few months later by the trial of a prototype among a selected group of users, Alicem was due to be released to the general public by the end of 2019.
However, in July of the same year, La Quadrature du Net, an association for the defense of rights and freedoms on the Internet, filed an appeal before the Council of State to have the decree authorizing the system annulled. In October 2019, the story was picked up by the mainstream press, bringing the app to the attention of the general public. Since then, Alicem has been at the center of a public controversy surrounding its technological qualities, potential misuses and regulation, and its release has been put on hold while these uncertainties are addressed.
At the start of the summer of 2020, the State announced that Alicem would be released at the end of autumn, more than a year later than planned in the initial roadmap. Citing the controversy over the use of facial recognition in the app, certain media outlets argued that it was still not ready: it was undergoing further ergonomic and IT security improvements, and a call for tenders was to be launched to build “a more universal and inclusive offer” incorporating, among other things, activation mechanisms other than facial recognition.
Controversy as a form of “informal” technology assessment
The case of Alicem is similar to that of other controversial technological innovations pushed by the State, such as the Linky meters, 5G and the StopCovid app. It leads us to consider controversy as a form of informal technology assessment that challenges the formal techno-scientific assessments on which public decisions are based. It also raises the issue of a responsible innovation approach.
Several methods have been developed to evaluate technological innovations and their potential effects. In France, Technology Assessment – a form of policy research that examines the short- and long-term consequences of innovation – is commonly used by public actors when it comes to technological decisions.
In this assessment method, the evaluation is entrusted to scientific experts and disseminated to the general public at the launch of the technology. Its main aim is to support the development of public policies while managing the uncertainties associated with any technological innovation through evidence-based rationality. It must also “educate” the public, whose mistrust of certain innovations may stem from a lack of information.
The approach is perfectly viable for informing decision-making when there is no controversy and little mobilization of opponents. It is less pertinent, however, when the technology is controversial. A technology assessment focused exclusively on scholarly expertise risks failing to take account of all the social, ethical and political concerns surrounding the innovation, and thus being unable to “rationalize” the public debate.
Participation as a pillar of responsible innovation
Citizen participation in technology assessment – whether to generate knowledge, express opinions or contribute to the creation and governance of a project – is a key element of responsible innovation.
Participation may be seen as a strategic tool for “taming” opponents or skeptics by getting them on board, or as a technical democracy tool that gives ordinary citizens a voice in expert debates. More fundamentally, however, it is a means of identifying social needs and challenges upstream so they can be proactively taken into account in the development phase of innovations.
In all cases, it relies on work carried out beforehand to identify the relevant audiences (users, consumers, affected citizens, etc.) and choose their spokespersons. The definition of the problem, and therefore the framework of the response, depends on this identification. The case of the Linky meters is emblematic: anti-radiation associations were excluded from the discussions prior to deployment because they were not deemed legitimate representatives of consumers; consequently, the figure of the “affected citizen” was nowhere to be seen during the discussions on institutional validation, but is now at the center of the controversy.
Experimentation in the field to define problems more effectively
Responsible innovation can also be characterized by a culture of experimentation. During experimentation in the field, innovations are confronted with a variety of users, and undesired effects are revealed for the first time.
However, the question of experimentation is too often limited to testing technical aspects. In a responsible innovation approach, experimentation is the place where different frameworks are defined, through questions from users and non-users, and where tensions between technical efficiency and social legitimacy emerge.
If we consider the Alicem case through the prism of this paradigm, we are reminded that technological innovation processes carried out in a confined manner – first designing devices within the closed ecosystem of paying clients and designers, then testing the use of artifacts already considered stable – inevitably lead to acceptability problems. Launching a technological innovation without user participation in its development undoubtedly makes the process faster, but may cost it its legitimacy and even lead to a loss of confidence in its promoters.
The experiments carried out on Alicem among “friends and family”, with the aim of optimizing the user experience, are a case in point. This experimentation focused more on improving the technical qualities of the app than on taking account of its socio-political dimensions (the risk of infringing upon individual freedoms, the loss of anonymity, etc.). As a result, when the matter was reported in the media, it was presented through an amalgamation of facial recognition use cases and anxiety-provoking arguments (“surveillance”, “freedom-killing technology”, “China”, “social credit”, etc.), without, however, presenting more common uses of facial recognition that carry the same risks as those being questioned.
The problems of acceptability encountered by Alicem are not circumstantial ones unique to a specific technological innovation; they must be understood as structural markers of how contemporary society functions. For, although the “unacceptability” of this emerging technology is a threat to its promoters and a hindrance to its adoption and diffusion, it is above all indicative of a lack of confidence in the State that outweighs the intrinsic qualities of the innovation itself.
This text presents the opinions stated by the researchers Laura Draetta and Valérie Fernandez during their presentation to the Information Mission on Digital Identity of the National Assembly in December 2019. It is based on the case of the biometric authentication app Alicem, which sparked controversy in the public media sphere from its first experiments.
Laura Draetta is a Lecturer in Sociology, joint holder of the Responsibility for Digital Identity Chair, and Research Fellow at the Center for Science, Technology, Medicine & Society, University of California, Berkeley, Télécom Paris – Institut Mines-Télécom. Valérie Fernandez is a Professor of Economics and holder of the Responsibility for Digital Identity Chair, Télécom Paris – Institut Mines-Télécom.