Researchers, the cyber-ramparts of critical infrastructures


Cyber protection for nuclear power stations or banks cannot be considered in the same way as for other structures. Frédéric Cuppens, a researcher at IMT Atlantique, leads the Chair on Cybersecurity of Critical Infrastructures. He explains his work to protect operators whose proper functioning is vital to the country. The Chair was officially inaugurated on 11 January 2017, strengthening state-of-the-art research on cyberdefense.


The IMT chair you lead addresses the cybersecurity of critical infrastructures. What types of infrastructure are considered critical?

Frédéric Cuppens: Infrastructures that allow the country to operate correctly. If they are attacked, their failure could put the population at risk or seriously disrupt essential services for citizens. The operators of these infrastructures work in a wide variety of domains, and this diversity is reflected in our Chair's industrial partners: stakeholders in energy generation and distribution (EDF), telecommunications (Orange and Nokia) and defense (Airbus Defence and Space). There are also sectors that are perhaps less obvious at first but just as important, such as banking and logistics, for which we are working with Société Générale, BNP Paribas and La Poste[1].

Also read on I’MTech: The Cybersecurity of Critical Infrastructures Chair welcomes new partner Société Générale


The Chair on Cybersecurity of Critical Infrastructures is relatively recent, but these operators did not wait for its creation to protect their IT systems. Why are they turning to researchers now?

FC: The difference now is that more and more programmable logic controllers (PLCs) and sensors in these infrastructures are connected to the internet. This increases both the vulnerability of IT systems and the severity of the potential consequences. In the past, an attack on these systems could crash internal services or slow production down slightly; now there is a danger of major failures that could put human lives at risk.


How could that happen in concrete terms?

FC: Because these controllers are connected to the internet, an attacker could quite conceivably take control of a robot holding an item and instruct it to drop it on someone. The risk is even greater if the device handles explosive chemical substances. Another example is an attack on a supervisory control system: the intruder could see everything taking place and send false information. Combining the two is where the danger is greatest: an attacker could take control of whatever they want and, by blocking the supervision, make it impossible for staff to react.


How do you explain the vulnerability of these systems?

FC: The systems in these traditional infrastructures were computerized a long time ago, and at that time they were isolated from an IT point of view. Since they were not designed to be connected to the internet, their security now has to be brought up to date. Even today, cameras or controllers can carry vulnerabilities, because their primary function is to film or to handle objects, not to resist every possible kind of attack. This is why our role is first and foremost to detect and understand the vulnerabilities of these tools according to their use cases.


Are use cases important even for the security of a single system?

FC: Of course, and measuring the impact of an attack according to an IT system's environment is at the core of our second research focus. We develop tailored metrics to identify the direct or potential consequences of an attack, and these metrics will obviously take different values depending on whether the attacked controller is on a military vessel or in a nuclear power station.
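
To make that idea concrete, here is a minimal sketch, assuming purely illustrative contexts, weights and function names (none of these come from the Chair's actual metrics), of how the same compromise of a controller could be scored very differently depending on where it is deployed:

```python
from dataclasses import dataclass

# Illustrative context weights: the same technical compromise is scored
# differently depending on the environment in which the controller operates.
# These categories and numbers are assumptions for the sketch only.
CONTEXT_SEVERITY = {
    "office_network": 1.0,
    "logistics_hub": 2.5,
    "military_vessel": 6.0,
    "nuclear_power_station": 10.0,
}

@dataclass
class Intrusion:
    asset: str                # e.g. "PLC-12" (hypothetical identifier)
    context: str              # one of the CONTEXT_SEVERITY keys
    control_taken: bool       # attacker can issue commands to the device
    monitoring_blinded: bool  # operators no longer see the true state

def impact_score(event: Intrusion) -> float:
    """Toy impact metric: base technical severity amplified by context."""
    base = 1.0
    if event.control_taken:
        base += 3.0
    if event.monitoring_blinded:
        base += 2.0
    return base * CONTEXT_SEVERITY.get(event.context, 1.0)

# The same compromise yields very different scores in different settings.
print(impact_score(Intrusion("PLC-12", "logistics_hub", True, False)))         # 10.0
print(impact_score(Intrusion("PLC-12", "nuclear_power_station", True, True)))  # 60.0
```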


In that case, can your work with each partner be reused to protect other, similar infrastructures, or is it specific to each case?

FC: There are only a limited number of manufacturers of controllers for critical applications: perhaps four or five major suppliers worldwide. The use case does of course affect the impact of an intrusion, but the vulnerabilities remain the same, so part of what we do can certainly be reused. On the other hand, measuring impact has to be specific. Research funding bodies take the same line: projects under the French Investments for the Future program and the European H2020 program strongly encourage us to work on specific use cases. That said, we still sometimes address topics that are not tied to a particular use case and are more general.


Among the new topics the Chair plans to address is a project called Cybercop 3D, for visualizing attacks in 3D. It seems a rather unusual concept at first sight.

FC: The idea is to improve monitoring tools, which currently look much like a spreadsheet with colored lines to help visualize data on the state of the IT system. With 3D technology, computer engineers could view in real time a model of the places where intrusions are taking place, and see the correlations between events more clearly. It would also give a better understanding of attack scenarios, which are currently presented as 2D tree views and quickly become unreadable; 3D could make them far more legible.
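
As a rough illustration of the general idea (this is not the Cybercop 3D tool; the event log and axes below are invented for the sketch), plotting alerts in three dimensions, here by time, host and severity, already makes chains of correlated events easier to follow than rows in a spreadsheet:

```python
import matplotlib.pyplot as plt

# Toy alert log: (time in minutes, host index, severity on a 0-10 scale).
# Purely illustrative data; a real tool would stream these from sensors.
events = [
    (1, 0, 2), (3, 0, 3), (4, 1, 5),
    (6, 1, 7), (7, 2, 8), (9, 2, 9),
]
times = [e[0] for e in events]
hosts = [e[1] for e in events]
severities = [e[2] for e in events]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # 3D axes instead of a flat table
ax.scatter(times, hosts, severities, c=severities, cmap="Reds", s=80)
ax.plot(times, hosts, severities, linestyle="--", alpha=0.5)  # link events over time
ax.set_xlabel("time (min)")
ax.set_ylabel("host")
ax.set_zlabel("severity")
ax.set_title("Toy 3D view of correlated alerts")
plt.show()
```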


The issue at hand is therefore to improve the human experience of responding to attacks. How important is this human factor?

FC: It is vital. As it happens, we are planning to launch a research topic on this subject by recruiting a researcher specializing in qualitative psychology. It will be a cross-cutting topic, but above all it will complement our third focus, which develops decision-support tools to advise the people in charge of deploying countermeasures in the event of an attack. The aim is to check whether, from a psychological point of view, the solution that the decision-support tool proposes to a human will be interpreted correctly. This matters because, in this environment, staff are used to managing accidental failures and do not necessarily assume that a cyber attack is under way. We therefore have to make sure that when the decision-support tool proposes something, it is understood correctly. This is all the more important because the operators of critical systems do not follow an automation-only approach: it is still humans who control what happens.
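
As a highly simplified sketch of what such human-in-the-loop decision support could look like (the situations, rules and countermeasures below are assumptions made for illustration, not the Chair's tool), the system only proposes options and leaves the final decision to the operator:

```python
# Minimal sketch of a human-in-the-loop countermeasure recommender.
# Situation names and actions are illustrative assumptions only.
RULES = {
    "plc_sends_out_of_range_commands": [
        "switch the controller to local/manual mode",
        "isolate the controller's network segment",
    ],
    "monitoring_reports_inconsistent_state": [
        "cross-check sensor readings against a physical inspection",
        "fail over to the backup monitoring channel",
    ],
}

def recommend(situation: str) -> list[str]:
    """Return candidate countermeasures; a human operator makes the final call."""
    return RULES.get(situation, ["escalate to the incident response team"])

if __name__ == "__main__":
    for option in recommend("plc_sends_out_of_range_commands"):
        # The tool only proposes; the operator decides whether and how to act.
        print("Proposed countermeasure:", option)
```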


[1] In addition to the operators of critical infrastructures mentioned above, the Chair's partners include Amossys, a company specializing in cybersecurity expertise, as well as institutional partners: Région Bretagne, the European Regional Development Fund (FEDER), Fondation Télécom and IMT's schools IMT Atlantique, Télécom ParisTech and Télécom SudParis.

