About TAKOS

The proposed approach is depicted in the figure below. For an overview of these techniques, along with a complete description of the synthesis, we refer the reader to the associated research report.

[Figure: (a) offline verification of opacity; (b) runtime verification of opacity; (c) runtime enforcement of opacity.]

The general context is as follows. A system $\mathcal{G}$ produces sequences of events over an alphabet $\Sigma$. Some of these events, those in a sub-alphabet $\Sigma_o \subseteq \Sigma$, are observable by an external attacker; a projection map defines this observability by erasing unobservable events. Among the possible executions of the system, some are said to be secret. We are interested in various notions of opacity for the secret on the considered system: from the sequence of observable events, the attacker should not be able to deduce whether the current execution of the system is secret, according to the considered notion of opacity. In this case, the secret $S$ is said to be opaque w.r.t. the considered system and its projection map.
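To fix notation, the following sketch (illustrative Python, not TAKOS code; the system, alphabets, and secret predicate are made-up examples) shows a finite transition system $\mathcal{G}$, the projection onto the observable alphabet $\Sigma_o$, and a secret predicate on runs.

```python
# Illustrative example, not TAKOS code: a finite transition system G over
# Sigma = {a, b, s}, where only a and b are observable and the action s
# marks secret executions.

SIGMA_O = {"a", "b"}                   # observable alphabet (seen by the attacker)

# Transition function of G: state -> {event: successor state}
TRANSITIONS = {
    0: {"a": 1, "s": 2},               # 's' is an unobservable, secret action
    1: {"b": 3},
    2: {"b": 3},
    3: {},
}
INITIAL = 0

def runs(max_len=6):
    """Enumerate every run (event sequence) of G up to a bounded length."""
    stack = [(INITIAL, ())]
    while stack:
        state, trace = stack.pop()
        yield trace
        if len(trace) < max_len:
            for event, nxt in TRANSITIONS[state].items():
                stack.append((nxt, trace + (event,)))

def project(trace):
    """Projection map: erase the events the attacker cannot observe."""
    return tuple(e for e in trace if e in SIGMA_O)

def is_secret(trace):
    """Secret predicate on runs: here, the runs that perform 's'."""
    return "s" in trace
```

With these definitions, the secret is opaque when every observation produced by a secret run is also produced by some non-secret run, so the attacker cannot tell the two apart.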

When analyzing opacity offline (i.e., model-checking the system; Figure (a)), we analyze the specification of the system and check whether the specified system is opaque. If it is not, we provide an explanation of the opacity leakage.
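Continuing the sketch above (still illustrative, using bounded enumeration rather than the exact model-checking procedure described in the report), the offline analysis amounts to comparing the observations of secret and non-secret runs; a revealing observation serves as the explanation of the leakage.

```python
def check_opacity(max_len=6):
    """Offline check: opaque iff every observation of a secret run is also
    the observation of some non-secret run.  On failure, return a revealing
    observation as an explanation of the opacity leakage."""
    secret_obs, cover_obs = set(), set()
    for trace in runs(max_len):
        (secret_obs if is_secret(trace) else cover_obs).add(project(trace))
    leaks = secret_obs - cover_obs
    if leaks:
        return False, min(leaks)       # a revealing observation
    return True, None

opaque, witness = check_opacity()
print(opaque, witness)                 # -> False ('b',): seeing 'b' alone betrays the secret
```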

When verifying opacity at runtime (Figure (b)), we introduce a runtime verifier that observes the same sequence of observable events as the attacker. The runtime verifier is in charge of producing verdicts on the preservation or violation of the considered notion of opacity. With such a mechanism, the system administrator may react and (manually) take appropriate measures.
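A runtime verifier for this setting can be sketched as follows (again illustrative: the verdict names and the precomputation are assumptions, not the TAKOS API). It tracks the runs of $\mathcal{G}$ compatible with the observation received so far and reports a violation as soon as all of them are secret.

```python
from collections import defaultdict

class OpacityVerifier:
    """Runtime verifier: fed the same observable events as the attacker,
    it emits a verdict after each event."""

    def __init__(self, max_len=6):
        # Precompute, for each observation, the runs of G that produce it.
        self.compatible = defaultdict(list)
        for trace in runs(max_len):
            self.compatible[project(trace)].append(trace)
        self.obs = ()

    def observe(self, event):
        self.obs += (event,)
        compat = self.compatible.get(self.obs, [])
        # Opacity is violated once every compatible run is secret: the
        # attacker can then deduce that the current execution is secret.
        if compat and all(is_secret(t) for t in compat):
            return "violation"
        return "preserved so far"

verifier = OpacityVerifier()
print(verifier.observe("b"))           # -> "violation": only the secret run s.b looks like this
```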

When enforcing opacity at runtime (Figure (c)), we introduce a runtime enforcer to which the sequence of observable events is directly fed. This mechanism modifies its input sequence and produces a new one such that the desired notion of opacity is preserved by the output sequence. Runtime enforcement may thus be seen as a way to automatically prevent opacity violations.
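Finally, a delay-based enforcer can be sketched in the same style (one possible enforcement strategy, not necessarily the one synthesized by TAKOS; see the research report for the actual constructions). It buffers incoming observable events and releases them only while the output observation remains explainable by a non-secret run.

```python
class OpacityEnforcer:
    """Runtime enforcer sketch: delays events so that the output sequence
    never reveals the secret."""

    def __init__(self, max_len=6):
        # Observations that are "safe": produced by at least one non-secret run.
        self.safe = {project(t) for t in runs(max_len) if not is_secret(t)}
        self.released = ()             # observation already output
        self.buffer = []               # events held back

    def feed(self, event):
        """Feed one observable event; return the events released as output."""
        self.buffer.append(event)
        out = []
        # Release buffered events while the extended output stays safe.
        while self.buffer and self.released + (self.buffer[0],) in self.safe:
            out.append(self.buffer.pop(0))
            self.released += (out[-1],)
        return out

enforcer = OpacityEnforcer()
print(enforcer.feed("b"))              # -> []: 'b' alone would reveal the secret, so it is held
```

In this toy system the lone 'b' is withheld for good, i.e., effectively suppressed; more elaborate strategies are discussed in the associated research report.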