Model Checking. In this section we present a mechanical verification of the 3ROM algorithm using the model checking approach, chosen for its ease of use, feasibility, and quick exploration of the problem space, in order to confirm the correctness of our formal proof of the algorithm. The Symbolic Model Verifier (SMV) [20] was used to model the algorithm. SMV’s description language and modeling capability allow a relatively direct translation from the pseudo-code. SMV’s semantics is synchronous composition: all assignments are executed in parallel and synchronously, so a single step of the resulting model corresponds to a step in each of the components.
Model Checking. It is often the case that a property of interest is expressible as a temporal logic formula but not as an invariant. Such properties are beyond the scope of the Event-B proof methodology proper, but not necessarily beyond the scope of Event-B. ProB [RD18] and the ProB plug-in [RD7] provide model checking of LTL and CTL formulas in which the syntax of atomic formulas is the same, or almost the same, as the syntax of predicate expressions in Event-B. In preliminary work for a pilot deployment in 2008, SSF used ProB to search for deadlocks in a “root machine” of an Event-B model of the BepiColombo SIXS/MIXS OBSW requirements. A deadlock was indeed found, and the machine was revised accordingly. While using ProB on more detailed machines, we found it difficult to compute consistent instances of the Event-B contexts used. In Autumn 2011, SSF used ProB to check LTL formulas concerning the “3 modes and 2 partners” case of the mode synchronisation protocol designed for a distributed AOCS. Except for the formulas, the protocol descriptions were written in Event-B using the Rodin platform. The ProB plug-in was used to produce machines for the stand-alone ProB, which performed the actual model checking. Although the plug-in itself has model checking facilities, SSF preferred the stand-alone tool in order to avoid unnecessary feature interaction and to make use of recent improvements that were available for the stand-alone tool but not for the plug-in. The Event-B description of the mode synchronisation protocol was revised several times as ProB reported counterexamples to expected properties. The errors found were modelling errors, i.e. mismatches between the verbal specification and the Event-B description. (The verbal specification itself was later revised due to an error found by inspection, without any tool.) Some attempts were made to use ProB in the “3 modes and 3 partners” case. However, ProB tended to run out of memory almost regardless of the intended model checking activity. Support for memory-efficient state space generation was in fact implemented in ProB soon after the developers had been informed of the problem. In model checking, it is often wise to consider several formalisms and tools. During the period December 2011 – January 2012, SSF used model checking outside Event-B, still working on conjectured invariants that had originally been written to be proven in the Rodin platform. Most of these experiments have been done us...
Model Checking. Model-checking takes place at two different levels of an INTO-CPS multi-model. First, the RT-Tester Model-Based Test Case Generator (RTT-MBT), which adds model-based testing functionality to the RT-Tester test system, can model-check individual FMUs against desired properties, which are specified in LTL. Second, RTT-MBT will be able to model-check the complete multi-model against desired properties. The model-checking interaction is illustrated in Figure 13. RTT-MBT uses XMI-exported test models to carry out both automated model-based testing and model-checking at the two levels, that is, for single components as well as for multi-models. Further details are given in Section 6.7.
Figure 13: Overview of model-checking and automated model-based testing.
5 Example Use Case for the INTO-CPS Technologies
The INTO-CPS technologies will enable new workflows for model-based design of CPSs from conception to realisation. Seven baseline technologies are combined to form a tool chain that supports this. In order to give an overall picture of how these technologies come together, we give an illustrative example of their use. We consider a putative development of a CPS, from scratch, that utilises all features of the INTO-CPS technologies. We simplify the workflow by omitting iteration or feedback between levels. This is only one way in which the technologies might be used. For example, not every development will begin with an entirely blank slate, and therefore not all steps are necessary. The choice of approach will depend on the experience of the team, their existing practices and the needs of their customers. Deliverable D3.1a [FGPP15a] considers in more detail how the INTO-CPS technologies might be used and what workflows they enable. In the following description, terms in bold are baseline tools or INTO-CPS tools. Terms in italics correspond to activities in the ontology that produce artefacts of traceability and provenance (Deliverable D3.1b [FGPP15b]). At each step in the development, engineers can store and retrieve artefacts using the INTO-CPS Application. Design rationale and Design Notes are attached to artefacts, as well as information about which engineer created or modified them. Data can be retrieved to reconstruct the design rationale at any time, enabling traceability throughout the entire design process. Using Modelio, engineers can construct an architectural model of a system expr...
Model Checking. Model checking in INTO-CPS shall be applied to discrete event (DE) test models, but shall also support continuous time (CT) models via suitable abstraction mechanisms; see Deliverable D5.1c [BF15]. The RT-Tester Model-Based Test Case Generator (RTT-MBT) is an upgrade for RT-Tester that adds model-based testing to the RT-Tester test system. One particular feature of RTT-MBT is bounded model checking (BMC) of LTL specifications for DE systems, where the entire model to be checked consists of a SUT on the one hand and an environment model on the other. Arbitrary LTL specifications can be verified, where the atomic propositions typically range over outputs of the SUT, model variables that are internal to the SUT, and timers. For example, let us assume that a test model shall express the following behaviour: if some input voltage is below 10 units, the SUT shall set an output error flag within 10 time units. This is a typical property to be checked using model checking techniques in RTT-MBT. The core feature of model checking in INTO-CPS is the integration and configuration of different test models into a single SUT configuration, to which established model checking techniques can then be applied. This approach allows the combined behaviour of several components to be checked, and takes into account the interaction between these components. The configuration has to be performed via the main INTO-CPS Application, which then invokes RTT-MBT to perform the actual model checking. Details about modelling the interfaces and connections between the different system components are given in [BLL+15].
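The bounded-response requirement above can be written as an LTL formula with a time-bounded “finally” operator. The rendering below is a generic formalisation added for illustration, not RTT-MBT’s concrete input syntax; voltage and error stand for the corresponding SUT input and output flag.

\[
\mathbf{G}\bigl(\,\mathit{voltage} < 10 \;\rightarrow\; \mathbf{F}_{\leq 10}\ \mathit{error}\,\bigr)
\]

Here G (“globally”) requires the implication to hold in every state, and F_{≤10} (“finally within 10 time units”) requires error to become true within the stated deadline; in a logic without bounded operators the same property is typically encoded with an auxiliary timer variable.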
Model Checking. Certain Event-B properties are currently almost beyond the scope of the Event-B proof methodology. One such property is deadlock-freedom. Deadlocks are reachable states in which no event is enabled. An event is enabled if and only if all the guards and “guard-like actions” of the event are simultaneously satisfiable. From this definition it follows that deadlock-freedom is expressible as an invariant. However, merely expressing such an invariant is somewhat awkward, and few people can be expected to use the Rodin Platform successfully to produce all the proofs needed to prove such an invariant as a whole for a given nontrivial Event-B machine. Deadlock-freedom is a somewhat naïve property in the sense that it can be artificially ensured by including a totally unconstrained event in the Event-B machine concerned. LTL can be used for expressing more “advanced” properties. Any invariant in any Event-B machine is easily expressible in LTL. So deadlock-freedom is expressible in LTL, too, though LTL has no built-in way of expressing deadlock-freedom compactly. For a given Event-B machine, the animator and model checker ProB (as a stand-alone tool or as a plug-in of the Rodin Platform) can be used for checking deadlock-freedom (without any formulation effort), an invariant (without any additional formulation effort), or a property expressed in LTL. Whenever a check completes with a negative result, a counterexample in the form of a sequence of event occurrences is obtained and can be inspected. Positive results reported by such checks are almost inevitably incomplete, because a typical check covers only some part of the reachability graph of the Event-B machine, and because even when ProB’s advanced heuristics, such as the symmetry method, are used, the reachability graph can be far too large to be checked completely using ProB “within reasonable time” and without exhausting the available memory. For convenience, ProB has a configurable timeout parameter such that when the limit expressed by the parameter is reached, ProB terminates the check with a timeout message. The animation and model checking facilities in ProB are based on enumeration of reachable states and on simple evaluation of guards and actions. Therefore, for each constant or abstract set in the Event-B contexts directly or indirectly seen by the Event-B machine of interest, ProB replaces the constant or set with an “enumerative expression” in a way comp...
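To make the explicit-state view described above concrete, the sketch below enumerates the reachable states of a toy transition system and reports any reachable state in which no event is enabled. It is a minimal stand-alone Python illustration under simplifying assumptions, not ProB itself; the event name and the tiny example machine at the bottom are invented for the purpose of the example.

```python
from collections import deque

def find_deadlock(init_states, events):
    """Breadth-first search of the reachable states; returns a path to a
    deadlock (a reachable state where no event is enabled), or None."""
    frontier = deque((s, [s]) for s in init_states)
    visited = set(init_states)
    while frontier:
        state, path = frontier.popleft()
        # An event is enabled iff its guard holds in the current state.
        enabled = [(name, guard, action) for name, guard, action in events
                   if guard(state)]
        if not enabled:
            return path                      # deadlock: no event enabled
        for _name, _guard, action in enabled:
            nxt = action(state)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # no reachable deadlock found

# Toy "machine": a counter that may only increase up to 3, then stops.
events = [
    ("inc", lambda n: n < 3, lambda n: n + 1),
]
print(find_deadlock({0}, events))            # -> [0, 1, 2, 3]; state 3 deadlocks
```

As in the text above, the result of a failed check is a counterexample in the form of a sequence of steps (here, states; in ProB, event occurrences) leading to the deadlocked state.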
Model Checking. Model checking begins with the same model description import and model check modelling structure (Figure 13) as we saw previously for simulation models. Figure 14 shows the structure around both the model checking and the model check test creation activities, which respectively output model check results and a model check test case. The model check test result is the first artefact we have seen that provides evidence that models meet, or do not meet, the specification; thus it may connect via the OSLC_am:verifies or into:doesNotVerify relations. The relationships between the files involved in model checking are shown in Figure 15.
Figure 14: BDD showing the activities of model checking.
Figure 15: BDD showing the file elements around model checking.
Figure 16: BDD showing simulation FMU generation.
Model Checking. In this section we present a mechanical verification of the 3ROM algorithm using the model checking approach, chosen for its ease of use, feasibility, and quick exploration of the problem space, in order to confirm the correctness of our formal proof of the algorithm. The Symbolic Model Verifier (SMV) [20] was used to model the algorithm on a PC with 4 GB of memory running Linux. SMV’s description language and modeling capability allow a relatively direct translation from the pseudo-code. SMV’s semantics is synchronous composition: all assignments are executed in parallel and synchronously, so a single step of the resulting model corresponds to a step in each of the components. A number of cases for each fault model were model checked. In particular, for the node-fault model, scenarios with F = 0..3 and K = 4..10 were model checked with the weaker assumptions, that is, ∑cj ≥ F+1 and ∑Xi ≥ F+2. Model checking of the link-fault model requires a specific number of link faults to be considered. Two cases, with F = 2, K = 7 and with F = 3, K = 10, were model checked. Model checking of larger graphs and of larger numbers of node and link faults can readily be accommodated. Due to space limitations we do not discuss the SMV models in detail. The models can be found online at xxxx://xxxxxxx.xxxx.xxxx.xxx/people/mrm/publications.htm.
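To make the synchronous-composition semantics concrete, the sketch below steps a set of components in lockstep: on every global step each component computes its next local state from the current global state, and all updates take effect simultaneously. This is a minimal Python illustration of the semantics described above, not the SMV model of 3ROM; the two-node swap example at the bottom is invented.

```python
def sync_step(state, components):
    """One synchronous step: every component reads the *current* global state,
    and all next-state assignments take effect at the same time."""
    return {name: update(state) for name, update in components.items()}

def run(state, components, steps):
    trace = [state]
    for _ in range(steps):
        state = sync_step(state, components)
        trace.append(state)
    return trace

# Invented example: two nodes that exchange values each round, in parallel.
components = {
    "a": lambda s: s["b"],   # next(a) := b
    "b": lambda s: s["a"],   # next(b) := a
}
print(run({"a": 0, "b": 1}, components, 3))
# -> [{'a': 0, 'b': 1}, {'a': 1, 'b': 0}, {'a': 0, 'b': 1}, {'a': 1, 'b': 0}]
```

In SMV’s ASSIGN style this lockstep update roughly corresponds to writing next(a) := b; and next(b) := a;, both evaluated against the same current state.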
Model Checking. • Modeling all of the possible states and behaviors of the system using a suitable formalism, typically some sort of finite transition system such as finite automata over finite …
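One standard choice for such a formalism is a Kripke structure, i.e. a finite transition system whose states are labelled with atomic propositions; the textbook definition below is added only to illustrate what modeling “all possible states and behaviors” amounts to.

\[
M = (S,\ S_0,\ R,\ L), \qquad S_0 \subseteq S, \quad R \subseteq S \times S, \quad L : S \rightarrow 2^{AP}
\]

where S is a finite set of states, S_0 the set of initial states, R a (total) transition relation, and L labels each state with the set of atomic propositions from AP that hold in it.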