Illustrating scenario

To illustrate the IRM method, extended with the concepts described above, we elaborate on a scenario from the Science Cloud Case Study, itself an extension of the base scenario described in [BGeH+12]. In this scenario, several heterogeneous network nodes, forming an open-ended cloud platform, run a third-party service, such as a user’s program that performs computation-intensive scientific calculations. This way, a client running, e.g., on a smartphone can take advantage of nearby available computational power (laptops, tablets, servers, etc.) by offloading parts of the application to other nodes that handle the back-end application logic. These nodes can also be part of a traditional cloud infrastructure, if available, and thus leased on demand while the system is running and its demand for resources grows. The general assumptions are that (a) the application’s back-end can be partitioned across several nodes (allowing “scaling out”) and (b) a mechanism exists for effectively migrating application parts across different nodes.

Given the above scenario, the goal of the system under development is twofold: (i) guarantee an upper limit on the rendering delay observed by the user, and (ii) distribute the off-loaded computation according to the capacity and current load of every contributing node.

Figure 3 shows a possible IRM graph for the above scenario. The design starts with the identified top-level invariant stating that “Load is balanced while expected QoS is kept”. The “expected QoS” has been quantified by the SPL formula, which specifies an upper bound on the application’s response time (500 ms). This invariant can be decomposed into two possible sub-invariants, depending on the situation the system resides in, specifically on whether extra computational power from a cloud data center is needed. In the first case (the “Cloud not needed” situation), invariant (2) is decomposed into one assumption
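To make goal (ii) concrete, the following is a minimal sketch, not taken from the paper, of one plausible distribution policy: work units are split across nodes in proportion to each node’s spare capacity (capacity minus current load). The function name `distribute` and the node data are illustrative assumptions; the actual IRM-based design expresses this as invariants rather than as a fixed algorithm.

```python
# Hypothetical sketch: split off-loaded work across nodes in proportion
# to each node's spare capacity (capacity - current load). Node names,
# units, and the policy itself are illustrative assumptions.

def distribute(work_units, nodes):
    """Return {node: units}, proportional to spare capacity.

    `nodes` maps a node name to (capacity, current_load), both in the
    same illustrative unit (e.g. work units per second).
    """
    spare = {n: max(cap - load, 0) for n, (cap, load) in nodes.items()}
    total = sum(spare.values())
    if total == 0:
        # No spare capacity anywhere: the scenario's answer would be to
        # lease extra cloud nodes on demand.
        raise RuntimeError("no spare capacity; lease extra cloud nodes")
    # Integer-proportional split; the rounding remainder goes to the
    # node with the most spare capacity.
    shares = {n: work_units * s // total for n, s in spare.items()}
    shares[max(spare, key=spare.get)] += work_units - sum(shares.values())
    return shares

if __name__ == "__main__":
    nodes = {"laptop": (10, 8), "tablet": (5, 4), "server": (40, 10)}
    print(distribute(100, nodes))  # server takes most of the load
```

Under this policy a nearly saturated laptop receives only a small share, while a lightly loaded server absorbs the bulk of the off-loaded computation, matching the intuition behind the “Load is balanced” part of the top-level invariant.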