Scenario Description. In Xxxxxxxx’s work [22] a decentralized payment system is envisioned. The essence is to have a consortium of unknown participants achieve consensus [26]. To achieve this, Bitcoin uses a public permissionless blockchain, allowing anyone to participate. Each participant owns one or more Bitcoin accounts. An account is identified by a public cryptographic key, and managed by the corresponding private key. Each account may hold a number of tokens, which represent a value, and can be seen as ‘coins’. Coin ownership can be transferred by transactions. A transaction, in principle, contains the account of the sender, the account of the receiver, the number of coins transferred, and the signature of the sender. Transactions created by participants are collected by other participants called miners. These miners independently solve a moderately-hard cryptographic puzzle. The miner that solves the puzzle first obtains the privilege to propose a new state of accounts, based on the transactions collected. A miner proposes a new state by presenting a sequence of transactions called a block. Note that only miners may write to the blockchain. Each block holds the hash of its previous block, linking all blocks into a blockchain.
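The structures described above can be made concrete with a small sketch: transactions bundled into blocks, each block holding the hash of its predecessor, and a moderately-hard puzzle solved by miners. This is a toy illustration, not Bitcoin's actual formats; the field names and the difficulty value are assumptions.

```python
# Toy sketch of a hash-linked chain with a proof-of-work style puzzle.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Transaction:
    sender: str      # public key of the sending account
    receiver: str    # public key of the receiving account
    amount: int      # number of coins transferred
    signature: str   # produced with the sender's private key

@dataclass
class Block:
    prev_hash: str   # hash of the previous block -> links blocks into a chain
    transactions: list
    nonce: int = 0

    def hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def mine(block: Block, difficulty: int = 3) -> Block:
    """Moderately-hard puzzle: find a nonce so the block hash starts with zeros."""
    while not block.hash().startswith("0" * difficulty):
        block.nonce += 1
    return block

# Extend a toy chain by one mined block containing a single transaction.
genesis = mine(Block(prev_hash="0" * 64, transactions=[]))
tx = Transaction("pubkey_alice", "pubkey_bob", 3, "sig_alice")
block1 = mine(Block(prev_hash=genesis.hash(), transactions=[asdict(tx)]))
print(block1.prev_hash == genesis.hash())  # True: the chain link
```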
Scenario Description. In this third scenario a public permissioned blockchain called Hyperledger Fabric by IBM [5] is used. This blockchain tracks certificates in a supply chain of table grapes. In this scenario [11], a farmer in South Africa produces organic grapes, and presents such a claim to a certification authority. This authority issues a certificate to the farm, allowing the farm to certify its grapes. Grapes are stored in boxes, which are identified by a unique barcode. To ensure a correct certification process, certification authorities are accredited by an accreditation authority. The certification authority stores the certificate it receives from an accreditation authority on the blockchain. Additionally, details of the certification authority are stored on the blockchain, so that anyone may see which party certified a farm. This entire process is audited. An auditor may revoke the certificate issued by the certification authority, for example, after the discovery of unauthorized pesticides [31] being used in the production of the fruits. An auditor also may revoke accreditations made by the accreditation authority. Here, both revocation types are recorded on the blockchain. The grape boxes are shipped to resellers in Europe, after which the grapes are sold to supermarkets, and eventually to customers. Since it is unknown who may purchase the grapes, public verifiability is required. This allows all parties involved to query the blockchain for the validity of the organic certificate. Also, change of ownership is recorded in the blockchain, and provenance of the labeled boxes can be determined. From this description we observe that there are multiple, known writers. However, these writers are not trusted, as can be observed from the cascading audit trail from farmer to auditor.
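The accreditation, certification, revocation and ownership-transfer flow above can be sketched as follows. The code uses a plain Python dictionary in place of Hyperledger Fabric's key-value world state; the function and key names (issue_certificate, revoke, transfer_box, ...) are assumptions for illustration, not Fabric chaincode APIs.

```python
ledger = {}  # key -> record (what would live in the world state)

def accredit(accreditation_body: str, cert_authority: str) -> None:
    ledger[f"accreditation:{cert_authority}"] = {
        "issuer": accreditation_body, "revoked": False}

def issue_certificate(cert_authority: str, farm: str) -> None:
    # Only an accredited, non-revoked certification authority may certify a farm.
    acc = ledger.get(f"accreditation:{cert_authority}")
    assert acc and not acc["revoked"], "certification authority not accredited"
    ledger[f"certificate:{farm}"] = {"issuer": cert_authority, "revoked": False}

def revoke(key: str, auditor: str) -> None:
    # Auditors may revoke both certificates and accreditations; revocations stay on-chain.
    ledger[key]["revoked"] = True
    ledger[key]["revoked_by"] = auditor

def transfer_box(barcode: str, new_owner: str) -> None:
    history = ledger.setdefault(f"box:{barcode}", {"owners": []})
    history["owners"].append(new_owner)   # provenance: full chain of custody

def is_certified(farm: str) -> bool:
    # Public verifiability: anyone can check the validity of the organic certificate.
    cert = ledger.get(f"certificate:{farm}")
    if not cert or cert["revoked"]:
        return False
    acc = ledger.get(f"accreditation:{cert['issuer']}")
    return bool(acc) and not acc["revoked"]
```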
Scenario Description. The robotics scenario consists of a structured environment of width W and depth D, initially unknown to the robots. The structure of the environment mimics that of a building floor. A team of R robots called rescuers (Fig. 1(a)) is deployed in a special area called the deployment area within the environment. The size of the deployment area is always assumed sufficient to house all the robots. We imagine that some kind of disaster has happened, and the environment is occasionally obstructed by debris (Fig. 1(b)) that the robots can move. In addition, a portion of the environment is dangerous for robot navigation due to the presence of radiation (Fig. 1(c)). We assume that prolonged exposure to radiation damages the robots. Short-term exposure increases a robot’s sensory noise. Long-term damage eventually disables the robot completely. To avoid damage, the robots can use debris to build a protective wall, thus reaching new areas of the environment. Damage is simulated through a function dr(t) that increases with exposure time t from 1 to 10. The value of dr(t) is used as a scale factor for the natural sensory noise of a robot, until it reaches the value 10, which corresponds to a disabled robot. We imagine that a number V of victims (Fig. 1(d)) are trapped in the environment and must be rescued by the robots. Each victim is suffering a different injury characterized by a gravity Gv. The health hv(t; Gv) of each victim, initially in the range (0,1], deteriorates over time. When hv = 0, the victim is dead. The robots must calculate a suitable rescuing behavior that maximizes the number S of victims rescued. This can be seen as a problem of distributed consensus. A victim is considered rescued when it is deposited in the deployment area alive. In addition, each victim has a different mass Mv. The higher the mass, the larger the number of robots required to carry it to the deployment area. To perform its activities, a robot r must take into account that it has limited energy er. As the robot works, its energy level decreases according to a function er(t). If the energy reaches 0, the robot can no longer operate. A reference of all the symbols and their meaning is reported in Table 1.
In Sections 2.4.1 and 2.4.2, we sketch two possible variants that focus on different behaviors.
Figure 1: (a) A rescuer robot. (b) Debris is simulated with grippable cylinders. (c) Radiation is simulated with lights (the yellow blobs in the picture). (d) Victims are simulated with robots.
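The scenario above fixes the ranges of dr(t), hv(t; Gv) and er(t) but not their concrete shapes. The sketch below assumes simple linear forms purely to illustrate how the quantities interact: damage scales sensory noise, health decays with the injury gravity Gv, energy decreases with work, and the mass Mv determines how many robots are needed.

```python
# Illustrative dynamics only; the linear growth/decay rates are assumptions.
import math
import random

D_MAX = 10.0  # damage value at which a robot is disabled

def damage(exposure_time: float, rate: float = 0.1) -> float:
    """d_r(t): grows with radiation exposure time from 1 up to 10 (assumed linear)."""
    return min(1.0 + rate * exposure_time, D_MAX)

def noisy_reading(true_value: float, base_noise: float, exposure_time: float) -> float:
    """d_r(t) is used as a scale factor for the robot's natural sensory noise."""
    return true_value + random.gauss(0.0, base_noise * damage(exposure_time))

def victim_health(t: float, gravity: float, h0: float = 1.0) -> float:
    """h_v(t; G_v): starts in (0, 1] and deteriorates over time; 0 means the victim is dead."""
    return max(h0 - gravity * t, 0.0)

def robot_energy(t: float, e0: float, work_rate: float) -> float:
    """e_r(t): remaining energy after working for time t; at 0 the robot stops."""
    return max(e0 - work_rate * t, 0.0)

def robots_needed(mass: float, capacity_per_robot: float = 1.0) -> int:
    """Heavier victims (larger M_v) require more robots to carry them to the deployment area."""
    return max(1, math.ceil(mass / capacity_per_robot))
```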
Scenario Description. The idea behind the scenario we discuss here is that of an autonomic cloud computing platform; or, in other words, a distributed software system which is able to execute applications in the presence of certain difficulties such as leaving and joining nodes, fluctuating load, and different requirements of applications to be satisfied. The cloud is based on voluntary computing and uses peer-to-peer technology to provide a platform-as-a-service. We call this cloud the Science Cloud Platform (SCP) since the cloud is intended to run in an academic environment (although this is not crucial for the approach). The interaction of these three topics is discussed in the next section. An illustrative picture of how such a cloud may be composed is shown in Figure 11. In our cloud scenario, we assume the following properties of nodes: nodes have vastly different hardware, which includes CPU speed, available memory, and additional hardware like specialized graphics processing; also, a node may have different security levels. With regard to the applications, we assume that an application has requirements on hardware, i.e. where it can and wants to be run (CPU speed, available memory, other hardware), and that an application is not a batch task: rather, it has a user interface which is directly used by clients in a request-based fashion. The main scenario of the science cloud is based on what the cloud is supposed to do, i.e. run applications, and continue running them in the case of changing nodes and load. The document [ASC12] lists three smaller scenarios, which we combine here into a general scenario that describes how the cloud manages adaptation. On top of this basic scenario, other scenarios may be imagined which improve specific aspects, such as how to distribute load based on particular kinds of data or how to improve response times. The basic cloud scenario focuses on application resilience, load distribution and energy saving. In this scenario, we imagine apps being deployed in the cloud which need to be started on an appropriate node based on their SLA (requirements). The requirements may include things like the CPU speed of the node to be run on, memory requirements, or similar. Once the app is started, we can imagine that problems occur, such as a node no longer being able to execute an app due to high load (in which case it must move the app somewhere else) or due to a complete node failure (in which case another node must realize this and take...
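A rough sketch of this basic adaptation loop is given below: place an app on a node that satisfies its SLA, and re-place it when the node fails or becomes overloaded. The field names, the 0.8 load threshold and the least-loaded heuristic are assumptions for illustration, not the SCP implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_ghz: float
    memory_gb: float
    hardware: set = field(default_factory=set)  # e.g. {"gpu"}
    load: float = 0.0                            # 0.0 .. 1.0
    alive: bool = True

@dataclass
class AppSLA:
    min_cpu_ghz: float
    min_memory_gb: float
    required_hardware: set = field(default_factory=set)

def satisfies(node: Node, sla: AppSLA, max_load: float = 0.8) -> bool:
    return (node.alive
            and node.load < max_load
            and node.cpu_ghz >= sla.min_cpu_ghz
            and node.memory_gb >= sla.min_memory_gb
            and sla.required_hardware <= node.hardware)

def place(app: str, sla: AppSLA, nodes: list) -> Node:
    candidates = [n for n in nodes if satisfies(n, sla)]
    if not candidates:
        raise RuntimeError(f"no node currently satisfies the SLA of {app}")
    # Simple heuristic: prefer the least-loaded suitable node.
    return min(candidates, key=lambda n: n.load)

def adapt(app: str, sla: AppSLA, current: Node, nodes: list) -> Node:
    """Called when the current node fails or its load grows too high: move the app."""
    if satisfies(current, sla):
        return current
    return place(app, sla, [n for n in nodes if n is not current])
```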
Scenario Description. In this report we show architectural aspects of the e-mobility case study and extend the S0 scenario with adaptation mechanisms for partially competitive and partially cooperative mobility. We concretize the scenario and develop a set of components and ensembles that form the architecture of the e-mobility demonstrator. This section presents the concretization and the architectural high-level view. Section
Scenario Description. The safety of large civil engineering structures like dams requires a comprehensive set of efforts, which must consider the structural safety, the structural monitoring, the operational safety and maintenance, and the emergency planning [1]. The consequences of failure of one of these structures may be catastrophic in many areas, such as: loss of life (minimizing human casualties is the top priority of emergency planning), environmental damage, property damage (e.g., in the dam flood plain), damage to other infrastructures, energy power loss, and socio-economic impact, among others. The risks associated with these scenarios can be mitigated by a number of structural and non-structural preventive measures, essentially aimed at detecting in advance any signs of abnormal behaviour, allowing the execution of corrective actions in time. The structural measures are mainly related to the physical safety of the structures, while the non-structural measures can comprise a broad set of concerns, such as operation guidelines, emergency action plans, alarm systems, insurance coverage, etc. In order to improve the structural safety of large civil engineering structures, a substantial technical effort has been made to implement or improve automatic data acquisition systems able to perform real-time monitoring and trigger automatic alarms. This paradigm creates an imminent deluge of data captured by automatic monitoring systems (sensors), along with data generated by large mathematical simulations (theoretical models). Besides the fact that these monitoring systems can save lives and protect goods, they can also prevent costly repairs and help to save money in maintenance.
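As an illustration of the alarm-triggering idea, the sketch below compares a sensor reading with the value expected from a theoretical or statistical model and raises an alarm when the deviation exceeds a tolerance. The sensor name, threshold and reporting channel are hypothetical and not part of the cited systems.

```python
def check_reading(sensor_id: str, measured: float, predicted: float, tolerance: float) -> bool:
    """Return True (and report) when the measured behaviour deviates abnormally."""
    deviation = abs(measured - predicted)
    if deviation > tolerance:
        trigger_alarm(sensor_id, measured, predicted, deviation)
        return True
    return False

def trigger_alarm(sensor_id: str, measured: float, predicted: float, deviation: float) -> None:
    # In a real system this would notify the dam safety authority / emergency plan.
    print(f"[ALARM] {sensor_id}: measured={measured:.3f}, "
          f"expected={predicted:.3f}, deviation={deviation:.3f}")

# Example: a pendulum displacement sensor drifting beyond the model's tolerance.
check_reading("plumbline_07", measured=12.4, predicted=10.1, tolerance=1.5)
```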
Figure 1: Schematic representation of the instruments’ location
Scenario Description. Particle physics studies the basic constituents of matter. Research in these matters normally requires a technical infrastructure comprising particle colliders and detectors where particles are accelerated and made to collide with each other or with fixed targets, or by studying particles of astronomical origin. All experiments require very complex and expensive infrastructure, and for these reasons most particle physics experiments take place in the context of international collaborations. The creation of a new experiment is usually a long process spanning many years, including the development of the particle detection apparatus, data acquisition systems, data processing and data analysis. Due to the specific characteristics of each experiment and the associated costs, it is very unlikely that data obtained by a past particle physics experiment can be fully reproduced again by a new one. Therefore the data and the results from particle physics experiments are unique and must be saved for the future. In the past it was wrongly thought that the potential of the data and results produced by an experiment was exhausted within the lifetime of the collaboration. However, there are many cases where old data can be useful:
• New theories can lead to new predictions of physics effects that were not probed in the data when the experiment was running.
• Sometimes there is a need to cross-check results from new experiments against results obtained by other previous experiments.
• The discovery of new phenomena in future experiments may demand that data from older experiments be analysed in search for things not yet known.
• New analysis techniques and Monte Carlo simulation models may create the opportunity to reprocess data and obtain higher-precision results.
• New ideas for studies may appear in ranges of energy only available in old experimental data.
• Combined analysis joining data from several experiments at once offers the possibility to reduce statistical and/or systematic uncertainties, or even to perform completely new analyses. This may require access to old data.
Moreover, in particle physics it is estimated that the scientific production that can be attained from the data beyond the end of the experimental programme, by continuing further analysis taking advantage of the whole data and better statistics, represents 5 to 10 percent of the total scientific outcome. However, the complexity of the particle physics experiments is also reflected in the data, in t...
Scenario Description. We illustrate the performance awareness here on a restricted version of the ASCENS cloud case study. In particular, the scenario we consider is that of a user travelling in a train or a bus, who wants to do productive work using a tablet computer or review travel plans and accommodation. The tablet notes the presence of an offload server machine located in the bus itself and, to save battery, it offloads the most computationally intensive tasks to that machine. Later, when the bus approaches its destination, the offload server notifies the tablet that its service will soon become unavailable, and tasks will start moving back to the tablet. When the bus enters the terminal, the tablet will discover another offload server, provided by the terminal authority, and move some of its tasks to the newly found machine. The challenge is in predicting which deployment scenario will deliver the expected performance – that is, when it is worth offloading parts of the application to a different computer. For our example, we assume that the application has a frontend that cannot be migrated (such as the user interface, which obviously has to stay with the user; Af in our example) and a backend that can be offloaded (typically the computationally intensive tasks; Ab in our example). Figure 2 depicts the adaptation architecture (the notation used is that of component systems, except for interfaces which
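The offloading decision can be sketched with a crude cost model: offload the backend Ab only when predicted remote execution plus data transfer beats local execution on the tablet. The work, bandwidth and latency figures below are invented for illustration and do not reflect the ASCENS performance-prediction machinery.

```python
def predict_local_time(work_units: float, local_speed: float) -> float:
    """Time to run the backend Ab on the tablet itself."""
    return work_units / local_speed

def predict_offload_time(work_units: float, remote_speed: float,
                         data_bytes: float, bandwidth_bps: float,
                         rtt_s: float) -> float:
    """Time to ship the input to the offload server, run there, and account for latency."""
    return work_units / remote_speed + data_bytes * 8 / bandwidth_bps + rtt_s

def should_offload(work_units: float, data_bytes: float,
                   local_speed: float, remote_speed: float,
                   bandwidth_bps: float, rtt_s: float) -> bool:
    return (predict_offload_time(work_units, remote_speed, data_bytes,
                                 bandwidth_bps, rtt_s)
            < predict_local_time(work_units, local_speed))

# Example: a compute-heavy task with a modest input is worth moving to the bus server.
print(should_offload(work_units=5e9, data_bytes=2e6,
                     local_speed=1e9, remote_speed=8e9,
                     bandwidth_bps=50e6, rtt_s=0.02))
```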
Scenario Description. For the Smart Supply Chain, the main objective is the improvement of the efficiency of the transportation of components from the supplier plants to FCA production plants, monitoring parameters related to the conditions of the containers during the transportation, in order to be able to react to events that can happen during the travel and that can impact the physical condition of the components or the expected delivery date. To reach this goal, the conditions of travelling containers will be monitored using an HW product prototype called “Outdoor LOGistic TrackER” (OLOGER from now on), developed by Cefriel, that will be integrated with the MIDIH platform. The first round of experiments (ending by M18) will be focused on logistic data coming from these devices. The second round of experiments (M27) will extend the data sources, including other sources such as weather and traffic information, and will require the use of other FIWARE lane components of the MIDIH platform. In the first round, the data acquisition system, including transmission, management and storage of IoT industrial logistic data (DiM, Data in Motion), will rely on the MindSphere/FIWARE lane.
- Data Ingestion: the ingestion of raw data from the field to FIWARE/MindSphere will leverage Data Collector modules and the MIDIH foreground component MASAI.
- Data Processing: the analysis of logistic data (DaR, Data at Rest), in order to produce useful insight and information about the logistic process, will leverage MindSphere components and ad-hoc logic.
- Data Persistence: MongoDB to manage the storage and loading of data.
- Data Visualization: visualization of output data will be done leveraging a Production Logistic Optimization application developed within MIDIH by Cefriel (CC6).
[For more details concerning the Business Scenarios and Objectives, please refer to D5.1]
The background and foreground components in this scenario (first round) are shown in the following Table 4.
  Component    Task(s)            Lane     Status
  MongoDB      T4.2, T4.3, T4.4   FIWARE   DONE     BACKGROUND
  MASAI
  MindSphere   T4.2               FIWARE   DONE     FOREGROUND
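As an example of the Data Persistence step, the following sketch stores OLOGER container readings in MongoDB using pymongo. The connection string, database and collection names, and document fields are assumptions for illustration; the actual data model is defined in the MIDIH deliverables.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")          # assumed local instance
readings = client["midih_logistics"]["container_readings"]  # assumed names

def store_reading(container_id: str, temperature_c: float, humidity_pct: float,
                  lat: float, lon: float) -> None:
    """Persist one condition reading sent by an OLOGER tracker (Data in Motion -> storage)."""
    readings.insert_one({
        "container_id": container_id,
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "position": {"lat": lat, "lon": lon},
        "timestamp": datetime.now(timezone.utc),
    })

def overheated_containers(threshold_c: float = 8.0):
    """Data at Rest query: containers with readings above a temperature threshold."""
    return readings.distinct("container_id", {"temperature_c": {"$gt": threshold_c}})
```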
Scenario Description. In the Smart Factory scenario, the proposed solution is the development of a system to control and analyse the quality control and process control data. The aim is to provide capabilities of visualization and predictive maintenance to the production line. For this, MIDIH will develop a solution to provide the blue-collar workers and plant supervisors with the capability to visualize the factory production and prevent failures. In addition to this quality control, a machine and tooling status control module will be developed. The Smart Factory scenario background consists of FIWARE and APACHE lanes with several components:
• Data ingestion: Data Collector to connect the physical level to FIWARE (i.e. OPC UA, non-OPC UA, etc.).
• Data bus: Orion Context Broker to manage context information, or XXXXX to integrate data streams.
• Data processing: CEP Siddhi, Logstash and TensorFlow to analyse events and create complex events, or to elaborate files with information when services are executed.
• Data persistence: Druid to manage which data must be loaded.
• Data visualization: Ruby on Rails and Nginx to present the data.
The background and foreground components in this scenario are shown in the following Table 5.
  Component                    Task(s)            Lane     Status
  Orion Context Broker (OCB)   T4.2, T4.3, T4.4   FIWARE   DONE   BACKGROUND
  HADOOP                       T4.4               APACHE   DONE   BACKGROUND
  HIVE                         T4.4               APACHE   DONE   BACKGROUND
  IDAS                         T4.2, T4.3, T4.4   FIWARE   DONE   BACKGROUND
  MIDIH Connectors             T4.3               FIWARE   DONE   FOREGROUND
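To illustrate the kind of rule the Data Processing layer evaluates, the sketch below raises a complex event when several consecutive quality-control measurements drift out of tolerance. It is written in plain Python rather than Siddhi's query language, and the window size, tolerance and event names are invented.

```python
from collections import deque

class ToolWearDetector:
    """Toy complex-event rule: N consecutive out-of-tolerance measurements -> event."""

    def __init__(self, tolerance: float, window: int = 5):
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)

    def on_measurement(self, machine_id: str, deviation_mm: float):
        """Feed one quality-control measurement; return a complex event or None."""
        self.recent.append(abs(deviation_mm) > self.tolerance)
        if len(self.recent) == self.recent.maxlen and all(self.recent):
            return {"event": "TOOL_WEAR_SUSPECTED", "machine": machine_id}
        return None

detector = ToolWearDetector(tolerance=0.05)
for d in [0.01, 0.06, 0.07, 0.08, 0.09, 0.06]:
    event = detector.on_measurement("press_03", d)
    if event:
        print(event)   # fires once five consecutive measurements are out of tolerance
```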