Architecture and Specification. The framework follows a three-tier structure. The first tier is the service layer, which is in charge of high-level services such as Docker deployment and hosts two important elements of FogFlow:
• Task Designer: a web tool to visually create, manage and delete the tasks that will be deployed on the fog/cloud nodes.
• Topology Master: the component that, based on the configured application topology, decides when, where and which tasks are deployed, whether in the cloud or at the edge.
The second tier is the context layer, where all context data is located; it is in charge of managing, storing and distributing the data throughout the distributed application. This layer is mainly composed of the following two components:
• IoT Discovery: it processes context information (id, attributes, metadata, etc.) and allows other elements to query and subscribe to the data. It is used by the local IoT Broker to query for entities located in other IoT Brokers hosted on different edge nodes.
• IoT Broker: it manages the local context entities that can be produced by nearby IoT devices integrated on the edge node, providing a single view of all the entities that provide input streams to the tasks deployed on that node. It can also provide output streams to be consumed by tasks deployed on other edge nodes or in the cloud.
Finally, there is the data processing layer, where, as its name suggests, the data from input streams is processed, optionally producing output streams after the input has been processed or analysed. This layer is mainly composed of the following two components:
• Worker: following the decisions of the Topology Master, each worker launches its tasks in Docker containers on its local machine. It defines the inputs and outputs of tasks and manages them according to their priority.
• Operator: operators are the objects that contain all the data processing logic of a service topology. As stated previously in this document, FogFlow processes data through tasks, and tasks are implemented by operators. An operator can be implemented in Python or JavaScript; the FogFlow documentation provides a worked example of how to implement an operator (an illustrative sketch follows below).
Figure 9 illustrates the interactions between the Worker, the IoT Broker and the deployed tasks. These interactions are explained in detail next.
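To make the role of an operator more concrete, the sketch below shows the general shape of a data-processing operator in Python. The entry-point name, parameters and entity layout are illustrative assumptions, not the actual FogFlow operator interface; the worked example in the FogFlow documentation defines the real entry points and helper code.

# Illustrative sketch only: the function name, parameters and entity layout are
# hypothetical and do NOT reproduce the official FogFlow operator interface.
def handle_entity(entity, publish):
    """Process one NGSI-style context entity from an input stream.

    entity  : dict with the incoming entity (id, type, attributes)
    publish : callback provided by the hosting task to emit an output entity
    """
    temperature = entity.get("attributes", {}).get("temperature")
    if temperature is None:
        return  # nothing to process in this update

    # Example processing logic: derive an alert entity when a threshold is exceeded.
    if temperature > 40.0:
        result = {
            "id": entity["id"] + ".alert",
            "type": "TemperatureAlert",
            "attributes": {"temperature": temperature, "level": "high"},
        }
        publish(result)  # becomes an output stream consumed by other tasks


if __name__ == "__main__":
    # Local test of the processing logic, outside any FogFlow runtime:
    handle_entity(
        {"id": "Device.001", "type": "Sensor", "attributes": {"temperature": 42.5}},
        publish=print,
    )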
Architecture and Specification. The communications architecture is described in Figure 20, which depicts the processes for registration, orchestration, consumption and production.
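To illustrate the registration, orchestration and consumption steps at a code level, the sketch below shows how a provider might register a service and how a consumer might then request orchestration over HTTP. The endpoint paths, payload fields, host addresses and service names are assumptions made for illustration only; the exact Arrowhead Framework API and schemas are those described in its documentation and in Figure 20.

# Illustrative sketch of the registration/orchestration/consumption flow.
# Endpoint paths, payload fields and addresses are ASSUMPTIONS for illustration;
# consult the Arrowhead Framework documentation for the actual API and schemas.
import requests

SERVICE_REGISTRY = "http://arrowhead-local:8443/serviceregistry"   # hypothetical
ORCHESTRATOR = "http://arrowhead-local:8441/orchestrator"          # hypothetical

# 1. Registration: a provider announces the service it offers.
requests.post(f"{SERVICE_REGISTRY}/register", json={
    "serviceDefinition": "robot-arm-control",                      # hypothetical service
    "providerSystem": {"systemName": "arm-controller",
                       "address": "192.168.1.20", "port": 8080},
    "interfaces": ["HTTP-INSECURE-JSON"],
})

# 2. Orchestration: a consumer asks which provider it should use.
resp = requests.post(f"{ORCHESTRATOR}/orchestration", json={
    "requesterSystem": {"systemName": "demo-consumer",
                        "address": "192.168.1.30", "port": 9090},
    "requestedService": {"serviceDefinitionRequirement": "robot-arm-control"},
})
provider = resp.json()["response"][0]["provider"]                  # hypothetical response shape

# 3. Consumption and production: the consumer calls the provider returned above.
data = requests.get(f"http://{provider['address']}:{provider['port']}/arm/status")
print(data.json())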
Architecture and Specification.
Figure 21 – Arrowhead Framework Robotic Arm Demonstrator – Architecture diagram
Figure 22 – Screenshot of the consumer Web GUI
Architecture and Specification. HW/SW Prerequisite
Architecture and Specification. All of the previous subsystems have been implemented using Docker5. Docker has been chosen because it is an easy way to assemble complex, pre-configured service architectures and it allows their deployment on many different environments with a high probability of success. Figure 5 shows a more detailed view of the subsystems introduced in the previous section. As a general criterion, the interconnection between containers is realized by means of Docker subnets, whose access is restricted to only the needed communications, so that a container is not exposed to other containers that do not require its services. PORTAL, IDM and MKPL expose to the Docker host only the ports necessary for normal users and/or administrators to access their services. These ports, internal to each container, must be published ("mapped") to available and reachable Docker host ports to make the services accessible from the outside. In particular, as further described in the following sections, port 8080 of the PORTAL and port 8000 of both IDM and MKPL allow direct HTTP access to the web interfaces of the corresponding containers. As an alternative to port 8080, port 8009 makes it possible, if necessary, to reach the PORTAL efficiently through a reverse proxy using the AJP protocol. 5 xxxxx://xxx.xxxxxx.xxx/
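To illustrate the subnet and port-publishing criteria described above at a code level, the sketch below uses the Docker SDK for Python. Only the port numbers (8080, 8000, 8009) come from this section; the image names, container names and network name are placeholders, not the project's actual artefacts.

# Illustrative sketch using the Docker SDK for Python (pip install docker).
# Image, container and network names are placeholders.
import docker

client = docker.from_env()

# Restricted internal subnet: only containers attached to it can reach each other.
internal_net = client.networks.create("deployment_internal", driver="bridge")  # hypothetical name

# PORTAL: publish 8080 (HTTP web UI) and 8009 (AJP, for an optional reverse proxy).
client.containers.run(
    "example/portal:latest",                    # placeholder image
    name="portal",
    detach=True,
    network=internal_net.name,
    ports={"8080/tcp": 8080, "8009/tcp": 8009},
)

# IDM and MKPL: publish only their internal 8000 HTTP ports to the Docker host.
for name in ("idm", "mkpl"):
    client.containers.run(
        f"example/{name}:latest",               # placeholder images
        name=name,
        detach=True,
        network=internal_net.name,
        ports={"8000/tcp": None},               # let Docker pick a free host port
    )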
Architecture and Specification. The FIWARE OPC UA Agent is a software component that connects automation systems (the field environment), which implement the OPC UA9 standard connection technology, to the Future Internet platform's information bus, i.e. the Publish/Subscribe Context Broker10. The Publish/Subscribe Context Broker, of which the Orion Context Broker (OCB) is the reference implementation, is a Generic Enabler of the FIWARE platform that exposes a standard interface for applications to interact with field devices and with each other. Information producers and consumers integrate with the Publish/Subscribe Context Broker through the OMA NGSI API. The FIWARE OPC UA Agent is a module of IDAS, the reference implementation of the FIWARE Backend Device Management GE11. It translates OPC UA address spaces into NGSI contexts (the FIWARE standard data exchange model) without exposing the underlying OPC UA binary communication protocol to applications. The FIWARE OPC UA Agent is also able to deal with security aspects of the FIWARE platform (e.g. enforcing authentication and authorization on the communication channel) and to provide other common services. For the first version of the FIWARE OPC UA Agent (see Figure 11), it is assumed that the underlying devices expose an OPC UA service API through an OPC UA TCP binding or use specific libraries to communicate with the middleware. For convenience, these components are indicated as the OPC UA Server blocks, which are in any case not part of the developed component. 9 xxxxx://xxxxxxxxxxxxx.xxx/about/opc-technologies/opc-ua/
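As an illustration of the NGSI side of this integration, the sketch below shows how a context producer could create an entity in an Orion Context Broker using the NGSIv2 REST API. The broker address, entity id and attribute names are assumptions for illustration; in the actual setup the agent performs this translation from the OPC UA address space automatically.

# Minimal sketch of publishing context data to an Orion Context Broker via NGSIv2.
# The broker URL, entity id and attribute names are illustrative assumptions.
import requests

ORION = "http://localhost:1026"   # default Orion port; adjust to the deployment

# Create (or report) an entity representing a value read from an OPC UA variable.
entity = {
    "id": "urn:ngsi-ld:PLC:001",                          # hypothetical entity id
    "type": "PLC",
    "spindleSpeed": {"value": 1450, "type": "Number"},    # hypothetical attribute
}
resp = requests.post(f"{ORION}/v2/entities", json=entity)
resp.raise_for_status()

# A consumer application can then read the same context through the broker.
current = requests.get(f"{ORION}/v2/entities/urn:ngsi-ld:PLC:001").json()
print(current["spindleSpeed"]["value"])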
Architecture and Specification. The following picture (Figure 13) depicts the detailed MASAI architecture, in which three different layers can be identified: Communication, Data Handling and Broker.
Figure 13 – MASAI Architecture
The different layers of this architecture are:
Architecture and Specification. The most important functionalities are:
1. collection and transfer of all types of data (Data Ingestion)
2. real-time analytics that process high-volume streaming data in order to predict and detect events based on underlying patterns and correlations (processing Data in Motion)
3. storage layer for persisting all types of data, such as past data, metadata and models (Data Persistence)
4. data-analytics services on multidimensional and complex data, including exploratory analysis, multivariate analysis, predictive analytics and deep learning (processing Data at Rest)
5. visualization services to enable users to contextualize, understand and apply results for better decision making. There is also a need to orchestrate these tasks towards a common goal (Workflow Management); see Figure 19. A minimal sketch of the Data-in-Motion idea is given after this list.
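To make the "processing Data in Motion" item above more concrete, the sketch below shows a minimal streaming pattern: readings arrive one at a time and an event is flagged when a simple threshold pattern is detected. The data source, field names and threshold are hypothetical; a real deployment would use a streaming engine rather than a plain loop.

# Minimal illustration of the "processing Data in Motion" idea: detect events on a
# stream of readings as they arrive. Field names and the threshold are hypothetical.
from collections import deque
from typing import Iterable, Iterator


def detect_events(stream: Iterable[dict], threshold: float = 75.0,
                  window: int = 5) -> Iterator[dict]:
    """Yield an event whenever the rolling average of a reading exceeds the threshold."""
    recent = deque(maxlen=window)
    for reading in stream:
        recent.append(reading["value"])
        avg = sum(recent) / len(recent)
        if len(recent) == window and avg > threshold:
            yield {"sensor": reading["sensor"], "avg": avg, "event": "threshold_exceeded"}


if __name__ == "__main__":
    # Simulated input stream; in practice this would come from the Data Ingestion layer.
    readings = [{"sensor": "press-01", "value": v} for v in (60, 70, 72, 80, 85, 90, 95)]
    for event in detect_events(readings):
        print(event)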
Architecture and Specification. As depicted in Figure 20, the main expected functionalities are directly related to FIWARE components. A brief description of the capabilities of each of the functions presented can be found in the equivalent section in Annex B. The following sections aim to provide a description of these layers.