
Efficient Application Data Handling. To provide better throughput, application replication coordination and comparison may be implemented by special hardware on a printed circuit board, such as the self-checking pair host interface hardware used by SAFEbus [32]; alternatively, it may be performed by tasks distributed across a network. In the latter case, i.e. where the replication is performed over a network, additional steps are required to ensure that the replicated computational task set achieves and maintains the degree of state consistency necessary to produce identical outputs. This requires that the replicated tasks agree on initial state and on all input data that is causal to internal state changes. For networks without high integrity, where value correctness cannot be guaranteed, the agreement process requires the retransmission and comparison of all values received by all consumers. Such exchanges can demand significant software and messaging overheads. For high-integrity networking technology, where value correctness can be guaranteed (for example, self-checking TTEthernet, SAFEbus, or the BRAIN), the software and messaging overheads can be reduced by requiring agreement only on reception status, as part of a hierarchical agreement structure. However, even with this reduction, agreement may still constitute a significant software overhead if it is performed in software and the agreement exchanges need to meet typical avionics real-time requirements. For this reason, it is advantageous for agreement exchanges to be implemented primarily in hardware, with minimal software involvement. Hardware-assisted application data agreement can be done with an ingress (received frames) agreement scheme that utilizes the determinism of time-triggered frame exchange, although asynchronous variants may also be possible.

Figure 17. VL Agreement List - Maintained for Each Task
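The overhead reduction described above can be illustrated with a small sketch. All names and sizes here are illustrative assumptions, not from the source: without high-integrity networking, every consumer must re-exchange full frame payloads for value comparison, whereas with high-integrity networking only a per-frame reception status word need be exchanged.

```python
def value_agreement_bytes(frame_sizes):
    """Without high-integrity networking, value correctness is not
    guaranteed, so consumers must retransmit every received value for
    comparison: per-round overhead is the sum of the payloads."""
    return sum(frame_sizes)

def status_agreement_bytes(frame_sizes, status_word_bytes=4):
    """With high-integrity networking (e.g. self-checking TTEthernet,
    SAFEbus, or the BRAIN), value correctness is guaranteed, so only a
    small reception status word per frame needs to be exchanged."""
    return len(frame_sizes) * status_word_bytes

frames = [1500, 1500, 64, 256]        # assumed example payload sizes, bytes
print(value_agreement_bytes(frames))  # 3320
print(status_agreement_bytes(frames)) # 16
```

The status-only exchange is independent of payload size, which is what makes the hierarchical scheme attractive under avionics real-time constraints.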
For each replicated set of tasks, the network hardware within each node maintains a list of the Virtual Links (VLs), TTEthernet’s frame identifiers, that require ingress agreement among the tasks. Each frame agreement list also has a dedicated VL or VLs to perform the ingress agreement exchange; see Figure 17. As frames are received by an ES, they are checked against each of the agreement frame lists. If a frame is found on an agreement list, the receiving host adds its reception status to the exchange VL’s frame buffer. As additional frames are received, each status is likewise written to the exchange buffer. The precise organization of the buffer is implementation specific; however, a simple example mapping is shown in Figure 18. In this example, the index of the frame ID in the VL list is used as an index into the assigned exchange VL buffer space, and the reception status word is written at the indexed location. If the replicated system data flow is time-triggered, then it can be determined from the time-triggered schedule by what time all ingress frames for the task set should have arrived. Following this point, as shown in Figure 19, the content of the exchange buffer is transmitted. This frame is routed such that it will arrive at the other ESs that are replicating the associated task. On reception of the exchange frame, each receiving node compares the remote reception status with its local reception status for each frame on the agreement list. If the status for both the local and remote host indicates that the frame was received OK, the associated frame is marked with an agreed status. If either the local or the remote reception status indicates that the frame was not received, the frame is marked with a non-agreed status. The agreed status is made available to all data consumers.

Figure 18. Ingress Frame Processing and Exchange Buffer Building
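The list maintenance, buffer indexing, and comparison steps above can be sketched as follows. This is a minimal software model of behavior the text says is implemented in hardware; the class and constant names are hypothetical, not from the source.

```python
RX_OK, RX_MISSING = 1, 0  # assumed encoding of the reception status word

class AgreementList:
    """One per replicated task set: the VLs requiring ingress agreement,
    plus the exchange buffer built as their frames arrive."""

    def __init__(self, vl_ids):
        self.vl_ids = list(vl_ids)
        # One status slot per VL; the VL's index in the list indexes the
        # exchange buffer (the simple mapping of Figure 18).
        self.exchange_buf = [RX_MISSING] * len(self.vl_ids)

    def record_reception(self, vl_id, status=RX_OK):
        # Called as each ingress frame on the agreement list is received.
        self.exchange_buf[self.vl_ids.index(vl_id)] = status

    def agree(self, remote_buf):
        # On reception of the remote exchange frame: a frame is "agreed"
        # only if both local and remote status indicate reception OK.
        return [RX_OK if loc == RX_OK and rem == RX_OK else RX_MISSING
                for loc, rem in zip(self.exchange_buf, remote_buf)]

local = AgreementList([10, 11, 12])
local.record_reception(10)
local.record_reception(12)        # VL 11 was never received locally
agreed = local.agree([1, 1, 0])   # remote received 10 and 11, missed 12
# agreed == [1, 0, 0]: only VL 10 was received by both replicates
```

In hardware the comparison would be a simple word-wise AND over the exchange buffers, which is why the scheme needs so little software involvement.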
This agreed status is then signaled as part of each frame’s fresh-data indication to the hosts. Using this additional status, the hosts are able to identify which data has been received by all of the replicates. The time-triggered schedule is also used to clear the frame status at a known point within the schedule, allowing frame status processing to resume in the next cycle. In practice, this may be implemented in conjunction with high-level buffer processing, such as the maintenance of ping-pong buffer schemes. The above mechanism enables software to simply identify which data was received by all parties of the replicated task set. Using this information, the software can decide which data to use for replica-determinate calculations and, if necessary, substitute “safe defaults” to mitigate missing data.
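The consumer-side use of the agreed status can be sketched as below. The function names and the safe-default values are illustrative assumptions; the source specifies only the behavior: use agreed data, substitute safe defaults for non-agreed frames, and clear status at a scheduled point.

```python
def select_inputs(values, agreed, safe_defaults):
    """values[i] is the locally received payload for agreement-list slot i;
    agreed[i] is 1 only if every replicate received that frame OK.
    Non-agreed slots fall back to a safe default so all replicates
    compute on identical inputs (replica determinism)."""
    return [v if ok else d
            for v, ok, d in zip(values, agreed, safe_defaults)]

def clear_status(agreed):
    # Scheduled clear point: reset all slots so frame status processing
    # can resume cleanly in the next cycle.
    return [0] * len(agreed)

inputs = select_inputs(["a_rx", "b_rx", "c_rx"],
                       [1, 0, 1],
                       ["a_default", "b_default", "c_default"])
# inputs == ["a_rx", "b_default", "c_rx"]
```

Because every replicate sees the same agreed vector, every replicate makes the same use-or-default decision, which is the point of the agreement exchange.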
