Fault Management. Full details of the fault reporting process and contact names and numbers are set out in the Operations and Maintenance Manual.
Fault Management. Boomerang will provide fault management and recovery of the System. This includes logging of alarms, acting on alarms, and resolving the cause or limiting the effect.
Fault Management. The NOC will recognize, isolate, and log faults and restore Services in the network. A situation may arise in which the Optical Wavelength experiences degradation or an outage. When calling in the outage, the Customer should provide the NOC with an accurate description of the trouble together with the circuit identification. It is essential that the Customer be able to provide Rogers with physical access to their demarcation point at the time of trouble should Rogers be the provider of the Customer’s Access Network. Rogers will perform diagnostic testing and physical loopback testing to determine the root cause of the outage.
Fault Management. Rogers will provide comprehensive Fault Management Services. These include:
4.2.1 Problem isolation, identification, and resolution on either the primary or backup service.
4.2.2 Repair and restoral of Services supplied by Rogers.
4.2.3 Centralized coordination and management of service repairs, including dispatch of on-site technicians (as procured), trouble-ticket tracking, proactive notification and status reporting.
4.2.4 Centralized coordination and management of hardware repairs (only available to you if you have purchased Hardware maintenance through Rogers and have a valid Hardware vendor maintenance contract). Restoral timelines are dependent on the Hardware vendor maintenance coverage and contract terms.
4.2.5 Restoration of CE firewall configuration following Network, access, and/or Hardware repairs. To ensure the integrity of the CE firewall configuration for restoration purposes, Rogers will assume responsibility for the configuration of your CE firewall. Fault Management of your CE firewall is contingent upon the following conditions:
i. Documented IOS/Software bugs, defects, or vulnerabilities;
ii. Damaged or faulty Hardware, or;
iii. Your other systems, equipment, or applications.
Fault Management. In the case of API malfunctions, the Customer can contact the support organization (Customer Support Center) indicated in the Mercedes-Benz /developers Portal or in the XENTRY Shop, at the contact address indicated there.
Fault Management. Fault Management involves the process of monitoring traps and alarms on all service-providing elements and links in order to allow for sectionalization, identification, and resolution of a problem with the delivery of Services. With respect to each of the Service Bundles, Fault Management is described in Schedule A.
Fault Management. Features: viewing of the IFU's active alarms (time-stamped) stored on the individual IFU, with the ability to log a note with each alarm type; viewing of the Alarm History Log [*] events (time-stamped) stored on the individual IFU (an event is either an alarm set or an alarm clear); mask/unmask of all alarms; display of the Modem Lock status; and configuration of the SET/CLEAR threshold values for certain Threshold Crossing Alarm (TCA) parameters.
6.1.5.1 The following TCA parameters can be configured: RSSI, SQM, IFU Internal Temperature.
6.1.5.2 The following alarms can be generated when corresponding fault or threshold-crossing conditions occur; when the conditions no longer exist, the corresponding alarm-clear is generated: Payload Offline, Exciter Unlock, Modem Unlock, Power Supply Failure for Tx, Power Supply Failure On Unswitched Rail, Tx Failure, SONET Clock LOL from RF, RSSI Too Low. [*] Confidential Treatment Requested
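Purely for illustration (not part of the product specification), the following Java sketch shows the SET/CLEAR threshold behaviour described above for a Threshold Crossing Alarm such as RSSI Too Low: the alarm is set when the monitored value crosses the SET threshold, and the alarm-clear is generated once the value has recovered past the CLEAR threshold. All names in the sketch are hypothetical.

// Illustrative Threshold Crossing Alarm (TCA) with separate SET and CLEAR thresholds,
// e.g. for an "RSSI Too Low" alarm: SET fires when the value drops below setThreshold,
// CLEAR fires once the value has recovered above clearThreshold.
class ThresholdCrossingAlarm {
    private final double setThreshold;
    private final double clearThreshold;
    private boolean active = false;

    ThresholdCrossingAlarm(double setThreshold, double clearThreshold) {
        this.setThreshold = setThreshold;
        this.clearThreshold = clearThreshold;
    }

    // Returns "SET" or "CLEAR" when the alarm state changes, or null when nothing changes.
    String update(double value) {
        if (!active && value < setThreshold) {
            active = true;
            return "SET";
        }
        if (active && value > clearThreshold) {
            active = false;
            return "CLEAR";
        }
        return null;
    }
}

Using distinct SET and CLEAR values gives the hysteresis implied by the specification, so a value hovering near a single limit does not generate a flood of alternating alarm and alarm-clear events.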
Fault Management. In the event that You become aware of any mobile Service fault, Mobile Network fault or Mobile Communications Equipment fault, You should notify Us immediately by contacting the Mobile Customer Care Team. Once it has been established that a fault exists, We will use our reasonable endeavours to remedy any such faults.
Fault Management. Service instances throw faults to signal the occurrence of unpredictable circumstances within the runtime environment, which force deviations from the control flow expected by the implementation of the service. The intention is to characterize problems for clients on the assumption that, based on the characterization, they may react in some useful way. In particular, gCube services ought to be designed so as to return three broad types of faults, depending on whether the fault is deemed to be unavoidable across any instance of the service (unrecoverable), perhaps avoidable at some instance other than the one which observes it (retry-equivalent), or perhaps avoidable in the future at the same instance which observes it (retry-same). A resilient client may then exploit these broad semantics to react in ways which are perhaps more useful than gracefully desisting. In particular, a client which is presented with either of the last two types of fault can try to recover accordingly, by trying other instances or by retrying with the same instance at a later time. Similarly, a client that is presented with an unrecoverable fault may avoid consuming further resources in the attempt.
The framework offers a number of facilities for dealing with the three types of gCube faults. First, it predefines their type declarations for immediate importing within service interfaces. Second, it provides default implementations of these interfaces which:
• serialize and de-serialize on the wire in accordance with gCube requirements;
• need not be included in stub distributions;
• can be handled as normal exceptions within the code (e.g. may wrap other exceptions and be caught in try/catch blocks).
Third, it mirrors faults with lightweight exceptions for convenience of use within the service implementation: as faults and exceptions are freely convertible, a service may be designed to:
• convert exceptions into corresponding faults as these exit the scope of the service;
• convert faults received from remote services into the corresponding exceptions and let these percolate up the call stack.
While the transparencies discussed so far relate to the fulfillment of system requirements, the framework also offers tools that may support developers in the implementation of service-specific semantics, as shown below.
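As an illustration of the retry semantics described above, the following minimal Java sketch shows how a client might branch on the three broad fault types. The exception names and the client logic are assumptions made for the example, not the framework's own declarations.

// Hypothetical exception types mirroring the three broad gCube fault semantics.
class UnrecoverableFault extends Exception {}
class RetryEquivalentFault extends Exception {}
class RetrySameFault extends Exception {}

// A service instance that may fail with any of the three fault types.
interface ServiceInstance {
    String invoke(String request) throws UnrecoverableFault, RetryEquivalentFault, RetrySameFault;
}

// A resilient client: retry the same instance on retry-same faults, move to another
// instance on retry-equivalent faults, and give up only on unrecoverable faults.
class ResilientClient {
    String call(java.util.List<ServiceInstance> instances, String request) throws UnrecoverableFault {
        for (ServiceInstance instance : instances) {
            for (int attempt = 0; attempt < 3; attempt++) {
                try {
                    return instance.invoke(request);
                } catch (RetrySameFault fault) {
                    pause(1000L * (attempt + 1));   // back off, then retry the same instance
                } catch (RetryEquivalentFault fault) {
                    break;                          // try the next instance instead
                }
            }
        }
        throw new UnrecoverableFault();             // no instance could serve the request
    }

    private static void pause(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}

The sketch only exercises the client-side behaviour; within a service implementation, the framework's convertibility between faults and exceptions would let these types be caught and re-thrown across service boundaries as described above.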