Network Latency Sample Clauses

Network Latency. Network Latency is defined as the average time taken for an IP packet to traverse a pair of backbone Company POPs on the Company Network. The Company Network Latency Guarantee means that the average monthly network latency between North American Company POPs shall not exceed eighty-five (85) ms. In the event that the guaranteed network latency metrics are not met during any one calendar-month period, Company will provide a credit equivalent to one (1) day of the Service Charge.
Network Latency. Network latency, averaged across all elements on the local portion of SI&T’s network, will be less than 25 ms per element. After being notified by Customer of network latency in excess of the limit specified above, SI&T will use commercially reasonable efforts to determine the source of such excess network latency and to correct the problem to the extent that its source is on SI&T’s Network. If SI&T fails to remedy such network latency within four (4) hours of verification, and if the average network latency for the preceding thirty (30) days has exceeded the limit specified above, Customer may request a one (1) day Service Credit for that particular event. Customer may not request a network latency service credit more than once for any given calendar day. Network Latency across an element is defined as the average time taken for data to make a round trip across such element. Elements in the transport circuit include routers, switches, circuits, and other components. Test points for latency are designated solely by SI&T. Testing must be done during a period in which the only traffic on the circuit is the test traffic. Average latency is not measured while a circuit is experiencing a service outage. In the case of continuous high latency exceeding the limits of this SLA, SI&T reserves the right to recommend disconnection of the affected circuit without penalty or breach.
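The credit condition in the clause above is conjunctive: both the failed four-hour remedy and the 30-day average breach must hold before a credit can be requested. A minimal Python sketch of that eligibility test, assuming hypothetical inputs (the function name, the sample list, and the remedy-time figure are illustrative, not part of the clause):

```python
from statistics import mean

LATENCY_LIMIT_MS = 25.0    # per-element average limit from the clause
REMEDY_WINDOW_HOURS = 4.0  # time SI&T has to remedy after verification

def credit_eligible(hours_to_remedy: float, trailing_30d_latencies_ms: list[float]) -> bool:
    """Credit requires BOTH conditions from the clause: the excess latency was
    not remedied within four hours of verification, AND the average latency
    over the preceding 30 days exceeded the 25 ms limit."""
    not_remedied = hours_to_remedy > REMEDY_WINDOW_HOURS
    exceeded_30d = mean(trailing_30d_latencies_ms) > LATENCY_LIMIT_MS
    return not_remedied and exceeded_30d

# Illustrative figures: remedy took 6 hours, 30-day average ran 27 ms -> eligible.
print(credit_eligible(6.0, [27.0] * 30))  # True
```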
Network Latency. Network Latency is defined as the round-trip delay (in milliseconds) of packets transported between specific WIN POP locations across the WIN Data Network and does not apply to Third Party Provider local access circuits. Network Latency is calculated based on an aggregate monthly measurement average between specific WIN PoP endpoints.
Network Latency. As the primary locus of data moves from disk to flash or even DRAM, the network is becoming the primary source of latency in remote data access. Network latency is an expression of how much time it takes for a packet of data to get from one point to another. Several factors contribute to network latency: not only the time it takes for a packet to travel along the cable, but also the time each switch spends transmitting, receiving, buffering, and forwarding the packet. Total packet latency is the sum of all of the path latencies and all of the switch latencies encountered along the route (usually reported as RTT, Round Trip Time). A packet that travels over N links will pass through N − 1 switches. The value of N for any given packet will vary depending on the amount of locality that can be exploited in an application’s communication pattern, the topology of the network, the routing algorithm, and the size of the network. However, for typical-case latency in a large-scale data centre network, path latency is a very small part of the total. Total latency is dominated by switch latency, which includes delays due to buffering, routing-algorithm complexity, arbitration, flow control, switch traversal, and congestion at a particular switch egress port. These delays are incurred at every switch in the network and are therefore multiplied by the hop count. One way to reduce hop count is to increase the radix of the switches. An increased switch radix also means fewer switches for a network of a given size and therefore a reduced CapEx cost. Reduced hop count and fewer switches also lead to reduced power consumption. For electrical switches, there is a fundamental trade-off due to the poor scaling of both signal-pin count and per-pin bandwidth: one could use more pins per port, which yields a lower radix but a higher bandwidth per port, or fewer pins per port, which increases the switch radix at the expense of per-port bandwidth. Photonics may lead to a better solution: the bandwidth advantage of spatial/spectral division multiplexing and the tighter signal packaging density of optics make high-radix switches feasible without a corresponding degradation of port bandwidth.
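As a rough illustration of the decomposition described above, the sketch below sums per-link and per-switch delays for a packet crossing N links and N − 1 switches. The figures are invented solely to show the point made in the passage: in a data-centre path, switch latency dominates the total.

```python
def round_trip_latency_ms(link_latencies_ms, switch_latencies_ms):
    """Total one-way packet latency is the sum of all path (link) latencies
    plus the sum of all switch latencies along the route; a packet crossing
    N links passes through N - 1 switches. RTT doubles the one-way figure,
    assuming a symmetric return path."""
    assert len(switch_latencies_ms) == len(link_latencies_ms) - 1
    one_way = sum(link_latencies_ms) + sum(switch_latencies_ms)
    return 2 * one_way

# Invented figures: per-link propagation delay is tiny inside a data centre,
# so the per-switch delays (buffering, arbitration, traversal) dominate.
links = [0.005] * 4      # 4 links at ~5 microseconds each
switches = [0.5] * 3     # 3 switches at ~0.5 ms each
print(round_trip_latency_ms(links, switches))  # 3.04
```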
Network Latency. The average roundtrip latency between Masergy hubs will meet or be less than the times shown below in Table 1 (Network Latency). Latency is calculated by averaging five (5) minute latency measurements between Masergy's inter-city transit backbone routers each month. In the event that Masergy fails to meet the latency measurement set forth in Table 1 (Network Latency) in any given calendar month during the term of Masergy's agreement with Customer, and Customer has Service between the affected hubs, Customer will be eligible to receive a credit equal to one week of its affected site's monthly recurring Masergy Service fees (excluding local access circuit charges) for the month in which the average latency measurement is not met. In order to be eligible for the Network Latency credit, Customer must notify Masergy of the latency failure within thirty (30) business days of the end of the month in which the failure occurred. Customer should open a trouble ticket and make claims via the Masergy Portal application, which can be accessed by clicking the Log In link on the Masergy website xxx.xxxxxxx.xxx; claims may also be submitted via electronic mail sent to xxxxxxxxxxxxx@xxxxxxx.xxx. Latency failures caused by Force Majeure events do not apply, and any resulting latency data will not be used in the calculation of the monthly latency measurement.
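One possible reading of the measurement mechanics in the clause above, sketched in Python: average the five-minute samples for the month, drop samples taken during Force Majeure events as the clause requires, and compare against the Table 1 target. The 7/30 proration of "one week" of monthly fees is an assumption for illustration; the clause itself only says "one week".

```python
def monthly_average_ms(samples):
    """Average the month's five-minute latency measurements, excluding any
    sample flagged as taken during a Force Majeure event, per the clause.
    Each sample is a (latency_ms, force_majeure) pair."""
    usable = [ms for ms, fm in samples if not fm]
    return sum(usable) / len(usable)

def latency_credit(avg_ms, table1_target_ms, site_mrc, local_access_charges):
    """One week of the affected site's monthly recurring Masergy fees,
    excluding local access circuit charges, if the Table 1 target is missed.
    The 7/30 proration of 'one week' is an assumption, not clause text."""
    if avg_ms <= table1_target_ms:
        return 0.0
    return (site_mrc - local_access_charges) * 7 / 30

# Invented numbers: a 48 ms monthly average against a 45 ms Table 1 target.
print(latency_credit(48.0, 45.0, 3000.0, 400.0))  # ~606.67
```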
Network Latency. The average network transit delay (“Latency”) will be measured via roundtrip pings on an ongoing basis every five minutes to determine a consistent average monthly performance level for Latency between edge locations. Edge locations are defined as Customer sites, trading partner locations, or Exchange locations. Latency is calculated as follows:

Target Latency Goal = Minimum Latency + (Per Mile Latency × Round Trip Miles* Between Customer Edges)

Region          Minimum Latency   Per Mile Latency
Intra U.S.      10 ms             .02 ms
International   20 ms             .03 ms

If Goal Exceeded By   Credit as % of Lumen Financial Connect Port MRC of Affected Service*
1-10 ms               10%
11-20 ms              20%
>20 ms                30%

To simplify calculations, air miles are used to generate latency targets. For example, if location A is 100 air miles from location B (i.e., 200 miles roundtrip), the latency target would be 20 ms + (.02 ms × 200) = 24 ms. Route miles are used in lieu of air miles only when the number of route miles is greater than 2x the number of air miles. *Subject to requirements and limitations in Section 4.

(ii) Exchange Connectivity Latency to New York, Chicago, and London Data Centers. Global Exchange Connectivity Latency metrics are calculated one way in milliseconds. The Global Exchange Connectivity Latency Goal in this subsection is applicable only if a Customer location is within a Lumen Data Center listed in the table below, and applies to one connection of a primary/secondary resilient connection to the Exchange listed in the table below. The table below reflects measurements one way in milliseconds; Goals are measured using monthly averages. Failure to meet the Goal qualifies Customer for a credit of 25% of the Lumen Financial Connect Port MRC of the Affected Service (the credit is applied to the Lumen Financial Connect Port MRC of the Affected Service and cannot be combined with the Network Availability SLA credit).

Exchange        LO4    LO1    NJ2    NJ2X   NJ1    XX0    XX0
SFTI EU         0.25   1      --     --     --     --     --
LSE             0.5    1      --     --     --     --     --
BATS EU         0.25   1      --     --     --     --     --
BOX             --     --     0.25   0.25   0.3    0.25   10
BATS US         --     --     0.25   0.25   0.25   1      10
CBOE            --     --     10     10     10     10     0.25
CME             --     --     10     10     10     10     0.25
ICE             --     --     10     10     10     10     0.25
ISE             --     --     1      1      1      1.5    10
NASDAQ NLX      --     --     0.25   0.5    0.5    1      10
NYSE SFTI US    --     --     0.25   0.25   0.25   1      10

(c) Packet Delivery. Packet Delivery will be measured on an ongoing basis every five minutes to determine a consistent average monthly performance level for packe...
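The Target Latency Goal formula and credit tiers in the clause above are mechanical, so a short sketch may help. The rates here are taken from the Region table; the function names are illustrative and the round-trip air-mile distance is supplied by the caller:

```python
def latency_target_ms(region, round_trip_air_miles):
    """Target Latency Goal = Minimum Latency + (Per Mile Latency * Round Trip Miles),
    using the Minimum/Per-Mile rates from the Region table."""
    minimum_ms, per_mile_ms = {
        "Intra U.S.": (10.0, 0.02),
        "International": (20.0, 0.03),
    }[region]
    return minimum_ms + per_mile_ms * round_trip_air_miles

def credit_percent(measured_avg_ms, target_ms):
    """Tiered credit from the table: goal exceeded by 1-10 ms -> 10%,
    11-20 ms -> 20%, more than 20 ms -> 30%."""
    over = measured_avg_ms - target_ms
    if over < 1:
        return 0
    if over <= 10:
        return 10
    if over <= 20:
        return 20
    return 30

# 100 air miles one way is 200 round-trip miles, as in the clause's example.
print(latency_target_ms("Intra U.S.", 200))  # 14.0 with the table's Intra U.S. rates
print(credit_percent(30.0, 14.0))            # 20 (goal exceeded by 16 ms)
```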
Network Latency. (a) Definition. Round trip delay (“RTD”) targets are based on the geographical location of Customer’s ACs plus core PoP-to-PoP connectivity. For CE Locations at distances of less than 100 kilometers (km) from IPC’s regional PoP, add 3 milliseconds (msec) per each AC entering IPC’s extranet to the table in Addendum B. For CE Locations at distances of more than 100 km from IPC’s regional PoP, add 2 msec per each AC entering IPC’s extranet for the first 100 km, plus a further 2 msec for each additional 100 km or portion thereof beyond the initial 100 km, to the table in Addendum B. Latency is measured using IPC’s network management systems and is the sole and conclusive measurement for the purpose of this guarantee.
Network Latency. (a) Definition. Round trip delay (“RTD”) targets are based on the geographical location of Customer’s ACs plus core PoP to PoP connectivity. For all Connexus Ethernet Network Services, add 2 milliseconds (msec) to the tables in Addendums A, B, and C per each AC entering IPC’s network for each 100 kilometers (km) or portion thereof from IPC’s regional PoP to the Customer’s Location. Latency is measured using IPC’s network management systems and is the sole and conclusive measurement for the purpose of this guarantee. The figures in Addendums A, B, and C provide PoP to PoP latency targets and are based upon the shortest route between PoPs.
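The distance adders in the two IPC clauses above reduce to simple arithmetic. A sketch of both follows, assuming (for the first clause) that the "further 2 msec per additional 100 km" also scales per AC, which the wording leaves ambiguous; names and the example distance are illustrative:

```python
import math

def addendum_b_adder_ms(distance_km, num_acs):
    """Adder from the first IPC clause: CE Locations under 100 km from the
    regional PoP add 3 msec per AC; beyond 100 km, add 2 msec per AC for the
    first 100 km plus 2 msec for each additional 100 km or portion thereof
    (assumed here to scale per AC as well)."""
    if distance_km < 100:
        return 3 * num_acs
    blocks = 1 + math.ceil((distance_km - 100) / 100)
    return 2 * blocks * num_acs

def connexus_adder_ms(distance_km, num_acs):
    """Adder from the second clause: 2 msec per AC for each 100 km or
    portion thereof from the regional PoP to the Customer Location."""
    return 2 * math.ceil(distance_km / 100) * num_acs

# Example: one AC located 250 km from the regional PoP.
print(addendum_b_adder_ms(250, 1))  # 2 * (1 + ceil(150/100)) = 6 msec
print(connexus_adder_ms(250, 1))    # 2 * ceil(250/100) = 6 msec
```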
Network Latency. (a) There are two measures of latency (which is measured in milliseconds):
Network Latency. 3.10.7.1. Network Latency (or round trip delay) is defined as the average time taken for an IP packet to make a round trip between backbone hubs on the Provider’s network backbone. Unless stated otherwise in the relevant SoW, the monthly network latency performance target for IP Transit services is as follows:
• Less than or equal to 35 ms within Europe
• Less than or equal to 95 ms across the Atlantic