The term latency is used synonymously with delay time. In general, it is the time interval between the end of an event or command and the beginning of the response to that event or, as in computational latency, the delivery of the task's result.
- Computational latency is the computing time a node or computer requires to complete a given task. It is determined by the central processing unit and its input/output units and can be shortened by using computers with higher computing power.
- In communication networks, there is network latency. In Ethernet networks, this latency is caused by signal propagation times in the transmission medium and by Ethernet switching. The latency time describes the period that elapses between a bit arriving at a switch port and that bit leaving the switch's destination port. It is specified in microseconds and depends on the switching method used: the cut-through method or the store-and-forward method.
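The difference between the two switching methods can be illustrated with a small calculation: a store-and-forward switch must receive the entire frame before forwarding it, while a cut-through switch starts forwarding as soon as it has read the destination address in the header. The figures below are illustrative assumptions, not values from any particular switch datasheet.

```python
# Sketch of the serialization component of switch latency (illustrative only).

def store_and_forward_us(frame_bytes: int, link_bps: float) -> float:
    """Delay to receive the whole frame before forwarding, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

def cut_through_us(header_bytes: int, link_bps: float) -> float:
    """Delay until the header (here assumed 14 bytes, the Ethernet
    destination/source/type fields) has been received, in microseconds."""
    return header_bytes * 8 / link_bps * 1e6

# A maximum-size 1518-byte Ethernet frame on a 1 Gbit/s link:
print(store_and_forward_us(1518, 1e9))  # ≈ 12.14 µs
print(cut_through_us(14, 1e9))          # ≈ 0.11 µs
```

This is why cut-through switching yields much lower and frame-size-independent latency, at the cost of forwarding frames whose checksums cannot yet have been verified.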
- In mobile communications technology, latency times have been reduced dramatically in higher-generation mobile networks. Whereas latency times in 2nd generation (2G) networks were still between 400 ms and 500 ms for GPRS and EDGE, they were reduced to 200 ms to 400 ms for UMTS (3G). With Long Term Evolution (LTE), the 4th generation (4G) mobile network, there was a further reduction to 20 ms to 80 ms, and in 5G, latency is expected to be between 2 ms and 5 ms, undercut only by Ultra-Reliable Low-Latency Communication (URLLC) with 1 ms to 2 ms.
- In optical systems, latency is due to the distance to be bridged and to media and process transitions. Like electrical signals, light transmissions have a transit time that depends on the distance traveled and the speed of light; in a vacuum it is 3.33 µs per kilometer. Since light travels more slowly in optical fibers because of the fiber's refractive index, the transit time there is about 4.8 µs per kilometer. As far as media transitions are concerned, O/E and E/O converters cause a signal delay at high transmission rates, which increases the latency of optical systems accordingly.
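The per-kilometer figures follow directly from the speed of light divided by the refractive index of the medium. The sketch below assumes a typical fiber refractive index of about 1.45; actual fibers vary slightly.

```python
# Propagation delay per kilometer: 1/c in vacuum, scaled by the
# refractive index n of the medium (n ≈ 1.45 assumed for glass fiber).

C_KM_PER_S = 299_792.458  # speed of light in km/s

def delay_us_per_km(refractive_index: float = 1.0) -> float:
    """Transit time per kilometer in microseconds."""
    return refractive_index / C_KM_PER_S * 1e6

print(delay_us_per_km())      # ≈ 3.34 µs/km in vacuum
print(delay_us_per_km(1.45))  # ≈ 4.84 µs/km in fiber
```

Over a 100 km fiber link, the propagation delay alone is therefore already close to half a millisecond, before any converter or switching delays are added.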
- In internetworking, latency refers to the processing time required by the internetworking components - routers, switches and gateways - to check and route the data packets, possibly rewrite the header, and decrement the hop count in the time-to-live field.
- In ring systems, latency is the time a data packet needs to travel once around a token ring when the ring is free of traffic. The latency increases with the number of connected stations.
- For memory devices, memory latency is the time the memory device needs to deliver the first word after receiving the command and address. A distinction is made between the latency for the columns, governed by the Column Address Strobe (CAS), and that for the rows, governed by the Row Address Strobe (RAS). Both latencies represent a number of wait cycles and are specified as dimensionless numbers; the smaller the number, the lower the latency. For example, a DIMM module with a CAS latency of 2 (CL2) is faster than a module with a CAS latency of 3 (CL3), even if the former module has a longer access time. Reduced Latency DRAM (RLDRAM) denotes special memory modules characterized by reduced latency compared to other RAMs.
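Because CAS latency counts clock cycles rather than absolute time, the delay in nanoseconds depends on the memory clock. A minimal sketch, using hypothetical clock frequencies for illustration:

```python
# CAS latency is a cycle count; absolute delay = cycles * cycle time.
# Clock frequencies below are illustrative assumptions.

def cas_latency_ns(cl_cycles: int, clock_mhz: float) -> float:
    """Absolute CAS delay in nanoseconds: cycle count divided by clock rate."""
    return cl_cycles / clock_mhz * 1e3

# At the same clock, the lower CL number is always faster:
print(cas_latency_ns(2, 100))  # 20.0 ns at CL2
print(cas_latency_ns(3, 100))  # 30.0 ns at CL3

# But across different clocks, a higher CL can still mean less time:
print(cas_latency_ns(3, 166))  # ≈ 18.1 ns
```

This is why comparing bare CL numbers is only meaningful between modules running at the same clock frequency.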
- In the case of hard disks, latency is the time required on average for half a revolution of the disk. It decreases in inverse proportion to the rotational speed. This is also referred to as rotational latency.
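The average rotational latency follows directly from the spindle speed: half a revolution at the given number of revolutions per minute. A short calculation for some common spindle speeds:

```python
# Average rotational latency = time for half a revolution of the platter.

def rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency in milliseconds for a given spindle speed."""
    revolutions_per_second = rpm / 60
    return 0.5 / revolutions_per_second * 1e3  # half a turn, in ms

print(rotational_latency_ms(5400))   # ≈ 5.56 ms
print(rotational_latency_ms(7200))   # ≈ 4.17 ms
print(rotational_latency_ms(15000))  # 2.0 ms
```

Doubling the spindle speed halves the rotational latency, which is why high-performance server disks were built with 10,000 or 15,000 revolutions per minute.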
- In audio technology, latency is the delay time caused by digital signal processing (DSP) in a computer. This can be the delay between the input and output of a sound card or another audio component. In audio, latency can range from a few milliseconds to several hundred milliseconds: at low latencies an echo-like effect becomes perceptible, at higher ones a clearly audible delay. In addition, latency plays a significant role in Audio over Ethernet (AoE), since the stochastic access method makes latency unpredictable.
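A large part of this DSP latency comes from buffering: one audio buffer of N samples at sample rate f takes N/f seconds to fill before it can be processed, so the buffer size sets a lower bound on the achievable latency. A minimal sketch with common (assumed) buffer sizes and sample rates:

```python
# Buffer-fill time as a lower bound on audio latency.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time in milliseconds to fill one buffer of the given size."""
    return buffer_samples / sample_rate_hz * 1e3

print(buffer_latency_ms(64, 48_000))    # ≈ 1.33 ms (small buffer, low latency)
print(buffer_latency_ms(1024, 44_100))  # ≈ 23.2 ms (large buffer)
```

Smaller buffers reduce latency but increase the risk of dropouts if the processor cannot refill them in time, which is the usual trade-off in audio interface settings.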