What is Latency in Live Video Streaming?
Suppose you are watching a football game via your over-the-top (OTT) streaming service. Meanwhile, your next-door neighbor is watching the same game on traditional TV, loudly celebrating goals and agonizing over penalties that you have to wait another 30 seconds to see.
Alternatively, perhaps you are watching a live talent competition, your anticipation building for the winner’s reveal, when your Twitter or Facebook feed – typically generated by TV viewers – ruins the surprise 15 seconds before you see it.
For viewers, the big problem with video latency is the frustration of viewing the moment after it happens. Over time, video latency frustrations for viewers become video latency problems for content providers.
With time-sensitive video content – such as TV sports, games, and news, or pure OTT content like e-sports and interactive shows – viewers expect to watch events as they unfold. In the world of real-time entertainment, video latency problems do not merely ruin surprises; if left unresolved, they damage the confidence viewers have in their OTT content providers.
What Causes Video Latency? The Factors, from Camera to Display
A number of steps in the glass-to-glass journey affect video latency:
- Video encoding pipeline duration
- Ingest and packaging operations
- Network propagation and transport protocol
- Content delivery network (CDN)
- Segment length
- Player policies
- Playhead positioning
With traditional adaptive bitrate streaming, video latency mainly depends on media segment length. For example, if your media segments are six seconds long, your player will already be at least six seconds behind real time when it requests the first segment, because a segment can only be requested once it has been fully encoded and published.
Moreover, each additional media segment that the player buffers before the actual start of playback will increase the time to the first decoded video frame.
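The relationship described above can be sketched as a simple model. The function below is illustrative, not a measurement tool; the pipeline delay constant is an assumption standing in for encoding, ingest, and CDN time.

```python
# Rough model of player startup latency for segment-based streaming.

def estimated_startup_latency(segment_seconds: float,
                              buffered_segments: int,
                              pipeline_seconds: float = 0.0) -> float:
    """Estimate how far behind live a player starts playback.

    The player is at least one segment behind live (a segment must be
    fully produced before it can be requested), plus one segment for
    each extra segment it buffers before starting playback, plus any
    fixed encoding/ingest/CDN delay.
    """
    return segment_seconds * (1 + buffered_segments) + pipeline_seconds

# Six-second segments, two extra segments buffered, 1 s of pipeline delay:
print(estimated_startup_latency(6, 2, 1.0))  # 19.0 seconds behind live
```

With one-second segments and the same buffering policy, the same model yields roughly four seconds, which is why segment duration dominates the latency budget.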
Although many things generate video latency – video encoding pipeline duration, ingest and packaging operations duration, network propagation delays and CDN buffering (if any) – the player itself accounts for a large share of the overall video latency.
How to Measure Video Latency
Other methods exist, but the easiest way to measure end-to-end video latency is as follows:
- Use a tablet running a clapperboard application
- Film it with a camera connected to your video encoder
- Publish your video stream to your origin
- Deliver to your player through a CDN
- Put the player alongside the clapperboard tablet
- Take a picture of the two screens
- Subtract the player's timecode from the clapperboard's timecode; the difference is your end-to-end latency
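The final subtraction step can be automated once you have read both timecodes off the photo. The helper below is a hypothetical sketch; it assumes timecodes in `HH:MM:SS.mmm` form captured within the same hour.

```python
# Subtract the timecode shown in the player from the timecode shown on
# the clapperboard tablet, both read from a photo of the two screens.

def timecode_to_seconds(tc: str) -> float:
    """Convert an HH:MM:SS.mmm timecode string to seconds."""
    hours, minutes, seconds = tc.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def glass_to_glass_latency(clapperboard_tc: str, player_tc: str) -> float:
    """End-to-end latency: how far the player lags the clapperboard."""
    return timecode_to_seconds(clapperboard_tc) - timecode_to_seconds(player_tc)

# Clapperboard shows 13:45:27.500 while the player shows 13:45:21.250:
print(glass_to_glass_latency("13:45:27.500", "13:45:21.250"))  # 6.25 seconds
```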
How to Minimize Video Latency for Live Streaming
That OTT video latency lags behind broadcast TV and social media is not the only concern on content providers’ minds. Here are a few other live streaming latency issues to consider.
Flash and Real-Time Messaging Protocol (RTMP): Flash-based applications using RTMP streaming once delivered low video latency. However, with Flash deprecated and web browsers steadily reducing support for or blocking the plug-in, Content Delivery Networks (CDNs) have also begun deprecating RTMP, which already had limited scale on the delivery side, forcing content providers to take alternative action.
Scale and Reliability vs. Video Latency: One choice solves the scale problem: switching to HTML5-friendly streaming technologies, such as HTTP Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH or MPEG-DASH), and the Common Media Application Format (CMAF).
These streaming technologies distribute over HTTP, which means delivery is cacheable, and CDNs can deliver higher volumes with greater efficiency.
However, achieving scale and reliability this way typically adds tens of seconds to end-to-end delivery times, tipping the scales against video latency.
Interactive Features: Other content providers may choose to develop personal broadcasting services with interactive features; however, a video signal delay is generally not acceptable for these use cases.
The reason: If video is delayed by up to 30 seconds from the time it is first captured on a camera to the time it is viewed, interactivity that requires real-time feedback is rendered impossible.
In addition, those who want to develop synchronized second screen, social watching, gaming, or gambling applications need to control video streaming latency on a fine-grained level.
What is a Good Video Latency Sweet Spot for Live Streaming?
Video latency can be viewed in three distinct categories, each defined by high and low boundaries.
However, it is worth knowing how reduced video latency, low video latency, and ultra-low video latency differ from broadcast video latency, even if the distinctions are a little tricky to explain.
Six seconds of broadcast video latency is often the average in the field, which means that the sweet spot for OTT video latency is somewhere in the low range of the reduced video latency category, or in the high range of the low latency category. Getting close to five seconds maximizes the chances of competing with broadcast and social network feeds.
Moreover, the further downstream the OTT video encoder sits in the content preparation workflow, the more aggressive its latency reduction objective must be, because upstream stages have already consumed part of the delay budget.
Best Practices to Achieve Low Latency Video in Live Streaming Applications
At a high level, here are the main actions to take to bring your video streaming solution into the low latency category:
- Measure video latency at every step in the workflow
- Optimize your video encoding pipeline
- Choose the right segment duration for your requirements
- Build the appropriate architecture
- Optimize (or replace) your video player(s)
How to Choose the Right Segment Size for Video Packaging
The segment duration you choose has a mechanical effect on the video latency in almost every player. For example, you can achieve five-second latency with one-second duration segments. Choose two-second duration segments, and the result will be between seven and 10 seconds of video latency – unless you optimize the player’s settings.
The fundamental rule is to “right size” based on your requirements. So, if video latency of seven seconds or below is not critical, choose two-second segments.
If your player uses two-second segments, raise the GOP length from one to two seconds to increase the encoding quality at constant bitrate.
In addition, if you use HLS as the ingest format, you can reduce stress on the origin storage and packaging computation by using two-second segments when ingesting.
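Raising the GOP length to match the segment duration, as recommended above, works because every segment must begin on a keyframe, so segment duration should be an integer multiple of GOP duration. The sketch below illustrates that alignment rule; the frame rates and durations are example values, not recommendations.

```python
# Sketch of the GOP/segment alignment rule: segments must start on a
# keyframe, so segment duration should be a whole multiple of GOP duration.

def gop_frames(frame_rate: float, gop_seconds: float) -> int:
    """GOP length in frames to configure on the encoder."""
    frames = frame_rate * gop_seconds
    if frames != int(frames):
        raise ValueError("GOP duration must be a whole number of frames")
    return int(frames)

def segments_align(segment_seconds: float, gop_seconds: float) -> bool:
    """True if every segment boundary falls exactly on a keyframe."""
    return (segment_seconds / gop_seconds) % 1 == 0

# 25 fps with two-second GOPs and two-second segments:
print(gop_frames(25, 2))      # 50 frames per GOP
print(segments_align(2, 2))   # True
print(segments_align(3, 2))   # False: a 3 s segment breaks mid-GOP
```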
Remember These Video Latency Facts and Tips
- Video latency for live streaming is not an insurmountable problem. You can minimize it, with some effort
- Standard HLS and DASH technologies enable scalable low latency over HTTP
- Today’s video latency standard for live streaming: Less than 10 seconds
- A stable four to five seconds of video latency is possible now, if your business requires it
- The chunked CMAF ecosystem is maturing and will soon enable stable sub-four second video latency