Grand Challenge on
Adaptation Algorithms for Near-Second Latency

Organized and sponsored by

Twitch

Challenge Description

At Twitch, we have been successful in delivering ultra-low-latency streams to millions of viewers. However, as we drive towards near-second latency, we are finding that existing adaptation algorithms are not able to keep up - smaller player buffers do not provide enough time to respond to changing network conditions. We see this as a key challenge blocking the streaming community from reducing latency at scale.  

Considerations

While several new adaptation algorithms aim to solve the challenges of low-latency streaming, we believe that our challenge addresses a new set of problems, drawn from our experience delivering low-latency streams at scale:

Small Buffer

As a rule, the player buffer, measured in seconds, must be smaller than the latency being targeted. When latency is in the 1-second range, the player has less than a second to adapt to bandwidth drops, packet loss and other disruptions.

Variable Playback Rate

Low-latency players manipulate the media playback rate in order to maintain a latency target. When the player falls behind the target, it speeds up playback to catch up; as a consequence, the buffer is never allowed to grow beyond the latency target. Buffer-based algorithms like BOLA struggle to switch up to higher bitrates when the buffer is not allowed to grow.
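
As a minimal TypeScript sketch of this catch-up behavior (the target latency, tolerance and rate bounds below are illustrative assumptions, not parameters of the Twitch player):

    // Illustrative catch-up logic: nudge the playback rate so that the measured
    // live latency converges to the target. All constants are assumptions.
    const TARGET_LATENCY_S = 1.5;  // hypothetical near-second latency target
    const MAX_RATE = 1.1;          // small speed-up, kept barely perceptible
    const MIN_RATE = 0.95;         // small slow-down when ahead of the target

    function adjustPlaybackRate(video: HTMLVideoElement, liveLatencyS: number): void {
      const error = liveLatencyS - TARGET_LATENCY_S;
      if (Math.abs(error) < 0.1) {
        video.playbackRate = 1.0;       // close enough: play at normal speed
      } else if (error > 0) {
        video.playbackRate = MAX_RATE;  // behind the target: speed up
      } else {
        video.playbackRate = MIN_RATE;  // ahead of the target: slow down
      }
    }

Because any bandwidth surplus is spent catching up rather than filling the buffer, buffer occupancy alone becomes a weak signal for upswitching in this regime.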

Minimum Delay Chunked-Transfer Downloading

Our challenge adds the constraint that the player makes an early request for each segment to be delivered in chunked-transfer mode, meaning that the player requests the segment before its first chunk is available. This effectively removes the round-trip time (RTT) from the segment request and results in an even download rate once frames begin streaming. We have found that clustering-based bandwidth estimation algorithms, such as the one used in ACTE, do not perform well when data arrives at an even cadence rather than in bursts.
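
For reference, a minimal TypeScript sketch of measuring throughput across such a chunked-transfer download with the Fetch API (the moving-average window is an illustrative choice, not the testbed's estimator):

    // Read a segment over chunked transfer and derive a simple moving-average
    // throughput from per-chunk arrival times. The window size is an assumption.
    async function measureSegmentThroughput(url: string): Promise<number> {
      const response = await fetch(url);
      const reader = response.body!.getReader();
      const samples: number[] = [];      // per-chunk throughput samples (bits/s)
      let lastTime = performance.now();

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        const now = performance.now();
        const elapsedS = (now - lastTime) / 1000;
        if (elapsedS > 0) {
          samples.push((value.byteLength * 8) / elapsedS);
        }
        lastTime = now;
      }

      // Average the most recent 20 samples (illustrative window).
      const recent = samples.slice(-20);
      return recent.reduce((sum, s) => sum + s, 0) / Math.max(recent.length, 1);
    }

Because the chunks of a live segment arrive at roughly the encoder's frame cadence, an estimator like this tends to report the encoding bitrate rather than the available bandwidth, which is exactly the difficulty described above.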

Unstable Network Conditions

Real-world network conditions, especially on mobile networks, are often unstable; a client's bandwidth tends to fluctuate throughout playback. Players must balance limiting the number of bitrate switches against preventing rebuffering.

Fairness

Clients watching the same stream (or a different stream from the same server) may be competing for the same resources - if a client is too greedy, it risks degrading the experience of other viewers on the network. Adaptation algorithms should minimize their network usage to avoid causing bottlenecks.

Task, Dataset and Test Environment

The purpose of this challenge is to design an adaptation algorithm tailored to HTTP chunked-transfer streaming in the near-second (1-2 s) latency range. It should minimize rebuffering while maximizing bandwidth utilization given the considerations above. The algorithm must also be fair to other clients viewing the same stream; its performance should not come at the expense of other viewers. The proposed algorithm must be implementable on the web and within an HTML5-based player.
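
The testbed defines the actual integration points; purely as an illustration of the decision an adaptation algorithm has to make, a throughput-margin rule might look like the following TypeScript sketch (all names and fields below are hypothetical, not the testbed's interface):

    // Hypothetical player state; consult the testbed's GitHub page for the
    // real interface it exposes to adaptation algorithms.
    interface PlayerState {
      bufferLevelS: number;            // media currently buffered, in seconds
      liveLatencyS: number;            // distance behind the live edge, in seconds
      throughputEstimateBps: number;   // current bandwidth estimate, in bits/s
      availableBitratesBps: number[];  // encoded renditions, sorted ascending
    }

    // Pick the highest rendition that fits under a safety margin of the
    // throughput estimate; fall back to the lowest rendition otherwise.
    function selectBitrate(state: PlayerState, safetyMargin = 0.8): number {
      const budget = state.throughputEstimateBps * safetyMargin;
      const feasible = state.availableBitratesBps.filter((b) => b <= budget);
      return feasible.length > 0
        ? feasible[feasible.length - 1]
        : state.availableBitratesBps[0];
    }

A fixed safety margin like this is only a starting point; the considerations above (tiny buffer, capped buffer growth, even chunk cadence) are what make the real decision hard.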

In order to standardize and streamline development, Twitch is providing a testbed against which to evaluate implementations. Detailed information is provided at the links below. A test stream is also provided, and network profiles that reproduce our representative conditions will be made available soon.

  • Testbed's GitHub page
  • Twitch's blog post

Evaluation Criteria

The efficacy of the proposed algorithm will be evaluated based on the following criteria (in no particular order):

  • Average selected bitrate (Mbps)
  • Average buffer occupancy (s)
  • Average live latency (s)
  • Average number of bitrate switches
  • Average number of stalls
  • Average stall duration (s)

The criteria above will be fed into a QoE model (such as ITU-T Rec. P.1203) to produce a single score that will be used to compare the implementations. The score of the Twitch player will serve as the baseline. We have not yet decided which model to use, but it will be specified when the test data is released. Algorithms will also be evaluated on their fairness; the exact methodology will be provided when the dataset is released.
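
Purely to illustrate how such criteria can be folded into one number, a toy linear score in TypeScript might look as follows (the weights are arbitrary placeholders and are not the challenge's scoring model):

    // Toy linear QoE score over the listed criteria. The weights are arbitrary
    // placeholders; the actual model will be announced with the test data.
    interface SessionMetrics {
      avgBitrateMbps: number;
      avgLatencyS: number;
      numSwitches: number;
      numStalls: number;
      totalStallDurationS: number;
    }

    function toyQoeScore(m: SessionMetrics): number {
      return (
        1.0 * m.avgBitrateMbps         // reward higher selected bitrate
        - 0.5 * m.avgLatencyS          // penalize drifting from the live edge
        - 0.1 * m.numSwitches          // penalize frequent bitrate switches
        - 1.0 * m.numStalls            // penalize each stall event
        - 2.0 * m.totalStallDurationS  // penalize time spent rebuffering
      );
    }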

Important Dates

  • Submissions: Apr. 3, 2020
  • Acceptance notifications: Apr. 22
  • Camera-ready submissions: May 1
  • Entry presentations: During the MMSys week

Submission Guidelines

Submissions should describe the implemented algorithm in sufficient detail in a short technical paper, prepared according to the guidelines provided for the Open Dataset and Software Track. A link to the code (preferably a GitHub project) must also be included in the paper. Complete your submission at https://mmsys2020challenges.hotcrp.com.

Authors will be notified after a review process, and the authors of accepted papers must prepare a camera-ready version so that their papers can be published in the ACM Digital Library.

Winners will be chosen by a committee appointed by the challenge organizers, and the results will be final. If contributions of sufficient quality are not received, some or all of the awards may not be granted. The challenge is open to any individual and to commercial or academic institutions.

Awards

The winner will be awarded 5,000 USD and the runner-up will be awarded 2,500 USD.

Questions

If you have any questions, please contact the organizers.