---
page_type: sample
languages:
- csharp
products:
- windows
- windows-uwp
urlFragment: AdaptiveStreaming
extendedZipContent:
- path: SharedContent
  target: SharedContent
- path: LICENSE
  target: LICENSE
description: "Shows various features of the AdaptiveMediaSource object."
---

Adaptive streaming sample

Shows various features of the AdaptiveMediaSource object, used in conjunction with MediaSource, MediaPlaybackItem, MediaPlayer and MediaPlayerElement.

Note: This sample is part of a large collection of UWP feature samples. You can download this sample as a standalone ZIP file from docs.microsoft.com, or you can download the entire collection as a single ZIP file, but be sure to unzip everything to access shared dependencies. For more info on working with the ZIP file, the samples collection, and GitHub, see Get the UWP samples from GitHub. For more samples, see the Samples portal on the Windows Dev Center.

This sample demonstrates the following:

Using a simple adaptive streaming URI

In XAML, a Uri in the Source property of a MediaPlayerElement goes through a format converter which calls MediaSource.CreateFromUri. An AdaptiveMediaSource is created implicitly when a manifest URI (.m3u8, .mpd, or their mime types) is used with MediaSource.CreateFromUri(uri).

There is no property on the MediaPlayerElement that retrieves this implicitly created media source, so when the AdaptiveMediaSource is created implicitly, none of its properties or event handlers are available to the app.

This scenario might be appropriate if you are using adaptive streaming for a few media assets as part of your application, such as a help page or an instructional video. You can still get a reference to the MediaSource, and create a MediaPlaybackItem from it, and then use their properties and event handlers.

This scenario is not appropriate for apps whose main purpose is adaptive streaming video playback. See the other scenarios for alternatives.
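For comparison, an app that needs full access to the AdaptiveMediaSource can create it explicitly. A minimal sketch (the manifest URI is a placeholder, and mediaPlayerElement refers to a MediaPlayerElement declared in XAML):

```csharp
// Create the AdaptiveMediaSource explicitly so its properties and events are available.
var uri = new Uri("https://example.com/manifest.m3u8");
AdaptiveMediaSourceCreationResult result = await AdaptiveMediaSource.CreateFromUriAsync(uri);

if (result.Status == AdaptiveMediaSourceCreationStatus.Success)
{
    AdaptiveMediaSource ams = result.MediaSource;
    var mediaSource = MediaSource.CreateFromAdaptiveMediaSource(ams);
    var playbackItem = new MediaPlaybackItem(mediaSource);
    mediaPlayerElement.Source = playbackItem;
}
```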

Registering event handlers

There are several events which inform the app of the state of adaptive streaming. Combining them with the MediaPlayer, MediaPlaybackItem, and MediaSource events gives the app a lot of information about the media session.

In this scenario, we demonstrate where to register event handlers, how to manage object lifetimes, and how to set and retrieve custom properties.

We provide a simple example of updating text on the UI with current download and playback bitrates with icons from the Segoe MDL2 Assets font.

We also introduce the concept of attaching CustomProperties to a MediaSource and reading them in event handlers.
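A minimal sketch of both ideas, assuming ams and mediaSource were obtained as in the previous scenario (the "contentId" key and its value are illustrative):

```csharp
// Report bitrate changes; args.NewValue is in bits per second.
ams.DownloadBitrateChanged += (sender, args) =>
{
    Debug.WriteLine($"Download bitrate: {args.NewValue} bps");
};
ams.PlaybackBitrateChanged += (sender, args) =>
{
    Debug.WriteLine($"Playback bitrate: {args.NewValue} bps");
};

// Attach app-defined data to the MediaSource so event handlers can retrieve it later.
mediaSource.CustomProperties["contentId"] = "video-123";
```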

Modify network calls made by the AdaptiveMediaSource

The AdaptiveMediaSource allows the app to manage some or all aspects of content download. This can be done by modifying the arguments of the DownloadRequested event handler, by passing an HttpClient to the constructor, or by handling downloads via app code.

This scenario will use encryption key requests as an example. In HLS, the default key exchange is "Identity", also known as Clear-Key, which is done over HTTPS. Some apps may desire to improve upon this by adding additional authentication and authorization to the HTTPS GET for the key. We will use the key services available in Azure Media Services, and show several implementations.

This sample implements both "Url Query Parameter" and "Authorization Header" mechanisms for passing tokens to the Key Delivery system in Azure Media Services.

However, for simplicity, we are skipping the Azure Access Control System (ACS) token-exchange code and have directly hardcoded a Bearer token via our AdaptiveContentModel static class.

For more information on acquiring and passing tokens to the Key Delivery system in Azure Media Services, see the references below.
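A sketch of both token-passing mechanisms, with a placeholder token in place of the hardcoded Bearer token described above:

```csharp
// Mechanism 1: Authorization header, by passing an HttpClient at creation time.
var httpClient = new Windows.Web.Http.HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
    new Windows.Web.Http.Headers.HttpCredentialsHeaderValue("Bearer", "<token>");
var result = await AdaptiveMediaSource.CreateFromUriAsync(manifestUri, httpClient);

// Mechanism 2: URL query parameter, by redirecting key requests in DownloadRequested.
result.MediaSource.DownloadRequested += (sender, args) =>
{
    if (args.ResourceType == AdaptiveMediaSourceResourceType.Key)
    {
        args.Result.ResourceUri = new Uri(args.ResourceUri + "?token=<token>");
    }
};
```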

This scenario also demonstrates HDCP session management. The underlying OutputProtectionManager will enforce the policy requested by HdcpSession, and will apply the maximum protection of any session on the platform. The app should only concern itself with the policy it has applied via its session: it is not possible to query the actual HDCP state as imposed by another session or the PlayReady protection system. For this sample, we respond to the protection level by imposing a lower DesiredMaxBitrate in order to restrict content on unsecured devices.
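A sketch of that HDCP policy, assuming ams is the AdaptiveMediaSource; the fallback bitrate value is illustrative:

```csharp
// Request HDCP, then restrict bitrate whenever effective protection is absent.
var hdcpSession = new Windows.Media.Protection.HdcpSession();
await hdcpSession.SetDesiredMinProtectionAsync(Windows.Media.Protection.HdcpProtection.On);

hdcpSession.ProtectionChanged += (session, e) =>
{
    bool isProtected = session.IsEffectiveProtectionAtLeast(Windows.Media.Protection.HdcpProtection.On);
    // On unprotected outputs, cap the bitrate; null removes the cap.
    ams.DesiredMaxBitrate = isProtected ? null : (uint?)1000000;
};
```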

Not shown in this scenario, but related in concept:

  • Passing a manifest Stream to AdaptiveMediaSource.CreateFromStreamAsync, to replace the first web request for a Uri.
  • Getting a copy of the downloaded bytes after they have been consumed by the platform in DownloadCompleted.

Tune the AdaptiveMediaSource download and bitrate switching heuristics

This scenario shows ways in which the app can tune the adaptive media source.

Setting initial, minimum or maximum bitrates is typical and expected.

Setting InboundBitsPerSecondWindow, BitrateDowngradeTriggerRatio, and DesiredBitrateHeadroomRatio should be done only after extensive testing under a range of network conditions.
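A sketch of both tiers of tuning, assuming ams is the AdaptiveMediaSource; every value here is illustrative and should be chosen per app and content:

```csharp
// Typical tuning: constrain the bitrates the source may choose.
ams.InitialBitrate = 1000000;      // quality of the first segments
ams.DesiredMinBitrate = 500000;    // never drop below this
ams.DesiredMaxBitrate = 5000000;   // never exceed this

// Advanced tuning: change only after extensive testing across network conditions.
ams.InboundBitsPerSecondWindow = TimeSpan.FromSeconds(30);
ams.AdvancedSettings.BitrateDowngradeTriggerRatio = 0.80;
ams.AdvancedSettings.DesiredBitrateHeadroomRatio = 0.85;
```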

Register for, create and consume Metadata Cues

This scenario shows ways in which the app can consume or create timed metadata.

The AdaptiveMediaSource creates TimedMetadataTracks, and fires Cues for

  • ID3 tags within TS content
  • emsg boxes within fragmented MP4 content
  • Comment tags found in HLS manifests

This sample shows how to register to consume this data in the app, and demonstrates how SCTE-35 can be parsed from emsg boxes and used to schedule ads.

The captioning system will also publish TimedMetadataTracks for: WebVTT segments within an HLS presentation, additional files (TTML, SRT or WebVTT) added to the source, or 608 captions within SEI NALUs of H.264. Although it is not demonstrated in this sample, the app can opt to render the captions itself by following a process similar to one used for data cues in this scenario.

The app can also create a Custom TimedMetadataTrack and add it to a MediaSource. This provides a uniform event-based mechanism to consume time-synchronized information. Here we demonstrate playback progress reporting using a custom class TrackingEventCue. We re-use this concept in the next sample to provide reporting on ads inserted into a main MediaPlaybackItem.
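A sketch of both sides, assuming playbackItem and mediaSource were created as in the earlier scenarios; the track id and cue timings are illustrative:

```csharp
// Consume cues published by the platform (ID3, emsg, HLS comments).
playbackItem.TimedMetadataTracksChanged += (item, args) =>
{
    for (uint i = 0; i < item.TimedMetadataTracks.Count; i++)
    {
        item.TimedMetadataTracks[(int)i].CueEntered += (track, cueArgs) =>
        {
            if (cueArgs.Cue is DataCue dataCue)
            {
                // dataCue.Data holds the raw payload, e.g. an emsg box to parse for SCTE-35.
            }
        };
        // Deliver cues to the app without the platform rendering them.
        item.TimedMetadataTracks.SetPresentationMode(
            i, TimedMetadataTrackPresentationMode.ApplicationPresented);
    }
};

// Publish app-defined cues on a custom data track attached to the MediaSource.
var customTrack = new TimedMetadataTrack("progress-track", "en", TimedMetadataKind.Data);
customTrack.AddCue(new DataCue
{
    StartTime = TimeSpan.FromSeconds(10),
    Duration = TimeSpan.FromMilliseconds(250),
});
mediaSource.ExternalTimedMetadataTracks.Add(customTrack);
```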

Create advertisements within an AdaptiveMediaSource MediaPlaybackItem

This scenario shows simple ad-insertion with an AdaptiveMediaSource. Unlike browser-based ad-insertion, apps are not affected by ad-blockers.

We add two pre-roll ads, two mid-roll ads at 10% into the main content, and a post-roll ad. For each of these ads, and the main content, we re-use the Custom TimedMetadataTracks from the Metadata sample to raise playback progress events.
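The scheduling above can be sketched with the MediaBreak APIs, assuming mainItem is the main content's MediaPlaybackItem; the ad URI and midroll position are placeholders:

```csharp
// Build a playback item for the ad content.
var adItem = new MediaPlaybackItem(
    MediaSource.CreateFromUri(new Uri("https://example.com/ad.mp4")));

// Pre-roll: plays before the main content starts.
var preroll = new MediaBreak(MediaBreakInsertionMethod.Interrupt);
preroll.PlaybackList.Items.Add(adItem);
mainItem.BreakSchedule.PrerollBreak = preroll;

// Mid-roll at a fixed offset into the main content (placeholder position).
var midroll = new MediaBreak(MediaBreakInsertionMethod.Interrupt, TimeSpan.FromMinutes(1));
midroll.PlaybackList.Items.Add(adItem);
mainItem.BreakSchedule.InsertMidrollBreak(midroll);

// Post-roll: plays after the main content ends.
var postroll = new MediaBreak(MediaBreakInsertionMethod.Interrupt);
postroll.PlaybackList.Items.Add(adItem);
mainItem.BreakSchedule.PostrollBreak = postroll;
```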

Live Seekable Range

This scenario demonstrates seeking within the MediaTimeRange exposed by MediaPlayer.MediaPlaybackSession.GetSeekableRanges(). This is often called a DVR window and is used in over-the-top (OTT) television broadcasts and live event streaming.

The sample also demonstrates how to further limit the size of the DVR window using the AdaptiveMediaSource properties:

  • MinLiveOffset: The leading edge of the DVR window, as imposed by the content or platform.

  • DesiredLiveOffset: The application-controlled leading edge of the DVR window. It allows the app to provide additional back-off from the leading edge of the content to manage Content Distribution Network conditions.

  • DesiredSeekableWindowSize: The application-controlled DVR window depth. This allows the app to limit how far back into the DVR window the user can seek.

  • MaxSeekableWindowSize: The DVR window depth, as imposed by the content.

The sample demonstrates the use of AdaptiveMediaSourceCorrelatedTimes, an object which provides the offsets between the platform's media Position, the original PresentationTimeStamp within the media segments, and the content encoding time (EXT-X-PROGRAM-DATE-TIME in HLS and its equivalent in DASH).
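A sketch of seeking within the DVR window, assuming mediaPlayer is the MediaPlayer; the 30-second back-off is illustrative:

```csharp
// Seek to a point within the live seekable (DVR) window.
var session = mediaPlayer.PlaybackSession;
IReadOnlyList<MediaTimeRange> ranges = session.GetSeekableRanges();
if (ranges.Count > 0)
{
    MediaTimeRange window = ranges[ranges.Count - 1];
    // Back off 30 seconds from the live edge, clamped to the window start.
    TimeSpan target = window.End - TimeSpan.FromSeconds(30);
    session.Position = target > window.Start ? target : window.Start;
}
```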

Note: The Windows universal samples require Visual Studio to build and Windows 10 to execute.

To obtain information about Windows 10 development, go to the Windows Dev Center

To obtain information about Microsoft Visual Studio and the tools for developing Windows apps, go to Visual Studio

Related topics

Samples

VideoPlayback

Reference

AdaptiveMediaSource
AdaptiveMediaSourceAdvancedSettings
Windows.Media.Streaming.Adaptive namespace
How clients pass tokens to Azure Media Services key delivery services
An End-to-End Prototype of AES Encryption with ACS Authentication and ACS Token Authorization.

System requirements

Client: Windows 10

Server: Windows Server 2016 Technical Preview

Phone: Windows 10

Build the sample

  1. Start Microsoft Visual Studio and select File > Open > Project/Solution.
  2. Starting in the folder where you unzipped the samples, go to the Samples subfolder, then the subfolder for this specific sample, then the subfolder for your preferred language (C++, C#, or JavaScript). Double-click the Visual Studio Solution (.sln) file.
  3. Press Ctrl+Shift+B, or select Build > Build Solution.

Run the sample

The next steps depend on whether you just want to deploy the sample or you want to both deploy and run it.

Deploying the sample

  • Select Build > Deploy Solution.

Deploying and running the sample

  • To debug the sample and then run it, press F5 or select Debug > Start Debugging. To run the sample without debugging, press Ctrl+F5 or select Debug > Start Without Debugging.