
Loading Remote Video Stored in Azure Blob Storage in a MR App


Want to play video in your MR app? See how blob storage plays a role, and let's dive into Unity development to give users a good experience.


The title of this blog post kind of gives away that this is actually two blog posts in one:

  • How to prepare and load videos into Azure
  • How to load these videos back from Azure and show these in a floating video player that is activated upon being looked at.

The Basic Idea

The UI to demo loading and playing the video is a simple plane that gets a MovieTexture applied to it. When you look at the plane (i.e. the gaze strikes the plane), the MovieTexture's Play method is called, and the video starts playing. When you look away for a couple of seconds, the MovieTexture's Pause method is called. It's not rocket science.

Two posts ago, I introduced BaseMediaLoader: a simple base class for downloading media. We are going to reuse that in this post, as loading video – as you will see – is not that different from loading audio.
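For reference, a minimal sketch of what that base class might look like. The actual implementation is in the earlier post and may differ in details; only the names BaseMediaLoader, MediaUrl, StartLoadMedia, and ExecuteRequest appear in the code further down, the rest is an assumption:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Hedged sketch of the base class from the earlier post.
public abstract class BaseMediaLoader : MonoBehaviour
{
    private string _mediaUrl;

    // When MediaUrl changes, kick off the download coroutine.
    public string MediaUrl
    {
        get { return _mediaUrl; }
        set
        {
            if (_mediaUrl != value)
            {
                _mediaUrl = value;
                StartCoroutine(StartLoadMedia());
            }
        }
    }

    // Implemented by subclasses (audio, video) for their media type.
    protected abstract IEnumerator StartLoadMedia();

    // Runs a UnityWebRequest with the supplied download handler.
    protected IEnumerator ExecuteRequest(string url, DownloadHandler handler)
    {
        var request = new UnityWebRequest(url) { downloadHandler = handler };
        yield return request.SendWebRequest();
    }
}
```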

Prepare and Upload the Video

If you have read my post about loading audio, you might have guessed it – you can't just upload an MP4 file to blob storage, then download and play it. Unity seems to have a preference for less mainstream open-source formats: you will need to convert your movie to Ogg Theora. You can do this with the command-line tool ffmpeg. The documentation on it is not very clear, and the default conversion yields a very low-quality movie (think early-years YouTube). I have found that the following parameters give a quite reasonable conversion result:

 ffmpeg.exe -i .\Fireworks.mp4 -q:v 8 fireworks.ogv 

-q:v 8 gives a nice video quality. Also, the original 121,605 KB movie is compressed to about 40,000 KB. The resulting OGV file needs to be uploaded to Azure blob storage. I used the Azure Storage Explorer for that, which also makes it easy to get a shared access signature (SAS) URL.
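If you prefer scripting the whole thing, the conversion and upload can also be done from the command line; a sketch using ffmpeg (as above) and the Azure CLI – the account and container names are placeholders you would substitute with your own:

```shell
# Convert the MP4 to Ogg Theora at a reasonable quality level.
ffmpeg -i Fireworks.mp4 -q:v 8 fireworks.ogv

# Upload the result to blob storage with the Azure CLI
# (Storage Explorer works just as well, as described above).
az storage blob upload \
    --account-name mystorageaccount \
    --container-name videos \
    --name fireworks.ogv \
    --file fireworks.ogv
```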

Video Player Components

The video player itself is pretty simple – a plane to display the movie on, a text telling the user to start playing it by looking at the plane, and an AudioSource. You can just about see the AudioSource in the image below, depicted by a very vague loudspeaker icon.

[Images: the video player components in the Unity scene]
Note the video player is about 3 meters from the user, and a bit off-center to the left – preventing it from auto-starting immediately, which it would do if it were to appear right ahead. The video plane is rotated 90/90/270° to make it appear upright and facing the user.
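In code terms, the placement described above amounts to something like this. It's a sketch: in the post this is set up in the Unity editor, and the exact offset value is illustrative:

```csharp
using UnityEngine;

// Sketch: position the video player relative to the camera at startup.
// In the post this is configured in the Unity editor, not in code.
public class VideoPlayerPlacement : MonoBehaviour
{
    void Start()
    {
        var cameraTransform = Camera.main.transform;

        // About 3 meters out, shifted a bit to the left so the gaze
        // does not hit the plane (and auto-start the video) immediately.
        transform.position = cameraTransform.position +
                             cameraTransform.forward * 3.0f +
                             cameraTransform.right * -0.75f;

        // Rotate the plane so it stands upright, facing the user.
        transform.rotation = Quaternion.Euler(90, 90, 270);
    }
}
```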

The VideoPlayer Script

The VideoPlayer script is actually doing all the work – downloading the video, playing it when the gaze hits, and pausing the playback after a timeout of 2 seconds (‘Focus Lost Timeout’). It starts pretty simply:

using System.Collections;
using HoloToolkit.Unity.InputModule;
using UnityEngine;
using UnityEngine.Networking;

public class VideoPlayer : BaseMediaLoader, IFocusable
{
    public GameObject VideoPlane;

    public AudioSource Audio;

    public GameObject LookText;

    public float FocusLostTimeout = 2f;

    private MovieTexture _movieTexture;

    private bool _isFocusExit;

    protected void Start()
    {
        VideoPlane.SetActive(false);
        LookText.SetActive(false);
    }
}


Notice that all components need to be explicitly wired up – although they are within one prefab, you still have to drag the plane, the text, and the AudioSource into the script's fields. Initially, the script turns off everything – if there's nothing downloaded (yet), show nothing. If you are on a slow network, you will see the player disappear for a while, then reappear.

The most important part of this script consists of these two methods:

protected override IEnumerator StartLoadMedia()
{
    VideoPlane.SetActive(false);
    LookText.SetActive(false);
    yield return LoadMediaFromUrl(MediaUrl);
}

private IEnumerator LoadMediaFromUrl(string url)
{
    var handler = new DownloadHandlerMovieTexture();

    yield return ExecuteRequest(url, handler);

    _movieTexture = handler.movieTexture;
    _movieTexture.loop = true;
    Audio.loop = true;

    VideoPlane.GetComponent<Renderer>().material.mainTexture = _movieTexture;
    Audio.clip = handler.movieTexture.audioClip;
    VideoPlane.SetActive(true);
    LookText.SetActive(true);
}


Remember, from BaseMediaLoader, that StartLoadMedia is called as soon as MediaUrl changes. That turns off the UI again (in case it was already turned on because a different file was loaded previously). Then we need a DownloadHandlerMovieTexture.

Then we set the loop property of both the movie texture and the AudioSource to true, and after that, we apply the movie texture to the VideoPlane's Renderer material texture so it will indeed show the movie. Since that alone would only play a silent movie, we extract the movie texture's audioClip property value and put that in our AudioSource. Finally, we make both the plane and the text visible, inviting the user to have a look.

Then we have these two simple methods to actually start and pause playing. Notice that to start playback you have to call both the movie texture's Play method and the AudioSource's Play method, but for pausing, it's enough to call just the movie texture's Pause. One of those weird Unity idiosyncrasies.

private void StartPlaying()
{
    if (_movieTexture == null)
    {
        return;
    }
    _isFocusExit = false;
    if (!_movieTexture.isPlaying)
    {
        LookText.SetActive(false);
        _movieTexture.Play();
        Audio.Play();
    }
}

private void PausePlaying()
{
    if (_movieTexture == null)
    {
        return;
    }
    LookText.SetActive(true);
    _movieTexture.Pause();
}


Notice the setting of _isFocusExit to false in StartPlaying. We need that later. Finally, the methods that are actually fired when you look at or away from the plane, as defined by IFocusable:

public void OnFocusEnter()
{
   StartPlaying();
}

public void OnFocusExit()
{
    _isFocusExit = true;
    StartCoroutine(PausePlayingAfterTimeout());
}

IEnumerator PausePlayingAfterTimeout()
{
    yield return new WaitForSeconds(FocusLostTimeout);
    if (_isFocusExit)
    {
        PausePlaying();
    }
}


If the user stops looking at the plane, _isFocusExit is set to true and a coroutine starts that first waits for the defined time. If that time has passed and the user still is not looking at the plane, the video is actually paused. This way, small head movements that make the gaze cursor wander off the plane for a short moment don't stop and start the movie repeatedly – which would be a bad user experience.

No Controls?

The floating audio player I described earlier has a fancy slider that shows progress and makes it possible to jump to any point in the audio. Unfortunately, a movie texture does not support a time property that you can get and set, so there is no way to seek to a specific point in the movie. You can only move forward, and only by setting the loop property to true do you actually end up at the start again – rewinding to the start does not work either. I don't know why this is, but that's the way it seems to be.

Conclusion

Showing video is almost as easy as playing audio, and in many ways similar. The default Unity capabilities allow only limited control, but it's a nice way to, for instance, show instructional videos. Be aware that playing video on a resource-constrained device (read: HoloLens) might require a lot of resources; consider smaller, low-res videos in that case. Testing is always key.

Topics:
iot, azure blob storage, mixed reality, video, ux, tutorial
