Wowza REST APIs and HTTP Providers

This article shows the different ways to make calls to Wowza Media Engine from external applications and environments for various purposes, such as getting server status, listeners, connections, applications and their streams.

HTTP Providers

HTTP Providers are Java classes that are configured on a per-virtual host basis.

Some pre-packaged HTTP Providers that return data in XML:

1. HTTPConnectionCountsXML

Returns connection information such as the vhost, application, application instance, message-in bytes rate, message-out bytes rate, etc.

http://[wowza-ip-address]:8086/connectioncounts

[Screenshot: connectioncounts XML output]

2. HTTPConnectionInfo
Returns detailed connection information, for example:

http://[wowza-ip-address]:8086/connectioninfo

server=1

3. HTTPServerVersion

Returns the Wowza Media Server version and build number. It’s the default HTTP Provider on port 1935.

url : http://[wowza-ip-address]:1935

Wowza Streaming Engine 4 Monthly Edition 4.1.1 build13180

4. HTTPLiveStreamRecord

Provides the web interface to record live streams.

url : http://[wowza-ip-address]:8086/livestreamrecord

[Screenshot: livestreamrecord web interface]

5. HTTPServerInfoXML

Returns server and connection information

url : http://[wowza-ip-address]:8086/serverinfo

[Screenshot: serverinfo XML output]

6. HTTPClientAccessPolicy

It is used for fetching the Microsoft Silverlight clientaccesspolicy.xml from the conf folder.

7. HTTPCrossdomain

Returns the Adobe Flash crossdomain.xml file from the [install-dir]/conf folder.

8. HTTPProviderMediaList

Dynamic method for generating adaptive bitrate manifests and playlists from SMIL data.

9. HTTPStreamManager

The Stream Manager returns all applications and their streams in a web interface.

url : http://[wowza-ip-address]:8086/streammanager

[Screenshot: Stream Manager web interface]

10. HTTPTranscoderThumbnail

Returns a bitmap image from the source stream being transcoded.

url : http://[wowza-ip-address]:8086/transcoderthumbnail?application=[application-name]&streamname=[stream-name]&format=[jpeg or png]&size=[widthxheight]

Each HTTP Provider can be configured with a different request filter and authentication method (none, basic, digest). We can even create our own substitutes for these HTTP Providers, as described in the next section.
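For reference, HTTP Providers are registered per HostPort in [install-dir]/conf/VHost.xml. A sketch of such an entry for the custom provider built in the next section could look like the following (the package name com.mycompany.wms and the request filter channellist* are placeholders of my own, not values from the original post):

<HTTPProvider>
<BaseClass>com.mycompany.wms.DCWS</BaseClass>
<RequestFilters>channellist*</RequestFilters>
<AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>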

Extending HTTProvider2Base

The following code snippets describe the process of creating a Wowza web service that returns a JSON object containing all the values.

Imports to build an HTTP Provider:


import com.wowza.wms.application.*;
import com.wowza.wms.vhost.*;
import com.wowza.wms.http.*;
import com.wowza.wms.httpstreamer.model.*;

// used by listChannels() below
import java.util.Iterator;
import java.util.List;
import java.net.URLEncoder;
import java.io.UnsupportedEncodingException;

// since we want to return data in JSON format
import org.json.simple.JSONObject;

The class declaration is as follows:


public class DCWS extends HTTProvider2Base
{

....

}

The code to extract application names


public JSONObject listChannels() {

    JSONObject obj = new JSONObject();

    // counter kept across vhosts so the channel keys stay unique
    int i = 0;

    // get the virtual host names and iterate through them
    List<String> vhostNames = VHostSingleton.getVHostNames();
    Iterator<String> iter = vhostNames.iterator();
    while (iter.hasNext())
    {
        String vhostName = iter.next();
        IVHost vhost = (IVHost) VHostSingleton.getInstance(vhostName);
        List<String> appNames = vhost.getApplicationNames();
        Iterator<String> appNameIterator = appNames.iterator();

        while (appNameIterator.hasNext())
        {
            String applicationName = appNameIterator.next();

            try {
                String key = "channel" + (++i);
                obj.put(key, URLEncoder.encode(applicationName, "UTF-8"));
            }
            catch (UnsupportedEncodingException e) {
                e.printStackTrace();
            }
        }
    }
    return obj;
}

The code which responds to the HTTP request:

TBD..
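A minimal sketch of such a handler, assuming the standard HTTProvider2Base onHTTPRequest() entry point; the JSON serialization and response headers below are my own assumptions rather than code from the original article.

public void onHTTPRequest(IVHost vhost, IHTTPRequest req, IHTTPResponse resp)
{
    // doHTTPAuthentication comes from HTTProvider2Base and honours the
    // authentication method configured for this provider in VHost.xml
    if (!doHTTPAuthentication(vhost, req, resp))
        return;

    JSONObject json = listChannels();

    try {
        resp.setHeader("Content-Type", "application/json");
        // requires: import java.io.OutputStream;
        OutputStream out = resp.getOutputStream();
        out.write(json.toJSONString().getBytes());
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}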


Wowza RTMP Authentication with Third party Token provider over Tiny Encryption Algorithm (TEA)

This article is focused on Wowza RTMP Authentication with a third-party token provider over the Tiny Encryption Algorithm (TEA) and is a continuation of the previous post about setting up a basic RTMP Authentication module on Wowza Engine version 4 and above.

The task is divided into 3 parts:

  1. RTMP Encoder Application
  2. Wowza RTMP Auth module
  3. Third party Authentication Server

The component diagram is as follows:

[Component diagram]

The detailed explanation of the components is as follows:

1. Wowza RTMP Auth module

The Wowza server receives an RTMP stream URL in the format:

rtmp://username:pass@wowzaip:1935/Application/streamname

It considers the username and pass to be the user credentials. The RTMP Auth module invokes the getPassword() function inside the deployed application class, passing the username as a parameter. The username is then encrypted using TEA (Tiny Encryption Algorithm).

TEA is a block cipher based on symmetric (private) key encryption. The input is a 64-bit block of plain or cipher text with a 128-bit key, and the output is the corresponding cipher or plain text respectively.

The code for encryption is:


TEA.encrypt( username, sharedSecret );

The code to make a connection to the third-party auth server is:


url = new URL(serverTokenValidatorURL);

URLConnection connection;
connection = url.openConnection();
connection.setDoOutput(true);

OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
out.write("clientid=" + TEA.encrypt(username, sharedSecret));
out.close();

The sharedSecret is the common key held by both the auth server and the Wowza server. It must be at least a 16-character alphanumeric / special-character key. An example of a shared secret is abcdefghijklmnop. The value can be stored as a property in the Application.xml file.

<Property>
<Name>secureTokenSharedSecret</Name>
<Value><![CDATA[abcdefghijklmnop]]></Value>
</Property>

<Property>
<Name>serverTokenValidatorURL</Name>
<Value>http://127.0.0.1:8080/TokenProvider/authentication/token</Value>
</Property>

The value of serverTokenValidatorURL is the third-party auth server endpoint listening for REST POST requests.

The code for receiving the resulting JSON data is:


ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(connection.getInputStream());
node = node.get("publisherToken");
String token = node.asText();
String token2 = TEA.decrypt(token, sharedSecret);
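Putting the fragments above together, the getPassword() implementation inside the Wowza auth module could look roughly like the sketch below. This assumes the module extends AuthenticateUsernamePasswordProviderBase (as in the RTMP Authenticate Module post later on) and that sharedSecret and serverTokenValidatorURL are read from the Application.xml properties shown above.

public String getPassword(String username)
{
    try {
        // POST the TEA-encrypted username as clientid to the token provider
        URL url = new URL(serverTokenValidatorURL);
        URLConnection connection = url.openConnection();
        connection.setDoOutput(true);

        OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
        out.write("clientid=" + TEA.encrypt(username, sharedSecret));
        out.close();

        // read back {"publisherToken": "..."} and decrypt it to obtain the password
        ObjectMapper mapper = new ObjectMapper();
        JsonNode node = mapper.readTree(connection.getInputStream());
        String token = node.get("publisherToken").asText();
        return TEA.decrypt(token, sharedSecret);
    }
    catch (Exception e) {
        return null; // returning null rejects the connection
    }
}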

2. Third party Authentication Server

The third-party auth server stores the passwords for users or performs OAuth-based authentication. It uses the shared secret key to decrypt the token with TEA, as explained in the section above.

The code to decrypt the incoming clientId is:


TEA.decrypt(id, sharedSecret);

Add your own custom logic to check files, databases, etc. to obtain the password corresponding to the username decrypted above.

The code to encrypt the password if the user exists, or to send an invalid response if the user does not exist, is:


        try {

            String clientID = TEA.decrypt(id, sharedSecret);
            
            String token= findUserPassword(clientID);
            
             token = TEA.encrypt(token, sharedSecret); 
                        
            return "{\"publisherToken\":\""  + token+ "\"}";
            
        }catch (Exception ex) {

            return "{\"error\":\"Invalid Client\"}";
        }
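For completeness, a rough sketch of how the token endpoint itself could be exposed, assuming a JAX-RS based REST server (this framework choice and the class name TokenResource are my assumptions; the path mirrors the serverTokenValidatorURL property, the TEA class is the same one used above, and findUserPassword() stands in for your own lookup logic):

import javax.ws.rs.Consumes;
import javax.ws.rs.FormParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/authentication")
public class TokenResource {

    // same key as configured on the Wowza side
    private static final String sharedSecret = "abcdefghijklmnop";

    @POST
    @Path("/token")
    @Consumes("application/x-www-form-urlencoded")
    @Produces("application/json")
    public String issueToken(@FormParam("clientid") String id) {
        try {
            String clientID = TEA.decrypt(id, sharedSecret);
            String token = TEA.encrypt(findUserPassword(clientID), sharedSecret);
            return "{\"publisherToken\":\"" + token + "\"}";
        } catch (Exception ex) {
            return "{\"error\":\"Invalid Client\"}";
        }
    }

    private String findUserPassword(String clientID) {
        // placeholder: replace with your own file / database lookup
        return "1234";
    }
}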

The final call flow thus becomes:

[Call flow diagram]


Wowza Secure URL params Authentication for streams in an application

This post is useful for securing publishers on a common application with a username-password pair specific to stream names. It uses the Module Core Security to prompt the user to supply credentials.

The detailed code below checks the RTMP query string for parameters and performs two checks: is the user allowed to connect, and is the user allowed to stream on the given stream name.

Initialize the HashMap containing publisher clients and the IApplicationInstance:

HashMap <Integer, String> publisherClients =null;
IApplicationInstance appInstance = null;

On app start, initialize the IApplicationInstance object.

public void onAppStart(IApplicationInstance appInstance)
{
    this.appInstance = appInstance;
}

onConnect is called when any publisher tries to connect with the media server. At this event, collect the username and clientId from the client.
Check whether publisherClients already contains the userName that the client has provided; otherwise reject the connection.

public void onConnect(IClient client, RequestFunction function, AMFDataList params)
{
    AMFDataObj obj = params.getObject(2);
    AMFData data = obj.get("app");

    if (data.toString().contains("?")) {

        String[] paramlist = data.toString().split("\\?");
        String[] userParam = paramlist[1].split("=");
        String userName = userParam[1];

        if (this.publisherClients == null) {
            this.publisherClients = new HashMap<Integer, String>();
        }

        if (this.publisherClients.get(client.getClientId()) == null) {
            this.publisherClients.put(client.getClientId(), userName);
        } else {
            client.rejectConnection();
        }
    }
}

AMFDataItem: class for marshalling data between Wowza Pro server and Flash client.

When the user starts to publish a stream after a successful connection, the publish function is called. It extracts the stream name from the client (function extractStreamName()) and checks whether the user is allowed to stream on the given stream name (function isStreamNotAllowed()).

public void publish(IClient client, RequestFunction function, AMFDataList params)
{
    String streamName = extractStreamName(client, function, params);
    if (isStreamNotAllowed(client, streamName))
    {
        sendClientOnStatusError(client, "NetStream.Publish.Denied", "Stream name not allowed for the logged in user: " + streamName);
        client.rejectConnection();
    }
    else {
        invokePrevious(client, function, params);
    }
}
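The publish handler above relies on a sendClientOnStatusError() helper. If such a helper is not already available from your module's base class, a rough equivalent (my own sketch, built with the AMFDataObj/AMFDataItem classes mentioned in this post) could be:

private void sendClientOnStatusError(IClient client, String code, String description)
{
    // build the AMF onStatus object that is sent back to the RTMP client
    AMFDataObj status = new AMFDataObj();
    status.put("level", new AMFDataItem("error"));
    status.put("code", new AMFDataItem(code));
    status.put("description", new AMFDataItem(description));
    client.call("onStatus", null, status);
}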

The function below is called when a publisher disconnects from the server. It removes the client from publisherClients.

public void onDisconnect(IClient client)
{
    if (this.publisherClients != null) {
        this.publisherClients.remove(client.getClientId());
    }
}

The function to extract a stream name is:


public String extractStreamName(IClient client, RequestFunction function, AMFDataList params)
{
    String streamName = params.getString(PARAM1);
    if (streamName != null)
    {
        String streamExt = MediaStream.BASE_STREAM_EXT;

        String[] streamDecode = ModuleUtils.decodeStreamExtension(streamName, streamExt);
        streamName = streamDecode[0];
        streamExt = streamDecode[1];
    }

    return streamName;
}

The function to check whether the stream name is allowed for the given user is:


public boolean isStreamNotAllowed(IClient client, String streamName)
{
    WMSProperties localWMSProperties = client.getAppInstance().getProperties();
    String allowedStreamName = localWMSProperties.getPropertyStr(this.publisherClients.get(client.getClientId()));
    String sName = "";
    if (streamName.contains("?"))
        sName = streamName.substring(0, streamName.lastIndexOf("?"));
    else
        sName = streamName;
    return !sName.toLowerCase().equals(allowedStreamName.toLowerCase());
}

When adding the application to the Wowza server, make sure that ModuleCoreSecurity is present under Modules in Application.xml.

<Module>
<Name>ModuleCoreSecurity</Name>
<Description>Core Security Module for Applications</Description>
<Class>com.wowza.wms.security.ModuleCoreSecurity</Class>
</Module>




Also ensure that the property securityPublishRequirePassword is present under Properties:

<Property>
<Name>securityPublishRequirePassword</Name>
<Value>true</Value>
<Type>Boolean</Type>
</Property>

Add the user credentials as properties too. For example, to give testUser with password 123456 access to stream on myStream, include the following:

<Property>
<Name>testUser</Name>
<Value>myStream</Value>
<Type>String</Type>
</Property>

Also include the mapping of user and password inside the conf/publish.password file:

# Publish password file (format [username][space][password])
# username password

testuser 123456


Wowza RTMP Authenticate Module

The purpose of this article is to use the RTMP Authentication Module in Wowza Engine. This enables us to intercept a connect request with a username and password to be checked against any outside source such as a database, password file, third-party token provider, third-party OAuth, etc. Once the password provided by the user is verified against the authentic password from the external source, the user is allowed to connect and publish.

Step 1: Create a new Wowza Media Server Project in Eclipse. It is assumed that the user has already integrated the Wowza IDE into Eclipse.

File -> New -> Wowza Media Server Project  

Step 2: Give any project name. I named it “RTMPAuthSampleCode”.

[Screenshot: new Wowza Media Server Project]

Step 3: Point the location to the existing Wowza Engine installed in the local environment.

It is usually in /usr/local/WowzaStreamingEngine/

[Screenshot: Wowza installation location]

Step 4: Proceed with the creation; uncheck the event methods as we are not using them right now.

[Screenshot: project creation options]

Step 5: Put the code in the class.

The class RTMPAuthSampleCode extends AuthenticateUsernamePasswordProviderBase. It is mandatory to define getPassword(String username) and userExists(String username). ModuleRTMPAuthenticate will invoke getPassword() for connection requests from users.

[Screenshot: RTMPAuthSampleCode class in Eclipse]
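A minimal sketch of what the class in the screenshot contains, matching the test / 1234 credentials used later in this post (the hard-coded lookup is only a placeholder for your own external source):

package com.wowza.wms.example.authenticate;

import com.wowza.wms.authentication.AuthenticateUsernamePasswordProviderBase;

public class RTMPAuthSampleCode extends AuthenticateUsernamePasswordProviderBase
{
    @Override
    public String getPassword(String username)
    {
        // look up the password for this user from any external source
        // (database, password file, third-party token provider, ...)
        if ("test".equals(username))
            return "1234";
        return null; // null rejects the connection
    }

    @Override
    public boolean userExists(String username)
    {
        return "test".equals(username);
    }
}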

We can add any source for obtaining the password for a given username, which will be matched against the password supplied by the user. If it matches, the user is granted access; otherwise we can return null or an error message.

We may use various ways of obtaining user credentials, such as a database, password files, a third-party token provider, etc. I will be discussing more ways to do RTMP authentication, especially using a third-party token provider with TEA.encrypt and a shared secret, in the next blog.

Step 6: Build the project and Run.

Project-> Build the Project 

Run -> Run Configurations … -> WowzaMediaServer_RTMPAuthSampleCode

To run the modules on my Ubuntu 64-bit 14.04 system, I also needed to provide

-Dcom.wowza.wms.native.base="linux" inside the VM Arguments. It is highlighted in the figure below.

[Screenshot: Run Configuration VM arguments]

Step 7: Click Run to start the Wowza Media Engine.

Step 8: Open the Manager Console of Wowza.

This is the web-based GUI for managing applications and checking incoming streams. The manager script can be started with:

sudo /usr/local/WowzaStreamingEngine/manager/bin/startmgr.sh

The console can be opened at http://127.0.0.1:8088

[Screenshot: Wowza Streaming Engine Manager console]

You can also see that RTMPAuthSampleCode.jar has been copied to the /usr/local/WowzaStreamingEngine/lib folder.

Step 9: Add the module to applications.

Add a folder “RTMPAuthSampleCode” inside the /usr/local/WowzaStreamingEngine/applications folder.

Step 10: Add conf.

Add a folder “RTMPAuthSampleCode” inside the /usr/local/WowzaStreamingEngine/conf folder.

Copy and paste Application.xml from the conf folder into the RTMPAuthSampleCode folder and make the following changes.

Add the ModuleRTMPAuthenticate module to Modules

<Module>
<Name>ModuleRTMPAuthenticate</Name>
<Description>ModuleRTMPAuthenticate</Description>
<Class>com.wowza.wms.security.ModuleRTMPAuthenticate</Class>
</Module>

and comment ModuleCoreSecurity

<!--    <Module>
     <Name>ModuleCoreSecurity</Name>
     <Description>Core Security Module for Applications</Description>
     <Class>com.wowza.wms.security.ModuleCoreSecurity</Class>
</Module> -->

Step 11: Add the property usernamePasswordProviderClass to Properties.

It is usually present inside Application at the bottom of the Application.xml file.

<Property>
<Name>usernamePasswordProviderClass</Name>
<Value>com.wowza.wms.example.authenticate.RTMPAuthSampleCode</Value>
</Property>

Step 12 : Make Authentication.xml file inside /usr/local/WowzaStreamingEngine/conf folder.

Note that from Wowza 4 and later versions, Authentication.xml comes bundled inside wms-server.jar, which is in the lib folder. However, for me, without an explicit Authentication.xml file the program froze, and using my own simple Authentication.xml gave problems with the digest. Hence follow the process below to get a working Authentication.xml file into the conf folder.

Expand the archive, and from inside the extracted folder copy the file wms-server/com/wowza/wms/conf/Authentication.xml to /usr/local/WowzaStreamingEngine/conf.
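One possible way to do this from the command line, assuming the JDK jar tool is on the path (the paths below are for the default install location):

cd /tmp
cp /usr/local/WowzaStreamingEngine/lib/wms-server.jar .
jar xf wms-server.jar com/wowza/wms/conf/Authentication.xml
sudo cp com/wowza/wms/conf/Authentication.xml /usr/local/WowzaStreamingEngine/conf/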

Step 13: Restart the Wowza Media Engine.

Step 14: Use any RTMP encoder, such as Adobe Flash Media Live Encoder or GoCoder, or your own app (I could not use this with ffmpeg), and try to connect to the application RTMPAuthSampleCode with username test and password 1234.

Step 15: Observe the logs for incoming streams and traces from getPassword().

If you want the user test to have permission to publish a stream to this application, then return 1234 from getPassword(); otherwise return null.

References :

  1. Media security overview
    http://www.wowza.com/forums/content.php?115-MediaSecurity-AddOn-Package-(SecureToken-RTMP-RTSP-Authentication-and-more
  2. How to integrate Wowza user authentication with external authentication systems (ModuleRTMPAuthenticate)
    http://www.wowza.com/forums/content.php?236-How-to-integrate-Wowza-user-authentication-with-external-authentication-systems-%28ModuleRTMPAuthenticate%29
  3. How to enable username/password authentication for RTMP and RTSP publishing
    http://www.wowza.com/forums/content.php?449-How-to-enable-username-password-authentication-for-RTMP-and-RTSP-publishing
  4. Wowza Streaming Engine 4.2 Configuration Reference
    http://www.wowza.com/resources/WowzaStreamingEngine_ConfigurationReference.pdf

continue : Streaming / broadcasting Live Video call to non webrtc supported browsers and media players

This blog is a continuation of the attempts / outcomes and problems in building a WebRTC to RTP media framework that successfully streams / broadcasts WebRTC content to non-WebRTC-supported browsers (Safari / IE) and media players (VLC).


Attempt 4: Stream the content to a WebRTC endpoint which is hidden in a video call. Pick the stream from the VP8 object URL and send it to a streaming server.

This process involved the following components :

  • WebRTC API : simplewebrtc on Chrome
  • Transfer mechanism from client to Streaming server:  webrtc media channel

Problem: No streaming server is equipped to handle a direct WebRTC input and stream it on the network.


Attempt 4.1: Stream the content to a WebRTC endpoint. Do a WebRTC Endpoint to RTP Endpoint bridge using the Kurento APIs.

Use the RTP port and IP address as input to an ffmpeg, GStreamer, or VLC terminal command and output a live H264 stream on another IP address and port.

This process involved the following components :

  • API : Kurento
  • Transfer mechanism : HTML5 webrtc client -> application server hosting java -> media server -> application for webrtc media to RTP media conversion -> RTP player

Screenshots of attempts with Wowza to stream RTP from an IP and port:


Problem: The stream was black, which means 100% loss.

Lesson learned: RTP is not suitable for over-the-internet transmission, especially with firewalls.


Attempt 4.2: Build a WebRTC Endpoint to HTTP endpoint in Kurento and force the video and audio encoding to be H264 and PCMU.

Code snippet for adding constraints to the output media via the pipeline and forcing the choice of codecs (H264 for video and PCMU for audio):

MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
HttpGetEndpoint httpEndpoint=new HttpGetEndpoint.Builder(pipeline).build();

org.kurento.client.Fraction fr= new org.kurento.client.Fraction(1, 30);
VideoCaps vc= new VideoCaps(VideoCodec.H264,fr);
httpEndpoint.setVideoFormat(vc);

AudioCaps ac= new AudioCaps(AudioCodec.PCMU, 65536);
httpEndpoint.setAudioFormat(ac);

webRtcEndpoint.connect(httpEndpoint);

Alternatively, one can opt to use GStreamer filters to force the output into raw format.

// basic media operation of 1 pipeline and 2 endpoints
MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
RtpEndpoint rtpEndpoint = new RtpEndpoint.Builder(pipeline).build();

// adding GStreamer filters
GStreamerFilter filter1 = new GStreamerFilter.Builder(pipeline, "videorate max-rate=30").withFilterType(FilterType.VIDEO).build();
GStreamerFilter filter2 = new GStreamerFilter.Builder(pipeline, "capsfilter caps=video/x-h264,width=1280,height=720,framerate=30/1").withFilterType(FilterType.VIDEO).build();
GStreamerFilter filter3 = new GStreamerFilter.Builder(pipeline, "capsfilter caps=audio/x-mpeg,layer=3,rate=48000").withFilterType(FilterType.AUDIO).build();

// connecting all points to one another
webRtcEndpoint.connect(filter1);
filter1.connect(filter2);
filter2.connect(filter3);
filter3.connect(rtpEndpoint);

// RTP SDP offer and answer
String requestRTPsdp = rtpEndpoint.generateOffer();
rtpEndpoint.processAnswer(requestRTPsdp);

End result: The output is still webm-based and does not work on H264 clients.


Attempt 5: Use an RTP SDP Endpoint (i.e. an SDP file valid for a given session) and use it to play the WebRTC media over the Wowza streaming server.

This process involved the following components

  1. WebRTC Stream and object URL of the blob containing VP8 media
  2. Kurento  WebRTC Endpoint  bridge to generate SDP
  3. Wowza Streaming server

Snippet used in Kurento to generate an SDP file from the WebRTC to RTP bridge:

@RequestMapping(value = "/rtpsdp", method = RequestMethod.POST)
private String processRequestrtpsdp(@RequestBody String sdpOffer)
throws IOException, URISyntaxException, InterruptedException {

//basic media operation of 1 pipeline and 2 endpoints
MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
RtpEndpoint rtpEndpoint = new RtpEndpoint.Builder(pipeline).build();

//connecting all points to one another
webRtcEndpoint.connect (rtpEndpoint);

// RTP SDP offer and answer
String requestRTPsdp = rtpEndpoint.generateOffer();
rtpEndpoint.processAnswer(requestRTPsdp);

// write the SDP connector to an external file
PrintWriter out = new PrintWriter("/tmp/test.sdp");
out.println(requestRTPsdp);
out.close();

HttpGetEndpoint httpEndpoint = new HttpGetEndpoint.Builder(pipeline).build();
PlayerEndpoint player = new PlayerEndpoint.Builder(pipeline, requestRTPsdp).build();
httpEndpoint.connect(rtpEndpoint);
player.connect(httpEndpoint);

// Playing media and opening the default desktop browser
player.play();
String videoUrl = httpEndpoint.getUrl();
System.out.println(" ------- video URL -------------" + videoUrl);

// send the response to front client
String responseSdp = webRtcEndpoint.processOffer(sdpOffer);

return responseSdp;
}

End result: Wowza does not recognize the WebRTC SDP and does not play the video.

Screenshot of Wowza with SDP input:


Attempt 5.1: Use an RTP SDP Endpoint (i.e. an SDP file valid for a given session) and use it to play the WebRTC media in the default Ubuntu media player.

The SDP file formed contains contents such as:

v=0
o=- 3631611195 3631611195 IN IP4 192.168.0.119
s=Kurento Media Server
c=IN IP4 192.168.0.119
t=0 0
m=audio 42802 RTP/AVP 98 99 0
a=rtpmap:98 OPUS/48000/2
a=rtpmap:99 AMR/8000/1
a=rtpmap:0 PCMU/8000
a=ssrc:2713728673 cname:user59375791@host-ad1117df
m=video 35946 RTP/AVP 96 97 100 101
a=rtpmap:96 H263-1998/90000
a=rtpmap:97 VP8/90000
a=rtpmap:100 MP4V-ES/90000
a=rtpmap:101 H264/90000
a=ssrc:93449274 cname:user59375791@host-ad1117df

End result: the WebRTC SDP is not recognized properly and the video plays as deformed media.

Screenshot of playing from an SDP file:


Attempt 5.2: Use an RTP SDP Endpoint (i.e. an SDP file valid for a given session) and use it to play the WebRTC media over VLC using socket input.

End result : nothing plays

Screenshot of VLC connected to play from the socket, failing to play anything:


Attempt 5.3: Create a WebRTC endpoint and connect it to an RTP endpoint via media pipelines. Also make the RTP SDP offer and answer the same. Play with ffmpeg / ffplay / gst playbin.

String requestRTPsdp = rtpEndpoint.generateOffer();
rtpEndpoint.processAnswer(requestRTPsdp);

Write the requestRTPsdp to a file and obtain an RTP connector endpoint with Application/SDP. It plays okay with gst playbin (10 secs without audio). Successful attempt to play from gst playbin:

gst-launch -vvv playbin uri=file:///tmp/test.sdp 

but it refuses to be played by VLC, ffplay and even Wowza. The error generated with

ffmpeg -i test.sdp -vcodec copy -acodec copy -f mpegts output-file.ts

or

ffmpeg -re -i test.sdp -vcodec h264 -acodec mp3 -f mpegts "udp://192.168.4.26:5000"

End result: This results in “Could not find codec parameters for stream 1 (video: h263, none)”. Other error types include “Could not write header for output file” and “Output file is empty, nothing was encoded”.

Error screenshots of trying to play the RTP SDP file with ffmpeg


Attempt 6: Use a WebRTC-capable media and streaming server (e.g. Kurento) to pick up a live stream of VP8.

Convert the VP8 to H264 (ffmpeg / RTP endpoint).

Convert the H264 to MP4 using an MP4 parser and pass it to a streaming server (Wowza).
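The exact code used for this attempt is not included in the post; one way to sketch the VP8 to H264/MP4 step in Kurento terms is to attach a RecorderEndpoint with an MP4 media profile to the WebRtcEndpoint, so that Kurento transcodes while writing the file that is later handed to Wowza (the file path below is a placeholder):

MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();

// an MP4 media profile forces transcoding of the incoming VP8 to H264
RecorderEndpoint recorder = new RecorderEndpoint.Builder(pipeline, "file:///tmp/webrtc-call.mp4")
        .withMediaProfile(MediaProfileSpecType.MP4)
        .build();

webRtcEndpoint.connect(recorder);
recorder.record();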

End result: Yes, it did work on Mozilla Firefox, but with considerable lag.


Update: Thankfully, updates to the WebRTC standards mandated support for PCMU and AVC/H264 Constrained Baseline profile in the media stack of the browser, thus solving the “from-scratch buildup of a transcoder between WebRTC and non-WebRTC endpoints”.

  • Video Codecs : RFC 7742 specifies that all WebRTC-compatible browsers must support VP8 and H.264’s Constrained Baseline profile for video.
  • Audio Codecs : RFC 7874 specifies that browsers must support at least the Opus codec as well as G.711’s PCMA and PCMU formats.

The latest WebRTC specification lists a set of codecs which all compliant browsers (Chrome 52 and later, Firefox, Safari, Edge) are required to support.

References :

  1. RFC7742: WebRTC Video Processing and Codec Requirements
  2. RFC 7874: WebRTC Audio Codec and Processing Requirements

Read more about Webrtc Audio Video Codecs

Streaming / broadcasting Live Video call to non webrtc supported browsers and media players

As the title of this article suggests, I am going to pen down my attempts at streaming / broadcasting a live WebRTC video call to non-WebRTC-supported browsers and media players such as VLC, ffplay, the default video player in Linux, etc.

Some of the high-level architectures for streaming WebRTC video to multiple endpoints can be viewed in the post below.

Aim: I will be attempting to create a lightweight WebRTC to raw/H264 transcoder by making my own media engine which takes input from a WebRTC peerconnection or getUserMedia. I am sharing my past experiments in the hope of helping someone whose objective may be the same, since many non-WebRTC-supported endpoints (RPi, kiosks, mobile browsers) could benefit heavily from WebRTC streaming. Even if your objective is not the same as mine, you may gain some insight into what not to do when making a media transcoder.


Attempt 1: Use a one-to-many broadcasting API in JS.

<table class="visible">
<tr>
<td style="text-align: right;">
<input type="text" id="conference-name" placeholder="Broadcast Name"> </td>
<td> <select id="broadcasting-option"> <option>Audio + Video</option> <option>Only Audio</option> <option>Screen</option> </select> </td>
<td> <button id="start-conferencing">Start Broadcasting</button> </td> </tr>
</table>
<table id="rooms-list" class="visible"></table>
<div id="participants"></div>
<script src="RTCPeerConnection-v1.5.js"></script>
<script src="firebase.js"></script>
<script src="broadcast.js"></script>
<script src="broadcast-ui.js"></script>

It uses the API from webrtc-experiment.com. The broadcast is in one direction only, where the viewers are never asked for their mic / webcam permission.

Problem: The broadcast is for WebRTC browsers only and does not support non-WebRTC players / browsers.

Attempt 1.1: Stream the media directly to nodejs through a websocket.

window.addEventListener('DOMContentLoaded', function () {

    var v = document.getElementById('v');
    navigator.getUserMedia = (navigator.getUserMedia ||
        navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia ||
        navigator.msGetUserMedia);

    if (navigator.getUserMedia) {
// Request access to video only
        navigator.getUserMedia(
            {
                video: true,
                audio: false
            },
            function (stream) {
                var url = window.URL || window.webkitURL;
                v.src = url ? url.createObjectURL(stream) : stream;
                v.play();

                var ws = new WebSocket('ws://localhost:3000', 'echo-protocol');
                waitForSocketConnection(ws, function () {

                    console.log(" url.createObjectURL(stream)-----", url.createObjectURL(stream))
                    ws.send(stream);

                    console.log("message sent!!!");
                });

            },
            function (error) {
                alert('Something went wrong. (error code ' + error.code + ')');
                return;
            }
        );
    } else {
        alert('Sorry, the browser you are using doesn\'t support getUserMedia');
        return;
    }
});

//Make the function wait until the connection is made...
function waitForSocketConnection(socket, callback) {
    setTimeout(
        function () {
            if (socket.readyState === 1) {
                console.log("Connection is made")
                if (callback != null) {
                    callback();
                }
                return;

            } else {
                console.log("wait for connection...")
                waitForSocketConnection(socket, callback);
            }

        }, 5); // wait 5 milisecond for the connection...
}

Problem: The video is in the form of a buffer and does not play.

Attempt 2: Record the WebRTC media (5 secs each) into chunks of webm format -> transfer them to the other end -> append the chunks together like a regular file.

This process involved the following components :

  • Recorder Javascript library : RecordJs
  • Transfer mechanism : Record using RecordRTC.js -> send to other end for media server -> stitching together the small webm files into big one at runtime and play
  • Programs :

Code for video recorder

navigator.getUserMedia(videoConstraints, function (stream) {

    video.onloadedmetadata = function () {
        video.width = 320;
        video.height = 240;

        var options = {
            type: isRecordVideo ? 'video' : 'gif',
            video: video,
            canvas: {
                width: canvasWidth_input.value,
                height: canvasHeight_input.value
            }
        };

        recorder = window.RecordRTC(stream, options);
        recorder.startRecording();
    };
    video.src = URL.createObjectURL(stream);
}, function () {
    if (document.getElementById('record-screen').checked) {
        if (location.protocol === 'http:')
            alert('https is mandatory to capture screen.');
        else
            alert('Multi-capturing of screen is not allowed.Have you enabled flag: "Enable screen capture support in getUserMedia"?');
    } else
        alert('Webcam access is denied.');
});

Code for the video appender:

var FILE1 = '1.webm';
var FILE2 = '2.webm';
var FILE3 = '3.webm';
var FILE4 = '4.webm';
var FILE5 = '5.webm';

var NUM_CHUNKS = 5;
var video = document.querySelector('video');

window.MediaSource = window.MediaSource || window.WebKitMediaSource;
if (!!!window.MediaSource) {
    alert('MediaSource API is not available');
}

var mediaSource = new MediaSource();
video.src = window.URL.createObjectURL(mediaSource);

function callback(e) {
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
    GET(FILE1, function (uInt8Array) {
        var file = new Blob([uInt8Array], {type: 'video/webm'});
        var i = 1;
        (function readChunk_(i) {
            var reader = new FileReader();
            reader.onload = function (e) {
                sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
                if (i == NUM_CHUNKS) mediaSource.endOfStream();
                else {
                    if (video.paused) {
                        video.play(); // Start playing after 1st chunk is appended.
                    }
                    readChunk_(++i);
                }
            };
            reader.readAsArrayBuffer(file);
        })(i); // Start the recursive call by self calling.
    });
}

mediaSource.addEventListener('sourceopen', callback, false);
mediaSource.addEventListener('webkitsourceopen', callback, false);
mediaSource.addEventListener('webkitsourceended', function (e) {
    logger.log('mediaSource readyState: ' + this.readyState);
}, false);

// function get the video via XHR
function GET(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.responseType = 'arraybuffer';
    xhr.send();
    xhr.onload = function (e) {
        if (xhr.status != 200) {
            alert("Unexpected status code " + xhr.status + " for " + url);
            return false;
        }
        callback(new Uint8Array(xhr.response));
    };
}

Shortcoming of this approach

  1. The webm files failed to play on most of the media players
  2. The recorder can only record either a video or an audio file at a time.

Attempt 2 (continued): Chunking and media proxy

Since the previous approach failed to work for non-WebRTC endpoints, the next iteration was to channel the WebRTC media via a nodejs server, thus disrupting the peer-to-peer media stream in favour of a centralized / proxied media stream. This would enable me to obtain raw media packets from the stream using low-level C based VP8 decoder libraries and then re-encode them to H264 or other media formats suitable for the endpoints.

In theory, the media could be re-encoded using the openH264 library and the frames could then be sent to players.

let mediaSource = new MediaSource();
let sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs=vp9', 
    new VP9Decoder());
let buffer = await loadBuffer();
sourceBuffer.appendBuffer(buffer);

Further extending for uncompressed video

let mediaSource = new MediaSource();
let sourceBuffer = mediaSource.addSourceBuffer('video/raw; codecs=yuv420p');
for (let p in demuxPAckets()) {
    let frame = await codec.decode(p);
    sourceBuffer.appendBuffer(frame);
}

At least, that was the plan.

Attempt 2.1: Record the WebRTC media (5 secs each) into chunks of webm format (RecordRTC.js) -> use the Kurento JS client (kws-media-api.js) to make an HTTP Endpoint to the recorded webm files -> append the chunks together like a regular file at runtime.

// UI elements
function getByID(id) {
    return document.getElementById(id);
}

var recordAudio = getByID('record-audio'),
    recordVideo = getByID('record-video'),
    stopRecordingAudio = getByID('stop-recording-audio'),
    stopRecordingVideo = getByID('stop-recording-video'),
    broadcasting = getByID('broadcasting');

var canvasWidth_input = getByID('canvas-width-input'),
    canvasHeight_input = getByID('canvas-height-input');

var video = getByID('video');
var audio = getByID('audio');

// Audio video constraints
var videoConstraints = {
    audio: false,
    video: {
        mandatory: {},
        optional: []
    }
};

var audioConstraints = {
    audio: true,
    video: false
};

// Recording and stop recording - to be convrted into real time capture and chunking 
const ws_uri = 'ws://localhost:8888/kurento';
var URL_SMALL = "http://localhost:8080/streamtomp4/approach1/5561840332.webm";

var audioStream;
var recorder;

recordAudio.onclick = function () {
    if (!audioStream)
        navigator.getUserMedia(audioConstraints, function (stream) {
            if (window.IsChrome) stream = new window.MediaStream(stream.getAudioTracks());
            audioStream = stream;
            audio.src = URL.createObjectURL(audioStream);
            audio.muted = true;
            audio.play();
            // "audio" is a default type
            recorder = window.RecordRTC(stream, {
                type: 'audio'
            });
            recorder.startRecording();
        }, function () {
        });
    else {
        audio.src = URL.createObjectURL(audioStream);
        audio.muted = true;
        audio.play();
        if (recorder) recorder.startRecording();
    }
    window.isAudio = true;
    this.disabled = true;
    stopRecordingAudio.disabled = false;
};

Recording and stopping the recording of video into small media files (chunks):

recordVideo.onclick = function () {
    recordVideoOrGIF(true);
};
stopRecordingAudio.onclick = function () {
    this.disabled = true;
    recordAudio.disabled = false;
    audio.src = '';

    if (recorder)
        recorder.stopRecording(function (url) {
            audio.src = url;
            audio.muted = false;
            audio.play();

            document.getElementById('audio-url-preview').innerHTML = '<a href="' + url + '" target="_blank">Recorded Audio URL</a>';
        });
};
function recordVideoOrGIF(isRecordVideo) {
    navigator.getUserMedia(videoConstraints, function (stream) {

        video.onloadedmetadata = function () {
            video.width = 320;
            video.height = 240;

            var options = {
                type: isRecordVideo ? 'video' : 'gif',
                video: video,
                canvas: {
                    width: canvasWidth_input.value,
                    height: canvasHeight_input.value
                }
            };

            recorder = window.RecordRTC(stream, options);
            recorder.startRecording();
        };
        video.src = URL.createObjectURL(stream);
    }, function () {
        if (document.getElementById('record-screen').checked) {
            if (location.protocol === 'http:')
                alert('<https> is mandatory to capture screen.');
            else
                alert('Multi-capturing of screen is not allowed. Capturing process is denied. Are you enabled flag: "Enable screen capture support in getUserMedia"?');
        } else
            alert('Webcam access is denied.');
    });

    window.isAudio = false;

    if (isRecordVideo) {
        recordVideo.disabled = true;
        stopRecordingVideo.disabled = false;
    } else {
        recordGIF.disabled = true;
        stopRecordingGIF.disabled = false;
    }
}

stopRecordingVideo.onclick = function () {
    this.disabled = true;
    recordVideo.disabled = false;

    if (recorder)
        recorder.stopRecording(function (url) {
            video.src = url;
            video.play();
            document.getElementById('video-url-preview').innerHTML = '<a href="' + url + '" target="_blank">Recorded Video URL</a>';

        });
};

Broadcasting the chunks to media engine

function onerror(error) {
    console.log(" error occured");
    console.error(error);
}

broadcast.onclick = function () {
var videoOutput = document.getElementById("videoOutput");
KwsMedia(ws_uri, function (error, kwsMedia) {
    if (error) return onerror(error);
    // Create pipeline
    kwsMedia.create('MediaPipeline', function (error, pipeline) {
        if (error) return onerror(error);
        // Create pipeline media elements (endpoints & filters)
        pipeline.create('PlayerEndpoint', {uri: URL_SMALL}, function (error, player) {
                if (error) return console.error(error);

                pipeline.create('HttpGetEndpoint', function (error, httpGet) {
                    if (error) return onerror(error);
                    // Connect media element between them
                    player.connect(httpGet, function (error, pipeline) {
                        if (error) return onerror(error);
                        // Set the video on the video tag
                        httpGet.getUrl(function (error, url) {
                            if (error) return onerror(error);
                            videoOutput.src = url;
                            console.log(url);
                            // Start player
                            player.play(function (error) {
                                if (error) return onerror(error);
                                console.log('player.play');
                            });
                        });
                    });

                    // Subscribe to HttpGetEndpoint EOS event
                    httpGet.on('EndOfStream', function (event) {
                        console.log("EndOfStream event:", event);
                    });
                });
            });
    });
}, onerror);
}

Problem: Dissecting the live video into small files and appending them to each other on reception is an expensive, time- and resource-consuming process. It also involves heavy buffering and other problems pertaining to real-time streaming.

Attempt 2.2: Send the recorded chunks of webm to a port on a Linux server. Use socket programming to pick up these individual files and play them using the VLC player from a UDP port of the Linux server.

[Screenshot: VLC playing chunks from the UDP port]

End result: Small file containers play, but slow buffering makes this approach unsuitable for streaming file chunks and appending them as a single file.

Attempt 2.3: Send the recorded chunks of webm to a port on a Linux server socket. Use socket programming to pick up these individual webm files and convert them to H264 format so that they can be sent to a media server.

This process involved the following components :

  • Recorder Javascript library : RecordJs
  • Transfer mechanism :WebRTC endpoint -> Call handler ( Record in chunks ) -> ffmpeg / gstreamer to put it on RTP -> streaming server like wowza – > viewers
  • Programs : HTML webpage WebSocket connection -> nodejs program to write content from the websocket to a Linux socket -> nodejs program to read that socket and print the content on the console

Snippet to transfer the recorded webm files over a websocket to the nodejs program:

// Make the function wait until the connection is made.
function waitForSocketConnection(socket, callback) {
    setTimeout(
        function () {
            if (socket.readyState === 1) {
                console.log("Connection is made")
                if (callback != null) 
                    callback();
            } else {
                console.log("wait for connection...")
                waitForSocketConnection(socket, callback);
            }
        }, 5); // wait 5 milisecond for the connection...
}

function previewFile() {
    var preview = document.querySelector('img');
    var file = document.querySelector('input[type=file]').files[0];
    var reader = new FileReader();

    reader.onloadend = function () {
        preview.src = reader.result;
        console.log(" reader result ", reader.result);

        var video = document.getElementById("v");
        video.src = reader.result;
        console.log(" video played ");

        var ws = new WebSocket('ws://localhost:3000', 'echo-protocol');
        waitForSocketConnection(ws, function () {
            ws.send(reader.result);
            console.log("message sent!!!");
        });

    }

    if (file) {
        // converts to base64 encoded string of the file data
        //reader.readAsDataURL(file);
        reader.readAsBinaryString(file);
    } else {
        preview.src = "";
    }
}

Program for the Linux socket sender, which creates the socket for the webm files in nodejs:

var net = require('net');
var fs = require('fs');
var socketPath = '/tmp/tfxsocket';
var http = require('http');
var stream = require('stream');
var util = require('util');

var WebSocketServer = require('ws').Server;
var port = 3000;
var serverUrl = "localhost";

var socket;
/*----------http server -----------*/
var server = http.createServer(function (request, response) {});
server.listen(port, serverUrl);
console.log('HTTP Server running at ', serverUrl, port);

/*------websocket server ----------*/
var wss = new WebSocketServer({server: server});

wss.on("connection", function (ws) {
    console.log("websocket connection open");
    ws.on('message', function (message) {
        console.log(" stream recived from broadcast client on port 3000 ");
        var s = require('net').Socket();
        s.connect(socketPath);
        s.write(message);
        console.log(" send the stream to socketPath", socketPath);
    });

    ws.on("close", function () {
        console.log("websocket connection close")
    });
});

Program for the Linux socket listener using nodejs. Here the socket is at /tmp/mysocket.

var net = require('net');
var client = net.createConnection("/tmp/mysocket");
client.on("connect", function() {
    console.log("connected to mysocket");
});
client.on("data", function(data) {
    console.log(data);
});
client.on('end', function() {
    console.log('server disconnected');
});

Output 1: Video Buffer displayed


Output 2 : Payload from Video displayed that shows the pipeline works but no output yet.


ffmpeg format for transferring the content from the socket to a UDP IP and port:

ffmpeg -i unix://tmp/mysocket -f format udp://192.168.0.119:8083

Problem with this approach: The video was just passing through the socket and contained no usable information when I tried to play it or show it on the console.


Attempt 3: Use an existing media engine like Kurento to do the transcoding for me.

Send the live WebRTC stream from a Kurento WebRTC endpoint to a Kurento HTTP endpoint, then play it using the Mozilla VLC web plugin.

The VLC Mozilla plugin can be embedded by:

<embed name="video2"
autoplay="yes" loop="no" hidden="no"
target="rtp://@192.165.0.119:8086" />

Screenshot of the failure of the Mozilla VLC plugin to play from a WebRTC endpoint:


Problem: The VLC Mozilla plugin was unable to play the video, and Mozilla-only playback was a difficult option for most consumers.

Continued in the next article: continue : Streaming / broadcasting Live Video call to non webrtc supported browsers and media players

More articles on similar topics