
crtmpserver + ffmpeg

This post walks through installing, running and using crtmpserver on a 64-bit Ubuntu machine, with ffmpeg as the stream source.

gcc and cmake

We shall build the components directly from source. For this we first need to determine whether gcc is installed on the machine.

GNU Compiler Collection (GCC) is a compiler system produced by the GNU Project supporting various programming languages (C, C++, Objective-C, Fortran, Java, Ada, Go etc).

If it is not installed, run the following command:

sudo apt-get install build-essential

Once it is installed, it can be tested by printing the version:
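gcc --version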

Screenshot from 2016-06-09 11-24-33.png

cmake is a software build tool. It uses compiler-independent configuration files and generates native makefiles and workspaces that can be used in different compiler environments.
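If cmake is not already present, it can be installed from the standard Ubuntu repositories and verified the same way:

sudo apt-get install cmake
cmake --version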

Crtmpserver

To get the source code, first install git, then clone the project from https://github.com/j0sh/crtmpserver

sudo apt-get install git
git clone https://github.com/j0sh/crtmpserver.git
cd crtmpserver/builders/cmake

Next we generate all the makefiles using cmake.

cmake .

The output should look as follows:

Screenshot from 2016-06-09 11-47-05

Run make to compile:

make

Screenshot from 2016-06-09 11-57-19

Run the server using the following command. It should print out a list of ports and their respective applications:

./crtmpserver/crtmpserver crtmpserver/crtmpserver.lua

+---+---------------+-----+-------------------------+-------------------------+
|                                  Services                                   |
+---+---------------+-----+-------------------------+-------------------------+
| c |       ip      | port|   protocol stack name   |     application name    |
+---+---------------+-----+-------------------------+-------------------------+
|tcp|        0.0.0.0| 1112|           inboundJsonCli|                    admin|
|tcp|        0.0.0.0| 1935|              inboundRtmp|              appselector|
|tcp|        0.0.0.0| 8081|             inboundRtmps|              appselector|
|tcp|        0.0.0.0| 8080|             inboundRtmpt|              appselector|
|tcp|        0.0.0.0| 6666|           inboundLiveFlv|              flvplayback|
|tcp|        0.0.0.0| 9999|             inboundTcpTs|              flvplayback|
|tcp|        0.0.0.0| 6665|           inboundLiveFlv|             proxypublish|
|tcp|        0.0.0.0| 8989|         httpEchoProtocol|            samplefactory|
|tcp|        0.0.0.0| 8988|             echoProtocol|            samplefactory|
|tcp|        0.0.0.0| 1111|    inboundHttpXmlVariant|                  vptests|
+---+---------------+-----+-------------------------+-------------------------+
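You can cross-check that crtmpserver is actually listening on these ports with netstat (standard Linux tooling, not part of crtmpserver itself):

sudo netstat -ltnp | grep crtmpserver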

If you see the following types of errors while pushing a stream to crtmpserver, they just denote that your pipeline is not using the correct format.

/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpacceptor.cpp:154 Client connected: 127.0.0.1:55524 -> 0.0.0.0:8080
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:119 Handlers count changed: 11->12 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/http/basehttpprotocol.cpp:281 Headers section too long
/home/altanai/crtmpserver/sources/thelib/src/protocols/http/basehttpprotocol.cpp:153 Unable to read response headers: CTCP(16) <-> TCP(13) <-> [IHTT(14)] <-> IH4R(15)
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpcarrier.cpp:89 Unable to signal data available
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:129 Handlers count changed: 12->11 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/protocolmanager.cpp:45 Enqueue for delete for protocol [IH4R(15)]
/home/altanai/crtmpserver/sources/thelib/src/application/baseclientapplication.cpp:240 Protocol [IH4R(15)] unregistered from application: appselector
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpacceptor.cpp:154 Client connected: 127.0.0.1:44964 -> 0.0.0.0:9999
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:119 Handlers count changed: 11->12 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/ts/inboundtsprotocol.cpp:211 I give up. I'm unable to detect the ts chunk size
/home/altanai/crtmpserver/sources/thelib/src/protocols/ts/inboundtsprotocol.cpp:136 Unable to determine chunk size
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpcarrier.cpp:89 Unable to signal data available
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:129 Handlers count changed: 12->11 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/protocolmanager.cpp:45 Enqueue for delete for protocol [ITS(17)]
/home/altanai/crtmpserver/sources/thelib/src/application/baseclientapplication.cpp:240 Protocol [ITS(17)] unregistered from application: flvplayback
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpacceptor.cpp:154 Client connected: 127.0.0.1:37754 -> 0.0.0.0:1935
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:119 Handlers count changed: 11->12 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/rtmp/inboundrtmpprotocol.cpp:77 Handshake type not implemented: 85
/home/altanai/crtmpserver/sources/thelib/src/protocols/rtmp/basertmpprotocol.cpp:309 Unable to perform handshake
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpcarrier.cpp:89 Unable to signal data available
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:129 Handlers count changed: 12->11 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/protocolmanager.cpp:45 Enqueue for delete for protocol [IR(19)]
/home/altanai/crtmpserver/sources/thelib/src/application/baseclientapplication.cpp:240 Protocol [IR(19)] unregistered from application: appselector
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpacceptor.cpp:154 Client connected: 127.0.0.1:48368 -> 0.0.0.0:6666
/home/altanai/crtmpserver/sources/thelib/src/protocols/liveflv/inboundliveflvprotocol.cpp:51 _waitForMetadata: 1
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:119 Handlers count changed: 11->12 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/liveflv/baseliveflvappprotocolhandler.cpp:45 protocol CTCP(16) <-> TCP(20) <-> [ILFL(21)] registered to app flvplayback
/home/altanai/crtmpserver/sources/thelib/src/protocols/liveflv/inboundliveflvprotocol.cpp:102 Frame too large: 6324058
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/tcpcarrier.cpp:89 Unable to signal data available
/home/altanai/crtmpserver/sources/thelib/src/netio/epoll/iohandlermanager.cpp:129 Handlers count changed: 12->11 IOHT_TCP_CARRIER
/home/altanai/crtmpserver/sources/thelib/src/protocols/protocolmanager.cpp:45 Enqueue for delete for protocol [ILFL(21)]
/home/altanai/crtmpserver/sources/thelib/src/protocols/liveflv/baseliveflvappprotocolhandler.cpp:58 protocol [ILFL(21)] unregistered from app flvplayback
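For instance, the inboundTcpTs service on port 9999 expects an MPEG-TS stream, so a push in the correct format would look something like this (a sketch; the input file and host are placeholders):

ffmpeg -re -i input.mp4 -f mpegts tcp://127.0.0.1:9999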

ffmpeg

Download and install ffmpeg from git

 git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
cd ffmpeg

Once the source code is obtained we need to configure, make and make install it.
We need the following libraries for muxing and encoding, such as libx264 for H.264, so we configure with the following options:

./configure \
  --prefix="$HOME/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --bindir="$HOME/bin" \
  --enable-gpl \
  --enable-libass \
  --enable-libfreetype \
  --enable-libopus \
  --enable-libtheora \
  --enable-libvorbis \
  --enable-libx264 \
  --enable-libx265 \
  --enable-nonfree

then make and make install:

make
sudo make install

Screenshot from 2016-06-09 16-59-49
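Since --bindir points at $HOME/bin in the configure step above, the freshly built binary can be verified from there:

~/bin/ffmpeg -version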

In case of errors from the ffmpeg configure command, you need to install the respective missing / not-found library, as in the examples below.

libass

sudo apt-get install libass-dev

libmp3lame

sudo apt-get install libmp3lame-dev

libaacplus

sudo apt-get install autoconf
sudo apt-get install libtool

wget -O libaacplus-2.0.2.tar.gz http://tipok.org.ua/downloads/media/aacplus/libaacplus/libaacplus-2.0.2.tar.gz
tar -xzf libaacplus-2.0.2.tar.gz
cd libaacplus-2.0.2
./autogen.sh --with-parameter-expansion-string-replace-capable-shell=/bin/bash --host=arm-unknown-linux-gnueabi --enable-static

make
sudo make install

libvorbis
libvorbis is a compressed audio format for mid to high quality (8kHz-48.0kHz, 16+ bit, polyphonic) audio and music at fixed and variable bitrates from 16 to 128 kbps/channel. It is in the same league as MPEG-4 AAC.

wget http://downloads.xiph.org/releases/vorbis/libvorbis-1.3.2.tar.bz2
tar -xjf libvorbis-1.3.2.tar.bz2
cd libvorbis-1.3.2
./configure && make && sudo make install

libx264
libx264 encodes video streams into the H.264/MPEG-4 AVC compression format and is released under the terms of the GNU GPL.

git clone git://git.videolan.org/x264
cd x264
./configure --host=arm-unknown-linux-gnueabi --enable-static --disable-opencl
make
sudo make install

libvpx
libvpx is an emerging open video compression library which is gaining popularity for distributing high definition video content on the internet.

sudo apt-get install checkinstall
git clone https://chromium.googlesource.com/webm/libvpx
cd libvpx
./configure
make
sudo checkinstall --pkgname=libvpx --pkgversion="1:$(date +%Y%m%d%H%M)-git" --backup=no     --deldoc=yes --fstrans=no --default
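checkinstall wraps make install into a proper .deb package; you can confirm the package was registered with dpkg (standard Debian/Ubuntu tooling):

dpkg -l | grep libvpx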

librtmp
librtmp provides support for the RTMP content streaming protocol developed by Adobe and commonly used to distribute content to flash video players on the web.

sudo apt-get install libssl-dev
cd /home/pi/src
git clone git://git.ffmpeg.org/rtmpdump
cd rtmpdump
make SYS=posix
sudo checkinstall --pkgname=rtmpdump --pkgversion="2:$(date +%Y%m%d%H%M)-git" --backup=no --deldoc=yes --fstrans=no --default

Reference:
http://www.videolan.org/developers/x265.html
https://trac.ffmpeg.org/wiki/CompilationGuide/RaspberryPi
http://wiki.serviio.org/doku.php?id=howto:linux:install:raspbian
http://lame.sourceforge.net/

Additionally, the pkg-config --list-all command lists all the installed libraries.
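For example, to check whether x264 was picked up (assuming it installed its pkg-config file):

pkg-config --list-all | grep x264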


RTMP streaming

1. Start the stream from a linux machine using ffmpeg:

ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -f flv -s qvga -b 750000 -ar 11025 -metadata streamName=aaa "tcp://<hidden_ip>:6666/live";

Screenshot from 2016-06-11 17-50-02

2. View the incoming packets and stats on the terminal at crtmpserver:

Screenshot from 2016-06-11 17-53-22

3. Play back the live stream from another machine using ffplay:

ffplay -i rtmp://server_ip:1935/live/ccc

Screenshot from 2016-06-09 15-43-58
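VLC should be able to play the same RTMP URL as well (a hedged alternative, not tested in this setup):

vlc rtmp://server_ip:1935/live/ccc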

RTSP streaming

1. Start the RTSP stream from a linux machine using ffmpeg.

Here we use resolution 320×240 and stream name test:

ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -an -r 10 -c:v libx264 -q 1 -f rtsp -metadata title=test rtsp://server_ip:5554/flvplayback

crtmp2

2. View the incoming packets and stats on the terminal at crtmpserver.

3. Play back the live stream from another machine.

Using ffplay:

ffplay rtsp://server_ip:5554/flvplayback/test

Screenshot from 2016-06-09 18-17-07

Using VLC:

vlc rtsp://server_ip:5554/flvplayback/test

continue : Streaming / broadcasting Live Video call to non webrtc supported browsers and media players

This post continues my attempts / outcomes and problems in building a WebRTC to RTP media framework that can successfully stream / broadcast WebRTC content to non-WebRTC-supported browsers (Safari / IE) and media players (VLC).


Attempt 4: Stream the content to a WebRTC endpoint which is hidden in a video call. Pick the stream from the VP8 object URL and send it to a streaming server.

This process involved the following components :

  • WebRTC API : simplewebrtc on Chrome
  • Transfer mechanism from client to Streaming server:  webrtc media channel

Problems: no streaming server is equipped to handle a direct WebRTC input and stream it on the network.


Attempt 4.1: Stream the content to a WebRTC endpoint. Do a WebRTC Endpoint to RTP Endpoint bridge using Kurento APIs.

Use the RTP port and IP address as input to an ffmpeg / gstreamer / VLC terminal command and output a live H264 stream on another IP and port.

This process involved the following components :

  • API : Kurento
  • Transfer mechanism : HTML5 webrtc client -> application server hosting java -> media server -> application for webrtc media to RTP media conversion -> RTP player

Screenshots of attempts with Wowza to stream from an IP and port:

kurentowowoza

Problems:

  • The stream was black, i.e. no video content.

Attempt 4.2 : Build a WebRTC Endpoint to Http endpoint in kurento and force the video audio encoding to be that of H264 and PCMU.

Code for adding constraints to the output media and forcing the choice of codecs:

MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
HttpGetEndpoint httpEndpoint = new HttpGetEndpoint.Builder(pipeline).build();

org.kurento.client.Fraction fr = new org.kurento.client.Fraction(1, 30);
VideoCaps vc = new VideoCaps(VideoCodec.H264, fr);
httpEndpoint.setVideoFormat(vc);

AudioCaps ac = new AudioCaps(AudioCodec.PCMU, 65536);
httpEndpoint.setAudioFormat(ac);

webRtcEndpoint.connect(httpEndpoint);

Code for using a GStreamer filter to force the output into raw format. It is an alternate solution to the above:

//basic media operation of 1 pipeline and 2 endpoints
MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
RtpEndpoint rtpEndpoint = new RtpEndpoint.Builder(pipeline).build();

//adding GStreamer filters
GStreamerFilter filter1 = new GStreamerFilter.Builder(pipeline, "videorate max-rate=30").withFilterType(FilterType.VIDEO).build();
GStreamerFilter filter2 = new GStreamerFilter.Builder(pipeline, "capsfilter caps=video/x-h264,width=1280,height=720,framerate=30/1").withFilterType(FilterType.VIDEO).build();
GStreamerFilter filter3 = new GStreamerFilter.Builder(pipeline, "capsfilter caps=audio/x-mpeg,layer=3,rate=48000").withFilterType(FilterType.AUDIO).build();

//connecting all points to one another
webRtcEndpoint.connect(filter1);
filter1.connect(filter2);
filter2.connect(filter3);
filter3.connect(rtpEndpoint);

// RTP SDP offer and answer
String requestRTPsdp = rtpEndpoint.generateOffer();
rtpEndpoint.processAnswer(requestRTPsdp);

Problem: the output is still WebM.


Attempt 5: Use an RTP SDP Endpoint (i.e. an SDP file valid for a given session) and use it to play the WebRTC media over the Wowza streaming server

This process involved the following components

  1. WebRTC Stream and object URL of the blob containing VP8 media
  2. Kurento  WebRTC Endpoint  bridge to generate SDP
  3. Wowza Streaming server

Code for Kurento to generate an SDP file from the WebRTC to RTP bridge:

@RequestMapping(value = "/rtpsdp", method = RequestMethod.POST)
private String processRequestrtpsdp(@RequestBody String sdpOffer)
        throws IOException, URISyntaxException, InterruptedException {

    //basic media operation of 1 pipeline and 2 endpoints
    MediaPipeline pipeline = kurento.createMediaPipeline();
    WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
    RtpEndpoint rtpEndpoint = new RtpEndpoint.Builder(pipeline).build();

    //connecting all points to one another
    webRtcEndpoint.connect(rtpEndpoint);

    // RTP SDP offer and answer
    String requestRTPsdp = rtpEndpoint.generateOffer();
    rtpEndpoint.processAnswer(requestRTPsdp);

    // write the SDP connector to an external file
    PrintWriter out = new PrintWriter("/tmp/test.sdp");
    out.println(requestRTPsdp);
    out.close();

    HttpGetEndpoint httpEndpoint = new HttpGetEndpoint.Builder(pipeline).build();
    PlayerEndpoint player = new PlayerEndpoint.Builder(pipeline, requestRTPsdp).build();
    httpEndpoint.connect(rtpEndpoint);
    player.connect(httpEndpoint);

    // Playing media and opening the default desktop browser
    player.play();
    String videoUrl = httpEndpoint.getUrl();
    System.out.println(" ------- video URL -------------" + videoUrl);

    // send the response to the front client
    String responseSdp = webRtcEndpoint.processOffer(sdpOffer);

    return responseSdp;
}

Problems: Wowza does not recognize the WebRTC SDP and does not play the video.

Screenshot of Wowza with the SDP input:

Screenshot from 2015-01-30 15:28:59


Attempt 5.1: Use an RTP SDP Endpoint (i.e. an SDP file valid for a given session) and use it to play the WebRTC media over the default Ubuntu media player

The SDP file formed contains contents such as:

v=0
o=- 3631611195 3631611195 IN IP4 192.168.0.119
s=Kurento Media Server
c=IN IP4 192.168.0.119
t=0 0
m=audio 42802 RTP/AVP 98 99 0
a=rtpmap:98 OPUS/48000/2
a=rtpmap:99 AMR/8000/1
a=rtpmap:0 PCMU/8000
a=ssrc:2713728673 cname:user59375791@host-ad1117df
m=video 35946 RTP/AVP 96 97 100 101
a=rtpmap:96 H263-1998/90000
a=rtpmap:97 VP8/90000
a=rtpmap:100 MP4V-ES/90000
a=rtpmap:101 H264/90000
a=ssrc:93449274 cname:user59375791@host-ad1117df

Problem: deformed media.

Screenshot of playing from an SDP file:

Screenshot from 2015-01-29 17:42:21


Attempt 5.2: Use an RTP SDP Endpoint (i.e. an SDP file valid for a given session) and use it to play the WebRTC media over VLC using socket input

Problem: nothing plays.

Screenshot of VLC connected to play from the socket, failing to play anything:

Screenshot from 2015-01-21 17:49:52

Attempt 5.3: Create a WebRTC endpoint and connect it to an RTP endpoint via media pipelines. Also make the RTP SDP offer and answer the same. Play with ffmpeg / ffplay / gst playbin.

String requestRTPsdp = rtpEndpoint.generateOffer();
rtpEndpoint.processAnswer(requestRTPsdp);

Write the requestRTPsdp to a file to obtain an RTP connector endpoint with Application/SDP. It plays okay with gst playbin (10 secs, without audio).

Successful attempt to play from gst playbin:

gst-launch -vvv playbin uri=file:///tmp/test.sdp 

donekurento streaming
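Note that the command above uses the legacy GStreamer 0.10 gst-launch syntax; on systems with GStreamer 1.0 the equivalent (untested here) would be:

gst-launch-1.0 -vvv playbin uri=file:///tmp/test.sdp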

but it refuses to be played by VLC, ffplay and even Wowza. The errors generated with

ffmpeg -i test.sdp -vcodec copy -acodec copy -f mpegts output-file.ts

or

ffmpeg -re -i test.sdp -vcodec h264 -acodec mp3 -f mpegts "udp://192.168.4.26:5000"

are:

Could not find codec parameter for stream1 (video: h263, none)

Other error types are "Could not write header for output file" and "output file is empty, nothing was encoded".

Error screenshots of trying to play the RTP SDP file with ffmpeg:

ffmpeg error kurebto1 ffmpeg error kurebto2


Attempt 6: Use a WebRTC-capable media and streaming server (e.g. Kurento) to pick up a live stream of VP8. Convert the VP8 to H264 (ffmpeg / RTP endpoint). Convert the H264 to MP4 using an MP4 parser and pass it to a streaming server (Wowza).

In progress, to be updated.

Streaming / broadcasting Live Video call to non webrtc supported browsers and media players

As the title of this article suggests, I am going to pen down my attempts at streaming / broadcasting a live WebRTC video call to non-WebRTC-supported browsers and media players such as VLC, ffplay, the default video player in Linux, etc.

I am currently attempting to do this by making my own MP4 engine from the WebRTC feed. However I am sharing my past experiments in the hope of helping someone whose objective is not the same as mine and who might get some help from these threads.


Attempt 1: use a one-to-many broadcasting API:

<!DOCTYPE html>
<html id="home" lang="en">

<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta name="author" content="altanai">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">

<link rel="stylesheet" type="text/css" href="style.css">

</head>

<body>

<table class="visible">
<tr>
<td style="text-align: right;">
<input type="text" id="conference-name" placeholder="Broadcast Name">
</td>
<td>
<select id="broadcasting-option">
<option>Audio + Video</option>
<option>Only Audio</option>
<option>Screen</option>
</select>
</td>
<td>
<button id="start-conferencing">Start Broadcasting</button>
</td>
</tr>
</table>
<table id="rooms-list" class="visible"></table>

<div id="participants"></div>

<script src="RTCPeerConnection-v1.5.js"></script>
<script src="firebase.js"></script>
<script src="broadcast.js"></script>
<script src="broadcast-ui.js"></script>

</body>

</html>

It uses the API from webrtc-experiment.com. The broadcast is in one direction only, and the viewers are never asked for their mic / webcam permission.

Problem: the broadcast is for WebRTC browsers only and doesn't support non-WebRTC players / browsers.


Attempt 1.1: Stream the media directly to nodejs through a websocket


window.addEventListener('DOMContentLoaded', function() {

    var v = document.getElementById('v');
    navigator.getUserMedia = (navigator.getUserMedia ||
        navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia ||
        navigator.msGetUserMedia);

    if (navigator.getUserMedia) {
        // Request access to video only
        navigator.getUserMedia({
            video: true,
            audio: false
        },
        function(stream) {
            var url = window.URL || window.webkitURL;
            v.src = url ? url.createObjectURL(stream) : stream;
            v.play();

            var ws = new WebSocket('ws://localhost:3000', 'echo-protocol');
            waitForSocketConnection(ws, function() {
                console.log(" url.createObjectURL(stream)-----", url.createObjectURL(stream));
                ws.send(stream);
                console.log("message sent!!!");
            });
        },
        function(error) {
            alert('Something went wrong. (error code ' + error.code + ')');
            return;
        });
    } else {
        alert('Sorry, the browser you are using doesn\'t support getUserMedia');
        return;
    }
});

// Make the function wait until the connection is made...
function waitForSocketConnection(socket, callback) {
    setTimeout(function() {
        if (socket.readyState === 1) {
            console.log("Connection is made");
            if (callback != null) {
                callback();
            }
            return;
        } else {
            console.log("wait for connection...");
            waitForSocketConnection(socket, callback);
        }
    }, 5); // wait 5 milliseconds for the connection...
}

Problem: the video is in the form of a buffer and does not play.


Attempt 2: Record the WebRTC media into chunks of webm format (5 secs each) -> transfer them to the other end -> append the chunks together like a regular file

This process involved the following components :

  • Recorder Javascript library : RecordRTC.js
  • Transfer mechanism : record using RecordRTC.js -> send to the media server at the other end -> stitch the small webm files together into a big one at runtime and play
  • Programs :

Code for video recorder

navigator.getUserMedia(videoConstraints, function(stream) {

    video.onloadedmetadata = function() {
        video.width = 320;
        video.height = 240;

        var options = {
            type: isRecordVideo ? 'video' : 'gif',
            video: video,
            canvas: {
                width: canvasWidth_input.value,
                height: canvasHeight_input.value
            }
        };

        recorder = window.RecordRTC(stream, options);
        recorder.startRecording();
    };
    video.src = URL.createObjectURL(stream);
}, function() {
    if (document.getElementById('record-screen').checked) {
        if (location.protocol === 'http:')
            alert('<https> is mandatory to capture screen.');
        else
            alert('Multi-capturing of screen is not allowed. Capturing process is denied. Are you enabled flag: "Enable screen capture support in getUserMedia"?');
    } else
        alert('Webcam access is denied.');
});

Code for the video appender

var FILE1 = '1.webm';
var FILE2 = '2.webm';
var FILE3 = '3.webm';
var FILE4 = '4.webm';
var FILE5 = '5.webm';

var NUM_CHUNKS = 5;
var video = document.querySelector('video');

window.MediaSource = window.MediaSource || window.WebKitMediaSource;
if (!!!window.MediaSource) {
    alert('MediaSource API is not available');
}

var mediaSource = new MediaSource();

video.src = window.URL.createObjectURL(mediaSource);

function callback(e) {

    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');

    GET(FILE1, function(uInt8Array) {

        var file = new Blob([uInt8Array], {type: 'video/webm'});
        var i = 1;

        (function readChunk_(i) {

            var reader = new FileReader();

            reader.onload = function(e) {

                sourceBuffer.appendBuffer(new Uint8Array(e.target.result));

                if (i == NUM_CHUNKS) {
                    mediaSource.endOfStream();
                } else {
                    if (video.paused) {
                        video.play(); // Start playing after 1st chunk is appended.
                    }
                    readChunk_(++i);
                }
            };

            reader.readAsArrayBuffer(file);

        })(i); // Start the recursive call by self calling.
    });
}

mediaSource.addEventListener('sourceopen', callback, false);
mediaSource.addEventListener('webkitsourceopen', callback, false);
mediaSource.addEventListener('webkitsourceended', function(e) {
    logger.log('mediaSource readyState: ' + this.readyState);
}, false);

// function to get the video via XHR
function GET(url, callback) {

    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.responseType = 'arraybuffer';
    xhr.send();

    xhr.onload = function(e) {

        if (xhr.status != 200) {
            alert("Unexpected status code " + xhr.status + " for " + url);
            return false;
        }

        callback(new Uint8Array(xhr.response));
    };
}

Shortcomings of this approach:

  1. The webm files failed to play on most media players.
  2. The recorder can only record either video or audio at a time.

Attempt 2.1: Record the WebRTC media into chunks of webm format (5 secs each, RecordRTC.js) -> use the Kurento JS script (kws-media-api.js) to make an HTTP Endpoint to the recorded webm files -> append the chunks together like a regular file at runtime


function getByID(id) {
return document.getElementById(id);
}

var recordAudio = getByID('record-audio'),
recordVideo = getByID('record-video'),
stopRecordingAudio = getByID('stop-recording-audio'),
stopRecordingVideo = getByID('stop-recording-video'),
broadcasting=getByID('broadcasting');

var canvasWidth_input = getByID('canvas-width-input'),
canvasHeight_input = getByID('canvas-height-input');

var video = getByID('video');
var audio = getByID('audio');

var videoConstraints = {
audio: false,
video: {
mandatory: {},
optional: []
}
};

var audioConstraints = {
audio: true,
video: false
};

const ws_uri = 'ws://localhost:8888/kurento';
var URL_SMALL="http://localhost:8080/streamtomp4/approach1/5561840332.webm";


var audioStream;
var recorder;

recordAudio.onclick = function() {
if (!audioStream)
navigator.getUserMedia(audioConstraints, function(stream) {

if (window.IsChrome) stream = new window.MediaStream(stream.getAudioTracks());
audioStream = stream;

audio.src = URL.createObjectURL(audioStream);
audio.muted = true;
audio.play();

// "audio" is a default type
recorder = window.RecordRTC(stream, {
type: 'audio'
});
recorder.startRecording();
}, function() {});
else {
audio.src = URL.createObjectURL(audioStream);
audio.muted = true;
audio.play();
if (recorder) recorder.startRecording();
}


window.isAudio = true;

this.disabled = true;
stopRecordingAudio.disabled = false;
};

stopRecordingAudio.onclick = function() {
this.disabled = true;
recordAudio.disabled = false;
audio.src = '';

if (recorder)
recorder.stopRecording(function(url) {
audio.src = url;
audio.muted = false;
audio.play();

document.getElementById('audio-url-preview').innerHTML = '<a href="' + url + '" target="_blank">Recorded Audio URL</a>';
});
};

recordVideo.onclick = function() {
recordVideoOrGIF(true);
};


function recordVideoOrGIF(isRecordVideo) {
navigator.getUserMedia(videoConstraints, function(stream) {

video.onloadedmetadata = function() {
video.width = 320;
video.height = 240;

var options = {
type: isRecordVideo ? 'video' : 'gif',
video: video,
canvas: {
width: canvasWidth_input.value,
height: canvasHeight_input.value
}
};

recorder = window.RecordRTC(stream, options);
recorder.startRecording();
};
video.src = URL.createObjectURL(stream);
}, function() {
if (document.getElementById('record-screen').checked) {
if (location.protocol === 'http:')
alert('<https> is mandatory to capture screen.');
else
alert('Multi-capturing of screen is not allowed. Capturing process is denied. Are you enabled flag: "Enable screen capture support in getUserMedia"?');
} else
alert('Webcam access is denied.');
});

window.isAudio = false;

if (isRecordVideo) {
recordVideo.disabled = true;
stopRecordingVideo.disabled = false;
} else {
recordGIF.disabled = true;
stopRecordingGIF.disabled = false;
}
}

stopRecordingVideo.onclick = function() {
this.disabled = true;
recordVideo.disabled = false;

if (recorder)
recorder.stopRecording(function(url) {
video.src = url;
video.play();
document.getElementById('video-url-preview').innerHTML = '<a href="' + url + '" target="_blank">Recorded Video URL</a>';

});
};


/*--------------------------broadcasting -----------------------------------*/

function onerror(error)
{
console.log( " error occured");
console.error(error);
};

broadcast.onclick = function() {
var videoOutput = document.getElementById("videoOutput");

KwsMedia(ws_uri, function(error, kwsMedia)
{
if(error) return onerror(error);

// Create pipeline
kwsMedia.create('MediaPipeline', function(error, pipeline)
{
if(error) return onerror(error);

// Create pipeline media elements (endpoints & filters)
pipeline.create('PlayerEndpoint', {uri: URL_SMALL},
function(error, player)
{
if(error) return console.error(error);

pipeline.create('HttpGetEndpoint', function(error, httpGet)
{
if(error) return onerror(error);

// Connect media element between them
player.connect(httpGet, function(error, pipeline)
{
if(error) return onerror(error);
// Set the video on the video tag
httpGet.getUrl(function(error, url)
{
if(error) return onerror(error);

videoOutput.src = url;

console.log(url);

// Start player
player.play(function(error)
{
if(error) return onerror(error);

console.log('player.play');
});
});
});

// Subscribe to HttpGetEndpoint EOS event
httpGet.on('EndOfStream', function(event)
{
console.log("EndOfStream event:", event);
});
});
});
});
},
onerror);

}

Problem: dissecting the live video into small files and appending them to one another on reception is an expensive, time- and resource-consuming process. It also involves heavy buffering and other problems pertaining to real-time streaming.


Attempt 2.2: Send the recorded chunks of webm to a port on a linux server. Use socket programming to pick up these individual files and play them with VLC from a UDP port of the Linux server

Screenshot from 2015-01-22 15:32:51


Attempt 2.3: Send the recorded chunks of webm to a socket on a linux server. Use socket programming to pick up these individual webm files and convert them to H264 format so that they can be sent to a media server.

This process involved the following components :

  • Recorder Javascript library : RecordRTC.js
  • Transfer mechanism : WebRTC endpoint -> call handler (record in chunks) -> ffmpeg / gstreamer to put it on RTP -> streaming server like Wowza -> viewers
  • Programs : HTML webpage with a WebSocket connection -> nodejs program to write content from the websocket to a linux socket -> nodejs program to read that socket and print the content on the console

Program to transfer the recorded webm files over a websocket to the nodejs program:

// Make the function wait until the connection is made...
function waitForSocketConnection(socket, callback) {
    setTimeout(function() {
        if (socket.readyState === 1) {
            console.log("Connection is made");
            if (callback != null) {
                callback();
            }
            return;
        } else {
            console.log("wait for connection...");
            waitForSocketConnection(socket, callback);
        }
    }, 5); // wait 5 milliseconds for the connection...
}

function previewFile() {
    var preview = document.querySelector('img');
    var file = document.querySelector('input[type=file]').files[0];
    var reader = new FileReader();

    reader.onloadend = function() {

        preview.src = reader.result;
        console.log(" reader result ", reader.result);

        var video = document.getElementById("v");
        video.src = reader.result;
        console.log(" video played ");

        var ws = new WebSocket('ws://localhost:3000', 'echo-protocol');

        waitForSocketConnection(ws, function() {
            ws.send(reader.result);
            console.log("message sent!!!");
        });
    }

    if (file) {
        // converts to base64 encoded string of the file data
        //reader.readAsDataURL(file);
        reader.readAsBinaryString(file);
    } else {
        preview.src = "";
    }
}

Program for the linux socket sender, which creates the socket for the webm files:

var net = require('net');
var fs = require('fs');
var socketPath = '/tmp/tfxsocket';
var http = require('http');
var stream = require('stream');
var util = require('util');

var WebSocketServer = require('ws').Server;
var port = 3000;
var serverUrl = "localhost";

var socket;

/*-------------------------- http server -----------------------------*/
var server = http.createServer(function(request, response) {
});

server.listen(port, serverUrl);

console.log('HTTP Server running at ', serverUrl, port);

/*-------------------------- websocket server -----------------------------*/
var wss = new WebSocketServer({server: server});

wss.on("connection", function(ws) {
    console.log("websocket connection open");

    ws.on('message', function(message) {
        console.log(" stream received from broadcast client on port 3000 ");

        var s = require('net').Socket();
        s.connect(socketPath);
        s.write(message);

        console.log(" sent the stream to socketPath", socketPath);
    });

    ws.on("close", function() {
        console.log("websocket connection close");
    });
});

Program for the linux socket listener using nodejs. Here the socket node is /tmp/mysocket:

var net = require('net');

var client = net.createConnection("/tmp/mysocket");

client.on("connect", function() {
    console.log("connected to mysocket");
});

client.on("data", function(data) {
    console.log(data);
});

client.on('end', function() {
    console.log('server disconnected');
});

Output 1: Video Buffer displayed

Screenshot from 2015-01-22 15:35:06 (copy)

Output 2: Random data from the video displayed

Screenshot from 2015-01-23 12:57:35

ffmpeg command for transferring the content from the socket to a UDP IP and port:

ffmpeg -i unix://tmp/mysocket -f format udp://192.168.0.119:8083
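Here -f format is a placeholder for an actual muxer name; assuming MPEG-TS as the container, for example, it would read:

ffmpeg -i unix://tmp/mysocket -f mpegts udp://192.168.0.119:8083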

Problems of this approach: the video was in a transient state passing through the socket and contained no information as such when we tried to play it / print it on the console.


Attempt 3: Send the live WebRTC stream from a Kurento WebRTC endpoint to a Kurento HTTP endpoint, then play it using the Mozilla VLC web plugin

The VLC Mozilla plugin can be embedded by:

<embed name="video2"
  autoplay="yes" loop="no" hidden="no"
  target="rtp://@192.165.0.119:8086" />

Screenshots of the failure of the Mozilla VLC plugin to play from a WebRTC endpoint:

Screenshot from 2015-01-29 10:37:06Screenshot from 2015-01-29 10:37:17

Screenshot from 2015-01-29 12:06:14

Problem: the VLC Mozilla plugin was unable to play the video.


The 4th, 5th and 6th sections of this article are in the next blog:

continue : Streaming / broadcasting Live Video call to non webrtc supported browsers and media players