Bandwidth depends on network strength and is affected by the other users on the network. Under heterogeneous network conditions, bandwidth estimation is a critical step in improving call quality and the end-user experience.
An unreliable or fluctuating network will cause some packets to be delivered on time and some to be delayed more than others, causing them to arrive in bursts. A jitter buffer is an effective mechanism for jitter management, ensuring a steady delivery of packets even when the peers transmit at fluctuating rates.
A jitter buffer consumes packets as soon as they arrive and keeps them until the frame can be fully reconstructed. Once all packets for a frame have been filled into the buffer (in any order), it emits the frame for decoding so that the player can play it back to the user. Note that several RTP packets can have the same timestamp if they are part of the same video frame.
(+) dynamically manages unordered packets and reconstructs a frame after accumulating all its packets
(-) can introduce latency for packets that arrive early
(-) needs active resizing by means of feedback
for a high-speed, good-quality network the jitter buffer can be small
for congested and disruptive networks it is better to keep a longer buffer, which can also add some latency
(-) the buffer has limited capacity, so a packet can expire if not received within a duration (“jitterBufferDelay”).
Reducing resolution, frame rate and bit rate is effective for congestion control; however, it is not suited to high-definition video conferencing use cases such as gaming, telehealth or the broadcast of a concert, as it may hinder the user experience.
Using I-frames, P-frames and B-frames efficiently in the codec, combined with predictive machine-learning models, can make packet loss unnoticeable to the human eye. The marker (M) bit in the RTP packet structure flags the last packet of a frame, marking frame boundaries.
If partial frames given to the decoder are unprocessable, a PLI message is sent to the sender. When the sender receives the PLI message, it produces a new I-frame to help the receiver decode the frames.
a=rtpmap:100 VP9/90000
a=rtcp-fb:100 goog-remb
a=rtcp-fb:100 transport-cc
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 nack pli
a=fmtp:100 profile-id=2
a=rtpmap:101 rtx/90000
a=fmtp:101 apt=100
FIR
PLI
request a full keyframe from the sender when a new member enters the session.
request a full keyframe from the sender when partial frames were given to the decoder but it was unable to decode them.
Causes for making a PLI request could be a decoder crash or heavy loss.
Congestion is created when a network path has reached its maximum capacity, which could be due to
failures (switches, routers, cables, fibres …)
oversubscription and operating at peak bandwidth
broadcast storms
Inapt BGP routing and congestion detection
BGP is responsible for finding a routable path for a packet between networks.
The direct consequences of congestion for any network transport can be
High Latency
Connection Timeouts
Low throughput
Packet loss
Queueing delay
With WebRTC streams too, if the network is congested the buffers will overflow and packets will be dropped. Due to excessive packet drops, both transmission time and jitter increase. To overcome this, adaptive buffering is used as jitter increases or decreases.
A congestion notifier and detection algorithm can analyze the RTCP metrics for possible congestion on the network route and suggest options to overcome it. This is part of the adaptive bitrate and bandwidth estimation process.
Rate limiting the sender is one way to overcome congestion, even though it can lead to poor call quality at the receiver's end and is not typical for real-time communication systems.
Bandwidth estimation and congestion control are often paired as one operational unit. Packet loss and inter-packet arrival times primarily drive the bandwidth estimate and enable GCC (Google Congestion Control) to flag congestion.
On the receiver side, TMMBR/TMMBN (Temporary Maximum Media Stream Bit Rate Request/Notification) and REMB (Receiver Estimated Maximum Bitrate) exchange the bandwidth estimates.
On the sender side, TWCC (Transport-Wide Congestion Control) can be used.
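One way to observe the browser's current send-side estimate is through getStats(): the nominated ICE candidate pair exposes availableOutgoingBitrate. A minimal sketch, assuming pc is an established RTCPeerConnection:

async function logSendEstimate(pc) {
  const report = await pc.getStats();
  report.forEach((stat) => {
    // The send-side bandwidth estimate is reported on the active candidate pair.
    if (stat.type === 'candidate-pair' && stat.nominated && stat.state === 'succeeded') {
      console.log('estimated outgoing bitrate (bps):', stat.availableOutgoingBitrate);
    }
  });
}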
Other congestion control algorithms
QUIC Loss Detection and Congestion Control RFC 9002
Coupled Congestion Control for RTP Media – RFC 8699
NADA: A Unified Congestion Control Scheme for Real-Time Media – Network Working Group
Self-Clocked Rate Adaptation for Multimedia – RMCAT WG
SCReAM – a mobile-optimised congestion control algorithm by Ericsson
High-definition video streams require low or no packet loss and fast recovery when loss does occur. RTP intrinsically has no means of recovering from packet loss. Instead, low-bit-rate redundancy can be added to the packets themselves to make up for any loss. Retransmission of lost packets can be built as a feature over RTP using the sequence numbers in the RTP header.
Geographical distance can add significant delay to transmission time. Transmission time is an important metric in call-quality analysis; however, calculating transmission time as the difference between the sending timestamp and the receiving timestamp requires perfect synchronization of system clocks, which is unreliable.
Latency accumulates across getUserMedia capture, encoding, transmission, network delays, buffering, decoding and playback. There are many factors involved in latency management, such as queuing delays, the media path, CPU utilization, etc.
Optimize Compute resource
mobile agents have less compute power
a camera with features such as autofocus or other adjustments will take more time to capture
the network should be of suitable bandwidth and strength
Reduce information to be encoded and sent
Subject focus and background blurring
Filtering noise at source
Voice Activity Detection (VAD)
send extra FEC data only if voice activity is detected in the packet
Synchronizing clocks in distributed systems is a tough task; it is mostly worked around by using NTP or other means of synchronization.
WebRTC uses the Stream Control Transmission Protocol (SCTP) over a DTLS connection as an alternative to TCP and UDP for its data channels.
Features :
multihoming: one or both endpoints of a connection can consist of more than one IP address, enabling transparent failover between redundant network paths
multistreaming: transmits several independent streams of chunks in parallel
SCTP combines TCP-like retransmission with UDP-like partial reliability, configurable per data channel (see the sketch after this list).
heartbeat to keep the connection alive, with exponential backoff if a packet hasn't arrived
Validation and acknowledgment mechanisms protect against flooding attack
SCTP frames data as datagrams and not as a byte stream
(+) SCTP enables WebRTC to multiplex several data channels over a single connection
(+) It has flow control and congestion avoidance support
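As referenced above, the reliability mode is chosen per data channel at creation time. A minimal sketch; the channel labels and settings are illustrative:

const pc = new RTCPeerConnection();
// Unordered, no retransmissions: closer to UDP semantics, useful for lossy real-time data.
const lossyChannel = pc.createDataChannel('telemetry', { ordered: false, maxRetransmits: 0 });
// Ordered and fully reliable (the default): closer to TCP semantics, useful for file transfer.
const reliableChannel = pc.createDataChannel('file-transfer');
lossyChannel.onopen = () => lossyChannel.send('ping');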
The end-to-end encryption model of WebRTC is a good defence against MITM (man-in-the-middle) attacks; however, it is not yet 100% foolproof. I discussed more security loopholes and concerns in WebRTC and real-time communication platforms in the article WebRTC App and Webpage Security.
Traditionally, two separate ports for RTP and RTCP were used in SIP/RTP-based real-time communication systems. Thus demultiplexing of the traffic of these data streams is performed at the transport layer.
With rtcp-mux, NAT traversal is simplified as only a single port is used for media and control messages.
(+) easier to manage security by gathering ICE candidates for a single port only instead of two
(+) increases the system's capacity for media sessions using the same number of ports
(+) further simplified using BUNDLE, as all media sessions and their control messages flow on the same port
WebRTC has rtcp-mux capabilities, thus simplifying ICE candidate pairing.
Echo is the sound of your own voice reverberating. If the amplitude of such a sound is high and intervals exceed 25 ms, it becomes disruptive to the conversation. Its types can be acoustic or hybrid. Echo cancellers need to eliminate the echo while still preserving call quality and not disrupting tones such as DTMF.
Echo is usually background or reflected noise, undesired voiceband energy that transfers from the speaker to the microphone and into the communication network. It is mostly found with a hands-free set or speakerphone. In a multiparty call scenario, it could also occur due to unmatched volume levels, challenging network conditions on one party's side, background noise, double talk or even the proximity between the user and the microphone.
In a public telephone system, local loop wiring is done using two-wire connections carrying bidirectional voice signals. In a PBX, a two-to-four wire conversion is done using a hybrid circuit, which does not achieve a perfect impedance match, resulting in hybrid echo.
An efficient echo canceller should cancel out the entire echo tail while not leading to any packet loss. It needs to be adaptive to changing IP network bandwidth, and the algorithm should function equally well in conference scenarios where there may be more than one echo source. Benchmarking tools like MOS (Mean Opinion Score) are used to gauge the results. Often voice-quality enhancement technologies are also integrated into AEC modules, such as:
This post is about making performance enhancements to a WebRTC app so that it can be used in areas which require sensitive data to be communicated, cannot afford downtime, need fast response and low RTT, and need to be secure enough to withstand hacks and attacks.
As the communication agent becomes a single HTML-page-driven client, a lot of authentication, heartbeat sync, web workers and signalling event-driven flow management resides on the same page, along with the actual CPU consumption for the audio-video resources and media-stream processing. This in turn can make the webpage heavy and many a time result in a crash because the tab becomes “unresponsive”.
Here are some of my best to-dos for making sure the WebRTC communication client page runs efficiently.
The CLS (Cumulative Layout Shift) metric measures the sum total of all individual layout-shift scores for every unexpected layout shift that occurs during the entire lifespan of the page.
To give a good user-interaction experience, the DOM elements should move as little as possible so that the page appears stable. In the opposite case of a flickering page (for example, a notification DOM element dynamically pushing the other layout elements), it is difficult to precisely interact with page elements such as buttons.
The main thread is where the browser process runs all the JavaScript on your page, as well as layout, reflows and garbage collection; therefore long JS tasks can block the thread and make the page unresponsive.
Unoptimized JS code takes longer to execute and adds network, parse/compile and memory cost.
If your JavaScript holds on to a lot of references, it can potentially consume a lot of memory. Pages appear janky or slow when they consume a lot of memory, and memory leaks can cause your page to freeze up completely.
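One practical mitigation is to move CPU-heavy, non-DOM work (stats crunching, log batching and the like) off the main thread into a Web Worker. A minimal, self-contained sketch; the summing task is just a stand-in for real work:

// Build the worker from an inline script so the example needs no extra file.
const workerSrc = `
  self.onmessage = (e) => {
    // Placeholder for heavy work done off the main thread.
    const sum = e.data.reduce((a, b) => a + b, 0);
    self.postMessage(sum);
  };
`;
const blobUrl = URL.createObjectURL(new Blob([workerSrc], { type: 'text/javascript' }));
const worker = new Worker(blobUrl);
worker.onmessage = (e) => console.log('result from worker:', e.data);
worker.postMessage([1, 2, 3, 4]);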
Some effective tips for speeding up JS execution include:
Cross-site request forgery (CSRF) attacks rely on the fact that cookies are attached to any request to a given origin, no matter who initiates the request.
While adding cookies we must ensure that if SameSite=None is set, the cookie is also marked Secure.
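For example, a hypothetical cross-site cookie would carry both attributes:

Set-Cookie: widget_session=abc123; SameSite=None; Secure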
With SameSite set to Strict, your cookie will only be sent in a first-party context. In user terms, the cookie will only be sent if the site for the cookie matches the site currently shown in the browser's URL bar.
Set-Cookie: promo_shown=1; SameSite=Strict
You can test this behavior as of Chrome 76 by enabling chrome://flags/#cookies-without-same-site-must-be-secure and from Firefox 69 in about:config by setting network.cookie.sameSite.noneRequiresSecure.
Key Performance Indicators (KPIs) are used to evaluate the performance of a website. It is critical that a WebRTC web page be lightweight so it can accommodate the signalling control-stack JavaScript libraries used for offer/answer handling and for communicating with the signaller over open sockets or a long-polling mechanism.
The Lighthouse tab in Chrome Developer Tools shows relevant areas of improvement on the webpage across Performance, Accessibility, Best Practices, Search Engine Optimization and Progressive Web App.
Page attributes under Chrome Developer Tools depict the page load and rendering time for every element, including scripts and markup. Specifically it has:
Time to Title
Time to render
Time to Interact
Networking attributes to be configured based on DNS mapping and host provider; these can be evaluated from Chrome Developer Tools reports.
Other page-interaction criteria include the frames, their interaction and the timings for the same.
The attached screenshot shows the loading tasks, which depict the delay caused by DOM elements under transitions owing to user interaction. This should ideally be minimal to keep the page responsive.
The functions above (old and new) estimate the memory usage of the entire web page.
These calls can be used to correlate new JS code with its impact on memory and subsequently find out whether there are any memory leaks. These memory metrics can also be used for A/B testing.
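For example, the newer API can be polled roughly like this; it is currently Chrome-specific and only available when the page is cross-origin isolated, so treat it as a sketch rather than a portable solution:

async function sampleMemory() {
  if (window.crossOriginIsolated && performance.measureUserAgentSpecificMemory) {
    const result = await performance.measureUserAgentSpecificMemory();
    console.log('estimated page memory (bytes):', result.bytes);
  }
}
setInterval(sampleMemory, 60 * 1000); // sample roughly once a minute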
Loading assets over a CDN, minifying scripts and reducing the overall weight of the page are good ways to keep the page light and responsive and to prevent Chrome tab crashes.
The non-critical components can then be loaded asynchronously.
Lazy loading should be used for large files like JS payloads which are costly to load. To send a smaller JavaScript payload that contains only the code needed when a user initially loads your application, split the entire bundle and lazy-load chunks on demand.
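A minimal sketch using dynamic import(); the callButton element, module path and exported function are hypothetical:

callButton.addEventListener('click', async () => {
  // The call UI bundle is fetched only when the user actually starts a call.
  const { startCall } = await import('./call-ui.js');
  startCall();
});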
A codec signifies the media stream's compression and decompression. For peers to have a successful exchange of media, they need a common set of codecs to agree upon for the session. The lists of codecs are exchanged between the peers as part of the offer and answer (the SDP in SIP).
WebRTC provides containerless, bare MediaStreamTrack objects, and the codecs for these tracks are not mandated by WebRTC itself. Yet the codecs are specified by two separate RFCs:
RFC 7874, WebRTC Audio Codec and Processing Requirements, specifies at least the Opus codec as well as G.711's PCMA and PCMU formats.
RFC 7742, WebRTC Video Processing and Codec Requirements, specifies support for VP8 and H.264's Constrained Baseline profile for video.
In WebRTC, media is protected using Datagram Transport Layer Security (DTLS) / Secure Real-time Transport Protocol (SRTP). In this article we are going to discuss the audio/video codec processing requirements only.
WebRTC is free and open source, and its working bodies promote royalty-free codecs too. The RTCWEB working group and the IETF ensure that non-royalty-bearing codecs are mandatory, while other codecs can be optional in WebRTC non-browsers.
WebRTC browsers MUST implement the VP8 video codec as described in RFC 6386 and the H.264 Constrained Baseline profile as required by RFC 7742.
Most of the codecs below follow a lossy discrete cosine transform (DCT) based algorithm for encoding. A sample SDP offer from the Chrome browser v80 for Linux includes these profiles:
AVC's Constrained Baseline (CBP) profile is compliant with WebRTC.
proprietary, patented codec, maintained by MPEG / ITU
Constrained Baseline Profile Level 1.2 and H.264 Constrained High Profile Level 1.3. Constrained Baseline is a subset of the Main profile, suited to low delay and low complexity, and therefore to lower-processing devices such as mobile phones.
Multiview Video Coding – can carry multiple views of the same scene, such as stereoscopic video.
Other profiles, which are not supported, are Baseline (BP), Extended (XP), Main (MP), High (HiP), Progressive High (ProHiP), High 10 (Hi10P), High 4:2:2 (Hi422P) and High 4:4:4 Predictive.
supported containers are 3GP, MP4, WebM
Parameter settings:
packetization-mode
max-mbps, max-smbps, max-fs, max-cpb, max-dpb, and max-br
sprop-parameter-sets: H.264 allows sequence and picture information to be sent both in-band and out-of-band. WebRTC implementations must signal this information in-band.
Supplemental Enhancement Information (SEI) “filler payload” and “full frame freeze” messages (used while switching video in MCU streams).
Already used for video conferencing on PSTN (Public Switched Telephone Networks), RTSP, and SIP (IP-based videoconferencing) systems.
suited for low bandwidth networks
(-) not compatible with WebRTC
but many media gateways that include real-time transcoding exist between H.263-based SIP systems and VP8-based WebRTC ones to enable video communication between them
H.265 / HEVC
proprietary format and is covered by a number of patents. Licensing is managed by MPEG LA .
Container – Mp4
Interoperability between non-WebRTC-compatible and WebRTC-compatible endpoints
With the rise of the Internet of Things, many endpoints, especially IP cameras connected to Raspberry Pi-like SoCs (systems on chip), want to stream directly to the browser within their own private network or even over a public network using TURN/STUN.
The figure below shows how such a call flow is possible between an IP camera (such as a baby cam) and the parent monitoring it over a WebRTC-supported mobile-phone browser. The process includes streaming the content from the IoT device over an RTSP stream and using real-time transcoding between H.264 and VP8.
Interoperability between non-WebRTC-compatible and WebRTC-compatible endpoints
Opus is a lossy audio compression format developed by the Internet Engineering Task Force (IETF), targeting a broad range of interactive real-time applications over the Internet, from speech to music, and it supports multiple compression algorithms.
Constant and variable bitrate encoding – 6 kbit/s to 510 kbit/s
frame sizes – 2.5 ms to 60 ms
sampling rates – 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, where the entire hearing range of the human auditory system can be reproduced).
container- Ogg, WebM, MPEG-TS, MP4
As an open format standardized through RFC 6716, a reference implementation is provided under the 3-clause BSD license. All known software patents which cover Opus are licensed under royalty-free terms.
(+) flexible, suited for speech (via SILK) and music (via CELT)
(+) support for mono and stereo
(+) in-built FEC (Forward Error Correction), thus resilient to packet loss
(+) compression adjustability for unpredictable networks
(-) highly CPU intensive (can be unsuitable for embedded devices like the Raspberry Pi)
(-) processing and memory intensive
For all cases where the endpoint is able to process audio at a sampling rate higher than 8 kHz, it is recommended that Opus be offered before PCMA/PCMU.
AAC (Advanced Audio Coding)
part of the MPEG-4 standard. Lossy compression, but it has a number of profiles suiting each use case, from high-quality surround sound to low-fidelity audio for speech-only use.
supported containers – MP4, ADTS, 3GP
G.711 (PCMA and PCMU)
G.711 is an ITU standard (1972) for audio compression. It is primarily used in telephony.
ITU published Pulse Code Modulation (PCM) with either µ-law or A-law encoding; it is vital for interfacing with the standard telecom network and carriers. G.711 PCM (A-law) is known as PCMA and G.711 PCM (µ-law) is known as PCMU.
It is the required standard in many voice-based systems and technologies, for example in the H.320 and H.323 specifications.
Fixed 64 kbit/s bit rate
supports the 3GP container format
G.722
ITU standard (1988), encoded using Adaptive Differential Pulse Code Modulation (ADPCM), which is suited for voice compression
7 kHz wideband audio codec
bitrates of 48, 56 and 64 kbit/s
containers used: 3GP, AMR-WB
G.722 improves speech quality thanks to a wider speech bandwidth of up to 50–7000 Hz, compared to G.711's 300–3400 Hz.
Comfort noise (CN)
artificial background noise which is used to fill gaps in a transmission instead of using pure silence. It prevents jarring silences and RTP timeouts.
It should be used for streams encoded with G.711 or any other supported codec that does not provide its own CN. Use of Discontinuous Transmission (DTX)/CN by senders is optional.
Internet Low Bitrate Codec (iLBC)
An open-source narrowband speech codec for VoIP and streaming audio.
8 kHz sampling frequency with a bitrate of 15.2 kbps for 20ms frames and 13.33 kbps for 30ms frames.
Defined by IETF RFCs 3951 and 3952.
Internet Speech Audio Codec (iSAC)
iSAC: A wideband and super wideband audio codec for VoIP and streaming audio. It is designed for voice transmissions which are encapsulated within an RTP stream.
16 kHz or 32 kHz sampling frequency
adaptive and variable bit rate of 12 to 52 kbps.
Speex
patent-free audio compression format designed for speech and also a free software speech codec that is used in VoIP applications and podcasts. May be obsolete, with Opus as its official successor.
AMR-WB (Adaptive Multi-Rate Wideband) is a patented wideband speech coding standard that provides improved speech quality. This codec is generally available on mobile phones.
wider speech bandwidth of 50–7000 Hz.
data rates range from 6.6 to 23.85 kbit/s.
DTMF and ‘audio/telephone-event’ media type
endpoints may send DTMF events at any time and should suppress in-band dual-tone multi-frequency (DTMF) tones, if any.
Describes the OAuth auth credential information which is used by the STUN/TURN client (inside the ICE Agent) to authenticate against a STUN/TURN server
what ICE candidates are gathered to support non-multiplexed RTCP.
negotiate – Gather ICE candidates for both RTP and RTCP candidates. If the remote-endpoint is capable of multiplexing RTCP, multiplex RTCP on the RTP candidates. If it is not, use both the RTP and RTCP candidates separately.
require – Gather ICE candidates only for RTP and multiplex RTCP on the RTP candidates. If the remote endpoint is not capable of rtcp-mux, session negotiation will fail.
If the value of configuration.rtcpMuxPolicy is set and its value differs from the connection’s rtcpMux policy, throw an InvalidModificationError. If the value is “negotiate” and the user agent does not implement non-muxed RTCP, throw a NotSupportedError.
An RTCPeerConnection object has a signaling state, a connection state, an ICE gathering state, and an ICE connection state.
An RTCPeerConnection object has an operations chain which ensures that only one asynchronous operation in the chain executes concurrently.
Also an RTCPeerConnection object MUST not be garbage collected as long as any event can cause an event handler to be triggered on the object. When the object’s internal slot is true ie closed, no such event handler can be triggered and it is therefore safe to garbage collect the object.
generates a blob of SDP that contains an RFC 3264 offer with the supported configurations for the session, including
descriptions of the local MediaStreamTracks attached to this RTCPeerConnection,
codec/RTP/RTCP capabilities
ICE agent parameters (usernameFragment, password, local candidates, etc.)
DTLS connection
const pc = new RTCPeerConnection();
pc.createOffer()
.then(desc => pc.setLocalDescription(desc));
With more attributes
var pc = new RTCPeerConnection();
pc.createOffer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true,
  voiceActivityDetection: false
}).then(function(offer) {
  return pc.setLocalDescription(offer);
})
.then(function() {
  // Send the offer to the remote through signaling server
})
.catch(handleError);
generates an SDP answer with the supported configuration for the session that is compatible with the parameters in the remote configuration
var pc = new RTCPeerConnection();
pc.createAnswer() // answer direction is governed by the offer, so offerToReceive* options are not needed here
.then(function(answer) {
return pc.setLocalDescription(answer);
})
.then(function() {
// Send the answer to the remote through signaling server
})
.catch(handleError);
The codec preferences of an m= section's associated transceiver are the codecs set on the RTCRtpTransceiver (via setCodecPreferences) with the following filtering applied; a short usage sketch follows this list:
If direction is “sendrecv”, exclude any codecs not included in the intersection of RTCRtpSender.getCapabilities(kind).codecs and RTCRtpReceiver.getCapabilities(kind).codecs.
If direction is “sendonly”, exclude any codecs not included in RTCRtpSender.getCapabilities(kind).codecs.
If direction is “recvonly”, exclude any codecs not included in RTCRtpReceiver.getCapabilities(kind).codecs.
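A short usage sketch, assuming the application wants to prefer VP9 on a video transceiver (the codec choice is illustrative):

const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver('video');
const codecs = RTCRtpReceiver.getCapabilities('video').codecs;
// Move VP9 entries to the front so they are offered first.
const preferred = codecs
  .filter((c) => c.mimeType === 'video/VP9')
  .concat(codecs.filter((c) => c.mimeType !== 'video/VP9'));
transceiver.setCodecPreferences(preferred);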
Send and receive MediaStreamTracks over a peer-to-peer connection. Tracks, when added to an RTCPeerConnection, result in signaling; when this signaling is forwarded to a remote peer, it causes corresponding tracks to be created on the remote side.
The RTCRtpTransceiver interface describes a permanent pairing of an RTCRtpSender and an RTCRtpReceiver. Each transceiver is uniquely identified using its mid (media id) property, taken from the corresponding m-line.
Transceivers are created implicitly when the application attaches a MediaStreamTrack to an RTCPeerConnection via addTrack(), or explicitly when the application uses addTransceiver(). They are also created when a remote description is applied that includes a new media description.
dictionary RTCRtpCodecParameters {
required octet payloadType;
required DOMString mimeType;
required unsigned long clockRate;
unsigned short channels;
DOMString sdpFmtpLine;
};
payloadType – identifies this codec.
mimeType – the codec MIME media type/subtype. Valid media types and subtypes are listed in [IANA-RTP-2].
clockRate – expressed in Hertz.
channels – number of channels (mono = 1, stereo = 2).
sdpFmtpLine – the “format specific parameters” field from the “a=fmtp” line in the SDP corresponding to the codec.
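These fields can be inspected at runtime from a sender once negotiation has completed; a small sketch, assuming pc is a connected RTCPeerConnection with an audio track:

const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'audio');
if (sender) {
  sender.getParameters().codecs.forEach((c) => {
    console.log(c.payloadType, c.mimeType, c.clockRate, c.channels, c.sdpFmtpLine);
  });
}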
voiceActivityFlag of type boolean – Only present for audio receivers. Whether the last RTP packet, delivered from this source, contains voice activity (true) or not (false).
RTCRtpTransceiver Interface
Each SDP media section describes one bidirectional SRTP (“Secure Real Time Protocol”) stream. RTCRtpTransceiver describes this permanent pairing of an RTCRtpSender and an RTCRtpReceiver, along with some shared state. It is uniquely identified using its mid property.
Thus it is a combination of an RTCRtpSender and an RTCRtpReceiver that share a common mid. An associated transceiver (one with a mid) is one that is represented in the last applied session description.
Method stop() – irreversibly marks the transceiver as stopping, unless it is already stopped. This immediately causes the transceiver's sender to no longer send and its receiver to no longer receive. A stopping transceiver causes future calls to createOffer to generate a zero port in the media description for the corresponding transceiver, and a stopped transceiver causes future calls to createOffer or createAnswer to do the same.
Provides access to information about the Datagram Transport Layer Security (DTLS) transport over which RTP and RTCP packets are sent and received by RTCRtpSender and RTCRtpReceiver objects, as well as other data such as SCTP packets sent and received by data channels. Each RTCDtlsTransport object represents the DTLS transport layer for the RTP or RTCP component of a specific RTCRtpTransceiver, or a group of RTCRtpTransceivers if such a group has been negotiated via [BUNDLE].
Protocols multiplexed with RTP (e.g. the data channel) share its component ID. RTP has component-id value 1 when encoded in a candidate-attribute, while an ICE candidate for RTCP has component-id value 2.
This interface describes a candidate of the Interactive Connectivity Establishment (ICE) configuration used to set up an RTCPeerConnection. To facilitate routing of media on a given peer connection, both endpoints exchange several candidates, and then one candidate out of the lot is chosen, which is then used to initiate the connection.
const pc = new RTCPeerConnection();
pc.addIceCandidate({candidate:''});
candidate – transport address for the candidate that can be used for connectivity checks.
component – whether the candidate is an RTP or an RTCP candidate
foundation – a unique identifier that is the same for any candidates of the same type; it helps optimize ICE performance when prioritizing and correlating candidates that appear on multiple RTCIceTransport objects.
ip , port
priority
protocol – tcp/udp
relatedAddress , relatedPort
sdpMid – candidate’s media stream identification tag
sdpMLineIndex
usernameFragment – randomly-generated username fragment (“ice-ufrag”) which ICE uses for message integrity along with a randomly-generated password (“ice-pwd”).
RTCIceCredentialType Enum : supports OAuth 2.0 based authentication. The application, acting as the OAuth Client, is responsible for refreshing the credential information and updating the ICE Agent with fresh new credentials before the accessToken expires. The OAuth Client can use the RTCPeerConnection setConfiguration method to periodically refresh the TURN credentials.
ICE candidate policy [JSEP] to select candidates for the ICE connectivity checks
relay – use only media relay candidates such as candidates passing through a TURN server. It prevents the remote endpoint/unknown caller from learning the user’s IP addresses
all – ICE Agent can use any type of candidate when this value is specified.
RTCBundlePolicy Enum
balanced – Gather ICE candidates for each media type (audio, video, and data). If the remote endpoint is not bundle-aware, negotiate only one audio and video track on separate transports.
max-compat – Gather ICE candidates for each track. If the remote endpoint is not bundle-aware, negotiate all media tracks on separate transports.
max-bundle – Gather ICE candidates for only one track. If the remote endpoint is not bundle-aware, negotiate only one media track. If the remote endpoint is bundle-aware, all media tracks and data channels are bundled onto the same transport.
If the value of configuration.bundlePolicy is set and its value differs from the connection’s bundle policy, throw an InvalidModificationError.
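Putting the above policies together, a configuration sketch might look like this; the STUN/TURN URLs and credentials are placeholders, not real servers:

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' },
  ],
  iceTransportPolicy: 'relay',  // only TURN relay candidates, hides host IPs
  bundlePolicy: 'max-bundle',   // bundle all media onto one transport
  rtcpMuxPolicy: 'require',     // multiplex RTCP on the RTP candidates
});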
Interfaces for Connectivity Establishment
describes ICE candidates
interface RTCIceCandidate {
DOMString candidate;
DOMString sdpMid;
unsigned short sdpMLineIndex;
DOMString foundation;
RTCIceComponent component;
unsigned long priority;
DOMString address;
RTCIceProtocol protocol;
unsigned short port;
RTCIceCandidateType type;
RTCIceTcpCandidateType tcpType;
DOMString relatedAddress;
unsigned short relatedPort;
DOMString usernameFragment;
RTCIceCandidateInit toJSON();
};
RTCIceProtocol can be either tcp or udp
TCP candidate type which can be either of
active – An active TCP candidate is one for which the transport will attempt to open an outbound connection but will not receive incoming connection requests.
passive – A passive TCP candidate is one for which the transport will receive incoming connection attempts but not attempt a connection.
so – An so candidate is one for which the transport will attempt to open a connection simultaneously with its peer.
UDP candidate type
host – a candidate obtained directly from a local interface (the agent's actual IP address)
srflx – server reflexive, obtained via a STUN/TURN server (the address on the public side of the agent's NAT)
prflx – peer reflexive ,IP address comes from a symmetric NAT between the two peers, usually as an additional candidate during trickle ICE
usernameFragment – randomly-generated username fragment (“ice-ufrag”) which ICE uses for message integrity along with a randomly-generated password (“ice-pwd”).
Access to information about the ICE transport over which packets are sent and received. Each RTCIceTransport object represents the ICE transport layer for the RTP or RTCP component of a specific RTCRtpTransceiver, or a group of RTCRtpTransceivers if such a group has been negotiated via [BUNDLE].
With SCTP, the protocol used by WebRTC data channels, reliable and ordered data delivery is on by default.
Sending large files
Split the data channel message into chunks
var CHUNK_LEN = 64000; // 64 KB per chunk
var img = photoContext.getImageData(0, 0, photoContextW, photoContextH),
    len = img.data.byteLength,
    n = len / CHUNK_LEN | 0; // number of full-sized chunks

// send the full-sized chunks
for (var i = 0; i < n; i++) {
  var start = i * CHUNK_LEN, end = (i + 1) * CHUNK_LEN;
  dataChannel.send(img.data.subarray(start, end));
}

// send the remaining partial chunk, if any
if (len % CHUNK_LEN) {
  dataChannel.send(img.data.subarray(n * CHUNK_LEN));
}
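On the receiving side the chunks have to be stitched back together. A sketch, assuming the total byte length (expectedLen here) has been signalled out of band beforehand:

dataChannel.binaryType = 'arraybuffer';
const received = [];
let receivedLen = 0;
dataChannel.onmessage = (event) => {
  received.push(new Uint8Array(event.data));
  receivedLen += event.data.byteLength;
  if (receivedLen >= expectedLen) {
    // Concatenate the chunks back into one contiguous buffer.
    const full = new Uint8Array(receivedLen);
    let offset = 0;
    for (const chunk of received) {
      full.set(chunk, offset);
      offset += chunk.length;
    }
    // `full` now holds the reassembled image data.
  }
};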
The browser maintains a set of statistics for monitored objects, in the form of stats objects. A group of related objects may be referenced by a selector (like a MediaStreamTrack that is sent or received by the RTCPeerConnection).
Statistics API extends the RTCPeerConnection interface
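A typical way to read these stats is to poll getStats() and filter by type; a minimal sketch for an inbound video stream:

async function logInboundVideoStats(pc) {
  const report = await pc.getStats();
  report.forEach((stat) => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
      console.log('jitter:', stat.jitter,
                  'packetsLost:', stat.packetsLost,
                  'pliCount:', stat.pliCount);
    }
  });
}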
Until recently a customised or proprietary extension could signal multiple media streams within an m= section of an SDP and experiment with the media-level “msid” (Media Stream Identifier) attribute, used to associate RTP streams that are described in different media descriptions with the same MediaStreams. However, with the transition to Unified Plan, such applications will experience breaking changes.
The previous SDP format implementation called “planB” was transitioned to “unified plan” in 2019.
Who does it affect?
Applications that use various media tracks within one m= line in SDP, such as a video stream and screen sharing simultaneously
Applications that munge SDP or use MCUs or SFUs
Applications that used the track-based APIs addTrack, removeTrack and sender.replaceTrack, or the legacy addStream/removeStream APIs, and that exposed senders and receivers to edit tracks and their encoding parameters
Who does it not affect?
This does not affect any application which has only single audio and video track.
Multiple media streams may be required for cases such as a video and a screen-share stream in the same SDP, or in specific SFU cases.
In Plan B this results in one “m=” section of SDP per media type, while within the video m= section multiple “a=ssrc” lines are listed for the multiple media tracks.
In Unified Plan, every single media track is assigned to a separate “m=” section. Hence, for video and screen sharing simultaneously, two m= sections will be created.
Interoperability between unified plan and plan B
A mismatch in SDP (between Plan B and Unified Plan) usually results in the following:
If a Unified Plan-only client receives an offer generated by a Plan B client, the Unified Plan client must reject the offer with a failed setRemoteDescription() error.
If a Plan B-only client receives an offer generated by a Unified Plan client, only the first track in every “m=” section is used and the other tracks are ignored.
This article is aimed at explaining the intricacies and the detailed offer/answer flow in the WebRTC handshake and JSEP. You can read the following articles on WebRTC as a prerequisite before reading through this one. WebRTC has APIs namely – PeerConnection, getUserMedia, DataChannel and getStats.
JSEP is used during signalling via w3c’s recommended RTCPeerConnectionAPI interface to set up a multimedia session. The multimedia session description specifies the critical components of setting up a session between local and remote such as transport ports, protocol, profiles. It also handles the interaction with the ICE state machine.
Prerequisite: set up the client side for the caller – a PeerConnectionFactory to generate PeerConnections, a PeerConnection for every connection to a remote peer, and a MediaStream with audio and video from the client device.
The side initiating the session creates an offer with the CreateOffer() API.
As the caller initiates a new RTCPeerConnection(), the RTCSignalingState is “stable”, as the remote and local descriptions are empty.
As the caller initiates the call and calls createOffer(), it now has the offer SDP and proceeds to store the offer locally with setLocalDescription(offer); the RTCSignalingState is now “have-local-offer”. After that the caller sends the offer to the callee over the signalling channel.
Similarly, as the callee receives the offer, it starts with RTCSignalingState “stable” and then proceeds to store the remote offer using setRemoteDescription(offer); its state is now “have-remote-offer”.
The callee generates a provisional answer for the caller and stores it locally; the state transitions to “have-local-pranswer”. The pranswer SDP is sent to the caller over the signalling channel again.
The caller stores the callee's pranswer SDP and its state updates to “have-remote-pranswer”.
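These transitions can be observed directly on the connection; a small sketch for logging them on either side:

pc.onsignalingstatechange = () => {
  // The caller typically sees: stable -> have-local-offer -> stable (once the answer is applied).
  console.log('signaling state:', pc.signalingState);
};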
Media section: an m= section is generated for each RtpTransceiver that has been added to the PeerConnection. For the initial offer, since no ports are available yet, the dummy port 9 can be added. However, if the m= section is bundle-only then the port value is set to 0. Later the port value will be set to the port of the default ICE candidate.
The DTLS field “UDP/TLS/RTP/SAVPF” is followed by the list of codecs in order of priority.
The “c=” line in the m= section must also be filled with the dummy value “IN IP4 0.0.0.0”, as no candidates are available yet.
For each media format on the m= line there is an “a=rtpmap” for “rtx” with the clock rate of the codec and an “a=fmtp” to reference the payload type of the primary codec; “a=rtcp-fb” specifies RTCP feedback.
When createOffer is called a second (or later) time, or is called after a local description has already been installed, the processing is different due to the gathered ICE candidates. However, the <session-version> is not changed.
Additionally, the m= section is updated if an RtpTransceiver is added or removed.
Each “m=” and “c=” line MUST be filled in with the port, relevant RTP profile and address of the default candidate for the m= section.
If the m= section is not bundled into another m= section, update “a=rtcp” with the port and address of the RTCP candidate and add “a=candidate” lines ending with “a=end-of-candidates”.
Local answer created by the side receiving the session (callee)
When createAnswer is called for the first time after a remote description has been provided, the result is known as the initial answer.
Each offered m= section will have an associated RtpTransceiver
The remote destination (callee) can reject an m= section by setting the port in the m= line to 0. It can reject the m= section if none of the offered media formats are supported, if the RtpTransceiver is stopped, etc.
For the initial answer, the dummy port value of 9 is set, as no ICE candidate is available yet. Similarly, the “c=” line must contain the “dummy” value “IN IP4 0.0.0.0” too.
The <proto> field MUST be set to exactly match the <proto> field for the corresponding m= line in the offer.
If the answer contains any “a=ice-options” attributes where “trickle” is listed as an attribute, update the PeerConnection canTrickle property to be true.
SDP returned from createOffer or createAnswer MUST NOT be changed before passing it to setLocalDescription. After calling setLocalDescription with an offer or answer, the application MAY modify the SDP to reduce its capabilities before sending it to the far side.
Assume we have an MCU at a location and want the video stream to relay via a media server.
SDP is used for session parsing and contains a sequence of lines with key-value pairs. SDP is read line by line and converted to a data structure that contains the deserialized information.
Lines “v=”, “o=”, “b=” and “a=” are processed. The “i=”, “u=”, “e=”, “p=”, “t=”, “r=”, “z=” and “k=” lines are not used by this specification; they MUST be checked for syntax but their values are not used. The “c=” line is checked for syntax and for ICE mismatch detection.
The “a=” attribute could be: “a=group”, “a=ice-lite”, “a=ice-pwd”, “a=ice-options”, “a=fingerprint”, “a=setup”, “a=tls-id”, “a=identity”, “a=extmap”
Media Section Parsing
Line “m=” for media , proto , port , fmt in RTP
Attributes “a=” can be :
“a=rtpmap” or “a=fmtp” : map from an RTP payload type number to a media encoding name that identifies the payload format.
Packetization parameters such as “a=ptime” and “a=maxptime”, which define the length of each RTP packet.
Direction as “a=sendrecv”, “a=recvonly”, “a=sendonly”, “a=inactive”
Muxing as “a=rtcp-mux” , “a=rtcp-mux-only”
RTCP attributes “a=rtcp” , “a=rtcp-rsize”
Line “c=” is checked.
Line “b=” for bandwidth, bwtype
Attributes for “a=” could be “a=ice-ufrag”, “a=ice-pwd”, “a=ice-options”, “a=candidate”, “a=remote-candidate”, “a=end-of-candidates” and “a=fingerprint”
Protocols using offer/answer are difficult to operate through Network Address Translators (NATs), since the flow of media packets requires the IP addresses and ports of media sources and sinks within their messages. Real-time media also emphasises reduced latency and decreased packet loss.
ICE is an extension to the offer/answer model and works by including a multiplicity of IP addresses and ports in SDP offers and answers, which are then tested for connectivity by peer-to-peer connectivity checks. The checks are done using STUN and TURN, which also allow address selection for multi-homed and dual-stack hosts.
ICE allows the agents to discover enough information about their topologies to potentially find one or more paths by which they can communicate. Then it systematically tries all possible pairs (in a carefully sorted order) until it finds one or more that work.
The caller and callee perform checks to finalize the protocol and route needed to establish a peer connection. A number of candidates are proposed until they mutually agree upon one. The PeerConnection then uses that candidate's details to initiate the connection.
While applying a local description at the media-engine level, if an m= section is new, the WebRTC media stack begins gathering candidates for it.
RTCPeerConnection exposes canTrickleIceCandidates. ICE trickling is the process of continuing to send candidates after the initial offer or answer has already been sent to the other peer.
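A trickle-ICE sketch on the sending side; signalingChannel here is a placeholder for whatever transport the application uses to reach the remote peer:

pc.onicecandidate = (event) => {
  if (event.candidate) {
    // Forward each candidate to the remote peer as soon as it is gathered.
    signalingChannel.send(JSON.stringify({ candidate: event.candidate }));
  } else {
    // A null candidate indicates the end of this gathering generation.
    console.log('candidate gathering complete');
  }
};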
The ICE transport role is responsible for choosing a candidate pair.
The ICE layer sets one peer as the controlling agent and the other as the controlled agent. The controlling agent makes the final decision as to which candidate pair to choose.
An agent identifies all CANDIDATES, where a candidate is a transport address. Types:
HOST CANDIDATE – obtained directly from a local interface, which could be Wi-Fi, a Virtual Private Network (VPN) or Mobile IP (MIP). If an agent is multihomed (on private and public networks), it obtains a candidate from each IP address and includes all candidates in its offer.
STUN or TURN is used to obtain additional candidates. Types:
translated addresses on the public side of a NAT (SERVER REFLEXIVE CANDIDATES)
The candidates are carried in attributes in the SDP offer. The remote peer also follows this process and gathers and sends its own sorted list of candidates. Hence CANDIDATE PAIRS from both sides are formed.
PEER REFLEXIVE CANDIDATES – connectivity checks can produce additional candidates, especially around symmetric NATs.
Since the same address is used for STUN and media (RTP/RTCP), demultiplexing based on packet contents helps to identify which one is which.
Checks : ICE checks are performed in a specific sequence, so that high-priority candidate pairs are checked first.
TRIGGERED CHECKS – accelerate the process of finding a valid candidate.
ORDINARY CHECKS – the agent works through the ordered, prioritised check list by periodically sending a STUN request for the next candidate pair on the list.
Checks maintain frozen candidates and pairs sharing a foundation for a media stream. Each candidate pair in the check list has a foundation and a state. The states for candidate pairs are:
1. Waiting: A check has not been performed for this pair, and can be performed as soon as it is the highest-priority Waiting pair on the check list.
2. In-Progress: A check has been sent for this pair, but the transaction is in progress.
3. Succeeded: A check for this pair was already done and produced a successful result.
4. Failed: A check for this pair was already done and failed, either never producing any response or producing an unrecoverable failure response.
5. Frozen: A check for this pair hasn’t been performed, and it can’t yet be performed until some other check succeeds, allowing this pair to unfreeze and move into the Waiting state.
Selecting low-latency media paths can use various techniques such as actual round-trip time (RTT) measurement. The controlling agent gets to nominate which candidate pair will be used for media among the ones that are valid. There are two ways: regular nomination and aggressive nomination.
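Once nomination has happened, the pair actually in use (and its measured RTT) can be read back from the stats; a sketch:

async function logSelectedPair(pc) {
  const report = await pc.getStats();
  report.forEach((stat) => {
    if (stat.type === 'transport' && stat.selectedCandidatePairId) {
      const pair = report.get(stat.selectedCandidatePairId);
      console.log('selected pair RTT (s):', pair.currentRoundTripTime);
    }
  });
}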
A CPaaS (communications platform as a service) is a cloud-based communication platform, like a B2B cloud communications platform, that provides real-time communication capabilities. It should be easily integrable with any given external environment or application of the customer, without them worrying about building backend infrastructure or interfaces. Traditionally, with IP-protected protocols, licensed codecs, maintaining a signalling protocol stack and network interfaces, building a communication platform was a costly affair. Cisco, FaceTime and Skype were the only OTT (over-the-top) players taking away from the telcos' call revenue. However, with the advent of standardised, open-source protocols and codecs, plenty of CPaaS providers have crowded the market, creating more supply than there is demand. A customer wanting to quickly integrate real-time communications into their platform has many options to choose from. This article provides an insight into how CPaaS solutions are architectured and programmed.
Call server + media server that can be interacted with via a UA
Comm clients like SIP phones, WebRTC clients, and SDKs (software development kits) or libraries for desktop, embedded and/or mobile platforms.
APIs that can trigger automated calls and perform preprogrammed routing.
Rich documentation and samples to build various apps such as call-centre solutions, interactive auto-attendants using IVR, DTMF, conference solutions, etc.
Some CPaaS providers also add features like transcribing, transcoding, recording, playback, etc. to provide an edge over other CPaaS providers.
Cloud services such as Amazon Web Services, Google Cloud, Microsoft Azure, IBM Cloud and DigitalOcean are great resources for hosting the multiple parts of a CPaaS system such as gateways, media servers, SIP application servers, and other servers for microservices including accounting, profile management, REST services, etc. Often virtualized machines (VMs) mounted on a larger physical remote datacentre are an ideal choice for VoIP and cloud communication providers.
(+) pay as you go
(+) no stress on resource management like cooling, rack space , wiring etc
(+) easy to setup
(-) not on premises, so security is not fully in your control
(-) outages in the cloud infrastructure's datacentre could lead to service disruption
Self-hosted / on-premises servers / private cloud
Maintaining a datacentre provides the flexibility to extend and/or develop tightly controlled use cases. It is often a requirement for secure communication platforms pertaining to government or banking communications, such as turret phones.
Some approaches set the servers up with OpenStack to manage an SDN (software-defined network). Other approaches involve VMware to virtualize servers and then Docker containers managed via Kubernetes to dynamically spawn server instances as load scales up or down.
(+) more secure and controlled
(+) no monthly recurring fees to cloud vendors
(-) maintenance of racks and servers
(-) requires planning for high availability and geographical deployment for redundancy
I have come across many small startups trying to build CPaaS solutions from scratch, only to realise after weeks of trying to build an MVP that they are stuck with firewall, NAT, media-quality or interoperability issues. Since there are so many solutions already on the market, it is best to instead use them as an underlying layer and build application services on top of them, such as call-centre or CRM services with custom wrappers.
Tech insights and experiences
Companies that have been catering to the telco and communication domain make robust solutions based on industry best practices, which beats a novice solution built in a fortnight any day.
Keeping up with emerging trends
Market trends like new codecs, rich communication services, multi-tenancy, contextual communication, NLP and other ML-based enhancements are provided by the CPaaS company, which will typically try to abstract the implementation details away from its SDK users or clients.
Auto Scaling, High Availability
A firm specializing in CPaaS solutions has already thought of clustering and autoscaling to meet peak traffic requirements, and of backup/replication on standby servers to activate in case of failure.
CAPEX and OPEX
Using a CPaaS saves on human resources, infrastructure, and time to market. It saves tremendously on underlying IT infrastructure and many a times provides flexible pricing models.
Call Rates are very critical for billing and charging the users. Any updates from the customer or carriers or individuals need to propagate automatically and quickly to avoid discrepancies and negative margins.
CDR ( Call Detail Record ) processing pipeline
CDRs need to be processed sequentially and incrementally on a record-by-record basis or over sliding time windows. CDR can also be used for a wide variety of analytics including correlations, aggregations, filtering, and sampling.
Updating the rate sheet (charges per call or per second)
The following setup is ideal: take new input rate-sheet values via a web UI console or a POST API and propagate them quickly to the main DB via a queuing system such as SQS. Serverless operations, such as AWS Lambda, can be used via a trigger-based system for any updates. This ensures that any new input rates are updated in real time, while fallback values are maintained in separate storage such as an S3 bucket.
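As a rough illustration of the propagation step, a Node.js Lambda handler consuming the rate-sheet queue might look like the sketch below; the table name, record shape and field names are assumptions for illustration, not part of any real deployment:

const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');
const db = new DynamoDBClient({});

exports.handler = async (event) => {
  // An SQS trigger delivers one or more rate-sheet updates per invocation.
  for (const record of event.Records) {
    const rate = JSON.parse(record.body); // e.g. { prefix: '4420', pricePerMin: '0.012' }
    await db.send(new PutItemCommand({
      TableName: 'CallRates',
      Item: {
        prefix: { S: rate.prefix },
        pricePerMin: { N: rate.pricePerMin },
      },
    }));
  }
};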
In current VoIP scenarios, a call may pass through various telco providers, ISPs and cloud-telephony service providers, where each system maintains its own call records and billing. This, in my opinion, is duplication and misses a single source of truth. A decentralized, reliable and consistent data store via blockchain could potentially maintain the call records, making them immutable and non-disputable. Some more details on the concept are in the article below.
Unified communication services built around WebRTC should be vendor-agnostic and multi-tenant, and be supported by other Communication Service Providers (CSPs), SIP trunks, PBXs, Telecom Equipment Manufacturers (TEMs) and Communication Platform as a Service (CPaaS) providers. This can happen if all endpoints adhere to the SIP standards in the most updated RFCs. However, since not all are on board, session border controllers are a great way to mitigate the differences and provide seamless connectivity for signalling and media, whether between WebRTC, SIP or PSTN, or from TDM to IP.
Session Border Controllers (SBCs) assist in controlling the signalling and usually also the media streams involved in calls and sessions.
They are often part of a VoIP network at the border where there are two peer networks of service providers, such as the backbone network and the access network of a corporate communication system which is behind a firewall.
A more complex example is that of a large corporation where different departments have security needs for each location and perhaps for each kind of data. In this case, filtering routers or other network elements are used to control the flow of data streams. It is the job of a session border controller to assist policy administrators in managing the flow of session data across these borders. – wikipedia
An SBC acts like a SIP-aware firewall with a proxy/B2BUA.
What is B2BUA?
A back-to-back user agent (B2BUA) is a proxy-like server that splits a SIP transaction into two pieces:
on the side facing the User Agent Client (UAC), it acts as a server;
on the side facing the User Agent Server (UAS), it acts as a client.
B2BUAs keep state information about active dialogs. Read more here.
Remote Access
SBCs mostly have a public address for teleworkers and an internal IP for the enterprise/inner LAN. This enables users connected to the enterprise LAN (who do not have a public address) to make calls to users outside of their network. During this process the SBC takes care of the following while relaying packets:
Security
Connectivity
QoS
Regulatory
Media Services
Statistics and billing information
Topology hiding
The SBC hides and anonymizes sensitive information like IPs and ports before forwarding messages to the outside world. This helps protect internal nodes of operators, such as PSTN gateways or SIP proxies, from being revealed externally.
Explaining the functions of SBC in detail
1. Security
SBCs are often used by corporations along with firewalls and intrusion prevention systems (IPS) to enable VoIP calls to and from a protected enterprise network. VoIP service providers use SBCs to allow the use of VoIP protocols from private networks with Internet connections using NAT, and also to implement the strong security measures that are necessary to maintain a high quality of service. The security features include:
Prevention of malicious attacks on the network, such as DoS and DDoS
Intrusion detection
cryptographic authentication
Identity/URL based access control
Blacklisting bad endpoints
Malformed packet protection
Encryption of signaling (via TLS and IPSec) and media (SRTP)
Stateful signalling and Validation
Toll fraud – detecting who is intending to use the telecom services without paying
2. Connectivity
As the SBC offers an IP-to-IP network boundary, it receives SIP requests from users, such as REGISTER and INVITE, and routes them towards the destination, masking their IPs. During this process it performs various operations like
NAT traversal
IPv4 to IPv6 inter-working
VPN connectivity
SIP normalization via SIP message and header manipulation
Multi vendor protocol normalization
Further routing features include least-cost routing based on MOS (Mean Opinion Score): choosing a path based on MOS is better than choosing any random path.
Protocol translations between SIP, SIP-I, H.323.
In essence, SBCs achieve interoperability, overcoming some of the problems that firewalls and network address translators (NATs) present for VoIP calls.
Automatic Rerouting
Connectivity loss from a UA for a whole branch is detected by timeouts, but it can also be detected through SIP OPTIONS by the SBC. On such connectivity loss, the SBC decides between rerouting and sending back a 504 to the caller.
4. QoS
To introduce performance optimization and business rules into call management, QoS is very important. This includes the following:
Traffic policing
Resource allocation
Rate limiting
Call Admission Control (CAC)
ToS/DSCP bit setting
Recording and Audit of messages , voice calls , files
System and event logging
5. Regulatory
Government policies (such as for ambulance or police calls) and/or enterprise policies may require some calls to hold priority over others. This can also be configured in the SBC as emergency calls and prioritization.
Some instances may require the communication provider to comply with lawful bodies and provide session information or content; this is also called Lawful Interception (LI). It enables security officials to collect specific information rather than examining all the traffic that passes through a particular router. This is also part of the SBC.
6. Media services
Many of the new generation of SBCs also provide built-in digital signal processors (DSPs) to enable them to offer border-based media control and services such as- DTMF relay , Media transcoding , Tones and announcements etc.
WebRTC-enabled SBCs also provide conversion between DTLS-SRTP and RTP/RTCP, transcoding of Opus into G.7xx codecs,
and the ability to relay the VP8/VP9 and H.264 codecs.
7. Statistics and billing information
SBCs have an interface with OSS/BSS systems for the billing process, as almost all traffic that passes through the edge of the network passes via the SBC. For this reason it is also used to gather statistics and usage-based information like bandwidth, memory and CPU, as well as PCAP traces of both the signaling and the media of specific sessions.
New feature-rich SBCs also have built-in digital signal processors (DSPs) and are thus able to provide more control over a session's media/voice. They also add services like relay and interworking, media transcoding, tones and announcements, DTMF, etc.
Session Border Controller for WebRTC, SIP, PSTN, IP PBX and Skype for Business.
Diagram Component Description
Gateways provide compression or decompression, control signaling, call routing, and packetizing.
PSTN Gateway: converts analog to VoIP and vice versa. Audio only, with no support for rich multimedia.
VoIP Gateway: a VoIP gateway acts like a translator converting digital telecom lines to VoIP. VoIP gateways often also handle voice and fax, and they have interfaces to softswitches and network management systems.
WebRTC Gateway: helps provide NAT traversal with ICE-lite and STUN connectivity for peers behind policies and firewalls.
SIP trunking : Enterprises save on significant operation cost by switching to IP /SIP trunking in place of TDM (Time Division Multiplexing). Read more on SIP trunk and VPN here.
SIP Server: a telecom application server (SIP server) is useful for building VAS (Value Added Services) and other fine-grained policies on real-time services. Read more on SIP servers here.
VoIP/SIP Service Provider: there are many worldwide SIP service providers, such as Verizon in the USA, BT in Europe, Swisscom in Switzerland, etc.
Building a SBC
The latest trends in the telecommunications industry demand an open, standardized SBC to cater to the growing array of SIP trunking, unified multimedia communications (UC&C), VoLTE, VoWi-Fi, RCS and OTT services worldwide. Building an SBC requires that it meet the following prime requirements:
software centric
Cloud deployable
Rich multimedia (audio , video , files etc) processing
open interfaces
The end product should be flexible enough to be deployed as a COTS (Commercial Off-The-Shelf) product or as a virtual network function in an NFV cloud.
Multiple configurations should be supported, such as hosted or cloud-deployed.
Overcome inconsistencies in SIP from different Vendors
Security and Lawful Interception
Carrier Grade Scaling
Flow Diagram
Thus we see how the SBC became an important part of communication systems built over SIP and MGCP. SBCs offer B2BUA (Back-to-Back User Agent) behaviour to control both signalling and media traffic.
Setting up an EC2 instance on AWS for a web real-time communication platform over Node.js and socket.io using WebRTC.
Primarily, a web call, chat and conference platform uses WebRTC for the media stream and socket.io for the signalling. Additional technologies used are NoSQL for session-information storage and REST APIs for exposing session details to third parties.
Below is a comprehensive setup of an EC2 t2.micro free-tier instance, installation of a WebRTC project module, and samples of customisation and usage.
Amazon EC2: elastic, general-purpose compute servers, meaning that compute capacity in the cloud can be resized based on load. The free tier includes 750 hours per month of Linux, RHEL, or SLES t2.micro instance usage and expires 12 months after sign-up.
Some other products are also covered under the free tier and may come in handy for setting up the complete platform. Here is a quick summary:
Amazon S3: a storage service. It can be used to store media files like images, music, videos, recorded video, etc.
Amazon RDS: a relational database service. It is a good option if one is using MySQL or PostgreSQL for storing session information or user profile data.
Amazon SES: an email service. It can be used to send invites and notifications to users over mail for scheduled sessions or missed calls.
Amazon CloudFront: a CDN (content delivery network). If one wants their libraries to be widely available without extra overhead, a CDN is a good choice.
Alternatively, any server from Google Cloud, the Azure free tier, DigitalOcean or even Heroku can be used for WebRTC code deployment. Note that WebRTC media capture now requires an HTTPS origin.
Server Setup
Set up environment by installing nvm , npm and git ( source version control)
Since 2015 it has been mandatory to call WebRTC's getUserMedia API only from HTTPS origins, i.e. voice, video, geolocation and screen sharing require secure origins.
Note that this does not apply to cases where only a peer's incoming media stream is rendered or only DataChannels are used.
For PoC purposes, here is the way to generate a self-signed certificate.
Transport Layer Security / Secure Sockets Layer (TLS/SSL) is based on a public/private key infrastructure. The steps are as follows:
1. Create a private key
openssl genrsa -out webrtc-key.pem 2048
2. Create a "Certificate Signing Request" (CSR) file
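A minimal sketch of the remaining commands, assuming the key file from step 1; the output file names (webrtc-csr.pem, webrtc-cert.pem) and validity period are illustrative:
openssl req -new -key webrtc-key.pem -out webrtc-csr.pem
3. Self-sign the certificate with the private key
openssl x509 -req -days 365 -in webrtc-csr.pem -signkey webrtc-key.pem -out webrtc-cert.pem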
Create the HTTPS server using self-generated or purchased SSL certificates, with the fs, node-static and https modules. To create self-signed SSL certificates, follow the section above on SSL certificates.
var fs = require('fs');
var _static = require('node-static');
var https = require('https');
var file = new _static.Server("./", {
  cache: 3600,
  gzip: true,
  indexFile: "index.html"
});
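A minimal sketch of the missing HTTPS server creation, assuming the self-signed key and certificate generated above (file names and port are illustrative):
var options = {
  key: fs.readFileSync('webrtc-key.pem'),
  cert: fs.readFileSync('webrtc-cert.pem')
};
// serve the static files over HTTPS so getUserMedia runs on a secure origin
https.createServer(options, function (request, response) {
  request.addListener('end', function () {
    file.serve(request, response);
  }).resume();
}).listen(8084);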
The document-ready script that invokes the JS:
$(document).ready(function () {
sessionid = init(true);
var local = {
localVideo: "localVideo",
videoClass: "",
userDisplay: false,
userMetaDisplay: false
};
var remote = {
remotearr: ["video1", "video2"],
videoClass: "",
userDisplay: false,
userMetaDisplay: false
};
webrtcdomobj = new WebRTCdom(
local, remote
);
var session = {
sessionid: sessionid,
socketAddr: "https://localhost:8084/"
};
var webrtcdevobj = new WebRTCdev(session, null, null, null);
startcall();
});
Common known issues:
1. Opening the page https://<web server ip>:<web server port>/index.html says it is insecure
This is because the self-signed certificate produced by open-source OpenSSL is not recognized by a trusted third-party Certificate Authority.
A CA (Certificate Authority) issues digital certificates to certify the ownership of a public key for a domain.
To solve the access issue, go to https://<web server ip>:<web server port> and grant access permission as outlined in the snapshot below.
2. Permission has already been given to the web server and the page loads, but there is still no activity.
If you open the developer console (Ctrl+Shift+I on Google Chrome), you will notice that there might be access-related errors in red. If you are using different servers for the web server and the signalling server, or even the same server but different ports, you need to explicitly visit the signalling server's URL and port and grant access permission, for the same reason as mentioned above.
3. No webcam capture on opening the page
This could happen due to many reasons
page is not loaded on https
browser is not webrtc compatible
Media permission to webcam are blocked
the machine does not have any media capture devices attached
Driver issues in the client machine while accessing webcams and mics .
For the last couple of weeks , I have been working on the concept of rendering 3D graphics on WebRTC media stream using different JavaScript libraries as part of a Virtual Reality project .
What is Augmented Reality ?
Augmented reality (AR) is viewing a real-world environment with elements that are supplemented by computer-generated sensory inputs such as sound, video, graphics , location etc.
How is it different from Virtual Reality?
Virtual Reality – replaces the real world with a simulated one; the user is isolated from real life. Examples – Oculus Rift & Kinect
Augmented Reality – blends virtual elements with real life; the user interacts with the real world through digital overlays. Examples – Google Glass & HoloLens
Methods for rendering augmented Reality
Computer Vision
Object Recognition
Eye Tracking
Face Detection and substitution
Emotion and gesture picker
Edge Detection
Web-based Augmented Reality solution
Components for a web-based end-to-end AR solution:
Web :
WebRTC getusermedia
Web Speech API
WebGL
css
svg
HTML5 canvas
sensor API
H/w :
Graphics driver
microphone and camera
sensors
3D :
Geometry and Math Utilities
3D Model Loaders and models
Lights, Materials,Shaders, Particles,
Animation
WebRTC
Web based Real Time communications
Defines browser media stream and data APIs
Awaiting further standardization, at the API level at the W3C and at the protocol level at the IETF
Enables browser-to-browser applications for voice calling, video chat and P2P file sharing without plugins
Enables web browsers with Real-Time Communications (RTC) capabilities
MIT : Free, open project
Code snippet for WebRTC API
1. To begin with WebRTC, we first need to validate that the browser has permission to access the webcam, for example as below.
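A minimal sketch using the standard getUserMedia API (the 'localVideo' element id is an assumption):
// ask the browser for camera and microphone access
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(function (stream) {
    // preview the local stream in a video element
    var video = document.getElementById('localVideo');
    video.srcObject = stream;
    video.play();
  })
  .catch(function (err) {
    console.error('getUserMedia failed: ' + err.name);
  });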
Display the video as a plane which can be viewed from various angles in a given background landscape. Credits for code : https://stemkoski.github.io/Three.js/
1.Use code from slide 10 to get user’s webcam input through getUserMedia
2. Make the scene, camera and renderer as previously described
3. Give orbital controls for viewing the media plane from all angles
controls = new THREE.OrbitControls( camera, renderer.domElement );
4. Add a point light to the scene, for example as below
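For example (light colour and position are illustrative):
var pointLight = new THREE.PointLight(0xffffff);
pointLight.position.set(0, 150, 100);
scene.add(pointLight);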
5. Make the floor with an image texture
var floorTexture = THREE.ImageUtils.loadTexture( 'imageURL.jpg' );
floorTexture.wrapS = floorTexture.wrapT = THREE.RepeatWrapping;
floorTexture.repeat.set( 10, 10 );
var floorMaterial = new THREE.MeshBasicMaterial({map: floorTexture, side: THREE.DoubleSide});
var floorGeometry = new THREE.PlaneGeometry(1000, 1000, 10, 10);
var floor = new THREE.Mesh(floorGeometry, floorMaterial);
floor.position.y = -0.5;
floor.rotation.x = Math.PI / 2;
scene.add(floor);
6. Add Fog
scene.fog = new THREE.FogExp2( 0x9999ff, 0.00025 );
7. Add the video image context and texture.
video = document.getElementById( 'monitor' );
videoImage = document.getElementById( 'videoImage' );
videoImageContext = videoImage.getContext( '2d' );
videoImageContext.fillStyle = '#000000';
videoImageContext.fillRect( 0, 0, videoImage.width, videoImage.height );
videoTexture = new THREE.Texture( videoImage );
videoTexture.minFilter = THREE.LinearFilter;
videoTexture.magFilter = THREE.LinearFilter;
var movieMaterial=new THREE.MeshBasicMaterial({map:videoTexture,overdraw:true,side:THREE.DoubleSide});
var movieGeometry = new THREE.PlaneGeometry( 100, 100, 1, 1 );
var movieScreen = new THREE.Mesh( movieGeometry, movieMaterial );
movieScreen.position.set(0,50,0);
scene.add(movieScreen);
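For the webcam feed to actually appear on the movie screen, the canvas and texture have to be refreshed every frame. A sketch of the per-frame update (following the stemkoski example credited above, using the element ids from step 7):
// inside the render loop: copy the current video frame onto the canvas
// and flag the texture so three.js re-uploads it to the GPU
if ( video.readyState === video.HAVE_ENOUGH_DATA ) {
  videoImageContext.drawImage( video, 0, 0, videoImage.width, videoImage.height );
  videoTexture.needsUpdate = true;
}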
Step 4 : Camera
Camera types in three.js are CubeCamera, OrthographicCamera and PerspectiveCamera. We are using a PerspectiveCamera here. Its attributes are field of view, aspect ratio, and the near and far clipping planes.
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
Step 5: Renderer
Renderer uses a <canvas> element to display the scene to us.
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
Step 6: Geometry. A BoxGeometry object contains all the points (vertices) and fill (faces) of the cube.
var geometry = new THREE.BoxGeometry( 1, 1, 1 );
Step 7: Material
three.js has materials like LineBasicMaterial, MeshBasicMaterial, MeshPhongMaterial and MeshLambertMaterial.
These have properties like id, name, color, opacity, transparent, etc. Here we use MeshBasicMaterial with a color attribute of 0x00ff00, which is green.
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
Step 8: Mesh
A mesh is an object that takes a geometry, and applies a material to it, which we then can insert to our scene, and move freely around.
var cube = new THREE.Mesh( geometry, material );
Step 9: By default, when we call scene.add(), the thing we add will be added to the coordinates (0,0,0). This would cause both the camera and the cube to be inside each other. To avoid this, we simply move the camera out a bit.
scene.add( cube );
camera.position.z = 5;
Step 10: Create a loop to render something on the screen
function render() {
requestAnimationFrame( render );
renderer.render( scene, camera );
}
render();
This creates a loop that causes the renderer to draw the scene every time the screen refreshes (typically 60 times per second).
Step 11 : Animating the cube
This will be run every frame (60 times per second), and give the cube a nice rotation animation
cube.rotation.x += 0.1;
cube.rotation.y += 0.1;
2. Shaded Material on Sphere
Step 1: create an empty page and import three.min.js and jQuery
<html>
<head>
<title>Shaded Material on Sphere </title>
<style>
body { margin: 0; }
canvas { width: 100%; height: 100% }
</style>
<script src="js/jquery.min.js"></script>
<script src="js/three.min.js"></script>
<script>// Our Javascript will go here.</script>
</head>
<body>
<div id="container"></div>
</body>
</html>
Step 2: Repeat the same steps as in the previous example
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, 600/600 , 0.1, 10000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(600 , 600 );
$('#container').append(renderer.domElement);
scene.add(camera);
camera.position.z = 300; // the camera starts at 0,0,0 so pull it back
3. Create the sphere’s material as MeshLambertMaterial
MeshLambertMaterial is for non-shiny (Lambertian) surfaces, evaluated per vertex. Set the color to red.
var sphereMaterial = new THREE.MeshLambertMaterial( { color: 0xCC0000 });
4. create a new mesh with sphere geometry ( radius, segments, rings) and add to scene
var sphere = new THREE.Mesh( new THREE.SphereGeometry( 50, 16, 16 ), sphereMaterial);
scene.add(sphere);
5. Light
Create a light, set its position and add it to the scene as well. A light can be a point light, spot light or directional light.
var pointLight = new THREE.PointLight(0xFFFFFF);
pointLight.position.x = 10;
pointLight.position.y = 50;
pointLight.position.z = 130;
scene.add(pointLight);
6. Render the whole thing
renderer.render(scene, camera);
3. Complex objects like Torusknot
Step 1 : Same as before make scene , camera and renderer
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(125, window.innerWidth / window.innerHeight, 1, 500);
camera.position.set(0, 0, 100);
camera.lookAt(new THREE.Vector3(0, 0, 0));
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
Step 2 : Add the lighting
var light = new THREE.PointLight(0xffffff);
light.position.set(0, 250, 0);
scene.add(light);
var ambientLight = new THREE.AmbientLight(0x111111);
scene.add(ambientLight);
Step 3 : Create the torus knot geometry, material and mesh
var geometry = new THREE.TorusKnotGeometry( 8, 2, 100, 16, 4, 3 );
var material = new THREE.MeshLambertMaterial( { color: 0x2022ff } );
var torusKnot = new THREE.Mesh( geometry, material );
torusKnot.position.set(3, 3, 3);
scene.add( torusKnot );
camera.position.z =25;
Step 4 : Do the animation and render on screen
var render = function () {
requestAnimationFrame( render );
torusKnot.rotation.x += 0.01;
torusKnot.rotation.y += 0.01;
renderer.render(scene, camera);
};
render();
TFX is a modular, widget-based WebRTC communication and collaboration solution. It is customizable, letting developers create and add their own widgets on top of the underlying WebRTC communication mechanism. It supports an extensive set of user activities such as video chat, messaging, playing games, collaborating on code, drawing together, etc.; it can go as wide as your imagination. This post describes the process of creating widgets to host on the existing TFX platform.
Prerequisites
It is required to have the TFX Chrome extension installed from the Chrome Web Store and running. To do this, follow the steps described in the TangoFX v0.1 User's Manual.
How to test TFX Sessions?
TFX Sessions uses the browser's media APIs, like getUserMedia and PeerConnection, to establish a p2p media connection. Before media can traverse between two endpoints, the signalling server is required to establish the path using the offer-answer model. This can be tested by writing unit test cases around these function calls, for example as sketched below.
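A minimal sketch of such a sanity check, assuming no particular test framework:
// rough checks that the browser exposes the APIs TFX Sessions relies on
function checkWebRTCSupport() {
  var hasGetUserMedia = !!( navigator.mediaDevices && navigator.mediaDevices.getUserMedia );
  var hasPeerConnection = typeof window.RTCPeerConnection === 'function';
  console.log('getUserMedia supported : ' + hasGetUserMedia);
  console.log('RTCPeerConnection supported : ' + hasPeerConnection);
  return hasGetUserMedia && hasPeerConnection;
}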
TFX Sessions uses a socket.io-based handshake between peers to ascertain that they are valid endpoints to enter a communication session. This is determined by the SDP (Session Description Protocol) exchange. The same can be observed in the chrome://webrtc-internals/ traces and graphs.
How to make widgets using TFX API ?
Step 1: To make widgets for TFX, just write a simple web program consisting of one main HTML page and its associated CSS and JS files.
Step 2: Find an interesting idea that requires minimal JS and CSS. Remember it is a widget and not a full-fledged web project; however, JS frameworks like RequireJS, AngularJS, EmberJS, etc. work as well.
Step 3: Make a compact folder with the name of the widget and put the respective files in it. For example, the HTML or view files go into the src folder, JavaScript files into the js folder, CSS files into the css folder, pictures into the picture folder, audio files into the sound folder, and so on.
Step 4: Once the widget performs well in a standalone environment, we can add a sync file to communicate peer behaviour across the TFX network. For this we primarily use two methods:
SendMessage: sends data that will traverse TFX's DataChannel API. The content is in JSON format and is shared with the peers in the session.
OnMessage: receives the messages communicated by the TFX API over the network.
Step 5: Submit the application to us, or test it yourself by adding the plugin description to the widgetmanifest.json file. A few already-added widgets are
Step 6: For proper orientation of the application, make sure that overflow is hidden and left padding is at least 60px so that it doesn't overlap with the panel: padding-left: 60px; overflow: hidden;
Step 7: Voila, the widget is ready to go.
Simple Messaging Widget
For demonstration purposes, I have summarised the exact steps followed to create the simple messaging widget, which uses WebRTC's DataChannel API underneath and the TFX SendMessage & OnMessage APIs to exchange messages.
Step 1: Think of a general chat scenario as present in various messaging sites.
Step 2: Make a folder structure with separate js, css and src folders and add the respective files. It would look like the following figure:
// send the message when focus is on the message div and Enter is hit
$("#messages").keyup(function(event){
if(event.keyCode == 13){
var msg=$('#MessageBox').val();
//send to peer
var data ={
"msgcontent":msg
}
sendMessage(data);
addMessageLog(msg);
$("#MessageBox").val('');
}
});
function addMessageLog(msg){
//add text to text area for message log for self
$('#MessageHistoryBox').text( $('#MessageHistoryBox').text() + '\n'+ 'you : '+ msg);
}
// handles send message
function sendMessage(message) {
var widgetdata={
"type":"plugin",
"plugintype":"relaymsg",
"action":"update",
"content":message
};
// postmessage
window.parent.postMessage(widgetdata,'*');
}
//to handle incoming message
function onmessage(evt) {
//add text to text area for message log from peer
if(evt.data.msgcontent!=null ){
$('#MessageHistoryBox').text( $('#MessageHistoryBox').text() +'\n'+ 'other : '+ evt.data.msgcontent );
}
}
window.addEventListener("message",onmessage,false);
Step 6: The end result is :
Developing a cross origin Widget ( XHR)
Let us demonstrate the process and important points to create a cross- origin widget :
Step 1: Develop a separate web project and run it over HTTPS.
Step 2: Add the widget frame in TFX. Following is the code I added to make an XHR request over GET:
var xmlhttp;
xmlhttp=new XMLHttpRequest();
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","https://192.168.0.119:8000/TFXCrossSiteProj/files/document1.txt",true);
xmlhttp.send();
Step 3: Since we are using a self-signed HTTPS certificate, we have to open the URL separately in the browser and give it explicit permission under the advanced settings. Make sure the original file is visible to you at the widget's URL.
Step 4: Add permission to the manifest to allow the cross-origin requests, for example as shown below.
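For a Chrome extension this is done through the permissions entry in the manifest; a sketch using the same illustrative host as the XHR call above (the rest of the manifest is omitted):
"permissions": [
  "https://192.168.0.119:8000/*"
]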
Step 5: The rest of the process is similar to developing a regular widget, i.e. CSS and JS.
Step 6: Resulting widget on TFX
Note 1: In the absence of changes to the manifest file, the cross-origin request is met with an Access-Control-Allow-Origin error.
Note 2: While using POST, TFX responds with 'Failed to load resource: the server responded with a status of 404 (Not Found)'.
Note 3: If HTTP is used instead of HTTPS, TFX also responds with 'Failed to load resource: the server responded with a status of 404 (Not Found)'.
TFX is a WebRTC-based communication platform built entirely on open standards, making it extensively scalable. The underlying API completely masks the communication aspect and lets the user enjoy an interactive communication session. It also provides an easy-to-use widget framework that can be used to build applications on the TFX platform.
TFX Sessions
TFX Sessions is part of TFX. It is a free Chrome-extension WebRTC client that gives communicating and collaborating parties an interactive and immersive experience. You can find it on the Chrome Web Store here.
Features of TFX Sessions:
Through TFX, users can have instant multimedia Internet call sessions .
The core features are :
No signin or account management
No additional requirement like Flash , Silverlight or Java
URL based session management
secure WebRTC based communication
complete privacy with no user tracking or media flow interruption
Ability to share the session on social platforms like Facebook, Twitter, LinkedIn, Gmail, Google Plus, etc.
Ability to choose between multiple cameras
The TFX platform has developer friendly APIs to help build widgets. Some of the pre-built widgets available on TFX are:
Coding
Drawing
Multilingual chat
Screen sharing
TFX sessions is free for personal use and can be downloaded from Chrome Webstore.
What differentiates it from other internet call services?
No registration or login for account management is required.
Communication is directly peer-to-peer, i.e. information privacy.
Third-party apps and services can be included as widgets on the TFX platform.
It can be slimmed down to be embedded inside a mobile app WebView, an iframe, other portals, etc. at any time.
TFX Sessions Integration Models
The 3 possible approaches for TFX Integration in increasing order of deployment time are :
WebSite’s widget on TFX chrome extension .
Launch TFX extension in an independent window from website
TFX call from embedded Window inside the website page
1. WebSite’s widget on TFX chrome extension .
This outlines the quickest deliverable approach: building the website's own customized widget on the TFX widgets API and deploying it on the existing TFX communication setup.
Step 1 : Login using websites credentials to access the content
Step 2 : Access the website together with the other person inside the TFX "Pet Store" widget
2. Launch TFX in an independent window from “Click to Call” Button on website
This approach outlines the process of launching TFX in an independent window from a click of a button on website. However it is a prerequisite to have TFX extension installed on your Chrome browser beforehand.
Step 1 : Have TFX installed on the Chrome browser
Step 2 : Trigger and launch the TFX Chrome extension window on the click of a button on the webpage
3. TFX call from embedded Window inside Website page
This section is for the third approach, which is being able to make TFX calls from an embedded window inside the webpage. Refer to the sample screen below:
Step 1 : Have TFX embedded in an iframe inside the website
Step 2 : Make session on click of button inside the iframe.
Technical Details about TFX like architecture , widgets development , components description etc can be found here : TFX Platform
However, the security challenges with a web-server-based WebRTC service are still many, for example:
If both peers have a WebRTC browser, one can place a WebRTC call to the callee at any time with auto-answer. This might result in a denial of service (DoS) for the receiver.
Since the media is p2p and can also bypass firewall settings through a TURN server, this can result in unwanted/prohibited data being sent on the network.
WebSocket packets cannot easily be inspected to tell whether they carry normal web navigation or SDP, so sessions can be set up and information exchanged covertly through the web server.
Threats from screen sharing: for example, a user might mistakenly share their internet banking screen or some confidential information / PII present on the desktop.
Giving long-term access to the camera and microphone for certain sites is also a concern. for example: in an unclosed tab on a site that has access to your microphone and camera, the remote peer can secretly be viewing your webcam and microphone inputs.
Clever use of the user interface to mask an ongoing call can mislead the user into believing that the call has ended while it is secretly still ongoing.
Network attackers can modify an HTTP connection through a Wi-Fi router or hotspot to inject an IFRAME (or a redirect) and then forge the response to initiate a call to themselves.
As WebRTC doesn’t have a strong congestion control mechanism, it can eat up a large chunk of the user’s bandwidth.
By simply visiting chrome://webrtc-internals/ in the Chrome browser, one can view full traces of all WebRTC communication happening through that browser. The traces contain all kinds of details such as the signalling server used, relay/TURN servers, peer IPs, frame rates, etc., which can jeopardise the security of VoIP service providers.
Of course, other challenges that come with any web-service-based architecture are also applicable here, such as:
Malicious Websites which automatically execute the attacker’s scripts.
User can be induced to download harmful executable files and run them.
Improper use of W3C Cross-Origin Resource Sharing (CORS) to bypass SAME ORIGIN POLICY (SOP)
Unlike most conventional real-time systems (e.g., SIP-based softphones), WebRTC communications are directly controlled by a web server over some signalling protocol, which may be XMPP, WebSockets, socket.io, Ajax, etc. This poses new challenges, such as:
A web browser might expose JavaScript APIs which allow the web server to place a video call itself. This may let web pages secretly record and stream webcam activity from the user's computer.
Malicious calling services can record the user's conversation and misuse it.
Malicious webpages can lure users via advertising and execute auto-calling services.
Since JavaScript calling APIs are implemented as browser built-ins, unauthorized access to these can also make users’ audio and camera streams vulnerable.
If programs and APIs allow the server to instruct the browser to send arbitrary content, then they can be used to bypass firewalls or mount denial of service attacks.
The general goal of security is to identify and resolve security issues during the design phase, so they do not cost the service provider time, money and reputation at a later stage. Security for a large architecture involves many aspects; there is no single device or methodology that guarantees an architecture is 'secure'. Areas that malicious individuals will attempt to attack include, but are not limited to:
Improperly coded applications
Incorrectly implemented protocols
Operating System bugs
Social engineering and phishing attacks
As security is a broad topic touching many parts of WebRTC, this section is not meant to address all topics but instead focuses on specific 'hot spots': areas that require special attention due to the unique properties of the WebRTC service. Several security-related topics are of particular interest with respect to WebRTC; they are discussed in detail in the sections below.
Today the browser acts as a TRUSTED COMPUTING BASE (TCB) where the HTML and JS act inside of a sandbox that isolates them both from the user’s computer.
With the latest tightening of security in WebRTC platforms, a script cannot access a user's webcam, microphone, location, files or desktop capture without the user's explicit consent. When the user allows access, a red dot appears on that tab, providing a clear indication that the tab has media access.
Figure: browser asking for the user's consent to access media devices for WebRTC. Figure: media capture active in the browser, indicated by a red dot.
Cross-site scripting (XSS) is a type of vulnerability typically found in web applications (including web browsers, through breaches of browser security) that enables attackers to inject client-side script into web pages viewed by other users.
A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same origin policy.
Cross-site scripting carried out on websites accounted for roughly 80.5% of all security vulnerabilities documented by Symantec as of 2007 according to Wikipedia.
Their effect may range from a petty nuisance to a significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site’s owner.
As the primary method for accessing WebRTC is expected to be HTML5-enabled browsers, there are specific security considerations concerning their use, such as protecting keys and sensitive data from cross-site scripting or cross-domain attacks, WebSocket use, iframe security, and other issues. Because the client software is controlled by the user, and because the browser does not, in most cases, run in a protected environment, there are additional chances that the WebRTC client will become compromised. This means all data sent to the client could be exposed, including:
keys
hashes
registration elements (PUID etc.)
Therefore additional care needs to be taken when considering what information is sent to the client, and additional scrutiny needs to be performed on any data coming from the client.
Clickjacking (user-interface redress attack, UI redressing) is a malicious technique of tricking a web user into clicking on something different from what the user perceives they are clicking on, thus potentially revealing confidential information or ceding control of their computer while clicking on seemingly innocuous web pages. It is a browser security issue affecting a variety of browsers and platforms; a clickjack takes the form of embedded code or a script that can execute without the user's knowledge, such as a button that appears to perform another function. A compromised personal computer with installed adware, viruses or spyware such as trojan horses can also compromise the browser and expose anything the browser sees.
The browser acts as a TRUSTED COMPUTING BASE (TCB) both from the user's perspective and, to some extent, from the server's. HTML and JavaScript (JS) provided by the web server can execute scripts in the browser and generate actions and events; however, the browser runs them in a sandbox that isolates these scripts both from the user's computer and from the server.
The user's computer may hold a lot of private and confidential data on disk. Browsers do make it mandatory that the user explicitly selects a file and consents to its upload before any file upload or transfer transaction; still, it is not rare for misleading text and buttons to trick users into selecting files.
Another way of accessing local resources is by getting malicious executable files downloaded to the user's computer, which may then harm it.
We know that the XMLHttpRequest() API can be used to send data from one origin to another without the user's knowledge. However, the SAME ORIGIN POLICY (SOP) in browsers now prevents server A from mounting attacks on server B via the user's browser, which protects both the user (e.g., from misuse of their credentials) and server B (e.g., from a DoS attack).
SOP forces scripts from each site to run in their own isolated sandboxes. It enables webpages and scripts from the same origin server to interact with each other's JS variables, but prevents pages from different origins, or even iframes on the same page, from exchanging information.
As part of SOP, scripts are allowed to make HTTP requests via the XMLHttpRequest() API only to servers that have the same origin/domain as the originator.
CORS enables multiple web services to intercommunicate. When a script from origin A executes what would otherwise be a forbidden cross-origin request, the browser instead contacts the target server B to determine whether it is willing to allow cross-origin requests from A. If it is willing, the browser allows the request. This consent-verification process is designed to safely allow cross-origin requests.
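As a rough illustration (server names are hypothetical), server B consents by returning the allowed origin in its response headers; a minimal Node.js sketch:
var http = require('http');
// minimal sketch of 'server B' allowing cross-origin requests from server A
http.createServer(function (req, res) {
  res.setHeader('Access-Control-Allow-Origin', 'https://server-a.example.com');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);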
Once a WebSockets connection has been established from a script to a site, the script can exchange any traffic it likes without being required to frame it as a series of HTTP request/response transactions.
Even though WebSockets overcome SOP and establish cross-origin transport channels, they pose some challenging scenarios for secure application design.
WebSockets use a masking technique to randomize the bits being transmitted, making it more difficult to generate traffic that resembles a given protocol and harder to inspect the flowing traffic.
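For reference, once the upgrade handshake succeeds a script can stream arbitrary frames; a minimal sketch (the URL and message shape are hypothetical):
var ws = new WebSocket('wss://signalling.example.com/socket');
ws.onopen = function () {
  // after the HTTP upgrade, traffic is no longer framed as request/response pairs
  ws.send(JSON.stringify({ type: 'hello', payload: 'any application-defined data' }));
};
ws.onmessage = function (event) {
  console.log('received frame: ' + event.data);
};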
JSONP is a hack designed to bypass the origin restriction through script-tag injection. A JSONP-enabled server wraps its response in a user-specified callback function.
When we use <script> tags, the domain limitation is ignored, i.e. we can load scripts from any domain. So when we need to fetch or exchange data, we just pass a callback parameter through the script URL. For example:
function mycallback(data){
// this is the callback function executed when script returns
alert("hi"+ data);</span>
}
var script = document.createElement('script');
script.src = '//serverb.com/v1/getdata?callback=mycallback'
document.head.appendChild(script)
Vulnerabilities have been found in the existing Java and Flash consent-verification techniques and handshakes.
Sender and receiver are able to share a media stream only after an offer-answer handshake, yet connectivity is already needed to perform NAT hole punching. Presuming the ICE (STUN/TURN) server is malicious, if the STUN transaction IDs were known to the calling scripts, the receiver's webpage could not ascertain whether incoming data is forged or genuine. To prevent this, the browser must generate transaction IDs that are hidden from the calling scripts and must not share them, even via a diagnostic interface.
As soon as the callee sends their ICE candidates, the caller learns the callee’s IP addresses. The callee’s server reflexive address reveals a lot of information about the callee’s location.
To prevent this, the server should suppress the start of ICE negotiation until the callee has answered. Also, users may hide their location entirely by forcing all traffic through a TURN server.
The goal of WebRTC-based call services should be to create a channel that is secure against both message recovery and message modification, for all audio, video and data.
With the increasing need for screen sharing in web apps and communication systems, there is always a high threat of oversharing/exposing confidential passwords, PINs, security details, etc. This may happen either through some part of the screen or through a notification which pops up.
There is always the case where the user believes they are sharing a single window when in fact they are sharing the entire desktop.
The attacker may request screen sharing and get the user to open their webmail, payment settings or even net-banking accounts.
When a user frequently uses a site, they may want to give the site long-term access to the camera and microphone (indicated by 'Always allow on this site' in Chrome). However, the site may later be hacked and thus initiate calls on the user's computer automatically, to secretly listen in.
Unless the user checks the laptop's glowing camera LED or monitors the traffic himself, he would not know there is an active call in the background which, as far as he is concerned, he had already ended. In such a case an attacker may pretend to end a call, showing a red phone icon and supportive text, but keep the session and media stream active while placing himself on mute.
Even if the calling service cannot directly access keying material, it can simply mount a man-in-the-middle attack on the connection; the idea is to insert a bridge capturing all the traffic.
To protect against this, it is now mandatory to use HTTPS for getUserMedia, and it is otherwise also recommended to keep WebRTC communication services on HTTPS or to use strict fingerprint verification. This section is derived from Security Considerations for WebRTC, draft-ietf-rtcweb-security-08.
We know that the forces behind WebRTC standardization are the WHATWG, W3C, IETF and strong internet working groups. WebRTC security was taken into consideration while its standards were being built; encryption methods and technologies like DTLS and SRTP were included to safeguard users from intrusion so that information stays protected.
The WebRTC media stack has native, built-in features that address security concerns; the peer-to-peer media is already encrypted for privacy. See the figure below:
WebRTC encrypts video and audio data via SRTP (the Secure Real-time Transport Protocol), ensuring that IP communications – your voice and video traffic – cannot be heard or seen by unauthorized parties.
What is SRTP ?
The Secure Real-time Transport Protocol (or SRTP) defines a profile of RTP (Real-time Transport Protocol), intended to provide encryption, message authentication and integrity, and replay protection to the RTP data in both unicast and multicast applications.
Earlier models of VoIP communication, such as SIP-based calls, had the option to use plain RTP, subjecting endpoint users to problems such as compromised media confidentiality. The WebRTC model, however, mandates the use of SRTP, ruling out the insecurities of plain RTP entirely. For encryption and decryption of the data flow, SRTP uses the Advanced Encryption Standard (AES) as the default cipher.
For such end-to-end media encryption, a shared secret has to be exchanged between the endpoints.
SDES (SDP Security Descriptions for Media Streams) carries the SRTP key in plaintext inside the SDP and relies on the signalling path (e.g. SIP over TLS) being protected hop by hop. This was common practice among SIP endpoints in IMS and telco ecosystems for sharing the SRTP secret key; however, with an in-view JS stack in the browser and openly accessible code, SDES is not suitable for WebRTC systems and is largely outdated.
Currently, DTLS (Datagram Transport Layer Security) is used by WebRTC endpoints for the cryptographic key exchange, multiplexed on the media path. For WebRTC to transfer real-time data, the data is first encrypted using keys established by the DTLS handshake; in the DTLS-SRTP handshake, both ends contribute to the SRTP keying material.
(+) Already built into all the WebRTC supported browsers from the start (Chrome, Firefox and Opera).
(+) On a DTLS encrypted connection, eavesdropping and information tampering cannot take place.
(-) The primary issue with supporting DTLS is that it can put a heavy load on SBCs handling encryption/decryption duties.
(-) Interworking DTLS-SRTP with SDES is CPU intensive:
SRTP from the DTLS-SRTP end flows through easily
SRTP from the SDES end requires auth+decrypt and then re-encrypt+auth
What is DTLS ?
DTLS allows datagram-based applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol .
Together, DTLS and SRTP enable the exchange of cryptographic parameters so that the key exchange takes place in the media plane, multiplexed on the same ports as the media itself, without the need to reveal crypto keys in the SDP.
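In practice this shows up in the SDP only as a certificate fingerprint and a DTLS role, never as key material; an illustrative (hypothetical, truncated) excerpt:
a=setup:actpass
a=fingerprint:sha-256 6B:8B:5D:F0:3C:...:9F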
Media relay points which proxy the media stream between endpoints can expose traffic and metadata; however, due to the end-to-end encrypted nature of WebRTC, they cannot be used to decipher or listen in on media packets. Such relay points include:
TURN server
Mixers
Media engines
It is important that WebRTC's SRTP stream terminates on another SRTP endpoint; SRTP-to-RTP gateways should be avoided.
In recent months everyone has been trying to get into the WebRTC space, while at the same time fearing that hackers might be able to listen in on conferences, or access user data or even private networks. Although development and usage of WebRTC is simple, its security and encryption aspects receive far less attention.
A simple WebRTC architecture is shown in the figure below :
By following the simple steps described below, one can ensure a more secure WebRTC implementation. The same applies to healthcare and banking firms looking to use WebRTC as a communication solution for their portals.
Ensure that the signalling platform runs over a secure protocol such as SIP over TLS / HTTPS / WSS. Also, since media is p2p, the media contents such as the audio and video channels flow between peers directly in full duplex.
To protect against a Man-In-The-Middle (MITM) attack, the media path should be monitored regularly for suspicious relays.
Users that can participate in a call should be pre-registered/authenticated with a registrar service. Unauthenticated entities should be kept out of the session's reach.
WebRTC authentication certificate
Make sure that ICE candidate values are masked, so that the caller's and callee's IP and location are not revealed to each other through tracing in chrome://webrtc-internals/ or packet capture in Wireshark on the user's end.
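One way to enforce this at the API level is to gather only relayed candidates, so host and server-reflexive addresses never appear; a sketch (TURN URL and credentials are placeholders):
var pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.com:3478',
    username: 'user',
    credential: 'pass'
  }],
  // 'relay' restricts gathering to TURN candidates, hiding the peers' own IPs
  iceTransportPolicy: 'relay'
});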
As the signalling server maintains the peer count, it should be consistently monitored for the addition of suspicious peers to a call session. If the number of peers actually present on the signalling server is greater than the number of peers interacting on the WebRTC page, it means someone is eavesdropping secretly and should be forcefully removed from session access.
It has been observed that, many times, non-tech-savvy users simply agree to all permission requests from the browser without consciously giving consent. Therefore, users should be made aware of websites that ask for undue permissions, for example to the camera, microphone or screen.
Third party API should be thoroughly verified before sending their data on WebRTC DataChannel.
Before desktop sharing, users should be properly notified and advised to close any screen containing sensitive information.
Support for WebRTC should not increase the security risk to the telecom network. Any device or software that is in the hands of the customer will eventually be compromised; it is just a matter of time.
All data received from untrusted/third party sources (i.e. all data from customer controlled devices or software) must be validated.
Expect that any data sent to the client will be obtained by malicious users
Ensure that the new service does not adversely impact the data security, privacy, or service of existing customers.
Remove PII and sensitive information from metadata and other records or traces such as CDRs (Call Detail Records).
For storing logs, recordings, files, SSH keys or any other sensitive information encrypted by keys, we need safe key storage; tools such as Dashlane, LastPass, Bitwarden and 1Password are handy for password and key management.
Auto sign-in for WebRTC apps
Turn User Authentication On and enable Two-Factor Authentication/Bio-metrics. OTP based sign-on and captcha checks are also popular approaches to protect sign-in.
Public Wi-Fi
Even a WebRTC end-to-end encrypted connection can be tampered with on insecure Wi-Fi. Even though a man-in-the-middle cannot decipher message content, they can infer intelligible information from packet sizes, frequency, the end parties' IPs and ports in the signalling, timing delays, etc. For native clients, a precautionary measure is to enable remote lock and data wipe. It is also advisable to permit sensitive data, such as image storage, only to authorized apps.
If you use a native WebRTC app, there are multiple things you need to be wary of.
Avoid all jailbreaks: jailbreaking a smartphone enables the user to run unverified or unsupported apps, many of which carry security vulnerabilities. The majority of security exploits for Apple's iOS only affect jailbroken iPhones.
Add a mobile security app: mobile security reports show that mobile operating systems such as iOS and (especially) Android are increasingly becoming targets for malware. Select a reputable mobile security app that extends the built-in security features of the device's mobile operating system. Well-known third-party vendors offering mobile security apps for iOS, Android and Windows Phone include Avast, Kaspersky and Symantec.
Also, as good practice, turn off Bluetooth, Wi-Fi and NFC when not needed.
Information security ensures that both physical and digital data is protected from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction.
Although WebRTC already has strong security tools in its specification, providing end-to-end encrypted communication over DTLS-SRTP and mandating media-device access only from websites with a secure (TLS) origin, all of this is in vain if the endpoints acting as peers are themselves compromised. Hence, security issues arise when:
endpoints record their media content and store it in unsafe locations such as public file servers
endpoints in turn re-stream their incoming media streams to unsafe streaming servers
Phishing, pretexting, baiting, quid pro quo, tailgating and watering-hole attacks are some of the common tactics used to steal the data of an unsuspecting user. They are as applicable to WebRTC-based communication sites as they are to any other trusted website such as banking, customer-care contacts, flash-sale portals, coupon/discount sites, etc.
Phone phishing (vishing) – a criminal phone fraud impersonating a legitimate caller such as a bank or tax agent, using social engineering over the telephone system to gain access to private personal and financial information for financial reward.
Phishing – WebRTC data channel messages can be used for phishing by sending malicious links while posing as a legitimate sender. It is hard to track such attacks since the data channels are p2p.
Impersonation attacks – spear-phishing emails that attempt to impersonate a trusted individual or a company in order to gain access to corporate finances, human-resources details or sensitive data. Business email compromise (BEC), also known as CEO fraud, is a popular example of an impersonation attack. The fake email usually describes a very urgent situation to minimize scrutiny and skepticism.
Other social engineering tactics – Trickery , Influencing , Deception , Spying
Network security breaches
In spite of the fact that WebRTC is a p2p streaming framework, a signalling server is always required to do the initial handshake and enable the exchange of SDP for the media to stream in a peer-to-peer fashion. Some well-known attacks that compromise networks and remote/cloud servers are:
Viruses, worms and Trojan horses
Zero-day attacks
Hacker attacks
Denial of service attacks
Spyware and adware
It is up to the WebRTC/VoIP service provider to detect emerging threats before they infiltrate the network and compromise data. Some critical components that enhance security are firewalls, access control lists, intrusion detection and prevention systems (IDS/IPS) and virtual private networks (VPN).
Governance framework – defines the roles, responsibilities and accountability of each person and ensures compliance is met.
Confidentiality: ensures information is inaccessible to unauthorized people via encryption
Integrity: protects information and systems from being modified by unauthorized people; provides accuracy and trustworthiness
Availability: ensures authorized people can access the information when needed and that all hardware and software are maintained properly and updated when necessary
Authentication, Authorization and Accountability (AAA): validates a user's authenticity via credentials, enforces policies on network resources after the user has gained access, and ensures accountability by monitoring and capturing the events performed by the user
Non-repudiation: the assurance that someone cannot deny the validity of something; it provides proof of the origin of the data and of its integrity.
As a first defence tactic, if an originating IP address is sending malicious or malformed packets that indicate an exploit or attack, trigger a notification for the tech team and execute a script to block the attacker's origin IP via security groups in AWS or other ACLs on the hosted server. A temporary firewall block can also be applied and then monitored for further violations.
In case a server is compromised beyond repair, for example with an attacker taking control of the file system, drain the ongoing sessions from it and persist cached session-state data such as CDR entries. Activate the fallback/standby server and turn the current server into a honeypot to study the attacker's actions. Common attacks involve one of the techniques below:
exploiting the VoIP system to get free international calls
ransomware activities, such as copying files out of the server via scp and leaving behind a readme.txt file in the root location asking for a money transfer in return for the data
brute-force DDoS attacks to bring down the system and make it incapable of catering to genuine requests, perhaps with the intention of giving an advantage to competitors
As the media connections are p2p, killing the signalling server will not affect ongoing media sessions. Only for the duration it takes to restart the server (probably 3-4 minutes) will users be unable to reach the signalling server to create new sessions. Therefore, in case a system is under attack and unrecoverable, simply terminate it and respawn another server attached to the domain name, floating IP or load balancer.
Auto updates
Most browsers today, like Google Chrome and Mozilla Firefox, have a good record of auto-updating themselves within 24 hours of a vulnerability or threat being discovered.
Third party Call Control ( 3PCC)
If a call is confirmed to be compromised, it should be within the power of the web application server rendering the WebRTC-capable page to cut off the compromised call session, by forcibly sending a termination request to the endpoints or by turning off the TURN service if one is in use.