Bandwidth depends on network strength and is affected by other users on the network. Under heterogeneous network conditions, bandwidth estimation is a critical step for improving call quality and end-user experience.
An unreliable or fluctuating network will cause some packets to be delivered on time and others to be delayed more, making them arrive in bursts. A jitter buffer is an effective methodology for jitter management, ensuring steady delivery of packets even when the peers transmit at fluctuating rates.
A jitter buffer consumes packets as soon as they arrive and holds them until the frame can be fully reconstructed. Once all packets for a frame have been filled into the buffer (in any order), it emits the frame for decoding, which the player can then play back to the user. Note that several RTP packets can have the same timestamp if they are part of the same video frame. A minimal sketch of this mechanism follows the list below.
(+) dynamically manages unordered packets and reconstructs a frame after accumulating all its packets
(-) can introduce latency for packets that arrive early
(-) needs active resizing by means of feedback
for a high-speed, good network the jitter buffer can be small
for congested and disruptive networks it is better to keep a longer buffer, which can also add some latency
(-) the buffer has limited capacity, so a packet can expire if not received within the duration “jitterBufferDelay”.
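To make the mechanism concrete, here is a minimal, illustrative jitter-buffer sketch in JavaScript. It is not the browser's internal implementation; the packet shape ({ timestamp, seq, marker, payload }) and the expiry policy are assumptions for the example, and sequence-number wraparound is ignored.

class JitterBuffer {
  constructor(maxDelayMs = 200) {
    this.maxDelayMs = maxDelayMs;   // packets older than this are treated as expired ("jitterBufferDelay")
    this.frames = new Map();        // timestamp -> { packets: Map<seq, packet>, firstSeenMs, lastSeq }
  }

  push(packet, onFrame) {
    const now = Date.now();
    let entry = this.frames.get(packet.timestamp);
    if (!entry) {
      entry = { packets: new Map(), firstSeenMs: now, lastSeq: null };
      this.frames.set(packet.timestamp, entry);
    }
    entry.packets.set(packet.seq, packet);
    if (packet.marker) entry.lastSeq = packet.seq;  // M bit: final packet of the frame

    this.expire(now);
    this.tryEmit(packet.timestamp, entry, onFrame);
  }

  tryEmit(timestamp, entry, onFrame) {
    if (entry.lastSeq === null) return;  // end of frame not seen yet
    const seqs = [...entry.packets.keys()].sort((a, b) => a - b);
    const complete = entry.lastSeq - seqs[0] + 1 === seqs.length;  // no gaps
    if (complete) {
      onFrame(seqs.map(s => entry.packets.get(s).payload));  // hand the full frame to the decoder
      this.frames.delete(timestamp);
    }
  }

  expire(now) {
    for (const [ts, entry] of this.frames) {
      if (now - entry.firstSeenMs > this.maxDelayMs) this.frames.delete(ts);  // expired, drop partial frame
    }
  }
}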
Reducing resolution, frame rate, and bitrate is effective for congestion control; however, it is not suited to high-definition video conferencing use cases such as gaming, telehealth, or the broadcast of a concert, as it may hinder the user experience.
Using I-frames, P-frames, and B-frames efficiently in the codec, combined with predictive machine-learning models, can make packet loss unnoticeable to the human eye. The marker (M) bit in the RTP packet structure marks keyframes.
Partial frames given to the decoder are unprocessable, so a PLI message is sent to the sender. When the sender receives the PLI message, it produces a new I-frame to help the receiver decode the frames.
a=rtpmap:100 VP9/90000
a=rtcp-fb:100 goog-remb
a=rtcp-fb:100 transport-cc
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 nack pli
a=fmtp:100 profile-id=2
a=rtpmap:101 rtx/90000
a=fmtp:101 apt=100
FIR (Full Intra Request)
request a full keyframe from the sender when a new member enters the session
PLI (Picture Loss Indication)
request a full keyframe from the sender when partial frames were given to the decoder, but it was unable to decode them
causes for making a PLI request could be a decoder crash or heavy loss
Congestion is created when a network path has reached its maximum limits, which could be due to:
failures (switches, routers, cables, fibres, etc.)
oversubscription and operating at peak bandwidth
broadcast storms
inept BGP routing and congestion detection
BGP is responsible for finding the shortest routable path for a packet.
The direct consequences of congestion for any network transport can be
High Latency
Connection Timeouts
Low throughput
Packet loss
Queueing delay
With respect to WebRTC streams too, if a network is congested, the buffer will overflow and packets will be dropped. Due to excessive dropping of packets, both transmission time and jitter increase. To overcome this, adaptive buffering is used, resizing the buffer as jitter increases or decreases.
A congestion notifier and detection algorithm can analyze the RTCP metrics for possible congestion on the network route and suggest options to overcome it. It is part of the adaptive bitrate and bandwidth estimation process.
Rate limiting the sending of information is one way to overcome congestion, even though it can lead to bad call quality at the receiver's end and is atypical for realtime communication systems.
Bandwidth estimation and congestion control are often paired as one operational unit. Primarily, packet loss and inter-packet arrival times drive the bandwidth estimation and enable GCC (Google Congestion Control) to flag congestion.
On the receiver side, TMMBR/TMMBN (Temporary Maximum Media Stream Bit Rate Request/Notification) and REMB (Receiver Estimated Maximum Bitrate) exchange the bandwidth estimates.
On the sender side, TWCC (Transport-Wide Congestion Control) can be used.
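In the browser, the current bandwidth estimate surfaces through the stats API. A hedged sketch (assuming pc is an established RTCPeerConnection; field availability varies by browser):

async function logBandwidthEstimate(pc) {
  const stats = await pc.getStats();
  stats.forEach(report => {
    // The nominated candidate pair carries the congestion controller's estimate where supported.
    if (report.type === 'candidate-pair' && report.nominated && report.state === 'succeeded') {
      console.log('available outgoing bitrate (bps):', report.availableOutgoingBitrate);
    }
  });
}
setInterval(() => logBandwidthEstimate(pc), 2000); // poll every 2 s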
Other congestion control algorithms
QUIC Loss Detection and Congestion Control RFC 9002
Coupled Congestion Control for RTP Media – RFC 8699
NADA: A Unified Congestion Control Scheme for Real-Time Media – Network Working group
Self-Clocked Rate Adaptation for Multimedia RMCAT WG
SCReAM – mobile-optimised congestion control algorithm by Ericsson
High-definition video streams require low or no packet loss, and fast recovery if any occurs. RTP intrinsically has no means of recovering packet loss. Instead, low-bitrate redundancy can be added to the packets themselves to make up for any loss. Retransmission of lost packets can be built as a feature over RTP using the sequence numbers in the RTP header.
Geographical distance can add significant delay to transmission time. Transmission time is an important metric in call quality analysis; however, calculating transmission time as the difference between the sending and receiving timestamps requires perfect synchronization of system clocks, which is unreliable.
Latency accumulates across getUserMedia capture, encoding, transmission, network delays, buffering, decoding, and playback. There are many factors involved in latency management, such as queuing delays, the media path, CPU utilization, etc.
Optimize compute resources
mobile agents have less compute power
a camera with features such as autofocus or other adjustments will take more time to capture
the network should be of suitable bandwidth and strength
Reduce information to be encoded and sent
Subject focus and blurring of the background
Filtering noise at source
Voice Activity Detection (VAD)
send extra FEC data only if there is voice activity detected in the packet
Synchronizing clocks in distributed systems is a tough task and is mostly worked around, either by using NTP or by other means of synchronization.
WebRTC uses the Stream Control Transmission Protocol (SCTP) over a DTLS connection as an alternative to TCP and UDP.
Features:
multihoming: one or both endpoints of a connection can consist of more than one IP address, enabling transparent failover between redundant network paths
multistreaming: transmits several independent streams of chunks in parallel
SCTP has similarities to TCP for retransmission and offers partial reliability like UDP.
heartbeats keep the connection alive, with exponential backoff if a packet hasn't arrived
Validation and acknowledgment mechanisms protect against flooding attack
SCTP frames data as datagrams and not as a byte stream
(+) SCTP enables multiplexing in WebRTC
(+) It has flow control and congestion avoidance support
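SCTP's partial-reliability and multistreaming features are exposed through the standard data channel API. A small sketch (channel labels are illustrative):

const pc = new RTCPeerConnection();

// Unordered, no retransmissions: UDP-like behaviour suited to realtime state updates.
const lossy = pc.createDataChannel('game-state', { ordered: false, maxRetransmits: 0 });

// Ordered and fully reliable (the default): TCP-like behaviour suited to file transfer.
const reliable = pc.createDataChannel('file-transfer');

lossy.onopen = () => lossy.send('position-update');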
The end-to-end encryption model of WebRTC is a good defence against MITM (man-in-the-middle) attacks; however, it is not yet 100% foolproof. I discussed more security loopholes and concerns in WebRTC and realtime communication platforms in this article: WebRTC App and Webpage Security.
Traditionally, two separate ports for RTP and RTCP were used in SIP/RTP-based realtime communication systems. Thus demultiplexing of the traffic of these data streams is performed at the transport layer.
With rtcp-mux, NAT traversal is simplified as only a single port is used for media and control messages.
(+) easier to manage security by gathering ICE candidates for a single port only instead of two
(+) increases the system's capacity for media sessions using the same number of ports
(+) further simplified using BUNDLE, as all media sessions and their control messages flow over the same port
WebRTC has rtcp-mux capabilities, thus simplifying ICE candidate pairing.
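For illustration, an SDP excerpt combining both mechanisms might look like the following (payload types and mid values are placeholders; the attribute names come from the rtcp-mux and BUNDLE specifications):

a=group:BUNDLE 0 1
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=mid:0
a=rtcp-mux
m=video 9 UDP/TLS/RTP/SAVPF 96
a=mid:1
a=rtcp-mux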
Echo is the sound of your own voice reverberating. If the amplitude of such a sound is high and intervals exceed 25 ms, it becomes disruptive to the conversation. Its types can be acoustic or hybrid. Echo cancellers need to eliminate the echo while still preserving call quality and not disrupting tones such as DTMF.
Usually, background or reflected noise (undesired voiceband energy) transfers from the speaker to the microphone and into the communication network. It is mostly found with hands-free sets or speakerphones. In a multiparty call scenario, it can also occur due to unmatched volume levels, challenging network conditions for one party, background noise, double talk, or even the proximity between user and microphone.
In a public telephone system, local loop wiring is done using two-wire connections carrying bidirectional voice signals. In a PBX, a two-to-four-wire conversion is done using a hybrid circuit, which does not perform perfect impedance matching, resulting in hybrid echo.
An efficient echo canceller should cancel out the entire echo tail while not leading to any packet loss. It needs to adapt to changing IP network bandwidth, and the algorithm should function equally well in conference scenarios where there may be more than one echo source. Benchmarks like MOS (Mean Opinion Score) are used to gauge the results. Often voice-quality enhancement technologies, such as noise reduction and automatic gain control, are also integrated into AEC modules.
This post is about making performance enhancements to a WebRTC app so that it can be used in areas which require sensitive data to be communicated, cannot afford downtime, need fast response and low RTT, and need to be secure enough to withstand hacks and attacks.
As a communication agent becomes a single-HTML-page-driven client, a lot of authentication, heartbeat sync, web workers, and signalling event-driven flow management resides on the same page, along with the actual CPU consumption for the audio-video resources and media stream processing. This in turn can make the webpage heavy and many a time could result in a crash due to the page being “unresponsive”.
Here are some of my best to-dos for making sure the WebRTC communication client page runs efficiently.
CLS (Cumulative Layout Shift) measures the sum total of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page.
To provide a good user interaction experience, the DOM elements should display as little movement as possible so that the page appears stable. In the opposite case, on a flickering page (maybe due to a notification DOM element dynamically pushing the other layout elements), it is difficult to precisely interact with page elements such as buttons.
The main thread is where a browser process runs all the JavaScript on your page, as well as performing layout, reflows, and garbage collection. Therefore, long JS tasks can block the thread and make the page unresponsive.
Unoptimized JS code takes longer to execute and impacts network, parse/compile, and memory costs.
If your JavaScript holds on to a lot of references, it can potentially consume a lot of memory. Pages appear janky or slow when they consume a lot of memory. Memory leaks can cause your page to freeze up completely.
Some effective tips for speeding up JS execution include moving heavy work off the main thread, as sketched below.
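A minimal Web Worker sketch (the file name and message shape are illustrative) that keeps CPU-heavy aggregation off the UI thread:

// main.js
const worker = new Worker('stats-worker.js');
worker.postMessage({ samples: [12, 15, 11, 30, 18] }); // e.g. RTP timing samples
worker.onmessage = e => console.log('mean inter-arrival time:', e.data);

// stats-worker.js
onmessage = e => {
  const { samples } = e.data;
  // heavy computation runs here without blocking layout or media handling
  const mean = samples.reduce((a, b) => a + b, 0) / (samples.length || 1);
  postMessage(mean);
};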
Cross-site request forgery (CSRF) attacks rely on the fact that cookies are attached to any request to a given origin, no matter who initiates the request.
While adding cookies, we must ensure that if SameSite=None, the cookie is also marked Secure.
With SameSite set to Strict, your cookie will only be sent in a first-party context. In user terms, the cookie will only be sent if the site for the cookie matches the site currently shown in the browser's URL bar.
Set-Cookie: promo_shown=1; SameSite=Strict
You can test this behavior as of Chrome 76 by enabling chrome://flags/#cookies-without-same-site-must-be-secure and from Firefox 69 in about:config by setting network.cookie.sameSite.noneRequiresSecure.
Key Performance Indicators (KPIs) are used to evaluate the performance of a website. It is critical that a WebRTC web page be lightweight to accommodate the signalling control stack JavaScript libraries used for offer/answer handling and for communicating with the signaller over open sockets or a long-polling mechanism.
The Lighthouse tab in Chrome developer tools shows relevant areas of improvement on the webpage across Performance, Accessibility, Best Practices, Search Engine Optimization, and Progressive Web App.
Page attributes under the Chrome developer tools depict the page load and rendering time for every element, including scripts and markup. Specifically, it has:
Time to Title
Time to render
Time to interact
Networking attributes should be configured based on DNS mapping and the host provider. These can be evaluated based on Chrome developer tool reports.
Other page interaction criteria include the frames, their interaction, and the timings for the same.
In the attached screenshot, see the loading tasks, which depict the delay caused by DOM elements under transition owing to user interaction. This should ideally be minimal to keep the page responsive.
The above functions (old and new) estimate the memory usage of the entire web page.
These calls can be used to correlate new JS code with its impact on memory and subsequently find whether there are any memory leaks. These memory metrics can also be used for A/B testing.
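For reference, a sketch of the two memory-measurement APIs this likely refers to: performance.memory is the legacy, Chrome-only API, and performance.measureUserAgentSpecificMemory() is the newer one, which requires a cross-origin-isolated page:

if (performance.memory) {
  // legacy, non-standard (Chrome)
  console.log('used JS heap (bytes):', performance.memory.usedJSHeapSize);
}
if (self.crossOriginIsolated && performance.measureUserAgentSpecificMemory) {
  performance.measureUserAgentSpecificMemory()
    .then(result => console.log('estimated page memory (bytes):', result.bytes));
}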
Loading assets over a CDN, minifying scripts, and reducing the overall weight of the page are good ways to keep the page light and responsive and to prevent Chrome tab crashes.
The non-critical components can then be loaded asynchronously.
Lazy loading should be used for large files like JS payloads, which are costly to load. To send a smaller JavaScript payload that contains only the code needed when a user initially loads your application, split the entire bundle and lazy load chunks on demand.
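A dynamic import() is the simplest way to lazy load such a chunk; the module path, element id, and exported function here are illustrative:

document.querySelector('#start-call').addEventListener('click', async () => {
  const { startCall } = await import('./webrtc-call.js'); // fetched only on first use
  startCall();
});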
Codecs define the media stream's compression and decompression. For peers to have a successful exchange of media, they need a common set of codecs to agree upon for the session. The list of codecs is exchanged between the peers as part of the offer and answer, i.e. the SDP, as in SIP.
As WebRTC provides containerless bare MediaStreamTrack objects, codecs for these tracks are not mandated by WebRTC. Yet the codecs are specified by two separate RFCs:
RFC 7874 (WebRTC Audio Codec and Processing Requirements) specifies at least the Opus codec as well as G.711's PCMA and PCMU formats.
RFC 7742 (WebRTC Video Processing and Codec Requirements) specifies support for VP8 and H.264's Constrained Baseline profile for video.
In WebRTC, video is protected using Datagram Transport Layer Security (DTLS) / Secure Real-time Transport Protocol (SRTP). In this article we are going to discuss audio/video codec processing requirements only.
WebRTC is free and open source, and its working bodies promote royalty-free codecs too. The RTCWEB working group and the IETF make sure that royalty-free codecs are mandatory, while other codecs can be optional in WebRTC non-browsers.
WebRTC browsers MUST implement the VP8 video codec as described in RFC 6386 and H.264 Constrained Baseline, as mandated by RFC 7742.
Most of the codecs below follow a lossy DCT (discrete cosine transform) based algorithm for encoding. A sample SDP offer from Chrome v80 for Linux includes these profiles:
AVC's Constrained Baseline (CBP) profile is compliant with WebRTC.
proprietary, patented codec, maintained by MPEG/ITU
Constrained Baseline Profile Level 1.2 and H.264 Constrained High Profile Level 1.3. Constrained Baseline is a subset of the Main profile, suited to low delay and low complexity, and therefore to lower-powered devices like mobile phones.
Multiview Video Coding – can have multiple views of the same scene, such as stereoscopic video.
Other profiles, which are not supported, are Baseline (BP), Extended (XP), Main (MP), High (HiP), Progressive High (ProHiP), High 10 (Hi10P), High 4:2:2 (Hi422P) and High 4:4:4 Predictive.
supported containers are 3GP, MP4, WebM
Parameter settings:
packetization-mode
max-mbps, max-smbps, max-fs, max-cpb, max-dpb, and max-br
sprop-parameter-sets: H.264 allows sequence and picture information to be sent both in-band and out-of-band. WebRTC implementations must signal this information in-band.
Supplemental Enhancement Information (SEI) “filler payload” and “full frame freeze” messages (used while video switching in MCU streams)
Already used for video conferencing on PSTN (Public Switched Telephone Networks), RTSP, and SIP (IP-based videoconferencing) systems.
suited for low bandwidth networks
(-) not compatible with WebRTC
but many media gateways with realtime transcoding exist between H.263-based SIP systems and VP8-based WebRTC ones to enable video communication between them
H.265 / HEVC
proprietary format and is covered by a number of patents. Licensing is managed by MPEG LA .
Container – MP4
Interoperability between non-WebRTC-compatible and WebRTC-compatible endpoints
With the rise of the Internet of Things, many endpoints, especially IP cameras connected to Raspberry Pi-like SoCs (systems on chip), want to stream directly to the browser within their own private network, or even on a public network using TURN/STUN.
The figure below shows how such a call flow is possible between an IP camera (such as a baby cam) and its parent monitoring it over a WebRTC-supported mobile phone browser. The process includes streaming the content from the IoT device on an RTSP stream and using realtime transcoding between H.264 and VP8.
Interoperability between non-WebRTC-compatible and WebRTC-compatible endpoints
Opus is a lossy audio compression format developed by the Internet Engineering Task Force (IETF) targeting a broad range of interactive realtime applications over the Internet, from speech to music, and supports multiple compression algorithms.
Constant and variable bitrate encoding – 6 kbit/s to 510 kbit/s
frame sizes – 2.5 ms to 60 ms
sampling rates – 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, where the entire hearing range of the human auditory system can be reproduced).
container- Ogg, WebM, MPEG-TS, MP4
As an open format standardized through RFC 6716, a reference implementation is provided under the 3-clause BSD license. All known software patents which cover Opus are licensed under royalty-free terms.
(+) flexible, suited for speech (via SILK) and music (via CELT)
(+) support for mono and stereo
(+) in-built FEC (Forward Error Correction), thus resilient to packet loss
(+) compression adjustability for unpredictable networks
(-) highly CPU intensive (unsuitable for embedded devices like the RPi)
(-) processing and memory intensive
For all cases where the endpoint is able to process audio at a sampling rate higher than 8 kHz, the W3C recommends that Opus be offered before PCMA/PCMU.
AAC (Advanced Audio Coding)
Part of the MPEG-4 standard. Lossy compression, but with a number of profiles suiting each use case, from high-quality surround sound to low-fidelity audio for speech-only use.
supported containers – MP4, ADTS, 3GP
G.711 (PCMA and PCMU)
G.711 is an ITU standard (1972) for audio compression. It is primarily used in telephony.
ITU published Pulse Code Modulation (PCM) with either µ-law or A-law encoding, which is vital for interfacing with the standard telecom network and carriers. G.711 PCM (A-law) is known as PCMA and G.711 PCM (µ-law) as PCMU.
It is the required standard in many voice-based systems and technologies, for example in H.320 and H.323 specifications.
Fixed 64 kbps bit rate
supports 3GP container formats
G.722
ITU standard (1988), encoded using Adaptive Differential Pulse Code Modulation (ADPCM), which is suited for voice compression
7 kHz wideband audio codec
Bitrate 48, 56 and 64 kbit/s.
containers used 3GP, AMR-WB
G.722 improves speech quality thanks to a wider speech bandwidth of up to 50–7000 Hz, compared to G.711's 300–3400 Hz.
Comfort noise (CN)
Artificial background noise used to fill gaps in a transmission instead of pure silence. It prevents jarring silences and RTP timeouts.
Should be used for streams encoded with G.711 or any other supported codec that does not provide its own CN. Use of Discontinuous Transmission (DTX) / CN by senders is optional
Internet Low Bitrate Codec (iLBC)
An open-source narrowband speech codec for VoIP and streaming audio.
8 kHz sampling frequency with a bitrate of 15.2 kbps for 20ms frames and 13.33 kbps for 30ms frames.
Defined by IETF RFCs 3951 and 3952.
Internet Speech Audio Codec (iSAC)
iSAC: A wideband and super wideband audio codec for VoIP and streaming audio. It is designed for voice transmissions which are encapsulated within an RTP stream.
16 kHz or 32 kHz sampling frequency
adaptive and variable bit rate of 12 to 52 kbps.
Speex
patent-free audio compression format designed for speech and also a free software speech codec that is used in VoIP applications and podcasts. May be obsolete, with Opus as its official successor.
AMR-WB (Adaptive Multi-rate Wideband) is a patented wideband speech coding standard that provides improved speech quality. This codec is generally available on mobile phones.
wider speech bandwidth of 50–7000 Hz
data rate between 6 and 12 kbit/s
DTMF and ‘audio/telephone-event’ media type
Endpoints may send DTMF events at any time and should suppress in-band dual-tone multi-frequency (DTMF) tones, if any.
Describes the OAuth credential information used by the STUN/TURN client (inside the ICE agent) to authenticate against a STUN/TURN server.
Determines what ICE candidates are gathered to support non-multiplexed RTCP.
negotiate – Gather ICE candidates for both RTP and RTCP candidates. If the remote-endpoint is capable of multiplexing RTCP, multiplex RTCP on the RTP candidates. If it is not, use both the RTP and RTCP candidates separately.
require – Gather ICE candidates only for RTP and multiplex RTCP on the RTP candidates. If the remote endpoint is not capable of rtcp-mux, session negotiation will fail.
If the value of configuration.rtcpMuxPolicy is set and its value differs from the connection’s rtcpMux policy, throw an InvalidModificationError. If the value is “negotiate” and the user agent does not implement non-muxed RTCP, throw a NotSupportedError.
An RTCPeerConnection object has a signaling state, a connection state, an ICE gathering state, and an ICE connection state.
An RTCPeerConnection object has an operations chain which ensures that only one asynchronous operation in the chain executes concurrently.
Also, an RTCPeerConnection object MUST NOT be garbage collected as long as any event can cause an event handler to be triggered on the object. When the object's internal slot indicates that it is closed, no such event handler can be triggered, and it is therefore safe to garbage collect the object.
generates a blob of SDP that contains an RFC 3264 offer with the supported configurations for the session, including
descriptions of the local MediaStreamTracks attached to this RTCPeerConnection,
codec/RTP/RTCP capabilities
ICE agent parameters (usernameFragment, password, local candidates, etc.)
DTLS connection
const pc = new RTCPeerConnection();
pc.createOffer()
.then(desc => pc.setLocalDescription(desc));
With more attributes
var pc = new RTCPeerConnection();
pc.createOffer({
mandatory: {
OfferToReceiveAudio: true,
OfferToReceiveVideo: true
},
optional: [{
VoiceActivityDetection: false
}]
}).then(function(offer) {
return pc.setLocalDescription(offer);
})
.then(function() {
// Send the offer to the remote through signaling server
})
.catch(handleError);
Generates an SDP answer with the supported configuration for the session that is compatible with the parameters in the remote configuration.
var pc = new RTCPeerConnection();
pc.createAnswer({
OfferToReceiveAudio: true,
OfferToReceiveVideo: true
})
.then(function(answer) {
return pc.setLocalDescription(answer);
})
.then(function() {
// Send the answer to the remote through signaling server
})
.catch(handleError);
Codec preferences of an m= section's associated transceiver are the values set on the RTCRtpTransceiver (e.g. via setCodecPreferences()), with the following filtering applied (see the sketch after this list):
If direction is “sendrecv”, exclude any codecs not included in the intersection of RTCRtpSender.getCapabilities(kind).codecs and RTCRtpReceiver.getCapabilities(kind).codecs.
If direction is “sendonly”, exclude any codecs not included in RTCRtpSender.getCapabilities(kind).codecs.
If direction is “recvonly”, exclude any codecs not included in RTCRtpReceiver.getCapabilities(kind).codecs.
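A hedged sketch of steering these preferences from the application, using the standard setCodecPreferences() API (preferring VP9 here purely as an example):

const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver('video');
const { codecs } = RTCRtpReceiver.getCapabilities('video');
// Put VP9 entries ahead of everything else; relative order is otherwise preserved.
const preferred = [
  ...codecs.filter(c => c.mimeType === 'video/VP9'),
  ...codecs.filter(c => c.mimeType !== 'video/VP9'),
];
transceiver.setCodecPreferences(preferred);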
Send and receive MediaStreamTracks over a peer-to-peer connection. Tracks, when added to an RTCPeerConnection, result in signaling; when this signaling is forwarded to a remote peer, it causes corresponding tracks to be created on the remote side.
The RTCRtpTransceiver interface describes a permanent pairing of an RTCRtpSender and an RTCRtpReceiver. Each transceiver is uniquely identified using its mid (media id) property from the corresponding m-line.
Transceivers are created implicitly when the application attaches a MediaStreamTrack to an RTCPeerConnection via addTrack(), or explicitly when the application uses addTransceiver(). They are also created when a remote description is applied that includes a new media description.
dictionary RTCRtpCodecParameters {
required octet payloadType;
required DOMString mimeType;
required unsigned long clockRate;
unsigned short channels;
DOMString sdpFmtpLine;
};
payloadType – identifies this codec.
mimeType – codec MIME media type/subtype. Valid media types and subtypes are listed in [IANA-RTP-2].
clockRate – expressed in Hertz.
channels – number of channels (mono=1, stereo=2).
sdpFmtpLine – “format specific parameters” field from the “a=fmtp” line in the SDP corresponding to the codec.
voiceActivityFlag of type boolean – Only present for audio receivers. Whether the last RTP packet, delivered from this source, contains voice activity (true) or not (false).
RTCRtpTransceiver Interface
Each SDP media section describes one bidirectional SRTP (“Secure Real Time Protocol”) stream. RTCRtpTransceiver describes this permanent pairing of an RTCRtpSender and an RTCRtpReceiver, along with some shared state. It is uniquely identified using its mid property.
Thus it is a combination of an RTCRtpSender and an RTCRtpReceiver that share a common mid. An associated transceiver (one with a mid) is one that is represented in the last applied session description.
Method stop() – irreversibly marks the transceiver as stopping, unless it is already stopped. This immediately causes the transceiver's sender to no longer send and its receiver to no longer receive. A stopping transceiver will cause future calls to createOffer to generate a zero port in the media description for the corresponding transceiver; a stopped transceiver will cause future calls to createOffer or createAnswer to do the same.
Access to information about the Datagram Transport Layer Security (DTLS) transport over which RTP and RTCP packets are sent and received by RTCRtpSender and RTCRtpReceiver objects, as well other data such as SCTP packets sent and received by data channels. Each RTCDtlsTransport object represents the DTLS transport layer for the RTP or RTCP component of a specific RTCRtpTransceiver, or a group of RTCRtpTransceivers if such a group has been negotiated via [BUNDLE].
Protocols multiplexed with RTP (e.g. the data channel) share its component ID. RTP has component-id value 1 when encoded in a candidate-attribute, while the ICE candidate for RTCP has component-id value 2.
This interface describes the Interactive Connectivity Establishment (ICE) candidate configuration used to set up an RTCPeerConnection. To facilitate routing of media on a given peer connection, both endpoints exchange several candidates; then one candidate out of the lot is chosen, which will be used to initiate the connection.
const pc = new RTCPeerConnection();
pc.addIceCandidate({candidate:''});
candidate – transport address for the candidate that can be used for connectivity checks.
component – candidate is an RTP or an RTCP candidate
foundation – unique identifier that is the same for any candidates of the same type; helps optimize ICE performance while prioritizing and correlating candidates that appear on multiple RTCIceTransport objects
ip , port
priority
protocol – tcp/udp
relatedAddress , relatedPort
sdpMid – candidate’s media stream identification tag
sdpMLineIndex
usernameFragment – randomly-generated username fragment (“ice-ufrag”) which ICE uses for message integrity along with a randomly-generated password (“ice-pwd”).
RTCIceCredentialType Enum : supports OAuth 2.0 based authentication. The application, acting as the OAuth Client, is responsible for refreshing the credential information and updating the ICE Agent with fresh new credentials before the accessToken expires. The OAuth Client can use the RTCPeerConnection setConfiguration method to periodically refresh the TURN credentials.
ICE candidate policy [JSEP] to select candidates for the ICE connectivity checks
relay – use only media relay candidates, such as candidates passing through a TURN server. This prevents the remote endpoint/unknown caller from learning the user's IP addresses.
all – ICE Agent can use any type of candidate when this value is specified.
RTCBundlePolicy Enum
balanced – Gather ICE candidates for each media type (audio, video, and data). If the remote endpoint is not bundle-aware, negotiate only one audio and video track on separate transports.
max-compat – Gather ICE candidates for each track. If the remote endpoint is not bundle-aware, negotiate all media tracks on separate transports.
max-bundle – Gather ICE candidates for only one track. If the remote endpoint is not bundle-aware, negotiate only one media track. If the remote endpoint is bundle-aware, all media tracks and data channels are bundled onto the same transport.
If the value of configuration.bundlePolicy is set and its value differs from the connection’s bundle policy, throw an InvalidModificationError.
Interfaces for Connectivity Establishment
describes ICE candidates
interface RTCIceCandidate {
DOMString candidate;
DOMString sdpMid;
unsigned short sdpMLineIndex;
DOMString foundation;
RTCIceComponent component;
unsigned long priority;
DOMString address;
RTCIceProtocol protocol;
unsigned short port;
RTCIceCandidateType type;
RTCIceTcpCandidateType tcpType;
DOMString relatedAddress;
unsigned short relatedPort;
DOMString usernameFragment;
RTCIceCandidateInit toJSON();
};
RTCIceProtocol can be either tcp or udp
TCP candidate type which can be either of
active – An active TCP candidate is one for which the transport will attempt to open an outbound connection but will not receive incoming connection requests.
passive – A passive TCP candidate is one for which the transport will receive incoming connection attempts but not attempt a connection.
so – An so candidate is one for which the transport will attempt to open a connection simultaneously with its peer.
UDP candidate type
host – the actual direct IP address of the peer
srflx – server reflexive, generated by a STUN/TURN server
prflx – peer reflexive; the IP address comes from a symmetric NAT between the two peers, usually appearing as an additional candidate during trickle ICE
usernameFragment – randomly-generated username fragment (“ice-ufrag”) which ICE uses for message integrity along with a randomly-generated password (“ice-pwd”).
Access to information about the ICE transport over which packets are sent and received. Each RTCIceTransport object represents the ICE transport layer for the RTP or RTCP component of a specific RTCRtpTransceiver, or a group of RTCRtpTransceivers if such a group has been negotiated via [BUNDLE].
With SCTP, the protocol used by WebRTC data channels, reliable and ordered data delivery is on by default.
Sending large files
Split data channel message in chunks
var CHUNK_LEN = 64000; // 64 KB
var img = photoContext.getImageData(0, 0, photoContextW, photoContextH),
len = img.data.byteLength,
n = len / CHUNK_LEN | 0;
for (var i = 0; i < n; i++) {
var start = i * CHUNK_LEN, end = (i + 1) * CHUNK_LEN;
dataChannel.send(img.data.subarray(start, end));
}
// last chunk
if (len % CHUNK_LEN) {
dataChannel.send(img.data.subarray(n * CHUNK_LEN));
}
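The loop above can overrun the channel's send buffer on large images. A hedged companion sketch that respects backpressure using the standard bufferedAmount / bufferedamountlow mechanism (the thresholds are tunable assumptions):

var MAX_BUFFERED = 1024 * 1024; // 1 MB high-water mark
dataChannel.bufferedAmountLowThreshold = 256 * 1024;

function sendChunked(data, chunkLen) {
  var offset = 0;
  function pump() {
    while (offset < data.byteLength) {
      if (dataChannel.bufferedAmount > MAX_BUFFERED) {
        // wait for the buffer to drain before queueing more
        dataChannel.addEventListener('bufferedamountlow', pump, { once: true });
        return;
      }
      dataChannel.send(data.subarray(offset, offset + chunkLen));
      offset += chunkLen;
    }
  }
  pump();
}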
The browser maintains a set of statistics for monitored objects, in the form of stats objects. A group of related objects may be referenced by a selector( like MediaStreamTrack that is sent or received by the RTCPeerConnection).
Statistics API extends the RTCPeerConnection interface
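A hedged sketch of periodically sampling call-quality metrics through this API (assuming pc is an active RTCPeerConnection; the exact fields exposed vary by browser):

async function sampleStats(pc) {
  const report = await pc.getStats(); // no selector: stats for the whole connection
  report.forEach(stat => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
      console.log('packetsLost:', stat.packetsLost, 'jitter:', stat.jitter);
    }
  });
}
setInterval(() => sampleStats(pc), 5000);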
Until recently, a customised or proprietary extension could signal multiple media streams within an m= section of an SDP and experiment with the media-level “msid” (Media Stream Identifier) attribute, used to associate RTP streams that are described in different media descriptions with the same MediaStreams. However, with the transition to the Unified Plan, such applications will experience breaking changes.
The previous SDP format implementation, called “Plan B”, was transitioned to “Unified Plan” in 2019.
Whom does it affect? Applications that:
use various media tracks within one m= line in SDP, such as a video stream and screen sharing simultaneously
munge SDP, or use MCUs or SFUs
use the track-based APIs addTrack, removeTrack, and sender.replaceTrack, or the legacy addStream/removeStream, and the exposed senders and receivers, to edit tracks and their encoding parameters
Whom does it not affect?
It does not affect any application which has only a single audio and a single video track.
Multiple media streams may be required for cases such as a video and a screen-share stream in the same SDP, or in specific SFU cases.
In Plan B, one “m=” section of SDP is used per media type, one for audio and one for video, while within the video m= section multiple “a=ssrc” lines are listed for multiple media tracks.
In Unified Plan, every single media track is assigned to a separate “m=” section. Hence for video and screen sharing simultaneously two m sections will be created.
Interoperability between unified plan and plan B
A mismatch in SDP (between Plan B and Unified Plan) usually results in the following:
If a Unified Plan client receives an offer generated by a Plan B client, the Unified Plan client must reject the offer with a failed setRemoteDescription() error.
If a Plan B client receives an offer generated by a Unified Plan client, only the first track in every “m=” section is used and other tracks are ignored.
This article is aimed at explaining the intricacies of the detailed offer/answer flow in the WebRTC handshake and JSEP. You can read the following articles on WebRTC as a prerequisite before reading through this one. WebRTC has APIs, namely PeerConnection, getUserMedia, DataChannel, and getStats.
JSEP is used during signalling via w3c’s recommended RTCPeerConnectionAPI interface to set up a multimedia session. The multimedia session description specifies the critical components of setting up a session between local and remote such as transport ports, protocol, profiles. It also handles the interaction with the ICE state machine.
Prereq: set up the client side for the caller:
a PeerConnectionFactory to generate PeerConnections
a PeerConnection for every connection to a remote peer
MediaStreams for audio and video from the client device
The side initiating the session creates an offer via the createOffer() API.
As the caller initiates a new RTCPeerConnection(), the RTCSignalingState is “stable”, as the remote and local descriptions are empty.
As the caller initiates the call and calls createOffer(), it now has the offer SDP and proceeds to store the offer locally with setLocalDescription(offer); the RTCSignalingState is now “have-local-offer”. The caller then sends the offer to the callee over the signalling channel.
Similarly, as the callee receives the offer, it starts with RTCSignalingState “stable” and then proceeds to store the remote offer using setRemoteDescription(offer); its state is now “have-remote-offer”.
The callee generates a provisional answer for the caller and stores it locally; its state transitions to “have-local-pranswer”. The pranswer SDP is sent to the caller over the signalling channel again.
The caller stores the callee's pranswer SDP, and its state updates to “have-remote-pranswer”.
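These transitions can be observed directly in the application, e.g. for debugging the handshake. A small sketch:

const pc = new RTCPeerConnection();
pc.addEventListener('signalingstatechange', () => {
  // e.g. "stable" -> "have-local-offer" on the caller after setLocalDescription(offer)
  console.log('signaling state:', pc.signalingState);
});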
Media section: an m= section is generated for each RtpTransceiver that has been added to the PeerConnection. For the initial offer, since no ports are available yet, the dummy port 9 can be added. However, if the section is bundle-only, then the port value is set to 0. Later, the port value will be set to the port of the default ICE candidate.
The DTLS field “UDP/TLS/RTP/SAVPF” is followed by the list of codecs in order of priority.
The “c=” line in the m= section, too, must be filled with the dummy value “IN IP4 0.0.0.0”, as no candidates are available yet.
For each media format on the m= line, add “a=rtpmap” for “rtx” with the clock rate of the codec and “a=fmtp” to reference the payload type of the primary codec. “a=rtcp-fb” specifies RTCP feedback.
When createOffer is called a second (or later) time, or is called after a local description has already been installed, the processing is different due to the gathered ICE candidates. However, the <session-version> is not changed.
Additionally, an m= section is updated if an RtpTransceiver is added or removed.
Each “m=” and “c=” line MUST be filled in with the port, relevant RTP profile, and address of the default candidate for the m= section.
If the m= section is not bundled into another m= section, update “a=rtcp” with the port and address of the RTCP candidate, and add “a=candidate” lines along with “a=end-of-candidates”.
Local answer created by the side receiving the session (callee)
When createAnswer is called for the first time after a remote description has been provided, the result is known as the initial answer.
Each offered m= section will have an associated RtpTransceiver
The remote destination/callee can reject an m= section by setting the port in the m= line to 0. It can reject an m= section if none of the offered media formats are supported, if the RtpTransceiver is stopped, etc.
For the initial answer, the dummy port value of 9 is set, as no ICE candidate is available yet. Similarly, the “c=” line must contain the “dummy” value “IN IP4 0.0.0.0” too.
The <proto> field MUST be set to exactly match the <proto> field for the corresponding m= line in the offer.
If the answer contains any “a=ice-options” attributes where “trickle” is listed as an attribute, update the PeerConnection canTrickle property to be true.
SDP returned from createOffer or createAnswer MUST NOT be changed before passing it to setLocalDescription. After calling setLocalDescription with an offer or answer, the application MAY modify the SDP to reduce its capabilities before sending it to the far side.
Assume we have an MCU at a location and want the video stream to relay via a media server.
SDP is used for session parsing and contains a sequence of lines with key-value pairs. SDP is read line by line and converted to a data structure that contains the deserialized information.
Line “v=” , “o=”,”b=” and “a=” are processed . The “i=”, “u=”, “e=”, “p=”, “t=”, “r=”, “z=”, and “k=” lines are not used by this specification; they MUST be checked for syntax but their values are not used. Line “c=” is checked for syntax and ICE mismatch detection
“a=” attributes could be: “a=group”, “a=ice-lite”, “a=ice-pwd”, “a=ice-options”, “a=fingerprint”, “a=setup”, “a=tls-id”, “a=identity”, “a=extmap”
Media Section Parsing
Line “m=” for media , proto , port , fmt in RTP
Attributes “a=” can be :
“a=rtpmap” or “a=fmtp” : map from an RTP payload type number to a media encoding name that identifies the payload format.
Packetization parameters such as “a=ptime”, “a=maxptime”, which define the length of each RTP packet.
Direction as “a=sendrecv”, “a=recvonly”, “a=sendonly”, “a=inactive”
Muxing as “a=rtcp-mux” , “a=rtcp-mux-only”
RTCP attributes “a=rtcp” , “a=rtcp-rsize”
Line “c=” is checked.
Line “b=” for bandwidth, bwtype
Attributes for “a=” could be “a=ice-ufrag”, “a=ice-pwd”, “a=ice-options”, “a=candidate”, “a=remote-candidate”, “a=end-of-candidates” and “a=fingerprint”
Protocols using offer/answer are difficult to operate through Network Address Translators (NATs), since the flow of media packets requires the IP addresses and ports of media sources and sinks within their messages. Realtime media also emphasizes reduced latency and decreased packet loss.
ICE is an extension to the offer/answer model that works by including a multiplicity of IP addresses and ports in SDP offers and answers, which are then tested for connectivity by peer-to-peer connectivity checks. The checks are done by STUN and TURN; ICE also allows for address selection for multi-homed and dual-stack hosts.
ICE allows the agents to discover enough information about their topologies to potentially find one or more paths by which they can communicate. Then it systematically tries all possible pairs (in a carefully sorted order) until it finds one or more that work.
The caller and callee perform checks to finalize the protocol and routing needed to establish a peer connection. A number of candidates are proposed until both sides mutually agree upon one. The PeerConnection then uses that candidate's details to initiate the connection.
While applying a local description at the media engine level, if an m= section is new, the WebRTC media stack begins gathering candidates for it.
RTCPeerConnection specifies canTrickleIceCandidates. ICE trickling is the process of continuing to send candidates after the initial offer or answer has already been sent to the other peer, as sketched below.
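A minimal trickle sketch (signaling.send stands in for whatever signalling channel the application uses):

pc.addEventListener('icecandidate', ({ candidate }) => {
  if (candidate) {
    signaling.send({ type: 'candidate', candidate: candidate.toJSON() });
  } else {
    console.log('end of candidates'); // gathering complete
  }
});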
The ICE transport role is responsible for choosing a candidate pair.
The ICE layer sets one peer as the controlling agent and the other as the controlled agent. The controlling agent makes the final decision as to which candidate pair to choose.
An agent identifies all CANDIDATES, where a candidate is a transport address. Types:
HOST CANDIDATE – obtained directly from a local interface, which could be Wi-Fi, a Virtual Private Network (VPN), or Mobile IP (MIP). If an agent is multihomed (private and public networks), it obtains a candidate from each IP address and includes all candidates in its offer.
STUN or TURN is used to obtain additional candidates. Types:
translated addresses on the public side of a NAT (SERVER REFLEXIVE CANDIDATES)
addresses allocated on a TURN relay (RELAYED CANDIDATES)
The candidates are carried in attributes in the SDP offer. The remote peer also follows this process and gathers and sends its own sorted list of candidates. Hence CANDIDATE PAIRS from both sides are formed.
PEER REFLEXIVE CANDIDATES – connectivity checks can produce additional candidates, especially around symmetric NATs.
Since the same address is used for STUN and media (RTP/RTCP), demultiplexing based on packet contents helps to identify which one is which.
Checks : ICE checks are performed in a specific sequence, so that high-priority candidate pairs are checked first.
TRIGGERED CHECKS – accelerates the process of finding a valid candidate
ORDINARY CHECKS – the agent works through an ordered, prioritised check list by periodically sending a STUN request for the next candidate pair on the list.
Checks maintain frozen candidates and pairs that share a foundation for the media stream. Each candidate pair in the check list has a foundation and a state. The states for candidate pairs are:
1. Waiting: a check has not been performed for this pair, and can be performed as soon as it is the highest-priority Waiting pair on the check list.
2. In-Progress: A check has been sent for this pair, but the transaction is in progress.
3. Succeeded: A check for this pair was already done and produced a successful result.
4. Failed: A check for this pair was already done and failed, either never producing any response or producing an unrecoverable failure response.
5. Frozen: A check for this pair hasn’t been performed, and it can’t yet be performed until some other check succeeds, allowing this pair to unfreeze and move into the Waiting state.
Selecting low-latency media paths can use various techniques, such as actual round-trip time (RTT) measurement. The controlling agent nominates which of the valid candidate pairs will be used for media. There are two ways: regular nomination and aggressive nomination.
A CPaaS (communication platform as a service) is a cloud-based communication platform, like a B2B cloud communications platform, that provides realtime communication capabilities. It should be easily integrable with any given external environment or application of the customer, without the customer worrying about building backend infrastructure or interfaces. Traditionally, with IP-protected protocols, licensed codecs, the maintenance of a signalling protocol stack, and network interfaces, building a communication platform was a costly affair. Cisco, FaceTime, and Skype were the only OTT (over-the-top) players taking away from the telcos' call revenue. However, with the advent of standardised, open-source protocols and codecs, plenty of CPaaS providers have crowded the market, creating more supply than there is demand. A customer wanting to quickly integrate realtime communications into their platform has many options to choose from. This article provides an insight into how CPaaS solutions are architectured and programmed.
SIP and WebRTC are often closely knit together as protocol and media-plane technologies to build communication platforms such as CPaaS, UCC, B2B call agents, call centre applications, and so on. This integration is expected to continue to evolve and improve in order to meet the growing demand for high-quality, low-latency communication.
Sample CPaaS architecture built on open-source technologies
Overall architecture of a realtime communication ecosystem with media management, CDRs, processing pipelines, and realtime analytics.
There are several assessment technologies that can be used for measuring the quality of WebRTC (Web Real-Time Communications) calls, including:
Mean Opinion Score (MOS): A standardized method for measuring the quality of voice and video calls, based on human perception.
Packet loss and jitter: Measures the amount of packet loss and variation in packet arrival times, which can impact the quality of a call.
Round-trip time (RTT): Measures the time it takes for a packet to travel from the sender to the receiver and back, which can affect the delay in a call.
Bitrate: Denotes the amount of data that is transmitted during a call, which can impact the quality of the audio and video.
Codecs chosen can impact the quality and bandwidth requirements of the call.
Network conditions
Quality of Service (QoS): Measures the quality of the network connection and the ability of the network to support real-time communications.
WebRTC specific metrics: such as video resolution, frames per seconds, audio level, and so on.
PESQ (Perceptual Evaluation of Speech Quality): predicts subjective opinion scores of degraded audio, e.g. due to warping or variable delays.
PSNR (peak signal-to-noise ratio)
These technologies can be used in combination to provide a comprehensive assessment of the quality of a WebRTC call and to identify any issues that may be impacting the call quality.
Call server + Media Server that can be interacted with via UA
Comm clients like SIP phones, WebRTC clients, and SDKs (software development kits) or libraries for desktop, embedded, and/or mobile platforms.
APIs that can trigger automated calls and perform preprogrammed routing.
Rich documentation and samples to build various apps such as call centre solutions, interactive auto-attendants using IVR and DTMF, conference solutions, etc.
Some CPaaS providers also add features like transcription, transcoding, recording, playback, etc. to provide an edge over other CPaaS providers.
(-) Self-hosted datacenters can be more expensive to set up and maintain, as they require the purchase of hardware and ongoing maintenance costs. (+) no monthly recurring fees to cloud vendors
(+) pay as you go
Scalability
(-) maintenance of racks and servers (-) requires planning for high availability and geographical deployment for redundancy
(+) no stress on resource management like cooling, rack space , wiring etc (+) easy to setup
Reliability
(-) limited to a single location and can be affected by local issues such as power outages.
(+) Cloud providers typically have multiple data centers and will automatically route traffic. (-) outages in a cloud infrastructure datacentre could lead to service disruption
Control and Security
(+) more controlled for security or access
(-) not on premises; security is provisioned by the vendor and not under your control
Cloud-based infrastructure
Cloud services such as Amazon Web Services, Google Cloud, Microsoft Azure, IBM Cloud, and DigitalOcean are great resources to host the multiple parts of a CPaaS system, such as gateways, media servers, SIP application servers, and other servers for microservices including accounting, profile management, REST services, etc. Often virtualized machines (VMs) mounted on a larger physical remote datacentre are an ideal choice for VoIP and cloud communication providers.
Self-hosted / on-premises servers / private cloud
Maintaining a datacentre provides flexibility to extend and/or develop tightly controlled use cases. It is often a requirement for secure communication platforms pertaining to government or banking communications, such as turret phones.
Some approaches are to set up the servers with OpenStack to manage an SDN (software-defined network). Other approaches involve VMware to virtualize servers and then Docker containers managed via Kubernetes to dynamically spawn server instances as load scales up or down.
I have come across so many small startups trying to build CPaaS solutions from scratch, only to realise after weeks of trying to build an MVP that they are stuck with firewall, NAT, media quality, or interoperability issues. Since there are so many solutions already out in the market, it is best to instead use one as an underlying layer and build application services on top of it, such as call centre or CRM services with custom wrappers.
Tech insights and experiences
Companies that have been catering to the telco and communication domain make robust solutions based on industry best practices, which beat a novice solution built in a fortnight any day.
Keeping up with emerging trends
Market trends like new codecs, rich communication services, multi-tenancy, contextual communication, NLP, and other ML-based enhancements are provided by the CPaaS company, which will typically abstract away the implementation details from its SDK users and clients.
Auto Scaling, High Availability
A firm specializing in CPaaS solutions has already thought of clustering and autoscaling to meet peak traffic requirements, and of backup/replication on standby servers to activate in case of failure.
CAPEX and OPEX
Using a CPaaS saves on human resources, infrastructure, and time to market. It saves tremendously on underlying IT infrastructure and many a times provides flexible pricing models.
Call Rates are very critical for billing and charging the users. Any updates from the customer or carriers or individuals need to propagate automatically and quickly to avoid discrepancies and negative margins.
CDR ( Call Detail Record ) processing pipeline
CDRs need to be processed sequentially and incrementally on a record-by-record basis or over sliding time windows. CDR can also be used for a wide variety of analytics including correlations, aggregations, filtering, and sampling.
Updating rate sheet ( charges per call or per second )
The following setup is ideal: take the new input rate sheet values via a web UI console or POST API and propagate them quickly to the main DB via a queuing system such as SQS. Serverless operations, such as AWS Lambda functions, can be used via a trigger-based system for any updates. This ensures that any new input rates are updated in realtime, while fallback values are maintained in separate storage such as an S3 bucket.
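A hypothetical Lambda handler for this pipeline, consuming rate updates from SQS and writing them to a database (the queue, table, and field names are assumptions, not real infrastructure):

const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');
const db = new DynamoDBClient({});

exports.handler = async (event) => {
  for (const record of event.Records) { // SQS batch of rate-sheet updates
    const { prefix, ratePerMinute } = JSON.parse(record.body);
    await db.send(new PutItemCommand({
      TableName: 'call-rates', // hypothetical table
      Item: { prefix: { S: prefix }, rate: { N: String(ratePerMinute) } },
    }));
  }
};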
In current VoIP scenarios, a call may pass through various telco providers, ISPs, and cloud telephony service providers, where each system maintains its own call records and billing. This, in my opinion, is duplication and misses a single source of truth. A decentralized, reliable, and consistent data store via blockchain could potentially maintain the call records, making them immutable and non-disputable. Some more details on the concept are in the article below.
Critical communication services are essential for maintaining public safety and security, as well as for supporting critical infrastructure and operations. These services include:
Emergency Services E911 (Enhanced 911), emergency notification systems, which are used to quickly and effectively respond to emergencies and natural disasters.
Public Safety Services Police, fire, and ambulance services, which are responsible for maintaining public safety and security.
Public Warning Systems Amber Alerts and severe weather warnings, which are used to quickly and effectively notify the public of potential dangers.
Utilities Electricity, gas, water, and telecommunications, which are essential for maintaining critical infrastructure and operations.
Transportation Air traffic control, railroad, and public transportation systems, which are essential for maintaining mobility and supporting economic activity.
Government Related Services Military and defense operations, or announcements which are essential for maintaining national security and protecting citizens.
Industrial Automation Oil and gas, manufacturing, mining, and chemical plants, which are essential for maintaining operations and ensuring safety.
All of these services require reliable, high-quality, and low-latency communication to function effectively, and are considered as critical communication services because their failure to function could have serious consequences for public safety, security, and economic activity.
Major Components of Critical Communication Service
While these communication services are formed of multiple components and are constantly evolving, here are the major components to my knowledge:
Calling agents such as smartphones, radio phones, browsers, UCC clients, etc
Networks such as cellular ( access layer as towers, base stations, etc.) , satellite, or even private land mobile radio networks that are used for transmitting critical information
Call routing through an Emergency Services Routing Proxy (ESRP) and BGF (Border Gateway Function) to call answering in a PSAP (Public Safety Answering Point)
Location gathering using GPS or browser-collected location. A Location Information Server (LIS) can store location against IP address, MAC address, and landline telephone number. The LVF (Location Validation Function) helps validate the location information.
Management and monitoring to keep the system up and working reliably
Integrations such as CAD (Computer-Aided Dispatch), GIS (Geographical Information System), and notification systems
WebRTC (Web Real-Time Communications) can be used in critical communication such as E911 as clients or as Gateways.
E911 WebRTC client
WebRTC clients allow users to make emergency calls from their browsers. The browser's location service can be bundled along with the call metadata to alert the service providers.
E911 WebRTC gateway
WebRTC gateways help connect modern WebRTC clients (signalling and media) to the old-fashioned PSTN emergency service infrastructure. The routing can be designed to automatically connect with the nearest PoP (point of presence), which helps keep the call in the local region.
A Public Safety Answering Point (PSAP) can be made entirely compatible with WebRTC media streams, making calls 100% IP-based. This allows for more cost-effective and flexible solution building.
Compliance
It is important to note that while WebRTC technology is suitable for E911 communication, it is still subject to regulation and compliance, and the implementation of E911 services using WebRTC must comply with the regulations of the country and the state where it is being deployed.
NG911 (Next Generation 911) and E911 (Enhanced 911) are both systems used to route emergency calls to the appropriate emergency services center, but there are some key differences between the two:
Network: E911 requires trunks and a data network; NG911 uses an Emergency Services IP network (ESInet).
Routing: E911 uses class 5 switches for selective routing; NG911 uses IP-based selective/dynamic routing.
Media format: E911 is audio only; NG911 supports multimedia (video, audio, text, etc.).
Integrations: E911 integrations are complex and limited; NG911 provides an IP interface for simpler integration and interoperability with other public safety systems.
Location service: E911 obtains the location from the caller's phone number; NG911 has more ways of obtaining the caller's location, such as GPS coordinates.
NG911 is a more modern and advanced version of 911 that uses IP-based networks and technologies, such as WebRTC and SIP, to handle emergency calls and associated data, while E911 is based on traditional PSTN (Public Switched Telephone Network) technology. IP-based emergency services have several advantages over traditional PSTN-based emergency services, including increased flexibility, scalability, and the ability to handle multimedia communications.
These systems are designed to provide low latency communication service while aldo being very robust to withstand harsh conditions and disasters. Typically ruggedized, the Push-to-talk (PTT) devices have button or switch on the device which when pressed and held, allows the user to transmit their voice to the other users in the group. This feature allows for quick and efficient communication, even in noisy or chaotic environments.
Mission-critical communication is also used for medical services that monitor the health of patients via sensors and provide immediate alerts to concerned parties. Medical communication systems are designed to integrate with clinical workflows and patient information systems, such as electronic health records (EHRs), to provide relevant information to the medical staff during the communication.
Upgrading infrastructure has vastly helped in the area of providing immediate care and other emergency response services. Central to all this emergency communication is security. Implementing security measures, such as encryption, authentication, and firewalls, can protect emergency communication services from unauthorized access and attacks.
Unified communication services built around WebRTC should be vendor agnostic and multi-tenant, and be supported by other Communication Service Providers (CSPs), SIP trunks, PBXs, Telecom Equipment Manufacturers (TEMs), and Communication Platform as a Service (CPaaS) offerings. This can happen if all endpoints adhere to the SIP standards in the most up-to-date RFCs. However, since not all do, session border controllers are a great way to mitigate the differences and provide seamless connectivity for signalling and media, whether between WebRTC, SIP, or PSTN, or from TDM to IP.
Session Border Controllers (SBCs) assist in controlling the signalling and usually also the media streams involved in calls and sessions. They typically sit on the border of a VoIP network where two peer networks of service providers meet, such as a backbone network and the access network of a corporate communication system behind a firewall.
A more complex example is that of a large corporation where different departments have security needs for each location and perhaps for each kind of data. In this case, filtering routers or other network elements are used to control the flow of data streams. It is the job of a session border controller to assist policy administrators in managing the flow of session data across these borders.
– wikipedia
An SBC acts like a SIP-aware firewall with proxy/B2BUA behavior.
What is B2BUA?
A back-to-back user agent (B2BUA) is a proxy-like server that splits a SIP transaction into two pieces:
on the side facing the User Agent Client (UAC), it acts as a server;
on the side facing the User Agent Server (UAS), it acts as a client.
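A simplified INVITE flow through a B2BUA, which maintains two separate dialogs and relays between them, looks roughly like this:

UAC                     B2BUA                     UAS
 |------ INVITE -------->|                         |
 |                       |------ INVITE ---------->|
 |                       |<----- 200 OK -----------|
 |<----- 200 OK ---------|                         |
 |------ ACK ----------->|------ ACK ------------->|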
SBCs usually have a public address for teleworkers and an internal IP on the enterprise/inner LAN. This enables users connected to the enterprise LAN (who do not have a public address) to make calls to users outside of their network. While relaying packets, the SBC takes care of the following:
Security
Connectivity
QoS
Regulatory
Media Services
Statistics and billing information
Explaining the functions of an SBC in detail
1. Security
SBCs provide security features such as encryption, authentication, and firewall capabilities to protect the network from unauthorized access and attacks. SBCs are often used by corporations along with firewalls and intrusion prevention systems (IPS) to enable VoIP calls to and from a protected enterprise network. VoIP service providers use SBCs to allow the use of VoIP protocols from private networks with Internet connections using NAT, and also to implement the strong security measures that are necessary to maintain a high quality of service. The security features include:
Prevention of malicious attacks on the network, such as DoS and DDoS
Intrusion detection
cryptographic authentication
Identity/URL based access control
Blacklisting bad endpoints
Malformed packet protection
Encryption of signaling (via TLS and IPSec) and media (SRTP)
Stateful signalling and Validation
Toll fraud detection – detecting who is intending to use the telecom services without paying
Topology hiding
The SBC hides and anonymizes sensitive information like IP addresses and ports before forwarding messages to the outside world. This helps protect the operator's internal nodes, such as PSTN gateways or SIP proxies, from being revealed externally.
2. Connectivity
As the SBC sits on the IP-to-IP network boundary, it receives SIP requests from users, such as REGISTER and INVITE, and routes them toward their destination while masking the originating IP. During this process it performs various operations like:
NAT traversal
IPv4 to IPv6 inter-working
VPN connectivity
SIP normalization via SIP message and header manipulation
Multi vendor protocol normalization
3. Routing
Further routing features include:
Least-cost routing based on MOS (Mean Opinion Score): choosing a path based on MOS is better than choosing a random path.
Protocol translation: SBCs can bridge WebRTC calls with other communication protocols such as SIP, H.323, and PSTN to enable communication between different systems and networks.
In essence, SBCs achieve interoperability, overcoming some of the problems that firewalls and network address translators (NATs) present for VoIP calls.
Automatic Rerouting
Loss of connectivity from the UAs of a whole branch is detected by timeouts, but it can also be detected proactively by the SBC through SIP OPTIONS pings. On such connectivity loss, the SBC decides between rerouting the call or sending a 504 back to the caller. An illustrative OPTIONS keepalive is shown below.
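For illustration, a SIP OPTIONS keepalive from the SBC to a branch gateway could look like the following (hostnames, tags, and identifiers here are made up). A 200 OK reply means the branch is reachable; a timeout triggers the rerouting or 504 handling described above.

OPTIONS sip:gw.branch.example.com SIP/2.0
Via: SIP/2.0/UDP sbc.example.com:5060;branch=z9hG4bK776asdhds
Max-Forwards: 70
From: <sip:ping@sbc.example.com>;tag=1928301774
To: <sip:gw.branch.example.com>
Call-ID: a84b4c76e66710@sbc.example.com
CSeq: 63104 OPTIONS
Content-Length: 0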
4. QoS
QoS is very important for introducing performance optimization and business rules into call management. This includes the following:
Traffic policing
Resource allocation
Rate limiting
Call Admission Control (CAC)
ToS/DSCP bit setting
Recording and audit of messages, voice calls, and files
System and event logging
SBCs can log call information and statistics, and provide real-time monitoring capabilities to troubleshoot and diagnose issues with WebRTC calls.
5. Regulatory
Government policies (for example, prioritizing ambulance or police calls) and/or enterprise policies may require some calls to hold priority over others. This can also be configured on the SBC as emergency call prioritization.
Some situations may require the communication provider to comply with lawful bodies and hand over session information or content; this is called lawful interception (LI). It enables security officials to collect specific information rather than examining all the traffic that passes through a particular router. This is also handled by the SBC.
6. Media services
Many of the new generation of SBCs also provide built-in digital signal processors (DSPs) that enable them to offer border-based media control and services such as DTMF relay, media transcoding, and tones and announcements.
WebRTC-enabled SBCs also provide conversion between DTLS-SRTP and RTP/RTCP, transcoding from Opus to G7xx codecs, and the ability to relay VP8/VP9 and H.264 codecs.
Network Address Translation (NAT)
SBCs can handle Network Address Translation (NAT) to allow WebRTC clients behind a NAT to connect to other clients outside of the NAT.
7. Statistics and billing information
SBCs interface with OSS/BSS systems for the billing process, since almost all traffic that passes through the edge of the network passes via the SBC. For the same reason it is also used to gather statistics and usage-based information like bandwidth, memory, and CPU, as well as PCAP traces of both the signaling and media of specific sessions.
New feature-rich SBCs also have built-in digital signal processors (DSPs) and are thus able to provide more control over a session's media/voice. They also add services like relay and interworking, media transcoding, tones and announcements, DTMF, etc.
SBCs act as a security gateway and traffic manager for WebRTC sessions, ensuring that the communication is secure, of good quality, and can traverse different networks and protocols.
Session Border Controller for WebRTC, SIP, PSTN, IP PBX, and Skype for Business.
Diagram Component Description
Gateways vs SBC
Gateways provide compression or decompression, control signaling, call routing, and packetizing.
PSTN gateway: converts analog to VoIP and vice versa. Audio only; no support for rich multimedia.
VoIP gateway: acts like a translator, converting digital telecom lines to VoIP. VoIP gateways often also handle voice and fax, and have interfaces to softswitches and network management systems.
WebRTC gateway: provides NAT traversal with ICE-lite and STUN connectivity for peers behind restrictive policies and firewalls.
SIP trunking: enterprises save significant operational cost by switching to IP/SIP trunking in place of TDM (Time Division Multiplexing). Read more on SIP trunks and VPNs here.
SIP server: a telecom application server (SIP server) is useful for building VAS (Value Added Services) and other fine-grained policies on real-time services. Read more on SIP servers here.
VoIP/SIP service provider: there are many worldwide SIP service providers, such as Verizon in the USA, BT in Europe, and Swisscom in Switzerland.
Building an SBC
The latest trends in the telecommunications industry demand an open, standardized SBC to cater to the growing array of SIP trunking, unified multimedia communications (UC&C), VoLTE, VoWi-Fi, RCS, and OTT services worldwide. Building an SBC requires that it meet the following prime requirements:
Software centric
Cloud deployable
Rich multimedia (audio, video, files, etc.) processing
Open interfaces
The end product should be flexible enough to be deployed as a COTS (commercial off-the-shelf) product or as a virtual network function in an NFV cloud.
Multiple configurations should be supported, such as hosted or cloud deployed.
Overcoming inconsistencies in SIP from different vendors
Security and lawful interception
Carrier-grade scaling
Flow Diagram
Thus we see how the SBC became an important part of communication systems developed over SIP and MGCP. SBCs offer B2BUA (back-to-back user agent) behavior to control both signalling and media traffic.
Setting up an EC2 instance on AWS for a web real-time communication platform over Node.js and socket.io using WebRTC.
Primarily, a web call, chat, and conference platform uses WebRTC for the media streams and socket.io for the signalling. Additionally used technologies are NoSQL for session information storage and REST APIs for exposing session details to third parties.
Below is a comprehensive setup of an EC2 t2.micro free-tier instance, installation of a WebRTC project module, and samples of customization and usage.
Amazon EC2: elastic, general-purpose compute servers, meaning the compute capacity in the cloud can be resized based on load. The free tier gives 750 hours per month of Linux, RHEL, or SLES t2.micro instance usage and expires 12 months after sign-up.
Some other products are also covered under the free tier and may come in handy for setting up the complete platform. Here is a quick summary:
Amazon S3: a storage service. Can be used to store media files like images, music, videos, recorded video, etc.
Amazon RDS: a relational database service. A good option if one is using MySQL or Postgres for storing session information or user profile data.
Amazon SES: an email service. Can be used to send invites and notifications to users over mail for scheduled sessions or missed calls.
Amazon CloudFront: a CDN (content delivery network). A good choice if one wants their libraries to be widely available without overheads.
Alternatively, any server from Google Cloud, the Azure free tier, DigitalOcean, or even Heroku can be used for WebRTC code deployment. Note that WebRTC capture now requires an https domain.
Server Setup
Set up the environment by installing nvm, npm, and git (source version control), as sketched below.
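On a fresh Amazon Linux t2.micro this could look roughly as follows (the nvm version in the URL is an assumption; check the nvm repository for the current release):

# install git
sudo yum install -y git
# install nvm (Node Version Manager); the version below is an assumption
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
# install a Node.js LTS release, which bundles npm
nvm install --lts
node -v && npm -v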
Since 2015 it has been mandatory to use https origins for WebRTC's getUserMedia API, i.e. voice, video, geolocation, and screen sharing require https origins. Note that this does not apply when only serving a peer's media stream or using DataChannels.
For POC purposes, here is the way to generate a self-signed certificate.
Transport Layer Security / Secure Sockets Layer (TLS/SSL) is a public/private key infrastructure. The steps are as follows:
1. Create a private key:
openssl genrsa -out webrtc-key.pem 2048
2. Create a "Certificate Signing Request" (CSR) file and self-sign it, as shown below.
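The remaining openssl commands could look like this (file names follow the key created above; the 365-day validity is an arbitrary choice):

openssl req -new -key webrtc-key.pem -out webrtc-csr.pem
# 3. self-sign the CSR with the same key to produce the certificate
openssl x509 -req -days 365 -in webrtc-csr.pem -signkey webrtc-key.pem -out webrtc-cert.pem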
Create the https server using self-generated or purchased SSL certificates, with the fs, node-static, and https modules. To create self-generated SSL certificates, follow the section above on SSL certificates.
var fs = require('fs');
var _static = require('node-static');
var https = require('https');
var file = new _static.Server("./", {
cache: 3600,
gzip: true,
indexFile: "index.html"
});
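The snippet above only creates the static file server. To actually serve it over HTTPS with the self-signed certificate, one could continue roughly as follows (file names follow the openssl steps above; port 8084 matches the socketAddr used in the next snippet):

https.createServer({
key: fs.readFileSync('webrtc-key.pem'),   // private key from step 1
cert: fs.readFileSync('webrtc-cert.pem')  // self-signed certificate
}, function (request, response) {
request.addListener('end', function () {
file.serve(request, response); // node-static serves index.html and other files
}).resume();
}).listen(8084);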
The document-ready script that starts the call:
$(document).ready(function () {
sessionid = init(true);
var local = {
localVideo: "localVideo",
videoClass: "",
userDisplay: false,
userMetaDisplay: false
};
var remote = {
remotearr: ["video1", "video2"],
videoClass: "",
userDisplay: false,
userMetaDisplay: false
};
webrtcdomobj = new WebRTCdom(
local, remote
);
var session = {
sessionid: sessionid,
socketAddr: "https://localhost:8084/"
};
var webrtcdevobj = new WebRTCdev(session, null, null, null);
startcall();
});
Common known issues:
1. Opening the page https://<web server ip>:<web server port>/index.html says insecure.
This is because the self-signed certificates produced by the open-source OpenSSL tool are not recognized by a trusted third-party certificate authority.
A CA (Certificate Authority) issues digital certificates to certify the ownership of a public key for a domain.
To solve the access issue, go to https://<web server ip>:<web server port> and grant access permission, as outlined in the snapshot below.
2. Permission has already been given to the web server and the page loads, but there is still no activity.
If you open the developer console (Ctrl+Shift+I on Google Chrome) you will notice access-related errors in red. If you are using different servers for the web server and the signalling server, or even the same server on different ports, you need to explicitly visit the signalling server's URL and port and grant access permission, for the same reason as mentioned above.
3. No webcam capture on opening the page.
This could happen for many reasons:
the page is not loaded over https
the browser is not WebRTC compatible
media permissions to the webcam are blocked
the machine does not have any media capture devices attached
driver issues on the client machine while accessing webcams and mics
For the last couple of weeks, I have been working on the concept of rendering 3D graphics on a WebRTC media stream using different JavaScript libraries, as part of a virtual reality project.
Augmented reality (AR) is viewing a real-world environment whose elements are supplemented by computer-generated sensory input such as sound, video, graphics, or location data.
How is AR different from VR (Virtual Reality)?
Virtual reality replaces the real world with a simulated one; the user is isolated from real life. Examples: Oculus Rift and Kinect.
Augmented reality blends virtual elements with real life; the user interacts with the real world through digital overlays. Examples: Google Glass and HoloLens.
Methods for rendering augmented reality
Computer Vision
Object Recognition
Eye Tracking
Face Detection and substitution
Emotion and gesture picker
Edge Detection
Building a web-based augmented reality platform calls for web components for an end-to-end AR solution, such as WebRTC getUserMedia, the Web Speech API, CSS, SVG, HTML5 canvas, and sensor APIs; hardware components, which can include graphics drivers, media capture devices such as the microphone and camera, and sensors; and 3D components like geometry and math utilities, 3D model loaders and models, lights, materials, shaders, particles, and animation.
The browser provides the media stream and data. Standardization happens at the API level at the W3C and at the protocol level at the IETF. WebRTC enables browser-to-browser applications for voice calling, video chat, and P2P file sharing without plugins, equipping web browsers with Real-Time Communications (RTC) capabilities.
Code snippet for WebRTC API
1. To begin with WebRTC, we first need to validate that the browser has permission to access the webcam. Find out whether the user's browser can use the getUserMedia API.
function hasGetUserMedia() {
// check the modern promise-based API, falling back to the old webkit-prefixed one
return !!((navigator.mediaDevices && navigator.mediaDevices.getUserMedia) || navigator.webkitGetUserMedia);
}
Get the stream from the user’s webcam.
var video = $('#webcam')[0];
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
.then(function (stream) { video.srcObject = stream; }) // srcObject replaces the removed createObjectURL(stream) pattern
.catch(function (e) { alert('Webcam error: ' + e.name); });
}
Display the video as a plane that can be viewed from various angles against a given background landscape. Credits for the code below: https://stemkoski.github.io/Three.js/
1. Use the getUserMedia code above to get the user's webcam input.
2. Make a scene, camera, and renderer as previously described.
3. Add orbit controls for viewing the media plane from all angles.
controls = new THREE.OrbitControls( camera, renderer.domElement );
4. Make the floor with an image texture.
var floorTexture = THREE.ImageUtils.loadTexture( 'imageURL.jpg' );
floorTexture.wrapS = floorTexture.wrapT = THREE.RepeatWrapping;
floorTexture.repeat.set( 10, 10 );
var floorMaterial = new THREE.MeshBasicMaterial({map: floorTexture, side: THREE.DoubleSide});
var floorGeometry = new THREE.PlaneGeometry(1000, 1000, 10, 10);
var floor = new THREE.Mesh(floorGeometry, floorMaterial);
floor.position.y = -0.5;
floor.rotation.x = Math.PI / 2;
scene.add(floor);
5. Add fog.
scene.fog = new THREE.FogExp2( 0x9999ff, 0.00025 );
6. Add the video image context and texture.
video = document.getElementById( 'monitor' );
videoImage = document.getElementById( 'videoImage' );
videoImageContext = videoImage.getContext( '2d' );
videoImageContext.fillStyle = '#000000';
videoImageContext.fillRect( 0, 0, videoImage.width, videoImage.height );
videoTexture = new THREE.Texture( videoImage );
videoTexture.minFilter = THREE.LinearFilter;
videoTexture.magFilter = THREE.LinearFilter;
var movieMaterial=new THREE.MeshBasicMaterial({map:videoTexture,overdraw:true,side:THREE.DoubleSide});
var movieGeometry = new THREE.PlaneGeometry( 100, 100, 1, 1 );
var movieScreen = new THREE.Mesh( movieGeometry, movieMaterial );
movieScreen.position.set(0,50,0);
scene.add(movieScreen);
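For the video texture to actually show moving frames, the render loop must copy the current webcam frame onto the backing canvas and flag the texture as dirty every frame, roughly like this (following the stemkoski example credited above):

function render() {
requestAnimationFrame( render );
if ( video.readyState === video.HAVE_ENOUGH_DATA ) {
// copy the current webcam frame onto the canvas backing the texture
videoImageContext.drawImage( video, 0, 0, videoImage.width, videoImage.height );
videoTexture.needsUpdate = true; // tell three.js to re-upload the texture
}
renderer.render( scene, camera );
}
render();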
WASM (WebAssembly) is a portable binary code format with a corresponding text format. It facilitates interaction between C++ programs and their host environment, such as JavaScript code in the browser.
Emscripten is a compiler toolchain and one way to compile C++ into WASM.
Related web media APIs include MSE (Media Source Extensions) and EME (Encrypted Media Extensions).
AR Processing pipeline
Credits: MediaPipe, Google AI
An on-device machine learning pipeline consists of a platform solution such as MediaPipe above, along with WASM. The WASM SIMD (Single Instruction, Multiple Data, for parallel processing) ML inference can use XNNPACK or any other mobile neural network inference framework. This is followed by rendering.
GPU-accelerated segmentation (WebGL) outperforms CPU segmentation using WASM SIMD, taking the latency down from ~8.7 ms to ~4.3 ms. A novel WebGL interface can achieve this via optimized fragment shaders using MRT (multiple render targets).
Step 4 : Camera
Camera types in three.js are CubeCamera, OrthographicCamera, and PerspectiveCamera. We use the perspective camera here. Its attributes are field of view, aspect ratio, and the near and far clipping planes.
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
Step 5: Renderer
Renderer uses a <canvas> element to display the scene to us.
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
Step 6: Geometry. A BoxGeometry object contains all the points (vertices) and fill (faces) of the cube.
var geometry = new THREE.BoxGeometry( 1, 1, 1 );
Step 7: Material
three.js has materials like LineBasicMaterial, MeshBasicMaterial, MeshPhongMaterial, and MeshLambertMaterial.
They have properties like id, name, color, opacity, transparent, etc. Use MeshBasicMaterial with a color attribute of 0x00ff00, which is green.
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
Step 8: Mesh
A mesh is an object that takes a geometry, and applies a material to it, which we then can insert to our scene, and move freely around.
var cube = new THREE.Mesh( geometry, material );
Step 9: By default, when we call scene.add(), the thing we add will be added to the coordinates (0,0,0). This would cause both the camera and the cube to be inside each other. To avoid this, we simply move the camera out a bit.
scene.add( cube );
camera.position.z = 5;
Step 10: Create a loop to render something on the screen
function render() {
requestAnimationFrame( render );
renderer.render( scene, camera );
}
render();
This will create a loop that causes the renderer to draw the scene 60 times per second.
Step 11 : Animating the cube
These lines run every frame (60 times per second) and give the cube a nice rotation animation. They belong inside the render loop, as shown in the combined loop below.
cube.rotation.x += 0.1;
cube.rotation.y += 0.1;
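Putting steps 10 and 11 together, the rotation updates live inside the render loop so they run once per frame:

function render() {
requestAnimationFrame( render );
cube.rotation.x += 0.1; // radians per frame
cube.rotation.y += 0.1;
renderer.render( scene, camera );
}
render();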
2. Shaded Material on Sphere
Step 1 : Create an empty page and import three.min.js and jQuery.
<html>
<head>
<title>Shaded Material on Sphere </title>
<style>
body { margin: 0; }
canvas { width: 100%; height: 100% }
</style>
<script src="js/jquery.min.js"></script>
<script src="js/three.min.js"></script>
<script>// Our Javascript will go here.</script>
</head>
<body>
<div id="container"></div>
</body>
</html>
Step 2 : Repeat the same steps as in the previous example.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, 600/600 , 0.1, 10000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(600 , 600 );
$('#container').append(renderer.domElement);
scene.add(camera);
camera.position.z = 300; // the camera starts at 0,0,0 so pull it back
3. Create the sphere’s material as MeshLambertMaterial
MeshLambertMaterial is for non-shiny (Lambertian) surfaces, evaluated per vertex. Set the color to red.
var sphereMaterial = new THREE.MeshLambertMaterial( { color: 0xCC0000 });
4. Create a new mesh with sphere geometry (radius, segments, rings) and add it to the scene.
var sphere = new THREE.Mesh( new THREE.SphereGeometry( 50, 16, 16 ), sphereMaterial);
scene.add(sphere);
5. Light
Create a light, set its position, and add it to the scene as well. A light can be a point light, spot light, or directional light.
var pointLight = new THREE.PointLight(0xFFFFFF);
pointLight.position.x = 10;
pointLight.position.y = 50;
pointLight.position.z = 130;
scene.add(pointLight);
6. Render the whole thing
renderer.render(scene, camera);
3. Complex objects like TorusKnot
Step 1 : As before, make the scene, camera and renderer.
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(125, window.innerWidth / window.innerHeight, 1, 500);
camera.position.set(0, 0, 100);
camera.lookAt(new THREE.Vector3(0, 0, 0));
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
Step 2 : Add the lighting
var light = new THREE.PointLight(0xffffff);
light.position.set(0, 250, 0);
scene.add(light);
var ambientLight = new THREE.AmbientLight(0x111111);
scene.add(ambientLight);
Step 3 : Create the TorusKnot geometry, material and mesh.
var geometry = new THREE.TorusKnotGeometry( 8, 2, 100, 16, 4, 3 );
var material = new THREE.MeshLambertMaterial( { color: 0x2022ff } );
var torusKnot = new THREE.Mesh( geometry, material );
torusKnot.position.set(3, 3, 3);
scene.add( torusKnot );
camera.position.z =25;
Step 4 : Do the animation and render on screen
var render = function () {
requestAnimationFrame( render );
torusKnot.rotation.x += 0.01;
torusKnot.rotation.y += 0.01;
renderer.render(scene, camera);
};
render();
TFX is a modular, widget-based WebRTC communication and collaboration solution. It is customizable: developers can create and add their own widgets over the underlying WebRTC communication mechanism. It supports an extensive set of user activities such as video chat, messaging, playing games, collaborating on code, drawing together, and more; it can go as wide as your imagination. This post describes the process of creating widgets to host on the existing TFX platform.
Prerequisites
It is required to have the TFX Chrome extension installed and running from the Chrome Web Store. To do this, follow the steps described in the TangoFX v0.1 user's manual.
How to test TFX Sessions?
TFX Sessions uses the browser's media APIs, like getUserMedia and PeerConnection, to establish a p2p media connection. Before media can traverse between two endpoints, the signalling server is required to establish the path using the offer-answer model. This can be tested with unit test cases on these function calls, as sketched below.
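A minimal sketch of such checks, assuming a Mocha-style test runner (describe/it come from the runner, not from TFX):

describe('TFX media prerequisites', function () {
it('exposes a getUserMedia implementation', function () {
var gum = (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) || navigator.webkitGetUserMedia;
if (!gum) { throw new Error('no getUserMedia available'); }
});
it('exposes a PeerConnection implementation', function () {
var pc = window.RTCPeerConnection || window.webkitRTCPeerConnection;
if (typeof pc !== 'function') { throw new Error('no PeerConnection available'); }
});
});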
TFX Sessions uses a socket.io-based handshake between peers to ascertain that they are valid endpoints to enter into a communication session. This is determined by the SDP (Session Description Protocol) exchange. The same can be observed in the chrome://webrtc-internals/ traces and graphs.
How to make widgets using the TFX API?
Step 1: To make widgets for TFX, just write a simple web program consisting of one main HTML page and its associated CSS and JS files.
Step 2: Find an interesting idea that requires minimal JS and CSS. Remember it is a widget and not a full-fledged web project; however, JS frameworks like RequireJS, AngularJS, Ember.js, etc. work as well.
Step 3: Make a compact folder with the name of the widget and put the respective files in it. For example, HTML/view files go in the src folder, JavaScript files in the js folder, CSS files in the css folder, pictures in the picture folder, audio files in the sound folder, and so on.
Step 4: Once the widget performs well in a standalone environment, we can add a sync file to communicate peer behavior across the TFX network. For this we primarily use two methods:
SendMessage: sends data over TFX's DataChannel API. The content is in JSON format and is shared with the peers in the session.
OnMessage: receives the messages communicated by the TFX API over the network.
Step 5: Submit the application to us, or test it yourself by adding the plugin description in the widgetmanifest.json file. A few of the widgets already added are shown below.
Step 6: For proper orientation of the application, make sure that overflow is hidden and the left padding is at least 60px so that it doesn't overlap with the panel: padding-left: 60px; overflow: hidden;
Step 7: Voila, the widget is ready to go.
Simple Messaging Widget
For demonstration purposes, I have summarised the exact steps followed to create the simple messaging widget, which uses WebRTC's DataChannel API underneath and the TFX SendMessage and OnMessage APIs to achieve peer-to-peer messaging.
Step 1: Think of a general chat scenario as present in various messaging sites.
Step 2: Make a folder structure with separation for js, css, and src, and add the respective files. It would look like the following figure:
// send a message when focus is in the message div and Enter is hit
$("#messages").keyup(function(event){
if(event.keyCode == 13){
var msg=$('#MessageBox').val();
//send to peer
var data ={
"msgcontent":msg
}
sendMessage(data);
addMessageLog(msg);
$("#MessageBox").val('');
}
});
function addMessageLog(msg){
//add text to text area for message log for self
$('#MessageHistoryBox').text( $('#MessageHistoryBox').text() + '\n'+ 'you : '+ msg);
}
// handles send message
function sendMessage(message) {
var widgetdata={
"type":"plugin",
"plugintype":"relaymsg",
"action":"update",
"content":message
};
// postmessage
window.parent.postMessage(widgetdata,'*');
}
//to handle incoming message
function onmessage(evt) {
//add text to text area for message log from peer
if(evt.data.msgcontent!=null ){
$('#MessageHistoryBox').text( $('#MessageHistoryBox').text() +'\n'+ 'other : '+ evt.data.msgcontent );
}
}
window.addEventListener("message",onmessage,false);
Step 6: The end result is:
Developing a cross origin Widget ( XHR)
Let us demonstrate the process and the important points for creating a cross-origin widget:
Step 1: Develop a separate web project and run it on https.
Step 2: Add the widget frame in TFX. The following is the code I added to make an XHR request over GET:
var xmlhttp;
xmlhttp=new XMLHttpRequest();
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","https://192.168.0.119:8000/TFXCrossSiteProj/files/document1.txt",true);
xmlhttp.send();
Step 3: Since the certificate is self-made, we have to open the URL separately in the browser and give it explicit permission under advanced settings. Make sure the original file is visible to you at the widget's URL.
Step 4: Add permission to the extension manifest for accessing the cross-origin requests, as sketched below.
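For a Chrome extension manifest of that era (manifest v2), whitelisting the cross-origin host looks roughly like this; the host below is taken from the XHR example above and is otherwise an assumption:

"permissions": [
"https://192.168.0.119:8000/*"
]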
Step 5: The rest of the process is similar to developing a regular widget, i.e. the CSS and JS.
Step 6: Resulting widget on TFX
Note 1: In the absence of changes to the manifest file, the cross-origin request is met with an Access-Control-Allow-Origin error.
Note 2: When using POST, TFX responds with "Failed to load resource: the server responded with a status of 404 (Not Found)".
Note 3: Likewise, if http is used instead of https, TFX still responds with "Failed to load resource: the server responded with a status of 404 (Not Found)".
TFX is a WebRTC-based communication platform built entirely on open standards, making it extensively scalable. The underlying API completely masks the communication aspect and lets the user enjoy an interactive communication session. It also supports an easy-to-build widget framework which can be used to build applications on the TFX platform.
TFX Sessions
TFX Sessions is a part of TFX. It is a free Chrome extension WebRTC client that enables communicating and collaborating parties to have an interactive and immersive experience. You can find it on the Chrome Web Store here.
Features of TFX Sessions:
Through TFX, users can have instant multimedia Internet call sessions .
The core features are :
No sign-in or account management
No additional requirements like Flash, Silverlight, or Java
URL-based session management
Secure WebRTC-based communication
Complete privacy with no user tracking or media flow interruption
Ability to share a session on social network platforms like Facebook, Twitter, LinkedIn, Gmail, Google Plus, etc.
Ability to choose between multiple cameras
The TFX platform has developer friendly APIs to help build widgets. Some of the pre-built widgets available on TFX are:
Coding
Drawing
Multilingual chat
Screen sharing
TFX sessions is free for personal use and can be downloaded from Chrome Webstore.
What is the differentiator with other internet call services?
No registration or login for account management required
Communication is directly peer-to-peer, i.e. information privacy
Third-party apps and services can be included as widgets on the TFX platform
Can be slimmed down to be embedded inside a mobile app webview, iframe, or other portals at any time
TFX Sessions Integration Models
The three possible approaches for TFX integration, in increasing order of deployment time, are:
Website's widget on the TFX Chrome extension
Launch the TFX extension in an independent window from the website
TFX call from an embedded window inside the website page
1. Website's widget on the TFX Chrome extension
This outlines the quickest deliverable approach: building the website's own customized widget on the TFX widgets API and deploying it on the existing TFX communication setup.
Step 1: Log in using the website's credentials to access the content.
Step 2: Access the website with the other person inside the TFX "Pet Store" widget.
2. Launch TFX in an independent window from a "Click to Call" button on the website
This approach outlines the process of launching TFX in an independent window from the click of a button on the website. However, it is a prerequisite to have the TFX extension installed on your Chrome browser beforehand.
Step 1: Have TFX installed on the Chrome browser.
Step 2: Trigger and launch the TFX Chrome extension window on the click of a button on the webpage.
3. TFX call from an embedded window inside the website page
This section is for the third approach, which is making TFX calls from an embedded window inside the webpage. Refer to the sample screen below:
Step 1 : Have TFX embedded in an iframe inside the website
Step 2: Make a session on the click of a button inside the iframe.
Technical details about TFX, like the architecture, widget development, and component descriptions, can be found here: TFX Platform