AudioCodes WebRTC examples

Preface

AudioCodes Ltd. provides a WebRTC Gateway functionality on its Session Border Controllers that supports interworking of calls from clients using WebRTC to standard Voice over IP networks.

For browser-based WebRTC clients, AudioCodes provides a JavaScript API library (the “WebRTC Client SDK”) to easily integrate WebRTC calling with AudioCodes SBCs.

AudioCodes also provides similar SDKs for native iOS and Android applications.

The WebRTC Client SDK for web is based on an open-source JavaScript SIP library named “JsSIP”.
In this document we demonstrate how to use the API to write WebRTC client phones.

WebRTC is one of the components of HTML5 and is implemented in modern browsers.

Currently the WebRTC Client SDK supports:
Google Chrome, the new Microsoft Edge, Mozilla Firefox, and Apple Safari for Mac.
Partially supported: iOS Safari and Chrome for Android.

The WebRTC Client SDK uses modern JavaScript.
It uses the ES2015 features class, let, for...of and promises, and the ES2017 features async/await.
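For reference, here is a tiny standalone sketch (not part of the SDK) that exercises these language features; if it parses and runs in your browser, the language level the SDK needs is available:

```javascript
// Standalone feature check (not part of the SDK): uses class, let, for...of,
// promises and async/await. If the browser runs this, the SDK's language
// requirements are met.
class Greeter {
    constructor(name) { this.name = name; }
    async greet() {                              // ES2017 async function
        let parts = [];                          // ES2015 let
        for (const word of ['Hello', this.name]) // ES2015 for...of
            parts.push(word);
        return parts.join(' ');                  // resolves the returned promise
    }
}

new Greeter('WebRTC').greet().then(text => console.log(text)); // "Hello WebRTC"
```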

This document is built as a series of fully functional examples.

For educational purposes, the examples use pure JavaScript, working directly with the browser’s HTML DOM elements (without using libraries such as jquery).
The WebRTC API can only be used when a web page is loaded securely from an HTTPS site.

If you would like to see these examples in operation, you can upload them to your own HTTPS site; you will also need access to an AudioCodes SBC configured with WebRTC and session licenses.
Alternatively, you can use the examples already hosted on this website.
You may reconfigure the SBC field with your own server.

View the source code with the browser's developer tools: press Ctrl+Shift+I (Chrome, Firefox) or Alt+Cmd+I (Safari) and select the Sources tab.

Each example is a single HTML page with JavaScript.
Exiting the page is equivalent to stopping the program.
Reloading the page restarts the program.

In an HTML file, the following is used:


<body onload = "documentIsReady()">
When the page is loaded, the documentIsReady function is called.
The documentIsReady() function plays the same role as main() in C or Java.

Ensure that your browser supports the WebRTC API

Check that your browser supports WebRTC


function documentIsReady() {
    if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
        guiError("WebRTC is not supported");
    } else {
        guiInfo("WebRTC is supported");
    }
}

Here the result is shown in the HTML page instead of the JavaScript console.

Sometimes opening the JavaScript console is difficult (e.g. Chrome for Android) or impossible (e.g. Chrome for iPad).

This simple example is suitable for learning how to view the source code in the browser.
Please run the example in the Chrome browser (Firefox is similar):

Press Ctrl+Shift+I to open the developer tools
Select 'Sources' tab
Select 'Page' sub-tab

You can see that sdk/webrtc-api-base/examples/0.webrtc_check_support URL contains:

Click the 1st and the 2nd files, and you'll see the source code.

If the JavaScript code is minified, you can click the {} icon to pretty-print it.
Later we'll describe how to debug JavaScript code, or locally change it using 'overrides'.

In phone.js you can see how to get a reference to an HTML element defined in index.html,
and how to change style and innerHTML values.


function guiError(text) { guiStatus(text, 'Pink'); }
function guiInfo(text) { guiStatus(text, 'Aquamarine'); }

function guiStatus(text, color) {
    let line = document.getElementById('status_line');
    line.setAttribute('style', `background-color: ${color}`);
    line.innerHTML = text;
}

Checking available devices (camera, microphone)

Check for available devices

To make a phone call, you need at least a microphone and headphones.
A video call also requires a web camera.
In this example, we check for the presence of these peripherals.

You can see here that the WebRTC API uses the more modern Promise-style callbacks (.then/.catch).


function documentIsReady() {
    // Check devices: microphone must exist, camera is optional
    checkAvailableDevices()
        .then((camera) => {
            let str = 'microphone is found'
            if (camera)
                str = 'microphone and camera are found'
            guiInfo(str);
            console.log(str)
        })
        .catch((e) => {
            guiError(e);
            console.log(e);
        })
}

// Check WebRTC support. Check presence of microphone and camera.
function checkAvailableDevices() {
    if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia)
        return Promise.reject('WebRTC is not supported');
    let cam = false,
        mic = false,
        spkr = false;
    return navigator.mediaDevices.enumerateDevices()
        .then((deviceInfos) => {
            deviceInfos.forEach(function (d) {
                console.log(d);  // print device info for debugging
                switch (d.kind) {
                    case 'videoinput':
                        cam = true;
                        break;
                    case 'audioinput':
                        mic = true;
                        break;
                    case 'audiooutput':
                        spkr = true;
                        break;
                }
            });
            // Chrome supports 'audiooutput', Firefox and Safari do not support.
            if (navigator.webkitGetUserMedia === undefined) { // Not Chrome
                spkr = true;
            }
            if (!spkr)
                return Promise.reject('Missing a speaker! Please connect one and reload');
            if (!mic)
                return Promise.reject('Missing a microphone! Please connect one and reload');

            return Promise.resolve(cam);
        });
}
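The same check can also be written with ES2017 async/await. The sketch below is an illustrative rewrite, not the example's actual code; it takes the mediaDevices object as a parameter so the logic can be tested outside a browser (in a real page you would pass navigator.mediaDevices), and it omits the speaker check for brevity:

```javascript
// Illustrative async/await rewrite of checkAvailableDevices().
// mediaDevices is injected (pass navigator.mediaDevices in the browser).
// Speaker check omitted for brevity.
async function checkDevices(mediaDevices) {
    if (!mediaDevices || !mediaDevices.getUserMedia)
        throw 'WebRTC is not supported';
    const deviceInfos = await mediaDevices.enumerateDevices();
    const kinds = new Set(deviceInfos.map(d => d.kind));
    if (!kinds.has('audioinput'))
        throw 'Missing a microphone! Please connect one and reload';
    return kinds.has('videoinput'); // true if a camera is also present
}
```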

Very simple click to call phone

In this example, we start using the AudioCodes WebRTC API

The click-to-call phone (for outgoing calls only) uses the AudioCodes SBC anonymous user mode.

The phone call is initiated from an anonymous user to a registered user.
Note: an anonymous user cannot call another anonymous user.

The phone doesn't ask the user for any information and does not save anything in the browser.
It can therefore be used safely on public computers, such as those found in airports, internet cafes, or public libraries.

To use this phone, a webmaster should insert a link to it in an HTML page.
The HTML page containing the link may be on an HTTP or HTTPS site.
The phone page itself must be served from an HTTPS site.

Run click-to-call example (with HTML link)

Run click-to-call example (with HTML form)

Run click-to-call example (with speech recognition/synthesis)

The 1st example uses an HTML link to jump to the phone.html page.
The callee's user name or phone number is set as the URL parameter 'call'.


 <a href="https:// ...some site.../phone.html?call=SantaClaus">Click to call SantaClaus</a>

The 2nd example does about the same, but the HTML link string is built dynamically using an HTML form.
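The core of building the link dynamically is just string assembly. A minimal sketch (the function names here are illustrative, not taken from the example's source):

```javascript
// Build the click-to-call link from a form field value (illustrative helpers;
// the example's actual code may differ).
function makeCallLink(callee) {
    return 'phone.html?call=' + encodeURIComponent(callee);
}

// On the phone.html side, read the callee back from the URL query string.
function getCallee(search) { // e.g. search = location.search
    return new URLSearchParams(search).get('call');
}

console.log(makeCallLink('SantaClaus'));    // "phone.html?call=SantaClaus"
console.log(getCallee('?call=SantaClaus')); // "SantaClaus"
```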

The 3rd example is similar to the 2nd, but uses speech recognition.
While interesting, this technology is still at an experimental stage.

All three examples refer to the same web page: phone.html.
Let's look at its source text.

Included JavaScript files:

The phone GUI is simple and consists of the following:

Let's look at the source text of the phone.js

The following global variables are used:

Let's look at the documentIsReady() implementation:

API usage, and events sequence

As you can see, the phone is a state machine. Using the phone API we initiate a process (SBC connection, SBC registration, calling, call hangup, etc.). Results are asynchronous: we receive them with a delay, via callbacks.
Sometimes the result is not what we wanted.
For example, the called user may not be available.

This phone features:

Note: this is a tutorial version of the click-to-call phone. A more advanced version is provided with our SDK.

Tiny web phone

A tiny web phone example

After loading, the phone uses local storage to retrieve the user's account data (user name, password, etc.).
If an account isn't set, the phone asks the user to enter the required data.

After initialization, the phone connects to the SBC server, and sends SIP REGISTER.
Now the SBC knows where the phone is located, so the phone can receive an incoming call.

This phone supports a single concurrent call.
If another call arrives during a call, the phone automatically answers it with 'Busy Here'.
Incoming and outgoing audio & video calls are supported.

This phone's GUI is made up of a status line, a panel area and video controls.
There are 5 panels: settings, dialer, outgoing call, incoming call and call established.
Only one panel is shown at a time; the others are hidden.

The code for outgoing calls is the same as in the previous example.
Incoming calls can be answered or rejected:


activeCall.answer(phone.AUDIO);

activeCall.reject();

The phone's actual code is fairly small (400 lines)

Simple web phone

A simple web phone example

It is still a simple phone (about 1100 lines), but more advanced than the previous one.

Added features:

Desktop notification

Desktop notification is now a standard browser API.
Desktop notification document


//-------- Desktop notification (for incoming call) ---------
function guiNotificationShow(caller) {
    if (Notification.permission !== "granted")
        return;
    guiNotificationClose();

    const options = {
        image: 'images/old-phone.jpg',
        requireInteraction: true
    }
    desktopNotification = new Notification("Calling " + caller, options);
    phone.log('desktopNotification created');
    desktopNotification.onclick = function(event) {
        event.target.close();
        desktopNotification = null;
    }
}

function guiNotificationClose() {
    if (desktopNotification) {
        desktopNotification.close();
        desktopNotification = null;
        phone.log('desktopNotification.close()');
    }
}

The user will be asked for permission to enable desktop notifications:


// Request permission to use desktop notification (for incoming call)
if (Notification.permission === 'default')
    Notification.requestPermission();

Redirect incoming call

The user can answer, reject or redirect an incoming call.

Redirect uses SIP response 302 with a Contact header to provide the redirect address.


// redirect incoming call
call.redirect(recallToAddress);

When the calling phone receives the callTerminated callback with a redirect response,
it can automatically re-call the provided address.


    callTerminated: function(call, message, cause, redirectTo) {
        ac_log('phone>>> call terminated callback, cause=%o', cause);
        . . . . . .
        if (cause === 'Redirected') {
            ac_log('Redirect call to ' + redirectTo);
            let videoOption = call.hasSendVideo() ? phone.VIDEO : phone.AUDIO;
            activeCall = phone.call(videoOption, redirectTo);
        }
    }

The phone reconnects to the previously connected server if the webpage is reloaded

Before the webpage is closed, the “beforeunload” event is fired.
We make use of that event to save the connected server’s address to local storage.

Save connected server address

    let serverAddress = phone.getServerAddress(); // currently connected server address
    if (serverAddress !== null) {
        let data = {
            address: serverAddress,
            time: new Date().getTime()
        }
        localStorage.setItem('phoneRestoreServer', JSON.stringify(data));
    }
Raise priority of previously connected server address

    // load configuration.
    serverConfig = guiLoadServerConfig();

    // if the page was reloaded, try to reconnect to the previously connected server
    let restoreData = localStorage.getItem('phoneRestoreServer');

    // check if there was a saved server data in local storage
    if (restoreData !== null) {
        localStorage.removeItem('phoneRestoreServer');
        let restoreServer = JSON.parse(restoreData);
        let delay = Math.ceil(Math.abs(restoreServer.time - new Date().getTime()) / 1000);
        // Check if the stored data has a reasonable timestamp, which indicates there was a refresh not long ago.

        if (delay <=  phoneConfig.restoreCallMaxDelay) {
            // locate the index of the server address from the stored data in the servers array.
            let ix = searchServerAddress(serverConfig.addresses, restoreServer.address);
            if( ix !== -1){
                ac_log('Page reloading, raise priority of previously connected server: "' + restoreServer.address + '"');
                // give the found server high priority in the servers array, so we can be sure this address will be used
                const HIGH_PRIORITY = 1000;
                serverConfig.addresses[ix] = [restoreServer.address, HIGH_PRIORITY];
            } else {
                ac_log('Cannot find previously used server: ' + restoreServer.address + ' in configuration');
            }
        }
    }

    // proceed with the connection as always; the desired server now has the highest priority.
    phone.setServerConfig(serverConfig.addresses, serverConfig.domain, serverConfig.iceServers);

Call restoration if page is reloaded during an open call

Before the page is closed, the "beforeunload" event is fired.
Here you can check whether an active call exists.
To restore it, prepare a SIP Replaces header and other information, and save them to local storage.

Prepare restore call data

    if (activeCall !== null && activeCall.isEstablished() && phoneConfig.restoreCall) {
        let data = {
            callTo: activeCall.data['_user'],
            video: activeCall.getVideoState(), // sendrecv, sendonly, recvonly, inactive
            replaces: activeCall.getReplacesHeader(),
            time: new Date().getTime(),
            hold: `${activeCall.isLocalHold() ? 'local' : ''}${activeCall.isRemoteHold() ? 'remote' : ''}`,
            mute: `${activeCall.isAudioMuted() ? 'audio' : ''}${activeCall.isVideoMuted() ? 'video' : ''}`
        }
        localStorage.setItem('phoneRestoreCall', JSON.stringify(data));
    }
Restore call

After reloading the page and registering on the SBC server, we check whether saved call data exists; if it does, we try to restore the call.


    let restoreData = localStorage.getItem('phoneRestoreCall');
    if( restoreData !== null ){
        let restore = JSON.parse(restoreData);
        let delay = Math.ceil(Math.abs(restore.time - new Date().getTime()) / 1000);
        if (delay > phoneConfig.restoreCallMaxDelay) {
	        ac_log('No restore call, delay is too long (' + delay + ' seconds)');
	        return false;
        }
        ac_log('Trying to restore call...');
        let videoOption = (restore.video === 'sendrecv' || restore.video === 'sendonly') ? phone.VIDEO : phone.AUDIO;
        guiMakeCallTo(restore.callTo, videoOption, ['Replaces: ' + restore.replaces], { 'restoreCall': restore });
    }

When a call is restored we will also restore its Hold & Mute states


let restore = activeCall.data['restoreCall'];
if (restore) {
    if (restore.hold !== '') {
        if (restore.hold.includes('remote')) {
            ac_log('Restore remote hold');
            guiWarning('Remote HOLD');
            activeCall.setRemoteHoldState(); // Set JsSIP session internal state for remote hold.
        }
        if (restore.hold.includes('local')) {
            ac_log('Restore local hold');
            guiHold();                       // Send hold re-INVITE
        }
    } else if (restore.mute !== '') {
        if (restore.mute.includes('audio')) {
            ac_log('Restore mute audio');
            guiMuteAudio();
        }
        if (restore.mute.includes('video')) {
            ac_log('Restore mute video');
            guiMuteVideo();
        }
    }
}

Support AudioCodes SBC switch over

Two SBCs, one active and one standby, are used to support High Availability.
If the active SBC fails, the standby SBC takes over.
From the client's point of view, the SBC connection is closed or fails, and the client then reconnects and logs in to the SBC again.
This can occur during a call (there will be a short pause in the transmission of sound).
In this case the UI state shouldn't change (it stays in the open call).


    loginStateChanged: function(isLogin, cause) {
         switch (cause) {
             . . . .
             case "login":
                 ac_log('phone>>> loginStateChanged: login');
                 . . . . .
                 if (activeCall !== null && activeCall.isEstablished()) {
                     ac_log('Re-login done, active call exists (SBC might have switched over to secondary)');
                     guiShowPanel('call_established_panel');
                 }

Audio player

To make the phone resemble a stationary one, sounds were added for ringing and phone tones (ringing, busy, DTMF).

Because WebRTC can only be used in modern browsers, audio playback uses the AudioContext API.
A simple AudioPlayer implementation is provided in utils.js.

Sounds can be loaded from the site using the following encodings: MP3, AAC and OGG (Vorbis).
For modern browsers it’s not necessary to provide the same sound in alternative encodings; just use MP3 and check that it works on all supported browsers.

Tones can be generated by generateTone() or generateTonesSuite() methods.

Audio player: create, initialize, download sounds from the site, generate tones

    let SoundConfig = {
        generateTones: {
            // Ringing and busy tones vary in different countries, so those should be defined accordingly.
            // Here f - frequency Hz, t - duration seconds
            ringingTone: [{ f: 425, t: 1.0 }, { t: 4.0 }],
            busyTone: [{ f: 425, t: 0.48 }, { t: 0.48 }],
            disconnectTone: [{ f: 425, t: 0.48 }, { t: 0.48 }],
            autoAnswerTone: [{ f: 425, t: 0.3 }]
        },
        downloadSounds: [
            { ring: 'ring1' }, // ring1 by default (user can select other ring)
            'bell'
        ],
		. . .
    }
	

    let audioPlayer = new AudioPlayer();
    audioPlayer.init(ac_log);

    audioPlayer.downloadSounds('sounds/', SoundConfig.downloadSounds)
        .then(() => {
            // Concatenate user defined tones and DTMF tones defined in audioPlayer.
            let tones = Object.assign({}, SoundConfig.generateTones, audioPlayer.dtmfTones);
            return audioPlayer.generateTonesSuite(tones);
        })
        .then(() => {
            ac_log('AudioPlayer: sounds are ready:', audioPlayer.sounds);
        })
Play a sound

   audioPlayer.play({ name: 'ringingTone', loop: true, volume: 0.3 });

   audioPlayer.play({ name: 'busyTone', volume: 0.3, repeat: 4 });
Stop playing

   audioPlayer.stop();
Support of Google Chrome’s WebAudio autoplay policy

The following methods have been added:


function guiEnableSound() {
    if (!audioPlayer.isDisabled())
        return;
    audioPlayer.enable()
        .then(() => {
          ac_log('Sound is enabled')
        })
        .catch((e) => {
            ac_log('Cannot enable sound', e);
        });
}

Configure sounds/tones using config.js

The phone can use various ringback, busy and incoming calls ringtones (sounds)
Therefore, it is convenient to adjust the sound without rebuilding the phone, by modifying the SoundConfig in the file config.js


let SoundConfig = {
    generateTones: {
        // Phone ringing, busy and other tones vary in different countries.
        // Please see: https://www.itu.int/ITU-T/inr/forms/files/tones-0203.pdf

        /* Germany
        ringingTone: [{ f: 425, t: 1.0 }, { t: 4.0 }],
        busyTone: [{ f: 425, t: 0.48 }, { t: 0.48 }],
        disconnectTone: [{ f: 425, t: 0.48 }, { t: 0.48 }],
        autoAnswerTone: [{ f: 425, t: 0.3 }]
        */

        /* France
        ringingTone: [{f:400, t:1.5}, {t:3.5}],
        busyTone: [{ f: 400, t: 0.5 }, { t: 0.5 }],
        disconnectTone: [{ f: 400, t: 0.5 }, { t: 0.5 }],
        autoAnswerTone: [{ f: 400, t: 0.3 }]
        */

        /* Great Britain */
        ringingTone: [{ f: [400, 450], t: 0.4 }, { t: 0.2 }, { f: [400, 450], t: 0.4 }, { t: 2.0 }],
        busyTone: [{ f: 400, t: 0.375 }, { t: 0.375 }],
        disconnectTone: [{ f: 400, t: 0.375 }, { t: 0.375 }],
        autoAnswerTone: [{ f: 400, t: 0.3 }]
    },
    downloadSounds: [
        { ring: 'ring1' }, // ring1 by default (user can select other ring)
        'bell'
    ],
    play: {
        outgoingCallProgress: { name: 'ringingTone', loop: true, volume: 0.2 },
        busy: { name: 'busyTone', volume: 0.2, repeat: 4 },
        disconnect: { name: 'disconnectTone', volume: 0.2, repeat: 3 },
        autoAnswer: { name: 'autoAnswerTone', volume: 0.2 },
        incomingCall: { name: 'ring', loop: true, volume: 1.0, dropDisabled: true },
        incomingMessage: { name: 'bell', volume: 1.0 },
        dtmf: { volume: 0.15 }
    }
}

See the code example above for how SoundConfig is used in the downloadSounds and generateTonesSuite methods.

SoundConfig is used for all sound playback:


audioPlayer.play(SoundConfig.play.outgoingCallProgress);
audioPlayer.play(SoundConfig.play.busy);
audioPlayer.play(SoundConfig.play.disconnect);
audioPlayer.play(SoundConfig.play.autoAnswer);
audioPlayer.play(SoundConfig.play.incomingCall);
audioPlayer.play(SoundConfig.play.incomingMessage);

To play DTMF, the pressed key's name is added:


audioPlayer.play(Object.assign({ 'name': key }, SoundConfig.play.dtmf));

Incoming call with Replaces header

An incoming INVITE can contain a Replaces header (used for attended transfer).
In that case the "incomingCall" callback argument replacedCall points to the call that should be replaced
(in all other cases replacedCall is null).

The developer should close the replaced call and automatically answer the incoming call.
The new call should visually replace the previous call in the GUI.


incomingCall: function (call, invite, replacedCall) {
    ac_log('phone>>> incomingCall', call, invite, replacedCall);
    . . . .
    // If received INVITE with Replaces header
    if (replacedCall !== null) {
        ac_log('phone: incomingCall, INVITE with Replaces');

        // close the replaced call.
        replacedCall.data['terminated_replaced'] = true;
        replacedCall.terminate();

        // auto answer the replacing call.
        activeCall = call;
        activeCall.data['open_replaced'] = true;

        // Try to use the same video option as was used in replaced call.
        let videoOption = replacedCall.hasVideo() ? phone.VIDEO : (replacedCall.hasReceiveVideo() ? phone.RECVONLY_VIDEO : phone.AUDIO);
        activeCall.answer(videoOption);
        return;
    }
}

Phone prototype

Run phone prototype example

Added features:

Call history

To keep a user's call history, it's best to use IndexedDB.
If the user name is changed, the phone clears its call history.
Call history is saved in a database, and also shown in 'call_log_panel' as an unordered list.



/**
 * Database with a single store and an in-memory copy of the store (an objects list).
 * Purpose: make the list persistent.
 * The key is part of each record, based on the current time, unique, and named 'id'.
 * The number of objects in the store is limited; the oldest objects are deleted.
 * If needed, additional stores can be added: override open(),
 * and use the get(), put(), clear(), delete() methods with a store name.
 */
class AbstractDb {
    constructor(dbName, storeName, maxSize) {
        this.dbName = dbName;
        this.storeName = storeName;
        this.maxSize = maxSize; // max number of objects
        this.db = null;
        this.list = []; // default store copy in memory.
        this.idSeqNumber = -1; // to generate unique key.
    }

    // Create a unique store key (no more than 1 million in the same millisecond).
    // The key must be part of the record and have the name 'id'.
    createId(time) {
        this.idSeqNumber = (this.idSeqNumber + 1) % 1000000; // range 0..999999
        return time.toString() + '-' + ('00000' + this.idSeqNumber.toString()).slice(-6);
    }

    // Open the database, if needed create it.
    open() {
        return new Promise((resolve, reject) => {
            let r = indexedDB.open(this.dbName);
            r.onupgradeneeded = (e) => {
                e.target.result.createObjectStore(this.storeName, { keyPath: 'id' });
            }
            r.onsuccess = () => {
                this.db = r.result;
                resolve();
            }
            r.onerror = r.onblocked = () => { reject(r.error); };
        });
    }

    // load records to memory, ordered by time, if needed delete oldest records
    load() {
        return new Promise((resolve, reject) => {
            if (this.db === null) { reject('db is null'); return; }
            let trn = this.db.transaction(this.storeName, 'readwrite');
            trn.onerror = () => { reject(trn.error); }
            let store = trn.objectStore(this.storeName)
            let onsuccess = (list) => {
                this.list = list;
                let nDel = this.list.length - this.maxSize;
                if (nDel <= 0) {
                    resolve();
                } else {
                    let r = store.delete(IDBKeyRange.upperBound(this.list[nDel - 1].id));
                    r.onerror = () => { reject(r.error); }
                    r.onsuccess = () => {
                        this.list = this.list.splice(-this.maxSize);
                        resolve();
                    }
                }
            }
            let onerror = (e) => { reject(e); }
            let getAll = store.getAll ? this._getAllBuiltIn : this._getAllCursor;
            getAll(store, onsuccess, onerror);
        });
    }

    _getAllBuiltIn(store, onsuccess, onerror) { // Chrome, Firefox
        let r = store.getAll();
        r.onerror = () => onerror(r.error);
        r.onsuccess = () => onsuccess(r.result);
    }

    _getAllCursor(store, onsuccess, onerror) { // Legacy Microsoft Edge
        let list = [];
        let r = store.openCursor();
        r.onerror = () => onerror(r.error);
        r.onsuccess = (e) => {
            let cursor = e.target.result;
            if (cursor) {
                list.push(cursor.value);
                cursor.continue();
            } else {
                onsuccess(list);
            }
        };
    }

    // Add new record. If needed delete oldest records
    add(record) {
        return new Promise((resolve, reject) => {
            if (this.db === null) { reject('db is null'); return; }
            let trn = this.db.transaction(this.storeName, 'readwrite');
            trn.onerror = () => { reject(trn.error); }
            let store = trn.objectStore(this.storeName)
            let r = store.add(record);
            r.onerror = () => { reject(r.error); }
            r.onsuccess = () => {
                this.list.push(record);
                let nDel = this.list.length - this.maxSize;
                if (nDel <= 0) {
                    resolve();
                } else {
                    r = store.delete(IDBKeyRange.upperBound(this.list[nDel - 1].id));
                    r.onerror = () => { reject(r.error); }
                    r.onsuccess = () => {
                        this.list = this.list.splice(-this.maxSize);
                        resolve();
                    }
                }
            }
        });
    }

    // Update record with some unique id.
    update(record) {
        let index = this.list.findIndex((r) => r.id === record.id);
        if (index == -1)
            return Promise.reject('Record is not found');
        this.list[index] = record;
        return this._exec('put', this.storeName, record);
    }

    // Delete record with the key (if store is default delete also from list)
    delete(id, storeName = this.storeName) {
        if (storeName === this.storeName) {
            let index = this.list.findIndex((r) => r.id === id);
            if (index == -1)
                return Promise.reject('Record is not found');
            this.list.splice(index, 1);
        }
        return this._exec('delete', storeName, id);
    }

    // Clear all store records
    clear(storeName = this.storeName) {
        this.list = [];
        return this._exec('clear', storeName);
    }

    get(key, storeName) {
        return this._exec('get', storeName, key);
    }

    put(record, storeName) {
        return this._exec('put', storeName, record);
    }

    // Single transaction operation.
    _exec(op, storeName, data) {
        return new Promise((resolve, reject) => {
            if (this.db === null) { reject('db is null'); return; }
            let trn = this.db.transaction(storeName, 'readwrite');
            trn.onerror = () => { reject(trn.error); }
            let store = trn.objectStore(storeName)
            let r;
            switch (op) {
                case 'clear':
                    r = store.clear();
                    break;
                case 'delete':
                    r = store.delete(data);
                    break;
                case 'put':
                    r = store.put(data);
                    break;
                case 'get':
                    r = store.get(data);
                    break;
                default:
                    reject('db: wrong request');
                    return;
            }
            r.onerror = () => { reject(r.error); }
            r.onsuccess = () => { resolve(r.result); }
        });
    }
}

/**
 * To keep phone call logs.
 */
class CallLogDb extends AbstractDb {
    constructor(maxSize) {
        super('phone', 'call_log', maxSize);
    }
}

Custom loggers

By default, all logs from JsSIP and the AudioCodes API are written with console.log.
You can reassign them to a custom logger.


    function setConsoleLoggers() {
        let useTimestamp = phoneConfig.addLoggerTimestamp;
        let useColor = ['chrome', 'firefox', 'safari'].includes(phone.getBrowser());

        ac_log = function () { // Assign ac_log global variable. It's phone logger.
            let args = [].slice.call(arguments);
            let firstArg = [(useTimestamp ? createTimestamp() : '') + (useColor ? '%c' : '') + args[0]];
            if (useColor) firstArg = firstArg.concat(['color: BlueViolet;']);
            console.log.apply(console, firstArg.concat(args.slice(1)));
        };
        let js_log = function () {
            let args = [].slice.call(arguments);
            let firstArg = [(useTimestamp ? createTimestamp() : '') + args[0]];
            console.log.apply(console, firstArg.concat(args.slice(1)));
        };

        phone.setAcLogger(ac_log);     // It's AudioCodes SDK logger.
        phone.setJsSipLogger(js_log);  // It's JsSIP stack logger.
    }

Note: strictly speaking, there is no need to add a timestamp to the log;
you can enable console log timestamps in the browser instead.

However, our customers send us logs of problems they discover,
and to our surprise, none of them sent logs with timestamps.
Therefore, we decided to add a timestamp string before each log entry in our example logger functions.
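The createTimestamp() helper used in the loggers above could look like the following; this is a possible implementation (the one shipped in the examples' utils.js may differ), formatting the time as "HH:MM:SS.mmm ":

```javascript
// Possible createTimestamp() implementation (the example's own helper may
// differ): formats a Date as "HH:MM:SS.mmm " for use as a log prefix.
function createTimestamp(date = new Date()) {
    const pad = (n, w = 2) => String(n).padStart(w, '0');
    return pad(date.getHours()) + ':' + pad(date.getMinutes()) + ':'
        + pad(date.getSeconds()) + '.' + pad(date.getMilliseconds(), 3) + ' ';
}

console.log(createTimestamp(new Date(2024, 0, 2, 9, 5, 3, 7))); // "09:05:03.007 "
```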

By default, the log is printed in the browser console window.
It can also be sent via websocket to a server.
We added a websocket logger to our examples:


    function setWebsocketLoggers(url) {
       return new Promise((resolve, reject) => {
           let ws = new WebSocket('wss://' + url, 'wslog');
           ws.onopen = () => { resolve(ws); }
           ws.onerror = (e) => { reject(e); }
       })
           .then(ws => {
               const log = function () {
                   let args = [].slice.call(arguments);
                   ws.send([createTimestamp() + args[0]].concat(args.slice(1)).join() + '\n');
               };
               ac_log(`Sending log to "${url}"`);
               ac_log = log;               // It's phone logger.
               phone.setAcLogger(log);     // It's AudioCodes SDK logger.
               phone.setJsSipLogger(log);  // It's JsSIP stack logger.
           })
    }

If the websocket logger cannot connect to the cloud service, the browser console is used.


    function documentIsReady() {
        // Load configurations
        serverConfig = guiLoadServerConfig();
        phoneConfig = guiLoadPhoneConfig();
        userPref = guiLoadUserPref();
    
        // Set logger
        if (!serverConfig.logger) {
            setConsoleLoggers();                      // Use console logger.
            startPhone();
        } else {
            setWebsocketLoggers(serverConfig.logger)  // Use websocket logger.
                .catch((e) => {
                    setConsoleLoggers();              // Cannot connect. Use console logger.
                    ac_log('Cannot connect to logger server', e);
                })
                .finally(() => {
                    startPhone();
                })
        }
    }

Access to internal WebRTC objects

After the call is established, you can obtain the RTCPeerConnection using the method call.getRTCPeerConnection(),
and the local and remote WebRTC media streams using the methods call.getRTCLocalStream() and call.getRTCRemoteStream().

Note: Developers may call the WebRTC API directly or use the provided SDK functions.
The SDK functions are called via phone.getWR() (the WebRTC wrapper) and return a Promise.
See the examples below.

Print active call track information (SDK API)

async function printStreamsParameters() {
    if (activeCall === null) {
        ac_log('activeCall is null');
        return;
    }
    // Current video state is set according to the answer SDP (a hold answer is ignored)
    ac_log('Video State current: ' + activeCall.getVideoState() + ' enabled: ' + activeCall.getEnabledVideoState());

    // WebRTC tracks
    let li = await phone.getWR().stream.getInfo(activeCall.getRTCLocalStream());
    let ri = await phone.getWR().stream.getInfo(activeCall.getRTCRemoteStream());
    ac_log(`Enabled Tracks: local ${li} remote ${ri}`)

    // WebRTC transceivers
    let ti = await phone.getWR().connection.getTransceiversInfo(activeCall.getRTCPeerConnection());
    ac_log(`Transceivers: ${ti}`);
}
Print active call track information (WebRTC API)

function printStreamsParameters() {
    if (activeCall === null) {
        ac_log('activeCall is null');
        return;
    }
    // Current video state is set according to the answer SDP (a hold answer is ignored)
    ac_log('Video State current: ' + activeCall.getVideoState() + ' enabled: ' + activeCall.getEnabledVideoState());

    // WebRTC tracks
    ac_log(`Enabled Tracks: local ${getStreamInfo(activeCall.getRTCLocalStream())} remote ${getStreamInfo(activeCall.getRTCRemoteStream())}`)

    // WebRTC transceivers
    let conn = activeCall.getRTCPeerConnection();
    let ts = conn.getTransceivers();
    let at = getTransceiver(ts, 'audio');
    let vt = getTransceiver(ts, 'video');
    ac_log(`Transceivers: (${ts.length}) audio ${getTransInfo(at)} video ${getTransInfo(vt)}`, ts);
}

function getStreamInfo(st) {
    if( st === null )
      return 'stream is null'
    return `audio: ${getTrackInfo(st.getAudioTracks())} video: ${getTrackInfo(st.getVideoTracks())}`;
}

function getTrackInfo(tr) {
    return tr.length > 0 ? tr[0].enabled.toString() : '-'
}

function getTransceiver(transceivers, kind){
    for (let t of transceivers) {
        if (t.sender !== null && t.sender.track !== null && t.sender.track.kind === kind)
            return t;
        if (t.receiver !== null && t.receiver.track !== null && t.receiver.track.kind === kind)
            return t;
    }
    return null;
}

function getTransInfo(t){
    return t === null ? 'none' : `d=${t.direction} c=${t.currentDirection}`;
}
Print active call statistics (SDK API)

function printCallStats() {
    if (activeCall === null) {
        ac_log('activeCall is null');
        return;
    }
    let conn = activeCall.getRTCPeerConnection();
    phone.getWR().connection.getStats(conn, ['outbound-rtp', 'inbound-rtp'])
        .then(str => {
            ac_log('call stats: ' + str);
        })
        .catch(err => {
            ac_log('stat error', err);
        });
}
Print active call statistics (WebRTC API)

function printCallStats() {
    if (activeCall === null) {
        ac_log('activeCall is null');
        return;
    }
    let conn = activeCall.getRTCPeerConnection();
    let str = '';
    let types = ['outbound-rtp', 'inbound-rtp'];
    conn.getStats(null)
        .then(report => {
            report.forEach(now => {
                if (types.includes(now.type)) {
                    str += ' {';
                    let first = true;
                    for (let key of Object.keys(now)) {
                        if (first) first = false;
                        else str += ',';
                        str += (key + '=' + now[key]);
                    }
                    str += '} \r\n';
                }
            });
        })
        .then(() => {
            ac_log(str);
        });
}

Blind call transfer

Transferor

To transfer a call, the phone places the current call on hold and sends a SIP REFER message within the active call, thereby requesting the remote party to initiate a new call to a different destination.


async function blindTransfer(transferTo) {
    ac_log('blind transfer ' + transferTo);

    //  Wait until the active call is on hold
    while (activeCall !== null && !activeCall.isLocalHold()) {
        try {
            await activeCall.hold(true);
        } catch (e) {
            await new Promise(resolve => setTimeout(resolve, 1000));
        }
    }
    if (activeCall === null)
        return;

    // send REFER
    activeCall.sendRefer(transferTo);
}

After a call transfer has been initiated, the phone monitors its progress using the "transferorNotification" callback.
If the call transfer fails, it un-holds the current call.
If the call transfer succeeds, it terminates the current call.


transferorNotification: function (call, state) {
    switch (state) {
        case 0:    // "in progress": REFER accepted or received NOTIFY 1xx
            break;

        case -1:   // "failed": REFER rejected or received NOTIFY >=300
            call.hold(false); // un-hold active call
            break;

        case 1:   // "success" received NOTIFY 2xx
            guiHangup(); // terminate active call
            break;
    }
}
Transferee

When the phone receives a REFER message it will call the address extracted from the Refer-To header.

To receive a REFER message, use the "transfereeRefer" callback. This callback allows the phone to accept or reject incoming REFER messages.


transfereeRefer: function (call, refer) {
    if (transferCall === null) {
        ac_log('phone>>> transferee incoming REFER: accepted');
        return true;
    } else {
        ac_log('phone>>> transferee incoming REFER: rejected, because other transfer in progress');
        return false;
    }
}

If a REFER message is accepted, the SIP stack automatically creates a new call and starts the calling process.
Developers should use the "transfereeCreatedCall" callback to receive a reference to the newly created call object.
Note: the transferee's phone will have two calls working simultaneously, i.e. the call in which the REFER was received and a new outgoing call to the REFER's specified destination.


transfereeCreatedCall: function (call) {
    ac_log('phone>>> transferee created call', call);
    transferCall = call; // Used until the call is established
    guiInfo('call transferring to ' + call.data['_user']);
    . . . . .
}

Receiving NOTIFY in/out of dialog

Developers should use the "incomingNotify" callback to receive incoming NOTIFY.

Note: receiving an in-dialog NOTIFY requires the modified JsSIP used by the SDK
(because it is not a standard SIP extension).

Partial implementation of Broadsoft call control:


incomingNotify: function (call, eventName, from, contentType, body, request) {
    ac_log(`phone>>> incoming NOTIFY "${eventName}"`, call, from, contentType, body);
    if (call === null)
        return false; // skip out of dialog NOTIFY.
    if (eventName !== 'talk' && eventName !== 'hold')
        return false; // skip unsupported events
    if (activeCall === null)
        return false; // skip illegal state.

    if (eventName === 'talk') {
        if (!activeCall.isEstablished() && !activeCall.isOutgoing()) {
            ac_log('incoming NOTIFY "talk": answer call');
            // Choose the best available video option.
            let videoOption = activeCall.hasVideo() ? (hasCamera ? phone.VIDEO : phone.RECVONLY_VIDEO) : phone.AUDIO;
            guiAnswerCall(videoOption);
        } else if (activeCall.isEstablished() && activeCall.isLocalHold()) {
            ac_log('incoming NOTIFY "talk": un-hold call');
            call.hold(false);
        } else {
            ac_log('incoming NOTIFY "talk": ignored');
        }
    } else if (eventName === 'hold') {
        if (activeCall.isEstablished() && !activeCall.isLocalHold()) {
            ac_log('incoming NOTIFY "hold": set call on hold');
            activeCall.hold(true);
        } else {
            ac_log('incoming NOTIFY "hold": ignored');
        }
    }
    return true; // mark that we 'consume' the NOTIFY.
}

Incoming call custom header usage

An incoming INVITE may contain custom SIP headers.
In this example we check the Alert-Info header.


incomingCall: function (call, invite, replacedCall){
    . . . .
    // Check if incoming INVITE contains Alert-Info header.
    let alertInfo = new AlertInfo(invite);
    ac_log(`alert-info header ${alertInfo.exists() ? ' exists' : 'does not exist'}`);
    if (alertInfo.hasAutoAnswer()) {
        ac_log('*** Used Alert-Info Auto answer ***');
        // Choose the best available video option.
        let videoOption = activeCall.hasVideo() ? (hasCamera ? phone.VIDEO : phone.RECVONLY_VIDEO): phone.AUDIO;
        guiAnswerCall(videoOption);
        return;
    }
    . . . .

"incomingCall" callback "invite" argument is JsSIP.IncomingRequest
Developer can get the header(s) by invite.getHeaders('alert-info')
JsSIP does not provide parser for 'Alert-Info', it is presented as raw header (string)
We use here custom Alert-Info parser (defined in utils.js).
In a similar way developer can use any custom SIP header.
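For illustration, a simplified Alert-Info parser along these lines could look as follows (the real one lives in utils.js; the matching logic here is a simplified assumption):

```javascript
// Simplified sketch of an Alert-Info parser (see utils.js for the real one).
class AlertInfo {
    constructor(invite) {
        // JsSIP presents Alert-Info as raw header string(s).
        this.headers = invite.getHeaders('alert-info');
    }
    exists() {
        return this.headers.length > 0;
    }
    hasAutoAnswer() {
        // e.g. Alert-Info: <http://example.com>;info=alert-autoanswer
        return this.headers.some(h => /auto[-_ ]?answer/i.test(h));
    }
}
```

The invite argument is the JsSIP.IncomingRequest passed to the incomingCall callback.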

Sending/Receiving out of dialog SIP MESSAGE

To receive an out-of-dialog SIP MESSAGE, use the "incomingMessage" callback:


incomingMessage: function (call, from, contentType, body, request) {
    ac_log('phone>>> incoming MESSAGE', from, contentType, body);
	. . . . .
}

To send a SIP MESSAGE, use phone.sendMessage():


function guiSendMessage() {
    let to = document.querySelector('#send_message_form [name=send_to]').value.trim();
    let text = document.querySelector('#send_message_form [name=message]').value.trim();
    if (to === '' || text === '')
        return;
    phone.sendMessage(to, text)
        .then((e) => {
            ac_log('message sent', e);
            guiInfo('Message sent');
        })
        .catch((e) => {
            ac_log('message sending error', e);
            guiError('Cannot send message: ' + e.cause);
        });
}

Note: always check the sending result.
If the receiving phone is not registered (offline), an error will be caught,
because the SBC responds with 404 "User not found".

Sending/Receiving in dialog SIP INFO

To receive an in-dialog SIP INFO, use the "incomingInfo" callback:


incomingInfo: function (call, from, contentType, body, request) {
    ac_log('phone>>> incoming INFO', call, from, contentType, body);
	. . . . .
}

To send a SIP INFO message, use call.sendInfo():


function guiSendInfo() {
    let info = {test: 'test'};
    activeCall.sendInfo(JSON.stringify(info), 'application/json');
}

Local and remote video elements overlapping

Local and remote video elements can overlap each other on the screen.
Select the video size 'Custom', then use the mouse to drag the video element and the mouse wheel to increase or decrease its size.

You can see how it is implemented in the functions guiSetVideoStyles(), guiUseMouse(), and the functions starting with eventMouse...

Set audio and video constraints for browsers

The following browser names can be used: "chrome", "firefox", "safari", "other",
or combined with an OS name, e.g. "chrome|windows", "chrome|android", ...
Possible OS names: "windows", "android", "macos", "ios", "linux", "other".

If two entries match the current browser, the later one replaces the previous.
E.g. if the current browser is Safari on iOS, for the example below the constraints for "safari" are set first, and then they are replaced by the second matched entry, "safari|ios".


let constraints = {
    chrome: { 
        audio: { echoCancellation: true },
        video: { aspectRatio: 1.0 }		   
    },
	
    firefox: { 
        audio: { echoCancellation: true }
    },
    
    safari: {
        audio: { echoCancellation: false } 
    },
	
    "safari|ios": {
        audio: { echoCancellation: false } 
    },
	
    other: {
        audio: { echoCancellation: true }
    }
};

phone.setBrowsersConstraints(constraints);
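The matching rule described above can be sketched as follows (illustrative; the SDK's internal logic may differ):

```javascript
// Illustrative sketch of the entry matching: the "other" fallback is taken
// first, then replaced by the generic browser entry, then by the more
// specific "browser|os" entry if present.
function selectBrowserConstraints(all, browser, os) {
    let selected = all['other'] || {};
    if (all[browser] !== undefined)
        selected = all[browser];
    if (all[browser + '|' + os] !== undefined)
        selected = all[browser + '|' + os];
    return selected;
}
```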

In this example we don't check whether the Chrome browser supports, for example, the video constraint "aspectRatio".
That depends on the operating system, driver and web camera model.
Therefore, to avoid an over-constrained error, we use the optional format of constraints: the default form or the keyword "ideal", and avoid the keyword "exact".

Set audio and video constraints for the browser

For this purpose, the setConstraints() method can be used to set (or replace previously used) audio or video constraints for the getUserMedia() method.
Constraints can be set for the audio (microphone) or video (camera) device.

It is important to use supported constraints and value ranges, or to use the optional constraints format;
otherwise, during call opening (in the WebRTC getUserMedia method) an over-constrained error will occur.
See also Capabilities, constraints, and settings

setConstraints() method parameters

Audio constraints example


let supported = navigator.mediaDevices.getSupportedConstraints();
let ac = {};

ac_log('Volume is supported: ', supported.volume ? true : false);
if (supported.volume){
    ac.volume = 0.7;
}

ac_log('Echo cancellation is supported: ', supported.echoCancellation ? true : false);
if (supported.echoCancellation){
    ac.echoCancellation = true;
}

if (Object.keys(ac).length > 0){ // Is not empty ?
    phone.setConstraints(null, 'audio', ac); // null means current browser.
}

Video constraints example


let supported = navigator.mediaDevices.getSupportedConstraints();
let vc = {};

ac_log('webcam facing mode is supported: ', supported.facingMode ? true : false);
if (supported.facingMode){
    vc.facingMode = { ideal: 'user' };
}

ac_log('webcam aspect ratio is supported: ', supported.aspectRatio ? true : false);
if(supported.aspectRatio){
    vc.aspectRatio = 1.0;
}

if (Object.keys(vc).length > 0){ // Is not empty ?
    phone.setConstraints(null, 'video', vc); // null means current browser.
}

Add or remove single audio or video constraint for currently used browser

To add or remove a single audio or video constraint without modifying the other constraints, use the setConstraint() method.


  // Set audio deviceId for getUserMedia()
  phone.setConstraint('audio', 'deviceId', 'some-device-id-string'); 

  // Set audio deviceId for getUserMedia() as "exact" constraint
  // If such device does not exist getUserMedia() promise will be rejected with OverconstrainedError
  phone.setConstraint('audio', 'deviceId', {exact: 'some-device-id-string'});

  // Remove audio deviceId for getUserMedia()
  phone.setConstraint('audio', 'deviceId', null);

Screen sharing

WebRTC supports a screen-sharing video stream (see navigator.mediaDevices.getDisplayMedia()).
We know how to start sending video in an audio call, and how to replace one video track with another in a video call.
We can add a custom header to an outgoing re-INVITE.
Combining these techniques, we add screen-sharing support:


let stream = phone.openScreenSharing()
Creates a screen-sharing stream using navigator.mediaDevices.getDisplayMedia().
The user selects the sharing type: full screen, a window or a browser tab.


phone.closeScreenSharing(stream)
Closes the screen-sharing stream previously opened by openScreenSharing().


call.startScreenSharing(stream)
For an audio call it works the same as call.startSendingVideo().
For a video call it replaces the video track sent from the web camera with the screen-sharing video track.


call.stopScreenSharing()
Stops screen-sharing video; it can also be stopped by the browser's built-in "Stop sharing" button.
For an audio call it is the same as call.stopSendingVideo().
For a video call, the previously sent web camera video track is restored.

Because screen-sharing can be terminated not only by stopScreenSharing() but also by the browser's built-in button, a new callback was added to the phone:


callScreenSharingEnded(call, stream)
The callback is used to update the GUI.
In a multi call phone, the same screen-sharing stream can be used in multiple startScreenSharing() calls.
The callback allows you to keep a usage counter and close the stream if no call is using it.

To the other side, the screen-sharing video is ordinary video.
To notify the other side that a screen-sharing video is being sent,
the client sends a re-INVITE with the special header:
X-Screen-Sharing: on
or
X-Screen-Sharing: off

If the call was replaced, it means the same as X-Screen-Sharing: off.

After a page reload, screen-sharing will be restored if the user approves it.
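Putting the calls above together, a GUI toggle could be sketched as follows (a sketch only: it assumes the phone and activeCall globals from the earlier examples, and that openScreenSharing() and the call methods return promises):

```javascript
// Sketch of a screen-sharing toggle button handler (names are illustrative).
let screenStream = null;

async function guiToggleScreenSharing() {
    if (screenStream === null) {
        // The browser asks the user to pick a screen, window or tab.
        screenStream = await phone.openScreenSharing();
        await activeCall.startScreenSharing(screenStream);
    } else {
        await activeCall.stopScreenSharing();
        phone.closeScreenSharing(screenStream);
        screenStream = null;
    }
}
```

A real multi call phone would also handle the callScreenSharingEnded callback, since sharing can end via the browser's own "Stop sharing" button.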

Please see the API usage in single call and multi call phone prototypes.

SUBSCRIBE dialog

SDK 1.15 added a generic SIP SUBSCRIBE dialog.
It uses our subscriber/notifier JsSIP extension: https://github.com/versatica/JsSIP/issues/708.

Please see usage example in single call phone prototype and phone prototype with ACD.

Phone prototype with an answering machine

Run phone prototype with an answering machine

Added feature:

When a user doesn't answer an incoming call within a set number of seconds,
the answering machine takes over and answers the call.

First, a greeting message is played.
The phone has a default greeting,
or the user can record a custom greeting using their microphone.

Then a beep sound is played.
The calling party can now record a voice message and hang up.
The maximum voice message time is limited.

The recorded message is stored in IndexedDB and will not be lost after a page reload.
There is a visual notification that the phone received a new voice message.
(In our example we changed the color and border color of the button 'Answering Machine')

In the answering machine panel, the user can:

To implement the answering machine, the following was used:

Because an answering machine complicates the code and isn't needed by all customers, it is provided as a separate example.

Phone prototype with OAuth2 authorization

Run phone prototype with OAuth2

Added feature:

For this example we're using Keycloak server; an open source software specializing in Identity and Access Management.

The JavaScript phone uses the provided keycloak.js adapter

Changes in comparison with the phone prototype example

Keycloak adapter initialization


let authServerConfig = { url: 'https://webrtcoauth.example.com/auth',
                         realm: 'demo',
                         clientId: 'demoClient'};


function initializeAuthServer() {
    ac_log('keycloak: create adapter');
    keycloak = new Keycloak(authServerConfig);

    keycloak.onTokenExpired = () => {
        ac_log('keycloak: onTokenExpired callback');
        updateAuthToken();
    }

    ac_log('keycloak: init()');
    keycloak.init({ onLoad: 'login-required' })
        .then(() => {
            ac_log('keycloak: initialized');
            return keycloak.loadUserProfile();
        })
        .then(() => {
            // To disable auto call login() if updateToken() fails.
            keycloak.loginRequired = false;

            userAccount = {
                user: keycloak.profile.username,
                password: '',
                displayName: keycloak.profile.firstName + ' ' + keycloak.profile.lastName
            }

            phone.setOAuthToken(keycloak.token);

            initializePhone();
        })
        .catch((err) => {
            ac_log('keycloak: initialization error', err);
        });
}

When using a keycloak adapter, the phone is no longer a Single Page Application (SPA).

The first call to keycloak.init() sends the user to a Keycloak server HTML page where the user will enter a name and a password.
Afterwards they’ll be redirected to the page with the phone.
Loading the phone page calls keycloak.init() again, but once the credentials have been entered it will not redirect the user to the login page.

Depending on the configuration of the server, the user name and password are rarely required (approximately once every 2 weeks)

In subsequent calls of the phone page HTML, keycloak.init() will load the phone page directly instead of redirecting to the Keycloak server.
It will be performed without entering the password, but still adds a few seconds to the startup.

REGISTER with access token example

The keycloak.js adapter provides a short-lived access token for SBC authorization. That includes authorization for SIP REGISTER and INVITE messages using an Authorization SIP header.


REGISTER sip:audiocodes.com SIP/2.0
Via: SIP/2.0/WSS kiagrmh8pngt.invalid;branch=z9hG4bK3892878
Max-Forwards: 69
To: <sip:johndoe@audiocodes.com>
From: "JohnDoe" <sip:johndoe@audiocodes.com>;tag=t831p2e9jk
Call-ID: prladr5faj5nqtpo5f7sun
CSeq: 14 REGISTER
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJUdDl1TF9Ya0hSampFR2NUZFRlYXZ0dmxTc0pXYWplRHhIR1MzL
 XlVazhZIn0.eyJqdGkiOiJmOTVmYWEzZC02N2YxLTRmYjEtODlmOC1hMzQ5ZTc0Y2FlNzMiLCJleHAiOjE1MzEwNTQ4NjYsIm5iZiI6MCwiaWF0IjoxNTMxMDU
 0MjY2LCJpc3MiOiJodHRwczovL3dlYnJ0Y29hdXRoLmF1ZGlvY29kZXMuY29tL2F1dGgvcmVhbG1zL2RlbW8iLCJhdWQiOiJXZWJSVENEZW1vIiwic3ViIjoiM
 jQxZjlkNWEtMzhhNC00Y2Q1LTlhOWItYzBhYjIxNzJkZDZiIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiV2ViUlRDRGVtbyIsIm5vbmNlIjoiMzBhYTdhYWEtMjd
 jMC00NDAwLTllYjQtY2Y2NmQwZjQxYmM4IiwiYXV0aF90aW1lIjoxNTMxMDU0MjU3LCJzZXNzaW9uX3N0YXRlIjoiM2NlZDVhNmQtZjkwYy00NjU0LWIwYWItM
 DNjNmU5MTcxNWU0IiwiYWNyIjoiMCIsImFsbG93ZWQtb3JpZ2lucyI6WyIqIl0sInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJ1bWFfYXV0aG9yaXphdGlvbiJ
 dfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZ
 mlsZSJdfX0sIm5hbWUiOiJIYWlmYTEgVXNlcjEiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJoYWlmYXVzZXIxIiwiZ2l2ZW5fbmFtZSI6IkhhaWZhMSIsImZhbWl
 seV9uYW1lIjoiVXNlcjEiLCJlbWFpbCI6ImhhaWZhdXNlcjFAZXhhbXBsZS5jb20ifQ.hTnVH-wKGlPDwBreK6c2hxgxZq5jBd9FWplrfRzXHt6wm5cMbbGuJg
 myJtlcVXueCh9KgFgVe6T9i7VPrtmmgCCVLMOqKSRHZkQn6xcb52Ua0NO8v66qqZKzGToKAXNoJJBOzv8s0iFpxojbu0ZgRYzTkdNtNq2YAbsNfrQRYAPKtBZA
 Qdcm6alkU1YYqh4BVhEk5MehXYerQj8B8KmwzkmNTwJc34EhZ1CkFbyOO3bqumwSTPo4eOVDhcA82q8J4dw3kDkKZh9RpaV4RLsv-5FngPjX1CGMwFqsHd4EZ_
 v62nvrKLm3JxHMu1GLQGuhFwjw37iqaxL8XdWaHssLkA
Contact: <sip:5g8glmd1@kiagrmh8pngt.invalid;transport=ws>;+sip.ice;reg-id=1;+sip.instance="";expires=600
Expires: 600
Allow: INVITE,ACK,CANCEL,BYE,UPDATE,MESSAGE,OPTIONS,REFER,INFO
Supported: path,gruu,outbound
User-Agent: AudioCodes WebRTC phone
Content-Length: 0


Access token updating

Depending on the configuration of the server, the short-lived access token will expire after a set amount of time has elapsed (e.g. every 10 minutes) and will need to be updated from the Keycloak server. The function is called when:


function updateAuthToken() {
    ac_log('keycloak: updateToken()');
    return keycloak.updateToken(-1)
        .then((refreshed) => {
            if (!refreshed) {
                ac_log('keycloak: token is still valid');
                return;
            }
            ac_log('keycloak: token is refreshed');
            phone.setOAuthToken(keycloak.token); // set access token for AudioCodes API.
        })
        .catch((e) => {
            ac_log('keycloak: Failed to refresh the access token', e);
            if (activeCall !== null && activeCall.isEstablished()) {
                ac_log('keycloak: re-login needed. Postponed because there is an active call.');
                authLoginPostponed = true;
            } else {
                ac_log('keycloak: login()');
                keycloak.login();
            }
        });
}

Test mode

To test the SBC, a test mode has been added. The test mode allows you to send REGISTER and INVITE SIP messages without an Authorization header, or with Authorization header that contains an incorrect access token value.

To enable the test mode, start the phone, open JavaScript console, and type:


    localStorage.setItem('authTests', 'r1 i1');  // set r1 and i1 test
Then reload the phone page.
When the test mode is used, the console will display:

    Warning: USED PHONE TEST MODE: authTests='r1 i1'

This can be set for one or more test names. Use space as the delimiter between test names.

Implemented tests:

To disable the test mode, start the phone, open JavaScript console, and type:


    localStorage.removeItem('authTests');

And then reload the phone page.

In a production version, instead of loading the authTests value from local storage, set an empty string value or completely remove the pieces of code marked with 'test code' comments.

Phone prototype with Automatic call distributor (ACD)

Run phone prototype with ACD

Added feature:

In this example, SIP SUBSCRIBE dialogs are used to work with the automatic call distributor (ACD) server.

To parse XML, Scott Means's Pure JavaScript XML parser is used: https://github.com/smeans/pjxml
(MIT license).
The code was converted to class syntax.

Multi call phone prototype

Run multi call phone prototype

Added features:

The provided API supports multiple calls.
We did not use this capability in earlier phone prototypes,
because it significantly complicates the GUI and is rarely necessary.

Nevertheless, it is needed for 3-way conferencing and attended call transfer.

GUI scheme

Conference

The conference model is very simple:
if the phone is in conference mode,
all callers are in the same conference room and hear (and see) each other.

All newer calls will be added to this conference room.

Conference mode can be changed during its operation: audio, audio and video or switched off.

Audio conference

To implement audio conferencing we’ll need to mix audio streams.
There is an Audio Context API that can help us achieve that.

It used to work exclusively in Mozilla’s Firefox browser, and not in Google’s Chrome browser.
Finally, in 2019 Chrome’s bug on the subject was fixed. (See Chromium issue 121673, reported in 2012)
Now we can implement audio conferencing via Google Chrome as well.

In the WebRTC phone, the microphone generates an audio stream that is sent to the remote phone.
To create a conference we replace it with an audio mixer stream.

Let's consider, for example, a 3-way conference.

We have 2 open calls:
1st call A - B
2nd call A - C

In normal phone mode, A sends its microphone stream to B and C.
To create an audio conference:
The 1st call will send to B an audio stream containing audio mixed from the microphone and audio received from C.
The 2nd call will send to C an audio stream containing audio mixed from the microphone and audio received from B.

So here we’ll use 2 audio mixers (one mixer per each call).
We cannot use a single audio mixer to mix all streams and send it to all remote calls because then the remote user will receive and hear their own echo.

Similarly, we can create conferences with more participants this way.
For each call we send an audio stream mixed from the microphone and the other calls.
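The per-call mixing described above could be sketched with the Audio Context API like this (illustrative only; the SDK's CallAudioMixer in utils.js is the real implementation):

```javascript
// Sketch: build the mixed stream sent on one call. Mix the microphone with
// the audio received from every *other* call - never this call's own
// received audio, otherwise the remote party would hear their own echo.
function createMixForCall(audioCtx, micStream, otherRemoteStreams) {
    let dest = audioCtx.createMediaStreamDestination();
    audioCtx.createMediaStreamSource(micStream).connect(dest);
    for (let stream of otherRemoteStreams)
        audioCtx.createMediaStreamSource(stream).connect(dest);
    return dest.stream; // its audio track replaces the call's sender track
}
```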

Note: about tracks and streams.
A known problem is that some Web APIs use only streams, while others use only tracks.
A stream contains an audio track, or an audio track and a video track.

The WebRTC API supports streams with 2 or more audio tracks.
We don't use that possibility (sending multiple tracks) for conferencing; instead we send a single stream containing a single audio track (with the mixed audio).

For audio calls, the phone sends a local stream to the remote phone.
When we create a local stream we call the WebRTC getUserMedia API (which uses only streams).
The stream contains the microphone audio track.
To mix the microphone's audio with the remote side's audio we use the Audio Context API (which also uses only streams).
Then we get an audio track from the mixed stream and call the WebRTC sender method to replace the track (which uses only tracks).
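The final step, replacing the sender's track, could be sketched as follows (the helper name is illustrative; replaceTrack() is the standard RTCRtpSender method):

```javascript
// Sketch: replace the call's outgoing audio track with the mixed stream's track.
async function sendMixedAudio(call, mixedStream) {
    let conn = call.getRTCPeerConnection();
    // Find the audio sender of the peer connection.
    let sender = conn.getSenders().find(s => s.track !== null && s.track.kind === 'audio');
    await sender.replaceTrack(mixedStream.getAudioTracks()[0]);
}
```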

Video conference

Working with video streams creates a heavy load on the processor, so we use a single video mixer for all calls to mix all incoming video streams and the local camera stream.

The same mixed video stream will be sent to all remote calls.
Each participant will see other participants including themselves (serving as an analogue to the audio conference’s echo).

The browser API is bizarre; we can't simply mix video streams the way we do for audio.
We need to split the video streams into sequences of pictures, draw them on a canvas using some layout,
and then recreate the mixed stream from the picture sequence.

The user can change the parameters of the video conference during its operation:
the size of each call's picture, the picture layout (linear or compact),
and the number of frames per second (FPS).
CPU usage varies significantly depending on size and FPS.
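The split-draw-recapture idea can be sketched as follows (illustrative only; the CallVideoMixer in utils.js handles layout, FPS and resizing properly):

```javascript
// Sketch: periodically draw each participant's <video> element onto a canvas,
// then capture the canvas as the single mixed video stream.
function startVideoMix(canvas, videoElements, fps, width = 160, height = 120) {
    let ctx = canvas.getContext('2d');
    let timer = setInterval(() => {
        videoElements.forEach((v, i) => {
            ctx.drawImage(v, i * width, 0, width, height); // simple linear layout
        });
    }, 1000 / fps);
    let mixedStream = canvas.captureStream(fps); // sent to all remote calls
    return { mixedStream, stop: () => clearInterval(timer) };
}
```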

Audio and video mixer

To implement audio and video conferencing, the classes CallAudioMixer and CallVideoMixer were added (see: utils.js).
They use the same "call" class as the rest of the API, with one addition:
an integer variable call.data['_line_index'] must be defined in the call instance.


class CallAudioMixer {
    // An audio mixer instance is created for each call.
    // The audio context can be taken from the audio player.
    constructor(audioCtx, call)

    // Close mixer, release all resources.
    close()

    // Get mixed audio stream
    getMix()

    // Add call to mixer.
    // Returns true if added, false if the call is already added.
    add(call)

    // Remove call from mixer.
    // Returns true if removed.
    // Returns false if the call was not added, or cannot be removed because it was set in the constructor.
    remove(call)

    // Returns string with calls list
    toString()
}


class CallVideoMixer {
    // A single instance is used for all calls.
    constructor()

    // Set canvas id.
    // Set local video element id.
    // Set remote video element id prefix (the video element index 0, 1, ... will be appended).
    setElements(canvasId, localVideoId, remoteVideoId)

    // Set the number of frames per second of the mixed stream.
    // For example: 1, 2, 5, 10, 20, 50.
    // Default: 10
    setFPS(v)

    // Set calls video layout: 'linear' or 'compact'
    // Default: 'compact'
    setLayout(v)

    // Set call video size (pixels)
    // Default w=160, h=120
    setSize(w, h)

    // Set call video size (pixels)
    // size likes: {width: '160px', height: '120px'}
    setSizes(size)

    // Returns true when mixer is started
    isOn()

    // Start mixer
    start()

    // Stop mixer, remove all calls, release resources.
    // After stop, the mixer can be restarted.
    stop()

    // Get mixed video stream for added call.
    getMix(call)

    // Add call to mixer or update its send/receive mode.
    // Returns true if send video was added (the connection sender track should be replaced).
    add(call, send = true, receive = true)

    // Remove call from mixer.
    // Returns true if removed, false if was not added.
    remove(call)

    // Resize the video layout when the number of video channels changes.
    // Used when a local video channel is added/removed.
    // Called automatically in methods: add, remove, setLayout, setSize.
    //
    // Warning: it's designed for 5 lines phone !
    // Max number of video controls is 6 (including local video)
    // If you use more lines, please modify this method.
    resize()

    // Returns string with calls list
    toString()
}

Phone conference functions

The functions work with the audio and video mixers and replace the connection sender track with the mixed stream's track,
or restore the original sender track.


// GUI switch conference mode: off, audio, audio and video
function guiConferenceSwitch()

// Start audio conference
function conferenceStartAudio()

// Stop audio conference
function conferenceStopAudio()

// Start video conference
function conferenceStartVideo()

// Stop video conference
function conferenceStopVideo()

// Add call to conference (audio or audio/video)
function conferenceAdd(call)

// Remove call from conference (audio or audio/video)
function conferenceRemove(call)

// Assign the line's camera video stream to the local video element.
// Uses the line set as an argument, or the first line that sends video.
conferenceSetLocalVideo(lineIndex = -1)

// Print conference information to the console.
function conferencePrint()
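The track replacement described above relies on the standard RTCRtpSender.replaceTrack() method. Here is a minimal sketch (switchSenderTrack is a hypothetical helper, not one of the SDK functions) of swapping a connection's sender track for a mixed track:

```javascript
// Illustrative sketch (not SDK code): replace the sender track of a peer
// connection with a mixed-stream track; return the original track so the
// caller can restore it when the conference stops.
async function switchSenderTrack(peerConnection, newTrack, kind = 'audio') {
    const sender = peerConnection.getSenders()
        .find(s => s.track && s.track.kind === kind);
    if (!sender)
        throw new Error(`no ${kind} sender found`);
    const originalTrack = sender.track;
    await sender.replaceTrack(newTrack); // no SDP renegotiation needed
    return originalTrack;                // keep it to restore later
}
```

replaceTrack() is used here because swapping a track of the same kind does not require renegotiation, so the call continues without interruption.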

Citrix desktop phone prototype

Run Citrix desktop phone prototype

Citrix provides WebRTC Redirection SDK. The SDK can be used to build Electron applications or browser single page applications (SPA). If you use Citrix desktop, see: citrix.com

Note: Browser SPA only supports audio calls.

Citrix SDK conversion

The Citrix SDK file CitrixWebRTC.js (or CitrixWebRTC.min.js) is a Node module. To use it in the browser, it must be converted.

Notes:


  npm install --global browserify
  browserify CitrixWebRTC.js --outfile browserifyCitrixWebRTC.js

JsSIP modification

The Citrix API is very close to the standard WebRTC API, but not 100% compatible. Therefore, JsSIP must be modified to use the Citrix API.

Note: We replaced a few WebRTC API functions with their Citrix analogs. Review the Python script source to see the replaced functions.

To convert AudioCodes SDK

  1. Install Python3.
  2. Run the following script:
    
    py citrix_convert.py <ac_webrtc.min.js >citrix_ac_webrtc.min.js
    
  3. For debugging, you can replace the obfuscated ac_webrtc.min.js file with the non-obfuscated files acapi.1.?.0.js and citrix_jssip.js

To build citrix_jssip.js

   py citrix_convert.py <jssip.js >citrix_jssip.js

Citrix cloud Windows configuration

To enable the Citrix SDK

You must set registry keys in the remote Citrix Windows system. Edit the Windows registry (use the regedit.exe command).

  1. Enable Citrix redirection
    
    Key Path: HKCU\Software\Citrix\HDXMediaStream
    Key Name: MSTeamsRedirSupport
    Key Type: DWORD
    Key Value: 1
    
  2. Add the Chrome program to the allow list.
    
    Key Path: HKLM\Software\WOW6432Node\Citrix\WebSocketService
    Key Name: ProcessWhitelist
    Key Type: MULTISZ
    Key Value: chrome.exe
    
  3. [Optionally] Configure Citrix logging.
    
    Key Path: Computer\HKEY_CURRENT_USER\Software\Citrix\HDXMediaStream
    Key Name: WebrpcLogLevel
    Key Type: DWORD
    Key Value: 0
    
    The log is created on the local (not remote!) computer in the directory %temp%\HdxRTCEngine
    For each RTP session a subdirectory with a timestamp is created.

    To see the log:
    
       cd %temp%\HdxRTCEngine
    
    and select the log according to its timestamp.

Configure microphone privacy settings

In Citrix Desktop Windows, open "microphone privacy setting" and enable microphone usage.

Modified simple phone prototype

The provided Citrix phone prototype is a modified simple phone prototype.
Please compare the simple phone prototype code with this version.

Note: phone.js and citrix_jssip.js do not call the Citrix API directly, but via the citrix_adapter.js wrapper.

How Citrix phone starts

  1. Phone waits for the initialization of the Citrix SDK.
    Note: There will be an error if the browser was not started in the Citrix desktop or the desktop is not configured.

  2. Phone uses the Citrix API to collect available microphones and speakers.

  3. Phone selects a microphone and speaker.
    The Citrix API does not work without selected devices.

  4. Phone attempts to use the same microphone and speaker that were selected before.
    Note: This does not always work; in such cases the settings screen opens, allowing the user to select the microphone and speaker.

  5. Phone starts the JsSIP stack and works in the same way as the other phone examples.
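Step 4 above can be sketched as a small pure function (selectPreviousDevice is a hypothetical helper, not part of the SDK): it looks for the previously used device in the freshly enumerated list and reports whether the settings screen must be opened:

```javascript
// Illustrative sketch (not SDK code): try to reuse the previously selected
// device; if it is gone, fall back to the first available device and flag
// that the settings screen should be opened for the user to confirm.
function selectPreviousDevice(devices, previousId) {
    const found = devices.find(d => d.deviceId === previousId);
    if (found)
        return { device: found, openSettings: false };
    return { device: devices[0] || null, openSettings: true };
}
```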

Dual registration phone prototype

Run dual registration phone prototype

The JsSIP stack with our CRLF keep-alive extension detects a websocket disconnection from the SBC quite quickly (10..20 seconds), after which it reconnects to the same or to another SBC.

However, some of our customers want the phone to simultaneously open 2 websocket connections with the main and backup SBC.

The problem is that the JsSIP stack can only work with one websocket.
JsSIP is well maintained and has reliable code.
Making such drastic changes to it would be difficult to test and could significantly reduce its reliability.

In any case, the customers do not want a phone that can simultaneously call through the main and backup SBC.
Only one SBC will be active (the one to which JsSIP is connected).

Without changing JsSIP, we add an optional backup SBC module (backup_sbc.js) that connects to the backup SBC and sends a SIP REGISTER sequence.
If necessary, we can swap these two websockets (the JsSIP websocket and the backup SBC websocket).

Backup SBC module description and limitations

  1. The main and backup SBC use the same account credentials (user, password, realm, domain name).
  2. The BackupSBC module does not implement the complete SIP protocol. It supports:
  3. In case of a registration failure on the backup SBC transport, no action is taken besides reconnection attempts.
  4. In case of a failure on the main transport, the backup websocket transport can be swapped to become the main websocket transport. The phone code detects this case (as a sequence of login/disconnect events) and swaps the main/backup transports (if the backup SBC is registered).
  5. When an INVITE is received on the backup websocket transport, the incomingInvite() callback is called. The phone code checks if there are open calls on the main JsSIP transport. If there are, the phone rejects the call. Otherwise it swaps the main/backup transports and receives the call on the main (JsSIP) transport.
  6. ACD is supported on the main channel only. In case of a swap, we re-subscribe to the ACD service.
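The INVITE handling rule in item 5 can be sketched as a small decision function (onBackupInvite is a hypothetical name, not part of the module):

```javascript
// Illustrative sketch (not SDK code): decide what to do with an INVITE
// arriving on the backup websocket transport (see item 5 above).
function onBackupInvite(mainOpenCalls) {
    // Reject if the main JsSIP transport already has open calls;
    // otherwise swap main/backup transports and take the call via JsSIP.
    return mainOpenCalls > 0 ? 'reject' : 'swap-and-accept';
}
```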

How phone code should be modified to use dual registration

To understand it, see the usage of the "backupSBC" object in phone.js

Device selection (microphone, camera, speaker)

It would seem nothing could be easier:
Get a list of devices (microphones, cameras, speakers).
Choose the preferred one of each type.
Then use the device ID of the selected devices.

For input devices (microphone and camera) we use the device ID in getUserMedia constraints.


     getUserMedia({ audio: { deviceId: "microphone-device-id" } });
     getUserMedia({ video: { deviceId: { ideal: "camera-device-id" } } });
     getUserMedia({ audio: { deviceId: { exact: "microphone-device-id" } } });

For output audio devices (speaker) we use the audio element setSinkId() method.

However, it's really not that easy:

When we start the phone we usually don't use getUserMedia.
(It would look strange: we are not calling anywhere, but we are already asking for permission to use the microphone and camera.)

In this case, we will get an incomplete list of devices, which will not include the devices selected in the previous browser session.
In our examples, in this case, we add the previously selected devices to the list.

This approach will work if the previously selected device is connected.

However, if it is removed, then it all depends on what constraints we use. If we use the ideal constraint:


    getUserMedia({ audio: { deviceId: { ideal: "device-id" } } });
In the absence of this device, the default device will be used.

However, if we use the exact constraint and the device is missing, there will be an exception: OverconstrainedError {name: 'OverconstrainedError', message: '', constraint: 'deviceId'}


    getUserMedia({ video: { deviceId: { exact: "camera-id" } } });

We provide the utils.js SelectDevices class, which collects and stores a list of all available devices.
The class is used in the single call and multi call phone prototypes and in the Citrix phone.
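A common pattern that follows from the above is to try the exact constraint first and fall back to the default device on OverconstrainedError. Here is a sketch (getUserMediaWithFallback is a hypothetical helper, not part of the SDK); the getUserMedia function is passed in as a parameter so the sketch stays browser-independent:

```javascript
// Illustrative sketch (not SDK code): request a specific device with an
// `exact` constraint; if it is unplugged, retry with the default device.
async function getUserMediaWithFallback(getUserMedia, deviceId) {
    try {
        return await getUserMedia({ audio: { deviceId: { exact: deviceId } } });
    } catch (e) {
        if (e.name !== 'OverconstrainedError')
            throw e; // some other problem (e.g. permission denied)
        return await getUserMedia({ audio: true }); // default microphone
    }
}
```

In a browser you would pass navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices) as the first argument.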

Device selection in different browsers and operating systems


Browser|OS       Microphone  Camera  Speaker     Note

chrome|windows   +microphone +camera +speaker
chrome|macos     +microphone +camera +speaker
chrome|linux     +microphone +camera +speaker
chrome|android   +microphone +camera +speaker   Modern Android enumerates all speakers; obsolete versions (e.g. Android 9) enumerate only the latest added.
chrome|ios       +microphone +camera -speaker   Speaker cannot be reassigned.

firefox|windows  +microphone +camera -speaker   Speaker cannot be reassigned.
firefox|macos    +microphone +camera -speaker   Speaker cannot be reassigned.
firefox|linux    +microphone +camera -speaker   Speaker cannot be reassigned.
firefox|android  +microphone +camera -speaker   Speaker cannot be reassigned.
firefox|ios      +microphone +camera -speaker   Speaker cannot be reassigned.

safari|macos     +microphone +camera -speaker   Speaker cannot be reassigned.
safari|ios       +microphone +camera -speaker   Speaker cannot be reassigned.

other|other           ?        ?       ?        We did not check some OS, e.g. Chrome OS.

Change codec priorities. Remove codecs

The RTCRtpTransceiver.setCodecPreferences() method is used to remove codecs or change codec priorities.
The method is not implemented in Firefox.

In our SDK we provide method: phone.setCodecFilter().

A codec is specified as a name, e.g. 'pcma' (case-insensitive),
a name with frequency, e.g. 'pcma/8000',
or a name, optional frequency and fmtp, e.g. 'VP9/90000#profile-id=0' or 'VP9#profile-id=0'

Change codec priorities

Make the PCMU and PCMA codecs higher priority than OPUS.


    phone.setCodecFilter({ 
        audio: { priority: ['pcmu', 'pcma'] }
    });

Modify video codec priorities.


    phone.setCodecFilter({ 
        video: { priority: ['av1', 'vp9', 'vp8'] }
    });

Removing codecs

Note: It’s not recommended!
It is better to keep all browser-provided codecs to ensure compatibility with different browsers and operating systems.

Remove ISAC and G722 audio codecs.


    phone.setCodecFilter({
        audio: {
             remove: ['isac', 'g722'],
        }
    });

All codec filters must be set in a single call.
Here is a more complex example:


    phone.setCodecFilter({
        audio: {
            remove: ['isac', 'g722'],
            priority: ['pcma', 'pcmu']
        },
        video: {
            remove: ['h264', 'vp9#profile-id=2', 'av1', 'ulpfec'],
            priority: ['vp9', 'vp8']
        } 
    });
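Under the hood, setCodecPreferences() takes a filtered, reordered copy of the codec capabilities list (e.g. from RTCRtpReceiver.getCapabilities()). The filtering itself can be sketched as a pure function (filterCodecs is a hypothetical name, not the SDK implementation):

```javascript
// Illustrative sketch (not SDK code): remove codecs and move priority
// codecs to the front of a capabilities list, by case-insensitive name.
function filterCodecs(codecs, { remove = [], priority = [] } = {}) {
    const name = c => c.mimeType.split('/')[1].toLowerCase();
    const kept = codecs.filter(c => !remove.includes(name(c)));
    // Codecs not in the priority list keep their relative order at the end
    // (Array.prototype.sort is stable in modern JavaScript engines).
    const rank = c => {
        const i = priority.indexOf(name(c));
        return i === -1 ? priority.length : i;
    };
    return kept.slice().sort((a, b) => rank(a) - rank(b));
}
```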

Call quality score

Functionality has been added to the Mediant SBC that evaluates the sound quality after a call is completed.
If this option is enabled, then after the call the web client receives an out-of-dialog SIP NOTIFY with the custom header X-VoiceQuality, which includes the voice quality.
Take a look at the phone prototype code:

incomingNotify: function (call, eventName, from, contentType, body, request) {
    ac_log(`phone>>> incoming NOTIFY "${eventName}"`, call, from, contentType, body);
    if (call === null) { // out of dialog NOTIFY
        if (eventName === 'vq') { // voice quality event
            let vq = getXVoiceQuality(request); // X-VoiceQuality header parser, defined in file utils.js
            if (vq) {
                ac_log(`NOTIFY: "X-VoiceQuality" header: score="${vq.score}", color="${vq.color}"`);
            }
        }
    }
	. . . .
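A parser like getXVoiceQuality() can be sketched as follows. This is an illustrative sketch only, assuming a header value of the form "85;color=green"; the actual header format is defined by the SBC, and the real parser lives in utils.js:

```javascript
// Illustrative sketch (not the utils.js implementation): parse a header
// value of the assumed form "85;color=green" into { score, color }.
function parseVoiceQuality(value) {
    const m = /^\s*(\d+)\s*;\s*color\s*=\s*(\w+)/.exec(value);
    return m ? { score: parseInt(m[1], 10), color: m[2] } : null;
}
```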
The score is a small integer value with a corresponding color:

Click-to-call test call

At the request of customers, a test call was added to the click-to-call phone.
If testCallEnabled: true is set in the file config.js, an additional "Test line" button will be shown.
A "test call" is a call to a special SBC user configured to automatically answer and send an audible prompt.

To trigger the Mediant SBC to evaluate the call quality, a custom header is added to the INVITE: X-AC-Action: test-voice-quality
A call duration parameter (milliseconds) is added to the request URL: ;duration=10000
After the specified interval has expired, the SBC terminates the call and sends a BYE with the header X-VoiceQuality, which includes a quality score.

Hidden page timer throttling

All modern browsers use timer throttling for hidden pages.
If timers for hidden pages work with a resolution of 1 second, this is not a problem for the phone.
We are concerned about cases when the timer works with a deviation of more than 10 seconds.

Since Chrome 88, intensive timer throttling is used.
See Chrome timer throttling
In this mode the timer resolution is 60 seconds!

Safari 14 also uses intensive timer throttling for hidden page.

Now we have to adapt the SDK to work with such "accurate" timers.

It turned out that JsSIP itself does not enter Chrome's intensive timer throttling mode, unless you use a REGISTER timeout of less than 65 seconds.

However, if you use our SDK with websocket keep-alive pings at an interval of 10..20 seconds, the phone enters Chrome's intensive timer throttling mode.

Another way to force the phone to enter this mode is to build its GUI based on a framework that uses timers intensively (e.g. for smooth scrolling).

JsSIP continues to periodically send REGISTER even when the timer works with a big delay.
However, there is a small chance that the phone will not be registered on the SBC for a short period of time (about 30 seconds) before sending the next REGISTER.

In the modified JsSIP used for the SDK, the expiration interval has been changed: the REGISTER is sent a little in advance,
taking into account a possible timer delay of 1 minute.

Phone developers are recommended to catch the page visibility event,
and not to use timers while the page is hidden.
This will prevent entering intensive timer throttling mode.


document.addEventListener('visibilitychange', () => {
    if (document.hidden) {
        stopUseTimersInGUI();
    } else {
        startUseTimersInGUI();
    }
});

For phone testing/debugging, the following code that prints timer deviations can be used:


   {
        let origSetTimeout = setTimeout;
        setTimeout = (func, delay) => {
            let key = origSetTimeout((orderTime) => {
                let calcTime = orderTime + delay;
                let delta = Date.now() - calcTime;
                console.log(new Date().toISOString().slice(11, -1) + ' TIMER execute timeout key=' + key + ' delta=' + delta);
                func()
            },
                delay, Date.now());
            console.log(new Date().toISOString().slice(11, -1) + ' TIMER: setTimeout ', delay, key);
            return key;
        }
        let origClearTimeout = clearTimeout;
        clearTimeout = (key) => {
            console.log(new Date().toISOString().slice(11, -1) + ' TIMER: clearTimeout ', key);
            origClearTimeout(key);
        }
        let origSetInterval = setInterval;
        setInterval = (func, delay) => {
            let counter = 0;
            let key = origSetInterval((orderTime) => {
                counter++;
                let calcTime = orderTime + delay * counter;
                let delta = Date.now() - calcTime;
                console.log(new Date().toISOString().slice(11, -1) + ' TIMER execute interval#' + counter + ' key=' + key + ' delta=' + delta);
                func()
            },
                delay, Date.now());
            console.log(new Date().toISOString().slice(11, -1) + ' TIMER: setInterval ', delay, key);
            return key;
        }
        let origClearInterval = clearInterval;
        clearInterval = (key) => {
            console.log(new Date().toISOString().slice(11, -1) + ' TIMER: clearInterval ', key);
            origClearInterval(key);
        }
    }

It can be seen that when the page is hidden, Chrome uses timer throttling with a timer resolution of 1 second.
A web phone running on a laptop with a hidden page enters intensive timer throttling after 5 minutes, with a timer resolution of 60 seconds.

SDK setWebSocketKeepAlive method

The method code has been rewritten.


setWebSocketKeepAlive(pingInterval, pongTimeout=true, timerThrottlingBestEffort=true, pongReport=0, pongDist=false)

Note:
To prevent Chrome from entering intensive timer throttling mode, a beep can be played every 25 seconds.
See phone_prototype: audioPlayer.playShortSound

Safari 14 uses a different timer throttling algorithm.
We have not found its description, but it appears to add random timer delays of about 0..50 seconds for a hidden page.
An increased timer interval does not help, because the page does not exit the timer throttling mode.
For Firefox and Safari the SDK detects big timer deviations (>10 seconds), prints warnings and does nothing.
If timer delays are less than 60 seconds, the modified JsSIP registrar works without problems.

Let's summarize the recommendations

Supported browsers

Desktop browsers

In the current release, the WebRTC Client SDK supports:

Mobile browsers (partial support)

For mobile phones, it is preferable to use the AudioCodes native WebRTC SDK.
However, the Web WebRTC SDK can also be used (with some limitations).

In the current release, the WebRTC Client SDK partially supports:

In Chrome for Android and iOS Safari on iPhone, the sound is played on the bottom loudspeaker instead of the top loudspeaker (which acts as the earpiece).
It cannot be reassigned, because the WebRTC API does not expose the top loudspeaker as an available output device.
(See navigator.mediaDevices.enumerateDevices())
This problem can be solved by connecting an external headset to the mobile phone.

Chrome for Android limitations

iOS Safari limitations

Notes about Apple Safari for Mac

Developer's note. JavaScript console log

To open the JavaScript console, follow the steps below:
In Chrome and Firefox press Ctrl-Shift-I to open the dev tools, and click the Console tab.
In Safari press Option(Alt)-Command-I, and click the Console tab.
Note: "Show Develop menu in menu bar" must be enabled in "Safari/Preferences/Advanced"

Chrome console log settings
To open the console: press Ctrl-Shift-I, click on the Console tab.
Click on the gear icon at the top:
Disable 'Preserve log'.
Click on 'All levels' and enable: Info, Warnings, Errors.
To close the console settings, click again on the gear icon.

Click on the 'Customize and control DevTools' icon (above the gear icon)
Click on Settings
In the 'Console' section:
Enable 'Show timestamps' and 'Group similar'
To close the settings, click on the X icon at the top

About "Verbose" log level
Using this level, problems might be found with your certificates, violations of the JavaScript standard, misuse of the browser API, etc.
If possible, it is worth correcting the problems found, but not all of them can be fixed.
Therefore, this mode is recommended for debugging, but not for general use.

Firefox console log settings
To open the console: press Ctrl-Shift-I, click on the console tab
Disable 'Persist Logs'.
Click on the 'Filter output' icon next to the text field and enable: Errors, Warnings, Logs and Info.
Click on the 'Filter output' icon again to close the panel.
Click the gear icon at the top right-hand side of the console ('Toolbox options' is shown in the tooltip). Under 'Web Console', enable 'Enable timestamp'.
To close console: press: Ctrl-Shift-I.

Safari console log settings
To open the console: press Option(Alt)-Command-I, click on the console tab
Enable 'All'.

To save console log to a file:

To close the console use: Option(Alt)-Command-I

About 'Preserve log' or 'Persist Logs' flag
If the flag is disabled, the console log will be cleared after page reloads.
Generally this will be the preferred setting since it helps keep the log size to a minimum.

Enable the flag when you want to see the log before and after reloading a page.
Remember to manually clear the console logs before testing.

Developer's note. WebRTC adapter

The adapter in the following link is an open source JavaScript solution.
See webrtc adapter description
The WebRTC adapter is not included in the SDK, and may be added by the developer.
It can be downloaded from: adapter-latest.js

Since SDK 1.9 we stopped using obsolete WebRTC API:
- event "addStream"
- RTCConnection getLocalStreams()
- RTCConnection getRemoteStreams()

These methods are not implemented in Safari 13, and are added by the webrtc adapter.
Therefore, our previous SDK releases do not work in Safari without the webrtc adapter.

Now webrtc adapter usage is optional for all supported browsers.
As you can see we do not use it in our phone examples.

Developer's note. Debugging JavaScript code

Let's use our 'check for available devices' example.
In Chrome open the dev tools, and select Sources.
Select the file phone.js
Set a breakpoint on line 48 (by clicking on the line's number).
Reload the page. Now JavaScript execution will break on line 48

Press 'step over next function call' several times in the debugger

Check the value of some variables, then resume script execution.

The same can be done in Mozilla's Firefox browser.
These types of tips and knowledge can assist you in phone debugging.

When you update the phone's code on a website,
you should clear the browser cache and reload (in Chrome/Firefox press Ctrl+F5)

Developer's note. Chrome local overrides

This option can aid general convenience in some tasks,
e.g. when you want to take one of the examples from our website and change some of its lines of code.

Without local overrides, you would have to create your own site, copy the example to it, and change something in it.

Using local overrides you can modify an example from our site directly, and the changes you've made are stored locally in your Chrome browser.
To know more about how they're used, please see: local overrides

Developer's note. About STUN protocol

The STUN protocol is critical for WebRTC; WebRTC cannot work without it.

The STUN protocol is used when your client is connected to a network using NAT.
When the client communicates with the external internet, NAT converts its IP and port to another IP and port, so the client cannot know how it is seen from the external internet (which IP and port).
To check this, the client sends a request to a STUN server.
The STUN server sends a response in which it writes which IP and port were used in the request.

Before establishing a connection, the WebRTC client must create an SDP offer that includes all possible phone connection IPs and ports, prepared by ICE gathering.
It takes all the IPs available on the computer, then sends STUN requests to the STUN and TURN servers (STUN and TURN are optional and can be specified in the configuration).

For a WebRTC phone, you may use Google's STUN server: stun.l.google.com:19302 (pay attention to the DNS, as it returns more than one IP).
Note: The Mediant SBC works without STUN servers.
Note: It is not recommended to use the free Google STUN server for production.
No one guarantees that it will work, so use it only for testing.

If a corporate firewall blocks the STUN protocol, the browser cannot receive responses from the STUN server.

You can see ICE gathering using the Wireshark sniffer with the filter 'stun': the browser sends STUN requests, and does not receive responses.

Let's use this test to see how long ICE gathering takes in your browser.

In the simple case, when the computer has one IP and STUN protocol is allowed in firewall, it takes less than a second.

However, consider another case: the computer has two IPs, one from a home internet provider and the other from our company's VPN. Suppose our organization's firewall is blocking the STUN protocol.
During ICE gathering the phone will send STUN requests from all available IPs.
In this case, ICE gathering will take 40 seconds in the Chrome browser, because during VPN IP checking, the browser will try to connect to the STUN server again and again.
Therefore, we should set the ICE gathering timeout to some reasonable time (a few seconds).
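Such a timeout can be sketched with Promise.race (a minimal sketch; waitIceGatheringComplete is a hypothetical helper, the SDK handles this internally):

```javascript
// Illustrative sketch (not SDK code): wait for ICE gathering to complete,
// but give up after timeoutMs and continue with the candidates found so far.
function waitIceGatheringComplete(pc, timeoutMs) {
    const complete = new Promise(resolve => {
        pc.onicegatheringstatechange = () => {
            if (pc.iceGatheringState === 'complete') resolve('complete');
        };
    });
    const timeout = new Promise(resolve =>
        setTimeout(() => resolve('timeout'), timeoutMs));
    return Promise.race([complete, timeout]);
}
```

In a browser, pc would be a real RTCPeerConnection; on 'timeout' the phone proceeds with whatever candidates were gathered so far.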

WebRTC also uses the STUN protocol for other purposes, such as checking if the media session is alive.

A WebRTC client sends an RTP stream to the SBC server's IP and port.
It periodically sends STUN requests to the SBC using the same IP and port used for the open RTP session.
If during the call a request is not responded to, the client decides that the media connection has been lost.

On the firewall, you'll have to allow inbound STUN requests for your SBC server as well, with a UDP port range (the same port range that the SBC uses for RTP sessions).

If your firewall allows STUN for the STUN server, but not for the SBC, your phone will open a connection, work for some time (tens of seconds) and then the call will close.

Developer's note. Bypassing the corporate firewall

The following is most important when a WebRTC phone is hosted in an organization’s intranet (private IP network), and the SBC is hosted in the internet (e.g. Amazon’s Cloud services).

The organization's firewall and NAT between the local network and the internet can cause problems in this type of set-up

 

  1. SIP communication between browser and SBC:

    SBC receives a SIP connection in a secure websocket, using port 443.
    It's the default secure websocket port that is also used for HTTPS so the port is enabled in most firewalls.

  2. STUN communication between a browser and the STUN server:
    Note: this STUN communication is optional. To disable it, set empty values for ICE server lists in the phone’s configuration.

    Before starting an outgoing call or answering an incoming one, WebRTC runs ICE gathering (to check the computer's IPs).
    The browser will send a STUN binding request to the STUN server.
    We use Google's open STUN server: stun.l.google.com

    The firewall must allow STUN communication between the browser and the external STUN server.

  3. STUN and DTLS-SRTP communication between the browser and the SBC:

    Before starting an RTP transmission, the browser checks the future RTP channel:
    It sends a STUN binding request to the SBC using the same ports it will use for RTP.
    If the SBC responds with a STUN response, the browser starts sending RTP packets. After the communication is established, the browser periodically sends STUN requests to check if the RTP channel is alive, so the firewall must allow STUN and RTP communication between your browser and SBC

 

For RTP communication the browser uses WebRTC API's UDP ports.

As this can be an issue, we'll expand on it below.

WebRTC API designers do not provide an API to set the RTP port range!

To use a WebRTC phone, you must ask your IT security to enable inbound RTP/STUN protocols for all UDP ports.

If using Chrome for business, you can set the port range via Chrome corporate policy.
(see WebRtcUdpPortRange)

Developer's note. Setting speaker and microphone in Windows

In order for the phone to work, you must properly configure the microphone and speaker for the browser.

Check speaker

Open the site: youtube.com and try to listen to any song.
If you hear it, your speaker is OK.
Otherwise open the Windows "Control Panel", select "Manage Audio Devices", click the "Playback" tab and there you will be able to select the default device for playback. Now repeat the "youtube" test.

Check microphone

Open the Windows "Control Panel", select "Manage Audio Devices", click on the "Recording" tab and select default device.
Say something into the microphone and watch the green bar for activity.

You can also select a microphone from within Chrome.
Call someone and, after the call is established, click the 'camera' icon in the browser address line, then select the proper microphone and camera devices for future use.

About the JsSIP library used in this project

The HTML5 WebRTC API supports audio and video encoding/decoding, SRTP, and SDP, but lacks the SIP protocol.
"JsSIP" is an open-source JavaScript library that provides SIP over the websocket protocol.
It internally uses the WebRTC API, and is intended for building JavaScript WebRTC phones.

The official jssip site

The JavaScript source code

JsSIP license information

Name: JsSIP
Author: José Luis Millán 
Core Developer: Iñaki Baz Castillo 
Copyright (c) 2012-2015 José Luis Millán - Versatica 


License: The MIT License

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Phone examples

Run simple phone

Run phone prototype

Run phone prototype with answering machine

Run phone prototype with OAuth2

Run phone prototype with ACD

Run multi call phone prototype

Run Citrix desktop phone prototype

Run dual registration phone prototype