WebRTC is a set of browser APIs that let developers implement a range of features — shared live editing, file transfer, text chat, games, and more — but it is most commonly used to add video or audio calls to an application. Demand for this kind of remote connection between users keeps growing across many industries.
Before the feature was available natively, developers had to rely on workarounds: browser plugins, or native applications built separately for every platform. These approaches drove up development cost, demanded expertise that few companies could afford, and made the user experience worse.
Before going over the tips, you should know the three major APIs of WebRTC. If you want a deeper understanding of how these APIs work, read the excellent introduction on HTML5 Rocks.
- Media Stream
This part is all about capturing the video and/or audio stream from the device's camera or microphone and is covered by the `MediaStream` API. Usually you will have one local and one remote stream, each of which can be attached to an HTML `<video>` element on the page.
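As a minimal sketch of that flow, the snippet below requests a camera and microphone stream and attaches it to a `<video>` element. The `buildConstraints` helper is our own convenience function, not part of the API:

```javascript
// Small helper (our own, not a WebRTC API) to assemble a constraints object.
function buildConstraints({ audio = true, video = true } = {}) {
  return { audio, video };
}

// Capture a local stream and show it in the given <video> element.
async function startLocalPreview(videoElement) {
  // getUserMedia resolves with a MediaStream, or rejects
  // (e.g. NotAllowedError when the user denies the permission prompt).
  const stream = await navigator.mediaDevices.getUserMedia(
    buildConstraints({ audio: true, video: true })
  );
  videoElement.srcObject = stream; // attach the stream to the page
  await videoElement.play();
  return stream;
}
```

The same `srcObject` assignment works for the remote stream once the peer connection delivers it.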
- Peer Connection and Signaling
This part is about transferring streams between WebRTC clients. The browser's `RTCPeerConnection` API streams media between two browsers, but to work it requires one more layer that is not part of the browser implementation of WebRTC: signaling.
Signaling is a back-end component that coordinates the connection setup, letting the clients exchange the session metadata they need to start streaming.
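Because WebRTC does not standardize the signaling transport, the sketch below only shows the caller's side of the exchange; `sendSignal` is a hypothetical function standing in for whatever transport you choose (a WebSocket, for example):

```javascript
// Wrap a signaling message in a simple JSON envelope (our own convention).
function makeSignal(type, payload) {
  return JSON.stringify({ type, payload });
}

// Caller side: create an offer and forward it, plus ICE candidates,
// through the application's signaling channel.
async function startCall(pc, sendSignal) {
  // Trickle ICE: send each candidate to the other peer as it is gathered.
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) sendSignal(makeSignal("candidate", candidate));
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal(makeSignal("offer", offer));
}
```

The callee performs the mirror-image steps: `setRemoteDescription` with the offer, then `createAnswer` and `setLocalDescription`, sending the answer back over the same channel.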
- Data Channel
The `RTCDataChannel` API transfers other types of data between the clients. It is not directly related to the calling functionality, but if you want to implement a real-time chat within a call, it may be the way to do it.
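A chat on top of a data channel might look like the sketch below; the `"chat"` label and the JSON message shape are our own conventions, assuming an already-created `RTCPeerConnection`:

```javascript
// Serialize a chat message (our own format, not part of the API).
function encodeChatMessage(author, text) {
  return JSON.stringify({ author, text, sentAt: Date.now() });
}

// Open a data channel named "chat" on an existing peer connection
// and decode incoming messages for the provided callback.
function openChatChannel(pc, onMessage) {
  const channel = pc.createDataChannel("chat");
  channel.onmessage = (event) => onMessage(JSON.parse(event.data));
  return channel;
}
```

Sending is then a matter of `channel.send(encodeChatMessage("alice", "hi"))` once the channel's `open` event fires.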
When working with WebRTC, front-end developers may be overwhelmed by the amount of knowledge and effort required to make calling work in all major browsers. Here are some tips and tricks to help you on this complex journey.
- Keep track of browser updates
Even though WebRTC is no longer new in the world of web development, it is still a work in progress in almost every major browser, so make sure you subscribe to updates and keep track of browser releases and changelogs. This affects the testing process as well: you should test not only current browsers but also development releases, such as Google Chrome Canary and Opera Beta. It is always better to fix an issue before it reaches the production environment.
- Test your updates thoroughly
When making updates to WebRTC-related code in your application, always test different browsers and operating systems in different roles. This is important because the media-stream layer of one browser/OS combination can produce data that raises an error in another browser. Switching roles matters because the “callee” browser usually dictates the video codecs and other settings, so you want to check every browser in every role.
Here’s an example of testing an application in a pair of browsers:
- Case #1: Chrome as “callee” (MS Windows), Firefox as “caller” (MS Windows)
- Case #2: Chrome as “caller” (MS Windows), Firefox as “callee” (MS Windows)
- Case #3: Chrome as “callee” (Apple Mac OS), Firefox as “caller” (Apple Mac OS)
- Case #4: Chrome as “caller” (Apple Mac OS), Firefox as “callee” (Apple Mac OS)
- Case #5: Chrome as “callee” (MS Windows), Firefox as “caller” (Apple Mac OS)
- Case #6: Chrome as “caller” (MS Windows), Firefox as “callee” (Apple Mac OS)
- Case #7: Chrome as “callee” (Apple Mac OS), Firefox as “caller” (MS Windows)
- Case #8: Chrome as “caller” (Apple Mac OS), Firefox as “callee” (MS Windows)
If you implement calls between a browser and native applications, you will also need an extensive fleet of test devices.
As you can see, the amount of testing and troubleshooting grows enormously as the list of browsers that support WebRTC expands.
- Use multiple devices and multiple networks
Remember: when you test calls on a single computer or device, the “caller” will always get access to the camera and microphone first, and two browsers cannot access the camera simultaneously. As a result, you will usually see that the “callee” has no video and no audio coming from their side.
When you test calling functionality, you usually want to exercise the whole implementation, including the signaling layer. That means making a call to a device on another network. For this purpose you can ask a colleague to make a test call, or put one of your devices on a VPN.
- Know where to find answers
When using WebRTC, you should become familiar with the browsers' issue trackers. They will help you not only find likely solutions that someone else has already come up with, but also improve WebRTC itself by filing bug reports. From personal experience I can say that the development teams are eager to help with complex issues; just make sure you give them as much relevant information as possible.
- Allow users to set their microphone and camera device
This is especially important when allowing calls on mobile devices, which tend to have more than one camera. It would be frustrating for the user if the browser defaulted to the rear camera in a video call. To get the available devices, use the browser's `mediaDevices.enumerateDevices` function, which returns a promise; when it fulfills, you get an array of objects, each describing one available media device. From this array you can build a user interface that lets users change devices.
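A sketch of that enumeration step is below; `groupInputDevices` is a plain helper of our own over the array that `enumerateDevices()` resolves with:

```javascript
// Split the enumerated devices into cameras and microphones
// (kind is "videoinput", "audioinput", or "audiooutput").
function groupInputDevices(devices) {
  return {
    cameras: devices.filter((d) => d.kind === "videoinput"),
    microphones: devices.filter((d) => d.kind === "audioinput"),
  };
}

async function listInputDevices() {
  // Note: device labels may be empty strings until the user has
  // granted a media permission at least once.
  const devices = await navigator.mediaDevices.enumerateDevices();
  return groupInputDevices(devices);
}
```

Each entry's `deviceId` is what you later pass to `getUserMedia` to select that specific camera or microphone.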
Note: Don’t forget that changing a device during a call requires some additional work:
- Run `getUserMedia` with the chosen `deviceId` in the constraints object.
- Re-attach the stream's tracks to the peer connection.
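The two steps above can be sketched as follows. This uses `RTCRtpSender.replaceTrack` to swap the outgoing video track, which avoids a full renegotiation in current browsers; `switchCamera` and `exactDeviceConstraints` are hypothetical names of our own:

```javascript
// Build constraints that request one specific camera (our own helper).
function exactDeviceConstraints(deviceId) {
  return { video: { deviceId: { exact: deviceId } }, audio: false };
}

// Step 1: get a new stream for the chosen deviceId.
// Step 2: swap the video track on the existing sender.
async function switchCamera(pc, deviceId, videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia(
    exactDeviceConstraints(deviceId)
  );
  const [newTrack] = stream.getVideoTracks();
  const sender = pc
    .getSenders()
    .find((s) => s.track && s.track.kind === "video");
  if (sender) await sender.replaceTrack(newTrack);
  if (videoElement) videoElement.srcObject = stream; // refresh local preview
  return newTrack;
}
```

Remember to stop the old track (`oldTrack.stop()`) afterwards so the previous camera's indicator light turns off.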