How to get a video/audio call working using the WebRTC framework
Recently I needed to enable video calling in an iOS app. A lot of apps do it nowadays, right?
We all use WebRTC for client-to-client video conversations in our iOS apps. WebRTC is an open-source project (libjingle_peerConnection) maintained by Google, with high-level API implementations for both iOS and Android. The framework can be built as described here, or you can download a prebuilt binary made available by one of the many open-source projects.
Before starting this tutorial, it is assumed that you know about WebRTC concepts and APIs. If you don't, you can read about them here.
This blog is basically about getting WebRTC working in a Swift-based app. There are a lot of open-source libraries with prebuilt binaries and wrappers for including WebRTC in an iOS project, but I could not find a good one written in Swift, compatible with a Swift 3 project, and properly documented. Either they are not compatible with Carthage and Swift, or they have not been updated to use the new WebRTC API. So I decided to make one of my own, and since there is little documentation for this anywhere, it is not a straightforward process.
I will explain the process of getting this working step by step.
For my work, I built the WebRTC framework as explained here.
Now we need to write a wrapper to set up an RTCClient and integrate it with our project code.
What we have to do can be summarised as follows:
In brief: Peer A, the initiator of the connection, creates an Offer and sends it to Peer B over the chosen signalling channel. Peer B receives the Offer from the signalling channel and creates an Answer, which it then sends back to Peer A along the same channel.
It's easy and simple, right!
Now let's break this down into steps and go through each one with the related code.
1. Firstly, we will initialize an `RTCPeerConnectionFactory`.
```swift
self.connectionFactory = RTCPeerConnectionFactory() // save the instance for further use
```
2. Create an `RTCPeerConnection` using that factory.
`iceServers` is a list of STUN and TURN servers provided by you. To generate an iceServer, you pass a username, password, and URL to the `RTCIceServer` init method.
If you omit it, you will only generate internal network host candidates, which is fine as long as you are testing on the same network.
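A minimal sketch of this step, assuming the GoogleWebRTC Objective-C API bridged into Swift; the server URLs and credentials below are placeholders you would replace with your own:

```swift
import WebRTC

// Configure ICE servers (placeholder URLs; use your own STUN/TURN servers).
let config = RTCConfiguration()
config.iceServers = [
    RTCIceServer(urlStrings: ["stun:stun.example.com:19302"]),
    RTCIceServer(urlStrings: ["turn:turn.example.com:3478"],
                 username: "user",
                 credential: "password")
]

let connectionConstraints = RTCMediaConstraints(
    mandatoryConstraints: nil,
    optionalConstraints: ["DtlsSrtpKeyAgreement": "true"])

// `self` is assumed to conform to RTCPeerConnectionDelegate.
let peerConnection = self.connectionFactory.peerConnection(
    with: config, constraints: connectionConstraints, delegate: self)
```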
3. Now that we have configured the connection, let's start it by adding a local stream to it.
Adding the videoTrack again and again adds delay, so the trick is to make it a static var and simply add and remove it when the connection starts or ends.
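A sketch of adding the local stream, assuming a wrapper class named `RTCClient` and the era's `avFoundationVideoSource` camera API; the stream and track IDs are arbitrary placeholders:

```swift
import WebRTC

class RTCClient {
    // Static so the capture track is created once and reused across calls.
    static var localVideoTrack: RTCVideoTrack?

    var connectionFactory: RTCPeerConnectionFactory!
    var peerConnection: RTCPeerConnection!

    func addLocalStream() {
        let stream = connectionFactory.mediaStream(withStreamId: "RTCmS")

        // Audio track
        let audioTrack = connectionFactory.audioTrack(withTrackId: "RTCaS0")
        stream.addAudioTrack(audioTrack)

        // Reuse the video track if it already exists to avoid re-capture delay.
        if RTCClient.localVideoTrack == nil {
            let source = connectionFactory.avFoundationVideoSource(
                with: RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: nil))
            RTCClient.localVideoTrack = connectionFactory.videoTrack(
                with: source, trackId: "RTCvS0")
        }
        stream.addVideoTrack(RTCClient.localVideoTrack!)

        peerConnection.add(stream)
    }
}
```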
4. After setting up the connection and the local stream, it's time to start the call.
Peer A will start the call by creating an offer.
Here are the different constraints we will be using:
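A sketch of the constraints, assuming the standard WebRTC constraint keys; the mandatory keys tell WebRTC we want to receive both audio and video in the call:

```swift
import WebRTC

// Mandatory constraints: we want to receive both audio and video.
let callConstraints = RTCMediaConstraints(
    mandatoryConstraints: ["OfferToReceiveAudio": "true",
                           "OfferToReceiveVideo": "true"],
    optionalConstraints: nil)
```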
5. Peer A should set that offer as the local description (that is, the description of the local end of the connection) and use the signalling server to transmit the offer to the intended receiver of the call.
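A sketch of creating the offer and setting it as the local description; `callConstraints` is the `RTCMediaConstraints` with the OfferToReceiveAudio/Video mandatory keys, and `sendOfferToSignallingServer` is a placeholder for your own signalling code:

```swift
peerConnection.offer(for: callConstraints) { sdp, error in
    guard let sdp = sdp, error == nil else { return }
    // Set the generated offer as the local description...
    self.peerConnection.setLocalDescription(sdp) { error in
        guard error == nil else { return }
        // ...then send the SDP string to Peer B over your signalling channel.
        self.sendOfferToSignallingServer(sdp.sdp)
    }
}
```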
6. Peer B will receive this offer from the signalling server and set it as its remote description (the description of the other end of the connection). Peer B then creates an answer and sends it back over the signalling server.
7. Peer B will set the generated SDP as its local description. Peer B now knows the configuration of both ends of the connection. The generated SDP is sent to the signalling server as the answer to the caller.
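A sketch of Peer B's side, covering both steps; `handleReceivedOffer` and `sendAnswerToSignallingServer` are placeholder names for your own signalling hooks:

```swift
// Peer B receives the offer's SDP string from the signalling server.
func handleReceivedOffer(_ remoteSdp: String) {
    let offer = RTCSessionDescription(type: .offer, sdp: remoteSdp)
    peerConnection.setRemoteDescription(offer) { error in
        guard error == nil else { return }
        // Create an answer using the same audio/video constraints as the offer.
        let constraints = RTCMediaConstraints(
            mandatoryConstraints: ["OfferToReceiveAudio": "true",
                                   "OfferToReceiveVideo": "true"],
            optionalConstraints: nil)
        self.peerConnection.answer(for: constraints) { sdp, error in
            guard let sdp = sdp, error == nil else { return }
            // Set the answer as Peer B's local description...
            self.peerConnection.setLocalDescription(sdp) { error in
                guard error == nil else { return }
                // ...and send it back to Peer A.
                self.sendAnswerToSignallingServer(sdp.sdp)
            }
        }
    }
}
```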
8. Peer A receives the answer and sets it as the remote description. It now knows the configuration of both peers, and media begins to flow as configured.
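A sketch of Peer A's side; `handleReceivedAnswer` is a placeholder name for whatever your signalling code calls when the answer arrives:

```swift
func handleReceivedAnswer(_ remoteSdp: String) {
    let answer = RTCSessionDescription(type: .answer, sdp: remoteSdp)
    peerConnection.setRemoteDescription(answer) { error in
        guard error == nil else { return }
        // Both descriptions are now set; media flows once ICE negotiation completes.
    }
}
```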
You may be wondering where we attach the remote stream to our `remoteVideoTrack` for rendering the media.
9. Get the remote video track from the added stream to render the remote media.
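A sketch using the `RTCPeerConnectionDelegate` callback that fires when the remote stream is added; `remoteVideoView` is assumed to be an `RTCEAGLVideoView` in your UI:

```swift
// RTCPeerConnectionDelegate callback: the remote peer's stream was added.
func peerConnection(_ peerConnection: RTCPeerConnection,
                    didAdd stream: RTCMediaStream) {
    DispatchQueue.main.async {
        if let remoteVideoTrack = stream.videoTracks.first {
            // Attach the remote track to a renderer to display the video.
            remoteVideoTrack.add(self.remoteVideoView)
        }
    }
}
```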
So now we have set up the SDP for both peers. There is one more important thing we have yet to handle: ICE candidates.
10. Generated ICE candidates can be received via the `RTCPeerConnectionDelegate`, and each one needs to be sent to the other peer to keep the data flowing.
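A sketch of the delegate callback; `sendCandidateToSignallingServer` is a placeholder for your own signalling code:

```swift
// RTCPeerConnectionDelegate callback: a local ICE candidate was found.
func peerConnection(_ peerConnection: RTCPeerConnection,
                    didGenerate candidate: RTCIceCandidate) {
    // Forward the candidate's fields to the other peer over signalling.
    sendCandidateToSignallingServer(sdp: candidate.sdp,
                                    sdpMLineIndex: candidate.sdpMLineIndex,
                                    sdpMid: candidate.sdpMid)
}
```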
11. The signalling server is used to send and receive ICE candidate info.
A received candidate is added to the peerConnection as follows:
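A sketch of the receiving side, rebuilding the candidate from the fields sent over signalling:

```swift
func handleReceivedCandidate(sdp: String,
                             sdpMLineIndex: Int32,
                             sdpMid: String?) {
    let candidate = RTCIceCandidate(sdp: sdp,
                                    sdpMLineIndex: sdpMLineIndex,
                                    sdpMid: sdpMid)
    // Hand the remote candidate to the connection so ICE can use it.
    peerConnection.add(candidate)
}
```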
That's it! We are done handling the WebRTC API to enable video calls in our Swift app.
The wrapper ("RTCClient") and the prebuilt WebRTC framework can be found in my repo: https://github.com/Ankit-Aggarwal/SwiftyWebRTC
It can be added using Carthage as follows:
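Assuming a standard Cartfile, the dependency line would look like:

```
github "Ankit-Aggarwal/SwiftyWebRTC"
```

Then run `carthage update` and link the built frameworks into your target as usual.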
If you have any questions or want some additions/modifications in the code or you have any feedback, please mention in the comments below.
Thanks for reading!