Reputation: 437
Over the past few years I have steadily developed a complete WebRTC-based browser phone using the SIP protocol. The main SIP toolkit is SIPJS (https://sipjs.com/), and it provides all the tools needed to make and receive calls to a SIP-based PBX of your own.
The Browser Phone project (https://github.com/InnovateAsterisk/Browser-Phone/) gives SIPJS its full functionality and UI. You can simply navigate to the phone in a browser and start using it. Everything works perfectly.
On Mobile
Apple finally allows WebRTC (getUserMedia()) in WKWebView, so it wasn't long before people started to ask how it would work on mobile. And while the UI is well suited to cellphones and tablets, the UI alone isn't enough nowadays to be a full solution.
The main consideration is that a mobile app typically has a short lifespan: you can't, or don't, leave it running in the background the way you would leave the browser running on a PC. This presents a few challenges to making the Browser Phone truly mobile friendly. iOS is going to want to shut down the app as soon as it's not the frontmost app - and rightly so. There are tools for handling that, like CallKit & Push Notifications. These allow the app to be woken up so that it can accept the call and notify the user.
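As a rough sketch of that wake-up path (not the Browser Phone's actual implementation; the provider configuration and caller handle below are placeholders), a VoIP push received through PushKit has to be reported to CallKit straight away so iOS keeps the app alive long enough to take the call:

//swift
import Foundation
import PushKit
import CallKit

// Sketch: report an incoming call to CallKit as soon as a VoIP push arrives,
// so iOS wakes the app long enough to answer and start the WebRTC session.
class VoIPPushHandler: NSObject, PKPushRegistryDelegate {
    let provider = CXProvider(configuration: CXProviderConfiguration()) // placeholder configuration

    func pushRegistry(_ registry: PKPushRegistry, didUpdate pushCredentials: PKPushCredentials, for type: PKPushType) {
        // Send pushCredentials.token to your SIP/push server (server side not shown).
    }

    func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload,
                      for type: PKPushType, completion: @escaping () -> Void) {
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic, value: "caller") // placeholder handle
        // iOS requires a new incoming call to be reported for every VoIP push.
        provider.reportNewIncomingCall(with: UUID(), update: update) { _ in completion() }
    }
}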
Just remember, this app is created by opening a UIViewController, adding a WKWebView, and navigating to the phone page. There is full communication between the app and the HTML & JavaScript, so events can be passed back and forth.
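For context, a minimal sketch of that bridge (the "appEvent" handler and AnswerCall() are illustrative names, not the real Browser Phone API):

//swift
import UIKit
import WebKit

// Sketch of the two-way app <-> page bridge.
class PhoneViewController: UIViewController, WKScriptMessageHandler {
    var webView: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true // let getUserMedia()/<video> run inline
        config.userContentController.add(self, name: "appEvent")
        webView = WKWebView(frame: view.bounds, configuration: config)
        view.addSubview(webView)
        webView.load(URLRequest(url: URL(string: "https://www.innovateasterisk.com/phone/")!))
    }

    // JS -> Swift: window.webkit.messageHandlers.appEvent.postMessage(...)
    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
        print("event from page:", message.body)
    }

    // Swift -> JS: call a function assumed to exist on the page.
    func answerCallFromNative() {
        webView.evaluateJavaScript("AnswerCall()", completionHandler: nil)
    }
}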
WKWebView & AVAudioSession Issue:
After a LOT of reading unsolved forum posts, it's clear that AVAudioSession.sharedInstance() is simply not connected to the WKWebView, or there is some undocumented connection.
The result is that if the call starts from the app and the app is sent to the background, the microphone is disabled. Clearly this isn't an option if you are on a call. Now, I can manage this limitation a little by putting the call on hold when the app is sent to the background - although this would be confusing to the user and a poor user experience.
However, the real issue is that if the app was woken by CallKit, the app never comes to the foreground (because CallKit is in front), so the microphone isn't activated in the first place - and even if you do switch to the app, it doesn't activate after that either. This is simply an unacceptable user experience.
What I found interesting is that if you simply open Safari on iOS (15.x) and navigate to the phone page: https://www.innovateasterisk.com/phone/ (without making an app in Xcode and loading it into a WKWebView), the microphone continues to work when Safari is sent to the background. So how does Safari manage to do this? Of course this doesn't and can't solve the CallKit issue, but it's still interesting that Safari can use the microphone in the background, since Safari is built on WKWebView.
(I was reading about entitlements, and that this may have to be specially granted... I'm not sure how this works?)
The next problem with AVAudioSession is that since you cannot access the session for the WKWebView, you cannot change the output of the <audio> element, so you cannot switch it from, say, speaker to earpiece, or make it use a Bluetooth device.
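For comparison, this is the kind of routing control AVAudioSession gives you for audio the app owns natively - a sketch only, and per the above it does not appear to reach the <audio> element inside the WKWebView:

//swift
import AVFoundation

// Native routing sketch: works for audio the app itself owns (e.g. a native WebRTC SDK),
// but per the question it does not appear to affect audio rendered inside the WKWebView.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth]) // expose Bluetooth HFP routes
try? session.setActive(true)

try? session.overrideOutputAudioPort(.speaker) // force the loudspeaker
try? session.overrideOutputAudioPort(.none)    // back to the default earpiece/receiver route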
It simply wouldn't be feasible to redevelop the entire application using an outdated WebRTC SDK (Google no longer maintains the WebRTC iOS SDK), and then build my own Swift SIP stack like SIPJS and end up with two sets of code to maintain... so my main questions are:
Upvotes: 3
Views: 2303
Reputation: 31
I found that if the app is in a not-running or inactive state, and no audio configuration is done in the CallKit ProviderDelegate, CallKit will not go through didActivate, and the microphone and audio of the webView can function normally. However, when the app is in an active state, after CallKit connects it will go through didActivate, which causes the webView's microphone and audio output to stop functioning properly.
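To illustrate what this answer describes (a sketch of the approach, not a verified fix), the provider delegate below deliberately does no AVAudioSession configuration in the CallKit callbacks:

//swift
import CallKit
import AVFoundation

// Sketch: no AVAudioSession configuration inside the CallKit callbacks,
// so the provider never takes the audio away from the web view.
class ProviderDelegate: NSObject, CXProviderDelegate {
    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Tell the page (e.g. via evaluateJavaScript) to answer the WebRTC call here.
        // Deliberately NOT calling AVAudioSession.sharedInstance().setCategory/setActive.
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Per the answer above, this is the callback that interferes with the web view's
        // microphone/audio once native audio has been configured.
    }

    func providerDidReset(_ provider: CXProvider) { }
}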
Upvotes: 1
Reputation: 730
It might not be helpful, but I fought with this issue for years on a WKWebView-based app.
Due to changes Apple made, WKWebView runs on a different thread than CallKit. That's what I figured out after a lot of work debugging native code and using Swift to control CallKit.
Long story short: the AVAudioSession on the main Swift app thread, where CallKit was being managed, is - being on a separate thread - independent from the AVAudioSession in the WebRTC code running in the WKWebView.
As a result the two fight with each other. Further, the CallKit authorization of the microphone, speakers, etc. for the main-thread Swift code has no relationship to, and does not propagate to, the AVAudioSession in the WKWebView's WebRTC.
Roll back a little in time and this was never a problem, because the WebView used to be on the same thread.
So AFAIK WebRTC in a WKWebView cannot be integrated with CallKit, since they are now segregated onto different AVAudioSessions, and there used to be no way to access the WKWebView thread's AVAudioSession.
But the iOS 16.4 release notes briefly mention "Support for a subset of the AudioSession Web API", for which there is a brief explanation in this WKWebView AudioSession explainer.
And in iOS 17+ you can verify that window.AudioSession and navigator.audioSession both exist.
So, perhaps there is finally some movement on this problem.
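One quick way to probe for that from the native side (a sketch; it only checks whether the property is exposed to the page):

//swift
import WebKit

// Sketch: probe the page for the new AudioSession Web API (iOS 17+ per the above).
func checkAudioSessionSupport(in webView: WKWebView) {
    webView.evaluateJavaScript("typeof navigator.audioSession") { result, _ in
        let kind = (result as? String) ?? "undefined"
        print("navigator.audioSession:", kind) // "object" when the API is exposed
    }
}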
I have filed WebKit radar bugs regarding this. If you care about or have this issue, then I encourage you to also file bugs.
See also: the WebKit sources, and "How to integrate CallKit with WebRTC in WKWebView?"
- WKWebView runs on a separate thread, which means separate memory is allocated to WKWebView. If WKWebView exceeds its memory, it only crashes the web view, not the app.
- WKWebView uses the latest JavaScript engine and thus renders pages faster than the older UIWebView JavaScript engine.
Upvotes: 3
Reputation: 21
For 1) Maybe someone else is also following this approach and can add some insight / correct wrong assumptions: the audio in a WebRTC site is represented as a MediaStream. Maybe it is possible to get that stream out of the WKWebView and play it back within the app somehow? This code should pass on some buffers, but they are empty when they arrive over in Swift:
//javascript
...
// Record the WebRTC MediaStream and forward each chunk to the native side.
someRecorder = new MediaRecorder(audioStream);
someRecorder.ondataavailable = async (e) => {
    // Post the recorded chunk to the Swift message handler.
    window.webkit.messageHandlers.callBackMethod.postMessage(await e.data.arrayBuffer());
};
someRecorder.start(1000); // emit a chunk every 1000 ms
and then in Swift receive it like:
//swift
import UIKit
import WebKit

class ViewController: UIViewController, WKScriptMessageHandler {
    ...
    override func viewDidLoad() {
        super.viewDidLoad()
        let config = WKWebViewConfiguration()
        config.userContentController = WKUserContentController()
        config.userContentController.add(self, name: "callBackMethod")
        let webView = WKWebView(frame: CGRect(x: 0, y: 0, width: 10, height: 10), configuration: config)
        ...
    }

    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
        addToPlayingAudioBuffer(message.body) // own helper (not shown) that queues the bytes for playback
        // print(message.body) gives the output "{}" every 1000ms.
    }
}
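A possible explanation for the empty payload: WKScriptMessage.body only carries property-list types (strings, numbers, dates, arrays, dictionaries), so an ArrayBuffer posted from JavaScript arrives as an empty object. A workaround sketch, assuming the page is changed to convert each chunk to a base64 string before posting it:

//swift
import WebKit

// Assumes the page now posts a base64 string instead of the raw ArrayBuffer.
func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
    guard let base64 = message.body as? String,
          let chunk = Data(base64Encoded: base64) else { return }
    print("received \(chunk.count) bytes of recorded audio")
    // addToPlayingAudioBuffer(chunk) - feed the bytes to whatever playback path is used
}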
Upvotes: 2