Audio Sessions Management
Handling Audio Ducking
For the best user experience, certain overlays may require the main video's audio level to be lowered. For instance, the Twitter overlay may include videos in the feed that play with sound; if the main audio level is not reduced accordingly, the Twitter video's sound will overlap with the main audio, resulting in a poor user experience. If the host app implements the SDK's delegate methods for controlling the application's audio session, the SDK smoothly lowers the volume when needed.
Two delegate methods are responsible for handling audio ducking:
SLROverlayDelegate.requestAudioDucking()
SLROverlayDelegate.disableAudioDucking()
Examples
import AVFoundation

class SLRVideoPlayer: SLROverlayDelegate {
  ...
  private var player: AVPlayer!
  // serializes access to the volume and session state
  private let lock = NSLock()
  // current output volume; assumed static here so it is shared across players
  private static var streamVolume: Float = 1
  // notifies the UI about volume changes
  private var onPlayerVolumeChange: (() -> Void)?
  // ducking state: reduction factor and a stack of pre-duck volumes
  private let volumeReduceRate: Float = 0.1
  private var playerVolumeOriginal: [Float] = []
  // tracks requests for audio sessions
  private var kTotalSessions: Int { return kGenericSessions + kVoiceSessions }
  private var kGenericSessions: Int = 0
  private var kVoiceSessions: Int = 0
...
  public func requestAudioDucking() {
    lock.lock()
    defer { lock.unlock() }
    guard let player = player else { return }
    // ducking is already active if the current volume equals the reduced value
    let isDuckingActive = playerVolumeOriginal.last.map { Self.streamVolume == volumeReduceRate * $0 } ?? false
    // remember the current volume so it can be restored later
    playerVolumeOriginal.append(Self.streamVolume)
    if !isDuckingActive {
      Self.streamVolume = volumeReduceRate * Self.streamVolume
    }
    onPlayerVolumeChange?()
    if player.timeControlStatus == .playing {
      player.volume = Self.streamVolume
    }
  }
  public func disableAudioDucking() {
    lock.lock()
    defer { lock.unlock() }
    guard !playerVolumeOriginal.isEmpty, let player = player else { return }
    // restore the most recently saved volume
    Self.streamVolume = playerVolumeOriginal.popLast() ?? 1
    onPlayerVolumeChange?()
    if player.timeControlStatus == .playing {
      player.volume = Self.streamVolume
    }
  }
...
}
Note that pre-duck volumes are kept on a stack (playerVolumeOriginal), so nested ducking requests restore the volume correctly in reverse order.
Handling Audio Sessions
Audio notifications are used by the SDK in various ways, for example in Watch Party, messaging, and other features. iOS limits each application to a single audio session, so the host app and the SDK share the same one.
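Because there is only one session to share, the host app should configure it in a way that is compatible with playback before the SDK starts using it. The snippet below is a minimal sketch of such a setup; the .playback category and .moviePlayback mode are assumptions for a typical video app, not SDK requirements:
import AVFoundation

// Minimal sketch: configure the app-wide shared audio session for video
// playback. The category and mode here are illustrative assumptions.
func configurePlaybackAudioSession() {
  let session = AVAudioSession.sharedInstance()
  do {
    // .playback keeps audio alive with the silent switch on;
    // .moviePlayback optimizes processing for video content.
    try session.setCategory(.playback, mode: .moviePlayback)
    try session.setActive(true)
  } catch {
    print("[AudioSession] Failed to configure: \(error)")
  }
}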
To minimize interference between the host app’s and SDK’s audio modes, the following delegate methods have to be implemented:
SLROverlayDelegate.prepareAudioSession(...)
SLROverlayDelegate.disableAudioSession(...)
Here is one possible implementation:
class SLRVideoPlayer: SLROverlayDelegate {
  // tracks requests for audio sessions
  private var kTotalSessions: Int { return kGenericSessions + kVoiceSessions }
  private var kGenericSessions: Int = 0
  private var kVoiceSessions: Int = 0
...
  public func disableAudioSession(for type: SLRAudioSessionType) {
    lock.lock()
    defer { lock.unlock() }
    switch type {
    case .voice: kVoiceSessions -= 1
    case .generic: kGenericSessions -= 1
    }
    // no sessions left at all - deactivate the audio session
    // (disableAudioSession() and enableGenericAudioSession(reactivate:)
    // are host-app helpers not shown here)
    if kTotalSessions == 0 {
      disableAudioSession()
      return
    }
    // a voice session is still running - leave it in charge
    if kVoiceSessions > 0 {
      return
    }
    // the last voice session ended but generic sessions remain -
    // hand the audio session back to generic playback
    if type == .voice {
      enableGenericAudioSession(reactivate: false)
    }
  }
  public func prepareAudioSession(for type: SLRAudioSessionType) {
    lock.lock()
    defer {
      print("[AudioSession] prepare kVoiceSessions: \(kVoiceSessions), kGenericSessions: \(kGenericSessions)")
      lock.unlock()
    }
    switch type {
    case .voice:
      kVoiceSessions += 1
      if kVoiceSessions == 1 {
        // No specific logic here for now (WebRTC handles this)
      }
    case .generic:
      kGenericSessions += 1
      // activate generic audio only when no voice session owns the session
      if kGenericSessions > 0, kVoiceSessions == 0 {
        // reactivate the session only for the first generic request
        let reactivate = kGenericSessions == 1
        do {
          try StreamLayer.prepareSessionForGeneralAudio(reactivate: reactivate)
        } catch {
          print("[AudioSession] Error: \(error)")
        }
      }
    }
  }
...
}
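For illustration, here is a hypothetical call sequence showing how the counters drive the session state. Direct instantiation of SLRVideoPlayer and the call ordering are assumptions for this sketch; in a real app the SDK invokes these delegate methods itself:
// Hypothetical sequence - in practice the SDK calls these methods.
let videoPlayer = SLRVideoPlayer()

// An overlay starts playing feed audio: first generic session,
// so the generic audio session is activated (reactivate == true).
videoPlayer.prepareAudioSession(for: .generic)

// A Watch Party voice chat starts: WebRTC takes over the session.
videoPlayer.prepareAudioSession(for: .voice)

// Voice chat ends, but a generic session remains,
// so generic audio is re-enabled without reactivation.
videoPlayer.disableAudioSession(for: .voice)

// The overlay stops: no sessions left, the audio session is deactivated.
videoPlayer.disableAudioSession(for: .generic)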
