Reputation: 13
I have a simple soundboard app with a few different categories of sounds. I am using the AudioToolbox.framework (all of my sound files are under 10 seconds and are .wav files), but I am confused about how to make my app use the regular media "Volume," not the "Ringer" volume.
If my device is set to "Silent," no sound plays from my buttons, even if the device volume is turned up. However, as soon as I turn on the "Ringer" (the switch located on the side of my device), sound plays at whatever level the ringer volume is set to.
I searched around online and found some sources that said to switch my AVAudioSession to AVAudioSessionCategoryPlayback, so I pasted this
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *setCategoryError = nil;
BOOL success = [audioSession setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError];
if (!success) { /* handle the error condition */ }
NSError *activationError = nil;
success = [audioSession setActive:YES error:&activationError];
if (!success) { /* handle the error condition */ }
into my viewDidLoad. However, the same problem occurs. I have found other suggestions online but the explanations leave me confused about what I'm supposed to do. I'm relatively new to Objective-C and coding, so please be specific and clear in your explanations if you know the answer.
I greatly appreciate any help you can provide.
Edit One I followed the suggestion of Pau Senabre, but I did not notice any changes. The audio still didn't play while the ringer was set to silent. Current code with Pau's changes:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *sessionError = nil;
BOOL success = [audioSession setCategory:AVAudioSessionCategoryPlayback
                             withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                                   error:&sessionError];
if (!success) {
    NSLog(@"Error setting category Audio Session: %@", [sessionError localizedDescription]);
}
NSError *activationError = nil;
success = [audioSession setActive:YES error:&activationError];
if (!success) { /* handle the error condition */ }
Any other suggestions?
Edit Two I followed the suggestion of Daniel Storm, who proposed setting my AVAudioSession category to Playback in a slightly different way than before:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: nil];
but this still left my sound muted while the ringer switch was set to silent.
What could I be doing wrong?
Here is how I am playing my audio:
NSURL *DariusSelectSound = [NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"DariusSelect" ofType:@"wav"]];
AudioServicesCreateSystemSoundID((__bridge CFURLRef)DariusSelectSound, &DariusSelectAudio);
When I press a button, I play the sound with the code:
AudioServicesPlaySystemSound(DariusSelectAudio);
My volume is turned up to about half way, and the ringer is at roughly the same level. However, adjusting my regular volume has no effect; the audio gets louder or quieter only when I adjust the ringer volume.
Edit Three The problem is solved, and there are multiple solutions. The recommendations from Pau Senabre and Daniel Storm both work. My problem was that I was using AudioServices when I needed AVAudioPlayer. Sorry for making such a simple mistake, and thank you greatly for your help!
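For anyone with the same problem, a minimal sketch of the AVAudioPlayer approach might look like this (the DariusSelect file name comes from the question; the dariusSelectPlayer property name is my own illustrative choice, so adapt it to your class):

```objectivec
#import <AVFoundation/AVFoundation.h>

// In viewDidLoad, after setting the session category to AVAudioSessionCategoryPlayback:
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"DariusSelect" withExtension:@"wav"];
NSError *playerError = nil;
// self.dariusSelectPlayer is a hypothetical strong AVAudioPlayer property you declare yourself
self.dariusSelectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&playerError];
[self.dariusSelectPlayer prepareToPlay];

// In the button action, instead of AudioServicesPlaySystemSound:
[self.dariusSelectPlayer play];
```

Unlike System Sound Services, AVAudioPlayer playback is governed by the session category, so with Playback set it keeps playing while the ringer switch is on silent and follows the media volume.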
Upvotes: 1
Views: 973
Reputation: 16301
If I understand correctly, you want to override the system speaker volume setting, which at the end of the chain is clipped to the ringer volume level. Please read about setInputGain of an AVAudioSession:
/* A value defined over the range [0.0, 1.0], with 0.0 corresponding to the lowest analog
gain setting and 1.0 corresponding to the highest analog gain setting. Attempting to set values
outside of the defined range will result in the value being "clamped" to a valid input. This is
a global input gain setting that applies to the current input source for the entire system.
When no applications are using the input gain control, the system will restore the default input
gain setting for the input source. Note that some audio accessories, such as USB devices, may
not have a default value. This property is only valid if inputGainSettable
is true. Note: inputGain is key-value observable */
- (BOOL)setInputGain:(float)gain error:(NSError **)outError NS_AVAILABLE_IOS(6_0);
@property(readonly) float inputGain NS_AVAILABLE_IOS(6_0); /* value in range [0.0, 1.0] */
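As a caveat, here is a short, hedged sketch of using this API: note that input gain adjusts the recording (microphone) level, not the playback volume, so on its own it will not change what the speaker outputs:

```objectivec
AVAudioSession *session = [AVAudioSession sharedInstance];
if (session.inputGainSettable) {   // only some input routes allow changing the gain
    NSError *gainError = nil;
    // 0.0 = lowest, 1.0 = highest analog input gain; out-of-range values are clamped
    if (![session setInputGain:0.8f error:&gainError]) {
        NSLog(@"setInputGain failed: %@", gainError.localizedDescription);
    }
}
```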
That said, there are several techniques to change the volume of your sound, depending on whether you are using AVAudioSession, AudioUnit, OpenAL, etc.
Supposing that you are using AudioUnit, with a mixer like this in your AUGraph:
// MIXER unit ASBD
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;
and
///
/// NODE 6: MIXER NODE
///
err = AUGraphAddNode (processingGraph, &MixerUnitDescription, &mixerNode );
if (err) { NSLog(@"mixerNode err = %d", (int)err); return NO; }
you would then do:
// sets the overall mixer output volume
- (void)setOutputVolume:(AudioUnitParameterValue)value
{
OSStatus result;
result = AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, value, 0);
if (result) {
NSLog(@"AudioUnitSetParameter kMultiChannelMixerParam_Volume Output result %d %08X %4.4s\n", (int)result, (unsigned int)result, (char*)&result);
return;
}
}
Here I assume you have at least an AUGraph like this:
RemoteIO with Audio Unit
-------------------------
| i o |
-- BUS 1 -- from mic --> | n REMOTE I/O u | -- BUS 1 -- to app -->
| p AUDIO t |
-- BUS 0 -- from app --> | u UNIT p | -- BUS 0 -- to speaker -->
| t u |
| t |
-------------------------
Of course, you want to take care of the input as well if you need it in the chain:
// sets the input volume for a specific bus
- (void)setInputVolume:(UInt32)inputBus value:(AudioUnitParameterValue)value
{
micGainLevel = value;
OSStatus result;
result = AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, inputBus, value, 0);
if (result) {
NSLog(@"AudioUnitSetParameter kMultiChannelMixerParam_Volume Input result %d %08X %4.4s\n", (int)result, (unsigned int)result, (char*)&result);
}
}
That said, if you have the WAV file you can pass it through a filter and set the gain sample-by-sample:
WavInFile *inFile = new WavInFile( cString );

// get some audio file channel info
float samplerate = (float)inFile->getSampleRate();
int nChannels = (int)inFile->getNumChannels();
float nSamples = (float)inFile->getNumSamples();
float duration = (float)inFile->getLengthMS() / 1000.0f;

while (inFile->eof() == 0) {
    // Read a chunk of samples from the input file
    int num = inFile->read(shortBuffer, BUFF_SIZE);
    int samples = num / nChannels;
    seconds = (float)inFile->getElapsedMS() / 1000.0f;

    float currentFrequency = 0.0f;
    SInt16ToDouble(shortBuffer, doubleBuffer, samples);
    currentFrequency = dywapitch_computepitch(&pitchtrackerFile, doubleBuffer, 0, samples);
    currentFrequency = samplerate / 44100.0f * currentFrequency;

    // at this point you can change things like the pitch or the amplitude, i.e. the volume (see below)
} // eof
Here is the amplifier:
short amplifyPCMSInt16(int value, int dbGain, bool clampValue) {
    /* To increase the gain of a sample by X dB, multiply the PCM value by
     * pow(2.0, X/6.014), i.e. gain +6dB means doubling the value of the sample, -6dB means halving it.
     */
    int newValue = (int)(pow(2.0, ((double)dbGain) / 6.014) * value);
    if (clampValue) {
        if (newValue > 32767)
            newValue = 32767;
        else if (newValue < -32768)
            newValue = -32768;
    }
    return (short)newValue;
}
Of course, this means reading and modifying the file until EOF, so typically you would only do this if you actually need the result to be stored; otherwise you are better off setting up an AURenderCallbackStruct to do this in real-time with a latency on the order of milliseconds:
AURenderCallbackStruct lineInrCallbackStruct = {};
lineInrCallbackStruct.inputProc = &micLineInCallback;
lineInrCallbackStruct.inputProcRefCon = (void*)self;
err = AudioUnitSetProperty(
vfxUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&lineInrCallbackStruct,
sizeof(lineInrCallbackStruct));
At this point you have control of your audio in real-time, and inside this callback
static OSStatus micLineInCallback (void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
    // process ioData here
    return noErr;
}
you can multiply your buffer's left and right channel samples:
SInt16 *sampleBufferLeft = THIS.conversionBufferLeft;
SInt16 *sampleBufferRight = THIS.conversionBufferRight;
SInt16 *sampleBuffer;
double *doubleBuffer = THIS.doubleBufferMono;
// start the actual processing
inSamplesLeft = (SInt32 *) ioData->mBuffers[0].mData; // left channel
fixedPointToSInt16(inSamplesLeft, sampleBufferLeft, inNumberFrames);
if(isStereo) {
inSamplesRight = (SInt32 *) ioData->mBuffers[1].mData; // right channel
fixedPointToSInt16(inSamplesRight, sampleBufferRight, inNumberFrames);
for( i = 0; i < inNumberFrames; i++ ) { // combine left and right channels into left
sampleBufferLeft[i] = (SInt16) ((.5 * (float) sampleBufferLeft[i]) + (.5 * (float) sampleBufferRight[i]));
}
}
There is a bit of work to do (putting it all together), but once you have the L-R channels, just multiply them as shown in the amplifyPCMSInt16 function.
Upvotes: 1
Reputation: 18898
Set your AVAudioSession category to Playback. You'd want to put this in your viewDidLoad.
// Play sound when silent mode on
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: nil];
AVAudioSession Class Reference
Upvotes: 0
Reputation: 4209
Try setting the category of your AVAudioSession to AVAudioSessionCategoryPlayback with the AVAudioSessionCategoryOptionDefaultToSpeaker option:
NSError *sessionError = NULL;
BOOL success = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
error:&sessionError];
if(!success) {
NSLog(@"Error setting category Audio Session: %@", [sessionError localizedDescription]);
}
You could also try adding AudioUnitSetProperty with an AudioComponentDescription & AudioStreamBasicDescription:
// Create Audio Unit
AudioComponentDescription cd = {
.componentManufacturer = kAudioUnitManufacturer_Apple,
.componentType = kAudioUnitType_Output,
.componentSubType = kAudioUnitSubType_RemoteIO,
.componentFlags = 0,
.componentFlagsMask = 0
};
AudioComponent component = AudioComponentFindNext(NULL, &cd);
OSStatus result = AudioComponentInstanceNew(component, &_ioUnit);
NSCAssert2(
result == noErr,
@"AudioComponentInstanceNew failed. Error code: %d '%.4s'",
(int)result,
(const char *)(&result));
AudioStreamBasicDescription asbd = {
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags =
kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagIsPacked |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsNonInterleaved,
.mChannelsPerFrame = 2,
.mBytesPerPacket = sizeof(SInt16),
.mFramesPerPacket = 1,
.mBytesPerFrame = sizeof(SInt16),
.mBitsPerChannel = 8 * sizeof(SInt16),
.mSampleRate = 44100 // set this to your actual hardware sample rate
};
result = AudioUnitSetProperty(
_ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&asbd,
sizeof(asbd));
NSCAssert2(
result == noErr,
@"Set Stream Format failed. Error code: %d '%.4s'",
(int)result,
(const char *)(&result));
// Set Audio Callback
AURenderCallbackStruct ioRemoteInput;
ioRemoteInput.inputProc = audioCallback;
result = AudioUnitSetProperty(
_ioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&ioRemoteInput,
sizeof(ioRemoteInput));
NSCAssert2(
result == noErr,
@"Could not set Render Callback. Error code: %d '%.4s'",
(int)result,
(const char *)(&result));
// Initialize Audio Unit
result = AudioUnitInitialize(_ioUnit);
NSCAssert2(
result == noErr,
@"Initializing Audio Unit failed. Error code: %d '%.4s'",
(int)result,
(const char *)(&result));
Upvotes: 0