Reputation: 703
For the last few days I have been working on an iPhone application which needs to record the user's audio and save it with background music in it. In simple words: mix two audio files into a third audio file. I tried to do it using the AudioToolbox API but had no success. Can anyone suggest the right direction to go for this?
Thanks,
Upvotes: 1
Views: 1713
Reputation: 562
You can do this with AVFoundation, by adding each file as a track of an AVMutableComposition and exporting the mix:
- (BOOL)combineVoices1
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];

    // Create an AVMutableComposition. This object will hold our multiple
    // AVMutableCompositionTracks, one per source file.
    AVMutableComposition *composition = [[AVMutableComposition alloc] init];

    // First voice track ("test1"), slightly lowered.
    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    [compositionAudioTrack setPreferredVolume:0.8];
    NSString *soundOne = [[NSBundle mainBundle] pathForResource:@"test1" ofType:@"caf"];
    NSURL *url = [NSURL fileURLWithPath:soundOne];
    AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
    AVAssetTrack *clipAudioTrack = [[avAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset.duration) ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];

    // Second voice track ("test"), quieter.
    AVMutableCompositionTrack *compositionAudioTrack1 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    [compositionAudioTrack1 setPreferredVolume:0.3];
    NSString *soundOne1 = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"caf"];
    NSURL *url1 = [NSURL fileURLWithPath:soundOne1];
    AVAsset *avAsset1 = [AVURLAsset URLAssetWithURL:url1 options:nil];
    AVAssetTrack *clipAudioTrack1 = [[avAsset1 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    [compositionAudioTrack1 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset1.duration) ofTrack:clipAudioTrack1 atTime:kCMTimeZero error:nil];

    // Background music track ("song") at full volume.
    AVMutableCompositionTrack *compositionAudioTrack2 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    [compositionAudioTrack2 setPreferredVolume:1.0];
    NSString *soundOne2 = [[NSBundle mainBundle] pathForResource:@"song" ofType:@"caf"];
    NSURL *url2 = [NSURL fileURLWithPath:soundOne2];
    AVAsset *avAsset2 = [AVURLAsset URLAssetWithURL:url2 options:nil];
    AVAssetTrack *clipAudioTrack2 = [[avAsset2 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    [compositionAudioTrack2 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset2.duration) ofTrack:clipAudioTrack2 atTime:kCMTimeZero error:nil];

    // Create the export session that will mix the composition down to M4A.
    AVAssetExportSession *exportSession = [AVAssetExportSession
                                           exportSessionWithAsset:composition
                                           presetName:AVAssetExportPresetAppleM4A];
    if (nil == exportSession) return NO;

    // Configure the export session output with all our parameters.
    NSString *soundOneNew = [documentsDirectory stringByAppendingPathComponent:@"combined10.m4a"];
    exportSession.outputURL = [NSURL fileURLWithPath:soundOneNew]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A;             // output file type

    // Perform the export asynchronously.
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusCompleted");
        } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            // A failure may happen because of an event out of your control,
            // for example an interruption like a phone call coming in.
            // Make sure to handle this case appropriately.
            NSLog(@"AVAssetExportSessionStatusFailed");
        } else {
            NSLog(@"Export Session Status: %d", (int)exportSession.status);
        }
    }];
    return YES;
}
Upvotes: 2
Reputation: 40430
You won't find any pre-rolled tool for doing this, but it's not too hard once you get the recording bit down. After that, you will need to mix the file with the background music, which can be done simply by adding the raw samples together.
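As a rough sketch of that mixing step, assuming both tracks have already been decoded to 16-bit PCM at the same sample rate and channel count (the function and buffer names here are just for illustration, not from any library):

    #import <Foundation/Foundation.h> // SInt16/SInt32 come in via CoreFoundation

    // Mixes two 16-bit PCM buffers sample-by-sample, clamping the sum so it
    // doesn't wrap around and distort.
    static void MixPCM16(const SInt16 *voice, const SInt16 *music,
                         SInt16 *out, NSUInteger sampleCount)
    {
        for (NSUInteger i = 0; i < sampleCount; i++) {
            SInt32 sum = (SInt32)voice[i] + (SInt32)music[i];
            if (sum > INT16_MAX) sum = INT16_MAX;
            if (sum < INT16_MIN) sum = INT16_MIN;
            out[i] = (SInt16)sum;
        }
    }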
For the mixing to work, you will need the audio as raw PCM, which means decoding the background music from whatever compressed format you are using so you can manipulate the samples directly. It's been a long time since I did any iOS development, so I don't know whether the iOS SDK can do this directly or whether you will need to bundle libffmpeg (or something similar) with your code. But IIRC, the iPhone does support decoding compressed audio to PCM, just not encoding it (more on that in a second).
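For what it's worth, one route that should avoid bundling libffmpeg is AudioToolbox's ExtAudioFile API, which can decode a compressed file to linear PCM as you read it. A minimal sketch, with error handling omitted and the function name, buffer size, and format values chosen just for the example:

    #import <AudioToolbox/AudioToolbox.h>

    // Reads a compressed audio file and converts it to 16-bit mono PCM on the fly.
    static void DecodeToPCM(NSURL *sourceURL)
    {
        ExtAudioFileRef audioFile = NULL;
        // Drop the __bridge cast if you are not using ARC.
        ExtAudioFileOpenURL((__bridge CFURLRef)sourceURL, &audioFile);

        // Ask ExtAudioFile to hand us packed 16-bit signed integer PCM, mono, 44.1 kHz.
        AudioStreamBasicDescription clientFormat = {0};
        clientFormat.mSampleRate       = 44100.0;
        clientFormat.mFormatID         = kAudioFormatLinearPCM;
        clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        clientFormat.mBitsPerChannel   = 16;
        clientFormat.mChannelsPerFrame = 1;
        clientFormat.mBytesPerFrame    = 2;
        clientFormat.mFramesPerPacket  = 1;
        clientFormat.mBytesPerPacket   = 2;
        ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(clientFormat), &clientFormat);

        // Pull decoded samples in chunks until the file is exhausted.
        SInt16 samples[4096];
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mData = samples;

        UInt32 framesRead;
        do {
            framesRead = 4096; // ask for up to 4096 frames per read
            bufferList.mBuffers[0].mDataByteSize = sizeof(samples);
            ExtAudioFileRead(audioFile, &framesRead, &bufferList);
            // samples[0..framesRead-1] now hold decoded PCM; mix or filter them here.
        } while (framesRead > 0);

        ExtAudioFileDispose(audioFile);
    }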
Alternatively, you can ship raw PCM files with your app, compressed as zip (not mp3/aac/ogg/whatever), and unzip them at runtime to get at the sample data directly.
Once you get the final mixdown, you can stream it directly back through the playback device as raw PCM. If you need to save or export it, you'll need to look again into a decoding/encoding library.
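If you do need to save the result, one way to sidestep the encoder question is to write the mixdown out as an uncompressed CAF file, which AudioToolbox can write and the system can play back. Again just a sketch, with placeholder names and a hard-coded mono 44.1 kHz format chosen for the example:

    #import <AudioToolbox/AudioToolbox.h>

    // Writes a buffer of 16-bit mono PCM frames to an uncompressed CAF file.
    static void WritePCMToCAF(NSURL *destURL, const SInt16 *samples, UInt32 frameCount)
    {
        AudioStreamBasicDescription format = {0};
        format.mSampleRate       = 44100.0;
        format.mFormatID         = kAudioFormatLinearPCM;
        format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        format.mBitsPerChannel   = 16;
        format.mChannelsPerFrame = 1;
        format.mBytesPerFrame    = 2;
        format.mFramesPerPacket  = 1;
        format.mBytesPerPacket   = 2;

        ExtAudioFileRef outFile = NULL;
        // Drop the __bridge cast if you are not using ARC.
        ExtAudioFileCreateWithURL((__bridge CFURLRef)destURL, kAudioFileCAFType,
                                  &format, NULL, kAudioFileFlags_EraseFile, &outFile);

        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mDataByteSize = frameCount * sizeof(SInt16);
        bufferList.mBuffers[0].mData = (void *)samples;

        ExtAudioFileWrite(outFile, frameCount, &bufferList);
        ExtAudioFileDispose(outFile);
    }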
Speaking from experience on this issue, you will probably want to do a bit of basic processing on the vocals before mixing them down with the background music. First, normalize your background tracks to around -3 dB so that the user's voice stays audible over the music. Second, apply a high-pass filter to the vocals to remove frequencies below about 60 Hz, since wind and other background noise tend to get picked up by the iPhone's mic. Finally, you will probably want to apply compression and limiting to the vocal track to make the vocals a bit easier to hear during quiet stretches.
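None of that strictly requires a DSP library. For example, the high-pass step can be approximated with a simple first-order RC-style filter; this is a simplified sketch (a real app might prefer a steeper biquad), with the function name and 60 Hz cutoff assumed for illustration:

    #import <Foundation/Foundation.h>
    #include <math.h>

    // In-place first-order high-pass filter (~60 Hz cutoff) on 16-bit PCM,
    // using the difference equation y[i] = alpha * (y[i-1] + x[i] - x[i-1]).
    static void HighPass60Hz(SInt16 *samples, NSUInteger count, double sampleRate)
    {
        if (count < 2) return;
        double cutoff = 60.0;                           // Hz
        double rc     = 1.0 / (2.0 * M_PI * cutoff);
        double dt     = 1.0 / sampleRate;
        double alpha  = rc / (rc + dt);

        double prevIn  = samples[0];
        double prevOut = samples[0];
        for (NSUInteger i = 1; i < count; i++) {
            double in  = samples[i];
            double out = alpha * (prevOut + in - prevIn);
            prevIn  = in;
            prevOut = out;
            samples[i] = (SInt16)MAX(MIN(out, INT16_MAX), INT16_MIN);
        }
    }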
Unfortunately, the question you asked isn't as simple as "just use function mixdownTracksTogether()", but you can definitely get this working by chaining other tools and functions together. Hope this gets you on the right track!
Upvotes: 1