Reputation: 27330
I'm writing C++ code which plays both digital audio (synthesised music) and MIDI music at the same time (using the RtMidi library). The digitised music will play out of the computer's audio device, but the MIDI music could play out of an external synthesiser. I want to play a song that uses both digitised instruments and MIDI instruments, and I am not sure of the best way to synchronise the two audio streams:
Currently I am using nanosleep() - which only works under Linux, not Windows - to wait for the correct time between notes. This keeps the digital audio and the MIDI data synchronised, but nanosleep() is not very consistent, so the resulting tempo is very uneven.
Can anyone think of a way to retain accurate timing between notes for both the digital audio as well as the MIDI data?
Upvotes: 1
Views: 1435
Reputation: 11469
The first issue is that you need to know how much audio has actually passed through the audio device. If your latency is low enough, you might be able to hazard a guess from the amount of data you've pushed through, but the latency between that and the playback is a moving target, so you should get that information from the audio hardware. That information is available, so use it, because the jitter caused by errors in latency estimation can affect the synchronization in a musically noticeable way.
If you must use sleep for timing, there are two issues that will make it sleep longer: 1. priority (if another process/thread has higher priority, it will run even if your timer has expired) and 2. system latency (if the system takes 5 milliseconds to swap processes/threads, that may be added to your requested delay time). These kinds of delays are musically relevant. Most MIDI APIs have a "sequencer" API that lets you queue data in advance, so you can avoid relying on system timers at all.
You might find this document useful, even if you are not using PortAudio for audio I/O.
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
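For instance, if you do end up using PortAudio, the relevant piece looks roughly like the sketch below: the callback's timeInfo->outputBufferDacTime reports when the buffer being rendered will actually be heard, which is the clock to schedule MIDI against. The 44100 Hz rate and 256-frame buffer are arbitrary illustration values, and the MIDI scheduling itself is left out.

#include <portaudio.h>

// Sketch only: outputBufferDacTime tells us when the buffer being filled will
// actually reach the DAC, so MIDI events can be scheduled against the stream's
// own clock instead of a sleep-based one.
static int audioCallback(const void*, void* output, unsigned long frameCount,
                         const PaStreamCallbackTimeInfo* timeInfo,
                         PaStreamCallbackFlags, void*)
{
    double bufferPlaysAt = timeInfo->outputBufferDacTime; // seconds, stream clock
    (void)bufferPlaysAt; // compare against the song position to decide which MIDI notes are due

    float* out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frameCount; ++i)
        out[i] = 0.0f; // render the synthesised audio here
    return paContinue;
}

int main()
{
    Pa_Initialize();
    PaStream* stream = nullptr;
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, 44100, 256, audioCallback, nullptr);
    Pa_StartStream(stream);
    Pa_Sleep(3000); // let the stream run; MIDI scheduling would happen meanwhile
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
}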
Upvotes: 1
Reputation: 3968
The answer to this lies not in small buffers, but in large ones.
Let's take an example of a 3-minute song.
One first renders the digital part and "tags" it with MIDI notes. Then one starts it playing and triggers each MIDI note when its time arrives, perhaps using a std::vector to hold an in-order list of events. The synchronization can be adjusted with an overall time offset:
HORRIBLE, incomplete, but hopefully demonstrative pseudocode on the topic:
start_digital_playing_thread();                    // the pre-rendered audio begins playing here
int midi_time_sync = 10;                           // ms, overall offset between audio and MIDI
if (time >= (midi_note[50]->time + midi_time_sync))
    play_note(midi_note[50]);                      // this note is now due
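A slightly fuller (but still simplified) sketch of the same idea, where start_digital_playing_thread() and play_midi_note() are stand-ins for whatever actually starts the pre-rendered audio and sends the RtMidi message:

#include <chrono>
#include <thread>
#include <vector>

struct MidiEvent { int time_ms; unsigned char note; }; // note-on time relative to song start

void start_digital_playing_thread() { /* placeholder: start streaming the pre-rendered audio */ }
void play_midi_note(unsigned char /*note*/) { /* placeholder: e.g. RtMidiOut::sendMessage() */ }

void play_song(const std::vector<MidiEvent>& events)  // events sorted by time_ms
{
    start_digital_playing_thread();
    const int midi_time_sync = 10;                     // ms offset to line MIDI up with the audio

    const auto start = std::chrono::steady_clock::now();
    for (const MidiEvent& ev : events) {
        // Sleep in small steps until this event's (offset-adjusted) time has passed.
        while (std::chrono::duration_cast<std::chrono::milliseconds>(
                   std::chrono::steady_clock::now() - start).count()
               < ev.time_ms + midi_time_sync)
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        play_midi_note(ev.note);
    }
}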
Upvotes: 0
Reputation: 39390
If you are willing to use Boost, it has CPU-precision timers. If not, on Windows there are the functions QueryPerformanceCounter and QueryPerformanceFrequency, which can be used for CPU-based timing and will certainly suit all of your needs. There are plenty of Timer class implementations around the web, some of which work on both Windows and *ix systems.
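A minimal sketch of timing with QueryPerformanceCounter (Windows only; the note scheduling itself is omitted):

#include <windows.h>

// Returns seconds elapsed since `start`, using the high-resolution performance counter.
double elapsed_seconds(const LARGE_INTEGER& start)
{
    LARGE_INTEGER now, freq;
    QueryPerformanceCounter(&now);
    QueryPerformanceFrequency(&freq);
    return double(now.QuadPart - start.QuadPart) / double(freq.QuadPart);
}

int main()
{
    LARGE_INTEGER start;
    QueryPerformanceCounter(&start);
    // ... in the playback loop, poll elapsed_seconds(start) to decide when the next note is due ...
}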
Upvotes: 2