Reputation: 580
I want to write a music program from scratch. Big goal: yes. I have no clear intention of finishing anything. This is mainly a personal project for learning. :P
The first step is building the oscillators and instruments. An instrument will probably be a combination of oscillators and filters (and envelopes + effects). Now, my first question is: How should I build the wave generators?
Imagine I have a track that plays different notes with instrument X. I imagine it's best to "pre-render" these notes. So I would pay an up-front cost to run my wave functions to generate an array of numbers that represent a wave. Say I want to do this at a sample rate of 44.1 kHz; does that mean I'll have an array of 44.1k items per second of sound per instrument?
I think this question itself is language-agnostic, but I'm planning on using JavaScript because I'll run this in a browser.
Upvotes: 2
Views: 221
Reputation: 653
Say I want to do this at a sample rate of 44.1 kHz, does that mean I'll have an array of 44.1k items per second of sound per instrument?
That's exactly it: you'll have 44.1k samples per second, in the form of floats or integers (depending on the sample format you choose).
Here's some JavaScript for generating a 1-second sine wave with float samples at 44.1 kHz:
const RATE = 44100;                       // samples per second
const frequency = 440;                    // pitch in Hz (A4)
const samples = new Float32Array(RATE);   // one second of audio
for (let i = 0; i < RATE; i++) {
    samples[i] = Math.sin(i * 2 * Math.PI * frequency / RATE);
}
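If you want to actually hear that array in the browser, here's a minimal sketch using the Web Audio API (assuming the RATE and samples variables from above):
const ctx = new AudioContext();
const buffer = ctx.createBuffer(1, samples.length, RATE);  // 1 channel at 44.1 kHz
buffer.copyToChannel(samples, 0);                          // fill the only channel
const source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(ctx.destination);                           // route to the speakers
source.start();                                            // play the one-second tone
Note that most browsers require the AudioContext to be created or resumed from a user gesture (e.g. a click) before it will produce sound.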
Upvotes: 1
Reputation: 12027
As the previous answerers have pointed out, you can write a simple program in C (or any language, for that matter) to output a series of values which represent sound samples, i.e. points on a sound wave. If you write these values to a text file, you can then use a program like sox (http://sox.sourceforge.net/) to convert that file to a .wav file. Then you can play the .wav file on your computer and listen to the sound wave through your speakers.
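If it helps, here's a rough sketch in Node.js (since the question mentions JavaScript) that writes one second of a 440 Hz tone as raw signed 16-bit PCM; the file name and the choice of raw PCM rather than a text file are just assumptions for illustration:
const fs = require('fs');
const RATE = 44100, FREQ = 440, AMP = 16384;   // sample rate, pitch, amplitude
const buf = Buffer.alloc(RATE * 2);            // 2 bytes per 16-bit sample
for (let i = 0; i < RATE; i++) {
    const sample = Math.round(AMP * Math.sin(i * 2 * Math.PI * FREQ / RATE));
    buf.writeInt16LE(sample, i * 2);           // little-endian signed 16-bit
}
fs.writeFileSync('tone.raw', buf);
You could then convert and play it with something like sox -t raw -r 44100 -e signed-integer -b 16 -c 1 tone.raw tone.wav (one possible invocation, not the only way).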
Upvotes: 1
Reputation: 13
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <stdint.h>
#define PI 3.141592
int main(void){
    int RATE = 44100;          // sample rate in Hz
    double frequency = 440;    // pitch in Hz
    double Amp = 16384;        // amplitude of signal (fits in signed 16-bit)
    FILE *file;
    int16_t data;              // one signed 16-bit sample
    file = fopen("dummyf.pcm", "wb");   // binary mode for raw PCM output
    for(int i = 0; i < RATE; i++){
        data = (int16_t)(Amp * sin(i * 2 * PI * frequency / RATE));
        fwrite(&data, sizeof(data), 1, file);   // write the whole 16-bit sample, not just one byte
    }
    fclose(file);
    return 0;
}
I want to try a different solution: writing the data out to a file. The advantage is that you don't need to build a big array in memory; it is easier to generate the data with a function and store it in a PCM file than to keep it all in memory, isn't it?
Later edit: you could use DirectX's DirectMusic instead, because it uses FM synthesis for many instruments.
Later edit 2: right now my program doesn't work as expected.
Upvotes: 0
Reputation: 28285
Audio is just a curve - so to build your oscillator you come up with an algorithm that outputs a curve. Software, being digital rather than analog, demands that the curve be defined as a series of points in time (samples), where each value is the instantaneous height of the audio curve. Typically these samples are taken 44,100 times per second, i.e. at 44.1 kHz.
Check out the Web Audio API - it's surprisingly powerful and very well supported. Just to get an appreciation of its flexibility, check out this demo written by a Google engineer:
Web Audio Playground
http://webaudioplayground.appspot.com/
Amongst other audio widgets, it offers black-box oscillators, yet it also lets you roll your own and render your synthesized or file-based audio data in real time. It's modular: each component is called a node, and you build an audio graph by linking these nodes together.
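As a quick taste (a minimal sketch with made-up frequency and gain values, not taken from the Playground demo), the built-in nodes already let you wire up a tone in a few lines:
const audio_ctx = new AudioContext();
const osc = audio_ctx.createOscillator();   // stock black-box oscillator node
osc.type = 'sine';                          // also 'square', 'sawtooth', 'triangle'
osc.frequency.value = 440;                  // Hz
const gain = audio_ctx.createGain();        // simple volume-control node
gain.gain.value = 0.2;
osc.connect(gain);                          // link the nodes into a graph...
gain.connect(audio_ctx.destination);        // ...ending at the speakers
osc.start();
Rolling your own oscillator means generating the samples yourself in a callback instead, which is what the rest of this answer shows.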
Here is the definition of a callback used to synthesize audio (an oscillator):
function setup_onaudioprocess_callback(given_node) {
    given_node.onaudioprocess = (function() {
        return function(event) {
            if (allow_synth) {   // allow_synth is a global on/off flag from the surrounding demo
                // console.log('inside main_glob callback onaudioprocess BUFF_SIZE ', BUFF_SIZE);
                var synthesized_output_buffer;
                // stens TODO - how to pass in own buffer instead of being given object: out so I can do a circular ring of such buffers
                synthesized_output_buffer = event.outputBuffer.getChannelData(0); // stens TODO - do both channels not just left
                // carry the phase over from the previous buffer to avoid clicks at buffer boundaries
                var phi = given_node.phi,
                    dphi = 2.0 * Math.PI * given_node.sample_freq / given_node.sample_rate;
                for (var curr_sample = 0; curr_sample < given_node.BUFF_SIZE; curr_sample++, phi += dphi) {
                    synthesized_output_buffer[curr_sample] = Math.sin(phi);
                }
                given_node.phi = phi % (2.0 * Math.PI);
                // sweep the frequency up and down between MIN_FREQ and MAX_FREQ
                given_node.sample_freq *= given_node.freq_factor;
                if (given_node.sample_freq < given_node.MIN_FREQ) {
                    given_node.freq_factor = given_node.increasing_freq_factor;
                } else if (given_node.sample_freq > given_node.MAX_FREQ) {
                    given_node.freq_factor = given_node.decreasing_freq_factor;
                }
                // ---
                // hand the freshly rendered buffer to the demo's visualization object
                audio_display_obj.pipeline_buffer_for_time_domain_cylinder(synthesized_output_buffer,
                    BUFF_SIZE, "providence_2");
            }
        };
    }());
}
It would be attached to a node created with createScriptProcessor:
function init_synth_settings(given_node, g_MIN_FREQ, g_MAX_FREQ, g_BUFF_SIZE, g_decreasing_freq_factor, g_increasing_freq_factor) {
    given_node.MIN_FREQ = g_MIN_FREQ;
    given_node.MAX_FREQ = g_MAX_FREQ;
    given_node.sample_freq = given_node.MIN_FREQ; // Hertz
    given_node.BUFF_SIZE = g_BUFF_SIZE;
    given_node.decreasing_freq_factor = g_decreasing_freq_factor;
    given_node.increasing_freq_factor = g_increasing_freq_factor;
    given_node.freq_factor = g_increasing_freq_factor;
    given_node.sample_rate = audio_context.sampleRate; // needed by the onaudioprocess callback above
    given_node.phi = 0;                                // running phase, carried across buffers
}
var this_glob_01 = audio_context.createScriptProcessor(BUFF_SIZE, 1, 1);
init_synth_settings(this_glob_01, 20, 300, BUFF_SIZE, 0.98, 1.01);
setup_onaudioprocess_callback(this_glob_01);
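One detail not shown above: a ScriptProcessorNode typically only starts firing onaudioprocess once it is connected into the graph, so you would hook it up to the context's output along these lines:
this_glob_01.connect(audio_context.destination);   // route the synthesized audio to the speakers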
this should get you over the hump
Upvotes: 1