Reputation: 127
I have an SVG that produces audio based on keypresses, and I want to record the audio coming from it in my React app.
In the SVG I have a group of audio elements like this:
<svg id="keyboard_svg">
  *other svg stuff*
  <g id="keyboard">
    <audio src="https://arweave.net/<hash>/fileName1.flac"></audio>
    <audio src="https://arweave.net/<hash>/fileName2.flac"></audio>
    <audio src="https://arweave.net/<hash>/fileName3.flac"></audio>
    <audio src="https://arweave.net/<hash>/fileName4.flac"></audio>
  </g>
</svg>
In my React app I get access to the audio elements like this:
const svg = document.getElementById("keyboard_svg")
const sound = svg.getElementById("Electric Piano")
so sound is the group of audio elements. I can iterate through them like this:
for (let i = 0; i < sound.children.length; i++) {
  // do something with sound.children[i]
}
The SVG depicts a piano keyboard that users play with their computer keyboard. My goal is to record the audio from these elements in JavaScript. My understanding is that I can create a MediaStream object, add MediaStreamTracks to it, and then pass the MediaStream to the MediaRecorder constructor, but maybe I'm misunderstanding how it works.
What I've tried:
let stream = new MediaStream()
for (let i = 0; i < sound.children.length; i++) {
  stream.addTrack(sound.children[i])
}
This doesn't work, because audio elements are not of type MediaStreamTrack.
let audio = new Audio()
for (let i = 0; i < sound.children.length; i++) {
  let source = document.createElement("source")
  source.src = sound.children[i].src
  audio.appendChild(source)
}
so audio now contains this:
<audio>
  <source src="https://arweave.net/<hash>/fileName1.flac">
  <source src="https://arweave.net/<hash>/fileName2.flac">
  <source src="https://arweave.net/<hash>/fileName3.flac">
  <source src="https://arweave.net/<hash>/fileName4.flac">
</audio>
then I tried:
let stream = audio.captureStream()
but if I pass that into a MediaRecorder, I get:
Failed to construct 'MediaRecorder': parameter 1 is not of type 'MediaStream'
So that's basically where I'm stuck. I'm aware of these other functions:
AudioContext.createMediaElementSource()
AudioContext.createMediaStreamSource()
AudioContext.createMediaStreamTrackSource()
MediaStreamTrackGenerator()
It sounds like they're what I need, but I can't figure out how to use them. I've tried a bunch of other things; what I listed above is what I think makes the most sense, but it still doesn't work.
Any help would be great!
Upvotes: 1
Views: 1595
Reputation: 20934
For this you're going to need the Web Audio API, including some of the functions that you mentioned in your question.
Start off by creating a new AudioContext and selecting your <audio> elements. Loop over each <audio> element and use AudioContext.createMediaElementSource to create a playable source from each element.
const audioContext = new AudioContext();
const audioElements = document.querySelectorAll('#keyboard audio');

const sources = Array.from(audioElements).map(
  audioElement => audioContext.createMediaElementSource(audioElement)
);
From here you need to connect all the sources to the MediaRecorder. You can't do that directly, because the MediaRecorder constructor only accepts a MediaStream instance as its argument.
To connect the two, use a MediaStreamAudioDestinationNode. This node can both receive the input from the created sources and expose that input as a MediaStream.
Loop over the sources and connect each one to the MediaStreamAudioDestinationNode. Then pass the stream property of the MediaStreamAudioDestinationNode to the MediaRecorder constructor.
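The steps just described can be sketched like this (reusing audioContext and sources from the previous snippet):

```javascript
// Direct approach: wire every source straight into a destination node.
// `audioContext` and `sources` are the variables from the previous snippet.
const mediaStreamDestination = audioContext.createMediaStreamDestination();

sources.forEach(source => {
  // Route each media-element source into the destination node...
  source.connect(mediaStreamDestination);
  // ...and to the speakers, so the notes stay audible while recording.
  source.connect(audioContext.destination);
});

// `stream` is a real MediaStream, which the MediaRecorder constructor accepts.
const mediaRecorder = new MediaRecorder(mediaStreamDestination.stream);
```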
Edit:
I've included the usage of a GainNode, which is basically a volume control but can also be used to collect multiple inputs into a single AudioNode.
In this case we connect all the sources to the GainNode and then connect the GainNode to both the speakers and the MediaStreamAudioDestinationNode. This way we can both monitor and record the audio while both come from the same source.
This is an "easier" alternative to creating a MediaStreamAudioSourceNode and reading the stream from the MediaStreamAudioDestinationNode.
const gainNode = audioContext.createGain();
const mediaStreamDestination = audioContext.createMediaStreamDestination();

sources.forEach(source => {
  source.connect(gainNode);
});

gainNode.connect(mediaStreamDestination);
gainNode.connect(audioContext.destination);

const mediaRecorder = new MediaRecorder(mediaStreamDestination.stream);
From here, all you have to do is hit record and play your notes.
let recordingData = [];

mediaRecorder.addEventListener('start', () => {
  recordingData.length = 0;
});

mediaRecorder.addEventListener('dataavailable', event => {
  recordingData.push(event.data);
});

mediaRecorder.addEventListener('stop', () => {
  // Note: MediaRecorder does not produce MP3; most browsers record
  // WebM/Opus or Ogg/Opus, so label the Blob accordingly.
  const blob = new Blob(recordingData, {
    'type': 'audio/webm'
  });

  // The blob is the result of the recording.
  // Handle the blob from here.
});
mediaRecorder.start();
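One caveat: MediaRecorder does not transcode to MP3. Most browsers record WebM/Opus or Ogg/Opus, and you can query the browser for a supported container before constructing the recorder. A sketch, where the candidate list is my own assumption:

```javascript
// Ask the browser which container/codec it can actually record to.
// The candidate list is an assumption — reorder or extend it as needed.
const candidates = ['audio/webm;codecs=opus', 'audio/ogg;codecs=opus', 'audio/webm'];
const mimeType = candidates.find(type => MediaRecorder.isTypeSupported(type));

// Replaces the earlier constructor call.
const mediaRecorder = new MediaRecorder(mediaStreamDestination.stream, { mimeType });

// Use the same type when building the final Blob so the label matches the bytes:
// new Blob(recordingData, { type: mimeType });
```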
Note 1: This solution is not written in React and should be modified to work with it. For example, document.querySelectorAll should never be used in a React app; instead use the useRef hook to create references to the <audio> elements.
Note 2: The <audio> element is not part of SVG. You don't have to place the elements inside the SVG for them to work. Instead, visually hide the <audio> elements and trigger them with, for example, an onClick prop.
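A minimal sketch of the React side, under the assumption that you render the <audio> elements yourself; the Keyboard component, its urls prop, and the button are hypothetical:

```javascript
import { useRef } from 'react';

// Hypothetical component — `urls` would be the list of .flac file URLs.
function Keyboard({ urls }) {
  // Collect refs to all <audio> elements in a single array.
  const audioRefs = useRef([]);
  const recorderRef = useRef(null);

  const startRecording = () => {
    // createMediaElementSource may only be called once per element,
    // so set everything up a single time and reuse the recorder.
    if (!recorderRef.current) {
      const audioContext = new AudioContext();
      const destination = audioContext.createMediaStreamDestination();

      audioRefs.current.forEach(el => {
        const source = audioContext.createMediaElementSource(el);
        source.connect(destination);               // into the recording stream
        source.connect(audioContext.destination);  // and to the speakers
      });

      recorderRef.current = new MediaRecorder(destination.stream);
    }
    recorderRef.current.start();
  };

  return (
    <>
      {urls.map((url, i) => (
        <audio key={url} src={url} ref={el => (audioRefs.current[i] = el)} />
      ))}
      <button onClick={startRecording}>Record</button>
    </>
  );
}
```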
Upvotes: 2
Reputation: 78
I found this article: https://mitya.uk/articles/concatenating-audio-pure-javascript
// Collect the source URLs of all <audio> elements.
var uris = [];
for (let i = 0; i < sound.children.length; i++) {
  uris.push(sound.children[i].src);
}

// Fetch each file as a Blob, concatenate the Blobs, and play the result.
let proms = uris.map(uri => fetch(uri).then(r => r.blob()));
Promise.all(proms).then(blobs => {
  let blob = new Blob([blobs[0], blobs[1]]),
      blobUrl = URL.createObjectURL(blob),
      audio = new Audio(blobUrl);
  audio.play();
});
Upvotes: 0