Reputation: 13
Let me explain.
I have tried fs.createReadStream and similar methods, but none of them work correctly: they all play the audio from the starting point.
In simple words, I want to make a platform that broadcasts already stored MP3 files like a radio station.
Example :- https://live.sgpc.net:8443/;nocache=889869
Open this link; it plays a live audio stream.
Upvotes: 0
Views: 4246
Reputation: 186
I found one post made by LogRocket pointing in the right direction: the article link.
The main idea is to start reading an audio file through Fs.createReadStream, which gives you a readable stream you can manipulate. You read the audio file independently of the users connected to your application; the audio has to keep moving forward in all cases.
You do not want to read it too fast; ideally you read it at the bitrate of the song, so you can "mimic" real-time playing. There are multiple npm packages to read the bitrate of an audio file. With this information, you can pipe the audio stream into a transform stream (a duplex stream) to limit the rate at which the audio stream is read.
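For example, a minimal sketch of reading the bitrate with the music-metadata package (the same one the full example below uses; the file path is just a placeholder):
import { parseFile } from 'music-metadata';

// parseFile returns the file metadata; format.bitrate is expressed in bits per second
const { format } = await parseFile('./yourMP3AudioFile.mp3');
// Throttle expects bytes per second, so divide by 8
const bitRate = format.bitrate / 8;
With that bitRate value, the throttled read looks like this: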
const songReadable = Fs.createReadStream(audioFilePath); // readable stream to manipulate
const throttleTransformable = new Throttle(bitRate); // Transform stream to rate limit the reading of file
songReadable.pipe(throttleTransformable); // passing the readable stream to the transform
After that, you need to pass the throttleTransformable stream to the client. To handle backpressure a bit more easily and pass the audio stream to multiple clients, you can create a new stream for each new client; let's call it clientStream. When data is available on the throttleTransformable stream, you write that data to each clientStream:
throttleTransformable.on('data', (chunk) => {broadcastToEveryStreams(chunk)});
Then you simply pipe the client stream to the response stream. The browser will receive the audio data, but, as you intend, as live data rather than from the start.
Here is a full example in Node.js with the Express framework. It is just a toy, but I hope it can lead you in the right direction:
import { parseFile } from 'music-metadata';
import express from "express";
import Throttle from 'throttle';
import Fs from 'fs';
import { PassThrough } from 'stream';

const app = express();

let filePath = "./yourMP3AudioFile.mp3"; // put your favorite audio file
let bitRate = 0;

const streams = new Map(); // one client stream per connected listener
app.use(express.static('public'));
app.get("/stream", (req, res) => {
const { id, stream } = generateStream() // We create a new stream for each new client
res.setHeader("Content-Type", "audio/mpeg")
stream.pipe(res) // the client stream is pipe to the response
res.on('close', () => { streams.delete(id) })
})
const init = async () => {
    const fileInfo = await parseFile(filePath);
    bitRate = fileInfo.format.bitrate / 8; // bitrate is in bits per second, Throttle expects bytes per second
};
const playFile = (filePath) => {
    const songReadable = Fs.createReadStream(filePath);
    const throttleTransformable = new Throttle(bitRate);
    songReadable.pipe(throttleTransformable);
    throttleTransformable.on('data', (chunk) => { broadcastToEveryStreams(chunk); });
    throttleTransformable.on('error', (e) => console.log(e));
};
const broadcastToEveryStreams = (chunk) => {
    for (let [id, stream] of streams) {
        stream.write(chunk); // we write the new chunk of data to each client stream
    }
};
const generateStream = () => {
    const id = Math.random().toString(36).slice(2);
    const stream = new PassThrough();
    streams.set(id, stream);
    return { id, stream };
};
init()
    .then(() => app.listen(8080))
    .then(() => playFile(filePath));
And for the front-end part, a simple HTML file in the public folder:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>NodeJS web radio</title>
  </head>
  <body>
    <audio id="audio" src="/stream" type="audio/mpeg" preload="none" controls autoplay></audio>
  </body>
</html>
And the npm packages I used:
{
  "express": "^4.18.2",
  "music-metadata": "^8.1.0",
  "throttle": "^1.0.3"
}
The code is largely inspired by the repo of the LogRocket post. I simply adapted it to the Express framework and kept only the code necessary to understand the approach they used. It does not handle audio file transitions or proper termination with the client. I used ES6 modules, so you have to specify "type": "module" in your package.json file for this to work.
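If you want to experiment with file transitions, one rough idea (not in the original code; nextFilePath is a placeholder you would manage yourself, and the bitrate would have to be re-read if it differs between files) is to listen for the end of the readable stream inside playFile:
songReadable.on('end', () => {
    // hypothetical: move on to the next file of your own playlist so the "radio" never stops
    playFile(nextFilePath);
});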
I don't know why, but autoplay does not seem to work for me; I have to manually click the play button. When I connect from multiple devices at different times after starting the server, I can hear live audio, although it is not perfectly synchronized between clients/devices.
Be careful: there is no flow control on the client streams. If a buffer fills up because a client connection is too slow, it can get ugly; you can check the highWaterMark of the client/response stream for more control. I am not sure it will scale really well for a lot of clients, and creating a new stream in which we duplicate the data does not feel quite right.
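As a sketch of that idea (my own assumption, not something the original code does), you can check the boolean returned by write(), which becomes false once the stream's internal buffer is above its highWaterMark, and drop clients that fall too far behind:
const broadcastToEveryStreams = (chunk) => {
    for (let [id, stream] of streams) {
        const ok = stream.write(chunk); // false means this client stream is buffering above its highWaterMark
        if (!ok) {
            // this client is too slow; here we simply disconnect it, but you could also pause or buffer
            stream.end();
            streams.delete(id);
        }
    }
};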
Maybe you can write the data directly to the Express response stream:
res.write(chunkAudioFile)
Or connect your users to the server with a WebSocket. In any case, you have to be aware of the client connection state.
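A minimal sketch of the "write directly to the response" variant (same idea as above, just storing the Express res objects instead of PassThrough streams; the names are mine):
const responses = new Map();

app.get("/stream", (req, res) => {
    const id = Math.random().toString(36).slice(2);
    res.setHeader("Content-Type", "audio/mpeg");
    responses.set(id, res); // keep a handle on the response to write audio chunks to it later
    res.on('close', () => { responses.delete(id); }); // stop writing to clients that disconnected
});

const broadcastToEveryResponses = (chunk) => {
    for (let [, res] of responses) {
        res.write(chunk);
    }
};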
I did not spend time on this, but maybe you can also buffer the audio data on the client side, for smoother control and playback. The Web Audio API seems to have some features for streaming: a link to a web.dev post.
All that being said, for your requirement of playing audio as a web radio, I advise you to look at professional radio servers like Shoutcast or Icecast. And for the client part, maybe look at Video.js, which seems to handle audio as well.
I hope this helps and gives you some nice ideas.
Upvotes: 5