Harry Stuart

Reputation: 1929

Creating .wav file from bytes

I am reading bytes from wav audio downloaded from a URL. I would like to "reconstruct" these bytes into a .wav file. I have attempted the code below, but the resulting file is pretty much static. For example, when I download audio of myself speaking, the .wav file produced is static only, but I can hear slight alterations/distortions when I know the audio should be playing my voice. What am I doing wrong?

from pprint import pprint
import scipy.io.wavfile
import numpy

#download a wav audio recording from a url
>>>response = client.get_recording(r"someurl.com")
>>>pprint(response)
(b'RIFFv\xfc\x03\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00\x80>\x00\x00'
 ...
 b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff'
...
 b'\xea\xff\xfd\xff\x10\x00\x0c\x00\xf0\xff\x06\x00\x10\x00\x06\x00'
 ...)

>>>a=bytearray(response)
>>>pprint(a)
bytearray(b'RIFFv\xfc\x03\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00'       
      b'\x80>\x00\x00\x00}\x00\x00\x02\x00\x10\x00LISTJ\x00\x00\x00INFOINAM'
      b'0\x00\x00\x00Conference d95ac842-08b7-4380-83ec-85ac6428cc41\x00'
      b'IART\x06\x00\x00\x00Nexmo\x00data\x00\xfc\x03\x00\xff\xff'
      b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff'
      ...
      b'\x12\x00\xf6\xff\t\x00\xed\xff\xf6\xff\xfc\xff\xea\xff\xfd\xff'
      ...)

>>>b = numpy.array(a, dtype=numpy.int16)
>>>pprint(b)
array([ 82,  73,  70, ..., 255, 248, 255], dtype=int16)

>>>scipy.io.wavfile.write(r"C:\Users\somefolder\newwavfile.wav", 
16000, b)

Upvotes: 8

Views: 42527

Answers (5)

Little Gift

Reputation: 11

You can do this with the wave module:

import wave

def save_audio_wf(audio_data, filename="./output-wf.wav"):
    with wave.open(filename, 'wb') as wf:
        wf.setnchannels(1)        # mono
        wf.setsampwidth(2)        # 2 bytes per sample (16-bit)
        wf.setframerate(16000)    # 16 kHz sample rate
        wf.writeframes(audio_data)
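
A minimal usage sketch, assuming audio_bytes already holds headerless 16-bit mono PCM at 16 kHz (a hypothetical name):

audio_bytes = b'...'  # raw PCM samples, no RIFF header
save_audio_wf(audio_bytes, filename='speech.wav')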

For context, I used it to store streamed data like so:

import base64
import json
import audioop  # note: removed from the standard library in Python 3.13

@sock.route('/stream')
def stream(ws):
    try:
        audio_buffer = b''

        while True:
            message = ws.receive()
            packet = json.loads(message)
            if packet['event'] == 'start':
                print('Starting Stream')
            elif packet['event'] == 'stop':
                print('Stopping Stream')
                save_audio_wf(audio_buffer)
                audio_buffer = b''
            elif packet['event'] == 'media':
                # decode the base64 payload, convert u-law to 16-bit linear PCM,
                # then resample from 8 kHz to 16 kHz
                audio = base64.b64decode(packet['media']['payload'])
                audio = audioop.ulaw2lin(audio, 2)
                audio = audioop.ratecv(audio, 2, 1, 8000, 16000, None)[0]
                audio_buffer += audio
    except Exception as e:
        raise e

I first created a buffer for the data, then appended incoming packets to the buffer (after a little bit of "preprocessing") as shown in the last elif block.

At the end of the stream, I saved it all to a .wav file using the save_audio_wf function.

Upvotes: 1

Wesam Nabki

Reputation: 2614

I faced the same problem while streaming and used the answers above to write a complete solution. In my case, the byte array came from the frontend streaming an audio file, and the backend needed to process it as an ndarray.

This snippet simulates how the frontend sends the audio file in chunks that are accumulated into a byte array:

audio_file_path = 'offline_input/zoom283.wav'

chunk = 1024

wf = wave.open(audio_file_path, 'rb')
audio_input = b''
d = wf.readframes(chunk)
while len(d) > 0:
    audio_input = audio_input + d
    d = wf.readframes(chunk)

The required imports:

import io
import wave

import numpy as np
import scipy.io.wavfile
import soundfile as sf
from scipy.io.wavfile import write

Finally, the backend takes the byte array and converts it to an ndarray:

def convert_bytearray_to_wav_ndarray(input_bytearray: bytes, sampling_rate=16000):
    # wrap the raw 16-bit PCM samples in an in-memory WAV container
    bytes_wav = bytes()
    byte_io = io.BytesIO(bytes_wav)
    write(byte_io, sampling_rate, np.frombuffer(input_bytearray, dtype=np.int16))
    # scipy rewinds the buffer, so the complete WAV can be read back
    output_wav = byte_io.read()
    # decode the in-memory WAV into a float ndarray
    output, samplerate = sf.read(io.BytesIO(output_wav))
    return output


output = convert_bytearray_to_wav_ndarray(input_bytearray=audio_input)

The output represents the audio file to be processed by the backend.

To check that the file has been received correctly, we write it to disk:

scipy.io.wavfile.write("output1.wav", 16000, output)

Upvotes: 5

UlucSahin

Reputation: 85

To add a WAV file header to raw audio bytes (adapted from the wave library):

import struct

def write_header(_bytes, _nchannels, _sampwidth, _framerate):
    WAVE_FORMAT_PCM = 0x0001
    initlength = len(_bytes)
    bytes_to_add = b'RIFF'

    _nframes = initlength // (_nchannels * _sampwidth)
    _datalength = _nframes * _nchannels * _sampwidth

    # RIFF chunk size, 'WAVE' tag, 'fmt ' chunk (PCM format, channels,
    # sample rate, byte rate, block align, bits per sample), 'data' tag
    bytes_to_add += struct.pack('<L4s4sLHHLLHH4s',
        36 + _datalength, b'WAVE', b'fmt ', 16,
        WAVE_FORMAT_PCM, _nchannels, _framerate,
        _nchannels * _framerate * _sampwidth,
        _nchannels * _sampwidth,
        _sampwidth * 8, b'data')

    # size of the 'data' chunk, followed by the samples themselves
    bytes_to_add += struct.pack('<L', _datalength)

    return bytes_to_add + _bytes
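
A minimal usage sketch, assuming raw_pcm holds headerless 16-bit mono PCM at 16 kHz (hypothetical names):

raw_pcm = b'...'  # samples without a RIFF header
wav_bytes = write_header(raw_pcm, _nchannels=1, _sampwidth=2, _framerate=16000)

with open('with_header.wav', 'wb') as f:
    f.write(wav_bytes)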

Upvotes: 3

ediamant

Reputation: 143

AudioSegment.from_raw() will also work when you have a continuous stream of bytes:

import io
from pydub import AudioSegment

# current_data is the stream of bytes that you receive
s = io.BytesIO(current_data)
audio = AudioSegment.from_raw(s, sample_width=sample_width,
                              frame_rate=frame_rate, channels=channels).export(filename, format='wav')
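
For the format in the question (16-bit mono PCM at 16 kHz), a concrete call might look like the sketch below; raw_bytes is a hypothetical name for headerless PCM data, since from_raw expects samples without a RIFF header:

import io
from pydub import AudioSegment

raw_bytes = b'...'  # headerless 16-bit mono PCM at 16 kHz
AudioSegment.from_raw(io.BytesIO(raw_bytes), sample_width=2,
                      frame_rate=16000, channels=1).export('reconstructed.wav', format='wav')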

Upvotes: 4

Matthias

Reputation: 4884

You can simply write the data in response to a file:

with open('myfile.wav', mode='bx') as f:
    f.write(response)

If you want to access the audio data as a NumPy array without writing it to a file first, you can do this with the soundfile module like this:

import io
import soundfile as sf

data, samplerate = sf.read(io.BytesIO(response))
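
If you also want to write that array back out as a .wav afterwards, soundfile can do that too; a minimal sketch, assuming data and samplerate come from the snippet above:

sf.write('reconstructed.wav', data, samplerate)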

See also this example: https://pysoundfile.readthedocs.io/en/0.9.0/#virtual-io

Upvotes: 12
