Steak

Reputation: 544

Return a blit-able numpy array from a matplotlib figure

I am trying to find a way to return a numpy array that can be blitted onto a pygame screen. Here is the code so far:


import pyaudio
import struct
import numpy as np
import matplotlib.pyplot as plt
import time

CHUNK = 4000
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 42000

p = pyaudio.PyAudio()

chosen_device_index = -1

stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input_device_index=chosen_device_index,
                input=True,
                output=True,
                frames_per_buffer=CHUNK)

plt.ion()
fig, ax = plt.subplots()

x = np.arange(0, CHUNK)
data = stream.read(CHUNK)
data_int16 = struct.unpack(str(CHUNK) + 'h', data)
line, = ax.plot(x, data_int16)
#ax.set_xlim([xmin,xmax])
ax.set_ylim([-2**15,(2**15)-1])

while True:
    data = struct.unpack(str(CHUNK) + 'h', stream.read(CHUNK))
    line.set_ydata(data)
    fig.canvas.draw()
    fig.canvas.flush_events()

This is an example image of what the graph looks like:

image

I would like to be able to constantly update a PyGame window with such a graph.

Upvotes: 2

Views: 123

Answers (1)

import random

Reputation: 3245

You can use a pygame.PixelArray to manipulate the individual pixels on your pygame surface.

Here's a minimal example based on your PyAudio input capture:

import pygame
import pyaudio
import struct
import numpy as np
import samplerate

# initialise audio capture
CHUNK = 4000
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 42000

p = pyaudio.PyAudio()

chosen_device_index = -1

stream = p.open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input_device_index=chosen_device_index,
    input=True,
    output=False,
    frames_per_buffer=CHUNK,
)

pygame.init()

width = 320
height = 240

clock = pygame.time.Clock()
screen = pygame.display.set_mode([width, height])
pygame.display.set_caption("Audio Input")
pixels = pygame.PixelArray(screen)

done = False
while not done:
    # Events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True
    # capture data
    data = struct.unpack(str(CHUNK) + "h", stream.read(CHUNK))
    samples = samplerate.resample(data, width / len(data))

    # rescale so 0 maps to the top of the window and height to the bottom
    norm = ((samples / 2 ** 15) + 1) / 2  # normalise to 0 - 1
    rescaled = (1 - norm) * height

    screen.fill(pygame.Color("black"))
    for x in range(len(samples)):
        y = int(rescaled[x])
        pixels[x][y] = pygame.Color("green")
    pygame.display.update()
    clock.tick(60)

pygame.quit()
p.terminate()

This will show something like this:

PyAudio Input Capture

The captured chunk is resampled to match the width of the window. According to this answer, samplerate gives better quality when resampling audio than basic linear interpolation.
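If you would rather avoid the extra dependency, a basic linear-interpolation fallback with np.interp is enough for a visual trace (lower quality for actual audio work). A minimal sketch; resample_linear is a hypothetical helper, not part of the code above:

import numpy as np

def resample_linear(data, target_len):
    # map the original sample positions onto target_len evenly spaced points
    x_old = np.linspace(0, 1, num=len(data))
    x_new = np.linspace(0, 1, num=target_len)
    return np.interp(x_new, x_old, data)

# drop-in replacement for the samplerate call:
# samples = resample_linear(data, width)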

The magnitude of the signal is then scaled to the window height. Changing the screen dimensions to 800x400 looks like this:

PyAudio Input Capture 800x400

You may wish to adjust the gain by tinkering with the vertical scaling.
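For example, a simple multiplicative gain applied before normalisation could look like this (GAIN is a made-up tuning parameter, and the clip keeps the trace inside the window):

GAIN = 2.0  # hypothetical gain factor; > 1 exaggerates quiet signals
norm = np.clip(((samples * GAIN) / 2 ** 15 + 1) / 2, 0.0, 1.0)
rescaled = (1 - norm) * height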

EDIT: To eliminate the gaps, use pygame.draw.aaline(…) to draw an anti-aliased line between successive points. E.g. after filling the screen with black:

# draw lines between each point    
for x in range(1, len(samples)):
    y0 = int(rescaled[x-1])
    y1 = int(rescaled[x])
    pygame.draw.aaline(screen, pygame.Color("turquoise"), (x-1, y0), (x, y1))

Then you'll see something like:

PyAudio Input Capture aaline
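For completeness: if you specifically want the matplotlib figure itself as a blit-able numpy array (as the question title asks), one possible approach with the Agg backend is to rasterise the canvas and hand the RGBA buffer to pygame. A rough sketch, not tested against your exact setup:

import matplotlib
matplotlib.use("Agg")  # off-screen backend so the canvas exposes buffer_rgba()
import matplotlib.pyplot as plt
import numpy as np
import pygame

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 0])  # placeholder data

# inside your update loop, after line.set_ydata(...):
fig.canvas.draw()                            # rasterise the figure
rgba = np.asarray(fig.canvas.buffer_rgba())  # (h, w, 4) uint8 numpy array
w, h = fig.canvas.get_width_height()
surface = pygame.image.frombuffer(rgba.tobytes(), (w, h), "RGBA")
# screen.blit(surface, (0, 0))
# pygame.display.update()

This is noticeably slower than drawing the pixels directly, since the whole figure is re-rendered every frame.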

Upvotes: 2
