TychoAussie

Reputation: 41

How can I discriminate the falling edge of a signal with Python?

I am working on discriminating some signals for the calculation of the free-period oscillation and damping ratio of a spring-mass system (seismometer). I am using Python as the main processing program. What I need to do is import this signal, parse the signal and find the falling edge, then return a list of the peaks from the tops of each oscillation as a list so that I can calculate the damping ratio. The calculation of the free-period is fairly straightforward once I've determined the location of the oscillation within the dataset.

Where I am mostly hung up is how to parse through the list, identify the falling edge, and then capture each of the Z0..Zn elements. The oscillation frequency can be calculated fairly easily with an FFT once I know where that falling edge is, but if I process the whole file, energy from the energizing of the system before the release can sometimes force the algorithm to kick out an ultra-low frequency representing the near-DC offset rather than the actual oscillation. (It's especially a problem at higher damping ratios, where there might be only four or five measurable rebounds.)

Has anyone got some ideas on how I can go about this? Right now, the code below uses arbitrarily assigned values for the signal in the screenshot; I need the code to calculate those values instead. Also, I haven't yet determined how to create my list of peaks for the calculation of the damping ratio h. Your help in getting some ideas together for solving this would be very welcome. Because I have such a small Stack Overflow reputation, I have included my signal as a sample screenshot at the following link: (Boy, I hope this works!) Tycho's sample signal -->

https://github.com/tychoaussie/Sigcal_v1/blob/066faca7c3691af3f894310ffcf3bbb72d730601/Freeperiod_Damping%20Ratio.jpg
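For reference, the sort of peak list and damping-ratio calculation I have in mind would look something like this sketch (the synthetic decaying cosine stands in for my real laser data, and scipy.signal.argrelmax with its order window is just my guess at a suitable tool):

```python
import numpy as np
from scipy.signal import argrelmax

# Synthetic stand-in for the laser signal: a decaying cosine after release.
delta = 0.00767                        # seconds per sample (~130.28 sps)
t = np.arange(8192) * delta
laser = 1000.0 * np.exp(-0.5 * t) * np.cos(2 * np.pi * 1.2 * t)

# Indices of local maxima; 'order' is how many neighbors each peak must beat
# on both sides, so it should be less than half the samples per cycle.
peak_idx = argrelmax(laser, order=50)[0]
peaks = laser[peak_idx]                # amplitudes Z0..Zn of successive rebounds

# Damping ratio h from the logarithmic decrement of successive peaks.
log_dec = np.log(peaks[:-1] / peaks[1:])
h = log_dec / np.sqrt(4 * np.pi**2 + log_dec**2)
```

With real data the order window would be derived from the FFT period estimate rather than hard-coded.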

##########################################################
import os, sys, csv
from scipy import signal
from scipy.integrate import simps
import pylab as plt
import numpy as np
import scipy as sp
#
# code goes here that imports the data from the csv file into lists.
# One of those lists is called laser, a time-history list in counts from an analog-digital-converter
# sample rate is 130.28 samples / second.
#
                                      #            
                                      # Find the period of the observed signal
                                      #
delta = 0.00767                       # Seconds per sample (130.28 samples/sec); calculated elsewhere in the code
                                      # laser is a list with about 20,000 elements representing time-history data from a laser position sensor
                                      # The release of the mass and system response occurs starting at sample number 2400 in this particular instance.

sense = signal.detrend(laser[2400:(2400+8192)]) # Remove the linear trend (and mean) from the signal
N = len(sense)
W    = np.fft.fft(sense)
freq = np.fft.fftfreq(N, delta)       # First argument is the number of samples, delta is the sample spacing in seconds

                                      #
                                      # Take the sample with the largest amplitude as our center frequency. 
                                      # This only works if the signal is heavily sinusoidal and stationary 
                                      # in nature, like our calibration data.
                                      #

idx = np.argmax(np.abs(W))            # Bin with the largest spectral amplitude
Frequency = abs(freq[idx])            # Frequency in Hz
period = 1/(Frequency*delta)          # Number of samples for one cycle of the test signal
                                      #
                                      # create an axis representing time.
                                      #

dt = [] # Create an x axis that represents elapsed time in seconds. delta = seconds per sample, i represents sample count
for i in range(0,len(sense)):
    dt.append(i*delta)


                                      #
                                      # At this point, we know the frequency interval, the delta, and we have the arrays 
                                      # for signal and laser. We can now discriminate out the peaks of each rebound and use them to process the damping ratio of either the 'undamped' system or the damping ratio of the 'electrically damped' system.
                                      #

print('Frequency calculated to %.3f Hz.' % Frequency)

Upvotes: 1

Views: 3689

Answers (2)

Clóvis Fritzen

Reputation: 153

May not be exactly what you are looking for, but I implemented this in MicroPython for the Raspberry Pi Pico. The idea is simple: you read the state of the signal and determine whether it is HIGH or LOW.

The next time you read the state of the signal it will again be HIGH or LOW. For a falling edge, what you are looking for is a signal that WAS HIGH and is now LOW; that's when you know you had a falling edge. Here is the piece of code I made:

if not button.value():                  # signal reads LOW now...
    if previousState == True:           # ...but was HIGH last time: falling edge
        led.toggle()
        previousState = False

So if button.value() is False (LOW) and previousState is True, I do led.toggle() (toggle an LED).

Here is the complete explanation: https://www.instructables.com/Rising-and-Falling-Edge-Detection-With-Raspberry-P/
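The same WAS-HIGH/is-now-LOW comparison can also be vectorized for NumPy arrays like the ones in the question (the threshold and sample signal below are made up for illustration):

```python
import numpy as np

def falling_edges(x, threshold):
    """Indices where x crosses from >= threshold down to < threshold."""
    high = x >= threshold              # boolean HIGH/LOW state of every sample
    # A falling edge is a sample that is LOW while its predecessor was HIGH.
    return np.where(high[:-1] & ~high[1:])[0] + 1

sig = np.array([0, 5, 9, 9, 3, 1, 6, 8, 2, 0])
print(falling_edges(sig, 5))           # -> [4 8]
```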

Upvotes: 0

Rufflewind

Reputation: 8956

Here's a somewhat unconventional idea, which I think might be fairly robust and doesn't require a lot of heuristics and guesswork. The data you have is really high quality and fits a known curve, which helps a lot here. Here I assume the "good part" of your curve has the form:

V = a * exp(-γ * t) * cos(2 * π * f * t + φ) + V0     # [Eq1]
  • V: voltage
  • t: time
  • γ: damping constant
  • f: frequency
  • a: starting amplitude
  • φ: starting phase
  • V0: DC offset

Outline of the algorithm

Getting rid of the offset

Firstly, calculate the derivative numerically. Since the data quality is quite high, the noise shouldn't affect things too much.

The voltage derivative V_deriv has the same form as the original data: same frequency and damping constant, but with a different phase ψ and amplitude b,

V_deriv = b * exp(-γ * t) * cos(2 * π * f * t + ψ)         # [Eq2]

The nice thing is that this will automatically get rid of your DC offset.

Alternative: This step isn't strictly needed – the offset is a relatively minor complication to the curve fitting, since you can always provide a good guess for the offset by averaging the entire curve. The trade-off is that you get a better signal-to-noise ratio if you don't use the derivative.
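As a sketch of the derivative step (the amplitude, decay rate, frequency, and offset are invented test values; np.gradient gives a second-order derivative estimate including the endpoints):

```python
import numpy as np

delta = 0.00767                        # sample spacing in seconds
t = np.arange(8192) * delta
# [Eq1] with made-up parameters and a large DC offset V0 = 512.
V = 2.0 * np.exp(-0.4 * t) * np.cos(2 * np.pi * 1.5 * t) + 512.0

V_deriv = np.gradient(V, delta)        # numerical dV/dt, same length as V

# The offset is gone: V_deriv oscillates about zero even though V sits near
# 512, and it still has the form [Eq2] with the same gamma and f.
```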

Preliminary guesswork

Now consider the curve of the derivative (or the original curve if you skipped the last step). Start with the data point on the very far right, and then follow the curve leftward as many oscillations as you can until you reach an oscillation whose amplitude is greater than some amplitude threshold A. You want to find a section of the curve containing one oscillation with a good signal-to-noise ratio.

How you determine the amplitude threshold is difficult to say. It depends on how good your sensors are. I suggest leaving this as a parameter for tweaking later.
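A toy version of the walk-back, stepping one period at a time from the right (the signal, period, and threshold A are all invented; in practice the period would come from an FFT estimate):

```python
import numpy as np

def last_strong_oscillation(y, period_samples, A):
    """Walk right-to-left one period at a time; return the start index of the
    rightmost window whose peak-to-peak amplitude exceeds the threshold A."""
    n = int(period_samples)
    start = len(y) - n
    while start >= 0:
        window = y[start:start + n]
        if window.max() - window.min() > A:
            return start               # found an oscillation with good SNR
        start -= n
    return None                        # nothing above the threshold

# Flat region, then a decaying cosine, roughly like the question's data.
delta = 0.00767
t = np.arange(3600) * delta
y = np.concatenate([np.zeros(2400),
                    np.exp(-0.3 * t) * np.cos(2 * np.pi * 1.5 * t)])

idx = last_strong_oscillation(y, 1 / (1.5 * delta), A=0.05)
```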

Curve-fitting

Now that you have captured one oscillation, you can do lots of easy things: estimate the frequency f and damping constant γ. Using these initial estimates, you can perform a nonlinear curve fit of all the data to the right of this oscillation (including the oscillation itself).

The function that you fit is [Eq2] if you're using the derivative, or [Eq1] if you use the original curve (but they have the same form anyway, so it's rather a moot point).

Why do you need these initial estimates? For a nonlinear curve fit to a decaying wave, it's critical that you give a good initial guess for the parameters, especially the frequency and damping constant. The other parameters are comparatively less important (at least that's my experience), but here's how you can get the others just in case:

  • The phase is 0 if you start from a maximum, or π if you start at a minimum.
  • You can guess the starting amplitude too, though the fitting algorithm will typically do fine even if you just set it to 1.
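A sketch of the fit using scipy.optimize.curve_fit on synthetic [Eq1] data (all parameter values here are invented; the frequency guess is seeded from an FFT peak, much as in the question):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, gamma, f, phi, V0):    # [Eq1]
    return a * np.exp(-gamma * t) * np.cos(2 * np.pi * f * t + phi) + V0

rng = np.random.default_rng(0)
t = np.arange(4096) * 0.00767
V = model(t, 1.8, 0.35, 1.4, 0.3, 100.0) + rng.normal(0, 0.01, t.size)

# Initial guesses: frequency from the FFT peak, offset from the mean,
# amplitude from the peak-to-peak range; gamma and phi are rough.
freqs = np.fft.rfftfreq(t.size, 0.00767)
f_guess = freqs[np.argmax(np.abs(np.fft.rfft(V - V.mean())))]
p0 = (np.ptp(V) / 2, 0.5, f_guess, 0.0, V.mean())

popt, pcov = curve_fit(model, t, V, p0=p0)
perr = np.sqrt(np.diag(pcov))          # one-sigma uncertainty estimates
```

popt recovers (a, γ, f, φ, V0), and pcov is where the uncertainty estimates come from.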

Finding the edge

The curve fit should be really good at this point – you can tell because the discrepancy between your curve and the fit is very low. Now the trick is to attempt to extend the domain of the curve to the left.

If you stay within the sinusoidal region, the curve fit should remain quite good (how you judge the "goodness" requires some experimenting). As soon as you hit the flat region of the curve, however, the errors will start to increase dramatically and the parameters will start to deviate. This can be used to determine where the "good" data ends.

You don't have to do this point by point, though – that's quite inefficient. A binary search should work quite well here (possibly the "one-sided" variation of it).
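A toy sketch of the binary search, where a linear least-squares residual at a fixed, already-known frequency stands in for the full nonlinear fit as the "goodness" measure:

```python
import numpy as np

F0 = 1.5                               # assume the frequency is already fitted

def fit_rms(y, t):
    """Toy goodness-of-fit: RMS residual of a linear least-squares fit of
    cos/sin at the known frequency F0 plus a constant offset."""
    A = np.column_stack([np.cos(2 * np.pi * F0 * t),
                         np.sin(2 * np.pi * F0 * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((y - A @ coef) ** 2)))

def find_edge(y, t, start, tol):
    """Binary-search the leftmost index from which the fit is still good;
    flat pre-release data to the left of the edge ruins the fit."""
    lo, hi = 0, start                  # invariant: the fit from hi is good
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fit_rms(y[mid:], t[mid:]) < tol:
            hi = mid                   # still good: edge is further left
        else:
            lo = mid                   # bad: mid is left of the edge
    return hi

# Flat region up to t = 8 s, then a pure cosine; the true edge is index 1600.
t = np.arange(4000) * 0.005
y = np.where(t < 8.0, 0.0, np.cos(2 * np.pi * F0 * (t - 8.0)))
edge = find_edge(y, t, start=3000, tol=0.05)
```

In practice fit_rms would rerun the nonlinear fit, and "good" would be judged against the residual level found in the curve-fitting step.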

Summary

You don't have to follow this exact procedure, but the basic gist is that you can perform some analysis on a small part of the data starting on the very right, and then gradually increase the time domain until you reach a point where you find that the errors are getting larger rather than smaller as they ought to be.

Furthermore, you can also combine different heuristics to see if they agree with each other. If they don't, then it's probably a rather sketchy data point that requires some manual intervention.

Note that one particular advantage of the algorithm I sketched above is that you will get the results you want (damping constant and frequency) as part of the process, along with uncertainty estimates.

I neglected to mention most of the gory mathematical and algorithmic details so as to provide a general overview, but if needed I can provide more detail.

Upvotes: 1
