Lil Coder

Reputation: 23

Testing different values in a function in R

Right now I am trying to fit spectral data with a sum of Gaussian peaks.

I want to build a loop to iterate through combinations of values in a "guesstimate" range for each of the peak positions, absorbances, and bandwidths until my R^2 value is minimized.

However, I have no idea how to structure my code. I was thinking of running it through a while loop until the R^2 value is below a certain threshold, but I don't know how to run through all the combinations of my variables...

Does anyone have any pointers on where to start?

Thank you!

This is what I'm working with:

peak1P <- seq(200, 300, 1)  # Likely Peak Positions
peak2P <- seq(400, 550, 1)
peak3P <- seq(600, 750, 1)

peak1A <- seq(0, 0.6, 1)  # Likely Peak Absorbance
peak2A <- seq(0, 0.5, 1)
peak3A <- seq(0, 0.3, 1)

peak1B <- seq(0, 20, 1)  # Likely Peak Bandwidth
peak2B <- seq(0, 20, 1)
peak3B <- seq(100, 300, 1)

# Individual Peak Models
AbsG1 <- peak1A * exp(-4 * log(2) * (WvLnV/1500 - peak1P/1500)^2 / (peak1B/1500))
AbsG2 <- peak2A * exp(-4 * log(2) * (WvLnV/1500 - peak2P/1500)^2 / (peak2B/1500))
AbsG3 <- peak3A * exp(-4 * log(2) * (WvLnV/1500 - peak3P/1500)^2 / (peak3B/1500))

# Final Fit to compare with Data
AbsF <- AbsG1 + AbsG2 + AbsG3

Ideally, my code would run through each combination of elements in peak(1-3)P, peak(1-3)A, and peak(1-3)B until the error is minimized (my function for calculating the error is not shown here).
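For reference, one brute-force way to enumerate combinations is expand.grid(), which builds a data frame with one row per combination. The sketch below is only an illustration, not the poster's actual code: WvLnV and AbsData are assumed names for the wavelength vector and the measured spectrum, the grids are deliberately much coarser than the ones above (the full grids would give billions of combinations), and each peak uses a standard FWHM-parameterised Gaussian, which may differ from the scaling used in the question.

# coarse grid of candidate parameter combinations (one row per combination)
grid <- expand.grid(p1 = seq(200, 300, 25), p2 = seq(400, 550, 50),  p3 = seq(600, 750, 50),
                    a1 = seq(0, 0.6, 0.2),  a2 = seq(0, 0.5, 0.25),  a3 = seq(0, 0.3, 0.15),
                    b1 = seq(5, 20, 5),     b2 = seq(5, 20, 5),      b3 = seq(100, 300, 100))

# one Gaussian peak, parameterised by position, amplitude, and FWHM bandwidth;
# WvLnV is the (assumed) wavelength vector
peak <- function(pos, amp, bw) amp * exp(-4 * log(2) * (WvLnV - pos)^2 / bw^2)

# residual sum of squares between the model and the (assumed) data AbsData,
# evaluated for every row of the grid
sse <- apply(grid, 1, function(p) {
  fit <- peak(p["p1"], p["a1"], p["b1"]) +
         peak(p["p2"], p["a2"], p["b2"]) +
         peak(p["p3"], p["a3"], p["b3"])
  sum((AbsData - fit)^2)
})

grid[which.min(sse), ]  # best combination found on this coarse grid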

Upvotes: 0

Views: 56

Answers (1)

Sam Mason

Reputation: 16184

This sort of problem is termed "mixture modelling" in stats, and there are lots of standard approaches depending on the sort of data you have, how much you know about the underlying process that generated it, and the framework in which you want to model the results.

If you really don't want to use one of the existing packages, you'd follow the standard estimation process: write a function that takes some set of parameters and calculates how well they fit (I'd suggest probabilistic/likelihood-based approaches rather than R^2), then pass it to something like optim() or a Monte Carlo approach (e.g. MCMC). The idea is to start from a random guess and move the parameters around by small amounts to find the "best" set of parameters. Better approaches will also say how confident they are about the various estimates (e.g. the Hessian returned by optim(), or credible intervals from Bayesian MCMC approaches).
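To make that concrete, here is a minimal sketch of the optim() route described above, not the poster's actual error function: it assumes WvLnV and AbsData hold the wavelength vector and measured absorbance, uses an FWHM-parameterised Gaussian for each peak, minimises a Gaussian negative log-likelihood, and starts from guesses taken from the middle of the ranges in the question.

# sum of three Gaussian peaks; par = c(positions, amplitudes, bandwidths)
three_peaks <- function(par, x) {
  pos <- par[1:3]; amp <- par[4:6]; bw <- par[7:9]
  rowSums(sapply(1:3, function(i)
    amp[i] * exp(-4 * log(2) * (x - pos[i])^2 / bw[i]^2)))
}

# negative log-likelihood assuming independent Gaussian noise; par[10] is log(sd)
nll <- function(par) {
  mu <- three_peaks(par[1:9], WvLnV)
  -sum(dnorm(AbsData, mean = mu, sd = exp(par[10]), log = TRUE))
}

start <- c(250, 475, 675,    # peak positions
           0.3, 0.25, 0.15,  # absorbances
           10, 10, 200,      # bandwidths
           log(0.01))        # log noise sd
fit <- optim(start, nll, method = "BFGS", hessian = TRUE)

fit$par             # estimated positions, absorbances, bandwidths, log-sd
solve(fit$hessian)  # approximate covariance matrix of the estimates

In practice you'd probably also constrain the search (e.g. method = "L-BFGS-B" with lower/upper bounds) so the amplitudes and bandwidths stay positive and the positions stay inside their expected ranges.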

Upvotes: 1
