asotoasty

Reputation: 27

Gauss-Legendre in MATLAB: How to use Newton's method to approximate k_1 and k_2?

Recently, as a project, I have been working on a MATLAB program designed to implement the Gauss-Legendre order-4 method for solving an IVP. Numerical analysis, and coding in particular, is something of a weakness of mine, so it has been rather tough going. I used the explicit Euler method to seed the method initially, but I have been stuck for a long time on how to use Newton's method to refine the values of k1 and k2.

Thus far, my code looks as follows:

```matlab
%Gauss-Butcher Order 4
function[y] = GBOF(f,fprime,y_0,maxit,ertol)
A = [1/4,1/4-sqrt(3)/6;1/4+sqrt(3)/6,1/4];
h = [1,0,0,0,0,0,0,0,0,0,0,0,0];
t = [0,0,0,0,0,0,0,0,0,0,0,0,0];
for n = 1:12
    h(n+1) = 1/(2^n);
    t(n+1) = t(n)+h(n);
end
y = zeros(size(t));
y(1) = y_0;
niter = 1;
%Declaration of Variables
for i = 1:12
    k = f(y(i));
    y1approx = y(i) + ((1/2-sqrt(3))*h(i)*k);
    y2approx = y(i) + ((1/2+sqrt(3))*h(i)*k);
    k1 = f(y1approx);
    k2 = f(y2approx);
%Initial guess for Newton seeding
    errorFunc =@(k1,k2) [k1-f(y(i) +A(1,1)*k1+A(1,2)*k2*h(i)); k2-f(y(i)+A(2,1)*k1+A(2,2)*k2*h(i))];
    error = errorFunc(k1,k2);
%Function for error and creation of error variable
    while norm(error) > ertol && niter < maxit
        niter = niter + 1;
        k1 = k1-f(k1)/fprime(k1);      % <-- highlighted
        k2 = k2-f(k2)/fprime(k2);      % <-- highlighted
        error = errorFunc(k1,k2);
%Newton-Raphson for estimating k1 and k2
    end
    y(i+1) = y(i) +(h(i)*(k1+k2)/2);

%Creation of next y value
end
disp(t);
```

The part of the code I believe is causing this to fail is highlighted. When I enter a basic IVP (e.g. y' = y, y(0) = 1), I get the output shown in the attached image (*output from problem entered*). Any input on how I could go about fixing this would be much appreciated. Thank you.

I have tried replacing the k1 and k2 terms with the expressions taken from the Butcher tableau, but nothing changed. I can't think of any other way to tackle this issue.

Upvotes: 0

Views: 170

Answers (1)

Lutz Lehmann

Reputation: 26040

The implicit system you have to solve is

k1 = f(y + h*(a11*k1 + a12*k2) )
k2 = f(y + h*(a21*k1 + a22*k2) )

This is also correct in your residual function errorFunc.

The naive way is just to iterate this system, like any other fixed-point iteration.
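For the scalar test problem y' = y this naive fixed-point iteration is only a few lines. Here is a minimal, runnable sketch (in Python for concreteness, using the Butcher matrix A from the question; the names are illustrative and the translation back to MATLAB is direct):

```python
import math

# Gauss-Legendre order-4 Butcher matrix, as in the question
a11, a12 = 1/4, 1/4 - math.sqrt(3)/6
a21, a22 = 1/4 + math.sqrt(3)/6, 1/4

def f(y):                      # test problem y' = y
    return y

def stages_fixed_point(y, h, tol=1e-12, maxit=100):
    """Iterate k_i = f(y + h*sum_j a_ij*k_j) until the update is tiny."""
    k1 = k2 = f(y)             # trivial starting guess
    for _ in range(maxit):
        k1n = f(y + h*(a11*k1 + a12*k2))
        k2n = f(y + h*(a21*k1 + a22*k2))
        if abs(k1n - k1) + abs(k2n - k2) < tol:
            return k1n, k2n
        k1, k2 = k1n, k2n
    return k1, k2

# one step of the method: y_{n+1} = y_n + h*(k1 + k2)/2
y, h = 1.0, 0.1
k1, k2 = stages_fixed_point(y, h)
y1 = y + h*(k1 + k2)/2
```

For y' = y the converged step reproduces the (2,2) Padé approximant of e^h, so y1 here agrees with exp(0.1) to roughly 8 digits.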

This system has a linearization with respect to h at the base point y:

k1 = f(y) + h*f'(y)*(a11*k1 + a12*k2) + O(h^2)
k2 = f(y) + h*f'(y)*(a21*k1 + a22*k2) + O(h^2)

Seen as a simple iteration, the contraction factor is O(h), so if h is small enough the factor is smaller than 1 and convergence is guaranteed, with the order of h in the residual increasing by 1 in each step. After 6 iterations the error in the implicit system is thus O(h^6), which is one order smaller than the local truncation error.

One can reduce the number of iterations if k1,k2 start with some higher-order estimates, not just with k1=k2=f(y).
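The effect of the starting guess is easy to measure on the test problem y' = y. The sketch below (Python for concreteness) compares the initial residual of the trivial start k1 = k2 = f(y) against an Euler-type predictor k_i = f(y + c_i*h*f(y)) at the Gauss nodes c_i, which is an assumed choice here, mirroring the y1approx/y2approx lines in the question:

```python
import math

a11, a12 = 1/4, 1/4 - math.sqrt(3)/6      # Gauss-Legendre order-4 tableau
a21, a22 = 1/4 + math.sqrt(3)/6, 1/4
c1, c2 = 1/2 - math.sqrt(3)/6, 1/2 + math.sqrt(3)/6   # Gauss nodes (row sums of A)
f = lambda y: y                           # test problem y' = y

def residual(k1, k2, y, h):
    """1-norm of the defect of the implicit stage equations."""
    r1 = k1 - f(y + h*(a11*k1 + a12*k2))
    r2 = k2 - f(y + h*(a21*k1 + a22*k2))
    return abs(r1) + abs(r2)

y, h = 1.0, 0.1
fy = f(y)
res_trivial = residual(fy, fy, y, h)                        # k1 = k2 = f(y): O(h)
res_euler = residual(f(y + c1*h*fy), f(y + c2*h*fy), y, h)  # Euler predictor: O(h^2)
# res_euler is roughly h times smaller than res_trivial
```

Since each fixed-point sweep reduces the residual by one power of h, the better start saves iterations up front.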


One can reduce the right side residual by removing the terms that are linear in h (on both sides of course).

k1 - h*f'(y)*(a11*k1 + a12*k2) = f(y + h*(a11*k1 + a12*k2) ) - h*f'(y)*(a11*k1 + a12*k2)
k2 - h*f'(y)*(a21*k1 + a22*k2) = f(y + h*(a21*k1 + a22*k2) ) - h*f'(y)*(a21*k1 + a22*k2)

The right side is evaluated at the current values, the left side is a linear system for the new values. So

K = K - solve(M, rhs)

with
K = [ k1; k2]

M = [  1 - h*f'(y)*a11, -h*f'(y)*a12 ;  -h*f'(y)*a21, 1 - h*f'(y)*a22  ]
  = I - h*f'(y)*A

rhs = [ k1 - f(y + h*(a11*k1 + a12*k2) ); k2 - f(y + h*(a21*k1 + a22*k2) ) ]
    = K - f(Y)

where 

Y = y+h*A*K

This should work as written for scalar equations; for systems it involves Kronecker products of matrices and vectors.
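For a scalar equation the pieces above fit together in a few lines. A runnable sketch (Python for concreteness; the 2x2 system M = I - h*f'(y)*A is solved here by Cramer's rule, which plays the role of `solve(M, rhs)`, i.e. `M\rhs` in MATLAB):

```python
import math

a11, a12 = 1/4, 1/4 - math.sqrt(3)/6      # Gauss-Legendre order-4 tableau
a21, a22 = 1/4 + math.sqrt(3)/6, 1/4

def f(y):  return y                       # test problem y' = y
def fp(y): return 1.0                     # its derivative f'(y)

def stages_simplified_newton(y, h, tol=1e-12, maxit=20):
    # M = I - h*f'(y)*A is frozen at the base point y for the whole step
    J = h*fp(y)
    m11, m12 = 1 - J*a11,     - J*a12
    m21, m22 =     - J*a21, 1 - J*a22
    det = m11*m22 - m12*m21
    k1 = k2 = f(y)                        # trivial starting guess
    for _ in range(maxit):
        r1 = k1 - f(y + h*(a11*k1 + a12*k2))   # rhs = K - f(Y)
        r2 = k2 - f(y + h*(a21*k1 + a22*k2))
        if abs(r1) + abs(r2) < tol:
            break
        # K = K - solve(M, rhs), with the 2x2 solve done by Cramer's rule
        k1 -= ( m22*r1 - m12*r2) / det
        k2 -= (-m21*r1 + m11*r2) / det
    return k1, k2

y, h = 1.0, 0.1
k1, k2 = stages_simplified_newton(y, h)
y1 = y + h*(k1 + k2)/2
```

For this linear test problem M is the exact Jacobian of the residual, so the iteration converges in a single update; for nonlinear f it contracts with factor O(h^2) as described above.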

As the linear part is taken out of the residual, the contraction factor in this new fixed-point iteration is O(h^2), possibly with smaller constants, so it converges faster and, it has been argued, for larger step sizes.

What you have in the code regarding the implicit method step shows the steps of the algorithm in the right order, but with wrong arguments in the function calls.

What you are doing with h is not recognizable. One could guess that the task is to explore the results of the method for a collection of step sizes. In that case h should be constant within each integration run, with the step size halved and the step number increased for the next run.
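Under that reading, halving h between runs should shrink the global error for y' = y, y(0) = 1 on [0, 1] by a factor of about 2^4 = 16 per run. A self-contained sketch of such an experiment (Python for concreteness; the fixed-point stage solver and the function names are illustrative):

```python
import math

a11, a12 = 1/4, 1/4 - math.sqrt(3)/6      # Gauss-Legendre order-4 tableau
a21, a22 = 1/4 + math.sqrt(3)/6, 1/4
f = lambda y: y                           # test problem y' = y, exact solution e^t

def step(y, h):
    # solve the stage equations by fixed-point iteration, then advance
    k1 = k2 = f(y)
    for _ in range(50):
        k1, k2 = f(y + h*(a11*k1 + a12*k2)), f(y + h*(a21*k1 + a22*k2))
    return y + h*(k1 + k2)/2

def integrate(h, T=1.0):
    # constant step size h for the whole run
    y = 1.0
    for _ in range(round(T/h)):
        y = step(y, h)
    return y

# halve h from run to run: the error should drop by roughly 16x each time
errors = [abs(integrate(2.0**-n) - math.e) for n in range(1, 6)]
```

Observing ratios near 16 between consecutive entries of `errors` confirms the method's fourth order.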

Upvotes: 1
