RFon

Reputation: 118

Null space basis from QR decomposition with GSL

I'm trying to get the basis for the null space of a relatively large matrix, A^T, using GSL. So far I've been extracting right-singular vectors of the SVD corresponding to vanishing singular values, but this is becoming too slow for the sizes of matrices I'm interested in.
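To make the starting point concrete, the SVD route described above can be sketched in NumPy (an illustrative sketch, not my GSL code; it computes the null space of the 4×3 example matrix used below, and the 1e-10 threshold for "vanishing" is an assumption):

```python
import numpy as np

# Rank-deficient example: the second column is twice the first.
A = np.array([[3.0, 6, 1],
              [1.0, 2, 1],
              [1.0, 2, 1],
              [1.0, 2, 1]])

U, s, Vt = np.linalg.svd(A)
tol = 1e-10                 # assumed tolerance for "vanishing"
null_A = Vt[s < tol].T      # right-singular vectors for tiny singular values
print(np.allclose(A @ null_A, 0))   # True
```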

I know that the nullspace can be extracted as the last m-r columns of the Q-matrix in the QR decomposition of A, where r is the rank of A, but I'm not sure how rank-revealing decompositions work.

Here's my first attempt using gsl_linalg_QR_decomp:

#include <algorithm>   // std::min
#include <cstdio>
#include <iostream>
#include <gsl/gsl_linalg.h>

int m = 4;
int n = 3;
gsl_matrix* A = gsl_matrix_calloc(m, n);
gsl_matrix_set(A, 0,0, 3); gsl_matrix_set(A, 0,1, 6); gsl_matrix_set(A, 0,2, 1);
gsl_matrix_set(A, 1,0, 1); gsl_matrix_set(A, 1,1, 2); gsl_matrix_set(A, 1,2, 1);
gsl_matrix_set(A, 2,0, 1); gsl_matrix_set(A, 2,1, 2); gsl_matrix_set(A, 2,2, 1);
gsl_matrix_set(A, 3,0, 1); gsl_matrix_set(A, 3,1, 2); gsl_matrix_set(A, 3,2, 1);
std::cout << "A:" << std::endl;
for(int i=0;i<m;i++){ for(int j=0;j<n;j++) printf(" %5.2f",gsl_matrix_get(A,i,j)); std::cout<<std::endl;}

gsl_matrix* Q = gsl_matrix_alloc(m, m);
gsl_matrix* R = gsl_matrix_alloc(m, n);
gsl_vector* tau = gsl_vector_alloc(std::min(m, n));
gsl_linalg_QR_decomp(A, tau);        // A is overwritten with the packed QR factors
gsl_linalg_QR_unpack(A, tau, Q, R);
std::cout << "Q:" << std::endl;
for(int i=0;i<m;i++){ for(int j=0;j<m;j++) printf(" %5.2f",gsl_matrix_get(Q,i,j)); std::cout<<std::endl;}
std::cout << "R:" << std::endl;
for(int i=0;i<m;i++){ for(int j=0;j<n;j++) printf(" %5.2f",gsl_matrix_get(R,i,j)); std::cout<<std::endl;}

This outputs

A:
  3.00  6.00  1.00
  1.00  2.00  1.00
  1.00  2.00  1.00
  1.00  2.00  1.00
Q:
 -0.87 -0.29  0.41 -0.00
 -0.29  0.96  0.06 -0.00
 -0.29 -0.04 -0.64 -0.71
 -0.29 -0.04 -0.64  0.71
R:
 -3.46 -6.93 -1.73
  0.00  0.00  0.58
  0.00  0.00 -0.82
  0.00  0.00  0.00

but I'm not sure how to compute the rank, r, from this. My second attempt uses gsl_linalg_QRPT_decomp2, replacing the last part with

gsl_vector* tau = gsl_vector_alloc(std::min(m, n));
gsl_permutation* perm = gsl_permutation_alloc(n);
gsl_vector* norm = gsl_vector_alloc(n);
int sign = 0;   // sign of the permutation, filled in by the decomposition
gsl_linalg_QRPT_decomp2(A, Q, R, tau, perm, &sign, norm);
std::cout << "Q:" << std::endl;
for(int i=0;i<m;i++){ for(int j=0;j<m;j++) printf(" %5.2f",gsl_matrix_get(Q,i,j)); std::cout<<std::endl;}
std::cout << "R:" << std::endl;
for(int i=0;i<m;i++){ for(int j=0;j<n;j++) printf(" %5.2f",gsl_matrix_get(R,i,j)); std::cout<<std::endl;}
std::cout << "Perm:" << std::endl;
for(int i=0;i<n;i++) std::cout << " " << gsl_permutation_get(perm,i);

which results in

Q:
 -0.87  0.50  0.00  0.00
 -0.29 -0.50 -0.58 -0.58
 -0.29 -0.50  0.79 -0.21
 -0.29 -0.50 -0.21  0.79
R:
 -6.93 -1.73 -3.46
  0.00 -1.00  0.00
  0.00  0.00  0.00
  0.00  0.00  0.00
Perm:
 1 2 0

Here, I believe that the rank is the number of non-zero diagonal elements in R, but I'm not sure which elements to extract from Q. Which approach should I take?

Upvotes: 1

Views: 1205

Answers (1)

Ahmed Fasih

Reputation: 6937

For 4×3 A, the “null space” will consist of 3-dimensional vectors, whereas the QR decomposition on A only gives you 4-dimensional vectors. (And of course you can generalize this for A with size M×N where M > N.)

Therefore, take the QR decomposition of the transpose of A, whose Q is now 3×3.

Sketching the process using Python/Numpy in IPython (sorry, I can’t seem to figure out how to call gsl_linalg_QR_decomp using PyGSL):

In [16]: import numpy as np

In [17]: A = np.array([[3.0, 6, 1], [1.0, 2, 1], [1.0, 2, 1], [1.0, 2, 1]])

In [18]: Q, R = np.linalg.qr(A.T)  # <---- A.T means transpose(A)

In [19]: np.diag(R)
Out[19]: array([ -6.78232998e+00,   6.59380473e-01,   2.50010468e-17])

In [20]: np.round(Q * 1000) / 1000 # <---- Q to 3 decimal places
Out[20]:
array([[-0.442, -0.066, -0.894],
       [-0.885, -0.132,  0.447],
       [-0.147,  0.989,  0.   ]])

Out[19] (the result of np.diag(R)) tells us the column rank of A is 2: only the first two diagonal entries of R are significantly non-zero, while the third (about 2.5e-17) is zero to machine precision. And looking at the 3rd column of Out[20] (Q to three decimal places), we see that the right answer is returned: [-0.894, 0.447, 0] is proportional to [2, -1, 0], and we know this is right because the second column of A is twice the first.
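Putting the two steps together, the whole recipe looks like this (a sketch; the 1e-10 rank tolerance is an assumption, and in general a pivoted QR such as gsl_linalg_QRPT_decomp is safer, since plain QR is not guaranteed to push the zero diagonal entries of R to the bottom):

```python
import numpy as np

A = np.array([[3.0, 6, 1],
              [1.0, 2, 1],
              [1.0, 2, 1],
              [1.0, 2, 1]])

Q, R = np.linalg.qr(A.T)                     # QR of the transpose; Q is 3x3
r = int(np.sum(np.abs(np.diag(R)) > 1e-10))  # numerical rank from |R_ii|
null_A = Q[:, r:]                            # trailing columns span null(A)
print(r)                                     # 2
print(np.allclose(A @ null_A, 0))            # True
```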

Can you check with larger matrices that the QR decomposition of transpose(A) gives you the same null space as your current SVD method?
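One way to run that check is to compare the orthogonal projectors onto the two computed null spaces, which sidesteps differences in basis ordering and sign (a sketch on a random rank-deficient matrix; the sizes, seed, and 1e-10 tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 5))  # rank 3, so null(A) is 2-dim

# SVD route: right-singular vectors with vanishing singular values.
U, s, Vt = np.linalg.svd(A)
N_svd = Vt[s < 1e-10].T

# QR-of-transpose route: trailing columns of Q past the numerical rank.
Q, R = np.linalg.qr(A.T)
r = int(np.sum(np.abs(np.diag(R)) > 1e-10))
N_qr = Q[:, r:]

# The two bases span the same subspace iff their orthogonal projectors agree.
print(np.allclose(N_svd @ N_svd.T, N_qr @ N_qr.T))
```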

Upvotes: 0
