QR Decompositions

How to get from Gram-Schmidt to a QR decomposition

Problem Formulation

Given a square matrix $A\in\mathbb{R}^{n\times n}$, we want to find an orthogonal matrix $Q\in\mathbb{R}^{n\times n}$ and an upper triangular matrix $R\in\mathbb{R}^{n\times n}$ such that $A=QR$.
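As a quick illustration of the factorisation we are after, Julia's built-in qr (Householder-based, so not the algorithm derived below) already produces such a pair of factors:

```julia
using LinearAlgebra

# Illustration of the target factorisation using Julia's built-in qr
# (Householder-based, unrelated to the Gram-Schmidt construction below).
A = [2.0 1.0; 1.0 3.0]
F = qr(A)
Q = Matrix(F.Q)            # materialise the orthogonal factor
R = F.R                    # upper triangular factor
println(norm(A - Q * R))   # ≈ 0: the product recovers A
println(norm(Q' * Q - I))  # ≈ 0: Q is orthogonal
```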

From Gram-Schmidt to a QR decomposition

We’ve already done most of the work in the post on Gram-Schmidt. You may also find further information in a post on QR factorizations by Nick Higham. Applying the Gram-Schmidt orthogonalisation to the matrix $A$ gives an orthogonal matrix $Q$ whose leading column vectors span the same subspaces as those of $A$. Computing $Q^\top A$ therefore yields an upper triangular matrix (the entries of $Q^\top A$ are the scalar products $\langle Q_i, A_j\rangle$, which vanish for $i > j$). These scalar products are already computed during the Gram-Schmidt algorithm, so we only have to store them. In combination with classical Gram-Schmidt we obtain the following code.
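The key observation can be checked directly: for any $Q$ with orthonormal columns whose leading columns span the same subspaces as those of $A$, the product $Q^\top A$ is upper triangular. A small sketch, using the built-in qr merely to supply such a $Q$:

```julia
using LinearAlgebra

# Sketch: Qᵀ A is upper triangular when the leading columns of Q span
# the same subspaces as those of A (here Q comes from the built-in qr).
A = [4.0 1.0 2.0; 1.0 3.0 0.0; 2.0 0.0 5.0]
Q = Matrix(qr(A).Q)
T = Q' * A
println(norm(tril(T, -1)))  # strictly lower triangular part is ≈ 0
```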

using LinearAlgebra

function qr_classical_gram_schmidt(matrix)
    # perform a QR decomposition using classical Gram-Schmidt
    num_vectors = size(matrix)[2]
    orth_matrix = copy(matrix)
    r_matrix = zeros(num_vectors, num_vectors) # Initialise R matrix
    for vec_idx = 1:num_vectors
        for span_base_idx = 1:(vec_idx-1)
            # subtract the projection onto the previously computed basis vector
            r_matrix[span_base_idx, vec_idx] = dot(orth_matrix[:, span_base_idx], matrix[:, vec_idx]) # strict upper triangular part
            orth_matrix[:, vec_idx] -= r_matrix[span_base_idx, vec_idx]*orth_matrix[:, span_base_idx]
        end
        # normalise vector
        orth_matrix[:, vec_idx] = orth_matrix[:, vec_idx]/norm(orth_matrix[:, vec_idx])
        r_matrix[vec_idx, vec_idx] = dot(orth_matrix[:, vec_idx], matrix[:, vec_idx]) # main diagonal
    end
    return (orth_matrix, r_matrix)
end
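A usage sketch for the function above (restated compactly here so the snippet runs on its own; the short names are ours): on a small, well-conditioned matrix both defining properties hold up to rounding.

```julia
using LinearAlgebra

# Compact restatement of the classical Gram-Schmidt QR from the post,
# so this usage sketch is self-contained.
function qr_cgs(A)
    n = size(A, 2)
    Q = float(copy(A))
    R = zeros(n, n)
    for j = 1:n
        for i = 1:(j - 1)
            R[i, j] = dot(Q[:, i], A[:, j])   # strict upper triangle
            Q[:, j] -= R[i, j] * Q[:, i]
        end
        Q[:, j] /= norm(Q[:, j])
        R[j, j] = dot(Q[:, j], A[:, j])       # main diagonal
    end
    return Q, R
end

A = [4.0 1.0; 2.0 3.0]
Q, R = qr_cgs(A)
println(norm(A - Q * R))   # ≈ 0: the factorisation holds
println(norm(Q' * Q - I))  # ≈ 0: the columns are orthonormal
```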

In combination with the modified Gram-Schmidt we similarly obtain the following code.

using LinearAlgebra

function qr_modified_gram_schmidt(matrix)
    # perform a QR decomposition using modified Gram-Schmidt
    num_vectors = size(matrix)[2]
    orth_matrix = copy(matrix)
    r_matrix = zeros(num_vectors, num_vectors)
    for vec_idx = 1:num_vectors
        orth_matrix[:, vec_idx] = orth_matrix[:, vec_idx]/norm(orth_matrix[:, vec_idx])
        r_matrix[vec_idx, vec_idx] = dot(orth_matrix[:, vec_idx], matrix[:, vec_idx])
        for span_base_idx = (vec_idx+1):num_vectors
            # remove the component along the current basis vector from all later columns
            r_matrix[vec_idx, span_base_idx] += dot(orth_matrix[:, span_base_idx], orth_matrix[:, vec_idx])
            orth_matrix[:, span_base_idx] -= dot(orth_matrix[:, span_base_idx], orth_matrix[:, vec_idx])*orth_matrix[:, vec_idx]
        end
    end
    return (orth_matrix, r_matrix)
end

We observe a similar behaviour for the QR decomposition as for Gram-Schmidt. Using the same tests as in the post on Gram-Schmidt, we get the following results. Note that the error is again expressed in multiples of the machine epsilon.

using LinearAlgebra

error_qr_cgs_1 = []
error_qr_mgs_1 = []
error_qr_cgs_2 = []
error_qr_mgs_2 = []

for n = 1:10
    H = [1.0/(i+j-1) for i=1:2^n, j=1:2^n] + 0.00001 * I # Hilbert matrix with a small diagonal shift
    eye = Matrix{Float64}(I, 2^n, 2^n)     # Identity Matrix

    (Q_cgs, R_cgs) = qr_classical_gram_schmidt(H)
    (Q_mgs, R_mgs) = qr_modified_gram_schmidt(H)
    append!(error_qr_cgs_1, norm(eye - transpose(Q_cgs)*Q_cgs, Inf))
    append!(error_qr_mgs_1, norm(eye - transpose(Q_mgs)*Q_mgs, Inf))
    append!(error_qr_cgs_2, norm(H - Q_cgs*R_cgs, Inf))
    append!(error_qr_mgs_2, norm(H - Q_mgs*R_mgs, Inf))
end
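The plots below report these errors in multiples of the machine epsilon. A minimal sketch of that scaling, assuming Float64 data throughout (so eps(Float64) ≈ 2.2e-16):

```julia
using LinearAlgebra

# Sketch: convert a raw error into multiples of the machine epsilon,
# as used for the plots below (assumes Float64 data).
A = [2.0 1.0; 1.0 3.0]
Q = Matrix(qr(A).Q)
raw_error = norm(I - Q' * Q, Inf)
println(raw_error / eps(Float64))  # the same error expressed in "epsilons"
```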

In the code above we consider $I - Q^\top Q$ and $QR-A$ as error measures. In both cases we take the maximum absolute entry (Julia's norm(·, Inf) applied to a matrix treats it as a vector; the operator norm would be opnorm). For the orthogonality we obtain the next error plot.

Error evaluation

It’s no surprise that the error evolution for the orthogonality is identical to the one for Gram-Schmidt, since the orthogonalisation code is the same. Similarly, for $QR-A$ we obtain the following error plot.

Error evaluation

While the difference of the errors here is not as large as above, there is still a notable benefit to using modified Gram-Schmidt over classical Gram-Schmidt.

Notes

All the code was evaluated with Julia version 1.5.0 (2020-08-01) using the official Julia Docker image.

Creative Commons License This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License

Changelog

  • 2020/11/15: Added a link to an external article on QR factorizations.