Flomat: Floating Point Matrices

Jens Axel Søgaard <jensaxel@soegaard.net>

 (require flomat) package: sci

This manual documents the matrix library flomat.

    1 Introduction

    2 Quick Tutorial

      2.1 Basic Properties

      2.2 Basic Indexing

      2.3 Matrix Creation

      2.4 Elementwise Operations

      2.5 Indexing, Submatrices and Iterating

      2.6 Basic Linear Algebra

      2.7 Matrix and Vector Products

      2.8 Matrix Decompositions

      2.9 Matrix Eigenvalues and Eigenvectors

      2.10 Norms and Invariants

      2.11 Solving Equations and Inverting Matrices

      2.12 Least Squares Problems

      2.13 Matrix Functions

    3 Installation

    4 Reference

      4.1 Representation

      4.2 Copying

      4.3 Simple Constructors

      4.4 Basic Properties

      4.5 Basic Indexing

      4.6 Matrix Creation

      4.7 Block Operations

      4.8 Elementwise Operations

      4.9 Matrix Operations

      4.10 Matrix and Vector Products

      4.11 Norms and Invariants

      4.12 Solving Equations and Inverting Matrices

      4.13 Matrix Decompositions

      4.14 Matrix Eigenvalues and Eigenvectors

      4.15 Least Squares Problems

      4.16 Matrix Functions

      4.17 Printing

      4.18 BLAS and LAPACK

      4.19 CBLAS Bindings

      4.20 LAPACK Bindings

1 Introduction

A matrix is a rectangular arrangement of numbers in rows and columns. This library provides functions to construct and compute with matrices whose elements are IEEE double precision floating point numbers. These numbers are referred to as flonums in the Racket manual, but the most common name for them is simply doubles.

Restricting the scope of the library to dense matrices of floating point numbers allows the implementation to use routines implemented in Fortran and C. The low-level routines consist of calls to functions in BLAS and LAPACK. BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage) are industry standard libraries and are available on all major platforms.

If you are in need of matrix algebra over more general numbers then look at the functional matrix library in Matrices and Linear Algebra.

This library can be used in a functional manner, but imperative operations are available. There are at least two reasons to use the imperative approach: 1) text books often describe matrix algorithms using imperative operations, and, 2) imperative operations can reduce the amount of memory needed during a computation.

The available operations can be divided into rough categories:
  • Level 1: High level - do what I mean

  • Level 2: Medium level - do what I mean this way

  • Level 3: Low level - do it using this underlying C-function

To use the library one can get by with level 1 operations, but if you understand the underlying representation, you can improve your algorithms using level 2 operations. For those who really want to squeeze out the last bit of performance, level 3 operations are available as well. The Quick Tutorial describes only level 1 and level 2 operations.

2 Quick Tutorial

This section shows how to do simple matrix computations. The beginning part of the tutorial describes working with matrices simply as arrays of numbers. The end shows how to do linear algebra.

2.1 Basic Properties

A flomat conceptually consists of a two-dimensional array of floating point numbers. An m\times n (m by n) matrix is divided into m rows and n columns. The rows are numbered 0, \ldots, m-1 and the columns are numbered 0, \ldots, n-1.

A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,n-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,n-1} \\ \vdots & \vdots & \cdots & \vdots \\ a_{m-1,0} & a_{m-1,1} & \cdots & a_{m-1,n-1} \\ \end{bmatrix} = \begin{bmatrix} {\Rule 20pt 0.5pt 0pt} \mspace 2pt a_{0,*} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt a_{1,*} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ \cdots \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt a_{{m-1},*} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \end{bmatrix} = \begin{bmatrix} \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_0 \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} & \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_1 \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \cdots \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_{n-1} \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \end{bmatrix}

The basic properties of a flomat can be examined using these functions:

(shape A) a list with the number of rows and columns
(size A) the number of elements in the matrix
(nrows A) the number of rows
(ncols A) the number of columns

> (define A (flomat: [[1 2 3]
                        [4 5 5]]))
> (shape A)

'(2 3)

> (size  A)

6

> (nrows A)

2

> (ncols A)

3

2.2 Basic Indexing

Since a matrix is divided into rows and columns we can refer to an element in the matrix by row and column numbers. The element on the i’th row and j’th column is referred to as the element with index (i,j).

The indices are zero-based, so a matrix with m rows and n columns has row-indices 0, 1, \ldots, m-1 and column-indices 0, 1, \ldots, n-1.

A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,n-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,n-1} \\ \vdots & \vdots & \cdots & \vdots \\ a_{m-1,0} & a_{m-1,1} & \cdots & a_{m-1,n-1} \\ \end{bmatrix}

(ref A i j) the element in A with index (i,j)
(mset! A i j x) change element in A with index (i,j) to x

(row A i) the i’th row of A
(col A j) the j’th column of A

Notice that row and column vectors are simply matrices with a single row and a single column respectively.

> (define A (flomat: [[1 2 3]
                        [4 5 5]]))
> (ref A 0 1)

2.0

> (row A 0)

(flomat: ((1.0 2.0 3.0)))

> (col A 1)

(flomat: ((2.0) (5.0)))

2.3 Matrix Creation

There are several ways of creating matrices.

Use matrix to create a flomat from existing Racket data. It can convert vector-of-vectors and list-of-lists representations of matrices into the flomat representation. A vector of numbers or a list of numbers will be converted into a column vector (a matrix with only one column).

Any non-floating point numbers will be converted to floating point. The function matrix also accepts f64vectors as input.

(matrix obj) create a matrix with values from obj

> (matrix '[[1/2 1/3] [4 5]])

(flomat: ((0.5 0.3333333333333333) (4.0 5.0)))

> (matrix #[#[1 2 3] #[4 5 6]])

(flomat: ((1.0 2.0 3.0) (4.0 5.0 6.0)))

> (matrix (list 1 2 3))

(flomat: ((1.0) (2.0) (3.0)))

> (matrix (vector 1 2 3))

(flomat: ((1.0) (2.0) (3.0)))

> (matrix (f64vector 1 2 3))

(flomat: ((1.0) (2.0) (3.0)))

After conversion the created flomat will contain a pointer to a newly allocated piece of memory containing the floating point numbers. If you happen to work with data in the form of f64vectors, then you can avoid the allocation by using matrix! instead. If the same f64vector is used to create two matrices with matrix!, they will share the same backing array - so setting an element in one matrix will affect the other.

(matrix! obj) create a matrix with values from obj, avoiding allocation of a backing array if possible

> (define v (f64vector 1 2 3))
> (define A (matrix! v))
> (define B (matrix! v))
> (list A B)

'((flomat: ((1.0) (2.0) (3.0))) (flomat: ((1.0) (2.0) (3.0))))

> (mset! A 0 0 42)

(flomat: ((42.0) (2.0) (3.0)))

> (list A B)

'((flomat: ((42.0) (2.0) (3.0))) (flomat: ((42.0) (2.0) (3.0))))

For comparison, the same example with matrix:
> (define v (f64vector 1 2 3))
> (define A (matrix v))
> (define B (matrix v))
> (list A B)

'((flomat: ((1.0) (2.0) (3.0))) (flomat: ((1.0) (2.0) (3.0))))

> (mset! A 0 0 42)

(flomat: ((42.0) (2.0) (3.0)))

> (list A B)

'((flomat: ((42.0) (2.0) (3.0))) (flomat: ((1.0) (2.0) (3.0))))

In order to create a matrix of a specific size with all zeros or all ones, use the functions zeros and ones. Use eye to make a matrix with ones on a diagonal.

(zeros n) create a square n\times n matrix with all zeros
(zeros m n) create a m\times n matrix with all zeros
(ones n) create a square n\times n matrix with all ones
(ones m n) create a m\times n matrix with all ones
(eye m n k) create a m\times n matrix with ones on the k’th diagonal

The arguments n and k are optional for eye and default to m and 0 respectively.

> (zeros 2)

(flomat: ((0.0 0.0) (0.0 0.0)))

> (zeros 2 3)

(flomat: ((0.0 0.0 0.0) (0.0 0.0 0.0)))

> (ones 2)

(flomat: ((1.0 1.0) (1.0 1.0)))

> (ones 2 3)

(flomat: ((1.0 1.0 1.0) (1.0 1.0 1.0)))

> (list (eye 3) (eye 3 4) (eye 3 3 1) (eye 3 3 -1))

'((flomat: ((1.0 0.0 0.0) (0.0 1.0 0.0) (0.0 0.0 1.0)))

  (flomat: ((1.0 0.0 0.0 0.0) (0.0 1.0 0.0 0.0) (0.0 0.0 1.0 0.0)))

  (flomat: ((0.0 1.0 0.0) (0.0 0.0 1.0) (0.0 0.0 0.0)))

  (flomat: ((0.0 0.0 0.0) (1.0 0.0 0.0) (0.0 1.0 0.0))))

To create ranges of values use arange or colarange, which both work like (matrix (range start stop step)) but avoid building an intermediary list. The functions arange and colarange produce row and column vectors respectively. The vector created has length

\left\lceil \frac{\text{stop}-\text{start}}{\text{step}} \right\rceil.

(arange start stop step) create a row vector with values from start to stop (exclusively), here step is the gap between values
(arange start stop) like (arange start stop 1.0)
(arange stop) like (arange 0.0 stop 1.0)

(colarange start stop step)
(colarange start stop)
(colarange stop)
like arange but produces a column vector.

> (arange 5 10 2)

(flomat: ((5.0 7.0 9.0)))

> (arange 5 10)

(flomat: ((5.0 6.0 7.0 8.0 9.0)))

> (arange 5)

(flomat: ((0.0 1.0 2.0 3.0 4.0)))

> (colarange 5 10 2)

(flomat: ((5.0) (7.0) (9.0)))

> (colarange 5 10)

(flomat: ((5.0) (6.0) (7.0) (8.0) (9.0)))

> (colarange 5)

(flomat: ((0.0) (1.0) (2.0) (3.0) (4.0)))

As an alternative to arange consider using linspace, which allows you to provide an exact endpoint.

(linspace start stop num)
return a column vector with num numbers evenly spaced from start to stop
(linspace start stop num #f)
like (linspace start stop num) but omit the last number

> (linspace 2 4 6)

(flomat: ((2.0) (2.4) (2.8) (3.2) (3.6) (4.0)))

> (linspace 2 4 6 #f)

(flomat: ((2.0) (2.4) (2.8) (3.2) (3.6)))

Sometimes it is possible to keep the elements of a matrix but change its shape.

(reshape A m n)
return a matrix with shape m\times n using the elements of A
(reshape! A m n)
return a matrix with shape m\times n using the elements of A, share the backing area with A

> (arange 9)

(flomat: ((0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0)))

> (reshape (arange 9) 3 3)

(flomat: ((0.0 3.0 6.0) (1.0 4.0 7.0) (2.0 5.0 8.0)))

> (transpose (reshape (arange 9) 3 3))

(flomat: ((0.0 1.0 2.0) (3.0 4.0 5.0) (6.0 7.0 8.0)))

2.4 Elementwise Operations

Elementwise operations (also called pointwise operations) work on each element. The names of the operations begin with a point. Besides the elementwise versions of the standard arithmetic operations, the standard numerical functions also have elementwise counterparts. Binary operators work both on two matrices (of the same shape) and on a number and a matrix.

Formally, for a function f of one or two arguments, the corresponding pointwise function .f satisfies:

.f( \begin{bmatrix} a_{ij} \end{bmatrix}) = \begin{bmatrix} f(a_{ij}) \end{bmatrix} \textrm{ or } .f( \begin{bmatrix} a_{ij} \end{bmatrix}, \begin{bmatrix} b_{ij} \end{bmatrix}) = \begin{bmatrix} f(a_{ij},b_{ij}) \end{bmatrix}

(.+ A B)
(.- A B) and (.- A)
(.* A B)
(./ A B) and (./ A)
Elementwise versions of the arithmetical operations. The operations return the result as a new matrix.

Note that .* is elementwise multiplication. Use times to multiply two matrices in the linear algebra sense.

> (define A (matrix '((0 1) (2 3))))
> (define B (matrix '((4 5) (6 7))))
> (.- A)

(flomat: ((-0.0 -1.0) (-2.0 -3.0)))

> (./ A)

(flomat: ((+inf.0 1.0) (0.5 0.3333333333333333)))

> (.+ A B)

(flomat: ((4.0 6.0) (8.0 10.0)))

> (.- A B)

(flomat: ((-4.0 -4.0) (-4.0 -4.0)))

> (.* A B)

(flomat: ((0.0 5.0) (12.0 21.0)))

> (./ A B)

(flomat: ((0.0 0.2) (0.3333333333333333 0.42857142857142855)))
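For comparison, times computes the matrix product rather than the elementwise product. A small sketch, reusing the A and B defined above (the printed results follow from the definitions):

> (times A B)

(flomat: ((6.0 7.0) (26.0 31.0)))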

One of the arguments can be a number:
> (define A (matrix '((0 1) (2 3))))
> (.+ A 1)

(flomat: ((1.0 2.0) (3.0 4.0)))

> (.- A 2)

(flomat: ((-2.0 -1.0) (0.0 1.0)))

> (.* 3 B)

(flomat: ((12.0 15.0) (18.0 21.0)))

> (./ A 4)

(flomat: ((0.0 0.25) (0.5 0.75)))

The elementwise versions of the standard numerical functions are:

(.sin A) (.cos A) (.tan A) (.exp A) (.log A) (.sqr A) (.sqrt A) (.expt A B)

> (define A (matrix '((0 1) (2 3))))
> (.sqr A)

(flomat: ((0.0 1.0) (4.0 9.0)))

> (.expt A 2)

(flomat: ((0.0 1.0) (4.0 9.0)))

> (.expt 2 A)

(flomat: ((1.0 2.0) (4.0 8.0)))

The elementwise operations above all allocate a new matrix. If you instead want to modify the elements of an existing matrix, the following functions are for you.

(.-! A) (./! A) (.sin! A) (.cos! A) (.tan! A) (.exp! A) (.log! A) (.sqr! A) (.sqrt! A) (.expt! A B) (.+! A B) (.-! A B) (.*! A B) (./! A B)

For binary operations, the result is stored in the first argument.
> (define A (matrix '((0 1) (2 3))))
> (.-! A)

(flomat: ((-0.0 -1.0) (-2.0 -3.0)))

> (.-! A)

(flomat: ((0.0 1.0) (2.0 3.0)))

> (.expt! A 2)

(flomat: ((0.0 1.0) (4.0 9.0)))

> (.expt! A 2)

(flomat: ((0.0 1.0) (16.0 81.0)))

Also, if you want to store the result of an elementwise operation in another matrix C, you can do as follows for the unary operations:

(.sin! A C) (.cos! A C) (.tan! A C) (.exp! A C) (.log! A C) (.sqr! A C) (.sqrt! A C)

And for the binary operations:

(.+! A B C) (.-! A B C) (.*! A B C) (./! A B C)
> (define A (matrix '((0 1) (2 3))))
> (define B (matrix '((4 5) (6 7))))
> (.sqr! B A)

(flomat: ((16.0 25.0) (36.0 49.0)))

> A

(flomat: ((16.0 25.0) (36.0 49.0)))

Finally, for .-! and ./!, which are both unary and binary operations, use #f as B to get the unary version.

> (define A (matrix '((0 1) (2 3))))
> (define B (matrix '((4 5) (6 7))))
> (.-! B #f A)

(flomat: ((-4.0 -5.0) (-6.0 -7.0)))

> A

(flomat: ((-4.0 -5.0) (-6.0 -7.0)))

2.5 Indexing, Submatrices and Iterating

From the section on Basic Indexing we know that the element on row i in column j, has index (i,j) and can be extracted with the function ref.

The i’th row and the j’th column can be extracted with row and col respectively.

To get a submatrix use sub and sub!.

(sub A i j m n)
Make a copy of the submatrix of A with upper left corner in (i,j) and with size m\times n.

(sub! A i j m n)
Same as sub, but the elements are not copied - the underlying array of flonums is shared.

> (define A (transpose (reshape (arange 25) 5 5)))
> A

(flomat:

 ((0.0 1.0 2.0 3.0 4.0)

  (5.0 6.0 7.0 8.0 9.0)

  (10.0 11.0 12.0 13.0 14.0)

  (15.0 16.0 17.0 18.0 19.0)

  (20.0 21.0 22.0 23.0 24.0)))

> (sub A 0 0 3 2)

(flomat: ((0.0 1.0) (5.0 6.0) (10.0 11.0)))

> (sub A 1 1 2 2)

(flomat: ((6.0 7.0) (11.0 12.0)))

The function sub! can be used to mutate part of a larger matrix.

Let’s say we have a matrix, in which we want to zero out all elements except those on the edges. We can use sub! to get a submatrix of the inner part, then use zeros! to clear the elements.

> (define A (transpose (reshape (arange 10 35) 5 5)))
> A

(flomat:

 ((10.0 11.0 12.0 13.0 14.0)

  (15.0 16.0 17.0 18.0 19.0)

  (20.0 21.0 22.0 23.0 24.0)

  (25.0 26.0 27.0 28.0 29.0)

  (30.0 31.0 32.0 33.0 34.0)))

> (define B (sub! A 1 1 3 3))
> B

(flomat: ((16.0 17.0 18.0) (21.0 22.0 23.0) (26.0 27.0 28.0)))

> (zeros! B)
> A

(flomat:

 ((10.0 11.0 12.0 13.0 14.0)

  (15.0 0.0 0.0 0.0 19.0)

  (20.0 0.0 0.0 0.0 24.0)

  (25.0 0.0 0.0 0.0 29.0)

  (30.0 31.0 32.0 33.0 34.0)))

To iterate over a row or a column use in-row and in-col.

> (define A (matrix '((11 22) (33 44))))
> (for/list ([   x  (in-row A 0)]) x)

'(11.0 22.0)

> (for/list ([(i x) (in-row A 0)]) (list x i))

'((11.0 0) (22.0 1))

> (for/list ([   x  (in-col A 0)]) x)

'(11.0 33.0)

> (for/list ([(i x) (in-col A 0)]) (list x i))

'((11.0 0) (33.0 1))

2.6 Basic Linear Algebra

The basic linear algebra operations are plus, minus and times, which compute the sum, difference and product of a series of matrices.

(plus A ...) (minus A ...) (times A ...)
Computes the sum, difference and product of a series of matrices and/or numbers.

> (define A (matrix '((2 0) (0 2))))
> (define B (matrix '((1 2) (3 4))))
> (define C (column 4 5))
> (plus A B)

(flomat: ((3.0 2.0) (3.0 6.0)))

> (plus A 10)

(flomat: ((12.0 10.0) (10.0 12.0)))

> (plus A 10 B)

(flomat: ((13.0 12.0) (13.0 16.0)))

> (minus A)

(flomat: ((-2.0 -0.0) (-0.0 -2.0)))

> (minus A B)

(flomat: ((1.0 -2.0) (-3.0 -2.0)))

> (times A B)

(flomat: ((2.0 4.0) (6.0 8.0)))

> (times A 2 B)

(flomat: ((4.0 8.0) (12.0 16.0)))

> (times A C)

(flomat: ((8.0) (10.0)))

As usual, there are variants that mutate the first given matrix instead of allocating a new backing array of flonums.

(plus! A B ...) (minus! A B ...)
Like plus and minus but stores the result in A, which must be a matrix.

> (define A (matrix '((2 0) (0 2))))
> (define B (matrix '((0 2) (2 0))))
> (plus! A B)

(flomat: ((2.0 2.0) (2.0 2.0)))

> A

(flomat: ((2.0 2.0) (2.0 2.0)))

(power A n)
Computes the n’th power of a matrix A, where n is a natural number.

> (define A (matrix '((1 1) (0 1))))
> (list (power A 0) (power A 1) (power A 2) (power A 3))

'((flomat: ((1.0 0.0) (0.0 1.0)))

  (flomat: ((1.0 1.0) (0.0 1.0)))

  (flomat: ((1.0 2.0) (0.0 1.0)))

  (flomat: ((1.0 3.0) (0.0 1.0))))

2.7 Matrix and Vector Products

The inner product (also known as the dot product) of two column vectors can be computed by dot.

(dot v w)
Computes the inner product of two column vectors (i.e. matrices with only one column).

> (define v (column -1 1))
> (define w (matrix '((2) (2))))
> (dot v w)

0.0

The outer product of a column vector A with m rows and a row vector B with n columns is an m\times n matrix O with elements o_{i,j} = a_i\cdot b_j.

(outer A B)
Computes the outer product of the first column of A and the first row of B.

> (define A (column 2 3))
> (define B (transpose (column 5 7)))
> (outer A B)

(flomat: ((10.0 14.0) (15.0 21.0)))

The Kronecker product of two matrices A and B replaces each element a of A with a copy of B scaled by a. The Kronecker product is a generalization of the outer product.

(kron A B)
Computes the Kronecker product of the matrices A and B.

> (define A (matrix '((1 2) (3 4))))
> (define B (matrix '((1 1) (1 1))))
> (kron A B)

(flomat:

 ((1.0 1.0 2.0 2.0) (1.0 1.0 2.0 2.0) (3.0 3.0 4.0 4.0) (3.0 3.0 4.0 4.0)))

2.8 Matrix Decompositions

(cholesky A) (qr A) (svd A)
Computes the Cholesky, QR and SVD decompositions respectively.

The Singular Value Decomposition (SVD) returns three matrices: a unitary matrix U, a column vector of singular values S and a unitary matrix V^T (V transposed). The function diag constructs a diagonal matrix from the singular values.

> (define A (matrix '((1 2) (3 4))))
> (define-values (U S VT) (svd A))
> (define Σ (diag S))
> (list U Σ VT S)

'((flomat:

   ((-0.4045535848337568 -0.9145142956773044)

    (-0.9145142956773044 0.4045535848337568)))

  (flomat: ((5.464985704219043 0.0) (0.0 0.3659661906262574)))

  (flomat:

   ((-0.5760484367663209 -0.8174155604703631)

    (0.8174155604703631 -0.5760484367663209)))

  (flomat: ((5.464985704219043) (0.3659661906262574))))

> (times U Σ VT)

(flomat: ((1.0000000000000002 1.999999999999999) (3.0 3.999999999999999)))

The QR Decomposition of A consists of two matrices: an orthogonal matrix Q and an upper triangular matrix R such that A=QR.

> (define A (matrix '((1 2) (3 4))))
> (define-values (Q R) (qr A))
> (list Q R)

'((flomat:

   ((-0.316227766016838 -0.9486832980505138)

    (-0.9486832980505138 0.316227766016838)))

  (flomat:

   ((-3.1622776601683795 -4.427188724235731) (0.0 -0.6324555320336751))))

> (times Q R)

(flomat: ((1.0000000000000002 1.9999999999999996) (3.0 4.0)))

If the matrix A is symmetric and positive-definite, then the Cholesky decomposition can be computed. It comes in two forms

A = L L^T \textrm{ or } A = U^T U,

where L and U are lower and upper triangular matrices.

Note: cholesky does not check that the input matrix A is symmetric and positive definite.

> (define A (matrix '((1 2) (2 4))))
> (define L (cholesky A))
> (list L (transpose L))

'((flomat: ((1.0 0.0) (2.0 0.0))) (flomat: ((1.0 2.0) (0.0 0.0))))

> (times L (transpose L))

(flomat: ((1.0 2.0) (2.0 4.0)))

> (define U (cholesky A 'upper))
> (list (transpose U) U)

'((flomat: ((1.0 0.0) (2.0 0.0))) (flomat: ((1.0 2.0) (0.0 0.0))))

> (times (transpose U) U)

(flomat: ((1.0 2.0) (2.0 4.0)))

2.9 Matrix Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors of a square matrix can be computed with eig or, if only the eigenvalues are needed, with eigvals. Note that even if all elements of a matrix are real, the eigenvalues can be complex. Therefore the eigenvalues are returned as a standard Racket vector.

(eig A)
Compute eigenvalues and right eigenvectors.

(eigvals A)
Compute eigenvalues.

> (eig     (diag '(1 2)))

'#(1.0 2.0)

(flomat: ((1.0 0.0) (0.0 1.0)))

> (eigvals (diag '(1 2)))

'#(1.0 2.0)

> (eig     (matrix '((1 -1) (1 1))))

'#(1.0+1.0i 1.0-1.0i)

(flomat: ((0.7071067811865475 0.0) (0.0 -0.7071067811865475)))

2.10 Norms and Invariants

The standard Frobenius norm |\cdot| can be computed by norm. For a column vector the norm is sometimes referred to as the length.

(norm A)
Compute the square root of the sum of the squares of all elements.

> (norm    (matrix '((1 1))))

1.4142135623730951

> (norm    (matrix '((1 -1) (-1 1))))

2.0

(det A)
Computes the determinant of a square matrix A.

> (det  (matrix '((1 2) (0 4))))

4.0

> (det  (matrix '((1 1) (2 2))))

-0.0

(trace A)
Computes the trace, the sum along the diagonal, of a matrix.

> (trace (matrix '((1 2) (0 4))))

5.0

(rank A)
Computes the rank of a square matrix. The rank is the dimension of the column space, which is equal to the dimension of the row space, which is equal to the number of non-zero singular values in an SVD decomposition.

> (rank  (matrix '((1 2) (0 4))))

2

> (rank  (matrix '((1 1) (2 2))))

1

2.11 Solving Equations and Inverting Matrices

Solving linear equations is more or less the raison d’être for matrices. The main workhorse is mldivide, which can solve for X in the equation:

AX = B,

where A is an m\times m matrix, and both X and B are m\times n.

Note that A needs to be of full rank for the equation to have a unique solution. The solver doesn’t check that the input matrix has full rank; it just runs its computation as usual. To check that the output from mldivide is indeed a solution, you can evaluate (times A X) and compare with B. The name mldivide is short for "matrix left divide" (think X=A\backslash B). Although mldivide doesn’t find X by multiplying B with A^{-1} on the left, it is a fitting analogy.

(mldivide A B)
Solve the equation AX = B using LU decomposition with partial pivoting. The matrix A must be square and of full rank; the number of rows in A must be the same as the number of rows in B.

> (define A (matrix '((1 2) (3 4))))
> (define B (matrix '((1) (0))))
> (define X (mldivide A B))
> (list X (times A X))

'((flomat: ((-1.9999999999999998) (1.4999999999999998)))

  (flomat: ((0.9999999999999998) (0.0))))

(mrdivide B A)
Solve the equation XA = B. The name mrdivide is short for "matrix right divide" (think X=B/A).

> (define A (matrix '((1 2) (3 4))))
> (define B (matrix '((2 4) (6 8))))
> (define X (mrdivide B A))
> (list X (times X A))

'((flomat: ((2.0 0.0) (0.0 2.0))) (flomat: ((2.0 4.0) (6.0 8.0))))

(inv A)
Find the multiplicative inverse of a square matrix A.

> (define A    (matrix '((1 2) (3 4))))
> (define Ainv (inv A))
> (list Ainv (times A Ainv))

'((flomat:

   ((-1.9999999999999996 0.9999999999999998)

    (1.4999999999999998 -0.4999999999999999)))

  (flomat: ((1.0 0.0) (8.881784197001252e-16 0.9999999999999996))))

An inverse of A can be used to solve AX=B, but using mldivide directly is normally better. However, let’s try to solve the equation from the previous example.

> (define B    (matrix '((1) (0))))
> (define X    (times Ainv B))
> (list X (times A X))

'((flomat: ((-1.9999999999999996) (1.4999999999999998)))

  (flomat: ((1.0) (8.881784197001252e-16))))

(pinv A)
Find the Moore-Penrose pseudo-inverse A^+ of the matrix A. The matrix A does not need to be square. The pseudo-inverse of an m\times n matrix is of size n\times m.

> (define A  (matrix '((1 2) (3 4))))
> (define A+ (pinv A))
> (list A+ (times A+ A A+) (times A A+ A))

'((flomat:

   ((-2.0000000000000018 1.0000000000000007)

    (1.5000000000000016 -0.5000000000000008)))

  (flomat:

   ((-2.000000000000003 1.000000000000001)

    (1.5000000000000029 -0.5000000000000014)))

  (flomat:

   ((0.9999999999999987 1.9999999999999991)

    (2.999999999999997 3.9999999999999964))))

> (define B  (matrix '((1 2 3) (4 5 6))))
> (define B+ (pinv B))
> (list B+ (times B+ B B+) (times B B+ B))

'((flomat:

   ((-0.9444444444444443 0.4444444444444446)

    (-0.11111111111111176 0.11111111111111141)

    (0.7222222222222227 -0.22222222222222254)))

  (flomat:

   ((-0.9444444444444445 0.44444444444444486)

    (-0.11111111111111169 0.11111111111111142)

    (0.7222222222222227 -0.2222222222222227)))

  (flomat:

   ((0.9999999999999991 1.999999999999999 2.9999999999999987) (4.0 5.0 6.0))))

2.12 Least Squares Problems

Let A be an m\times n matrix and let b be an m\times 1 column vector. Depending on A, the equation Ax=b may not have a unique solution - or any solution at all.

As an alternative, one can look for the vector x that minimizes:

|Ax-b|_2,
where |\cdot|_2 is the Euclidean 2-norm.

The function lstsq returns the minimum norm solution x of the problem above.

If lstsq is given an m\times k matrix B, then the problem will be solved for each column b of B.

(lstsq A B)
Find the minimum norm solution to the least squares problem: minimize |Ax-b|, for each column b of a larger matrix B.

As an example, let’s look at estimating b_0 and b_1 in the model:

y=b_0\cdot x+b_1
given a data set consisting of corresponding x- and y-values. The calculation below reveals that the relation between x and y is y=2x+1.

The matrix X is called the design matrix of the problem. See Design Matrix at Wikipedia. In this case the design matrix has two columns: the first contains the x-values, the second contains ones.

> (define xs (column 0 1 2 3))
> (define ys (column 1 3 5 7))
> (define X  (augment xs (flomat-ones (nrows xs) 1)))
> X

(flomat: ((0.0 1.0) (1.0 1.0) (2.0 1.0) (3.0 1.0)))

> (define B  (lstsq X ys))
> B

(flomat: ((2.000000000000001) (0.9999999999999989)))

2.13 Matrix Functions

(expm A)
Compute the matrix exponential \exp(A).

> (list (exp 1) (exp 2))

'(2.718281828459045 7.38905609893065)

> (expm (matrix '((1 0) (0 2))))

(flomat: ((2.7182818284590446 0.0) (0.0 7.389056098930647)))

> (expm (matrix '((1 2) (3 4))))

(flomat:

 ((51.96895619870492 74.73656456700309)

  (112.10484685050463 164.07380304920952)))

3 Installation

The package flomat is part of sci, so to install it, write this in a terminal:

raco pkg install sci

Alternatively, use the Package Manager in DrRacket.

The package relies on the shared libraries CBLAS and LAPACK. Depending on your OS, you might need to install these yourself.

On macOS both CBLAS and LAPACK are part of the Accelerate Framework, which is distributed by Apple. This means no extra installation is needed.

On Linux you need copies of CBLAS and LAPACK. Since BLAS and LAPACK exist in multiple versions, a little care is needed. On most systems libblas provides the Fortran version and libcblas the C version, so get the latter. However, on Debian it turns out libblas exports the names used by CBLAS, so either ought to be fine.

On Windows: a tester is needed. Install CBLAS and LAPACK and let me know if it works. Otherwise make an issue at GitHub and we will add the proper paths.

4 Reference

4.1 Representation

An m\times n matrix conceptually consists of m\cdot n floating point numbers arranged in m rows and n columns. Concretely, the floating point numbers are stored in a one-dimensional array in column-major order. This means that the entries in each column are stored together in the array.

Given the address of an entry, the leading dimension ld is the amount to add to get the address of the next entry in the same row.

For matrices with no gaps between columns in the array, the leading dimension and the number of rows are the same: ld=m.

For matrices with gaps between columns in the array, the leading dimension might be larger than the number of rows ld>m.

Allowing gaps in the array allows submatrices of a larger matrix to share the underlying array.

As an example, let’s look at a 2\times 3 matrix with leading dimension 5.

A = \begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \end{bmatrix}

The underlying array is:

[\underbrace{\overbrace{a_{00},a_{10}}^{\text{first column}},?,?,?}_{\text{ld}=5}, \underbrace{\overbrace{a_{01},a_{11}}^{\text{second column}},?,?,?}_{\text{ld}=5}, \underbrace{\overbrace{a_{02},a_{12}}^{\text{third column}},?,?,?}_{\text{ld}=5}]

Notice that the length of the underlying array is n\cdot\text{ld}=3\cdot 5=15.

The main takeaway is that:

  1. A matrix has an underlying array.

  2. The entries in a column are stored together.

  3. The underlying array can be shared between matrices.

  4. There can be gaps between columns in the array.

The exact details of leading dimensions are mostly relevant if you need to call BLAS or LAPACK functions directly.
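The sharing described in point 3 can be observed with sub! from the tutorial. A small sketch; it assumes the struct accessor flomat-lda is available and that sub! reuses the backing array of the original matrix, so the submatrix inherits the original's leading dimension:

> (define A (zeros 5 5))
> (define B (sub! A 1 1 3 3))
> (nrows B)

3

> (flomat-lda B)

5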

value

_flomat : ctype?

Pointer to an array of flonums.

struct

(struct flomat (m n a lda))

  m : natural?
  n : natural?
  a : _flomat
  lda : natural?
Structure representing an m\times n matrix with leading dimension lda and an underlying array of floating point numbers stored in a.

syntax

(define-param (m n)       A)

(define-param (m n a)     A)
(define-param (m n a lda) A)
Equivalent to
(match-define (flomat m n _ _) A)
(match-define (flomat m n a _) A)
(match-define (flomat m n a lda) A)
respectively.

syntax

(index lda i j)

Expands to (+ i (* j lda)) which is the array index of the entry on row i, column j.
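For example, with leading dimension 5 the entry with index (1,2) is stored at array position 1 + 2\cdot 5 = 11. A quick sketch (assuming the flomat package is installed):

```racket
#lang racket
(require flomat)

;; index expands to (+ i (* j lda)), so the entry in
;; row 1, column 2 of a matrix with lda = 5 is stored
;; at array position 1 + 2*5 = 11.
(index 5 1 2)   ; => 11
```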

procedure

(ptr-elm a lda i j)  _flomat

  a : _flomat
  lda : natural?
  i : natural?
  j : natural?
Computes the address of the entry on row i, column j.

procedure

(ptr-row a i)  _flomat

  a : _flomat
  i : natural?
Computes the address of the beginning of row i.

procedure

(ptr-col a lda j)  _flomat

  a : _flomat
  lda : natural?
  j : natural?
Computes the address of the beginning of column j. Note that the leading dimension lda is needed.

procedure

(alloc-flomat m n)  _flomat

  m : natural?
  n : natural?
Allocate a floating point array with mn elements and return a tagged pointer to the array.

procedure

(alloc-same-size-matrix A)  _flomat

  A : flomat?
Like alloc-flomat but uses the dimensions m\times n of A to determine the size.

4.2 Copying🔗

procedure

(copy-flomat A)  flomat?

  A : flomat?
Return a newly allocated flomat struct with a newly allocated backing array of flonums. If A has dimensions m\times n then the newly allocated array will have length m\cdot n. The backing array of the copy can therefore be shorter than the original backing array.

procedure

(unsafe-vector-copy! s a lda b)  void?

  s : natural?
  a : _flomat
  lda : natural?
  b : natural?
Copies s elements from a into b. The elements copied have indices 0, \text{lda}, 2\text{lda}, \ldots. No error checking is done.

Note that unsafe-vector-copy! can be used to copy a column or a row depending on the leading dimension used.

procedure

(unsafe-matrix-copy! m n a lda b ldb)  void?

  m : natural?
  n : natural?
  a : _flomat
  lda : natural?
  b : _flomat
  ldb : natural?
Copies the m\times n matrix A into B.

If you need to copy A into a position other than (0,0), use (ptr-elm b ldb i j) to find the address of the submatrix of B which has its upper left corner in (i,j).

In the same manner you can use (ptr-elm a lda i j) to find the start of a submatrix in A.

4.3 Simple Constructors🔗

procedure

(make-flomat m n [x])  flomat?

  m : natural?
  n : natural?
  x : real? = 0.0
Returns a flomat of dimension m\times n with a backing array of size mn. All entries are initialized to contain x.
> (make-flomat 2 3 4)

(flomat: ((4.0 4.0 4.0) (4.0 4.0 4.0)))

syntax

(flomat: [[x ...] ...])

This is "literal syntax" and expands into a form that constructs a flomat containing the numbers x ... .... The default printer outputs small matrices as flomat literals, so results can be copy-pasted into the REPL.

procedure

(list->flomat xss)  flomat

  xss : list-of-list-of-number
Given a matrix represented as a list of rows (where a row is a list of numbers), return a new matrix with the same entries. Note: matrix is usually simpler to use.
> (list->flomat '((1 2) (3 4)))

(flomat: ((1.0 2.0) (3.0 4.0)))

procedure

(vectors->flomat xss)  flomat

  xss : vector-of-vector-of-number
Given a matrix represented as a vector of rows (where a row is a vector of numbers), return a new matrix with the same entries. Note: matrix is usually simpler to use.
> (vectors->flomat '#(#(1 2) #(3 4)))

(flomat: ((1.0 2.0) (3.0 4.0)))

procedure

(flomat->vectors A)  vector?

  A : flomat
Given a flomat A return a matrix with the same entries represented as vector of rows (where a row is a vector of numbers).
> (flomat->vectors (matrix '[[1 2] [3 4]]))

'#(#(1.0 2.0) #(3.0 4.0))

procedure

(vector->flomat m n v)  flomat

  m : natural?
  n : natural?
  v : vector?
Given a vector v of length mn representing a matrix with entries in row major order, return a matrix with the same dimensions and entries represented as a flomat.
> (vector->flomat 2 3 (vector 1 2 3 4 5 6))

(flomat: ((1.0 2.0 3.0) (4.0 5.0 6.0)))

procedure

(flomat->vector A)  vector?

  A : flomat
Return a vector of all entries in A in row-major order.
> (flomat->vector (matrix '[[1 2] [3 4]]))

'#(1.0 2.0 3.0 4.0)

procedure

(flomat/dim m n x ...)  flomat?

  m : natural?
  n : natural?
  x : real?
Construct an m\times n flomat with entries x .... The numbers x ... are expected to be in row major order.
> (flomat/dim 2 3   1 2 3 4 5 6)

(flomat: ((1.0 2.0 3.0) (4.0 5.0 6.0)))

4.4 Basic Properties🔗

The basic properties of a flomat can be examined using these functions:

procedure

(shape A)  (list natural? natural?)

  A : flomat?
Return a list with the number of rows and columns of the matrix A.

procedure

(size A)  natural?

  A : flomat?
Return the size of a matrix A. That is, return mn where m is the number of rows and n is the number of columns. Note that the size of a matrix can be smaller than the length of the backing array, if the leading dimension is larger than the number of rows.

procedure

(nrows A)  natural?

  A : flomat?
Return the number of rows, m, in the matrix A.

procedure

(ncols A)  natural?

  A : flomat?
Return the number of columns, n, in the matrix A.

> (define A (flomat: [[1 2 3]
                      [4 5 5]]))
> (shape A)

'(2 3)

> (size  A)

6

> (nrows A)

2

> (ncols A)

3

4.5 Basic Indexing🔗

Since a matrix is divided into rows and columns we can refer to an element in the matrix by row and column numbers. The element a_{ij} on the i’th row and j’th column is referred to as the element with index (i,j).

The indices are zero-based so a matrix with m rows and n columns has row-indices 0, 1, \ldots, m-1 and column-indices 0, 1, \ldots, n-1.

A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,n-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,n-1} \\ \vdots & \vdots & \cdots & \vdots \\ a_{m-1,0} & a_{m-1,1} & \cdots & a_{m-1,n-1} \\ \end{bmatrix}

procedure

(ref A i j)  real?

  A : flomat?
  i : natural?
  j : natural?
Return a_{ij} the element in A with index (i,j).

procedure

(mset! A i j x)  real?

  A : flomat?
  i : natural?
  j : natural?
  x : real?
Overwrite a_{ij}, the element in A with index (i,j), with the number x.
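A small session illustrating ref and mset! (a sketch, assuming the flomat package is installed):

```racket
#lang racket
(require flomat)

(define A (matrix '[[1 2]
                    [3 4]]))
(ref A 0 1)        ; the entry with index (0,1), i.e. 2.0
(mset! A 0 1 20)   ; overwrite that entry with 20.0
(ref A 0 1)        ; now 20.0
```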

procedure

(row A i)  flomat?

  A : flomat?
  i : natural?
Return the i’th row of A as a row vector i.e. as a matrix of dimension 1\times n. Allocates a new backing array.

procedure

(col A j)  flomat?

  A : flomat?
  j : natural?
Return the j’th column of A as a column vector i.e. as a matrix of dimension m\times 1. Allocates a new backing array.

procedure

(sub A i j m n)  flomat?

  A : flomat?
  i : natural?
  j : natural?
  m : natural?
  n : natural?
Return the m\times n submatrix of A with upper left corner in (i,j). Allocates a new backing array.

procedure

(row! A i)  flomat?

  A : flomat?
  i : natural?

procedure

(col! A j)  flomat?

  A : flomat?
  j : natural?

procedure

(sub! A i j m n)  flomat?

  A : flomat?
  i : natural?
  j : natural?
  m : natural?
  n : natural?
Like row, col and sub, but uses the same backing array as A.
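Because the destructive variants reuse the backing array, writing to the returned matrix changes the original. A sketch (assuming the flomat package is installed):

```racket
#lang racket
(require flomat)

(define A (matrix '[[1 2 3]
                    [4 5 6]]))
;; r is a view of row 0 of A - no copying takes place
(define r (row! A 0))
(mset! r 0 0 99)
;; the mutation is visible through A as well
(ref A 0 0)   ; => 99.0
```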

4.6 Matrix Creation🔗

procedure

(matrix obj)  flomat?

  obj : value
Create a matrix with values from obj.

Use matrix to create a flomat from existing Racket data. It can convert the vector-of-vectors and list-of-lists representations of matrices into the flomat representation. A vector of numbers or a list of numbers will be converted into a column vector (a matrix with only one column).

Any non-floating point numbers will be converted to floating point. The function matrix also accepts f64vectors as input.
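To illustrate the accepted representations (a sketch, assuming the flomat package is installed):

```racket
#lang racket
(require flomat)

(matrix '((1 2) (3 4)))     ; list of lists (list of rows)
(matrix #(#(1 2) #(3 4)))   ; vector of vectors (vector of rows)
(matrix '(1 2 3))           ; list of numbers: a 3x1 column vector
```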

procedure

(matrix! obj)  flomat?

  obj : value
Like matrix but uses the same backing array if possible.

procedure

(column x ...)  flomat?

  x : real?
Return a column vector (a matrix with a single column) with x ... as entries.

procedure

(zeros m [n])  flomat?

  m : natural?
  n : natural? = m
Create an m\times n matrix with all zeros. If n is omitted, a square m\times m matrix is returned.

procedure

(ones m [n])  flomat?

  m : natural?
  n : natural? = m
Like zeros but all entries will be 1.0.

procedure

(constant! A x)  flomat?

  A : flomat?
  x : real?
Overwrite all entries in A with x. Return A.

procedure

(zeros! A)  flomat?

  A : flomat?

procedure

(ones! A)  flomat?

  A : flomat?
Overwrite A with zeros or ones. Returns A.
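The destructive variants are handy for reusing an already allocated matrix, for example inside a loop. A sketch (assuming the flomat package is installed):

```racket
#lang racket
(require flomat)

(define A (zeros 2 3))  ; freshly allocated 2x3 matrix of zeros
(ones! A)               ; overwrite every entry with 1.0
(constant! A 42)        ; overwrite every entry with 42.0
A
```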

procedure

(diag X [m n reciproc?])  flomat?

  X : (or flomat? vector?)
  m : natural? = #f
  n : natural? = #f
  reciproc? : boolean? = #f
Construct a diagonal matrix of size m\times n with elements from a standard Racket vector or a flomat column vector X.

If reciproc? is true, then the diagonal will hold the reciprocal of the entries in X.

> (diag (vector 1 2 3))

(flomat: ((1.0 0.0 0.0) (0.0 2.0 0.0) (0.0 0.0 3.0)))

> (diag (vector 1 2 3) 5 5)

(flomat:

 ((1.0 0.0 0.0 0.0 0.0)

  (0.0 2.0 0.0 0.0 0.0)

  (0.0 0.0 3.0 0.0 0.0)

  (0.0 0.0 0.0 0.0 0.0)

  (0.0 0.0 0.0 0.0 0.0)))

> (diag (vector 2 4) 2 2 #t)

(flomat: ((0.5 0.0) (0.0 0.25)))

procedure

(eye m [n k])  flomat?

  m : natural?
  n : natural? = m
  k : integer? = 0
Create an m\times n matrix with ones on the k’th diagonal. The main diagonal is the 0’th diagonal. Diagonals above the main diagonal have positive indices. Diagonals below the main diagonal have negative indices.

Note: (eye n) will create a square identity matrix of size n\times n.

The diagonals are indexed as follows:

\begin{bmatrix} 0 & 1 & 2 & 3 \\ -1 & 0 & 1 & 2 \\ -2 & -1 & 0 & 1 \\ -3 & -2 & -1 & 0 \end{bmatrix}

> (eye 4 4 1)

(flomat:

 ((0.0 1.0 0.0 0.0) (0.0 0.0 1.0 0.0) (0.0 0.0 0.0 1.0) (0.0 0.0 0.0 0.0)))

> (eye 4 4 -1)

(flomat:

 ((0.0 0.0 0.0 0.0) (1.0 0.0 0.0 0.0) (0.0 1.0 0.0 0.0) (0.0 0.0 1.0 0.0)))

To create ranges of values use arange or colarange, which both work like (matrix (range start stop step)) but avoid building an intermediate list. The functions arange and colarange produce row and column vectors respectively. The vector created has length

\huge\lceil\normalsize{ \frac{\text{stop}-\text{start}}{\text{step}}}\huge\rceil\normalsize.

procedure

(arange start stop [step])  flomat?

  start : real?
  stop : real?
  step : real? = 1.0

procedure

(arange stop)  flomat?

  stop : real?
Returns a row vector with values from start to stop (exclusive), where step is the gap between values. The call (arange stop) is equivalent to (arange 0.0 stop 1.0).

Note: If you need an exact endpoint, then use linspace instead.

procedure

(colarange start stop [step])  flomat?

  start : real?
  stop : real?
  step : real? = 1.0

procedure

(colarange stop)  flomat?

  stop : real?
Like arange but produces a column vector.

> (arange 5 10 2)

(flomat: ((5.0 7.0 9.0)))

> (arange 5 10)

(flomat: ((5.0 6.0 7.0 8.0 9.0)))

> (arange 5)

(flomat: ((0.0 1.0 2.0 3.0 4.0)))

> (colarange 5 10 2)

(flomat: ((5.0) (7.0) (9.0)))

> (colarange 5 10)

(flomat: ((5.0) (6.0) (7.0) (8.0) (9.0)))

> (colarange 5)

(flomat: ((0.0) (1.0) (2.0) (3.0) (4.0)))

procedure

(linspace start stop num)  flomat?

  start : real?
  stop : real?
  num : natural?

procedure

(linspace start stop num include-last?)  flomat?

  start : real?
  stop : real?
  num : natural?
  include-last? : boolean?
Return a column vector with num numbers evenly spaced from start to stop. If include-last? is #f, the last number, stop, is omitted.

> (linspace 2 4 6)

(flomat: ((2.0) (2.4) (2.8) (3.2) (3.6) (4.0)))

> (linspace 2 4 6 #f)

(flomat: ((2.0) (2.4) (2.8) (3.2) (3.6)))

procedure

(reshape A m n)  flomat?

  A : flomat?
  m : natural?
  n : natural?

procedure

(reshape! A m n)  flomat?

  A : flomat?
  m : natural?
  n : natural?
Return a matrix with shape m\times n using the elements of A. The function reshape creates a new backing area and reshape! uses the one in A.

> (arange 9)

(flomat: ((0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0)))

> (reshape (arange 9) 3 3)

(flomat: ((0.0 3.0 6.0) (1.0 4.0 7.0) (2.0 5.0 8.0)))

> (transpose (reshape (arange 9) 3 3))

(flomat: ((0.0 1.0 2.0) (3.0 4.0 5.0) (6.0 7.0 8.0)))

4.7 Block Operations🔗

procedure

(augment A ...)  flomat?

  A : flomat?
Two matrices with the same number of rows can be augmented:

\text{augment}( \begin{bmatrix} \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_0 \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \cdots & \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_{n-1} \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \end{bmatrix}, \begin{bmatrix} \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ B_0 \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \cdots & \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ B_{N-1} \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \end{bmatrix} ) = \begin{bmatrix} \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_0 \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \cdots \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ A_{n-1} \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} & \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ B_0 \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \cdots \begin{matrix} {\Rule 0.5pt 20pt 0pt} \\ B_{N-1} \\ {\Rule 0.5pt 20pt 0pt} \end{matrix} \end{bmatrix}

The function augment will augment one or more matrices with the same number of rows.

> (augment (matrix '[[1 2]
                     [3 4]])
           (matrix '[[5 6 7]
                     [8 9 10]]))

(flomat: ((1.0 2.0 5.0 6.0 7.0) (3.0 4.0 8.0 9.0 10.0)))

procedure

(stack A ...)  flomat?

  A : flomat?
Two matrices with the same number of columns can be stacked:

\text{stack}( \begin{bmatrix} {\Rule 20pt 0.5pt 0pt} \mspace 2pt A_0 \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ \cdots \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt A_{m-1} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \end{bmatrix}, \begin{bmatrix} {\Rule 20pt 0.5pt 0pt} \mspace 2pt B_0 \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ \cdots \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt B_{M-1} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \end{bmatrix} ) = \begin{bmatrix} {\Rule 20pt 0.5pt 0pt} \mspace 2pt A_0 \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ \cdots \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt A_{m-1} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt B_0 \mspace 2pt {\Rule 20pt 0.5pt 0pt} \\ \cdots \\ {\Rule 20pt 0.5pt 0pt} \mspace 2pt B_{M-1} \mspace 2pt {\Rule 20pt 0.5pt 0pt} \end{bmatrix}

The function stack will stack one or more matrices with the same number of columns.

> (stack (matrix '[[1 2]
                   [3 4]])
         (matrix '[[5 6]
                   [7 8]
                   [9 10]]))

(flomat: ((1.0 2.0) (3.0 4.0) (5.0 6.0) (7.0 8.0) (9.0 10.0)))

procedure

(block-diagonal A ...+)  flomat?

  A : flomat?
Make a block diagonal matrix with the matrices A ... on the diagonal.

> (block-diagonal (matrix '[[1 2]
                            [3 4]])
                  (matrix '[[5 6]
                            [7 8]
                            [9 10]]))

(flomat:

 ((1.0 2.0 0.0 0.0)

  (3.0 4.0 0.0 0.0)

  (0.0 0.0 5.0 6.0)

  (0.0 0.0 7.0 8.0)

  (0.0 0.0 9.0 10.0)))

procedure

(repeat A m [n])  flomat?

  A : flomat?
  m : natural?
  n : natural? = m
Make a matrix with m\times n blocks, each block is A.

> (define A (matrix '[[1 2]
                      [3 4]]))
> (repeat A 3)

(flomat:

 ((1.0 2.0 1.0 2.0 1.0 2.0)

  (3.0 4.0 3.0 4.0 3.0 4.0)

  (1.0 2.0 1.0 2.0 1.0 2.0)

  (3.0 4.0 3.0 4.0 3.0 4.0)

  (1.0 2.0 1.0 2.0 1.0 2.0)

  (3.0 4.0 3.0 4.0 3.0 4.0)))

4.8 Elementwise Operations🔗

Elementwise operations (also called pointwise operations) work on each element. The operations all have names that begin with a dot.

If a function f is unary, then the corresponding pointwise function .f satisfies:

.f( \begin{bmatrix} a_{ij} \end{bmatrix}) = \begin{bmatrix} f(a_{ij}) \end{bmatrix}

If a function f is binary, then the corresponding pointwise function .f satisfies:

.f( \begin{bmatrix} a_{ij} \end{bmatrix}, \begin{bmatrix} b_{ij} \end{bmatrix}) = \begin{bmatrix} f(a_{ij},b_{ij}) \end{bmatrix}

A few functions, such as .- and ./, can be used both as unary and binary functions.

procedure

(.- A)  flomat?

  A : flomat?
(./ A)  flomat?
  A : flomat?
(.sin A)  flomat?
  A : flomat?
(.cos A)  flomat?
  A : flomat?
(.tan A)  flomat?
  A : flomat?
(.exp A)  flomat?
  A : flomat?
(.log A)  flomat?
  A : flomat?
(.sqr A)  flomat?
  A : flomat?
(.sqrt A)  flomat?
  A : flomat?
(.+ A B)  flomat?
  A : flomat?
  B : flomat?
(.- A B)  flomat?
  A : flomat?
  B : flomat?
(.* A B)  flomat?
  A : flomat?
  B : flomat?
(./ A B)  flomat?
  A : flomat?
  B : flomat?
(.expt A B)  flomat?
  A : flomat?
  B : flomat?
The builtin unary and binary pointwise functions. They all allocate a new flomat.
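For instance, .* multiplies entry by entry, in contrast to the matrix product computed by times. A sketch (assuming the flomat package is installed):

```racket
#lang racket
(require flomat)

(define A (matrix '((1 2) (3 4))))
(define B (matrix '((10 20) (30 40))))
(.+ A B)    ; entrywise sum
(.* A B)    ; entrywise (Hadamard) product - not the matrix product
(.sqrt A)   ; entrywise square root
```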

procedure

(.-! A)  flomat?

  A : flomat?
(./! A)  flomat?
  A : flomat?
(.sin! A)  flomat?
  A : flomat?
(.cos! A)  flomat?
  A : flomat?
(.tan! A)  flomat?
  A : flomat?
(.exp! A)  flomat?
  A : flomat?
(.log! A)  flomat?
  A : flomat?
(.sqr! A)  flomat?
  A : flomat?
(.sqrt! A)  flomat?
  A : flomat?
(.+! A B)  flomat?
  A : flomat?
  B : flomat?
(.-! A B)  flomat?
  A : flomat?
  B : flomat?
(.*! A B)  flomat?
  A : flomat?
  B : flomat?
(./! A B)  flomat?
  A : flomat?
  B : flomat?
(.expt! A B)  flomat?
  A : flomat?
  B : flomat?
The function .f! works like .f except that no new backing array is allocated. The underlying flonum array of A is overwritten.

syntax

(define-pointwise-unary f)

Define unary pointwise functions .f and .f! which apply the function bound to the identifier f.

syntax

(define-pointwise-binary f)

Define binary pointwise functions .f and .f! which apply the function bound to the identifier f.

syntax

(define-pointwise-unary/binary f)

Define unary/binary pointwise functions .f and .f! which apply the function bound to the identifier f.

4.9 Matrix Operations🔗

procedure

(plus A ...+)  flomat?

  A : flomat

procedure

(plus! A ...+)  flomat?

  A : flomat
Computes the matrix sum of one or more matrices. The function plus allocates a new backing area. The function plus! writes the result in the backing area of the first argument.

procedure

(times A ...+)  flomat?

  A : flomat

procedure

(times! A ...+)  flomat?

  A : flomat
Computes the matrix product of one or more matrices. The function times allocates a new backing area. The function times! writes the result in the backing area of the first argument.

procedure

(minus A B ...)  flomat?

  A : flomat
  B : flomat

procedure

(minus! A B ...)  flomat?

  A : flomat
  B : flomat
Subtracts the matrices B\ldots from A. Given only one argument A, minus computes -A.

The function minus allocates a new backing area. The function minus! writes the result in the backing area of the first argument.

procedure

(power A n)  flomat?

  A : flomat?
  n : natural?
Computes the n’th power of a matrix A, where n is a natural number.

> (define A (matrix '((1 1) (0 1))))
> (list (power A 0) (power A 1) (power A 2) (power A 3))

'((flomat: ((1.0 0.0) (0.0 1.0)))

  (flomat: ((1.0 1.0) (0.0 1.0)))

  (flomat: ((1.0 2.0) (0.0 1.0)))

  (flomat: ((1.0 3.0) (0.0 1.0))))

procedure

(transpose A)  flomat?

  A : flomat

procedure

(transpose! A)  flomat?

  A : flomat
Computes the transpose of a matrix A.

The transpose of an m\times n matrix A is the n\times m matrix B=A^T where b_{ij}=a_{ji}.

The function transpose allocates a new backing area. The function transpose! writes the result in the backing area of the first argument.

4.10 Matrix and Vector Products🔗

procedure

(dot v w)  real?

  v : flomat?
  w : flomat?
Computes the inner product (also known as the dot product) of two column vectors of the same length.

a\cdot b = \sum_{i=0}^{m-1} a_{i,0} b_{i,0}

> (define v (column -1 1))
> (define w (matrix '((2) (2))))
> (dot v w)

0.0

procedure

(outer A B)  flomat?

  A : flomat?
  B : flomat?
Computes the outer product of the first column of A and the first row of B.

The outer product of a column vector A with m rows and a row vector B with n columns is an m\times n matrix O with elements o_{i,j} = a_i\cdot b_j.

> (define A (column 2 3))
> (define B (transpose (column 5 7)))
> (outer A B)

(flomat: ((10.0 14.0) (15.0 21.0)))

procedure

(kron A B)  flomat?

  A : flomat?
  B : flomat?
Computes the Kronecker product of the matrices A and B.

The Kronecker product between two matrices A and B replaces each element a of A with a copy of B scaled with a. The Kronecker product is a generalization of the outer product.

> (define A (matrix '((1 2) (3 4))))
> (define B (matrix '((1 1) (1 1))))
> (kron A B)

(flomat:

 ((1.0 1.0 2.0 2.0) (1.0 1.0 2.0 2.0) (3.0 3.0 4.0 4.0) (3.0 3.0 4.0 4.0)))

4.11 Norms and Invariants🔗

procedure

(norm A [norm-type])  real?

  A : flomat?
  norm-type : (or 2 1 'inf 'max) = 2
Depending on norm-type, norm will compute a 1-norm, a Frobenius norm, an infinity norm or the maximal absolute entry value.

The default norm type is the Frobenius norm. The Frobenius norm is the square root of the sum of the square of all elements.

\left\lVert A \right\rVert_\textrm{Frobenius} = \sqrt{\sum a_{ij}^2}

The 1-norm computes the maximum absolute column sum.

\left\lVert A \right\rVert_1 = \max_j \sum_{i} |a_{ij}|

The infinity-norm computes the maximum absolute row sum.

\left\lVert A \right\rVert_\infty = \max_i \sum_{j} |a_{ij}|

The max-abs norm computes the maximal absolute value.

\left\lVert A \right\rVert_\text{max} = \max_i \max_j |a_{ij}|

> (define A (matrix '[[1 2]
                      [3 4]]))
> (norm A)

5.477225575051661

> (norm A 1)

6.0

> (norm A 'inf)

7.0

> (norm A 'max)

4.0

> (define B (matrix '[[-1 -2]
                      [-3 -4]]))
> (norm B)

5.477225575051661

> (norm B 1)

6.0

> (norm B 'inf)

7.0

> (norm B 'max)

4.0

procedure

(det A)  real?

  A : flomat?
Computes the determinant of a square matrix A.

> (det  (matrix '((1 2) (0 4))))

4.0

> (det  (matrix '((1 1) (2 2))))

-0.0

procedure

(trace A)  real?

  A : flomat?
Computes the trace of a matrix: the sum of the entries on the main diagonal.
\text{trace}(A) = \sum_k a_{kk}

> (trace (matrix '((1 2) (0 4))))

5.0

procedure

(rank A)  natural?

  A : flomat?
Computes the rank of a square matrix.

The rank is equal to:
  • the dimension of the column space

  • the dimension of the row space

  • the number of non-zero singular values in an SVD (singular value decomposition)

> (rank  (matrix '((1 2) (0 4))))

2

> (rank  (matrix '((1 1) (2 2))))

1

4.12 Solving Equations and Inverting Matrices🔗

procedure

(mldivide A B)  flomat?

  A : flomat?
  B : flomat?
Solves the equation
AX = B
where A is an m\times m matrix, and both X and B are m\times n.

The computation is done using LU-decomposition with partial pivoting.

The matrix A must be square and of full rank, and the number of rows in A must be the same as the number of rows in B.

Note that A needs to be of full rank for the equation to have a solution. The solver doesn’t check that the input matrix has full rank; it just runs its computation as usual.

To check that the output from mldivide is indeed a solution, you can evaluate (times A X) and compare with B.

The name mldivide is short for "Matrix Left divide". Although mldivide doesn’t find X by multiplying B with A^{-1} on the left, it is a fitting analogy.

> (define A (matrix '((1 2) (3 4))))
> (define B (matrix '((1) (0))))
> (define X (mldivide A B))
> (list X (times A X))

'((flomat: ((-1.9999999999999998) (1.4999999999999998)))

  (flomat: ((0.9999999999999998) (0.0))))

procedure

(mrdivide B A)  flomat?

  B : flomat?
  A : flomat?
Like mldivide but solves the equations
XA = B.
The name mrdivide is short for "Matrix Right divide".

> (define A (matrix '((1 2) (3 4))))
> (define B (matrix '((2 4) (6 8))))
> (define X (mrdivide B A))
> (list X (times X A))

'((flomat: ((2.0 0.0) (0.0 2.0))) (flomat: ((2.0 4.0) (6.0 8.0))))

procedure

(inv A)  flomat?

  A : flomat?
Find the multiplicative inverse of a square matrix A. The matrix needs to be of full rank.

> (define A    (matrix '((1 2) (3 4))))
> (define Ainv (inv A))
> (list Ainv (times A Ainv))

'((flomat:

   ((-1.9999999999999996 0.9999999999999998)

    (1.4999999999999998 -0.4999999999999999)))

  (flomat: ((1.0 0.0) (8.881784197001252e-16 0.9999999999999996))))

An inverse of A can be used to solve AX=B, but using mldivide directly is normally better.

procedure

(pinv A)  flomat?

  A : flomat?
Finds the Moore-Penrose pseudo-inverse A^+ of the matrix A. The matrix A does not need to be square. The pseudo-inverse of an m\times n matrix is of size n\times m.

> (define A  (matrix '((1 2) (3 4))))
> (define A+ (pinv A))
> (list A+ (times A+ A A+) (times A A+ A))

'((flomat:

   ((-2.0000000000000018 1.0000000000000007)

    (1.5000000000000016 -0.5000000000000008)))

  (flomat:

   ((-2.000000000000003 1.000000000000001)

    (1.5000000000000029 -0.5000000000000014)))

  (flomat:

   ((0.9999999999999987 1.9999999999999991)

    (2.999999999999997 3.9999999999999964))))

> (define B  (matrix '((1 2 3) (4 5 6))))
> (define B+ (pinv B))
> (list B+ (times B+ B B+) (times B B+ B))

'((flomat:

   ((-0.9444444444444443 0.4444444444444446)

    (-0.11111111111111176 0.11111111111111141)

    (0.7222222222222227 -0.22222222222222254)))

  (flomat:

   ((-0.9444444444444445 0.44444444444444486)

    (-0.11111111111111169 0.11111111111111142)

    (0.7222222222222227 -0.2222222222222227)))

  (flomat:

   ((0.9999999999999991 1.999999999999999 2.9999999999999987) (4.0 5.0 6.0))))

4.13 Matrix Decompositions🔗

procedure

(cholesky A [triangle])  flomat?

  A : flomat?
  triangle : (or 'upper 'lower) = 'lower
Computes the Cholesky decomposition of a symmetric, positive-definite matrix A. Beware that cholesky does not check that the matrix is symmetric and positive-definite.

The Cholesky decomposition of a matrix A has two forms:

A = L L^T \textrm{ or } A = U^T U,
where L and U are lower and upper triangular matrices.

> (define A (matrix '((1 2) (2 4))))
> (define L (cholesky A))
> (list L (transpose L))

'((flomat: ((1.0 0.0) (2.0 0.0))) (flomat: ((1.0 2.0) (0.0 0.0))))

> (times L (transpose L))

(flomat: ((1.0 2.0) (2.0 4.0)))

> (define U (cholesky A 'upper))
> (list (transpose U) U)

'((flomat: ((1.0 0.0) (2.0 0.0))) (flomat: ((1.0 2.0) (0.0 0.0))))

> (times (transpose U) U)

(flomat: ((1.0 2.0) (2.0 4.0)))

procedure

(qr A)  
flomat? flomat?

  A : flomat?
Computes the QR-decomposition for a matrix A.

The QR Decomposition of A consists of two matrices: an orthogonal matrix Q and an upper triangular matrix R such that A=QR.

> (define A (matrix '((1 2) (3 4))))
> (define-values (Q R) (qr A))
> (list Q R)

'((flomat:

   ((-0.316227766016838 -0.9486832980505138)

    (-0.9486832980505138 0.316227766016838)))

  (flomat:

   ((-3.1622776601683795 -4.427188724235731) (0.0 -0.6324555320336751))))

> (times Q R)

(flomat: ((1.0000000000000002 1.9999999999999996) (3.0 4.0)))

procedure

(svd A)  
flomat? flomat? flomat?

  A : flomat?
Computes the Singular Value Decomposition for a matrix A.

The Singular Value Decomposition (SVD) consists of three matrices: a unitary matrix U, a column vector of singular values S and a unitary matrix V^T (V transposed).

Use the function diag to construct a diagonal matrix from the singular values.

> (define A (matrix '((1 2) (3 4))))
> (define-values (U S VT) (svd A))
> (define Σ (diag S))
> (list U Σ VT S)

'((flomat:

   ((-0.4045535848337568 -0.9145142956773044)

    (-0.9145142956773044 0.4045535848337568)))

  (flomat: ((5.464985704219043 0.0) (0.0 0.3659661906262574)))

  (flomat:

   ((-0.5760484367663209 -0.8174155604703631)

    (0.8174155604703631 -0.5760484367663209)))

  (flomat: ((5.464985704219043) (0.3659661906262574))))

> (times U Σ VT)

(flomat: ((1.0000000000000002 1.999999999999999) (3.0 3.999999999999999)))

4.14 Matrix Eigenvalues and Eigenvectors🔗

Eigenvalues and eigenvectors of a square matrix can be computed with eig or, if only the eigenvalues are needed, with eigvals. Note that even if all elements of a matrix are real, the eigenvalues in some cases are complex. Therefore the eigenvalues are returned as a standard Racket vector.

procedure

(eig A)  
vector? flomat?
  A : flomat?
Compute eigenvalues and right eigenvectors.

procedure

(eigvals A)  vector?

  A : flomat?
Compute eigenvalues.

> (eig     (diag '(1 2)))

'#(1.0 2.0)

(flomat: ((1.0 0.0) (0.0 1.0)))

> (eigvals (diag '(1 2)))

'#(1.0 2.0)

> (eig     (matrix '((1 -1) (1 1))))

'#(1.0+1.0i 1.0-1.0i)

(flomat: ((0.7071067811865475 0.0) (0.0 -0.7071067811865475)))

4.15 Least Squares Problems🔗

Let A be an m\times n matrix and let b be an m\times 1 column vector. The equation Ax=b (depending on A) may not have a unique solution - or any solution at all.

As an alternative, one can look for the vector x that minimizes:

|Ax-b|_2,
where |\cdot|_2 is the Euclidean 2-norm.

The function lstsq returns the minimum norm solution x of the problem above.

If lstsq is given an m\times k matrix B, then the problem will be solved for each column b of B.

procedure

(lstsq A B)  flomat?

  A : flomat?
  B : flomat?
Find minimum norm solution x to the least squares problem:
\text{minimize}_x |Ax-b|_2
for each column b of a larger matrix B.

As an example, let’s look at estimating b_0 and b_1 in the model:

y=b_0\cdot x+b_1
given a data set consisting of corresponding x- and y-values. The calculation reveals that the relation between x and y is y=2x+1.

The matrix X is called the design matrix of the problem. See Design Matrix at Wikipedia. In this case the design matrix has two columns: the first has the x-values, the second contains just ones.

> (define xs (column 0 1 2 3))
> (define ys (column 1 3 5 7))
> (define X  (augment xs (flomat-ones (nrows xs) 1)))
> X

(flomat: ((0.0 1.0) (1.0 1.0) (2.0 1.0) (3.0 1.0)))

> (define B  (lstsq X ys))
> B

(flomat: ((2.000000000000001) (0.9999999999999989)))

4.16 Matrix Functions🔗

procedure

(expm A)  flomat?

  A : flomat?
Compute the matrix exponential \exp(A), where A is a square matrix.

The matrix exponential \exp(A) of a square matrix A is defined as:

\exp(A) = \sum_{k=0}^{\infty} \frac{1}{k!} A^k
where A^0 is interpreted as the identity matrix of the same size as A.

The matrix exponential is well-defined for all square matrices A.
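The defining series can be used as a sanity check of expm. The sketch below (assuming the flomat package is installed) sums the first n terms using only operations documented above; for small matrices the result agrees with expm up to floating point tolerance:

```racket
#lang racket
(require flomat)

;; Scale a matrix entrywise by the scalar x.
(define (scale x M)
  (.* (constant! (zeros (nrows M) (ncols M)) x) M))

(define (factorial k)
  (if (zero? k) 1 (* k (factorial (- k 1)))))

;; Sum the first n terms of exp(A) = sum_k A^k / k!
(define (expm-series A n)
  (for/fold ([acc (zeros (nrows A))])
            ([k (in-range n)])
    (plus acc (scale (/ 1.0 (factorial k)) (power A k)))))

;; (expm-series (matrix '((1 0) (0 2))) 20)
;; should be close to (expm (matrix '((1 0) (0 2)))).
```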

There are no routines in LAPACK for computing matrix exponentials. The algorithm used in flomat is from the paper:

"The Pade Method for computing the Matrix Exponential"

M. Arioli, B. Codenotti, C. Fassino

https://www.sciencedirect.com/science/article/pii/0024379594001901

> (list (exp 1) (exp 2))

'(2.718281828459045 7.38905609893065)

> (expm (matrix '((1 0) (0 2))))

(flomat: ((2.7182818284590446 0.0) (0.0 7.389056098930647)))

> (expm (matrix '((1 2) (3 4))))

(flomat:

 ((51.96895619870492 74.73656456700309)

  (112.10484685050463 164.07380304920952)))

4.17 Printing🔗

procedure

(flomat-print A port mode)  void?

  A : flomat?
  port : port?
  mode : boolean?
Prints a flomat A to the port port. The mode #t means write and #f means display.

If the size of the matrix A is less than the value of the parameter current-max-flomat-print-size, the entries are printed; otherwise an ellipsis "..." is printed.

Currently the output of write and display mode is the same.

> (define A (matrix '[[1 2]]))
> (flomat-print A (current-output-port) #f)

(flomat: ((1.0 2.0)))

> (display A)

(flomat: ((1.0 2.0)))

> (flomat-print A (current-output-port) #t)

(flomat: ((1.0 2.0)))

> (write A)

(flomat: ((1.0 2.0)))

parameter

(current-max-flomat-print-size)  natural?

(current-max-flomat-print-size n)  void?
  n : natural?
Parameter that controls whether the entries of the matrix are printed. See flomat-print.

4.18 BLAS and LAPACK🔗

BLAS (Basic Linear Algebra Subprograms) is a standard containing descriptions of a set of commonly used low-level linear algebra routines. The idea of having a standard set of operations originated with a Fortran library in 1977. Over time the standard has been implemented multiple times, and fast versions for particular machines are often available. If possible, choose an implementation tuned for your hardware.

Where BLAS deals with low-level operations (simple vector and matrix computations), LAPACK (Linear Algebra Package) deals with higher-level problems such as solving linear equations, solving linear least squares problems, and finding matrix decompositions. LAPACK was originally written in Fortran 77 but current versions are written in Fortran 90. LAPACK is built on top of BLAS in the sense that LAPACK calls BLAS to perform the low-level operations.

The philosophy of the flomat implementation is to use BLAS and LAPACK to solve as many problems as possible (as opposed to implementing the algorithms ourselves). The representation of a flomat therefore closely matches the expectations of the routines in BLAS and LAPACK.

4.19 CBLAS Bindings🔗

We have chosen to use CBLAS which provides a C interface (instead of a Fortran one) to the routines in BLAS. Apple provides documentation for their implementation: CBLAS Documentation.

All names in CBLAS have the prefix cblas_, so if you find an operation in the Fortran documentation, simply put cblas_ in front.

There are quite a few operations in BLAS and they all have short names, so the names follow a pattern: the first letter describes the type of floating point (s for single precision, d for double precision, c for single precision complex, and z for double precision complex).

In flomat we have for now chosen to stick with matrices containing double precision floating points, so we need the operations that begin with d.

As an example let’s consider how to use dcopy, which copies a vector X to a vector Y. The first step is to look up the arguments. The header file for CBLAS is "cblas.h" and contains:

void cblas_dcopy(const int N,

                 const double *X, const int incX,

                       double *Y, const int incY);

We can ignore const, which simply informs the C compiler that a call to cblas_dcopy doesn’t change the argument.

For double *, which is a pointer to an array of doubles, we use the tagged pointer type _flomat.

This leads to the following binding:

(define-cblas cblas_dcopy
 (_fun (n : _int)
       (X : _flomat) (incX : _int)
       (Y : _flomat) (incY : _int)
       -> _void))

In order to use cblas_dcopy we need to study the documentation, which is found at Netlib: dcopy.

Purpose:

    DCOPY copies a vector, x, to a vector, y.

    uses unrolled loops for increments equal to 1.

Parameters

  [in]  N    is INTEGER                 the number of elements in input vector(s)

  [in]  DX   is DOUBLE PRECISION array  dimension (1+(N-1)*abs(INCX))

  [in]  INCX is INTEGER,                storage spacing between elements of DX

  [out] DY   is DOUBLE PRECISION array  dimension (1+(N-1)*abs(INCY))

  [in]  INCY is INTEGER                 storage spacing between elements of DY

For vectors we see that a vector is passed as a pair (DX, INCX) of the start address and the stride. Since vectors in flomat are column vectors and matrices are stored in column major order, the elements of a column vector are stored with an increment of 1.

Given two matrices A and B we can copy the first n elements of the first column of A into the first column of B like this:
(define-param (ma na a lda) A)
(define-param (mb nb b ldb) B)
(cblas_dcopy n (ptr-elm a lda 0 j) 1 (ptr-elm b ldb 0 k) 1)
Here (ptr-elm a lda i j) computes the address of the (i,j)’th element of matrix A. No error checking is done, so you need to check that n does not exceed the number of rows of A and B before making the call.

To copy into a row, use INCY = ldb.
(define-param (ma na a lda) A)
(define-param (mb nb b ldb) B)
(cblas_dcopy n (ptr-elm a lda 0 j) 1 (ptr-elm b ldb k 0) ldb)
This copies column j of A into row k in B.
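
The stride trick also works for the source: passing lda as INCX reads a row of A, since consecutive row elements are one column (lda doubles) apart. A sketch, where the row index i and the count n are assumptions and no bounds checking is done:

(define-param (ma na a lda) A)
(define-param (mb nb b ldb) B)
; Copy the first n entries of row i of A into column k of B.
; INCX = lda steps along a row of A; INCY = 1 fills a column of B.
(cblas_dcopy n (ptr-elm a lda i 0) lda (ptr-elm b ldb 0 k) 1)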

Note that dcopy allows an increment INCX of zero. With an increment of INCX = 0 we can copy the same element into all entries of the destination.

This trick is used in the implementation of make-flomat:
(define (make-flomat m n [x 0.0])
  (define a  (alloc-flomat m n))
  (define x* (cast (malloc 1 _double 'atomic) _pointer _flomat))
  (ptr-set! x* _double (real->double-flonum x))
  (if (= x 0.0)
      (memset a 0  (* m n) _double)
      (cblas_dcopy (* m n) x* 0 a 1))
  (flomat m n a m))
Here the case x = 0.0 is special-cased to use the faster memset.
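
Both paths can be seen from the REPL: the default fill 0.0 takes the memset branch, while a nonzero fill goes through cblas_dcopy with an increment of zero.

> (make-flomat 2 3)
(flomat: ((0.0 0.0 0.0) (0.0 0.0 0.0)))
> (make-flomat 2 3 1.0)
(flomat: ((1.0 1.0 1.0) (1.0 1.0 1.0)))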

syntax

(define-cblas name body ...)

The form define-cblas is defined via define-ffi-definer. See Defining Bindings in the Reference manual.

syntax

cblas_daxpy

syntax

cblas_dcopy

syntax

cblas_ddot

syntax

cblas_dgemm

syntax

cblas_dgemv

syntax

cblas_dnrm2

syntax

cblas_dscal

syntax

cblas_dswap

syntax

cblas_ixamax

The bindings used internally by flomat. See the source of flomat.rkt at Github for the exact calling conventions.

Most users won’t need to use these, but they are available if need be.

4.20 LAPACK Bindings🔗

LAPACK is a Fortran library, so the calling conventions are slightly different from usual. First of all, let’s look at an example: the function dlange which is used to compute norms of a real matrix.

The Fortran documentation has the following to say about the types of the arguments:

DOUBLE PRECISION FUNCTION DLANGE( NORM, M, N, A, LDA, WORK )

 

NORM    (input)     CHARACTER*1

M       (input)     INTEGER

N       (input)     INTEGER

A       (input)     DOUBLE PRECISION array, dimension (LDA,N)

LDA     (input)     INTEGER

WORK    (workspace) DOUBLE PRECISION array, dimension (MAX(1,LWORK))

The corresponding binding is:

(define-lapack dlange_
  (_fun (norm : (_ptr i _byte))
        (m    : (_ptr i _int))
        (n    : (_ptr i _int))
        (a    :  _flomat)
        (lda  : (_ptr i _int))
        (work : (_ptr io _flomat))
        -> _double))

Note that all arguments are passed by reference. Use (_ptr i _), (_ptr o _), or
(_ptr io _) for input, output, and input/output arguments respectively.

Note that the size of a character is the same as that of a byte. Use char->integer to convert a Racket character into a byte.

The function flomat-norm sets up the arguments before calling dlange_:
(define (flomat-norm A [norm-type 2])
  (define-param (m n a lda) A)
  (define norm (char->integer
                (match norm-type
                  [1         #\1]
                  ['inf      #\I]
                  [2         #\F]
                  ['max-abs  #\M]
                  [_ (error 'flomat-norm "unknown norm type: ~a" norm-type)])))
  (define lwork (if (equal? norm-type 'inf) (max 1 m) 1))
  (define W (make-flomat lwork 1))
  (define w (flomat-a W))
  (dlange_ norm m n a lda w))
Here the work array is only used when computing the infinity norm.
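
As a sketch of how it is called (assuming flomat-norm and matrix are in scope), computing the Frobenius norm of a small matrix:

; norm-type 2 maps to #\F, so dlange_ computes sqrt(3^2 + 4^2) = 5.0
> (flomat-norm (matrix '((3 4))) 2)
5.0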

syntax

(define-lapack name body ...)

The form define-lapack is defined via define-ffi-definer. See Defining Bindings in the Reference manual. LAPACK uses an underscore suffix, so remember to append an underscore to any names seen in the LAPACK documentation.

syntax

dlange_

syntax

dgeev_

syntax

dgetrf_

syntax

dgesvd_

syntax

dgesdd_

syntax

dgeqrf_

syntax

dorgqr_

syntax

dgetri_

syntax

dpotrf_

syntax

dgesv_

syntax

dgelsd_

The bindings from LAPACK used internally in flomat. See LAPACK Documentation at Netlib.

Most users won’t need to use these, but they are available if need be.

See the source of flomat.rkt at Github for the exact calling conventions.