
2 Entry points

Malt provides facilities for exploring and developing the different pieces of the deep learning stack (as described in the Overview), independently of what is provided in the package itself.

This section provides a guide to the various modules that can be require’d for different purposes. We refer to these require invocations as entry points into malt.

 (require malt) package: malt
This entry point covers the entire set of modules provided by the malt package. It provides all functions related to lists, tensors, automatic differentiation, extended operators, hyperparameters, random number generation, logging, loss functions, layer functions, gradient descent, dense layers, 1-D convolution layers, and He initialization.

It can be used to develop example neural networks, such as the iris classifier from The Little Learner, a dense network with a single hidden layer.
(require malt)
 
(define dense-block
  (λ (n m)
    (block relu
      (list
        (list m n)
        (list m)))))
 
(define iris-network
  (stack-blocks
    (list
      (dense-block 4 8)
      (dense-block 8 3))))

 (require malt/base) package: malt
This entry point provides everything in malt, but leaves out the deep-learning-specific functions.

This entry point is used by other sub-modules within Malt where only tensors and automatic differentiation are required.
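As a sketch of how malt/base might be used on its own, the following differentiates a simple function of a tensor, with no deep learning machinery involved. (It assumes the tensor constructor tensor, the extended operators sum and *, and the gradient operator ∇ described elsewhere in this documentation.)

```racket
#lang racket
(require malt/base)

;; A loss-like function of a single tensor parameter.
(define sum-of-squares
  (λ (t)
    (sum (* t t))))

;; ∇ computes the gradient of sum-of-squares at the given tensor.
;; Since the function is Σ tᵢ², the gradient is 2t.
(∇ sum-of-squares (tensor 1.0 2.0 3.0))
```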

 (require malt/base-no-duals) package: malt

This entry point provides everything in malt/base, but leaves out automatic differentiation and deep-learning-specific function exports.

This entry point is used by some sub-modules within Malt where tensors and extended operations are required, but duals and automatic differentiation are not.
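A minimal sketch of this entry point in use (assuming the extended operators behave pointwise over tensors, as described in Non-differentiable extended numerical functions):

```racket
#lang racket
(require malt/base-no-duals)

;; Extended operators work pointwise over tensors, but no duals
;; are constructed, so nothing here can be differentiated.
(define t (tensor 1.0 2.0 3.0))

(+ t t)        ;; pointwise addition
(sum (* t t))  ;; 1.0 + 4.0 + 9.0 = 14.0
```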

 (require malt/base-no-overrides) package: malt

The entry points malt, malt/base, and malt/base-no-duals override common mathematical functions (+, *, etc.) by default. See Differentiable extended numerical functions and Non-differentiable extended numerical functions.

This entry point does not override any default mathematical operations. The differentiable operations are instead provided with a d or a d- prefix. Section Differentiable extended numerical functions lists the names used for these operations.

This entry point is used by some sub-modules within Malt where both the differentiable extended operations and Racket’s primitive operations are necessary.
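As an illustrative sketch (the exact d-prefixed names should be checked against Differentiable extended numerical functions), Racket’s * keeps its usual meaning while d* participates in automatic differentiation:

```racket
#lang racket
(require malt/base-no-overrides)

;; Racket's primitive * is not overridden...
(* 2 3) ;; 6

;; ...while d* is its differentiable counterpart.
(define square
  (λ (t)
    (d* t t)))

;; The gradient of t² at 3.0 is 2 × 3.0 = 6.0.
(∇ square 3.0)
```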

 (require malt/base-no-overrides-no-duals) package: malt

This entry point does not override any default mathematical operations, nor does it provide automatic differentiation. See Non-differentiable extended numerical functions.

This entry point is used by some sub-modules within Malt where both the non-differentiable extended operations and Racket’s primitive operations are necessary.

 #lang malt/learner package: malt

The malt/learner entry point can work as either a #lang or a module. It provides the tensor representation, automatic differentiation, and extended operators to allow users to develop code exactly as it appears in The Little Learner. It excludes all the deep learning specific operations. This is the default representation of tensors when using the malt entry point.
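For example, code from The Little Learner runs as-is under this entry point; here is the book’s line function (ref selects an element of the parameter list θ):

```racket
#lang malt/learner

;; line from The Little Learner: a function of x that, given
;; parameters θ = (list slope intercept), evaluates the line at x.
(define line
  (λ (x)
    (λ (θ)
      (+ (* (ref θ 0) x) (ref θ 1)))))

((line 7.3) (list 1.0 0.0)) ;; 1.0 × 7.3 + 0.0 = 7.3
```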

 (require malt/nested-tensors) package: malt

This entry point provides the tensor representation, automatic differentiation, and extended operators for the nested tensor representation referred to in Appendix B (I Could Have Raced All Day) of The Little Learner. It can be used for experimentation with this representation.

 (require malt/flat-tensors) package: malt

This entry point provides the tensor representation, automatic differentiation, and extended operators for the flat tensor representation referred to in Appendix B (I Could Have Raced All Day) of The Little Learner.

 (require malt/tools) package: malt

When using the entry points malt/learner, malt/nested-tensors, or malt/flat-tensors, this entry point can be used to provide hyperparameters, normally distributed random numbers, and logging tools. In order to use malt/tools with a given representation, the tools must be recompiled for that specific representation. Follow the instructions provided in the repository.
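As a sketch of what using malt/tools might look like, paired here with the malt/flat-tensors representation. The forms declare-hyper and with-hypers are assumptions based on the hyperparameter facilities mentioned above; consult the Hyperparameters documentation for the exact names.

```racket
#lang racket
(require malt/flat-tensors
         malt/tools)

;; declare-hyper introduces a named hyperparameter; with-hypers
;; binds it for the dynamic extent of its body. (Both names are
;; assumptions; see the Hyperparameters documentation.)
(declare-hyper revs)

(with-hypers ((revs 1000))
  revs) ;; the hyperparameter is visible within the body
```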