Macros

TensorCast.@cast (Macro)
@cast Z[i,j,...] := f(A[i,j,...], B[j,k,...])  options

Macro for broadcasting, reshaping, and slicing of arrays in index notation. Understands the following things:

  • A[i,j,k] is a three-tensor with these indices.
  • B[(i,j),k] is the same thing, reshaped to a matrix. Its first axis (the bracket) is indexed by n = i + (j-1) * N where i ∈ 1:N. This may also be written B[i⊗j,k].
  • C[k][i,j] is a vector of matrices, either created by slicing a 3-tensor A[i,j,k] (if on the left) or implying that such slices be glued into one (if on the right).
  • D[j,k]{i} is an ordinary matrix of SVectors, which may be reinterpreted from A[i,j,k].
  • E[i,_,k] has two nontrivial dimensions, and size(E,2)==1. On the right hand side (or when writing to an existing array) you may also write E[i,3,k] meaning view(E, :,3,:), or E[i,$c,k] to use a variable c.
  • f(x)[i,j,k] is allowed; f(x) must return a 3-tensor (and will be evaluated only once).
  • g(H[:,k])[i,j] is a generalised mapslices, with g mapping columns of H to matrices, which are glued into a 3-tensor A[i,j,k].
  • h(I[i], J[j])[k] expects an h which maps two scalars to a vector, which gets broadcasted h.(I,J'), then glued to make a 3-tensor.
  • K[i,j]' conjugates each element, equivalent to K'[j,i] which is the conjugate-transpose of the matrix.
  • M[i,i] means diag(M)[i], but only for matrices: N[i,i,k] is an error.
  • P[i,i'] is normalised to P[i,i′] with unicode \prime.
  • R[i,-j,k] means roughly reverse(R, dims=2), and Q[i,~j,k] similar with shuffle.
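
For instance, a minimal sketch of a few of these notations, assuming using TensorCast and a small random input:

using TensorCast

A = rand(2, 3, 4)                # a 3-tensor A[i,j,k]

@cast B[(i,j),k] := A[i,j,k]     # reshape: B == reshape(A, 6, 4)
@cast C[k][i,j] := A[i,j,k]      # slice: C is a vector of 4 matrices, each 2×3
@cast A2[i,j,k] := C[k][i,j]     # glue the slices back into a 3-tensor
@cast R[i,j,k] := A[i,-j,k]      # roughly reverse(A, dims=2)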

The left and right hand sides must have all the same indices, and the only repeated index allowed is M[i,i], which is a diagonal not a trace. See @reduce and @matmul for related macros which can sum over things.
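
For example, the repeated index simply reads off the diagonal:

using LinearAlgebra, TensorCast

M = rand(3, 3)
@cast d[i] := M[i,i]    # entries equal diag(M); no trace is taken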

If a function of one or more tensors appears on the right hand side, then this represents a broadcasting operation, and the necessary re-orientations of axes are automatically inserted.
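
For example, in the sketch below y[j] is re-oriented to a row before broadcasting:

using TensorCast

x, y = rand(3), rand(4)
@cast Z[i,j] := x[i] / y[j]    # equivalent to x ./ transpose(y), a 3×4 matrix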

The following actions are possible:

  • = writes into an existing array, overwriting its contents, while += adds (precisely Z .= Z .+ ...) and *= multiplies.
  • := creates a new array. To omit the name, write Z = @cast _[i,j,k] := ....
  • |= insists that the result is an Array, not a view or some other lazy wrapper. (This may still be a reshape of the input; it does not guarantee a copy.)
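
A sketch of these actions:

using TensorCast

A = rand(3, 4)
@cast V[j,i] := A[i,j]       # := new array, may be a lazy PermutedDimsArray
@cast W[j,i] |= A[i,j]       # |= guarantees an Array, here == permutedims(A)

Z = zeros(3, 4)
@cast Z[i,j] = A[i,j] + 1    # = overwrites the existing Z
@cast Z[i,j] += A[i,j]       # += adds, i.e. Z .= Z .+ A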

Options specified at the end (if several, separated by ,) are:

  • i in 1:3 or i ∈ 1:3 supplies the range of index i. Variables and functions like j in 1:Nj, k in 1:length(K) are allowed, but i = 1:3 is not.
  • lazy=false disables PermutedDimsArray in favour of permutedims, and Diagonal in favour of diagm for Z[i,i] output.
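
For example, a range must be supplied when splitting a combined index, since it cannot be inferred:

using TensorCast

V = collect(1:6)
@cast B[i,j] := V[(i,j)]  i in 1:2    # split: B == reshape(V, 2, 3)

A = rand(2, 3)
@cast P[j,i] := A[i,j]  lazy=false    # permutedims(A), not a lazy wrapper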

Some modifications to broadcasting are possible, after loading the corresponding package:

  • @cast @strided Z[i,j] := ... uses Strided.jl's macro, for multi-threaded broadcasting.
  • @cast @turbo Z[i,j] := ... uses LoopVectorization.jl's macro, for SIMD acceleration.
  • @cast @lazy Z[i,j] := ... uses LazyArrays.jl's BroadcastArray type, although there is no such macro.

To create static slices D[k]{i,j} you should give all slice dimensions explicitly. You may write D[k]{i:2,j:2} to specify Size(2,2) slices. They are made most cleanly from the first indices of the input, i.e. this D from A[i,j,k]. The notation A{:,:,k} will only work in this order, and writing A{:2,:2,k} provides the sizes.
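
A sketch, assuming StaticArrays is loaded:

using TensorCast, StaticArrays

A = rand(2, 2, 5)
@cast D[k]{i:2,j:2} := A[i,j,k]    # vector of 5 SMatrix{2,2} slices, cut from the first two axes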

TensorCast.@reduce (Macro)
@reduce A[i] := sum(j,k) B[i,j,k]             # A = vec(sum(B, dims=(2,3)))
@reduce A[i] := prod(j) B[i] + ε * C[i,j]     # A = vec(prod(B .+ ε .* C, dims=2))
@reduce A[i] = sum(j) exp( C[i,j] / D[j] )    # sum!(A, exp.(C ./ D') )

Tensor reduction macro:

  • The reduction function can be anything which works like sum(B, dims=(1,3)), for instance prod and maximum and Statistics.mean.
  • In-place operations Z[j] = sum(... will construct the banged version of the given function's name, which must work like sum!(Z, A).
  • The tensors can be anything that @cast understands, including gluing of slices B[i,k][j] and reshaping B[i⊗j,k]. See ? @cast for the complete list.
  • If there is a function of one or more tensors on the right, then this is a broadcasting operation.
  • Index ranges may be given afterwards (as for @cast) or inside the reduction sum(i:3, k:4).
  • All indices appearing on the right must appear either within sum(...) etc, or on the left.
F = @reduce sum(i,j)  B[i] + γ * D[j]         # sum(B .+ γ .* D')
@reduce G[] := sum(i,j)  B[i] + γ * D[j]      # F == G[]

Complete reduction to a scalar output F, or a zero-dim array G. G[] involves sum(A, dims=(1,2)) rather than sum(A).

@reduce @lazy Z[k] := sum(i,j) A[i] * B[j] * C[k]  (i in 1:N, j in 1:N, k in 1:N)

The option @lazy replaces the broadcast expression with a BroadcastArray, to avoid materializing the entire array before summing. In the example this is of size N^3. This needs using LazyArrays to work.

The options @strided and @turbo will alter broadcasting operations, and need using Strided or using LoopVectorization to work.

@reduce sum(i) A[i] * log(@reduce _[i] := sum(j) A[j] * exp(B[i,j]))
@cast W[i] := A[i] * exp(- @reduce S[i] = sum(j) exp(B[i,j]) lazy)

Recursion like this is allowed, inside either @cast or @reduce. The intermediate array need not have a name, like _[i], unless writing into an existing array, like S[i] here.

TensorCast.@matmul (Macro)
@matmul M[a,c] := sum(b)  A[a,b] * B[b,c]

Matrix multiplication macro. Uses the same syntax as @reduce, but instead of broadcasting the expression on the right out to a 3-tensor before summing it along one dimension, this calls * or mul!, which is usually much faster. But it only works on expressions of suitable form.
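
For instance, the basic form above agrees with ordinary matrix multiplication:

using TensorCast

A, B = rand(3, 4), rand(4, 5)
@matmul M[a,c] := sum(b) A[a,b] * B[b,c]
M ≈ A * B    # true; the macro calls * (or mul! when writing in place)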

Note that unlike @einsum and @tensor, you must explicitly specify what indices to sum over.

@matmul Z[a⊗b,k] = sum(i,j)  D[i,a][j] * E[i⊗j,_,k,b]

Each tensor will be pre-processed exactly as for @cast / @reduce, here glueing slices of D together, reshaping E and the output Z. Once this is done, the right hand side must be of the form (tensor) * (tensor), which becomes mul!(ZZ, DD, EE).

@reduce V[i] := sum(k) W[k] * exp(@matmul _[i,k] := sum(j) A[i,j] * B[j,k])

You should be able to use this within the other macros, as shown.

TensorCast.@pretty (Macro)
@pretty @cast A[...] := B[...]

Prints an approximately equivalent expression with the macro expanded. Compared to @macroexpand1, generated symbols are replaced with animal names (from MacroTools), comments are deleted, module names are removed from functions, and the final expression is fed to println().
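
A sketch of typical use (the exact printed form depends on the TensorCast version):

using TensorCast

A = rand(3, 3)
@pretty @cast Z[j,i] := A[i,j] + 1
# prints the expanded code, roughly a broadcast over a lazily permuted A,
# with generated symbols renamed as described above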

To copy and run the printed expression, you may need various functions which aren't exported. Try something like using TensorCast: orient, star, rview, @assert_, red_glue, sliceview.


Functions

These are not exported, but are called by the macros above, and visible in what @pretty prints out.

TensorCast.Fast.sliceview (Function)
sliceview(A, code)
slicecopy(A, code)

Slice array A according to code, a tuple of length ndims(A), in which : indicates a dimension of the slices, and * a dimension separating them. For example if code = (:,*,:) then slices are either view(A, :,i,:) or A[:,i,:] with i=1:size(A,2).
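
For example, since sliceview is not exported it needs an explicit import:

using TensorCast: sliceview

A = rand(2, 3, 4)
S = sliceview(A, (:, *, :))    # 3 slices, S[i] == view(A, :, i, :)
size(S[1]) == (2, 4)           # true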


These are from helper packages:

The following docstrings were not found when these docs were built (check Documenter's build log for details):

  • TensorCast.stack
  • TensorCast.TransmutedDimsArray
  • TensorCast.transmute
  • TensorCast.transmutedims