Macros
TensorCast.@cast — Macro

```julia
@cast Z[i,j,...] := f(A[i,j,...], B[j,k,...])  options
```

Macro for broadcasting, reshaping, and slicing of arrays in index notation. Understands the following things:
- `A[i,j,k]` is a three-tensor with these indices.
- `B[(i,j),k]` is the same thing, reshaped to a matrix. Its first axis (the bracket) is indexed by `n = i + (j-1) * N` where `i ∈ 1:N`. This may also be written `B[i⊗j,k]`.
- `C[k][i,j]` is a vector of matrices, either created by slicing (if on the left) or implying glueing into (if on the right) a 3-tensor `A[i,j,k]` (see the sketch after this list).
- `D[j,k]{i}` is an ordinary matrix of `SVector`s, which may be reinterpreted from `A[i,j,k]`.
- `E[i,_,k]` has two nontrivial dimensions, and `size(E,2)==1`. On the right hand side (or when writing to an existing array) you may also write `E[i,3,k]` meaning `view(E, :,3,:)`, or `E[i,$c,j]` to use a variable `c`.
- `f(x)[i,j,k]` is allowed; `f(x)` must return a 3-tensor (and will be evaluated only once).
- `g(H[:,k])[i,j]` is a generalised `mapslices`, with `g` mapping columns of `H` to matrices, which are glued into a 3-tensor `A[i,j,k]`.
- `h(I[i], J[j])[k]` expects an `h` which maps two scalars to a vector, which gets broadcast `h.(I,J')`, then glued to make a 3-tensor.
- `K[i,j]'` conjugates each element, equivalent to `K'[j,i]` which is the conjugate-transpose of the matrix.
- `M[i,i]` means `diag(M)[i]`, but only for matrices: `N[i,i,k]` is an error.
- `P[i,i']` is normalised to `P[i,i′]` with unicode `\prime`.
- `R[i,-j,k]` means roughly `reverse(R, dims=2)`, and `Q[i,~j,k]` is similar with `shuffle`.
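For instance, a minimal sketch of slicing and reshaping (the array names here are illustrative, not part of the package):

```julia
using TensorCast
A = rand(2,3,4)
@cast S[k][i,j] := A[i,j,k]    # slicing: S is a 4-vector of 2×3 matrices
@cast B[i⊗j,k] := A[i,j,k]     # reshaping: B is a 6×4 matrix, i varying fastest
```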
The left and right hand sides must have all the same indices, and the only repeated index allowed is `M[i,i]`, which is a diagonal not a trace. See `@reduce` and `@matmul` for related macros which can sum over things.
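For example, a small sketch of the diagonal case (names illustrative):

```julia
using TensorCast
M = rand(3,3)
@cast d[i] := M[i,i]    # d == diag(M), the diagonal as a vector
```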
If a function of one or more tensors appears on the right hand side, then this represents a broadcasting operation, and the necessary re-orientations of axes are automatically inserted.
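A minimal sketch of such a broadcast (names illustrative):

```julia
using TensorCast
A, B = rand(3), rand(4)
@cast C[i,j] := A[i] * exp(B[j])   # equivalent to C = A .* exp.(B'), a 3×4 matrix
```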
The following actions are possible:
- `=` writes into an existing array, overwriting its contents, while `+=` adds (precisely `Z .= Z .+ ...`) and `*=` multiplies (all sketched below).
- `:=` creates a new array. To omit the name, write `Z = @cast _[i,j,k] := ...`.
- `|=` insists that the result is an `Array`, not a view or some other lazy wrapper. (This may still be a `reshape` of the input; it does not guarantee a copy.)
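A small sketch of these actions, under the same illustrative names:

```julia
using TensorCast
A, B = rand(3), rand(4)
@cast Y[i,j] := A[i] + B[j]    # := makes a new 3×4 array
Z = zeros(3,4)
@cast Z[i,j] = A[i] + B[j]     # = overwrites the existing Z
@cast Z[i,j] += A[i] * B[j]    # += adds, i.e. Z .= Z .+ A .* B'
@cast T[j,i] |= Y[i,j]         # |= collects into an Array, not a lazy PermutedDimsArray
```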
Options specified at the end (if several, separated by `,`) are:
- `i in 1:3` or `i ∈ 1:3` supplies the range of index `i` (sketched below). Variables and functions like `j in 1:Nj, k in 1:length(K)` are allowed, but `i = 1:3` is not.
- `lazy=false` disables `PermutedDimsArray` in favour of `permutedims`, and `Diagonal` in favour of `diagm` for `Z[i,i]` output.
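For instance, a minimal sketch (names illustrative):

```julia
using TensorCast
B = collect(1:6)
@cast A[i,j] := B[i⊗j]  (i in 1:2, j in 1:3)   # ranges let a 6-vector be reshaped to 2×3
@cast P[j,i] := A[i,j]  lazy=false             # a permutedims copy, not a PermutedDimsArray
```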
Some modifications to broadcasting are possible, after loading the corresponding package:
- `@cast @strided Z[i,j] := ...` uses Strided.jl's macro, for multi-threaded broadcasting.
- `@cast @turbo Z[i,j] := ...` uses LoopVectorization.jl's macro, for SIMD acceleration.
- `@cast @lazy Z[i,j] := ...` uses LazyArrays.jl's `BroadcastArray` type, although there is no such macro.
To create static slices `D[k]{i,j}` you should give all slice dimensions explicitly. You may write `D[k]{i:2,j:2}` to specify `Size(2,2)` slices. They are made most cleanly from the first indices of the input, i.e. this `D` from `A[i,j,k]`. The notation `A{:,:,k}` will only work in this order, and writing `A{:2,:2,k}` provides the sizes.
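A minimal sketch, assuming StaticArrays is loaded (names illustrative):

```julia
using StaticArrays, TensorCast
A = rand(2,2,5)
@cast D[k]{i:2,j:2} := A[i,j,k]   # D is a 5-vector of 2×2 SMatrix slices
```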
TensorCast.@reduce — Macro

```julia
@reduce A[i] := sum(j,k) B[i,j,k]             # A = vec(sum(B, dims=(2,3)))
@reduce A[i] := prod(j) B[i] + ε * C[i,j]     # A = vec(prod(B .+ ε .* C, dims=2))
@reduce A[i] = sum(j) exp( C[i,j] / D[j] )    # sum!(A, exp.(C ./ D'))
```

Tensor reduction macro:
- The reduction function can be anything which works like `sum(B, dims=(1,3))`, for instance `prod` and `maximum` and `Statistics.mean`.
- In-place operations `Z[j] = sum(...` will construct the banged version of the given function's name, which must work like `sum!(Z, A)` (sketched below).
- The tensors can be anything that `@cast` understands, including gluing of slices `B[i,k][j]` and reshaping `B[i⊗j,k]`. See `? @cast` for the complete list.
- If there is a function of one or more tensors on the right, then this is a broadcasting operation.
- Index ranges may be given afterwards (as for `@cast`) or inside the reduction `sum(i:3, k:4)`.
- All indices appearing on the right must appear either within `sum(...)` etc, or on the left.
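A minimal sketch of the new-array and in-place forms (names illustrative):

```julia
using TensorCast
B = rand(3,4,5)
@reduce A[i] := sum(j,k) B[i,j,k]   # new array, A = vec(sum(B, dims=(2,3)))
A2 = similar(A)
@reduce A2[i] = sum(j,k) B[i,j,k]   # in-place, constructs sum!(A2, ...)
```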
```julia
F = @reduce sum(i,j) B[i] + γ * D[j]       # sum(B .+ γ .* D')
@reduce G[] := sum(i,j) B[i] + γ * D[j]    # F == G[]
```

Complete reduction to a scalar output `F`, or a zero-dim array `G`. `G[]` involves `sum(A, dims=(1,2))` rather than `sum(A)`.
```julia
@reduce @lazy Z[k] := sum(i,j) A[i] * B[j] * C[k]  (i in 1:N, j in 1:N, k in 1:N)
```

The option `@lazy` replaces the broadcast expression with a `BroadcastArray`, to avoid materializing the entire array before summing. In the example this is of size `N^3`. This needs `using LazyArrays` to work.
The options `@strided` and `@turbo` will alter broadcasting operations, and need `using Strided` or `using LoopVectorization` to work.
```julia
@reduce sum(i) A[i] * log(@reduce _[i] := sum(j) A[j] * exp(B[i,j]))
@cast W[i] := A[i] * exp(- @reduce S[i] = sum(j) exp(B[i,j]) lazy)
```

Recursion like this is allowed, inside either `@cast` or `@reduce`. The intermediate array need not have a name, like `_[i]`, unless writing into an existing array, like `S[i]` here.
TensorCast.@matmul — Macro

```julia
@matmul M[a,c] := sum(b) A[a,b] * B[b,c]
```

Matrix multiplication macro. Uses the same syntax as `@reduce`, but instead of broadcasting the expression on the right out to a 3-tensor before summing it along one dimension, this calls `*` or `mul!`, which is usually much faster. But it only works on expressions of suitable form.
Note that unlike `@einsum` and `@tensor`, you must explicitly specify which indices to sum over.
```julia
@matmul Z[a⊗b,k] = sum(i,j) D[i,a][j] * E[i⊗j,_,k,b]
```

Each tensor will be pre-processed exactly as for `@cast` / `@reduce`, here glueing slices of `D` together, reshaping `E` and the output `Z`. Once this is done, the right hand side must be of the form `(tensor) * (tensor)`, which becomes `mul!(ZZ, DD, EE)`.
```julia
@reduce V[i] := sum(k) W[k] * exp(@matmul _[i,k] := sum(j) A[i,j] * B[j,k])
```

You should be able to use this within the other macros, as shown.
TensorCast.@pretty — Macro

```julia
@pretty @cast A[...] := B[...]
```

Prints an approximately equivalent expression with the macro expanded. Compared to `@macroexpand1`, generated symbols are replaced with animal names (from MacroTools), comments are deleted, module names are removed from functions, and the final expression is fed to `println()`.
To copy and run the printed expression, you may need various functions which aren't exported. Try something like `using TensorCast: orient, star, rview, @assert_, red_glue, sliceview`.
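For instance (the exact printed code will vary between versions):

```julia
using TensorCast
A = rand(2,3)
@pretty @cast Z[(i,j)] := A[i,j]   # prints the reshape-based expansion, gensyms renamed
```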
Functions
These are not exported, but are called by the macros above, and visible in what `@pretty` prints out.
TensorCast.Fast.diagview — Function

```julia
diagview(M) = view(M, diagind(M))
```

Like `diag(M)` but makes a view.
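A minimal sketch, assuming `diagview` can be imported from `TensorCast` like the helpers listed under `@pretty` above:

```julia
using TensorCast: diagview
M = [1 2; 3 4]
dv = diagview(M)   # view of the diagonal, here [1, 4]
dv .= 0            # mutates the diagonal of M in place
```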
TensorCast.Fast.sliceview — Function

```julia
sliceview(A, code)
slicecopy(A, code)
```

Slice array `A` according to `code`, a tuple of length `ndims(A)`, in which `:` indicates a dimension of the slices, and `*` a dimension separating them. For example, if `code = (:,*,:)` then the slices are either `view(A, :,i,:)` or `A[:,i,:]`, with `i = 1:size(A,2)`.
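A minimal sketch, again assuming the import works as under `@pretty` above:

```julia
using TensorCast: sliceview
A = reshape(collect(1:24), 2, 3, 4)
s = sliceview(A, (:, *, :))   # 3 slices, each view(A, :, i, :) of size 2×4
s[1] == A[:, 1, :]            # true
```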
These are from helper packages:
- `TensorCast.stack`
- `TensorCast.TransmutedDimsArray`
- `TensorCast.transmute`
- `TensorCast.transmutedims`