modeci_mdf.functions.onnx

Programmatically defines every ONNX operation as a Python callable function. Executing ONNX graphs in this way somewhat defeats the performance purpose of ONNX, since the overhead for each operation will be high. However, it allows us to test the MDF scheduler (which invokes Python functions) on any MDF model defined over ONNX operations. In the future, MDF should probably just compile to ONNX (or some other IR) for execution.
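
For example, a minimal usage sketch of calling one of the generated callables directly (assuming, as the listing below suggests, that they are importable from this module and accept NumPy arrays as positional inputs, with ONNX attributes passed as keyword arguments):

```python
import numpy as np

# Assumption: the generated callables (e.g. relu below) accept NumPy arrays
# as positional inputs and ONNX attributes as keyword arguments.
from modeci_mdf.functions.onnx import relu

x = np.array([-2.0, -0.5, 0.0, 3.0])
print(relu(x))  # expected: [0. 0. 0. 3.]
```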

Functions

abs(*args, **kwargs)

Absolute takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the absolute value, y = abs(x), is applied to the tensor elementwise.

acos(*args, **kwargs)

Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise.

acosh(*args, **kwargs)

Calculates the hyperbolic arccosine of the given input tensor element-wise.

add(*args, **kwargs)

Performs element-wise binary addition (with Numpy-style broadcasting support).

and(*args, **kwargs)

Returns the tensor resulting from performing the and logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

argmax(*args, **kwargs)

Computes the indices of the max elements of the input tensor along the provided axis.

argmin(*args, **kwargs)

Computes the indices of the min elements of the input tensor along the provided axis.

asin(*args, **kwargs)

Calculates the arcsine (inverse of sine) of the given input tensor, element-wise.

asinh(*args, **kwargs)

Calculates the hyperbolic arcsine of the given input tensor element-wise.

atan(*args, **kwargs)

Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise.

atanh(*args, **kwargs)

Calculates the hyperbolic arctangent of the given input tensor element-wise.

averagepool(*args, **kwargs)

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths.

batchnormalization(*args, **kwargs)

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167.

bernoulli(*args, **kwargs)

Draws binary random numbers (0 or 1) from a Bernoulli distribution.

bitshift(*args, **kwargs)

The bitwise shift operator performs an element-wise shift: for each input element, its binary representation is moved toward the left or the right, as selected by the 'direction' attribute, by the number of bit positions given in the second input tensor.

cast(*args, **kwargs)

The operator casts the elements of a given input tensor to a data type specified by the 'to' argument and returns an output tensor of the same size in the converted type.

castlike(*args, **kwargs)

The operator casts the elements of a given input tensor (the first input) to the same data type as the elements of the second input tensor.

ceil(*args, **kwargs)

Ceil takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the ceiling, y = ceil(x), is applied to the tensor elementwise.

celu(*args, **kwargs)

Continuously Differentiable Exponential Linear Units: performs the linear unit element-wise on the input tensor X using the formula max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
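
A plain-NumPy illustration of that formula (not the module's implementation; alpha defaults to 1.0 in the ONNX schema):

```python
import numpy as np

def celu_reference(x, alpha=1.0):
    # Celu formula: max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x / alpha) - 1.0))

print(celu_reference(np.array([-2.0, 0.0, 1.5])))
```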

clip(*args, **kwargs)

Clip operator limits the given input within an interval.

compress(*args, **kwargs)

Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index.

concat(*args, **kwargs)

Concatenate a list of tensors into a single tensor.

concatfromsequence(*args, **kwargs)

Concatenate a sequence of tensors into a single tensor.

constant(*args, **kwargs)

This operator produces a constant tensor.

constantofshape(*args, **kwargs)

Generate a tensor with given value and shape.

conv(*args, **kwargs)

The convolution operator consumes an input tensor and a filter, and computes the output.

convert_type(v)

Helper function to convert types to ONNX compatible types.

convinteger(*args, **kwargs)

The integer convolution operator consumes an input tensor, its zero-point, a filter, and its zero-point, and computes the output.

convtranspose(*args, **kwargs)

The convolution transpose operator consumes an input tensor and a filter, and computes the output.

cos(*args, **kwargs)

Calculates the cosine of the given input tensor, element-wise.

cosh(*args, **kwargs)

Calculates the hyperbolic cosine of the given input tensor element-wise.

cumsum(*args, **kwargs)

Performs cumulative sum of the input elements along the given axis.

depthtospace(*args, **kwargs)

DepthToSpace rearranges (permutes) data from depth into blocks of spatial data.

dequantizelinear(*args, **kwargs)

The linear dequantization operator.

det(*args, **kwargs)

Det calculates determinant of a square matrix or batches of square matrices.

div(*args, **kwargs)

Performs element-wise binary division (with Numpy-style broadcasting support).

dropout(*args, **kwargs)

Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional input training_mode (boolean scalar).

dynamicquantizelinear(*args, **kwargs)

A function to fuse the calculation of the scale, zero point, and FP32-to-8-bit conversion of FP32 input data; outputs the scale, zero point, and quantized input for a given FP32 input. The scale is calculated as y_scale = (max(x) - min(x)) / (qmax - qmin), where qmax and qmin are the max and min values of the quantization range (i.e. [0, 255] for uint8) and the data range is adjusted to include 0. The zero point is calculated as intermediate_zero_point = qmin - min(x) / y_scale and y_zero_point = cast(round(saturate(intermediate_zero_point))), where saturation is to [0, 255] for uint8 or [-127, 127] for int8 (currently only uint8 is supported) and rounding is to nearest, ties to even. The data quantization formula is y = saturate(round(x / y_scale) + y_zero_point), with the same saturation and rounding rules.
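
A NumPy sketch of the arithmetic described above, for the supported uint8 case (illustrative only, not the module's implementation):

```python
import numpy as np

def dynamic_quantize_linear_reference(x):
    qmin, qmax = 0, 255                      # uint8 quantization range
    x_min = min(0.0, float(x.min()))         # data range adjusted to include 0
    x_max = max(0.0, float(x.max()))
    y_scale = (x_max - x_min) / (qmax - qmin)
    # Zero point: saturate to [qmin, qmax], rounding to nearest (ties to even).
    y_zero_point = np.uint8(np.clip(np.rint(qmin - x_min / y_scale), qmin, qmax))
    # Quantize the data with the same saturation and rounding rules.
    y = np.clip(np.rint(x / y_scale) + y_zero_point, qmin, qmax).astype(np.uint8)
    return y, np.float32(y_scale), y_zero_point

x = np.array([-1.0, 0.0, 1.5, 3.0], dtype=np.float32)
print(dynamic_quantize_linear_reference(x))
```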

einsum(*args, **kwargs)

An einsum of the form `term1, term2 -> output-term` produces an output tensor using the equation output[output-term] = reduce-sum(input1[term1] * input2[term2]), where the summation runs over all indices that occur in the input terms but not in the output term.

elu(*args, **kwargs)

Elu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the function f(x) = alpha * (exp(x) - 1) for x < 0, f(x) = x for x >= 0, is applied to the tensor elementwise.

equal(*args, **kwargs)

Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

erf(*args, **kwargs)

Computes the error function of the given input tensor element-wise.

exp(*args, **kwargs)

Calculates the exponential of the given input tensor, element-wise.

expand(*args, **kwargs)

Broadcast the input tensor following the given shape and the broadcast rule.

eyelike(*args, **kwargs)

Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else.

flatten(*args, **kwargs)

Flattens the input tensor into a 2D matrix.

floor(*args, **kwargs)

Floor takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the floor, y = floor(x), is applied to the tensor elementwise.

gather(*args, **kwargs)

Given data tensor of rank r >= 1, and indices tensor of rank q, gather entries of the axis dimension of data (by default outer-most one as axis=0) indexed by indices, and concatenates them in an output tensor of rank q + (r - 1).

gatherelements(*args, **kwargs)

GatherElements takes two inputs data and indices of the same rank r >= 1 and an optional attribute axis that identifies an axis of data (by default, the outer-most axis, that is axis 0).

gathernd(*args, **kwargs)

Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.

gemm(*args, **kwargs)

General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3
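
In the BLAS/ONNX sense this computes Y = alpha * A' * B' + beta * C, where A' and B' are optionally transposed. A NumPy illustration of that formula (parameter names here are illustrative, not the module's API):

```python
import numpy as np

def gemm_reference(a, b, c=0.0, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    # Y = alpha * A' @ B' + beta * C, with optional transposition of A and B.
    a = a.T if trans_a else a
    b = b.T if trans_b else b
    return alpha * (a @ b) + beta * c

a, b = np.ones((2, 3)), np.ones((3, 4))
print(gemm_reference(a, b, alpha=2.0).shape)  # (2, 4)
```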

get_all_schemas_version(max_version)

Enumerate all the OpSchemas available from ONNX.

get_onnx_ops([opset_version])

Enumerate all available ONNX operations and generate MDF function specifications for each one.
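
A minimal sketch of inspecting the output (assuming the return value is a collection of generated MDF function specifications, one per ONNX operator):

```python
from modeci_mdf.functions.onnx import get_onnx_ops

# Assumption: a list of specification records, one per ONNX operator.
specs = get_onnx_ops()
print(len(specs), "ONNX operator specifications generated")
```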

get_onnx_schema(func_name[, opset_version])

Return the ONNX schema corresponding to a generated ONNX Python function with name func_name.
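
A minimal sketch (assuming the returned object is an onnx.defs.OpSchema, so standard attributes such as .name and .since_version are available):

```python
from modeci_mdf.functions.onnx import get_onnx_schema

schema = get_onnx_schema("relu")
# Assumption: an onnx.defs.OpSchema instance.
print(schema.name, schema.since_version)
```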

globalaveragepool(*args, **kwargs)

GlobalAveragePool consumes an input tensor X and applies average pooling across the values in the same channel.

globallppool(*args, **kwargs)

GlobalLpPool consumes an input tensor X and applies lp pool pooling across the values in the same channel.

globalmaxpool(*args, **kwargs)

GlobalMaxPool consumes an input tensor X and applies max pooling across the values in the same channel.

greater(*args, **kwargs)

Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

greaterorequal(*args, **kwargs)

Returns the tensor resulting from performing the greater_equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

gru(*args, **kwargs)

Computes a one-layer GRU.

hardmax(*args, **kwargs)

The operator computes the hardmax values for the given input: 1 for the first occurrence of the maximum value along the specified axis, 0 for all other elements.

hardsigmoid(*args, **kwargs)

HardSigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)), is applied to the tensor elementwise.

hardswish(*args, **kwargs)

HardSwish takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the HardSwish function, y = x * max(0, min(1, alpha * x + beta)) = x * HardSigmoid<alpha, beta>(x), where alpha = 1/6 and beta = 0.5, is applied to the tensor elementwise.
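
Illustrative NumPy versions of the two formulas above (the alpha/beta defaults for HardSigmoid follow the ONNX schema; not the module's implementation):

```python
import numpy as np

def hard_sigmoid_reference(x, alpha=0.2, beta=0.5):
    # y = max(0, min(1, alpha * x + beta))
    return np.maximum(0.0, np.minimum(1.0, alpha * x + beta))

def hard_swish_reference(x):
    # y = x * HardSigmoid<alpha=1/6, beta=0.5>(x)
    return x * hard_sigmoid_reference(x, alpha=1.0 / 6.0, beta=0.5)

print(hard_swish_reference(np.array([-4.0, -1.0, 0.0, 1.0, 4.0])))
```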

identity(*args, **kwargs)

Identity operator.

if(*args, **kwargs)

If conditional.

import_class(name)

Import from a module specified by a string.

instancenormalization(*args, **kwargs)

Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022.

isinf(*args, **kwargs)

Map infinity to true and other values to false.

isnan(*args, **kwargs)

Returns which elements of the input are NaN.

leakyrelu(*args, **kwargs)

LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one output data (Tensor<T>) where the function f(x) = alpha * x for x < 0, f(x) = x for x >= 0, is applied to the data tensor elementwise.

less(*args, **kwargs)

Returns the tensor resulting from performing the less logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

lessorequal(*args, **kwargs)

Returns the tensor resulting from performing the less_equal logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

log(*args, **kwargs)

Calculates the natural log of the given input tensor, element-wise.

logsoftmax(*args, **kwargs)

The operator computes the log of softmax values for the given input: LogSoftmax(input, axis) = Log(Softmax(input, axis)).

loop(*args, **kwargs)

Generic Looping construct.

lpnormalization(*args, **kwargs)

Given a matrix, apply Lp-normalization along the provided axis.

lppool(*args, **kwargs)

LpPool consumes an input tensor X and applies Lp pooling across the tensor according to kernel sizes, stride sizes, and pad lengths.

lrn(*args, **kwargs)

Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf).

lstm(*args, **kwargs)

Computes a one-layer LSTM.

matmul(*args, **kwargs)

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html

matmulinteger(*args, **kwargs)

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html.

max(*args, **kwargs)

Element-wise max of each of the input tensors (with Numpy-style broadcasting support).

maxpool(*args, **kwargs)

MaxPool consumes an input tensor X and applies max pooling across the tensor according to kernel sizes, stride sizes, and pad lengths.

maxroipool(*args, **kwargs)

ROI max pool consumes an input tensor X and region of interests (RoIs) to apply max pooling across each RoI, to produce output 4-D tensor of shape (num_rois, channels, pooled_shape[0], pooled_shape[1]).

maxunpool(*args, **kwargs)

MaxUnpool essentially computes the partial inverse of the MaxPool op.

mean(*args, **kwargs)

Element-wise mean of each of the input tensors (with Numpy-style broadcasting support).

meanvariancenormalization(*args, **kwargs)

A MeanVarianceNormalization function: performs mean variance normalization on the input tensor X using the formula (X - E[X]) / sqrt(E[(X - E[X])^2]).
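
A NumPy sketch of that formula (the default axes (0, 2, 3) follow the ONNX schema; illustrative only, not the module's implementation):

```python
import numpy as np

def mvn_reference(x, axes=(0, 2, 3)):
    # (X - E[X]) / sqrt(E[(X - E[X])^2]), computed over the given axes.
    mean = x.mean(axis=axes, keepdims=True)
    variance = ((x - mean) ** 2).mean(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(variance)

x = np.random.rand(1, 3, 4, 4).astype(np.float32)
print(mvn_reference(x).shape)  # (1, 3, 4, 4)
```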

min(*args, **kwargs)

Element-wise min of each of the input tensors (with Numpy-style broadcasting support).

mod(*args, **kwargs)

Performs element-wise binary modulus (with Numpy-style broadcasting support).

mul(*args, **kwargs)

Performs element-wise binary multiplication (with Numpy-style broadcasting support).

multinomial(*args, **kwargs)

Generate a tensor of samples from a multinomial distribution according to the probabilities of each of the possible outcomes.

neg(*args, **kwargs)

Neg takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the negation, y = -x, is applied to the tensor elementwise.

negativeloglikelihoodloss(*args, **kwargs)

A NegativeLogLikelihoodLoss operator computes (weighted) negative log likelihood loss.

nonmaxsuppression(*args, **kwargs)

Filter out boxes that have high intersection-over-union (IOU) overlap with previously selected boxes.

nonzero(*args, **kwargs)

Returns the indices of the elements that are non-zero (in row-major order - by dimension).

not(*args, **kwargs)

Returns the negation of the input tensor element-wise.

onehot(*args, **kwargs)

Produces a one-hot tensor based on inputs.

optional(*args, **kwargs)

Constructs an optional-type value containing either an empty optional of a certain type specified by the attribute, or a non-empty value containing the input element.

optionalgetelement(*args, **kwargs)

Outputs the element in the optional-type input.

optionalhaselement(*args, **kwargs)

Returns true if the optional-type input contains an element.

or(*args, **kwargs)

Returns the tensor resulting from performing the or logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).

pad(*args, **kwargs)

Given a tensor containing the data to be padded (data), a tensor containing the number of start and end pad values for axis (pads), (optionally) a mode, and (optionally) constant_value, a padded tensor (output) is generated.

pow(*args, **kwargs)

Pow takes input data (Tensor<T>) and exponent Tensor, and produces one output data (Tensor<T>) where the function f(x) = x^exponent, is applied to the data tensor elementwise.

predict_with_onnxruntime(model_def, *inputs)

Simple helper to run an ONNX model with a set of inputs.
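
A minimal sketch: build a one-node ONNX model with onnx.helper and run it (assuming positional inputs are matched to the graph inputs in order and outputs are returned keyed by name):

```python
import numpy as np
import onnx
from onnx import TensorProto, helper
from modeci_mdf.functions.onnx import predict_with_onnxruntime

# Build a tiny Relu-only ONNX model in memory.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "relu_graph",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [4])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)

x = np.array([-1.0, 0.0, 2.0, -3.0], dtype=np.float32)
# Assumption: inputs are positional and the result is keyed by output name.
print(predict_with_onnxruntime(model, x))
```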

prelu(*args, **kwargs)

PRelu takes input data (Tensor<T>) and slope tensor as input, and produces one output data (Tensor<T>) where the function f(x) = slope * x for x < 0, f(x) = x for x >= 0, is applied to the data tensor elementwise.

qlinearconv(*args, **kwargs)

The convolution operator consumes a quantized input tensor, its scale and zero point, a quantized filter, its scale and zero point, and output's scale and zero point, and computes the quantized output.

qlinearmatmul(*args, **kwargs)

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html.

quantizelinear(*args, **kwargs)

The linear quantization operator.

randomnormal(*args, **kwargs)

Generate a tensor with random values drawn from a normal distribution.

randomnormallike(*args, **kwargs)

Generate a tensor with random values drawn from a normal distribution.

randomuniform(*args, **kwargs)

Generate a tensor with random values drawn from a uniform distribution.

randomuniformlike(*args, **kwargs)

Generate a tensor with random values drawn from a uniform distribution.

range(*args, **kwargs)

Generate a tensor containing a sequence of numbers that begins at start and extends by increments of delta up to limit (exclusive).

reciprocal(*args, **kwargs)

Reciprocal takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the reciprocal, y = 1/x, is applied to the tensor elementwise.

reducel1(*args, **kwargs)

Computes the L1 norm of the input tensor's elements along the provided axes.

reducel2(*args, **kwargs)

Computes the L2 norm of the input tensor's elements along the provided axes.

reducelogsum(*args, **kwargs)

Computes the log sum of the input tensor's elements along the provided axes.

reducelogsumexp(*args, **kwargs)

Computes the log sum exponent of the input tensor's elements along the provided axes.

reducemax(*args, **kwargs)

Computes the max of the input tensor's elements along the provided axes.

reducemean(*args, **kwargs)

Computes the mean of the input tensor's elements along the provided axes.

reducemin(*args, **kwargs)

Computes the min of the input tensor's elements along the provided axes.

reduceprod(*args, **kwargs)

Computes the product of the input tensor's elements along the provided axes.

reducesum(*args, **kwargs)

Computes the sum of the input tensor's elements along the provided axes.

reducesumsquare(*args, **kwargs)

Computes the sum square of the input tensor's elements along the provided axes.

relu(*args, **kwargs)

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

reshape(*args, **kwargs)

Reshape the input tensor similar to numpy.reshape.

resize(*args, **kwargs)

Resize the input tensor. In general, it calculates every value in the output tensor as a weighted average of neighborhood (a.k.a. sampling locations) in the input tensor. Each dimension value of the output tensor is: output_dimension = floor(input_dimension * (roi_end - roi_start) * scale) if input "sizes" is not specified.

reversesequence(*args, **kwargs)

Reverse batch of sequences having different lengths specified by sequence_lens.

rnn(*args, **kwargs)

Computes a one-layer simple RNN.

roialign(*args, **kwargs)

Region of Interest (RoI) align operation described in the [Mask R-CNN paper](https://arxiv.org/abs/1703.06870).

round(*args, **kwargs)

Round takes one input Tensor and rounds the values, element-wise, meaning it finds the nearest integer for each value.

run_onnx_op(op_name, inputs, output_names[, ...])

Simple helper function that invokes a single ONNX operator with inputs and attributes and returns the results.
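
A minimal sketch (assuming inputs are passed as a dict keyed by the ONNX input names, output names are listed explicitly, and the result comes back keyed by those output names):

```python
import numpy as np
from modeci_mdf.functions.onnx import run_onnx_op

x = np.array([-1.0, 0.5, 2.0], dtype=np.float32)
# Assumption: "X"/"Y" are the input/output names of the ONNX Relu schema,
# and the return value is a dict keyed by the requested output names.
result = run_onnx_op("Relu", inputs={"X": x}, output_names=["Y"])
print(result)
```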

scan(*args, **kwargs)

Scan can be used to iterate over one or more scan_input tensors, constructing zero or more scan_output tensors.

scatter(*args, **kwargs)

This operator is deprecated.

scatterelements(*args, **kwargs)

ScatterElements takes three inputs data, updates, and indices of the same rank r >= 1 and an optional attribute axis that identifies an axis of data (by default, the outer-most axis, that is axis 0).

scatternd(*args, **kwargs)

ScatterND takes three inputs data tensor of rank r >= 1, indices tensor of rank q >= 1, and updates tensor of rank q + r - indices.shape[-1] - 1.

selu(*args, **kwargs)

Selu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the scaled exponential linear unit function, y = gamma * (alpha * e^x - alpha) for x <= 0, y = gamma * x for x > 0, is applied to the tensor elementwise.

sequenceat(*args, **kwargs)

Outputs a tensor copy from the tensor at 'position' in 'input_sequence'.

sequenceconstruct(*args, **kwargs)

Construct a tensor sequence containing 'inputs' tensors.

sequenceempty(*args, **kwargs)

Construct an empty tensor sequence, with given data type.

sequenceerase(*args, **kwargs)

Outputs a tensor sequence that removes the tensor at 'position' from 'input_sequence'.

sequenceinsert(*args, **kwargs)

Outputs a tensor sequence that inserts 'tensor' into 'input_sequence' at 'position'.

sequencelength(*args, **kwargs)

Produces a scalar (tensor of empty shape) containing the number of tensors in 'input_sequence'.

shape(*args, **kwargs)

Takes a tensor as input and outputs a 1D int64 tensor containing the shape of the input tensor.

shrink(*args, **kwargs)

Shrink takes one input data (Tensor<numeric>) and produces one Tensor output, having same datatype and shape with input.

sigmoid(*args, **kwargs)

Sigmoid takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the sigmoid function, y = 1 / (1 + exp(-x)), is applied to the tensor elementwise.

sign(*args, **kwargs)

Calculate the sign of the given input tensor element-wise.

sin(*args, **kwargs)

Calculates the sine of the given input tensor, element-wise.

sinh(*args, **kwargs)

Calculates the hyperbolic sine of the given input tensor element-wise.

size(*args, **kwargs)

Takes a tensor as input and outputs an int64 scalar equal to the total number of elements of the input tensor.

slice(*args, **kwargs)

Produces a slice of the input tensor along multiple axes.

softmax(*args, **kwargs)

The operator computes the normalized exponential (softmax) values for the given input: Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis, keepdims=1).

softmaxcrossentropyloss(*args, **kwargs)

Loss function that measures the softmax cross entropy between 'scores' and 'labels'.

softplus(*args, **kwargs)

Softplus takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the softplus function, y = ln(exp(x) + 1), is applied to the tensor elementwise.

softsign(*args, **kwargs)

Calculates the softsign (x/(1+|x|)) of the given input tensor element-wise.

spacetodepth(*args, **kwargs)

SpaceToDepth rearranges blocks of spatial data into depth.

split(*args, **kwargs)

Split a tensor into a list of tensors, along the specified 'axis'.

splittosequence(*args, **kwargs)

Split a tensor into a sequence of tensors, along the specified 'axis'.

sqrt(*args, **kwargs)

Square root takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the square root, y = x^0.5, is applied to the tensor elementwise.

squeeze(*args, **kwargs)

Remove single-dimensional entries from the shape of a tensor.

stringnormalizer(*args, **kwargs)

StringNormalization performs string operations for basic cleaning.

sub(*args, **kwargs)

Performs element-wise binary subtraction (with Numpy-style broadcasting support).

sum(*args, **kwargs)

Element-wise sum of each of the input tensors (with Numpy-style broadcasting support).

tan(*args, **kwargs)

Calculates the tangent of the given input tensor, element-wise.

tanh(*args, **kwargs)

Calculates the hyperbolic tangent of the given input tensor element-wise.

tfidfvectorizer(*args, **kwargs)

This transform extracts n-grams from the input sequence and saves them as a vector.

thresholdedrelu(*args, **kwargs)

ThresholdedRelu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = x for x > alpha, y = 0 otherwise, is applied to the tensor elementwise.

tile(*args, **kwargs)

Constructs a tensor by tiling a given tensor.

topk(*args, **kwargs)

Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of shape [a_1, a_2, ..., a_n, r] and integer argument k, return two outputs: a value tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ..., a_n] containing the values of the top k elements along the specified axis, and an index tensor of the same shape containing the indices of the top k elements (original indices from the input tensor).
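
A NumPy sketch of the two outputs described above (largest-k case; ties and the 'sorted' attribute are ignored; not the module's implementation):

```python
import numpy as np

def topk_reference(x, k, axis=-1, largest=True):
    # Sort along `axis`, keep the first k positions, and gather the values.
    order = np.argsort(-x if largest else x, axis=axis)
    idx = np.take(order, np.arange(k), axis=axis)
    values = np.take_along_axis(x, idx, axis=axis)
    return values, idx

x = np.array([[1.0, 5.0, 3.0], [7.0, 2.0, 9.0]])
print(topk_reference(x, k=2, axis=1))
```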

transpose(*args, **kwargs)

Transpose the input tensor similar to numpy.transpose.

trilu(*args, **kwargs)

Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor(s). The attribute "upper" determines whether the upper or lower part is retained: if set to true, the upper triangular matrix is retained; otherwise the lower triangular matrix is retained. The default value of the "upper" attribute is true.

Trilu takes one input tensor of shape [*, N, M], where * is zero or more batch dimensions. The upper triangular part consists of the elements on and above the given diagonal (k); the lower triangular part consists of the elements on and below the diagonal. All other elements in the matrix are set to zero. If k = 0, the triangular part on and above/below the main diagonal is retained.

If upper is set to true, a positive k retains the upper triangular matrix excluding the main diagonal and (k-1) diagonals above it, while a negative k retains the main diagonal and |k| diagonals below it. If upper is set to false, a positive k retains the lower triangular matrix including the main diagonal and k diagonals above it, while a negative k excludes the main diagonal and (|k|-1) diagonals below it.
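
For the 2-D case this matches NumPy's triu/tril with the same diagonal offset k; a small illustration (not the module's implementation):

```python
import numpy as np

def trilu_reference(x, k=0, upper=True):
    # Keep the upper (np.triu) or lower (np.tril) triangular part, offset by k.
    return np.triu(x, k=k) if upper else np.tril(x, k=k)

x = np.arange(1, 10).reshape(3, 3)
print(trilu_reference(x, k=1, upper=True))    # strictly above the main diagonal
print(trilu_reference(x, k=-1, upper=False))  # strictly below the main diagonal
```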

unique(*args, **kwargs)

Find the unique elements of a tensor.

unsqueeze(*args, **kwargs)

Insert single-dimensional entries to the shape of an input tensor (data).

upsample(*args, **kwargs)

Upsample the input tensor. Each dimension value of the output tensor is: output_dimension = floor(input_dimension * scale).

where(*args, **kwargs)

Return elements, either from X or Y, depending on condition.

xor(*args, **kwargs)

Returns the tensor resulting from performing the xor logical operation elementwise on the input tensors A and B (with Numpy-style broadcasting support).