PyTorch
This page documents all of the activation functions supported by Echo for the PyTorch backend.
Mish
echoAI.Activation.t_ops.Mish()
Applies the element-wise function:

$$\text{Mish}(x) = x \tanh(\ln(1 + e^{x}))$$
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
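A minimal usage sketch (assuming, as for the other modules on this page, that the activation is a standard torch.nn.Module called directly on a tensor):

```python
import torch
from echoAI.Activation.t_ops import Mish

m = Mish()             # parameter-free module
x = torch.randn(2, 3)  # any shape
y = m(x)               # same shape as the input
```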
Reference:
Mish: A Self Regularized Non-Monotonic Activation Function
Swish
echoAI.Activation.t_ops.Swish(eswish = False, swish = True, beta = 1.735, flatten = False, pfts = False)
Allows the following element-wise functions:

$$\text{Swish}(x) = x \sigma(x)$$

$$\text{E-Swish}(x) = \beta x \sigma(x)$$

$$\text{FTS}(x) = \begin{cases} x \sigma(x) + c & x \geq 0 \\ c & x < 0 \end{cases}$$

where $\sigma$ denotes the logistic sigmoid.
Parameters:
eswish - Uses the E-Swish activation function. Default: False
swish - Uses the Swish activation function. Default: False
flatten - Uses the Flatten T-Swish activation function; c is a constant with value -0.2. Default: False
beta - parameter used in the E-Swish formulation. Default: 1.375
pfts - Uses the Parametric Flatten T-Swish function. It has the same formulation as Flatten T-Swish, but c is a trainable parameter initialized to -0.2 rather than a constant. Default: False
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
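A sketch of selecting the different variants through the constructor flags; only one variant flag is enabled at a time here, and the exact interplay of the flags should be checked against the library source:

```python
import torch
from echoAI.Activation.t_ops import Swish

x = torch.randn(4, 8)

swish  = Swish(swish=True)                            # Swish
eswish = Swish(eswish=True, swish=False, beta=1.375)  # E-Swish with explicit beta
fts    = Swish(flatten=True, swish=False)             # Flatten T-Swish (c = -0.2)
pfts   = Swish(pfts=True, swish=False)                # Parametric FTS (c trainable)

y = eswish(x)                                         # same shape as the input
```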
References:
Searching for Activation Functions
E-swish: Adjusting Activations to Different Network Depths
Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
Parametric Flatten-T Swish: An Adaptive Non-linear Activation Function For Deep Learning
Aria2
echoAI.Activation.t_ops.Aria2(beta = 0.5, alpha = 1.0)
Applies the element-wise function:

$$\text{Aria2}(x) = \left(1 + e^{-\beta x}\right)^{-\alpha}$$
Parameters:
beta - $\beta$ is the exponential growth rate. Default: 0.5
alpha - $\alpha$ is a hyper-parameter with a two-fold effect: it reduces the curvature in the third quadrant and increases the curvature in the first quadrant while lowering the value of the activation. Default: 1.0
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
BReLU
echoAI.Activation.t_ops.BReLU()
Applies the element-wise function:

$$\text{BReLU}(x_i) = \begin{cases} \text{ReLU}(x_i) & i \bmod 2 = 0 \\ -\text{ReLU}(-x_i) & i \bmod 2 = 1 \end{cases}$$
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
Shifting Mean Activation Towards Zero with Bipolar Activation Functions
APL
echoAI.Activation.t_ops.APL(s)
Applies the element-wise function:

$$\text{APL}(x) = \max(0, x) + \sum_{s=1}^{S} a^{s} \max\!\left(0, -x + b^{s}\right)$$
Parameter:
s - hyperparameter, number of hinges to be set in advance. Default: 1
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
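Since the hinge coefficients are learned, the module's parameters should be passed to the optimizer along with the rest of the model; a minimal sketch:

```python
import torch
from echoAI.Activation.t_ops import APL

apl = APL(s=1)          # one hinge; its coefficients are learned
x = torch.randn(5, 10)
y = apl(x)              # same shape as the input

opt = torch.optim.SGD(apl.parameters(), lr=1e-2)
```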
Reference:
Learning Activation Functions to Improve Deep Neural Networks
ELiSH
echoAI.Activation.t_ops.Elish(hard = False)
Allows the following element-wise functions:

$$\text{ELiSH}(x) = \begin{cases} x \sigma(x) & x \geq 0 \\ (e^{x} - 1)\,\sigma(x) & x < 0 \end{cases}$$

$$\text{HardELiSH}(x) = \begin{cases} x \max\!\left(0, \min\!\left(1, \tfrac{x + 1}{2}\right)\right) & x \geq 0 \\ (e^{x} - 1) \max\!\left(0, \min\!\left(1, \tfrac{x + 1}{2}\right)\right) & x < 0 \end{cases}$$
Parameter:
hard - Uses the Hard ELiSH activation function. Default: False
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
The Quest for the Golden Activation Function
ISRU
echoAI.Activation.t_ops.ISRU(alpha = 1.0, isrlu = False)
Allows the following element-wise functions:

$$\text{ISRU}(x) = \frac{x}{\sqrt{1 + \alpha x^{2}}}$$

$$\text{ISRLU}(x) = \begin{cases} x & x \geq 0 \\ \frac{x}{\sqrt{1 + \alpha x^{2}}} & x < 0 \end{cases}$$
Parameters:
alpha - hyperparameter $\alpha$ controls the value to which an ISRLU saturates for negative inputs. Default: 1.0
isrlu - Uses the ISRLU activation function. Default: False
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
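A sketch of switching between ISRU and ISRLU via the isrlu flag:

```python
import torch
from echoAI.Activation.t_ops import ISRU

isru  = ISRU(alpha=1.0)              # saturates on both sides
isrlu = ISRU(alpha=1.0, isrlu=True)  # identity for x >= 0, ISRU branch for x < 0

x = torch.randn(3, 3)
y = isrlu(x)                         # same shape as the input
```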
Reference:
Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs)
Maxout
echoAI.Activation.t_ops.Maxout()
Applies the element-wise function:
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
NLReLU
echoAI.Activation.t_ops.NLReLU(beta = 1.0, inplace = False)
Applies the element-wise function:

$$\text{NLReLU}(x) = \ln\!\left(\beta \max(0, x) + 1\right)$$
Parameters:
beta - parameter used for NLReLU formulation. Default: 1.0
inplace - can optionally do the operation in-place. Default: False
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
Natural-Logarithm-Rectified Activation Function in Convolutional Neural Networks
Soft Clipping
echoAI.Activation.t_ops.SoftClipping(alpha = 0.5)
Applies the element-wise function:

$$\text{SC}(x) = \frac{1}{\alpha} \ln\!\left(\frac{1 + e^{\alpha x}}{1 + e^{\alpha (x - 1)}}\right)$$
Parameter:
alpha - hyper-parameter $\alpha$ determines how close to linear the central region is and how sharply the linear region turns to the asymptotic values. Default: 0.5
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
Neural Network-Based Approach to Phase Space Integration
Soft Exponential
echoAI.Activation.t_ops.SoftExponential(alpha = None)
Applies the element-wise function:

$$f(\alpha, x) = \begin{cases} -\dfrac{\ln\!\left(1 - \alpha (x + \alpha)\right)}{\alpha} & \alpha < 0 \\ x & \alpha = 0 \\ \dfrac{e^{\alpha x} - 1}{\alpha} + \alpha & \alpha > 0 \end{cases}$$
Parameter:
alpha - trainable parameter, initialized to zero when None is passed. Default: None
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
SQNL
echoAI.Activation.t_ops.SQNL()
Applies the element-wise function:

$$\text{SQNL}(x) = \begin{cases} 1 & x > 2 \\ x - \dfrac{x^{2}}{4} & 0 \leq x \leq 2 \\ x + \dfrac{x^{2}}{4} & -2 \leq x < 0 \\ -1 & x < -2 \end{cases}$$
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
SQNL: A New Computationally Efficient Activation Function
SReLU
echoAI.Activation.t_ops.SReLU(in_features, parameters = None)
Applies the element-wise function:

$$\text{SReLU}(x) = \begin{cases} t_r + a_r (x - t_r) & x \geq t_r \\ x & t_l < x < t_r \\ t_l + a_l (x - t_l) & x \leq t_l \end{cases}$$
Parameters:
in_features - Shape of the input. Datatype: Tuple
parameters - ( ) parameters for manual initialization. Default: None. If None is passed, the parameters are initialized randomly.
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
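A minimal sketch, assuming in_features is the per-sample feature shape passed as a tuple (excluding the batch dimension); the piecewise parameters are learnable, so they go to the optimizer with the rest of the model:

```python
import torch
from echoAI.Activation.t_ops import SReLU

srelu = SReLU(in_features=(64,))  # assumed: feature shape without the batch dim
x = torch.randn(32, 64)
y = srelu(x)                      # same shape as the input

opt = torch.optim.Adam(srelu.parameters(), lr=1e-3)
```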
Reference:
Deep Learning with S-shaped Rectified Linear Activation Units
Funnel
echoAI.Activation.t_ops.Funnel(in_channels)
Applies the element-wise function:

$$\text{Funnel}(x) = \max\!\left(x, \mathbb{T}(x)\right)$$

where $\mathbb{T}(x)$ is the funnel condition, computed with a depthwise convolution (followed by batch normalization) over a local spatial window.
Parameter:
in_channels - Number of channels in the input tensor. Datatype: Integer
Shape:
Input: $(N, C, H, W)$ where $C$ indicates the number of channels
Output: $(N, C, H, W)$, same shape as the input
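Because the funnel condition is computed with a depthwise convolution, the input is expected to be a 4-D image tensor; a sketch:

```python
import torch
from echoAI.Activation.t_ops import Funnel

funnel = Funnel(in_channels=16)
x = torch.randn(8, 16, 32, 32)  # (N, C, H, W)
y = funnel(x)                   # same shape as the input
```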
Reference:
Funnel Activation for Visual Recognition
SLAF
echoAI.Activation.t_ops.SLAF(k = 2)
Applies the element-wise function:

$$\text{SLAF}(x) = \sum_{i=0}^{k-1} a_i\, x^{i}$$

where the coefficients $a_i$ are learned.
Parameter:
k - Number of Taylor coefficients. Default: 2
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
Learning Activation Functions: A new paradigm for understanding Neural Networks
AReLU
echoAI.Activation.t_ops.AReLU(alpha = 0.90, beta = 2.0)
Applies the element-wise function:

$$\text{AReLU}(x) = \begin{cases} \text{clamp}(\alpha)\, x & x < 0 \\ \left(1 + \sigma(\beta)\right) x & x \geq 0 \end{cases}$$

where $\alpha$ is clamped to a small positive range and $\sigma$ is the sigmoid function.
Parameters:
alpha - trainable parameter. Default: 0.90
beta - trainable parameter. Default: 2.0
Shape:
Input: $(N, C, *)$ where $C$ indicates the number of channels and $*$ means any number of additional dimensions
Output: $(N, C, *)$, same shape as the input
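Since alpha and beta are learned, they must be registered with the optimizer; a minimal sketch:

```python
import torch
from echoAI.Activation.t_ops import AReLU

arelu = AReLU(alpha=0.90, beta=2.0)
x = torch.randn(4, 16, 8, 8)  # (N, C, H, W)
y = arelu(x)                  # same shape as the input

opt = torch.optim.Adam(arelu.parameters(), lr=1e-3)
```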
Reference:
AReLU: Attention-based Rectified Linear Unit
FReLU
echoAI.Activation.t_ops.FReLU()
Applies the element-wise function:

$$\text{FReLU}(x) = \text{ReLU}(x) + b$$

where $b$ is a learnable bias term.
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks
DICE
echoAI.Activation.t_ops.DICE(emb_size, dim = 2, epsilon = 1e-8)
Applies the function:

$$f(s) = p(s)\, s + \left(1 - p(s)\right) \alpha s, \qquad p(s) = \sigma\!\left(\frac{s - E[s]}{\sqrt{\mathrm{Var}[s] + \epsilon}}\right)$$
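A minimal sketch, assuming dim refers to the number of dimensions of the input tensor (so dim=2 corresponds to a (batch, emb_size) input):

```python
import torch
from echoAI.Activation.t_ops import DICE

dice = DICE(emb_size=64, dim=2, epsilon=1e-8)
x = torch.randn(128, 64)  # assumed (batch, emb_size) layout for dim=2
y = dice(x)               # same shape as the input
```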
Reference:
Deep Interest Network for Click-Through Rate Prediction
Seagull
echoAI.Activation.t_ops.Seagull()
Applies the function:

$$\text{Seagull}(x) = \log\!\left(1 + x^{2}\right)$$
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
Reference:
A Use of Even Activation Functions in Neural Networks
Snake
echoAI.Activation.t_ops.Snake(in_features, alpha = None, alpha_trainable = True)
Applies the function:

$$\text{Snake}(x) = x + \frac{1}{\alpha} \sin^{2}(\alpha x)$$
Parameters:
in_features - shape of the input
alpha - trainable parameter. Default: None (initialized to 1.0 when None is passed)
alpha_trainable - whether alpha is a trainable parameter. Default: True
Shape:
Input: $(N, *)$ where $*$ means any number of additional dimensions
Output: $(N, *)$, same shape as the input
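A minimal sketch, assuming in_features is the size of the last (feature) dimension:

```python
import torch
from echoAI.Activation.t_ops import Snake

snake  = Snake(in_features=64)          # alpha starts at 1.0 and is trainable
frozen = Snake(in_features=64, alpha=5.0,
               alpha_trainable=False)   # fixed frequency

x = torch.randn(10, 64)
y = snake(x)                            # same shape as the input
```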
Reference:
Neural Networks Fail to Learn Periodic Functions and How to Fix It
SIREN
echoAI.Activation.t_ops.SIREN(dim_in, dim_out, w0 = 30., c = 6., is_first = False, use_bias = True, activation = None)
Applies the function:

$$y = \sin\!\left(w_0 \left(W x + b\right)\right)$$
Parameters:
dim_in - input dimension
dim_out - output dimension
w0 - hyper-parameter $w_0$. Default: 30.0
c - hyper-parameter used in weight initialisation for the linear layer. Default: 6.0
is_first - used for weight initialisation of the linear layer. Default: False
use_bias - initialises a bias parameter for the linear layer. Default: True
activation - used to initialise an activation function. Default: None. The sine activation is used when None is passed.
Shape:
Input: $(N, \text{dim\_in})$
Output: $(N, \text{dim\_out})$
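SIREN wraps a linear layer followed by a scaled sine, so it is used like a layer rather than a plain element-wise activation; a sketch with illustrative sizes (2-D coordinates mapped to 256 features):

```python
import torch
from echoAI.Activation.t_ops import SIREN

layer = SIREN(dim_in=2, dim_out=256, w0=30.0, is_first=True)
coords = torch.rand(1024, 2)  # e.g. normalized 2-D coordinates
feats = layer(coords)         # shape (1024, 256)
```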
Reference:
Implicit Neural Representations with Periodic Activation Functions