PyTorch
This page lists all attention modules and non-local layers for computer vision available in Echo with the PyTorch backend.
Triplet Attention
echoAI.Attention.cv.t_attn.TripletAttention(no_spatial = False, kernel_size = 7)

Parameters:
no_spatial - switches off the spatial attention branch in Triplet Attention. Default: False
kernel_size - window size of the convolution filters in Triplet Attention. Default: 7
Shape:
Input: 4-dimensional feature map tensor
Output: same shape as input
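A minimal usage sketch, assuming the module follows the standard PyTorch nn.Module call convention on an NCHW feature map (the tensor sizes below are illustrative):

import torch
from echoAI.Attention.cv.t_attn import TripletAttention

# batch of 2, 64 channels, 32x32 spatial resolution (NCHW)
x = torch.randn(2, 64, 32, 32)
attn = TripletAttention(no_spatial=False, kernel_size=7)
out = attn(x)  # same shape as the input: (2, 64, 32, 32)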
Reference:
Rotate to Attend: Convolutional Triplet Attention Module
Squeeze Excite Attention
echoAI.Attention.cv.t_attn.SE(gate_channels, reduction_ratio = 16)

Parameters:
gate_channels - number of channels in the input tensor. Datatype: Integer
reduction_ratio - squeeze bottleneck factor of the MLP in Squeeze Excite Attention. Default: 16
Shape:
Input: 4-dimensional feature map tensor
Output: same shape as input
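A minimal usage sketch, assuming the standard nn.Module call convention; gate_channels must match the channel dimension of the input (the sizes below are illustrative):

import torch
from echoAI.Attention.cv.t_attn import SE

x = torch.randn(2, 64, 32, 32)  # NCHW feature map with 64 channels
se = SE(gate_channels=64, reduction_ratio=16)
out = se(x)  # same shape as the input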
Reference:
Squeeze-and-Excitation Networks
Convolutional Block Attention Module
echoAI.Attention.cv.t_attn.CBAM(gate_channels, kernel_size = 3, reduction_ratio = 16, pool_types = ['avg', 'max'], no_spatial = False, bam = False, num_layers = 1, bn = False, dilation_conv_num = 2, dilation_val = 4)
Supports both the Convolutional Block Attention Module (CBAM) and the Bottleneck Attention Module (BAM).


Parameters:
gate_channels - number of channels in the input tensor. Datatype: Integer
kernel_size - window size of the convolution filters in CBAM/BAM. Default: 3
reduction_ratio - bottleneck reduction factor of the MLP in CBAM/BAM. Default: 16
pool_types - list of global pooling operators for the channel attention gate in CBAM/BAM. Default: ['avg', 'max']. Note: this is the default for CBAM, which expects two operators; if BAM is switched on, pass ['avg']. Available options: 'avg', 'lp', 'max'
no_spatial - switches off the spatial attention gate in CBAM. Default: False
bam - initializes BAM. Default: False
num_layers - controls the number of hidden layers in the MLP of channel attention gate in CBAM/BAM. Default: 1
bn - adds a Batch Normalization layer in the MLP of the channel attention gate in CBAM/BAM. Default: False. Pass True when bam is True.
dilation_conv_num - number of dilated channel-preserving convolution layers in the spatial attention gate in BAM. Default: 2
dilation_val - dilation factor for the convolution layers in the spatial attention gate in BAM. Default: 4
Shape:
Input: 4-dimensional feature map tensor
Output: same shape as input
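A minimal usage sketch, assuming the standard nn.Module call convention; the second instantiation follows the BAM configuration described above (single pooling operator, batch norm enabled). The tensor sizes are illustrative:

import torch
from echoAI.Attention.cv.t_attn import CBAM

x = torch.randn(2, 64, 32, 32)  # NCHW feature map with 64 channels

# CBAM: channel and spatial attention gates with the default pooling operators
cbam = CBAM(gate_channels=64, kernel_size=3, reduction_ratio=16, pool_types=['avg', 'max'])
out = cbam(x)  # same shape as the input

# BAM: switch on bam, pass a single pooling operator and enable batch norm in the MLP
bam = CBAM(gate_channels=64, bam=True, pool_types=['avg'], bn=True)
out = bam(x)  # same shape as the input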
References:
CBAM: Convolutional Block Attention Module
BAM: Bottleneck Attention Module