In this post I introduce an implementation of the new activation function FReLU in Keras.
For more background on FReLU and a PyTorch implementation, please see the following article, which explains it very carefully. Reference: Birth & explanation of the new activation function "FReLU"!
This article covers implementing FReLU in tf.keras. With DepthwiseConv2D and a Lambda layer, you can implement it quickly.
y = \max(x, \mathbb{T}(x))
$\mathbb{T}(\cdot)$ is a DepthwiseConv2D.
Environment: tensorflow 2.3.0
FReLU.py
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import DepthwiseConv2D, BatchNormalization, Lambda

# Definition of the max() part of the formula
def max_unit(args):
    inputs, depthconv_output = args
    return tf.maximum(inputs, depthconv_output)

def FReLU(inputs, kernel_size=3):
    # T(x) part: depthwise convolution followed by batch normalization
    x = DepthwiseConv2D(kernel_size, strides=(1, 1), padding='same')(inputs)
    x = BatchNormalization()(x)
    # Compute the tensor shape to pass as output_shape to the Lambda layer
    x_shape = K.int_shape(x)
    # max(x, T(x)) part
    x = Lambda(max_unit, output_shape=(x_shape[1], x_shape[2], x_shape[3]))([inputs, x])
    return x
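As a quick check, here is a minimal usage sketch of my own (not from the original article): FReLU simply replaces the activation after each convolution in the functional API. The input shape, layer widths, and class count are arbitrary assumptions for illustration.

usage_example.py
from tensorflow.keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

# Assumes the FReLU function defined above is in scope
inputs = Input(shape=(32, 32, 3))
x = Conv2D(16, 3, padding='same')(inputs)
x = FReLU(x)                      # use FReLU in place of ReLU after the conv
x = Conv2D(32, 3, padding='same')(x)
x = FReLU(x, kernel_size=3)
x = GlobalAveragePooling2D()(x)
outputs = Dense(10, activation='softmax')(x)

model = Model(inputs, outputs)
model.summary()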
The implementation itself is easy, isn't it? I will try to implement a Layer (subclass) version when I have time; a rough sketch of what it might look like is below. The Layer version is nice because its name shows up when you call model.summary(). If you have any questions or concerns, please leave a comment.
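For reference, a Layer-subclass version might look roughly like the following. This is only my own sketch (the class name FReLULayer is hypothetical), not the author's implementation.

frelu_layer_sketch.py
import tensorflow as tf
from tensorflow.keras.layers import Layer, DepthwiseConv2D, BatchNormalization

class FReLULayer(Layer):
    """Funnel activation y = max(x, T(x)), where T is a depthwise conv + BN."""
    def __init__(self, kernel_size=3, **kwargs):
        super().__init__(**kwargs)
        self.kernel_size = kernel_size
        self.depthwise = DepthwiseConv2D(kernel_size, strides=(1, 1), padding='same')
        self.bn = BatchNormalization()

    def call(self, inputs, training=None):
        # T(x): per-channel spatial condition
        t = self.depthwise(inputs)
        t = self.bn(t, training=training)
        # max(x, T(x))
        return tf.maximum(inputs, t)

    def get_config(self):
        config = super().get_config()
        config.update({'kernel_size': self.kernel_size})
        return config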