Table 5 Architecture parameters of the model for experiment 2

From: CLeaR: An adaptive continual learning framework for regression tasks

| Layer          | Size             | Activation |
|----------------|------------------|------------|
| Encoder-Input  | 7                | None       |
| Encoder-1      | 7 × 32           | LeakyReLU  |
| Encoder-2      | 32 × 16          | LeakyReLU  |
| Encoder-3      | 16 × 8           | LeakyReLU  |
| Encoder-4      | 8 × Latent size  | LeakyReLU  |
| Decoder-1      | Latent size × 8  | LeakyReLU  |
| Decoder-2      | 8 × 16           | LeakyReLU  |
| Decoder-3      | 16 × 32          | LeakyReLU  |
| Decoder-4      | 32 × 7           | LeakyReLU  |
| Decoder-Output | 7                | None       |
| DNN-1          | Latent size × 96 | Tanh       |
| DNN-2          | 96 × 64          | Tanh       |
| DNN-3          | 64 × 32          | Tanh       |
| DNN-4          | 32 × 16          | Tanh       |
| DNN-5          | 16 × 8           | Tanh       |
| DNN-Output     | 8 × 1            | None       |

  1. Dropout layers are used, with the dropout rate selected by grid search. "Latent size" refers to the bottleneck dimension of the autoencoder. The slope parameter of LeakyReLU is set to 0.05.
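For illustration, the table can be encoded as lists of (input, output) dimensions for the fully connected layers, with the latent size left as a free hyperparameter. This is a hypothetical sketch (the helper names `build_architecture` and `count_weights` are not from the paper), but it makes the layer wiring and parameter counts easy to check:

```python
def build_architecture(latent_size):
    """Return (encoder, decoder, dnn) layer shapes from Table 5.

    Each entry is an (in_features, out_features) pair of a fully
    connected layer; activations and dropout are omitted here.
    """
    encoder = [(7, 32), (32, 16), (16, 8), (8, latent_size)]
    decoder = [(latent_size, 8), (8, 16), (16, 32), (32, 7)]
    dnn = [(latent_size, 96), (96, 64), (64, 32), (32, 16), (16, 8), (8, 1)]
    return encoder, decoder, dnn


def count_weights(layers):
    # Weights plus biases of each fully connected layer.
    return sum(i * o + o for i, o in layers)


if __name__ == "__main__":
    enc, dec, dnn = build_architecture(latent_size=4)  # latent size chosen arbitrarily
    print(count_weights(enc), count_weights(dec), count_weights(dnn))
```

Note that the DNN consumes the autoencoder's latent representation (its first layer is Latent size × 96), so encoder output and DNN input must share the same latent dimension.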