
Table 3 The architecture of the autoencoder in Experiment 1

From: CLeaR: An adaptive continual learning framework for regression tasks

| Layer          | Size   | Activation |
|----------------|--------|------------|
| Encoder-Input  | 7      | None       |
| Encoder-1      | 7 × 32 | LeakyReLU  |
| Encoder-2      | 32 × 16 | LeakyReLU |
| Encoder-3      | 16 × 8 | LeakyReLU  |
| Encoder-4      | 8 × 4  | LeakyReLU  |
| Decoder-1      | 4 × 8  | LeakyReLU  |
| Decoder-2      | 8 × 16 | LeakyReLU  |
| Decoder-3      | 16 × 32 | LeakyReLU |
| Decoder-4      | 32 × 7 | LeakyReLU  |
| Decoder-Output | 7      | None       |

  1. Dropout layers are used with a dropout rate of 0.2. The slope parameter of the LeakyReLU is set to 0.05.
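The symmetric encoder–decoder layout above can be sketched as follows. This is a minimal illustration in plain Python, not the authors' implementation (the paper's framework is not stated in this table); the weight initialization and the inverted-dropout scheme are assumptions for the sketch, while the layer widths (7→32→16→8→4 and the mirrored decoder), the LeakyReLU slope of 0.05, and the dropout rate of 0.2 come from Table 3 and its footnote.

```python
import math
import random

# Layer widths from Table 3: encoder 7 -> 32 -> 16 -> 8 -> 4, mirrored decoder.
SIZES = [7, 32, 16, 8, 4, 8, 16, 32, 7]
SLOPE = 0.05    # LeakyReLU slope (Table 3 footnote)
DROPOUT = 0.2   # dropout rate (Table 3 footnote), applied only during training


def leaky_relu(x):
    return x if x > 0 else SLOPE * x


def init_weights(sizes, rnd):
    # He-style Gaussian init as nested lists; one (m x n) matrix per layer.
    return [
        [[rnd.gauss(0.0, math.sqrt(2.0 / m)) for _ in range(n)] for _ in range(m)]
        for m, n in zip(sizes[:-1], sizes[1:])
    ]


def forward(x, weights, training=False, rnd=None):
    """Run the autoencoder: each weighted layer is followed by LeakyReLU,
    with (inverted) dropout between layers during training only."""
    h = x
    for w in weights:
        h = [
            leaky_relu(sum(h[i] * w[i][j] for i in range(len(h))))
            for j in range(len(w[0]))
        ]
        if training:
            h = [v / (1.0 - DROPOUT) if rnd.random() > DROPOUT else 0.0 for v in h]
    return h  # Decoder-Output: same width as the input, no extra activation


rnd = random.Random(0)
weights = init_weights(SIZES, rnd)
x = [rnd.gauss(0.0, 1.0) for _ in range(7)]
recon = forward(x, weights)
print(len(recon))  # 7 -- reconstruction has the input dimensionality
```

The 4-unit bottleneck (Encoder-4) forces a compressed representation of the 7-dimensional input; the reconstruction error at the output is what CLeaR can monitor for novelty on a regression stream.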