Layers Library Reference
CNTK predefines a number of common "layers," which makes it very easy to write simple networks that consist of standard layers layered on top of each other. Layers are function objects that can be used like regular BrainScript functions but hold learnable parameters and take an additional list of construction parameters or attributes.
For example, this is the network description for a simple 1-hidden layer model using the Dense() layer:
h = Dense(1024, activation=relu) (features)
p = Dense(9000, activation=softmax) (h)
which can then, e.g., be used for training against a cross-entropy criterion:
ce = cross_entropy(labels, p)
If your network is a straight concatenation of operations (many are), you can use the alternative Sequential() notation:
from layers import *
my_model = Sequential ([
Dense(1024, activation=relu),
Dense(9000, activation=softmax)
])
and invoke it like this:
p = my_model (features)
The following shows a slot tagger that embeds a word sequence, processes it with a recurrent LSTM, and then classifies each word:
from layers import *
from models import *
tagging_model = Sequential ([
Embedding(150), # embed into a 150-dimensional vector
Recurrence(LSTM(300)), # forward LSTM
Dense(labelDim) # word-wise classification
])
And the following is a simple convolutional network for image recognition:
conv_net = Sequential ([
# 3 layers of convolution and dimension reduction by pooling
Convolution((5,5), 32, pad=True, activation=relu),
MaxPooling((3,3), strides=(2,2)),
Convolution((5,5), 32, pad=True, activation=relu),
MaxPooling((3,3), strides=(2,2)),
Convolution((5,5), 64, pad=True, activation=relu),
MaxPooling((3,3), strides=(2,2)),
# 2 dense layers for classification
Dense(64, activation=relu),
Dense(10)
])
If you assign a layer to a variable and use it in multiple places, the parameters will be shared. If you say
lay = Dense(1024, activation=sigmoid)
h1 = lay(x)
h2 = lay(h1) # same weights as `h1`
h1 and h2 will share the same parameters, as lay() is the same function in both cases. In the above case this is probably not what was desired, so be aware. If both invocations of lay() above are meant to have different parameters, remember to define two separate instances, for example lay1 = Dense(...) and lay2 = Dense(...).
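Spelled out, the non-shared variant of the snippet above would be:
lay1 = Dense(1024, activation=sigmoid)
lay2 = Dense(1024, activation=sigmoid)
h1 = lay1(x)
h2 = lay2(h1) # h1 and h2 now use separate parameter sets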
So why this behavior? Layers allow parameters to be shared across sections of a model. Consider a DSSM model which processes two input images, say doc and query, identically with the same processing chain, and compares the resulting hidden vectors:
image_to_vec = Sequential ([
Convolution((5,5), 32, pad=True, activation=relu),
MaxPooling((3,3), strides=(2,2)),
Convolution((5,5), 64, pad=True, activation=relu),
MaxPooling((3,3), strides=(2,2)),
Dense(64, activation=relu),
Dense(10)
])
z_doc = image_to_vec (doc)
z_query = image_to_vec (query) # same model as for z_doc
sim = CosDistance(z_doc, z_query)
where image_to_vec is the part of the model that converts images into a flat vector. image_to_vec is a function object that in turn contains several function objects (e.g. two instances of Convolution()). image_to_vec is instantiated once, and this instance holds the learnable parameters of all the included function objects. Both invocations of image_to_vec() will share these parameters in application, and their gradients will be the sum of both invocations.
Lastly, note that in the above example, query and doc must have the same dimensions, since they are processed through the same function object, and that function object's first layer has its input dimension inferred to match that of both query and doc. If their dimensions differ, then this network is malformed, and dimension inference/validation will fail with an error message.
Many layers are wrappers around underlying CNTK primitives, along with the respective required learnable parameters. For example, Convolution() wraps the convolution() primitive.
The benefits of using layers are:
- layers contain learnable parameters of the correct dimension
- layers are composable (cf. Sequential())
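To make the function-object idea concrete, here is a minimal numpy sketch (illustration only, not the CNTK API) of what a dense-layer factory does: it creates parameters of the correct dimensions once, and returns a function that reuses them on every application:
import numpy as np

# illustration only, not CNTK code: a "layer" is a function object whose
# learnable parameters are created once and shared by every application
def make_dense(in_dim, out_dim, activation=None):
    W = 0.1 * np.random.randn(in_dim, out_dim) # learnable weight matrix
    b = np.zeros(out_dim)                      # learnable bias
    def apply(v):
        z = v @ W + b
        return activation(z) if activation is not None else z
    return apply

lay = make_dense(300, 1024, activation=np.tanh)
h_doc = lay(np.zeros(300))  # both applications use...
h_query = lay(np.ones(300)) # ...the same W and b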
Dense()

Factory function to create a fully-connected layer. Dense() takes an optional non-linearity.
Dense(shape, init=init_default_or_glorot_uniform, activation=activation_default_or_None,
input_rank=None, map_rank=None,
bias=bias_default_or_True, init_bias=init_bias_default_or_0)
- shape: output dimension of this layer
- activation (default: None): pass a function here to be used as the activation function, such as activation=relu
- input_rank: if given, number of inferred axes to add to weight (map_rank must not be given)
- map_rank: if given, expand weight matrix to leave exactly map_rank axes (input_rank must not be given)
- init: initializer descriptor for the weights, e.g. glorot_uniform(). See here for a full list of random-initialization options.
- bias: if False, do not include a bias parameter
- init_bias: initializer for the bias
A function that implements the desired fully-connected layer. See description.
Use this factory function to create a fully-connected layer. Pass a function via the activation parameter if you would like an activation to be included; otherwise the layer computes a plain linear transformation.
This factory function creates a function object that contains a learnable weight matrix and, unless bias=False, a learnable bias. The function object can be used like a function, which implements one of these formulas:
Dense(...) (v) = activation (v @ W + b)
Dense(...) (v) = v @ W + b # if activation is None
where W is a weight matrix of dimension ((dimension of v), shape), b is the bias of dimension (shape,), and the resulting value has dimension (or tensor dimensions) as given by shape.
If the returned function is applied to an input of tensor rank > 1, e.g. a 2D image, W will have the dimension (..., (second dimension of input), (first dimension of input), shape).
On the other hand, shape can be a vector that specifies tensor dimensions, for example (10,10). In that case, W will have the dimension ((dimension of input), ..., shape[1], shape[0]), and b will have the tensor dimensions (..., shape[1], shape[0]). CNTK's matrix product will interpret these extra output or input dimensions as if they were flattened into a long vector. For more details on this, see the documentation of Times().
h = Dense(1024, activation=sigmoid) (v)
or alternatively:
layer = Dense(1024, activation=sigmoid)
h = layer(v)
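As a sanity check of the tensor-shape case described above, here is a numpy sketch (illustration only, not CNTK code; the axis ordering of W follows the description above and is an assumption):
import numpy as np

v = np.zeros(100)                  # 100-dimensional input
W = np.zeros((100, 10, 10))        # ((dimension of input), tensor output dims)
b = np.zeros((10, 10))
y = np.tensordot(v, W, axes=1) + b # the product treats the output axes as flattened
print(y.shape)                     # (10, 10)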
Convolution()

Creates a convolution layer with optional non-linearity.
Convolution(rf_shape, num_filters=None,
activation=activation_default_or_None,
init=init_default_or_glorot_uniform,
pad=pad_default_or_False,
strides=1,
bias=bias_default_or_True,
init_bias=init_bias_default_or_0)
- rf_shape: shape of receptive field of the filter, e.g. (5,5) for a 2D filter (not including the input feature-map depth)
- num_filters: number of output channels (number of filters)
- activation: optional non-linearity, e.g. activation=relu
- init: initializer descriptor for the weights, e.g. glorot_uniform(). See here for a full list of random-initialization options.
- pad: if False (default), then the filter will be shifted over the "valid" area of input, that is, no value outside the area is used. If pad is True, on the other hand, the filter will be applied to all input positions, and values outside the valid region will be considered zero.
- strides: increment when sliding the filter over the input, e.g. (2,2) to reduce the dimensions by 2
- bias: if False, do not include a bias parameter
- init_bias: initializer for the bias
A function that implements the desired convolution operation.
Use this factory function to create a convolution layer. The resulting layer applies a convolution operation on an N-dimensional tensor. The caller specifies the spatial extent of the filter. A set of filters for a given receptive field (e.g. (5,5)) is correlated with every location of the input (e.g. a (480, 640)-sized image). Assuming padding is enabled (pad=True) and strides are 1, this will generate an output region of the same dimension ((480, 640)).
Typically, many filters are applied at the same time. num_filters specifies the number, so for every input location, an entire vector of num_filters values is produced. For our example above, setting num_filters to 64 would result in a (64, 480, 640)-sized tensor.
That last axis is also called the channel dimension or the number of feature maps.
When convolution is applied to an input with a channel dimension, each filter will also consist of vectors of the input's channel dimension. E.g. when applying convolution with a specified spatial filter extent of (5,5) to a (3, 480, 640)-sized color image, each filter will be a (3, 5, 5) tensor. All num_filters filters stacked together are called the kernel. In our example, the kernel shape will be (64, 3, 5, 5).
The following summarizes the relationship between the various dimensions and shapes:
input shape : ( (#input channels), (spatial dims) )
receptive field : ( (rf_shape) )
output shape : ( num_filters, (spatial dims) )
kernel shape : ( num_filters, (#input channels), (rf_shape) )
which in our example are:
input shape : ( 3, 480, 640 )
receptive field : ( 5, 5 )
output shape : ( num_filters, 480, 640 )
kernel shape : ( num_filters, 3, 5, 5 )
If padding is not enabled, then the output region will be reduced by the boundary locations to which the full filter extent cannot be applied. E.g. applying a (5,5)-extent filter to an image without padding, the outermost 2 rows and columns of pixels would cause the filter to be applied out of bounds. Hence, Convolution() will reduce the dimensions accordingly. A (480, 640) image convolved with a (5,5) filter without padding will leave a (476, 636)-sized output region.
The strides parameter specifies the increment of the filter positions. Stride values greater than one will lead to a sub-sampling of the output region. E.g. filtering a (480, 640) image with a stride of (2,2) will result in a (240, 320)-sized region with padding, and a (238, 318)-sized one without padding.
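The output-size arithmetic above can be summarized in a small per-axis helper (illustration only, for checking dimensions; conv_out_size is not a CNTK function):
# per-axis output size for a receptive field extent rf, a stride, and a padding setting
def conv_out_size(n, rf, stride, pad):
    if pad:
        return (n - 1) // stride + 1 # filter applied at every stride-th position
    return (n - rf) // stride + 1    # only positions where the filter fully fits

print(conv_out_size(480, 5, 1, False)) # 476
print(conv_out_size(480, 5, 2, True))  # 240
print(conv_out_size(640, 5, 2, False)) # 318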
This layer is a wrapper around the convolution() primitive. The filter kernel parameters' name, as shown in the log's validation section, will end in .W.
c = Convolution((3,3), 64, pad=True, strides=(1,1), bias=False) (x)
MaxPooling(), AveragePooling()

Factory functions to create a max- or average-pooling layer.
MaxPooling(rf_shape, strides=1, pad=False)
AveragePooling(rf_shape, strides=1, pad=False)
- rf_shape: receptive field (region) to pool over, e.g. (2,2) (not including the input feature-map depth)
- strides: increment when sliding the pool over the input, e.g. (2,2) to reduce the dimensions by 2
- pad: if False (default), then the pool will be shifted over the "valid" area of input, that is, no value outside the area is used. If pad is True, on the other hand, the pool will be applied to all input positions, and values outside the valid region will be considered zero. For average pooling, the count for the average does not include padded values.
A function that implements the desired pooling layer.
Use these factory functions to create a pooling operation. Use MaxPooling() to compute the maximum over the values in the pool area, and AveragePooling() to take their average.
The pooling operation slides a receptive field, or pool window, over the input, and computes either the maximum or the average of the values in the respective window.
This operation is structurally very similar to convolution, except that the operation applied to the sliding window is of a different nature.
All considerations regarding input dimensions, padding, and strides apply identically, so please see Convolution() for more detail.
p = MaxPooling((3,3), strides=(2,2)) (c)
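For illustration, a numpy sketch (not CNTK code) of what MaxPooling((3,3), strides=(2,2)) computes on a single-channel input without padding:
import numpy as np

x = np.arange(25, dtype=float).reshape(5, 5) # toy 5x5 input
out = np.array([[x[i:i+3, j:j+3].max()       # maximum over each 3x3 window
                 for j in range(0, 3, 2)]    # window origins 0 and 2, stride 2
                for i in range(0, 3, 2)])
print(out.shape)                             # (2, 2), per the output-size arithmetic above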
Embedding()

Factory function to create an embedding layer.
Embedding(shape=None, init=None, weights=None)
- shape: the dimension of the desired embedding vector
- init: if given, initializer descriptor for the weights to be learned. See here for a full list of initialization options.
- weights (numpy array): if given, embeddings are not learned but specified by this array (which could be, e.g., loaded from a file) and are not updated further during training
A function that implements the embedding layer. See description.
"Embedding" refers to representing words or other discrete items by dense continuous vectors. This layer assumes that the input is in one-hot form. E.g., for a vocabulary size of 10,000, each input vector is expected to have dimension 10,000 and consist of zeroes except for one position that contains a 1. The index of that location is the index of the word or item it represents.
In CNTK, the corresponding embedding vectors are stored as columns of a matrix. Hence, mapping an input word to its embedding is implemented as a matrix product. For this to be very efficient, it is important that the input vectors are stored in sparse format.
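A numpy sketch (illustration only, not CNTK code) of that statement: multiplying the embedding matrix with a one-hot vector selects exactly one column:
import numpy as np

emb_dim, vocab = 300, 10000
E = np.random.randn(emb_dim, vocab) # embedding vectors stored as columns, per the text
w = np.zeros(vocab)
w[42] = 1                           # one-hot input for word index 42
e = E @ w                           # matrix product == looking up column 42
assert np.allclose(e, E[:, 42])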
Fun fact: The gradient of an embedding matrix has the form of gradient vectors that are only non-zero for words seen in a minibatch. Since for realistic vocabularies of tens or hundreds of thousands of words, the vast majority of columns would be zero, CNTK implements a specific optimization to represent the gradient in "column-sparse" form.
Known issue: The above-mentioned column-sparse gradient form is currently not supported by our 1-bit SGD parallelization technique. Please use the block-momentum technique instead.
A learned embedding that represents words from a vocabulary of 87636 as a 300-dimensional vector:
input = Input(87636, is_sparse=True) # word sequence, as one-hot vector, sparse format
embEn = Embedding(300) (input) # embed word as a 300-dimensional continuous vector
In addition to is_sparse=True, one should also declare an input as sparse in the reader config block. Here is an example of reading sparse text input with the CNTKTextFormatReader:
source = MinibatchSource(CTFDeserializer('en2fr.ctf', StreamDefs(
input = StreamDef(field='E', shape=87636, is_sparse=True),
labels = StreamDef(field='F', shape=98624, is_sparse=True)
)))
If, instead, the embedding vectors already exist and should be loaded from a file, it would look like this:
input = Input(87636, is_sparse=True) # word sequence, as one-hot vector, sparse format
embEn = Embedding(300, weights=np.loadtxt('embedding-en.txt')) (input) # embedding loaded from disk
where the file 'embedding-en.txt' would be expected to consist of 87,636 text rows, each consisting of 300 space-separated numbers.
RecurrentLSTM(), RecurrentLSTMLayerStack()

Factory functions to create a single-layer or multi-layer recurrence.
RecurrentLSTM(shape, cellShape=None,
    goBackwards=False,
    usePeepholes=False,
    init='glorotUniform', initValueScale=1,
    enable_self_stabilization=False,
    allowOptimizedEngine=False)
RecurrentLSTMLayerStack(layerDims,
    cellShapes=None,
    usePeepholes=False,
    init='glorotUniform', initValueScale=1,
    enable_self_stabilization=False,
    allowOptimizedEngine=False)
- shape (RecurrentLSTM()): dimension of the network's output. To denote a tensor of rank > 1, this can be a vector, e.g. (40:2)
- layerDims (RecurrentLSTMLayerStack()): array of dimensions of the network's inner layers and output
- cellShape (RecurrentLSTM(), optional): the dimension of the LSTM's cell. Normally this is identical to shape. If a different value is given, an additional linear projection will be inserted to convert from the cell dimension to the output.
- cellShapes (RecurrentLSTMLayerStack(), optional): array of values like cellShape, one per layer, to denote projection
- goBackwards (optional): if True, the recurrence is run backwards
- usePeepholes (optional): if True, then use peephole connections in the LSTM
- init: initializer descriptor for the weights. See here for a full list of initialization options.
- enable_self_stabilization (optional): if True, insert a "stabilizer" operation similar to Stabilizer()
- allowOptimizedEngine (optional, default False): if True, then use cuDNN5's optimized RNN engine where possible
A function that implements the desired layer(s) that applies/apply a recurrent LSTM to its input sequence. This layer (-stack) maps an input sequence to a sequence of hidden states of the same length.
This implements the recurrent LSTM to be applied to a sequence of inputs, in two variants: a single layer and a multi-layer stack. This operation automatically handles variable-length input. The initial values of the hidden state and the cell are 0.
Applying this layer to an input sequence will return the sequence of the hidden states of the (top-of-stack) recurrent LSTM (the LSTM's memory cell's value is not returned). The returned sequence has the same length as the input. If only the last state is desired, as in sequence-classification or some sequence-to-sequence scenarios, use BS.Sequences.Last() to extract the last item's hidden state only. (In a backward recurrence, you would use BS.Sequences.First().)
To create a bidirectional model with RecurrentLSTM(), use two layers, one with goBackwards=True, and Splice() the two outputs together. RecurrentLSTMLayerStack() currently does not support bidirectional models; you must manually construct one using multiple RecurrentLSTM()/Splice() combinations.
This function will automatically use CuDNN5's optimized RNN engine if possible, that is, if
- the specified model is one that can be implemented by CuDNN5's function: no projection (no cellShape parameter), no peephole connections, no self-stabilization, not going backwards, and, for RecurrentLSTMLayerStack(), all layer dimensions have the same value
- you specify allowOptimizedEngine=True
Specifically, CNTK requires you to explicitly enable allowOptimizedEngine=True.
This is because the CuDNN5 RNN is implemented as a CNTK primitive operation that requires a GPU.
However, many real systems use GPUs for training but CPU-only servers in deployment.
The CuDNN5 RNN is not suitable here.
(It is theoretically possible to use the CuDNN5 RNN for training, and replace it for deployment
with an editing operation with an equivalent explicit LSTM implementation in BrainScript.)
If allowOptimizedEngine=True, then these two layer variants are wrappers around the OptimizedRNNStack() primitive.
A simple text classifier, which runs a word sequence through a recurrence and then passes the last hidden state of the LSTM to a softmax classifier, could have this form:
w = Input(...) # word sequence (one-hot vectors)
e = Embedding(150) (w) # embed as a 150-dimensional dense vector
h = RecurrentLSTM(300) (e) # left-to-right LSTM with hidden and cell dim 300
t = BS.Sequences.Last (h) # extract last hidden state
z = Dense(10000, activation=softmax) (t) # softmax classifier
To change the above example to a 3-layer stack that uses the CuDNN5 RNN engine, change this line:
h = RecurrentLSTMLayerStack((300:300:300), allowOptimizedEngine=True) (e)
To create a bidirectional one-layer LSTM (e.g. using half the hidden dimension compared to above), use this:
hFwd = RecurrentLSTM(150) (e)
hBwd = RecurrentLSTM(150, goBackwards=True) (e)
h = Splice (hFwd:hBwd)
Delay()

Factory function to create a layer that delays its input.
Delay(T=1, defaultHiddenActivation=0)
- T: the number of time steps to delay. To access future values, use a negative value
- defaultHiddenActivation: value to use for the delayed frames at the boundaries
A function that implements the desired delay operation.
This operation delays an input sequence by T steps (default 1). This is useful, for example, to turn a word sequence into a sequence of overlapping word triples.
Consider an input sequence "a b c b", which shall be encoded as a sequence of one-hot vectors as follows:
1 0 0 0
0 1 0 1
0 0 1 0
Here, every column is a one-hot vector and corresponds to a word.
Applying Delay(T=1) to this input will generate this sequence:
0 1 0 0
0 0 1 0
0 0 0 1
All tokens get delayed by one, and the first position gets filled in as a 0 vector.
Likewise, using Delay(T=-1) (negative delay) will give access to the future values, and pad from the right with a zero:
0 0 0 0
1 0 1 0
0 1 0 0
This layer is a wrapper around the PastValue() and FutureValue() primitives.
The following shows how to stack three neighbor words into a trigram vector:
x = ... # input value, e.g. a N-dimensional one-hot vector
xp = Delay() (x) # previous value
xn = Delay(T=-1) (x) # next value (negative delay)
tg = Splice ([xp, x, xn]) # concatenate all into a 3N-dimensional three-hot vector
BatchNormalization(), LayerNormalization(), Stabilizer()

Factory functions to create layers for batch normalization, layer normalization, and self-stabilization.
BatchNormalization(spatialRank = 0,
normalizationTimeConstant = 5000,
initialScale = 1, epsilon = 0.00001, useCntkEngine=True)
LayerNormalization(initialScale = 1, initialBias = 0)
Stabilizer()
BatchNormalization():
- spatialRank: normalization parameters are pooled over the first spatialRank dimensions. Currently allowed values are 0 (no pooling) and 2 (pooling across all pixel positions of an image)
- normalizationTimeConstant (default 5000): time constant in samples of the first-order low-pass filter that is used to compute mean/variance statistics for use in inference
- initialScale: initial value of the scale parameter
- epsilon: small value that gets added to the variance estimate when computing the inverse
- useCntkEngine: if True, use CNTK's native implementation. If False, use cuDNN's implementation (GPU only).
LayerNormalization():
- initialScale: initial value of the scale parameter
- initialBias: initial value of the bias parameter
A function that implements a layer that performs the normalization operation.
BatchNormalization() implements the technique described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Sergey Ioffe, Christian Szegedy). It normalizes its inputs for every minibatch by the minibatch mean/variance, and de-normalizes them with a learned scaling factor and bias. In inference, instead of using minibatch mean/variance, batch normalization uses a long-term running mean/variance estimate. This estimate is computed during training by low-pass filtering minibatch statistics. The time constant of the low-pass filter can be modified by the normalizationTimeConstant parameter. We recommend starting with the default of 5000, but experimenting with other values, typically on the order of several thousand to tens of thousands.
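As a rough illustration of the role of normalizationTimeConstant, here is a sketch of a first-order low-pass update of the running statistics; the per-minibatch smoothing factor used here is an assumption, not CNTK's exact formula:
# illustration only; the smoothing factor is an assumed form, not CNTK's exact one
def update_running_stat(running, minibatch_stat, minibatch_size, time_constant=5000):
    alpha = min(1.0, minibatch_size / time_constant) # fraction of the time constant covered
    return (1 - alpha) * running + alpha * minibatch_stat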
LayerNormalization() implements Layer Normalization (Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton). It normalizes each input sample by subtracting the mean across all elements of the sample, and then dividing by the standard deviation over all elements of the sample.
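In numpy terms, the computation just described is roughly (illustration only, not CNTK code; the placement of a small epsilon is an assumption):
import numpy as np

def layer_norm(x, initial_scale=1.0, initial_bias=0.0, eps=1e-5):
    # subtract the mean over all elements, divide by the standard deviation over all elements
    return initial_scale * (x - x.mean()) / (x.std() + eps) + initial_bias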
Stabilizer() implements a self-stabilizer per Self-stabilized deep neural network (P. Ghahremani, J. Droppo). This simple but effective technique multiplies its input with a learnable scalar (but unlike layer normalization, it does not first normalize the input, nor does it subtract a mean). Note that, compared to the original paper, which proposes a linear scalar beta or an exponential one Exp(beta), we found it beneficial to use a sharpened softplus operation per the second author's suggestion, which avoids both negative values and instability from the exponential.
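A sketch of the sharpened-softplus stabilizer described above (illustration only, not CNTK code; the steepness constant is an assumption):
import numpy as np

def stabilize(x, param, steepness=4.0): # steepness value is an assumption
    # learned scalar passed through a sharpened softplus: positive, with no raw exponential
    beta = np.log1p(np.exp(steepness * param)) / steepness
    return beta * x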
BatchNormalization() is a wrapper around the BatchNormalization() primitive. LayerNormalization() and Stabilizer() are expressed directly in BrainScript.
A typical layer in a convolutional network with batch normalization:
my_layer(x, depth, init) =
{
    c = Convolution((5,5), depth, pad=True, init=init) (x)
    b = BatchNormalization(spatialRank=2) (c) # batch normalization between convolution and non-linearity
    r = relu (b)
    p = MaxPooling((3,3), strides=(2,2)) (r)
}.p