How can I share code between AWS Lambda Layers? - node.js

I understand that you can share code among AWS Lambda functions when those functions use the same layer.
However, I want to reuse code among Lambda Layers.
Can I reference the /opt/nodejs/ folder from inside my layer code to access another layer's code, assuming both layers are used by the same Lambda function?
E.g.
layer1 --> /myFile1.ts
layer2 --> /myFile2.ts
myFunction uses both layer1 and layer2.
Can I do the following in my /myFile1.ts in order to use the /myFile2.ts code?
import * as _ from '/opt/nodejs/layer2/myFile2'

Related

AWS Lambda Layer Using Another Layer

I've got a Node.js AWS Lambda Layer (let's call it dbUtil) with some low-level database access code (stuff like opening connections, executing prepared statements, etc.).
Now I want to create another layer (let's call it modelUtil) with higher level, data model-specific code (stuff like data transfer objects, and model-specific transformations).
I would very much like to be able to leverage the code in the dbUtil layer within the higher-level modelUtil layer, while still being able to import dbUtil into a lambda function independently.
Importing a layer into a Lambda function is easy, since SAM plops the layer code into /opt/nodejs/. But as far as I know, nothing analogous exists for layers: AWS doesn't give you the ability to import one layer into another layer in the same way. Additionally, each layer is self-contained, so I couldn't just put const dbUtil = require('./dbUtil') in the modelUtil.js file unless both were in the same directory when I built the layer, which would force them to be the same layer.
Is there a way I can have a dependency from one layer (modelUtil) on another layer (dbUtil) while still allowing them to be treated as independent layers?
I just tested this on Lambda and I can testify that a Layer can import functions and dependencies from another Layer. Even the merge order does not matter.
For your case, for modelUtil Layer to import functions from dbUtil Layer:
(Inside modelUtil)
const func1 = require('/opt/<the location of func1 in dbUtil>')
For modelUtil Layer to import npm dependencies from dbUtil Layer:
(Inside modelUtil)
const dependency = require('<the npm package name>')
It is as simple as that!

Can I access what was once `tf.get_global_step()` from within a custom Keras layer?

I'm implementing a custom Layer with the Keras API (working with TF2.0-beta). I want to use the epoch number in my calculation in order to decay a parameter over time (meaning - in the call() method).
I'm used to tf.get_global_step() but understand that TF deprecated all global scopes, and definitely for a good reason.
If I had the model instance, I could use model.optimizer.iterations, but I'm not sure how I get the instance of my parent model when I'm implementing a Layer.
Is there any way to do that, or is the only option to have the layer expose a Callback that updates the parameter I want to decay? Any other ideas? Ideally it would be something that doesn't make the user of the layer aware of that inner detail (that's why I don't like the Callback approach: the user has to add the callback to the model).
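One option along the lines of the Callback approach mentioned above: keep the decaying parameter in a non-trainable weight on the layer and let a Callback update it at the start of each epoch. A minimal sketch (the names DecayedLayer, ParamDecayCallback, initial_rate, and the multiplicative decay schedule are illustrative assumptions, not Keras API):
import tensorflow as tf

class DecayedLayer(tf.keras.layers.Layer):
    def __init__(self, initial_rate=1.0, **kwargs):
        super().__init__(**kwargs)
        self.initial_rate = initial_rate

    def build(self, input_shape):
        # Non-trainable scalar holding the parameter to decay over time.
        self.rate = self.add_weight(
            name="rate",
            shape=(),
            initializer=tf.keras.initializers.Constant(self.initial_rate),
            trainable=False,
        )

    def call(self, inputs):
        # Use the current value of the decayed parameter in the computation.
        return inputs * self.rate

class ParamDecayCallback(tf.keras.callbacks.Callback):
    def __init__(self, layer, decay=0.9):
        super().__init__()
        self.layer = layer
        self.decay = decay

    def on_epoch_begin(self, epoch, logs=None):
        # Decay the layer's parameter as a function of the epoch number.
        self.layer.rate.assign(self.layer.initial_rate * self.decay ** epoch)

# Usage: model.fit(x, y, callbacks=[ParamDecayCallback(my_decayed_layer)])
The drawback is exactly the one mentioned in the question: the user of the layer has to remember to pass the callback to fit().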

freeze layers while passing a part of features

I want to design a network with Keras like the picture described below, and there is a problem I need some help with. Each F* is a feature tensor (e.g. a sentence or an image), and the shared layers are a block made up of several layers. The F* features are merged after passing through the shared layers, and the merged feature then passes through an output layer.
The problem is that I want to train this network using only F1. That is, when F2 passes through the shared layers, the shared layers should be frozen.
I would very much appreciate an answer with pseudocode.
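One way to get that behaviour is to stop gradients on the F2 branch, so the shared block is only updated through F1. A rough sketch with made-up input shapes and layer sizes (the shared block here is just two Dense layers for illustration):
import tensorflow as tf
from tensorflow.keras import layers, models

# Shared block (illustrative sizes).
shared = models.Sequential([
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
])

f1 = layers.Input(shape=(100,), name="F1")
f2 = layers.Input(shape=(100,), name="F2")

h1 = shared(f1)                                   # gradients flow, so the shared block is trained through F1
h2 = layers.Lambda(tf.stop_gradient)(shared(f2))  # shared block is effectively frozen on the F2 path

merged = layers.Concatenate()([h1, h2])
output = layers.Dense(1, activation="sigmoid")(merged)

model = models.Model(inputs=[f1, f2], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")
If instead you want the shared block frozen for both inputs, setting shared.trainable = False before compiling would do that.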

keras layer that computes logarithms?

I'd like to set up a Keras layer in which each node simply computes the logarithm of the corresponding node in the preceding layer. I see from the Keras documentation that there is a "log" function in its backend module. But somehow I'm not understanding how to use this.
Thanks in advance for any hints you can offer!
You can use any backend function inside a Lambda layer:
from keras.layers import Lambda
import keras.backend as K
Define just any function taking the input tensor:
def logFunc(x):
    return K.log(x)
And create a lambda layer with it:
#add to the model the way you're used to:
model.add(Lambda(logFunc,output_shape=(necessaryWithTheano)))
And if the function is already defined, takes only one argument, and returns a tensor, you don't need to create your own function; just use Lambda(K.log), for instance.
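For instance, a minimal usage sketch (layer sizes are arbitrary; softplus is used here only to keep the inputs to the logarithm positive):
from keras.models import Sequential
from keras.layers import Dense, Lambda
import keras.backend as K

model = Sequential()
model.add(Dense(16, activation="softplus", input_shape=(8,)))  # softplus keeps outputs positive
model.add(Lambda(K.log))  # element-wise logarithm of the previous layer's output
model.add(Dense(1))
model.compile(optimizer="adam", loss="mse")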

Keras - using activation function with a parameter

How is it possible to use leaky ReLUs in the newest version of keras?
The relu() function accepts an optional parameter 'alpha' that controls the negative slope, but I cannot figure out how to pass this parameter when constructing a layer.
This line is how I tried to do it,
model.add(Activation(relu(alpha=0.1)))
but then I get the error
TypeError: relu() missing 1 required positional argument: 'x'
How can I use a leaky ReLU, or any other activation function with some parameter?
relu is a function, not a class, and it takes the input to the activation function as the parameter x. The Activation layer takes a function as its argument, so you could wrap it in a lambda over the input x, for example:
model.add(Activation(lambda x: relu(x, alpha=0.1)))
Well, from this source (the Keras docs) and this GitHub question, you use a linear activation and then put the leaky ReLU as another layer right after:
from keras.layers.advanced_activations import LeakyReLU
model.add(Dense(512, activation='linear')) # add any layer, with an identity/linear activation (no squashing)
model.add(LeakyReLU(alpha=.001)) # add an advanced activation
does that help?
You can build a wrapper for parameterized activation functions. I've found this useful and more intuitive.
class activation_wrapper(object):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        def _func(x):
            return self.func(x, *args, **kwargs)
        return _func
Of course, I could have used a lambda expression in __call__.
Then
wrapped_relu = activation_wrapper(relu)
Then use it as you did above:
model.add(Activation(wrapped_relu(alpha=0.1)))
You can also use it as part of a layer:
model.add(Dense(64, activation=wrapped_relu(alpha=0.1)))
While this solution is a little more complicated than the one offered by @Thomas Jungblut, the wrapper class can be reused for any parameterized activation function. In fact, I use it whenever I have a family of activation functions that are parameterized.
Keras defines separate activation layers for the most common use cases, including LeakyReLU, ThresholdedReLU, and ReLU (a generic version that supports all ReLU parameters), among others. See the full documentation here: https://keras.io/api/layers/activation_layers
Example usage with the Sequential model:
import tensorflow as tf
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(10,)))
model.add(tf.keras.layers.Dense(16))
model.add(tf.keras.layers.LeakyReLU(0.2))
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Activation(tf.keras.activations.sigmoid))
model.compile('adam', 'binary_crossentropy')
If the activation parameter you want to use is unavailable as a predefined class, you could use a plain lambda expression as suggested by @Thomas Jungblut:
from tensorflow.keras.layers import Activation
model.add(Activation(lambda x: tf.keras.activations.relu(x, alpha=0.2)))
However, as noted by @leenremm in the comments, this fails when trying to save or load the model. As suggested there, you could use the Lambda layer as follows:
from tensorflow.keras.layers import Activation, Lambda
model.add(Activation(Lambda(lambda x: tf.keras.activations.relu(x, alpha=0.2))))
However, the Lambda documentation includes the following warning:
WARNING: tf.keras.layers.Lambda layers have (de)serialization limitations!
The main reason to subclass tf.keras.layers.Layer instead of using a Lambda layer is saving and inspecting a Model. Lambda layers are saved by serializing the Python bytecode, which is fundamentally non-portable. They should only be loaded in the same environment where they were saved. Subclassed layers can be saved in a more portable way by overriding their get_config method. Models that rely on subclassed Layers are also often easier to visualize and reason about.
As such, the best method for activations not already provided by a layer is to subclass tf.keras.layers.Layer instead. This should not be confused with subclassing object and overriding __call__ as done in @Anonymous Geometer's answer, which is the same as using a lambda without the Lambda layer.
Since my use case is covered by the provided layer classes, I'll leave it up to the reader to implement this method. I am making this answer a community wiki in the event anyone would like to provide an example below.
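For what it's worth, a possible sketch of that subclassing approach (the class name and the default alpha are illustrative, not part of the Keras API):
import tensorflow as tf

class LeakyActivation(tf.keras.layers.Layer):
    def __init__(self, alpha=0.2, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha

    def call(self, inputs):
        # Same computation as tf.keras.activations.relu(x, alpha=...).
        return tf.keras.activations.relu(inputs, alpha=self.alpha)

    def get_config(self):
        # Overriding get_config keeps the layer serializable, unlike a Lambda layer.
        config = super().get_config()
        config.update({"alpha": self.alpha})
        return config

# Usage: model.add(LeakyActivation(alpha=0.2))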
