AWS Lambda Layer Using Another Layer - node.js

I've got a Node.js AWS Lambda layer (let's call it dbUtil) with some low-level database access code (things like opening connections, executing prepared statements, etc.).
Now I want to create another layer (let's call it modelUtil) with higher-level, data-model-specific code (things like data transfer objects and model-specific transformations).
I would very much like to leverage the code in the dbUtil layer within the higher-level modelUtil layer, while still being able to import dbUtil into a Lambda function independently.
Importing a layer into a Lambda function is easy: SAM plops the layer code into /opt/nodejs/. But as far as I know, nothing analogous exists for layers; AWS doesn't give you a way to import one layer into another. Additionally, each layer is self-contained, so I couldn't just put const dbUtil = require('./dbUtil') in the modelUtil.js file unless the two files were in the same directory when I built the layer, which would force them to be the same layer.
Is there a way I can have a dependency from one layer (modelUtil) on another layer (dbUtil) while still allowing them to be treated as independent layers?

I just tested this on Lambda and I can confirm that a layer can import functions and dependencies from another layer. Even the merge order does not matter.
In your case, for the modelUtil layer to import functions from the dbUtil layer:
(Inside modelUtil)
const func1 = require('/opt/<the location of func1 in dbUtil>')
For the modelUtil layer to import npm dependencies from the dbUtil layer:
(Inside modelUtil)
const dependency = require('<the dependency name>')
It is as simple as that!

Related

How can I share code between AWS Lambda Layers?

I understand that you can share code among AWS Lambda functions when those functions use the same layer.
However, I want to reuse code among Lambda Layers.
Can I just reference the /opt/nodejs/ folder inside my layer code in order to access another layer's code, hoping that both layers are used by the same Lambda function?
E.g.
layer1 --> /myFile1.ts
layer2 --> /myFile2.ts
myFunction uses both layer1 and layer2.
Can I do the following in my /myFile1.ts in order to use the /myFile2.ts code?
import * as _ from '/opt/nodejs/layer2/myFile2.ts'

How to save model architecture in PyTorch?

I know I can save a model with torch.save(model.state_dict(), FILE) or torch.save(model, FILE), but neither of them saves the architecture of the model.
So how can we save the architecture of a model in PyTorch, like creating a .pb file in TensorFlow? I want to apply different tweaks to my model. If I can't save the architecture of a model, is there a better way than copying the whole class definition every time and creating a new class?
You can refer to this article to understand how to save the classifier. To make tweaks to a model, you can create a new model that is a child of the existing model's class:
class newModel(oldModelClass):
    def __init__(self):
        super(newModel, self).__init__()
With this setup, newModel has all the layers as well as the forward function of oldModelClass. If you need to make tweaks, you can define new layers in the __init__ function and then write a new forward function that uses them.
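A minimal sketch of that pattern (both class names here are placeholders, not from any library):

import torch.nn as nn

class oldModelClass(nn.Module):
    # stand-in for the existing model you want to tweak
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)

class newModel(oldModelClass):
    def __init__(self):
        super(newModel, self).__init__()
        self.head = nn.Linear(5, 2)  # the "tweak": a new layer

    def forward(self, x):
        # reuse the inherited layers, then apply the new one
        return self.head(super().forward(x))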
Saving all the parameters (state_dict) and all the modules is not enough, since there are operations that manipulate the tensors but are reflected only in the actual code of the specific implementation (e.g., the reshaping in ResNet).
Furthermore, the network might not have a fixed, pre-determined compute graph: think of a network with branching or a loop (recurrence).
Therefore, you must save the actual code.
Alternatively, if there are no branches/loops in the net, you may save the computation graph; see, e.g., this post.
You should also consider exporting your model using ONNX, which gives you a representation that captures both the trained weights and the computation graph.
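A minimal export sketch; the model and input shape below are stand-ins for your own:

import torch
import torch.nn as nn

# hypothetical trained model
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
model.eval()

dummy_input = torch.randn(1, 8)  # example input, an assumption
# the .onnx file captures both the trained weights and the computation graph
torch.onnx.export(model, dummy_input, "model.onnx")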
Regarding the actual question:
So how can we save the architecture of a model in PyTorch, like creating a .pb file in TensorFlow?
The answer is: you cannot.
Is there any way to load a trained model without declaring the class definition before?
I want the model architecture as well as the parameters to be loaded.
No, you have to have the class definition available first; this is a Python pickling limitation.
https://discuss.pytorch.org/t/how-to-save-load-torch-models/718/11
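A small sketch of that limitation (the class name is made up):

import torch
import torch.nn as nn

class Net(nn.Module):  # hypothetical model class
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

torch.save(Net(), "net.pt")  # the pickle stores a reference to Net, not its code

# loading works only where class Net is defined/importable;
# weights_only=False unpickles the full object (newer PyTorch defaults to True)
model = torch.load("net.pt", weights_only=False)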
There are other options, though (you have probably already seen most of them), listed at this PyTorch post:
https://pytorch.org/tutorials/beginner/saving_loading_models.html
PyTorch's way of serializing a model for inference is to use torch.jit to compile the model to TorchScript.
PyTorch's TorchScript supports more advanced control flow than TensorFlow, and thus serialization can happen either through tracing (torch.jit.trace) or through compiling the Python model code (torch.jit.script).
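A minimal sketch of both routes (the model is a made-up example):

import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical model
    def forward(self, x):
        return torch.relu(x) + 1

model = MyModel()

# tracing: run the model on an example input and record the ops executed
traced = torch.jit.trace(model, torch.randn(3))

# scripting: compile the Python source itself, preserving control flow
scripted = torch.jit.script(model)

scripted.save("model_ts.pt")              # stores code and weights together
restored = torch.jit.load("model_ts.pt")  # no class definition needed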
Great references:
Video which explains this: https://www.youtube.com/watch?app=desktop&v=2awmrMRf0dA
Documentation: https://pytorch.org/docs/stable/jit.html

Can I access what was once `tf.get_global_step()` from within a custom Keras layer?

I'm implementing a custom layer with the Keras API (working with TF2.0-beta). I want to use the epoch number in my calculation in order to decay a parameter over time (meaning, in the call() method).
I'm used to tf.get_global_step(), but I understand that TF deprecated all global scopes, and surely for good reason.
If I had the model instance, I could use model.optimizer.iterations, but I'm not sure how I get the instance of my parent model when I'm implementing a Layer.
Do I have any way to do that, or is the only way to have the layer expose a Callback that updates the parameter I want to decay? Other ideas? Ideally something that wouldn't make the user of the layer aware of that inner detail (that's why I don't like the Callback approach: the user has to add the callback to the model).
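To make the Callback idea concrete, a rough sketch (the layer, the variable, and the decay schedule are all made up for illustration):

import tensorflow as tf

class DecayingLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # non-trainable state that the callback advances once per epoch
        self.epoch = tf.Variable(0.0, trainable=False)

    def call(self, inputs):
        decay = tf.exp(-0.1 * self.epoch)  # example schedule
        return inputs * decay

class EpochUpdater(tf.keras.callbacks.Callback):
    def __init__(self, layer):
        super().__init__()
        self.layer = layer

    def on_epoch_begin(self, epoch, logs=None):
        self.layer.epoch.assign(float(epoch))

# usage: model.fit(..., callbacks=[EpochUpdater(the_layer_instance)])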

How can I modify a built-in model inside Keras Applications?

I need to get several outputs from several layers of a Keras model, instead of only the output of the last layer.
I know how to adjust the code according to what I need, but I don't know how I can then use it within Keras Applications; I mean, how can I import it afterwards? Do I need to reinstall the keras-applications package via setup.py again? I did so, but nothing happened; my changes weren't applied. Is there a different way to get the outputs from different layers within the model?
Actually the solution is simple: you just need to take the output of the specified layer.
new_model = tf.keras.Model(base_model.input, base_model.get_layer(layer_name).output)
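For example, with a stock application model (block3_pool and block5_pool are real VGG16 layer names; substitute whichever layers you need):

import tensorflow as tf

base_model = tf.keras.applications.VGG16(weights=None)  # or weights='imagenet'

# one sub-model returning several intermediate outputs at once
new_model = tf.keras.Model(
    base_model.input,
    [base_model.get_layer(name).output for name in ('block3_pool', 'block5_pool')])

features = new_model(tf.random.normal((1, 224, 224, 3)))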

Freeze layers while passing part of the features

I want to design a network using Keras like the one in the picture, and there is a problem I need some help with. Each F* is a feature tensor (like a sentence or a picture), and the shared layers are a block comprising several layers. The F* tensors merge together after passing through the shared layers, and the merged feature then passes through an output layer. The structure is described below.
The problem is that I want to train this network using only F1. Namely, when F2 passes through the shared layers, the shared layers are frozen.
I would very much appreciate it if you could answer me with pseudocode.
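One possible sketch, not necessarily the only way: route F2 through the shared block but cut its gradients with tf.stop_gradient, so the shared weights are updated only through the F1 path. All shapes and layer sizes below are assumptions.

import tensorflow as tf

# stand-in for the shared block of several layers
shared = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
])

f1 = tf.keras.Input(shape=(32,), name='F1')
f2 = tf.keras.Input(shape=(32,), name='F2')

h1 = shared(f1)
# gradients from the F2 branch never reach the shared weights
h2 = tf.keras.layers.Lambda(tf.stop_gradient)(shared(f2))

merged = tf.keras.layers.Concatenate()([h1, h2])
output = tf.keras.layers.Dense(1)(merged)

model = tf.keras.Model(inputs=[f1, f2], outputs=output)
model.compile(optimizer='adam', loss='mse')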
