I am trying to figure out how to add my classes to an AWS Lambda layer. I have already added a custom library and gotten that to work, but following the same process does not seem to work for my own classes. I have tried zipping the classes up inside a nodejs folder, and I have also tried placing the classes inside a node_modules folder within the nodejs folder.
Lastly, assuming they can be added, how do I import them into my Lambda function to use?
const uuid = require("uuidv4").default;

module.exports = class Order {
  // note: the original snippet assigned an undeclared `items` variable,
  // so it is taken as a constructor parameter here
  constructor(userId, items, status, closed) {
    this.orderId = uuid();
    this.userId = userId;
    this.items = items;
    this.status = status;
    this.closed = closed;
  }
};
Just add the files (with your classes) to the nodejs directory, zip it and upload it into a Lambda layer.
Layers are extracted to the /opt directory in the function execution environment. Each runtime looks for libraries in a different location under /opt, depending on the language. For Node, you would require your classes as follows:
const myclass = require('/opt/nodejs/myclass');
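For example, given a layer zip that contains a hypothetical file nodejs/order.js exporting the Order class above, a minimal handler sketch could look like this (the event fields are assumptions for illustration):

// index.js of the Lambda function; the layer contents are extracted to /opt
const Order = require('/opt/nodejs/order');

exports.handler = async (event) => {
  // construct an order from hypothetical event fields
  const order = new Order(event.userId, event.items, "OPEN", false);
  return order;
};

If you place your files under nodejs/node_modules instead, the Node.js runtime puts /opt/nodejs/node_modules on NODE_PATH, so a plain require('order') should work as well.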
I am writing an AWS Lambda authorizer in Node.js. We are required to call an Azure AD API to fetch the public keys/security policies to validate the incoming access token.
However, to optimize performance, I decided to store the public keys/security policies in Node.js as a constant (this stays alive as long as the Lambda is warm or until the keys' TTL expires).
Question: Is this safe from a security perspective? I want to avoid "caching" it in DynamoDB, as calls to DynamoDB would also incur additional milliseconds. Ours is a very high-traffic application and we would like to save every millisecond possible for optimal performance. Any best practice is also highly appreciated.
Typically, you should not hard-code things like that in your code. Even though it is not a security problem, it makes maintenance harder.
For example: when the key is rotated or the policy changes and you had it hard-coded in your Lambda, you would need to update your code and do another deployment. This often causes issues because the developer forgets about it, and suddenly your authorizer does not work anymore. If the Lambda loads the information from an external service like S3, SSM, or Azure AD directly, you don't need another deployment. In theory, it should sort itself out, depending on which service you use and how you manage your keys.
I think the best way is to load the key from an external service during the initialisation phase of the Lambda, that is, when it is "booted" for the first time, and then cache that value for the duration of the Lambda's lifetime (a few minutes to a few hours).
You could, for example, load the public keys and policies directly from Azure, from S3, or from SSM Parameter Store.
The following code uses the AWS SDK for JavaScript v3, which is not bundled with the Lambda runtime. You can use v2 of the SDK as well.
const { SSMClient, GetParameterCommand } = require("@aws-sdk/client-ssm");

// This only happens once, when the Lambda is started for the first time:
const init = async () => {
  const config = {};
  // use whatever 'paramName' you defined when you created the SSM parameter;
  // it is declared outside the try block so the catch can reference it
  const paramName = "/azure/publickey";
  try {
    const command = new GetParameterCommand({ Name: paramName });
    const ssm = new SSMClient();
    const data = await ssm.send(command);
    config.publickey = data.Parameter.Value;
  } catch (error) {
    throw new Error("unable to read SSM parameter '" + paramName + "'.");
  }
  return config;
};

const initPromise = init();

exports.handler = async (event) => {
  const config = await initPromise;
  console.log("My public key '%s'", config.publickey);
  return "Hello World";
};
The most important part of this code is the init function, which runs only once and creates a config object that should contain your AWS SDK clients and all the configuration you need in your code. This way, you don't have to fetch the policy for every request that the Lambda processes.
I developed an algorithm so that when you type "-nuke #tag" in a Discord server's text channel, that member gets a "Muted" role and their id is added to an array. If they leave the server and join back, the bot compares the new member's id with all the ids in the array, and if they match it automatically gives that person the "Muted" role. The problem is, bot.on doesn't seem to work inside anything other than index.js. I don't really want to go inside all the event handlers and stuff just to get this one file, nuke.js, working.
This answer is in reference to a comment I made. This will show you how to create and access a global map.
To start you need to extend the client class by the map you want to use. You do that like this.
const { Client } = require('discord.js');
module.exports = class extends Client {
  constructor(config) {
    super({});
    this.muteIDs = new Map(); // you can name it whatever you like
    this.config = config;
  }
};
You should do that in a separate file. Let's call that file clientextend.js. Now you can create your bot client like you usually would, but you need to require the file you just created. This example assumes both the file in which you extend the client and the file in which you create it live in the same directory. Note: in your example you have bot instead of client.
const Client = require('./clientextend.js');
const client = new Client(config); // `config` is whatever configuration object you pass in
You now have access to the map you created anywhere you have your client available to you. You can access it like this.
muteIDs = client.muteIDs;
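To tie this back to your use case, here is a hedged sketch (assuming discord.js v12; the role lookup by name is just for illustration) of recording ids and re-applying the role when a muted member rejoins:

// in your -nuke command: record the member's id
client.muteIDs.set(member.id, true);

// when a member rejoins, re-apply the "Muted" role if their id was recorded
client.on('guildMemberAdd', async (member) => {
  if (client.muteIDs.has(member.id)) {
    const role = member.guild.roles.cache.find((r) => r.name === 'Muted');
    if (role) await member.roles.add(role);
  }
});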
I have some variables which are going to be used by the business logic part of a function. Therefore, instead of adding them to the appsettings.json file, I added a separate file, variable.json.
Testing on my machine works, but after deployment the function cannot find the file, and I get an error.
The file's Build Action property was None at first, but changing it to Content did not help either.
The file sits in the project root.
Because of that, the response to any call is "Function host is not running."
The code for reading this file (path = "Variables.json"):
// requires: using System.IO; using Newtonsoft.Json;
private static List<Variable> GetVariables(string path)
{
    string json = File.ReadAllText(path);
    var variables = JsonConvert.DeserializeObject<List<Variable>>(json);
    return variables;
}
Does anyone have any clue why this is happening?
The problem was that when you start the Azure Function locally, the file variable.json is available under Directory.GetCurrentDirectory(), but when published to the Azure portal it is under Directory.GetCurrentDirectory() + @"\site\wwwroot".
To get the correct folder path you can use the following code:
public static HttpResponseMessage Run(HttpRequestMessage req, ExecutionContext context)
{
    var path = System.IO.Path.Combine(context.FunctionDirectory, "variable.json");
    // ...
}
For startup.cs, you can use the following code:
var executioncontextoptions = builder.Services.BuildServiceProvider()
.GetService<IOptions<ExecutionContextOptions>>().Value;
var currentDirectory = executioncontextoptions.AppDirectory;
I am evaluating MikroORM for a future project. There are several questions I either could not find an answer to in the docs or did not fully understand.
Let me describe a minimally complex example (NestJS): I have an order processing system with two entities, Orders and Invoices, as well as a counter table for sequential invoice numbers (a legal requirement). It is important to mention that the OrderService create method is not always called by a controller, but also via a cron job/queue system. My question is about the use case of creating a new order:
class OrderService {
  async createNewOrder(orderDto) {
    const order = new Order();
    order.customer = orderDto.customer;
    order.items = orderDto.items;
    const invoice = await this.invoiceService.create(orderDto.items);
    order.invoice = invoice;
    await this.em.persistAndFlush(order); // entities are flushed via the EntityManager
    return order;
  }
}

class InvoiceService {
  async create(items): Promise<Invoice> {
    const invoice = new Invoice();
    invoice.number = await this.invoiceNumberService.getNextInSequence();
    // the next two lines are external APIs; if they throw, the whole transaction should roll back
    const pdf = await this.pdfCreator.createPdf(invoice);
    const upload = await s3Api.upload(pdf);
    return invoice;
  }
}

class InvoiceNumberService {
  async getNextInSequence(): Promise<number> {
    return await db.collection("counter").findOneAndUpdate({ type: "INVOICE" }, { $inc: { value: 1 } });
  }
}
The whole use case of creating a new order, with all subsequent service calls, should happen in one MikroORM transaction. So if anything throws in OrderService.createNewOrder() or one of the subsequently called methods, the whole transaction should be rolled back.
MikroORM does not allow the atomic update-increment shown in InvoiceNumberService. I can fall back to the native Mongo driver, but how do I ensure the call to collection.findOneAndUpdate() shares the same transaction as the entities managed by MikroORM?
MikroORM needs a unique request context. In the NestJS examples, this unique context is created at the controller level. In the example above, the service methods are not necessarily called by a controller, so I would need a new context for each call to OrderService.createNewOrder() with a lifetime scoped to the function call, correct? How can I achieve this?
How can I share the same request context between services? In the example above, InvoiceService and InvoiceNumberService would need the same context as OrderService for MikroORM to work properly.
I will start with the bad news: MongoDB transactions are not yet supported in MikroORM (although they will probably land within weeks; the PoC is already implemented). You can subscribe here for updates: https://github.com/mikro-orm/mikro-orm/issues/34
But let me answer the rest as it will then apply:
You can use const collection = (em as EntityManager<MongoDriver>).getConnection().getCollection('counter'); to get the collection from the internal mongo connection instance. You can also use orm.em.getTransactionContext() to get the current transaction context (currently implemented only in the SQL drivers, but in the future this will probably return the session object in mongo).
Also note that in the mongo driver, implicit transactions won't be enabled by default (it will be configurable, though), so you will need to use explicit transaction demarcation via em.transactional(...).
The RequestContext helper works automatically. You just register it as a middleware (done automatically in the NestJS orm adapter) and your request handler (route/endpoint/controller method) then runs inside a domain that shares the context. Thanks to this, all services in the DI container can share singleton instances of repositories, yet each automatically picks the right context from the domain.
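For entry points that don't go through a controller (your cron job/queue case), you can fork such a context manually. A minimal sketch, assuming orm and orderService are already wired up (the import is from 'mikro-orm' in v3, '@mikro-orm/core' in v4+):

const { RequestContext } = require('mikro-orm');

// each job run gets its own EntityManager fork for the duration of the
// callback; all services called inside automatically resolve to that fork
async function handleOrderJob(orderDto) {
  await RequestContext.createAsync(orm.em, async () => {
    await orderService.createNewOrder(orderDto);
  });
}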
You basically have this automatic request context, and then you can create new (nested) contexts manually via em.transactional(...).
https://mikro-orm.io/docs/transactions/#approach-2-explicitly
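A minimal sketch of that explicit demarcation, assuming an injected em (and, once mongo transactions land, the native counter update participating in the same transaction):

// everything inside the callback shares one transaction; if any await
// throws, the whole unit of work is rolled back
await em.transactional(async (em) => {
  const order = new Order();
  // fall back to the native collection on the same connection for the
  // atomic counter increment
  const counter = em.getConnection().getCollection('counter');
  const doc = await counter.findOneAndUpdate(
    { type: 'INVOICE' },
    { $inc: { value: 1 } },
    { returnOriginal: false } // option name from the mongo driver of that era
  );
  order.invoiceNumber = doc.value.value; // hypothetical field, for illustration
  em.persist(order);
});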
I'm creating a Windows Store app to read and write files.
I granted the Documents Library capability but am still getting this error:
App manifest declares document library access capability without specifying at least one file type association
My code snippet:
private async void Button_Click_1(object sender, RoutedEventArgs e)
{
    String temp = Month.SelectedValue.ToString() + "/" + Day.SelectedValue.ToString() + "/" + Year.SelectedValue.ToString();
    DateTime date = Convert.ToDateTime(temp);
    Windows.Storage.StorageFolder installedLocation = Windows.ApplicationModel.Package.Current.InstalledLocation;
    StorageFolder storageFolder = KnownFolders.DocumentsLibrary;
    // ReplaceExisting avoids an exception if sample.txt already exists
    StorageFile sampleFile = await storageFolder.CreateFileAsync("sample.txt", CreationCollisionOption.ReplaceExisting);
    var buffer = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(temp, Windows.Security.Cryptography.BinaryStringEncoding.Utf8);
    await Windows.Storage.FileIO.WriteBufferAsync(sampleFile, buffer);
    buffer = await Windows.Storage.FileIO.ReadBufferAsync(sampleFile);
}
Any other, better approach is also acceptable, but:
1. I don't have access to SkyDrive.
2. I also don't want to use a file picker.
You need to specify a file type association as well.
From: http://msdn.microsoft.com/en-us/library/windows/apps/hh967755.aspx
Documents Library:
Note You must add File Type Associations to your app manifest that declare specific file types that your app can access in this location.
You can find it in the app manifest: when you check Documents Library under Capabilities, you have to fill in at least one file type association under the Declarations tab.
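As a rough sketch, the resulting declaration in Package.appxmanifest looks something like this (.txt is just an example; exact element names and namespace prefixes depend on your target SDK):

<Extensions>
  <Extension Category="windows.fileTypeAssociation">
    <FileTypeAssociation Name="text">
      <SupportedFileTypes>
        <FileType>.txt</FileType>
      </SupportedFileTypes>
    </FileTypeAssociation>
  </Extension>
</Extensions>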
Found exactly what I was looking for: file type associations need to be added in the app manifest.
For those facing the same problem, check this link.