I am working in AWS with Amazon Connect Wisdom, which only accepts HTML and text files (see aws.amazon.com/blogs/contact-center/ingesting-content-to-power-real-time-recommendations-and-search-with-amazon-connect-wisdom/), but my data source is a CSV file.
How do I convert the CSV to HTML in Lambda code so that it can be passed to the Wisdom knowledge base?
Write the Node.js logic the same way as you would in any Node.js app, then use the AWS Lambda runtime to build the Lambda function around that same business logic.
There is nothing special about using an AWS Lambda function for this kind of task; it's the same logic, and there are many examples on the net that show you how to convert CSV to HTML.
If you do not know how to write an AWS Lambda function in Node.js, check the following example as a reference:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javascriptv3/example_code/cross-services/lambda-for-browser
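For illustration, here is a minimal sketch of such a conversion inside a Lambda handler, assuming the CSV sits in S3 and the converted HTML is written back to a bucket that Wisdom ingests from. The bucket and key names are placeholders, and the naive comma split does not handle quoted fields, so a proper CSV parser is preferable in practice.

// Rough sketch: convert a CSV object in S3 to an HTML table and write it back to S3.
// Bucket/key names are placeholders; a real CSV parser (e.g. csv-parse) is recommended
// because this naive split does not handle quoted commas.
const { S3Client, GetObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({});

const csvToHtml = (csv) => {
  const rows = csv.trim().split("\n").map((line) => line.split(","));
  const [header, ...body] = rows;
  const th = header.map((h) => `<th>${h}</th>`).join("");
  const trs = body
    .map((r) => `<tr>${r.map((c) => `<td>${c}</td>`).join("")}</tr>`)
    .join("\n");
  return `<html><body><table><tr>${th}</tr>\n${trs}</table></body></html>`;
};

exports.handler = async () => {
  const obj = await s3.send(
    new GetObjectCommand({ Bucket: "my-source-bucket", Key: "data.csv" })
  );
  const csv = await obj.Body.transformToString();
  const html = csvToHtml(csv);
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-wisdom-content-bucket",
      Key: "data.html",
      Body: html,
      ContentType: "text/html",
    })
  );
  return { statusCode: 200 };
};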
While working with AWS I need to load a WSDL file in order to set up a SOAP service. The problem I now encounter is that I don't know how I can add a file to the Docker container running my Lambda function so that I can read the file inside my Lambda, like in the code snippet below.
const { readdirSync } = require("fs");

// pathToWsdl is the relative path to the directory containing the WSDL file
const files = readdirSync(__dirname + pathToWsdl);
files.forEach(file => {
    log.info(file);
});
Any suggestions on how I can do this are greatly appreciated!
Here are a few options:
If the files are static and small, you can bundle them in the Lambda package.
If the files are static or change infrequently, then you can store them in S3 and pull them from there on Lambda cold start (see the sketch after this list).
If the files need to be accessed and modified by multiple Lambda functions concurrently or if you have a large volume of data with low-latency access requirements, then use EFS.
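For the second option, a minimal sketch of pulling the WSDL from S3 once per cold start and caching it in /tmp could look like this; the bucket, key, and file names are assumptions:

// Rough sketch: download the WSDL from S3 on cold start and cache it in /tmp.
// Bucket, key and file names are placeholders for illustration only.
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const fs = require("fs");

const s3 = new S3Client({});
const WSDL_PATH = "/tmp/service.wsdl";
let wsdlLoaded = false; // module scope survives across warm invocations

async function loadWsdl() {
  if (wsdlLoaded && fs.existsSync(WSDL_PATH)) return WSDL_PATH;
  const obj = await s3.send(
    new GetObjectCommand({ Bucket: "my-config-bucket", Key: "service.wsdl" })
  );
  fs.writeFileSync(WSDL_PATH, await obj.Body.transformToString());
  wsdlLoaded = true;
  return WSDL_PATH;
}

exports.handler = async () => {
  const wsdlFile = await loadWsdl();
  // ... set up the SOAP client from wsdlFile here
};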
EFS is overkill for a small, static file. I would just package the file with the Lambda function.
Summary: Logic App inline code (which uses Node.js) is missing the Buffer class.
Detailed: I am trying to trigger a Logic App when some content is pushed to SFTP. I want to add some metadata and save the details in Cosmos DB.
The issue is that the name of the file is received as a base64-encoded string in the inline code, and Buffer is not available to parse it.
I even tried to create a Set variable step (and decode the file name there), but I am unable to pass this variable to the inline code step (not supported).
The final option would be to use cloud functions instead of inline code, which I am trying to avoid.
Looking for a workaround for the conversion.
[Logic App error screenshot]
The Microsoft documentation notes that inline code:
Doesn't support require() statements
Doesn't work with variables
Inline code can only perform the simplest JavaScript operations, so we may not be able to use Buffer.
As for passing the base64-encoded string, you can put it in a Compose action first and then pass that into the inline code.
I suggest you first try the built-in base64-related expression functions in the Azure Logic App (for example, base64ToString()).
If this does not meet your needs, you can create an Azure Function and then call it from the Azure Logic App.
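If you do go the Azure Function route, a minimal sketch of an HTTP-triggered Node.js function that decodes the file name with Buffer could look like this; the request body shape ({ encodedFileName }) is an assumption for illustration:

// index.js - rough sketch of an HTTP-triggered Azure Function (Node.js) that decodes
// a base64-encoded file name sent by the Logic App. The body property name is assumed.
module.exports = async function (context, req) {
  const encoded = req.body && req.body.encodedFileName;
  if (!encoded) {
    context.res = { status: 400, body: "encodedFileName is required" };
    return;
  }
  const fileName = Buffer.from(encoded, "base64").toString("utf8");
  context.res = { status: 200, body: { fileName } };
};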
I am deploying Node.js code to AWS Lambda and I'd like to know how I can check whether it is running in Lambda, because I need the code to behave differently in Lambda than it does locally.
AWS Lambda sets various runtime environment variables that you can leverage. You can use the following in Node.js, for example:
const isLambda = !!process.env.LAMBDA_TASK_ROOT;
console.log("Running on Lambda:", isLambda);
Note that the double bang !! converts a truthy/falsy value to a boolean (true/false).
I'd advise setting your own environment variable on the Lambda function rather than checking for variables that the Lambda runtime happens to set.
By doing this you can ensure that any infrastructure changes on the AWS side of Lambda will not affect your code.
It also allows you to test locally when you are trying to reproduce a scenario, without the need to hardcode logic.
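For example, a minimal sketch assuming you define a variable such as IS_LAMBDA=true yourself in the function's configuration (the name is just a convention, not anything AWS provides):

// Checks a variable you define yourself on the function, e.g. IS_LAMBDA=true.
const isLambda = process.env.IS_LAMBDA === "true";
console.log("Running on Lambda:", isLambda);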
I am trying to create a PDF file that contains images and tables from HTML data in AWS Lambda using Python. I searched a lot on Google and didn't find any super cool solution. I tried some libraries locally (FPDF, pdfkit), but they don't work on AWS. Is there any simple tool to create the PDF and upload it to an S3 bucket? Thanks in advance.
You can use the reportlab PDF Python module. It is good for all the things you have asked for: you can add images, create tables, etc., and there are a lot of styling options available as well. You can find more about it here: https://www.reportlab.com/docs/reportlab-userguide.pdf
I am using this in production and it works pretty well for my use case, where I have to create an invoice. You can create the PDF in the /tmp directory and then upload it to S3.
The pdfkit library works with AWS Lambda. pdfkit internally needs the wkhtmltopdf binaries installed; you can add them as a Lambda layer. You can download the files from https://wkhtmltopdf.org/downloads.html.
Once you add the Lambda layer, you can set the config path as follows:
import pdfkit

config = pdfkit.configuration(wkhtmltopdf="/opt/bin/wkhtmltopdf")
pdfkit.from_string(html_input, "/tmp/output.pdf", configuration=config)  # or pdfkit.from_file(); html_input is your HTML, the file name under /tmp is up to you
You can upload the file generated in your Lambda temp location to an S3 bucket using upload_file(). You can refer to https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.upload_file on how to upload to an S3 bucket.
Can I use Lambda to compress images under a bucket?
I can get the images under a particular bucket via listObjects. How do I compress those objects and write them to another bucket?
Yes, you can absolutely use Lambda. Try this library: aws-lambda-image-compressor
AWS lambda function to compress and resize images
This is a Lambda function which resizes/reduces images automatically. When an image is put into some AWS S3 bucket, this function will resize/reduce it and save it into a new bucket. I have used it in the past and I loved it.
Usage
edit the lambda-config.js file and assign the name, description, memory size, and timeout of your Lambda function
edit the .env file with your AWS access data
npm install
gulp deploy
You can also try this other library, which is more popular: aws-lambda-image
If you really want to create something of your own and want a good start, I would recommend these two articles, which explain it very well:
Image conversion using Amazon Lambda and S3 in Node.js
Automating Image Compression Using S3 & Lambda
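If you go down the build-your-own route, a rough sketch using the sharp library (an assumed dependency, not one of the libraries above) and the AWS SDK v3 might look like this; the destination bucket name and resize settings are placeholders:

// Rough sketch of a Lambda triggered by S3 "ObjectCreated" events: it downloads the
// new image, resizes it with sharp (assumed to be bundled or added as a layer), and
// writes the result to a separate destination bucket.
const { S3Client, GetObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");
const sharp = require("sharp");

const s3 = new S3Client({});
const DEST_BUCKET = "my-compressed-images"; // placeholder name

exports.handler = async (event) => {
  for (const record of event.Records) {
    const Bucket = record.s3.bucket.name;
    const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    const obj = await s3.send(new GetObjectCommand({ Bucket, Key }));
    const original = Buffer.from(await obj.Body.transformToByteArray());

    const resized = await sharp(original)
      .resize({ width: 800, withoutEnlargement: true })
      .jpeg({ quality: 75 })
      .toBuffer();

    await s3.send(
      new PutObjectCommand({
        Bucket: DEST_BUCKET,
        Key,
        Body: resized,
        ContentType: "image/jpeg",
      })
    );
  }
};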
If you are fine with using Amazon API Gateway, then you can follow this AWS Compute Blog post:
Resize Images on the Fly with Amazon S3, AWS Lambda, and Amazon API Gateway
Hope this was useful.