Fetch requests go to the local base URL - node.js

I'm trying to fetch data from my backend service (an Express server) in my web client (also an Express server). Locally it works fine, using environment variables to set the backend service URL. But deployed on AWS, fetching from my web-client EC2 instance to my backend EC2 instance fails.
I log my environment variable for the backend service (it comes from AWS SSM Parameter Store) and it prints the correct service URL for my backend EC2 instance.
But then the request fails, because it calls 'GET host-url/service-url/endpoint' instead of 'GET service-url/endpoint'. I don't know if this is an AWS or a Node.js/Express problem.
That's how I call my backend:
async function callEndpoint(endpointUrl) {
  console.log("Fetching to: " + endpointUrl);
  const response = await fetch(endpointUrl, {
    method: 'GET',
  });
  const data = await response.json();
  return data;
}
The console.log prints the correct value, but fetch nevertheless treats it (I guess, though I don't understand why) as a relative path, prefixing it with the host URL (IP/DNS) of my frontend EC2 instance.
I don't know how relevant this is, but my servers run in Docker containers in an ECS cluster (each container on its own EC2 instance).

If you don't specify a scheme in the URL, fetch treats it as a relative path and resolves it against the current base URL.
fetch("external-service.domain.com/endpoint")
therefore translates into
fetch("https://hostname/external-service.domain.com/endpoint")
Try adding https:// or the appropriate scheme to your URL.
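For example, if the value coming out of SSM might arrive without a scheme, a small guard in front of the call is one option (just a sketch; withScheme is a hypothetical helper and BACKEND_SERVICE_URL stands in for whatever variable holds your service URL):
// Prefix a scheme if the configured URL lacks one, so fetch never
// falls back to relative resolution against the frontend's host.
function withScheme(url, scheme = 'https') {
  return /^[a-z][a-z0-9+.-]*:\/\//i.test(url) ? url : `${scheme}://${url}`;
}

const data = await callEndpoint(withScheme(process.env.BACKEND_SERVICE_URL)); // inside an async function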
Read more:
https://url.spec.whatwg.org/#url-writing

Related

How to allow outgoing requests from GCP Cloud Functions with Node 16?

I am using Node 16 with Cloud Functions, together with the Twilio and Outlook APIs.
I can access these third-party APIs on my local system using firebase function serve,
but when I deploy the functions to GCP, all outgoing requests, such as calls to those third-party APIs, stop working and give errors like
connection timed out and code: 'ECONNABORTED'.
How can I fix this?
In order to allow outgoing requests from Google Cloud Functions with Node.js version 16, you need to set the environment variable 'FUNCTION_TRIGGER_TYPE' to 'http'.
This can be done by adding the following to the cloud function's configuration file:
environment_variables:
FUNCTION_TRIGGER_TYPE: http
Once the environment variable has been set, you can then use the http module to make outgoing requests from inside your cloud function.
For example:
const http = require('http');

exports.myFunction = (req, res) => {
  const options = {
    hostname: 'example.com',
    port: 80,
    path: '/',
    method: 'GET'
  };
  // use a different name so the function's own req/res parameters are not shadowed
  const outgoingReq = http.request(options, (outgoingRes) => {
    // handle response
  });
  outgoingReq.end();
};
Additionally, check the following points:
You should make sure that the network access control lists of your VPC allow outbound requests from the cloud function. You can do this by adding an egress rule to the VPC.
You can also use the Node.js http or https modules to make requests. You can find more information about making requests with Node.js in the Node.js documentation.
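As a plain illustration of that last point (a sketch only, not GCP-specific configuration; the function name and target URL are placeholders, and it assumes an HTTP-triggered function with Express-style req/res):
const https = require('https');

exports.checkOutbound = (req, res) => {
  // Example outbound call; swap in the third-party API you actually need.
  const outgoing = https.get('https://www.googleapis.com/discovery/v1/apis', (apiRes) => {
    let body = '';
    apiRes.on('data', (chunk) => (body += chunk));
    apiRes.on('end', () => res.status(apiRes.statusCode).send(body.slice(0, 200)));
  });

  // Surface connectivity problems instead of hanging until the function times out.
  outgoing.setTimeout(5000, () => outgoing.destroy(new Error('outbound request timed out')));
  outgoing.on('error', (err) => {
    if (!res.headersSent) res.status(500).send(err.message);
  });
};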

Download file to AWS Lambda using SSH client

I am trying to download a file from an EC2 instance and store it temporarily in the /tmp folder of an AWS Lambda function. This is what I have tried:
const fs = require('fs');
let Client = require('ssh2-sftp-client');
let sftp = new Client();

sftp.connect({
  host: host,
  username: user,
  privateKey: fs.readFileSync(pemfile)
}).then(() => {
  return sftp.get('/config/test.txt', fs.createWriteStream('/tmp/test.txt'));
}).then(() => {
  sftp.end();
}).catch(err => {
  console.error(err.message);
});
The function runs without generating an error but nothing is written to the destination file. What am I doing wrong here and how could I debug this? Also is there a better way of doing this altogether?
This is not the cloud way to do it, IMO. Create an S3 bucket, and create a proper Lambda execution role so the Lambda function is able to read from the bucket. Also, create a role for the EC2 instance so it can write to the same S3 bucket. Using the S3 API from both sides, the Lambda function and the EC2 instance, should be enough to share the file.
Think about this approach: you decouple your solution from a VPC and region perspective. Also, since the Lambda only needs to access S3, you save ENI (elastic network interface) resources, so you are not using up your VPC's private IPs. These are just advantages that may not matter in your case, but it is good to be aware of them.
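As a rough sketch of the Lambda side of that approach (the bucket name and key are placeholders, and it assumes the execution role already allows s3:GetObject on the bucket):
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  // The EC2 instance is assumed to have uploaded config/test.txt to the shared bucket beforehand.
  const { Body } = await s3.getObject({
    Bucket: 'my-shared-bucket',   // placeholder
    Key: 'config/test.txt'
  }).promise();

  // /tmp is the only writable path inside Lambda.
  fs.writeFileSync('/tmp/test.txt', Body);
  return 'downloaded';
};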

Failed attempts to write to DynamoDB Local

I recently discovered DynamoDB Local and started building it into my project for local development. I decided to go the Docker image route (as opposed to the downloadable .jar file).
That being said, I've gotten the image up and running, have created a table, and can successfully interact with the Docker container via the AWS CLI. aws dynamodb list-tables --endpoint-url http://localhost:8042 successfully returns the table I created previously.
However, when I run my Lambda function, I set my AWS config like so.
const axios = require('axios')
const cheerio = require('cheerio')
const randstring = require('randomstring')
const aws = require('aws-sdk')
const dynamodb = new aws.DynamoDB.DocumentClient()

exports.lambdaHandler = async (event, context) => {
  let isLocal = process.env.AWS_SAM_LOCAL
  if (isLocal) {
    aws.config.update({
      endpoint: new aws.Endpoint("http://localhost:8042")
    })
  }
I have confirmed the endpoint is getting set, but it actually writes to the table (with the same name as in the local DynamoDB instance) in the live AWS service instead of the local container and table.
It's also worth mentioning I'm unable to connect to the local instance of DynamoDB with the AWS NoSQL Workbench tool even though it's configured to point to http://localhost:8042 as well...
Am I missing something? Any help would be greatly appreciated. I can provide any more information if I haven't already done so as well :D
Thanks.
SDK configuration changes, such as region or endpoint, do not retroactively apply to existing clients (regular DynamoDB client or a document client).
So, change the configuration first and then create your client object. Or simply pass the configuration options into the client constructor.
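For example, a minimal rearrangement of the setup from the question (a sketch that keeps the AWS_SAM_LOCAL check and the localhost:8042 endpoint; the rest of the handler is omitted):
const aws = require('aws-sdk')

// Decide on the endpoint before (or while) constructing the client,
// so the document client actually talks to DynamoDB Local under SAM.
const dynamodb = process.env.AWS_SAM_LOCAL
  ? new aws.DynamoDB.DocumentClient({ endpoint: 'http://localhost:8042' })
  : new aws.DynamoDB.DocumentClient()

exports.lambdaHandler = async (event, context) => {
  // use dynamodb here as before
}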

aws s3 node js sdk method generatePutUrl returns different results for localhost and deployment on heroku

I am trying to manage direct file uploads to S3 according to the Heroku recommendations:
first, generate a presigned URL on the server,
then use this URL in the client for a direct upload of the image from the browser to the S3 bucket.
I finally managed to get it working locally,
but when I tried to deploy the server on Heroku it started to fail for no apparent reason and without a readable error, just a generic error and a strange message when I try to print it.
What looks strange to me is that the presigned URLs are completely different depending on whether I make the call from localhost or from Heroku.
response for localhost looks like this:
https://mybucket.s3.eu-west-1.amazonaws.com/5e3ec346d0b5af34ef9dfadf_avatar.png?AWSAccessKeyId=<AWSKeyIdHere>&Content-Encoding=base64&Content-Type=image%2Fpng&Expires=1581172437&Signature=xDJcRBiA%2FmQF1qKhBZrnhFXWdaM%3D
and response for heroku deployment looks like this:
https://mybucket.s3.u-west-1.amazonaws.com/5e3ee2bd1513b60017d85c6c_avatar.png?Content-Type=image%2Fpng&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<credentials-key-here>%2F20200208%2Fu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20200208T163315Z&X-Amz-Expires=900&X-Amz-Signature=<someSignature>&X-Amz-SignedHeaders=content-encoding%3Bhost
The server code is almost like in the examples:
const AWS = require('aws-sdk')

const Bucket = process.env.BUCKET_NAME
const region = process.env.BUCKET_REGION

AWS.config = new AWS.Config({
  accessKeyId: process.env.S3_KEY,
  secretAccessKey: process.env.S3_SECRET,
  region,
  logger: console
})

const s3 = new AWS.S3()

async function generatePutUrl(inputParams = {}) {
  const params = { Bucket, ...inputParams }
  const { Key } = inputParams
  const putUrl = await s3.getSignedUrl('putObject', params)
  const getUrl = generateGetUrlLocaly(Key) // helper defined elsewhere in the project
  return { putUrl, getUrl }
}
The only difference I can imagine is SSL: I run the local server via http and Heroku goes over https by default,
but I don't understand how that could matter here.
I would appreciate any meaningful advice on how to debug and fix this.
Thank you.
It looks like your bucket region is incorrect. Shouldn't it be eu-west-1 instead of u-west-1?
Please update your BUCKET_REGION in environment variables at Heroku Server settings from
u-west-1
to
eu-west-1
and restart the dynos. It may solve your problem.
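If you want to catch this kind of typo earlier, one option is a small sanity check at server startup (a sketch; the regex only covers the common region naming pattern, and BUCKET_REGION is the variable already used in the question's code):
// Fail fast if the region env var doesn't look like an AWS region name,
// e.g. "u-west-1" (missing letter) instead of "eu-west-1".
const region = process.env.BUCKET_REGION
if (!/^[a-z]{2}-[a-z]+-\d$/.test(region || '')) {
  throw new Error(`BUCKET_REGION looks malformed: "${region}"`)
}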

How to run Alexa skill with the alexa-sdk on own server with Node.js without Lambda drop-in?

The Alexa skill docs will eventually allow you to send webhooks to https endpoints. However the SDK only documents lambda style alexa-sdk usage. How would one go about running Alexa applications on one's own server without anything abstracting Lambda? Is it possible to wrap the event and context objects?
You can already use your own endpoint. When you create a new skill, in the configuration tab, just choose HTTPS and provide your HTTPS endpoint. ASK will call your endpoint, where you can run anything you want (tip: check ngrok.com to tunnel to your own dev machine). Regarding the event and context objects: your endpoint will receive the event object information. You don't need the context object for anything; it just lets you interact with Lambda-specific stuff (http://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html). Just make sure that you comply with the (undocumented) timeouts set by ASK and you are good to go.
Here's a way to do this that requires only a small change to your Skill code:
In your main index.js entry point, instead of:
exports.handler = function (event, context) {
use something like:
exports.myAppName = function (funcEvent, res) {
Below that, add the following workaround:
var event = funcEvent.body
// since not using Lambda, create dummy context with fail and succeed functions
const context = {
  fail: () => {
    res.sendStatus(500);
  },
  succeed: data => {
    res.send(data);
  }
};
Install and use Google Cloud Functions Local Emulator on your laptop. When you start and deploy your function to the emulator, you will get back a Resource URL something like http://localhost:8010/my-project-id/us-central1/myAppName.
Create a tunnel with ngrok. Then take the ngrok endpoint and put it in place of localhost:8010 in the Resource URL above. Your resulting fulfillment URL will be something like: https://b0xyz04e.ngrok.io/my-project-id/us-central1/myAppName
Use the fulfillment URL (like above) under Configuration in the Alexa dev console, selecting https as the Service Endpoint Type.
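If you would rather skip the emulator entirely, the same modified handler can also be mounted behind a plain Express server (a sketch; the ./index path, the /alexa route and the port are placeholders):
const express = require('express');
const { myAppName } = require('./index');   // the handler modified as shown above

const app = express();
app.use(express.json());                     // Alexa POSTs a JSON request body

// The handler reads the event from funcEvent.body and replies through res,
// so Express's req/res pair can be passed straight through.
app.post('/alexa', (req, res) => myAppName(req, res));

app.listen(3000, () => console.log('Skill endpoint listening on port 3000'));
Point the skill's HTTPS endpoint (or an ngrok tunnel in front of it) at that route, as described above.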