A while back I created some feed-fetching and processing scripts with NodeJS for an application I'm working on. I've been running them manually on my local machine (OSX) for a while, and now I'm working on having a scheduled job do this instead. I figured I'd go with a Docker droplet ($5/mo) on DigitalOcean. I've created a Dockerfile with NodeJS (9), cron and everything I think I need. Everything works fine on my local machine when I build and run the Docker container.
However, when I deploy it to DigitalOcean the behaviour differs from running it locally, which is exactly what I wanted Docker to prevent. I have a main shell script that calls 8 different Node scripts sequentially. On DigitalOcean some of the NodeJS scripts seem to exit prematurely: they haven't finished yet, but the main shell script already continues with the next NodeJS script. An example:
execute() {
    const outputPath = path.join(this._feedDir, this._outputFile);
    // Create the feed dir if needed, do all requests, then write the combined result.
    return createDirIfNotExists(this._feedDir, this._log)
        .then(() => this._doRequests())
        .then(requestData => this._writeFile(
            outputPath,
            JSON.stringify({ requests: requestData.map(data => JSON.parse(data)) }),
            this._log
        ));
}
When running the code:
The script does create the dir.
The script does do all the requests to external sources (and accumulates the data).
The script sometimes writes the data to the file, and sometimes it doesn't, without any error code.
The main (shell) script always picks up after that.
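For reference, a call site for a chain like this might look like the following sketch, with a catch added so a rejection cannot vanish silently (FeedTask is a placeholder name, not the actual code):

// Hypothetical runner for the task whose execute() is shown above.
const task = new FeedTask();

task.execute()
    .then(() => console.log("feed written"))
    .catch(err => {
        // Surface any rejection and hand the shell script a nonzero exit code.
        console.error("feed task failed:", err);
        process.exitCode = 1;
    });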
Any thoughts?
I create a new project via sam init and I select the options:
1 - AWS Quick Start Templates
1 - nodejs14.x
8 - Quick Start: Web Backend
Then from inside the project root, I run sam local invoke -e ./events/event-get-all-items.json getAllItemsFunction, which returns:
Invoking src/handlers/get-all-items.getAllItemsHandler (nodejs14.x)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-nodejs14.x:rapid-1.32.0.
Mounting /home/rob/code/sam-app-2/.aws-sam/build/getAllItemsFunction as /var/task:ro,delegated inside runtime container
Function 'getAllItemsFunction' timed out after 100 seconds
No response from invoke container for getAllItemsFunction
Any idea what could be going on or how to debug this? Thanks.
Any chance the image/lambda makes a call to a database someplace? And does the container running the lambda have the right connection string and/or access? To me it sounds like your function is getting called and then trying to reach something that it can't reach.
As far as debugging goes: lots of console.log() statements to narrow down how far your code gets before it runs into trouble.
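In this handler that could look like the sketch below, assuming the template's DynamoDB scan (names are taken from the Quick Start template and may differ in your generated project):

// src/handlers/get-all-items.js, with progress logging added.
const dynamodb = require("aws-sdk/clients/dynamodb");
const docClient = new dynamodb.DocumentClient();

exports.getAllItemsHandler = async (event) => {
    console.log("handler entered, table:", process.env.SAMPLE_TABLE);
    console.log("about to scan DynamoDB...");
    // If the 100-second timeout hits here, the container cannot reach DynamoDB.
    const data = await docClient.scan({ TableName: process.env.SAMPLE_TABLE }).promise();
    console.log("scan returned", data.Items.length, "items");
    return { statusCode: 200, body: JSON.stringify(data.Items) };
};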
I'm running a NodeJS app inside a docker container inside a container-optimized-OS GCE instance.
I need this instance to shut down and self-delete upon completion of its task. Only the NodeJS app is aware of the task completion.
I used to achieve this behavior by setting up this as a startup-script:
node ./dist/app.js
echo "node script execution finished. Deleting this instance"
export NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')
export ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')
gcloud compute instance-groups managed delete-instances my-group --instances=$NAME --zone=$ZONE
I've also used similar setups with additional logic based on the NodeJS app exit code.
How do I do it now?
There are two problems:
I don't know how to pass the NodeJS exit event (preferably with the exit code) up to the startup-script. How do I do that?
A Container-Optimized OS GCE instance lacks gcloud. Is there a different way of shutting down an instance?
Google Cloud's health check seems too troublesome and not universal. My app is not a web server, and I'd prefer not to install express or anything else just for the sake of handling health checks.
Right now my startup-script ends with a docker run ... command. Maybe I should put the shutdown command after that and somehow make docker exit when NodeJS exits? (See the sketch after this list.)
If you think a health check is the way to go, what would be the lightest setup for one, given that my app is not a web server?
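Here is that sketch: the container runs in the foreground so docker run blocks until the app exits and returns its exit code, and the missing gcloud is replaced by the Compute Engine REST API, called with a token from the metadata server (my-app-image is a placeholder; my-group is the instance group from the script above):

docker run --rm my-app-image
EXIT_CODE=$?
echo "Container exited with code ${EXIT_CODE}. Deleting this instance."

NAME=$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/name)
ZONE=$(basename "$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/zone)")
PROJECT=$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/project/project-id)

# The metadata server hands out an OAuth token for the instance's service account.
TOKEN=$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token | cut -d'"' -f4)

# REST equivalent of `gcloud compute instance-groups managed delete-instances`.
curl -s -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"instances\": [\"zones/${ZONE}/instances/${NAME}\"]}" \
  "https://compute.googleapis.com/compute/v1/projects/${PROJECT}/zones/${ZONE}/instanceGroupManagers/my-group/deleteInstances"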
Try having your app trigger a Cloud Function when it finishes the job.
The Cloud Function can then run a script that deletes your VM. See the sample script below:
https://medium.com/google-cloud/start-stop-compute-engine-instance-from-cloud-function-bf9ae5199609
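On the app side, that trigger can be as small as one HTTPS request when the work is done. A sketch, where the function URL and payload are assumptions:

const https = require("https");

// Hypothetical HTTP-triggered Cloud Function that deletes this instance.
const FUNCTION_URL = "https://us-central1-my-project.cloudfunctions.net/delete-worker-instance";

function notifyJobDone(exitCode) {
    return new Promise((resolve, reject) => {
        const req = https.request(FUNCTION_URL,
            { method: "POST", headers: { "Content-Type": "application/json" } },
            res => {
                res.resume();        // drain the response
                res.on("end", resolve);
            });
        req.on("error", reject);
        req.end(JSON.stringify({ exitCode }));
    });
}

// Fire it when the task completes; the process can exit afterwards.
notifyJobDone(0).catch(err => console.error("failed to notify:", err));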
I'm running a virtual machine with Docker, which implements our CI/CD infrastructure.
The docker-compose setup has an nginx reverse proxy and another service. Essentially, that container's start command is a shell script which creates local copies of files from a central repository. The shell script then starts (by means of yarn start) a NodeJS script that selects a couple of services and creates a pm2 application startup JSON file.
Finally, pm2-runtime is launched with this application definition file. This is done by:
const { exec } = require("child_process");

const child = exec(`pm2-runtime build/pm2startup.json`);
child.stdout.on("data", data => { process.stdout.write(data); });
child.stderr.on("data", data => { process.stderr.write(data); });
child.on("close", (code, signal) => {
    process.stdout.write(`pm2-runtime process closed with code ${code}, signal ${signal}\n`);
});
child.on("error", error => {
    process.stderr.write(`pm2-runtime process error ${error}\n`);
});
child.on("exit", (code, signal) => {
    process.stdout.write(`pm2-runtime process exited with code ${code}, signal ${signal}\n`);
});
There are about 10 apps managed by pm2, and docker stats says the container's memory consumption is greater than 850 MB. However, I have not put any explicit memory limit anywhere, and I cannot find any implicit one either.
Every now and then the container is restarted. According to the dockerd logs, its task has exited. That's true: the pm2-runtime process (see above) is reported to be closed because of SIGTERM.
And that's the only message I get related to this. No other pm2 message, no service message, no docker event.
Now I'm seeking advice on how to find the cause of this SIGTERM, because I'm running out of ideas.
As it turned out, it was indeed the snippet inside the question that caused the problem.
pm2startup.json references long-running apps. Over time, depending on usage, they produce quite a few logs on stdout and/or stderr. At some point the buffer that exec keeps for the child's output (maxBuffer, 1 MB by default in current Node versions) fills up, and Node kills the child, by default with SIGTERM. Unfortunately it stops without any kind of hint about the reason for the crash. But that's another story.
The solution in my case was to do without exec or execFile and use spawn instead, with the stdio option { stdio: "inherit" } (or the verbose form ["inherit", "inherit", "inherit"]).
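A sketch of that replacement (same command as in the snippet above; with inherited stdio there is no Node-side buffer to fill, so the data handlers become unnecessary):

const { spawn } = require("child_process");

// spawn with stdio "inherit" streams the child's output directly to this
// process, so exec's maxBuffer limit no longer applies.
const child = spawn("pm2-runtime", ["build/pm2startup.json"], { stdio: "inherit" });

child.on("error", error => {
    process.stderr.write(`pm2-runtime process error ${error}\n`);
});
child.on("exit", (code, signal) => {
    process.stdout.write(`pm2-runtime process exited with code ${code}, signal ${signal}\n`);
});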
I wrote a web bot that uses the Selenium framework to crawl. I installed ChromeDriver 72.0.3626.69 and also downloaded Chromium 72.0.3626.121. The app initializes ChromeDriver with this included Chromium binary (and NOT a locally installed Chrome binary). All of this works perfectly on my local machine.
I've now been attempting to port the app to Azure Functions. I wrote a function, tested it, and it works fine locally. But once I publish it to Azure Functions, it fails with about 182 errors of this type:
An attempt was made to access a socket in a way forbidden by its access permissions
I know this happens due to exceeding the TCP connection limits of the Azure sandbox, but the only thing attempted here was to create an instance of ChromeDriver (it hasn't even navigated anywhere yet!).
Here is a screenshot of the Azure Function call log.
That error appears about 182 times in a row, and that's basically just one attempt to create a browser instance (or a ChromeDriver instance, to be precise; I can't be sure whether Chromium or ChromeDriver is causing the issue).
The question: has anyone experienced issues with ChromeDriver/Chromium creating so many (obviously excessive) connections when launching? And what might help to avoid this?
If that's of any help, this is basically the piece of code that crashes on the last line:
ChromeOptions options = new ChromeOptions();
options.BinaryLocation = this.chromePath;
options.AddArgument("no-sandbox");
options.AddArgument("disable-infobars");
options.AddArgument("--disable-extensions");
if (this.headlessMode)
{
    options.AddArgument("headless");
}
options.AddUserProfilePreference("profile.default_content_setting_values.images", 2);

Log.LogInformation("Chrome options compiled. Creating ChromeDriverService...");

var driverService = ChromeDriverService.CreateDefaultService(this.driverPath);
// The socket errors occur here, before any navigation happens.
driver = new ChromeDriver(driverService, options, timeout);
I believe you are running this function in a Windows Function App, which is subject to quite a few limitations, as described in this wiki.
When running on Linux, however, functions basically run in a Docker container, which removes most of the restrictions Windows has. I believe what you are trying to do should be possible there.
You could either just deploy your function to a Linux Function App, or build a container and use that directly.
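For the container route, a rough sketch of a Dockerfile, assuming the official .NET Azure Functions base image and a distro-packaged Chromium (the image tag, package names and output path are assumptions to adjust for your project):

FROM mcr.microsoft.com/azure-functions/dotnet:3.0
# Distro-packaged Chromium; the ChromeDriver you ship must match its version.
RUN apt-get update && \
    apt-get install -y --no-install-recommends chromium chromium-driver && \
    rm -rf /var/lib/apt/lists/*
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
# "publish-output" is a placeholder for your compiled function output.
COPY ./publish-output /home/site/wwwroot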
I have this simple Node socket server as follows:
var ws = require("nodejs-websocket");

var connectionCount = 0;
console.info("Node websocket started # 8002");

var server = ws.createServer(function (conn) {
    console.log("New connection", ++connectionCount);
    conn.on("close", function (code, reason) {
        console.log("Connection closed");
    });
}).listen(8002);
Now I want to hit this server from other machines. To mimic these machines, I am using Docker. I want to create around 10 different Docker containers which will hit my server.
I want to hit the server from these Docker containers using the load-testing tool thor (https://github.com/observing/thor), which can be run as easily as
thor --amount 1000 --messages 100 ws://localhost:8002
So I want to create 10 different Docker containers, and each container should use thor to hit my server with
thor --amount 1000 --messages 100 ws://localhost:8002
How can I implement such Docker containers?
PS: I am a novice here.
I believe that it should be possible.
There are node images of varying sizes available on Docker Hub. Choose an appropriate one.
Here are the pseudo instructions to create the image you need:
Get the node image.
Install thor from git (you already have the details).
Run the container with your command (hoping that your websocket app is already running).
You can do the above in two ways: either manually, or using a Dockerfile.
Since you want to run multiple containers, a Dockerfile would be a good option; a sketch follows.
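A minimal sketch of such an image, assuming thor installs globally via npm as its README describes (the image name and server address are placeholders):

FROM node:alpine
# thor is a Node CLI tool; install it globally inside the image.
RUN npm install -g thor
# The target URL is supplied at run time, so one image serves all containers.
ENTRYPOINT ["thor", "--amount", "1000", "--messages", "100"]

Build it once and start 10 containers from it:

docker build -t thor-client .
for i in $(seq 1 10); do docker run -d thor-client ws://<server-host>:8002; done

Note that ws://localhost:8002 from inside a container points at the container itself, not at your machine; the containers need an address where the server is reachable (for example the host's IP, or host.docker.internal on Docker Desktop).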
If you can use docker-compose, it would be an even better approach, since you are dealing with multiple containers.
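With docker-compose the whole fleet can be scaled with one flag; a sketch, using the same placeholder address as above:

version: "3"
services:
  thor:
    build: .
    command: ws://<server-host>:8002

Then start 10 of them with:

docker-compose up --scale thor=10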
Hope this is helpful.