I have a backend Node app that is run by PM2 in cluster mode.
I'm running a fixed set of 2 instances.
Is there a way to identify the instance name or number from within the running app?
The app name is "test"; from within the app I would like to get "test 1" and "test 2" for the given instance.
Thanks!
You'll need to use two environment variables set by PM2:
process.env.pm_id is automatically set to the instance id (0, 1, ...).
process.env.name is set to the app name (in your case test).
When starting the app with PM2, set the name like this:
pm2 start app.js --name test
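Combining the two inside the app, a minimal sketch (adding 1 is my assumption, since pm_id is 0-based while the desired labels start at 1):

// Build a label like "test 1" / "test 2" from the env vars PM2 sets.
const appName = process.env.name;                     // "test" when started with --name test
const instanceNumber = Number(process.env.pm_id) + 1; // pm_id is "0", "1", ...
console.log(`Running as ${appName} ${instanceNumber}`);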
I'm currently building a deployment for Kubernetes through Helm. However, one of the values I have to pass is an Endpoint that contains the following characters:
Endpoint=https://test.io;Id=001;Secret=test_test_test
The problem is that if I pass the following value:
test01:
  - name: test
    value: Endpoint=https://test.io;Id=001;Secret=test_test_test
The pod never gets created, since the value is not being picked up and passed through. If I add the following with single quotes, it tells me that the pod is not ready:
test01:
  - name: test
    value: 'Endpoint=https://test.io;Id=001;Secret=test_test_test'
If I pass the value with single quotes it tells me that the pod got created, but the pod and namespace still do not show up in the AKS cluster. However, if I set the same environment variables in Docker, all of them are applied to my app, and I can see it running as expected.
How can I set up the env variables inside the values file, and how can I run a command from the terminal to set and run multiple variables at the same time? Does anyone have another way to do this?
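One common pattern, sketched here under assumptions (the deployment template and the release/chart names are mine; only the test01 key and the value come from the question), is to quote the value in values.yaml and have the template quote it again when rendering the env entry:

# values.yaml
test01:
  - name: test
    value: "Endpoint=https://test.io;Id=001;Secret=test_test_test"

# templates/deployment.yaml (env section only)
env:
{{- range .Values.test01 }}
  - name: {{ .name }}
    value: {{ .value | quote }}
{{- end }}

From the terminal, the same value can be passed with --set-string; the whole argument needs shell quoting because of the semicolons:

helm install my-release ./chart \
  --set-string 'test01[0].name=test,test01[0].value=Endpoint=https://test.io;Id=001;Secret=test_test_test'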
I'm running a NodeJS app inside a Docker container inside a Container-Optimized OS GCE instance.
I need this instance to shut down and self-delete upon task completion. Only the NodeJS app is aware of the task completion.
I used to achieve this behavior by setting this up as a startup script:
node ./dist/app.js
echo "node script execution finished. Deleting this instance"
# Look up this instance's name and zone from the metadata server.
export NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')
export ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')
# Remove this instance from its managed instance group, which deletes the VM.
gcloud compute instance-groups managed delete-instances my-group --instances=$NAME --zone=$ZONE
I've also used similar setups with additional logic based on the NodeJS app exit code.
How do I do it now?
There are two problems:
I don't know how to pass the NodeJS exit event (preferably with the exit code) up to the startup script. How do I do that?
The Container-Optimized OS GCE instance lacks gcloud. Is there a different way of shutting down an instance?
Google Cloud's health check seems too troublesome and not universal. My app is not a web server, and I'd prefer not to install Express or something else just for the sake of handling health checks.
Right now my startup script ends with a docker run ... command. Maybe I should put the shutdown command after that and somehow make docker exit when NodeJS exits, as in the sketch below?
If you think the health check is the way to go, what would be the lightest setup for a health check given that my app is not a web server?
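One direction, sketched under assumptions (the image name is a placeholder, and the instance's service account must be allowed to delete instances), is to read $? after docker run and call the Compute Engine REST API with a token from the metadata server, so no gcloud is needed:

# docker run in the foreground exits with the container's (i.e. node's) exit code.
docker run --rm my-image
EXIT_CODE=$?
echo "Container exited with code ${EXIT_CODE}. Deleting this instance."

NAME=$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/name)
# ZONE comes back as projects/PROJECT_NUMBER/zones/ZONE_NAME, which slots straight into the API URL.
ZONE=$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/zone)
TOKEN=$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token | cut -d '"' -f 4)

curl -s -X DELETE -H "Authorization: Bearer ${TOKEN}" \
  "https://compute.googleapis.com/compute/v1/${ZONE}/instances/${NAME}"

Note that if the instance belongs to a managed instance group, as in the old script, the group may recreate a directly deleted instance; in that case the instanceGroupManagers deleteInstances API call is the one to make instead.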
Try to have your app trigger a Cloud Function when it finishes the job.
The Cloud Function can then run a script to delete your VM. See the sample linked below:
https://medium.com/google-cloud/start-stop-compute-engine-instance-from-cloud-function-bf9ae5199609
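A minimal sketch of such a function (Node.js, using the v2-style @google-cloud/compute client; the function name and the idea of passing zone and instance in the request body are assumptions):

// index.js -- HTTP Cloud Function that deletes a Compute Engine instance.
// The function's service account needs compute.instances.delete permission.
const Compute = require('@google-cloud/compute');
const compute = new Compute();

exports.deleteInstance = async (req, res) => {
  const { zone, instance } = req.body; // e.g. { "zone": "us-central1-a", "instance": "my-vm" }
  const [operation] = await compute.zone(zone).vm(instance).delete();
  await operation.promise(); // wait for the delete operation to finish
  res.status(200).send(`Deleted ${instance}`);
};

The Node app can then hit the function's trigger URL (passing its own name and zone, read from the metadata server) when its task completes.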
I'm new to AWS. I'm trying to deploy my local web app on AWS using ECR and ECS, but I got stuck when running a cluster: it throws an error about the PRISMA_CONFIG environment variable in the prisma container.
In my local environment, I'm using Docker to build the app with Node.js, Prisma, and MongoDB, and it works fine.
Now on ECS, I created a task definition, and for the prisma container I tried to copy the YAML config over from my local docker-compose.yml file to make it work.
There is a field called "ENVIRONMENT"; I entered the value in the environment variables, but it's just not working: it throws the error while the cluster is running, and then the task stops.
The YAML spans multiple lines, but the input box supports a single-line string only.
The variable key is PRISMA_CONFIG,
and the following are the values I've already tried:
| port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
| \nport: 4466 \ndatabases: \ndefault: \nconnector: mongo \nuri: mongodb://prisma:prisma#mongo
|\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
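For reference, the YAML those one-line strings are trying to encode is the following (assuming the # in the Mongo URI is meant to be @, as in a standard connection string):

port: 4466
databases:
  default:
    connector: mongo
    uri: mongodb://prisma:prisma@mongo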
And the errors:
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.lang.RuntimeException: No valid Prisma config could be loaded.
expected a comment or a line break, but found p(112)
expected chomping or indentation indicators, but found \(92)
I expected all containers to run without errors, but the actual result is that the container stops after running for a minute.
Please help with this, or suggest another way to deploy to AWS.
Thank you very much.
I've been looking for a similar solution to load the prisma config without the multiline string.
There are repositories that load the prisma environment variables separately without a prisma config:
Check out this repo for example:
https://github.com/akoenig/prisma-docker-compose/blob/master/.prisma.env
Here akoenig sets the following env variables using an env_file. So I'm assuming you can just pass these environment variables in separately to achieve what Prisma is looking for.
# CONTENTS OF env_file
PORT=4466
SQL_CLIENT_HOST_CLIENT1=database
SQL_CLIENT_HOST_READONLY_CLIENT1=database
SQL_CLIENT_HOST=database
SQL_CLIENT_PORT=3306
SQL_CLIENT_USER=root
SQL_CLIENT_PASSWORD=prisma
SQL_CLIENT_CONNECTION_LIMIT=10
SQL_INTERNAL_HOST=database
SQL_INTERNAL_PORT=3306
SQL_INTERNAL_USER=root
SQL_INTERNAL_PASSWORD=prisma
SQL_INTERNAL_DATABASE=graphcool
CLUSTER_ADDRESS=http://prisma:4466
SQL_INTERNAL_CONNECTION_LIMIT=10
SCHEMA_MANAGER_SECRET=graphcool
SCHEMA_MANAGER_ENDPOINT=http://prisma:4466/cluster/schema
#CLUSTER_PUBLIC_KEY=
BUGSNAG_API_KEY=""
ENABLE_METRICS=0
JAVA_OPTS=-Xmx1G
This is for a MySQL database, so you would need to tailor it to suit your values. But in theory you should be able to pass these variables one by one as individual variables in AWS's GUI.
I've also asked this question on the Prisma Slack channel and am waiting to see if they have other suggestions: https://prisma.slack.com/archives/CA491RJH0/p1569689413383000
Let me know how it goes.
Not an expert here, but have you set up the environment variable PRISMA_API_MANAGEMENT_SECRET? You would have defined the secret when you configured your Fargate instance.
Have a look at the following article:
https://www.prisma.io/tutorials/deploy-prisma-to-aws-fargate-ct14
I'm setting up a Flask app with Gunicorn in a Docker environment.
When I want to spin up my containers, I want my Flask container to create database tables (based on my models) if my database is empty. I included a function in my wsgi.py file, but that seems to trigger the function each time a worker is initialized. After that I tried to use server hooks in my gunicorn.py config file, like below.
"""gunicorn WSGI server configuration."""
from multiprocessing import cpu_count
from setup import init_database
def on_starting(server):
"""Executes code before the master process is initialized"""
init_database()
def max_workers():
"""Returns an amount of workers based on the number of CPUs in the system"""
return 2 * cpu_count() + 1
bind = '0.0.0.0:8000'
worker_class = 'eventlet'
workers = max_workers()
I expect Gunicorn to trigger the on_starting function automatically, but the hook never seems to fire. The app seems to start up normally, but when I make a request that should insert a database entry, it says the table doesn't exist. How do I trigger the on_starting hook?
I fixed my issue by preloading the app before the workers that serve it are created. I did this by adding this line to my gunicorn.py config file:
...
preload_app = True
This way the app is already running and can accept commands to create the necessary database tables.
Gunicorn imports a module in order to get at app (or whatever other name you tell Gunicorn the WSGI application object lives at). During that import, which happens before Gunicorn starts directing traffic to the app, code executes. Put your startup code there, after you've created db (assuming you're using SQLAlchemy) and imported your models (so that SQLAlchemy will know about them and will hence know what tables to create).
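A minimal sketch of that approach (module and object names are hypothetical, assuming Flask-SQLAlchemy):

# wsgi.py -- imported by Gunicorn before any traffic is served
from myapp import app, db   # hypothetical package exposing the Flask app and the SQLAlchemy db
import myapp.models          # noqa: F401 -- import the models so create_all() knows the tables

with app.app_context():
    db.create_all()          # only creates tables that don't exist yet, so re-imports are harmless

Gunicorn is then pointed at this module as usual: gunicorn wsgi:app.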
Alternatively, populate your container with a pre-created database.
Env.: Node.js on Ubuntu, using PM2 programmatically.
I have started PM2 with 3 instances via Node in my main code. Suppose I use the PM2 command line to delete one of the instances. Can I add another worker back to the pool? Can this be done without affecting the operation of the other workers?
I suppose I should use the start method:
pm2.start({
  name: 'worker',
  script: 'api/workers/worker.js', // Script to be run
  exec_mode: 'cluster',            // OR 'fork'
  instances: 1,                    // Number of instances to launch
  max_memory_restart: '100M',      // Optional: restart the app if it reaches 100 MB
  autorestart: true
}, function (err, apps) {
  pm2.disconnect();
});
However, if you use pm2 monit you'll see that the 2 existing instances are restarted and no new one is created. The result is still 2 running instances.
Update
It doesn't matter whether it's cluster or fork; the behavior is the same.
Update 2
The command line has the scale option (https://keymetrics.io/2015/03/26/pm2-clustering-made-easy/), but I don't see this method in the programmatic API documentation (https://github.com/Unitech/PM2/blob/master/ADVANCED_README.md#programmatic-api).
I actually think this can't be done in PM2, as I have the exact same problem.
I'm sorry, but I think the solution is to use something else, as PM2 is fairly limited. The lack of the ability to add more workers is a deal breaker for me.
I know you can "scale" on the command line if you are using clustering, but I have no idea why you cannot start more instances if you are using fork. It makes no sense.
As far as I know, all PM2 commands can also be used programmatically, including scale. Check out CLI.js to see all the available methods.
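A minimal sketch, assuming the programmatic method mirrors the pm2 scale CLI command (the app name 'worker' is taken from the question above):

const pm2 = require('pm2');

pm2.connect(function (err) {
  if (err) throw err;
  // Equivalent of `pm2 scale worker 3`: resize the pool to 3 instances.
  pm2.scale('worker', 3, function (err, procs) {
    pm2.disconnect();
    if (err) console.error(err);
  });
});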
Try using the force attribute in the application declaration. If force is true, you can start the same script several times, which PM2 usually doesn't allow (according to the Application Declaration docs).
By the way, autorestart is true by default.
You can do so by using an ecosystem.config file.
Inside that file you can specify as many worker processes as you want.
For example, we used BullJS to develop a microservice architecture of different workers that are started with the help of PM2 on multiple cores: the same worker is started multiple times as named instances.
When jobs run, BullJS load-balances the workload for a specific worker across all available instances of that worker.
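A minimal sketch of such an ecosystem file (the file layout is an assumption; the script path matches the CLI examples below):

// ecosystem.config.js
module.exports = {
  apps: [
    { name: 'worker-1', script: './script/to/start.js' },
    { name: 'worker-2', script: './script/to/start.js' },
    { name: 'worker-3', script: './script/to/start.js' }
  ]
};

// Start all named workers at once with: pm2 start ecosystem.config.js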
You can of course start or stop any instance via the CLI, and also start additional named workers from the command line to increase the number of workers (e.g. if many jobs need to run and you want to process more at a time):
pm2 start './script/to/start.js' --name additional-worker-4
pm2 start './script/to/start.js' --name additional-worker-5