Set memory size for Tomcat running as a service - Linux

As I'm running into memory problems with my web app, I want to know how I can set the memory for Tomcat 8 running as a service on AWS Linux.
GI-cat needs at least 1000MB free heap space to work properly. You
have 506MB free (total 1752MB). Increase the memory if possible, by
adding -Xmx1000m or more to the java arguments.
I've read How do I increase memory on Tomcat 7 when running as a Windows Service? but it only covers Windows services, not Linux.
I suppose I have to modify catalina.sh, don't I? But I'm unsure whether this will affect the service when using service tomcat8 restart.

Non-persistent method
You can set the environment variable before you start your tomcat service:
export CATALINA_OPTS="-Xmx1000m"
And then start your service with:
service tomcat8 restart
Side note: this variable is only set until it is unset/overwritten by another process or your Linux box restarts.
Persistent method
To make this persistent, modify tomcat.conf in $CATALINA_HOME/conf/ and append or adjust the environment variable:
CATALINA_OPTS="-Xmx1000m"
Ref: https://unix.stackexchange.com/a/244197
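Putting the persistent method together, a minimal sketch. The config path is an assumption (RPM-style installs often use /etc/tomcat/tomcat.conf instead), and the fallback below writes to a scratch directory so the commands are safe to dry-run:

```shell
# Append the heap flag to the service's config file, then restart the service.
# CATALINA_HOME is assumed to be set; the /tmp fallback makes this safe to try.
CONF="${CATALINA_HOME:-/tmp/tomcat-demo}/conf/tomcat.conf"
mkdir -p "$(dirname "$CONF")"
echo 'CATALINA_OPTS="-Xmx1000m"' >> "$CONF"
grep 'Xmx' "$CONF"              # confirm the line landed
# sudo service tomcat8 restart  # then restart to pick up the new setting
```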

Related

Does an opened SSH connection to a GCLoud VM prevent it from freezing/crashing?

I have an f1-micro gcloud VM instance running Ubuntu 20.04.
It has 0.2 vCPUs and 600 MB of memory.
By freezing/crashing I mean it just stops responding to anything.
From my monitoring I can see that the CPU peaks at 40% usage (usually steady under 1%), while memory is always around 60% (both stats with my Node.js server running).
When I open an SSH connection to my instance and run my Node.js server in the background, everything works fine as long as I keep the SSH connection alive. As soon as I close the connection, it takes a few more minutes until the instance freezes/crashes. Without closing the SSH connection I can keep it running for hours without any problem.
I don't get any crash or freeze information from gcloud itself. The instance has a green checkmark and is, in a sense, still running. I just can't open a new SSH connection, and the only way to do anything with the instance again is to restart it.
I have Cloud Logging active and there are no messages in there either.
So with this knowledge, my question is: does gcloud somehow boost SSH-connected VMs to keep them alive?
Because I don't know what else could cause this behaviour.
My Node.js server uses around 120 MB, another service uses 80 MB and the GCP monitoring agent uses 30 MB. The Linux free command on the instance shows available memory between 60 MB and 100 MB.
In addition to John Hanley's and Mike's answers, you can edit your machine type based on your needs.
In the Google Cloud Console, go to VM instances under Compute Engine.
Select the instance name to open its Overview page.
Make sure to stop the instance before editing it.
Select a machine type that matches your application's needs.
Save.
For more info and guides, you may refer to the links below:
Edit Instance
Machine Family Categories
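The console steps above can also be scripted with the gcloud CLI; a sketch, where the instance name, zone, and target machine type are placeholders:

```shell
# Stop the instance, change its machine type, and start it again.
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-machine-type my-instance \
    --zone=us-central1-a --machine-type=e2-small
gcloud compute instances start my-instance --zone=us-central1-a
```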
Since there were no answers that explained the strange behaviour I encountered:
I still haven't figured it out, but at least my servers won't crash/freeze anymore.
I somehow fixed it by running my Node.js application as an actual supervised background job using forever instead of running it like node main.js &.
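For reference, the difference can be sketched like this (main.js is the file name from the question; forever has to be installed globally first):

```shell
# plain background job: tied to the shell session, with no supervision
node main.js &

# supervised: forever restarts the process if it exits unexpectedly
npm install -g forever
forever start main.js
forever list          # show the processes forever is supervising
```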

Schedule daily Docker container restart / reset

I have a Linux-based Docker container running an application which seems to have a memory leak. After around a week, requests to the application start to fail and the container requires a restart to reset its state and get things working again.
The error reported by the application is:
java.lang.OutOfMemoryError: Java heap space
Is there a generic method that can be used to trigger a restart, resetting its state, regardless of which service is being used to host it? If there isn't a good generic solution, I'm about to give DigitalOcean a whirl, so maybe there's a DigitalOcean-specific solution that may work instead?
You can set a restart policy (with the on-failure flag) as described here.
Check out the Watchtower project. This is an incredible tool that restarts Docker containers on schedule and also updates containers automatically.
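Both ideas can be sketched on the command line (the container name myapp, the image name, and the 03:00 schedule are assumptions):

```shell
# restart automatically whenever the app exits non-zero (up to 5 attempts)
docker run -d --name myapp --restart=on-failure:5 my-image

# or schedule an unconditional daily restart via cron
echo '0 3 * * * /usr/bin/docker restart myapp' | crontab -
```

Note that a restart policy only fires when the process exits, so it pairs well with the OutOfMemoryError above only if the JVM is configured to exit on that error.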

Is it possible to restart a process in Google Cloud run

We have multiple Google Cloud Run services running for an API. There is one parent service and multiple child services. When the parent service starts, it loads a schema from all the children.
Currently there isn't a way to tell the parent process to reload the schema, so when a new child is deployed the parent service needs to be restarted to reload it.
We understand that there are one or more instances of the Cloud Run service running and have ideas on dealing with this, but are wondering if there is a way to restart the parent process at all. Without a way to achieve it, "one or more" is irrelevant for now. The only way we found is redeploying the parent, which seems like overkill.
The containers running in Google Cloud Run are Alpine Linux with Node.js, running an Express application/middleware. I can stop the Node application, but not restart it. If I stop the service, Cloud Run may still route traffic to that instance, causing errors.
Perhaps I can stop the Express service so Cloud Run will replace that instance? Is this a possibility? Is there a graceful way to do it so it tries to complete any current requests first (not simply kill Express)?
Looking for any approaches to force Google Cloud Run to restart or start new instances. Thoughts?
Your design seems, at a high level, to be a cache system: the parent service gets the data from the child services and caches it.
Therefore, you have all the difficulties of cache management, especially cache invalidation. There is no easy solution for that, but my recommendation would be to use Memorystore, where all child services publish the latest version number of their schema (at container startup, for example). Then the parent service checks (on each request, for example) the status in Memorystore (single-digit-ms latency) to see whether a new version is available. If so, it requests the child service and updates the parent service's schema cache.
If applicable, you can also set a TTL on your cache and reload it every minute, for example.
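The version-check protocol could be sketched with redis-cli against the Memorystore instance (the host variable, key names, and version numbers are made up for illustration):

```shell
# a child service publishes its schema version at startup
redis-cli -h "$MEMORYSTORE_HOST" SET schema:child-orders:version 7

# the parent compares this value against its cached version on each request
redis-cli -h "$MEMORYSTORE_HOST" GET schema:child-orders:version

# TTL variant: let the key expire so the parent reloads at least every minute
redis-cli -h "$MEMORYSTORE_HOST" SET schema:child-orders:version 7 EX 60
```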
EDIT 1
If I focus only on Cloud Run, you can, under only one condition, restart your container without deploying a new version: set the max-instances param to 1 and implement an exit endpoint (simply call process.exit() or similar in your code).
OK, you lose all the scale-up capacity, but it's the only case where, with a special exit endpoint, you can exit the container and force Cloud Run to reload it on the next request.
If you have more than 1 instance, you won't be able to restart all the running instances, only the one that handles the "exit" request.
Therefore, the only real solution is to deploy a new revision (simply redeploy, without code/config changes).
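With gcloud, the two options look roughly like this (the service name, region, and image path are placeholders):

```shell
# option 1: cap the service at a single instance, then hit your exit endpoint
gcloud run services update parent-api --region=us-central1 --max-instances=1

# option 2: force a new revision without any code/config change by redeploying
gcloud run deploy parent-api --region=us-central1 \
    --image=gcr.io/my-project/parent-api
```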

Openshift creating too many processes

I have a Python application running under gunicorn. I have wrapped it in a Docker image and deployed it on OpenShift. However, the pod either consumes too much memory or crashes with an OOM/out-of-memory error.
On investigating, I found out that there are multiple instances of my app being created, even though I haven't configured gunicorn to create multiple workers.
Note: when the same Docker image is run on my local machine, it works perfectly fine.
Whose image are you using? If you are using the Python S2I image provided by OpenShift to wrap your application, and you haven't taken control of WSGI server execution yourself but are letting the image configure it, it will set the number of processes based on the resources it detects. If your web application is particularly memory-hungry and uses more than a typical application, the number of processes it creates may be too high. In that case you can set the WEB_CONCURRENCY environment variable to override how many processes it starts.
See WEB_CONCURRENCY in:
https://github.com/sclorg/s2i-python-container/blob/master/3.6/README.md
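Assuming the app was created as an OpenShift deployment config (the name myapp is a placeholder), the override can be set like this:

```shell
# set the variable on the deployment config; OpenShift rolls out a new deployment
oc set env dc/myapp WEB_CONCURRENCY=2

# verify the environment the container will see
oc set env dc/myapp --list
```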

How do I start my grails dev server in single threaded mode?

grails run-app will start my app in an embedded Tomcat server.
I would like to configure this embedded server so that only a single request-processing thread is available, and multiple requests are processed serially rather than concurrently (similar to the default WEBrick behaviour in the Rails world).
Is it possible? If so, how do I do it?
As far as I know, this is not directly supported by the Tomcat plugin. But you could easily modify the Tomcat plug-in and run your own version.
If you look at the class org.grails.tomcat.TomcatServer, you will see it starts a Tomcat instance.
Here is the doc for this class: http://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/startup/Tomcat.html
There is a getConnector() method which returns the default HTTP connector. Once you have it, you can probably change its settings, like maxThreads.
But be careful: the performance will be awful. I guess you already know that, though.
