When I open the admin console of any domain, I see the list below under Configurations -> server-config -> Thread pools:
admin-thread-pool
http-thread-pool
thread-pool-1
So I conclude that thread-pool settings are defined per domain. However, when I read https://docs.oracle.com/cd/E19798-01/821-1751/abluc/index.html I see not a single word about domain settings. Besides, when I run
asadmin> help list-threadpools
I get
Valid values are as follows:
server
Lists the thread pools for the default GlassFish Server
instance server.
configuration-name
Lists the thread pools for the named configuration.
cluster-name
Lists the thread pools for every instance in the cluster.
instance-name
Lists the thread pools for a particular instance.
As you can see, the help talks about the GlassFish Server instance server. From https://stackoverflow.com/a/15324447/5057736:
A GlassFish Server instance is a single Virtual Machine for the Java
platform (Java Virtual Machine or JVM machine) on a single node in
which GlassFish Server is running. A node defines the host where the
GlassFish Server instance resides.
Could anyone explain how server thread settings merge (?) with domain thread settings? Correct me if I have misunderstood something.
This question is important to me because I need to understand how to choose the most suitable settings when I have many GlassFish domains (>20) on one instance of GlassFish Server.
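For concreteness, as far as I understand the help text, I should be able to list the pools either for the instance or for a named configuration (server-config is the configuration shown in my console; the exact target names here are my assumption):

    asadmin list-threadpools server
    asadmin list-threadpools server-config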
Related
I have a lot of Tomcat servers, all running in a virtual machine. I now need to develop a web panel where I can track the servers' statuses, change their configurations, and stop and restart them. The actual question: which technologies can I use to do this? My earlier idea was to use an Ansible playbook.
How can I at least display the names of my servers on the page?
There are two standard ways to monitor Tomcat:
you can use Tomcat Manager, especially its text interface,
you can use JMX directly or through the JMX Proxy Servlet. On Tomcat's website you can find a somewhat outdated list of MBeans; for some MBeans you'll have to fire up jconsole and explore the names yourself.
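For example, assuming the Manager application is deployed and you have a user with the manager-script and manager-jmx roles (the host, credentials and query below are placeholders), both interfaces can be scripted with curl:

    # text interface: list deployed applications and their states
    curl -u admin:secret "http://localhost:8080/manager/text/list"

    # JMX Proxy Servlet: query MBeans, e.g. the thread pools
    curl -u admin:secret "http://localhost:8080/manager/jmxproxy/?qry=Catalina:type=ThreadPool,*"

Your web panel could call endpoints like these periodically and render the results.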
I have an ASP.NET Core 3.1 application with an Angular 8 frontend. It runs fine when hosted on IIS, but since I moved it onto a new Ubuntu 18 server with Nginx in front of Kestrel, the long-running background processes (IHostedService) sometimes stop working. The app keeps accepting new requests, so only the background process stops.
These processes receive files from clients and respond immediately with a process id. The clients can then query the process state by that id. Everything had been running fine for months on IIS, but the new configuration must have some limit that kills these processes. I suppose there is some Kestrel or Nginx option I don't know about that affects processes started by HTTP requests.
What options can I try and where can I get some logs?
I've tried to log everything from .NET Core, but even the most verbose logs are useless here. The Nginx logs don't contain any info about the stopped process either.
Although the application runs fine hosted on IIS, I tried to find catch blocks without any output and added logging to them, but still nothing. Is there anything I can add to my application globals to log all exceptions, handled or unhandled?
I forgot to mention that I use a local Microsoft SQL Server Express both on Windows and on Linux. The Linux SQL Server install was done following the official MS docs (as were the dotnet and Nginx configs). The database was restored from a Windows SQL Server backup. The connection string is the same, with multipleresultsets=true. Are there any differences I should be aware of?
For anyone getting here in the future: this was caused by a bug in Microsoft.Data.SqlClient, so I had to update it (independently of EF Core 3.1.2) from NuGet to the newer 1.1.2 version.
When it got stuck, I had two threads waiting for each other, both in SqlClient. With 'Just My Code' enabled, the VS debugger stopped at one of my LINQ queries. The only interesting part was that it never threw any exceptions, and there was no deadlock event on the SQL server either. It just waited there, so all the logs were empty.
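For reference, the fix itself was only a package bump to the version mentioned above; roughly:

    dotnet add package Microsoft.Data.SqlClient --version 1.1.2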
https://github.com/dotnet/efcore/issues/18480
https://github.com/dotnet/SqlClient/issues/262
I created a sample web API on .NET Core, registered it in the default file in Nginx, and was able to access it from outside.
The API URL looked like https://<>/api/values.
Now I want to add more configurations to host more web APIs with different port numbers. The problem is how the default file will differentiate between multiple APIs, since the base URL is the same, i.e. localhost\<>, for all of them.
You need to create server blocks. Each server block will listen for and respond to a different app. You can host as many apps as you want on a single Ubuntu machine using Nginx this way.
This will be very helpful and describes the entire process of creating server blocks for your Nginx server.
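As a rough sketch (the server names, listen ports and backend addresses are placeholders; adjust them to your own apps), two server blocks in the Nginx config could look like this:

    # first API, proxied to the app listening on port 5000
    server {
        listen 80;
        server_name api-one.example.com;

        location / {
            proxy_pass http://localhost:5000;
        }
    }

    # second API, same layout but a different name and backend port
    server {
        listen 80;
        server_name api-two.example.com;

        location / {
            proxy_pass http://localhost:5001;
        }
    }

If you don't have separate host names, you can instead differentiate by the listen port (e.g. listen 8081; and listen 8082;) or by location prefixes within a single server block.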
I have a laptop that I am running Node on: an Ubuntu Server with a quad-core processor.
The plan is to run 2-3 sites on this server. I am not a really good admin and needed help getting this one site going, so I don't want to start from scratch and run a hypervisor. Is there a way to have Node host 3 sites and have each of them run on its own thread of the processor? I understand Node is single-threaded, and while I don't really need to do this for performance (because it's just for development), I do like this as an exercise in doing things in Node, and it would be cool! There is an entire second laptop for the database, so I'm not worried about resources.
So: 3 sites on one instance of Ubuntu Server, all on different threads...
It's not entirely clear what you're trying to accomplish. Here are a couple scenarios:
Create three separate node.js servers, each listening on its own port and each running its own node.js process independent of the others. Then have each client connect to the appropriate port.
Create three separate node.js servers, each listening on its own port and each running its own node.js process independent of the others. Put NGINX in front of the three web servers as a proxy and let NGINX direct requests, all arriving on port 80 for each of the three domains, to the appropriate node.js web server. Using NGINX this way, all three web servers can appear to be running on the default port 80 (or 443), and NGINX will separate them out and direct them to the appropriate web server process (a minimal config sketch follows after this list).
Create your own master node.js process that receives requests for all three domains, looks at the host header to see which domain the request was actually directed at, and then forwards that request to the appropriate child process. This would be similar to the way clustering works in node.js, but each child process would be one of your different web servers. Personally, I'd use the pre-built functionality in NGINX to do this for you (as described in option 2 above), but you could code it yourself if you didn't want to run NGINX.
Instead of NGINX, use some sort of load balancer that your ISP may already have to direct the incoming connections to the right server process.
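A minimal sketch of option 2 (the domain names and ports are made up; each Node app is assumed to already be listening on its own port):

    # one server block per domain, all listening on port 80
    server {
        listen 80;
        server_name siteone.example;
        location / { proxy_pass http://127.0.0.1:3001; }
    }

    server {
        listen 80;
        server_name sitetwo.example;
        location / { proxy_pass http://127.0.0.1:3002; }
    }

    server {
        listen 80;
        server_name sitethree.example;
        location / { proxy_pass http://127.0.0.1:3003; }
    }

NGINX picks the block whose server_name matches the incoming Host header and forwards the request to that Node process.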
If you run 3 different applications (i.e. sites), they will run as different processes on your server; assuming they all listen on different ports, there should be no problem running them simultaneously. When you refer to Node being single-threaded, that applies to a single process, so each process has its own event loop running.
Since ServiceStack leaves it open whether to host the service in a web server or in a stand-alone app:
What is best in terms of performance, both raw and for a high number of clients?
Is hosting on Apache, Nginx, XSP, or IIS just for added functionality, or also for performance?
servicestack.net itself runs on Ubuntu / Nginx + MonoFastCGI, although we've been notified that others have been able to get better performance with self-hosting, which you can still serve behind an Nginx/Apache reverse proxy if you want access to a full-featured web server.
You can also wrap a self-hosted ServiceStack in a Linux Daemon.
We ran into the same question while choosing a hosting scheme for our ServiceStack services. We ran some benchmarks with the same service self-hosted and hosted under IIS. The self-hosted Windows service showed nearly 1.5x better performance than the IIS-hosted app.
Surely this is not an absolute number, and it may vary with the service's load type (CPU/IO), but it is clear that the IIS pipeline adds tons of overhead.
If you need speed and don't need all those features IIS can give you (monitoring / advanced routing / admin / etc.), self-hosting is the way to go. Our setup hides the ServiceStack hosts behind Nginx nodes that handle all the routing/proxy/balancing, so we don't need the monstrous IIS machinery.
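As an illustration of that kind of setup (the upstream name, addresses and ports are invented for the example), the Nginx side can be as small as an upstream pool plus a proxying server block:

    upstream servicestack_hosts {
        server 10.0.0.11:8088;   # self-hosted ServiceStack instances
        server 10.0.0.12:8088;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://servicestack_hosts;
        }
    }

Nginx then takes care of the balancing, and you can add or remove self-hosted instances without touching the services themselves.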