Keep all jars in a folder running on JVMs as services - linux

On a remote server I have a folder containing several jars. Each of them represents an application that opens up a port on which a web application is served.
It might look like this:
jars/
  app1.jar
  app2.jar
  app3.jar
I'm now trying to find a solution to ensure that all of them are constantly running on separate JVMs. Whenever a jar is replaced by a newly uploaded jar, this should be handled immediately.
At the moment I can achieve parts of this manually by setting up a service for each of them. Something like:
my-servers-service-setup-tool app1 java -jar app1.jar 12345
(last parameter is the port)
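For a concrete point of reference, a minimal systemd unit for one of these services might look like this (unit name, paths and port are assumptions based on the example above):
# /etc/systemd/system/app1.service -- minimal sketch, assuming systemd
# and that the jars live in /opt/jars
[Unit]
Description=app1 web application
After=network.target

[Service]
ExecStart=/usr/bin/java -jar /opt/jars/app1.jar 12345
Restart=always

[Install]
WantedBy=multi-user.target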
In case app1.jar is now overwritten by an upload: how could the service react to that? Either restart itself or, if necessary, set up a new service on the same port.
If a new jar enters the folder, I'd set up a new service for it as described above. Might there be a more declarative approach to this? I mean, another automation that would detect the arrival of a new jar and set up a new service for it.
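One simple, if not declarative, option is a watcher script. A sketch, assuming inotify-tools is installed and one systemd unit per jar, named after the file (app1.jar -> app1.service):
#!/bin/sh
# Restart the matching service whenever a jar in /opt/jars is replaced.
# close_write fires when an upload finishes; moved_to catches renames.
inotifywait -m -e close_write -e moved_to --format '%f' /opt/jars |
while read -r jar; do
  case "$jar" in
    *.jar) systemctl restart "${jar%.jar}.service" ;;
  esac
done
For a brand-new jar the restart would fail because no unit exists yet; the same loop could first instantiate a unit from a template (systemd's app@.service instantiated units fit that case) before starting it.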

Related

Log4j creates logfile on server but doesn't write to it

I'm working on a web application which is deployed on a glassfish server.
I wanted to implement the log4j framework. First, I tested everything on my local machine (local server) and it works perfect.
Now I deployed it to a test environment and noticed two strange behaviours.
It creates two logfiles: one is named "server.log" and is created while the server restore function is executed; the other has the instance name, like "instance104.log", and is created while the server is starting.
But this is not the main problem. The main problem is that it doesn't write anything to any of those logfiles.
This is the logfile path from the log4j.properties:
log4j.appender.file1.File = /lfs/wwwmnt/appName/logs/project/${com.sun.aas.instanceName}.log
Does log4j need an initialization for writing logs to logfiles when it's on a remote server? Do I have to add the log4j jar to the remote server?
Like I said, it works perfectly in the local environment, but on the test environment it just creates the logfile and doesn't write anything to it...
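For reference, a log4j 1.x file appender only writes if it is attached to a logger whose level lets events through; a created-but-empty file often points at that. A minimal complete configuration might look like this (appender name, level and layout are assumptions; the path is the one from the question):
log4j.rootLogger=INFO, file1
log4j.appender.file1=org.apache.log4j.RollingFileAppender
log4j.appender.file1.File=/lfs/wwwmnt/appName/logs/project/${com.sun.aas.instanceName}.log
log4j.appender.file1.MaxFileSize=10MB
log4j.appender.file1.layout=org.apache.log4j.PatternLayout
log4j.appender.file1.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c - %m%n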
So I figured out it was a problem with the clusterDomain domain.xml file. The configuration tags were missing ( ), and I couldn't configure them; in our business environment only the DevOps team can do that.
Business processes are a pain...

Tomcat not deploying new version of files at all - linux / eclipse

I am having an issue with the way Tomcat deploys my files to the server.
I have installed Tomcat 7 to /opt/tomcat7.
In my Eclipse I have specified this path as my Tomcat server.
My workspace directory is /home/maciej/workspace/<projects here>
Now if I edit a class file and simply add a log statement,
log.info("blabla");, and then deploy the 'NEW' version of the file via Run on Server, I do not see this 'blabla' in my output. It seems that although I have modified the class file, it was not properly deployed to Tomcat. Tomcat is reading god knows what, but certainly not the file it should read.
EDIT: I have reconfigured my Tomcat in Eclipse and now:
Server Path = /opt/tomcat7
Deploy Path = /opt/tomcat7/webapps <- used to be .metadata/blablabla default
When I open 'Open Launch Configuration', under Arguments / Working directory, the default option is ticked with the greyed-out path /home/maciej/Desktop.
Should this also be changed?
Isn't Tomcat's working directory /opt/tomcat7/work?
Any suggestions / ideas? This issue is slightly getting on my nerves, as I cannot develop the app.
The Server Path is the same as the Tomcat installation directory in the modal you see in Window > Preferences > Server > Runtime Environments after hitting Edit. That should be set to /opt/tomcat7 or wherever the root of your Tomcat installation lives.
The Deploy Path is relative to the Server Path. It should be webapps, unless you already have stuff there and you want a separate directory. You will not be able to edit this until you shut down Tomcat and remove all webapps underneath it through the Servers view.
Try unchecking Modules auto reload by default if you trust the JDK hot-swapping, which you should if you're using JDK 1.7 or 1.8 and just want to see a log statement inserted.
The working directory you mentioned is just the root directory that Tomcat uses to spit out thread dumps on crashes and the like. It has nothing to do with the Tomcat "work" directory.
Open the Servers view: Window -> Show View -> Other -> Servers. Then select the correct server, right-click, select "Clean", and restart Tomcat. It should help.
If you change something in the project then Eclipse will build automatically and "deploy" the files to the location you have specified. By default, Eclipse's work stops there and the rest is up to Tomcat.
Tomcat, like any Java web server, detects changes in JSPs and recompiles them. Changes in classes, however, have no effect, because of the way Java class loading works. For the new version of a class to be used by Tomcat you need to either:
Not have loaded the class before. For example, you start Tomcat but then see an error before making any request. If you change the class, that change will be used because the class was not yet loaded.
Reload the application. This means that all classes are discarded and everything starts fresh.
The easiest way to reload an application, by default, is to make a change to web.xml. If you look into Tomcat's configuration, conf/context.xml, you can see that WEB-INF/web.xml is monitored. Any change will trigger a reload of the context. So you can either make an artificial change to the file, or add a resource like WEB-INF/version.properties and generate a different version.properties with every build.
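In shell terms, the artificial change is just a touch of the deployed descriptor (the webapp name and paths are assumptions):
# Trigger a context reload by touching the monitored descriptor
touch /opt/tomcat7/webapps/myapp/WEB-INF/web.xml

# The version.properties variant needs a matching entry in conf/context.xml
# before Tomcat will monitor it:
#   <WatchedResource>WEB-INF/version.properties</WatchedResource>
date +%s > /opt/tomcat7/webapps/myapp/WEB-INF/version.properties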
In any case, reloading a complex application takes time. That is why there are plugins like JRebel. But before you go down that path (which adds another moving piece to your setup) you can also try Eclipse's support for hot code replacement. You start Tomcat in debug mode, connect to it with Eclipse, and then change some class. Eclipse will try to recompile the class and upload the new definition to Tomcat. If it fails, it will tell you. As a general rule it will fail when you change the structure of the program and succeed when you just change a method's implementation.
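A sketch of the debug setup (port 8000 is Tomcat's JPDA default; on the Eclipse side, attach via Run > Debug Configurations > Remote Java Application pointed at that port):
# Start Tomcat with the JPDA debug listener so Eclipse can attach
cd /opt/tomcat7/bin
JPDA_ADDRESS=8000 JPDA_TRANSPORT=dt_socket ./catalina.sh jpda start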

How can I test whether jmx-console.war is being used in JBoss 4.2.2?

There is a file within the .\jboss-4.2.2.GA\server\default\deploy folder named "jmx-console.war". I am getting a security vulnerability dealing with this module. How can I tell if our application is using this module? I implemented an open source tool, but I'm not sure how to test whether it's being used.
Nessus vulnerability of High Severity:
JBoss JMX Console Unrestricted Access
http://www.tenable.com/plugins/index.php?view=single&id=23842
If you see that war file in the deploy folder, then most likely your application is using it. That is to say, it is most likely being loaded. It should be fairly easy to test for, assuming you know the HTTP port the JBoss instance is listening on. By default, it is 8080 so point your browser to http://[your jboss host]:8080/jmx-console and see if the console comes up, keeping in mind that it might be password protected, and your HTTP port might not be 8080.
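A quick probe from the command line (host and port are assumptions):
# 200 = console is open (the vulnerability), 401/403 = secured, 404 = not deployed
curl -s -o /dev/null -w '%{http_code}\n' http://your-jboss-host:8080/jmx-console/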
You should also see something like this in the server.log or configured equivalent:
11:52:30,165 INFO main [TomcatDeployer] deploy, ctxPath=/jmx-console,
warUrl=.../deploy/jmx-console.war/
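Or search the boot log for that deployment line (the path assumes the default server configuration under your JBoss home):
grep 'jmx-console' /path/to/jboss-4.2.2.GA/server/default/log/server.log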
Having said that, there are a couple of ways I can think of that would indicate or cause the jmx-console not to be deployed:
The folder you referenced is in the default server directory. This is only one instance out of 3 (default, all, minimal) and you may be running one of the others, or even a custom configured server. That is to say, if you were running the minimal server instance, or one that did not contain the jmx-console.war, then the presence of that file in the default server's deploy directory would not cause it to be deployed in another server's instance. (that all sounds more complicated than it really is)
War files in the deploy directory depend on another directory called jboss-web.deployer which actually deploys war files. If that directory is not there, my guess is that war deployment has been disabled. Highly unlikely though, as there are easier ways of doing this, and if someone went to the trouble of removing this folder, they probably would have removed the wars too.
Bottom line is, the easiest way would be to find the HTTP port, then hit the jmx-console URL and see if it responds, or check the log file. It is conceivable that someone could rename jmx-console.war to something else (in an ill-conceived attempt to hide it, perhaps?), in which case you would need to execute a battery of HTTP request scans and try to find a jmx-console signature, but that's outside my (otherwise quite large...) area of expertise.

jboss taking and executing old ear instead of a new deployed one

I'm using JBoss 5.1.0.GA on a Linux machine and I'm deploying an ear for an EJB project. While watching the server logs, I undeploy the old ear and it undeploys successfully. Then I put my new ear in the deploy directory and the logs show that it is deployed successfully, but when running the project the new changes don't take effect and the old ear content gets executed instead. Please advise!
I had this problem with 5.1 EJB3 projects on a Windows machine a couple of times. Something very fishy is going on.
Have you tried everything?
I.e. undeploy, stop the service, restart the machine if possible, start the service, re-deploy.
I remember in my case it was stuck until the machine was restarted. I never actually found the problem, though.
After shutting down the server, just delete all of the temporary folders that get generated when JBoss starts. Those folders are (if you use the default configuration):
/server/default/data
/server/default/log
/server/default/tmp
/server/default/work
After deleting them, just restart JBoss and all your new changes should be there.
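As shell commands (the JBoss home path is an assumption; stop the server first):
# Remove JBoss's generated state, then restart the server
cd /path/to/jboss-5.1.0.GA/server/default
rm -rf data tmp work
rm -rf log   # skip this one if you still need the old logfiles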

How do I support multiple versions in Node.js

I have a web-site that needs to be up all the time. I also, of course, need to do new releases. Each page tends to be very long-lived, with lots of JavaScript doing AJAX calls to the server.
What I do is build a new WAR file and put it in Tomcat's webapps directory, which ends up looking like this:
20110701-7f077d/      20110711-aa8db4/      20110715-6f4a12/
20110701-7f077d.war   20110711-aa8db4.war   20110715-6f4a12.war   live/
Each war file is named after the date of its release and the first few characters of its Git commit id, just so I can keep track of everything. Tomcat automatically unpacks the war file into a directory of the same name. The live directory just contains a file giving the name of the "live" version.
This way, each user can continue using the version of the back-end that works with the version of the front-end that he has loaded into his browser. And obviously, version upgrade and reversion is painless.
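A sketch of how a release under that scheme might be scripted (file names and paths are assumptions; --short=6 matches the six-character ids shown above):
# Name the war after the release date and the short git commit id
stamp=$(date +%Y%m%d)-$(git rev-parse --short=6 HEAD)
cp target/app.war /opt/tomcat/webapps/"$stamp".war
# Flip the pointer once the new version checks out
echo "$stamp" > /opt/tomcat/webapps/live/current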
Now, I'm switching to node.js and I want to do the same thing. I am reliably informed that node.js doesn't support independent applications in one instance. So, what to do?
The only thing I can think of is to designate n slots (where n is some small number like 10 or 100), where each slot corresponds to a port (i.e., slot 1 is 8001 and so on), put Apache in front of several node.js instances, each representing a slot, and have Apache use mod_proxy or mod_rewrite to proxy requests like '/slot01' to port 8001. "live" would point to the current slot.
This would be clumsy and error prone and require an otherwise useless Apache instance and most of all I cannot believe that node.js doesn't have a good solution to what seems like a near-universal problem.
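(For what it's worth, the Apache side of that slot scheme would be only a couple of mod_proxy lines per slot; ports and paths here are hypothetical:)
# Requires mod_proxy and mod_proxy_http; one pair of lines per slot
ProxyPass        /slot01/ http://127.0.0.1:8001/
ProxyPassReverse /slot01/ http://127.0.0.1:8001/
ProxyPass        /slot02/ http://127.0.0.1:8002/
ProxyPassReverse /slot02/ http://127.0.0.1:8002/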
You can use node-http-proxy and write some code to monitor your 'deployment directory' for new versions. When such a version is found, you can start the corresponding script and proxy it under the directory name. To make myself clear: if you find a new directory 'version-11-today', your parent node-http-proxy script could start the new script, assigning it a port passed as a parameter, and then proxy to the new app under the path '/version-11-today'.
A similar solution could be done with nginx; in this case you could write a script to monitor the deployment directory and generate new nginx configuration when new apps are found.
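The generated fragment inside the nginx server block might look something like this (port and version name are assumptions):
# One location per deployed version; regenerated when a new directory appears
location /version-11-today/ {
    proxy_pass http://127.0.0.1:8111/;
}
# "live" points at whichever version is current
location /live/ {
    proxy_pass http://127.0.0.1:8111/;
}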
If you are afraid you might run out of ports, I believe both node.js and nginx can listen on and proxy Unix sockets as well as inet sockets.
An advantage of the above is that each app runs in its own process protecting the other apps from crashes and enabling individual app restarts.
A third solution, if you are not afraid some bug will crash your app, is to have a parent script that loads all the app versions in the same process and maps them under different paths depending on the directory they were found in. You can still restart your server without downtime, as in this example: http://codegremlins.com/28/Graceful-restart-without-downtime
