WildFly 20 doesn't log with Log4j. Which factors can be involved? - log4j

I have two environments: development (local) and a QA environment. The latter is a cluster with two nodes.
The problem came with the deploy to the QA environment: I can't see the log on the server, but locally the console logging prints without problem.
I'm sure the module structure is the same in both environments, and my configuration XML file is on the classpath.
Which aspects can influence this difference?
Locally the server prints the log to the console, but the QA environment doesn't.

I was able to solve this. I checked the server in the development environment: it's a domain configuration, and for that reason it did not show the application log in the console. Because of the application's configuration, the applications only write to the .log file. At first I was not allowed to check the domain log directory, but after checking it from the command line my application's log file was right there, as in the sketch below.
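Roughly, from the command line (the paths assume a default managed-domain layout and a server named server-one; adjust them for your own host controller and server names):

# list the per-server log directory of a managed domain (default layout assumed)
ls $JBOSS_HOME/domain/servers/server-one/log/
# follow the server log written there
tail -f $JBOSS_HOME/domain/servers/server-one/log/server.log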
I hope this helps someone else.

Related

Post-build actions in Python/Linux webapp do not run

I have a Django-based web app deployed from Github, running in Python 3.9. The app deploys and starts successfully.
I need to add post-build actions to complete the deployment; the exceedingly common Django task of running "manage.py". Following the general and Python-specific docs I have added an app setting of
POST_BUILD_SCRIPT_PATH=postbuild.sh
There is a shell script, postbuild.sh in the root of my app, which runs fine if I SSH into the running container. The expected behaviour is that after deployment, this should run, and output to the deployment log. Neither of these things happen.
I have tested the app setting POST_BUILD_COMMAND with a very simple echo, and that does nothing either.
Can you tell me either what I need to do to make these app settings work, or suggest an alternative method of running the post-build script?
Please note that this is a Linux app using Oryx, so answers concerning Windows/Kudu like this one aren't relevant.
I noticed you asked your question over here as well. Your setting needs to be set to a relative path, /postbuild.sh.
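For completeness, a minimal sketch of what the post-build script itself might contain, assuming migrate and collectstatic are the manage.py tasks meant in the question (the commands are illustrative placeholders, not the asker's actual script):

#!/bin/sh
# postbuild.sh at the repository root, picked up via POST_BUILD_SCRIPT_PATH
set -e
python manage.py migrate --noinput
python manage.py collectstatic --noinput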

Log4j creates logfile on server but doesn't write to it

I'm working on a web application which is deployed on a glassfish server.
I wanted to implement the log4j framework. First, I tested everything on my local machine (local server) and it works perfectly.
Now I deployed it to a test environment and noticed two strange behaviours.
It creates two logfiles: one is named "server.log" and is created while the server restore function is executed. The other has the instance name, like "instance104.log", and is created while the server is starting.
This is the logfile path from the log4j.properties:
log4j.appender.file1.File = /lfs/wwwmnt/appName/logs/project/${com.sun.aas.instanceName}.log
Does log4j need any initialization to write logs to logfiles when it's on a remote server? Do I have to add the log4j jar to the remote server?
Like I said, it works perfectly in the local environment, but on the test environment it just creates the logfile and doesn't write anything to it...
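For context, a minimal log4j.properties sketch around that appender, just to show the pieces that have to line up (the level, rolling policy and pattern here are placeholders, not the asker's actual configuration):

# root logger sends INFO and above to the file1 appender
log4j.rootLogger=INFO, file1
log4j.appender.file1=org.apache.log4j.RollingFileAppender
log4j.appender.file1.File=/lfs/wwwmnt/appName/logs/project/${com.sun.aas.instanceName}.log
log4j.appender.file1.MaxFileSize=10MB
log4j.appender.file1.MaxBackupIndex=5
log4j.appender.file1.layout=org.apache.log4j.PatternLayout
log4j.appender.file1.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n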
So I figured out it was a problem with the clusterDomain domain.xml file. The configuration tags were missing ( ), and I couldn't configure them - in our business environment only the DevOps team can do that.
Business processes are a pain..

How can I test whether jmx-console.war is being used in JBoss 4.2.2?

There is a file within the .\jboss-4.2.2.GA\server\default\deploy folder, named "jmx-console.war". I am getting a security vulnerability report concerning this module. How can I tell if our application is using this module? I used an open source scanning tool, but I'm not sure how to test whether it's actually being used.
Nessus vulnerability of High Severity:
JBoss JMX Console Unrestricted Access
http://www.tenable.com/plugins/index.php?view=single&id=23842
If you see that war file in the deploy folder, then most likely your application is using it. That is to say, it is most likely being loaded. It should be fairly easy to test for, assuming you know the HTTP port the JBoss instance is listening on. By default, it is 8080 so point your browser to http://[your jboss host]:8080/jmx-console and see if the console comes up, keeping in mind that it might be password protected, and your HTTP port might not be 8080.
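If you want to script that check rather than use a browser, something like this works (the host name is a placeholder, and adjust the port if yours differs):

# prints the HTTP status for the console path: 200 means deployed and unsecured,
# 401/403 means deployed but protected, 404 usually means not deployed
curl -s -o /dev/null -w "%{http_code}\n" http://your-jboss-host:8080/jmx-console/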
You should also see something like this in the server.log or configured equivalent:
11:52:30,165 INFO main [TomcatDeployer] deploy, ctxPath=/jmx-console,
warUrl=.../deploy/jmx-console.war/
Having said that, there are a couple of ways I can think of that would indicate or cause the jmx-console to not be deployed:
The folder you referenced is in the default server directory. This is only one instance out of 3 (default, all, minimal) and you may be running one of the others, or even a custom configured server. That is to say, if you were running the minimal server instance, or one that did not contain the jmx-console.war, then the presence of that file in the default server's deploy directory would not cause it to be deployed in another server's instance. (that all sounds more complicated than it really is)
War files in the deploy directory depend on another directory called jboss-web.deployer which actually deploys war files. If that directory is not there, my guess is that war deployment has been disabled. Highly unlikely though, as there are easier ways of doing this, and if someone went to the trouble of removing this folder, they probably would have removed the wars too.
Bottom line is, the easiest way would be to find the HTTP port, then hit the jmx-console URL and see if it responds, or check the log file. It is conceivable that someone could rename jmx-console.war to something else (in an ill-conceived attempt to hide it, perhaps?), in which case you would need to run a battery of HTTP request scans and try to find a jmx-console signature, but that's out of my (otherwise quite large...) area of expertise.

Clustered Web Servers Failing

I have three web servers running Windows Server 2008. Two are clustered, the third a standalone server (two live, one test). They use shared configuration with the configuration file located on a central file server. Every so often one live web server will stop responding. The event log shows the following error.
The worker process for application pool 'My Website' encountered an error 'Configuration file is not well-formed XML' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\My Website\My Website.config', line number '3'. The data field contains the error code.
The config file has the following data
<!-- ERROR: There's been an error reading or processing the applicationhost.config file. Line number: 0 Error message: Cannot read configuration file
-->
There is nothing in the event viewer on the file server.
When I restart the web server everything works fine.
Any ideas?
Edit
I have around 30 websites. 10 are true standalone websites running in their own application pools. The other 20 are old websites that just redirect all requests to a different URL (some on my server, some external); these share the same application pool.
One of the 10 "standalone" websites is running PHP. One is .NET 2.0. One is classic ASP with two virtual directories set up to run as .NET 2.0 applications. The other 7 are running classic ASP only.
It's quite old, but I came here because I had this issue today and it still seems to be around. The cause is usually the DFSR feature, which is especially relevant if you run a cluster. There are three possible solutions.
Change a registry key to reduce the speed of synchronization and avoid the file lock that leads to the error, as described here: https://blogs.msdn.microsoft.com/asiatech/2013/12/01/you-may-experience-configuration-file-is-not-well-formed-xml-error-while-using-dfsr-to-synchronize-the-iis-configuration-files/
Install a hotfix provided by Microsoft. The direct link is https://support.microsoft.com/de-de/hotfix/kbhotfix?kbnum=960412&kbln=en-US
Configure your DFSR environment properly and exclude the folder *\inetpub\temp\apppools* explicitly, as sketched below (after DFSR has replicated the applicationHost.config, WAS will rebuild the files anyway).
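A rough PowerShell sketch of that exclusion, assuming a newer server where the DFS Replication cmdlets are available (on Windows Server 2008 the same exclusion can be set in the DFS Management console); the group and folder names here are placeholders that depend on how your replication is set up:

# exclude the per-app-pool temp configs from replication (group/folder names are placeholders)
Set-DfsReplicatedFolder -GroupName "IISSharedConfig" -FolderName "inetpub" -DirectoryNameToExclude "apppools"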
Despite the hotfix, I personally like the third option most.
The purpose of this late answer is documentation, as others may come here and the page seems to rank well on Google.
This is a long shot, but...
If the file is accessed through NetBIOS and it's accessed regularly, you may be hitting the file server too hard and Windows thinks you are attempting a DoS attack, so it starts rejecting requests. This may also be caused by a firewall interpreting it the same way when accessing the file server.
Also, from this, check whether the temp folder for the IIS user is full; the problem may not only be the file server, but a failure to temporarily store the config file.
Can you share some volume information about your server layout? (# application pools / # websites, etc.)
Edit:
I assume you're not interested in workarounds to this, especially if it means changing the location of the file :)

Unable to start Message Engine in Websphere

I am facing a problem while starting the WebSphere messaging engine for one of the applications deployed on WebSphere. This application is deployed automatically as part of the installation of WebSphere Lombardi 7.2 Express edition, which uses WebSphere 7 internally. When I try to start the messaging engine from the WebSphere administrative console I get the following error:
The messaging engine ProcessCenter01.twperfsvr-twperfsvr_bus cannot be started as there is no runtime initialized for it yet, retry the operation once it has initialized. For the runtime to successfully initialize the hosting server must be started, have its 'SIB service' already enabled, and dynamic configuration reload enabled. If this is a newly configured messaging engine and it is the first messaging engine to be hosted on this server, then it is most likely the 'SIB service' was not previously enabled and thus the server will need to be restarted. The messaging engine runtime might not be initializing because of an error while trying to start, examine the SystemOut.log of the hosting server to check for error messages indicating the problem.
After restarting the server, the same error shows. Can anyone help me find what gets loaded as part of the "initialization of runtime"? Are there any config files etc. that I need to check to solve this issue? I suspect some missing configuration is causing the error loading the runtime for this particular application.
I too faced this issue today and had to delete all the files under the message store.
Check the directory/file path mentioned under
Application servers > server1 > Messaging engines > XXX.server1-primaryBus > File store
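If you go the same route, stop the server first and then clear the store at the path shown on that panel; a sketch assuming a Unix host, with a placeholder path:

# stop the server, then remove the old file store contents
# (the path is a placeholder - use the one shown on the File store panel)
rm -rf /opt/IBM/filestore/XXX.server1-primaryBus/*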
Just enable the SIB service for the particular server.
Example: Servers --> Application servers --> click the server name --> on the right-hand side you can see SIB service --> check the box to enable the service.
This will solve your problem.
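The same thing can be scripted with wsadmin (Jython); this is a rough sketch only, assuming a server named server1, and the SIBService attribute names should be verified against your WebSphere version:

# locate the SIB service config object for the server and enable it (cell/node/server names are placeholders)
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
sibsvc = AdminConfig.list('SIBService', server)
AdminConfig.modify(sibsvc, [['enable', 'true']])
AdminConfig.save()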
Recently I faced the same issue when I rebuilt the JVMs in the UAT environment. After searching the web I found that, because of old messages saved in the message store (flat files in my environment), the messaging engines were not getting initialized. After deleting the old message store and restarting the servers they got initialized.
I have struggled with this problem too.
In our situation the problem was that a file message store location was used that had already been created for a different (or old) messaging engine.
If you add a bus member to the service bus and use a file store implementation, you need to supply the path for the store and log folder. Make sure these locations don't exist yet, otherwise you will run into the problem above. The messaging engine for this member will use these folders.
If you have a script for creating the message bus infrastructure, make sure that when you delete the bus or remove messaging engines, you remove their file store/log folders before you re-run your script.
Another possibility is that you are using an external database as a data store, and the user that is used for the connection is not allowed to create a schema. You might find an FFDC entry like this:
DB2 SQL Error: SQLCODE=-552, SQLSTATE=42502, SQLERRMC=DB2ADMIN;CREATE
SCHEMA, DRIVER=3.61.65
Then you have to go to your database administration tool and give DB2ADMIN the proper privileges. Then restart the server or cluster.
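For example, granting the missing authority from the DB2 command line; this is only a sketch, and whether IMPLICIT_SCHEMA is enough or broader DBADM authority is wanted is a call for your DBA:

-- run as a user with sufficient authority; DB2ADMIN is the user named in the error above
GRANT IMPLICIT_SCHEMA ON DATABASE TO USER DB2ADMIN;
-- or, more broadly:
-- GRANT DBADM ON DATABASE TO USER DB2ADMIN;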
Finally this issue has been resolved. I had not created the schema in SQL Server with the same name as the username I gave for connecting to SQL Server during the installation of WLE 7.2.
Please find details about this at below link:
http://www.ibm.com/developerworks/forums/message.jspa?messageID=14795282
