Can't execute jar: no main manifest attribute - log4j

I recently found log4j version 1.2.17 bundled inside the gerrit application hosted on an EC2 instance.
To remove this, I decompressed gerrit.war using
jar -xvf gerrit.war
rm -rf WEB-INF/lib/log4j-1.2.17.jar
And then repackaged the gerrit.war file using
jar -cvf gerrit.war *
I've also pasted the war file contents below, in case it proves useful:
drwxr-xr-x WEB-INF
-rwxr--r-- robots.txt
drwxr-xr-x polygerrit_ui
drwxr-xr-x META-INF
-rwxr--r-- Main.class
-rwxr--r-- LICENSES.txt
drwxr-xr-x gerrit_ui
drwxr-xr-x gerrit-launcher
-rwxr--r-- favicon.ico
drwxr-xr-x Documentation
drwxr-xr-x com
-rwxr--r-- build-data.properties
Before repackaging the .war following the log4j jar removal, the MANIFEST.MF file contained the main class name:
Manifest-Version: 1.0
Created-By: blaze-singlejar
Main-Class: Main
What I can no longer see in the META-INF/MANIFEST.MF file after removing the log4j jar and repackaging is the Main-Class entry. Essentially it now looks like this:
$ cat META-INF/MANIFEST.MF
Manifest-Version: 1.0
Created-By: 1.8.0_312 (Red Hat, Inc.)
Hence after running
java -jar gerrit.war init -d /var/gerrit/review
I'm receiving the "no main manifest attribute" error. I do not have the source code, as the .war file was downloaded from the gerrit site, as described in the install steps.
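For what it's worth, I suspect jar -cvf wrote a brand-new default manifest instead of reusing the one I extracted. Two sketches I'm considering (both untested): repackage while supplying the extracted manifest explicitly (assuming it still contains Main-Class: Main), or delete the jar in place from an untouched copy of the original war:
jar -cvfm gerrit.war META-INF/MANIFEST.MF *
# or, on an untouched copy of the original war:
zip -d gerrit.war WEB-INF/lib/log4j-1.2.17.jar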
Any help would be really appreciated.

Related

Windows: two usernames and user paths (normal and corrupted name) referencing the same user folder content (gitlab and local machine)

I have an issue where, both on my Windows machine and on a gitlab Windows runner, I have two usernames and two valid user paths (the "base" one and a "corrupted" one). For example, on my local machine:
C:\Users>dir /A
Directory of C:\Users
10/23/2021 08:51 PM <DIR> .
10/23/2021 08:51 PM <DIR> ..
10/23/2021 11:27 PM <SYMLINKD> All Users [C:\ProgramData]
10/23/2021 11:27 PM <DIR> Default
10/23/2021 11:27 PM <JUNCTION> Default User [C:\Users\Default]
10/24/2021 12:13 AM 174 desktop.ini
08/22/2022 03:58 PM <DIR> J.S.E
10/23/2021 08:35 PM <DIR> Public
1 File(s) 174 bytes
7 Dir(s) 220,413,632,512 bytes free
but I can do both:
C:\Users>cd C:\Users\J.S.E
C:\Users\J.S.E>
and a "corrupted"
C:\Users>cd C:\Users\JS2896~1.E
C:\Users\JS2896~1.E>
Both of these paths point to the same folder content, but I can't figure out where this JS2896~1.E comes from.
I noticed this issue while doing some unit testing in python:
I would have a script with a section similar to:
PID_SYNC_FILEPATH = pathlib.Path(tempfile.gettempdir()) / PID_SYNC_FILENAME
The script would use that path to create a file, which ends up at:
C:\\Users\\J.S.E\\AppData\\Local\\Temp\\test_kernel_pid
But when checking the file content during tests (pytest), I would build the path in the same way and then run into (note the corrupted user name):
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\JS2896~1.E\\AppData\\Local\\Temp\\test_kernel_pid'
Basically the issue is: both paths are valid (e.g. they can be processed with cd in the shell or with pathlib in python), but they are sometimes not interpreted as equivalent, causing issues. Does anyone know what is up with that "corrupted username path"?
Note: same issue with gitlab Windows runners: two users, runner10 -> C:\\Users\\runner10 and RU4A94~1 -> C:\\Users\\RU4A94~1
Found what seems to be happening:
The "corrupted version", that other people call "short version" is the 8.3 filename that exists before VFAT and the "full path" as you meant it is the "long filename" or LFN. They are interchangeable so long as accessing the file in concern. So J.S.E and JS2896~1.E ("short version") are interpreted in a similar way by the system, but not by my code.
The issue was coming from using tempfile.gettempdir(), which for some reason sometimes returned the long version and sometimes the short version, causing mismatches in my code.
This can be solved by using os.path.realpath:
tempfile.gettempdir()
Out[21]: 'C:\\Users\\JS2896~1.E\\AppData\\Local\\Temp'
os.path.realpath(tempfile.gettempdir())
Out[22]: 'C:\\Users\\J.S.E\\AppData\\Local\\Temp'
or with pathlib
pathlib.Path(tempfile.gettempdir())
Out[24]: WindowsPath('C:/Users/JS2896~1.E/AppData/Local/Temp')
pathlib.Path(tempfile.gettempdir()).resolve()
Out[25]: WindowsPath('C:/Users/J.S.E/AppData/Local/Temp')
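A minimal sketch of the normalization I use now, assuming the only requirement is that paths built at different times compare equal (the helper name is mine):
import pathlib
import tempfile

PID_SYNC_FILENAME = "test_kernel_pid"

def pid_sync_filepath() -> pathlib.Path:
    # Resolving the temp directory expands the Windows 8.3 short name
    # (JS2896~1.E) to its long form (J.S.E), so the same path comes out
    # no matter which form tempfile.gettempdir() happens to return.
    return pathlib.Path(tempfile.gettempdir()).resolve() / PID_SYNC_FILENAME

# both the script under test and the pytest checks build the path this way,
# so the two sides always agree
PID_SYNC_FILEPATH = pid_sync_filepath()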

sbt resources not copied

It appears as though sbt (1.2.1, 1.2.3) is not copying resource files (from src/main/resources) to the target directory.
The build is multi-project, with a root project that aggregates subprj1 (for now).
Shown below: the project structure (main directories and one resource file, application.conf), the resourceDirectory setting as proof that we have not overridden it, and proof of successful compilation - and yet application.conf has not been copied to the output (target) directory.
Tried sbt versions 1.2.1, 1.2.3.
Why are the resources not being copied to the output, since we are complying with the standard directory structure?
Project structure
/main/project/home/dir/build.sbt
/main/project/home/dir/subprj1/src/main/resources
/main/project/home/dir/subprj1/src/main/resources/application.conf
/main/project/home/dir/subprj1/src/main/scala/com/myco/foo/bar/server/*.scala
IJ][subprj1#master] λ show resourceDirectory
[info] subprj1 / Compile / resourceDirectory
[info] /main/project/home/dir/subprj1/src/main/resources
build/sbt clean compile
...
[success] Total time: 22 s, completed Feb 8, 2019 3:10:04 PM
find . -name application.conf
./subprj1/src/main/resources/application.conf
It works if we run copyResources after compile, but why is that not automatic?
build/sbt copyResources
find . -name application.conf
./subprj1/src/main/resources/application.conf
./subprj1/target/scala-2.12/classes/application.conf
I can inspect the dependencies among tasks and see that compile does not depend on copyResources, but was it always like this, or is this a recent change? I have been using sbt for years, and I had the expectation that the build would copy resources to the output automatically.
build/sbt -Dsbt.log.noformat=true "inspect tree compile" > t.txt
It turns out someone had added the settings below to build.sbt. Once I commented out these lines, the resources started being copied to the output directory.
, unmanagedResourceDirectories in Compile := Seq()
, unmanagedResourceDirectories in Test := Seq()
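For reference, a minimal sketch of what the corrected build.sbt settings could look like, assuming subprj1 should simply use the default src/main/resources (the extra-resources directory in the last line is purely illustrative, in case additional directories are ever needed):
lazy val subprj1 = (project in file("subprj1"))
  .settings(
    // Leave unmanagedResourceDirectories at its default so that
    // src/main/resources is picked up and copyResources copies it into
    // target/scala-2.12/classes.
    // If an extra resource directory is ever needed, append instead of
    // replacing the default:
    unmanagedResourceDirectories in Compile += baseDirectory.value / "extra-resources"
  )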

javax.servlet-api-3.0.1.jar not loaded in liferay 6.1 and tomcat 7.0

With a custom plugin deployed, I'm getting the following error message and the webapp doesn't start:
validateJarFile(E:\6.1\liferay-portal-tomcat-6.1.1-ce-ga2-20120731132656558\liferay-portal-6.1.1-ce-ga2\tomcat-7.0.27\temp\4-SiteSkills_CMS-portlet\WEB-INF\lib\javax.servlet-api-3.0.1.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Why and how can I get rid of this message?
Do not package the servlet jar in an application's WEB-INF/lib directory. This jar comes with the app server, Tomcat in this case.
Since your error message shows the file sitting in the temp folder: make sure you don't just remove the file from the webapps directory, but do a redeploy as well. Or simply empty your temp folder.
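A rough sketch of that cleanup on the Tomcat side, assuming the plugin is deployed under webapps\SiteSkills_CMS-portlet (paths taken from the error message; adjust to your installation):
cd E:\6.1\liferay-portal-tomcat-6.1.1-ce-ga2-20120731132656558\liferay-portal-6.1.1-ce-ga2\tomcat-7.0.27
del webapps\SiteSkills_CMS-portlet\WEB-INF\lib\javax.servlet-api-3.0.1.jar
rmdir /s /q temp
mkdir temp
rem then redeploy the plugin, rebuilt without the servlet jar in WEB-INF\lib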

Puppet: Error 400 Could not find any files

I double-checked all the settings but could not find the issue, which is why I'm asking for help here.
Let me show the configuration:
puppet.conf:
[...]
[master]
environmentpath = $confdir/environments/
hiera_config = $confdir/environments/production/sites/xy/config/hiera.yaml
default_manifest = ./sites/xy/config/
environment_timeout = 0
fileserver.conf:
[...]
[sites]
path /etc/puppet/environments/production/sites/
allow *
auth.conf:
[...]
# extra mountpoint
path /sites
allow *
[...]
Now, whenever I run Puppet and it tries to retrieve a specific file, I get this:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find any files from puppet:///sites/xy/files/xy/xy.key.pub at /etc/puppet/environments/production/modules/xy/manifests/xy.pp:88 on node xy
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Note that I had to replace sensitive information by xy but for debugging purposes I try to give every detail where possible.
So /sites points to /etc/puppet/environments/production/sites/ according to fileserver.conf and the directory exists like this (with correct permissions imho):
/etc/puppet % ls -ld /etc/puppet/environments/production/sites/
drwxr-xr-x 8 root puppet 4096 Oct 7 12:46 /etc/puppet/environments/production/sites/
The mentioned file puppet:///sites/xy/files/xy/xy.key.pub should therefore be located in /etc/puppet/environments/production/sites/xy/files/xy/xy.key.pub which looks like this:
/etc/puppet % ls -l /etc/puppet/environments/production/sites/*/files/*/*.key.pub
-rw-r--r-- 1 root puppet 725 Oct 7 12:46 /etc/puppet/environments/production/sites/xy/files/xy/xy.key.pub
And line 88 of the module mentioned in the error message, which loads the file, looks like this:
$sshpubkey = file("puppet:///${sitefiles}/xy/${s0_pubkey}")
where $s0_pubkey is xy.key.pub and ${sitefiles} is sites/$site/files, which means the evaluated path of the requested file is: puppet:///sites/xy/files/xy/xy.key.pub
The function file() in Puppet cannot handle Puppet mount points (puppet:// URLs). The official docs at https://docs.puppet.com/puppet/latest/reference/function.html#file don't say so explicitly, but since mount points are never mentioned, file() evidently cannot handle extra mount points and expects to load files from a module's files directory.
My solution: I use a variable that holds the absolute path to my "sites files", like this: $sitefiles_absolute = "/etc/puppet/environments/${environment}/sites/xy/files/", which will never change, or at least not very often. By keeping it in the site.pp file it can be used from every node and module.
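A minimal sketch of what that looks like, keeping the xy placeholders from above (unlike puppet:/// mount-point URLs, file() does accept absolute paths):
# site.pp - absolute path to the shared "sites files"
$sitefiles_absolute = "/etc/puppet/environments/${environment}/sites/xy/files"

# xy.pp, line 88 - the puppet:/// URL replaced with the absolute path
# (:: qualifies the top-scope variable declared in site.pp)
$sshpubkey = file("${::sitefiles_absolute}/xy/${s0_pubkey}")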

how to find HADOOP_HOME path on Linux?

I am trying to run the compilation command below on a hadoop server.
javac -classpath ${HADOOP_HOME}/hadoop-${HADOOP_VERSION}-core.jar -d wordcount_classes WordCount.java
but I am not able to locate ${HADOOP_HOME}. I tried hadoop -classpath, but it gives the output below:
/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
Does anyone have any idea about this?
Navigate to the path where hadoop is installed and locate ${HADOOP_HOME}/etc/hadoop, e.g.
/usr/lib/hadoop-2.2.0/etc/hadoop
When you run ls in this folder you should see all of these files:
capacity-scheduler.xml httpfs-site.xml
configuration.xsl log4j.properties
container-executor.cfg mapred-env.cmd
core-site.xml mapred-env.sh
core-site.xml~ mapred-queues.xml.template
hadoop-env.cmd mapred-site.xml
hadoop-env.sh mapred-site.xml~
hadoop-env.sh~ mapred-site.xml.template
hadoop-metrics2.properties slaves
hadoop-metrics.properties ssl-client.xml.example
hadoop-policy.xml ssl-server.xml.example
hdfs-site.xml yarn-env.cmd
hdfs-site.xml~ yarn-env.sh
httpfs-env.sh yarn-site.xml
httpfs-log4j.properties yarn-site.xml~
httpfs-signature.secret
Core configuration settings are available in hadoop-env.sh.
You can see the classpath settings in this file; I copied a sample here for your reference:
# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_67
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
#export JSVC_HOME=${JSVC_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR}
# Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH+$HADOOP_CLASSPATH:}$f
done
Hope this helps!
The hadoop-core jar file is in the ${HADOOP_HOME}/share/hadoop/common directory, not in ${HADOOP_HOME} itself.
You can set the environment variable in your .bashrc file.
vim ~/.bashrc
Then add the following line to the end of the .bashrc file:
export HADOOP_HOME=/your/hadoop/installation/directory
Just replace the path with your hadoop installation path.
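Putting it together, a rough sketch of the whole flow, assuming Hadoop is installed under /usr/lib/hadoop-2.2.0 (adjust the path to your installation; hadoop classpath is an alternative way to get a working compile classpath without hard-coding the jar location):
export HADOOP_HOME=/usr/lib/hadoop-2.2.0        # add to ~/.bashrc, then start a new shell
ls ${HADOOP_HOME}/share/hadoop/common/          # the common/core jars live here in Hadoop 2.x
javac -classpath "$(hadoop classpath)" -d wordcount_classes WordCount.java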
