What is the best way to search for the Log4j vulnerability? Is searching for the "log4j" keyword in the source code enough? Does it only affect Java applications? I read somewhere that applications sometimes ship log4j under another name. A quick Google search turns up several tools that claim to detect the vulnerability.
The vulnerability comes from JndiLookup.class. In your console use the following command:
sudo grep -r --include "*.jar" JndiLookup.class /
If it returns nothing, you are not vulnerable.
But you may encounter results such as:
Binary file /home/myuser/docx2tex/calabash/distro/lib/log4j-core-2.1.jar matches
or
grep: /usr/share/texmf-dist/scripts/arara/arara.jar: binary file matches
The first one is clearly vulnerable (the version matches); the second one has to be investigated. To mitigate the risk I immediately removed JndiLookup.class with the following:
zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
and
zip -q -d arara.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
Applications may or may not continue to work afterwards. Upgrading to versions that use Log4j >= 2.17 still has to be done; that is the next step, but in the meantime you are no longer vulnerable.
You can use the scanner below to identify vulnerable log4j jar files in your application.
https://github.com/logpresso/CVE-2021-44228-Scanner
I used the jar version to scan the application. Download the jar from:
https://github.com/logpresso/CVE-2021-44228-Scanner/releases/download/v2.4.2/logpresso-log4j2-scan-2.4.2.jar
Open a command prompt in the same location and execute the following command to scan your application:
java -jar logpresso-log4j2-scan-2.4.2.jar 'your application path'
Steps:
Installed coverity
Configured compiler
cov-configure --javascript
cov-configure --cs
I am stuck at the cov-build step. Yarn is used to run and configure the service, but I am not sure what Coverity wants here.
I tried a couple of npm run commands, but every time I end up getting this:
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
I also tried different compilers, but no luck.
What should be done in this case?
You need to do a filesystem capture for JavaScript files. You can accomplish this by running cov-build with the --no-command flag.
cov-build --dir CoverityIntermediateDir --no-command --fs-capture-list list.txt
Let's break down these options:
--dir: the intermediate directory that stores the emitted results (used by cov-analyze later).
--no-command: do not run a build command; instead, look for certain file types.
--fs-capture-list: use the file provided to specify which files to look at and possibly emit to the intermediate directory.
A recommended way to generate the list.txt file is to grab it from your source control. If using git run:
git ls-files > list.txt
I also want to point out that if you don't have a convenient way to get a file listing for the --fs-capture-list option, you can use the --fs-capture-search option instead and pair it with a filter to exclude the node_modules directory.
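If the project is not in git, you can also build list.txt yourself with find, skipping node_modules so vendored packages are not captured. A sketch (the function name and the chosen extensions are mine; adjust them to match your project):

```shell
# Emit a list of source files under a root, pruning node_modules.
make_capture_list() {
    src_root="$1"
    find "$src_root" -path '*/node_modules' -prune -o \
        -type f \( -name '*.js' -o -name '*.ts' \) -print
}
```

Usage: make_capture_list . > list.txt, then feed list.txt to --fs-capture-list as above.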
The coverity forums have some useful questions and answers:
Node.js File system capture
Really, the best place to look is at the documentation. There are several examples of what you want to do in their guides.
I have a Virtuoso version 06.01.3127 installed on Ubuntu 14.04.05 LTS version (Ubuntu-server).
I would like to upgrade my Virtuoso to at least version 7.2.4.2+, which includes the GeoSpatial features that I need.
I have looked at the info provided in the following link, Virtuoso: Upgrading from Release 6.x to Release 7.x, but I have not been able to follow these steps.
To start with, the second step says to "Check the size of the .trx file, typically found alongside the .db and .ini files".
I can only find the odbc.ini and virtuoso.ini files, which are inside /virtuoso-opensource-6.1 folder.
Am I looking in the wrong place?
Does anyone have any guidance in this matter?
Thanks in advance
OpenLink Software (producer of Virtuoso, employer of me) does not force the location of any file -- so we cannot tell you exactly where to look on your host.
virtuoso.db is the default database storage file; your local file might be any *.db. This file must be present in a mounted filesystem, and should be fully identified (with full filepath) within the active *.ini file (default being virtuoso.ini).
You might have multiple virtuoso.ini and/or virtuoso.db files in different locations in your filesystem. You might try using some Linux commands, like --
find / -name virtuoso.db -ls
find / -name virtuoso.ini -ls
find / -name '*.db' -ls
find / -name '*.ini' -ls
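If those find commands turn up a candidate virtuoso.ini, the paths of the database and transaction files it governs are recorded inside it. A small sketch (the helper name is mine; the DatabaseFile and TransactionFile keys are assumed from Virtuoso's stock configuration):

```shell
# Print the database and transaction file paths configured in a
# given virtuoso.ini.
ini_paths() {
    grep -E '^[[:space:]]*(DatabaseFile|TransactionFile)[[:space:]]*=' "$1"
}
```

Usage: ini_paths /path/to/virtuoso.ini; the .trx file mentioned in the upgrade guide should sit at the TransactionFile path.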
Installing the binary components is done by following the instructions for installation...
You can get advice from a lot of experienced Virtuoso Users on the mailing list...
I'm aware of the general basics of using ldconfig and LD_LIBRARY_PATH, but I'm hoping for a little guru help with my situation.
I have a portable software package that resides in its own directory and has its own versions of many libraries.
There are MANY binaries and scripts that run from this directory.
Some of the binaries (apache, php, postgres) may also have separate versions installed on the system.
Since there might be two versions of php, it doesn't suffice to create /etc/ld.so.conf.d/myapp.conf if the system can't determine which version of "myapp" to use the ldconfig file for.
I'm looking for best practices on configuring such a system. The person who initially set up the software pack exported LD_LIBRARY_PATH so that ALL applications on the system used it.
I'm trying to isolate just the applications in the package directory.
Some parameters to work with:
/mypack - contains everything for the software package
/mypack/local/lib - contains required libraries that may not be compatible with the system
library example:
/mypack/local/lib/libz.so.1 => /mypack/local/lib/libz.so.1.2.3
/lib/libz.so.1 => /lib/libz.so.1.2.3
Even though the versions are the same, the one in /mypack may not be compatible with the distro, and will break the system if it's used.
binary example:
php exists in both /mypack and in the default directory
php from /mypack should use libs from /mypack/local/lib and the distro version should use /lib
Some questions about linux library paths:
- Is it possible to specify /etc/ld.so.conf.d/php.conf such that it only affects the version of php in /mypack?
- Can the library path be specified based on the location of an executable? That is, at run time, if the path of an executable is under /mypack, can it automatically use libraries from there?
- How about on a per user basis? Some/most of the system runs on different user accounts. If I were able to set a different library path per user, that would solve it.
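For context, the kind of per-application isolation I am after looks like this (a hypothetical wrapper of my own, using the /mypack paths above):

```shell
# Run a command with /mypack/local/lib prepended to its library search
# path, without touching the system-wide environment.
run_from_mypack() {
    LD_LIBRARY_PATH="/mypack/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" "$@"
}
```

For example, run_from_mypack /mypack/bin/php -v would see the bundled libraries, while a bare php keeps using /lib.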
In case anyone else finds this useful, I ended up doing this before building:
export LD_RUN_PATH='$ORIGIN/../lib'
This embeds a library path in the binary itself, relative to the binary's location. If you plan to use this in a bash script or in your build files, make sure to look up the escaping rules for $ORIGIN in your particular context, since there are cases where you need to write something like \$ORIGIN or $$ORIGIN so that each utility involved in the build passes the dollar sign through properly. Finding this saved me from having to update about 50 individual scripts that run as a batch to build our software pack.
The issue in general is that LD_LIBRARY_PATH takes precedence over the information provided by ldconfig. If all you want is a set of backup libraries for installation on systems that don't already have them, extract the current set of library directories from ldconfig and prepend them to LD_LIBRARY_PATH:
# scratch file for the directories ldconfig already knows about
mytmp=/tmp/${USER}_junk$$
# take the directory of every library ldconfig lists, de-duplicated
( for i in `/sbin/ldconfig -p | grep '=>' | awk '{ print $NF }'` ; do dirname $i ; done ) | sort -r | uniq > ${mytmp}
# rebuild the list as a colon-separated search path
myld=""
for j in `cat ${mytmp}` ; do myld=${j}:${myld} ; done
rm -f ${mytmp}
# system directories first, bundled fallback directories last
# (${SEP} is defined elsewhere in the original environment)
LD_LIBRARY_PATH=${myld}${LD_LIBRARY_PATH}:${SEP}/lib:${SEP}/lib/syslibs
export LD_LIBRARY_PATH
I was following a YouTube tutorial for installing the Oracle JDK on Linux. My script seemed to work, but I can no longer run
wget http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
What new method or script can I use to install the Oracle JDK on Linux?
Yes, the Oracle JDK link is broken; you have to click through the website and accept the terms.
The second link on Google, the first being this question (wow, Googlebot is fast), is an Oracle discussion thread:
Hi,
Unfortunately we have to require license acceptance prior to download.
This can be implemented in one of two ways. Either we require
registration and log in prior to download, and as part of registering
you agree to reading and complying with licenses. Or we use a
click-through on download which avoids the need to register and log
in. We have chosen the latter for Java downloads as the least
intrusive method. We found out some time ago that scripts were being
used to circumvent click-through (in violation of site policies, and
frankly also of common sense) and have plugged this hole.
We understand that this makes command line updates from our main
website for Linux users impossible and are actively looking for other
ways to enable this use case.
Oracle JDK is based on OpenJDK (with a few added components like a
closed-source font rasterizer that we license from a third party) and
the latter is available as part of most Linux distributions, so it is
a good option unless you specifically need the Oracle certified
binaries.
Regards,
Henrik Ståhl Sr. Director, Product Management Java Platform Group
Oracle
The reason it doesn't work is pretty obvious if you look at what you get back:
In order to download products from Oracle Technology Network you must
agree to the OTN license terms.
Be sure that...
- Your browser has "cookies" and JavaScript enabled.
- You clicked on "Accept License" for the product you wish to download.
- You attempt the download within 30 minutes of accepting the license.
When you do it from a browser you have to select the radio button "Accept License Agreement", and that is when the cookie is set. You should be able to download it using links or lynx.
After agreeing and downloading the JDK, run this script:
#!/bin/bash
#Author: Yucca Nel http://thejarbar.org
#Modify these variables as needed...
tempWork=/tmp/work
javaUsrLib=/usr/lib/jvm
downloadDir=~/Downloads
#Update this line to reflect newer versions of the JDK...
jdkVersion=jdk1.7.0_03
sudo mkdir -p $javaUsrLib
mkdir -p $tempWork
cd $tempWork
#Extract the download
tar -zxvf $downloadDir/*linux*
#Move it to where it can be found...
sudo mv -f $tempWork/jdk* $javaUsrLib/
sudo ln -f -s $javaUsrLib/$jdkVersion/bin/* /usr/bin/
sudo rm -rf $tempWork
export JAVA_HOME="$javaUsrLib/$jdkVersion"
if ! grep -q "JAVA_HOME=$javaUsrLib/$jdkVersion" /etc/environment
then
echo "JAVA_HOME=$javaUsrLib/$jdkVersion" | sudo tee -a /etc/environment
fi
exit 0
I am using libcurl for my utility and it has worked very well so far on all Linux platforms. I downloaded it, unzipped it, and simply followed the build instructions without any changes. My product uses the libcurl.so file and is linked dynamically; the .so file is bundled along with our product. Recently there were issues on SUSE where we found that libcurl is bundled by default and there was a conflict at installation.
To avoid this issue we tried renaming libcurl.so to libother_curl.so, but it did not work and my binaries still show libcurl.so as a dependency in ldd. I have since learnt that the ELF format for Linux shared objects hardcodes the library name as the SONAME in the headers (I could verify this with objdump -p).
Now my question is what is the simplest way to go? How do I build a libcurl with a different name? My original process involves running configure with the following switches
./configure --without-ssl --disable-ldap --disable-telnet --disable-POP3 --disable-IMAP --disable-RTSP --disable-SMTP --disable-TFTP --disable-dict --disable-gopher --disable-debug --enable-nonblocking --enable-thread --disable-cookies --disable-crypto-auth --disable-ipv6 --disable-proxy --enable-hidden-symbols --without-libidn --without-zlib
make
Then pick the generated files from lib/.libs.
Are there any Configure Switches available wherein I can specify the target file name? Any specific Makefile I could change?
I tried changing in what I thought could be obvious locations but either could not generate the libs or were generated with the same name.
Any help is much appreciated.
I got the answer from the curl forums (thanks Dan). Basically, you use Makefile.am as the starting point: go through the list of files and change the library name to "libxxx_curl":
find . -name Makefile.am | xargs sed -i 's/libcurl\(\.la\)/libxxx_curl\1/g'
./buildconf
./configure
make
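To sanity-check the substitution before running it across the whole tree, you can exercise it on a throwaway file (the sample line below is made up for illustration):

```shell
# Try the rename substitution on a sample automake fragment.
workdir=$(mktemp -d)
printf 'lib_LTLIBRARIES = libcurl.la\n' > "$workdir/Makefile.am"
sed -i 's/libcurl\(\.la\)/libxxx_curl\1/g' "$workdir/Makefile.am"
cat "$workdir/Makefile.am"   # prints: lib_LTLIBRARIES = libxxx_curl.la
rm -rf "$workdir"
```

Note the escaped parentheses and dot: sed uses basic regular expressions by default, so an unescaped (.la) would be matched literally and the backreference \1 would fail.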
A lot of commercial applications bundle their particular library versions in a non-standard path and then tweak the LD_LIBRARY_PATH environment variable in a launch script to avoid conflicts. IMHO that is better than trying to change the target name.