RPC command to initiate a software install

I was recently working with a product from Symantec called Norton Endpoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products.
The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The list of workstations is populated from either AD (Active Directory) or an NT Domain (WINS/DNS NetBIOS lookup). From that list, one can click and choose the workstations to which to deploy the endpoint software, which is Symantec's virus & spyware protection suite.
Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software.
I really like how their product works, but I'm not sure what they are doing to accomplish all the steps. I haven't done any deep investigation into this, such as sniffing the network, and wanted to check here to see if anyone is familiar with what I'm talking about and knows how it's accomplished, or has ideas about how it could be accomplished.
My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install.
What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, when a reboot is necessary. At that point, the user gets a pop-up asking to reboot now or later, etc. My hunch is that the setup.exe program is displaying this message.
To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to perform some action or run some program.
My programming language is C/C++
Any thoughts/suggestions appreciated.

I was also looking into this, since I too want to remote-deploy software. I chose to packet-sniff PsTools, since it has proven itself quite reliable in such remote admin tasks.
I must admit I was definitely over-thinking this challenge. You have probably done your packet sniff by now and discovered the same things I have. I hope by leaving this post behind we can assist other developers.
This is how PsTools accomplishes execution of arbitrary code:
It copies a system service executable to \\server\admin$ (you either have to already have local admin on the remote machine, or supply credentials). Once the file is copied, it uses the Service Control Manager API to make the copied file a system service and start it.
Obviously, this system service can now do whatever it wants, including binding to an RPC named pipe. In our case, the system service would install an MSI. To get confirmation of a successful installation you could either remotely poll a registry key or call an RPC function. Either way, you should remove the system service when you are done and delete the file (psexec does not do this; I guess they don't want it to be used surreptitiously, and in that case leaving the service behind would at least give an admin a fighting chance of realizing someone had compromised their box). This method does not require any preconfiguration of the remote machine, simply that you have admin credentials and that file sharing and RPC are open in the firewall.
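Since the original question mentioned C/C++, here is a rough sketch of that same copy-then-SCM technique using the Win32 API. The machine name, file names, and service name are all placeholders, error handling is trimmed, and mysvc.exe would have to be a real service executable (implementing ServiceMain), just as described above.
#include <windows.h>
#include <stdio.h>
int main(void)
{
    // 1) Copy the payload over the administrative share (placeholder names).
    if (!CopyFileW(L"C:\\deploy\\mysvc.exe",
                   L"\\\\TARGETPC\\ADMIN$\\mysvc.exe", FALSE)) {
        printf("CopyFile failed: %lu\n", GetLastError());
        return 1;
    }
    // 2) Connect to the target machine's Service Control Manager.
    SC_HANDLE scm = OpenSCManagerW(L"\\\\TARGETPC", NULL, SC_MANAGER_CREATE_SERVICE);
    if (!scm) { printf("OpenSCManager failed: %lu\n", GetLastError()); return 1; }
    // 3) Register the copied file as a service; ADMIN$ normally maps to C:\Windows,
    //    so that is where the binary lives from the target's point of view.
    SC_HANDLE svc = CreateServiceW(scm, L"MyDeploySvc", L"My Deploy Service",
                                   SERVICE_ALL_ACCESS, SERVICE_WIN32_OWN_PROCESS,
                                   SERVICE_DEMAND_START, SERVICE_ERROR_NORMAL,
                                   L"C:\\Windows\\mysvc.exe",
                                   NULL, NULL, NULL, NULL, NULL);
    if (!svc) { printf("CreateService failed: %lu\n", GetLastError()); CloseServiceHandle(scm); return 1; }
    // 4) Start it; the service runs as LocalSystem on the target and can launch
    //    setup.exe / msiexec, bind an RPC named pipe, etc.
    if (!StartServiceW(svc, 0, NULL))
        printf("StartService failed: %lu\n", GetLastError());
    // 5) Clean up local handles; call DeleteService(svc) and remove the copied file
    //    once the install has been confirmed.
    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}
psexec layers a named-pipe protocol on top of this for relaying input/output, but the copy / create-service / start-service core is what the sketch shows.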
I've seen demos in C# using WMI, but I don't like those solutions. File sharing and RPC are most likely to be open in firewalls. If they aren't, file sharing and remote MMC management of the remote server wouldn't work. WMI can be blocked and still leave these functional.
I've worked with a lot of software that does remote installations, and a lot of them are not as reliable as PsTools. My guess is that this is because those developers are using other methods that are not as likely to be open at the firewall level.
The simple solution is often the most elusive. As always, my hat is off to the SysInternals folks. They are true hackers in the positive, old school meaning of the word!

This sort of functionality is also available in products like LANDesk and Altiris. You need a daemonized listener on the client side that listens for instructions/connections from the server. Once a connection is made, any number of things can happen: you can transfer files, kick off installation scripts, etc., usually transparently to any users on that box.
I've used the Twisted Framework (http://twistedmatrix.com) to do this with a small handful of Linux machines. It's Python and Linux, not Windows, but the premise is the same: a listening client accepts instructions from a server and executes them. Very simple.
This functionality can also be accomplished with VB/PowerShell scripts in a Windows-based domain.

Related

Security of Minecraft server mods containing Java code

I am running a Java minecraft server on my Linux server. I have been asked to install some mods (e.g. data packs) on the server which appear to contain Java code written by a third party other than Mojang.
Is this Java code restricted in what it can do, or can it run any arbitrary code it likes (e.g. read /etc/passwd, open TCP ports, claim huge amounts of memory, etc.)?
In other words, how risky are minecraft mods containing Java code?
This is a very good question. I was actually wondering that too. I have spent a bit of time looking at the binaries of Minecraft and the Minecraft server Spigot. I assume we both use the Java Edition.
First of all, Java code, once you run it on a host, can do anything. The only mechanism in Java that can prevent that and protect the user from the developer is the security manager. Once you turn on the security manager (which is an opt-in mechanism), you have the ability to define a set of rules that Java will obey, e.g. it will not write into directories you don't allow it to.
So the question is: does Minecraft use the security manager by default? I am 99% sure it does not. No one uses the security manager because it is a pain to configure it right and things stop working every time you get it wrong (you know that an application uses the security manager when you face problems with policy misconfiguration every now and then).
Minecraft is launched by running an exe; I would not know where to turn the security manager on even if I wanted to. There is a bit of hope with the Spigot server. You can enable the security manager with -Djava.security.manager and supply your policy with -Djava.security.policy==my.policy. But getting the policy right will be a pain. I will try to look into it, though, when I have a free week or so.
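As a rough, untested sketch (the file names, paths, and port below are placeholders), launching Spigot under the security manager could look like this, with my.policy granting only what the server obviously needs:
java -Djava.security.manager -Djava.security.policy==my.policy -jar spigot.jar
// my.policy
grant {
    permission java.io.FilePermission "/srv/minecraft/-", "read,write,delete";
    permission java.net.SocketPermission "*:25565", "accept,listen,resolve";
    permission java.util.PropertyPermission "*", "read";
};
In practice a real server will need more permissions than this, so expect to iterate on AccessControlExceptions before it runs cleanly.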
Minecraft mods can act with the full permissions of the user running Minecraft. If you don't trust the author of a mod, then there are basically two approaches to safely using it:
Audit the source code yourself, and then compile the mod yourself so you know it matches the binary
Run Minecraft in a sandbox such as a VM, or as a heavily restricted user.

Is Mercurial Server a must for using Mercurial?

I am trying to pick version control software for our team, but I don't have much experience with it. After searching and googling, it seems Mercurial is worth a try. However, I am a little bit confused about some general information about it. Basically, our team has only 5 people and we all connect to a server machine which will be used to store the repositories. The server is a Red Hat Linux system. We will probably use a mostly centralized workflow. Because I like the local-commit idea, I still prefer DVCS-type software. Now I am trying to install Mercurial. Here are my questions.
1) Does the server used for the repositories always need to have the software "mercurial-server" installed? Or does that depend on what kind of workflow is used? In other words, is it true that if no centralized workflow is used, then the server only needs a Mercurial client installed?
I am confused about the term "mercurial-server". Or does it mean that the Mercurial installed on the server is always called a "Mercurial server", regardless of whether the workflow is centralized? In addition, because we all work on that server, does it mean only one copy of Mercurial needs to be installed there? We each have our own user directory such as /home/Cassie, /home/John,... and /home/Joe.
2) Is SSH a must? Or does it depend on what kind of connection there is between the users and the server? Since we all work on the server itself, SSH is not required, right?
Thank you very much,
There are two things that can be called a "mercurial server".
One is simply a social convention that "repository X on the shared drive is our common repository". You can safely push and pull to that mounted repository and use it as a common "trunk" for your development.
A second might be particular software that allows mercurial to connect remotely. There are many options for setting this up yourself, as well as options for other remote hosting.
Take a look at the first link for a list of the different connection options. But as a specific answer to #2: no, you don't need to use SSH, but it's often the simplest option if you're in an environment using it anyway.
The term that you probably want to use, rather than "mercurial server", is "remote repository". This term is used to describe the "other repository" (the one you're not executing the command from) for push/pull/clone/incoming/outgoing/others-that-i'm-forgetting commands. The remote repository can be either another repository on the same disk, or something over a network.
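For example (the paths and hostname here are invented), all of these address a "remote repository", whether it lives on a mounted share or across the network:
hg clone /mnt/devshare/project                 # "remote" repo that is just a path on a mounted share
hg clone ssh://cassie@server//srv/hg/project   # same idea over the network, via ssh
hg pull     # fetch new changesets from the default remote repository
hg push     # send your local commits back to it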
Typically you use one shared repository to share the code between different developers. While you don't need it technically, it has the advantage that it is easier to synchronize when there is a single spot for the fresh software.
In the simplest case this can be a repository on a simple file share where file locking is possible (NFS or SMB), where each developer has write access. In this scenario there is no need to have mercurial installed on the server, but there are drawbacks:
Every developer must have a mercurial version installed, which can handle the repo version on the share (as an example, when the repo on the share is created with mercurial 1.9, a developer with 1.3 can't access this repo)
Every developer can issue destructive operations on the shared repo, including the deletion of the whole repo.
You can't reliably run hooks on such a repo, since the hooks are executed on the developer machines, and not on the server
I suggest using the http or ssh method. You need to have Mercurial installed on the server for this (I'm not taking the http-static method into account, since you can't push to an http-static path), and you get the following advantages:
the mercurial version on the server does not need to be the same as the clients, since mercurial uses a version-independent wire protocol
you can't perform destructive operations via these protocols (you can only append new revisions to a remote repo, but never remove any of them)
The decision between http and ssh depends on your local network environment. http has the advantage that it passes through many corporate firewalls, but you need to take care about secure authentication when you want to push stuff back to the server over http (or don't want everybody to see the content). On the other hand, ssh has the drawback that you might need to lock down the server so that the clients can't run arbitrary programs there (it depends on how trustworthy your clients are).
I second Rudi's answer that you should use http or ssh access to the main repository (we use http at work).
I want to address your question about "mercurial-server".
The basic Mercurial software does offer three server modes:
Using hg serve; this serves a single repository, and I think it's more used for quick hacks (when the main server is down, and you need to pull some changes from a colleague, for example).
Using hgwebdir.cgi; this is a cgi script that can be used with an HTTP server such as Apache; it can serve multiple repositories.
Using ssh (Secure Shell) access; I don't know much about it, but I believe that it is more difficult to set up than the hgwebdir variant
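As a rough illustration of options 1 and 3 (the repository path, user, and hostname are invented): for a quick ad-hoc server you run, on the machine holding the repo,
cd /srv/hg/project
hg serve -p 8000      # colleagues can now hg clone / hg pull http://thatbox:8000/
and for ssh access the server needs nothing beyond sshd, user accounts, and hg on the PATH; clients simply use URLs of the form ssh://cassie@server//srv/hg/project.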
There is also a separate software package called "mercurial-server". This is provided by a different company; its homepage is http://www.lshift.net/mercurial-server.html. As far as I can tell, this is a management interface for option 3, the mercurial ssh server.
So, no, you don't need to have mercurial-server installed; the mercurial package already provides a server.

How do I secure a production server after inheriting it from the previous development vendor?

We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
3 simple steps
Backup configurations / source files from HTTP / SQL tables
Reinstall operating system
Follow standard hardening steps on fresh OS
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file which is usually in /etc/ssh/:
Disable Root Logins
PermitRootLogin no
Change the ssh port from Port 22
Port 9222
Manually specifying which accounts can login
AllowUsers Andrew Jane Doe
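After editing, restart the SSH daemon so the changes take effect; on Ubuntu that is something like:
sudo /etc/init.d/ssh restart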
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
The best thing you could do would be to reinstall, and when you bring files over from the old system to the new one, make sure it is just data and not executables that could be nasty. If that is too much, change all the passwords, watch the logs for a few weeks, and play with iptables to block the former vendor. Also, given that it could have a rootkit at the kernel level, it's probably a good idea to swap the kernel out as well, and to watch traffic coming out of the box for anything that might be going to the vendor. It really is a hassle to take someone else's machine and declare it safe; I would go as far as to say it is nearly impossible.
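For the iptables part, something along these lines (203.0.113.7 stands in for the former vendor's address) drops their traffic in both directions:
iptables -A INPUT -s 203.0.113.7 -j DROP
iptables -A OUTPUT -d 203.0.113.7 -j DROP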
Side note: this isn't really programming related, so it probably shouldn't be on this site.

Automated deployment of files to multiple Macs

We have a set of Mac machines (mostly PPC) that are used for running Java applications for experiments. The applications consist of folders with a bunch of jar files, some documentation, and some shell scripts.
I'd like to be able to push out new versions of our experiments to a directory on one Linux server, and then instruct the Macs to update their versions, or retrieve an entire new experiment if they don't yet have it.
../deployment/
../deployment/experiment1/
../deployment/experiment2/
and so on
I'd like to come up with a way to automate the update process. The Macs are not always on, and they have their IP addresses assigned by DHCP, so the server (which has a domain name) can't contact them directly. I imagine that I would need some sort of daemon running full-time on the Macs, pinging the server every minute or so, to find out whether some "experiments have been updated" announcement has been set.
Can anyone think of an efficient way to manage this? Solutions can involve either existing Mac applications, or shell scripts that I can write.
You might have some success with a simple Subversion setup; if you have the dev tools on your farm of Macs, then they'll already have Subversion installed.
Your script is as simple as running svn up on the deployment directory as often as you want and checking your changes in to the Subversion server from your machine. You can do this without any special setup on the server.
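A cron entry on each Mac along these lines (the path is made up) keeps the working copy current:
*/5 * * * * cd /Users/Shared/deployment && /usr/bin/svn up --non-interactive -q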
If you don't care about history and a version control system seems too "heavy", the traditional Unix tool for this is called rsync, and there's lots of information on its website.
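For example (the hostname and paths are invented), each Mac could periodically run:
rsync -az --delete yourserver.example.com:/deployment/ /Users/Shared/deployment/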
Perhaps you're looking for a solution that doesn't involve any polling; in that case, maybe you could have a process that runs on each Mac and registers a local network Bonjour service; DNS-SD libraries are probably available for your language of choice, and it's a pretty simple matter to get a list of active machines in this case. I wrote this script in Ruby to find local machines running SSH:
#!/usr/bin/env ruby
require 'rubygems'
require 'dnssd'
handle = DNSSD.browse('_ssh._tcp') do |reply|
puts "#{reply.name}.#{reply.domain}"
end
sleep 1
handle.stop
You can use AppleScript remotely if you turn on Remote Events on the client machines. As an example, you can control programs like iTunes remotely.
I'd suggest that you put an update script on your remote machines (AppleScript or otherwise) and then use remote AppleScript to trigger running your update script as needed.
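As a sketch only (the account and hostname are placeholders, and Remote Apple Events must be enabled on the target), driving an application on another Mac looks something like:
osascript -e 'tell application "iTunes" of machine "eppc://admin@labmac.local" to playpause'
The same "of machine" form can be used to tell a remote app to kick off whatever update script you've placed on that Mac.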
If you update often, then Jim Puls's idea is a great one. If you'd rather have direct control over when the machines start looking for an update, then remote AppleScript is the simplest solution I can think of.

Securing a linux webserver for public access

I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even if just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has been for intranet applications, where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights:
Make sure no one can browse the directories
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
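For the first highlight, a minimal Apache snippet (the path is generic) that turns off directory listings and ignores per-directory overrides:
<Directory /var/www/>
    Options -Indexes
    AllowOverride None
</Directory>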
As far as distros, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just ssh'd into it whenever I needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI things that make it easy to control Apache/MySQL/PHP.
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run an SSH server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great, shift that to an obscure port too.
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
One thing you should be sure to consider is which ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (http) or ftp, make sure you at least know what you are serving to the world and who can do what with it. I don't know a lot about ftp, but there are millions of great Apache tutorials just a Google search away.
Bit-Tech.Net ran a couple of articles on how to setup a home server using linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
#svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
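With the current AWS CLI, for instance, adding your coffee-shop IP (placeholder address below) to a security group looks roughly like:
aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 22 --cidr 203.0.113.45/32
and a matching revoke-security-group-ingress call closes it again when you're done.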
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and keep your wits up about your configuration (i.e., avoid using root for everything, and make sure you keep your software up to date).
On that note, albeit this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick with anything Ubuntu (get Ubuntu Server here); in my experience, it's the quickest way to get answers when asking questions on forums (not sure what to say about uptake, though).
My home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS).
Good luck!
/mp
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production. And you only pay for the space, time, and bandwidth you use.
If you do run a Linux server from home, install ossec on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
There are plenty of ways to do this that will work just fine. I would usually just use a .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
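A minimal sketch of that approach (the paths and user name are placeholders): create a password file with htpasswd, then drop an .htaccess into the directory you want to protect:
htpasswd -c /home/me/.htpasswd alice
AuthType Basic
AuthName "Private area"
AuthUserFile /home/me/.htpasswd
Require valid-user
(This needs AllowOverride AuthConfig, or stronger, enabled for that directory.)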
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.

Resources