SCP equivalent in Chef - Linux

I am trying to scp a file from one server to another, both on Azure. This is the command I want to replace:
scp /tmp/openvpn/EasyRSA-3.0.4/pki/reqs/server.req jkirby29@40.121.47.3:/tmp
I have already tried remote_file, and I am not sure of anything else that is even close to what I need. Is this one of those cases where I need to put it in a bash block? I am new to Chef, so excuse my lack of knowledge.

This is not really something Chef supports. The remote_file resource does support SFTP, but it expects to pull from a remote server, not push as you are doing here. In general this kind of approach leads to a lot of complexity when scaling up/out, for example when you need to push to a dozen machines instead of just one and have to work out which IPs to target, which might be changing, and so on. You can use an execute or script resource to do it as written, though, if that's really what you want.
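For illustration, the shell command that such an execute or bash resource would wrap might look like the sketch below; the user, IP, and file path come from the question, while the key path and the host-key option are assumptions for non-interactive use:
# Hypothetical command for a Chef execute/bash resource to run.
# Assumes key-based auth so no password prompt is needed; adjust the key path.
scp -i /home/jkirby29/.ssh/id_rsa -o StrictHostKeyChecking=no \
    /tmp/openvpn/EasyRSA-3.0.4/pki/reqs/server.req \
    jkirby29@40.121.47.3:/tmp/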

Related

should I configure my EC2 using user_data or Ansible

When launching EC2 using Terraform (or CloudFormation), we can configure EC2 by putting some scripts in user_data/remote-exec. Alternatively, we can configure EC2 using Ansible/Chef, etc. What are the differences between configuring EC2 in user_data/remote-exec and doing it with Ansible/Chef? When should I use the former and when the latter (I know Ansible/Chef is idempotent)?
In my case, the EC2 instance was originally launched manually, then configured manually using a lot of Linux commands, and the commands were not written by me. Now I am the person automating the whole setup using Terraform and configuring the EC2 instances. Using user_data/remote-exec to configure EC2 is straightforward: I just need to put all the existing Linux commands they have into some scripts with a few small changes. And if the configuration result from my script is not successful, at least I can quickly figure out whether I missed some commands by comparing my script with the original Linux commands. But if I use Ansible/Chef, I have to rewrite all the steps in a different language. And if the configuration is not what I expected, it is hard for me to figure out which steps are incorrect, because the syntax of Ansible/Chef and plain Linux commands is totally different.
My question is, in my case, should I use Ansible/Chef or user_data/remote-exec for configuration?
User data is good for initial configuration of the system. If you need longer-term maintenance, configuration management software like Ansible/Chef/Salt/Puppet is a great option.
Packer can be used for immutable infrastructure, i.e. servers that don't change after creation. You can run all the scripts and installs when building the image so the system is ready to just boot; this is also faster because you don't have to wait for user data to run.
A few questions you have to ask as well: how often are you going to patch these? Are you going to just update the existing servers or replace them with new ones? Ansible is great for configuration since it's just YAML files.
Blue/green deployments generally replace servers with all new ones and gradually move traffic over to the new servers.
Some more things to consider with your infrastructure as code.
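To make the user_data option concrete, a minimal bootstrap script passed as user data might look like the sketch below; it runs once at first boot, and it assumes a Debian/Ubuntu image with nginx as a stand-in for whatever you actually install:
#!/bin/bash
# Hypothetical user_data bootstrap: installs and starts a web server at first boot.
# Assumes a Debian/Ubuntu-based AMI; package and service names are placeholders.
set -euo pipefail
apt-get update -y
apt-get install -y nginx
systemctl enable --now nginx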

Updating a website through SSH

I'm only partially familiar with the shell and my command line, but I understand the usage of * when uploading and downloading files.
My question is this: if I have updated multiple files within my website's directory on my local device, is there some simple way to re-upload every file and directory through the put command, updating every existing file and adding the files that weren't previously there?
I'd imagine that I'd have to somehow:
put */ (to put all of the directories)
put * (to put all of the files)
and change permissions accordingly
It may also be in my best interests to first clear the directory so I have a true update, but then there's the problem of resetting all the permissions for every file and directory. I would think it would work in a similar manner, but I've had problems with it and I do not understand the use of the -r recursive option.
Basically, such functionality is perfected within the rsync tool. And that tool can also be used in a "secure shell" way, as outlined in this tutorial.
As an alternative, you could also look into sshfs. That is a utility that allows you to "mount" a remote file system (using ssh) in your local system. So it would be completely transparent to rsync that it is syncing a local and a remote file system; for rsync, you would just be syncing two different directories!
Long story short: don't even think about implementing such "sync" code yourself. Yes, rsync itself requires some studying; like many Unix tools it is extremely powerful, so you have to be diligent when using it. But the thing is: this is a robust, well-tested tool. The time required to learn it will pay off pretty quickly.
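For illustration, a basic rsync-over-ssh push might look like the sketch below; the local directory, user, host, and remote path are placeholders, and the trailing slashes mean "sync the contents of this directory":
# Push the local site contents to the remote web root over ssh.
# -a preserves permissions and times, -v is verbose, -z compresses;
# --delete removes remote files that no longer exist locally (use with care).
rsync -avz --delete -e ssh ./mysite/ user@example.com:/var/www/mysite/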

Remote directory compare and merge without ssh

I have 2 remote servers/machines, say s1 and s2 (Linux-based machines).
Both servers have one directory which is very huge (I mean, initially the same data is on both machines).
s1 is always stable and up to date; changes are added by an authorized user.
On s2, people will make changes to the data here and there.
Now the requirement is to bring the content of s2 in sync with s1.
Conditions:
1. No wholesale replacement of s2's content with s1's, because the data is very huge.
2. No other software is allowed to be installed on the machines.
3. Only scp and sftp are supported; no ssh or any other sort of access is given, because it is a production machine.
If anybody has come across this sort of requirement, please suggest any tool or any way to do this task.
If, as you say, you have scp, then you must also have ssh; scp requires ssh to work. So I'll start by challenging your assumption that you can't use rsync over ssh. If you have scp working, then there's no reason why rsync over ssh should not work.
rsync over ssh is the correct answer here. It is the most efficient mechanism for synchronizing content between two different servers. But I suppose it's possible that someone who thinks he knows what he's doing, but really doesn't, hacked up a server to allow only the scp service and block ssh sessions, probably under the mistaken notion that this improves security somehow. It really doesn't, but that's a different topic. So, what now...
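If ssh does turn out to be usable after all, a dry run like the sketch below would show what rsync would change on s2 without actually copying anything; the directory, user, and host are placeholders:
# Dry run from s1: itemize what rsync would change on s2, without transferring any data.
rsync -avzn --itemize-changes -e ssh /data/hugedir/ user@s2:/data/hugedir/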
Well, you say you do have sftp access available. In that case, the next best answer would be a custom sftp client. Learn Perl, and use the Net::SFTP module to write a custom Perl script, for your specific requirement, that uses SFTP to compare the contents of the two servers and synchronize them.
Net::SFTP exposes the underlying SFTP protocol in a way that allows one to write custom applications that use it. You'll use the SFTP protocol to examine the contents of each server, figure out what's different, then copy what needs to be copied in order to update their contents.
Using Net::SFTP won't be as efficient as using rsync+ssh. With Net::SFTP, you'll know which files exist on the server and the size of each file in bytes. However, if both servers appear to have a file with the same name and the same byte count, you don't really know whether they are, in fact, identical without downloading each file and comparing them manually. You'll have to do that, of course. This is the key advantage of rsync+ssh that sftp has no equivalent of: the rsync server works together with the rsync client, and they're able to verify that the file contents are identical, using checksums, without actually transferring the file from one side to the other. There's no way to avoid doing that with sftp, but this is going to be the best you'll be able to do.
If you decide to go the Perl route, don't use Net::SFTP, which is an old and unmaintained module. Instead go for Net::SFTP::Foreign, which, BTW, implements recursive downloads and allows you to select which files to get on the fly, so you can easily do an update.
Another alternative is to use the development version of my other module, Net::SSH::Any, which has a built-in scp client that is able to download only the files that are newer on the remote side:
my $ssh = Net::SSH::Any->new(...);
$ssh->scp_get({ recursive => 1,
                update    => 1 },
              $remote_dir, $local_dir);
Other scripting languages like Python or Ruby also have SFTP and SCP libraries.

Linux: Uploading files to a live server - How to automate process?

I'm developing on my local machine (Apache2, PHP, MySQL). When I want to upload files to my live server (nginx, MySQL, php5-fpm), I first back up my www folder, extract the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log myself in, copy the files from my home directory on the server to my www directory, and, if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky I'll have to research what went wrong.
Today, I changed only one file, and had to go through the entire process just for this file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very very new to all this, so bear with me! For example I'm always copying files into my home directory on the server, because scp cannot seem to copy them directly into the /var/www folder?!
There are many utilities which will do that for you. If you know Python, try Fabric. If you know Ruby, you may prefer Capistrano. They allow you to script both local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For over 10 servers, have a look at Chef or Puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something entirely different is up to you. Whatever works for you is OK. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken, I want to have now what I had 5 minutes ago"). Whatever way of versioning you use is better than no versioning (tagged releases are probably the most comfortable).
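As a rough illustration of the "packaged source from a tagged release" idea, a deploy could start from something like the sketch below; the tag, archive name, and destination path are hypothetical:
# Build a tarball from a tagged release and push it to the server.
# The tag, file name, and destination are placeholders for your own setup.
VERSION=v1.2.3
git archive --format=tar.gz -o site-$VERSION.tar.gz $VERSION
rsync -avz -e ssh site-$VERSION.tar.gz user@server:/srv/releases/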
I think the "SVN" approach is very close to what you really want. You make a cron job that will run "svn update" every few minutes (or "hg pull -u" if using Mercurial; similar with git). Another option is to use Dropbox (we use it for our web servers sometimes); this one is very easy to set up and share with non-developers (like UI designers)...
rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have access to write to /var/www. Assuming you have root access, change the group (chown) on /var/www (or better yet, a subdirectory) to your group and change the permissions to allow your group write access (chmod g+w).
rsync is fairly lightweight, so it should be simple to get going.
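As a sketch of the permission change described above, assuming root access and using a hypothetical group name and path:
# Give your group write access to the web root so scp/rsync can write there directly.
# 'www-devs', 'youruser' and the path are placeholders for your own setup.
chown -R :www-devs /var/www/mysite
chmod -R g+w /var/www/mysite
usermod -aG www-devs youruser   # add your user to the group (re-login afterwards)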

Using directory traversal attack to execute commands

Is there a way to execute commands using directory traversal attacks?
For instance, I can access a server's /etc/passwd file like this:
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
..... and get an output?
To be clear here, I've found the vuln in our company's server. I'm looking to raise the risk level (or earn bonus points for me) by proving that it may give an attacker complete access to the system.
Chroot on Linux is easily breakable (unlike on FreeBSD). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
If you are able to view /etc/passwd because the document root or Directory access is not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application uses user input (a filename) in calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes it is possible (the first question) if the application is really really bad (in terms of security).
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments as they were deemed sarcastic and blunt. OK, now that more information has come from gAMBOOKa about this - Apache with Fedora, which you should have put into the question - I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
It should be noted: include your httpd.conf in both posts to their forums.
To minimize access to passwd files, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside the sandbox/chroot. Have you a spare box lying around to experiment with, or, even better, use VMware to simulate the environment you are using for Apache/Fedora - try to get it to be an IDENTICAL environment - make the httpd server run within VMware, and remotely access the virtual machine to check whether the exploit is still visible. Then chroot/sandbox it and re-run the exploit again...
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if there is minimal impact to the web server running in a sandboxed/chrooted environment, push them to do so...
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd then the server must be poorly configured...
If you really want to execute commands, then you need to know whether the PHP script running on the server makes any system() call you can influence, so that you can pass commands through the URL,
e.g.: url?command=ls
Try to view the .htaccess files... it may do the trick.
