Syncing between a Windows-based server (host) and a Linux server (client) using SFTP

My task is to sync folders between two computers: one is a Windows server, which acts as the host, and the other is a Linux-based server. The file transfer has to be secure and encrypted. Is there any free software that will help me with this task?
Additionally, the syncing should happen automatically at a pre-decided interval.

I have a recollection that WinSCP can be invoked from the command line. There, you have the option to synchronize folders (and the whole hierarchy therein). It may be worth trying; see the sketch below.
Total Commander also has FTP/SFTP capabilities, but I'm not sure it can be invoked from the command line.
One point to consider: if the process is to run automatically, you need to hard-code the username and password for the connection, and at that point your security is compromised.
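If WinSCP works out, the synchronization can be scripted and run from the Windows Task Scheduler at your chosen interval. A minimal sketch, assuming WinSCP is installed on the Windows host; the host name, credentials, host key fingerprint, and paths below are placeholders:
# sync.txt - WinSCP script file (all names are placeholders)
option batch abort
option confirm off
open sftp://backupuser:secret@linux-server.example.com/ -hostkey="ssh-rsa 2048 xx:xx:xx..."
synchronize remote C:\data /home/backupuser/data
exit
Scheduling winscp.com /script=sync.txt then gives you the automatic, encrypted sync.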

Related

Moving files from multiple Linux servers to a central Windows storage server

I have multiple Linux servers with limited storage space that create very big daily logs. I need to keep these logs but can't afford to keep them on my server for very long before it fills up. The plan is to move them to a central windows server that is mirrored.
I'm looking for suggestions on the best way to do this. What I've considered so far are rsync and writing a script in Python or something similar.
The ideal method of backup that I want is for the files to be copied from the Linux servers to the Windows server, then verified for size/integrity, and subsequently deleted from the Linux servers. Can rsync do that? If not, can anyone suggest a superior method?
You may want to look into using rsyslog on the Linux servers to send logs elsewhere. I don't believe you can configure it to delete logged lines with a verification step, and I'm not sure you'd want to either. Instead, you might be best off with an aggressive logrotate schedule plus rsyslog.
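On the rsync part of the question: rsync can handle the verify-then-delete step itself. It checksums every file it transfers, and --remove-source-files deletes each source file only after it has been transferred successfully. A minimal sketch, assuming the Windows server exposes an SSH/rsync endpoint (for instance via Cygwin); host and paths are placeholders:
# copy the logs, deleting each source file once its transfer has been verified
rsync -avz --remove-source-files /var/log/myapp/ loguser@winserver.example.com:/archive/logs/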

Is there any JSch ChannelSftp function that works like the command 'cp'?

These days I am working with jsch-0.1.41, operating on resources on a remote Linux server via ChannelSftp. I find that there is no function providing functionality similar to the shell command "cp". I want to copy a file from one directory to another, where both directories are remote directories on the Linux server.
If there is anything wrong in my presentation, please point it out. Thanks.
The SFTP protocol doesn't offer such a command, and thus also JSch's ChannelSftp doesn't offer it.
You have basically two choices:
Use a combination of get and put, i.e. download the file and upload it again. You can do this without local storage (simply connect one of the streams to the other), but it still moves the data twice through the network (and encrypts/decrypts it twice) when that isn't really necessary. Use this only if the other way doesn't work.
Don't use SFTP, but use an exec channel to execute a copy command on the server. On unix servers, this command is usually named cp, on Windows servers likely copy. (This will not work if the server's administrator somehow limited your account to SFTP-only access.)
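A minimal sketch of the exec-channel approach with JSch (host, credentials, and paths are placeholders; error handling is omitted):
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class RemoteCopy {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "linux-server.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // don't do this in production
        session.connect();

        // Run cp on the server instead of moving the data through SFTP
        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("cp /remote/src/file.txt /remote/dest/");
        channel.connect();
        while (!channel.isClosed()) {
            Thread.sleep(100); // wait for the remote command to finish
        }
        int status = channel.getExitStatus(); // 0 means cp succeeded
        channel.disconnect();
        session.disconnect();
        System.out.println("cp exit status: " + status);
    }
}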

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to make a clone of the disk from the old server to the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
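A rough sketch of the network clone, assuming the traditional netcat syntax (some variants drop the -p flag) and that both disks are unmounted, e.g. with both machines booted from live media. The device names and host name are placeholders, and the target disk is overwritten:
# on the NEW server: listen on a port and write the incoming image to disk
nc -l -p 9000 | dd of=/dev/sdb bs=64K

# on the OLD server: stream the disk image across the network
dd if=/dev/sda bs=64K | nc new-server.example.com 9000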
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, Telnet, MysqlAdmin, or any RDBMS client, export a dump file, then log in to the new server's SQL system and import that dump file.
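For a MySQL setup, the dump/import round trip might look like this; a sketch with placeholder host names. Note that --all-databases also includes the mysql system database, which holds MySQL's own user accounts and passwords:
# on the old server: export everything (prompts for the password)
mysqldump -u root -p --all-databases > all-databases.sql

# copy the dump across, then import it on the new server
scp all-databases.sql root@new-server.example.com:/root/
mysql -u root -p < all-databases.sql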
In any case, you should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines? What kind of access do you have to them? Their operating system would also help people answer you accurately.
In principle, yes.
If the hardware is similar (just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes).
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

RPC command to initiate a software install

I was recently working with a product from Symantec called Norton Endpoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products.
The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or an NT domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose the workstations that should receive the endpoint software, which is Symantec's virus and spyware protection suite.
Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software.
I really like how their product works, but I'm not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, and wanted to check here to see if anyone is familiar with what I'm talking about and knows how it's accomplished, or has ideas how it could be accomplished.
My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install.
What's interesting is that the workstations do this without any of the logged in users knowing what's going on until the very end where a reboot is necessary. At which point, the user gets a pop-up asking to reboot now or later, etc... My hunch is that the setup.exe program is popping this message.
To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to do some action or run some program.
My programming language is C/C++
Any thoughts/suggestions appreciated.
I was also looking into this, since I too want to remote deploy software. I chose to packet sniff pstools since it has proven itself quite reliable in such remote admin tasks.
I must admit I was definitely over-thinking this challenge. You have probably done your packet sniff by now and discovered the same things I have. I hope by leaving this post behind we can assist other developers.
This is how pstools accomplishes execution of arbitrary code:
It copies a system service executable to \\server\admin$ (you either have to already have local admin on the remote machine, or supply credentials). Once the file is copied, it uses the Service Control Manager API to make the copied file a system service and start it.
Obviously, this system service can now do whatever it wants, including binding to an RPC named pipe. In our case, the system service would install an MSI. To get confirmation of successful installation, you could either remotely poll a registry key or call an RPC function. Either way, you should remove the system service when you are done and delete the file. (psexec does not do this; I guess they don't want it to be used surreptitiously, and in that case leaving the service behind would at least give an admin a fighting chance of realizing someone had compromised their box.) This method does not require any preconfiguration of the remote machine, simply that you have admin credentials and that file sharing and RPC are open in the firewall.
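Mechanically, the same sequence can be sketched with built-in Windows tools. A rough illustration, assuming mydeploy.exe implements the Windows service interface; the target name, file name, and service name are placeholders, and you need admin rights on the target (admin$ maps to C:\Windows):
rem copy the payload to the remote admin share
copy mydeploy.exe \\target\admin$\mydeploy.exe

rem ask the remote Service Control Manager to register and start it
sc \\target create MyDeploySvc binPath= "C:\Windows\mydeploy.exe"
sc \\target start MyDeploySvc

rem clean up when the work is done
sc \\target delete MyDeploySvc
del \\target\admin$\mydeploy.exe
From C/C++, the same steps go through the OpenSCManager, CreateService, and StartService APIs, with the remote machine name as the first argument to OpenSCManager.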
I've seen demos in C# using WMI, but I don't like those solutions. File sharing and RPC are most likely to be open in firewalls. If they aren't, file sharing and remote MMC management of the remote server wouldn't work. WMI can be blocked and still leave these functional.
I've worked with a lot of software that does remote installations, and a lot of them are not as reliable as pstools. My guess is that this is because those developers are using other methods that are not as likely to be open at the firewall level.
The simple solution is often the most elusive. As always, my hat is off to the SysInternals folks. They are true hackers in the positive, old school meaning of the word!
This sort of functionality is also available in products like LANDesk and Altiris. You need a daemonized listener on the client side that listens for instructions/connections from the server. Once a connection is made, any number of things can happen: you can transfer files, kick off installation scripts, etc., usually transparently to any users on that box.
I've used the Twisted Framework (http://twistedmatrix.com) to do this with a small handful of Linux machines. It's Python and Linux, not Windows, but the premise is the same: a listening client accepts instructions from a server and executes them. Very simple.
This functionality can also be accomplished with VB/PowerShell scripts in a Windows-based domain.

Automated deployment of files to multiple Macs

We have a set of Mac machines (mostly PPC) that are used for running Java applications for experiments. The applications consist of folders with a bunch of jar files, some documentation, and some shell scripts.
I'd like to be able to push out new versions of our experiments to a directory on one Linux server, and then instruct the Macs to update their versions, or retrieve an entire new experiment if they don't yet have it. The directory layout looks like this:
../deployment/
../deployment/experiment1/
../deployment/experiment2/
and so on
I'd like to come up with a way to automate the update process. The Macs are not always on, and they have their IP addresses assigned by DHCP, so the server (which has a domain name) can't contact them directly. I imagine that I would need some sort of daemon running full-time on the Macs, pinging the server every minute or so, to find out whether some "experiments have been updated" announcement has been set.
Can anyone think of an efficient way to manage this? Solutions can involve either existing Mac applications, or shell scripts that I can write.
You might have some success with a simple Subversion setup; if you have the dev tools on your farm of Macs, then they'll already have Subversion installed.
Your script is as simple as running svn up on the deployment directory as often as you want and checking your changes in to the Subversion server from your machine. You can do this without any special setup on the server.
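A sketch of the client side, assuming a working copy checked out at a placeholder path; a crontab entry on each Mac pulls the latest experiments every five minutes:
# crontab entry (paths are placeholders)
*/5 * * * * cd /Users/lab/deployment && /usr/bin/svn up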
If you don't care about history and a version control system seems too "heavy", the traditional Unix tool for this is called rsync, and there's lots of information on its website.
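The rsync equivalent of the same polling update, pulled from the server; user, host, and paths are placeholders, and --delete removes local files that were removed upstream:
rsync -az --delete labuser@server.example.com:/deployment/ /Users/lab/deployment/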
Perhaps you're looking for a solution that doesn't involve any polling; in that case, maybe you could have a process that runs on each Mac and registers a local network Bonjour service; DNS-SD libraries are probably available for your language of choice, and it's a pretty simple matter to get a list of active machines in this case. I wrote this script in Ruby to find local machines running SSH:
#!/usr/bin/env ruby
require 'rubygems'
require 'dnssd'
handle = DNSSD.browse('_ssh._tcp') do |reply|
  puts "#{reply.name}.#{reply.domain}"
end
sleep 1
handle.stop
You can use AppleScript remotely if you turn on Remote Apple Events on the client machines. As an example, you can control programs like iTunes remotely.
I'd suggest that you put an update script on your remote machines (AppleScript or otherwise) and then use remote AppleScript to trigger running your update script as needed.
If you update often, then Jim Puls's idea is a great one. If you'd rather have direct control over when the machines start looking for an update, then remote AppleScript is the simplest solution I can think of.
