File sharing in Python - Linux

I have 4 computers in a network. I am curious whether anyone knows how I can make Python look for files in different folders across the network, or whether I can create a mount point that includes folders from different computers. The reason is that I am running a script that needs to start some daemons on different computers. For instance, I have the following folders from which I need to run:
/temp on 10.18.2.25
/opt on 10.18.2.35
/var-temp on 10.18.4.12
/spam on 10.18.2.17
I am using os.system('exec .....') to launch them, but it only works for the current directory.

It sounds like you don't merely want to execute files stored on different machines on one machine, but on the machines they're stored on. Mounting won't help with that.
You'd need a daemon already running on the target machine, which you can then tell over the network to execute a file. xinetd is a common choice.
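If the target machines already run an SSH daemon, one way to get the same effect without setting up xinetd is to drive them over SSH from the coordinating script. A rough sketch, assuming key-based SSH login already works; "./start_daemon.sh" is a placeholder for whatever actually needs to run:

    # Launch a daemon inside a specific directory on each remote host over SSH.
    # Assumes sshd and key-based login are already set up on every target;
    # "./start_daemon.sh" is a placeholder for the real command.
    import subprocess

    targets = [
        ("10.18.2.25", "/temp"),
        ("10.18.2.35", "/opt"),
        ("10.18.4.12", "/var-temp"),
        ("10.18.2.17", "/spam"),
    ]

    for host, directory in targets:
        # cd into the remote folder first so the daemon sees it as its cwd
        cmd = ["ssh", host, "cd {} && ./start_daemon.sh".format(directory)]
        subprocess.Popen(cmd)  # Popen so the launches don't block one another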

Related

Archiving files with Python

I wrote this script that runs several executable updates in a shared network folder. Several separate machines must run these updates.
I would like to archive these updates once they have been run. However, as you can see, there is a dilemma: if the first machine runs an update and archives the executable,
the rest of the connected machines won't run it, since it will no longer appear in the working directory. Any ideas?
It took me a while to understand what you meant by "archiving", but it probably means moving the executables to another folder on the network-shared mount.
You probably want to assign an ID to each machine, then have each of them create a marker file once it finishes the installation (e.g. an empty finished1.txt for the PC with ID 1, finished2.txt for PC 2, and so on). One "master" PC should then periodically scan for these files and, once it finds all the ones it expects, delete/move/archive the installers. It may also be a good idea to add timeout handling to the script on the master PC, so that if one of the PCs gets stuck, you get notified in some way.
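A minimal sketch of that master-side loop, assuming the share is mounted locally and the markers are literally named finished<ID>.txt; the paths, ID range and timeout below are made up for illustration:

    # Wait until every PC has dropped its finished<ID>.txt marker into the
    # shared folder, then move the installers into an archive subfolder.
    import os
    import shutil
    import time

    SHARE = "/mnt/updates"            # network share holding the installers
    ARCHIVE = os.path.join(SHARE, "archive")
    MACHINE_IDS = range(1, 6)         # PCs with IDs 1..5
    TIMEOUT = 60 * 60                 # give up after one hour

    os.makedirs(ARCHIVE, exist_ok=True)
    expected = {"finished%d.txt" % i for i in MACHINE_IDS}
    deadline = time.time() + TIMEOUT

    while time.time() < deadline:
        present = expected & set(os.listdir(SHARE))
        if present == expected:
            for name in os.listdir(SHARE):
                if name.endswith(".exe"):         # archive the installers
                    shutil.move(os.path.join(SHARE, name), ARCHIVE)
            break
        time.sleep(30)                            # poll every 30 seconds
    else:
        print("Timed out waiting for:", sorted(expected - present))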

NodeJS reading/writing files to network drive

I have a script that writes files to disk using fs createWriteStream.
What I am trying to achieve now is write those files to a shared network drive.
With a directory like so - //hostname/scratch/reece
I am running this script on Windows, but the application will sit on Ubuntu/RHEL when I deploy it.
This is a crucial part of this script so any suggestions on how I can write to a network drive would be great.
The same would go for reading from a network drive and sending that back via HTTP.
Keeping in mind there would likely be hundreds of thousands of requests to write to this drive through my nodejs api, so I would like to avoid relying on background processes to handle the file transfer.
Any ideas on approach?
You will have to connect the drive to your server using a technology appropriate for that particular OS (which may be different on Ubuntu vs. Windows). You can then address that storage through whatever mount mechanism the OS uses.
In Windows, you can use either a drive letter or a UNC path. On Ubuntu, perhaps a mounted network file system volume.
This is one case where you aren't likely to make the exact same setup work on Windows vs. Ubuntu. If you put the appropriate root path name into a config file, the rest of your code can probably be identical. Beyond this, it isn't clear what you're asking.
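The question is about Node, but the config-driven-root idea is language-agnostic; here is a sketch of it in Python to stay consistent with the rest of this page, with the config file name and keys made up:

    # Read the share's root path from a config file so that only the config
    # differs between Windows and Ubuntu; everything below the root is identical.
    # config.json and its "share_root" key are illustrative.
    import json
    import os

    with open("config.json") as f:
        cfg = json.load(f)

    # e.g. {"share_root": "//hostname/scratch/reece"}  on Windows (UNC path)
    #      {"share_root": "/mnt/scratch/reece"}        on Ubuntu (CIFS/NFS mount)
    root = cfg["share_root"]

    def write_result(relative_name, data):
        path = os.path.join(root, relative_name)
        with open(path, "wb") as out:
            out.write(data)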

What is the proper place to put named pipes on Linux?

I've got a few processes that talk to each other through named pipes. Currently, I'm creating all my pipes locally, and keeping the applications in the same working directory. At some point, it's assumed that these programs can (and will) be run from different directories. I need to create these pipes I'm using in a known location, so all of the different applications will be able to find the pipes they need.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could work nicely. I've read in a few places that it's cleared on system shutdown (and that's fine, I have no problem re-creating the pipes when I start back up), but I've also seen people say they lose files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place better suited for application-specific storage? Or would /tmp be the place to keep these (since they are, after all, temporary)?
I've seen SaltStack using /var/run. The only problem is that you need root access to write into that directory, but let's say you are going to run your process as a system daemon. SaltStack creates /var/run/salt at installation time and changes the owner to salt so that later on it can be used without root privileges.
I also checked the Filesystem Hierarchy Standard, and even though it is not strictly binding here, it says:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for opening sockets and putting .pid files there, instead of /var/run, where only root has access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can also be used. For more info check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation with proper permissions and put their sockets/pid files in there, while daemons run by the user (such as pulseaudio) should use /run/user/<userid>/. Another option is /tmp or /var/tmp.
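A small sketch of that per-user approach, using the XDG_RUNTIME_DIR environment variable (which systemd points at /run/user/<uid>) and falling back to /tmp; the "myapp" subdirectory and pipe name are illustrative:

    # Create a named pipe in the per-user runtime directory when available,
    # otherwise fall back to /tmp. Directory and pipe name are placeholders.
    import os

    runtime_dir = os.environ.get("XDG_RUNTIME_DIR", "/run/user/%d" % os.getuid())
    if not os.path.isdir(runtime_dir):
        runtime_dir = "/tmp"

    pipe_dir = os.path.join(runtime_dir, "myapp")
    os.makedirs(pipe_dir, exist_ok=True)

    pipe_path = os.path.join(pipe_dir, "control.fifo")
    if not os.path.exists(pipe_path):
        os.mkfifo(pipe_path, 0o600)   # only this user can read/write the pipe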

Is there any jsch ChannelSftp's function work like command 'cp'

These days I am working with jsch-0.1.41, operating on resources on a remote Linux server via ChannelSftp. I find that there is no function providing functionality similar to the shell command "cp". I want to copy a file from one directory to another, where both directories are on the remote Linux server.
If there is anything wrong in my description, please point it out. Thanks.
The SFTP protocol doesn't offer such a command, and thus also JSch's ChannelSftp doesn't offer it.
You have basically two choices:
Use a combination of get and put, i.e. download the file and upload it again. You can do this without local storage (simply connect one of the streams to the other), but this still moves the data twice through the network (and encrypts/decrypts it twice) when it isn't really necessary. Use this only if the other way doesn't work.
Don't use SFTP, but use an exec channel to execute a copy command on the server. On Unix servers this command is usually named cp, on Windows servers likely copy. (This will not work if the server's administrator has limited your account to SFTP-only access.)
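The question is about JSch (Java), but the exec-channel idea is the same in any SSH library; here is a sketch in Python with Paramiko, to match the other examples on this page, with host, credentials and paths made up:

    # Option 2: run `cp` on the server over an exec channel instead of SFTP.
    # Sketched with Paramiko; host, credentials and paths are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("server.example.com", username="user", password="secret")

    # Quote or sanitize the paths if they may contain spaces or user input.
    stdin, stdout, stderr = client.exec_command(
        "cp -p /remote/src/file.txt /remote/dst/")
    exit_status = stdout.channel.recv_exit_status()   # wait for cp to finish
    if exit_status != 0:
        print("cp failed:", stderr.read().decode())

    client.close()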

Collecting Files From Many Machines?

I have many machines (20+) connected in a network. Each machine accesses a central database, queries it, processes the information, and then writes the results to files on its local hard drive.
Following the processing, I'd like to be able to 'grab' all these files (from all the remote machines) back to the main machine for storage.
I thought of three possible ways to do so:
(1) rsync to each remote machine from the main machine, and 'ask' for the files
(2) rsync from every remote machine to the main machine, and 'send' the files
(3) create a NFS share on each remote machine, to which the main machine can access and read the files (no 'rsync' is needed in such a case)
Is one of the ways better than others? are there better ways I am not aware of?
All machines use Ubuntu 10.04 LTS. Thanks in advance for any suggestions.
You could create one NFS share on the master machine and have each remote machine mount that. Seems like less work.
Performance-wise, it's practically the same. You are still sending files over a (relatively) slow network connection.
Now, which approach you take depends on where you want to handle errors or irregularities. If you want the responsibility to lie with the processing computers, have them rsync back to the main one; the other way round if you want the main machine to do the work of assembling the data and making sure everything is in order.
As for the shared space approach, I would create a share on the main machine, and have the others write to it. They can start as soon as the processing finishes, ensure the file is transferred correctly, and then verify checksums or whatever.
I would prefer option (2), since you know when the processing has finished on each client machine. You could use the same SSH key on all client machines, or collect the different keys in the authorized_keys file on the main machine. It's also more robust: if the main machine is unavailable for some reason, you can still sync the results later, whereas in the NFS setup the clients would be blocked.
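A minimal sketch of option (2) from the client side, assuming key-based SSH to the main machine is already set up; the host name and paths below are placeholders, and the collection directory is assumed to already exist on the main machine:

    # Each client pushes its results to the main machine as soon as its own
    # processing finishes. Host name and paths are placeholders; assumes
    # /srv/collected already exists on the main machine.
    import socket
    import subprocess

    MAIN_HOST = "main.example.com"
    LOCAL_RESULTS = "/var/local/results/"                      # trailing slash: copy contents
    REMOTE_DIR = "/srv/collected/%s/" % socket.gethostname()   # one folder per client

    def push_results():
        # --remove-source-files deletes local copies only after a successful transfer
        subprocess.check_call([
            "rsync", "-az", "--remove-source-files",
            LOCAL_RESULTS,
            "%s:%s" % (MAIN_HOST, REMOTE_DIR),
        ])

    if __name__ == "__main__":
        push_results()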
