How to fetch network card configs remotely from multiple Linux machines? - linux

I need a tool/script to fetch network card configurations from multiple Linux machines, mostly Red Hat Enterprise Linux 5. I only know some basic bash, and I need something that can be run remotely, pulling server names from a CSV. It also needs to be run quickly and easily by non-technical types from a Windows machine. I've found WBEM/CIM/SBLIM, but I'd rather not write a whole C++ application. Can anyone point me to a tool or script that could accomplish this?

For Red Hat Enterprise Linux servers, you likely just need to take a copy of the files in /etc/sysconfig/networking/devices/ from each server. You can use an sftp client to accomplish that over ssh.
(The files are just easy-to-read text config files containing the network device configuration)
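A rough sketch of pulling those files with scp over ssh, assuming the server names sit in the first column of a servers.csv file and that key-based SSH access is already set up for an "admin" account (both assumptions):
#!/bin/bash
# Pull the network config files from every host listed in servers.csv
# (hostname assumed to be in the first column).
while IFS=, read -r host _rest; do
    mkdir -p "configs/$host"
    scp -r "admin@$host:/etc/sysconfig/networking/devices/" "configs/$host/"
done < servers.csv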

Can you give more details as to what information you need to pull? The various parameters to ifconfig give quite a lot of information about a Linux machine's network card configuration, so if you can do it that way it will be very easy. Simply write a script that converts the CSV into something white-space delimited, and then you can do something like:
#!/bin/bash
# Build a whitespace-delimited host list from the CSV (assumed here to be
# servers.csv with the hostname in the first column).
HOSTS=$(cut -d, -f1 servers.csv)
for host in $HOSTS ; do
    CARDINFO=$(ssh "$host" 'ifconfig')
    # Do whatever processing you need on $CARDINFO here
    echo "$host: $CARDINFO"
done
That's a very rough sketch of the pseudocode. You'll also need to set up passwordless SSH on the hosts you want to access, but that's easy to do on Red Hat.
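Setting up the keys is a one-off job along these lines (the login name and servers.csv are assumptions):
# Generate a key pair on the machine that will run the script (accept the
# defaults, leave the passphrase empty for unattended use).
ssh-keygen -t rsa
# Copy the public key to each server; this prompts for the password once.
for host in $(cut -d, -f1 servers.csv); do
    ssh-copy-id "user@$host"
done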

If you want to use WBEM/CIM for that (as mentioned in your original question), and you prefer a scripting environment over a programming language such as C/C++/Java, then there are PyWBEM and PowerCIM as two ways to do that in Python. If it needs to be bash etc., then there are command-line clients (such as cimcli from the OpenPegasus project or wbemcli from the SBLIM project) whose output you could parse. Personally, I would prefer a Python-based approach using PyWBEM. It is very easy to use: connecting to a CIM server is one line, and enumerating the CIM instances of a class is one more.
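If you do go the command-line route instead, a query with the SBLIM wbemcli client might look roughly like this (the namespace, class name, port, and credentials are assumptions and depend on which providers are installed):
#!/bin/bash
# Enumerate instances of a network-related CIM class on each host listed
# in servers.csv; CIM_EthernetPort and root/cimv2 are assumptions.
for host in $(cut -d, -f1 servers.csv); do
    echo "== $host =="
    wbemcli ei "http://user:password@$host:5988/root/cimv2:CIM_EthernetPort"
done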
On the Linux systems you want to query, a CIM server (tog-pegasus or sfcb) needs to be running, along with the right CIM provider packages (sblim). This approach has the advantage that your interface will be the same regardless of which Linux distribution you are using. Parsing config files, by contrast, often depends on the distribution, and I have seen the formats change across versions.
One main purpose of CIM is to provide reliable interfaces that are consistent across different types of environments and that change only compatibly over time.
Last but not least, using CIM allows you to get away without having to install any agent software on the system you want to inspect (as long as you can ensure that the CIM server is running).
Andy

Related

Automating the process of setting up a new server

I'm maintaining the servers of a web game. Whenever we add a new server to our game, I have to configure many environment details and install software on the new machine (for example, testing whether some ports of the new machine can be reached from other places, installing mysql-client, pv..., copying the game server files from the other machine, and changing the mysql server connection URL).
So my question is: "How can I automate the whole process of setting up a new server?" Most of the work I do is repetitive, and I don't want to redo this kind of job whenever a new machine comes in.
Is there a tool that allows me to save the state of a Linux machine, so that next time we buy a new server I can copy the state of an old Linux machine to the new one? I think this is one way to automate the process of setting up a new game server.
I've also tried using some *.sh scripts to automate the process, but it's not always possible to get the return value of every command I execute. This is why I've come here to ask for help.
Have you looked at Docker, Ansible, Chef, or Puppet?
With Docker you can build a new container image by describing the required setup steps in a Dockerfile, and you can easily move the image between machines.
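As a rough illustration (the image name and hostname are hypothetical), an image built from a Dockerfile can be shipped to a new machine even without a registry:
# Build the image from the Dockerfile in the current directory, then
# stream it to the new server over ssh and load it there.
docker build -t game-server:1.0 .
docker save game-server:1.0 | ssh new-server 'docker load'
# On the new server:  docker run -d game-server:1.0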
Ansible, Chef, and Puppet are systems-management automation tools.
I doubt you'll find such a tool to automate an entire customisation process, because it's rather difficult to define/obtain a one-size-fits-all Linux machine state, especially if the customisation includes logical/functional sequences.
But with good scripting you can obtain a possibly more reliable customisation from scratch (rather than copying it from another machine). I'd recommend a higher-level scripting language, though; IMHO regular bash/zsh/csh scripting is not good/convenient enough. I prefer Python, which gives easy access to every command's return code, stdout, and stderr, and with the pexpect module it can drive interactive commands.
There are tools to handle specific types of customisation (software package installation, config files), but not everything I needed, so I didn't bother and went straight for custom scripts (more work, but total control). Personal preference, though; others will advise against that.

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to clone the disk from the old server onto the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
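A rough sketch of the netcat approach (device names, port, and hostname are assumptions, and netcat option syntax varies between versions); boot both machines from rescue/live media so the disks are not mounted while copying:
# On the NEW server (receiving side): listen and write to its disk.
nc -l -p 9000 | dd of=/dev/sda bs=64k
# On the OLD server (sending side): read the disk and send it across.
dd if=/dev/sda bs=64k | nc new-server 9000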
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, telnet, MysqlAdmin or any other RDBMS client, export a dump file, then log in to the new server's SQL system and import that dump file.
In any case, you should give more details about both servers so we can help you: for example, are they shared hosting or dedicated machines, and what kind of access do you have to them? Knowing their operating system would also help people reply to you accurately.
In principle, yes.
If the hardware is similar (i.e. just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes).
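A rough sketch of that file-level copy (hostname, exclude list, and boot device are assumptions, and it presumes the new server already has a minimal install to receive into):
# Copy the old system's files to the new server over ssh, skipping
# pseudo-filesystems and mount points that should not be transferred.
rsync -aAXH --numeric-ids \
      --exclude=/proc --exclude=/sys --exclude=/dev \
      --exclude=/tmp --exclude=/mnt --exclude=/media \
      / root@new-server:/
# Afterwards, on the new server, reinstall the boot loader on its disk:
grub-install /dev/sda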
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

Linux file synchronization between computers

I'm looking for software which will allow me to synchronize files in specific folders between my Linux boxes. I have searched a lot of topics, and what I've found is Unison. It looks pretty good, but it is no longer under development and does not allow me to see file change history.
So the question is - what is the best linux file synchronizer, that:
(required) will synchronize only selected folders
(required) will synchronize computers at given time (for example each hour)
(required) will be intelligent - it will remember what was deleted and when, and will ask me whether I want to delete it on the remote machine too.
(optionally) will keep track of changes and allow to see history of changes
(optionally) will be multiplatform
Rsync is probably the de facto standard.
I see Unison is based on the rsync algorithm -- not sure if rsync alone can achieve number 3 above.
Also, see this article with detailed information about rsync, including available GUIs for it.
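For requirements 1 and 2 (selected folders, on a schedule), a minimal rsync setup could look like this (paths, hostname, and schedule are assumptions); requirement 3 would still need something on top of it:
#!/bin/sh
# One-way sync of a selected folder over ssh. Run with -n (dry run) first
# to preview what --delete would remove on the remote side.
rsync -az --delete -e ssh /home/me/projects/ otherbox:/home/me/projects/
# Example crontab entry to run it hourly:
#   0 * * * * /home/me/bin/sync-projects.sh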
While I agree rsync is the de facto Swiss Army knife for Linux users, I found two other projects more interesting, especially for a use case like mine: two workstations in different locations plus a laptop, all three machines used for work, so I felt the pain here. The first is a really nice project called Syncthing:
https://syncthing.net/
I run it on a public server with VPN access to which my machines are always connected, and it simply works. It has a GUI for monitoring purposes (basic, but enough info is available).
The second is paid, but has similar functionality built in on top:
https://www.resilio.com/
Osync is probably what you're looking for (see http://www.netpower.fr/osync )
Osync is actually rsync based but will handle number 3 above without trouble.
Number 4, keeping track of modified files, can be more or less achieved by adding the --verbose parameter, which will log file updates.
Actually, only number 5 won't work: Osync runs on most Unix flavors, but not Windows.

Running external code in a restricted environment (linux)

For reasons beyond the scope of this post, I want to run external (user submitted) code similar to the computer language benchmark game. Obviously this needs to be done in a restricted environment. Here are my restriction requirements:
Can only read/write to current working directory (will be large tempdir)
No external access (internet, etc)
Anything else I probably don't care about (e.g., processor/memory usage, etc).
I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal.
Now, the questions:
Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless.
What about if we force the code itself to be submitted, and compile it ourselves. Does that make the problem easier or harder?
Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?)
Something I missed?
Possibly useful ideas:
rssh. Doesn't help with compiled code though
Using a VM with rollback after code finishes (can network be configured so there is a local bridge but no WAN bridge?). Doesn't work on cluster.
I would examine and evaluate both a VM and a special SELinux context.
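One concrete way to try the SELinux-context route is the sandbox utility from policycoreutils (it may not be available on RHEL 5.x, and the directories here are assumptions):
# Run the submitted binary in the SELinux sandbox domain, which by default
# does not allow network access, with private home and tmp directories.
mkdir -p /scratch/job42/home /scratch/job42/tmp
sandbox -M -H /scratch/job42/home -T /scratch/job42/tmp ./submitted-binary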
I don't think you'll be able to do what you need with simple filesystem protection, because you won't be able to prevent access to the syscalls that give access to the network etc. You can probably use AppArmor to do what you need, though: it uses kernel-level profiles to restrict what the foreign binary can access (files, network, capabilities).

Automated deployment of files to multiple Macs

We have a set of Mac machines (mostly PPC) that are used for running Java applications for experiments. The applications consist of folders with a bunch of jar files, some documentation, and some shell scripts.
I'd like to be able to push out new versions of our experiments to a directory on one Linux server, and then instruct the Macs to update their versions, or retrieve an entire new experiment if they don't yet have it.
../deployment/
../deployment/experiment1/
../deployment/experiment2/
and so on
I'd like to come up with a way to automate the update process. The Macs are not always on, and they have their IP addresses assigned by DHCP, so the server (which has a domain name) can't contact them directly. I imagine that I would need some sort of daemon running full-time on the Macs, pinging the server every minute or so, to find out whether some "experiments have been updated" announcement has been set.
Can anyone think of an efficient way to manage this? Solutions can involve either existing Mac applications, or shell scripts that I can write.
You might have some success with a simple Subversion setup; if you have the dev tools on your farm of Macs, then they'll already have Subversion installed.
Your script is as simple as running svn up on the deployment directory as often as you want and checking your changes in to the Subversion server from your machine. You can do this without any special setup on the server.
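A minimal sketch of the client side (paths are assumptions): a small script each Mac runs from cron or launchd:
#!/bin/sh
# Update the local working copy of the deployment directory.
# Example crontab entry to check every 10 minutes:
#   */10 * * * * /Users/Shared/bin/update-experiments.sh
svn up /Users/Shared/deployment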
If you don't care about history and a version control system seems too "heavy", the traditional Unix tool for this is called rsync, and there's lots of information on its website.
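The rsync equivalent, pulled from each Mac on a schedule, might look roughly like this (hostname, login, and paths are assumptions):
#!/bin/sh
# Mirror the server's deployment directory, removing local files that
# were deleted upstream.
rsync -az --delete deploy@server.example.com:/srv/deployment/ /Users/Shared/deployment/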
Perhaps you're looking for a solution that doesn't involve any polling; in that case, maybe you could have a process that runs on each Mac and registers a local network Bonjour service; DNS-SD libraries are probably available for your language of choice, and it's a pretty simple matter to get a list of active machines in this case. I wrote this script in Ruby to find local machines running SSH:
#!/usr/bin/env ruby
require 'rubygems'
require 'dnssd'

handle = DNSSD.browse('_ssh._tcp') do |reply|
  puts "#{reply.name}.#{reply.domain}"
end
sleep 1
handle.stop
You can use AppleScript remotely if you turn on Remote Apple Events on the client machines. As an example, you can control programs like iTunes remotely.
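For example (hostname, credentials, and the target application are assumptions), a remote Apple event can be sent from another Mac with osascript, once Remote Apple Events is enabled on the target machine:
osascript <<'EOF'
tell application "iTunes" of machine "eppc://user:password@lab-mac.local"
    playpause
end tell
EOF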
I'd suggest that you put an update script on your remote machines (AppleScript or otherwise) and then use remote AppleScript to trigger running your update script as needed.
If you update often, then Jim Puls's idea is a great one. If you'd rather have direct control over when the machines start looking for an update, then remote AppleScript is the simplest solution I can think of.
