Windows to Linux copying network script

I need to improve, or possibly replace entirely, my method for copying files over a private network from multiple Windows machines to a central Linux machine. The script below runs as a cron job every 5 minutes and copies data from roughly 10 Windows machines, each with a shared folder, to the central Linux machine, where it is collected each day. So in theory, by the end of the day the Linux machine should have all the data that has changed on the Windows machines.
#!/bin/sh
# File listing "IP username" pairs, one Windows machine per line
HOSTLIST='/home/user/Documents/user.ip'

while read -r IPADDY USERNAME; do
    # Create the mount point and destination directory if they don't exist yet
    mkdir -p "/mnt/$USERNAME"
    mkdir -p "/home/user/Documents/$USERNAME"
    # Mount the Windows share (mount -t cifs is the modern replacement for smbmount)
    smbmount "//$IPADDY/$USERNAME" "/mnt/$USERNAME" -o username=usera,password=password,rw,uid=user
    # Pull only PDFs, text files and the "issues" directory; note that with --exclude='*'
    # rsync will not descend into unmatched subdirectories (add --include='*/' if needed)
    rsync -zrv --progress --include='*.pdf' --include='*.txt' --include='issues' --exclude='*' \
        "/mnt/$USERNAME/" "/home/user/Documents/$USERNAME/"
done < "$HOSTLIST"
The script mostly works, but it doesn't seem to be the best method: a lot of the time data is not copied across, or not all of it is copied correctly.
Do you think this is the best approach, or can someone point me to a better solution?

How about a git repository? Wouldn't that be easier? You could also easily track the changes.
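If you go that route, a minimal sketch (the repository locations, share paths, and schedule below are assumptions, not something from the question): keep one bare repository per machine on the Linux box, make each Windows shared folder a working copy, and have a scheduled task commit and push:
# On the central Linux machine: one bare repository per Windows machine (paths are hypothetical)
git init --bare /home/user/Documents/repos/machine01.git

# On each Windows machine (e.g. from Git Bash), inside the existing shared folder:
cd /c/shared
git init
git remote add origin user@linuxbox:/home/user/Documents/repos/machine01.git

# Scheduled task on the Windows machine, e.g. every 5 minutes:
git add -A && git commit -m "auto sync" && git push origin master
Each push then gives you a history of exactly what changed on each machine.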

Related

Is it possible to script starting up a FreeBSD VM, running a program in it, and fetching the result?

I have a library that I'd like to test on FreeBSD. My CI setup doesn't have any FreeBSD systems, and adding them would be difficult, but I can spin up a VM inside my CI script. (In fact, I already do this to test on more exotic Linux kernel versions.)
For Linux, this is pretty easy: grab a pre-built machine image from some distro site, and use cloud-init to inject a first-run script, done.
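Roughly, that Linux flow looks like this sketch (the image URL is only an example; cloud-localds comes from the cloud-image-utils package, and the injected script path is hypothetical):
# Fetch a cloud image and build a cloud-init seed from a user-data script
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
cat > user-data <<'EOF'
#!/bin/sh
# first-boot script injected via cloud-init (run-tests.sh is a placeholder)
/root/run-tests.sh; poweroff
EOF
cloud-localds seed.img user-data

# Boot it headless under QEMU
qemu-system-x86_64 -m 2048 -nographic \
    -drive file=jammy-server-cloudimg-amd64.img,format=qcow2 \
    -drive file=seed.img,format=raw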
Is it possible to do the same thing with FreeBSD? I'm looking for an automated way to take a standard FreeBSD machine image (e.g. downloaded from https://freebsd.org), boot it, and inject a program to run. The tricky part is that it should be entirely automated: I don't want to have to manually click through an installer every time FreeBSD makes a new release.
Out of the box, there is no option like cloud-init but you could create your own image and use firstboot for example, this script is used to bootstrap a VM with saltstack in AWS:
#!/bin/sh

# KEYWORD: firstboot
# PROVIDE: set_hostname
# REQUIRE: NETWORKING
# BEFORE: login

. /etc/rc.subr

name="set_hostname"
rcvar=set_hostname_enable
start_cmd="set_hostname_run"
stop_cmd=":"

export AWS_ACCESS_KEY_ID=key
export AWS_SECRET_ACCESS_KEY=secret
export AWS_DEFAULT_REGION=region

TAG_NAME="Salt"
INSTANCE_ID=$(/usr/local/bin/curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(/usr/local/bin/curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
TAG_VALUE=$(/usr/local/bin/aws ec2 describe-tags --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=$TAG_NAME" --region ${REGION} --output=text | cut -f5)

set_hostname_run()
{
    hostname ${INSTANCE_ID}
    sysrc hostname="${INSTANCE_ID}"
    sysrc salt_minion_enable="YES"
    echo ${INSTANCE_ID} > /usr/local/etc/salt/minion_id
    pw usermod root -c "root on ${INSTANCE_ID}"
    if [ ! -z "${TAG_VALUE}" ]; then
        echo "node_type: ${TAG_VALUE}" > /usr/local/etc/salt/grains
    fi
    service salt_minion start
}

load_rc_config $name
run_rc_command "$1"
To create your own images you could use this script as a starting point: https://github.com/fabrik-red/images/blob/master/fabrik.sh#L124, more info here: https://fabrik.red/post/creating-the-image/
You can also simply install FreeBSD in VirtualBox, configure your firstboot scripts, test, and when you are happy with the results, export the image. Just make sure before exporting that /firstboot exists (touch /firstboot): the marker is removed after the first boot, so if it is missing when you export, the scripts will not be called.
After you have created the image you can use it multiple times; there is no need to create a new "custom" VM every time. It all depends on the scripts you use to bootstrap and load on the "firstboot".
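Re-arming it just before export might look like this (a small sketch; set_hostname_enable matches the rc script shown above):
# Inside the VM, just before exporting the image:
touch /firstboot                   # the marker file is consumed after the first boot
sysrc set_hostname_enable="YES"    # enable the firstboot rc script shown above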
I looked into this a bit more, and it turns out that it is possible, though it's quite awkward.
There are three kinds of official FreeBSD releases: pre-installed VMs, ISO installers, and USB stick installers.
The official pre-installed VM images don't offer any way to script them from outside by default. And they use the FreeBSD UFS filesystem, which isn't modifiable from any common OS. (Linux can mount UFS read-only, and has some code for read-write support, but the read-write support is disabled by default and requires a custom kernel.) So there's no easy way to programmatically modify them unless you already have a FreeBSD install.
The USB stick installers also use UFS filesystems, so that's out. So do the pre-built live CDs I found, like mfsBSD (the CD itself is iso9660, but it's just a container for a big UFS blob that gets unpacked into memory).
So that leaves the CD installers. It turns out that these use iso9660 for their actual file layout, and we don't need FreeBSD to work with iso9660!
So what you have to do is:
Download the CD installer
Modify the files on it to do the install without user interaction, apply some custom configuration to the new system, and then shut down
Use your favorite VM runner to boot up the CD with a blank hard drive image, and let it run to install FreeBSD onto that hard drive
Boot up the hard drive, and it will do whatever you want.
There are a ton of fiddly details that I'm glossing over, but that's the basic idea. There's also a fully-worked example here: https://github.com/python-trio/trio/pull/1509/
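As a rough sketch of those steps under QEMU (the release number, disk device, and post-install script are placeholders; as I understand it, bsdinstall runs unattended when the media contains /etc/installerconfig):
# Unpack the installer ISO (bsdtar/libarchive can read iso9660 images)
mkdir isodir
bsdtar -C isodir -xf FreeBSD-13.2-RELEASE-amd64-disc1.iso

# Add an installerconfig so bsdinstall runs without interaction
cat > isodir/etc/installerconfig <<'EOF'
PARTITIONS=ada0
DISTRIBUTIONS="kernel.txz base.txz"
#!/bin/sh
# post-install section (runs chrooted into the new system): stage the test setup, then power off
sysrc sshd_enable="YES"
poweroff
EOF

# Repack the ISO (the xorriso/mkisofs boot flags are omitted here), then install onto a blank disk
qemu-img create -f qcow2 freebsd.qcow2 10G
qemu-system-x86_64 -m 2048 -nographic \
    -drive file=freebsd.qcow2,format=qcow2 \
    -cdrom FreeBSD-custom.iso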
To "inject" your software you generally need to be able to write to the filesystem, and the most reliable way of doing that is to run the system itself. In other words, have a FreeBSD VM to create FreeBSD VMs - you can either build them locally (man 7 release), or fetch VM images from http://download.FreeBSD.org, mount their rootfs somewhere, put your software wherever you need it, and make it execute from the mounted filesystem's /etc/rc.local.
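On a FreeBSD host, that injection step might look roughly like this (the image name and partition index are assumptions; check the layout with gpart show):
# Attach the downloaded raw VM image as a memory disk and mount its UFS root
mdconfig -a -t vnode -f FreeBSD-13.2-RELEASE-amd64.raw
gpart show md0                                  # find the freebsd-ufs partition
mount /dev/md0p4 /mnt

# Drop in the program (run-tests.sh is a hypothetical name) and run it at boot via rc.local
cp run-tests.sh /mnt/usr/local/bin/run-tests.sh
echo '/usr/local/bin/run-tests.sh' >> /mnt/etc/rc.local

umount /mnt
mdconfig -d -u 0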

Automating build installation process

I work on a SaaS-based product of my company, which is hosted on a private cloud. Every time a fresh BOM package is made available by the DEV team in the common share folder, we (the testing team) install the build on our application servers (3 multi-node servers, one primary and the other two secondary).
The build installation is done entirely manually on the three app servers (Linux machines); the steps we follow are as below:
Stop all the app servers
Copy the latest build from a code repository server (copy the .zip build file)
Unzip the contents of the file onto a folder on the app server (using the unzip command)
Run a backup of the existing running build on all three servers (the command is something like ant -f primaryBackup.xml, ant -f secondaryBackup.xml)
Then run the install on all three servers (the command is something like ant -f primaryInstall.xml, ant -f secondaryInstall.xml)
Then restart all the servers and check that the latest build was applied successfully.
Question: I want to automate this entire process, such that I only need to give the latest build number to be installed and the script takes care of the whole installation.
Presently I don't understand how this can be done. Where should I start? Is this feasible? Will a shell script of the entire process be the solution?
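For illustration, a minimal sketch of the steps above as a shell script, taking the build number as its only argument (every path, host name, service name, and ant target here is a placeholder):
#!/bin/sh
# Usage: ./install_build.sh <build-number>   (all names below are placeholders)
set -e
BUILD="$1"
REPO=/mnt/code-repo/builds                  # common share holding the .zip BOM packages
SERVERS="primary secondary1 secondary2"

# 1. Stop all app servers
for HOST in $SERVERS; do
    ssh "$HOST" "sudo service appserver stop"
done

# 2-5. Copy, unzip, back up and install on each server
for HOST in $SERVERS; do
    scp "$REPO/build-$BUILD.zip" "$HOST:/opt/app/"
    ssh "$HOST" "cd /opt/app \
        && unzip -o build-$BUILD.zip -d build-$BUILD \
        && ant -f primaryBackup.xml \
        && ant -f primaryInstall.xml"       # use the secondary*.xml targets on the secondaries
done

# 6. Restart and verify
for HOST in $SERVERS; do
    ssh "$HOST" "sudo service appserver start"
done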
There are many build automation/continuous deployment tools out there that would help you with a solution for automating your deployment pipeline. Some of the more popular configuration automation tools out there are puppet, chef, ansible, and saltstack. I only have experience with ansible and chef but my interpretation has been that chef is the more "user-friendly" option. I would start there... (chef uses the ruby language and ansible uses python).
I can answer specific questions about this, but your original question is really open-ended and broad.
free tutorials: https://learn.chef.io/
EDIT: I do not suggest provisioning your servers/deployments using bash scripts... that is generally messy, and as your automation grows (which it likely will), your code will gradually become unmanageable. Using something like chef, you could set periodic checks for new code in your repositories and deploy when new code is detected (or upon certain conditions being met). You could write straight bash code within a ruby block that will remotely stop/start a service like this (example):
bash 'Copying the conf file' do
  cwd "current/working/directory"
  user 'user_name'
  code <<-EOH
    nohup ./startservice.sh &
    sleep 2m
    nohup ./startservice.sh &
    sleep 3m
  EOH
end
To copy code from git, for example... I am assuming GitHub in this example, as I do not know where your code resides:
git "/opt/mysources/couch" do
  repository "git://git.apache.org/couchdb.git"
  reference "master"
  action :sync
  ssh_wrapper "/some/path/git_wrapper.sh"
end
Let's say that your code is somewhere else... Bamboo or Jenkins, for example... there is a ruby/chef resource for it, or some way to call it using straight ruby code.
This is something that "you" and your team will have to figure out a strategy for.
You could untar a file with a tar resource like so:
tar_package 'http://pgfoundry.org/frs/download.php/1446/pgpool-3.4.1.tar.gz' do
  prefix '/usr/local'
  creates '/usr/local/bin/pgpool'
end
or use the generic Linux command like so:
execute 'extract_some_tar' do
  command 'tar xzvf somefile.tar.gz'
  cwd '/directory/of/tar/here'
  not_if { File.exist?("/file/contained/in/tar/here") }
end
You can start up the servers in the way I wrote the first block of code (assuming they are services); if you need to restart the actual servers, then you can just run init 6 or something similar.
This is just an example of the flexibility these utilities offer.

Running out of space on Linux, copying 12GB of files to a 15GB file system

I have two virtual Linux servers, one for development and one in production, a typical setup one would expect.
On the development server I have files that I need to copy to the production server, amounting to 12GB according to the "du -h" command. The production server has 15GB free, according to the "df -h" command. However, when trying to copy the files across, the production server ran out of file space!
Whilst I know that both commands round the answer up or down, there should still be over 2GB free at the end (12.4GB into 14.5GB). Equally, there could be nearly 4GB (11.5GB into 15.4GB). (For some reason I get slightly different answers depending on the user, but still enough to fit the files - supposedly.)
Both servers are running 64 bit Ubuntu 16.04 LTS and have the file system set as EXT4.
I'm using SCP to copy the files across, since I don't have enough space to contain a zipped file and its unzipped contents.
So what am I missing?
Please try the same with the rsync command in a terminal. Here is an example:
$ rsync -a /some/path/to/src/ /other/path/to/dest/
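Since the source and destination here are separate servers, the same idea over SSH (the hostname is a placeholder) would be:
$ rsync -aPz /some/path/to/src/ user@production-server:/other/path/to/dest/
The -P flag shows progress and allows resuming partial transfers, and -z compresses the data in transit.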

Git, Windows, Linux & NTFS: "index file open failed"

I created a git repo in Windows 7 on an NTFS partition, and when opening it in Linux (Ubuntu 12 x64, dual-boot setup) I get the index file open failed error. How can I figure out what's wrong? The partition is mounted read-write and I've never had any other problems. Does git store data in a different format on Windows vs. Linux, so that I need to do either a clone or some conversion? I'd really like to be able to work on the same repo in both OSs without cloning around...
Clarification: I also get cat: index: Input/output error
when running the command cat index in the .git dir, so it is an NTFS-related problem... but I've never had it before until using git in a cross-system way, and I've run other apps from NTFS partitions and copied files around...
The .git/index file is a binary file which describes the current workdir. Perhaps a git fsck is able to fix it up (move the one you have out of the way to make sure it isn't lost while you fool around, or do any experiments on a copy of the repository). You might try to clone the repository locally; the clone might get a good copy of the file, which you could then copy over the broken one.
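Concretely, a sketch of that recovery (run it against a copy of the repository if in doubt; the path is a placeholder):
cd /path/to/repo
mv .git/index .git/index.broken    # keep the damaged index around
git fsck --full                    # check the object database for corruption
git read-tree HEAD                 # rebuild the index from HEAD; the working tree is untouched
git status                         # unchanged files should now show as unmodified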
Possibly permission problems? Back up what is relevant, defragment the drive, and run hardware checks (it might be a broken or failing disk!).
Either your Linux NTFS driver is broken, or you have filesystem corruption, or both. Reboot to Windows and run the disk checking utility, then see how things stand when it finishes.

Export SVN repository over FTP to a remote server

I'm using following command to export my repository to a local path:
svn export --force svn://localhost/repo_name /share/Web/projects/project_name
Is there any reasonably easy way (Linux newbie here) to do the same over the FTP protocol, i.e. to export the repository to a remote server?
The last parameter of svn export AFAIK has to be a local path, and AFAIK this command does not support giving paths in the form of URLs, like for example:
ftp://user:pass@server/path/
So, I think some script is needed here to do the job.
I have asked some people about that, and was advised that the easiest way is to export the repository to a local path, transfer it to the FTP server, and then purge the local path. Unfortunately I got stuck after the first step (the export to a local path! :) So, the supporting question is whether it can be done on-the-fly, or whether it really has to be split into two steps: export + FTP transfer?
Someone also advised me to set up a local SVN client on the remote server and do a simple checkout/update from my repository. But that is a solution only if everything else fails, as I want to extract the pure repository structure, without the .svn files I would get by going that way.
BTW: I'm using a QNAP TS-210, a simple NAS device with a very limited Linux on board. So many command-line tools, as well as GUIs, are not available to me.
EDIT: This is the second question in my "chain". Even if you help me to succeed here, I won't be able to automate this job (as I'm willing to) without your help on the question "SVN: Force svn daemon to run under different user". Can someone also take a look there, please? Thank you!
Well, if you're using Linux, you should be able to mount an ftpfs. I believe there was a module in the Linux kernel for this. Then I think you would also need FUSE.
Basically, if you can mount an ftpfs, you can write your svn export directly to the mounted folder.
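One FUSE implementation of that idea is curlftpfs; assuming it can be installed on the QNAP at all, the flow would look roughly like this (credentials and paths are placeholders):
mkdir -p /mnt/ftp
curlftpfs ftp://user:password@server/path /mnt/ftp
svn export --force svn://localhost/repo_name /mnt/ftp/project_name
fusermount -u /mnt/ftp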
Not sure about FTP, but SSH would be a lot easier and should have better compression. An example of sending your repo over SSH may look like:
svnadmin dump /path/to/repository | ssh -C username@servername 'svnadmin -q load /path/to/repository/on/server'
I found that info on Martin Ankerl's site.
[update]
Based on the comment from @trejder on the question, to do an export over SSH, my recommendation would be as follows:
svn export to a folder locally, then use the following command:
cd && tar czv src | ssh example.com 'tar xz'
where src is the folder you exported to, and example.com is the server.
This will take the files in the source folder, tar and gzip them, send them over SSH, and then extract the files directly on the remote machine.
I wrote this a while back - maybe it would be of some use here: exup
