I have a requirement where all the scripts on Solaris need to be copied over to Linux. I need to do this keeping in mind that the scripts should be Linux compatible. I have no idea how I can check the compatibility of these already existing scripts. Note that I do not have a Linux environment ready yet; we are doing a data collection of all such items. Please help.
Thanks
Abhinav
You will have no guarantee; there are too many potential differences:
* scripts calling Solaris-specific tools.
Check (grep) your scripts for calls to /bin/* and /usr/bin/*.
* scripts calling utilities with different options on Linux
Most of the time, Linux utilities (grep, sed, awk, date) will have more options. Try to install and use the GNU utilities on Solaris, or hope that the basic options you use on Solaris are supported on Linux as well.
* ksh or bash
When you can, try to install and use bash on Solaris.
Pay attention to while-loops (see https://stackoverflow.com/a/5061255/3220113).
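The inventory step above can itself be scripted. This is only a sketch: the path list and the tool names are assumptions and certainly not exhaustive; extend both lists to match what your scripts actually call.

```shell
#!/bin/sh
# Sketch: scan a directory of shell scripts for hard-coded Solaris paths
# and Solaris-flavoured utilities. Both pattern lists are examples only.
scan_dir="${1:-.}"

# Hard-coded paths that differ between Solaris and Linux
grep -n -E '/usr/xpg4/bin|/usr/ucb|/usr/ccs/bin' "$scan_dir"/*.sh

# Solaris-specific or Solaris-flavoured tools often found in old scripts
grep -n -E '(pkginfo|pkgadd|prstat|svcadm|svcs|nawk)' "$scan_dir"/*.sh

exit 0
```

Run it once per script directory and review each hit by hand; a hit is only a hint, not proof of incompatibility.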
Update:
My own experience:
I have migrated a lot of scripts from Solaris to AIX, both ksh. Problems I had were mainly:
* if [ -z $var ] fails on AIX when $var is empty (use if [ -z "$var" ]).
* Not having the sed -i option, I edited files in place
using vi file <<#
This did not work when called from a remote script; a correct $TERM was missing.
* crontab scripts did not look at /etc/environment (AIX specific)
* The DST in the home-brewed timezone for date on AIX wasn't working well.
* Different directories (database distribution!)
* Other Java classes and classpaths to be set before launching a java script
* Connection to a mailserver worked differently.
Recently I have been working only with bash. What a relief! find supports mtime in minutes, date can jump days, grep has beautiful options, and even awk has found its way into my toolbox. For bash, just remember that you never pipe into a while-loop.
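The pipe-into-while pitfall is easy to demonstrate in a few lines: the pipe runs the loop in a subshell, so anything you set inside it is lost afterwards. A minimal sketch:

```shell
#!/bin/bash
# Pitfall: the pipe runs the while-loop in a subshell, so the counter
# incremented inside the loop is invisible after the loop ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "after pipe: $count"          # prints 0

# Fix: feed the loop via process substitution (a bash feature), so the
# loop runs in the current shell and the counter survives.
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "after substitution: $count"  # prints 3
```

A here-string (`done <<< "$data"`) or redirecting from a file works the same way; the point is to keep the loop out of the pipeline.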
Of course you would like an estimate of the work ahead. If you need to port scripts written by yourself during the last 3 years (so you understand what they do, and the amount is about 2 years of work - you must have been doing other things as well), my personal guess is about 4 months of programming and testing.
This is a follow-up to this question.
Specs
My system is a dedicated server running Ubuntu Desktop, Release 12.04 (precise) 64-bit, 3.14.32-xxxx-std-ipv6-64. Neither the release nor the kernel can be upgraded, but I can install any package.
Problem
The problem described in the question above seems to be solved, however this doesn't work for me. I've installed the latest lftp and parallel packages and they seem to work fine by themselves.
Running lftp works fine.
Running ./job.sh ftp.microsoft.com works fine, but I needed to chmod +x the script first.
Running sed 's/|.*$//' end_unique.txt | xargs parallel -j20 ./job.sh ::: does not work and produces bash errors in the form of /bin/bash: <server>: command not found.
To simplify things, I cleaned the input file end_unique.txt, now it has the following format for each line:
<server>
Each line ends in a CRLF, because it is imported from a windows server.
Edit 1:
This is the job.sh script:
#!/bin/sh
server="$1"
lftp -e "find .; exit" "$server" >"$server-files.txt"
Edit 2:
I took the file and ran it against fromdos. Now it should be standard unix format, one server per line. Keep in mind that the server in the file can vary in format:
ftp.server.com
www.server.com
server.com
123.456.789.190
etc. All of those servers are ftp servers, accessible by ftp://<serverfromfile>/.
With :::, parallel expects the list of arguments it needs to complete the commands it's going to run to appear on the command line, as in
parallel -j20 ./job.sh ::: server1 server2 server3
Without ::: it reads the arguments from stdin, which serves us better in this case. You can simply say
parallel -j20 ./job.sh < end_unique.txt
Addendum: Things that can go wrong
Make certain of two things:
That you are using GNU parallel and not another version (such as the one from moreutils), because only (as far as I'm aware) the GNU version supports reading an argument list from stdin, and
That GNU parallel is not configured to disable the GNU extensions. It turned out, after a lengthy discussion in the comments, that they are disabled by default on Ubuntu 12.04, so it is not inconceivable that this sort of thing might be found elsewhere (particularly downstream from Ubuntu). Such a configuration can hide in
The environment variable $PARALLEL,
/etc/parallel/config, or
~/.parallel/config
If the GNU version of parallel is not available to you, and if your argument list is not too long for the shell and none of the arguments in it contains whitespace, the same thing with the moreutils parallel is
parallel -j20 job.sh -- $(cat end_unique.txt)
This did not work for the OP because the file contained more servers than the shell was willing to put into a command line, but it might work for others with similar problems.
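One more fallback worth mentioning, if neither parallel variant cooperates: xargs can also run jobs concurrently with its -P option, reading one server per line from stdin, which sidesteps the command-line-length limit entirely. Note that -P is not POSIX (it is supported by GNU and BSD xargs), so check yours first; job.sh is the script from the question.

```shell
# Run up to 20 copies of job.sh concurrently, one server per invocation.
# -n1: pass one argument per job; -P20: at most 20 processes at a time.
xargs -n1 -P20 ./job.sh < end_unique.txt
```

Like the stdin form of GNU parallel, this handles arbitrarily long input files, but the output of concurrent jobs may interleave.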
Sorry, the title might be a bit confusing, but I didn't know a better one. Anyway, I want a bash script to work on FreeBSD, OpenBSD and Linux without modification, but bash isn't located at the same place on Linux and BSD.
So, if I write #!/bin/bash then it won't work on BSD, because the bash shell is located in /usr/local/bin/bash there. Is there any solution to get this script working on both?
Or do I really need to ship two scripts with different paths...?
Using env in the shebang (#!/usr/bin/env bash) should make the script OS agnostic.
I like the answer about using #!/usr/bin/env bash
It is an interesting and excellent answer, but it only works if bash is in the PATH.
Another option might be to use #!/bin/sh which is the most universally compatible shell location.
Then have the script do something in sh, such as check where bash is installed (if bash is even installed). Another option is to make bash exist in both locations. Making another installation may sound like overkill, but the goal can be accomplished as simply as creating a hard link so that bash actually exists in both places.
I have written a bash script on Linux, and it works well. As part of a migration I moved [rather, added] an AIX 7.2 node to my cluster. When I tried running the bash scripts on AIX, they failed with multiple errors on different GNU bash commands.
[PS: I have installed GNU bash on this AIX node. IBM calls it a toolbox made for AIX, which contains a collection of open source and GNU software built for AIX IBM systems.]
For example :
- grep -oP isn't supported
- ls -h doesn't work
- getopts fails to get the parameters passed, and $# as well.
I am not sure if I am doing it right by just installing GNU bash on AIX. Has anyone had any experience porting bash scripts over to AIX?
Are there any pointers the community can suggest to get bash scripts working on AIX?
The issue is that these commands are not part of bash. What you need are the GNU versions of these utilities, that is, grep and ls. As for the getopts builtin, check which version of bash you developed the script against compared to the one you're running it against:
$ bash --version
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.
That's my bash version. If your production environment has a really old version of bash, you'll need to use bourne shell scripting, instead of bash scripting (Bourne Again SHell) to ensure portable scripts.
Edit: Plagiarizing from Olivier Dulac's answer, please take a look at the POSIX page on shell command language for portable Bourne shell scripting. Do take a look at the POSIX standard pages for ls and grep for portable options.
Another Edit: See the page on AIX Toolbox for Linux for GNU variants of the standard utilities, which are installed into /usr/linux/bin
Yet another Edit: According to pedz, this link shows better (100% compatible) replacements for the AIX Toolbox
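If you go the Toolbox route, a small guard at the top of your scripts can prefer the GNU utilities when they are present. This is a sketch based on the /usr/linux/bin path mentioned above; on Linux that directory simply won't exist and PATH is left alone.

```shell
#!/bin/sh
# Sketch: prefer AIX Toolbox GNU utilities when the directory exists.
# /usr/linux/bin is the install location referenced above; on a plain
# Linux box this test fails and PATH stays unchanged.
if [ -d /usr/linux/bin ]; then
    PATH=/usr/linux/bin:$PATH
    export PATH
fi

# After this point, grep/ls/etc. resolve to the GNU versions on AIX.
grep --version | head -n 1
```

That keeps the scripts themselves unchanged between the Linux and AIX nodes, at the cost of depending on the Toolbox being installed on every AIX node.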
Stick to standards, as much as possible...
Write sh-compatible scripts if you need to use them on various systems.
Stick to ancient options that are widely supported (the -h option for ls, and other GNU-introduced niceties, are nice to have, but NOT portable enough).
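To illustrate sticking to ancient options, here are possible POSIX-friendly replacements for the two non-portable examples from the question. The input string and regular expression are made-up samples.

```shell
#!/bin/sh
# Non-portable: grep -oP '[0-9]+'  (PCRE -o/-P are GNU extensions)
# Portable-ish: sed with a basic regular expression extracts the first
# run of digits from a line.
printf 'abc123def\n' | sed 's/[^0-9]*\([0-9]*\).*/\1/'
# -> 123

# Non-portable: ls -h (human-readable sizes, a GNU nicety)
# Portable: du -k reports sizes in kilobytes on any POSIX system.
du -k /etc/hosts
```

The sed version is clumsier than grep -oP, but it runs unchanged on AIX, Solaris, the BSDs and Linux.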
I am working on some bash scripts that I'd like to work across my Linux and FreeBSD systems.
Since I mostly work in Linux, I am used to starting my bash scripts with
#!/bin/bash
But this doesn't work on FreeBSD since bash lives at /usr/local/bin/bash. So on FreeBSD my scripts need to start with
#!/usr/local/bin/bash
So is there something else I could use that would be portable across both systems? I'd rather not maintain two versions of the scripts.
#!/usr/bin/env bash
should do the trick, provided that bash is on the path somewhere. See here for more details.
Honestly, if you want portability, invoke as /bin/sh and code to POSIX. It's less pretty, but you will run into fewer potential issues if you do.
Use #!/bin/sh on both systems if you want to be portable and avoid bashisms entirely.
I have uncovered another problem in the effort that we are making to port several hundreds of ksh scripts from AIX, Solaris and HPUX to Linux. See here for the previous problem.
This code:
#!/bin/ksh
if [ -a k* ]; then
echo "Oh yeah!"
else
echo "No way!"
fi
exit 0
(when run in a directory with several files whose names start with k) produces "Oh yeah!" when called with the AT&T ksh variants (ksh88 and ksh93). On the other hand it produces an error message followed by "No way!" with the other ksh variants (pdksh, MKS ksh and bash).
Again, my questions are:
Is there an environment variable that will cause pdksh to behave like ksh93? Failing that:
Is there an option on pdksh to get the required behavior?
I wouldn't use pdksh on Linux anymore.
Since AT&T ksh has become OpenSource there are packages available from the various Linux distributions. E.g. RedHat Enterprise Linux and CentOS include ksh93 as the "ksh" RPM package.
pdksh is still mentioned in many installation requirement documentations from software vendors. We replaced pdksh on all our Linux systems with ksh93 with no problems so far.
Well after one year there seems to be no solution to my problem.
I am adding this answer to say that I will have to live with it......
In Bash the test -a operation is for a single file.
I'm guessing that in ksh88 the test -a operation is also for a single file, but it doesn't complain because the extra words are an unspecified condition for -a.
you want something like
for K in /etc/rc2.d/K* ; do test -a "$K" && echo heck-yea ; done
I can say that ksh93 works just like bash in this regard.
Regrettably, I think the code was poorly written (my opinion, and likely a harsh one), since the root cause of the problem is the ksh88 built-in test allowing sloppy code.
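Under bash the difference is easy to reproduce: -a with a single operand is a file-existence test, while an unquoted glob that expands to several names gives test too many arguments and an error, just like pdksh in the question. A small sketch (file names are made up):

```shell
#!/bin/bash
# Create a scratch directory with two files whose names start with k.
dir=$(mktemp -d)
touch "$dir/k1" "$dir/k2"
cd "$dir" || exit 1

# Single operand: -a means "file exists" and succeeds here.
[ -a k1 ] && echo "single operand: exists"

# Unquoted glob: k* expands to "k1 k2", so bash's test builtin sees
# [ -a k1 k2 ] -- too many arguments, an error (stderr suppressed),
# and the else branch runs.
if [ -a k* ] 2>/dev/null; then
    echo "glob: ok"
else
    echo "glob: error or false"
fi
```

That matches the question's observation: ksh88/ksh93 silently accept the extra words, while the stricter shells reject them.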
You do realize that [ is an alias (often a link, symbolic or hard) for /usr/bin/test, right? So perhaps the actual problem is different versions of /usr/bin/test?
OTOH, ksh overrides it with a builtin. Maybe there's a way to get it to not do that? or maybe you can explicitly alias [ to /usr/bin/test, if /usr/bin/test on all platforms is compatible?