Linux doesn't seem to capture some characters

Last night I was writing a bash script, just using /proc and cat.
But suddenly, something seems to have broken on my Linux installation.
Here is what I can't do at the moment of posting:
PuTTY doesn't seem to "accept"/register/capture the following keys (tried on 2 computers):
a
i
p
s
f
m
w
c
v
b
I've tried on 3 servers:
Debian 7 32-bit - nothing wrong
CentOS 6 32-bit - doesn't work
Ubuntu 12.04 32-bit - doesn't work
I haven't tried reinstalling them, because my CentOS server runs cPanel and I don't want to start over again, and my Ubuntu server runs about 8 TeamSpeak 3 servers (not a big job, but people would complain).
The script:
#!/bin/bash
echo "Hang on, saving CPU Info" &&
CPU=$(sudo cat /proc/cpuinfo) &&
cat >test.txt <<EOF
$CPU
EOF
I'm new to bash scripting :s
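For what it's worth, a minimal sketch of the same script: the round-trip through a variable and a heredoc isn't needed, since cat can redirect straight to the file, and /proc/cpuinfo is world-readable so sudo isn't required either.

```shell
#!/bin/bash
# Minimal equivalent of the script above: /proc/cpuinfo is world-readable,
# so no sudo is needed, and a direct redirect replaces the heredoc round-trip.
echo "Hang on, saving CPU info"
cat /proc/cpuinfo > test.txt
```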

Related

How to get some information from remote machine by using shell script?

First of all, sorry for the dumb question.
I would like to get distribution information from a remote target using the following sample code in a shell script. My local machine is Ubuntu 16.04 and the remote target is Ubuntu 20.04 (192.168.100.15). However, when I run the shell script, the $distribution value is ubuntu16.04.
Why is the value not Ubuntu20.04, and how should I modify my code?
ssh root@192.168.100.15 "distribution=$(. /etc/os-release; echo $ID$VERSION_ID) && echo $distribution"
Check the contents of /etc/os-release to find out which variables are available, then echo one of those. For example:
$ ssh root@192.168.100.15 '. /etc/os-release; echo $PRETTY_NAME'
Ubuntu 20.04.3 LTS
If you want to populate the distribution variable on your local machine, you need to use the $(...) construct locally:
$ distribution=$(ssh root@192.168.100.15 '. /etc/os-release; echo $PRETTY_NAME')
$ echo $distribution
Ubuntu 20.04.3 LTS
By the way, giving ssh access to the root user is frowned upon nowadays. And using root in this case is entirely unnecessary anyway, because /etc/os-release is readable by any user.
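The root cause of the original one-liner failing is worth spelling out: inside double quotes, the local shell expands $(...) and $distribution before ssh ever runs, so it is the local /etc/os-release that gets sourced. A minimal local sketch of that expansion:

```shell
# Inside double quotes, $(...) runs in the *local* shell; this is what the
# question's command effectively did before the string ever reached ssh:
local_val="$(. /etc/os-release; echo "$PRETTY_NAME")"
echo "local expansion saw: $local_val"

# Single quotes defer expansion to the remote shell instead:
#   ssh user@host '. /etc/os-release; echo $PRETTY_NAME'
```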
Use lsb_release:
ssh root@192.168.100.15 'lsb_release -ds'
LSB means Linux Standard Base. The command should be available on every Linux system.

string to array conversion is empty when in WSL but works fine in linux

I am trying out these simple commands in WSL (Ubuntu):
$ OS="windows redhat centos ubuntu"
$ ARR_OS=($OS)
$ echo ${ARR_OS[0]}
<BLANK OUTPUT>
$ echo ${ARR_OS[1]}
windows redhat centos ubuntu
And it has a different output when the same commands are executed in Linux:
$ OS="windows redhat centos ubuntu"
$ ARR_OS=($OS)
$ echo ${ARR_OS[0]}
windows
$ echo ${ARR_OS[1]}
redhat
I was expecting WSL to work very similarly to Ubuntu Linux, but I'm having some difficulty getting this to work.
Any idea why the WSL output is not the same?
EDIT:
I have attached a sample run in my WSL terminal, incorporating the suggestions from the comments as well.
SEE:
Screenshot of WSL Terminal Output in Windows 10
Strange, I have tried the exact same commands on OSX and they produce the same result:
→ echo $SHELL
/bin/zsh
→ bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)
Copyright (C) 2007 Free Software Foundation, Inc.
→ OS="windows redhat centos ubuntu"
→ ARR_OS=($OS)
→ declare -p ARR_OS
typeset -a ARR_OS=( 'windows redhat centos ubuntu' )
→ echo ${ARR_OS[0]}
<BLANK>
→ echo ${ARR_OS[1]}
windows redhat centos ubuntu
UPDATE:
I have found this excellent post that explains how arrays differ between zsh and bash:
https://blog.mimacom.com/arrays-on-linux-shell/
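To summarize the difference the post describes (a sketch; it assumes the WSL session shown was actually running zsh, as the OSX reproduction above suggests): bash word-splits an unquoted $OS and indexes arrays from 0, while zsh does not split and indexes from 1, so in zsh element 1 holds the entire string.

```shell
# Run under bash: unquoted $OS is word-split into four elements, indexed from 0.
OS="windows redhat centos ubuntu"
ARR_OS=($OS)
echo "${ARR_OS[0]}"      # -> windows
echo "${#ARR_OS[@]}"     # -> 4

# Under zsh, $OS would NOT be split (ARR_OS gets a single element) and arrays
# start at 1; sh-style splitting can be forced with the zsh-only ${=...} flag:
#   ARR_OS=(${=OS})
```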

Bash Script to start a process on a remote Linux Machine and keep it detached from the starting server

I'm new to Linux (Red Hat) and I'm trying to automate the Eggplant tests I've come up with for our GUI based software. This will be run nightly.
I'll be running the base script on server 001. It will copy the latest version of our software to a remote PC serving as a test bench, then kick off a Bash script on the test bench which configures the environment and starts the software.
I've tried:
ssh test@111.111.111.002 'bash -s' < testConfig.sh
ssh test@111.111.111.002 'bash -s testConfig.sh < /dev/null > testConfig.log 2>&1 &'
ssh -X test@111.111.111.002 'testConfig.sh'
The first one just fails; the second tries to start the software, but instead of running on the test bench it runs on the server. The third opens windows on the server and runs the software there, but I need the display on the test bench, not the server.
A coworker happened by and saw my difficulty and suggested this, and it worked:
ssh user@1.1.1.1 'setenv DISPLAY :0.0 && testConfig.sh'
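Worth noting: setenv is csh/tcsh syntax, so that line presumably worked because the remote account's login shell is (t)csh. A bash sketch of the same idea, where :0.0 is the test bench's own X display and the nohup line is a hypothetical way to keep the script running after ssh disconnects:

```shell
# bash equivalent of the csh 'setenv DISPLAY :0.0' (run on the remote side),
# so X windows open on the test bench's local display, not over the ssh link:
export DISPLAY=:0.0
echo "$DISPLAY"

# To keep the test detached from the ssh session, something like:
#   ssh user@testbench 'export DISPLAY=:0.0; nohup ./testConfig.sh > testConfig.log 2>&1 &'
```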
Try the below and let us know. It uses double quotes instead of single quotes:
ssh user@1.1.1.1 "bash -s" < ./testConfig.sh
Make sure testConfig.sh is in the directory from which you are ssh'ing; otherwise use an absolute path, e.g. /home/qwe/testConfig.sh.
Please also mention the error you are getting, if any.

Linux Split command very slow on AWS Instance

I have deployed my application on an AWS instance, and it uses some Linux system commands called via a simple shell script.
Below is the sample script content:
#!/bin/bash
echo "File Split started"
cd /prod/data/java/
split -a 5 -l 100000000 samplefile.dat
echo "File Split Completed"
The same script ran faster on our local server: there it took 15 minutes to complete, but on the AWS instance it took 45 minutes. That's a huge difference.
Update: Also, on AWS it's not using much CPU, hardly 2 to 5 percent; that's why it's so slow.
Both are 64-bit OSes; the local server is RHEL 5 (2.6.18) and AWS is RHEL 6 (2.6.xx).
Can anyone help on this?
Regards,
Shankar
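No answer is recorded here, but the reported 2-5% CPU usage suggests the split is I/O-bound, i.e. waiting on the AWS instance's disk rather than its CPU. A small-scale sketch for checking that, comparing wall-clock against CPU time on a generated stand-in file (file name and sizes are made up for illustration):

```shell
#!/bin/bash
# Reproduce the split workload on a small generated file; if "real" time far
# exceeds "user" + "sys" time, the command is waiting on I/O, not the CPU.
cd "$(mktemp -d)"
seq 1 100000 > samplefile.dat           # small stand-in for the real file
time split -a 5 -l 25000 samplefile.dat # same flags as the script above
ls xaaaa*                               # chunks: xaaaaa, xaaaab, xaaaac, xaaaad
```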

bash redirect to /dev/stdout: Not a directory

I recently upgraded from CentOS 5.8 (with GNU bash 3.2.25) to CentOS 6.5 (with GNU bash 4.1.2). A command that used to work with CentOS 5.8 no longer works with CentOS 6.5. It is a silly example with an easy workaround, but I am trying to understand what is going on underneath the bash hood that is causing the different behavior. Maybe it is a new bug in bash 4.1.2 or an old bug that was fixed and the new behavior is expected?
CentOS 5.8:
(echo "hi" > /dev/stdout) > test.txt
echo $?
0
cat test.txt
hi
CentOS 6.5:
(echo "hi" > /dev/stdout) > test.txt
-bash: /dev/stdout: Not a directory
echo $?
1
Update: It doesn't look like this is a problem related to the CentOS version. I have another CentOS 6.5 machine where the command works. I have eliminated environment variables as the culprit. Any ideas?
On all the machines these commands give the same output:
ls -ld /dev/stdout
lrwxrwxrwx 1 root root 15 Apr 30 13:30 /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout
crw--w---- 1 user1 tty 136, 0 Oct 28 23:21 /dev/stdout
Another update: it seems the sub-shell is inheriting the redirected stdout of the parent shell. This is not too surprising, I guess, but then why does it work on one machine and fail on the other when they are running the same bash version?
On the working machine:
((ls -la /dev/stdout; ls -la /proc/self/fd/1) >/dev/stdout) > test.txt
cat test.txt
lrwxrwxrwx 1 root root 15 Aug 13 08:14 /dev/stdout -> /proc/self/fd/1
l-wx------ 1 user1 aladdin 64 Oct 29 06:54 /proc/self/fd/1 -> /home/user1/test.txt
I think Yu Huang is right: redirecting to /tmp works on both machines. Both machines use an Isilon NAS for the /home mount, but probably one has a slightly different filesystem version or configuration that causes the error. In conclusion, redirecting to /dev/stdout should be avoided unless you know the parent process will not redirect it.
UPDATE: This problem arose after upgrade to NFS v4 from v3. After downgrading back to v3 this behavior went away.
Good morning, user1999165, :)
I suspect it's related to the underlying filesystem. On the same machine, try:
(echo "hi" > /dev/stdout) > /tmp/test.txt
/tmp/ should be a Linux-native (ext3 or similar) filesystem.
On many Linux systems, /dev/stdout is an alias (link or similar) for file descriptor 1 of the current process. When you look at it from C, then the global stdout is connected to file descriptor 1.
That means echo foo > /dev/stdout is the same as echo foo 1>&1 or a redirect of a file descriptor to itself. I wouldn't expect this to work since the semantics are "close descriptor to redirect and then clone the new target". So to make it work, there must be special code which notices that the two file descriptors are actually the same and which skips the "close" step.
My guess is that on the system where it fails, BASH isn't able to figure out /dev/stdout == fd1 and actually closes it. The error message is weird, though. OTOH, I don't know any other common error which would fit better.
Note: I tried to replicate your problem on Kubuntu 14.04 with BASH 4.3.11 and here, the redirect works (i.e. I don't get an error). Maybe it's a bug in BASH 4.1 which was fixed, since.
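A quick sketch of the equivalence described above, runnable on a system where /dev/stdout resolves to the process's fd 1 (as on the machines in the question):

```shell
# These two lines behave the same when /dev/stdout is an alias for fd 1:
echo foo 1>&1            # redirect fd 1 to itself
echo foo > /dev/stdout   # same redirect, spelled via the /dev alias
```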
I was seeing issues writing piped stdin input to AWS EFS (NFSv4) that paralleled this issue. (Using CentOS 6.8, so unfortunately I cannot upgrade bash to 4.2.)
I asked AWS support about this, here's their response --
This problem is not related to EFS itself, the problem here is with bash. This issue was fixed in bash 4.2 or later in RHEL.
To avoid this problem, please try to create a file handle before running the echo command within a subshell; after that, the same file handle can be used as a redirect, as in the example below:
exec 5> test.txt; (echo "hi" >&5); cat test.txt
hi
