HBase scan command output into a text file on Windows (Azure)

Is it possible to capture the output of the HBase shell command scan 'sampletable' to a text file?
The command goes like this:
hbase(main):001:0> scan 'sampletable'
I tried the command-prompt style redirection
hbase(main):001:0> scan 'sampletable' > textfile.txt
but it gives the error "wrong number of arguments".
I tried the following as well:
hbase(main):001:0> echo "scan 'sampletable'" | hbase shell | grep "^ " > registration.txt
but it throws an exception about the unrecognised character "^".

You cannot execute OS commands like echo, hbase, or grep inside the HBase shell; you need to run them in Windows PowerShell (or another OS shell).
Exit the HBase shell and execute the command below:
echo "scan 'sampletable'" | hbase shell | grep "^ " > registration.txt
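If grep isn't available in your PowerShell session, a simpler variant (an untested sketch, assuming the hbase launcher is on your PATH) is to skip the filter: the HBase shell can run a script file non-interactively, and you can redirect everything it prints:
# Write the scan command to a script file, then run it
# non-interactively and capture all shell output in a text file.
echo "scan 'sampletable'" > scan.rb
hbase shell scan.rb > registration.txt
The grep "^ " in the pipeline above only strips the shell banner and keeps the data rows, which the HBase shell indents with a leading space; without it you get the full shell output in the file.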

Related

How to call Oracle SQL query using bash shell script

How do I call a SQL query from a bash shell script? I tried the below, but there seems to be some syntax error:
#!/bin/sh
LogDir='/albt/dev/test1/test2/logs' # log file
USER='test' #Enter Oracle DB User name
PASSWORD='test' #Enter Oracle DB Password
SID='test' #Enter SID
sqlplus -s << EOF > ${LogDir}/sql.log
${DB_USER_NAME}/${DB_PASSWORD}#${DB_SID}
SELECT count(1) FROM dual; # SQL script here to get executed
EOF
var=$(SELECT count(1) FROM dual)
I'm getting an "unexpected token" error.
#!/bin/sh
user="test"
pass="test"
var="$1"
sqlplus -S $user/$pass <<EOF
SELECT * FROM tableName WHERE username=$var;
exit;
EOF
I'm getting "sqlplus: command not found" when I run the above script.
Can anyone guide me?
In your first script, one error is in this line:
var=$(SELECT count(1) FROM dual)
This tells the shell to execute a program named SELECT with the arguments count(1), FROM, and dual, and to store its stdout in the variable var. What you presumably want is to feed the SELECT statement into sqlplus, i.e. something like
var=$(sqlplus .... )
In your second script, the error message simply means that sqlplus cannot be found in your PATH. Either add its directory to PATH or invoke the command via its absolute path.
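A minimal corrected sketch of the first script; the credentials are placeholders, and the set options are there so that var contains only the number, without the column header and feedback line:
#!/bin/sh
USER='test'        # placeholder Oracle user
PASSWORD='test'    # placeholder password
SID='test'         # placeholder SID
# Feed the statement into sqlplus via a here-document and capture
# its stdout in var. Note: no '#' comments inside the here-document,
# since sqlplus would try to parse them as SQL.
var=$(sqlplus -s "${USER}/${PASSWORD}@${SID}" <<EOF
set heading off feedback off pagesize 0
SELECT count(1) FROM dual;
exit;
EOF
)
echo "count: $var"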

Problem of running command from Rundeck (Linux)

On a Linux server I am running the following command without any error and getting the result:
[xxxxx@server1 ~]$ grep -o "\-w.*%" /etc/sysconfig/nrpe-disk
-w 15% -c 7%
[xxxxx@server1 ~]$
I want to run the same command from Rundeck's command step, with the same xxx user, which has sudo rights too.
The command executed from Rundeck gives an invalid-option error:
invalid option -- '.'
Usage: grep [OPTION]... PATTERN [FILE].
I tried many variations: escaping the dot, running with sudo, using the absolute path, double quotes, single quotes, etc. I still get the same error, even though the command works locally on the server. What's the way to fix it?
You can do that by putting the command in an inline script ("Script" step) or by calling an external script that contains it ("Script file or URL" step).
Another way is to use cat to print the file and capture the output with a log filter: click the small gear icon at the left of the step, click "Add Log Filter", select "Key/value data", use the regex .*(-w .*%).* as the pattern, give the data a name (e.g. diskdata), and tick the "Log data" checkbox. You can then print that value with echo ${data.diskdata} in a later step.
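For the first option, the inline script body is simply the command exactly as you ran it locally; Rundeck hands the script body to the interpreter as-is, so the pattern needs no extra escaping. A minimal sketch of the "Script" step body:
#!/bin/bash
# Inline "Script" step: runs on the node unmodified, so the
# pattern can be written exactly as on the local command line.
grep -o "\-w.*%" /etc/sysconfig/nrpe-disk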

How to use /dev/null for a bash command

I am using the command below to get the result of my SQL query:
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER"
This works fine on my server, but on a different machine it prints bash warnings along with the output of the SQL query.
For example:
/etc/profile: line 46: HISTSIZE: readonly variable
/etc/profile: line 50: HISTCONTROL: readonly variable
/etc/profile.d/20-tmout.sh: line 1: TMOUT: readonly variable
/etc/profile.d/history.sh: line 6: hcmnt_tty: readonly variable
name
abc
Please let me know a way to skip the warning messages above and get only the data.
If /dev/null is the right tool here, how do I modify the above command to get the data only?
If what you mean is "how do I discard only the error output?", the way to go is to redirect the standard error stream to oblivion (/dev/null), like so:
your-command 2>/dev/null
That way, if the command writes data to standard output, it passes through, but any output to the standard error stream is discarded, so you won't see those warning messages.
By the way, 2 here is the file descriptor for standard error.
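Applied to the command from the question (kept verbatim otherwise), the redirection goes at the very end. The readonly-variable warnings come from the profile scripts sourced by the login shell that su - starts, and they are written to standard error, so only the CSV data remains:
# Discard stderr of the whole command, keeping the CSV rows on stdout.
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" 2>/dev/null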
Sorry, this is untested, but I hit this same error: your DB session isn't read/write. You can echo the statements into psql to force a proper session as follows. I'm unsure how stdin may be affected:
echo 'SET TRANSACTION READ WRITE; SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE ; COPY ( my SQL query ) TO STDOUT WITH CSV HEADER' | su - postgres -c 'psql -d dbname' with stdin
Caution, a bash hack:
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" | grep -v "readonly"

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit status of each command in a pipeline.
On the command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put this code in a bash script it shows only one exit status. My default shell on the command line is bash.
Can somebody help me understand why this behaviour changes, and what I should do to make this work in a script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e "find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}" > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code: the script adds eval. If you were to wrap your command-line test in eval, you would see it fail in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop putting the command in a string (see Bash FAQ 050 for more on why doing this is a bad idea); a minimal sketch follows this list.
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
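A minimal sketch of the first choice: write the pipeline out directly, so the current shell executes it and PIPESTATUS gets one entry per command. The array is copied immediately, because the very next command would overwrite it:
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
# Run the pipeline directly -- no eval, no command-in-a-string.
# stderr is silenced per command (find is the noisy one); a trailing
# 2>/dev/null would apply only to gzip.
find_fun 2>/dev/null | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz"
# Copy PIPESTATUS right away; any later command resets it.
status=("${PIPESTATUS[@]}")
echo -e "find ${status[0]} \npax ${status[1]} \ncompress ${status[2]}" > $cmdfile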
Instead of ${PIPESTATUS[0]}, use ${PIPESTATUS[@]}.
As with any array in bash, PIPESTATUS[0] contains only the first command's exit status. If you want all of them, you have to use ${PIPESTATUS[@]}, which expands to the entire contents of the array.
I'm not sure why it worked for you when you tried it on the command line. I tested it and I didn't get the same result as you.

bash select command

I'm trying to run a script that uses the select command, and I get the errors below. I'm running the most recent version of Ubuntu. Why does it say the commands are not found?
#!/bin/bash
# Scriptname: runit
PS3= "Select a program to execute: "
select program in 'ls -F' pwd date cal exit
do
$program
done
This is the output:
runit.sh: 3: Select a program to execute: : not found
runit.sh: 4: select: not found
runit.sh: 5: Syntax error: "do" unexpected
Delete the space after the equal sign:
PS3= "Select a program to execute: "
^
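For reference, a corrected sketch of the script. Note that the other two messages ("select: not found" and the unexpected "do") suggest the script is being run by sh, which is dash on Ubuntu and has no select builtin; run it with bash runit.sh or as ./runit.sh so the #!/bin/bash line takes effect.
#!/bin/bash
# Scriptname: runit
PS3="Select a program to execute: "   # no space after the equal sign
select program in 'ls -F' pwd date cal exit
do
    $program
done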
