I'm trying to get psql to format nicely and am following the docs here. Right now, whenever I run a query on a table with lots of columns, no matter how big I make my screen, each line overflows into the next, producing a whole screen of unreadable junk.
The docs (link is above) say there's a way to align columns nicely for more readable output.
Normally, to start psql, I just type:
psql
and hit Enter. Now I'm trying:
psql \pset format aligned
And getting an error:
could not change directory to "/root"
psql: warning: extra command-line argument "aligned" ignored
psql: FATAL: Ident authentication failed for user "format"
Any ideas as to how I could get these command-line args to work for me?
These are not command-line args; \pset is a meta-command that you type inside psql. Run psql and log into the database (passing the hostname, port, user, and database if needed), then enter it at the psql prompt.
Example (these are two separate commands: type the first, press Enter, wait for psql to log in, then type the second):
psql -h host -p 5900 -U username database
\pset format aligned
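If you want this setting every time, psql reads ~/.psqlrc on interactive startup, so you can put the meta-command there (a minimal sketch; assumes the default per-user startup file location):
-- ~/.psqlrc: meta-commands run at every interactive psql startup
\pset format aligned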
Use \x
Example from the Postgres manual:
postgres=# \x
postgres=# SELECT * FROM pg_stat_statements ORDER BY total_time DESC LIMIT 3;
-[ RECORD 1 ]------------------------------------------------------------
userid | 10
dbid | 63781
query | UPDATE branches SET bbalance = bbalance + $1 WHERE bid = $2;
calls | 3000
total_time | 20.716706
rows | 3000
-[ RECORD 2 ]------------------------------------------------------------
userid | 10
dbid | 63781
query | UPDATE tellers SET tbalance = tbalance + $1 WHERE tid = $2;
calls | 3000
total_time | 17.1107649999999
rows | 3000
-[ RECORD 3 ]------------------------------------------------------------
userid | 10
dbid | 63781
query | UPDATE accounts SET abalance = abalance + $1 WHERE aid = $2;
calls | 3000
total_time | 0.645601
rows | 3000
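If expanded output for every query is too much, newer psql (9.2+) also supports \x auto, which switches to expanded display only when a row is too wide for the terminal:
postgres=# \x auto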
psql --pset=format=FORMAT
Great for executing queries from the command line, e.g.
psql --pset=format=unaligned -c "select bandanavalue from bandana where bandanakey = 'atlassian.confluence.settings';"
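--pset also has the short form -P, and several settings can be combined for script-friendly output (a sketch reusing the query above; tuples_only suppresses headers and the row-count footer):
psql -P format=unaligned -P tuples_only=on -c "select bandanavalue from bandana where bandanakey = 'atlassian.confluence.settings';"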
I have live raw JSON stream data from the Virtual Radar Server I'm using.
I use netcat to fetch the data and jq to save it on my Kali Linux box, using the following command:
nc 127.0.0.1 30006 | jq > A7.json
But I want to filter specific content from the data stream.
I use the following command to extract the data:
cat A7.json | jq '.acList[] | select(.Call | contains("QTR"))' - to fetch a selected airline
But I realized later that the above command only works once; in other words, it does not refresh, even though the data is updating every second. I have to execute the command over and over again to extract the filtered data, which generates duplicate data.
Can someone help me filter the live data without executing the command over and over?
As you don't use the --stream option, I suppose your document is a regular JSON document.
To execute your command every second, you can implement a loop that sleeps for 1 second:
while true; do sleep 1; nc 127.0.0.1 30006 | jq '.acList[] | select(…)'; done
To have the output on the screen and also save to a file (like you did with A7.json), you can add a call to tee:
# saves the document as returned by `nc` but outputs result of `jq`
while true; do sleep 1; nc 127.0.0.1 30006 | tee A7.json | jq '.acList[] | …'; done
# saves the result of `jq` and outputs it
while true; do sleep 1; nc 127.0.0.1 30006 | jq '.acList[] | …' | tee A7.json; done
Can you try this?
nc localhost 30006 | tee -a A7.json |
while true; do
    stdbuf -o 0 jq 'try (.acList[] | select(.Call | contains("QTR")))' 2>/dev/null
done
Assuming that no other process is competing for the port, I'd suggest trying:
nc -k -l localhost 30006 | jq --unbuffered ....
Or if you want to keep a copy of the output of the netcat command:
nc -k -l localhost 30006 | tee A7.json | jq --unbuffered ....
You might want to use tee -a A7.json instead.
Breakdown
Why I did what I did
I have live raw JSON stream data from the Virtual Radar Server, which is running on my laptop alongside Kali Linux on WSL in the background.
For those who don't know, Virtual Radar Server is a Mode-S transmission decoder used to decode different ADS-B formats. It also rebroadcasts the data in a variety of formats, one of them being a JSON stream. I want to save selected aircraft data in JSON format on Kali Linux.
I used the following commands to save the data before:
$ nc 127.0.0.1 30001 | jq > A7.json - to save the stream
$ cat A7.json | jq '.acList[] | select(.Call | contains("QTR"))' - to fetch a selected airline
But I realized two things after using the above. One, I'm storing unwanted data, which is consuming my storage. Two, the second command goes through the JSON file only once and produces the data saved at that moment and that moment alone. This caused me problems, as I had to execute the command over and over again to extract the filtered data, which generated duplicate data.
The command that worked for me
The following command worked flawlessly for my problem.
$ nc localhost 30001 | sudo jq --unbuffered '.acList[] | select (.Icao | contains("800CB8"))' > A7.json
The following also caused me some trouble, which I explain below.
Errors & Explanations
This error resulted from a missing field name & key in the JSON object.
$ nc localhost 30001 | sudo jq --unbuffered '.acList[] | select (.Call | contains("IAD"))' > A7.json
#OUTPUT
jq: error (at <stdin>:0): null (null) and string ("IAD") cannot have their containment checked
If you look at the JSON data below, you'll see that the second and third records are missing the Call field name & key, which is what caused the error message above; a null-tolerant variant of the filter follows the data.
{
"Icao": "800CB8",
"Alt": 3950,
"GAlt": 3794,
"InHg": 29.7637787,
"AltT": 0,
"Call": "IAD766",
"Lat": 17.608658,
"Long": 83.239166,
"Mlat": false,
"Tisb": false,
"Spd": 209,
"Trak": 88.9,
"TrkH": false,
"Sqk": "",
"Vsi": -1280,
"VsiT": 0,
"SpdTyp": 0,
"CallSus": false,
"Trt": 2
}
{
"Icao": "800CB8",
"Alt": 3950,
"GAlt": 3794,
"AltT": 0,
"Lat": 17.608658,
"Long": 83.239166,
"Mlat": false,
"Spd": 209,
"Trak": 88.9,
"Vsi": -1280
}
{
"Icao": "800CB8",
"Alt": 3800,
"GAlt": 3644,
"AltT": 0,
"Lat": 17.608795,
"Long": 83.246155,
"Mlat": false,
"Spd": 209,
"Trak": 89.2,
"Vsi": -1216
}
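Rather than relying on every record having the field, you can make the filter null-tolerant with jq's // (alternative) operator, which substitutes an empty string when .Call is missing; a small sketch based on the working command above:
$ nc localhost 30001 | sudo jq --unbuffered '.acList[] | select((.Call // "") | contains("IAD"))' > A7.json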
Commands that didn't work for me.
Command #1
When I used jq with --stream together with the filter, it produced the output below; --stream with just an output filter worked without any errors.
$ nc localhost 30001 | sudo jq --stream '.acList[] | select (.Icao | contains("800"))' > A7.json
#OUTPUT
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
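These errors are expected: --stream turns the input into [path, value] event pairs, so .acList[] no longer sees an object to index. If you do want streaming parsing, the events have to be reassembled first; a hedged sketch using jq's fromstream/truncate_stream (depth 2 strips the acList key and the array index, so each aircraft object comes out whole):
$ nc localhost 30001 | jq -n --stream 'fromstream(2|truncate_stream(inputs)) | select(.Icao | contains("800"))' > A7.json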
Command #2
For some reason, -k -l didn't work for listening for the data, but the other command worked perfectly. I think it didn't work because the port already exists outside of WSL.
$ nc -k -l localhost 30001
$ nc localhost 30001
Thank you to everyone who helped me solve my issue. I'm very grateful to you guys.
I need some help inserting data from a CSV file into a CUSTOMER table using a shell script. Is it possible to read the CSV data by comma delimiter instead of by fixed positions? For example, the data for my REMARKS column does not have a fixed position: it could contain 10 characters or 15 characters, so the length varies.
#!/bin/bash
PASSFILE=/credentials/systemcredential.properties
USERID=$(cat $PASSFILE | grep UserID | cut -f2 -d=)
PASSWORD=$(cat $PASSFILE | grep Pwd | cut -f2 -d=)
# connect to database
source /opt/db2home/db2profile
db2 connect to DBRPTGU user $USERID using $PASSWORD
#--------------------------------------------------#
# TABLE: CUSTOMER
#--------------------------------------------------#
#db2 "select count(*) from udbcuser.CUSTOMER"
db2 "load from /batchload/data/CUSTOMER.csv of asc
method L(1 7, 9 23, 25 39, 41 47, 49 68)
insert_update into udbcuser.CUSTOMER(CUSTOMER_ID,CUSTOMER_NAME,ITEM_PURCHASED,AMOUNT_PURCHASED,REMARKS)"
Sample data as requested:
9000001,Michael Tan,Wallet,$30,First time customer
9000002,Sally Gomez,Jacket,$90,
9000003,Cheng Ning,Boots,$80,Member
9000004,Richard Chin,Sunglasses,$30,Member
Thank you!
Here is a sample script.
echo '9000001,Michael Tan,Wallet,$30,First time customer' > dat.csv
echo '9000002,Sally Gomez,Jacket,$90,' >> dat.csv
echo '9000003,Cheng Ning,Boots,$80,Member' >> dat.csv
echo '9000004,Richard Chin,Sunglasses,$30,Member' >> dat.csv
db2 -v drop db db1
db2 -v create db db1
db2 -v connect to db1
db2 -v "create table db2inst1.customer (num int, name char (30), item char (20), price char(10), remark char(20))"
db2 -v "load from dat.csv of del insert into db2inst1.customer"
db2 -v "select * from db2inst1.customer"
When you run it, the last select * from db2inst1.customer returns:
NUM NAME ITEM PRICE REMARK
----------- ------------------------------ -------------------- ---------- --------------------
9000001 Michael Tan Wallet $30 First time customer
9000002 Sally Gomez Jacket $90 -
9000003 Cheng Ning Boots $80 Member
9000004 Richard Chin Sunglasses $30 Member
4 record(s) selected.
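One caveat, since the original script used insert_update: LOAD only supports INSERT/REPLACE modes, while INSERT_UPDATE is an IMPORT option and requires a primary key on the target table. A hedged sketch against the same sample table:
# INSERT_UPDATE belongs to IMPORT, not LOAD, and needs a primary key on the table
db2 -v "import from dat.csv of del insert_update into db2inst1.customer"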
There are a lot of options for the LOAD command; here is the V11.5 manual page for your reference:
LOAD command
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0008305.html
Hope this helps.
My Cassandra table was created with all column names in upper case. When we select the columns in the cqlsh terminal, it works, but when we try to run the same query through cqlsh -e we face an issue with character escaping.
cqlsh:key1> select "PLAN_ID" ,"A_ADVERTISER_ID" ,"ADVERTISER_NAME" from key1.plan_advertiser where "PLAN_ID" = '382633' and "A_ADVERTISER_ID" = 15019;
PLAN_ID | A_ADVERTISER_ID | ADVERTISER_NAME
---------+-----------------+----------------------
382633 | 15019 | Hanesbrands, Updated
NMH206576286LM:sparklatest0802 KarthikeyanDurairaj$ cqlsh -e 'select "PLAN_ID" ,
"A_ADVERTISER_ID" ,"ADVERTISER_NAME" from key1.plan_advertiser
where "PLAN_ID" = '382633' and "A_ADVERTISER_ID" = 15019'
<stdin>:1:InvalidRequest: Error from server: code=2200 [Invalid query]
message="Invalid INTEGER constant (382633) for "PLAN_ID" of type text"
NMH206576286LM:sparklatest0802 KarthikeyanDurairaj$
cqlsh can be a little tricky in this regard. While it doesn't allow you to escape single quotes, it does allow you to escape double quotes. This works for me:
$ bin/cqlsh -u cassdba -p flynnLives -e "SELECT * FROM stackoverflow.plan_advertiser
where \"PLAN_ID\" = '382633' and \"A_ADVERTISER_ID\" = 15019"
PLAN_ID | A_ADVERTISER_ID | ADVERTISER_NAME
---------+-----------------+----------------------
382633 | 15019 | Hanesbrands, Updated
(1 rows)
In this way, we switch from single quotes to double quotes for the CQL statement, use single quotes for column values, and then escape out the double quotes around the column names.
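Another way to sidestep the shell quoting entirely is to keep the statement in a file and run it with cqlsh -f (a sketch with the same query; the quoted heredoc delimiter keeps the shell from touching anything inside):
$ cat > query.cql <<'EOF'
SELECT "PLAN_ID", "A_ADVERTISER_ID", "ADVERTISER_NAME"
FROM key1.plan_advertiser
WHERE "PLAN_ID" = '382633' AND "A_ADVERTISER_ID" = 15019;
EOF
$ cqlsh -u cassdba -p flynnLives -f query.cql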
I am trying to output the size of an ARP table from a firewall using an Expect script so it can be graphed. Below is the code, followed by the output displayed on screen:
/usr/bin/expect -f -<< EOD
spawn ssh test@1.2.3.4
sleep 1
expect {
    "*word:" {send "password\r"}
}
sleep 1
expect {
    "*>" {send "show arp all | match \"total ARP entries in table\"\r"}
}
sleep 1
expect {
    "*>" {send "exit\r"}
}
expect eof
EOD
spawn ssh test@1.2.3.4
FW-Active
Password:
Number of failed attempts since last successful login: 0
test@FW-Active(active)> show arp all | match "total ARP entries in table"
total ARP entries in table : 2861
What I am trying to do is output only the numeric value from the "total ARP entries in table" line. I am assuming I need to somehow use "cut" or "awk" or something to extract only the numbers, but I am not having any luck. Any help is greatly appreciated.
Store the output of that whole command in a variable, let's say a. Something like this will probably work. Since you're using Expect, you'll want to figure out how to store that output in a variable so you can manipulate it; I stored the output as $a in my example.
$ echo $a
total ARP entries in table : 2861
$ echo ${a% *}
total ARP entries in table :
$ echo ${a% *}-
total ARP entries in table : -
$ echo ${a##* }
2861
Logic explanation (parameter/variable substitution in Bash):
1) To remove/strip the left-hand side, use # to match up to the first occurrence (reading from the left) and ## to match up to the last occurrence. It works by giving *<value> within the { } braces.
2) To remove/strip the right-hand side, use % to match back to the first occurrence (reading from the right) and %% to match back to the last occurrence. It works by giving <value>* within the { } braces.
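All four operators side by side on the same string, for comparison:
$ a="total ARP entries in table : 2861"
$ echo "${a#* }"    # strip shortest match from the left
ARP entries in table : 2861
$ echo "${a##* }"   # strip longest match from the left
2861
$ echo "${a% *}"    # strip shortest match from the right
total ARP entries in table :
$ echo "${a%% *}"   # strip longest match from the right
total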
Or if you don't want to store the output or anything, then simply do this:
show arp all | match "total ARP entries in table" | grep -o "[0-9][0-9]*"
Or (the following assumes that the output format of the command doesn't change):
show arp all | match "total ARP entries in table" | sed "s/ *//g"|cut -d':' -f2
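As for storing the value inside the Expect script itself (suggested above), Expect's -re flag with a capture group puts submatches into expect_out; a hedged sketch to drop into the existing script in place of one of the plain "*>" patterns:
expect {
    -re {total ARP entries in table\s*:\s*(\d+)} {
        set arp_count $expect_out(1,string)
        puts "ARP entries: $arp_count"
    }
}
Note that if you embed this in the unquoted heredoc shown earlier, quote the delimiter (<< 'EOD') or escape the $ so the shell doesn't expand $expect_out before Expect sees it.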
I have a problem with a psql query in bash.
I really don't know why psql treats the value HUB as a column.
psql -q -A -h Some_host \
-U User -d datashema -p 1111 -t -f query.txt -c 'SELECT id, text FROM great_201704 WHERE id = 10 and
text = 'HUB' ;'
ERROR: column "hub" does not exist in great_201704
You read your single quotes as if they nest:
-c 'SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;'
^---------------------------------1--------------------------------^
^-2-^
Bash reads them as two single-quoted strings with a literal word between them:
-c 'SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;'
^------------------------------1----------------------------^
^2-^
This is equivalent to not having single quotes around HUB, which is why psql thinks it's a column.
The easiest way to embed one set of quotes inside another string is to use two different types of quotes:
psql -q -A -h Some_host -U User -d datashema -p 1111 -t -f query.txt \
-c "SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;"