I have the following working copy command:
export PGPASSWORD=****;psql -h "****" -U "****" -c "\copy (SELECT id, \"accountType\" as \"ACCOUNT TYPE\", \"actReasonCode\" as \"ACT REASON\",\"adrCity\" as \"ADR CITY\" FROM schemaName.\"tableName\" where id > 0 and id<=8238226) TO STDOUT CSV HEADER" dbName >/sourcefile/test.CSV
Now, instead of giving alias names in the query, my requirement is to take the header names as input and append them to the copy command, something like below:
export PGPASSWORD=****;psql -h "****" -U "****" -c "\copy (SELECT id, \"accountType\", \"actReasonCode\",\"adrCity\" FROM schemaName.\"tableName\" where id > 0 and id<=8238226) TO STDOUT CSV HEADERS(ID,\"ACCOUNT TYPE\",\"ACT REASON\",\"ADR CITY\") " dbName >/sourcefile/test.CSV
Can anyone please help me with this?
You might have misunderstood the HEADER keyword.
HEADER [boolean]
Specifies that the file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table, and on input, the first line is ignored. This option is allowed only when using CSV format.
From https://www.postgresql.org/docs/9.2/sql-copy.html
As you can see, the option only enables or disables the printing of a header line; it is not meant to select which columns or header names are printed. Those come from the SELECT (and its column aliases) at the beginning of your query, which is why your original command with the AS aliases is the right approach.
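If the header names really do arrive as input, one way to follow that advice (a rough sketch with hypothetical variable names; the column list and the header list have to line up) is to build the AS aliases from the input and keep the plain HEADER option:
cols=(id '"accountType"' '"actReasonCode"' '"adrCity"')      # source columns
hdrs=('"ID"' '"ACCOUNT TYPE"' '"ACT REASON"' '"ADR CITY"')   # header names supplied as input
sel=""
for i in "${!cols[@]}"; do sel+="${cols[$i]} AS ${hdrs[$i]},"; done
sel="${sel%,}"
export PGPASSWORD=****
psql -h "****" -U "****" -c "\copy (SELECT $sel FROM schemaName.\"tableName\" where id > 0 and id<=8238226) TO STDOUT CSV HEADER" dbName >/sourcefile/test.CSV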
Related
I have the following queries:
Input 1:
sqoop eval --connect "jdbc url" --username '' --password '' --query 'select id, city from table1'
Input 2:
sqoop eval --connect "jdbc url" --username '' --password '' --query 'select city from table2 where id in ('')'
I want to use the values of the id column from the first query in the second query, in Unix. How can I achieve the desired output?
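One way to do this from the shell (a rough sketch; the exact parsing depends on sqoop eval's table-style output, and the connection details are placeholders as above) is to capture the ids from the first call, turn them into a comma-separated list, and substitute that into the second query:
ids=$(sqoop eval --connect "jdbc url" --username '' --password '' \
        --query 'select id from table1' \
      | grep -oE '[0-9]+' | paste -sd, -)    # keep only the numeric ids from sqoop's bordered output
sqoop eval --connect "jdbc url" --username '' --password '' \
    --query "select city from table2 where id in ($ids)"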
I have a list of groups and I need to extract users knowing only part of the memberOf value.
Example:
# for group AAA
ldapsearch -w V1ZEYK -D "cn=XXXXXX,ou=Service Users,ou=User Accounts,dc=uuu,dc=yyy,dc=xxx,dc=net" -H ldaps://<link>:<port> -b "dc=uuu,dc=yyy,dc=xxx,dc=net" -s sub memberOf="CN=AAA,OU=Groups,DC=uuu,DC=yyy,DC=xxx,DC=net" | grep "cn:"
# returns "cn: 12345"
# for group BBB
... -s sub memberOf="CN=BBB,DC=uuu,DC=yyy,DC=xxx,DC=net" | grep "cn:"
# returns nothing, meaning the DC part of memberOf is different and I don't know what it is
How should I pass a partial filter so that the search returns the user cns?
Is there a way to pass wildcard filters for the -D and -b flags (and should I)?
Tried:
... -s sub memberOf="CN=BBB*"...
... -s sub memberOf="*CN=BBB*"...
# returns nothing
The LDAP specification does not allow substring searches on Distinguished Names
(like "CN=BBB,DC=uuu,DC=yyy,DC=xxx,DC=net"), and memberOf is a DN-valued attribute, so wildcards in that filter will never match.
I think you will need to write some code.
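One workaround in plain shell (a sketch only; it assumes an Active Directory-style schema where groups have objectClass=group, which the DC-style naming suggests, and it reuses the bind DN, password and base from the question) is to look up the group's full DN first and then use that exact DN in the memberOf filter:
# 1. find the group's full DN by its CN ("1.1" asks the server to return no attributes, just the DN line)
GROUP_DN=$(ldapsearch -w V1ZEYK -D "cn=XXXXXX,ou=Service Users,ou=User Accounts,dc=uuu,dc=yyy,dc=xxx,dc=net" \
    -H ldaps://<link>:<port> -b "dc=uuu,dc=yyy,dc=xxx,dc=net" -s sub "(&(objectClass=group)(cn=BBB))" 1.1 \
    | awk '/^dn: /{sub(/^dn: /,""); print; exit}')
# 2. search for users whose memberOf equals that exact DN
ldapsearch -w V1ZEYK -D "cn=XXXXXX,ou=Service Users,ou=User Accounts,dc=uuu,dc=yyy,dc=xxx,dc=net" \
    -H ldaps://<link>:<port> -b "dc=uuu,dc=yyy,dc=xxx,dc=net" -s sub "memberOf=$GROUP_DN" | grep "cn:"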
I have a problem with a psql query in bash.
I really don't know why psql treats the value HUB as a column.
psql -q -A -h Some_host -U User -d datashema -p 1111 -t -f query.txt \
    -c 'SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;'
ERROR: column "hub" does not exist in great_201704
You read your single quotes as if they nest:
-c 'SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;'
   ^---------------------------------1--------------------------------^
                                                               ^-2-^
Bash reads them as two single-quoted strings with a literal word between them:
-c 'SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;'
   ^------------------------------1----------------------------^
                                                                   ^2-^
This is equivalent to not having single quotes around HUB at all, which is why psql thinks it's a column.
The easiest way to embed one set of quotes in another string is to just use two different types of quotes:
psql -q -A -h Some_host -U User -d datashema -p 1111 -t -f query.txt \
-c "SELECT id, text FROM great_201704 WHERE id = 10 and text = 'HUB' ;"
I have two files: one contains a list of AP names, and the other contains the AP names again, but this time the controller IP for each AP is listed before the AP's name.
File 1:
AP1
AP2
Ap3
AP4
Ap5
Ap6
...
File 2:
1.1.1.1,Ap1
2.2.2.2,Ap2
3.3.3.3,Ap3
4.4.4.4,Ap4
6.6.6.6,Ap6
...
How can I match up the names from file 1 with the names in file 2 so that the output resembles the following?
1.1.1.1,Ap1
2.2.2.2,Ap2
3.3.3.3,Ap3
4.4.4.4,Ap4
IP Not Found,Ap5
6.6.6.6,Ap6
I was thinking that I could use the comm command, but I do not know of a good way to only compare the names and not the IPs. I could also just grep for every single name, but that would take forever (there are around 8,000 AP names).
The join command will do the job (note that both files have to be sorted by AP name first; your example data already is, but if your real-world data isn't, run the first file through sort -f, and the second file through sort -f -t , -k 2).
join -i -t , -1 1 -2 2 -a 1 -o 2.1,1.1 -e "IP Not Found" file1.txt file2.txt
-i means ignore case, -t , means fields are separated by commas, -1 1 means join on the first (only) field of the first file; -2 2 means join on the second field of the second file. -a 1 means include rows from the first file that don't have any matches. -o 2.1,1.1 specifies the output format: the first field of the second file (IP), then the first field of the first file (AP). -e "IP Not Found" means to output "IP Not Found" in place of an empty field.
This will output
1.1.1.1,AP1
2.2.2.2,AP2
3.3.3.3,Ap3
4.4.4.4,AP4
IP Not Found,Ap5
6.6.6.6,Ap6
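If the real files are not already sorted, the sorting can also be done inline with process substitution (a bash-specific sketch; sort -f gives the case-insensitive order that join -i expects):
join -i -t , -1 1 -2 2 -a 1 -o 2.1,1.1 -e "IP Not Found" \
    <(sort -f file1.txt) <(sort -f -t , -k 2 file2.txt)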
This awk snippet should do it:
awk 'BEGIN{FS=","}
(FNR==NR){a[tolower($2)]=$0}
(FNR!=NR){if (a[tolower($1)]!="")
print a[tolower($1)]
else
print "IP Not Found," $1}' file2.txt file1.txt
producing in your case:
1.1.1.1,Ap1
2.2.2.2,Ap2
3.3.3.3,Ap3
4.4.4.4,Ap4
IP Not Found,Ap5
6.6.6.6,Ap6
I have what I thought was a simple problem. I have a mysqldump file occupying several GBs, and I want to create a script that extracts just the lines pertaining to one table (the table name is passed to the script as a variable) and saves them to a new file.
The section I want to extract always starts with:
-- Table structure for table `myTable`
And always ends with:
UNLOCK TABLES;
Where the table name and the number of lines I want afterward are variable. I'm able to find the start of the section with:
START=$(grep -n "Table structure for table \`$2\`" "$1" | awk -F ":" '{print $1}')
Where $1 is the file to search and $2 is the table name. This works just fine, but then I'm stuck. I know I can extract the lines with sed once I have the ending line number, but finding that ending line number is the tricky part.
I need to find the line number of the first occurrence of UNLOCK TABLES; after my $START line number, and I'm lost on how to do that.
For more detail, here's an example of one section of text I would want to extract:
--
-- Table structure for table `myTable`
--
DROP TABLE IF EXISTS `myTable`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `myTable` (
`column1` varchar(12) NOT NULL,
`column2` varchar(20) DEFAULT NULL,
PRIMARY KEY (`column1`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table `myTable`
--
LOCK TABLES `myTable` WRITE;
/*!40000 ALTER TABLE `myTable` DISABLE KEYS */;
INSERT INTO `myTable` VALUES ('test11', 'test12'),('test21', 'test22');
/*!40000 ALTER TABLE `myTable` ENABLE KEYS */;
UNLOCK TABLES;
Thanks in advance.
Use sed; you don't need to worry about explicit line numbers; you can just select the range using regexes for both the first and last line.
firstLine="-- Table structure for table \`$2\`"
secondLine="UNLOCK TABLES;"
sed -n "/$firstLine/,/$secondLine/p" "$1"
The -n suppresses sed's default output, so only the lines in the selected range (printed by the p command) are written.
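For example, if those three lines are saved as a script (extract_table.sh is just a hypothetical name; the dump file is $1 and the table name is $2, as in your grep), you would run:
./extract_table.sh dump.sql myTable > myTable.sql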