I am trying to execute a Hive command from Java code. Hive is installed on a Linux virtual machine, and the Java code runs on a remote Windows machine that acts as the client. I am able to successfully call Hive commands like:
hive -e 'Select * from mytable;'
But when I tried the LOAD command with this syntax:
hive -e 'LOAD DATA LOCAL INPATH '/home/mapr/file.csv' INTO TABLE mytable;'
It throws an error saying "FAILED: ParseException line 1:23 mismatched input '/' expecting StringLiteral near 'INPATH' in load statement".
This looks like a syntax error near the file path, probably a quoting/escape issue, because I am able to execute "Select * from mytable" without error.
Can anyone help me with the syntax for the Hive LOAD command using hive -e?
Looking at your error message, it is clear that the nested single quotes terminate the shell's quoted string early and mangle your Hive command.
Use single quotes and double quotes at the two different levels so that the shell quoting and the HiveQL string literal do not collide, and it will work.
The corrected Hive statement is given below:
hive -e 'LOAD DATA LOCAL INPATH "/home/mapr/file.csv" INTO TABLE mytable;'
Hope this helps!
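If you prefer to keep the HiveQL string literal in single quotes, two alternatives are to escape the inner quotes for the shell, or to put the statement in a script file and run it with hive -f, which sidesteps shell quoting entirely. A minimal sketch (the file name load.hql is just an example):
hive -e 'LOAD DATA LOCAL INPATH '\''/home/mapr/file.csv'\'' INTO TABLE mytable;'
echo "LOAD DATA LOCAL INPATH '/home/mapr/file.csv' INTO TABLE mytable;" > load.hql
hive -f load.hql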
How do I call a SQL query from a bash shell script? I tried the script below, but there seems to be a syntax error:
#!/bin/sh
LogDir='/albt/dev/test1/test2/logs' # log file
USER='test' #Enter Oracle DB User name
PASSWORD='test' #Enter Oracle DB Password
SID='test' #Enter SID
sqlplus -s << EOF > ${LogDir}/sql.log
${DB_USER_NAME}/${DB_PASSWORD}#${DB_SID}
SELECT count(1) FROM dual; # SQL script here to get executed
EOF
var=$(SELECT count(1) FROM dual)
I'm getting an "unexpected token" error.
#!/bin/sh
user="test"
pass="test"
var="$1"
sqlplus -S $user/$pass <<EOF
SELECT * FROM tableName WHERE username=$var;
exit;
EOF
I'm getting "sqlplus: command not found" when I run the above script.
Can anyone guide me?
In your first script, one error is in the last line. The whole line is
var=$(SELECT count(1) FROM dual)
This means that the shell is supposed to execute a program named SELECT with the parameters count(1), FROM, and dual, and store its stdout in the variable var. I think you instead want to feed the SELECT statement into sqlplus, i.e. something like
var=$(sqlplus .... )
In your second script, the error message simply means that sqlplus cannot be found in your PATH. Either add its directory to the PATH or invoke the command via its absolute path.
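A minimal sketch of capturing the query result, assuming sqlplus is on the PATH and reusing the credentials from the question:
#!/bin/sh
USER='test'
PASSWORD='test'
SID='test'
# Feed the statement to sqlplus on stdin and capture its stdout;
# -s suppresses banners, and the SET line strips headings and
# feedback so only the count itself lands in the variable.
var=$(sqlplus -s "${USER}/${PASSWORD}@${SID}" <<EOF
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SELECT count(1) FROM dual;
EXIT;
EOF
)
echo "Row count: $var"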
I am a beginner in coding. I am currently trying to read a file (which was imported into HDFS using sqoop) with PySpark. The Spark job is not progressing and my Jupyter PySpark kernel seems to be stuck. I am not sure whether I imported the file into HDFS the right way, or whether the code used to read the file with Spark is correct.
The sqoop import command I used is as follows:
sqoop import --connect jdbc:mysql://upgraddetest.cyaielc9bmnf.us-east-1.rds.amazonaws.com/testdatabase --table SRC_ATM_TRANS --username student --password STUDENT123 --target-dir /user/root/Spar_Nord -m 1
The PySpark code I used is:
df = spark.read.csv("/user/root/Spar_Nord/part-m-00000", header = False, inferSchema = True)
Also, please advise how we can tell what file type sqoop imported. I just assumed .csv and wrote the PySpark code accordingly.
I would appreciate some quick help.
When pulling data into HDFS via sqoop, the default field delimiter for plain text imports is the comma, but it is safest not to rely on the default: sqoop creates a generic delimited text file based on the parameters passed into the sqoop command. To set the delimiter explicitly so the output matches a generic csv format, you should add:
--fields-terminated-by <char>
So your sqoop command would look like:
sqoop import --connect jdbc:mysql://upgraddetest.cyaielc9bmnf.us-east-1.rds.amazonaws.com/testdatabase --table SRC_ATM_TRANS --username student --password STUDENT123 --fields-terminated-by ',' --target-dir /user/root/Spar_Nord -m 1
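To see what sqoop actually wrote (and which delimiter it used), you can peek at the part file directly in HDFS before pointing Spark at it:
# List the import directory, then print the first few lines of the part file
hdfs dfs -ls /user/root/Spar_Nord
hdfs dfs -cat /user/root/Spar_Nord/part-m-00000 | head -5
If the fields turn out to be separated by something other than a comma, pass that character to spark.read.csv via its sep option rather than relying on the default.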
I want to write a record using the Execute Command stage in a DataStage sequence job.
I am using the syntax below.
echo "InputFileName;2021-03-25 06:54:58+01:00;AAA;Batch;FOP;FUNCTIONAL;INFO;Extra key columns sent in Key Values;201;OK;SubmitRequest;ERROR;CDIER0961E: The REST step is unable to invoke the REST service, cause=javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated;;SupplyOnhand_20210325065458;;;;0;CDIER0961E: The REST step is unable to invoke the REST service, cause=javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated;;;;12;1;2021-03-25 05:55:18+00:00;2021-03-25 05:55:33+00:00" >> Filename
Below is the error I am getting.
Output from command ====>
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `echo
Unhandled failure (-1) encountered executing command echo
I tried running this manually on the Linux server and it works there, but it fails in the DataStage job.
Can someone please help?
You need to escape any character that is significant to the shell, or wrap the whole string in hard (single) quotes rather than soft (double) quotes.
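A sketch of the same command with hard quotes, with the middle of the message elided here for brevity; inside single quotes the shell no longer interprets metacharacters such as (, ) and ;, though a literal single quote inside the message would still need the '\'' escape:
echo 'InputFileName;2021-03-25 06:54:58+01:00;AAA;Batch;FOP;...;2021-03-25 05:55:33+00:00' >> Filename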
We are currently working on a project involving an "ordinary" relational database, but we wish to enable SPARQL queries against this database.
d2rq.org is a tool that enables SPARQL to be run against the database with the help of a .ttl file which defines the database-to-RDF mapping.
This .ttl file can be built automatically with a D2RQ tool named "generate-mapping".
http://d2rq.org/generate-mapping takes quite a few arguments, some preceded by a single dash "-" and some by a double dash "--". My challenge is that any argument preceded by a double dash generates this error:
Command:
./generate-mapping -u root -p password -o testmappingLocal.ttl --verbose jdbc:mysql:///iswc
Result:
Exception in thread "main" java.lang.IllegalArgumentException: Unknown argument: --verbose
at jena.cmdline.CommandLine.handleUnrecognizedArg(CommandLine.java:215)
at jena.cmdline.CommandLine.process(CommandLine.java:177)
at d2rq.generate_mapping.main(generate_mapping.java:41)
Any help with the double-dash arguments will be greatly appreciated.
OS: Ubuntu Linux, D2RQ version: 0.8
To use D2RQ with a MySQL database, you first generate a mapping file and then dump RDF from it.
1) Generate the mapping file:
./generate-mapping -u root -p root -o /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl jdbc:mysql://localhost:3306/d2rq
Notes:
1. -u root -p root -> the MySQL username and password.
2. /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl -> the output path for the mapping file.
3. jdbc:mysql://localhost:3306 -> the JDBC connection URL.
4. /d2rq -> the database name.
2) Create RDF from the mapping file with dump-rdf.
The RDF syntax to use for output is selected with -f. Supported syntaxes are "TURTLE", "RDF/XML", "RDF/XML-ABBREV", "N3", and "N-TRIPLE" (the default); "N-TRIPLE" works best for large databases.
Command:
./dump-rdf -f RDF/XML -b localhost:3306 -o /home/bigtapp/Documents/d2rqgenerate_mapping/dumpfile.rdf /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl
Finally, in apache-jena-fuseki, create a dataset, upload the RDF file to the server, and run your SPARQL queries against it; you will get the results.
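Once the dataset exists in Fuseki you can test it from the shell; a sketch assuming the dataset is named /d2rq and Fuseki is listening on its default port 3030:
# Send a SPARQL query to the dataset's query endpoint
curl -G http://localhost:3030/d2rq/sparql \
  --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'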
I have an 800 MB PostgreSQL database backup; I even had a hard time opening the file because there was not enough memory.
I tried to restore the file, but I receive an error while restoring. Does anyone know how to fix it?
I run this command:
psql -U root -d mydatabase -f dbfile.sql
I receive this message:
ERROR: syntax error at or near "`"
LINE 1: INSERT INTO cv_balance` VALUES (4279704,3431,'2008-08-10 2...
Please help.
It looks like, for some reason, a ` mark got either added after cv_balance or removed before cv_balance. Look at the first line of your SQL file; it currently probably reads something like this:
INSERT INTO cv_balance` VALUES ...(continued)...
modify it to read like this:
INSERT INTO cv_balance VALUES ...(continued)...
(i.e. remove the errant backquote)
If you need an editor that can handle large files, try something like vim.
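If you would rather not open the file at all, you can inspect and patch it from the shell; a sketch, assuming the backquotes occur only as MySQL-style identifier quoting and never inside your data:
# Show the offending line without loading the whole file into memory
head -n 1 dbfile.sql
# Strip every backquote (verify the assumption above first!)
sed -i 's/`//g' dbfile.sql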