How Can I Import Data From a SQL Server CE Database File To Excel?

How can I import data from a table in SQL Server CE to Excel?
I tried this from Excel, but I cannot find an option for it in Excel.
Please help me with this.

You can try the sqlcecmd command from the command line.
sqlcecmd -d "Data Source=C:\NW.sdf" -q "SELECT * FROM my_table" -h 0 -s "," -o my_data.csv
Of course, you need to specify the correct location of your Data Source in the -d option.
You need to specify the columns you wish to SELECT and the table in the FROM section of the -q command.
The "-h 0" flag will remove column names and any dashed lines.
The "-s ','" flag specifies the delimiter you wish to use between fields.
And finally, the -o option specifies your output file.
You may also need to specify a username (-U) and password (-P) if these values have been set.
Before you can run sqlcecmd, you will need to make sure that the executable files for SQL Server CE are in your PATH.
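For example, in a Windows command prompt you could add the directory for the current session like this (a sketch; the install path below is an assumption and may differ on your machine):
set PATH=%PATH%;C:\Program Files\Microsoft SQL Server Compact Edition\v3.5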
Since you haven't specified a programming language, I'm assuming you are doing this manually.
After you have the CSV file, Excel should be able to open it without a problem.
EDIT: If you do not have the SQLCECMD application, then download and set it up first.

Related

Search for specific string using shell script

I have a set of log files and I need to find which tables from a specific database are being used by searching for them in the log files. The search keyword should start with 'database.' and should be able to grep/parse all table names with the preceding database name. Can anyone advise on how this can be achieved using a shell script?
Thanks
This is very easy:
grep "database" * : this shows the filename + the entire line
grep -l "database" * : this shows only the filename
grep -o "database [a-z]*" * : this shows the filename + the database name
grep -h -o "database [a-z]*" * : this shows the database name
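If the keyword really has the 'database.table' form described in the question, a pattern along these lines should list each distinct qualified table name (a sketch, assuming table names contain only letters, digits, and underscores; adjust the file glob to match your logs):
grep -h -o "database\.[A-Za-z0-9_]*" *.log | sort -u : this shows each distinct database.table match once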

Insert data from STDIN to specific position of string

Case: I want to insert some data from a file at a specific position in a string.
Example:
cat users.log | mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE ${USERS_IDS}'
I want to replace the ${USERS_IDS} string with the data from the file.
I'm sure this case is very common, but I did not find a suitable solution.
To insert the contents of a file at a specific position in a string, you use the facilities of your shell.
For example, Bash has the $(< filename) construction:
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE '"$(< users.log)"
If the data to be inserted needs to be edited a little, you have the $(command) construction:
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE '"$(< users.log sed -e 's/.../.../')"
Whether the file substitution or the command substitution is to be enclosed in double quotes depends on the specific use case.
And, if the intended use is really to feed values into a MySQL command, beware of Bobby Tables (SQL injection).
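For the MySQL example specifically, here is a minimal sketch that builds the condition from the file first, assuming users.log holds one numeric id per line (the grep filter also keeps non-numeric junk out of the query):
ids=$(grep -E '^[0-9]+$' users.log | paste -sd, -)
mysql -h localhost -u mysql -e "SELECT id FROM users WHERE id IN (${ids})"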

How to solve 'ascp: "user@host:" in all sources must match' when downloading SRA data on Linux?

I'm running the command ascp -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429 /SRR5907429 .sra ~/sra_download on Linux
and I get this error -
"user#host:" in all sources must match
What does this mean?How to solve it?
First,"-private"should be removed.Secondly,need to correct the space error in the sentence,example "SRR5907429 ".'ascp -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m anonftp#ftp.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429/SRR5907429.sra ~/sra_download'is the correct answer we need.enter image description here
Your problem:
the ascp syntax is:
Usage: ascp [OPTION] SRC... DEST
SRC to DEST, or multiple SRC to DEST dir
SRC, DEST format: [[user#]host:]PATH
Display full usage: -h,--help
You get this by simply executing ascp; "ascp -h" gives more detail, and there is a manual as well: https://download.asperasoft.com/download/docs/entsrv/3.9.1/es_admin_linux/webhelp/index.html#dita/ascp_2.html
It is pretty much like scp, but it also works in "pull" mode.
So, you have:
options, then one or multiple sources, then a single destination (always the last argument).
If the destination is user@server:folder, then you do a push.
If a source is user@server:folder, then you do a pull.
Globally, you can only do a push or a pull in a single invocation, but there can be multiple sources and always a single destination on the command line (see the two one-line examples just below).
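For example (placeholder key, names, and paths, just to illustrate the two directions):
ascp -i key.pem localfile.txt user@server:/uploads : a push (local source, remote destination)
ascp -i key.pem user@server:/data/file.txt /tmp : a pull (remote source, local destination)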
In your case you have:
options: -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m
sources: anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429 /SRR5907429 .sra
destination: ~/sra_download
The first source is: anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429
The other sources are: /SRR5907429 and .sra
So you specify one remote source, two local sources, and one local destination.
This is the error you get.
My advice:
Do not use the legacy syntax, as you did; instead, use the advanced syntax:
ascp [options] --mode=<send|recv> --user=<user> --host=<server> sources... destination
There are plenty of options; for instance, if all your source files are in the same folder, you can use: --source-prefix=
You can also use a file list (i.e. a file that contains the list of files you want to transfer, in case it is long and generated by a script) or even a file pair list.
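For instance, the download from the question could be rewritten in the advanced syntax roughly like this (a sketch following the usage above, untested):
ascp -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m --mode=recv --user=anonftp --host=ftp.ncbi.nlm.nih.gov /sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429/SRR5907429.sra ~/sra_download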
Note also that there is an interesting front end for Aspera command-line transfers:
https://www.rubydoc.info/gems/asperalm

Need help - Getting an error: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)

Checking if anybody else has had a similar issue.
Code in the shell script:
## Convert file into Unix format first.
## THIS is IMPORTANT.
#####################
dos2unix "${file}" "${file}";
#####################
## Actual DB Change
db_change_run_op="$(ssh -qn ${db_ssh_user}#${dbserver} "sqlplus $dbuser/${pswd}#${dbname} <<ENDSQL
#${file}
ENDSQL
")";
Summary:
1. From a shell script (on a SunOS source server) I'm running a sqlplus session via ssh on a target machine to run a .sql script.
2. The output of this target ssh session (running sqlplus) is stored in a variable within the shell script. Variable name: db_change_run_op (as shown above in the code snippet).
3. For most of the .sql scripts (whose paths the variable "${file}" stores), the shell script runs fine and returns the output of the .sql file (run on the target server via ssh from the source server), provided the .sql file contains something which doesn't take much time to complete or generates a reasonable number of output lines.
For example, let's assume the .sql I want to run does the following; then it runs fine.
select * from database123;
update table....
alter table..
insert ....
...some procedure .... which doesn't take much time to create....
...some more sql commands which complete within a few minutes to an hour....
4. Now, the issue I'm facing is:
Let's assume I have a .sql file where a single select command on a table returns a few hundred thousand up to 1-5 million lines, i.e.
select * from database321;
Assume the above select generates output on that scale.
In this case, I'm getting the following error message thrown by the shell script (running on the source server).
Error:
./db_change_load.sh: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)
My questions:
1. Did the .sql script complete? I assume yes. But how can I get the output LOG file of the .sql run generated on the target server directly? If this can be done, then I won't need the variable to hold the output of the whole ssh sqlplus session and then create a log file on the source server by doing [ echo "${db_change_run_op}" > sql.${file}.log ].
2. I assume the error occurs because the output, i.e. the number of lines generated by the sqlplus session over ssh, is so big that it exceeds the Bash variable size limit, hence the xrealloc error.
Please advise on the above two questions if you have any experience, or on how I can solve this.
I'll try using "| tee /path/on.target.ssh.server/sql.${file}.log" right after the <<ENDSQL, or after the final closing ENDSQL (heredoc keyword), wondering if that will work or not.
OK, got it working. No more storing the output in a variable and then echoing $var to a file.
Luckily, I had the same mount point on both the source and target servers, i.e. if I go to /scm on the source and on the target, the mount (df -kvh .) shows the same Share/NAS mount value.
Filesystem size used avail capacity Mounted on
ServerNAS02:/vol/vol1/scm 700G 560G 140G 81% /scm
Now, instead of using the variable to store the whole output of the ssh session calling the sqlplus session, all I did was create a file on the remote server using the following code.
## Actual DB Change
#db_change_run_op="$(ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
#set echo off
#set echo on
#set timing on
#set time on
#set serveroutput on size unlimited
#@${file}
#ENDSQL
#")";
ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
It seems like "unlimited" doesn't work in 11g, so I had to use the 1000000 value (these small sql commands help to show each command with its output, show the clock time for each output line, etc.).
But basically, in the above code, I'm calling the ssh command directly without using the variable="$(.....)" approach, and right after the <<ENDSQL heredoc the output is piped through tee to a log file.
Even if I didn't have the same mount, I could have tee'd the output to a file on a remote server path (not accessible from the source server); at least I could then see up to what level the .sql command completed or generated output, since the output now goes directly to a file on the remote server, and Unix/Linux doesn't care much about the file size as long as there is space left.
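For the record, without a shared mount a similar effect can be achieved by copying the script over, running it with the output redirected on the remote side, and fetching the log afterwards (a sketch with placeholder user, host, and credential names):
scp "${file}" user@dbserver:/tmp/change.sql
ssh -qn user@dbserver "sqlplus dbuser/password@dbname @/tmp/change.sql > /tmp/sql.log 2>&1"
scp user@dbserver:/tmp/sql.log .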

D2RQ parameters for generate-mapping

We are currently working on a project involving an "ordinary" relational database, but we wish to enable SPARQL requests towards this database.
d2rq.org provides a tool that enables SPARQL queries to be run against the database with the help of a .ttl file which defines the database-to-RDF mapping.
This .ttl file can be built automatically with a D2RQ tool named "generate-mapping".
http://d2rq.org/generate-mapping takes quite a few arguments, some preceded by a single dash "-" and some by a double dash "--". My challenge is that any argument preceded by a double dash generates this error:
Command:
./generate-mapping -u root -p password -o testmappingLocal.ttl --verbose jdbc:mysql:///iswc
Result:
Exception in thread "main" java.lang.IllegalArgumentException: Unknown argument: --verbose
at jena.cmdline.CommandLine.handleUnrecognizedArg(CommandLine.java:215)
at jena.cmdline.CommandLine.process(CommandLine.java:177)
at d2rq.generate_mapping.main(generate_mapping.java:41)
Any help with the double-dash arguments will be greatly appreciated.
OS: Ubuntu Linux, D2RQ version: 0.8
Using D2RQ with a MySQL database: generating the mapping file and the RDF file.
1) Generate the mapping file:
./generate-mapping -u root -p root -o /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl jdbc:mysql://localhost:3306/d2rq
Note: 1. root -p root -> the MySQL username and password.
2. /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl -> the output file path.
3. jdbc:mysql://localhost:3306 -> the MySQL JDBC URL.
4. /d2rq -> the database name.
2) Create the RDF file from the mapping file:
Use the following command.
The RDF syntax to use for output. Supported syntaxes are “TURTLE”, “RDF/XML”, “RDF/XML-ABBREV”, “N3”, and “N-TRIPLE” (the default). “N-TRIPLE” works best for large databases.
command:
./dump-rdf -f RDF/XML -b localhost:3306 -o /home/bigtapp/Documents/d2rqgenerate_mapping/dumpfile.rdf /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl
Finally, in apache-jena-fuseki, create a dataset, upload the RDF file to the server, and then run your SPARQL query to get the result.
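A minimal sketch of that last step, assuming a default Fuseki install and an in-memory dataset named /ds:
./fuseki-server --file=/home/bigtapp/Documents/d2rqgenerate_mapping/dumpfile.rdf /ds
The SPARQL query endpoint is then available at http://localhost:3030/ds/query.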
