D2RQ parameters for generate-mapping - linux

We are currently working on a project involving an "ordinary" relational database, but we wish to enable SPARQL queries against this database.
D2RQ (d2rq.org) is a tool that enables SPARQL queries to be run against the database with the help of a .ttl file which defines the database-to-RDF mapping.
This .ttl file can be built automatically with a D2RQ tool named "generate-mapping".
http://d2rq.org/generate-mapping takes quite a few arguments, some preceded with a single dash "-" and some with a double dash "--". My challenge is that any argument preceded with a double dash generates this error:
Command:
./generate-mapping -u root -p password -o testmappingLocal.ttl --verbose jdbc:mysql:///iswc
Result:
Exception in thread "main" java.lang.IllegalArgumentException: Unknown argument: --verbose
    at jena.cmdline.CommandLine.handleUnrecognizedArg(CommandLine.java:215)
    at jena.cmdline.CommandLine.process(CommandLine.java:177)
    at d2rq.generate_mapping.main(generate_mapping.java:41)
Any help with the double-dash arguments will be greatly appreciated.
OS: Ubuntu Linux, D2RQ version: 0.8

Using D2RQ with a MySQL database: generating the mapping file and the RDF dump.
1) Generate the mapping file:
./generate-mapping -u root -p root -o /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl jdbc:mysql://localhost:3306/d2rq
Note:
1. "-u root -p root" are the MySQL username and password.
2. "/home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl" is the path where the output mapping file is saved.
3. "jdbc:mysql://localhost:3306" is the JDBC connection URL for the MySQL driver.
4. "/d2rq" is the database name.
2) Create the RDF dump from the mapping file, using the following command.
The -f option sets the RDF syntax to use for output. Supported syntaxes are “TURTLE”, “RDF/XML”, “RDF/XML-ABBREV”, “N3”, and “N-TRIPLE” (the default). “N-TRIPLE” works best for large databases.
command:
./dump-rdf -f RDF/XML -b localhost:3306 -o /home/bigtapp/Documents/d2rqgenerate_mapping/dumpfile.rdf /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl
Finally, create a dataset in apache-jena-fuseki, upload the RDF file to the server, and then run your SPARQL queries against it to get the results.
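To verify the data once it is loaded, you can send a SPARQL query to Fuseki over HTTP; a minimal sketch, assuming Fuseki's default port 3030 and a hypothetical dataset named "d2rq" (use whatever dataset name you created):
curl 'http://localhost:3030/d2rq/sparql' --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'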

Related

Publishing test results to Azure (VS Database Project, tSQLt, Azure Pipelines, Docker)

I am trying to fully automate the build, test, and release of a database project using Azure Pipelines.
I already have a Visual Studio solution which consists of three database projects. The first project is the database, which contains the tables, stored procedures, functions, data, etc. The second project is the tSQLt framework (v 1.0.5873.27393 if anyone is interested). And finally, the third project is the tSQLt tests.
My goal here is to check the solution into source control, and the pipeline will automatically build the solution, deploy the dacpacs to a build server (Docker in this case), run the tSQLt tests, and publish the results back to the pipeline.
My pipeline works like this.
1. Build the Visual Studio solution
2. Publish the artifacts
3. Set up a Docker container running Ubuntu & SQL Server
4. Install SQLPackage
5. Deploy the dacpacs to the SQL instance
6. Run the tSQLt tests
7. Publish the test results
Everything up to publishing the results is working, but on this step I got the following error:
[warning]Failed to read /home/vsts/work/1/Results.xml. Error : Data at the root level is invalid. Line 1, position 1.
I added another step in the pipeline to display the content of the Results.xml file. It appears like this:
XML_F52E2B61-18A1-11d1-B105-00805F49916B
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
<testsuites><testsuite id="1" name="MyNewTestClassOne" tests="1" errors="0" failures="0" timestamp="2021-02-01T10:40:31" time="0.000" hostname="f6a05d4a3932" package="tSQLt"><properties/><testcase classname="MyNewTestClassOne" name="TestNumberOne" time="0.
I'm not sure if the column name and dashes should be in the file, but I'm guessing not. I added another step to remove them, just leaving me with the XML. But this then gave me a different error to deal with:
##[warning]Failed to read /home/vsts/work/1/Results.xml. Error : There is an unclosed literal string. Line 2, position 1.
This one is a little obvious to spot, because as you'll see above, the XML is incomplete.
Here is the part of my pipeline which runs the tSQLt tests and outputs the results to Results.xml:
- script: |
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'EXEC tSQLt.RunAll;'
  displayName: 'tSQLt - Run All Tests'
- script: |
    cd $(Pipeline.Workspace)
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'SET NOCOUNT ON; EXEC tSQLt.XmlResultFormatter;' -o 'tSQLt_Results.xml'
  displayName: 'tSQLt - Output Results'
I've researched so many blogs and articles on this, and most people are doing the same. Some people use PowerShell instead of sqlcmd, but given that I'm using an Ubuntu machine, this isn't an option here.
I am all out of options, so I am looking for a little help on this.
You are dealing with two problems here: there is noise in your result set that is not XML, and your XML result is truncated after 256 characters. I can help you with both.
What I am doing is basically this:
/opt/mssql-tools/bin/sqlcmd \
-S "localhost, 31114" -U sa \
-P "password" \
-d dbname \
-y0 \
-Q "BEGIN TRY EXEC tSQLt.RunAll END TRY BEGIN CATCH END CATCH; EXEC tSQLt.XmlResultFormatter" \
| grep -w "<testsuites>" \
| tee "resultfile.xml"
A few things to note:
-y0 is important. It sets the length of the XML result set to unlimited, up from the default of 256 characters.
The grep on "<testsuites>" makes sure you only get the XML and not the noise around it.
If you want to run only a subset of your tests, you need to amend the SQL query being passed in; other than that, this is a catch-all "one-liner" to run all tests and get the results in XML format, readable by Azure DevOps.
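Before handing the file to the publish step, it can help to confirm that it is now well-formed XML; a quick sketch, assuming xmllint (from libxml2) is available on the agent:
xmllint --noout resultfile.xml && echo "resultfile.xml is well-formed"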

openSMILE: Trying to Extract emotion features from emobase.conf results in error

I was going through the openSMILE book, and in section 2.5.6 it mentions that in order to extract emotion features, one needs to run a command of this sort:
SMILExtract_Release -C config/emobase.conf -I input.wav -O angers.arff -instname ANGER -classes {anger,fear,disgust} -classlabel anger
However, running this command gives an error:
(ERROR) [0] in commandlineParser : doParse: unknown option '-instname' on commandline!
Wanted to know how to fix this. Is -instname a deprecated option? If so, what should it be replaced with?
This is happening because config/emobase.conf doesn't have a definition for instname in the arffSink component.
openSMILE allows you to define new command-line options for the openSMILE binary directly in the configuration file. If you want to use this parameter, your config file must have a line like this:
instanceName=\cm[instname(N){unknown}:instance name]
You can run opensmile-2.3.0/SMILExtract -h to see which command-line options are available regardless of the configuration file. Other command-line parameters such as -instname should be defined in the config file. Please check "config\shared\standard_data_output.conf.inc" for an example of how to define this command-line option for your configuration file.
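Once that \cm[...] line is added to the arffSink section of your copy of emobase.conf, the option should be accepted; a sketch (note that the -classes and -classlabel options from the book's command would need analogous \cm[...] definitions):
SMILExtract -C config/emobase.conf -I input.wav -O angers.arff -instname ANGER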

Register a variable output with Ansible CLI / Ad-Hoc

Can I register the output of a task? Is there an argument to the ansible command for that?
This is my command:
ansible all -m ios_command -a"commands='show run'" -i Resources/Inventory/hosts
I need this, because the output is a dictionary and I only need the value for one key. If this is not possible, is there a way to save the value of that key to a file?
I have found that you can convert ansible output to json when executing playbooks with "ANSIBLE_STDOUT_CALLBACK=json" preceding the "ansible-playbook" command. Example:
ANSIBLE_STDOUT_CALLBACK=json ansible-playbook Resources/.Scripts/.Users.yml
This will give you a large output because it also shows each host's facts, but will have a key for each host on each task.
This method is not possible with the ansible command, but its output is similar to JSON. It just shows "10.20.30.111 | SUCCESS =>" before the main bracket.
Set the following in your ansible.cfg under the [defaults] group
[defaults]
bin_ansible_callbacks = True
Then, as @D_Esc mentioned, you can use
ANSIBLE_STDOUT_CALLBACK=json ansible all -m ios_command -a"commands='show run'" -i Resources/Inventory/hosts
and can get the json output which you can try to parse.
I have not found a way to register the output to a variable using ad-hoc commands.
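If you only need the value of one key, you can parse that JSON on the shell side instead of registering a variable; a sketch, assuming jq is installed and that the json callback's plays/tasks/hosts structure applies (ios_command puts the device output in a stdout list):
ANSIBLE_STDOUT_CALLBACK=json ansible all -m ios_command -a "commands='show run'" -i Resources/Inventory/hosts \
  | jq -r '.plays[].tasks[].hosts[].stdout[0]' > show_run.txt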

Writing JSON to an output file using tool sstable2json in Cassandra

I want to export the SSTables to JSON, so I am using sstable2json.bat. I am able to run this bat file from the command prompt and can see the JSON result printed on the command prompt itself. I used the following command:
sstable2json H:/cassandra/db/data/191/191/191-191-hd-1-Data.db
I have to write this JSON content to an output file. For that I used the following command:
sstable2json -f H:/output.json H:/cassandra/db/data/191/191/191-191-hd-1-Data.db
But this command shows me an exception like:
You must supply exactly one sstable
Usage: org.apache.cassandra.tools.SSTableExport <sstable> [-k key [-k key [...]]
-x key [-x key [...]]]
Can anyone correct my mistake, if any? I am using Cassandra version 1.1.2.
Just redirect stdout to a file. You can find the documentation for redirection here: http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/redirection.mspx?mfr=true
For example:
sstable2json H:/cassandra/db/data/191/191/191-191-hd-1-Data.db > mysstable.json
The contents will then be in a file named mysstable.json.
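Judging by the usage message above, this version of sstable2json only accepts the sstable path plus optional -k/-x key filters, so the -f flag is what triggers the error; redirection still works together with those filters. For example, with a hypothetical row key:
sstable2json H:/cassandra/db/data/191/191/191-191-hd-1-Data.db -k myrowkey > mysstable.json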

How can I import data from a SQL Server CE database file to Excel?

How can I import data from a table in SQL Server CE to Excel?
I tried this from Excel, but I cannot find an item for this in Excel.
Please help me with this.
You can try the sqlcecmd command from the command line.
sqlcecmd -d "Data Source=C:\NW.sdf" -q "SELECT * FROM my_table" -h 0 -s "," -o my_data.csv
Of course, you need to specify the correct location of your Data Source in the -d option.
You need to specify the columns you wish to SELECT and the table in the FROM section of the -q query.
The "-h 0" flag will remove column names and any dashed lines.
The "-s ','" specifies the delimiter you wish you use between fields.
And finally the -o command specifies your output file.
You may also need to specify a username (-U) and password (-P) if these values have been set.
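A sketch of the same command with those flags added (hypothetical credentials):
sqlcecmd -d "Data Source=C:\NW.sdf" -U my_user -P my_password -q "SELECT * FROM my_table" -h 0 -s "," -o my_data.csv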
Before you can run sqlcecmd, you will need to make sure that the executable files for SQL Server CE are in your path.
Since you haven't specified a programming language, I'm assuming you are doing this manually.
After you have the CSV file, Excel should be able to open it without a problem.
EDIT: If you do not have the SQLCECMD application, then download and set it up first.
