Query does not work with documents - cmis

So I'm trying to curl some queries. I have 2 documents under a folder whose objectId is 8e92c0d5-0fdc-4363-9922-51f9ba93af62.
If I query for the folder itself as in:
curl -uAdministrator:Administrator "http://localhost:8282/nuxeo/atom/cmis/default/query?q=SELECT+*+FROM+cmis:folder+f+WHERE+f.cmis:objectId+=+'8e92c0d5-0fdc-4363-9922-51f9ba93af62'" | tidy -q -xml -indent
I get the intended result.
However, if I query for the documents under the folder:
curl -uAdministrator:Administrator "http://localhost:8282/nuxeo/atom/cmis/default/query?q=SELECT+*+FROM+cmis:document+d+WHERE+IN_FOLDER(d,+'8e92c0d5-0fdc-4363-9922-51f9ba93af62')" | tidy -q -xml -indent
I get no results, even though there are 2 documents under it.
Is there some setting I forgot to turn on? Or am I doing something completely wrong here?

You should add &searchAllVersions=true to your URL. See the Nuxeo CMIS doc about the use of searchAllVersions in Nuxeo.
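For reference, the second query from above with that parameter appended would look like this (a sketch based on the answer; host, port and credentials unchanged):
curl -uAdministrator:Administrator "http://localhost:8282/nuxeo/atom/cmis/default/query?q=SELECT+*+FROM+cmis:document+d+WHERE+IN_FOLDER(d,+'8e92c0d5-0fdc-4363-9922-51f9ba93af62')&searchAllVersions=true" | tidy -q -xml -indent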

Related

Get latest tag/version from github project

I am trying to get the latest version tag of Calico. If I go to https://github.com/projectcalico/calico/releases, I can see they have v3.25.0-0.dev, and the tag marked as "latest" is v3.22.5. So far so good; the issue is that when curling their API, the versions I get are even older than those:
$ curl -s https://api.github.com/repos/projectcalico/calicoctl/releases | jq -r '.[].tag_name' | sort -r --version-sort
v3.21.6
v3.21.5
v3.21.4
v3.21.2
v3.21.1
v3.21.0
v3.20.6
v3.20.5
v3.20.4
v3.20.3
v3.20.2
v3.20.1
v3.20.0
v3.19.4
v3.19.3
v3.19.2
v3.19.1
v3.19.0
v3.18.6
v3.18.5
v3.18.4
v3.18.3
v3.18.2
v3.18.1
v3.17.6
v3.17.5
v3.17.4
v3.16.10
v3.16.9
v3.15.5
Also, this doesn't work:
$ curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep tag_name
"tag_name": "v3.20.6",
Am I doing something wrong, or is it their API that is outdated?
Oops, my bad. The solution was to replace the word calicoctl with calico in the URL I used with curl (I was checking the webpage of one project, and curling another...).
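For reference, the corrected call (the same command as above, with calicoctl replaced by calico):
curl -s https://api.github.com/repos/projectcalico/calico/releases/latest | grep tag_name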

Ldapsearch filtering using variables not displaying data

I am currently trying to query an LDAP server to find whether the email passed to the script exists on our system.
Below is the ldapsearch command I am trying to use:
ldapdata=`ldapsearch -h ### -b "ou=###,o=###" "email=$email" email firstname surname`
echo "ldapdata: $ldapdata"
This works perfectly when the filter includes a predetermined email, i.e. "mail=firstname-surname####"; however, when passed a variable such as $email, the output cannot be manipulated by further grep/awk statements and no data is displayed by the echo statement.
From some Googling I have figured out it could be related to the line wrapping which LDAP uses.
What I have already tried to solve the issue:
| perl -p00e 's/\r?\n //g'
| sed '/^$/d'
-o ldif-wrap=no
My question is: what is the best way to solve this issue? Many thanks in advance.
Just for anyone having the same issue: it was actually due to me writing and testing the program in a Windows environment.
I was pulling the $email variable from a file that was in DOS format.
To fix this, all I did was:
dos2unix $FILELOCATION
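If converting the file is not an option, a minimal alternative (my own sketch, not taken from the thread) is to strip the stray carriage return from the variable itself before building the filter:
# remove the DOS carriage return that would otherwise end up inside the LDAP filter
email=$(printf '%s' "$email" | tr -d '\r')
ldapdata=$(ldapsearch -o ldif-wrap=no -h ### -b "ou=###,o=###" "email=$email" email firstname surname)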

youtube api v3 search through bash and curl

I'm having a problem with the YouTube API. I am trying to make a bash application that will make watching YouTube videos easy on the command line in Linux. I'm trying to fetch some video search results through cURL, but it returns an error: curl: (16) HTTP/2 stream 1 was not closed cleanly: error_code = 1
the cURL command that I use is:
curl "https://ww.googleapis.com/youtube/v3/search" -d part="snippet" -d q="kde" -d key="~~~~~~~~~~~~~~~~"
And of course I add my YouTube data API key where the ~~~~~~~~ are.
What am I doing wrong?
How can I make it work and return the search attributes?
I can see two things that are incorrect in your request:
First, you mistyped "www" as "ww". That is not a valid URL.
Then, curl's "-d" options are for POSTing only, not GETting, at least not by default. You have two options:
Add the -G switch to curl, which makes curl re-interpret the -d options as query parameters:
curl -G https://www.googleapis.com/youtube/v3/search -d part="snippet" -d q="kde" -d key="xxxx"
Rework your URL into a typical GET request:
curl "https://www.googleapis.com/youtube/v3/search?part=snippet&q=kde&key=XX"
As a tip, using bash to interpret the resulting JSON might not be the best way to go. You might want to look into using Python, JavaScript, etc. to run your query and interpret the resulting JSON.
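That said, for quick command-line extraction, jq (used elsewhere on this page) can pull fields out of the response. A rough sketch, assuming a valid API key is stored in $API_KEY:
# list the titles of the returned videos
curl -sG "https://www.googleapis.com/youtube/v3/search" -d part=snippet -d q=kde -d key="$API_KEY" | jq -r '.items[].snippet.title'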

Sequelize-auto for SQLite

I'm trying to autogenerate my data models on Sequelize for SQLite using sequelize-auto on Windows. I have created my SQLite file with schema only, no data inside.
Also installed everything as indicated here.
The command I'm using looks like this:
sequelize-auto -h localhost -u dontcare -d "E:\full\path\to\my\database.db" --dialect sqlite
Also tried with some other path styles like './database.db' etc.
And this is the answer I'm getting:
Executing (default): SELECT name FROM `sqlite_master` WHERE type='table' and name!='sqlite_sequence';
Done!
After this, the script creates a folder called "models" with nothing inside.
Does somebody know what's happening here?
Many thanks!
I've found the problem:
-d should be the database name, not the path to the file.
To specify the file path, you should use the -c option, which points to a JSON file; its storage attribute indicates that path.
The command should look like this:
sequelize-auto -h localhost -u dontcare -d databasename --dialect sqlite -c options.json
And options.json looks like this:
{
  "storage": "./database_file_name.db"
}
I hope this will be useful to someone.
Bye!

Downloading json file from json file curl

I have a json file with the structure seen below:
{
  url: "https://mysite.com/myjsonfile",
  version_number: 69,
}
This json file is accessed from mysite.com/myrootjsonfile
I want to run a data-loading script that accesses mysite.com/myrootjsonfile, loads the JSON content from the address in the url field using curl, and saves the resulting content to local storage.
This is my attempt so far.
curl -o assets/content.json 'https://mysite.com/myrootjsonfile' | grep -Po '(?<="url": ")[^"]*'
Unfortunately, instead of saving the content from mysite.com/myjsonfile, it's saving the content of mysite.com/myrootjsonfile itself. Can anyone point out what I might be doing wrong? Bear in mind I'm completely new to curl. Thanks!
It is saving the content from myrootjsonfile because that is what you are telling curl to do: save that file to assets/content.json, and then grep stdin, which is empty. You need two curl commands, one to download the root file (and process it to find the URL of the second), and the second to download the actual content you want. You can use command substitution for this:
my_url=$(curl https://mysite.com/myrootjsonfile | grep -Po '(?<=url: )[^,]*')
curl -o assets/content.json "$my_url"
I also changed the grep regex; this one matches a string of non-comma characters that follows "url: ".
Assuming you wished to save the file to assets/content.json, note that flags are case sensitive.
Use -o instead of -O to redirect the output to assets/content.json.
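As a side note, if the root file were valid JSON (i.e. with quoted keys, unlike the snippet shown in the question), jq would be a more robust way to extract the URL than a regex. A sketch under that assumption:
my_url=$(curl -s 'https://mysite.com/myrootjsonfile' | jq -r '.url')
curl -o assets/content.json "$my_url"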
