usergrid set type column - cassandra

Is it possible to insert set-type data (a column) in Usergrid, since Cassandra supports set-type columns? I tried
curl -XPOST http://localhost:8080/<org>/<app>/<collection>
-d '{"name":"1974", "category":{"a","b","c"}}'
but it replies with a json_parse error.
Response to the answer below: I know the payload in the above request isn't valid JSON; I was only asking whether there is any way I could create a set-type column (I need to prevent duplicate entries in a single column record). With square brackets, it creates a list-type column, which doesn't prevent duplicate entries.
A core team member replied that in the current version (1.0), Usergrid doesn't support set-type columns.

It certainly is - but your payload isn't valid JSON; in JSON, you use square brackets to specify an Array: [].
Try instead:
curl -X POST http://localhost:8080/<org>/<app>/<collection>
-d '{"name":"1974", "category":["a","b","c"]}'
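The brackets create a list-type column, which (as the asker notes above) does not deduplicate. Since Usergrid 1.0 has no set-type column, a hedged client-side workaround is to deduplicate before POSTing; here is a minimal Python sketch using the requests library, with the URL placeholders taken from the question:
import requests

# Deduplicate on the client, since a JSON array (Usergrid list) allows
# duplicates; sorted() just gives the result a stable order
payload = {
    "name": "1974",
    "category": sorted(set(["a", "b", "c", "a"])),  # -> ["a", "b", "c"]
}

# <org>/<app>/<collection> are placeholders, as in the question
resp = requests.post(
    "http://localhost:8080/<org>/<app>/<collection>",
    json=payload,
)
print(resp.status_code)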

Related

How to have specific information in mariadb.Error [Python]

I'm trying to get specific information from a mariadb.Error. After executing an INSERT with executemany(), I always want information about the problematic row, but I receive two different types of errors.
For example when I have an error about a wrong data type, I have something like:
Invalid parameter type at row 2, column 4
When I have an error about a constraint not respected, I have something like:
Cannot add or update a child row: a foreign key constraint fails
But in the second case, I would like to have something like:
Cannot add or update a child row: a foreign key constraint fails at row X, column Y
Is this possible? Can I configure mariadb.Error to always include the row/column detail?
No, this is not possible.
The first error (Invalid parameter type at row 2, column 4) is generated by the MariaDB Python driver: before sending the execution request and data to the server, the driver checks whether all columns have the same datatype; if not, it raises the exception above.
Afterwards, the driver sends a COM_STMT_BULK_EXECUTE command followed by all the data to the server. The second error message (a foreign key constraint fails) was generated by the server, since the specified data violated one or more constraints. Unfortunately, the server doesn't give any hint about which value in which row caused the error, so the driver can only raise this exception.
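As a workaround, one hedged approach is to catch the bulk error and replay the rows one at a time with execute(), which at least pinpoints the offending row on the client side. A minimal sketch, with hypothetical connection details, table, and data:
import mariadb

conn = mariadb.connect(user="user", password="pw", database="test")  # hypothetical credentials
cur = conn.cursor()

sql = "INSERT INTO child (name, parent_id) VALUES (?, ?)"  # hypothetical table
rows = [("alice", 1), ("bob", 2), ("carol", 999)]          # hypothetical data

try:
    cur.executemany(sql, rows)
except mariadb.Error:
    conn.rollback()
    # The bulk error carries no row detail, so retry row by row
    for i, row in enumerate(rows):
        try:
            cur.execute(sql, row)
        except mariadb.Error as e:
            print(f"row {i} failed: {e}")
            conn.rollback()
            break
else:
    conn.commit()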
I agree that this would be a useful feature and have submitted an issue (MDEV-29945) to the MariaDB bug-tracking system.

Malformed SQL Statement: Expected token 'USING' but found Identifier with value 't' instead

I am trying to merge into a SQL database using the following code in Databricks with PySpark:
query = """
MERGE INTO deltadf t
USING df s
ON s.SLAId_Id = t.SLAId_Id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *
"""
driver_manager = spark._sc._gateway.jvm.java.sql.DriverManager
con = driver_manager.getConnection(url)
stmt = con.createStatement()
stmt.executeUpdate(query)
stmt.close()
But I'm getting the following error:
SQLException: Malformed SQL Statement: Expected token 'USING' but found Identifier with value 't' instead at position 25.
Any thoughts on where it might be going wrong?
I don't know why you're getting this exact error. However, I believe there are a number of issues with what you are trying to do.
Running the query via JDBC makes it run in SQL Server only. Constructs like WHEN MATCHED THEN UPDATE SET * / WHEN NOT MATCHED THEN INSERT * will not work there. Databricks accepts them, but for SQL Server you need to explicitly list the columns to update and the values to insert (reference); a sketch of what that might look like follows.
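For illustration only, and assuming both deltadf and df actually exist as tables in SQL Server: only SLAId_Id comes from the question, and the other column names below are hypothetical.
# Sketch: SQL Server needs explicit column lists; Name and Status are
# hypothetical columns, and SQL Server MERGE must end with a semicolon
query = """
MERGE INTO deltadf t
USING df s
ON s.SLAId_Id = t.SLAId_Id
WHEN MATCHED THEN
  UPDATE SET t.Name = s.Name, t.Status = s.Status
WHEN NOT MATCHED THEN
  INSERT (SLAId_Id, Name, Status)
  VALUES (s.SLAId_Id, s.Name, s.Status);
"""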
Also, do you actually have tables named deltadf and df in SQL Server? I suppose you have a DataFrame or temporary view named df; that will not work. As said, this query executes in SQL Server only. If you want to upload data from a DataFrame, use df.write.format("jdbc").save (reference), as sketched below.
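A minimal sketch of that upload, reusing df and url from the question's code; the target table name is hypothetical:
# Write the DataFrame to a SQL Server staging table over JDBC;
# "dbo.df_staging" is a hypothetical name
(df.write
    .format("jdbc")
    .option("url", url)
    .option("dbtable", "dbo.df_staging")
    .mode("append")
    .save())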
See this Fiddle: if deltadf and df are tables, running this query in SQL Server (any version) will only complain about Incorrect syntax near '*'.
SQLException: Malformed SQL Statement: Expected token 'USING' but found Identifier with value 't' instead at position 25.
If you miss updating a specific field or get the syntax wrong, you will get this error.
I performed the merge operation and it works fine for me without error. Please follow the references below.
Reference:
https://www.youtube.com/watch?v=i5oM2bUyH0o
https://docs.databricks.com/delta/delta-update.html#upsert-into-a-table-using-merge
https://www.sqlshack.com/sql-server-merge-statement-overview-and-examples/

How to have an 'OR' in search query for Github API?

The GitHub API allows us to search for users by different parameters, one of which is location. Running the following query will give all the users living in Pakistan:
curl https://api.github.com/search/users?q=location:pakistan
Now, I would like to get all the users that live either in Pakistan or in India, but it seems that GitHub doesn't define a way to put an OR between Pakistan and India.
I have tried the following queries, but these aren't working:
curl https://api.github.com/search/users?q=location:pakistan&location:india
curl https://api.github.com/search/users?q=location:(pakistan|india)
Your first attempt is close, but doesn't work because location isn't its own HTTP GET argument; the entire string location:pakistan is the value of the q parameter.
When you do ?q=location:pakistan&location:india you are actually submitting something like
q has the value location:pakistan
location:india is a key, but has no value
Instead, join multiple location keys with + or %20:
curl https://api.github.com/search/users?q=location:pakistan+location:india
Now the entire location:pakistan+location:india string is passed as the value to the q key.
A literal space can work too, but then you have to escape it or wrap the arguments in quotes.
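If you're making the call from code rather than curl, a client library handles the encoding for you. A minimal Python sketch with the requests library, which percent-encodes the space inside the q value automatically:
import requests

# The whole qualifier string is a single q value; requests encodes
# the space between the two location qualifiers as %20
resp = requests.get(
    "https://api.github.com/search/users",
    params={"q": "location:pakistan location:india"},
)
print(resp.json().get("total_count"))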

LDAP custom attribute cannot be searched?

I have some special custom attributes in my LDAP setup, including one called "GroupCode". I have a bunch of entries with this attribute that I was able to write to the LDAP database. Let's say I have one entry whose value is "xyz" and another whose value is "wasd". Searching with the filter "(GroupCode=xyz)" or "(GroupCode=wasd)" returns nothing. However, if I change the filter to "(GroupCode=*)", it returns all the entries that have the GroupCode attribute. I have examined the attribute's properties and they look normal; Apache Directory Studio shows it to be of "String" value, so I don't know why it isn't searchable with the filter I provided. My knowledge of LDAP structure is fairly limited, as it is fairly complex. If anyone has any idea, please let me know. Much appreciated. Thanks.
Can you formulate the same search criteria into an ldapsearch command on the command line?
ldapsearch -H ldap://LDAP_SERVER -D LDAP_AUTH_LOGIN -b LDAP_BASE -w PW -x "CRITERIA"
If so, you can also experiment with your criteria:
ldapsearch -H ldap://LDAP_SERVER -D LDAP_AUTH_LOGIN -b LDAP_BASE -w PW -x "(GroupCode=xyz)"
One possible reason for your issue is that you forgot to specify the EQUALITY and SUBSTR properties of your custom attribute.
Here is an example for a custom attribute called sAMAccountName:
attributeTypes: ( 1.2.840.113556.1.4.221
NAME 'sAMAccountName'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
SINGLE-VALUE )
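If it's easier to experiment from Python than from the shell, here is a minimal sketch using the ldap3 library; the server, bind DN, password, and base are placeholders mirroring the ldapsearch command above:
from ldap3 import ALL, Connection, Server

# Placeholders mirror the ldapsearch example above
server = Server("ldap://LDAP_SERVER", get_info=ALL)
conn = Connection(server, "LDAP_AUTH_LOGIN", "PW", auto_bind=True)

# Same filter that returned nothing in the question
conn.search("LDAP_BASE", "(GroupCode=xyz)", attributes=["GroupCode"])
for entry in conn.entries:
    print(entry)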

CouchDB Find One not working

I am a couchDB newbie and am doing the examples in the O'Reilly CouchDB guide.
I have a problem using a view to retrieve a document by key:
curl http://127.0.0.1:5984/basic/_design/example/_view/by_date?key="2009/01/15 15:52:20"
gives the reply:
curl: (52) Empty reply from server
but just retrieving all rows:
curl http://127.0.0.1:5984/basic/_design/example/_view/by_date
gives me 3 rows including the specific row I am looking for:
{"id":"hello-world","key":"2009/01/15 15:52:20","value":"Hello World"}
Why doesn't the key query work?
I am using CouchDB version 0.10.0 on Ubuntu 9.10
CouchDB expects the start_key parameter to be a valid JSON-compatible type, such as "a string" or 12345 or ["an", "array", "with", 5.0, "elements"]. If you check your CouchDB logs you will probably see a 400 (bad client request) entry because your key is either invalid UTF8 or invalid JSON.
You probably have two problems:
The shell is interpreting your quotes, which must actually be sent to CouchDB. Try single-quoting your double-quoted string.
You probably also need to encode your key so that it is a valid URL; specifically, replace the space with %20.
Putting this all together, the following works for me on CouchDB 0.11 on Ubuntu 9.10.
$ curl http://127.0.0.1:5984/blog/_design/docs/_view/by_date?key='"2009/01/30%2018:04:11"'
{"total_rows":1,"offset":0,"rows":[
{"id":"biking","key":"2009/01/30 18:04:11","value":"Biking"}
]}
It worked: I single-quoted the key string and encoded the space character, so the request became:
/by_date?key='"2009/01/30%2015:52:20"'
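For completeness, the same request can be made from Python with the requests library, which sidesteps the shell-quoting problem entirely. The key value must itself be a JSON string, hence the embedded double quotes; the database and view names are from the answer above:
import requests

# The double quotes are part of the key value (it must be JSON);
# requests percent-encodes the space for us
resp = requests.get(
    "http://127.0.0.1:5984/blog/_design/docs/_view/by_date",
    params={"key": '"2009/01/30 18:04:11"'},
)
print(resp.json())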
