Statement not supported: CreateIndexStatement - google-cloud-spanner

When creating an index in Google Cloud Spanner, I get an error when I try to execute the most basic form of a CREATE INDEX statement.
I am using the Cloud Console under the Query tab in Cloud Spanner.
The error I am getting is: Statement not supported: CreateIndexStatement
The query I am executing is: CREATE INDEX SingersByFirstLastName ON Singers(FirstName, LastName)
Any idea what I am missing?

The Query tab on the Cloud Console does not support executing DDL statements such as CREATE INDEX. The alternatives would be to:
Click on the Singers table from the Overview page. You can then add an index through the user interface.
Click on the link Add Table from the Overview page. That will open a page that allows you to enter DDL statements. Although the name suggests that you can only add tables, it also accepts CREATE INDEX statements.
Use a tool like DBeaver to interact with Cloud Spanner; its main SQL console lets you enter queries, DML statements, and DDL statements.
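If you prefer to script the schema change, DDL can also be executed through the client libraries (or the gcloud CLI). A minimal sketch with the Python client, assuming google-cloud-spanner is installed and credentials are configured; the instance and database IDs are placeholders for your own resources:

```python
def singers_index_ddl() -> list[str]:
    # Spanner DDL must go through the DDL API, not the query/DML path.
    return ["CREATE INDEX SingersByFirstLastName ON Singers(FirstName, LastName)"]

def create_index(instance_id: str, database_id: str) -> None:
    # Requires `pip install google-cloud-spanner` and configured credentials.
    from google.cloud import spanner  # imported lazily so the helper above stays dependency-free
    client = spanner.Client()
    database = client.instance(instance_id).database(database_id)
    operation = database.update_ddl(singers_index_ddl())  # long-running operation
    operation.result(timeout=300)  # block until the schema change completes
```

Schema changes in Spanner are long-running operations, which is why `update_ddl` returns an operation handle rather than a result set.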

Related

How to Delete Table from Databricks with Databricks Data Explorer

I have just spun up my cluster and I was about to delete a table, like I often do, from within the Data workspace. However, it seems that Databricks has changed its interface and I'm now unsure how to delete a table without writing code.
Can someone show me how to delete a table from within Data Explorer, please? For example, I would like to delete the table trips from within Data Explorer.
I would like to return to the option where I can delete a table from the Data Explorer tab as shown in the image
Data Explorer is a read-only tool. You can explore data, but for any action (DML, DDL) you need to use a cluster/warehouse and run the query there. The only exception is that you can grant ownership/permissions with it.
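For example, to delete the table from the question, a one-line statement run on any attached cluster or SQL warehouse does the job (a sketch, using the table name trips from the question):

```sql
-- Run in a notebook attached to a cluster, or in the SQL editor on a SQL warehouse
DROP TABLE IF EXISTS trips;
```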

NetSuite: How can I get a list of custom fields using suitesql or REST?

Using the REST suiteql query endpoint or the records endpoints, is there a way to query NetSuite to get a list of custom fields?
You can use SuiteQL to list custom fields.
For example, to list entity custom fields:
{
"q": "SELECT * FROM CustomField where fieldtype = 'ENTITY'"
}
Use Setup -> Records Catalog to see the records you can query. It includes custom records and custom lists. For example, if you have a custom list called customlist_year, the Records Catalog will include it and you can run a query like SELECT * FROM customlist_year with SuiteQL.
One caveat: SuiteQL is still in development, so the field names and results might change with a NetSuite release. Make sure things get tested in a Release Preview account so you are ready for any changes when they hit your production account.
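The SuiteQL request body shown above is sent as a POST to the REST query endpoint. A sketch of building that request in Python; the account host is a placeholder, and the OAuth 1.0 signing NetSuite requires is omitted:

```python
import json

# Replace <account> with your NetSuite account ID.
SUITEQL_URL = "https://<account>.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql"

def suiteql_payload(query: str) -> str:
    # The endpoint expects a JSON body with a single "q" key, as shown above.
    return json.dumps({"q": query})

payload = suiteql_payload("SELECT * FROM CustomField WHERE fieldtype = 'ENTITY'")
# POST `payload` to SUITEQL_URL with Content-Type: application/json, the header
# "Prefer: transient", and OAuth 1.0 credentials (signing omitted here).
```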
Using REST web services, you can also fetch the schemas of the record objects, which include their custom fields.
Any specific reason to use SuiteQL? With a normal RESTlet you can load a dummy record and read all field names starting with "cust".

How to execute "group by" queries in Cosmos DB?

I need to be able to execute the following query or something similar:
SELECT count(1) AS userCount, f.gh
FROM f
WHERE f.location.coordinates[1] < <MAX_LAT>
AND f.location.coordinates[1] > <MIN_LAT>
AND f.location.coordinates[0] > <MIN_LNG>
AND f.location.coordinates[0] < <MAX_LNG>
GROUP BY f.gh
I have a collection that stores users' coordinates on a map. When zoomed in, I want to display individual users' locations, but when zoomed out, I want to display the number of users in a location, grouped by their 5-character geohash. This query is exactly what I want; however, you cannot run GROUP BY queries over the REST API, as mentioned in the docs:
Queries that cannot be served by gateway
It doesn't offer any alternatives to this, and I'm still pretty new to Cosmos. How can I successfully run this query?
The issue ended up being that I was trying to run the GROUP BY through Azure Functions bindings, and not the actual .NET SDK NuGet package. Thanks for the help.
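For reference, GROUP BY runs fine through the SDKs. A sketch using the azure-cosmos Python SDK for illustration (the answer above used the .NET SDK); the container setup and bounding-box values are placeholders:

```python
def geohash_count_query(min_lat: float, max_lat: float,
                        min_lng: float, max_lng: float):
    """Parameterized form of the GROUP BY query from the question."""
    query = (
        "SELECT COUNT(1) AS userCount, f.gh FROM f "
        "WHERE f.location.coordinates[1] > @minLat "
        "AND f.location.coordinates[1] < @maxLat "
        "AND f.location.coordinates[0] > @minLng "
        "AND f.location.coordinates[0] < @maxLng "
        "GROUP BY f.gh"
    )
    params = [
        {"name": "@minLat", "value": min_lat},
        {"name": "@maxLat", "value": max_lat},
        {"name": "@minLng", "value": min_lng},
        {"name": "@maxLng", "value": max_lng},
    ]
    return query, params

def count_users_by_geohash(container, bounds):
    # container: a ContainerProxy from azure-cosmos (pip install azure-cosmos);
    # the import lives elsewhere so the query builder above has no dependencies.
    query, params = geohash_count_query(*bounds)
    return list(container.query_items(query=query, parameters=params,
                                      enable_cross_partition_query=True))
```

Parameterizing the bounds avoids string-pasting the MIN/MAX values into the query text.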

Query from Kusto to PowerBI

There are two different experiences I have seen while using "Query to PowerBI" from the Tools menu in Kusto Explorer.
I am getting the first one, but want to use the second one (the query with additional details/options). How do I get it in the second format?
You can control the behavior of the PowerBI query using Tools -> Options -> Tools -> PowerBI Export To Native Connector.

SQLite database returning two different values for the same column

I'm currently making a NodeJS application and I'm using SQLite as the backend database. I've run into a problem when trying to soft-delete entries in a table. When I use the application to change the 'deleted' attribute, the changes appear in the SQLite CLI, but the application still displays each record as not deleted. These are the steps I use to test the application and database:
1. In the SQLite CLI, call the createDB.sql script to delete all tables and set them up again.
2. Call the populateDB.sql script to input test data into each of the tables.
3. Check in the SQLite CLI that the records are correct (SELECT id, deleted FROM table1;).
4. Check in the application console that the records are correct.
5. In the application, change the deleted attribute for a single entry.
6. Output the entry to the console: it shows the entry as not deleted.
7. In the application, change the deleted attribute for all entries.
8. Output the entries to the console: they still show as not deleted.
9. Check in the SQLite CLI that the records are correct: the output shows the deleted attribute has changed for all records.
10. Output the deleted attribute to the application console: it still shows all entries as not deleted.
These are the steps I have taken to try to resolve the issue:
The data type for the deleted field is BOOLEAN. SQLite doesn't have a specific boolean type, but stores it as 1 or 0. I have tried using 1 and 0, 'true' and 'false', and even 'yes' and 'no', with no change in the behaviour.
I have tried this using both relative and absolute file paths for the database.
I added time delays to the application in case there is a delay updating the database.
I looked into file locking and learned that SQLite prevents two processes from writing to a file concurrently. Makes sense. I killed my CLI process and tried to update the deleted attribute from the application only, making sure it was the only thing connected to the database, but got the same result.
After all this testing, I believe the application is writing to the actual database file but is reading from a cache. Is this an accurate conclusion? If so, how do I tell the application to refresh its cache after executing an update query?
The application was writing to the database file, and reading from the same database, not a cache. However my query was set up to SELECT * FROM table1 LEFT OUTER JOIN table2 LEFT OUTER JOIN table3 LEFT OUTER JOIN table4. table1, table2 and table3 all have a column called "deleted". I was updating table1.deleted but reading table2.deleted. I've amended the query to only select the columns I need from now on!
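The collision is easy to reproduce with a minimal schema (a sketch; the table and column names are simplified from the question):

```python
import sqlite3

# Two joined tables each have a "deleted" column, so SELECT * returns two
# columns with the same name, and a row accessor that looks up by name may
# silently pick the wrong one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0);
    CREATE TABLE table2 (id INTEGER PRIMARY KEY, t1_id INTEGER, deleted INTEGER DEFAULT 0);
    INSERT INTO table1 (id) VALUES (1);
    INSERT INTO table2 (id, t1_id) VALUES (1, 1);
""")
conn.execute("UPDATE table1 SET deleted = 1 WHERE id = 1")  # soft-delete in table1

cur = conn.execute(
    "SELECT * FROM table1 LEFT OUTER JOIN table2 ON table2.t1_id = table1.id"
)
cols = [d[0] for d in cur.description]
# cols contains "deleted" twice: once from table1, once from table2

# The fix: select only the columns you need, fully qualified.
row = conn.execute(
    "SELECT table1.deleted AS t1_deleted FROM table1 "
    "LEFT OUTER JOIN table2 ON table2.t1_id = table1.id"
).fetchone()
# row[0] is unambiguously table1's deleted flag (1 after the update)
```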
