I have a kind named 'audit' that has many properties; however, I need to index only certain properties, and in a specific order, since I mostly run this query:
select DISTINCT ON (traceId) * from audit where tenantId='123'
When I try to run this in the GCP console it throws an error:
GQL Query error: Your Datastore does not have the composite index (developer-supplied) required for this query.
I also tried running this from a Node.js application that uses the @google-cloud/datastore package, and Datastore throws this error:
9 FAILED_PRECONDITION: no matching index found. recommended index is:
- kind: audit
  properties:
  - name: tenantId
  - name: traceId
The index.yaml is created using Terraform, and its contents are:
indexes:
- kind: "audit"
  properties:
  - name: "tenantId"
  - name: "traceId"

# AUTOGENERATED
# This index.yaml is automatically updated whenever the Cloud Datastore
# emulator detects that a new type of query is run. If you want to manage the
# index.yaml file manually, remove the "# AUTOGENERATED" marker line above.
# If you want to manage some indexes manually, move them above the marker line.
With this index.yaml file, the local Datastore emulator works as expected.
In the GCP console, I can see the index has been created and is in Serving status.
It seems you have additional quotes (") around your Datastore kind and property names in the index.yaml file. It should look like this:
indexes:
- kind: Cat
  properties:
  - name: name
  - name: age
Please refer to the documentation on the correct syntax.
On the Terraform Google provider side, there is a ~> Warning mentioned in the docs; make sure to follow it to create a compatible database type. Refer here for more details on the compatibility settings.
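For reference, since the index.yaml is generated from Terraform, a minimal sketch of the same composite index declared directly with the provider's google_datastore_index resource could look like this (kind and property names are taken from the question; verify the attribute names against your provider version):

# composite index on tenantId then traceId; the order of the properties blocks matters
resource "google_datastore_index" "audit_by_tenant_and_trace" {
  kind = "audit"

  properties {
    name      = "tenantId"
    direction = "ASCENDING"
  }

  properties {
    name      = "traceId"
    direction = "ASCENDING"
  }
}

Declaring the index as a resource also sidesteps the quoting in index.yaml and lets Terraform wait until the index reaches Serving status.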
I'm trying to create a project with the labeling tool from Azure Form Recognizer. I have successfully deployed the web app, but I'm unable to start a project. I get this error every time I try:
I have tried with several app instances and changing the project name and connection name; none of those worked. The only common factor I have found is that the issue is related to the connection.
As I see it:
1) I can either start a new project or use one on the cloud:
First I tried to create a new project:
I filled the fields with these values:
Display Name: Test-form
Source Connection: <previously created connection>
Folder Path: None
Form Recognizer Service Uri: https://XXX-test.cognitiveservices.azure.com/
API Key: XXXXX
Description: None
And got the error from the question's title:
"Invalid resource name creating a connection to azure storage "
I tried several combinations of names; none of them worked.
Then I tried with the option: "Open a cloud project"
Got the same error instantly, hence I deduce the issue is with the connection settings.
Now, in the connection settings I have this:
At first glance, since the values are accepted and the connection is created, I guess it is correct, but it is the only point of failure I can think of.
Regarding the storage container settings, I added the required CORS configuration, and I have used the container to train models with Form Recognizer, so that part does work.
At this point I'm pretty much stuck, since the error message does not give me many clues about where the error is.
I was facing a similar error today.
You have to add the container name before the "?sv..." part of your SAS URI in the connection settings:
https://****.blob.core.windows.net/trainingdata?sv=2019-10-10..
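In other words (trainingdata is just an example container name), a SAS URI pointing at the account root:

https://****.blob.core.windows.net/?sv=2019-10-10..

triggers the "Invalid resource name" error, while one that includes the container segment:

https://****.blob.core.windows.net/trainingdata?sv=2019-10-10..

is what the connection settings expect.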
Environment
Terraform v0.12.24
+ provider.aws v2.61.0
Running in an alpine container.
Background
I have a basic terraform script running ok, but now I'm extending it and am trying to configure a remote (S3) state.
terraform.tf:
terraform {
  backend "s3" {
    bucket         = "labs"
    key            = "com/company/labs"
    region         = "eu-west-2"
    dynamodb_table = "labs-tf-locks"
    encrypt        = true
  }
}
The bucket exists, and so does the table. I have created them both with terraform and have confirmed through the console.
Problem
When I run terraform init I get:
Error refreshing state: InvalidParameter: 2 validation error(s) found.
- minimum field size of 1, GetObjectInput.Bucket.
- minimum field size of 1, GetObjectInput.Key.
What I've tried
terraform fmt reports no errors and happily reformats my terraform.tf file. I tried moving the stanza into my main.tf too, just in case the terraform.tf file was being ignored for some reason. I got exactly the same results.
I've also tried running this without the alpine container, from an ubuntu ec2 instance in aws, but I get the same results.
I originally had the name of the terraform file in the key. I've removed that (thanks) but it hasn't helped resolve the problem.
Also, I've just tried running this in an older image: hashicorp/terraform:0.12.17 but I get a similar error:
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
I'm guessing that I've done something trivially stupid here, but I can't see what it is.
Solved!!!
I don't understand the problem, but I have a working solution now. I deleted the .terraform directory and reran terraform init. This is ok for me because I don't have an existing state. The insight came from reading the error from the 0.12.17 version of terraform, which complained about not being able to read the workspace.
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
This initially led me to believe there was a problem with an earlier version of tf reading a newer version's configuration. So, I blew away the .terraform directory and it worked with the older tf, so I did it again and it worked with the newer tf too. Obviously, something had gotten itself screwed up in terraform's local storage. I don't know how or why. But, it works for me, so...
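For anyone hitting the same thing, the fix boils down to the following (a sketch; only safe when, as here, there is no state you need to keep, since .terraform holds terraform's locally cached backend configuration and provider plugins rather than your .tf sources):

# discard terraform's local cache and re-initialize the backend
rm -rf .terraform
terraform init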
If you are facing this issue on the app side, there is a chance that you are sending a wrong payload, or that the payload expected by the backend has been updated.
Before, I was doing this:
--> POST .../register
{"email":"FamilyKalwar#gmail.com","user":{"password":"123456#aA","username":"familykalwar"}}
--> END POST (92-byte body)
<-- 500 Internal Server Error .../register (282ms)
{"message":"InvalidParameter: 2 validation error(s) found.\n- minimum field size of 6, SignUpInput.Password.\n- minimum field size of 1, SignUpInput.Username.\n"}
Later I found that the expected payload had been updated to this:
{
  "email": "tanishat1@gmail.com",
  "username": "tanishat1",
  "password": "123456#aA"
}
I removed the "user" data class and updated the payload and it worked!
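For illustration, the corrected call on the app side might look something like this (a sketch; the host is hypothetical, while the /register path and the flat payload shape come from the logs above):

// send the flat payload the backend now expects (no nested "user" object)
const res = await fetch('https://api.example.com/register', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    email: 'tanishat1@gmail.com',
    username: 'tanishat1',
    password: '123456#aA',
  }),
});
console.log(res.status); // expect a 2xx instead of the 500 once the payload matches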
Google App Engine in Node.js on Windows 10
I use Google's Datastore emulator for local development, according to this tutorial.
I use this Node.js tutorial to create and query data in Datastore.
When I use the local emulator everything works fine. When I use the remote Datastore from my local project I get this error message:
Error: 9 FAILED_PRECONDITION: no matching index found.
    at Object.callErrorFromStatus (C:\Users\Code\google-cloud\nodejs-docs-samples\appengine\datastore\node_modules\@grpc\grpc-js\build\src\call.js:30:26)
    at Http2CallStream.call.on (C:\Users\Code\google-cloud\nodejs-docs-samples\appengine\datastore\node_modules\@grpc\grpc-js\build\src\client.js:96:33)
    at Http2CallStream.emit (events.js:203:15)
    at process.nextTick (C:\Users\Code\google-cloud\nodejs-docs-samples\appengine\datastore\node_modules\@grpc\grpc-js\build\src\call-stream.js:97:22)
    at process._tickCallback (internal/process/next_tick.js:61:11)
An index is supposed to be created while running the emulator, in ~gcloud\emulators\datastore\WEB-INF\index.yaml. However, the file is still empty after running the emulator and running the query from the project:
app.js
/**
 * Retrieve the latest 10 visit records from the database.
 */
const getVisits = () => {
  const query = datastore
    .createQuery('visit')
    .order('timestamp', {descending: true})
    .limit(10);
  return datastore.runQuery(query);
};
index.yaml
indexes:
# AUTOGENERATED
# This index.yaml is automatically updated whenever the Cloud Datastore
# emulator detects that a new type of query is run. If you want to manage the
# index.yaml file manually, remove the "# AUTOGENERATED" marker line above.
# If you want to manage some indexes manually, move them above the marker line.
I understand that the remote database has no index, which causes the error above. I have run the query from the project several times, but the index is not updated.
I have tried uploading the index.yaml, but because it contains nothing apart from comments, the error continues.
How can I generate the index so the remote Datastore works without error?
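For reference, a hand-written index.yaml entry matching the query in app.js above might look like this (a sketch; the desc direction mirrors the descending sort on timestamp):

indexes:
- kind: visit
  properties:
  - name: timestamp
    direction: desc

It can then be deployed to the remote Datastore with the Cloud SDK, for example gcloud datastore indexes create index.yaml, after which the index appears in the GCP console and serves once it finishes building.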
I have configured coreDNS to point to an external DNS server for all *.mydomain.com requests with this YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  test.server: |
    mydomain.com:53 {
        errors
        cache 30
        forward . 10.0.0.3 10.0.0.4
    }
Now, what I couldn't find is what the test.server part is for. I found that the .server suffix is important, but not how to properly name this key, let alone what this part is called.
ConfigMaps use key-value pairs to organize the data contained in them. Here is a good example of this format for the data section of a ConfigMap.
For this specifically, it looks like coreDNS in AKS will identify the proxy-related configuration as long as the key matches *.server.
So in your case, the data property called test.server contains configuration information regarding mydomain.com:53 among other nested configuration data. This format is specific to coreDNS configuration on AKS.
test.server is just the key from your ConfigMap containing the configuration properties for the server.
As the second example (on the AKS docs page you linked) says:
test.server: | # you may select any name here, but it must end with the .server file extension
Meaning Azure Kubernetes Service will probably search for keys ending in .server and use them accordingly. Naming could be anything from external.server, dns.server or coredns.server to just keeping test.server.
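As a sketch, renaming the key in the ConfigMap from the question should therefore behave identically; only the key changes, the server block stays the same:

data:
  mydomain.server: |
    mydomain.com:53 {
        errors
        cache 30
        forward . 10.0.0.3 10.0.0.4
    }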
We are hosting a NodeJS application on Compute Engine, which connects to Google Datastore using gcloud-node.
Simple queries are running fine, but complex queries with multiple selects are giving a "412: precondition failed" error. More details at:
Multiple select in google datastore query throwing ApiError: Precondition Failed error in node
I understand this error is due to the fact that I have not configured datastore-indexes.xml. Being a newbie in the GCP world, could you please help me with where I can define my datastore-indexes.xml file inside my project?
You can also use the gcloud preview app tool with an index.yaml file to specify an indexing policy.
For example, if you need an index on user and timestamp on LoginTimes:
indexes:
- kind: LoginTimes
  properties:
  - name: user
  - name: timestamp
    direction: desc
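The policy can then be deployed from the directory containing index.yaml, for example with:

gcloud preview app deploy index.yaml

(On newer versions of the Cloud SDK, where the preview command group has been retired, gcloud datastore indexes create index.yaml is the equivalent.)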