How to search by arbitrary fields using field selector with kubectl?

The supported fields are not listed in the docs and I cannot find them anywhere. With some trial and experimentation I noticed the following:
This works nicely and finds some pods:
kubectl get pods --field-selector=spec.restartPolicy=Never
But this produces an error:
kubectl get pods --field-selector=spec.serviceAccount=default
No resources found.
Error from server (BadRequest): Unable to find {"" "v1" "pods"} that match label selector "", field selector "spec.serviceAccount=default": field label not supported: spec.serviceAccount
So how is this decided? I know I can filter with JSONPath, but that is client-side filtering AFAIK.

You can select the service account using the following query:
kubectl get pods --field-selector=spec.serviceAccountName="default"
The --field-selector flag currently supports only equality-based comparisons (=, ==, !=), and even then only a very limited set of fields can be used to select pods. The following fields are supported by --field-selector:
metadata.name
metadata.namespace
spec.nodeName
spec.restartPolicy
spec.schedulerName
spec.serviceAccountName
status.phase
status.podIP
status.nominatedNodeName
As you already know, you need to rely on JSONPath to select any field other than the ones above.
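For example, a client-side JSONPath filter on the service account could look like this (a sketch: kubectl's JSONPath supports filter expressions, and the field name here is spec.serviceAccountName):
kubectl get pods -o jsonpath='{.items[?(@.spec.serviceAccountName=="default")].metadata.name}'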
You can visit the following link to find out more:
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/v1/conversion.go#L160-L167
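If you are listing pods programmatically, the same server-side field selectors are available through the API. A minimal client-go sketch (assumptions: a reachable cluster, the default kubeconfig location, and the namespace and selector value are just examples):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The selector is evaluated server-side, so only the supported
	// fields listed above will work here as well.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.serviceAccountName=default",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name)
	}
}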

Not getting results from IHP DataSync despite setting Row Level Policy

I want to use DataSync on my current application, using IHP 0.16. I believe I have followed all the installation steps in FrontController and Routes.
I have a characters table with a user_id column connected to the users table. I have set the policy on the characters table resulting in this generated SQL:
CREATE POLICY "Users can manage their characters" ON characters USING (user_id = ihp_user_id()) WITH CHECK (user_id = ihp_user_id());
ALTER TABLE characters ENABLE ROW LEVEL SECURITY;
Trying to run this in the JavaScript console:
await query("characters").fetch()
I get an error in the JavaScript console output, and this error in the IHP output:
Query (2.119753ms): "SELECT relrowsecurity FROM pg_class WHERE oid = ?::regclass" ["characters"]
Query (0.111442ms): "SET LOCAL ROLE ?" [Identifier {fromIdentifier = "ihp_authenticated"}]
Query (0.130888ms): "SET LOCAL rls.ihp_user_id = ?" Only {fromOnly = Just 0d7b46b1-bcb4-46a2-bf77-ad27dace8416}
FormatError {fmtMessage = "1 single '?' characters, but 3 parameters", fmtQuery = "SELECT ? FROM ??", fmtParams = ["*","characters",""]}
This seems to be a different error from the row level security error covered in the DataSync tutorial in the IHP docs. Any idea what causes this error?
This is a known bug in IHP v0.16.0. It's already fixed in master.
It's best to use IHP DataSync with the version mentioned in the introduction text at https://ihp.digitallyinduced.com/Guide/realtime-spas.html :)
By the way, there's a workaround for the bug if you don't want to upgrade: you always need to specify an ordering, like await query("characters").orderBy('createdAt').fetch()

In K8s operators, how do I link the unique metadata.name in the spec of a CRD to a unique object ID that my server generates?

I am developing a new Operator to manage CRDs for my business logic objects. My business objects are stored in MongoDB, and thus I need their BSON ObjectId (a 12-byte identifier) to make subsequent changes to an object.
The question is: how do I link the CR that the operator creates to this upstream object? Where can I store this unique BSON ID the K8s way, so that I can use it for further lookups?
For example, here is a custom resource for one of my upstream objects:
apiVersion: my.custom.object/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  description: my first api
  use_keyless: true
  protocol: http
When I kubectl apply this manifest, the resource gets created:
kubectl apply -f "the_above_yaml.yaml"
ApiDefinition created!
Then my operator picks it up in the reconcile loop and creates this object on my server. The server generates a BSON ID, and I need to use this BSON ID for further lookups.
How can I store the server-specific BSON ID, so that developers just need to use the unique metadata name in the spec, while under the hood my operator takes care of linking the two?
If every custom resource object correlates to one MongoDB document, you could store the document ID as a string in the status field of your custom resource.
// +kubebuilder:subresource:status
type MyOwnCR struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyOwnCRSpec   `json:"spec,omitempty"`
	Status MyOwnCRStatus `json:"status,omitempty"`
}

// MyOwnCRStatus defines the observed state of MyOwnCR
type MyOwnCRStatus struct {
	//+optional
	DocumentID string `json:"documentID,omitempty"`
}
Please note the //+optional marker and the omitempty tag, which mark this status field as optional. This way, a K8s API user can create the resource without specifying a document ID. Your operator can then talk to your MongoDB on reconcile requests and update/patch status.documentID once the ID is known.
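A minimal sketch of that reconcile flow with controller-runtime (assumptions: createUpstreamObject is a hypothetical helper standing in for your server call, and the reconciler embeds client.Client as kubebuilder scaffolds it):

// Assumed imports: context, ctrl "sigs.k8s.io/controller-runtime",
// client "sigs.k8s.io/controller-runtime/pkg/client".
func (r *MyOwnCRReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cr MyOwnCR
	if err := r.Get(ctx, req.NamespacedName, &cr); err != nil {
		// The resource may have been deleted; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// An empty DocumentID means this CR has not been linked to a
	// MongoDB document yet, so create the upstream object exactly once.
	if cr.Status.DocumentID == "" {
		id, err := r.createUpstreamObject(ctx, &cr) // hypothetical: calls your server, returns the BSON ID
		if err != nil {
			return ctrl.Result{}, err
		}
		cr.Status.DocumentID = id
		// Persist the ID on the status subresource for later lookups.
		if err := r.Status().Update(ctx, &cr); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}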

[13.5.2] Create DataStore through ssoadm.jsp -> Attribute doesn't match with 'service schema'

I want to create a DataStore through ssoadm.jsp, because I use the endpoint URLs in order to automate the configuration process.
[localhost]/ssoadm.jsp?cmd=create-datastore
I put:
domain name (previously created with default configuration): myDomain
data store name: myDataStore
type of DataStore: LDAPv3
Attribut values: LDAPv3=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
Then I got something like: Attribute name "LDAPv3" doesn't match with service schema. What am I supposed to put in the "Attribute values" field? An example is given:
"sunIdRepoClass=com.sun.identity.idm.plugins.files.FilesRepo"
PS: I don't want to create the datastore from [Localhost]/realm/IDRepoSelectType because there is a jato.pageSession value that I can't get automatically.
PS2: It is my first time asking a question on Stack Overflow; sorry if my question didn't meet expectations. I tried my best.
ssoadm.jsp?cmd=list-datastore-types
shows the list of user data store types
Every user data store type has specific attributes to be set. Unfortunately those are not explicitly documented. The service attributes are defined in the related service definition XML template, which is loaded (after potential tag swapping) into the OpenAM configuration data store during initial configuration. For the user data stores you can find them in OPENAM_CONFIGURATION_DIRECTORY/template/xml/idRepoService.xml
E.g. for user data store type LDAPv3 the following service attributes are defined:
sunIdRepoClass
sunIdRepoAttributeMapping
sunIdRepoSupportedOperations
sun-idrepo-ldapv3-ldapv3Generic
sun-idrepo-ldapv3-config-ldap-server
sun-idrepo-ldapv3-config-authid
sun-idrepo-ldapv3-config-authpw
openam-idrepo-ldapv3-heartbeat-interval
openam-idrepo-ldapv3-heartbeat-timeunit
sun-idrepo-ldapv3-config-organization_name
sun-idrepo-ldapv3-config-connection-mode
sun-idrepo-ldapv3-config-connection_pool_min_size
sun-idrepo-ldapv3-config-connection_pool_max_size
sun-idrepo-ldapv3-config-max-result
sun-idrepo-ldapv3-config-time-limit
sun-idrepo-ldapv3-config-search-scope
sun-idrepo-ldapv3-config-users-search-attribute
sun-idrepo-ldapv3-config-users-search-filter
sun-idrepo-ldapv3-config-user-objectclass
sun-idrepo-ldapv3-config-user-attributes
sun-idrepo-ldapv3-config-createuser-attr-mapping
sun-idrepo-ldapv3-config-isactive
sun-idrepo-ldapv3-config-active
sun-idrepo-ldapv3-config-inactive
sun-idrepo-ldapv3-config-groups-search-attribute
sun-idrepo-ldapv3-config-groups-search-filter
sun-idrepo-ldapv3-config-group-container-name
sun-idrepo-ldapv3-config-group-container-value
sun-idrepo-ldapv3-config-group-objectclass
sun-idrepo-ldapv3-config-group-attributes
sun-idrepo-ldapv3-config-memberof
sun-idrepo-ldapv3-config-uniquemember
sun-idrepo-ldapv3-config-memberurl
sun-idrepo-ldapv3-config-dftgroupmember
sun-idrepo-ldapv3-config-roles-search-attribute
sun-idrepo-ldapv3-config-roles-search-filter
sun-idrepo-ldapv3-config-role-search-scope
sun-idrepo-ldapv3-config-role-objectclass
sun-idrepo-ldapv3-config-filterrole-objectclass
sun-idrepo-ldapv3-config-filterrole-attributes
sun-idrepo-ldapv3-config-nsrole
sun-idrepo-ldapv3-config-nsroledn
sun-idrepo-ldapv3-config-nsrolefilter
sun-idrepo-ldapv3-config-people-container-name
sun-idrepo-ldapv3-config-people-container-value
sun-idrepo-ldapv3-config-auth-naming-attr
sun-idrepo-ldapv3-config-psearchbase
sun-idrepo-ldapv3-config-psearch-filter
sun-idrepo-ldapv3-config-psearch-scope
com.iplanet.am.ldap.connection.delay.between.retries
sun-idrepo-ldapv3-config-service-attributes
sun-idrepo-ldapv3-dncache-enabled
sun-idrepo-ldapv3-dncache-size
openam-idrepo-ldapv3-behera-support-enabled
It might be best to create a user data store instance via the console and then use ssoadm.jsp?cmd=show-datastore to list the properties. You would get a long list of attributes ... too much to show here.
When you create the data store, make sure you specify the password for the bind DN using the property
sun-idrepo-ldapv3-config-authpw=PASSWORD
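Putting that together, a create-datastore call could then pass attribute values as name=value pairs, one per line (a sketch: the sunIdRepoClass line is the piece the original attempt was missing, and the host, bind DN, and base DN values below are placeholders for your environment):
[localhost]/ssoadm.jsp?cmd=create-datastore
realm: /myDomain
name: myDataStore
type: LDAPv3
Attribute values:
sunIdRepoClass=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
sun-idrepo-ldapv3-config-ldap-server=ldap.example.com:389
sun-idrepo-ldapv3-config-authid=cn=Directory Manager
sun-idrepo-ldapv3-config-authpw=PASSWORD
sun-idrepo-ldapv3-config-organization_name=dc=example,dc=com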

Query Google Cloud Datastore to retrieve matching results

I am using Google Cloud Datastore to save my application data. I have to add a query to get all results matching Name, Brand, or SKU.
Querying with one of the fields returns records, but using all the fields together returns an error.
Query:
const term = "My Red";
const q = gstore.createQuery(req.params.orgId, "Variant")
    .filter('brand', '=', term)
    .filter('sku', '=', term)
    .limit(10);
Error:
{"msec":435.96913800016046,"error":"no matching index found.
recommended index is:- kind: Variant properties: -
name: brand - name:
sku","data":{"code":412,"metadata":{"_internal_repr":{}},"isBoom":true,"isServer":true,"data":null,"output":{"statusCode":500,"payload":{"statusCode":500,"error":"Internal
Server Error","message":"An internal server error
occurred"},"headers":{}}}} Debug: internal, error
Also, I want to perform an OR operation to get matching results, as the query above applies AND semantics.
Please help me find the correct path to achieve the desired result.
Thanks in advance, and let me know if something is not clear.
The error indicates that the composite index required by the respective query is not in Serving state.
That means it's either not created/deployed or it was recently deployed and is still being built.
Composite indexes must be specifically created and deployed in your app.
If you didn't create it, you need to do so. The error message indicates the index configuration the query requires. If you're using the development server it might create the index automatically, but you still need to deploy it.
See Indexes docs for more details.
If you recently deployed the composite index, please note that it can take a significant amount of time until the index is built, depending on how many entities of that kind already exist in the Datastore. You can check the status of the index build in the developer console, on the Indexes page.
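For reference, the index recommended by the error message would look like this in an index.yaml (a sketch assuming the classic index.yaml workflow; it can be deployed with gcloud datastore indexes create index.yaml):
indexes:
- kind: Variant
  properties:
  - name: brand
  - name: sku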

Spark Error message: Please consider a new data model based on the query pattern instead of using ALLOW FILTERING

My DSE Opscenter sends me this message:
Please consider a new data model based on the query pattern instead of using ALLOW FILTERING.
After changing my Spark code I already removed the column value below from my query, but the message below still keeps popping up and I don't know why. Also, the message only occurs in my OpsCenter, not on the actual table. Thanks for your help.
Query:
select * from dse_perf.node_slow_log
Column value / error message:
SELECT "XXX", "XXX", "XXX", "likes", "XXX" FROM "XXX"."axes" WHERE token("article") > ? AND token("article") <= ? ALLOW FILTERING
Please consider a new data model based on the query pattern instead of using ALLOW FILTERING.
OpsCenter is warning you that your request can be pretty expensive and suggesting you review the use case.
"Allow Filtering" can be pretty expensive as described here:
http://www.datastax.com/dev/blog/allow-filtering-explained-2
It may be that your use falls into the OK category, in which case you can ignore the warning. If not, it may be worth looking at other ways of modelling your data that allow you to query it in a more efficient manner.
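As an illustration of such remodelling (a hypothetical sketch with made-up names, not tied to the table above): instead of filtering an arbitrary column with ALLOW FILTERING, you can maintain a query-specific table whose partition key is the column you filter on:
-- Query-first design: one table per access pattern.
CREATE TABLE ks.articles_by_author (
    author text,
    article_id uuid,
    likes int,
    PRIMARY KEY (author, article_id)
);

-- Hits a single partition; no ALLOW FILTERING needed.
SELECT * FROM ks.articles_by_author WHERE author = 'alice';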
