I am unable to claim a cStor volume using a custom cStor storage pool - openebs

I was creating a manual cStor pool for the disks attached to my nodes, but I am unable to claim a volume using the custom storage pool.
The cStor storage pools are running in a healthy state:
gem-cstor-disk-9wzj-6c9c8f75c5-mq6gp 2/2 Running 0 154m
gem-cstor-disk-iru2-68c85445cf-pqg6b 2/2 Running 0 132m
gem-cstor-disk-m3bx-6fcddc7dcd-f9dg2 2/2 Running 0 154m
From the maya-apiserver logs:
2019/01/16 12:47:17.907038 [ERR] http: Request GET /latest/volumes/pvc-b8dacfcc-198c-11e9-9141-503eaa028845, error: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
2019/01/16 12:47:17.907070 [DEBUG] http: Request /latest/volumes/pvc-b8dacfcc-198c-11e9-9141-503eaa028845 (596.053376ms)
2019/01/16 12:47:43.536149 [DEBUG] http: Request /latest/volumes/pvc-b9a40023-198c-11e9-9141-503eaa028845 (GET)
I0116 12:47:43.536184 8 volume_endpoint_v1alpha1.go:52] cas template based volume request was received: method 'GET'
I0116 12:47:43.536206 8 volume_endpoint_v1alpha1.go:137] cas template based volume read request was received
E0116 12:47:44.705380 8 volume_endpoint_v1alpha1.go:175] failed to read cas template based volume: error 'error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context'
2019/01/16 12:47:44.705424 [ERR] http: Request GET /latest/volumes/pvc-b9a40023-198c-11e9-9141-503eaa028845, error: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
2019/01/16 12:47:44.705458 [DEBUG] http: Request /latest/volumes/pvc-b9a40023-198c-11e9-9141-503eaa028845 (1.169318391s)

Going through the PVC and SC YAMLs, I found that the StorageClass YAML was not correct: it contained an extra space. I removed it and re-applied, and everything went fine.
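For reference, a minimal cStor StorageClass of that OpenEBS generation is sketched below; the StoragePoolClaim name gem-cstor-disk and the replica count are assumptions inferred from the pool pod names above. Note that the cas.openebs.io/config annotation value is itself a YAML document, so a misplaced space inside it can produce exactly the "error converting YAML to JSON" message seen in the maya-apiserver logs.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      # indentation inside this block must be consistent; an extra space breaks the YAML-to-JSON conversion
      - name: StoragePoolClaim
        value: "gem-cstor-disk"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi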

Related

Spark Client Pod in Kubernetes Getting 401 Error After One Hour

Kubernetes Version: 1.21
Spark Version: 3.0.0
I am using a container in a Kubernetes pod (client pod) to invoke Spark Submit, which then starts a Driver pod. The client pod that ran the Spark Submit then watches the Driver pod via LoggingPodStatusWatcherImpl. After approximately 1 hour, the client pod gets a 401 error:
22/11/03 13:05:44 WARN WatchConnectionManager: Exec Failure: HTTP 401, Status: 401 - Unauthorized
java.net.ProtocolException: Expected HTTP 101 response but was '401 Unauthorized'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
22/11/03 13:05:46 INFO LoggingPodStatusWatcherImpl: Application status for spark-blahblahblah (phase: Running)
I think Spark on Kubernetes usually looks in /var/run/secrets/kubernetes.io/serviceaccount/token so I would get the warning below when starting the client pod.
22/11/03 13:13:13 WARN Config: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
However, since I provide another OAuth token file via the conf below in the Spark Submit command, the client pod was able to connect to the Kubernetes API and start the Driver pod:
--conf spark.kubernetes.authenticate.submission.oauthTokenFile=/mytokendir/token
The token is provided to the client pod via a projected volume (new in Kubernetes versions 1.20+), and the token expiration duration can be specified in the YAML manifest as shown below.
See this doc for reference on how this is implemented:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-bound-service-account-tokens
spec:
  serviceAccountName: my-serviceaccount
  volumes:
  - name: token-vol
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 7200
          path: token
  containers:
  - name: my-container
    image: some-image
    volumeMounts:
    - name: token-vol
      mountPath: /mytokendir
I then exec'd into the client pod to get the JWT token in /mytokendir and decoded it.
It showed the token as valid for 2 hours; however, coming back to the original question, my client pod still gets a 401 error after 1 hour.
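For anyone repeating the expiry check, a minimal sketch of decoding the token's exp claim is shown below (it assumes python3 is available in the client pod; the token path matches the volumeMount above).

# minimal sketch: decode the exp claim of the projected service account token
# assumes python3 in the client pod; the path matches the volumeMount above
import base64, json, datetime

with open("/mytokendir/token") as f:
    payload = f.read().split(".")[1]             # JWT payload segment
payload += "=" * (-len(payload) % 4)             # restore base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload))
print(datetime.datetime.utcfromtimestamp(claims["exp"]))  # token expiry (UTC)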
Sometimes I would get this error:
22/11/03 14:10:57 INFO LoggingPodStatusWatcherImpl: Application my-application with submission ID my-namespace:my-driver finished
22/11/03 14:10:57 INFO ShutdownHookManager: Shutdown hook called
22/11/03 14:10:57 INFO ShutdownHookManager: Deleting directory /tmp/spark-blahblah
The connection to the server localhost:8080 was refused - did you specify the right host or port?

K8s Error: ImagePullBackOff || read: connection refused

Can you please assist: when deploying, we are getting ImagePullBackOff for our pods.
Running kubectl get pod <pod-name> -n <namespace> -o yaml, I get the status below.
containerStatuses:
- image: mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644
  imageID: ""
  lastState: {}
  name: dmd-base
  ready: false
  restartCount: 0
  started: false
  state:
    waiting:
      message: Back-off pulling image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644"
      reason: ImagePullBackOff
hostIP: x.x.x.53
phase: Pending
podIP: x.x.x.237
and running kubectl describe pod <pod-name> -n <namespace>, I get the event information below:
Normal Scheduled 85m default-scheduler Successfully assigned dmd-int/app-app-base-5b4b75756c-lrcp6 to aks-agentpool-35064155-vmss00000a
Warning Failed 85m kubelet Failed to pull image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
[rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/commpany/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.azurecr.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.azurecr.io on [::1]:53: read udp [::1]:56109->[::1]:53: read: connection refused,
rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.io on [::1]:53: read udp [::1]:60759->[::1]:53: read: connection refused]
From the describe output I can see the issue is connectivity-related, but I can't tell where the problem lies. We run our apps in a Kubernetes cluster on Azure.
If anyone has come across this issue, please assist: the application has been running successfully throughout the past months, and we only hit this issue this morning.
There is a known Azure outage across multiple regions today: a DNS issue that also affects image pulls.
https://status.azure.com/en-us/status
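Since the errors show the registry name being resolved against [::1]:53, a quick way to confirm DNS is the culprit is to resolve the registry name from inside the cluster; a hedged sketch (busybox as the throwaway debug image is an assumption):

kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup mycontainer-registry.azurecr.io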

Startup IAM Services failed

C:\domino-iam-service>npm start
> domino-iam-service#2.2.0 start
> cross-env NODE_ENV=production node iam-server.js
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
[11:52:41][info][master][master]: IAM version: 2.2.0
Start to unlock config:
? Enter current IAM server password: ********
Config is unlocked.
[11:53:43][info][master][master]: Starts as cluster mode.
[11:53:43][info][stats][master]: IAM StatsClient enabled: false
[11:53:43][info][cluster][master]: Worker 1 is started
[11:53:43][info][cluster][master]: Worker 2 is started
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
[11:53:49][info][worker][worker-1]: Worker 1 starts to provide service, which process id is: 3752
[11:53:49][info][initServices][worker-1]: Start IAM service on allAddress:9443
[11:53:49][info][worker][worker-2]: Worker 2 starts to provide service, which process id is: 2772
[11:53:49][info][stats][worker-1]: IAM StatsClient enabled: false
[11:53:49][info][initServices][worker-2]: Start IAM service on allAddress:9443
[11:53:50][warn][DBConnector][worker-1]: dbConfig.dominoConfig.credential.CLIENT_KEY_PASSPHRASE setting is empty, it is NOT SECURE.
[11:53:50][info][stats][worker-2]: IAM StatsClient enabled: false
[11:53:50][warn][DBConnector][worker-1]: Please use openssl tool to add passphrase for your client key file.
[11:53:50][warn][DBConnector][worker-2]: dbConfig.dominoConfig.credential.CLIENT_KEY_PASSPHRASE setting is empty, it is NOT SECURE.
[11:53:50][warn][DBConnector][worker-2]: Please use openssl tool to add passphrase for your client key file.
[11:53:50][error][ClusterCache][worker-2]: Error occurred when constructing ClusterCache with error: timeout
[11:53:50][error][ClusterCache][worker-1]: Error occurred when constructing ClusterCache with error: timeout
[11:53:50][info][DBConnector][worker-2]: Domino isn't connected, retry after 30s
[11:53:50][info][DBConnector][worker-1]: Domino isn't connected, retry after 30s
The Domino server log shows only one error message:
[0554:0002-0594] 2022/07/14 12:06:17 PM AMgr: Error executing agent 'DeleteExpiredDocs' in 'iam-store.nsf'. Agent signer 'Domino Template Development/Domino': You are not authorized to perform that operation

Databricks DBT Runtime Error, cannot connect to Database. Maybe an SSL error?

I have a custom Databricks instance with a Domain name that points to an AWS Load Balancer. When I put that information in using either the HTTP instructions here or the databricks cluster instructions here, I get this response in the DBT CLI:
Connection:
host: https://subdomain.domain.com
port: 443
cluster: 123456-stuff00003
endpoint: None
schema: default
organization: 0
16:40:39.470091 [debug] [MainThread]: Acquiring new spark connection "debug"
16:40:39.471632 [debug] [MainThread]: Using spark connection "debug"
16:40:39.472524 [debug] [MainThread]: On debug: select 1 as id
16:40:39.472953 [debug] [MainThread]: Opening a new connection, currently in state init
Connection test: [ERROR]
1 check failed:
dbt was unable to connect to the specified database.
The database returned the following error:
>Runtime Error
Database Error
failed to connect
Unfortunately, DBT's debugging logs are terrible and I am not entirely sure why it is failing. I do know that when I connect to the cluster via IntelliJ I have to provide the CA file, the client certificate file, and the client key file, because I am using a self-signed SSL cert (unfortunately, the self-signed cert is required). Also, when defining my ~/.databrickscfg file I have to provide the argument insecure = true.
I encountered this issue recently and fixed it by installing root certificates: execute the "Install Certificates.command" script in the Python home directory used to run dbt.
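For reference, on a python.org installation on macOS that script typically lives under /Applications; a hedged sketch (the version directory below is an assumption and will differ per install):

# run the bundled certificate installer; adjust the version directory to match your Python install
"/Applications/Python 3.10/Install Certificates.command"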
Laurent

Anchore Engine - Jenkins CI plugin

We are trying to scan our Docker images using the Anchore Engine Jenkins plugin.
Currently we create our application Docker images, push them to our own private local registry, and then deploy them in our test environments.
Now, we want to setup docker image scanning in our CI/CD process to check for any vulnerabilities.
We have installed Anchore Engine using the recommended Docker-Compose yaml method given in the Documentation link:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
Post installation, we installed the Anchore Container Image Scanner Plugin in Jenkins.
We configured the plugin as mentioned in the document link:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scanning fails. The error message is as follows:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In the image tag 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry and hello-world:latest is the latest hello-world image available on Docker Hub, which we pulled and pushed into our registry to try out image scanning using Anchore Engine.
Unfortunately, we are not able to find many resources online to help resolve the above-mentioned issue.
If anyone has worked with Anchore Engine, please have a look and help us resolve this issue.
Also, any suggestions, alternatives to anchore-engine, or detailed steps in case we might have missed anything would be really appreciated.
The end of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the system status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found the error below:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon#ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon#ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) if the max retries have been performed and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build-failure behavior can be controlled in the plugin (i.e. there are options to allow the plugin to succeed even if the analysis or image eval fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post), and combined with the information above, it should be clear which scenario is causing the plugin to fail the build.
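To see which state the image is stuck in, you can also query the engine directly, for example by reusing the same engine-cli invocation as the status check above (a sketch; the credentials and URL are simply copied from that command):

docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image get 10.180.25.2:5000/hello-world:latest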
We have resolved the issue.
Root Cause:
We were not able to establish a successful HTTPS connection to the URL https://ancho.re from within the anchore-engine Docker container.
As a result, the policy_engine service was not able to start.
https://ancho.re is required to download policy feeds and sync them periodically. Without these feeds, anchore-engine cannot analyse Docker images.
Solution:
1) We passed an HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine (a sketch follows after these steps).
We used this proxy URL to bypass restrictions in our environment and establish a connection with the https://ancho.re URL.
2) Restarted the docker containers.
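A rough sketch of the docker-compose.yaml change; the service name, proxy URL, and NO_PROXY entries are placeholders, not our actual values:

services:
  anchore-engine:
    environment:
      # placeholders - point HTTPS_PROXY at your own proxy and exclude internal service names from proxying
      - HTTPS_PROXY=http://proxy.example.com:3128
      - NO_PROXY=anchore-db,anchore-engine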
Finally we got all services up and running including Anchore policy-engine.
FYI:
It takes a while to download all the required Feeds depending on your internet speed.
Lastly, Thanks to the Anchore community for quick responses and support over slack.
Hope this helps.
Warm Regards,
Rohan Shetty
