The dynatrace-operator is running fine, but the dynatrace-webhook pod logs show an error message.
How can I find the OneAgentAPM object, and how can I delete it manually? Any help, please.
kubectl logs -f pods/dynatrace-webhook-58975cf6bb-hgs4x
{"level":"info","ts":"2022-10-29T04:01:43.627Z","logger":"dynatrace-operator-version","msg":"dynatrace-operator","version":"v0.9.0","gitCommit":"99a1efbe21f7bf566be7412fe20d61a489d6333c","buildDate":"2022-09-28T13:57:47+00:00","goVersion":"go1.19.1","platform":"linux/amd64"}
{"level":"info","ts":"2022-10-29T04:01:44.243Z","logger":"main.controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8383"}
{"level":"info","ts":"2022-10-29T04:01:44.282Z","logger":"main.controller-runtime.webhook","msg":"Registering webhook","path":"/label-ns"}
Error: OneAgentAPM object detected - the Dynatrace webhook will not inject until the deprecated OneAgent Operator has been fully uninstalled
Usage:
dynatrace-operator webhook-server [flags]
Flags:
--cert string File name for the public certificate. (default "tls.crt")
--cert-key string File name for the private key. (default "tls.key")
--certs-dir string Directory to look certificates for. (default "/tmp/webhook/certs")
-h, --help help for webhook-server
{"level":"info","ts":"2022-10-29T04:01:44.845Z","logger":"main.events","msg":"Unsupported OneAgentAPM CRD still present in cluster, please remove to proceed","type":"Warning","object":{"kind":"Pod","namespace":"dynatrace","name":"dynatrace-webhook-58975cf6bb-hgs4x","uid":"008d0262-8df3-4410-94f0-fbb50167b6cd","apiVersion":"v1","resourceVersion":"92860179"},"reason":"IncompatibleCRDPresent"}
{"level":"info","ts":"2022-10-29T04:01:44.846Z","logger":"main","msg":"OneAgentAPM object detected - the Dynatrace webhook will not inject until the deprecated OneAgent Operator has been fully uninstalled"}
I am trying to find a way to locate the OneAgentAPM object and delete it.
We were previously using a different version of the operator and agent.
You probably have an old CRD that is causing the issue. Try removing the operator, listing all CRDs in the cluster, and looking for Dynatrace OneAgent CRDs. Delete those CRDs and reinstall the OneAgent operator. In my case, I had to delete these two: oneagents.dynatrace.com and oneagentapms.dynatrace.com
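The steps above can be sketched as the following kubectl commands (the CRD names are the two found in my cluster; verify which ones exist in yours before deleting):

```shell
# List all CRDs in the cluster and filter for deprecated OneAgent resources
kubectl get crd | grep -i oneagent

# Delete the deprecated OneAgent CRDs; this also removes any remaining
# OneAgentAPM objects defined by them, unblocking the webhook
kubectl delete crd oneagents.dynatrace.com oneagentapms.dynatrace.com
```

After the CRDs are gone, reinstall the current Dynatrace operator and the webhook should start injecting again.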
I am trying to load a csv into a database in DataStax Astra using the DSBulk tool.
Here is the command I ran minus the sensitive details:
dsbulk load -url D:\\App\\data.csv -k data -t data -b D:\\App\\secure-connect-myapp -u username -p password
Here is the error I get back:
Operation LOAD_20221206-004421-512000 failed: Invalid bundle: missing file config.json.
Here is the full log:
2022-12-06 00:44:21 INFO Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
2022-12-06 00:44:21 INFO A cloud secure connect bundle was provided: ignoring all explicit contact points.
2022-12-06 00:44:21 INFO A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
2022-12-06 00:44:21 INFO Operation directory: C:\Program Files\dsbulk-1.10.0\bin\logs\LOAD_20221206-004421-512000
2022-12-06 00:44:21 ERROR Operation LOAD_20221206-004421-512000 failed: Invalid bundle: missing file config.json.
java.lang.IllegalStateException: Invalid bundle: missing file config.json
at com.datastax.oss.driver.internal.core.config.cloud.CloudConfigFactory.createCloudConfig(CloudConfigFactory.java:114)
at com.datastax.oss.driver.api.core.session.SessionBuilder.buildDefaultSessionAsync(SessionBuilder.java:876)
at com.datastax.oss.driver.api.core.session.SessionBuilder.buildAsync(SessionBuilder.java:817)
at com.datastax.oss.driver.api.core.session.SessionBuilder.build(SessionBuilder.java:835)
at com.datastax.oss.dsbulk.workflow.commons.settings.DriverSettings.newSession(DriverSettings.java:560)
at com.datastax.oss.dsbulk.workflow.load.LoadWorkflow.init(LoadWorkflow.java:145)
at com.datastax.oss.dsbulk.runner.WorkflowThread.run(WorkflowThread.java:52)
The error says that config.json is missing, but it isn't. So I'm stuck. Unless it's looking somewhere other than in the bundle I specified, but the bundle definitely has the config.json file.
This error:
...
java.lang.IllegalStateException: Invalid bundle: missing file config.json
at com.datastax.oss.driver.internal.core.config.cloud.CloudConfigFactory.createCloudConfig(CloudConfigFactory.java:114)
...
indicates that the Java driver bundled with DSBulk is unable to connect to your Astra DB because it couldn't get the configuration details from the secure connect bundle.
Please make sure that the valid secure bundle ZIP is accessible to DSBulk. You need to provide the path to the ZIP file, not just the directory. For example:
$ dsbulk ... -b /path/to/secure-connect-db.zip ...
Please check the path in your command then try again. Cheers!
To leverage the DataStax Bulk Loader (DSBulk), you need to pass in the secure connect bundle (SCB) correctly: provide either the fully qualified path or the relative path to the SCB file.
The correct command in your case would look like:
./dsbulk load -url 'D:\\App\\data.csv' -k data -t data -b 'D:\\App\\secure-connect-myapp.zip' -u username -p password
Note that the -b option takes the full SCB filename, including the .zip file extension.
Other Resources:
Load data using DSBulk into DataStax Astra DB
-b command-line option reference
BONUS TIP: One could easily configure everything within a configuration file and leverage that. See documentation for additional details.
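As a sketch of that bonus tip (option names should be verified against the DSBulk configuration reference; the paths and credentials below are taken from the question), a configuration file passed via the `-f` option might look like this:

```hocon
dsbulk {
  connector.name = csv
  connector.csv.url = "D:\\App\\data.csv"
  schema.keyspace = "data"
  schema.table = "data"
}
datastax-java-driver {
  basic.cloud.secure-connect-bundle = "D:\\App\\secure-connect-myapp.zip"
  advanced.auth-provider {
    username = "username"
    password = "password"
  }
}
```

You would then run something like `dsbulk load -f my-settings.conf` instead of repeating all the flags on every invocation.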
I was trying to showcase binary authorization to my client as POC. During the deployment, it is failing with the following error message:
pods "hello-app-6589454ddd-wlkbg" is forbidden: image policy webhook backend denied one or more images: Denied by cluster admission rule for us-central1.staging-cluster. Denied by Attestor. Image gcr.io//hello-app:e1479a4 denied by projects//attestors/vulnz-attestor: Attestor cannot attest to an image deployed by tag
I have adhered to all the steps mentioned on the site.
I have verified the image repeatedly on a few occasions, for example using the command below to forcefully create the attestation:
gcloud alpha container binauthz attestations sign-and-create --project "projectxyz" --artifact-url "gcr.io/projectxyz/hello-app#sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" --attestor "vulnz-attestor" --attestor-project "projectxyz" --keyversion "1" --keyversion-key "vulnz-signer" --keyversion-location "us-central1" --keyversion-keyring "binauthz" --keyversion-project "projectxyz"
It throws this error:
ERROR: (gcloud.alpha.container.binauthz.attestations.sign-and-create) Resource in project [project xyz] is the subject of a conflict: occurrence ID "c5f03cc3-3829-44cc-ae38-2b2b3967ba61" already exists in project "projectxyz"
So when I verify, I find the attestation present:
gcloud beta container binauthz attestations list --artifact-url "gcr.io/projectxyz/hello-app#sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" --attestor "vulnz-attestor" --attestor-project "projectxyz" --format json | jq '.[0].kind' | grep 'ATTESTATION'
"ATTESTATION"
Any feedback please?
Thanks in advance.
Thank you for trying Binary Authorization. I just updated the Binary Authorization Solution, which you might find helpful.
A few things I noticed along the way:
... denied by projects//attestors/vulnz-attestor:
There should be a project ID in between projects and attestors, like:
projects/my-project/attestors/vulnz-attestor
Similarly, your gcr.io links should include that same project ID, for example:
gcr.io//hello-app:e1479a4
should be
gcr.io/my-project/hello-app:e1479a4
If you followed a tutorial, it likely asked you to set a variable like $PROJECT_ID, but you may have accidentally unset it or ran the command in a different terminal session.
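If the variable was indeed unset, one way to restore it (assuming the tutorial's variable is named $PROJECT_ID; adjust to whatever name your tutorial uses) is:

```shell
# Re-populate the project variable from the active gcloud configuration
export PROJECT_ID=$(gcloud config get-value project)

# Sanity check: the image reference should now contain the project ID
# instead of the empty segment seen in the error (gcr.io//hello-app:...)
echo "gcr.io/${PROJECT_ID}/hello-app:e1479a4"
```

Then re-run the deployment in the same terminal session so the variable is still set.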
After pointing to another repository, the problem was solved. Before that, the failures could have had many causes; please contact support with the error message if you are having the same problem.
I would like to know where, if it is possible, I can configure default catalog and server values to use when executing the presto CLI.
Presto CLI info:
ls -lthr /opt/presto-server-0.169/presto
/opt/presto-server-0.169/presto -> presto-cli-0.169-executable.jar
And instead of executing:
/opt/presto-server-0.169/presto --server localhost:6666 --schema abc --catalog catalog-1
I would like to execute:
/opt/presto-server-0.169/presto
with it picking up localhost:6666 as my server and catalog-1 as my catalog. I would like to specify the schema once I make the connection.
Any help will be appreciated!
Thanks.
There is no option to set the host lazily in the console. The server needs to be defined upfront; by default, localhost:8080 is used.
If you cannot pass the proper arguments to presto-cli and cannot use the default server host, you can change the default values in the presto-cli source code and compile your own version.
You need to check out the project on GitHub.
Change the default values in ClientOptions.
Package the jar for the Presto CLI: cd presto-cli && mvn package
You will find the jar at target/presto-cli-0.201-SNAPSHOT.jar
For schema/catalog, you can define it in the console itself with the USE command. The syntax is as follows: USE [<catalog>.]<schema>.
Please note that with each version of Presto you will also need to compile and maintain your own version of presto-cli, which may quickly become a burden.
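As a lighter-weight alternative to recompiling, a small wrapper script can supply the defaults and still forward any extra arguments (a sketch; the server and catalog values are taken from the question, and the script name is hypothetical):

```shell
#!/bin/sh
# presto-default (hypothetical name): launch the Presto CLI with fixed
# server and catalog defaults, forwarding any additional arguments.
exec /opt/presto-server-0.169/presto \
  --server localhost:6666 \
  --catalog catalog-1 \
  "$@"
```

Place it on your PATH and run `presto-default`, then pick the schema inside the session with USE.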
I am having issues with Node-RED and can no longer run any flows; I am not sure what to do anymore.
I get the following error:
Imported unrecognised type: mqtt-env-broker
Flows stopped due to missing node types. Check logs for details.
I tried to remove this module from the palette but got the following error; I am also unable to disable it:
Failed to remove: node-red-contrib-mqtt-env
Error: Type in use: mqtt-env-broker
Check the log for more information
I have installed node-red-admin and tried to remove the module from the command line, so I issued the following command:
sudo node-red-admin remove node-red-contrib-mqtt-env/mqtt-env
I get the following error:
404: Cannot DELETE /nodes/node-red-contrib-mqtt-env/mqtt-env
This is what I get when I run the following command:
node-red-admin list
Nodes Types State
node-red-contrib-mqtt-env/mqtt-env mqtt-env in error
mqtt-env out
mqtt-env-broker
node-red-dashboard/ui_audio ui_audio enabled
...
...
node-red/mqtt mqtt in error
mqtt out
mqtt-broker
The flows are stopped because they are trying to use a node type you do not have installed, or, as appears to be the case here, one that hits an error when it tries to start.
The runtime won't let you remove the node because it is referenced in your flow.
To fix this you need to delete any of the nodes referenced by this module from your flow. The name mqtt-env-broker suggests it is a configuration node rather than a regular flow node. Open the Configuration Nodes sidebar panel (from the drop-down menu) and look for any unknown config nodes. Double click on them and delete them. Once you've removed them, hit deploy and things should start working again.
You should then be able to delete the node module from your runtime.
When I run this command for the Weave network, it shows this error:
[root@ts ~]# kubectl apply -f https://git.io/weave-kube
error validating "https://git.io/weave-kube": error validating data: [unexpected type: object, unexpected type: object, unexpected type: object, unexpected type: object]; if you choose to ignore these errors, turn validation off with --validate=false
How to resolve this?
@verma_neeraj,
Does this still and consistently happen to you?
Which Kubernetes version are you using?
What happens if you run curl https://git.io/weave-kube?
I can confirm the YAML file available at https://git.io/weave-kube successfully configures the Weave Net daemonset under Kubernetes versions 1.5.+, but I have not tried other versions.
Anticipating an issue related to Kubernetes versions, note that there is work being done to support multiple Kubernetes versions, see these two GitHub issues.
This should be available in the coming weeks.
(Disclosure: I work for Weaveworks)