I'm having issues using the HashiCorp Vault template with the to-be-continuous terraform template.
When I use the terraform template together with its Vault variant, I get an error message.
Here is a summary of my .gitlab-ci.yml:
include:
  - project: "to-be-continuous/terraform"
    ref: "2.4.0"
    file: "templates/gitlab-ci-terraform.yml"
  # Vault variant
  - project: 'to-be-continuous/terraform'
    ref: '2.4.0'
    file: '/templates/gitlab-ci-terraform-vault.yml'

variables:
  VAULT_BASE_URL: "https://vault.secrets.tech.orange/v1"
  VAULT_ROLE_ID: $VAULT_ROLE_ID
  VAULT_SECRET_ID: $VAULT_SECRET_ID
  GCP_MYSECRET: "#url#http://vault-secrets-provider/api/secrets/XXX/gcp/credentials?field=mygcpsecret"
Error Message:
[ERROR] Failed getting secret GCP_MYSECRET:
... Connecting to vault-secrets-provider (127.0.0.1:80)
... wget: server returned error: HTTP/1.1 404 Not Found
I tried without the Vault template and it works.
Could you please help me with this, or point me to where I can ask for help?
It turns out you were facing this issue because of a Kubernetes runner limitation.
As stated in the GitLab documentation,
Kubernetes runners cannot use several services using the same port
As a result, using the tracking service in addition to another service that uses the same port (80) fails.
It has now been fixed.
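For context, here is a rough sketch of why the collision happens; the image names below are placeholders, not the actual template images:

job-using-secrets:
  services:
    # placeholder image names, for illustration only
    - name: registry.example.com/tracking-service:latest
      alias: tracking
    - name: registry.example.com/vault-secrets-provider:latest
      alias: vault-secrets-provider
  script:
    # On the Kubernetes executor, all service containers share the job pod's
    # network namespace, so every alias resolves to 127.0.0.1 and only one
    # container can bind port 80; the secret lookup then hits the wrong
    # service and returns the 404 seen above.
    - wget -qO- "http://vault-secrets-provider/api/secrets/XXX/gcp/credentials?field=mygcpsecret"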
I am having issues connecting both dbt Cloud and dbt Core to Databricks.
I have read these four links, but still cannot connect:
https://docs.databricks.com/integrations/prep/dbt-cloud.html#connect-to-dbt-cloud&language-Cluster
https://docs.databricks.com/integrations/prep/dbt.html
https://docs.getdbt.com/reference/warehouse-profiles/databricks-profile
https://github.com/databricks/dbt-databricks
On dbt Cloud:
When I test the connection during the project creation step, it passes. However, when I use the connection to create and run a job, it returns this message: "Cannot set database in spark!"
Edit: the issue was fixed once, but it has come back again.
Original fix:
The dbt Core connection issue was fixed. It was caused by a Python certificate issue on macOS; please refer to this link for the solution.
On dbt Core:
This is how I set up my profiles.yml file based on the documentation:
databrick_dbt_lakehouse:
  outputs:
    dev:
      host: adb-755xxxxxxx7.7.azuredatabricks.net
      http_path: /sql/protocolv1/o/755xxxx7/0517-xxxxxx-xxxxxx
      schema: default
      threads: 1
      token: dapi<my token>
      type: databricks
  target: dev
Notes:
for http_path I have tried both with and without the leading slash (/) before sql/...
I assume schema means the database name. I have tried two, but neither works.
I use pipenv with Python 3.8.8.
When I run dbt debug, I get this message:
check failed:
dbt was unable to connect to the specified database.
The database returned the following error:
Runtime Error
Database Error
failed to connect
Please help, thanks
Your http_path seems wrong. Here is an extract from my profiles.yml
databricks_dbt_demo:
  outputs:
    dev:
      host: adb-redacted.azuredatabricks.net
      http_path: /sql/1.0/endpoints/redacted  # for SQL Endpoint
      # http_path: /sql/protocolv1/o/{ORG-ID}/{CLUSTER-ID}  # for All-Purpose Cluster
      schema: your_database_here
      token: your_personal_access_token_here
      type: databricks
  target: dev
Are you using a cluster (e.g. All Purpose Cluster) or a SQL Endpoint?
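Once the profile points at the right http_path (and schema), you can re-check the connection from the project directory; the target name below matches the profile above:

dbt debug --target dev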
Edit: This issue has come back again. I am writing this comment on the third day after the original fix.
When I run dbt run or dbt snapshot, it returns this error message:
Encountered an error:
Runtime Error
Database Error
failed to connect
Original fix:
The issue was caused by a Python certificate issue on macOS; please refer to this link for the solution.
The dbt Cloud issue was caused by an incorrect schema name in the YAML file.
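For reference, on macOS that certificate fix usually comes down to running the certificate install script that ships with the python.org installer; the path below assumes Python 3.8, matching the pipenv environment mentioned in the question:

# installs the certifi CA bundle for the python.org build of Python 3.8
/Applications/Python\ 3.8/Install\ Certificates.command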
I am currently working through AWS's Node.js tutorial, but am stymied at the deployment phase. When I try to upload the provided source bundle, the build fails and I get the following error:
Unable to deploy application version: Configuration validation exception: Invalid option specification (Namespace: 'aws:elasticbeanstalk:container:nodejs:staticfiles', OptionName: '/static'): Unknown configuration setting.
Where does this error come from, and where can I look to fix it?
"The current configuration assumes that you are using Amazon Linux AMI (pre-Amazon Linux 2), but the current default image is "Amazon Linux 2" and the static file parameter has changed."
Solution:
Edit the .ebextensions/options.config file and change:
aws:elasticbeanstalk:container:nodejs:staticfiles:
to:
aws:elasticbeanstalk:environment:proxy:staticfiles:
reference: https://github.com/aws-samples/eb-node-express-sample/pull/21/files
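For reference, a minimal corrected .ebextensions/options.config might look like this; it is only a sketch, and the /static mapping assumes the tutorial's default static directory, so keep whatever paths your existing file already declares:

option_settings:
  # Amazon Linux 2 namespace for serving static files through the proxy
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: /static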
I am in the development phase, and I am trying out Azure Functions, with the following settings:
Linux
Premium plan
Node.js 12
Deploy using FTP
What I have done:
I have deployed the sample Durable Functions HTTP starter as specified here: https://learn.microsoft.com/en-us/azure/azure-functions/durable/quickstart-js-vscode#client-function-http-starter
and deployed my code to xxxxxxxxxxx.ftp.azurewebsites.windows.net under /site/wwwroot.
I received the following error in LogFiles/2020_06_10_xxxx_docker.log:
2020-06-10T01:05:51.825Z ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
2020-06-10T01:05:51.845Z INFO - Stopping site XXXXXXXXXX because it failed during startup.
2020-06-10T01:09:59.152Z INFO - Pulling image from Docker hub: mcr.microsoft.com/azure-functions/node:3.0-node8-appservice-stage6
2020-06-10T01:10:00.049Z ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"manifest for mcr.microsoft.com/azure-functions/node:3.0-node8-appservice-stage6 not found: manifest unknown: manifest tagged by \"3.0-node8-appservice-stage6\" is not found"}
Upon inspection, the mcr.microsoft.com/azure-functions/node:3.0-node8-appservice-stage6 Docker image doesn't exist, so the pull failed.
My question is: how do I instruct the Azure Function to use a valid Docker image instead of a non-existent one? Or did I do something wrong in the steps above that resulted in this issue? Thanks.
Completely removing the Azure Function and creating a new one fixed this issue.
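If recreating the app is not an option, one thing you could try (a sketch, not verified against your setup) is to check and override the runtime image selector with the Azure CLI; the app and resource group names below are placeholders:

# Show which image/runtime the Function App is currently configured to use
az functionapp config show --name <app-name> --resource-group <rg> --query linuxFxVersion

# Point it at the built-in Node 12 runtime instead of a missing custom image tag
az functionapp config set --name <app-name> --resource-group <rg> --linux-fx-version "Node|12"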
I was trying to showcase Binary Authorization to my client as a POC. During deployment, it fails with the following error message:
pods "hello-app-6589454ddd-wlkbg" is forbidden: image policy webhook backend denied one or more images: Denied by cluster admission rule for us-central1.staging-cluster. Denied by Attestor. Image gcr.io//hello-app:e1479a4 denied by projects//attestors/vulnz-attestor: Attestor cannot attest to an image deployed by tag
I have followed all the steps mentioned on the site.
I have verified the image repeatedly a few times, for example using the command below to forcefully create the attestation:
gcloud alpha container binauthz attestations sign-and-create --project "projectxyz" --artifact-url "gcr.io/projectxyz/hello-app#sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" --attestor "vulnz-attestor" --attestor-project "projectxyz" --keyversion "1" --keyversion-key "vulnz-signer" --keyversion-location "us-central1" --keyversion-keyring "binauthz" --keyversion-project "projectxyz"
It throws this error:
ERROR: (gcloud.alpha.container.binauthz.attestations.sign-and-create) Resource in project [project xyz] is the subject of a conflict: occurrence ID "c5f03cc3-3829-44cc-ae38-2b2b3967ba61" already exists in project "projectxyz"
So when I verify, I find the attestation present:
gcloud beta container binauthz attestations list --artifact-url "gcr.io/projectxyz/hello-app#sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" --attestor "vulnz-attestor" --attestor-project "projectxyz" --format json | jq '.[0].kind' \
> | grep 'ATTESTATION'
"ATTESTATION"
Here are the screenshots:
Any feedback please?
Thanks in advance.
Thank you for trying Binary Authorization. I just updated the Binary Authorization Solution, which you might find helpful.
A few things I noticed along the way:
... denied by projects//attestors/vulnz-attestor:
There should be a project ID in between projects and attestors, like:
projects/my-project/attestors/vulnz-attestor
Similarly, your gcr.io links should include that same project ID, for example:
gcr.io//hello-app:e1479a4
should be
gcr.io/my-project/hello-app:e1479a4
If you followed a tutorial, it likely asked you to set a variable like $PROJECT_ID, but you may have accidentally unset it or run the command in a different terminal session.
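Separately, the error message also says "Attestor cannot attest to an image deployed by tag": attestations are tied to a digest, so the Deployment has to reference the image by digest rather than by the e1479a4 tag. A sketch reusing the digest from your sign-and-create command (the deployment and container names are assumptions):

# Reference the attested image by digest, not by tag
kubectl set image deployment/hello-app \
  hello-app=gcr.io/projectxyz/hello-app@sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699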
After pointing it to another repository the problem was solved, but before that you were having problems that could have many causes; please contact support with the error message if you are having the same problem.
I'm trying to install Istio with automatic sidecar injection into Kubernetes. My environment consists of three masters and two nodes and was built on Azure using the Azure Container Service marketplace product.
Following the documentation located here, I have so far enabled RBAC and DynamicAdmissionControl. I accomplished this by modifying /etc/kubernetes/istio-initializer.yaml on the Kubernetes master, adding the content outlined in red, and then restarting the master with the reboot command.
The next step in the documentation is to apply the YAML using kubectl. I assume that the documentation intends for the user to clone the Istio repository and cd into it before this step, but that is unmentioned.
git clone https://github.com/istio/istio.git
cd istio
kubectl apply -f install/kubernetes/istio-initializer.yaml
After which the following error occurs:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
error: error validating "install/kubernetes/istio-initializer.yaml": error validating data: found invalid field initializers for v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
If I attempt to execute kubectl apply with the mentioned flag, --validate=false, this error is generated instead:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml --validate=false
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
deployment "istio-initializer" configured
error: unable to recognize "install/kubernetes/istio-initializer.yaml": no matches for admissionregistration.k8s.io/, Kind=InitializerConfiguration
I'm not sure where to go from here. The problem appears to be related to the admissionregistration.k8s.io/v1alpha1 block in the yaml but I'm unsure what specifically is incorrect in this block.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: istio-sidecar
initializers:
  - name: sidecar.initializer.istio.io
    rules:
      - apiGroups:
          - "*"
        apiVersions:
          - "*"
        resources:
          - deployments
          - statefulsets
          - jobs
          - daemonsets
Installed version of Kubernetes:
user@hostname:~/istio$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I suspect this is a versioning mismatch. As a follow-up question, is it possible to deploy a version of Kubernetes >= 1.7.4 to Azure using ACS?
I'm fairly new to working with Kubernetes so if anyone could help I would greatly appreciate it. Thank you for your time.
This seems to be a versioning problem, as the alpha feature is only supported on Kubernetes 1.7 and later, as mentioned here (https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers).
1.7 introduces two alpha features, Initializers and External Admission
Webhooks, that address these limitations. These features allow admission
controllers to be developed out-of-tree and configured at runtime.
And it is possible to deploy a version of Kubernetes >= 1.7.4 to Azure. I'm not sure about the version deployed through the portal, but if you use acs-engine to generate the ARM template, it is possible to deploy a cluster with version 1.7.5.
You can refer to https://github.com/Azure/acs-engine for the procedure. It basically involves three steps. First, create the cluster definition JSON file by referring to the clusterDefinition section; to use version 1.7.5, set the attribute "orchestratorRelease" to "1.7", and enable RBAC by setting the attribute "enableRbac" to true. Second, use acs-engine (version >= 0.6.0) to turn the JSON file into an ARM template (azuredeploy.json and azuredeploy.parameters.json should be created). Lastly, use the "New-AzureRmResourceGroupDeployment" command in PowerShell to deploy the cluster to Azure.
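As an illustration, a minimal cluster definition along those lines might look like the following; this is only a sketch, and the DNS prefix, VM sizes, SSH key, and service principal values are placeholders to fill in:

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "enableRbac": true
      }
    },
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "my-istio-cluster",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 2,
        "vmSize": "Standard_D2_v2"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          { "keyData": "<your-ssh-public-key>" }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "<service-principal-app-id>",
      "secret": "<service-principal-secret>"
    }
  }
}

You would then run acs-engine generate against this file to produce azuredeploy.json and azuredeploy.parameters.json before deploying them with New-AzureRmResourceGroupDeployment.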
Hope this helps :)