I'm trying to deploy Elasticsearch 7.10.x in OpenShift. When I deploy the Helm chart, the response says it was successfully deployed, but when I check the pods I see the error below.
create Pod elasticsearch-dev1-master-0 in StatefulSet elasticsearch-dev1-master failed error: pods "elasticsearch-dev1-master-0" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{1000}: 1000 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000620000, 1000629999], spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[0].securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000620000, 1000629999], spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "scc-elasticsearch": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
Also, can I fix this by editing the default SCC? What is the recommended way to deploy the Elasticsearch Helm chart in OpenShift?
Look at this: it seems you can simply null out those runAsUser/fsGroup settings so the pod runs within the project's allowed UID range, rather than granting extra privileges (or editing the default SCCs, which is discouraged) to make it run as root.
https://github.com/elastic/helm-charts/blob/7.10/elasticsearch/examples/openshift/values.yaml
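That example sets roughly the following (a sketch of the relevant values; check the linked file for your chart version):
# let the restricted SCC assign the UID and fsGroup from the
# project's allowed range instead of hardcoding 1000/0
securityContext:
  runAsUser: null
podSecurityContext:
  fsGroup: null
  runAsUser: null
# the sysctl init container runs privileged as root, which the restricted SCC forbids
sysctlInitContainer:
  enabled: false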
If you're using a different Helm chart, provide a link to the one you're using.
I deployed infrastructure using this repo. The logs of my Application Gateway ingress pod look like this:
Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"ingress-appgw-deployment-bf6785d8d-87lgm", UID:"uiuiduid-4dff-4496-ba43-0ed031542ed7", APIVersion:"v1", ResourceVersion:"102567", FieldPath:""}): type: 'Warning' reason: 'FailedApplyingAppGwConfig' network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client 'xxxxxxxx-551c-46a7-b1c2-e4eb093784ce' with object id 'xxxxxxxx-551c-46a7-b1c2-e4eb093784ce' has permission to perform action 'Microsoft.Network/applicationGateways/write' on scope '/subscriptions/xxxxxxxx-6a2d-49e7-a103-74011445fdf5/resourceGroups/rg-kubota-dev/providers/Microsoft.Network/applicationGateways/agw-kubota-dev'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/xxxxxxx-6a2d-49e7-a103-74011445fdf5/resourcegroups/rg-kubota-dev/providers/Microsoft.ManagedIdentity/userAssignedIdentities/id-agw-keyvault-kubota-dev' or the linked scope(s) are invalid."
This issue is similar to mine, and I ran:
az role assignment create --role "Managed Identity Operator" --assignee xxxxxxxx-551c-46a7-b1c2-e4eb093784ce --scope /subscriptions/xxxxxxxx-6a2d-49e7-a103-74011445fdf5/resourceGroups/rg-kubota-dev/providers/Microsoft.Network/applicationGateways/agw-kubota-dev
and the permission was added successfully.
But the error mentioned in the Application Gateway logs is still present.
I'm not sure what the cause is. Any pointers would be helpful.
The key part of the error you posted is: "...it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/xxxxxxx-6a2d-49e7-a103-74011445fdf5/resourcegroups/rg-kubota-dev/providers/Microsoft.ManagedIdentity/userAssignedIdentities/id-agw-keyvault-kubota-dev' or the linked scope(s) are invalid."
According to this, the scope you used in your role assignment is wrong.
You have given the incorrect scope ID in your existing Azure CLI command: you assigned the "Managed Identity Operator" role at the Application Gateway scope, but it must be granted on the user-assigned managed identity itself. Execute the command again with the correct scope, and only then will the ingress controller be able to configure the Application Gateway with the Managed Identity Operator permission. Ensure the scope below (adjusted for your environment) is the one used in your Azure CLI command.
Correct scope:
/subscriptions/xxxxxxx-6a2d-49e7-a103-74011445fdf5/resourcegroups/rg-kubota-dev/providers/Microsoft.ManagedIdentity/userAssignedIdentities/id-agw-keyvault-kubota-dev
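So the corrected command would be (IDs reused from your error message):
az role assignment create --role "Managed Identity Operator" --assignee xxxxxxxx-551c-46a7-b1c2-e4eb093784ce --scope /subscriptions/xxxxxxx-6a2d-49e7-a103-74011445fdf5/resourcegroups/rg-kubota-dev/providers/Microsoft.ManagedIdentity/userAssignedIdentities/id-agw-keyvault-kubota-dev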
Reference:
application-gateway-kubernetes-ingress/appgw-ssl-certificate.md at master · Azure/application-gateway-kubernetes-ingress (github.com)
I was previously able to use terraform 0.11 with digitalocean. I have since updated the terraform version to 0.13.5 and updated the digitalocean provider. However, after this change, I am not able to provision any resource as I am getting a 401 error from digitalocean. I have even tried using a new authentication token but that produced the same result.
Error: Error creating droplet: POST https://api.digitalocean.com/v2/droplets: 401 Unable to authenticate you
I have modified the TF_LOG value but that has not provided any additional details to help debug the issue. Any ideas on how to troubleshoot this further?
The token is valid as I am able to use it with curl but not with terraform 0.13.5 and digitalocean provider 2.2.0.
What could be happening is that, after the upgrade, you are not loading the variable correctly, so Terraform is passing an empty token to the provider. The provider then tries to authenticate with an empty/wrong token and fails, resulting in the 401.
If you are providing a default value, try removing it to confirm the issue, and make Terraform ask you for the token instead.
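For instance, a leftover empty default like this would be used silently when no tfvars value is loaded (hypothetical snippet):
variable "do_token" {
  # an empty default is silently passed to the provider when no
  # tfvars value is loaded, producing the 401 above
  default = ""
}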
Then try following this example:
# Set the variable value in *.tfvars file
# or using -var="do_token=..." CLI option
variable "do_token" {}

# Configure the DigitalOcean Provider
provider "digitalocean" {
  token = var.do_token
}

# Create a web server
resource "digitalocean_droplet" "web" {
  # ...
}
And make sure you name your file whatever.auto.tfvars (the .auto.tfvars suffix is the key) with the token in it like this:
do_token = ua0uhk0a0ka0k7a0o90ia0oekadho0eka9
And it should work, or ask you for a token.
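A quick way to rule out the variable wiring entirely is to export the token in the environment instead; as far as I remember, the provider also reads DIGITALOCEAN_TOKEN (token value reused from the example above):
# bypass tfvars completely; if this works, the variable wiring was the problem
export DIGITALOCEAN_TOKEN="ua0uhk0a0ka0k7a0o90ia0oekadho0eka9"
terraform plan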
To be noted: this is an API token, not your password. Follow the process at https://docs.digitalocean.com/reference/api/create-personal-access-token/ if you have never created/used one.
I am trying to deploy an ARM template for ADF using Azure DevOps CI/CD
The deployment was successful, but while testing the linked services I am not able to connect.
The linked service connects to an ADLS location under the same subscription; authentication uses a service principal, with the client secret fetched from Key Vault by secret name.
The Key Vault is also under the same subscription and resource group.
While trying to connect the linked service to the ADLS location I get the error below.
Failed to get access token by using service principal. Error: invalid_client, Error Message: AADSTS7000215: Invalid client secret is provided.
Trace ID: 67d0e882-****-****-****-***6a0001
Correlation ID: 39051de7-****-****-****-****6402db04
Timestamp: 2020-11-** **:**:**Z Response status code does not indicate success: 401 (Unauthorized). {"error":"invalid_client","error_description":"AADSTS7000215: Invalid client secret is provided.\r\nTrace ID: 67d0e882-****-****-****-***6a0001\r\nCorrelation ID: 39051de7-****-****-****-****6402db04\r\nTimestamp: 2020-11-** **:**:**Z","error_codes":[7000215],"timestamp":"2020-11-** **:**:**Z","trace_id":"67d0e882-****-****-****-***6a0001","correlation_id":"39051de7-****-****-****-****6402db04","error_uri":"https://login.microsoftonline.com/error?code=7000215"}: Unknown error .
AADSTS7000215: Invalid client secret is provided.
The linked services that connect to clusters, whose connection secrets are stored in the same Key Vault, are working fine.
I am confused why some secrets (for cluster connections) in the same Key Vault work while a few (for the ADLS connection) do not.
I checked the application under the same principal ID in Azure Active Directory, and the secret is valid until 2022.
Any idea about the root cause of the error and how to resolve the issue?
I have encountered a similar problem before. You need to make sure that the client secret stored in Key Vault actually belongs to the application (client ID) the linked service uses; AADSTS7000215 typically means the secret doesn't match the app or its value was copied incorrectly. Alternatively, create a new client secret and update the Key Vault entry; that should work for you.
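A minimal sketch of that fix with the Azure CLI (placeholder names; adjust to your app and vault):
# create a fresh secret on the application used by the linked service
az ad app credential reset --id <app-client-id> --append
# store the new secret value under the Key Vault secret name
# that the ADF linked service references
az keyvault secret set --vault-name <vault-name> --name <adls-secret-name> --value <new-client-secret>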
I'm trying to set up SSO between our (regular, not AKS) kubernetes clusters and Azure AD.
Since I don't know how to forward the token to the kube-dashboard, I'm just currently trying with kubectl binary installed on my computer.
It works when no groups are involved, but we want to filter by security group (accounts on AAD are synced from our onprem Active Directory), no kube RBAC involved.
Setup is inspired by https://medium.com/@olemarkus/using-azure-ad-to-authenticate-to-kubernetes-eb143d3cce10 and https://learn.microsoft.com/fr-fr/azure/aks/azure-ad-integration:
a web app for the kube API server, configured to expose its API (add scope etc.), with app ID abc123
a native app for the kubectl client, configured with the API permission added from the web app, with app ID xyz456
In the kube API server YAML manifest, I add:
- --oidc-client-id=spn:abc123
- --oidc-issuer-url=https://sts.windows.net/OurAADTenantID
Configuring the kubectl binary:
kubectl config set-cluster test-legacy-2 --server=https://192.168.x.y:4443 --certificate-authority=/somelocation/ca.pem
kubectl config set-credentials USER@mydomain.com --auth-provider=azure --auth-provider-arg=environment=AzurePublicCloud --auth-provider-arg=client-id=xyz456 --auth-provider-arg=tenant-id=OurAADTenantID --auth-provider-arg=apiserver-id=abc123
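For completeness, the context is then wired up and selected in the usual way (names taken from the commands above):
kubectl config set-context test-legacy-2 --cluster=test-legacy-2 --user=USER@mydomain.com
kubectl config use-context test-legacy-2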
Also, in the Azure client app manifest, I had to specify:
"allowPublicClient":true,
"oauth2AllowIdTokenImplicitFlow":true
Otherwise I got the error "Failed to acquire a token: acquiring a new fresh token: waiting for device code authentication to complete: autorest/adal/devicetoken: Error while retrieving OAuth token: Unknown Error" (solution found on https://github.com/MicrosoftDocs/azure-docs/issues/10326).
The issues start when trying to filter on a security group that I find in the JWT, as per https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
I am receiving a format error even though the JWT Azure sends me does contain the groups in the right format (a JSON array of strings).
Config:
In the Azure web app manifest, to have the groups in my JWT:
"groupMembershipClaims": "SecurityGroup",
In the kube API server YAML manifest:
- --oidc-groups-claim=groups
- --oidc-required-claim=groups=bbc2eedf-79cd-4505-9fb4-39856ed3790e
with the string here being the GUID of my target security group.
kubectl outputs "error: You must be logged in to the server (Unauthorized)", and the kube API server logs show: authentication.go:62] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, oidc: parse claim groups: json: cannot unmarshal array into Go value of type string]]
But I don't understand why it is unhappy, because when I decode the JWT I do have:
"groups": [
"00530f35-0013-4237-8947-6e3f6a7895ca",
"bbc2eedf-79cd-4505-9fb4-39856ed3790e",
"17dff614-fd68-4a38-906c-69561daec8b7"
],
which to my knowledge is a well-formatted JSON array of strings...
Why does the API server complain about the JWT?
OK so: --oidc-required-claim only supports string claims, not an array of strings like groups, which is why the API server fails to parse the token.
But I found a workaround.
Don't use --oidc-groups-claim and --oidc-required-claim.
In Azure, go to the Properties of the API server App.
Select Yes in "User assignment required"
In "Users and groups" add the specific Security Group you want to filter on
To test : Remove yourself from the Security Group
Wait for the token to expire (in my case it was 1 hour)
You can't log in anymore
I am trying to access DynamoDB from my Node app deployed on AWS Elastic Beanstalk. I am getting this error:
User is not authorized to perform: dynamodb:PutItem on resource
It works perfectly fine locally, but it stops working when I deploy to AWS.
A DynamoDB access-denied error is generally a policy issue. Check the IAM role/policies that you are using. A quick check is to add the AmazonDynamoDBFullAccess policy to your role via the "Permissions" tab in the AWS console. If it works after that, it means you need to create the right access policy and attach it to your role.
Check the access key you are using to connect to DynamoDB in your Node app on AWS. That access key likely belongs to a user that does not have the necessary privileges in IAM. So, find the IAM user, create or update an appropriate policy, and you should be good.
For Beanstalk you need to set up the instance role's policies when you publish; check out the official Elastic Beanstalk docs, and the example courtesy of @Tirath Shah.
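For example, with the AWS CLI, attaching the managed policy to the instance profile role (role name assumed to be the Beanstalk default):
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess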
Granting full DynamoDB access using the AWS managed policy AmazonDynamoDBFullAccess is not recommended and is not a best practice.
Instead, add your table ARN to the Resource key in your role's policy JSON:
"Resource": "arn:aws:dynamodb:<region>:<account_id>:table/dynamodb_table_name"
In my case (trying to write to a DynamoDB table from a SageMaker notebook for experimental purposes), the complete error looks like this:
ClientError: An error occurred (AccessDeniedException) when calling the UpdateItem operation: User: arn:aws:sts::728047644461:assumed-role/SageMakerExecutionRole/SageMaker is not authorized to perform: dynamodb:UpdateItem on resource: arn:aws:dynamodb:eu-west-1:728047644461:table/mytable
I needed to go to AWS Console -> IAM -> Roles -> SageMakerExecutionRole, and attach these two policies:
AmazonDynamoDBFullAccess
AWSLambdaInvocation-DynamoDB
In a real-world scenario though, I'd advise following the least-privilege philosophy and applying a policy that allows only the actions you need (e.g. dynamodb:PutItem or dynamodb:UpdateItem), to avoid accidents such as deleting records from your table.
Sign in to IAM > Roles and select the role used by the service. Make sure the DynamoDB resource in its policy is correct.