Unable to access the self-created storage account - Azure

I have created a storage account; however, I am unable to access it. The error says access is denied.
The steps I have followed to create the storage account:
The error I'm getting is:
This is how the firewall and virtual network settings look in the portal. Is there anything specific I need to select here? If some extra selection is needed, how can I understand what it means?
Also, Blob storage is not appearing. Do I need to select "Premium" to get Blob storage?
I don't see any network settings now in the Azure portal.
What am I doing wrong? Is there a step-by-step method, with a good explanation, for creating a storage account?
The connectivity check is showing me an error:
Failed to list containers: authMode: 4
code: AuthorizationFailure
content: _CYCLIC_OBJECT_
message: This request is not authorized to perform this operation.
RequestId:cce36eae-901e-001e-0472-415a25000000
Time:2020-06-13T11:02:14.4850017Z
name: StorageError
requestId: cce36eae-901e-001e-0472-415a25000000
url: https://example.blob.core.windows.net/?comp=list&_=1592046134245
xhr: {}
Failed to list queues: authMode: 4
code: AuthorizationFailure
content: _CYCLIC_OBJECT_
message: This request is not authorized to perform this operation.
RequestId:9c699d17-4003-0050-3672-4174ad000000
Time:2020-06-13T11:02:13.5044983Z
name: StorageError
requestId: 9c699d17-4003-0050-3672-4174ad000000
url: https://exmaple.queue.core.windows.net/?comp=list&_=1592046133224
xhr: {}
Failed to list containers: authMode: 1
code: AuthorizationFailure
content: _CYCLIC_OBJECT_
message: This request is not authorized to perform this operation.
RequestId:a5b1514f-e01e-0066-2e72-41f9dd000000
Time:2020-06-13T11:02:14.7298198Z
name: StorageError
requestId: a5b1514f-e01e-0066-2e72-41f9dd000000
url: https://example.blob.core.windows.net/?comp=list&_=1592046134491&sv=2019-10-10&ss=bqtf&srt=sco&sp=rwdlacuptfx&se=2020-06-13T19:02:13Z&sig=E4jZb9I6BjWBTrIzMnD9keq1BU8UfI%2F%2BZA1820lt3qk%3D
xhr: {}
Thanks.

To fix the access issue, allow your client IP address to access the storage account: in the portal, open the storage account's "Firewalls and virtual networks" settings and add your client IP to the allowed list.
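If you prefer the CLI, a rule like that can be added with the Azure CLI (a minimal sketch; the resource group, account name, and IP values are placeholders):

az storage account network-rule add \
  --resource-group <resource-group> \
  --account-name <storage-account> \
  --ip-address <your-public-ip>

Network rules can take a short while to propagate, so retry the connectivity check after a minute or so.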

Related

Connect to Cloud Storage through a Kubernetes pod with Node.js

What I want to achieve is for my pods that live inside GKE to share files, so I'm thinking of using Google Cloud Storage to write and read the files.
I have created a service account with kubectl:
kubectl create serviceaccount myxxx-svc-account --namespace myxxx
Then I also created the service account in my GCP console
Then, I added the roles/iam.workloadIdentityUser role in my GCP account.
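(For reference, that binding is usually created with a command along these lines; the project ID myxxx-xxxxx and the names are placeholders taken from the steps above:)

gcloud iam service-accounts add-iam-policy-binding \
  myxxx-svc-account@myxxx-xxxxx.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:myxxx-xxxxx.svc.id.goog[myxxx/myxxx-svc-account]"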
Next, I annotated my Kubernetes service account with my GCP service account:
kubectl annotate serviceaccount --namespace myxxx myxxx-svc-account \
  iam.gke.io/gcp-service-account=myxxx-svc-account@myxxx-xxxxx.iam.gserviceaccount.com
I also added the Storage Admin and Storage Object Admin roles on the GCP IAM page.
Then, in my deployment.yaml, I included my service account
spec:
serviceAccountName: myxxx-account
Below is how I try to upload a file to storage:
const {Storage} = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('bucket-name');
const options = {
  destination: '/folder1/folder2/123456789'
};

// Upload a local file; log the details only if the upload fails.
bucket.upload('./index.js', options, function (uploadError, file, apiResponse) {
  if (uploadError) {
    console.log(uploadError.message);
    console.log(uploadError.stack);
  }
});
I deploy my Node application to the GKE pods through Docker. In the Dockerfile, I'm using:
FROM node
...
...
...
CMD ["node", "index.js"]
But I always get a 403 Unauthorized error:
Could not refresh access token: A Forbidden error was returned while
attempting to retrieve an access token for the Compute Engine built-in
service account. This may be because the Compute Engine instance does
not have the correct permission scopes specified: Could not refresh
access token: Unsuccessful response status code. Request failed with
status code 403
Error: Could not refresh access token: A Forbidden
error was returned while attempting to retrieve an access token for
the Compute Engine built-in service account. This may be because the
Compute Engine instance does not have the correct permission scopes
specified: Could not refresh access token: Unsuccessful response
status code. Request failed with status code 403
at Gaxios._request (/opt/app/node_modules/gaxios/build/src/gaxios.js:130:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async metadataAccessor (/opt/app/node_modules/gcp-metadata/build/src/index.js:68:21)
at async Compute.refreshTokenNoCache (/opt/app/node_modules/google-auth-library/build/src/auth/computeclient.js:54:20)
at async Compute.getRequestMetadataAsync (/opt/app/node_modules/google-auth-library/build/src/auth/oauth2client.js:298:17)
at async Compute.requestAsync (/opt/app/node_modules/google-auth-library/build/src/auth/oauth2client.js:371:23)
at async Upload.makeRequest (/opt/app/node_modules/@google-cloud/storage/build/src/resumable-upload.js:574:21)
at async retry.retries (/opt/app/node_modules/@google-cloud/storage/build/src/resumable-upload.js:306:29)
at async Upload.createURIAsync (/opt/app/node_modules/@google-cloud/storage/build/src/resumable-upload.js:303:21)
What am I doing wrong? It seems like I have already granted the permissions. How can I troubleshoot this? Is it related to the Docker image?
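One way to troubleshoot (a sketch, assuming curl is available in the container image and <pod-name> is a placeholder for one of the deployment's pods) is to ask the metadata server which identity the pod actually receives; if it reports the Compute Engine default service account rather than myxxx-svc-account, the Workload Identity binding or the pod's serviceAccountName is not taking effect:

kubectl exec -it <pod-name> --namespace myxxx -- \
  curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email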

You are not authorized to perform this operation. (Service: AmazonEC2; Status Code: 403

I am a free-tier AWS user. I created a group in IAM and created some users; as the root user, I created a policy that does not allow deletion and attached it to the group I created.
But when I access RDS as the user I added to the group, I am not able to create any database. I am getting the error: "You are not authorized to perform this operation. (Service: AmazonEC2; Status Code: 403; Error Code: UnauthorizedOperation; Request ID: 80839f5f-d08c-435d-850f-7ab185421d35; Proxy: null)"
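Since creating an RDS database also calls EC2 APIs behind the scenes (which is why the error names AmazonEC2), one way to check what the user is actually allowed to do is IAM's policy simulator. A sketch, with a placeholder account ID and user name:

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/my-rds-user \
  --action-names rds:CreateDBInstance ec2:DescribeVpcs ec2:DescribeSecurityGroups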

Receiving error while running GitHub workflow

I am trying to run a simple Terraform workflow within GitHub Actions, using the article below, but I am receiving an error.
Error: Failed to get existing workspaces: Error retrieving keys for Storage Account "xxxxx": azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/***/resourceGroups/XXXXXX/providers/Microsoft.Storage/storageAccounts/xxxx/listKeys?api-version=2016-01-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"AADSTS90002: Tenant '***' not found. This may happen if there are no active subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription administrator.\r\nTrace
Can someone guide me on what I am missing here? I am very new to this, and this is my first project.
How are you authenticating against Azure with the azurerm Terraform provider?
We normally use these ENV variables for GitHub Actions or Azure DevOps Pipelines:
export ARM_SUBSCRIPTION_ID=VALUE
export ARM_TENANT_ID=VALUE
export ARM_CLIENT_ID=VALUE
export ARM_CLIENT_SECRET=VALUE
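If you don't have a service principal yet, one way to create one and obtain those values is shown below (a sketch; the Contributor role and subscription scope are assumptions, so adjust them to your setup):

az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<subscription-id>

The appId, password, and tenant fields in the output map to ARM_CLIENT_ID, ARM_CLIENT_SECRET, and ARM_TENANT_ID. An AADSTS90002 "Tenant not found" error usually means the tenant ID the provider received is missing or wrong.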

Failed to get access token by using service principal while connecting to an ADLS location from ADF pipeline

I am trying to deploy an ARM template for ADF using Azure DevOps CI/CD.
The deployment was successful, but when testing the linked services, I am not able to connect successfully.
The linked service connects to an ADLS location under the same subscription; the authentication method is a service principal, using a Key Vault secret name to get the connection.
The Key Vault is also under the same subscription and resource group.
While trying to connect the linked service to the ADLS location, I am getting the error below.
Failed to get access token by using service principal. Error: invalid_client, Error Message: AADSTS7000215: Invalid client secret is provided.
Trace ID: 67d0e882-****-****-****-***6a0001
Correlation ID: 39051de7-****-****-****-****6402db04
Timestamp: 2020-11-** **:**:**Z Response status code does not indicate success: 401 (Unauthorized). {"error":"invalid_client","error_description":"AADSTS7000215: Invalid client secret is provided.\r\nTrace ID: 67d0e882-****-****-****-***6a0001\r\nCorrelation ID: 39051de7-****-****-****-****6402db04\r\nTimestamp: 2020-11-** **:**:**Z","error_codes":[7000215],"timestamp":"2020-11-** **:**:**Z","trace_id":"67d0e882-****-****-****-***6a0001","correlation_id":"39051de7-****-****-****-****6402db04","error_uri":"https://login.microsoftonline.com/error?code=7000215"}: Unknown error .
AADSTS7000215: Invalid client secret is provided.
The linked services that connect to clusters, whose connection secrets are stored in the same Key Vault, are working fine.
I am confused why some secrets (for cluster connections) in the same Key Vault work while a few (for the ADLS connection) do not.
I checked the application under the same principal ID in Azure Active Directory, and the secret is valid until 2022.
Any idea about the root cause of the error and how to resolve the issue?
I have encountered a similar problem before. You need to make sure that the client secret belongs to the application you are using. You can also try creating a new client secret; that should work for you.
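If you go the new-secret route, one way to rotate it is with the Azure CLI (a sketch; <application-id> is a placeholder for your app registration's ID):

az ad app credential reset --id <application-id>

Then update the Key Vault secret that the ADLS linked service references with the newly returned password.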

Why is Azure treating a 400 (Bad Request) response as a SCIM implementation error in the provider?

Audit Log:
I have chosen not to DELETE the group, per the SCIM specification: https://www.rfc-editor.org/rfc/rfc7644#section-3.6
Clients request resource removal via DELETE. Service providers MAY
choose not to permanently delete the resource
But then Azure treats it as an error. Below is what I see in the audit log. Did I understand the specification correctly, or am I missing something?
Failed to delete Group '' in customappsso; Error: The SCIM
endpoint is not fully compatible with the Azure Active Directory SCIM
client. Please refer to the Azure Active Directory SCIM provisioning
documentation and adapt the SCIM endpoint to be able to process
provisioning requests from Azure Active Directory. StatusCode:
BadRequest Message: Processing of the HTTP request resulted in an
exception. Please see the HTTP response returned by the 'Response'
property of this exception for details. Web Response:
{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"DELETE
group not supported","status":null,"scimType":"mutability"}. This
operation was retried 0 times. It will be retried again after this
date: 2020-03-16T17:42:08.0940986Z UTC
The error shouldn't come up if you uncheck delete in the attribute mappings. You're right that the delete endpoint does not need to be implemented.
https://learn.microsoft.com/en-us/azure/active-directory/app-provisioning/customize-application-attributes#editing-user-attribute-mappings
