I'm using GitHub Actions to build some Docker images that I want to push to Azure Container Registry. I am attempting to use OIDC as the auth mechanism, based on this GH Action. I know the action supports other auth strategies, which I have ruled out for my use case.
According to the GH docs, the "subject" field needs to be populated based on the GH account, repo name, and branch name. However, I want to build Docker images for multiple branches, which seems to require one federation config per branch - not practical, IMO.
So my question is: does anyone know if it's possible (and how) to set up a single federation config with a "subject" value that would work as a wildcard of sorts, covering all branches from a given repo?
thanks!
On AWS it is possible to use wildcards, like:
"repo:MY_ORG/MY_REPO:*"
but that doesn't seem to work on Azure: you can enter a wildcard in Azure Federated Credentials, but the GitHub workflow then fails. Having to pin an exact branch is crazy, as we'd need to set up a new credential config for each new Git branch.
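For context, the subject claim GitHub issues for a branch-scoped run takes this documented form (MY_ORG/MY_REPO as above), which is why an exact-match federated credential ends up tied to a single branch:

repo:MY_ORG/MY_REPO:ref:refs/heads/main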
I worked around the issue by using GitHub environments. I set up an environment (called main, but it can be called anything) and then set my workflow like this:
jobs:
  test:
    runs-on: ubuntu-latest
    environment: main
    permissions:
      id-token: write  # needed so the job can request the OIDC token
and then in Azure set the federated credentials to use an Entity of Environment rather than an Entity of Branch.
This will then work for any branch - but clearly if you use GitHub environments for other reasons this may not be viable.
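For reference, with the Environment entity the subject claim to match takes the form below, and it stays the same no matter which branch triggers the deployment:

repo:MY_ORG/MY_REPO:environment:main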
Note that, since Oct. 2022:
GitHub Actions: OpenID Connect support enhanced to enable secure cloud deployments at scale (Oct. 2022)
OpenID Connect (OIDC) support in GitHub Actions enables secure cloud deployments using short-lived tokens that are automatically rotated for each deployment.
You can now use the enhanced OIDC support to configure the subject claim format within the OIDC tokens, by defining a customization template at either org or repo levels.
Once the configuration is completed, the new OIDC tokens generated during each deployment will follow the custom format.
This enables organization & repository admins to standardize OIDC configuration across their cloud deployment workflows that suits their compliance & security needs.
Learn more about Security hardening your GitHub Workflows using OpenID Connect.
That means, from the documentation:
Customizing the subject claims for an organization or repository
To help improve security, compliance, and standardization, you can customize the standard claims to suit your required access conditions.
If your cloud provider supports conditions on subject claims, you can create a condition that checks whether the sub value matches the path of the reusable workflow, such as "job_workflow_ref: "octo-org/octo-automation/.github/workflows/oidc.yml#refs/heads/main"".
The exact format will vary depending on your cloud provider's OIDC configuration. To configure the matching condition on GitHub, you can use the REST API to require that the sub claim must always include a specific custom claim, such as job_workflow_ref.
You can use the OIDC REST API to apply a customization template for the OIDC subject claim; for example, you can require that the sub claim within the OIDC token must always include a specific custom claim, such as job_workflow_ref.
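As a minimal sketch (OWNER/REPO and the chosen claim keys are placeholders, and the call needs a token with admin rights on the repo), applying a repository-level customization template through the REST API looks like:

PUT https://api.github.com/repos/OWNER/REPO/actions/oidc/customization/sub

{
  "use_default": false,
  "include_claim_keys": ["repo", "context"]
}

The claim keys shown are just one possibility; the documentation linked above lists the claims that can be included.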
Related
We're in the process of implementing AD B2C as our SSO and will have to go through multiple versions of user flows during testing. They will be used for different environments, and some in parallel in testing, so we won't be able to simply change existing versions. We'd like to establish a base flow for our sign-in and our multiple sign-ups as starting points that we could then clone when we create a new version. Is there any way to clone an existing user flow, either directly or by download/upload to a new flow? I know we could do something similar with custom policies, but we've made the decision to stick with user flows.
Thanks!
User flows cannot be cloned as user flows, but you can download their source code and clone them as custom policies. User flows are custom policies anyway. You can download their source code from the Azure portal.
You can opt to append the base policies' code to the user flow code using the following API call. Please keep in mind that this API is not publicly supported and is only provided AS IS:
GET https://main.b2cadmin.ext.azure.com/api/trustframework/GetAsXml?sendAsAttachment=true&tenantId=<TENANT NAME>.onmicrosoft.com&policyId=<USER FLOW ID>&getBasePolicies=true
You will need an access token for the scope https://management.core.windows.net/user_impersonation.
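For illustration (a sketch; the placeholders are the same as in the request above), the call with the token attached might look like:

curl -H "Authorization: Bearer <ACCESS TOKEN>" "https://main.b2cadmin.ext.azure.com/api/trustframework/GetAsXml?sendAsAttachment=true&tenantId=<TENANT NAME>.onmicrosoft.com&policyId=<USER FLOW ID>&getBasePolicies=true" -o user-flow-policies.xml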
Let me know if you need additional help!
I'm using GitLab Enterprise Edition 14.6.5-ee
I want to create a Git tag automatically when I merge a branch back to master. I'm fine with the actual Git commands; the problem is the authentication: the build bot doesn't know how to authenticate back to the server. There's an answer here on how to set up SSH keys, but this requires me to use my personal credentials, which is just wrong, because it's not me creating the tag; it's the build bot.
Seriously, it just doesn't make sense to say that the bot doesn't know how to authenticate. I mean, it just pulled the freakin' code from the repo! So why is it such a big leap from being able to pull code to being able to push code?
Any ideas how to automate the creation of tags without using my personal credentials?
CI jobs do have a built-in credential token for accessing the repository: the $CI_JOB_TOKEN variable. However, this token only has read permissions, so it won't be able to create tags. To write to the repository or API, you'll have to supply a token or SSH key to the job. That said, it doesn't necessarily have to be your personal token.
There are a few ways you can authenticate to write to the project without using a personal credential:
You can use project access tokens (see the sketch after this list)
You can use group access tokens -- these are only exposed in the UI in GitLab 14.7 and later
You can use deploy SSH keys (when you grant read-write access to the key)
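As a minimal sketch (assuming a project access token with write_repository scope stored in a masked CI/CD variable called PROJECT_ACCESS_TOKEN; both the variable name and the tag scheme are placeholders), a job that tags merges to the default branch could look like:

tag_release:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    # Tag the commit that was just merged to the default branch
    - git tag "v1.0.$CI_PIPELINE_IID"
    # Push the tag using the project access token instead of personal credentials
    - git push "https://oauth2:${PROJECT_ACCESS_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "v1.0.$CI_PIPELINE_IID"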
So why is it such a big leap from being able to pull code to being able to push code?
This is probably a good thing. While it may require you to do extra work in this case, the built-in job authorization tries to apply the principle of least privilege. Many customers have even argued that the existing CI_JOB_TOKEN permissions are too permissive, because they allow read access to other projects!
In any case, it is on GitLab's roadmap to make these permissions more controllable and flexible :-)
Alternatively, use releases
If you don't mind creating a release in addition to a tag, you could also use the release: keyword in the CI YAML as an easy way to create the tag.
It's somewhat ironic that the releases API allows you to use the built-in CI_JOB_TOKEN to create releases (and, with them, tags), but you cannot (as far as I know) use CI_JOB_TOKEN on the tags API to create a tag.
However, in this case, the release/tag will still appear to have been created by you.
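For example (a sketch; the tag naming scheme is arbitrary, and the release-cli image is the one GitLab's documentation uses for the release: keyword):

create_release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - echo "Creating release for $CI_COMMIT_SHORT_SHA"
  release:
    tag_name: "v1.0.$CI_PIPELINE_IID"
    description: "Automated release for $CI_COMMIT_SHORT_SHA"

The release: keyword authenticates with the built-in CI_JOB_TOKEN, so no extra token is needed for this route.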
Is there an elegant way to use a single set of ADB2C IEF custom policies across multiple environments (e.g. dev/test/prod)?
This issue has arisen because we have designed two custom IEF policies: one for sign-in and, separately, one for sign-up.
On the sign-in page, ADB2C tries to generate a URL for sign-up, but because we have a custom policy for sign-up, we need to rewrite this URL in JavaScript so that it points to a different URL
(as described in these Q&As):
B2C - How to override sign up now link (custom policy)
Msal 2.0 - how to generate Sign Up link with Azure B2C?
But now we start hitting more issues. We can't rewrite the URL to myapp.com/signup, because we need to rewrite it based on the environment. It needs to rewrite to dev-app.com/signup or test-app.com/signup, etc.
So the only way I can see to fix this is to use separate ContentDefinitions for each environment, each with customised JavaScript.
But then I also need individual policies for each environment, so that each policy can use a specific content definition file!
Ugh. Is there an easier way than trying to maintain what should really be one set of policies across three environments (which ends up becoming six sets of policies, content definition files, etc.)?!
Fantasising a bit: I think ideally we'd configure MSAL to send the environment to the policy somehow, and then at least make that variable available in the policy files so that they could perhaps fetch the content definition files with a query parameter:
<ContentDefinition Id="api.signin">
  <LoadUri>https://storage.com/adb2c/signin-{Culture:RFC5646}.html?env={environment}</LoadUri>
</ContentDefinition>
Yes, use DevOps and Azure Pipelines.
You can then search and replace the variables that you need to change across environments.
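As a minimal sketch (the EnvironmentName variable and the {environment} placeholder are assumptions, not part of the question), a pipeline step could substitute the per-environment value into the policy files before they are uploaded to B2C:

steps:
  # Replace the {environment} placeholder in every policy/content-definition XML
  # with the value configured for this pipeline's environment.
  - bash: |
      for f in policies/*.xml; do
        sed -i "s|{environment}|$(EnvironmentName)|g" "$f"
      done
    displayName: Substitute environment placeholder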
I am running into conflicts using Docusign connect with multiple environments.
My Sandbox account is being used by the Staging, Review apps, and Dev environments. DocuSign Connect is sending envelope events to environments that did not create the envelope, which is causing lots of confusion.
This must be a common issue - is there a recommended way of handling it?
The only workaround I can think of is to add a sending_environment custom field to each envelope and then filter out the envelope events when they are sent to each environment.
Thanks
Yes, using custom fields is a good approach.
I assume that when you say "multiple environments" you mean your company/app/IT environments, etc., not DocuSign's.
If it were DocuSign's (demo/sandbox vs. production), you could tell based on many things: account number, envelope ID, URL, etc.
In any event, putting a text field that you can retrieve on the other end is a good way to handle this.
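For illustration (a sketch of the relevant fragment of an envelope-create request body for the eSignature REST API; "staging" is a placeholder value):

{
  "customFields": {
    "textCustomFields": [
      {
        "name": "sending_environment",
        "value": "staging",
        "show": "false"
      }
    ]
  }
}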
What I think you can also do is have a different DocuSign Connect configuration for each of your environments, such that they use different URLs for the callback from DocuSign. That approach may be more or less work than the other approach. Your call.
The solution we have gone for:
Each dev sets up their own 'user' account (invited via the DocuSign account dashboard)
Devs use their credentials locally
Staging & Review apps use the default sandbox DocuSign user account
Production uses the default production DocuSign user account
Once this is done, we set up a permissions check on each envelope event that arrives, ensuring the user <Email> field matches that of the current environment:
<?xml version="1.0" encoding="utf-8"?>
<DocuSignEnvelopeInformation>
  <EnvelopeStatus>
    <TimeGenerated>2020-05-18T12:00:00</TimeGenerated>
    <EnvelopeID>abcdef</EnvelopeID>
    <Email>me@gmail.com</Email>
  </EnvelopeStatus>
</DocuSignEnvelopeInformation>
This ensures Dev environments don't conflict with staging & review apps.
Next we need a way of distinguishing Staging and each Review app. For this, we add a <CustomField> to each envelope that we create, and in the Staging & Review app environments we add an additional check for that custom field, filtering out any envelopes that were not created in the current environment.
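For reference (a sketch; the exact layout of the Connect XML payload can vary with your Connect settings, and "staging" is a placeholder), the custom field arrives inside the envelope event roughly like this:

<EnvelopeStatus>
  <CustomFields>
    <CustomField>
      <Name>sending_environment</Name>
      <Value>staging</Value>
    </CustomField>
  </CustomFields>
</EnvelopeStatus>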
I am connecting my GitLab account to PyCharm, and while creating the access token in GitLab I was uncertain about the practical uses.
I am very new to this so if someone can dumb this down, that would be appreciated.
The idea is to:
limit the actions you can do with one PAT
have several PATs, one per usage
easily revoke one PAT if it's compromised or no longer needed, without invalidating the others
As illustrated here, if you intend to use a PAT as your GitLab password, you would need the "api" scope.
If not, "read_repository" or (if you don't need to clone) "read_user" is enough.
"read_registry" is only needed if your GitLab host docker images as a docker registry.
What I don't understand is that it allows me to select them all at once
That is because each scope matches a distinct use case, possibly covered by another scope.
By selecting them all, you cover all the use cases:
api covers everything (which is too much, as illustrated by issue 20440; GitLab 12.10, Apr. 2020, should fix that with merge request 28944 and a read_api scope)
write_repository (since GitLab 11.11, May 2019): "Repository read-write scope for personal access tokens".
Many personal access tokens rely on api-level scoping for programmatic changes, but full API access may be too permissive for some users or organizations.
Thanks to a community contribution, personal access tokens can now be scoped to only read and write to project repositories – preventing deeper API access to sensitive areas of GitLab like settings and membership.
read_user and read_registry each address a distinct scenario
As mentioned in gitlab-ce/merge_request 5951:
I would want us to (eventually) have separate scopes for read_user and write_user, for example.
I've looked at the OpenID Connect Core spec - it defines the profile, email, address, and phone scopes, which need to be accompanied by the openid scope.
Since we can have multiple allowable scopes for a given resource, my preference would be to leave the read_user scope here, and add in the openid and profile scopes whenever we're implementing OpenID compliance.
The presence of other scopes (like read_user and write_user) shouldn't affect the OpenID flow.