I am currently using the Project-based Matrix Authorization Strategy for Jenkins global security. I am planning to change that to the Role-Based Strategy and I have the plugin installed, but I don't want to lock out all the users during this update. Is there a safe way to move from the project-based to the role-based strategy without locking everyone out? We are using an AWS EC2 instance for Jenkins, if that helps. Is there an option to clone the configuration and test the change somewhere first?
Thank you.
Is it possible to use GitLab's Dependency Proxy for a private project not in a group?
It seems wasteful to create a single group per project (as they may not be related) just to cache container images. Besides, what if the project needs to use images from multiple groups (without being associated with multiple groups)?
I am using a Docker runner, and the docs describe a DOCKER_AUTH_CONFIG variable, but:
Setting it in .gitlab-ci.yml would expose secrets in the repo
Setting it elsewhere means hard-coding a username/password, which I'd rather avoid in favour of pre-existing variables such as CI_REGISTRY_USER and CI_JOB_TOKEN
I also thought about creating a Deploy Token in the group, but the docs also say those are only for projects within the group.
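For reference, this is the group-scoped usage I am trying to avoid duplicating; a minimal sketch of a job in a project that is inside a group with the Dependency Proxy enabled (the image tag is illustrative):

```yaml
# .gitlab-ci.yml in a project that belongs to a group
build:
  # CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX is a predefined GitLab CI variable
  # pointing at the group's dependency proxy
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:3.19
  script:
    - echo "base image pulled through the group's proxy cache"
```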
We'd like to allow our developers to automatically deploy their changes to a Kubernetes cluster by merging their code and k8s resources into a Git repo which is watched by ArgoCD. The release management team would be responsible for managing the ArgoCD config and setting up new apps, as well as for creating namespaces, roles, and role bindings on the cluster, while the devs should be able to deploy their applications through GitOps without needing to interact with the cluster directly. Devs might have read access on the cluster for debugging purposes.
Now the question: in theory it would be possible for a dev to create a new YAML file specifying a RoleBinding resource which binds his/her account to a cluster admin role. As ArgoCD has cluster admin rights, this would be a way for the dev (or an attacker impersonating a developer) to escalate privileges.
Is there a way to restrict which k8s resources are allowed to be created through ArgoCD?
EDIT:
According to the docs, this is possible per project using clusterResourceWhitelist.
Is it possible to do that globally?
You are right about the Argo CD project. The AppProject CRD supports allowing/denying K8s resources using the clusterResourceWhitelist, clusterResourceBlacklist, etc. fields. A sample project definition is also available in the Argo CD documentation.
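For example, a minimal sketch of a project (names are illustrative) that denies all cluster-scoped resources and additionally blocks namespaced RoleBinding objects:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: dev-apps                 # illustrative project name
  namespace: argocd
spec:
  sourceRepos:
    - "*"
  destinations:
    - namespace: "dev-*"
      server: https://kubernetes.default.svc
  # Empty whitelist: no cluster-scoped resources may be created at all
  clusterResourceWhitelist: []
  # Also block namespaced RBAC objects so devs cannot grant themselves roles
  namespaceResourceBlacklist:
    - group: rbac.authorization.k8s.io
      kind: RoleBinding
```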
To restrict the list of managed resources globally, you can specify the resource.exclusions / resource.inclusions fields in the argocd-cm ConfigMap. An example is available in the Argo CD documentation.
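A sketch of what that could look like (the exact set of excluded kinds is up to you):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Globally hide RBAC resources from Argo CD so no Application can manage them
  resource.exclusions: |
    - apiGroups:
        - rbac.authorization.k8s.io
      kinds:
        - ClusterRole
        - ClusterRoleBinding
        - Role
        - RoleBinding
      clusters:
        - "*"
```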
I'm working on an ASP.NET Core project and I'm trying to figure out how to keep my source and my pipelines 100% secret-free.
I've got a VM running the Azure agent and Azure DevOps pipelines for build and release.
If I delete the site on the VM, the release pipeline will auto-magically recreate it for me and deploy the latest build.
Super cool.
Now I read up on best practices for configuring a .NET Core app and I found this article: https://www.humankode.com/asp-net-core/asp-net-core-configuration-best-practices-for-keeping-secrets-out-of-source-control
So it's a bad idea to keep secrets in code; that makes perfect sense.
But if I apply the same security principles to YAML, then surely I shouldn't place secrets in my pipelines either.
But I need the pipelines to be able to recreate the site from scratch and have it just work. Somehow the site needs to know where its default SQL connection is, or it needs to have a key to the Azure App Configuration service. I shouldn't have to log onto the VM and create an appsettings.json manually after every release!
So whatever the site needs to operate has to be included in the pipeline, and therefore in some artifact or in the code.
I've googled for days, but I can't seem to find any info on how to fully automate this.
I've considered creating a custom configuration provider that reads from the VM's registry, but that feels wrong too.
I basically need a config option that is NOT hosted in the site itself, so I set it up once on the VM and never again.
The approach that Lex Li lists in the comments is the Microsoft-recommended way of securing secrets in pipelines.
Ben Smith's answer is, in my opinion, just as good, though maybe slightly less secure.
I use this approach in our organization. All of our release pipelines do the final configuration transformation with the appropriate settings based on the environment they are being deployed to, i.e. DB connections are transformed at the dev, test, UAT, and production deployment stages.
I keep the relevant secrets in the pipeline variables as protected secrets. I do this for 2 reasons:
Only a select number of trusted personnel have access to the release pipeline definitions.
Even if someone does have access to those definitions, they cannot see a secured variable. Even if you "undo the padlock" on the Variables tab, you cannot see what the setting is.
Our actual secrets are then stored in our enterprise secret vault.
Using Azure Key Vault is definitely a good approach. However, we already have a centralized place to keep our stuff; I don't want it in three spots.
I would be remiss not to include Variable Groups as part of the pipeline process. Same concept as the build/release variables; the difference is that you can now share them in one spot.
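For instance, a YAML pipeline can pull in a variable group alongside ordinary variables (the group name here is hypothetical):

```yaml
variables:
  - group: shared-release-secrets   # hypothetical variable group shared across pipelines
  - name: environmentName
    value: uat
```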
These are all opinions, of course. This is just one way of doing it, which I feel strikes a pretty good balance between security and flexibility.
In addition to the suggestions in your question's comments, you can also store secrets in the pipeline "Variables" section.
In there you can add variables and then mark them as secret by selecting "Keep this value secret". Once you've saved a secret, its value is obfuscated, i.e. you can make use of it but you can no longer see its original value within Azure DevOps (which admittedly can be rather frustrating if you want to revisit the variable to check it!).
You can then reference the secret variable in your pipeline YAML using the syntax:
$(variable-name)
So this approach keeps secrets safe within Azure DevOps until they need to be resolved by the pipeline YAML script.
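One caveat: secret variables are not exposed to scripts as environment variables automatically, so you typically map them in explicitly. A minimal sketch (the variable and script names are hypothetical):

```yaml
steps:
  - script: ./apply-config.sh
    displayName: Apply configuration transform
    env:
      # Secret pipeline variables must be mapped into the environment explicitly
      DB_CONNECTION_STRING: $(DbConnectionString)
```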
I want to be able to let users push a Dockerfile along with code to GitLab and have GitLab build the image, which can then be pulled by authenticated users of the project.
The problem is, I want to make sure users don't push Docker images directly to the GitLab Container Registry, so that we can review the Dockerfiles, keep control, and make sure the Dockerfiles pull only from the Red Hat registry.
How can we prevent users from pushing their own locally built images to GitLab?
In other words, how can we make sure that the Docker image in the container registry of a GitLab project is the one built by GitLab from the Dockerfile, and not one pushed by the project users directly from somewhere else?
Deploy tokens are probably the best way forward. You can grant these on a per-repository or per-group basis and specify granular access such as, for your use case, read_registry, as well as an optional expiry date.
Another option is to use personal access tokens. These are set globally for a user, and you can create as many as you like (e.g. one for each client), set an expiry date, and restrict access to read_registry.
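As a sketch, an external client (or a job in another project) could then pull using deploy token credentials stored as CI/CD variables (all names here are hypothetical):

```yaml
pull-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    # DEPLOY_TOKEN_USER / DEPLOY_TOKEN are hypothetical CI/CD variables holding
    # deploy token credentials granted the read_registry scope
    - docker login -u "$DEPLOY_TOKEN_USER" -p "$DEPLOY_TOKEN" registry.example.com
    - docker pull registry.example.com/mygroup/myproject:latest   # illustrative path
```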
I don't think it's currently possible. If you check GitLab's permissions model, you'll see that user access levels determine what you can do in the Container Registry:
read rights are available as Reporter+
update rights are available as Developer+
If your users are Developers, then they will be able to push images to the registry. If you want to limit that to GitLab CI builds, you'd need to use protected branches and limit your users to the Reporter access level (probably not what you want).
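For completeness, the CI-built path itself is standard; a minimal sketch of a job that builds from the reviewed Dockerfile and pushes using the job's own credentials:

```yaml
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # CI_REGISTRY_USER and CI_JOB_TOKEN are predefined by GitLab for each job
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```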
A somewhat convoluted alternative would be to set up a second project that is used as the source for images, and configure its build to pull from the first project's protected branch. Commits to the protected branch in the first project would always have to be reviewed, and Docker images would be pulled from the second project.
I am investigating ways to automate deployment of a specific build of a product to a specific Azure Cloud Service or VM.
The following steps would be automated, with as little manual intervention as possible:
Create a Cloud Service or VM
Install a specific build of the product (as a standalone exe or Windows service, not IIS)
Tweak the configuration files(s)
Set up user account(s)
Run the exe/service
The code is currently in Visual Studio Online / TFS. We have CruiseControl.NET CI set up and we are looking at moving to TeamCity.
This will be used for the usual QA & Production type environments, but also for ad-hoc deployment e.g. if a trial feature has been added to the product and we want to deploy that to a new VM for a specific customer to play around with. Ideally we would be able to use the command line or a UI to pick the build, create the VM and specify any configuration changes.
One possible solution might be Octopus Deploy, although I don't think it would be able to actually create an Azure VM. I will probably also look at the Azure API, as well as TFS deploy.
Basically, is this feasible, and are there any proven alternatives I'm missing, to help narrow down my research?
Thanks in advance!
While Octopus Deploy can do many things, in this particular scenario you're asking it to do three types of work: release management, automated provisioning, and configuration management. It's a fine line between automation awesomeness and a really sticky situation.
Of the tasks you're asking about, almost all can be done within Octopus today. I'd argue that it may even be possible to create a cloud service or VM: if there's some PowerShell cmdlet/library that allows you to spin up VMs with authentication, odds are you can do it in Octopus. But it may not be the right tool for that job today. Why?
In my opinion, it blurs the boundary between developers, DevOps, and sysadmins. Whether you use Chef, Puppet, Salt, or any other configuration management, it needs a whole layer of users with the expertise to back it up; expertise that the very developers who want such flexibility may not have. Secondly, right now this isn't a focus within Octopus (yet). I'd be hard pressed to judge a tool such as Octopus on what it can do versus what it should do.
It's really nice that Azure now has support for preinstalling the Octopus Tentacle on VMs. But that requires additional info, such as the server thumbprint, the port, and other supplementary configuration, in order to automate VM provisioning. Should that configuration management be under Octopus's control, or something like Chef or Puppet? I honestly don't have an answer to this, but my feeling as of now is: not Octopus. Someday, perhaps, but until this is really ready and fully tested and vetted, I'd wait it out (a little), at least with Octopus.
If you're the adventurous type, then by all means try out Octopus. I may do a PoC (proof of concept) of this infrastructure automation later this year, but relying on it today for business/production usage as the primary means of infrastructure automation would be risky and require a lot of work and experimentation. Again, I'm not saying it cannot be done; I'm questioning whether it should be done within Octopus as of this response today.
If anything, from the Octopus Deploy side of things, is this feasible? Yes; it just hasn't quite been worked out yet. Looking at what you want to do, I'd say it's a two-phase process: 1. spinning up the new VM and attaching the Tentacle to the environment, and 2. running the deployment process on that new VM.
I'd also recommend checking out the Octopus blog. They're publicly talking about infrastructure automation. You can read about it here: http://octopusdeploy.com/blog/rfc-cloud-and-infrastructure-automation-support
I hope this response helps in some way.
One solution for automated deployment in Azure is to use ElasticBox.
I will skip the details of all the configuration options for Azure supported by ElasticBox, as they are detailed in the documentation section: http://elasticbox.com/documentation/deploying-and-managing-instances/using-azure/.
You only need to create a box (the abstraction unit ElasticBox uses to define the installation and configuration for deploying a service or application in any cloud) that takes care of the steps you need automated. You can then deploy the VM with almost no manual intervention: just one click, or a command with some parameters.
A box includes the variables necessary for your deployment and your scripts (in this case probably PowerShell, but they could be Bash, Python, Perl, Java, etc.).
When you deploy the box you created for your application, ElasticBox will:
Create a Cloud Service or VM (ElasticBox takes care of provisioning the VM in your Azure account, or in any of your preferred cloud providers).
Install a specific build of the product (as a standalone exe or Windows service, not IIS) -> This should be your install event script.
Tweak the configuration files(s) -> This should be part of your configure event script.
Set up user account(s) -> This should be part of your configure event script.
Run the exe/service -> This should be part of your start event script.
ElasticBox has a command-line tool that lets you deploy your boxes to VMs and also manage the deployed VMs: https://pypi.python.org/pypi/ebcli
It also supports automatic termination of the VM after a custom time value.
This is quite a broad question, but the goal is certainly achievable via a number of methods. While a bit old, Tom Hollander's blog on automated deployments is a good starting place. I've seen Octopus Deploy used a lot, as well as TeamCity, but they all ultimately rely on Azure's PowerShell cmdlets, the management libraries in custom code, or pure REST API calls.
Just an FYI: one option is to do everything using the Azure Management API. I also like to reference the Azure client libraries in a VS project and do everything in C# code.