Handling multiple environments, spinning up an environment for testing in Azure DevOps - azure

We have a large-scale service architecture with multiple features. There are currently three environments besides production, which causes contention when multiple teams share environments while working on the same application but on different features.
What would be nice is if every programmer were able to test his or her code in an integrated environment before merging to master.
Is there a way in Azure DevOps to spin up a new temporary environment for testing that can be torn down after the code is merged to master?
I am not sure how to get started on this. It would be great if someone could help me.

Related

How would one create an isolated Jenkins build node (without access to secrets)?

As the Top 10 CI/CD Security Risks SEC-04 states:
Ensure that pipelines running unreviewed code are executed on isolated nodes, not exposed to secrets and sensitive environments.
The above seems especially true when the code (or the pipeline code itself) is in a pull request that has not yet been seen, approved, or merged, but from a developer's perspective you still want to know whether it builds successfully in the first place. Running code that nobody has laid eyes on while it has access to build secrets is definitely a security risk.
I'm wondering whether isolation is achievable with Jenkins build nodes, as I cannot find any specific options for this.
My assumption is that dynamically provisioned containerized agents are best suited for isolated environments; I'm just not sure how to prevent their access to secrets from the Jenkins controller.

Simplest setup for a staging server and a production server

What's the simplest way to manage a staging server vs production?
What's the point of having a staging server if you could just push changes to a different branch in production?
What's the best way to merge the staging server with production? Cron job?
Our current setup is a staging server which we don't use; we are just pushing straight to production, but we're trying to improve the process.
What's the simplest way to manage a staging server vs production?
The simplest and cheapest way is to get rid of your staging server. Staging servers don't inherently make deploys safer, but generally developers want at least a dev environment (functionally not necessarily distinct from the idea of a staging server) to host their code in a prod-like setting before they push it to prod.
What's the point of having a staging server if you could just push changes to a different branch in production?
If you have two branches running in production simultaneously, that's functionally equivalent to a staging server. Most shops prefer a staging environment, not just a staging server, so that their data tier, third-party integrations, etc. are completely separate between staging and prod.
Simply deploying another copy of your application in prod is deceptively dangerous, because if you mess up the data tier or third-party integrations you can easily affect prod.
trying to improve the process
Feature flags. If you can enable new features or even fixes for specific users, you can roll them out to your QA team (or the devs, or whoever is going to test) and then, when you're happy with them, roll them out to the general user base. This isn't inherently safer than anything else, but it has the advantage that it front-loads the work of planning for multiple concurrent code paths and makes that planning more explicit.
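As a minimal sketch of the idea (the flag names, user groups, and checkout functions below are hypothetical, and the in-memory dict stands in for whatever flag store or service you actually use):

```python
# Hypothetical flag store: a dict here, but the same check works against a
# config file or a dedicated flag service.
FEATURE_FLAGS = {
    "new_checkout_flow": {"enabled_for": {"qa-team", "dev-team"}},  # limited rollout
    "fast_search": {"enabled_for": "everyone"},                     # fully rolled out
}

def is_enabled(flag_name: str, user_group: str) -> bool:
    """Return True if the flag is on for this user's group."""
    flag = FEATURE_FLAGS.get(flag_name)
    if flag is None:
        return False  # unknown flags default to off
    audience = flag["enabled_for"]
    return audience == "everyone" or user_group in audience

def new_checkout() -> str:
    return "new checkout flow"      # code under test, visible to QA/devs only

def legacy_checkout() -> str:
    return "legacy checkout flow"   # existing behaviour for everyone else

def checkout(user_group: str) -> str:
    # Both code paths ship to production; the flag decides who sees which.
    if is_enabled("new_checkout_flow", user_group):
        return new_checkout()
    return legacy_checkout()

print(checkout("qa-team"))    # -> new checkout flow
print(checkout("customers"))  # -> legacy checkout flow
```

Widening the rollout is then just a change to the flag's audience, not a redeploy of different code.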
Unfortunately, there's no magic bullet by which testing environments (dev, staging, whatever you want to call them) increase reliability.
What's the best way to merge the staging server with production? Cron job?
For code, usually the preferred method is to "promote" the artifact you deployed to staging over to prod without rebuilding, guaranteeing the same thing is shipped.
For the runtime environment, containerization makes most of that part of the code artifact, and that's the simplest way. If you're running on container-centric hosting like ECS Fargate or Google's Docker-oriented services, there's nothing else on the app side to ship. This is what I recommend; it's straightforward and easy to reason about. Adding virtual servers into the mix just adds an OS layer to manage, with little benefit. If you can make your app serverless, so it's not sitting waiting for connections but instead is invoked when connections come in, the same thing applies: no OS to manage (AWS Lambda, for example, supports container images).
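As a sketch of what "promote, don't rebuild" can look like with container images (this assumes the `docker` Python SDK and a hypothetical registry path; in practice you would usually promote an immutable digest or version tag rather than a mutable `staging` tag, but the principle is the same):

```python
# Promote the exact image that ran in staging to prod by re-tagging it;
# nothing is rebuilt, so the bytes that were tested are the bytes shipped.
import docker

REPO = "registry.example.com/myapp"   # hypothetical registry path

client = docker.from_env()

# Pull the image that is currently deployed to staging.
image = client.images.pull(REPO, tag="staging")

# Re-tag the same image ID for prod and push it back to the registry.
image.tag(REPO, tag="prod")
client.images.push(REPO, tag="prod")

print(f"Promoted {image.id} from staging to prod")
```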
Data is generally considered the tricky part of having test environments by those who have experience with them. If your production data is not at all sensitive you can copy it over, but that may or may not actually work depending on what's in the data and how distributed your data ends up being. Generally, production data is sensitive enough that you don't want to expose it to dev environments, which makes it tricky to ensure the dev data is appropriate for testing features. One common way to overcome that obstacle is automating end-to-end tests, via something like Selenium for web browsers and automated API tests for non-browser-centric endpoints. This lets you write the test along with the app to prove it's working.
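A minimal sketch of such an end-to-end test with Selenium's Python bindings (the URL, element IDs, and credentials are hypothetical):

```python
# Drive the dev environment through a real user flow, so the test proves the
# feature works with synthetic test data instead of a copy of production data.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://dev.example.com"  # hypothetical dev environment

driver = webdriver.Chrome()
try:
    driver.get(f"{BASE_URL}/login")
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("test-password")
    driver.find_element(By.ID, "submit").click()

    # Assert the app landed on the dashboard after login.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```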

How many agents should I have?

I'm trying to build a branch-based GitOps declarative infrastructure for Kubernetes. I plan to create clusters on a cloud provider with Crossplane, and those clusters will be stored in GitLab. However, as I start building, I seem to be running into gitlab-agent sprawl.
Every application I will be deploying to each of my environments is stored in a separate Git repo, and I'm wondering if I need a separate agent for each repo and environment. For example, I have my three clusters (prod, stage, and dev) and my three apps (API, kafka, and DB). I've started with three agents per repo (gitlab-agent-api-prod, gitlab-agent-kafka-stage, ...), which seems a bit excessive. Do I really need nine agents?
Additionally, I now have to install as many agents as I have apps onto each of my clusters, which already eats up significant resources. I'd imagine I can get away with one GitLab agent per cluster; I'm just not seeing how that is done. Any help would be appreciated!
PS: if anyone has a guide on how to automatically add GitLab agents to new clusters created with Crossplane, I'm all ears. Thanks!

How to manage patching on multiple AWS accounts with different schedules

I'm looking for the best way to manage patching Linux systems across AWS accounts with the following things to consider:
Separate schedules to roll patches through Dev, QA, Staging and Prod sequentially
Production patches to be released on approval, not automatic
No newer patches can be deployed to Production than what was already deployed to lower environments (as new patches come out periodically throughout the month)
We have started by caching all patches in all environments on the first Sunday of every month, with the goal of then installing patches from the cache. This helps prevent unvetted patches from being installed in prod.
Most, but not all, instances are managed by OpsWorks, though there are numerous OpsWorks stacks. Some other instances are managed by Chef Server, and still others are unmanaged, just simple EC2 instances created from the EC2 console. This means that using recipes forces us to kick off approved patches on a stack-by-stack or instance-by-instance basis, which is not optimal.
More recently, we have looked at the new features of SSM, using a central AWS account to manage instances. However, this causes problems with some applications because the AssumeRole for SSM adds credentials to the .aws/config file, which interferes with other tasks we need to run.
We have considered other tools, such as Ansible, but we would like to explore staying within the toolset we currently have, which is largely OpsWorks and Chef Server. I'm looking for ideas at a higher level: an architecture for how one would approach this scenario.
Thanks for any thoughts or ideas.
This sounds like one of the exact scenarios RunCommand was designed for.
You can create multiple groups of servers with different schedules based on tags. More importantly, you don't need to rely on secrets or keys being deployed anywhere.
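As a rough sketch of what tag-based targeting can look like with boto3 (the PatchGroup tag and region are assumptions; instances need the SSM agent and an instance profile that allows Systems Manager):

```python
# Kick off patching for one environment at a time with Run Command, targeting
# instances by tag rather than by instance ID, SSH key, or inventory file.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

def patch_environment(environment: str, operation: str = "Install"):
    """Run the patch baseline against every instance tagged for this environment."""
    return ssm.send_command(
        Targets=[{"Key": "tag:PatchGroup", "Values": [environment]}],
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": [operation]},
        MaxConcurrency="25%",   # roll through the fleet gradually
        MaxErrors="1",          # stop early if something goes wrong
    )

# Dev/QA/Staging can run on a schedule; Prod is only invoked after approval.
response = patch_environment("Dev")
print(response["Command"]["CommandId"])
```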

How do developers add new functionality to a massive online SaaS application while it is running?

I want to know how developers continuously develop a web application in the cloud.
Which software or which development environment are they using?
Is Docker the correct answer?
Thank you
This is an extremely open-ended question, so I will give you a relatively open-ended answer. CI/CD isn't really a single defined process, but typically people follow the same strategy.
CI:
Develop and store code in Git or some repository
Execute unit test cases
Build Source Code
At this point, you have code that is being continuously tested and built. Now continuous delivery (CD) kicks in. This differs from company to company, but it may look like the following:
CD:
Deploy Source Code to development integration testing server (DIT)
Execute Automated test portfolio
Deploy Source Code to Stage or Pre Prod environment
Execute Automated test portfolio
Now, at this point, your code is fully tested and deployed to internal testing/stage servers. As a company, you can decide whether your confidence level is high enough to implement continuous deployment or whether you keep a change management process. Continuous deployment is similar to continuous delivery EXCEPT that you deploy the built application/service to production automatically, with no gates in place. Then you run your test portfolio again against prod. Do not do performance testing in prod (do that testing in stage, typically).
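Purely as a conceptual sketch of that flow, not any particular product (the shell commands, script names, and environments below are placeholders; real pipelines live in the CI tool's own configuration):

```python
# Conceptual CI/CD driver: test and build on every commit, promote the same
# build through environments, and keep a manual gate in front of prod.
import subprocess

def run(step: str, command: list[str]) -> None:
    print(f"--- {step} ---")
    subprocess.run(command, check=True)  # fail the pipeline on a non-zero exit

# CI: test and build.
run("unit tests", ["pytest", "tests/"])
run("build", ["make", "package"])

# CD: deploy and test in each lower environment.
for env in ("dit", "stage"):
    run(f"deploy to {env}", ["./deploy.sh", env])
    run(f"automated tests against {env}", ["pytest", "e2e/", "--base-url", env])

# Continuous delivery keeps a gate here; continuous deployment removes it.
if input("Deploy to prod? [y/N] ").lower() == "y":
    run("deploy to prod", ["./deploy.sh", "prod"])
    run("smoke tests against prod", ["pytest", "e2e/", "--base-url", "prod"])
```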
Product typically used for CI = Jenkins (open source, great community support)
Product(s) typically used for CD = Puppet, Chef, Ansible, uDeploy
Disclaimer: please do not get into a conversation about which products are best used for which stage... I only know what I know, and I know there are other tools to do CI/CD that I haven't mentioned.
