Azure IoT Edge layered deployment not reapplied when base deployment is modified

We are using deployments for our IoT devices and managing these using deployment templates. I am in the process of migrating our deployments to a layered approach, where we use a base deployment with all required containers and then apply a layer that is dependent on the type of product.
I have noticed that a layer does not get re-applied when the base deployment changes. The deployment status shows that 3 devices are targeted by the layer, but it is not applied to them after the base deployment has been updated.
When I re-apply the layer after changing the base deployment, everything works as it should.
I don't want the containers defined in the layer to be dropped just because I change the base deployment.
The documentation on layered deployments says nothing about this, and I can reproduce this consistently.
What is the intended behavior? Doesn't this break the purpose of layered deployments?
I have also noticed that our stack becomes extremely slow when using layered deployments; rolling back to a "monolithic" deployment template per product makes everything snappy again. We are using routes in edgeHub, and some of these routes point to a container that is deployed as a layer. I don't know whether that is the cause, but the stack stays very slow even after this container has been deployed. The system works, but with extreme delays.

The documentation I linked clearly states:
Any layered deployments targeting a device must have a higher priority than the automatic deployment for that device.
So now the automatic (base) deployment has priority 0 and the layers have priority 1, and everything works.
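For reference, here is a minimal sketch of creating the base and layered deployments with those priorities, using the Azure CLI's azure-iot extension from Python; the hub name, deployment IDs, manifest files and target condition are placeholders, not values from the question:

```python
# Sketch: create a base deployment (priority 0) and a layered deployment
# (priority 1) via the Azure CLI azure-iot extension. All names, files and
# the target condition below are placeholders.
import subprocess

HUB = "my-iot-hub"                      # hypothetical hub name
TARGET = "tags.product='typeA'"         # hypothetical target condition

# Base deployment: full manifest with edgeAgent/edgeHub and shared modules.
subprocess.run([
    "az", "iot", "edge", "deployment", "create",
    "--hub-name", HUB,
    "--deployment-id", "base-deployment",
    "--content", "base-manifest.json",
    "--target-condition", TARGET,
    "--priority", "0",
], check=True)

# Layered deployment: only the product-specific modules, at a HIGHER
# priority than the base deployment so it is applied on top of it.
subprocess.run([
    "az", "iot", "edge", "deployment", "create",
    "--hub-name", HUB,
    "--deployment-id", "layer-product-a",
    "--content", "layer-product-a.json",
    "--target-condition", TARGET,
    "--priority", "1",
    "--layered",
], check=True)
```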

Related

How would one create an isolated Jenkins build node (without access to secrets)?

As the Top 10 CI/CD Security Risks SEC-04 states:
Ensure that pipelines running unreviewed code are executed on isolated nodes, not exposed to secrets and sensitive environments.
The above statement seems especially true when the code (or the pipeline code itself) is in a pull request that has not yet been seen/approved/merged, but from a developer perspective you still want to know whether it builds successfully in the first place. Running code that nobody has laid eyes on while it has access to build secrets is definitely a security risk.
I am wondering whether this kind of isolation is achievable with Jenkins build nodes, as I cannot find any specific options for it.
My assumption is that dynamically provisioned containerized agents are best suited for isolated environments; I'm just not sure how to prevent their access to secrets from the Jenkins controller.

Blue Green Deployment with AWS ECS

We are using ECS Fargate containers to deploy all of our services (~10) and want to follow Blue/Green Deployment.
We have deployed all the services under the BLUE flag, with target groups pointing to the services.
In CI/CD, new target groups are created with slightly different forward rules to allow testing without any issues.
Now my system is running with two kinds of target groups, services and task definitions:
tg_blue, service_blue, task_blue → pointing to the old containers and serving live traffic
tg_green, service_green, task_green → pointing to the new containers and receiving no traffic
All of the above steps are done in Terraform.
Now I want to switch the traffic, and here I am stuck: how do I switch the traffic, and what will the next deployment look like?
I would go for an AWS-native solution unless there are important reasons against it. What I have in mind is CodeDeploy; it switches between the target groups automatically.
Without CodeDeploy, you need to implement weighted balancing between the two target groups and adjust the weights yourself later on, which is extra work; a sketch of that switch follows below.
The whole flow is explained quite well in this YouTube video.
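If you go the manual route without CodeDeploy, the actual switch is just an update of the ALB listener's forward action with weighted target groups. A minimal sketch with boto3; the listener and target group ARNs are placeholders:

```python
# Sketch: shift ALB traffic from the blue to the green target group by
# adjusting the weights on the listener's forward action (boto3).
# The ARNs below are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."
TG_BLUE = "arn:aws:elasticloadbalancing:...:targetgroup/tg_blue/..."
TG_GREEN = "arn:aws:elasticloadbalancing:...:targetgroup/tg_green/..."

def shift_traffic(blue_weight: int, green_weight: int) -> None:
    """Set the blue/green weights on the listener's default forward action."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": TG_BLUE, "Weight": blue_weight},
                    {"TargetGroupArn": TG_GREEN, "Weight": green_weight},
                ],
            },
        }],
    )

# Example: send 10% canary traffic to green, then cut over fully.
shift_traffic(90, 10)
# ... verify green is healthy ...
shift_traffic(0, 100)
```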

Trigger CodeDeploy in GitLab?

I am working on a CI/CD pipeline on AWS. Given the requirements, I have to use GitLab as the repository and Blue/Green deployment as the deployment method for ECS Fargate. I would like to use CodeDeploy (preset in the CloudFormation template) and trigger it on each commit pushed to GitLab. I cannot use CodePipeline in my region, so it is not an option for me.
I have read a lot of docs and web pages related to ECS Fargate and B/G deployment, but not much of that information helps. Does anyone have related experience?
If your goal is zero downtime, ECS already provides that by default, though not with what I'd call a Blue/Green deployment but rather a rolling upgrade. You'll be able to control the percentage of healthy instances, ensuring no downtime, with ECS draining connections from the old tasks while provisioning new tasks running the new version.
Your application must be able to handle this 'duality' of versions, e.g. on the data layer, in the UX, etc.
If Blue/Green is an essential requirement, you'll have to leverage CodeDeploy and an ALB with ECS. Without going into implementation details, here are the highlights:
You have two sets of task definitions and target groups (tied to one ALB).
CodeDeploy deploys the new task definition, which is tied to the green target group, leaving blue as is.
Test your green deployment by configuring a test listener pointing to the new target group.
When testing is complete, switch all (or incremental) traffic from blue to green (ALB rules/weighted targets).
Repeat the same process on the next update, except you'll be going from green back to blue.
Parts of what I've described are handled by CodeDeploy, but hopefully this gives you an idea of the solution architecture and hence how to automate it (see the AWS documentation on blue/green deployments with ECS).
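As for triggering this from GitLab, a job in the pipeline can call the CodeDeploy API directly once the new task definition has been registered. A hedged sketch with boto3; the application name, deployment group, task definition ARN and container name are placeholders:

```python
# Sketch: start a CodeDeploy blue/green deployment for an ECS service.
# This could run in a GitLab CI job after the new task definition has been
# registered. All names/ARNs are placeholders.
import json
import boto3

codedeploy = boto3.client("codedeploy")

appspec = {
    "version": 0.0,
    "Resources": [{
        "TargetService": {
            "Type": "AWS::ECS::Service",
            "Properties": {
                "TaskDefinition": "arn:aws:ecs:...:task-definition/my-app:42",
                "LoadBalancerInfo": {
                    "ContainerName": "my-app",
                    "ContainerPort": 8080,
                },
            },
        },
    }],
}

response = codedeploy.create_deployment(
    applicationName="my-ecs-app",
    deploymentGroupName="my-ecs-deployment-group",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": json.dumps(appspec)},
    },
)
print("Started deployment:", response["deploymentId"])
```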

Can test and production share the same cloud kubernetes environment?

I have created a Kubernetes cluster and successfully deployed my Spring Boot application plus an nginx reverse proxy for testing purposes.
Now I'm moving to production. The only differences between test and prod are the database connection and the nginx basic auth (and of course the scaling parameters).
In this case, considering I'm using a cloud provider's infrastructure, what are the best practices for Kubernetes?
Should I create a new cluster only for prod? Or could I use the same cluster and use labels to distinguish test and production machines?
For now, having 2 clusters seems like a waste to me: the provider assures me that I have the hardware capacity, and I can set different request/limit/replica parameters per environment. Also, for now, I only have 2 images to deploy per environment (though for production I will opt for horizontal scaling with 2 replicas).
I would absolutely 100% set up a separate test cluster. (...assuming a setup large enough where Kubernetes makes sense; I might consider an easier deployment system for a simple three-tier app like what you're describing.)
At a financial level this shouldn't make much difference to you. You'll need some amount of hardware to run the test copy of your application, and your organization will be paying for it whether it's in the same cluster or a different cluster. The additional cost will only be the cost of the management plane, which shouldn't be excessive.
At an operational level, there are all kinds of things that can go wrong during a deployment, and in particular there are cases where one Kubernetes resource can "step on" another. Deploying to a physically separate cluster helps minimize the risk of accidents in production; you won't accidentally overwrite the prod deployment's ConfigMap holding its database configuration, for example. If you have some sort of crash reporting or alerting set up, "it came from the test cluster" is a very clear check you can use to not wake up the DevOps team. It also gives you a place to try out possibly risky configuration changes: if you run your update script once in the test cluster and it passes then you can re-run it in prod, but if the first time you run it is in prod and it fails, that's an outage.
Depending on what you're using for a CI system, the other thing you can set up is fully automated deploys to the test environment. If a commit passes its own unit tests, you can have the test environment always running current master and run integration tests there. If and only if those integration tests pass, you can promote to the production environment.
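A minimal sketch of that promotion flow, assuming two kubectl contexts for the two clusters; the context names, manifest path and test command are placeholders:

```python
# Sketch: deploy to the test cluster, run integration tests, and only then
# promote the same manifests to the prod cluster. Context names, manifest
# paths and the test command are placeholders.
import subprocess
import sys

def kubectl_apply(context: str, manifest_dir: str) -> None:
    """Apply the manifests against a specific cluster context."""
    subprocess.run(
        ["kubectl", "--context", context, "apply", "-f", manifest_dir],
        check=True,
    )

def run_integration_tests() -> bool:
    """Run the integration test suite against the test environment."""
    result = subprocess.run(["./run-integration-tests.sh"])
    return result.returncode == 0

kubectl_apply("test-cluster", "k8s/")

if run_integration_tests():
    kubectl_apply("prod-cluster", "k8s/")
else:
    print("Integration tests failed; not promoting to prod.")
    sys.exit(1)
```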
It's true that it is definitely better practice to use a different cluster, since in your test cluster you could do something wrong (especially resource-wise) and take down your prod environment; but if you can't afford it and you feel confident with Kubernetes, you can put your prod environment in a different namespace.
I don't know about Azure, but on GKE you can scale the number of nodes down to zero. If that is possible on Azure, maybe you can scale the test environment's nodes to zero whenever it is not in use and still keep 2 clusters.
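For reference, on GKE the scale-to-zero mentioned above is a single resize command; a sketch wrapping it (cluster name, node pool and zone are placeholders):

```python
# Sketch: scale a GKE test cluster's node pool to zero when it is not in
# use, and back up again later. Cluster name, node pool and zone are
# placeholders.
import subprocess

def resize_node_pool(num_nodes: int) -> None:
    subprocess.run([
        "gcloud", "container", "clusters", "resize", "test-cluster",
        "--node-pool", "default-pool",
        "--num-nodes", str(num_nodes),
        "--zone", "europe-west1-b",
        "--quiet",
    ], check=True)

resize_node_pool(0)   # park the test cluster outside working hours
# resize_node_pool(3) # bring it back when needed
```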
It's better to use different clusters for production and dev/testing; please refer to the best-practices documentation.

Why does nobody do it in Docker? (All-in-one container / "black box")

I need a lot of various web applications and microservices.
Also, I need easy backup/restore and the ability to move everything between servers/cloud providers.
I started studying Docker for this, and I'm puzzled when I see advice like: "create a first container for your application, create a second container for your database and link them together".
But why do I need a separate container for the database? If I understand correctly, Docker's main message is: "run and move applications with all their dependencies in an isolated environment". So, as I understand it, it is appropriate to place the application and all its dependencies in one container (especially if it's a small application with no need for an external database).
This is how I see the best way to use Docker in my case:
Take a base image (e.g. phusion/baseimage).
Build my own image based on it (with nginx, the database and the application code).
Expose a port for interaction with my application.
Create a data volume for this image on the target server (to store application data, database, uploads etc.) or restore the data volume from a previous backup.
Run this container and have fun.
Pros:
Easy to back up/restore/move the whole application (just move the data volume and start the container on the new server/environment).
The application is a "black box" with no external dependencies to worry about.
If I need to store data in external databases or consume data from them, nothing prevents me from doing so (but it is usually never necessary). And I prefer to use the API of other black boxes instead of accessing their databases directly.
More isolation and security than with a single database shared by all containers.
Cons:
Greater consumption of RAM and disk space.
A little harder to scale (if I need several app instances to handle thousands of requests per second, I can move the database into a separate container and link several app instances to it, but that is needed only in very rare cases).
Why have I not found recommendations for this approach? What's wrong with it? What pitfalls have I not seen?
First of all you need to understand that a Docker container is not a virtual machine; it is just a wrapper around the kernel features chroot, cgroups and namespaces, using layered filesystems, with its own packaging format. A virtual machine is usually a heavyweight, stateful artifact with extensive configuration options for the resources available on the host machine, and you can set up complex environments within a VM.
A container is a lightweight, disposable runtime environment that is recommended to be kept as stateless as possible. All changes are stored within the container, which is just a running instance of the image, and you'll lose all of those diffs if the container is deleted. Of course you can map volumes for more persistent data, but that is available in a multi-container architecture too.
If you pack everything into one container, you lose the ability to scale the components independently of each other and you create tight coupling.
With this tight coupling you can't implement fail-over, redundancy and scalability features in your app configuration. Most modern NoSQL databases are built to scale out easily, and data redundancy also becomes possible when you run more than one backing database instance.
On the other hand, defining these single-responsibility containers is easy with docker-compose, where you can declare them in a simple YAML file.
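To make that concrete, here is a minimal sketch of the multi-container setup using the Docker SDK for Python instead of a compose file; the image names, credentials and ports are placeholders:

```python
# Sketch: run the database and the application as separate containers on a
# shared network, with a named volume for the database data. Image names,
# credentials and ports are placeholders.
import docker

client = docker.from_env()

network = client.networks.create("app-net", driver="bridge")
volume = client.volumes.create("db-data")

db = client.containers.run(
    "postgres:16",
    name="db",
    network="app-net",
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"db-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    detach=True,
)

app = client.containers.run(
    "my-app:latest",                      # hypothetical application image
    name="app",
    network="app-net",
    environment={"DATABASE_URL": "postgresql://postgres:example@db:5432/postgres"},
    ports={"8080/tcp": 8080},
    detach=True,
)

# The app container can now be scaled or replaced independently of the
# database container, which keeps its state in the db-data volume.
```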
