I host an Angular app on AWS S3 via a CloudFront distribution. I want to set up a staging distribution: how do I change my CodePipeline workflow? - amazon-cloudfront

I learned that AWS CloudFront now supports continuous deployment:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/continuous-deployment.html
I would like to use this before I release a major update to my app, in order to roll it out slowly and catch any unforeseen issues early.
My current deployment happens via AWS CodePipeline: after my app is built, it is deployed directly to the AWS S3 bucket that the CloudFront distribution serves from.
I couldn't find any documentation on how to change my CodePipeline configuration to account for a staging CloudFront distribution.
Ideally, I would like the following setup:
Normally, things work as before (I don't often release major updates that need a gradual rollout), so the default behavior would be to automatically promote the staging distribution to production after every release.
In the exceptional case where I do want a gradual rollout, it's okay to do things manually: I would go into CloudFront, change some settings, make my release, and then gradually increase the rollout over a few days until I reach 100% and I'm happy with the release, at which point I'd restore the settings so things behave as in the point above.
Can anyone suggest how to tackle this scenario?
Thanks!
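One possible shape for the "auto-promote" default is an extra pipeline action that copies the staging distribution's configuration onto the production distribution via the CloudFront API. Below is a minimal sketch with the AWS SDK for JavaScript v3; the distribution IDs are placeholders, and the exact parameter shape of UpdateDistributionWithStagingConfig should be checked against the current docs before relying on it:

import {
  CloudFrontClient,
  GetDistributionCommand,
  UpdateDistributionWithStagingConfigCommand,
} from "@aws-sdk/client-cloudfront";

// Placeholder IDs -- replace with your own primary and staging distribution IDs.
const PRIMARY_ID = "E1PRIMARYEXAMPLE";
const STAGING_ID = "E2STAGINGEXAMPLE";

const cf = new CloudFrontClient({});

async function promoteStagingToProduction(): Promise<void> {
  // The IfMatch value combines the current ETags of both distributions.
  const primary = await cf.send(new GetDistributionCommand({ Id: PRIMARY_ID }));
  const staging = await cf.send(new GetDistributionCommand({ Id: STAGING_ID }));

  await cf.send(
    new UpdateDistributionWithStagingConfigCommand({
      Id: PRIMARY_ID,
      StagingDistributionId: STAGING_ID,
      IfMatch: `${primary.ETag}, ${staging.ETag}`,
    })
  );
}

promoteStagingToProduction().catch(console.error);

For the exceptional gradual rollout, you would skip or disable this step and instead ramp the traffic weight on the continuous deployment policy in the console over a few days.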

Related

Simplest setup for a staging server and a production server

What's the simplest way to manage a staging server vs production?
What's the point of having a staging server if you could just push changes to a different branch in production?
What's the best way to merge the staging server with production? Cron job?
Our current setup is a staging server which we don't use; we are just pushing straight to production, but we're trying to improve the process.
What's the simplest way to manage a staging server vs production?
The simplest and cheapest way is to get rid of your staging server. Staging servers don't inherently make deploys safer, but generally developers want at least a dev environment (functionally not necessarily distinct from the idea of a staging server) to host their code in a prod-like environment before they push it to prod.
What's the point of having a staging server if you could just push changes to a different branch in production?
If you have 2 branches running in production simultaneously, that's functionally equivalent to a staging server. Most shops prefer to have a staging environment, not just a staging server, so that their data tier, 3rd party integrations, etc. are completely separate between staging and prod.
Simply deploying another copy of your application in prod is deceptively dangerous because if you mess up the data tier or 3rd party integrations you can easily affect prod.
trying to improve the process
Feature flags. If you can enable new features or even fixes for specific users, you can roll them out to your QA team (or the devs, whoever is going to test) and then, when you're happy with them, roll them out to the general user base. This isn't inherently safer than anything else, but it has the advantage that it front-loads the work of planning for multiple concurrent code paths and makes that planning more explicit.
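As a rough illustration of that front-loading (the flag names, groups, and user shape here are made up, not from any particular flag service), a per-user flag check might look like:

// Minimal sketch of a per-user feature flag; the flag store and user shape are assumptions.
type User = { id: string; groups: string[] };

const ROLLOUT_GROUPS: Record<string, string[]> = {
  "new-checkout": ["qa", "devs"], // widen to ["qa", "devs", "beta", "all"] as confidence grows
};

function isEnabled(flag: string, user: User): boolean {
  const groups = ROLLOUT_GROUPS[flag] ?? [];
  return groups.includes("all") || user.groups.some((g) => groups.includes(g));
}

// Both code paths stay in the codebase until the flag is retired.
function renderCheckout(user: User): string {
  return isEnabled("new-checkout", user) ? "new checkout flow" : "old checkout flow";
}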
Unfortunately there's no magic bullet by which testing environments (dev, staging, whatever you want to call them) increase reliability.
What's the best way to merge the staging server with production? Cron job?
For code, usually the preferred method is to "promote" the artifact you deployed to staging over to prod without rebuilding, guaranteeing the same thing is shipped.
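For example (a sketch only; the bucket names and keys are placeholders), on AWS that "promotion" can be as simple as copying the exact build artifact that was tested in staging to the production location, never rebuilding:

import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// Copy the already-tested staging artifact to prod instead of rebuilding it.
async function promoteArtifact(version: string): Promise<void> {
  await s3.send(
    new CopyObjectCommand({
      CopySource: `staging-artifacts-bucket/app-${version}.zip`,
      Bucket: "prod-artifacts-bucket",
      Key: `app-${version}.zip`,
    })
  );
}

promoteArtifact("1.4.2").catch(console.error);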
For the runtime environment, using containerization makes most of that part of the code artifact, and that's the simplest way. If you're running on container-centric hosting like ECS Fargate or Google's Docker-oriented service, there's nothing else on the app side to ship. This is what I recommend; it's straightforward and easy to reason about. Adding virtual servers into the mix just adds an OS level to manage, and there's little benefit to that. If you can make your app serverless, so it's not sitting waiting for connections but instead is invoked when connections come in, the same thing applies: no OS to manage (AWS Lambda, for example, has serverless Docker image support).
Data is generally considered the tricky bit of having test environments by those who have experience with them. If your production data is not at all sensitive you can copy it over, but that may or may not actually work depending on what's in the data and how distributed your data ends up being. Generally production data is sensitive enough that you don't want to expose it to dev environments, which makes it tricky to ensure the dev data is appropriate for testing features. One common methodology for overcoming that obstacle is automating end-to-end tests via something like Selenium for web browsers, and automated API tests for non-browser-centric endpoints. This allows you to write the test along with the app to prove it's working.
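A tiny example of that kind of end-to-end test with selenium-webdriver (the URL, selectors, and credentials are placeholders for a test environment seeded with synthetic data):

import { Builder, By, until } from "selenium-webdriver";

// Smoke-test the login flow of a staging/dev environment.
async function loginSmokeTest(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://staging.example.com/login");
    await driver.findElement(By.name("email")).sendKeys("test-user@example.com");
    await driver.findElement(By.name("password")).sendKeys("not-a-real-password");
    await driver.findElement(By.css("button[type=submit]")).click();
    await driver.wait(until.urlContains("/dashboard"), 10_000);
  } finally {
    await driver.quit();
  }
}

loginSmokeTest().catch((err) => {
  console.error(err);
  process.exit(1);
});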

Is there any way to perform rollback in firebase cloud functions?

We are running a larger backend application in Node.js/TS on Firebase, with about 180 cloud functions and Firestore as the database. Firebase has been good for our needs so far, but we are getting to a level of usage where even small amounts of downtime can cause a lot of damage. Due to the number of cloud functions, a full deploy can take up to 30 minutes, so we usually only do partial deploys of changed functions, which still take about 10 minutes.
I am trying to find a way to do a quick rollback to the previous version of a given function in case a bug is discovered after a production deploy. Firebase does not seem to provide rollback functionality, so the only option is to re-deploy the code of the previous version. One issue is the deploy time (up to 10 minutes for a single function); the other is git versioning when there are partial deploys. Normally there would be a branch reflecting exactly what is in prod, but with partial deploys this is no longer the case. The only way to keep a branch one-to-one with prod is to do a full deploy every time, but this takes a prohibitive amount of time (30+ minutes, not including retries). The Firebase deploy usually fails or exceeds the deployment quota as well, which makes things like CI pipelines very difficult (they would have to automatically retry failed functions, and the time is still an issue since 30+ minutes to deploy is not acceptable in the case of downtime).
Has anyone found a good solution for rollback (versioning) and a git structure that works well with Firebase at scale?
Cloud Functions for Firebase is based on Cloud Functions and their behavior is the same. Today it's not possible to route traffic to a previous version (i.e. to perform a rollback). (I can also tell you that Node.js 16 is now GA, not Beta as still mentioned in the Cloud Functions for Firebase documentation.)
The next Cloud Functions runtime is in the works (and available in preview). That runtime is based on Cloud Run under the hood, which allows traffic splitting/routing and therefore supports rollback.
So, for now, there is no way to perform a simple rollback with Firebase functions. A big improvement could be to use the Cloud Functions v2 runtime directly, or even Cloud Run, but that's a big change to your code base.
Another solution could be to use a load balancer in front of all your functions and to:
Deploy the new function under a new name (don't update the current deployment; create a new service each time you deploy a new version)
Create a new serverless backend with the new functions
Update the URL map to take into account the new backend.
After a while, delete the old function versions.
That also requires a lot of work to put into action. And the propagation delay when you update your URL map is typically between 3 and 5 minutes, which is not such a great advantage compared to your current solution.
It looks like you're not the only one; similar questions have been answered before. I recommend setting up some version control. I would solve the failing-deploy issues first, which should reduce the deploy and redeploy times, especially if multiple functions are involved. You could use a different deploy branch or set up a staging environment as well. I would invest the time in getting the Git setup turnkey.
Per user Ariel:
Each time you make a deploy to a cloud function you get an output line like this:
sourceArchiveUrl: gs://my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
I entered my Google Cloud Platform Developer Console -> Cloud Functions -> function_name -> Source tab
and there almost at the bottom it says: Source location
my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
the same as shown in the CLI, but without gs://. That link led me to the following: https://storage.cloud.google.com/my-store-bucket/us-central1-function_name-........
I removed from the link everything that came after
https://storage.cloud.google.com/my-store-bucket
and that led me to a huge list of files, each of which represented an image of all my cloud functions at the point in time of each deploy I had made, exactly what I needed!
The only thing left to do was to locate the file with the last date before my mistaken deploy.
source: Retrieving an old version of a Google Cloud function source
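The same digging can be scripted; a sketch with @google-cloud/storage, using the bucket name and object prefix from the example above (adjust both for your project, and pick the right archive yourself rather than blindly taking the first one):

import { Storage } from "@google-cloud/storage";

const storage = new Storage();
const bucket = storage.bucket("my-store-bucket");

// List the archived function sources and download one from before the bad deploy.
async function downloadOldSource(): Promise<void> {
  const [files] = await bucket.getFiles({ prefix: "us-central1-function_name" });
  for (const file of files) {
    console.log(file.name, file.metadata.timeCreated);
  }
  // After picking the newest archive created before the mistaken deploy:
  await bucket.file(files[0].name).download({ destination: "rollback-source.zip" });
}

downloadOldSource().catch(console.error);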
as of 2019
Rolling back to an older version of a firebase function (google cloud function)
2021:
Roll back Firebase hosting and functions deploy jointly?
You can roll back a Firebase Hosting deployment, but not the functions, without using Git version control etc. Using partial deploys you can deploy multiple functions/groups. You can also roll back Remote Config templates, which are kept for up to 90 days.
https://firebase.google.com/docs/remote-config/templates
Firebase partial deploy multiple grouped functions
https://firebase.google.com/docs/cli#roll_back_deploys

Trigger CodeDeploy in GitLab?

I am working on a CI/CD pipeline on AWS. Per the given requirements, I have to use GitLab as the repository and Blue/Green deployment as the deployment method for ECS Fargate. I would like to use CodeDeploy (preset in the CloudFormation template) and trigger it on each commit pushed to GitLab. I cannot use CodePipeline in my region, so CodePipeline does not work for me.
I have read a lot of docs and web pages related to ECS Fargate and B/G deployment, but it seems not much of that information helps. Does anyone have related experience?
If your goal is zero downtime, ECS already provides that by default, though not with what I'd call Blue/Green deployment but rather a rolling upgrade. You'll have the ability to control the percentage of healthy instances, ensuring no downtime, with ECS draining connections from the old tasks and provisioning new tasks with the new version.
Your application must be able to handle this 'duality' in versions, e.g. on the data layer, UX etc.
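That healthy-percentage control lives on the ECS service's deployment configuration; a minimal sketch with the AWS SDK for JavaScript v3 (the cluster, service, and task definition names are placeholders):

import { ECSClient, UpdateServiceCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

// Rolling update: keep at least 100% of desired tasks healthy and allow up to 200%
// while ECS drains old tasks and starts new ones with the new task definition.
async function rollOut(taskDefinitionArn: string): Promise<void> {
  await ecs.send(
    new UpdateServiceCommand({
      cluster: "my-cluster",
      service: "my-service",
      taskDefinition: taskDefinitionArn,
      deploymentConfiguration: {
        minimumHealthyPercent: 100,
        maximumPercent: 200,
      },
    })
  );
}

rollOut("arn:aws:ecs:eu-west-1:123456789012:task-definition/my-app:42").catch(console.error);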
If Blue/Green is an essential requirement, you'll have to leverage CodeDeploy and ALB with ECS. Without going into implementation details, here's the highlight of it:
You have two sets of: Task Definitions and Target Groups (tied to one ALB)
CodeDeploy deploys the new task definition, which is tied to the green Target Group, leaving blue as is.
Test your green deployment by configuring a test listener to the new target group.
When testing is complete, switch all/incremental traffic from blue to green (ALB rules/weighted targets)
Repeat the same process on the next update, except you'll be going from green to blue.
Parts of what I've described are handled by CodeDeploy, but hopefully this gives you an idea of the solution architecture and hence how to automate it. ECS B/G.
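Since the trigger has to come from GitLab rather than CodePipeline, one way to automate this is for the GitLab CI job to push the new image and then call the CodeDeploy API directly. A hedged sketch with the AWS SDK for JavaScript v3; the application name, deployment group, container details, and task definition ARN are placeholders, and the AppSpec shape should be checked against the ECS deployment docs:

import { CodeDeployClient, CreateDeploymentCommand } from "@aws-sdk/client-codedeploy";

const cd = new CodeDeployClient({});

// Kick off a Blue/Green ECS deployment; CodeDeploy shifts ALB traffic
// between target groups according to the deployment group configuration.
async function deploy(taskDefinitionArn: string): Promise<void> {
  const appSpec = {
    version: 0.0,
    Resources: [
      {
        TargetService: {
          Type: "AWS::ECS::Service",
          Properties: {
            TaskDefinition: taskDefinitionArn,
            LoadBalancerInfo: { ContainerName: "web", ContainerPort: 80 },
          },
        },
      },
    ],
  };

  await cd.send(
    new CreateDeploymentCommand({
      applicationName: "my-app",
      deploymentGroupName: "my-app-bluegreen",
      revision: {
        revisionType: "AppSpecContent",
        appSpecContent: { content: JSON.stringify(appSpec) },
      },
    })
  );
}

deploy("arn:aws:ecs:ap-east-1:123456789012:task-definition/my-app:7").catch(console.error);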

How to manage patching on multiple AWS accounts with different schedules

I'm looking for the best way to manage patching Linux systems across AWS accounts with the following things to consider:
Separate schedules to roll patches through Dev, QA, Staging and Prod sequentially
Production patches to be released on approval, not automatic
No newer patches can be deployed to Production than what was already deployed to lower environments (as new patches come out periodically throughout the month)
We have started by caching all patches in all environments on the first Sunday of every month. The goal there was to then install patches from cache. This helps prevent un-vetted patches being installed in prod.
Most, but not all, instances are managed by OpsWorks, and there are numerous OpsWorks stacks. We have some other instances managed by Chef Server. Still others are not managed at all, just simple EC2 instances created from the EC2 console. This means that using recipes requires kicking off approved patches on a stack-by-stack or instance-by-instance basis. Not optimal.
More recently, we have looked at the new features of SSM using a central AWS account to manage instances. However, this causes problems with some applications because the AssumeRole for SSM adds credentials to the .aws/config file that interferes with other tasks we need to run.
We have considered other tools, such as Ansible, but we would like to explore staying within the toolset we currently have, which is largely OpsWorks and Chef Server. I'm looking for ideas at a higher level: an architecture for how one would approach this scenario.
Thanks for any thoughts or ideas.
This sounds like one of the exact scenarios RunCommand was designed for.
You can create multiple groups of servers with different schedules based on tags. More importantly, you don't need to rely on secrets/keys being deployed anywhere.
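For example (a sketch; the tag key/values, concurrency limits, and scheduling are assumptions to be adapted), kicking off the standard patch baseline document against one environment at a time:

import { SSMClient, SendCommandCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// Install patches on every instance tagged with the given Patch Group,
// e.g. run for "Dev" first, then "QA", "Staging", and finally "Prod" after approval.
async function patchEnvironment(patchGroup: string): Promise<void> {
  await ssm.send(
    new SendCommandCommand({
      DocumentName: "AWS-RunPatchBaseline",
      Targets: [{ Key: "tag:Patch Group", Values: [patchGroup] }],
      Parameters: { Operation: ["Install"] },
      MaxConcurrency: "25%",
      MaxErrors: "1",
    })
  );
}

patchEnvironment("Dev").catch(console.error);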

How can I deploy a web process and a worker process with Elastic Beanstalk (node.js)?

My heroku Procfile looks like:
web: coffee server.coffee
scorebot: npm run score
So within the same codebase, I have 2 different types of process that will run. Is there an equivalent to doing this with Elastic Beanstalk?
Generally speaking Amazon gives you much more control than Heroku. With great power comes great responsibility. That means that with the increased power comes increased configuration steps. Amazon performs optimizations (both technical and billing) based on what tasks you're performing. You configure web or worker environments separately and deploy to them separately. Heroku does this for you but in some cases you may not want to deploy both at once. Amazon leaves that configuration up to you.
Now, don't get me wrong, you might see this as a feature of Heroku, but in advanced configurations you might have entire teams working on and redeploying workers independently from your web tier. This means that the default on Amazon is basically that you set up two completely separate apps that might happen to share source code (but don't have to).
Basically the answer to your question is no, there is not something that will allow you to do what you're asking in as simple a manner as with Heroku. That doesn't mean it is impossible; it just means you need to set up your environments yourself instead of Heroku doing it for you.
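To make that concrete (a sketch only, not the official pattern; the route path and port are assumptions): you would deploy the same codebase twice, once as a web environment and once as a worker environment. The worker environment's daemon delivers SQS messages to your app as HTTP POSTs, so the "scorebot" process becomes an HTTP handler rather than a long-running script:

import express from "express";

const app = express();
app.use(express.json());

// Web environment: normal HTTP traffic.
app.get("/", (_req, res) => {
  res.send("hello from the web tier");
});

// Worker environment: the Elastic Beanstalk worker daemon POSTs each SQS message here
// (the path is whatever you configure as the worker's HTTP path).
app.post("/worker/score", (req, res) => {
  console.log("scoring job received:", req.body);
  res.sendStatus(200); // 200 tells the daemon the message was processed and can be deleted
});

const port = Number(process.env.PORT) || 8080;
app.listen(port);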
For more info see:
Worker "dyno" in AWS Elastic Beanstalk
http://colintoh.com/blog/configure-worker-for-aws-elastic-beanstalk

Resources