How to do continuous deployment - C++ on AWS - Linux

I have the source on a repository server. The application is running on an AWS instance. I could make a script that logs in, pulls, compiles, and copies the new binary to its destination.
But how do I copy the new binary while the application is running? What's the usual way to do this? Do I have to stop the application to make an update? How does continuous deployment work then?
I'm using Linux, and the application is written in C++.

You will have to restart the application after copying the binary. I would highly recommend using one of the frameworks for continuous building/integration, such as Jenkins, to make this less painful, though.
It will not only help with the actual deployment process but can also run tests for you and deploy only if the tests succeed. There is also a plugin for AWS integration.
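Even for the manual-script route described in the question, the usual pattern is pull, build, swap the binary, then restart. Below is a minimal sketch of that sequence, assuming the application runs under a systemd unit called myapp and that the script runs on the instance itself; the unit name and paths are hypothetical, made up for illustration.

```ts
// deploy.ts - a sketch only, not a hardened deploy tool.
import { execSync } from "node:child_process";

const repoDir = "/opt/myapp/src";            // hypothetical checkout location
const installPath = "/usr/local/bin/myapp";  // hypothetical binary destination

// 1. Pull and build the new binary.
execSync("git pull", { cwd: repoDir, stdio: "inherit" });
execSync("make -j$(nproc)", { cwd: repoDir, stdio: "inherit" });

// 2. Swap the binary while the old process keeps running: copy to a temp name,
//    then mv (rename) over the old path. The running process keeps its original
//    inode, so this is safe even while the app is up.
execSync(
  `cp ${repoDir}/build/myapp ${installPath}.new && mv ${installPath}.new ${installPath}`,
  { stdio: "inherit" }
);

// 3. Restart so the new binary is picked up.
execSync("sudo systemctl restart myapp", { stdio: "inherit" });
```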

Related

Is it possible to stream Cloud Build logs with the Node.js library?

Some context: Our Cloud Build process relies on manual triggers and about 8 substitutions to customize deploys to various Firebase projects, hosting sites, and preview channels. Previously we used a bash script and gcloud to automate the selection of these substitution options, the "updating" of the trigger (via gcloud beta builds triggers import; our needs require us to use a single trigger, it's a long story), and the "running" of the trigger.
This bash script was hard to work with and improve, and through the import-and-run shenanigans it actually led to some faulty deploys that caused all kinds of chaos: not great.
However, recently I found a way to pass substitution variables as part of a manual trigger operation using the Node.js library for Cloud Build (runTrigger with subs passed as part of the request)!
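For reference, here is a minimal sketch of that call with the @google-cloud/cloudbuild client; the project ID, trigger ID, branch, and substitution keys are hypothetical placeholders.

```ts
import { CloudBuildClient } from "@google-cloud/cloudbuild";

const client = new CloudBuildClient();

async function runDeployTrigger() {
  // Kick off the single shared trigger with per-deploy substitutions.
  const [operation] = await client.runBuildTrigger({
    projectId: "my-project",            // hypothetical
    triggerId: "my-manual-trigger-id",  // hypothetical
    source: {
      branchName: "main",
      substitutions: {
        _FIREBASE_PROJECT: "my-site-staging", // hypothetical substitution keys
        _HOSTING_SITE: "my-site",
        _PREVIEW_CHANNEL: "pr-123",
      },
    },
  });

  // The returned long-running operation resolves when the build finishes.
  const [build] = await operation.promise();
  console.log(`Build ${build.id} finished with status ${build.status}`);
}

runDeployTrigger().catch(console.error);
```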
Problem: So I'm converting our build utility to Node, which is great, but as far as I can tell there isn't a native way to stream build logs from a running build in the console (except maybe with exec, but that feels hacky).
Am I missing something? Or should I be looking at one of the logging libraries?
I've tried my best scanning Google's docs and APIs (Cloud Build REST, the Node client library, etc.) but to no avail.

CI/CD PHP app with Webpack on Azure Web App

I'm trying to deploy a Laravel + Vue app to an Azure App Service - Web App. The process is, however, very unclear, and I cannot find any proper solution in Microsoft's documentation to get it working.
'Traditional' deployment workflow
What I typically do to deploy my code (outside CI/CD):
sync Git repository
run composer install
run npm run prod (which is shorthand for my webpack build)
Done
There is a really easy approach with a Docker container, where in my Dockerfile I simply configure a php-apache image with Node.js (with NPM) additionally installed.
However, I would like to find a solution that uses Azure's built-in features to configure this deployment. Is that possible?
I can use either Windows or Linux Web Apps; it makes no difference to me.
I recommend that you use continuous deployment. For the specific steps, you can check the official documentation.
Why I recommend it:
As long as the project runs successfully locally, it can be released through continuous deployment from Git, and later updates only require pushing code through Git.
You can easily view the deployment log under Actions on GitHub.
The setup is simple and updates are convenient.
Steps:
First, make sure the project runs normally locally, then create the Web App service in the portal. (Linux is recommended for a Node.js program, as it avoids many problems caused by dependencies.)
Following the official documentation, select GitHub as the deployment source in the Deployment Center.
Check the release information under Actions on GitHub and wait for the deployment to complete.
Note:
If it is a Node.js program (or a program in another language) and the Linux operating system is used, the Startup Command may need to be set under Configuration. If the app cannot be reached normally after release, try setting the startup command to npx serve -s (for a Node.js program; other languages have their own equivalents), and then restart the Web App.

AWS Lambda Dev Workflow

I've been using AWS for a while now but am wondering how to go about developing with Lambda. I'm a big fan of serverless functions and letting Amazon handle the maintenance, and I have been using it for a while. My question: is there a recommended workflow for version control and development?
I understand there's the ability to publish a new version in Lambda, and that you can point to specific versions in a service that calls it, such as API Gateway. I see API Gateway also has some nice abilities to partition who calls which version, i.e. having a test API and also slowly rolling updates out to, say, 10% of production API calls and scaling up gradually.
However, this feels a bit clunky as an actual version control system. Perhaps the functions are coded locally, uploaded using the AWS CLI, and then everything is managed through a third-party version control system (GitHub, Bitbucket, etc.)? Can I deploy to new or existing versions of the function this way? That way I can maintain a separation of test and production functions.
Development also doesn't feel as nice through the editor in Lambda, not to mention that using custom packages requires uploading anyway. Local development seems like the better solution. I'm trying to understand other people's workflows so I can improve mine.
How have you approached this issue in your experience?
I wrote roughly a dozen Lambda functions that trigger on an S3 file-write event or on a schedule and make an HTTP request to an API to kick off data-processing jobs.
I don't think there's any gold standard. From my research, there are various approaches and frameworks out there. I decided that I didn't want to depend on any framework like Serverless or Apex, because I didn't want to learn how to use those things on top of learning about Lambda. Instead I built out improvements organically based on my needs as I was developing a function.
To answer your question, here's my workflow.
Develop locally and git commit changes.
Mock test data and test locally using mocha and chai.
Run a bash script that creates a zip file of the files to be deployed to AWS Lambda.
Upload the zip file to AWS Lambda.
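For that last upload step, one possibility, if you would rather script it from Node than call the AWS CLI from the bash script, is the AWS SDK for JavaScript v3; the function name, region, and zip path below are hypothetical.

```ts
import { readFileSync } from "node:fs";
import { LambdaClient, UpdateFunctionCodeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" }); // hypothetical region

async function deployZip() {
  // Push the zip produced by the build script and publish a new numbered version.
  const result = await lambda.send(
    new UpdateFunctionCodeCommand({
      FunctionName: "s3-event-kickoff",           // hypothetical function name
      ZipFile: readFileSync("dist/function.zip"), // hypothetical zip path
      Publish: true,
    })
  );
  console.log(`Deployed ${result.FunctionName} as version ${result.Version}`);
}

deployZip().catch((err) => {
  console.error(err);
  process.exit(1);
});
```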
You can have version control on your Lambda using AWS CodeCommit (much simpler than using an external Git repository system, although you can do either). Here is a tutorial for setting up a CodePipeline with commit/build/deploy stages: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html
That example deploys to an EC2 instance, so for the deploy portion for a Lambda, see here.
If you set up a pipeline, you can have an initial commit stage, then a build stage that runs your unit tests and packages the code, and then a deploy stage (and potentially more stages if required). It's a very organized way of deploying Lambda changes.
I would suggest you have a look at SAM. SAM is a command-line tool and a framework to help you develop your serverless application. Using SAM, you can test your applications locally before uploading them to the cloud. It also supports blue/green deployments and CI/CD workflows, starting automatically from GitHub.
https://github.com/awslabs/aws-sam-cli

Creating a Web UI for StrongLoop build & deploy processes?

I want to build a web UI for StrongLoop. It would let a user run the build and deploy process from that UI, like StrongLoop Arc does.
There are simple Node applications (web services) that were not created with StrongLoop tools. I need to deploy these applications via the web UI. The solution I have in mind is a set of server-side processes, with the steps listed below:
Upload a zip archive (the Node application) to the server
Extract the zip and build it into a tar.gz with a shell command (slc build) through the Node.js child_process API
Deploy the tar.gz file to the relevant StrongLoop host with a shell command (slc deploy..) through the API mentioned in the previous step
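A rough sketch of steps 2 and 3 using the child_process API, with the slc build / slc deploy commands exactly as listed above; the extraction directory and StrongLoop host URL are hypothetical.

```ts
import { execFileSync } from "node:child_process";

// Hypothetical locations: where the uploaded zip was extracted, and the
// StrongLoop host that should receive the deployable package.
const appDir = "/tmp/uploads/my-node-service";
const strongloopHost = "http://deploy-host.example.com:8701";

export function buildAndDeploy(): void {
  // Step 2: build the extracted application into a deployable tar.gz (slc build).
  execFileSync("slc", ["build"], { cwd: appDir, stdio: "inherit" });

  // Step 3: push the built package to the StrongLoop host (slc deploy).
  execFileSync("slc", ["deploy", strongloopHost], { cwd: appDir, stdio: "inherit" });
}
```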
I wonder whether there is any alternative way to deploy a Node application (not created with StrongLoop tools) to a StrongLoop host via a web UI using some StrongLoop API?
I have looked through the API but could not find a specific solution.
What you require is a CDP (continuous delivery pipeline) setup. There seem to be many ways in which you can achieve this (the easiest is using Codeship or similar platforms), but if you want to know how it works, it requires a few orchestration tools to help you. To describe the steps I'll be using the following tools:
Docker (what is docker?)
Ansible (Use Cases and How it works?)
Jenkins (What is it and Why to use it?)
"There are many other combination of tools that you can look at, but this should give you an idea"
Now that we have the tools, I'll try to describe the deployment pipeline with a very basic use-case.
Step I "Ideally" - Creating a docker image for your nodejs application.
What generally everyone suggests is that you create a docker image of your application. Then save this image on docker-hub. How this will help you is that, now your nodejs application is contained inside a docker image which makes it independent of the Host and can be deployed anywhere you want.
To create this image all you need to do is create a Dockerfile, which is described in the in the link I've shared.
Step II "Ideally" - Creating an Ansible playbook to mimic the setup steps of your application.
Ansible playbooks are basically used to automate every manual process that you would otherwise need to perform in order to set up, deploy, and run your application. This removes the need to run even trivial tasks like "slc build" by hand.
Step III "Ideally" - This is where we get to the UI stuff
By using Jenkins, you get a UI that helps you configure tasks, which can be combined with GitHub hooks to trigger the deployment as soon as you make a commit. This is explained in more detail in the link shared above.
So to summarize, this is roughly what goes on behind the scenes to automate the build and deployment of your application through a UI. I hope this serves as a good starting point for achieving your requirements; and in case you want to skip these steps at the start, you can always go with Codeship or similar tools to help you with the steps you've mentioned.

Deploy repository code to multiple machines at once

My question is: how do you deploy the same code from whatever [D]VCS you use to multiple machines? Do you have an automated deployment system, and if so, what is it? Is it built in-house? Are there any tools out there that can do this automatically? I am asking because I am pretty bored of updating up to 20 machines every time I make some modifications.
P.S.: This probably belongs on ServerFault, but I am asking here because I am thinking of writing my own custom-made deployment system.
Roll your own rpm/deb/whatever for your package, set up your own repo, and have your machines pull on a regular basis. It's really not that hard to do, it's already built into your system, it's well tested, and it's loaded with features. You could use something like Func if you needed to push instead.
Depending on your situation, deploying straight from the version control system might not always be the best idea. You can do only so much by just updating files, and mixing deployment and development will probably make the development use of the version control system less free.
I see two alternatives that might be interesting.
Deploy from your continuous integration server. (Add a task that runs after every successful build, copies over the files, and executes some remote commands. I'm using this to deploy to a test server, and I would find it too tricky to upgrade production in such a way.)
Deploy using an existing package manager. You can set up your own apt (or equivalent) repository and package the updates for apt. Have your continuous build system build the apt packages, but let an admin decide whether they should be pushed to the update server. I think this is the only safe solution for production machines.
We use Capistrano for deployment & Puppet for maintaining the servers and avoiding the inevitable 'configuration drift' when many developers/engineers tinker with the package lists and configuration files.
Both of these programs are written in Ruby, but we use them for our PHP codebase stored in a git repository.
I use a combination of deb packages and Puppet to deploy code and configure a bunch of machines.
In most projects I have been involved with, the final stage has always been a scripted rsync deployment to the live environment, so the multiple targets are built into that process.
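If you do end up writing your own tool, here is a minimal sketch of that scripted multi-target rsync idea; the host names, SSH user, and paths are hypothetical.

```ts
import { execFileSync } from "node:child_process";

// Hypothetical targets and paths.
const hosts = ["web01.example.com", "web02.example.com", "web03.example.com"];
const localBuildDir = "build/";        // trailing slash: sync directory contents
const remoteAppDir = "/var/www/app/";

for (const host of hosts) {
  // --delete keeps each machine an exact mirror of the build output.
  execFileSync(
    "rsync",
    ["-az", "--delete", localBuildDir, `deploy@${host}:${remoteAppDir}`],
    { stdio: "inherit" }
  );
  console.log(`deployed to ${host}`);
}
```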

Resources