Is it possible to set up continuous delivery for a simple HTML page in under 1 hour?
Suppose I have a hello world index.html page being hosted by npm serve, a Dockerfile to build the image, and an image.sh script using docker build. This is in a GitHub repo.
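For context, a minimal version of that setup might look like this (image tag and paths are illustrative):

Dockerfile
# Serve the static page with the npm "serve" package
FROM node:alpine
RUN npm install -g serve
COPY index.html /app/index.html
CMD ["serve", "/app"]

image.sh
docker build -t hello-world-site .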
I want to be able to check in a change to the index.html file and see it on my website immediately.
Can this be done in under 1 hour, on either AWS or Google Cloud? What are the steps?
To answer your question: is it possible in under 1 hour? Yes.
Using only AWS,
Services to be used:
AWS CodePipeline - triggered by GitHub webhooks; sends the source files to AWS CodeBuild
AWS CodeBuild - takes the source files from CodePipeline, builds your application, and deploys the build to S3, Heroku, Elastic Beanstalk, or any other service you desire
The Steps
Create an AWS CodePipeline
Attach your source (GitHub) to your pipeline (each commit will trigger the pipeline, which takes the new commit as its source and builds it in CodeBuild)
Using your custom Docker build environment, CodeBuild uses a yml file (buildspec.yml) to specify the steps to take in your build process. Use it to build the newly committed source files and deploy your app(s) using the AWS CLI.
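For illustration, a minimal buildspec.yml for the static-page case could be as small as this (the bucket name is an assumption; a real pipeline might instead build and push the Docker image):

buildspec.yml
version: 0.2
phases:
  build:
    commands:
      # Copy the freshly committed page to an S3 bucket configured for static website hosting
      - aws s3 cp index.html s3://my-hello-world-bucket/index.html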
Good Luck.
I think I would start by creating a web-enabled script to act as a GitHub commit hook, probably in Node on an AWS instance, which would then trigger the whole process of cleaning up (deleting) the old AWS instance and spinning up a new one with the contents of your repository.
The exact method will be largely dependent on how your whole stack is set up.
Related
I'd like to make the CodePipeline build number (CODEBUILD_BUILD_NUMBER) available to my Node code that is being deployed. Currently there are only two steps in the pipeline: pull from Bitbucket, then deploy to Elastic Beanstalk, so I don't know how this would work.
Alternatively, if I could get the most recent commit hash into my Node.js code, that would be OK.
This example demonstrates how to specify an artifact name that is created at build time.
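If you do add a CodeBuild stage between the source and deploy steps, a common approach is to bake the build number into a file at build time and read that file from Node at runtime. A minimal sketch (the build-info.json name is my own choice; CODEBUILD_BUILD_NUMBER and CODEBUILD_RESOLVED_SOURCE_VERSION, the commit hash, are set by CodeBuild):

buildspec.yml
version: 0.2
phases:
  build:
    commands:
      # Write the build metadata into a file that gets deployed with the app
      - echo "{\"buildNumber\":\"$CODEBUILD_BUILD_NUMBER\",\"commit\":\"$CODEBUILD_RESOLVED_SOURCE_VERSION\"}" > build-info.json
artifacts:
  files:
    - '**/*'

Your Node code can then read it with require('./build-info.json').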
I've watched a lot of tutorials but still don't know how to update my code on an EC2 instance without 100 steps. In every tutorial they upload files with FileZilla, but I'm updating my Node app code several times a day, and opening FileZilla, dragging files, opening an SSH connection, and restarting the app every time is frustrating. I hope there is a way to push code with a single command or something.
There is: use git (push from your dev machine, pull from the EC2 instance, then restart the app), or, simpler in the long run, use git with CI/CD (requires some study though).
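A sketch of the manual flow (the path, branch, and process manager are assumptions about your setup):

# On the EC2 instance, after pushing from your dev machine
cd /home/ec2-user/myapp
git pull origin master
npm install --production
pm2 restart myapp   # or however you restart your app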
Simply put, you can push your code to Git and pull that code from inside the EC2 instance.
For automated deployment, you can use Git and AWS CodeDeploy to deploy code changes to EC2 in one step.
For reference:
https://hackernoon.com/continuous-deployment-with-aws-codedeploy-github-d1eb97550b82 (Step by step guide)
https://github.com/azat-co/codedeploy-codepipeline-node (sample github code for codedeploy with aws)
You can use AWS developer tools (CodeCommit, CodeDeploy, and CodePipeline) for this.
P.S. For CodeDeploy, you have to make sure that the CodeDeploy agent is successfully installed on your web server.
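For reference, installing the agent on an Amazon Linux instance follows this documented pattern (replace us-east-1 with your region):

sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto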
We have a new project in which we are trying to make use of the built-in continuous integration in Kentico for tracking changes to templates, page types, transformations, etc.
We have managed to get this working locally between two instances of a Kentico database: making changes in one, syncing the changes through CI, and then restoring them into the second database using the Continuous Integration application that sits in the bin folder of the Kentico site.
The issue we are having is when it comes to deploying our changes to our dev and live environments.
Our sites are hosted as Azure App Services and we deploy to them using VSTS (Azure DevOps) build and release workflows. However, as these tasks run on an agent, any PowerShell script we try to run to trigger the CI application fails because it is not running in the site/server context.
My question is: has anyone managed to successfully run Kentico CI in the context of an Azure App Service? Alternatively, how can I trigger a PowerShell script on the site following a deployment?
Yes, I've got this running in Azure DevOps within the release pipeline itself. It's something that we're currently rolling out as a business where I work.
The key steps to getting this working for me were as follows:
I don't want to deploy the ContinuousIntegration.exe or the repository folders, so I need to create a second artefact set from source control (this is only possible at the moment with Azure Repos and GitHub to my knowledge).
Unzip your deployment package and copy the CMS folder to a working directory; this is where you're going to run CI. I did this because I need the built assemblies available.
From the repo artefact in step 1, copy the ContinuousIntegration.exe and CI repository folders into the correct place in your unzipped working folder.
Ensure that the connection string in your unzipped folder actually works for the DB; if necessary, you may want to change your VS build options with regard to how the web.config is handled.
From here, you should be able to run CI in the new working folder against your target database.
In my instance, as I don't have CI running on the target site it means that everything is restored every time.
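For illustration, the heart of that final step is just the following, run from the agent (the working-folder path is an assumption):

cd C:\agent\_work\ci-working\CMS\bin
ContinuousIntegration.exe -r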
I'm in the process of writing this up in more detail, so I'll share it here when I've done that.
Edit:
- I finally wrote this up in more detail: https://www.ridgeway.com/blog/web-development/using-kentico-12-mvc-with-azure-devops
We do, but without Kentico CI: VSTS + Git. We store virtual objects in the file system and use Git for version control. We have our own custom library that does import/export of the Kentico objects (the ones not controlled by Git). Essentially we have a JSON file, a "publishing manifest", where we specify which objects need to be exported (i.e. moved between environments).
There is a step from Microsoft, "PowerShell on Target Machines"; I guess you can look into that.
P.S. Take a look also at "Three Ways to Manage Data in Kentico Using PowerShell".
Deploy your CI files to the Azure App Service, and then use an Azure WebJob to run ContinuousIntegration.exe.
If you place a file called KenticoCI.bat in the directory \App_Data\jobs\triggered\ContinuousIntegration, this will automatically create a web job that you can trigger:
KenticoCI.bat
cd D:\home\site\wwwroot
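rem # Takes the site offline by renaming 'App_Offline.bak' to 'App_Offline.htm'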
ren App_Offline.bak App_Offline.htm
rem # run Kentico CI Integration
cd D:\home\site\wwwroot\bin
ContinuousIntegration.exe -r
rem # Removes the 'App_Offline.htm' file to bring the site back online
cd D:\home\site\wwwroot
ren App_Offline.htm App_Offline.bak
I'm developing a Node.js application using Next.js and Express, and I'm using my own GitLab instance to manage the Git repository.
But the application should not be deployed to a web server in the end; instead, I need to deliver a decentralized production application. To make it a bit clearer:
Developing the application locally
Push application to my remote server
My customers should be able to get the production app code from my remote server
Customers will run the application in their local environment and should be able to pull new versions from the remote server
So the application itself won't run on my remote server, but on the local server of the customers.
Normally I would use my CI to test and build the application (done by npm run build), then build a Docker image that I use to run the application on my server. But all of that normally happens on the same server.
In this case I need to build the application and serve it to the customers, and the customers should be able to pull the production code. How can this be done?
Maybe I can't see the wood for the trees... and that's why I'm asking for help/hints.
There are a number of ways you can do this and a number of tools you can use as well. You probably want a pipeline similar to the following.
Code is developed locally, committed, and pushed to the self-hosted gitlab.
GitLab CI (or any other CI you have configured) will then run against your code.
The final step of the CI is to create a "bundle" of your application. This is probably a .zip or similar, and it will be pushed to a remote storage location. It is also possible to ensure that this is done only when pushing to specific branches (such as master).
You can use a number of things as your remote storage location, such as some sort of AWS S3 bucket, or something more complex such as Nexus (there are many free alternatives).
You would then want to give your customers access to either this storage location (if you're using something like S3, or Digital Ocean Block Storage, etc), or access to your distribution repository (such as Nexus).
You should be able to generate some sort of SSH key or access token that you can put on your GitLab CI server and use to publish to these places. It should then be a simple case of making an HTTP call to upload a file to the relevant destination. This would often be called when everything has been successful, and only for specific branches. For example: if all your tests pass and you're on the master branch, zip up all your code and push the new zip file to AWS S3, which your customers have access to.
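A minimal .gitlab-ci.yml sketch of that flow (the stage layout, bucket name, and build output paths are all assumptions, and the runner image is assumed to have zip and the AWS CLI available):

.gitlab-ci.yml
stages:
  - test
  - publish

test:
  stage: test
  script:
    - npm ci
    - npm test

publish:
  stage: publish
  only:
    - master
  script:
    - npm run build
    # Bundle the production build and push it somewhere customers can pull it from
    - zip -r app.zip .next package.json package-lock.json
    - aws s3 cp app.zip s3://my-distribution-bucket/app-$CI_COMMIT_SHORT_SHA.zip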
For further ideas, you could make your storage/distribution location an FTP server if you wanted to, or a local network drive, depending on what your needs are for distribution. If you're just dealing with Docker for your customers, then I'd suggest building a Docker image and self-hosting a Docker registry. Push to that registry after you've built the image, and that would be the end of your CI run.
As a side note, if your customers are using Docker you could create a Docker image and either push it to a registry or export it as a .tar and upload it to a file storage location (S3 for example). This would make things simple for your customers and ensure you control the image creation step (if that's something you want to manage).
The GitLab CI docs might help you with the specifics of uploading artifacts to various locations.
I have a Node.js application on my GitHub. Right now I am using Heroku to host it, but I want to give DigitalOcean a try (the $5/month is more affordable).
I am used to Heroku, where I just create an app > connect it to my GitHub account > deploy from the master branch > boom, app deployed.
When I signed up for DO and started exploring, it seemed like far too many steps to get my app deployed. I researched around to find a simpler way (similar to the one I follow on Heroku), but all the blogs and YouTube videos go through the same tedious process.
I know I am being lazy, but I just want a few-clicks app deployment. Does anyone know a better (smarter) way I can deploy my app on DO from GitHub?
It will not be as easy as with Heroku. It is always tempting to use cheaper services like DigitalOcean or Vultr and pay only a fraction of the price (especially using coupon links that can make it free for months - DigitalOcean, Vultr), but having your own VPS means that you need to manage it yourself. Simplifying that process is what you pay for when you're using Heroku. But it doesn't have to be that bad.
Here is a good tutorial on how to do it:
https://www.distelli.com/docs/tutorials/build-and-deploy-nodejs-to-digitalocean/
And see this list of tutorials - search for those with "deploy" in the title:
https://www.digitalocean.com/community/tags/node-js?type=tutorials
Basically, you have a few options that I would consider here:
1. A semi-manual deploy with git - you can install a git server on your VPS and push to it whenever you want to deploy a new version (see the sketch after this list)
2. An automatic deploy with git - you can add a deployment process to your CI scripts that does what you would do manually in (1), but only after all tests pass
3. You can trigger a pull from git on the server via SSH or a custom API
4. You can do (3) in your CI scripts
5. You can add a custom webhook in GitHub to notify your server about a new version; your server can then pull the code and restart
6. You can add a custom webhook in CI and do the same as in (5)
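As a concrete sketch of option (1), the classic pattern is a bare repo plus a post-receive hook on the droplet (the user, paths, and pm2 are all assumptions):

# On the droplet
git init --bare /home/deploy/repos/myapp.git
mkdir -p /home/deploy/myapp

# /home/deploy/repos/myapp.git/hooks/post-receive (make it executable)
#!/bin/sh
git --work-tree=/home/deploy/myapp --git-dir=/home/deploy/repos/myapp.git checkout -f master
cd /home/deploy/myapp && npm install --production && pm2 restart myapp

# On your dev machine
git remote add droplet deploy@your-droplet-ip:repos/myapp.git
git push droplet master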