I'm trying to set up backup and restore for a Cassandra node using Priam. I want to upload snapshots to S3 and restore from there.
I found the Priam setup documentation, but I didn't understand the steps given there.
I cloned the repository and ran:
./gradlew build
I have already set up the ASGs.
Can someone give me detailed steps on how to install Priam and run backup and restore?
Hopefully you have solved it already. You need to investigate how to deploy WARs (for example to a Tomcat server, which is basically just copying the WAR file into place and starting the server service) and how to create ASGs (Auto Scaling groups) in Amazon Web Services (see http://docs.aws.amazon.com/autoscaling/latest/userguide/GettingStartedTutorial.html).
Basically, Priam runs as a webapp in Tomcat, is configured through a *.yaml file, and helps you manage the Cassandra nodes through a REST interface (see https://github.com/Netflix/Priam/wiki/REST-API).
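As a very rough sketch of what that looks like in practice (the WAR path and the REST endpoint below are assumptions on my part - check your build output and the REST API wiki linked above for the exact names):
# Deploy the WAR produced by ./gradlew build into Tomcat and start it.
# The WAR file name/path and Tomcat directory may differ on your setup.
cp priam-web/build/libs/priam-web-*.war /var/lib/tomcat/webapps/Priam.war
sudo service tomcat restart
# Trigger a snapshot backup (uploaded to the S3 bucket configured in Priam's yaml).
# The endpoint path is an assumption - see the REST API wiki for the actual route.
curl http://localhost:8080/Priam/REST/v1/backup/do_snapshot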
I am creating an Azure pipeline for the first time in my life (it's my first pipeline of any kind), and there are some basic concepts that I don't understand.
First of all, I have trouble understanding how the installation works: if my .yaml file installs Liquibase, will the Liquibase installation run every time the pipeline is triggered (by pushing to GitHub)?
Also, I don't know how to run Liquibase commands from the agent. I see here that they use the Liquibase .bat file; I guess you have to download the zip from the Liquibase website and put it on the agent, but how do you do that?
You can set up Liquibase in a couple of different ways:
You can use the Liquibase Docker image in your Azure pipeline. You can find more information about using the Liquibase Docker image here: https://docs.liquibase.com/workflows/liquibase-community/using-liquibase-and-docker.html
You can install Liquibase on an Azure agent and ensure that all Liquibase jobs run on that specific agent where Liquibase is installed. Liquibase releases can be downloaded from: https://github.com/liquibase/liquibase/releases
The URL you point to shows that Liquibase commands are invoked from the C:\apps\Liquibase directory.
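As a minimal sketch of the Docker approach, a pipeline step on the agent could run something like the following (the changelog path, JDBC URL, and credentials are placeholders you would replace with your own values, ideally pulling the password from a pipeline secret):
# Run a Liquibase update through the official Docker image, so nothing
# needs to be pre-installed on the agent itself.
docker run --rm \
  -v "$(pwd)/db/changelog:/liquibase/changelog" \
  liquibase/liquibase \
  --changeLogFile=changelog/db.changelog-master.xml \
  --url="jdbc:postgresql://my-db-host:5432/mydb" \
  --username=myuser \
  --password=mypassword \
  update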
This article offers a step-by-step guide to deploying a .NET Core Worker Service as a Linux systemd service unit/daemon. Thanks to the author, I successfully achieved the desired result.
There is one thing, however, that I find counter-intuitive in the whole process: having these pre-make build cleanup steps - before creating the .tar file - when eventually the rules makefile will define the publish (and implicitly a build) command in the build override target.
I assume dh_make works on a tar file, and that's why we create it before the publish, but wouldn't the cleanup steps fit better if they were placed after the publish execution?
Could anyone clarify that?
We have a new project in which we are trying to make use of the built-in continuous integration in Kentico for tracking changes to templates, page types, transformations, etc.
We have managed to get this working locally between two instances of a Kentico database: making changes in one, syncing the changes through CI, and then restoring them to the second database using the ContinuousIntegration application that sits in the bin folder of the Kentico site.
The issue we are having is when it comes to deploying our changes to our dev and live environments.
Our sites are hosted as Azure App Services and we deploy to them using VSTS (Azure DevOps) build and release workflows. However, as these tasks run on an agent, any PowerShell script we try to run to trigger the CI application fails because it is not running in the site/server context.
My question is: has anyone managed to successfully run Kentico CI in the context of an Azure App Service? Alternatively, how can I trigger a PowerShell script on the site following a deployment?
Yes, I've got this running in Azure DevOps within the release pipeline itself. It's something that we're currently rolling out as a business where I work.
The key steps to getting this working for me were as follows:
I don't want to deploy the ContinuousIntegration.exe or the repository folders, so I need to create a second artefact set from source control (this is only possible at the moment with Azure Repos and GitHub to my knowledge).
Unzip your deployment package and copy the CMS folder to a working directory; this is where you're going to run CI. I did this because I need the built assemblies available.
From the repo artefact in step 1, copy the ContinuousIntegration.exe and CI repository folders into the correct place in your unzipped working folder.
Ensure that the connection string in your unzipped folder actually works for the DB; if necessary, you may want to change your VS build options with regard to how the web.config is handled.
From here, you should be able to run CI in the new working folder against your target database.
In my instance, as I don't have CI running on the target site it means that everything is restored every time.
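For reference, the command the release stage ends up running once the working folder is assembled is roughly the following (the working directory path is illustrative, not a fixed location):
rem Run the CI restore from the bin folder of the unzipped working copy.
rem The working directory path below is an example only.
cd /d C:\agent\_work\ci-working\CMS\bin
ContinuousIntegration.exe -r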
I'm in the process of writing this up in more detail, so I'll share it here when I've done that.
Edit:
- I finally wrote this up in more detail: https://www.ridgeway.com/blog/web-development/using-kentico-12-mvc-with-azure-devops
We do, but without CI: VSTS + Git. We store virtual objects in the file system and use Git for version control. We have our own custom library that does import/export of the Kentico objects (the ones that are not controlled by Git). Essentially we have a JSON file, a "publishing manifest", where we specify which objects need to be exported (i.e. moved between environments).
There is a step from Microsoft called 'PowerShell on Target Machines'; I guess you can look into that.
P.S. Take a look also at Three Ways to Manage Data in Kentico Using PowerShell
Deploy your CI files to the Azure App Service, and then use an Azure WebJob to run ContinuousIntegration.exe.
If you place a file called KenticoCI.bat in the directory \App_Data\jobs\triggered\ContinuousIntegration, this will automatically create a web job that you can trigger:
KenticoCI.bat
rem # Takes the site offline by renaming 'App_Offline.bak' to 'App_Offline.htm'
cd D:\home\site\wwwroot
ren App_Offline.bak App_Offline.htm
rem # Run the Kentico CI integration (restore tracked objects to the database)
cd D:\home\site\wwwroot\bin
ContinuousIntegration.exe -r
rem # Renames 'App_Offline.htm' back to 'App_Offline.bak' to bring the site back online
cd D:\home\site\wwwroot
ren App_Offline.htm App_Offline.bak
I'm forking the GitLab CE source code to make a few small changes, and I want to set up continuous deployment on my forked project to deploy to the cloud (this could be an Ubuntu server in the cloud). Could you share some suggestions about how to set up this CD?
Have you considered using the GitLab GDK? https://gitlab.com/gitlab-org/gitlab-development-kit It's very simple to install and makes it easy to contribute. It is very well documented in that repository.
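For reference, the basic GDK flow looks roughly like this (these commands follow the GDK documentation at the time of writing and may have changed, and the fork URL is a placeholder; verify against the current README):
# Install the GDK tool and bootstrap a GitLab instance built from your fork.
gem install gitlab-development-kit
gdk init
cd gitlab-development-kit
gdk install gitlab_repo=https://gitlab.com/<your-namespace>/gitlab-ce.git
gdk run   # newer GDK versions use `gdk start`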
I have an Azure Website configured to deploy from a Bitbucket repository. This works fine.
Since the application is still in active development, I update the NuGet packages it uses quite frequently. This causes the packages folder to keep growing indefinitely unless I go and manually delete the packages.
Now, on my local machine this is not a big issue; space is cheap. But in Azure, this makes us go over the quota really fast as old packages accumulate.
How can I customize the Azure deploy process so that it deletes all the packages after a successful deployment?
(I am open to other solutions as well)
You can utilize the custom deployment script feature where you add a step that cleans up the packages directory.
You can read about it here:
http://blog.amitapple.com/post/38418009331/azurewebsitecustomdeploymentpart2/
Another option is to add a post-deployment action by adding a script file (.cmd/.bat) with the cleanup logic to the following directory in your site: d:\home\site\deployments\tools\PostDeploymentActions\. This script will run after the deployment completes successfully.
Read more about it here:
https://github.com/projectkudu/kudu/wiki/Post-Deployment-Action-Hooks
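As a rough sketch of the post-deployment action approach, a script like the following could be dropped into that folder (the packages path is an assumption - point it at wherever the packages folder actually ends up on your site):
rem cleanup-packages.cmd - placed in d:\home\site\deployments\tools\PostDeploymentActions\
rem Deletes the NuGet packages folder after each successful deployment.
rem The path below is an assumption; adjust it to your actual packages location.
if exist "D:\home\site\repository\packages" rmdir /s /q "D:\home\site\repository\packages"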