I am using "Click to Deploy" to create a 3-node Cassandra cluster in my project.
Now I need to create one more cluster for another purpose in the same project.
I am not able to create a new one: it shows that the cluster is already installed, and the only option is to delete the existing cluster.
This is a known limitation in the current version of Click to Deploy. We are working on an update that will allow multiple deployments in a single project. To @chrispomeroy's point, a current workaround is to create another project and deploy your next cluster there.
Basically, I am looking for a way to deploy Azure Service Fabric applications where services are upgraded based on nothing but the version number.
I am working on deploying a Service Fabric application via Azure DevOps. I have written a script that does a diff and updates the version numbers in ApplicationManifest.xml and ServiceManifest.xml. This script has been tested, and it updates the versions correctly for the services that have changed.
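For reference, a version-bump script along these lines can be done with plain XML tooling. The following is a minimal sketch, not the asker's actual script; the `bump_manifest_versions` helper and its behavior (bumping the root plus every package element) are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

# Service Fabric manifests declare this XML namespace.
NS = "http://schemas.microsoft.com/2011/01/fabric"
ET.register_namespace("", NS)

def bump_manifest_versions(manifest_path, new_version):
    """Set the Version attribute on the manifest root and on every
    CodePackage, ConfigPackage, and DataPackage element inside it."""
    tree = ET.parse(manifest_path)
    root = tree.getroot()
    root.set("Version", new_version)
    for tag in ("CodePackage", "ConfigPackage", "DataPackage"):
        for pkg in root.iter("{%s}%s" % (NS, tag)):
            pkg.set("Version", new_version)
    tree.write(manifest_path, xml_declaration=True, encoding="utf-8")
```

A real script would additionally diff package contents against the previous build to decide *which* manifests to bump, which is the part the asker says already works.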
Now, when I try to deploy, I get the following error message:
##[error]The content in CodePackage Name:Code and Version:1.0.111 in Service Manifest 'MyMicroServicePkg' has changed, but the version number is the same.
This error message keeps showing for one service after another until I have updated the version on every single package. Basically it is forcing me to update every single package.
Here is the publish profile I am using:
<?xml version="1.0" encoding="utf-8"?>
<PublishProfile xmlns="http://schemas.microsoft.com/2015/05/fabrictools">
  <ClusterConnectionParameters .... />
  <ApplicationParameterFile Path="..\ApplicationParameters\PublishProfName.xml" />
  <CopyPackageParameters CompressPackage="true" />
  <UpgradeDeployment Mode="Monitored" Enabled="true">
    <Parameters FailureAction="Rollback" Force="True" />
  </UpgradeDeployment>
</PublishProfile>
Here is the DevOps task in YAML:
- task: ServiceFabricDeploy@1
  inputs:
    publishProfilePath: $(publishProfilePath)
    applicationPackagePath: $(applicationPackagePath)
    serviceConnectionName: ${{ parameters.connection }}
    overrideApplicationParameter: true
    upgradeMode: Monitored
    FailureAction: Rollback
    # 60 min timeout
    UpgradeTimeoutSec: 3600
I have looked this issue up online. Generally, people talk about how to make sure all services with code changes have their versions updated. In my case, I am positive the versions are updated for the changed services.
How do I configure the deployment so that it does not do any content comparison and upgrades only the services that have an updated version?
The gist of the answer, based on my experience, is that Service Fabric deploys the entire application as a single versioned collection of its internal packages, but leaves the specifics of service deployment open to customization. So even if you haven't changed a single service within that application, it's still going to be bundled up in the application deployed to the cluster, and you still have to decide whether or not to deploy each service (or use the provided Deploy-FabricInstaller.ps1 script to handle such deployments).
My suggestion (based on my own pipeline), if you really trust that your script detects changed versions properly and you really want to avoid upgrading services you haven't changed (presumably to cut down on deployment time), is to shift your approach: don't fight the application deployment; optimize the service installation instead.
Build the solution as you normally would, and use the ServiceFabricUpdateManifests@2 task to replace the versions of each service automatically.
At this point, I generally diverge from the standard pipeline.
I've got a script that removes the DefaultServices from ApplicationManifest.xml, so no services are deployed automatically.
I run a separate script that checks whether the application already exists (to determine whether this is a fresh deployment or an upgrade).
Based on whether it's an install or an upgrade, I take a different deployment task route and capture information about the existing services in the cluster.
Once the application has been fully deployed, a custom script of mine (similar to the included Deploy-FabricApplication.ps1 script) iterates through the services and installs, upgrades, or removes each one based on its state in the new deployment, using ServiceTemplates to populate configuration values.
Because I then control the whole of the actual service install process, I can infer, from data I collect and pass through the build pipeline, what should and shouldn't be deployed, independent of the application deployment.
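The first step above, stripping DefaultServices so nothing deploys automatically, can be sketched like this (an illustrative sketch only; the real script may well be PowerShell rather than Python):

```python
import xml.etree.ElementTree as ET

# ApplicationManifest.xml uses the Service Fabric XML namespace.
NS = "http://schemas.microsoft.com/2011/01/fabric"
ET.register_namespace("", NS)

def remove_default_services(app_manifest_path):
    """Strip the <DefaultServices> element so the cluster does not
    create the services automatically when the application is deployed."""
    tree = ET.parse(app_manifest_path)
    root = tree.getroot()
    default_services = root.find("{%s}DefaultServices" % NS)
    if default_services is not None:
        root.remove(default_services)
        tree.write(app_manifest_path, xml_declaration=True, encoding="utf-8")
```

With DefaultServices gone, service creation becomes an explicit step in your own deployment script, which is what makes the selective install/upgrade/remove logic possible.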
You might find the customization process a bit easier going this route compared to the approach you're taking.
I'm looking for some advice concerning release management in Azure.
I've found a lot of resources, but I still have some questions.
I have an ASP.NET 4 solution (to keep it simple: one ASP.NET project, one database project, one test project).
I'm using Git in Visual Studio Online.
At the moment I have one App Service and one SQL Server database in Azure.
I have a build that downloads NuGet packages, builds the solution, executes a DACPAC for the database, runs the tests from the test project (I have integration tests that use the database), and finally deploys the app to an Azure App Service.
What I want to do seems like "normal stuff":
I want to create the build, then deploy it to a "dev" environment in Azure, then to a "qa" environment, then to "staging", and then to "prod".
In my web project, I created different web.config transformations (one for each environment).
I've seen the releases in Visual Studio Online, and I get that they are for the deployment part across the different environments.
What I have questions about:
In Azure:
Do I create one App Service per environment? Or do I create a single App Service and use slots?
Do I create one SQL Server per environment, or is it best to use a single SQL Server with one database per environment?
In Visual Studio Online:
How do I set up the tasks?
In the build part, which configuration do I use? Which environment do I select?
In the build, how do I manage the database project? I think the deployment part should be in the release, but I don't see how to configure the connection string.
For the tests: do I execute them in the release part? How do I change the connection strings and app settings? There are no transformations for app.config.
I've seen that there are test plans as well, but I don't really get them.
Can somebody help me see all of this a little more clearly?
I cannot answer all of these, but I can answer a few.
I would create separate Web App instances for your separate environments. With slots, your slots exist on the same Web App and share computing resources. If something goes horribly wrong (your staging code pegs CPU to 100% or eats all of your RAM), this will cause problems for your production slot. I see slots as part of A/B testing or to aid in deployment.
You will likely need a separate database per environment as well. This is almost always required if you will be upgrading your database schema at some point and introducing breaking changes. For example, what happens if your production code requires a specific field in a database table, but the next version of the database removes that field?
You mentioned you're using web.config transforms, but I want to throw out another option that we've found to be easier and have fewer moving parts and sources for error. If you're just changing connection strings and AppSettings, consider using the Web App's application settings per environment. These override whatever is in your web.config. Doing so means you can forget about web.config transforms and not have one more thing that could possibly go wrong in a deployment.
Since you're using a database project, check out the VSTS Azure SQL Database Deployment task to deploy your database. It will take the DACPAC built from your database project and deploy it to the target server.
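In YAML form, a release step for that task might look roughly like this (a sketch; the variable names here are placeholders you would define per environment, not values from the question):

```yaml
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: $(azureSubscription)   # service connection, one per environment
    ServerName: $(sqlServerName)
    DatabaseName: $(sqlDatabaseName)
    SqlUsername: $(sqlUsername)
    SqlPassword: $(sqlPassword)
    DeployType: DacpacTask
    DacpacFile: $(dacpacPath)                 # the .dacpac produced by the database project build
```

Keeping the server/database names in per-environment variables is also what lets the same release definition target dev, qa, staging, and prod.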
I have created an AWS Data Pipeline using the EMR template, but it's not installing Spark on the EMR cluster. Do I need to set any special action for that?
I see that some bootstrap action is needed for the Spark installation, but that is not working either.
That install-spark bootstrap action is only for 3.x AMI versions. If you are using a releaseLabel (emr-4.x or later), the applications to install are specified in a different way.
When you are creating a pipeline, click "Edit in Architect" at the bottom (or edit your pipeline from the pipelines home page). You can then click on the EmrCluster node and select Applications from the "Add an optional field..." dropdown. That is where you can add Spark.
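In the pipeline definition itself, the same choice shows up as fields on the EmrCluster object. A minimal sketch (the id, name, and instance types here are made-up examples):

```python
# A Data Pipeline EmrCluster object using a release label instead of a 3.x
# AMI version. With releaseLabel, Spark is requested via the "applications"
# field rather than an install-spark bootstrap action.
emr_cluster = {
    "id": "MyEmrCluster",            # illustrative id
    "name": "MyEmrCluster",
    "type": "EmrCluster",
    "releaseLabel": "emr-5.13.0",    # any emr-4.x or later release label
    "applications": ["spark"],       # replaces the 3.x bootstrap action
    "masterInstanceType": "m4.large",
    "coreInstanceType": "m4.large",
    "coreInstanceCount": "2",
}
```

If you keep an `amiVersion` field on the cluster object instead of `releaseLabel`, the `applications` field will not apply, which is the usual reason Spark silently fails to appear.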
Can I install applications on the VDA in XenApp Essentials and use them in the catalog? I am talking about defining a catalog, then using that same catalog in Azure to install applications. Would those applications be reflected in the catalog?
This is not a recommended approach. It is always better to update your master image with the application and then update the catalog with the new master image.
Here is a video that walks you through catalog image update workflow:
https://www.youtube.com/watch?v=vAMOoYLhMTw
I'm trying to set up backup and restore for a Cassandra node using Priam. I want to upload the snapshot to S3 and restore from there.
I found the Priam setup page, but I didn't understand the steps given there.
I cloned the git repository and ran
./gradlew build
I have already set up the ASGs.
Can someone give me fully described steps on how to install Priam and execute a backup and restore?
Hopefully you have solved it already. You had to investigate further how to deploy WARs (for example in a Tomcat server, which is basically just copying a WAR file and starting the server service) and how to create ASGs (Auto Scaling groups) in Amazon Web Services (see http://docs.aws.amazon.com/autoscaling/latest/userguide/GettingStartedTutorial.html).
Basically, Priam runs as a webapp in Tomcat, is configured in a *.yaml file, and helps you manage the Cassandra nodes through a REST interface (see: https://github.com/Netflix/Priam/wiki/REST-API).