I'm using Service Fabric as a container for deploying existing executables.
I intend to spawn a listener on the endpoint configured at deployment time. Is it possible to get the endpoint settings from the context somehow? I know that the stateful/stateless/actor boilerplate project types allow retrieval of the CodePackageActivationContext, but what about a basic console project deployed as an exe?
Thanks
You should be able to retrieve the activation context using FabricRuntime.GetActivationContext().
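A minimal sketch of what that can look like from a plain console exe deployed as a guest executable. This assumes an endpoint resource named "ServiceEndpoint" declared in ServiceManifest.xml (a placeholder name) and a reference to the System.Fabric assemblies (e.g. via the Microsoft.ServiceFabric NuGet package):

```csharp
using System;
using System.Fabric;

class Program
{
    static void Main()
    {
        // Only works when the exe is activated by Service Fabric,
        // because the runtime supplies the activation context.
        CodePackageActivationContext context = FabricRuntime.GetActivationContext();

        // "ServiceEndpoint" is a placeholder; it must match the
        // <Endpoint Name="..."> resource declared in ServiceManifest.xml.
        var endpoint = context.GetEndpoint("ServiceEndpoint");

        string listenAddress =
            $"{endpoint.Protocol.ToString().ToLowerInvariant()}://+:{endpoint.Port}/";
        Console.WriteLine($"Listening on {listenAddress}");

        // ...spawn your listener on listenAddress here...
    }
}
```

Outside a Service Fabric activation (e.g. starting the exe directly from the command line), GetActivationContext() will fail, so guard for that if you also run the exe standalone.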
I have been attempting this for a good chunk of today but still have not found a solution.
I have a built Spring Boot application in the form of a jar.
I push this to a storage account container as a blob with azurerm_storage_blob
I reference this from an azurerm_app_service in app_settings.WEBSITE_RUN_FROM_PACKAGE using a data.azurerm_storage_account_sas
I see that it has pulled the blob from storage in the app-service but it has exploded it under D:\home\site\wwwroot
I have set site_config.java* (java_version, java_container and java_container_version) but it makes no attempt to start the application
I see there is a site_config.app_command_line but none of the examples I have found set this.
Has anybody gotten a Spring Boot application running in a Windows App Service using Terraform?
Is there a better way to get the application jar to Azure using Terraform?
There are various ways to deploy your application to Azure App Service. For your scenario, I recommend not setting WEBSITE_RUN_FROM_PACKAGE and making sure your executable jar is named app.jar and dropped into the root of your Web App's content folder (/site/wwwroot).
App Service will automatically take care of setting the appropriate SERVER_PORT environment variable behind the scenes, so that when your Spring Boot application starts, it will listen on the correct port.
If you need to set parameters, you can always set JAVA_OPTS in the App Service Settings section in the Azure portal; those will travel as environment variables and ultimately be used by java.exe upon start.
If you hit any rough edge, feel free to open a ticket in Azure portal and we will be able to assist you better to make sure your app runs well in Azure App Service.
Another popular mechanism for deployment is using Maven:
https://learn.microsoft.com/en-us/azure/java/spring-framework/deploy-spring-boot-java-app-with-maven-plugin
I'm trying to get three or more applications running with Service Fabric. They would all use the same API services; the only difference is that each of them would get its configuration from a different storage. What would be the right way to pass the correct connection string without using environment variables?
Add your configuration parameters for each SF application under the application's ApplicationPackageRoot/ApplicationManifest.xml file, then specify the parameters in the "ApplicationParameters" folder for your publish profiles. More info on this approach can be found here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-manage-multiple-environment-app-configuration
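On the service side, a rough sketch of reading such a parameter at runtime, assuming the Visual Studio default configuration package name "Config" and a hypothetical Settings.xml section "Storage" with a "ConnectionString" parameter that each application's ApplicationManifest.xml overrides:

```csharp
using System.Fabric;

static class ServiceConfig
{
    // "Config", "Storage" and "ConnectionString" are placeholders; they must
    // match the config package in ServiceManifest.xml, the section/parameter
    // in Settings.xml and the per-application override in ApplicationManifest.xml.
    public static string GetConnectionString()
    {
        CodePackageActivationContext context = FabricRuntime.GetActivationContext();
        var configPackage = context.GetConfigurationPackageObject("Config");

        return configPackage.Settings.Sections["Storage"]
                            .Parameters["ConnectionString"].Value;
    }
}
```

Each application instance then resolves its own value from its own manifest/publish profile, so no environment variables are involved.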
I have deployed a Web API written with .net Core to a local dev Azure Service Fabric cluster on my machine. I have plenty of disk space, memory, etc, and the app gets deployed there. However, success is intermittent. Sometimes it doesn't deploy to all the nodes, and now I have it deployed to all the nodes, but within the Azure Service Fabric Manager page, I see each application in each node has an Error status with the message: "The ServiceType was not registered within the configured timeout." I don't THINK I should have to remove and redeploy everything. Is there some way I can force it to 're-register' the installed service type? Microsoft docs are really really thin on troubleshooting these clusters.
Is there some way I can force it to 're-register' the installed service type?
On your local machine you can set the deployment to always remove the application when you're done debugging. However, if it's not completing in the first place I'm not sure if this workflow would still work.
Since we're on the topic, in the cloud I think you'd just have to use PowerShell scripts to first compare the existing app types and versions and remove them before "updating". Since the orchestration of this is complicated, I like to use tools to manage it.
In VSTS, for instance, there is an overwrite SameAppTypeAndVersion option.
And finally, if you're just tired of using the Service Fabric UI to remove the Application over and over while you troubleshoot it might be faster to use the icon in the system tray to reset the cluster to a fresh state.
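If you'd rather script that compare-and-remove step from C# instead of PowerShell, a hedged sketch using the FabricClient API might look like this ("fabric:/MyApp", "MyAppType" and "1.0.0" are placeholders for your own application name, type and version):

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class ClusterCleanup
{
    static async Task Main()
    {
        // The parameterless constructor connects to the local dev cluster;
        // pass endpoints/credentials for a remote cluster.
        var client = new FabricClient();

        // Compare what is already provisioned on the cluster.
        var appTypes = await client.QueryManager.GetApplicationTypeListAsync();
        foreach (var type in appTypes)
            Console.WriteLine($"{type.ApplicationTypeName} {type.ApplicationTypeVersion}");

        // Remove the stuck application instance first, then unprovision its type.
        await client.ApplicationManager.DeleteApplicationAsync(
            new DeleteApplicationDescription(new Uri("fabric:/MyApp")));
        await client.ApplicationManager.UnprovisionApplicationAsync("MyAppType", "1.0.0");
    }
}
```

As with the PowerShell route, every application instance of a type has to be deleted before the type itself can be unprovisioned.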
Is it possible to setup Continuous Integration on VSTS without using external VM as build agent (https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/)?
What I would like to achieve is to have one Service Fabric solution with 2 stateful/stateless services (serviceA and serviceB). I want to build and deploy them separately as different build jobs on VSTS, but deploy them to the same Service Fabric cluster on Azure (fabric:/App/ServiceA, fabric:/App/ServiceB).
As of the Service Fabric SDK 2.1.150 and Runtime 5.1.150 release, it is possible to deploy a Service Fabric application using VSTS's hosted build agent, as the dependencies can be added via a NuGet package - refer to the following video for details: http://www.dotjson.uk/azure-service-fabric-continous-integration-and-deployment-in-15-minutes/
In your specific case; just create 2 build definitions (1 for each service) and 2 release definitions (1 for each service) and hook them up to the same hosted Service Fabric cluster.
Unfortunately, deploying applications relies on the Service Fabric SDK being installed, so you'll need to set up an agent as the instructions suggest. If you don't want to pay for the Azure VM, you might want to consider running the agent service locally, e.g. on your dev box.
Note that with Service Fabric you deploy applications, not services. You can however update services independently.
It sounds like you need to have the Service Fabric SDK installed on the build machine, and I'm guessing the hosted agent doesn't have that. If that's the case, then yes, you need to create your own build server VM.
I have created a sample web role application using a cloud service. Before hosting my application in the cloud, I want to test the application in the Dev Fabric. I am sure that when we run the application from VS, it creates an environment that simulates the cloud.
But if I want to give my application to QA for testing, do I still need to give them my source and run the application from VS under the Dev Fabric, or is there any other way of running my deployed package under the Dev Fabric?
In a line, my question is: how do I run my packaged Azure application under the Dev Fabric before hosting it in the cloud?
Can anyone with an idea please share some information?
Thanks for your quick response. The CSRun command helped in accomplishing my requirement. But I can see that it is taking an IP address, http://127.0.0.1:80/, by default.
I am also trying to find out whether there is a way we can change this to a proper name instead of an IP,
for example http://localhost/, or
with the deployed machine name, like http://applicationserver/webrole1/ - so that we can access it from any machine in the network.
I went through the Dev Fabric UI, where we can see the current instances running, but I didn't find any options for these.
Please share some information on this.
When you run your application locally, a different kind of package gets created (actually a directory) with a .csx extension.
As long as you have that .csx directory and your configuration file (.cscfg), you can run the package by using the "csrun" command. (So no, you don't need Visual Studio.)
You can use this blog post to access Azure services running in the DevFabric (DF) from other boxes:
http://blog.ehuna.org/2009/10/an_easier_way_to_access_the_wi.html