Upgrade/downgrade a Service Fabric application to an already deployed version - Azure

In a Service Fabric cluster, if an application has multiple provisioned versions (say 1.0.0, 1.0.1, 1.0.2), how can we move the application from one version to another (say the active version is 1.0.0 and I want to move to 1.0.1) without redeploying the application? Is there a PowerShell command to do this?

You should be able to use the PowerShell command
Start-ServiceFabricApplicationUpgrade
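For example, a minimal sketch (the application name fabric:/MyApp and the endpoint are placeholders, and this assumes the target version 1.0.1 has already been provisioned in the cluster):

# Connect to the cluster first (local dev endpoint shown)
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000
# Move the running application to the already-provisioned 1.0.1 version
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/MyApp -ApplicationTypeVersion 1.0.1 -Monitored -FailureAction Rollback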
That said, I did hit an issue with my local cluster telling me I couldn't upgrade / roll back the application if the service description had changed, which it hadn't. Using an Azure-hosted cluster this worked as expected; perhaps there is an inconsistency in how the package is copied into the image store.
Depending on what you are attempting to achieve, you could also look at named application instances, which let you deploy multiple versions of an application at once, e.g. for A/B testing.
EDIT:
Thanks to Aleksey L for the comment below. With a bit of messing around due to the types not being the same, this will work as long as you haven't changed any parameters between versions; if you have, you will need to build up the hash table manually.
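A sketch of what that looks like (fabric:/MyApp is a placeholder; the ApplicationParameters collection on the application object is not a hashtable, so it has to be copied into one):

# Read the current application and copy its parameters into a hashtable
$app = Get-ServiceFabricApplication -ApplicationName fabric:/MyApp
$parameters = @{}
foreach ($p in $app.ApplicationParameters) { $parameters[$p.Name] = $p.Value }
# Pass the current values along so the upgrade keeps them
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/MyApp -ApplicationTypeVersion 1.0.1 -ApplicationParameter $parameters -Monitored -FailureAction Rollback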

Related

How can I run node v10.x on Azure Functions on a Linux host?

I've been working on a small function to automate my certificate renewal in Azure Functions.
The function works in my local emulator (in vscode), running under node v10.15.3.
However, when running it online, an exception is raised on the syntax of an async iterator when the file containing it is included:
Stack: /home/site/wwwroot/node_modules/acme-dns-01-cloudflare/index.js:125
for await(const zone of consumePages(pagination =>
It's my understanding that this syntax was adopted in Node versions 10.x. I therefore added the console output line console.log(process.versions); and got the response that the function is running Node version 8.16.1. I then checked the WEBSITE_NODE_DEFAULT_VERSION application setting and confirmed it is set to 10.14.1. I have also tried another recommended setting of ~10 and got the same result.
Unfortunately the documentation is difficult to search for such a specific issue, but I have not yet come across anything that states that Linux Functions are limited to Node v8.x.
As extra information, FUNCTIONS_WORKER_RUNTIME is set to "node", and the runtime version is 2.0.12733.0 (~2).
At time of writing, this issue on github highlights the problem https://github.com/Azure/azure-functions-host/issues/4948. Different node versions are simply not available on Linux consumption plans regardless of the setting in WEBSITE_NODE_DEFAULT_VERSION.
Hopefully their new arrangements will be in place soon for anyone else who has this issue.
For now you can switch to a Windows consumption plan, or potentially switch to an App Service plan (I haven't checked this, as it sort of defeats the point of functions).
Refer to issue1 and issue2: it looks like WEBSITE_NODE_DEFAULT_VERSION won't work for Linux Functions; you have to set the LinuxFxVersion property to select the Node version.
Here is the flow I used to change it.
1. Go to your Function App in the portal and open the Resource Explorer. You will find that LinuxFxVersion is node:2.0-node8-appservice.
2. Select the web node under config, then choose the Edit button. Find linuxFxVersion and change the value to NODE|10.14, then click the PUT button to update the setting. Restart your Function App and check the Node version; you will find it is 10.14.
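Equivalently, the same property can be set from the command line - a sketch, assuming the Azure CLI is installed and with placeholder app and resource-group names:

# Set the runtime stack for the Linux function app
az functionapp config set --name <my-function-app> --resource-group <my-rg> --linux-fx-version "NODE|10.14"
# Verify the change
az functionapp config show --name <my-function-app> --resource-group <my-rg> --query linuxFxVersion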

Azure function - "Did not find any initialized language workers"

I'm running an Azure Function in Azure; the function gets triggered by a file being uploaded to a blob storage container. The function detects the new blob (file) but then outputs the following message - Did not find any initialized language workers.
Setup:
Azure Function using Python 3.6.8
Running on a Linux machine
Built and deployed using Azure DevOps (for CI/CD capability)
Blob trigger function
I have run the code locally using the same blob storage container, the same configuration values and the local instance of the azure function works as expected.
The function's core purpose is to read the .xml file uploaded to the blob storage container, then parse and transform the data in the XML to be stored as JSON in Cosmos DB.
I expect the process to complete like on my local instance with my documents in cosmos db, but it looks like the function doesn't actually get to process anything due to the following error:
Did not find any initialized language workers
Troy Witthoeft's answer was almost certainly the right one at the time the question was asked, but this error message is very general. I've had this error recently on runtime 3.0.14287.0. I saw the error on many attempted invocations over about 1 hour, but before and after that everything worked fine with no intervention.
I worked with an Azure support engineer who gave some pointers that could be generally useful:
Python versions: if you have function runtime version ~3 set under the Configuration blade, then the platform may choose any of python versions 3.6, 3.7, or 3.8 to run your code. So you should test your code against all three of these versions. Or, as per that link's suggestion, create the function app using the --runtime-version switch to specify a specific python version.
Consumption plans: this error may be related to a consumption-priced app having idled off and taking a little longer to warm back up again. This depends, of course, on the usage pattern of the app. (I infer (but the Engineer didn't say this) that perhaps if the Azure datacenter my app is in happens to be quite busy when my app wants to restart, it might just have to wait for some resources to become available.) You could address this either by paying for an always-on function app, or by rigging some kind of heartbeat process to stop the app idling for too long. (Easiest with an HTTP trigger: probably just ping it? See the sketch after this list.)
The Engineer was able to see a lower-level error message generated by the Azure platform, that wasn't available to me in Application Insights: ARM authentication token validation failed. This was raised in Microsoft.Azure.WebJobs.Script.WebHost.Security.Authentication.ArmAuthenticationHandler.HandleAuthenticate() at /src/azure-functions-host/src/WebJobs.Script.WebHost/Security/Authentication/Arm/ArmAuthenticationHandler.cs. There was a long stack trace with innermost exception being: System.Security.Cryptography.CryptographicException : Padding is invalid and cannot be removed.. Neither of us were able to make complete sense of this and I'm not clear whether the responsibility for this error lies within the HandleAuthenticate() call, or outside (invalid input token from... where?).
The last of these points may be some obscure bug within the Azure Functions Host codebase, or some other platform problem, or totally misleading and unrelated.
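On the heartbeat idea above, a minimal keep-warm sketch in PowerShell (the URL is a placeholder for an HTTP-triggered endpoint on your app; schedule it every few minutes via Task Scheduler or cron):

# Ping the function so the consumption-plan worker doesn't idle off
try {
    Invoke-WebRequest -Uri "https://<my-function-app>.azurewebsites.net/api/ping" -UseBasicParsing | Out-Null
} catch {
    Write-Warning "Keep-warm ping failed: $_"
}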
Same error but different technology, environment, and root cause.
Technology: .NET 5, target system Windows. In my case I was using dependency injection to add a few services and was reading one parameter from the environment variables inside the .ConfigureServices() section, but when I deployed I forgot to add that variable to the application settings in Azure; because of that I was getting this weird error.
This is due to the SDK version. I would suggest deploying a fresh function app in Azure and deploying your code there. Two things to check:
Make sure your local function app SDK version matches the Azure function app's.
Check the Python version on both sides - the commands below show one way to compare.
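As a rough sketch of how to compare (assumes Azure Functions Core Tools and the Azure CLI are installed; the app and resource-group names are placeholders):

# Local side: tooling and Python versions
func --version
python --version
# Azure side: inspect the configured runtime stack
az functionapp config show --name <my-function-app> --resource-group <my-rg> --query linuxFxVersion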
This error is most likely github issue #4384. This bug was identified, and a fix was released in mid-June 2020. Apps running on version 3.0.14063 or greater should be fine. A list of versions is here.
You can use Azure Application Insights to check your version: run a Kusto query against the logs. In the exceptions table, the SDK version column has your version.
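For example, a sketch of such a query run through the Azure CLI (requires the application-insights CLI extension; the app ID is a placeholder, and the inner Kusto query can equally be pasted into the Logs blade):

# List the SDK version recorded against recent exceptions
az monitor app-insights query --app <app-insights-id> --analytics-query "exceptions | project timestamp, sdkVersion | order by timestamp desc | take 10"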
If you are on a dedicated App Service plan, you may be able to "pull" the latest version from Microsoft by deleting and redeploying your app. If you are on a consumption plan, then you may need to wait for this bugfix to roll out to all servers.
It took me a while to find the cause as well, but it was related to my explicitly installing a version of protobuf that conflicted with the one used by Azure Functions. To be fair, there was a warning about that in the docs. How I found it: I went to <your app name>.scm.azurewebsites.net/api/logstream and looked for any errors I could find.

Local Service Fabric cluster does not allow the same application type with a different version

The following post (on stackoverflow.com):
Design of Application in Azure Service Fabric
suggested that it is possible to have side-by-side installations of the same application type with different versions. I tried to install a new version of the application (fabric:/ServiceFabApp1, now at version 2.0.0 of ServiceFabApp1Type) on my local cluster (which already has the same application name and type at version 1.0.3, i.e. fabric:/ServiceFabApp1 at version 1.0.3 of ServiceFabApp1Type) and got the following error:
An application with name 'fabric:/ServiceFabApp1' already exists, its Type is 'ServiceFabApp1Type' and Version is '1.0.3'.
You must first remove the existing application before a new application can be deployed or provide a new name for the application.
Is this by design - the application type can be the same across multiple versions, but the application name must be different for each version? Or does it simply not work on the local cluster but work in the Azure cloud? Or is my interpretation of the information in the above link incorrect?
Application types (eg. ServiceFabricApp1Type) can have one or more versions but an application instance (eg. fabric:/ServiceFabricApp1) can only be running one version at a time.
Thus, if you want to have two different versions of your application type running in your local cluster, you will need two different application instances, such that you can have, say, fabric:/ServiceFabricApp1 running version 1.0.0 and fabric:/ServiceFabricApp2 running version 2.0.0. The easiest way to do this with the VS tools is to create two application parameter files, each of which defines a distinct app instance name. You can then choose which of the instances to target with the version that you're building. To move back and forth between versions of the type in VS, you'll probably want to just create a branch for each.
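Outside of Visual Studio, creating the second instance can be sketched in PowerShell (names follow the example above, and this assumes version 2.0.0 is already provisioned):

# Create a second application instance from the already-registered type
New-ServiceFabricApplication -ApplicationName fabric:/ServiceFabricApp2 -ApplicationTypeName ServiceFabricApp1Type -ApplicationTypeVersion 2.0.0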
When you deploy an SF application, there are several steps:
1. Copy the application package to the SF cluster image store
2. Provision the application
3. Deploy/upgrade the application
Step #1 is just copying the package to the SF cluster image store.
Step #2 provisions a new version of the application so that SF can either deploy that application, or upgrade an existing application if it has already been deployed.
Step #3 depends on what you've done before. If you have already deployed version X of your app, you can't deploy version X+1. You can only upgrade/downgrade.
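In PowerShell these three steps roughly map to the following cmdlets (a sketch; the package path and names are placeholders, and an existing Connect-ServiceFabricCluster session is assumed):

# 1. Copy the package to the cluster image store
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\MyAppPkg -ApplicationPackagePathInImageStore MyAppPkg
# 2. Provision the application type/version from the copied package
Register-ServiceFabricApplicationType -ApplicationPathInImageStore MyAppPkg
# 3. Deploy a new instance (or Start-ServiceFabricApplicationUpgrade for an existing one)
New-ServiceFabricApplication -ApplicationName fabric:/MyApp -ApplicationTypeName MyAppType -ApplicationTypeVersion 1.0.0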
If you need to run multiple instances of applications with the same version, you'll need to create different packages in which the application name is unique (a multi-tenant scenario).

Updating code of a managed VM on Google Compute Engine

I understand this might be an easy solution, but I am very new to this so any help would be appreciated.
I have been running through the hello world application for Node.js with managed VMs on Google Compute Engine, and I have just run this stage:
gcloud preview app deploy app.yaml --promote
Which has allowed me to put up the app, and it works.
BUT HOW do I now update that code? If I run that command again it starts up new instances and essentially treats it like a new upload.
You can deploy the updated version of your app by running the same command you used to deploy the app the first time, as indicated in this article:
If you update your app, you can deploy the updated version by entering the same command you used to deploy the app the first time. The new deployment creates a new version of your app and promotes it to the default version. The older versions of your app remain, as do their associated VM instances. Be aware that all of these app versions and VM instances are billable resources. For information about deleting or stopping your VM instances, see Cleaning up.
Just in case anyone finds this question looking for the same information, I seem to have finally worked out how to do it.
You need to attach the --version flag when you are deploying, instead of using --promote.
You can find the default version in the Google Cloud console by going menu (burger icon) -> App Engine -> Versions; you will see one item in that list marked (default).
So when deploying, put that version string after --version and it will deploy without needlessly creating new instances.
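As a sketch with a placeholder version string (the preview command group matches what was used above; later SDKs use gcloud app deploy with the same flags):

# Redeploy over the existing default version instead of creating a new one
gcloud preview app deploy app.yaml --version my-default-version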

MachineKey Azure SDK 1.5/1.6

I am using a custom API token implementation using WCF Web API on Azure. This uses FormsAuthentication.Decrypt in order to obtain a FormsAuthenticationTicket. To make sure that the decrypt process works across multiple instances, I have provided a MachineKey in my web.config.
However, I've noticed that the MachineKey doesn't seem to be working on Azure: it looks like Azure is using a random machine key and overwriting the one I specified in the web.config. I'm using the latest Azure SDK 1.5 (or 1.6?).
I am well aware of this issue with Azure SDK 1.3 and I believe this was rectified in 1.4. Is there a chance that this issue has since re-appeared on Azure SDK1.5/1.6?
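For reference, a fixed machine key of the kind described is declared like this in web.config (the key values here are placeholders; real keys are long hex strings generated for your app):

<system.web>
  <machineKey validationKey="[64+ hex chars]"
              decryptionKey="[48+ hex chars]"
              validation="SHA1" decryption="AES" />
</system.web>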
I was having the same problem where my FormsAuthentication tickets were not validating across sub domains after the recent Microsoft .Net 4.0 Security upgrade KB2656351.
My FormsAuth tickets are generated from my dedicated servers and read on sub domains on Windows Azure.
In order to get all sub domains to decrypt the tickets I made sure all my dedicated servers were patched with the latest .Net updates via Windows Update. Then I upgraded my Azure project to version 1.6 and selected the latest Azure OS after deploying. This seemed to do the trick.
Here are some articles about the issue:
http://weblogs.asp.net/scottgu/archive/2011/12/28/asp-net-security-update-shipping-thursday-dec-29th.aspx
http://technet.microsoft.com/en-us/security/bulletin/ms11-100.mspx
Windows Azure already synchronizes machine keys across the same role in a deployment. As such, you should be fine to completely ignore the MachineKey setting in web.config and just let Windows Azure handle it for you (the web farm scenario is well supported). Your scenario is supported on Windows Azure out of the box with no modifications (just call Decrypt).
The issue that you might be talking about was a 1.3 issue where the web.config files were being modified directly to sync the machine keys. This failed when the file was read-only (i.e. TFS source control) and caused deployment failures. That was fixed some time ago.
I think I finally found the solution. This had nothing to do with Azure or MachineKeys, but more to do with the way the app was being tested. The encrypted key that was stored in my phone app had been encrypted on a different web server (the machine key used was the same, however). I just uninstalled and reinstalled my app, thereby forcing the server to generate a new key.
It seems that decrypting this key on a different server was causing problems. I'm a little worried that this will cause problems in the future. Shouldn't using the same machine keys ensure that encrypt/decrypt works across boxes?
Anyways, I apologize for the inconvenience caused.
We seem to have the same problem as well. We set the machineKey in the web.config file. Things were fine until a couple of days ago, when Decrypt started returning null. The decryptionKey and validationKey are identical on all machines. Not sure what the problem is.
EDIT - Azure v1.6 does seem to respect the machineKey we set in the config file. We figured out how to solve our problem; maybe this will help you. We were seeing that decrypting the cookie did not work on our Windows 7 64-bit dev machines. Then we checked pending updates and there were a couple of .NET updates related to security. We ran the updates and voilà, things started to work again.
OK so I had the problem as described above in a 3-server NLB group.
It looks like the Windows Automatic Updates had installed KB2656352, KB2656358 and KB2657424 on two of the three servers.
I'd put money on the fact that it's because some of the servers are running with the patch and some aren't. I guess machines that have been patched don't like decoding things encoded by a non-patched machine (and/or vice-versa).
Anyway, I've installed all three patches on the remaining machine and put it back into the NLB group. It seems to all work fine.
