Seeking version of blob storage - Azure

Team,
I am a beginner in Azure and have some queries regarding the blob storage logs.
I am referring to this link http://blogs.msdn.com/b/windowsazurestorage/archive/2014/08/05/microsoft-azure-storage-service-version-removal.aspx
Now in my test environment I have enabled logging and I have access to the log folders via Azure Management Studio.
When I look in the folders there are several subfolders, seemingly organized hour by hour. Inside those are small .txt files which get generated every 10 minutes.
First question: how come the logs get generated even though there is no web activity?
Second question: I have a problem with the storage version. The link given above talks about removing versions prior to 2012. Does that mean I should only look at GetBlob/GetTable requests? The reason I am asking is that along with the first GetBlob request there are other requests such as ListBlobs and ReleaseBlobLease, each with a different version;
for example:
1.0;2015-01-22T09:25:05.1660119Z;ReleaseBlobLease;Success;200;8;8;authenticated.......;;; having version 2011-08-18
1.0;2015-01-22T09:26:51.2674946Z;ListBlobs;Success;200;4;4;authenticated;;autoenrolmenttest;blob...... having version 2012-02-12
1.0;2015-01-22T09:22:18.6111213Z;PutBlob;Success;201;13;12;authenticated;......... having version 2011-08-18
1.0;2015-01-22T09:25:06.0485334Z;GetBlob;Success;200;53;52;authenticated;....having version 2011-08-18
My predicament is: which version should I consider from the above 4? Only GetBlob?

1) Logs containing requests certainly mean that there is activity in your Azure Storage account. You can look at the client IP address to figure out where those requests are coming from.
2) Requests are independent and can thus be executed with different versions. Any request that uses a removed version will stop working after the version removal.
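To see which operations still rely on a pre-2012 version, you can tally the request-version field per operation across your log files. Below is a minimal sketch, assuming log format 1.0 where operation-type is the 3rd semicolon-separated field and request-version-header the 17th; verify those indexes against your own logs, and note that a naive split on ";" is only approximate since request URLs can contain quoted semicolons.

# Count (operation, service-version) pairs in a Storage Analytics log file.
from collections import Counter

counts = Counter()
with open("storage.log") as f:
    for line in f:
        fields = line.rstrip("\n").split(";")
        if len(fields) > 16:
            operation, version = fields[2], fields[16]
            counts[(operation, version)] += 1

for (operation, version), n in sorted(counts.items()):
    print(operation, version, n)

Any operation that shows up with a version older than the cut-off is worth chasing down, whether it is GetBlob, ListBlobs, or ReleaseBlobLease.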

Is there an alternative way to add another Registered Cloud to Visual Studio?

I've lost the ability to log in to Visual Studio using my work account, which prevents me from connecting to my Azure DevOps repos. I believe the reason is that it's trying to auth me against Azure Gov instead of Azure Comm, where my account lives.
When I go to my account options I see the US Azure Gov cloud as registered, and when I click "Add" I can see the other Gov clouds; but I don't see the commercial cloud. And when I try to add https://login.microsoftonline.com/ manually, it complains that the server returned a 404. Doing a trace reveals the GET to the metadata endpoint is the culprit: https://login.microsoftonline.com/metadata/endpoints?api-version=1.0
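(For what it's worth, this can be reproduced outside Visual Studio with a quick standard-library probe of that endpoint; a sketch:)

# Probe the cloud-discovery metadata endpoint that Visual Studio calls.
import urllib.request, urllib.error

url = "https://login.microsoftonline.com/metadata/endpoints?api-version=1.0"
try:
    with urllib.request.urlopen(url) as resp:
        print(resp.status, resp.read(200))
except urllib.error.HTTPError as e:
    print("HTTP error:", e.code)  # a 404 here matches what the VS trace showed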
I've checked with other developers and they do have it in their list along with the others I have; they have no issues logging in or connecting to the repos. We're running the same major version of VS 2019 - 16.8.x.
Is there an alternative way to register the comm cloud or get that list to be populated with it?
UPDATE: updated to 16.9.4 without luck.
My google-fu finally kicked in and I found a solution.
Near the bottom of the answer from TianyuSun-MSFT, they suggest:
...you can also close all Visual Studio instances and go to C:\Users\[user name]\AppData\Local\.IdentityService, then rename (or delete) the .IdentityService folder; after that you can restart Visual Studio 2019 and try to log in again.
That worked. But before I did it, I investigated the JSON configs in there and saw a lot pertaining to older versions I no longer have installed. Perhaps that was the issue.
Once I restarted and logged in, the commercial cloud was automatically added to the list of Registered Clouds.
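For reference, the rename step can be scripted; a small sketch (close all Visual Studio instances first, and note the backup name is arbitrary):

# Rename the Visual Studio identity cache so it gets rebuilt on next launch.
import os, pathlib

cache = pathlib.Path(os.environ["LOCALAPPDATA"]) / ".IdentityService"
if cache.exists():
    cache.rename(cache.with_name(".IdentityService.bak"))
    print("Renamed; restart Visual Studio and sign in again.")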

Azure function - "Did not find any initialized language workers"

I'm running an Azure function in Azure; the function gets triggered by a file being uploaded to a blob storage container. The function detects the new blob (file) but then outputs the following message: Did not find any initialized language workers.
Setup:
Azure function using Python 3.6.8
Running on a Linux machine
Built and deployed using Azure DevOps (for CI/CD capability)
Blob Trigger Function
I have run the code locally using the same blob storage container and the same configuration values, and the local instance of the Azure function works as expected.
The function's core purpose is to read in the .xml file uploaded to the blob storage container, and to parse and transform the data in the XML to be stored as JSON in Cosmos DB.
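For illustration, a trimmed-down sketch of this kind of function (the binding names are placeholders and the real transform is more involved; it assumes a blob trigger and a Cosmos DB output binding declared in function.json):

# __init__.py - read the uploaded XML blob and emit a Cosmos DB document.
import logging
import xml.etree.ElementTree as ET

import azure.functions as func

def main(inputblob: func.InputStream, doc: func.Out[func.Document]) -> None:
    logging.info("Processing blob: %s (%s bytes)", inputblob.name, inputblob.length)
    root = ET.fromstring(inputblob.read())
    # Flatten the top-level XML elements into a dict; real logic goes here.
    record = {child.tag: child.text for child in root}
    doc.set(func.Document.from_dict(record))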
I expect the process to complete like on my local instance, with my documents in Cosmos DB, but it looks like the function doesn't actually get to process anything due to the following error:
Did not find any initialized language workers
Troy Witthoeft's answer was almost certainly the right one at the time the question was asked, but this error message is very general. I've had this error recently on runtime 3.0.14287.0. I saw the error on many attempted invocations over about 1 hour, but before and after that everything worked fine with no intervention.
I worked with an Azure support engineer who gave some pointers that could be generally useful:
Python versions: if you have function runtime version ~3 set under the Configuration blade, then the platform may choose any of Python 3.6, 3.7, or 3.8 to run your code. So you should test your code against all three of these versions; a one-line check is sketched after these notes. Or, as per that link's suggestion, create the function app using the --runtime-version switch to pin a specific Python version.
Consumption plans: this error may be related to a consumption-priced app having idled off and taking a little longer to warm back up again. This depends, of course, on the usage pattern of the app. (I infer, though the engineer didn't say this, that if the Azure datacenter my app is in happens to be quite busy when my app wants to restart, it might just have to wait for some resources to become available.) You could address this either by paying for an always-on function app, or by rigging some kind of heartbeat process to stop the app idling for too long. (Easiest with an HTTP trigger: probably just ping it? A timer-trigger sketch also follows below.)
The engineer was able to see a lower-level error message generated by the Azure platform that wasn't available to me in Application Insights: ARM authentication token validation failed. This was raised in Microsoft.Azure.WebJobs.Script.WebHost.Security.Authentication.ArmAuthenticationHandler.HandleAuthenticate() at /src/azure-functions-host/src/WebJobs.Script.WebHost/Security/Authentication/Arm/ArmAuthenticationHandler.cs. There was a long stack trace, with the innermost exception being System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed. Neither of us was able to make complete sense of this, and I'm not clear whether the responsibility for this error lies within the HandleAuthenticate() call or outside it (an invalid input token from... where?).
The last of these points may be some obscure bug within the Azure Functions Host codebase, or some other platform problem, or totally misleading and unrelated.
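On the Python-versions point, one quick way to see which interpreter the platform actually picked is to log it from inside your function body:

import sys, logging
logging.info("Running under Python %s", sys.version)

And if you go the heartbeat route, a minimal timer-triggered warm-up function could look like the sketch below (names and schedule are assumptions; pair it with a function.json timerTrigger binding using a schedule such as "0 */10 * * * *"):

# __init__.py - fires on a schedule purely to keep the app warm.
import datetime
import logging

import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    if mytimer.past_due:
        logging.warning("Heartbeat is running late")
    logging.info("Heartbeat at %s", datetime.datetime.utcnow().isoformat())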
Same error but different technology, environment, and root cause.
Technology: .NET 5, target system Windows. In my case, I was using dependency injection to add a few services and was reading one parameter from the environment variables inside the .ConfigureServices() section, but when I deployed I forgot to add the variable to the application settings in Azure; because of that I was getting this weird error.
This is due to the SDK version. I would suggest deploying a fresh function app in Azure and deploying your code there. Two things to check:
Make sure your local function app SDK version matches the Azure function app's.
Check the Python version on both sides.
This error is most likely GitHub issue #4384. This bug was identified, and a fix was released mid-June 2020. Apps running on version 3.0.14063 or greater should be fine. The list of versions is here.
You can use Azure Application Insights to check your version: query the logs with Kusto (KQL). In the exceptions table, the SDK version column has your version (something like exceptions | distinct sdkVersion).
If you are on the dedicated App Service plan, you may be able to "pull" the latest version from Microsoft by deleting and redeploying your app. If you are on the consumption plan, then you may need to wait for this bugfix to roll out to all servers.
Took me a while to find the cause as well, but it was related to me explicitly installing a version of protobuf which conflicted with the one used by Azure Functions. To be fair, there was a warning about that in the docs. How I found it: went to <your app name>.scm.azurewebsites.net/api/logstream and looked for any errors I could find.
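If you suspect the same kind of clash, a quick check is to compare the version pinned in your requirements.txt with what is actually importable at runtime; a rough sketch (requires Python 3.8+ for importlib.metadata):

# Print the protobuf version actually installed in the environment.
from importlib.metadata import version, PackageNotFoundError

try:
    print("protobuf ==", version("protobuf"))
except PackageNotFoundError:
    print("protobuf is not installed")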

Upgrade/downgrade Service Fabric application with already deployed version

In a Service Fabric cluster, if an application has multiple versions (say 1.0.0, 1.0.1, 1.0.2), how can we shift the application from one version to another (say the active version is 1.0.0 and I want to shift to 1.0.1) without redeploying the application? Is there a PowerShell command to do this?
You should be able to use the PowerShell command Start-ServiceFabricApplicationUpgrade, for example (the application name and target version below are placeholders):

Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/MyApp -ApplicationTypeVersion 1.0.1 -Monitored -FailureAction Rollback
This being said, I did hit an issue with my local cluster telling me I couldn't upgrade / roll back the application if the service description had changed, which it hadn't. Using an Azure-hosted cluster this worked as expected; perhaps an inconsistency with how the package is copied into the image store.
Depending on what you are attempting to achieve, you could also look at named instances, where you are able to deploy multiple versions of an application at once, for A/B testing.
Here are some similar posts:
Post 1
Post 2
EDIT:
Thanks to Aleksey L for the comment below. With a bit of messing around due to the types not being the same, and as long as you haven't changed any parameters between versions, this will work; if you have, you will need to manually build up the hash table.

Sitecore 8.1 Lucene not updating - how to identify if index has been fully built?

We are using Sitecore 8.1 with the Lucene search provider, 1 CM and 2x CDs. The solution is hosted in Azure Web Apps.
We noticed that when a content author publishes or updates an article, the change is seen by some users/browsers and not by others.
I suspect this is due to the index not being built on one of the CDs (as the History Engine is not enabled). In the past I could troubleshoot this by RDPing to the Azure Web Role VM or similar and analysing the date/time of the Lucene index files.
The above is not possible with a Web App, as you can't RDP or FTP to specific instances.
So..
Is there a way in Sitecore to find out whether the index has been 100% built on each of the N CDs?
Is it true that the History Engine MUST be turned on if we have more than 1 CD?
If there are N (where N > 1) CDs, does one of the CDs get rebuilt instantly after publish end? This is what we have noticed and it confuses me.
Any reason why the History Engine section might be missing out of the box?
Thanks.
Don't know.
My understanding is that you need to have the History Engine "on" if you have ANY CDs.
A combined instance (that has CM and CD on the same instance) does not need a History Engine, as it gets updated instantly.
I would expect it to be missing, as the default installation is not intended for scaling. Also, I would mention that you need all the CD instances that you publish to explicitly listed in web.config (or added through an Include). Please see this post from Alen Pelin: http://blog.alen.pw/2012/06/lucene-index-isnt-updated-on-cd.html

MachineKey Azure SDK 1.5/1.6

I am using a custom API token implementation using WCF Web API on Azure. This uses FormsAuthentication.Decrypt in order to obtain a FormsAuthenticationTicket. To make sure that the decrypt process works across multiple instances, I have provided a MachineKey in my web.config.
However, I've noticed that the MachineKey doesn't seem to be working on Azure, because it looks like Azure is using a random machine key and overwriting the one I specified in web.config. I'm using the latest Azure SDK 1.5 (or 1.6?).
I am well aware of this issue with Azure SDK 1.3 and I believe it was rectified in 1.4. Is there a chance that this issue has since re-appeared in Azure SDK 1.5/1.6?
I was having the same problem, where my FormsAuthentication tickets were not validating across subdomains after the recent Microsoft .NET 4.0 security upgrade KB2656351.
My FormsAuth tickets are generated on my dedicated servers and read on subdomains on Windows Azure.
In order to get all subdomains to decrypt the tickets, I made sure all my dedicated servers were patched with the latest .NET updates via Windows Update. Then I upgraded my Azure project to version 1.6 and selected the latest Azure OS after deploying. This seemed to do the trick.
Here are some articles about the issue:
http://weblogs.asp.net/scottgu/archive/2011/12/28/asp-net-security-update-shipping-thursday-dec-29th.aspx
http://technet.microsoft.com/en-us/security/bulletin/ms11-100.mspx
cheers
Francesco
Windows Azure already synchronizes machine keys across the same role in a deployment. As such, you should be fine to completely ignore the MachineKey setting in web.config and just let Windows Azure handle it for you (the web farm scenario is well supported). Your scenario is supported on Windows Azure out of the box with no modifications (just call Decrypt).
The issue that you might be talking about was a 1.3 issue where the web.config files were being modified directly to sync the machine keys. This failed when the file was read-only (i.e. under TFS source control) and caused deployment failures. That was fixed some time ago.
I think I finally found the solution. This had nothing to do with Azure or machine keys, but rather with the way the app was being tested. The encrypted key that was stored in my phone app was encrypted on a different web server (however, the machine key used was the same). I just uninstalled and re-installed my app, thereby forcing the server to generate a new key.
It seems that decrypting this key on a different server was causing problems. I'm a little worried whether this will cause problems in the future. Shouldn't using the same machine keys ensure that encrypt/decrypt works across boxes?
Anyways, I apologize for the inconvenience caused.
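(As a general illustration of that last question, not specific to .NET: symmetric crypto does work that way - the same key decrypts anywhere, while a mismatched key fails with exactly this kind of validation/padding error. A Python sketch using the cryptography package:)

# Same key decrypts across machines; a different key fails, which is
# analogous to mismatched MachineKeys across servers.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # the shared "machine key"
token = Fernet(key).encrypt(b"ticket")

print(Fernet(key).decrypt(token))      # b'ticket' - same key, works anywhere

try:
    Fernet(Fernet.generate_key()).decrypt(token)   # different key
except InvalidToken:
    print("decryption failed with a mismatched key")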
We seem to have the same problem as well. We set the machineKey in the web.config file. Things were fine until a couple of days ago, when Decrypt started returning null. The decryptionKey and validationKey are identical on all machines. Not sure what the problem is.
EDIT - Azure v1.6 does seem to respect the machineKey we set in the config file. We figured out how to solve our problem - maybe this will help you - we were seeing that Decrypt on the cookie did not work on our Windows 7 64-bit dev machines. Then we checked pending updates, and there were a couple of .NET updates related to security. We ran the updates and voila, things started to work again.
OK so I had the problem as described above in a 3-server NLB group.
It looks like Windows Automatic Updates had installed KB2656352, KB2656358 and KB2657424 on two of the three servers.
I'd put money on the cause being that some of the servers are running with the patches and some aren't. I guess machines that have been patched don't like decoding things encoded by a non-patched machine (and/or vice versa).
Anyway, I've installed all three patches on the remaining machine and put it back into the NLB group. It seems to all work fine.
