I have set up Azure Search on my Kentico site. The search itself works fine.
But when I try to set up automatic index builds, it fails.
So the indexes are never rebuilt. How can I fix this issue?
The error message suggests that either there are two columns with the same name, or the case of a column name has changed somehow and Azure is struggling with that. I'd suggest dropping the index in the Azure portal and seeing whether the automatic indexing will recreate it.
If you can go into the portal, it's worth checking the index definition to confirm that the fields are as you expect.
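If you'd rather script the check, here is a minimal sketch using the Azure.Search.Documents SDK (the endpoint, key, and index name are placeholders; Kentico normally manages the index itself, so only drop it if you're prepared for a full rebuild):

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;

class InspectIndex
{
    static void Main()
    {
        // Placeholder endpoint and admin key -- substitute your own service values.
        var client = new SearchIndexClient(
            new Uri("https://<your-search-service>.search.windows.net"),
            new AzureKeyCredential("<admin-api-key>"));

        // Dump the field names and types to spot duplicates or casing changes.
        var index = client.GetIndex("<your-index-name>");
        foreach (var field in index.Value.Fields)
            Console.WriteLine($"{field.Name} ({field.Type})");

        // Drop the index so the next automatic build recreates it from scratch.
        client.DeleteIndex("<your-index-name>");
    }
}
```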
I created a function and I am trying to deploy it from VS Code by clicking Deploy to Function App.... The deployment runs successfully according to the output log ("Deployment successful"), but when I go to the portal, the function is not listed under Functions.
What shall I do and what is the problem here?
When I debug in VS Code, I get this: No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
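The host throws that error when it cannot discover any functions. If you're on the in-process C# model, this is roughly the minimal shape it expects (the function name and schedule are illustrative): the class and the Run method must both be public, and the trigger extension must be registered (the Functions runtime does this for you; plain WebJobs hosts need the builder.Add* calls from the error message).

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Both the class and the method must be public,
// and the method needs a trigger binding attribute.
public static class HeartbeatFunction
{
    [FunctionName("HeartbeatFunction")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer, // fires every 5 minutes
        ILogger log)
    {
        log.LogInformation($"Heartbeat fired at {DateTime.UtcNow:O}");
    }
}
```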
Unfortunately I wouldn't know why those steps don't work for uploading. For me the deployment finishes, and every single time the function becomes visible in my portal. Though maybe there is a slight difference: the App Service itself is pre-created via Terraform; I only do the uploading of the code via VS Code.
As far as deletion goes:
Open the resource group, find the App Service in the list, select the checkbox in front of it, and click Delete in the top bar of that pane.
Trying to delete it any other way will indeed give you the "Not found" error.
I've had the same 'issue'; in my case it turned out to be a bad entry in requirements.txt.
I had an incorrect line with 'io', and while it was present, the deployment appeared to complete successfully in VS Code, but the function was not updated (if it was previously deployed) or not deployed (if it wasn't), resulting in the same empty Functions list.
Having other requirements such as 'numpy' or 'scipy' worked just fine.
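For illustration, a requirements.txt along these lines (a hypothetical reconstruction, not the poster's actual file) triggers the behavior, because io ships with Python's standard library and has no installable package on PyPI, so the remote build fails even though VS Code reports the deployment as successful:

```
numpy
scipy
io    # invalid: 'io' is a standard-library module, not a pip package -- remove this line
```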
It's an old thread but maybe it'll be helpful to whoever gets here in the future.
Even now, some changes I make in VS Code take time to become visible on the portal. I had a similar issue with resources, i.e. creating a resource from VS Code wouldn't make it immediately visible in the Azure Portal. You can always go to Functions on the portal and click Refresh. Also try going to Advanced Tools, then Kudu, and checking whether your function can be found there.
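If you want to check Kudu directly, the debug console lives at a fixed URL (substitute your app name); on a typical Windows plan your deployed code should be visible under site/wwwroot:

```
https://<your-app-name>.scm.azurewebsites.net/DebugConsole
```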
One word of advice: if you publish your functions from VS Code, then work on that resource only from VS Code. You will find it reiterated all over the Azure Functions docs that:
"Publishing to an existing function app overwrites the content of that app in Azure."
Today I saw something really strange with my Azure web site. My site was originally deployed using an ARM template that configured various application settings.
After the initial deploy one of the settings was manually changed via the portal. Today that setting was reverted back to the original value used in the template.
Should that even be possible? I checked the audit/activity logs to see if anyone had changed it, and the logs are empty.
What is going on here? Does anyone have an idea?
That should not be happening. Azure will never automatically redeploy your ARM template.
Some possibilities that could have led to this:
Someone redeployed your ARM template, which would cause settings to be reset
Maybe when you made the setting change it was never actually applied, e.g. if 'Save' wasn't clicked, or some error happened.
I'd suggest applying your setting change again and making sure that it is in fact applied. It should not get reverted by magic.
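For context, if the template declares app settings inline, something like this schematic fragment (not your actual template), then any redeployment, even in incremental mode, resets them, because siteConfig.appSettings is replaced as a whole rather than merged with portal edits:

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-03-01",
  "name": "[parameters('siteName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "siteConfig": {
      "appSettings": [
        { "name": "MySetting", "value": "original-template-value" }
      ]
    }
  }
}
```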
My Azure Search indexer, which reads from a SQL table with change tracking enabled, is failing with the following error:
"Unable to cast object of type 'Newtonsoft.Json.Linq.JObject' to type 'System.String'."
If I reset the indexer, it'll start working fine. What is the root cause of the problem here?
We’ve identified an issue in SQL indexers that use SQL integrated change detection that affects a very small number of customers. We’re working on a fix, which will probably be deployed in production next week. We'll also improve our telemetry so that we'll be able to identify this class of issues proactively.
The workaround you’ve already used (resetting the indexer) is the best workaround for this issue.
Sorry for the trouble!
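For anyone landing here later: the reset can also be done programmatically. A minimal sketch with the current Azure.Search.Documents SDK (service name, key, and indexer name are placeholders):

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;

class ResetAndRun
{
    static void Main()
    {
        var client = new SearchIndexerClient(
            new Uri("https://<your-search-service>.search.windows.net"),
            new AzureKeyCredential("<admin-api-key>"));

        // Reset clears the change-tracking state; Run then re-indexes from scratch.
        client.ResetIndexer("<your-indexer-name>");
        client.RunIndexer("<your-indexer-name>");
    }
}
```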
We are using Sitecore 8.1 with the Lucene search provider, one CM and two CD instances. The solution is hosted in Azure Web Apps.
We noticed that when a content author publishes or updates an article, the changes are seen by some users/browsers and not others.
I suspect this is because the index is not being rebuilt on one of the CDs (as the History Engine is not enabled). In the past I could troubleshoot this by RDPing to the Azure Web Role VM (or similar) and checking the date/time of the Lucene index files.
That is not possible with a Web App, as you can't RDP or FTP to a specific instance.
So:
Is there a way in Sitecore to find out whether the index has been 100% built on each of N CDs?
Is it true that the History Engine MUST be turned on if we have more than one CD?
If there are N (where N > 1) CDs, does one of the CDs get rebuilt instantly after the publish ends? This is what we have noticed, and it confuses me.
Any reason why the History Engine section might be missing out of the box?
Thanks.
Don't know.
My understanding is that you need the History Engine "on" if you have ANY CDs.
The combined instance (that has CM and CD on the same instance) does not need a History Engine as it gets updated instantly.
I would expect it to be missing, as the default installation is not intended for scaling. I would also mention that all the CD instances you publish to need to be explicitly listed in web.config (or added through an include file). Please see this post from Alen Pelin: http://blog.alen.pw/2012/06/lucene-index-isnt-updated-on-cd.html
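For reference, the History Engine is usually enabled on the web database with a patch along these lines (the commonly cited snippet from Sitecore's scaling documentation; verify the type name against your Sitecore version before relying on it):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <databases>
      <database id="web">
        <Engines.HistoryEngine.Storage>
          <obj type="Sitecore.Data.SqlServer.SqlServerHistoryStorage, Sitecore.Kernel">
            <param connectionStringName="$(id)" />
            <!-- How long history entries are kept (30 days here) -->
            <EntryLifeTime>30.00:00:00</EntryLifeTime>
          </obj>
        </Engines.HistoryEngine.Storage>
      </database>
    </databases>
  </sitecore>
</configuration>
```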
I created a cloud service and tested it successfully locally. I added service configurations for stage and production. Here is a snippet of my staging-configuration:
and here are my configuration settings:
Then when I publish I set up the deployment as follows:
All this worked fine about two weeks ago. But now it deploys from VS, and when I look into the Azure service's Configure area it looks like this:
I played around with the "Update development ..." checkbox on the second screen, but the result is the same.
So it ignores all the settings I made and just won't transition my configuration to the one I named "CloudStage". My current Web PI tells me that I am using Windows Azure SDK for .NET (VS 2013) 2.3. I don't get it.
Edit
Some more things I observed:
No WADLogsTable or WADWindowsEventLogsTable is generated automatically in the staging storage account.
I deactivated Remote Desktop because it was one of the changes I had made to monitor the event log (which wasn't useful here).
I manually changed the connection strings in the Azure Portal, but it seems the worker is totally unaware of the storage (rebooting it had no success).
Edit
I noticed another thing. Here you can see a running deployment of my service:
See the warning mark on the left? If I go to my Error List, this is shown:
This warning is pointless, since it tells me that I did everything the right way. My *.Local.cscfg files are pointing to the local storage. So?!?
This seems weird. Please check the settings in your ServiceConfiguration.CloudStage.cscfg to verify the expected values.
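A well-formed .cscfg looks roughly like this (schematic; the role and setting names are placeholders), and every Setting here must exactly match one declared in ServiceDefinition.csdef:

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWorkerRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Must match a <Setting> declared in ServiceDefinition.csdef -->
      <Setting name="StorageConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=mystagingaccount;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```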
Have you tried updating any other property, like enabling Remote Desktop? Does that get updated in your deployment? You should select the "Deployment Update" checkbox in the publish dialog. Then, when deploying to an existing cloud service, it should ask you if you want to replace it.
If you get the "Object reference" error every time you right-click the project, there might be some issue with the Azure SDK setup.
I'm a little bit further now. What I did was:
Deleted all Services in Azure.
Deleted all Storage Accounts in Azure
Removed my Service-Project completely from solution (not the library containing the worker-logic).
Re-added storage-accounts in Azure.
Re-added services in Azure.
Re-added a project in the solution and added the worker-logic inside it.
Built up all the publishing stuff again.
Published it.
The first publish ended like the one described in my question. But after I checked the "Update development..." option in the properties of my worker, it finally took my transitions into the stage!
Then I noticed that WADLogsTable was still empty. I right-clicked the instance in Server Explorer and chose "Update diagnostics settings...". There, the "Transfer period" option was suddenly set to "None". That explained why my table was empty, and after I set it back to "1" the table is filling again!
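For reference, that portal option corresponds to scheduledTransferPeriod in the role's classic diagnostics configuration; schematically (values illustrative):

```xml
<!-- diagnostics.wadcfg -- PT1M transfers logs to WADLogsTable every minute -->
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
  <Logs bufferQuotaInMB="0"
        scheduledTransferLogLevelFilter="Verbose"
        scheduledTransferPeriod="PT1M" />
</DiagnosticMonitorConfiguration>
```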
Another funny thing besides: when I right-click my cloud project in the solution, I get "Object reference not set to an instance...". When I just left-click it and choose Build → Publish, it works.
I just hope that I can help somebody with this. Let's see if it's stable now.
Edit: Yesterday it worked; today it's the same issue again :-(.
When you get "Object reference not set to an instance.." for a CloudService project you usually have some kind of mismatch. It could be that a setting in the ServiceConfiguration is not defined in the ServiceDefinition. It could also be that there is a publish profile defined in the .ccproj file for the CloudService that doesn't exist. This might also be what is causing your problems with the different configurations.
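Concretely, every setting referenced in the .cscfg must have a matching declaration in ServiceDefinition.csdef; a schematic example (names are placeholders):

```xml
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole" vmsize="Small">
    <ConfigurationSettings>
      <!-- If the .cscfg sets a value for a name not declared here, the project breaks -->
      <Setting name="StorageConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>
```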
So it turns out that the problem is completely on the client side. My Visual Studio (now with SDK 2.4) is doing something wrong. I set up a fresh installation with all the needed tooling :-( and there it works perfectly. I'll try to determine whether one of my extensions is causing the strange "Object reference not set..." bug.
A repair install of VS does not solve the problem, by the way.