Smart search index issue – Kentico 9

Our team of developers has an ongoing issue with smart search functionality not working as expected on our PROD website; the UAT site works as expected. We use Kentico 9 CMS.
Example:
We created a page in Kentico and added some information to the smart search field. We followed the same process in UAT and PROD, but only UAT returns search results when you use this functionality on the website.
Image 1
Below: search results in UAT – PROD does not return anything related:
Image 2
What we have done so far to try and solve the issue:
• We noticed the PROD site's index status showed that a rebuild was required. We ran the rebuild, but it did not solve the problem, and after a while the index requested another rebuild.
Image 3
• The PROD site is hosted in Azure and is scaled out to a minimum of 2 instances, while UAT is scaled out to only 1 instance. We tried to reproduce the issue in UAT (where we can debug it) by increasing its number of instances to 2. It made no difference; search still works fine in UAT but not in PROD.
• We re-saved the "Execute search tasks" scheduled task to make sure it is running as expected. The scheduled task runs fine, but it did not help solve the problem.
Image 4
Has anybody experienced the same or a similar issue before, or have any ideas?

You have to make sure that your scheduled task runs on each web farm server individually.
Take a look at the documentation (check the tip "Executing search tasks via the scheduler on a web farm"). Anyway, to me it looks more like a web farm synchronization issue than an issue with smart search.
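If you want to verify that, one quick diagnostic is a custom scheduled task that simply logs the name of the server it ran on; with the per-server web farm option from that documentation tip enabled, PROD should log one entry per instance per run. A minimal sketch, assuming the standard Kentico 9 ITask interface (class and event names are mine):

    using CMS.Base;
    using CMS.EventLog;
    using CMS.Scheduler;

    public class SearchTaskProbe : ITask
    {
        // Logs which web farm server executed the task. Register it as a
        // scheduled task with the per-server web farm option enabled;
        // PROD should then log one entry per instance on each run.
        public string Execute(TaskInfo task)
        {
            EventLogProvider.LogInformation(
                "SearchTaskProbe",
                "EXECUTE",
                "Ran on server: " + SystemContext.ServerName);

            return null; // null/empty string = success in the scheduler UI
        }
    }

If only one server name ever shows up in the event log, the second instance is not processing its local search tasks, which would match the symptoms you describe.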

Related

Xamarin Android + iOS builds failing on Azure DevOps

I have a Xamarin mobile app with 3 projects: Shared, Android and iOS.
All 3 build perfectly fine locally but fail in the Azure DevOps pipeline.
iOS and Android only have 2 XAML views that are platform specific. The rest are located in Shared.
For both of the XAML views, all the errors come from the code-behind .cs files, complaining that something doesn't exist in the current context. There are around 80 errors like the one below, and they are identical on both platform builds.
Example error from Droid build:
Droid\Views(Filename).xaml.cs(26,13): error CS0103: The name 'InitializeComponent' does not exist in the current context
This build hasn't run for a while, around 8 months. It used to work fine, and none of the views' XAML/C# code has changed. I'm assuming a version is now misconfigured somewhere.
Both builds run on VS 2022 pipelines.
Both builds restore okay.
I have tried (mostly suggestions from similar posts):
Adding a restore argument to the Build step.
Checking that namespaces match.
Adding a small change (whitespace) to the 2 XAML pages to force a change.
Removing the shared project reference and re-adding it.
I would be grateful for any suggestions or ideas.
Thanks in advance.
In past experience, if something works locally but not on the build server, it usually points to a discrepancy between the two machines, typically in the versions of the libraries involved.
If you run the pipeline on a self-hosted machine, confirm that all the libraries installed on that machine match your local ones (Xcode, android-sdk, etc.).
If you run the pipeline on a hosted agent, it may be that the hosted image needs to be updated to a newer one to keep up with the project.
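For context on why it shows up as CS0103: InitializeComponent lives in a *.xaml.g.cs file that the XAML build step generates, so anything that prevents that generation (a tooling/SDK mismatch on the agent, or an x:Class that doesn't match the code-behind) produces exactly this wall of identical errors. A minimal sketch of the contract, with hypothetical names:

    using Xamarin.Forms;
    using Xamarin.Forms.Xaml;

    namespace MyApp.Droid.Views // must match x:Class in LoginPage.xaml exactly
    {
        [XamlCompilation(XamlCompilationOptions.Compile)]
        public partial class LoginPage : ContentPage
        {
            public LoginPage()
            {
                // Defined in the build-generated LoginPage.xaml.g.cs; if the
                // XAML -> C# generation step doesn't run (or x:Class doesn't
                // match), every reference like this fails with CS0103.
                InitializeComponent();
            }
        }
    }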

What is the purpose of 1-click installation of Drupal on web hosting?

I am in the process of making my first Drupal 7 website, and for that I am searching for web hosting. I found that several hosts advertise 1-click Drupal installation. But as I was searching the net for standard site development practice, many sources explain that you develop the site in a local environment and then transfer it to the web server (which includes transferring the database and the whole Drupal installation with modules). This is quite convincing: you develop locally and transfer to the web, so it works there just as it did locally.
On the other hand, what is the use of 1-click Drupal installation on the web server? I believe it will install a fresh Drupal core, so I would have to start developing all over again, installing each module, starting from square one.
So, which is the BEST practice for making a website live: shall I develop locally first, or develop directly on the web server?
At the same time, what is the best practice for maintaining a site? I read that there should be one live site where visitors come, a second test site which is similar to the live one, and one local site. What is the standard practice for this, and how do I maintain it?
Many thanks in advance.
In my previous answer, in point 2, I outline 4 servers: DEV, STAGE, QA and PROD. This is usually the process at a "biggish" company where lots of people might be working in the infrastructure, development and QA departments. That said, if you are not working in a complex environment, you might just have 2 Drupal installations: one for testing, on DEV (e.g. dev.mysite.com), and one live (e.g. mysite.com). The different URLs can be arranged from your cPanel or personal panel in the case of a shared server.
They might run on the same server; the dev site is the one you work on while creating the site, and once the site is ready you clone the dev site and make it live.
You keep the dev site as a space to test new features, fix bugs, and test updates of modules or core files. Once these new features are implemented, or bugs fixed, you replicate the same steps on the live site.
Git is a version control system: it allows you to keep track of the code you are working on. You might want to create 2 branches: DEV and MASTER.
You work on DEV to create the site, update files or fix bugs, and once the code is stable you merge into MASTER and pull the code on the live server. I hope this clarifies things a bit.
1) One-click installation processes are usually offered on shared servers, which might have lower performance and memory limits than your local LAMP setup. It is good to check which versions of PHP and MySQL run on the server, as well as the max upload file size and connection time limits. I prefer to start working locally and then publish to my server, BUT if you install on your server first you will have a good idea of how Drupal will perform in the real world; you can always clone the site and DB to your local machine afterwards, and you will also avoid the ugly surprise of moving your site from local to your server only to discover bugs or migration issues.
2) In an enterprise dev environment you might have 3 or 4 steps: DEV ("wild west"), STAGE (release candidate), QA (quality assurance server) and PROD (live site). You usually sync (e.g. with Git) your local environment to DEV or STAGE, then you push to QA, and then, if it's all good, to PROD.

Sitecore 8.1 Lucene not updating - how to identify if the index has been fully built?

We are using Sitecore 8.1 with the Lucene search provider, 1 CM and 2 CDs. The solution is hosted in Azure Web Apps.
We noticed that when a content author publishes or updates an article, the changes are seen by some users/browsers and not by others.
I suspect this is because the index is not being built on one of the CDs (as the History Engine is not enabled). In the past I could troubleshoot this by RDPing to the Azure Web Role VM or similar and analysing the date/time of the Lucene index files.
The above is not possible with a Web App, as you can't RDP or FTP to specific instances.
So..
Is there a way in Sitecore to find out whether the index has been 100% built on each of N CDs?
Is it true that the History Engine MUST be turned on if we have more than 1 CD?
If there are N (where N > 1) CDs, does one of the CDs get its index rebuilt instantly after the publish ends? This is what we have noticed, and it confuses me.
Any reason why the History Engine section might be missing out of the box?
Thanks.
Don't know.
My understanding is that you need the History Engine "on" if you have ANY CDs.
The combined instance (with CM and CD on the same instance) does not need a History Engine, as it gets updated instantly.
I would expect it to be missing, as the default installation is not intended for scaling. I would also mention that you need all the CD instances that you publish to explicitly listed in web.config (or added through an Include file). Please see this post by Alen Pelin: http://blog.alen.pw/2012/06/lucene-index-isnt-updated-on-cd.html
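On the first question: since you can't RDP into a Web App instance, one workaround is to deploy a small diagnostics page or handler to every CD that reports the index summary, then hit each instance (for example by pinning requests with the ARRAffinity cookie) and compare the results. A rough sketch using the ContentSearch API; the index name is an assumption:

    using Sitecore.ContentSearch;

    public static class IndexDiagnostics
    {
        // Compare instances: if one CD reports a much older LastUpdated or
        // far fewer documents than the others, its index is stale.
        public static string Describe(string indexName)
        {
            ISearchIndex index = ContentSearchManager.GetIndex(indexName);
            ISearchIndexSummary summary = index.Summary;

            return string.Format(
                "{0}: {1} docs, last updated {2:u}",
                indexName,
                summary.NumberOfDocuments,
                summary.LastUpdated);
        }
    }

    // e.g. Describe("sitecore_web_index") from a page deployed to each CD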

Azure Websites Continuous Delivery

I have a solution in Visual Studio Team Services that has 2 Web Applications (specifically one project for WebAPI services and another for the actual site using MVC).
I'm trying to set up continuous delivery to Azure but all the information that I can find seems to assume that you only have a single Web Application within your solution (which seems a little unrealistic for all but the simplest of projects!).
The out-of-the-box continuous delivery process seems to just pick and deploy the first Web Application it finds (which isn't necessarily the same project each time!).
I've tried specifying the Deployment Settings file, but that seems to affect the destination rather than the project being deployed, since again it seems to just "pick" a project to deploy. Each time, it deploys every single compiled assembly plus all dependencies, rather than just the binaries and dependencies of the project actually being deployed, which can cause issues with MVC finding duplicate controller matches for a given name. (This can of course be fixed by specifying the namespace of the controllers within the route configuration, but that seems less than ideal, and still doesn't fix the entire problem.)
Ideally I'd like to find a way to deploy both projects with a single build, but as a temporary solution I'd be happy with 2 builds that are both triggered by a check-in of the single solution, that each reliably deploy 1 of the 2 Web Applications.
Does anyone know if this is possible? I guess I could write my own custom build template, but I'm hoping there is an easier answer (not least because I can't imagine that this isn't a problem being faced by other people!)
I did find the question "TFSPreview.com and Azure continuous deployment for multiple solutions in TFS", but since that's quite old and specifically talks about AzureWebRoleProjects rather than Web Applications deployed to the newer Azure Websites feature, I'm hoping there is a more positive answer.
This is possible with multiple build configurations. In addition to Debug and Release you could specify two more, one for each app.
You can find these in Visual Studio under Build -> Configuration Manager. In each new configuration, mark only one of the web projects to be built; running MSBuild with that configuration will then output only one WebDeploy package.
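To make that concrete, two builds triggered by the same check-in can each invoke MSBuild with their own configuration. A sketch of driving both from a small console step; the solution and configuration names are made up:

    using System.Diagnostics;

    class BuildPackages
    {
        static void Main()
        {
            // Hypothetical configuration names -- create them in
            // Build -> Configuration Manager, each building only one web project.
            Build("DeployWebApi");
            Build("DeployMvcSite");
        }

        // Assumes msbuild.exe is on the PATH of the build agent.
        static void Build(string configuration)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "msbuild",
                // DeployOnBuild=true makes the single web project enabled in
                // this configuration produce its own WebDeploy package.
                Arguments = "MySolution.sln /p:Configuration=" + configuration
                            + " /p:DeployOnBuild=true",
                UseShellExecute = false
            };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }
        }
    }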

Deploying to SharePoint using the object model doesn't work reliably

Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being in the "error" state in the web control; a redeploy usually fixes this instantly. Even stranger: if I create two apps, one which adds and one which deploys, I get no problems, but putting a delay between the two steps in a single program does not have a similar effect.
If I run the deploy a second time for packages which did not deploy successfully, it works fine, as long as I do not do it programmatically, in which case it makes no difference.
It is different files each time, and sometimes none at all.
I do use stsadm -execadmsvcjobs between the add and deploy steps, and even between two of the deploy batches.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any idea why this happens, or how to solve it? It causes problems when I get to real implementations.
The problem lies in the fact that SharePoint performs app pool recycles and/or full IIS resets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that last one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the central admin app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution. Hit F5 a lot of times on the solution management page while the retract process is underway, and if you refresh fast enough it will hang and display "error" in red.
My solution was to make a WebRequest to at least the central admin (or just do SPSite site = new SPSite("centraladminurl")) in your deployment app or in PowerShell. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of "race condition").
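In code, the workaround looks roughly like this (a sketch; the URL, attempt count and delay are assumptions to adjust for your farm):

    using System;
    using System.Net;
    using System.Threading;

    static class DeployHelper
    {
        // Polls central admin until it responds again; call this after each
        // add/deploy action, before starting the next one.
        public static void WaitForCentralAdmin(string url)
        {
            for (int attempt = 0; attempt < 30; attempt++)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.UseDefaultCredentials = true;
                    request.Timeout = 10000; // ms
                    using (request.GetResponse())
                    {
                        return; // app pool answered; safe to run the next step
                    }
                }
                catch (WebException)
                {
                    // Still recycling after the previous deploy -- wait and retry.
                    Thread.Sleep(5000);
                }
            }
            throw new TimeoutException("Central administration did not come back up.");
        }
    }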
