I have been using the Coveo 7 free version with Sitecore 8.1 Update 1.
I configured Coveo to index all the content items under a path. The 404 page is also under that path, so I tried to exclude it by adding an exclusion rule to the indexed documents. I checked the filters as well and can see that the exclusion rule was added to the list, but the same document still comes up in search results.
I have tried the following options.
1. Rebuilding the index.
2. A full refresh of the index.
3. Deleting the source completely and rebuilding the index from Sitecore.
I'm using Coveo admin credentials to log in to the Index Browser, but I'm not able to see any option to remove the corresponding indexed item.
Below are the screenshots.
[Screenshot: unable to delete from index]
[Screenshot: Exclusion rule]
Any help would be appreciated.
With Coveo for Sitecore, you cannot use the CES7 "Add an Exclusion Filter" feature from the Index Browser. You need to filter the items to be indexed from the Coveo for Sitecore configuration files inside Sitecore. There are two ways to do it:
Via ExcludeTemplate/AddExcludedTemplate (sketched below)
Via inbound filters
A rebuild of the indexes in Sitecore is needed after you modify those configurations.
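For the ExcludeTemplate route, here is a minimal sketch, assuming the standard Sitecore ContentSearch documentOptions syntax that the Coveo for Sitecore configuration files follow; the element name and GUID below are placeholders for your 404 page's template:

```xml
<!-- Hedged sketch of a documentOptions exclusion in Coveo.SearchProvider.config
     (or a patch file). Replace the placeholder GUID with the template ID of
     your 404 page. -->
<documentOptions>
  <exclude hint="list:AddExcludedTemplate">
    <PageNotFoundTemplate>{00000000-0000-0000-0000-000000000000}</PageNotFoundTemplate>
  </exclude>
</documentOptions>
```

Inbound filters, as far as I know, are implemented as processors in a Coveo pipeline, so they can express conditions beyond template matching (for example, excluding by item path).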
Note: Rebuilding/refreshing the Sitecore sources from CES7 has no impact, as it only closes and re-opens the communication with RabbitMQ to listen for messages. It doesn't instruct Sitecore to index items. All rebuild operations must be done in Sitecore.
Related
Site-level search in my SharePoint site is not working. I have done a full crawl of the database, and I have even configured the search services again, but it still does not work, whereas item-level search is working.
You have to perform the following steps:
1. Make sure the search account has “full read” permissions on the web applications.
2. Enter your content sources and verify they are being crawled.
3. Check the crawl error log and confirm there are no “Top Level Errors”.
4. Make sure the search managed account has “SPSearchDBAdmin” rights in all four search databases.
5. Perform an “Index Reset” followed by a full crawl.
6. Do not disable the loopback check; according to my research, this should not be done on production servers.
I've been using the Microsoft Graph API to query SharePoint. Until recently I was able to find the "Site Assets" document library via the Graph API. I can no longer find the list.
The queries I have tried:
https://graph.microsoft.com/v1.0/sites/{siteid}/lists?select=weburl
No URL matches the Site Assets list. Next:
https://graph.microsoft.com/v1.0/sites/{siteid}/drives?select=weburl
Again, no URL matches the Site Assets list. In the past I was always able to find the assets using the second query. I switched both queries to the beta endpoint, also without result.
I've looked at the changelog in the Graph API, but nothing relevant is listed.
How can I (nowadays) find the "Site Assets" list on any SharePoint Site?
By default, both the lists and drives enumerations attempt to hide system objects, but unfortunately doing so in SharePoint is non-trivial. As a result, some system lists were still coming through until recently, when we made sure they didn't.
You can still see them, but you'll need to explicitly ask for them by requesting the system facet.
https://graph.microsoft.com/v1.0/sites/{siteid}/lists?select=weburl,system
https://graph.microsoft.com/v1.0/sites/{siteid}/drives?select=weburl,system
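If you're calling the endpoint from code, here is a minimal C# sketch using HttpClient; the site ID and access token are placeholders you must supply yourself:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Minimal sketch: query the lists endpoint with the "system" facet selected
// so that hidden system lists (such as Site Assets) are returned.
class SiteAssetsLookup
{
    static async Task Main()
    {
        const string siteId = "{siteid}";     // replace with your site ID
        const string accessToken = "{token}"; // replace with a valid Graph token

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Selecting the system facet makes Graph include system lists that
        // the default enumeration hides.
        string url = $"https://graph.microsoft.com/v1.0/sites/{siteId}/lists?select=weburl,system";
        string json = await client.GetStringAsync(url);
        Console.WriteLine(json);
    }
}
```

In the response, system lists carry a (mostly empty) system facet, which is how you can tell them apart from regular lists.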
As part of a SharePoint solution, functionality has been added for users to create new web sites and publishing pages programmatically via a button click. I need to ensure that the Description field for the newly created sites and pages is indexed by SharePoint Search. What is the best way to do this?
Please note, I am NOT interested in starting a new crawl. I just want to ensure that whenever the next scheduled crawl occurs, the contents of these fields will be searchable.
I'm guessing you mean how can you ensure the site is indexed immediately?
Generally, crawls are scheduled, which means your new site will only be added to the search index after the next crawl is done. So if your incremental crawl happens every hour, you may have to wait up to an hour for it to appear in the search index.
However, given that your new sites are being added programmatically, you could also programmatically start an incremental crawl if it is vital for them to appear in search results immediately. There are details on how to do this in this article.
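For reference, here is a hedged sketch of what that looks like with the MOSS search administration API; the server URL and content source name are assumptions, so substitute the values from your own farm:

```csharp
using Microsoft.Office.Server.Search.Administration;
using Microsoft.SharePoint;

// Hedged sketch: start an incremental crawl on a named content source.
// "http://yourserver" and the content source name are placeholders.
class CrawlStarter
{
    public static void StartIncrementalCrawl()
    {
        SearchContext context;
        using (SPSite site = new SPSite("http://yourserver"))
        {
            context = SearchContext.GetContext(site);
        }

        Content searchContent = new Content(context);
        ContentSource source =
            searchContent.ContentSources["Local Office SharePoint Server sites"];

        // Only kick off a crawl if one isn't already running.
        if (source.CrawlStatus == CrawlStatus.Idle)
        {
            source.StartIncrementalCrawl();
        }
    }
}
```

StartFullCrawl() works the same way if you ever need a complete re-crawl instead of an incremental one.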
Update:
The site title and description should be indexed automatically by the next crawl. If this isn't happening, then you don't have a Content Source that covers that site so you need to create/update one to cover the new sites and make sure it has a crawl schedule. If the new sites are created in separate site collections consider putting them on a Managed Path.
In our SharePoint system we have a terabyte of data, with 100,000 site collections and probably 20 new site collections added every day. We only have one content source that points to the root of the site, and everything gets indexed automatically.
It sounds like you're missing a content source or a crawl schedule.
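If a content source really is missing, here is a rough sketch of creating one programmatically with the same administration API; the content source name and start address are placeholders:

```csharp
using System;
using Microsoft.Office.Server.Search.Administration;
using Microsoft.SharePoint;

// Hedged sketch: create a SharePoint content source pointing at the root,
// so that new site collections beneath it are picked up automatically.
class ContentSourceSetup
{
    public static void EnsureContentSource()
    {
        SearchContext context;
        using (SPSite site = new SPSite("http://yourserver"))
        {
            context = SearchContext.GetContext(site);
        }

        Content searchContent = new Content(context);

        // "All portal sites" is a placeholder name for the new content source.
        SharePointContentSource source = (SharePointContentSource)
            searchContent.ContentSources.Create(
                typeof(SharePointContentSource), "All portal sites");
        source.StartAddresses.Add(new Uri("http://yourserver"));
        source.Update();
    }
}
```

You would still need to attach a crawl schedule to it, either from the same API or from the search administration UI.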
It turns out that the site description is included in the crawl by default. I tested the search default properties by creating a new site and assigning a unique text string to the description. After the next incremental crawl, I was able to search and find the unique string via the default SharePoint search.
I have not yet tested if the page description is included in the search scope by default, but I'm prepared to guess that it is. I will update my answer as soon as I get a chance to test this.
For some reason, search in my SharePoint site does not work.
I have set up the SSP, the scopes, the crawls, everything, but it still does not work.
Can someone explain to me how to setup the search? Maybe I did something wrong in the process.
It's not the simplest thing in the world to set up, as it's made up of a number of components.
You need to check each one to determine where your problem is.
Start from the crawl, and work your way forward to the search production on the page.
So check the following:
1. Check that servers have been set up to index pages. (You can see this under Services on Server in the Central Administration pages.)
2. Make sure they're all running correctly, not stuck in a half-started state.
3. Check the crawl log in your SSP to see if it is indexing anything. Try indexing different types of content (file shares, web sites, and SharePoint itself) and check each one. Note that you need a special IFilter plugin to index PDFs.
4. Check that your index is copied to the front-end server where it is used. If it's not, this may not have been configured (check the services running on servers again).
5. Check your site collection setup and ensure you have a Search site configured.
6. Ensure the site collection's search settings are configured to use that Search site.
7. Finally, check that the user doing the searching actually has access to the content being indexed.
Doing all of that should give you some idea of where the problem is.
In addition to Bravax's answer, it's worth checking that you are not getting stung by the local loopback check.
I had a similar problem and ended up using Search Server Express, which is free (see my answer at this link: sharepoint 2010 foundation search not working).
I have installed Search Server Express 2010 on top of SPF, and it works great. It has additional features and works well with SharePoint Foundation. Here is a link for upgrade and configuration: http://www.mssharepointtips.com/tip.asp?id=1086
You need to add the website to the content source, then run a full crawl to index the data.
I want to exclude certain pages from MOSS indexing like a confirmation page that sits in the pages library in the root of my site: http://server/Pages/ConfirmSignup.aspx
I can do this by going to search administration / search result removal and adding the url to the URLs to remove box.
Because I have dev, staging, UAT, and production environments, I want to script this. I could only find a command among Gary Lapointe's stsadm commands, but it adds an exclusion to a search scope, which does not seem to work for individual files, only folders. Since there are other files in my /Pages library, I can't use this.
How do I add search result removal urls programmatically?
The SPList object has a NoCrawl property. Setting this to true will ensure no items in the list will be indexed or appear in search results.
Unfortunately this doesn't go down to the SPListItem level. You would need to have an 'Admin' site and exclude its Pages list from indexing.
The advantage of this solution is its level of control. In some cases crawl rules are very complex or impossible to define correctly in the search configuration; this option avoids those issues.
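For completeness, a minimal sketch of setting that property from code; the site URL and list name are placeholders:

```csharp
using Microsoft.SharePoint;

// Hedged sketch: exclude an entire list from crawling via SPList.NoCrawl.
class ExcludeListFromCrawl
{
    public static void Exclude()
    {
        using (SPSite site = new SPSite("http://server"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList pages = web.Lists["Pages"];
            pages.NoCrawl = true; // the crawler will skip every item in this list
            pages.Update();       // persist the change
        }
    }
}
```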