I have searched for possible solutions for days, but have had no luck getting my SharePoint 2010 search to return results.
The search was working, but was only returning results from a subsite. I have gone through many blog posts and sites on setting up the search and still nothing. My last resort was to delete the search and reimplement it.
The search crawls just fine (no errors). Here are a couple of the blogs and sites I have tried (out of the many), but nothing seems to help.
http://social.msdn.microsoft.com/Forums/en-US/sharepoint2010setup/thread/688b5c52-f478-463b-bc00-debfd0c3be2b
http://sharepointgeorge.com/2010/configuring-enterprise-search-sharepoint-2010/
My setup is an intranet on a VM with SQL Server 2008 R2 (nothing out of the ordinary for the server; single farm). The search account has Full Read rights and is also included in all page permissions.
Here is a look at the log descriptions when a search is performed (no access denied errors). These results are all from the "Query Processor" category.
(w3wp.exe) PluggableSecurityTrimmerManager:SetSearchApplicationToUse: Set SearchApplication to 'Search Service Application'
(w3wp.exe) Resetting cookie: Old value = '', new value = 'nautilusRankDescending'
(mssearch.exe) 63239349-6356-4a02-96db-c40ffb223572-query-0: Query completed 109 ms, detailed time: Query stage execution ms times: 62 47 0 0 47 0 0 0 Query stage cpu ms times: 31 15 0 0 15 0 0 0 Query stage hit counts: 1 1 1 7 2 0 0 0 Cursor count: 13 Mapped page count: 16 Total index count: 1 [srequest.cxx:5526] d:\office\source\search\native\ytrip\tripoli\cifrmwrk\srequest.cxx
(w3wp.exe) Completed query execution with timings: total:140 dup:0 sec:0 join:0 ft:109 sql:31. Join Retry: 0. Security Trimming Retry: 0. Duplicate removal Retry: 0.
I am thoroughly baffled. Hopefully someone has had the same problem and can share how they fixed it.
One of the mistakes we made was using the default Network Service account for the application pool. Make sure you set up a separate domain account, e.g. domain\sp_search, dedicated to search.
Well, it has been quite the headache getting this to work. Below are some suggestions and checkpoints to keep in mind; I hope they help someone else out there.
If you have tried everything and your search still isn't working, delete the Search Service Application and start over. I did this and then went through the setup step by step.
Set up a proper service/system account to handle the search.
Make sure that your application pool is started (at one point, for some reason, mine had stopped itself).
I used this well-documented article about setting up search to walk myself back through the process (after I deleted my initial search):
http://blog.concurrency.com/sharepoint/search-configuration-in-sharepoint-2010/
It has screen shots and useful tips for setting up scopes and crawl rules, etc.
I am trying to set up monitors that detect penetration attempts against a website. For that, I am using the Alerting functionality in Open Distro (the older version of OpenSearch).
The problem is that we have one index per day: one for the current day and one for each of the 29 days before it, and as far as I know I can only select one of these 30 options.
Is there a workaround where I can create monitors for indices that haven't been created yet, or a persistent monitor that I don't have to modify every time a new day/index comes up?
Every answer or comment is very much appreciated!
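For context, a monitor in the Open Distro alerting plugin is bound to the indices listed in its search input. Below is a rough sketch of the create call; the endpoint is the documented Open Distro alerting API, but the monitor name, index pattern, and query are illustrative assumptions, not taken from the question. If the search input accepts an index pattern (worth verifying on your version), a trailing wildcard would also match daily indices that don't exist yet:
# Hedged sketch: create a monitor whose search input uses a wildcard
# index pattern, so future daily indices are matched automatically.
# Monitor name, index pattern, and query are assumptions for illustration.
curl -s -XPOST "http://localhost:9200/_opendistro/_alerting/monitors" \
  -H 'Content-Type: application/json' -d '{
  "type": "monitor",
  "name": "penetration-attempts",
  "enabled": true,
  "schedule": { "period": { "interval": 5, "unit": "MINUTES" } },
  "inputs": [{
    "search": {
      "indices": ["weblogs-*"],
      "query": { "size": 0, "query": { "match": { "status": 403 } } }
    }
  }],
  "triggers": []
}'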
I installed OpenStack via devstack on Ubuntu 14.04. I have 8 GB of RAM on my computer, and I have created around 8 VMs, which I don't use simultaneously.
Now I cannot create any more VMs. I get an error message:
No valid host was found. There are not enough hosts available.
Can someone advise what I should do?
Since you say this is a devstack installation, I'm assuming you aren't running it in a production environment. OpenStack lets you raise the over-subscription (allocation) ratio for RAM. By default it is 1.5 times the physical RAM available on the machine, so with 8 GB physical you should have about 12 GB of schedulable memory. To change the ratio:
sudo vim /etc/nova/nova.conf
# Add these two lines
ram_allocation_ratio=2
cpu_allocation_ratio=20  # the default is 16
These values are just rough estimates; adjust them to suit your environment, then restart devstack so the nova services pick up the new configuration.
To check whether the changes took effect, log into MySQL (or whichever database backs your devstack) and run:
mysql> use nova;
mysql> select * from compute_nodes \G
*************************** 1. row ***************************
created_at: 2015-09-25 13:52:55
updated_at: 2016-02-03 18:32:49
deleted_at: NULL
id: 1
service_id: 7
vcpus: 8
memory_mb: 12007
local_gb: 446
vcpus_used: 6
memory_mb_used: 8832
local_gb_used: 80
hypervisor_type: QEMU
disk_available_least: 240
free_ram_mb: 3175
free_disk_gb: 366
current_workload: 0
running_vms: 4
pci_stats: NULL
metrics: []
.....
1 row in set (0.00 sec)
The scheduler looks at free_ram_mb. If free_ram_mb is 3175 and you ask for a new m1.medium instance with 4096 MB of memory, the scheduler ends up logging this message:
WARNING nova.scheduler.manager Failed to schedule_run_instance: No valid host was found.
Hence, make sure to keep an eye out for those when starting a new VM after making those changes.
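To make the arithmetic concrete, here is a minimal sketch of the memory part of that scheduling decision, using the numbers from the compute_nodes row above (the real scheduler applies several filters; this only illustrates the RAM headroom calculation):
# Illustrative only: the RAM headroom check with the values shown above.
usable_mb=12007      # physical RAM * ram_allocation_ratio (~8 GB * 1.5)
used_mb=8832         # memory_mb_used from the compute_nodes row
flavor_mb=4096       # requested flavor, e.g. m1.medium
free_mb=$((usable_mb - used_mb))   # 3175, matching free_ram_mb above
if [ "$flavor_mb" -gt "$free_mb" ]; then
    echo "No valid host was found: need ${flavor_mb} MB, only ${free_mb} MB free"
fi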
I'm writing a script that does daily snapshots of users' home directories. First I do a dry run using:
rsync -azvrn --out-format="%M %f" source/dir dest/dir
and then the actual rsync operation (by removing the -n option).
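For context, the overall pattern looks roughly like this (the paths are the placeholders used above):
# Dry run first (-n); if it looks sane, repeat with the same flags minus -n.
rsync -azvrn --out-format="%M %f" source/dir dest/dir
rsync -azvr --out-format="%M %f" source/dir dest/dir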
I'm trying to parse the output of the dry run. Specifically, I'm interested in learning the exact cause of the rsync error (if one occurred). Does anyone know of:
The most common rsync errors and their codes?
A link to a comprehensive rsync error code page?
Most importantly, rsync (at least on CentOS 5) does not return an error code; rather, it displays the errors internally and returns 0, like this:
sending incremental file list
rsync: link_stat "/data/users/gary/testdi" failed: No such file or directory (2)
sent 18 bytes received 12 bytes 60.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
Has anyone had to parse rsync errors, and does anyone have a suggestion on how to store the rsync return state(s)? I believe that, when transferring multiple files, errors may be raised on a per-file basis and collected at the end, as shown in the last line of the output above.
Per the rsync man page, here are the error codes it can return and what they mean. If you're scripting it in bash, you can check $? after the call (a sketch follows the list):
0 Success
1 Syntax or usage error
2 Protocol incompatibility
3 Errors selecting input/output files, dirs
4 Requested action not supported: an attempt was made to manipulate 64-bit
files on a platform that cannot support them; or an option was specified
that is supported by the client and not by the server.
5 Error starting client-server protocol
6 Daemon unable to append to log-file
10 Error in socket I/O
11 Error in file I/O
12 Error in rsync protocol data stream
13 Errors with program diagnostics
14 Error in IPC code
20 Received SIGUSR1 or SIGINT
21 Some error returned by waitpid()
22 Error allocating core memory buffers
23 Partial transfer due to error
24 Partial transfer due to vanished source files
25 The --max-delete limit stopped deletions
30 Timeout in data send/receive
35 Timeout waiting for daemon connection
I've never seen a comprehensive "most common errors" list but I'm betting error code 1 would be at the top.
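As a minimal sketch of the bash idea mentioned above (the paths are the placeholders from the question; adapt as needed):
#!/bin/bash
# Run the dry run, capturing both the output and the exit status for parsing.
output=$(rsync -azvrn --out-format="%M %f" source/dir dest/dir 2>&1)
status=$?
if [ "$status" -ne 0 ]; then
    # 23 and 24 indicate partial transfers; anything else is a harder failure.
    echo "rsync exited with code $status" >&2
    printf '%s\n' "$output" >&2
fi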
I'm using SharePoint 2013 on Windows Server 2012. I noticed that the SP search component has 5 processes in Task Manager, each using about 400-500 MB of memory. Is this normal? I also tried:
Set-SPEnterpriseSearchService -PerformanceLevel Reduced
But it did not change anything. Should I restart the server?
I never noticed this on other SP servers I have worked on. Just curious: is it because of SP 2013's default settings?
Thanks.
user3211586's link worked for me. Basically, the article says:
Quick and Dirty
Kill the noderunner.exe (Microsoft SharePoint Search Component) process via Task Manager.
This will obviously break everything related to Search on the site.
Production
Change the Search Service Performance Level with PowerShell:
Get-SPEnterpriseSearchService | Set-SPEnterpriseSearchService -PerformanceLevel "PartlyReduced"
Performance Level Explained:
Reduced: Total number of threads = number of processors, Max Threads/host = number of processors
PartlyReduced: Total number of threads = 4 times the number of processors, Max Threads/host = 16 times the number of processors
Maximum: Total number of threads = 4 times the number of processors, Max Threads/host = 16 times the number of processors (threads are created at HIGH priority)
For the setting to take effect, do an IISReset or restart the Search Service in Central Administration.
I had the same issue as the OP, and running Set-SPEnterpriseSearchService -PerformanceLevel "PartlyReduced" followed by IISRESET /noforce resolved it for me.
Please check the article below:
http://social.technet.microsoft.com/wiki/contents/articles/20413.sharepoint-2013-performance-issues-with-search-service-application.aspx
When I tried this method and changed the config setting from 0 to any value between 1 and 500, it did reduce the memory usage, but search stopped working. After I reverted the setting to 0, memory usage went back up but search started working again.
Here's a dump of the stats provided by mod_pagespeed from one of my sites.
resource_url_domain_rejections: 6105
rewrite_cached_output_missed_deadline: 4801
rewrite_cached_output_hits: 116004
rewrite_cached_output_misses: 934
resource_404_count: 0
slurp_404_count: 0
total_page_load_ms: 0
page_load_count: 0
resource_fetches_cached: 0
resource_fetch_construct_successes: 45
resource_fetch_construct_failures: 0
num_flushes: 947
total_fetch_count: 0
total_rewrite_count: 0
cache_time_us: 572878
cache_hits: 872
cache_misses: 1345
cache_expirations: 242
cache_inserts: 1795
cache_extensions: 50799
not_cacheable: 0
css_file_count_reduction: 0
css_elements: 0
domain_rewrites: 0
google_analytics_page_load_count: 0
google_analytics_rewritten_count: 0
image_inline: 7567
image_rewrite_saved_bytes: 208854
image_rewrites: 34128
image_ongoing_rewrites: 0
image_webp_rewrites: 0
image_rewrites_dropped_due_to_load: 0
image_file_count_reduction: 0
javascript_blocks_minified: 12438
javascript_bytes_saved: 1173778
javascript_minification_failures: 0
javascript_total_blocks: 12439
js_file_count_reduction: 0
converted_meta_tags: 902
url_trims: 54765
url_trim_saved_bytes: 1651244
css_filter_files_minified: 0
css_filter_minified_bytes_saved: 0
css_filter_parse_failures: 2
css_image_rewrites: 0
css_image_cache_extends: 0
css_image_no_rewrite: 0
css_imports_to_links: 0
serf_fetch_request_count: 1412
serf_fetch_bytes_count: 12809245
serf_fetch_time_duration_ms: 28706
serf_fetch_cancel_count: 0
serf_fetch_active_count: 0
serf_fetch_timeout_count: 0
serf_fetch_failure_count: 0
Can someone please explain what all of the stats mean?
There are a lot of stats here, so I'm going to describe just a few of them; otherwise this will get long. We should probably add detailed docs. I can follow up with more answers later if these are useful.
resource_url_domain_rejections: 6105: this means that since your server restarted, mod_pagespeed has found 6105 resources that it will not rewrite because their domains are not authorized for rewriting with the ModPagespeedDomain directive. This is common and occurs any time someone refreshes a page with a Twitter, Facebook, or Google+ widget.
rewrite_cached_output_missed_deadline: 4801: when a resource (e.g. a JPEG image) is optimized, the work happens in a background thread, and the result is cached so that subsequent page views referencing the same resource are fast. To avoid slowing down the first view, however, we use a 10 millisecond deadline so we don't delay the time-to-first-byte. This stat counts how many times that deadline was exceeded; in those cases the resource is left unchanged for that view, but the optimization continues in the background and the cache still gets written.
rewrite_cached_output_hits: 116004: counts the number of times we served an optimized resource from the cache, thus avoiding the need to re-optimize it.
rewrite_cached_output_misses: 934: counts the number of times we looked up a resource in our cache and it wasn't there, forcing us to rewrite it. Note that we would also rewrite a resource that was in the cache but whose origin cache expiration time had passed. E.g. if your images had cache-control:max-age=600, then we would re-fetch them every 10 minutes to see whether they've changed; if they have changed, we must re-optimize them.
num_flushes: 947: this is the number of times the Apache resource generator for the HTML (e.g. mod_php or WordPress) called the Apache function ap_flush(), which causes partial HTML to be flushed all the way through to the user's browser. This is interesting for mod_pagespeed because it can limit the amount of optimization we can do (e.g. we can't combine CSS files whose elements are separated by a flush).
cache_time_us: 572878 - the total amount of time, in microseconds, spent waiting for mod_pagespeed's HTTP Cache (file + memory) to respond to a lookup request, since the server was started.
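If you want to watch these counters change in real time, they are typically exposed on mod_pagespeed's statistics page; here is a minimal sketch, assuming the handler is enabled and mapped at the default /mod_pagespeed_statistics location (verify the path in your Apache config):
# Fetch the live counters and filter for the cache-related ones.
curl -s http://localhost/mod_pagespeed_statistics | grep cache_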
I think that's enough for now. Are there specific other statistics you'd like to learn more about?
Most of these were created for us to monitor the health of mod_pagespeed as it runs, and to help diagnose users' issues. I have to admit we haven't used them much for that purpose yet, but we do use them during development.