Unable to index pages in indexed_search TYPO3 6

I'm using the indexed_search, indexed_search_mysql and crawler extensions for my website. I created a crawler record under Crawler Configuration and ran it. It crawls through all the pages successfully.
The configurations index_enable and index_externals are set to true.
The problem is that none of the crawled content shows up in the index tables. Info > Indexed Search shows all the pages as 'not indexed'.
indexed_search only works when disableFrontendIndexing is set to false, but then I'd have to visit every page myself.
Are there any other configurations that I'm missing here?

Indexed Search only indexes pages that are cached. So you might want to check whether something like
config.no_cache = 1
is set. In that case you won't get anything indexed. You should also clear all caches before crawling through your pages, so that they have to be cached again.
If it's still not working, you can try whether
config.index_enable = 1
and
page.config.index_enable = 1
make any difference.
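Put together, a minimal TypoScript setup along these lines might look like the sketch below. Note that disableFrontendIndexing is an extension configuration setting of indexed_search, not TypoScript, so it is not shown here; comments sit on their own lines because TypoScript values run to the end of the line.

```typoscript
config {
  # indexing requires cacheable pages
  no_cache = 0
  # let indexed_search index cached pages
  index_enable = 1
  # also index external files (PDF, DOC, ...) linked from pages
  index_externals = 1
}
```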

In the standard configuration the crawler only writes the pages to index into a queue. You have to process the queue via "Add Process". Did you do this?
Or via the CLI:
/usr/bin/php /var/www/example.com/typo3/cli_dispatch.phpsh crawler
There is an option to do it in one run:
/usr/bin/php /var/www/example.ch/typo3/cli_dispatch.phpsh crawler_im 597 -d 9 -conf yourconfititle -o exec
More info about the CLI in the docs here:
https://docs.typo3.org/typo3cms/extensions/crawler/ExtCrawler/ExecutingTheQueue/BuildingAndExecutingQueueRightAway(fromCli)/Index.html


I want to add the raw content which is stored in the segment folder (Nutch version 1.17)

While running the command below:
bin/nutch solrindex http://localhost:8983/solr/nutch/ testingnewline/crawldb -linkdb testingnewline/linkdb -dir testingnewline/segments/ -deleteGone -addBinaryContent
it throws the exception below:
Error: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://localhost:8983/solr/nutch: ERROR: [doc=https://www.saintlukeskc.org/] Error adding field 'binaryContent'
May I know what changes I need to make to schema.xml? Please help me.
The Solr schema must contain the field "binaryContent"; see Nutch's default Solr schema.xml, which contains all the fields eventually added by Nutch or one of its plugins.
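For illustration, the relevant entry in Nutch's default schema.xml looks roughly like this. Treat it as a sketch and compare against the schema.xml shipped with your Nutch release, since attribute values may differ slightly between versions:

```xml
<!-- stores the raw fetched bytes when -addBinaryContent is passed -->
<field name="binaryContent" type="binary" indexed="false" stored="true"/>
```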

Symfony3 customizing error pages (404, 403...)

I don't know what is wrong, but I couldn't override the error pages as described in the documentation by simply creating the related files at app/Resources/TwigBundle/views/Exception/error.404.html.twig or app/Resources/TwigBundle/views/Exception/error.403.html.twig and so on...
Note that I also cleared the cache before checking: bin/console cache:clear --env=prod. I'm using Symfony 3.0.6.
The Twig file name should be:
error404.html.twig
not:
error.404.html.twig
Let me know if you still have problems. This should work, because I've used it.
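A quick way to apply the fix from the shell. The demo below runs in a scratch directory so it is safe to try; in your project, run the mv from the application root instead:

```shell
cd "$(mktemp -d)"                                                    # scratch dir for the demo
mkdir -p app/Resources/TwigBundle/views/Exception
touch app/Resources/TwigBundle/views/Exception/error.404.html.twig   # wrong name
mv app/Resources/TwigBundle/views/Exception/error.404.html.twig \
   app/Resources/TwigBundle/views/Exception/error404.html.twig       # correct name
ls app/Resources/TwigBundle/views/Exception                          # -> error404.html.twig
```

Afterwards, clear the cache again (bin/console cache:clear --env=prod) so Twig picks up the renamed template.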

Maximum execution time of 300 seconds exceeded

I know this subject has many duplicates here and there, but trust me, I've spent time reading those posts and my question remains unanswered.
I'm running a PHP script on a Debian Linux / Nginx / PHP-FPM / APC stack; I think that's all.
I'm executing the script from my SSH terminal (CLI) like this: php plzrunthisFscript.php &
It used to work flawlessly but now, it returns this famous error.
What I've tried ALREADY, and what failed (I mean it didn't change anything):
Checked which php.ini is used with a phpinfo(); inside my script (it's /etc/php5/cli/php.ini)
max_execution_time is hardcoded to 0 (meaning no limit) for the CLI, if I'm not mistaken.
Tried adding set_time_limit(0); at the beginning of my script
Tried adding ini_set('max_execution_time', -1);
Tried executing my php command with the parameter -d max_execution_time=0
Tried executing from a web interface (served by Nginx)
Tried executing from another web interface (served by Apache2 this time): gives the same error on the page.
Tried setting max_execution_time = 15 in /etc/php5/cli/php.ini to check whether the php.ini that is supposed to be used (per phpinfo() inside my script) is honored or ignored: it's ignored.
Every time, it raises this error, and sometimes even after more than 300 seconds I think, which is really confusing.
If someone has any idea on how to fix this, or has some things I can try, please advise.
Thank you for your time.
God, I finally solved it!
I think this could be considered a Magento bug, in a way.
Here is how to reproduce it:
You have Magento (my version is CE 1.7) running with Apache 2.
One day, you decide to get rid of Apache and adopt Nginx.
You configure everything and it works great, but one day you end up with this error while trying to rebuild your indexes as you often do.
The thing is, when you run (for example): php indexer.php --reindex catalog_url &
this script includes another one called abstract.php, which contains this awesome function:
protected function _applyPhpVariables()
This function looks for a .htaccess in your Magento root directory, parses every single configuration parameter, and executes the index script with these parameters.
How clever...
Anyway, to solve this you just need to (delete / rename / burn) this .htaccess file, and then everything will go back to normal.
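A minimal way to neutralize the file, sketched in a scratch directory with a stand-in .htaccess; in practice, run the mv from your Magento root:

```shell
cd "$(mktemp -d)"                  # scratch dir for the demo
touch .htaccess                    # stand-in for the real Magento .htaccess
mv .htaccess .htaccess.disabled    # _applyPhpVariables() no longer finds it
ls -A                              # -> .htaccess.disabled
```

Renaming rather than deleting lets you restore the file if you ever move back to Apache.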
Thanks for your help everyone.

Kohana 2 returns empty page in browser only

My app is built on Kohana 2. At first I built it on Windows with PHP 5.4 and it worked well; now I've moved it to Fedora with PHP 5.5.9 and it always returns an empty page in the browser, but if I run it from the command line it works well.
I noticed that in the bootstrap file, after the line that requires Kohana, it returns an empty page; before this line I can print something to the browser.
There are no errors in the server logs. Can anyone help me? I've tried many ways, but it still returns an empty page.
Many things changed in PHP >= 5.4, and probably some Kohana library files are causing an issue. One possible culprit is the MySQL connection type, because the old mysql extension is deprecated in favour of mysqli/PDO.
https://wiki.php.net/rfc/mysql_deprecation

Getting capybara and cucumber to log more information

I'm having problems with my cucumber/capybara setup and was wondering...
How can I get more information out of cucumber and capybara to see what's going on?
I've tried running
bundle exec cucumber features/myfeature.feature -v -b -x
But that just shows which .rb files are loaded and which feature is being run. I want to know what on earth it is actually executing. All it shows me is:
F_______________F
Which is completely unhelpful.
What kind of information are you looking to see?
You can try adding the --format=pretty option: it prints out each step as it's being processed, together with the file location of the step definition that matches it, so you can see the status of each step (passed, failed, skipped, pending, etc.).
I miss some real logging support in Cucumber for debugging steps, but it seems the current way to log from steps is to "announce" messages. Announcements are turned on with the @announce tag. Try tagging your feature/scenario with @announce; that's the best option I've discovered so far.
