{"error":"not_found","reason":"missing"} error while running couchdb-lucene on windows - couchdb

I am running CouchDB and Couchdb-lucene on Windows Server 2019 version 1809.
I followed all the steps documented at https://github.com/rnewson/couchdb-lucene
My CouchDB local.ini file
[couchdb]
os_process_timeout = 60000
[external]
fti=D:/Python/python.exe "C:/couchdb-lucene-2.2.0/tools/couchdb-external-hook.py --remote-port 5986"
[httpd_db_handlers]
_fti = {couch_httpd_external, handle_external_req, <<"fti">>}
[httpd_global_handlers]
_fti = {couch_httpd_proxy, handle_proxy_req, <<"http://127.0.0.1:5986">>}
couchdb-lucene.ini file
[lucene]
# The output directory for Lucene indexes.
dir=indexes
# The local host name that couchdb-lucene binds to
host=localhost
# The port that couchdb-lucene binds to.
port=5986
# Timeout for requests in milliseconds.
timeout=10000
# Timeout for changes requests.
# changes_timeout=60000
# Default limit for search results
limit=25
# Allow leading wildcard?
allowLeadingWildcard=false
# couchdb server mappings
[local]
url = http://localhost:5984/
Curl outputs
C:\Users\serhato>curl http://localhost:5986/_fti
{"couchdb-lucene":"Welcome","version":"2.2.0-SNAPSHOT"}
C:\Users\serhato>curl http://localhost:5984
{"couchdb":"Welcome","version":"3.1.1","git_sha":"ce596c65d","uuid":"cc1269d5a23b98efa74a7546ba45f1ab","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
The design document I defined in CouchDB, which aims to create a full-text search index for the RenderedMessage field:
{
"_id": "_design/foo",
"_rev": "11-8ae842420bb4e122514fea6f05fac90c",
"fulltext": {
"by_message": {
"index": "function(doc) { var ret=new Document(); ret.add(doc.RenderedMessage); return ret }"
}
}
}
When I navigate to http://localhost:5984/dev-request-logs/_fti/_design/foo/by_message?q=hello the response is
{"error":"not_found","reason":"missing"}
When I also navigate to http://localhost:5984/dev-request-logs/_fti/ the response is the same:
{"error":"not_found","reason":"missing"}
I think there is a problem with the external integration to the Lucene engine, so out of curiosity I tried executing the Python command to check whether the script runs:
D:/Python/python.exe C:/couchdb-lucene-2.2.0/tools/couchdb-external-hook.py
but the result is
C:\Users\serhato>D:/Python/python.exe C:/couchdb-lucene-2.2.0/tools/couchdb-external-hook.py
File "C:\couchdb-lucene-2.2.0\tools\couchdb-external-hook.py", line 43
except Exception, e:
^
SyntaxError: invalid syntax
What might be the problem?

After hours of searching I finally found this link:
https://github.com/rnewson/couchdb-lucene/issues/265
The query must go directly through Lucene, not through CouchDB itself. The URL below returns the result:
C:\Users\serhato>curl http://localhost:5986/localx/dev-requestlogs/_design/foo/by_message?q=hello
The original documentation is very misleading, as all the examples use CouchDB's default port rather than Lucene's.
Or am I missing something?
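As an aside, the SyntaxError shown above is unrelated to the port issue: `except Exception, e:` is Python 2-only syntax, so the bundled hook script cannot run under a Python 3 interpreter at all. A minimal sketch of the syntax difference (the function below is illustrative, not taken from the actual script):

```python
# Python 2 wrote:   except Exception, e:
# Python 3 requires: except Exception as e:
def parse_or_default(value, default=0):
    """Illustrative helper showing the Python 3 'except ... as e' form."""
    try:
        return int(value)
    except Exception as e:  # Python 2 spelled this line 'except Exception, e:'
        print("falling back to default: %s" % e)
        return default
```

Running the script under Python 2, or porting its `except` clauses to the `as` form, avoids the SyntaxError (though, as the linked issue explains, the proxy hook is not needed when querying Lucene's port directly).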

Related

How to control log level in Firefox with arsenic

I am running an arsenic (Firefox + Geckodriver) script in a Docker container. I am trying to control log levels in the Browser like this:
from arsenic import browsers
browser = browsers.Firefox(
    **{
        'moz:firefoxOptions': {
            'args': ['-headless'],
            'binary': '/usr/bin/firefox',
        },
        'log': {'level': 'warning'},
    }
)
However, when I try running it, I get:
ERROR:asyncio:Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f5c93762df0>
ERROR:asyncio:Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7f5c92dd9940>, 25597.978260712)]']
connector: <aiohttp.connector.TCPConnector object at 0x7f5c937629d0>
ERROR:root:
Got ('log is not the name of a known capability or extension capability', None, '') with https://example.com.
Docker image: python:3.9
Firefox: 94.0b7 (linux-x86_64)
Geckodriver: v0.30.0 (linux-x86_64)
I've checked a bunch of times and the syntax of the Firefox options seems to be OK. If anyone has a clue to solving this, that would be much appreciated.
So, after reading https://developer.mozilla.org/en-US/docs/Web/WebDriver/Capabilities/firefoxOptions one more time, I realized that the log setting needs to be passed as a command-line argument.
This works for me:
browser = browsers.Firefox(
    **{
        'moz:firefoxOptions': {
            'args': ['-headless', '-log', "{'level': 'warning'}"],
            'binary': '/usr/bin/firefox',
        }
    }
)
I'll have to play around a bit more to see whether it should be warning or warn, but passing the log argument works.
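For what it's worth, the geckodriver capabilities documentation describes log as a field nested inside moz:firefoxOptions rather than a sibling top-level capability, which matches the "log is not the name of a known capability" error in the question (where log sat outside the options block). A sketch of that shape, showing only the dict structure (whether arsenic forwards it unchanged is an assumption here; geckodriver's documented levels include warn, not warning):

```python
# Sketch: per the geckodriver docs, 'log' belongs inside 'moz:firefoxOptions'.
# In the question's snippet it was a sibling of 'moz:firefoxOptions', which
# geckodriver rejects as an unknown capability.
firefox_options = {
    'moz:firefoxOptions': {
        'args': ['-headless'],
        'binary': '/usr/bin/firefox',
        'log': {'level': 'warn'},  # documented levels: trace, debug, config, info, warn, error, fatal
    }
}
```

With arsenic this dict would then be splatted as before, e.g. `browsers.Firefox(**firefox_options)`.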

Golang htaccess configure without Nginx or Apache

I've created a web app and analyzed it with Google site analyzer.
In most cases the advice is to configure an .htaccess file. As I understand it, this file can only be used on an Nginx or Apache server, but I don't want to use either of those.
I want to achieve the same configuration with Go tools only. Currently my app is running on a VPS.
This project allows you to support the HTTP auth standard with Go, with zero Apache code.
You can even use a password file created with the Apache htpasswd (bad) or htdigest (good) commands:
https://github.com/abbot/go-http-auth
You don't need .htaccess, as it's only meant for Apache:
http://httpd.apache.org/docs/2.2/howto/htaccess.html
Even if you used Apache, external services like Google Site Analyzer couldn't see .htaccess, since Apache never serves it; it's kept private.
Everything Apache can do with .htaccess, Go can do with net/http or with a third-party package like Gorilla to help.
If you want to add some constraints, you can use something like the following.
package main

import (
	"io/fs"
	"net/http"
	"path/filepath"

	"github.com/gorilla/mux"
)

// TxtFileSystem wraps an http.FileSystem and only exposes .txt files.
type TxtFileSystem struct {
	http.FileSystem
}

func (txtFS TxtFileSystem) Open(path string) (http.File, error) {
	if filepath.Ext(path) != ".txt" {
		return nil, &fs.PathError{Op: "open", Path: path, Err: fs.ErrNotExist}
	}
	return txtFS.FileSystem.Open(path)
}

func main() {
	m := mux.NewRouter()
	// StripPrefix so "/doc/readme.txt" maps to "./doc/readme.txt" on disk.
	m.PathPrefix("/doc/").Handler(http.StripPrefix("/doc/", http.FileServer(TxtFileSystem{http.Dir("./doc")})))
	http.ListenAndServe(":8080", m) // the original snippet never started the server
}
That will only allow visiting files whose extension is .txt.

.arangosh.rc not sourced on Mac OSX

I am following the ArangoDB documentation, currently the section ArangoDB Shell Configuration. It describes an .arangosh.rc file, sourced from your home directory, that places custom code into the Arango shell's global scope. Following the documentation to a T, I've made an .arangosh.rc file in my home directory (~/.arangosh.rc) and added the example function
timed = function (cb) {
var internal = require("internal");
var start = internal.time();
cb();
internal.print("execution took: ", internal.time() - start);
};
I've tried exiting and restarting the Arango shell, as well as completely restarting my terminal session, but I can't get arangosh to source the rc file. When I try invoking timed() I get a
ReferenceError: timed is not defined
As far as I can see the condition for sourcing ~/.arangosh.rc changed somewhere in 2.6, but this looks like an error to me. I have reverted that change in the 2.7, 2.8 and devel branches, so the file will get sourced there now. The fix will be contained in the next official releases.
If you want to apply it before that, the commit id for 2.7 is 8e85a2fbb67c8c50c75cf93aefb7365e1e9fd7d1
It also looks like, in 2.7, any "globals" in the rc file need to be attached to the global object. For example,
timed = function (cb) { ... };
should become
global.timed = function (cb) { ... };
I have also updated the docs to reflect this change.

PDO Fried my Websites - Any tips?

I've spent the last several months upgrading my websites, including upgrading my database queries to PDO. I published a test query online a while ago and got an error message, so I checked with my webhost before wading into all the technical fixes I Googled. He said the problem was simple: PDO wasn't installed on my server.
So he installed it - and all my websites crashed.
I checked back, and another tech told me there's a conflict between PDO and a line in my .htaccess files -
php_flag magic_quotes_gpc Off
So I commented out that line. That restored things to a point, but I'm now getting this message...
Warning: include(/home/geobear2/public_html/2B/dbc.php) [function.include]: failed to open stream: Permission denied in /home/symbolos/public_html/1A/ACE.php on line 67
dbc.php is simply an included file with my database connection; all my websites include it from the main site. I checked, and it's where it's supposed to be. Actually, I get a similar error with a second included page. And here's an additional error:
Warning: include() [function.include]: Failed opening '/home/geobear2/public_html/2B/dbc.php' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/symbolos/public_html/1A/ACE.php on line 67
Does anyone have any idea what's going on here? Can PDO somehow disrupt include links between websites? I'm totally confused. Thanks.
P.S. I downloaded the online file that includes the database connection file. Here's the relevant code...
$path = $_SERVER['REQUEST_URI'];
$path2 = str_replace('/', '', $path);
$Section = current(explode('/', ltrim($path, '/'), 2)); // Main line
$Sections = array('Introduction', 'Topics', 'World', 'Ecosymbols', 'Glossary', 'Reference', 'Links', 'About', 'Search');
if ( ! in_array($Section, $Sections))
{
    // die('Invalid section: ' . $Section);
}
switch (PHP_OS)
{
    case 'Linux':
        $BaseINC = '/home/geobear2/public_html';
        $BaseURL = 'http://www.geobop.org';
        break;
    case 'Darwin':
        // Just some code for my local includes...
        break;
    default:
        break;
}
include ($BaseINC."/2B/dbc.php");

Puppet breaks with multiple node inheritances

Puppet on the tst-01 works fine when using:
node "tst-01" inherits basenode {
But it breaks when I try to organize servers into groups with this configuration:
node "tst-01" inherits redhat6server {
The error with "inherits redhat6server" is:
[root@tst-01 ~]# puppet agent --test
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to parse template ldap/access.conf: Could not find value for 'netgroup' at 124:/etc/puppet/modules/ldap/templates/access.conf at /etc/puppet/modules/ldap/manifests/init.pp:82 on node tst-01.tst.it.test.com
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
This is the access.conf file, that works fine if inherits is set to "inherits basenode".
[root@puppet]# grep -v "#" /etc/puppet/modules/ldap/templates/access.conf
+ : root : LOCAL
+ : @<%= netgroup %> : ALL
- : ALL : ALL
[root@puppet]#
This is the configuration in /etc/puppet/manifests/nodes.pp.
# Basenode configuration
node "basenode" {
include resolv_conf
include sshd
include ntpd
include motd
}
# Groups
node "redhat6server" inherits basenode {
include ldap_auth
}
# Testservers
node "tst-01" inherits redhat6server {
$netgroup = tst-01
}
I am planning to bring more organisation (read: avoid configuration repetition) into nodes.pp by grouping machines, e.g. RH5 and RH6 machines, instead of adding multiple lines of includes for every RH5 and RH6 server.
You're running into a variable scoping problem. The official documentation discusses this issue.
In short, redhat6server doesn't have access to the netgroup variable.
The method I employ to work around this is to use Hiera. With this, the ldap_auth module can pull the value from a Hiera configuration file (typically a YAML file in /etc/puppet/hiera).
You would define ldap_auth like this:
ldap_auth/manifests/init.pp:
class ldap_auth($netgroup=hiera('netgroup')) {
...
}
Or, if you're on Puppet 3.x, you can use automatic parameter lookup:
class ldap_auth($netgroup) {
...
}
And have a YAML file with:
ldap_auth::netgroup: 'netgroup'
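To make the Hiera lookup concrete, here is a minimal sketch of the wiring, assuming the classic Puppet 3-era /etc/puppet/hiera layout (the paths, hierarchy levels, and the per-node filename are illustrative, not taken from the question):

```yaml
# /etc/puppet/hiera.yaml -- illustrative hierarchy: per-node data first, then common
:backends:
  - yaml
:hierarchy:
  - "%{::clientcert}"
  - common
:yaml:
  :datadir: /etc/puppet/hiera

# /etc/puppet/hiera/tst-01.tst.it.test.com.yaml -- per-node value
# ldap_auth::netgroup: 'tst-01'
```

With this in place, `include ldap_auth` on tst-01 resolves netgroup from the node's YAML file instead of relying on node-scope variables leaking through inheritance.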
