I've followed the 'Getting Started' guide in Cayley's documentation and installed Cayley on my remote server:
Getting Started: https://github.com/google/cayley
Server OS: CentOS 7.2.1511
I've added cayley to my $PATH:
echo $PATH :
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/csse/cayley/src/github.com/google/cayley
Here is my config file at /etc/cayley.cfg:
{
"database": "leveldb",
"db_options": {
"cache_size_mb": 2,
"write_buffer_mb": 20
},
"db_path": "~/cayley/src/github.com/google/cayley/data/testdata.nq",
"listen_host": "127.0.0.1",
"listen_port": "64210",
"read_only": false,
"replication_options": {
"ignore_missing": false,
"ignore_duplicate": false
},
"timeout": 30
}
I serve Cayley over HTTP by simply running:
cayley http
and the terminal outputs:
Cayley now listening on 127.0.0.1:64210
On my main machine (Mac OS X 10.10.5 Yosemite), I've used npm to install the cayley package and written a test:
testconnection.js:
var cayley = require('cayley');
var client = cayley("137.112.104.107");
var g = client.graph;
g.V().All(function(err, result) {
  if (err) {
    console.log('error:', err);
  } else {
    console.log('result:', result);
  }
});
However, it fails when I run it: node testconnection.js
error: Error: Invalid URI "137.112.104.107/api/v1/query/gremlin"
I'd like to connect to Cayley and modify the database from my test. I've found a great PowerPoint presentation full of Cayley information:
https://docs.google.com/presentation/d/1tCbsYym1kXWWDcnRU9ymj6xP0Nvgq-Qhy9WDmqWcM-o/edit#slide=id.g3776708f1_0319
As well as pertinent Cayley docs:
Overview Doc
Configuration Doc
HTTP API Doc
And a post on Stack Overflow:
Cayley db user and password protection over HTTP connections
But I'm struggling to come up with a way to connect to Cayley (on my remote machine) from my local machine. I'd like to connect through the npm cayley package if possible, but am open to other options. Where am I going wrong?
Edit #1
I've prepended "http://" to my IP, so now it reads http://137.112.104.107. At that point, I solved another issue by running
cayley init --config=/etc/cayley.cfg
as mentioned by the author here
I've also removed listen_host and listen_port from my config file (each individually first, then both), yet I still get the same socket hang up error. Here's a printout of client from the test script:
Client {
host: 'http://137.112.104.107',
request:
{ [Function]
get: [Function],
head: [Function],
post: [Function],
put: [Function],
patch: [Function],
del: [Function],
cookie: [Function],
jar: [Function],
defaults: [Function] },
graph: Gremlin { client: [Circular], query: [Function] },
g: Gremlin { client: [Circular], query: [Function] },
write: [Function: bound ],
delete: [Function: bound ],
writeFile: [Function: bound ]
}
Your Cayley server is listening on 127.0.0.1 / localhost and is therefore not reachable from another machine. To be able to reach it from a virtual machine or another computer on your network, it needs to bind to an interface that is reachable.
If you configure the listen host as 0.0.0.0, find out your machine's network IP (I assume 137.112.104.107), and connect to that, it should work; otherwise you may need to open or forward the port on your firewall (depending on your network).
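For example, a minimal sketch based on the config and client code from the question (I'm assuming here that the cayley npm client accepts a host string with an explicit port):
// /etc/cayley.cfg -- bind to all interfaces instead of the loopback only
//   "listen_host": "0.0.0.0",
//   "listen_port": "64210",

// testconnection.js on the Mac -- include the scheme and the port
var cayley = require('cayley');
var client = cayley('http://137.112.104.107:64210');
var g = client.graph;

g.V().All(function (err, result) {
  if (err) {
    console.log('error:', err);
  } else {
    console.log('result:', result);
  }
});
If it still hangs after that, the firewall on the CentOS box (firewalld by default on CentOS 7) is the next thing to check.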
My setup is very simple and small:
C:\temp\npm [master]> tree /f
Folder PATH listing for volume OSDisk
Volume serial number is F6C4-7BEF
C:.
│ .gitignore
│ 1.js
│ package.json
│
└───.vscode
launch.json
C:\temp\npm [master]> cat .\package.json
{
"name": "node-modules",
"version": "1.0.0",
"description": "",
"main": "index.js",
"dependencies": {
"emitter": "http://github.com/component/emitter/archive/1.0.1.tar.gz",
"global": "https://github.com/component/global/archive/v2.0.1.tar.gz"
},
"author": "",
"license": "ISC"
}
C:\temp\npm [master]> npm config list
; cli configs
metrics-registry = "https://registry.npmjs.org/"
scope = ""
user-agent = "npm/6.14.12 node/v14.16.1 win32 x64"
; userconfig C:\Users\p11f70f\.npmrc
https-proxy = "http://127.0.0.1:8888/"
proxy = "http://127.0.0.1:8888/"
strict-ssl = false
; builtin config undefined
prefix = "C:\\Users\\p11f70f\\AppData\\Roaming\\npm"
; node bin location = C:\Program Files\nodejs\node.exe
; cwd = C:\temp\npm
; HOME = C:\Users\p11f70f
; "npm config ls -l" to show all defaults.
C:\temp\npm [master]>
Notes:
The proxy addresses correspond to Fiddler.
Notice that the emitter dependency URL uses http whereas the global one uses https.
When I run npm install it starts and then hangs very quickly. And I know why, because Fiddler tells me:
The request is:
GET http://github.com:80/component/emitter/archive/1.0.1.tar.gz HTTP/1.1
connection: keep-alive
user-agent: npm/6.14.12 node/v14.16.1 win32 x64
npm-in-ci: false
npm-scope:
npm-session: 74727385b32ebcbf
referer: install
pacote-req-type: tarball
pacote-pkg-id: registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz
accept: */*
accept-encoding: gzip,deflate
Host: github.com:80
And the response is:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com:80/component/emitter/archive/1.0.1.tar.gz
Now this is BS, pardon my French, because the returned Location value of https://github.com:80/component/emitter/archive/1.0.1.tar.gz is invalid. But I suppose the server is not very smart: it just redirects to https without changing anything else, including the port, which remains 80 (good for http, wrong for https). This explains the hanging: the fetch API used by npm seems to retry at progressively longer delays, which creates the illusion of hanging.
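For reference, here is a rough sketch of the delays implied by the retry settings that show up further down (retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000), assuming the usual exponential backoff formula of the retry package:
const retryOpts = { retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000 };

for (let attempt = 0; attempt < retryOpts.retries; attempt++) {
  // delay = minTimeout * factor^attempt, capped at maxTimeout
  const delay = Math.min(retryOpts.minTimeout * Math.pow(retryOpts.factor, attempt), retryOpts.maxTimeout);
  console.log('retry #' + (attempt + 1) + ' after ~' + delay / 1000 + 's');
}
// prints: retry #1 after ~10s, retry #2 after ~60s. That is over a minute of
// waiting in total, which is easily mistaken for a hang.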
Debugging npm brings me to the following code inside C:\Program Files\nodejs\node_modules\npm\node_modules\npm-registry-fetch\index.js:
return opts.Promise.resolve(body).then(body => fetch(uri, {
agent: opts.agent,
algorithms: opts.algorithms,
body,
cache: getCacheMode(opts),
cacheManager: opts.cache,
ca: opts.ca,
cert: opts.cert,
headers,
integrity: opts.integrity,
key: opts.key,
localAddress: opts['local-address'],
maxSockets: opts.maxsockets,
memoize: opts.memoize,
method: opts.method || 'GET',
noProxy: opts['no-proxy'] || opts.noproxy,
Promise: opts.Promise,
proxy: opts['https-proxy'] || opts.proxy,
referer: opts.refer,
retry: opts.retry != null ? opts.retry : {
retries: opts['fetch-retries'],
factor: opts['fetch-retry-factor'],
minTimeout: opts['fetch-retry-mintimeout'],
maxTimeout: opts['fetch-retry-maxtimeout']
},
strictSSL: !!opts['strict-ssl'],
timeout: opts.timeout
}).then(res => checkResponse(
opts.method || 'GET', res, registry, startTime, opts
)))
And when I stop at the right moment, this boils down to the following values:
uri
'http://github.com/component/emitter/archive/1.0.1.tar.gz'
agent:undefined
algorithms:['sha1']
body:undefined
ca:null
cache:'default'
cacheManager:'C:\\Users\\p11f70f\\AppData\\Roaming\\npm-cache\\_cacache'
cert:null
headers:{
npm-in-ci:false
npm-scope:''
npm-session:'413f9b25525c452a'
pacote-pkg-id:'registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz'
pacote-req-type:'tarball'
referer:'install'
user-agent:'npm/6.14.12 node/v14.16.1 win32 x64'
}
integrity:undefined
key:null
localAddress:undefined
maxSockets:50
method:'GET'
noProxy:null
proxy:'http://127.0.0.1:8888/'
referer:'install'
retry:{retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000}
strictSSL:false
timeout:0
I have omitted two values whose significance I truly do not know: opts.Promise and opts.memoize. It is possible that they are crucial; I do not know.
Anyway, when I step over this statement, the aforementioned session appears in Fiddler with the bogus URL of http://github.com:80/component/emitter/archive/1.0.1.tar.gz, and I do not understand how. The debugger clearly shows that the uri parameter passed to fetch does not specify the port number.
I thought maybe it is some kind of non-string type, but typeof uri returns 'string'.
I have even written a tiny reproduction to execute just this request using the same arguments, except for opts.Promise and opts.memoize:
const fetch = require('make-fetch-happen')
fetch('http://github.com/component/emitter/archive/1.0.1.tar.gz', {
algorithms: ['sha1'],
cache: 'default',
cacheManager: 'C:\\Users\\p11f70f\\AppData\\Roaming\\npm-cache\\_cacache',
headers:{
"npm-in-ci":false,
"npm-scope":"",
"npm-session":"00b5bb97075e3c35",
"user-agent":"npm/6.14.12 node/v14.16.1 win32 x64",
"referer":"install",
"pacote-req-type":"tarball",
"pacote-pkg-id":"registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz"
},
maxSockets: 50,
method: 'GET',
proxy: 'http://127.0.0.1:8888',
referer: 'install',
retry: {
retries: 2,
factor: 10,
minTimeout: 10000,
maxTimeout: 60000
},
strictSSL: false,
timeout: 0
}).then(res => console.log(res))
But it shows up correctly in Fiddler - no port is added and hence the redirection works fine.
When there is no Fiddler (and hence no proxy) everything works correctly too, but I am very much curious to know why it does not work with Fiddler.
What is going on here?
After some effort the HTTP server started and the status report showed that the Node.js server was successfully reached. To test my Drupal site I hit a random URL and waited for it to show up in my dblog at the same time (as demonstrated in the video). It failed. The backend showed the error: The channel "watchdog_dblog" doesn't exist.
Here, the port used was 8888 (as per the video). It was changed to 8080, after which this error did not show up, but the Drupal site still did not auto-refresh.
The nodejs.config.js file currently contains:
settings = {
scheme: 'http',
port: 8080,
host: 'localhost',
resource: '/socket.io',
serviceKey: 'mytest1',
backend: {
port: 80,
host: 'drupal8',
scheme: 'http',
basePath: '',
messagePath: '/nodejs/message'
},
debug: true,
sslKeyPath: '',
sslCertPath: '',
sslCAPath: '',
baseAuthPath: '/nodejs/',
publishUrl: 'publish',
kickUserUrl: 'user/kick/:uid',
logoutUserUrl: 'user/logout/:authtoken',
addUserToChannelUrl: 'user/channel/add/:channel/:uid',
removeUserFromChannelUrl: 'user/channel/remove/:channel/:uid',
addChannelUrl: 'channel/add/:channel',
removeChannelUrl: 'channel/remove/:channel',
setUserPresenceListUrl: 'user/presence-list/:uid/:uidList',
addAuthTokenToChannelUrl: 'authtoken/channel/add/:channel/:uid',
removeAuthTokenFromChannelUrl: 'authtoken/channel/remove/:channel/:uid',
toggleDebugUrl: 'debug/toggle',
contentTokenUrl: 'content/token',
publishMessageToContentChannelUrl: 'content/token/message',
extensions: [],
clientsCanWriteToChannels: true,
clientsCanWriteToClients: true,
transports: ['websocket', 'flashsocket', 'htmlfile', 'xhr-polling', 'jsonp-polling'],
jsMinification: true,
jsEtag: true,
logLevel: 1
};
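As a quick sanity check (my own sketch, not something from the module), socket.io normally serves its client script at <resource>/socket.io.js on the port the Node.js server listens on, so with the settings above this request should return a 200:
var http = require('http');

// verify the Node.js server is up and serving the socket.io client script
http.get('http://localhost:8080/socket.io/socket.io.js', function (res) {
  console.log('socket.io client script status:', res.statusCode);
}).on('error', function (e) {
  console.log('Node.js server not reachable:', e.message);
});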
I did not see socket.io when looking at the page source and the Inspect Element network tab. After that, the Node.js config module was enabled, but the service key field was non-editable. I then created a file nodejs.config.js in modules/nodejs and copied the configuration there. Socket.io showed up with the path localhost:8080//socket.io/socket.io.js. Even if I change the port to 8888, the socket path/port remains the same.
This is the error now:
cleanupSocket: Cleaning up after socket id C0I_r38wIbcT1LbtAAAD, uid undefined
I'm using Meteor 1.0.2.1 and I noticed that working with the filesystem is not as easy as I thought :p
I ended up installing the peerlibrary:fs package (https://atmospherejs.com/peerlibrary/fs) so that I now have access to the Node.js "fs" module, and now I'm trying to list the contents of the public folder. But as mentioned here:
Reading files from a directory inside a meteor app
the path now (with version 1) seems to be '../../../../../public'
var files = fs.readdirSync('../../../../../public');
But I assume this to be wrong.
Is there an alias to the project root folder?
Is it ok to use the peerlibrary:fs for this?
Thanks.
console.log of _meteor_bootstrap_ says (I removed personal content):
{ startupHooks:
[ [Function], [Function],
[Function], [Function],
[Function],
[Function],
[Function],
[Function],
[Function] ],
serverDir: 'my_path_to_serverDir',
configJson:
{ meteorRelease: 'METEOR#1.0.2',
clientPaths: { 'web.browser': '../web.browser/program.json' } } }
=> Started your app.
I did check program.json in /home/user/app/.meteor/local/build/programs/web.browser/program.json
Part of it looks like this (I changed some personal data):
{
"path": "app/pathToImage/image.png",
"where": "client",
"type": "asset",
"cacheable": false,
"url": "/pathToimage/image.png",
"size": someSize,
"hash": "someHash"
},
Based on that, I'd say there is no public folder in the deployed state, but you can get paths from the program.json file, and _meteor_bootstrap_.configJson.clientPaths gives an object with the path to it, which looks like this (pasted from console.log):
{ 'web.browser': '../web.browser/program.json' }
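A minimal sketch of how that could be used on the server (my assumptions: fs and path are available via Npm.require, the global is really spelled __meteor_bootstrap__, and program.json keeps the entries shown above in a manifest array):
var fs = Npm.require('fs');
var path = Npm.require('path');

var bootstrap = __meteor_bootstrap__; // the object logged above
var clientRel = bootstrap.configJson.clientPaths['web.browser']; // '../web.browser/program.json'
var programJsonPath = path.resolve(bootstrap.serverDir, clientRel);

var program = JSON.parse(fs.readFileSync(programJsonPath, 'utf8'));
var assetUrls = (program.manifest || [])
  .filter(function (item) { return item.type === 'asset'; })
  .map(function (item) { return item.url; });

console.log(assetUrls); // e.g. [ '/pathToimage/image.png', ... ]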
I am running node.js as follows:
> http = require('http')
> http.get('http://myhost.local:8080',
function (res) { console.log("RES" + res) }
).on('error', function (e) { console.log("Error:", e) })
> uri = require('url').parse("http://myhost.local:8080")
{ protocol: 'http:',
slashes: true,
auth: null,
host: 'myhost.local:8080',
port: '8080',
hostname: 'myhost.local',
hash: null,
search: null,
query: null,
pathname: '/',
path: '/',
href: 'http://myhost.local:8080/' }
> http.get(uri,
function (res) { console.log("RES" + res) }
).on('error', function (e) { console.log("Error:", e) })
An error is thrown for both the implicit and explicitly parsed URI and I get the following output for both:
Error: { [Error: connect ECONNREFUSED]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect' }
The host myhost.local is an alias for localhost in /etc/hosts, being:
127.0.0.1 localhost myhost.local myhost
255.255.255.255 broadcasthost
::1 localhost myhost.local myhost
fe80::1%lo0 localhost myhost.local myhost
EDIT: I tried virtually every permutation for the hosts file, including the most obvious:
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost myhost.local myhost
fe80::1%lo0 localhost
EDIT: I should also mention that I have tried this on more than one Mac now.
Although it seems this is a rather common error, I have seen no useful explanations or workarounds. Here are some notable related facts:
Running $ wget http://myhost.local:8080 works as expected, so it isn't a firewall problem.
Running $ telnet myhost.local 8080 and then manually GET'ing the url works fine, so it's not a weird HTTP problem.
I have no trouble using node.js to connect to other hosts e.g. http://www.google.com
I expect the useful system information would include:
$ node -v
v0.9.11
$ uname -a
Darwin myhost.local 12.2.1 Darwin Kernel Version 12.2.1:
Thu Oct 18 12:13:47 PDT 2012; root:xnu-2050.20.9~1/RELEASE_X86_64 x86_64
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.8.2
BuildVersion: 12C3104
$ sudo netstat -nalt | grep LISTEN | grep 8080
tcp6 0 0 ::1.8080 *.* LISTEN
Does anyone have any idea what is going on here, and what a fix might be?
I'm going to post this here in case somebody else has the problem.
Bert Belder, Node.js mailing list:
On your system "myhost.local" resolves to three different addresses
(127.0.0.1, ::1, and fe80::1). Node prefers ipv4 over ipv6 so it'll
try to connect to 127.0.0.1. Nothing is listening on 127.0.0.1:8080 so
the connect() syscall fails with ECONNREFUSED. Node doesn't retry with
any of the other resolved IPs - it just reports the error to you. A
simple solution would be to replace 'localhost' by the intended
destination ip address, '::1'.
Whether this behavior is right is somewhat open for debate, but this
is what causes it.
Bert
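In code, that workaround is just the same request pointed at the IPv6 loopback directly (a quick sketch):
var http = require('http');

// target ::1, since netstat shows the server listening on tcp6 ::1.8080 only
http.get({ host: '::1', port: 8080, path: '/' }, function (res) {
  console.log('RES ' + res.statusCode);
}).on('error', function (e) {
  console.log('Error:', e);
});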
This stemmed from an issue with Node (though there are ways to work around it), as per the discussion on the nodejs Google Group, as @alessioalex alluded to in his answer. A useful comment from Bert Belder:
there should be a getaddrinfo wrapper that returns more than just the first result
For example,
> require('dns').lookup('myhost.local', console.log)
{ oncomplete: [Function: onanswer] }
> null '127.0.0.1' 4
This is the first of multiple getaddrinfo results passed to Node. It seems that nodejs only uses the first item of the getaddrinfo call. Bert and Ben Noordhuis agreed in the Groups discussion that there should be a way to return more than just the first result with the getaddrinfo wrapper.
Contrast python, which returns all results from getaddrinfo:
>>> import socket
>>> socket.getaddrinfo("myhost.local", 8080)
[(30, 2, 17, '', ('::1', 8080, 0, 0)),
(30, 1, 6, '', ('::1', 8080, 0, 0)),
(2, 2, 17, '', ('127.0.0.1', 8080)),
(2, 1, 6, '', ('127.0.0.1', 8080)),
(30, 2, 17, '', ('fe80::1%lo0', 8080, 0, 1)),
(30, 1, 6, '', ('fe80::1%lo0', 8080, 0, 1))]
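As a small aside (my own sketch), Node's dns.lookup does accept an address family argument, so you can at least request the IPv6 answer explicitly:
var dns = require('dns');

dns.lookup('myhost.local', 4, console.log); // null '127.0.0.1' 4  (what http.get ends up using)
dns.lookup('myhost.local', 6, console.log); // null '::1' 6        (where the server actually listens)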
Does this work?
var http = require('http');
var options = {
host: 'myhost.local',
port: 8080,
path: '/'
};
http.get(options, function (res) {
console.log("RES" + res)
}).on('error', function (e) {
console.log("Error:", e)
});
I have designed an app using Elasticsearch, and I'm trying to write a test case using nodeload.js. The problem is that when I increase the number of users, I get the warning "WARN: Error during HTTP request: Error: ECONNREFUSED, Could not contact DNS servers". I'm unable to rectify the problem, so please help me solve this error.
nl.run({
name: "test",
host: 'localhost',
port: 9200,
//path: '/my_river/page/_search?q=sweden',
numUsers: 2000, // increased the number of users
timeLimit: 180,
targetRps: 500,
stats: [
'result-codes',
{ name: 'latency', percentiles: [0.9, 0.99] },
'concurrency',
'rps',
'uniques',
{ name: 'http-errors', successCodes: [200,404], log: 'http-errors.log' }
],