Weird behaviour of fetch when running npm install - node.js

My setup is very simple and small:
C:\temp\npm [master]> tree /f
Folder PATH listing for volume OSDisk
Volume serial number is F6C4-7BEF
C:.
│   .gitignore
│   1.js
│   package.json
│
└───.vscode
        launch.json
C:\temp\npm [master]> cat .\package.json
{
  "name": "node-modules",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "emitter": "http://github.com/component/emitter/archive/1.0.1.tar.gz",
    "global": "https://github.com/component/global/archive/v2.0.1.tar.gz"
  },
  "author": "",
  "license": "ISC"
}
C:\temp\npm [master]> npm config list
; cli configs
metrics-registry = "https://registry.npmjs.org/"
scope = ""
user-agent = "npm/6.14.12 node/v14.16.1 win32 x64"
; userconfig C:\Users\p11f70f\.npmrc
https-proxy = "http://127.0.0.1:8888/"
proxy = "http://127.0.0.1:8888/"
strict-ssl = false
; builtin config undefined
prefix = "C:\\Users\\p11f70f\\AppData\\Roaming\\npm"
; node bin location = C:\Program Files\nodejs\node.exe
; cwd = C:\temp\npm
; HOME = C:\Users\p11f70f
; "npm config ls -l" to show all defaults.
C:\temp\npm [master]>
Notes:
The proxy addresses correspond to Fiddler.
Notice that the emitter dependency URL uses http whereas the global one uses https.
When I run npm install, it starts and then hangs almost immediately. And I know why, because Fiddler tells me:
The request is:
GET http://github.com:80/component/emitter/archive/1.0.1.tar.gz HTTP/1.1
connection: keep-alive
user-agent: npm/6.14.12 node/v14.16.1 win32 x64
npm-in-ci: false
npm-scope:
npm-session: 74727385b32ebcbf
referer: install
pacote-req-type: tarball
pacote-pkg-id: registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz
accept: */*
accept-encoding: gzip,deflate
Host: github.com:80
And the response is:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com:80/component/emitter/archive/1.0.1.tar.gz
Now this is BS, pardon my French, because the returned Location value of https://github.com:80/component/emitter/archive/1.0.1.tar.gz is invalid. But I suppose the server is not very smart: it just redirects to https without changing anything else, including the port, which remains 80 - good for http, wrong for https. This also explains the apparent hanging: the fetch machinery used by npm retries at progressively longer delays, which creates the illusion of a hang.
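For what it's worth, plugging the retry options npm ends up using (they show up in the debugger dump below) into the backoff formula of the retry package, which as far as I can tell is what npm's fetch stack relies on, yields delays long enough to look like a hang:

// sketch: delay = min(minTimeout * factor^attempt, maxTimeout)
const { retries, factor, minTimeout, maxTimeout } =
  { retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000 }

for (let attempt = 0; attempt < retries; attempt++) {
  const delay = Math.min(minTimeout * Math.pow(factor, attempt), maxTimeout)
  console.log(`retry #${attempt + 1} after ${delay / 1000}s`)
}
// retry #1 after 10s
// retry #2 after 60s -> more than a minute of silence in total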
Debugging npm brings me to the following code inside C:\Program Files\nodejs\node_modules\npm\node_modules\npm-registry-fetch\index.js:
return opts.Promise.resolve(body).then(body => fetch(uri, {
  agent: opts.agent,
  algorithms: opts.algorithms,
  body,
  cache: getCacheMode(opts),
  cacheManager: opts.cache,
  ca: opts.ca,
  cert: opts.cert,
  headers,
  integrity: opts.integrity,
  key: opts.key,
  localAddress: opts['local-address'],
  maxSockets: opts.maxsockets,
  memoize: opts.memoize,
  method: opts.method || 'GET',
  noProxy: opts['no-proxy'] || opts.noproxy,
  Promise: opts.Promise,
  proxy: opts['https-proxy'] || opts.proxy,
  referer: opts.refer,
  retry: opts.retry != null ? opts.retry : {
    retries: opts['fetch-retries'],
    factor: opts['fetch-retry-factor'],
    minTimeout: opts['fetch-retry-mintimeout'],
    maxTimeout: opts['fetch-retry-maxtimeout']
  },
  strictSSL: !!opts['strict-ssl'],
  timeout: opts.timeout
}).then(res => checkResponse(
  opts.method || 'GET', res, registry, startTime, opts
)))
And when I stop at the right moment, this boils down to the following values:
uri: 'http://github.com/component/emitter/archive/1.0.1.tar.gz'

agent: undefined
algorithms: ['sha1']
body: undefined
ca: null
cache: 'default'
cacheManager: 'C:\\Users\\p11f70f\\AppData\\Roaming\\npm-cache\\_cacache'
cert: null
headers: {
    npm-in-ci: false
    npm-scope: ''
    npm-session: '413f9b25525c452a'
    pacote-pkg-id: 'registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz'
    pacote-req-type: 'tarball'
    referer: 'install'
    user-agent: 'npm/6.14.12 node/v14.16.1 win32 x64'
}
integrity: undefined
key: null
localAddress: undefined
maxSockets: 50
method: 'GET'
noProxy: null
proxy: 'http://127.0.0.1:8888/'
referer: 'install'
retry: {retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000}
strictSSL: false
timeout: 0
I have omitted two values whose significance I truly do not know - opts.Promise and memoize. It is possible that they are crucial; I do not know.
Anyway, when I step over this statement, the aforementioned session appears in Fiddler with the bogus URL of http://github.com:80/component/emitter/archive/1.0.1.tar.gz, and I do not understand how. The debugger clearly shows that the uri parameter passed to fetch does not specify a port number.
I thought maybe uri was some kind of non-string type, but typeof uri returns 'string'.
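My best guess is that the port is not in the uri at all but gets added when the request line for the proxy is built. Plain HTTP proxying requires the absolute URI on the request line, and an agent that rebuilds it naively from parts, defaulting the port, would produce exactly what Fiddler shows. A sketch of that hypothesis (not the actual npm code):

const { URL } = require('url')

const u = new URL('http://github.com/component/emitter/archive/1.0.1.tar.gz')
console.log(u.port) // '' - the parsed URL carries no explicit port

// a naive reconstruction that defaults the port reproduces the symptom:
const requestLine =
  `GET ${u.protocol}//${u.hostname}:${u.port || 80}${u.pathname} HTTP/1.1`
console.log(requestLine)
// GET http://github.com:80/component/emitter/archive/1.0.1.tar.gz HTTP/1.1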
I have even written a tiny reproduction that executes just this request using the same arguments, except for opts.Promise and memoize:
const fetch = require('make-fetch-happen')

fetch('http://github.com/component/emitter/archive/1.0.1.tar.gz', {
  algorithms: ['sha1'],
  cache: 'default',
  cacheManager: 'C:\\Users\\p11f70f\\AppData\\Roaming\\npm-cache\\_cacache',
  headers: {
    "npm-in-ci": false,
    "npm-scope": "",
    "npm-session": "00b5bb97075e3c35",
    "user-agent": "npm/6.14.12 node/v14.16.1 win32 x64",
    "referer": "install",
    "pacote-req-type": "tarball",
    "pacote-pkg-id": "registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz"
  },
  maxSockets: 50,
  method: 'GET',
  proxy: 'http://127.0.0.1:8888',
  referer: 'install',
  retry: {
    retries: 2,
    factor: 10,
    minTimeout: 10000,
    maxTimeout: 60000
  },
  strictSSL: false,
  timeout: 0
}).then(res => console.log(res))
But it shows up correctly in Fiddler - no port is added and hence the redirection works fine.
When there is no Fiddler (and hence no proxy) everything works correctly too, but I am very much curious to know why it does not work with Fiddler.
What is going on here?

Related

net::ERR_FAILED api call to backend from nextjs with nginx docker

I have 3 Docker containers running on macOS:
backend - port 5055
frontend (Next.js) - port 3000
nginx - port 80
I am getting net::ERR_FAILED for backend API requests when I access the app from the browser (http://localhost:80). I can make a request to the backend (http://localhost:5055) in Postman and it works well.
Sample API request - GET http://backend:5055/api/category/show
What is the reason for this behaviour?
Thanks.
docker-compose.yml
version: '3.9'
services:
  backend:
    image: backend-image
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    ports:
      - '5055:5055'
  frontend:
    image: frontend-image
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    ports:
      - '3000:3000'
    depends_on:
      - backend
  nginx:
    image: nginx-image
    build:
      context: ./nginx
    ports:
      - '80:80'
    depends_on:
      - backend
      - frontend
backend - Dockerfile.prod
FROM node:19.0.1-slim
WORKDIR /app
COPY package.json .
RUN yarn install
COPY . ./
ENV PORT 5055
EXPOSE $PORT
CMD ["npm", "run", "start"]
frontend - Dockerfile.prod
FROM node:19-alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
nginx - Dockerfile
FROM public.ecr.aws/nginx/nginx:stable-alpine
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
nginx - default.conf
upstream frontend {
    server frontend:3000;
}

upstream backend {
    server backend:5055;
}

server {
    listen 80 default_server;
    ...
    location /api {
        ...
        proxy_pass http://backend;
        proxy_redirect off;
        ...
    }
    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://frontend;
    }
    location /static {
        proxy_cache STATIC;
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 60m;
        proxy_pass http://frontend;
    }
    location / {
        proxy_pass http://frontend;
    }
}
frontend - .env.local
NEXT_PUBLIC_API_BASE_URL=http://backend:5055/api
frontend - httpServices.js
import axios from 'axios'
import Cookies from 'js-cookie'
const instance = axios.create({
  baseURL: `${process.env.NEXT_PUBLIC_API_BASE_URL}`,
  timeout: 500000,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
})
...
const responseBody = (response) => response.data

const requests = {
  get: (url, body) => instance.get(url, body).then(responseBody),
  post: (url, body, headers) =>
    instance.post(url, body, headers).then(responseBody),
  put: (url, body) => instance.put(url, body).then(responseBody),
}
export default requests
Edit
nginx logs (docker logs -f nginx 2>/dev/null)
172.20.0.1 - - [14/Nov/2022:17:02:39 +0000] "GET /_next/image?url=%2Fslider%2Fslider-1.jpg&w=1080&q=75 HTTP/1.1" 304 0 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /service-worker.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /fallback-B639VDPLP_r91l2hRR104.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /workbox-fbc529db.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
A curl request works well from the nginx container to the backend container (curl backend:5055/api/category/show).
Edit 2
const CategoryCarousel = () => {
  ...
  const { data, error } = useAsync(() => CategoryServices.getShowingCategory())
  ...
}

import requests from './httpServices'

const CategoryServices = {
  getShowingCategory() {
    return requests.get('/category/show')
  },
}
Edit 3
When NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
Error: connect ECONNREFUSED 127.0.0.1:5055
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1283:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 5055,
config: {
...
baseURL: 'http://localhost:5055/api',
method: 'get',
url: '/products/show',
data: undefined
},
...
_options: {
...
protocol: 'http:',
path: '/api/products/show',
method: 'GET',
...
pathname: '/api/products/show'
},
...
},
_currentUrl: 'http://localhost:5055/api/products/show',
_timeout: null,
},
response: undefined,
isAxiosError: true,
}
Docker configuration is all correct
I've read the question again, and with the given logs it seems your configuration was correct all along. However, what you're doing in the browser is an access violation.
Docker compose service(hosts) access
Docker services are connected to each other, which means one service can make a request to another service. What does not work is a user outside the Docker network reaching a service by its service name. Here's a simpler explanation:
# docker-compose.yaml
services:
  service-a:
    ports:
      - 3000:3000 # exposed to outside
    # ...
  service-b:
    ports:
      - 5000:5000 # exposed to outside
    # ...

# ✅ this works! (case A)
(service-a) $ curl http://service-b:5000
(service-b) $ curl http://service-a:3000
(local machine) $ curl http://localhost:5000
(local machine) $ curl http://localhost:3000

# ❌ this does not work! (case B)
(local machine) $ curl http://service-a:3000
(local machine) $ curl http://service-b:5000
When I was reading the question initially, I missed the part where the frontend code accesses the unexposed backend service in the browser. This clearly falls into case B, which will not work.
Solution: With server side rendering...
A frontend app is merely JavaScript code running in your browser. It cannot reach Docker's internal network. Therefore the request should be corrected:
# change from http://backend:5055/api
NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
But here's a better way to solve it: try accessing the API inside server-side code. Since your frontend is Next.js, it is possible to inject the backend result into the frontend.
export async function getServerSideProps(context) {
  const res = await fetch('http://backend:5055/api/...');
  const data = await res.json();

  return {
    props: { data }, // will be passed to the page component as props
  }
}
Edit
(Edit 3) contains frontend logs when NEXT_PUBLIC_API_BASE_URL is changed to 'localhost' (as you mentioned in the answer). Now the error comes from a different API, i.e. 'localhost:5055/api/products/show', which is inside a getServerSideProps(). Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Here's a more practical example:
// outside getServerSideProps, getStaticProps - browser

// ❌ this will fail (1-a)
await fetch('http://backend:5055/...') // 'backend' is the service name, only resolvable inside the Docker network
// ✅ this will work (1-b)
await fetch('http://localhost:5055/...')

// inside getServerSideProps or getStaticProps - internal network
export async function getServerSideProps() {
  // ✅ this will work (2-a)
  await fetch('http://backend:5055/...');
  // ❌ this will fail (2-b)
  await fetch('http://localhost:5055/...');
}
In short, the request has to be either 1-b or 2-a.
Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Yes. There are several ways to deal with it.
1. Programmatically differentiating the host
const isServer = typeof window === 'undefined';
const HOST_URL = isServer ? 'http://backend:5055' : 'http://localhost:5055';
fetch(HOST_URL);
2. Manually differentiating the host
// server side
export async function getServerSideProps() {
  // this seems unnecessary and confusing at first sight,
  // but it comes in very handy later in terms of security.
  await fetch('http://backend:5055');
}

// client side
fetch('http://localhost:5055');
3. Use a separate domain for the backend (modify the hosts file)
This is what I usually resort to when testing services with a domain name in a local environment.
Modifying the hosts file means exampleurl.com will be resolved as localhost by the OS. In this case, the production environment must use a separate domain, the hosts file setup is required, and the service must be exposed to the public. Please refer to this document on modifying the hosts file.
# docker-compose.yaml
services:
  backend:
    ports:
      - 5050:5050
  # ...

# hosts file
127.0.0.1 exampleurl.com
# ...

// in this case, development == local development
const IS_DEV = process.env.NODE_ENV === 'development';

// the hosts file does not resolve ports, so the port is needed for local development
const BACKEND_HOST = 'http://exampleurl.com';
const BACKEND_URL = IS_DEV ? `${BACKEND_HOST}:5050` : BACKEND_HOST;

// client-side
fetch(BACKEND_URL);

// server-side
export async function getServerSideProps() {
  await fetch(BACKEND_URL);
}
There are many clever ways to solve this problem, but there is no "always right" answer. Take your time to think about which method best fits your case.
In your nginx config you need to specify the backend port too, and also the base path (/api):
location /api {
    ...
    # the URI part on proxy_pass replaces the matched /api location prefix
    proxy_pass http://backend:5055/api;
}

Cross origin error between angular cli and flask

First I post the user id and password from the UI (Angular) to Flask:
public send_login(user) {
  console.log(user)
  return this.http.post(this.apiURL + '/login', JSON.stringify(user), this.httpOptions)
    .pipe(retry(1), catchError(this.handleError))
}
Next I receive it in the backend:
backend error
All the operations work properly, but the console raises the cross-origin error:
Error at UI console
The httpOptions on the UI side are mentioned below:
constructor(private http: HttpClient) { }

// Http Options
httpOptions = {
  headers: new HttpHeaders({
    'Content-Type': 'application/json',
    'Access-Control-Allow-Origin': 'http://localhost:9000',
    'Access-Control-Allow-Methods': "GET,POST,OPTIONS,DELETE,PUT",
    'X-Requested-With': 'XMLHttpRequest',
    'MyClientCert': '', // This is empty
    'MyToken': ''
  })
}
The CORS declared in the backend is mentioned below:
cors = CORS(app, resources={r"/login": {"origins": "*"}})

@app.route('/login', methods=['GET', 'POST'])
def loginForm():
    json_data = ast.literal_eval(request.data.decode('utf-8'))
    print('\n\n\n', json_data, '\n\n\n')
I'm not able to find where the problem is arising.
Note: the cross-origin error arises during the login process; the subsequent steps work fine.
Add the code below to your app.py:
CORS(app, supports_credentials=True)
and from the frontend use
{ withCredentials: true }
It will enable credentialed communication between the frontend and backend.
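For the Angular side, note that withCredentials is a top-level request option on HttpClient calls, not a header. A sketch reusing the apiURL and httpOptions from the question:

// sketch: keep the existing headers and add withCredentials as an option
this.http.post(this.apiURL + '/login', JSON.stringify(user), {
  headers: this.httpOptions.headers,
  withCredentials: true,
})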
To me it seems the call is not leaving the front end (at least in my experience it is like this), because the browser blocks it.
Create a new file proxy.conf.json in the src/ folder of your project:
{
"/backendApiUrl": { <--- This needs to reflect the server backend base path
"target": "http://localhost:9000", <-- this is the backend server name and port
"secure": false,
"logLevel": "debug" <--- optional, this will give some debug output
}
}
In the file angular.json (CLI configuration file), add the following proxyConfig option to the serve target:
"projectname": {
"serve": {
"builder": "#angular-devkit/build-angular:dev-server",
"options": {
"browserTarget": "your-application-name:build",
"proxyConfig": "src/proxy.conf.json" <--- this is the important addition
},
Simply call ng serve to run the dev server using this proxy configuration.
You can read the section Proxying to a backend server in https://angular.io/guide/build
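With the proxy in place, the frontend must call the backend through a relative URL so the dev server can intercept and forward it. A sketch based on the send_login method from the question, with /backendApiUrl standing in for your real base path:

// a relative URL goes to the dev server, which forwards it to
// http://localhost:9000, so the browser never sees a cross-origin request
this.http.post('/backendApiUrl/login', JSON.stringify(user), this.httpOptions)
  .pipe(retry(1), catchError(this.handleError))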
Hope this helps.

Configuring express gateway to work with redis

I'm setting up an instance of Express Gateway for routing requests to microservices. It works as expected, but I get the following errors when I try to include Redis in my system config:
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] error: Could not hot-reload gateway.config.yml. Configuration is invalid. Error: A client must be directly provided to the RedisStore
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] warn: body-parser policy hasn't provided a schema. Validation for this policy will be skipped.
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
I have installed the necessary packages
npm install redis connect-redis express-session
and have updated the system.config.yml file like so:
# Core
db:
  redis:
    host: ${REDIS_HOST}
    port: ${REDIS_PORT}
    db: ${REDIS_DB}
    namespace: EG

plugins:
  # express-gateway-plugin-example:
  #   param1: 'param from system.config'
  health-check:
    package: './health-check/manifest.js'
  body-parser:
    package: './body-parser/manifest.js'

crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10

# OAuth2 Settings
session:
  storeProvider: connect-redis
  storeOptions:
    host: ${REDIS_HOST}
    port: ${REDIS_PORT}
    db: ${REDIS_DB}
  secret: keyboard cat # replace with secure key that will be used to sign session cookie
  resave: false
  saveUninitialized: false

accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
My gateway.config.yml file looks like this
http:
  port: 8080
admin:
  port: 9876
apiEndpoints:
  accounts:
    paths: '/accounts*'
  billing:
    paths: '/billing*'
serviceEndpoints:
  accounts:
    url: ${ACCOUNTS_URL}
  billing:
    url: ${BILLING_URL}
policies:
  - body-parser
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  accounts:
    apiEndpoints:
      - accounts
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - body-parser:
      - log: # policy name
          - action: # array of condition/actions objects
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
      - proxy:
          - action:
              serviceEndpoint: accounts
              changeOrigin: true
              prependPath: true
              ignorePath: false
              stripPath: true
  billing:
    apiEndpoints:
      - billing
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - body-parser:
      - log: # policy name
          - action: # array of condition/actions objects
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
      - proxy:
          - action:
              serviceEndpoint: billing
              changeOrigin: true
              prependPath: true
              ignorePath: false
              stripPath: true
package.json
{
  "name": "max-apigateway-service",
  "description": "Express Gateway Instance Bootstraped from Command Line",
  "repository": {},
  "license": "UNLICENSED",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "connect-redis": "^4.0.3",
    "express-gateway": "^1.16.9",
    "express-gateway-plugin-example": "^1.0.1",
    "express-session": "^1.17.0",
    "redis": "^2.8.0"
  }
}
Am I missing anything?
In my case, I used AWS ElastiCache for Redis. When I tried to run it, I got the "A client must be directly provided to the RedisStore" error. The problem turned out to be the security group settings: the EC2 instance (server) should have a security group that allows traffic on the ElastiCache port, and ElastiCache should carry the same security group.
Step 1. Create a new security group and set the inbound rule.
Step 2. Add the security group to the EC2 server.
Step 3. Add the security group to the ElastiCache cluster.
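For reference, the inbound rule from Step 1 can also be created with the AWS CLI; this is just a sketch, the group IDs are placeholders and 6379 is the default Redis port:

# allow Redis traffic from instances carrying the same security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 6379 \
    --source-group sg-0123456789abcdef0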

request not work, Error: Invalid protocol: 127.0.0.1:?

I am new to node.js. I use request to send a POST request, but I got an error!
request({
  method: 'POST',
  url: config.api + '/index',
  body: {
    name: "name"
  },
  json: true
})
throw er; // Unhandled 'error' event
^
Error: Invalid protocol: 127.0.0.1:
I wrote the following and it works fine; you can modify your code like this:
request({
  method: 'POST',
  url: 'http://127.0.0.1:3000' + '/index',
  body: {
    name: "name"
  },
  json: true
})
Your code is incorrect: follow the instructions on the NPM module page.
If you're using a PHP Development Server please refer to this thread for the solution.
I met a similar issue on Win10 after a system update. It was caused by the system proxy settings.
http_proxy=127.0.0.1:8888
https_proxy=127.0.0.1:8888
Changing the above environment settings to
http_proxy=http://127.0.0.1:8888
https_proxy=http://127.0.0.1:8888
did the job for me.
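The error text makes sense once you see how that value parses. Node's legacy url parser (which, as far as I know, request relies on) takes everything before the first colon as the scheme when no // follows:

const url = require('url')

// everything up to the first ':' is treated as the protocol
console.log(url.parse('127.0.0.1:8888').protocol)        // '127.0.0.1:'
console.log(url.parse('http://127.0.0.1:8888').protocol) // 'http:'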
Btw, if you use git-bash, you can also check the git config.
$ git config --list
...
http.sslverify=false
http.proxy=http://127.0.0.1:8888
https.proxy=http://127.0.0.1:8888
...

Connecting to Cayley serving over localhost

I've followed the 'Getting Started' guide in Cayley's documentation and installed Cayley on my remote server:
Getting Started: https://github.com/google/cayley
Server OS: CentOS 7.2.1511
I've added cayley to my $PATH:
echo $PATH :
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/csse/cayley/src/github.com/google/cayley
Here is my config file at /etc/cayley.cfg
{
  "database": "leveldb",
  "db_options": {
    "cache_size_mb": 2,
    "write_buffer_mb": 20
  },
  "db_path": "~/cayley/src/github.com/google/cayley/data/testdata.nq",
  "listen_host": "127.0.0.1",
  "listen_port": "64210",
  "read_only": false,
  "replication_options": {
    "ignore_missing": false,
    "ignore_duplicate": false
  },
  "timeout": 30
}
I serve cayley over http by simply doing:
cayley http
and the terminal outputs:
Cayley now listening on 127.0.0.1:64210
On my main machine (Mac OSX 10.10.5 Yosemite), I've used npm to install the cayley package and written a test:
// testconnection.js
var cayley = require('cayley');

var client = cayley("137.112.104.107");
var g = client.graph;

g.V().All(function(err, result) {
  if (err) {
    console.log('error');
  } else {
    console.log('result');
  }
});
However, it fails when I run it: node testconnection.js
error: Error: Invalid URI "137.112.104.107/api/v1/query/gremlin"
I'd like to connect to Cayley and modify the database from my test. I've found a great powerpoint full of Cayley information:
https://docs.google.com/presentation/d/1tCbsYym1kXWWDcnRU9ymj6xP0Nvgq-Qhy9WDmqWcM-o/edit#slide=id.g3776708f1_0319
As well as pertinent Cayley docs:
Overview Doc
Configuration Doc
HTTP API Doc
And a post on stackoverflow:
Cayley db user and password protection over HTTP connections
But I'm struggling to come up with a way to connect Cayley (on my remote machine) with my local machine. I'd like to connect with npm if possible, but am open to other options. Where am I going wrong?
Edit #1
I've appended the "http://" to my ip, so now it reads http://137.112.104.107. At that point, I solved another issue by performing
cayley init --config=/etc/cayley.cfg
as mentioned by the author here
I've also removed listen_host and listen_port from my config file (each individually first, then both together), yet I still get the same socket hang up error. Here's a printout of client from the test script:
Client {
host: 'http://137.112.104.107',
request:
{ [Function]
get: [Function],
head: [Function],
post: [Function],
put: [Function],
patch: [Function],
del: [Function],
cookie: [Function],
jar: [Function],
defaults: [Function] },
graph: Gremlin { client: [Circular], query: [Function] },
g: Gremlin { client: [Circular], query: [Function] },
write: [Function: bound ],
delete: [Function: bound ],
writeFile: [Function: bound ]
}
Your Cayley server is listening on 127.0.0.1 / localhost and is therefore not reachable from another machine. To be able to reach it from a virtual machine or another computer on your network, it needs to bind to an interface that is reachable.
If you configure "listen_host": "0.0.0.0" and connect using your network IP (I assume: 137.112.104.107), it should work; otherwise you may need to open or forward the port on your firewall (depending on your network).
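Concretely, a sketch based on the config and test script from the question (the port is the one Cayley prints on startup; the exact client call is my assumption):

// testconnection.js - point the client at scheme, host and port
var cayley = require('cayley');
var client = cayley("http://137.112.104.107:64210");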
