I am totally new to CouchDB.
How can I expose the service on a remote development server (and, in a future step, expose it publicly)?
I am trying to install it on a remote development server. I am not using a DigitalOcean server, but I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-couchdb-and-futon-on-ubuntu-14-04
I could not access it with a web browser after installing and starting the CouchDB service with:
couchdb -b
which returns the default message: Apache CouchDB has started, time to relax.
Also, from the command line, I can run:
curl http://127.0.0.1:5984/
And receive the correct message.
How can I access this development server via a web browser?
I can't know for sure, since I don't know your setup, but I'm guessing that you're trying to access the database from a different machine than the one it's running on. I assume you know what IP to use to reach your remote server, which leads me to believe that the port is not open (or not forwarded correctly) to your CouchDB server.
A standard CouchDB installation should be accessible from a web browser.
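One more thing worth checking, hedged since I don't know your setup: the CouchDB 1.x packages from that tutorial bind to 127.0.0.1 by default, which would explain why curl works locally while remote browsers get nothing. A sketch of the change, assuming the package-default config path:

    ; /etc/couchdb/local.ini
    [httpd]
    bind_address = 0.0.0.0

Restart CouchDB afterwards, and keep in mind that binding to 0.0.0.0 exposes a database that has no authentication by default, so lock it down before exposing it publicly.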
Related
I'm trying to develop a web application in Node.js. I'm using an npm package called "simple-peer", but I don't think this issue is related to that. I was able to use this package and get it working when integrating it with a Laravel application using an Apache server as the back end: I could access the host machine through its IP:PORT on the network and connect a separate client to the host successfully with a peer-to-peer connection. However, I'm now trying to develop this specifically in Node without an Apache back end. I have my Express server up and running on port 3000, and I can access the index page from a remote client on the same network through IP:3000. But when I try to connect through WebRTC, I get a "Connection failed" error. If I connect two different browser instances on the same localhost device, the connection succeeds.
For reference: I'm just using the copied/pasted code from this usage demo. I have "simplepeer.min.js" included and referenced in the correct directory.
So my main questions are: is there a setting or some WebRTC protocol that could be blocking the remote clients from connecting? What would I need to change to meet this requirement? Why would it work in a Laravel/webpack app with Apache and not with Express?
If your remote clients cannot get ICE candidates, you need a TURN server.
When a WebRTC peer is behind a NAT or firewall, or is on a cellular network (like a smartphone), the direct P2P connection will fail.
In that case, as a fallback, a TURN server works as a relay.
I recommend coTURN.
Here is a simple implementation of simple-peer with a Node.js backend for multi-user video/audio chat. You can find the client code in /public/js/main.js. GitHub Project and the Demo.
And just like @JinhoJang said, you do need a TURN server to pass the information. Here is a list of public STUN/TURN servers.
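For reference, here is a minimal sketch of how the ICE servers get handed to simple-peer: its config option is passed through to the underlying RTCPeerConnection. The TURN URL and credentials below are placeholders for your own coTURN instance:

    // Hypothetical snippet: give simple-peer a STUN server plus a TURN fallback.
    const SimplePeer = require('simple-peer');

    const peer = new SimplePeer({
      initiator: true,
      config: {
        iceServers: [
          { urls: 'stun:stun.l.google.com:19302' },
          {
            urls: 'turn:your-coturn-host:3478', // placeholder coTURN instance
            username: 'user',                   // placeholder credentials
            credential: 'password',
          },
        ],
      },
    });

Without the TURN entry, peers that cannot reach each other directly will fail exactly as you describe: local connections succeed, remote ones don't.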
I am trying to get a streaming service running from a modified version of an open source repo https://github.com/nabendu82/streams.
I have a frontend client in React, an RTMP server for the stream, and a backend API, with a docker-compose file to host them all together. If I run docker-compose up on my local computer, everything works perfectly. I can visit http://localhost:3000/matches/view and see two stream windows that stay unloaded until I open the streaming software OBS (Settings -> Stream -> Server: rtmp://localhost/live, Stream Key: 7), at which point the right stream window starts.
To host this repo on the internet, I've created a basic EC2 instance on AWS (http://13.54.200.18:3000/matches/view). I installed docker-compose and I've copied all the repo files up to it.
However, when running on the AWS box the stream does not load, and the console error is always the spectacularly unhelpful:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://server:3002/streams/6. (Reason: CORS request did not succeed).
So for some reason CORS is preventing the React frontend from reading the server backend while it is hosted on AWS.
Here is the catch. I can actually get the streaming on the AWS-hosted site to work, but only by running docker-compose up on my LOCAL computer at the same time. For some unknown reason, the AWS-hosted version is able to pick up the backend server running on my local machine (rather than the one running alongside it in docker-compose on AWS) and connect that way. I can even stream to the website via OBS at rtmp://13.54.200.18/live and everything works. But it only works from exactly my local computer running the docker-compose infrastructure (and only if I use calls to 'localhost' instead of the docker-compose service 'server'); if anyone else tries to see the stream on the live site, they will just get Loading... perpetually, plus the CORS error.
Why is the AWS-hosted code not looking at its own docker-compose file and its own server:3002 service? For the rest of the world, and for me if I'm not running a local server, it throws a CORS error. For just my local computer, and only if I'm running a local server and making requests to 'localhost:3002', it works perfectly.
If I ssh onto the AWS image, docker-compose run client curl localhost:3002/streams will fail, but docker-compose run client curl server:3002/streams will give me back the correct JSON data. From everything I understand about docker-compose, my services should be able to access each other, and it appears they can: everything works great locally, and the services can talk to each other on the AWS box too. Yet somehow this CORS error appears out of nowhere, only on the AWS-hosted version.
I've tried everything under the sun I can think of. I was originally using json-server, but I thought that might be the issue (as it has to specifically bind to -H 0.0.0.0), so I wrote my own Express server using the cors package to replace it and there has been no change. I've tried every configuration of docker-compose variables I can imagine. As far as I can understand I've done everything right, but somehow the AWS box wants to talk to my own computer's localhost aka "server service" aka 0.0.0.0 instead of its own. What is going on?
Repository here: https://github.com/JeremyEllingham/streams
Any help much appreciated.
I figured out how to get it working: in production, post directly to the Linux box's IP address instead of trying to use "localhost" or the docker service names. The requests in question come from the visitor's browser, not from the client container, so a docker-compose service name like server only resolves inside the compose network and means nothing to a remote browser. I'm kind of disappointed that docker-compose doesn't work quite like I thought it did, but it's totally functional to just conditionally alter the base URL, as sketched below.
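A minimal sketch of that conditional base URL; the file name and the environment check are my assumptions, not the repo's actual code:

    // Hypothetical client/src/apiBase.js
    // In production the browser must target the EC2 box's public address,
    // because compose service names don't resolve outside the compose network.
    const API_BASE =
      process.env.NODE_ENV === 'production'
        ? 'http://13.54.200.18:3002'
        : 'http://localhost:3002';

    export default API_BASE;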
See also this answer: React app (in a docker container) cannot access API (in a docker container) on AWS EC2
I came across this awesome library xterm.js, which is also the basis for Visual Studio Code's terminal. I have a very general question.
I want to access a machine (SSH into it) on a local network through a web-based terminal that is outside the network, maybe on an AWS server. I was able to do this successfully within a local network, but I could not work out how to do it from the Internet into the local network.
As an example: an AWS server running the application on IP 54.123.11.98 has a GUI with a button to open a terminal. I want to open the terminal of a local machine that sits in a local network, behind some public IP, on local IP 192.168.1.7.
Can the above be achieved with some sort of solution where I can use xterm.js, so that I don't have to build a web-based terminal from scratch? And what are the major security concerns I should keep in mind while exposing terminals this way?
I was thinking along the lines of using a fixed intermediate server between AWS and the local network IP, with some sort of reverse SSH tunnel, but I am not sure if this is the right way or whether there is a simpler/better way to achieve it.
I know DigitalOcean, Google Cloud, and the like all do this, but they connect to a computer that has a public IP, while my machine is inside a local network. I don't really want to configure the router to do any kind of setup.
After a bit of research, here is working code.
Libraries:
1) https://socket.io/
This library is used to transmit data between the client and the server.
2) https://github.com/staltz/xstream
This library is used for the terminal view.
3) https://github.com/mscdex/ssh2
This is the main library, which is used to establish the connection with your remote server.
Step 1: Install Library 3 into your project folder
Step 2: On the Node side, create a server.js file that opens the socket
Step 3: Connect the client socket to the Node server (both on the local machine)
The tricky part is the logic that ties the socket to ssh2.
When the socket emits data, you trigger an SSH command using the ssh2 library; when ssh2 responds (from the server), you transmit that data back to the client over the socket. That's it, as sketched below.
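A minimal sketch of that bridge, assuming express, socket.io, and ssh2 are installed; the host, credentials, and the 'data' event name are placeholders, not the example project's exact code:

    // Hypothetical server.js: bridge a browser socket to an SSH shell.
    const http = require('http');
    const express = require('express');
    const { Server } = require('socket.io');
    const { Client } = require('ssh2');

    const app = express();
    app.use(express.static('public')); // serves the terminal page
    const server = http.createServer(app);
    const io = new Server(server);

    io.on('connection', (socket) => {
      const ssh = new Client();

      ssh.on('ready', () => {
        // Open an interactive shell on the remote server.
        ssh.shell((err, stream) => {
          if (err) {
            socket.emit('data', 'SSH shell error: ' + err.message);
            return;
          }
          // Browser keystrokes -> remote shell.
          socket.on('data', (data) => stream.write(data));
          // Remote shell output -> browser terminal.
          stream.on('data', (data) => socket.emit('data', data.toString('utf-8')));
          stream.on('close', () => ssh.end());
        });
      });

      ssh.on('error', (err) => socket.emit('data', 'SSH error: ' + err.message));

      ssh.connect({
        host: '192.168.1.7',  // placeholder: the machine you want to reach
        port: 22,
        username: 'user',     // placeholder
        password: 'password', // placeholder (or use privateKey instead)
      });

      socket.on('disconnect', () => ssh.end());
    });

    server.listen(3000, () => console.log('listening on :3000'));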
Click here to find an example.
That example will have these files & folders:
Type    Name
------  ------------
FILE    server.js
FILE    package.json
FOLDER  src
FOLDER  xtream
First, configure your server IP, user, and password (or cert file) in server.js, then just run node server.js.
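On the client side, the wiring is symmetric. A rough sketch, assuming xterm.js and the socket.io client are loaded on the page; the element id and the 'data' event name are placeholders matching the server sketch above:

    // Hypothetical public/js/client.js
    const term = new Terminal();
    term.open(document.getElementById('terminal'));

    const socket = io(); // connects back to the Node server
    term.onData((data) => socket.emit('data', data)); // keystrokes -> server
    socket.on('data', (data) => term.write(data));    // server output -> terminal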
P.S.: Don't forget to run npm install
Let me know if you have any questions!
After some more research I came across this service: https://tmate.io/, which does the job perfectly. Though if you want tmate's web-based terminal, you have to use their SSH servers as a reverse proxy, which I was ideally not comfortable with. However, they provide tmate-server, which can be used to host your own reverse-proxy server, though it lacks a web UI. To build a system where you access a client behind NAT over SSH on the web, the steps are below.
Install and configure tmate-server on some cloud machine.
Install tmate on the client side and configure it to connect to your cloud machine (see the sketch after this list).
Create a Node.js application using xterm.js (easy because of its WebSocket-based communication) that connects to your tmate-server and passes commands to the respective client. (Beware of the security implications of exposing this application, since you will be passing Linux commands.)
Depending on your use case, you might need a small wrapper around the tmate client on the client side to start/stop it automatically or via some UI/manual action.
Note: I wrote a small wrapper on the client side as well, to start/stop tmate and pass the required information to an API server (written in Node.js), which then passes the information to another API that connects the browser to the respective client session. Since we wrote this application ourselves, it included authentication as well as restrictions on which commands can be run in the terminal. You can customize it a lot.
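For step 2 above, the client-side configuration lives in ~/.tmate.conf. A rough sketch, assuming a self-hosted tmate-server; the host, port, and key fingerprints are placeholders for your own deployment:

    # ~/.tmate.conf (placeholders only)
    set -g tmate-server-host "tmate.example.com"
    set -g tmate-server-port 2200
    set -g tmate-server-rsa-fingerprint "SHA256:<your server's RSA key fingerprint>"
    set -g tmate-server-ed25519-fingerprint "SHA256:<your server's ed25519 key fingerprint>"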
I'm kind of lost in my current project. From a Linux machine (Ubuntu server) running Node.js code, I have to connect to a Windows server through a VPN and access a MySQL server running on it.
About the VPN server, I only know that it's Windows and that I can easily connect to it using the VPN connector on another Windows machine; I do not have access to that machine or know its parameters.
All I have are the IPs of both the VPN and the database server inside that VPN, and a username/password for both the VPN and the database. I also know that the VPN uses MS-CHAP v2.
I'm trying to use OpenVPN like this:
sudo openvpn --remote vpnIP --dev tun --ifconfig 127.0.0.1 dbIP
This does not show any error message, but it never requests the VPN's username/password.
And what should I do from Node.js to access the database once the VPN is up?
As I've said, I'm very lost on this! Any tip is welcome!
Unless something else is specified, a Windows-based VPN almost always uses PPTP. You cannot connect with OpenVPN; you have to use a PPTP client.
The Ubuntu package is pptp-linux.
There is a detailed explanation on how to configure it here.
In a nutshell (I assume you have no GUI on the server), you can create a tunnel with:
pptpsetup --create my_tunnel --server <server_address> --username <username> --password '<password>' --encrypt
Configuration files will be created in /etc/ppp. You can then connect (in debug mode) with:
pon my_tunnel debug dump logfd 2 nodetach
or simply (once it works):
pon my_tunnel
and stop it with:
poff my_tunnel
If the server is a gateway, you may need to add a route, something like:
ip route add 192.168.1.0/24 dev ppp0
You may want Network Manager with the network-manager-pptp plugin; also see this wiki:
https://help.ubuntu.com/community/VPNClient#PPTP
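As for the Node.js part of the question: once the tunnel (and route, if needed) is up, the database is reached like any other MySQL host, using the server's IP inside the VPN. A minimal sketch with the mysql npm package; the IP, credentials, and database name are placeholders:

    // Hypothetical db-check.js: reach the Windows MySQL server through the tunnel.
    // Assumes `npm install mysql` has been run.
    const mysql = require('mysql');

    const connection = mysql.createConnection({
      host: '192.168.1.10', // placeholder: the database server's IP inside the VPN
      port: 3306,
      user: 'dbuser',       // placeholder
      password: 'dbpassword', // placeholder
      database: 'mydb',     // placeholder
    });

    connection.connect((err) => {
      if (err) throw err;
      connection.query('SELECT 1 + 1 AS two', (err, rows) => {
        if (err) throw err;
        console.log('Connected through the tunnel, 1 + 1 =', rows[0].two);
        connection.end();
      });
    });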
I have Solr, with the default Jetty that ships in its example directory, installed on a Linux server that runs apache2 as its web server.
Now, within the same private LAN, when I open a browser and type in http://<ip-address>:8983/solr, it works ONLY when I do port forwarding; otherwise it doesn't work. I am not sure what the problem could be. Please note this installation was done on a remote server in a hosting environment for production deployment, and I am a beginner when it comes to deployment.
You can use the jetty.host parameter during startup to allow direct access to Jetty.
The -D option of the java command can be used with the following syntax:
java -Djetty.host=0.0.0.0 -jar start.jar
This way, Jetty can be reached from all hosts.
However, this is not the ideal setup IMHO. I prefer to set up Jetty to listen only on localhost, with another frontend server listening on port 80 in front of it. If you want to run the frontend on another server, you can use iptables to limit incoming connections, dropping everything on port 8983 if the source IP is different from your frontend server's.
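A minimal sketch of those iptables rules, where 203.0.113.10 stands in for your frontend server's IP:

    # Let the frontend server reach Jetty; drop everyone else on 8983.
    iptables -A INPUT -p tcp --dport 8983 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8983 -j DROP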
This image depicts my preferred setup for a LAMP stack including Solr.