Node can't connect to Vagrant box - node.js

I am not sure if this is the correct place to ask my question, but I am really out of ideas and the clock is ticking.
In short, I got a new machine that I need to make development-ready. The project is based on rather old program versions; updating them is a separate task.
I have set up Vagrant (1.8.1) in VirtualBox (5.0.14). Chef (0.10.0) created all dependencies successfully, I can SSH into the machine and see that all is fine: all services are running as set in the Vagrantfile.
The Vagrant box is the latest ubuntu/trusty64. My host machine is macOS High Sierra (10.13.3).
Now, I open a MySQL editor (MySQL Workbench) and it connects to the box; I can see the DB and manipulate it.
My problem is with Node.js (I think). When I run my tests, Node simply refuses to connect to the box. More precisely, it attempts to connect to 127.0.0.1:3306 (MySQL) and errors out, while MySQL Workbench performs the same connection without problems.
Port forwarding in Vagrant seems to work fine, as MySQL Workbench is being forwarded to the box; Node.js is not being forwarded, or something along those lines.
Is it Node doing this? Is there something else I need to allow?
I have tried more different things than I can count, always with the same result.
Is there something I can do to Node so it behaves like MySQL Workbench? Any idea is appreciated.
This identical setup used to work before, but not now.
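One way to narrow this down, independent of Node (a sketch; the user and database names below are placeholders, not from the question): test the forwarded port from the host with a plain TCP client first, then with a non-Node MySQL client.

nc -vz 127.0.0.1 3306                      # does anything answer on the forwarded port at all?
mysql -h 127.0.0.1 -P 3306 -u someuser -p  # -h 127.0.0.1 forces TCP; plain "localhost" makes some MySQL clients try a Unix socket instead

If both succeed, the forwarding is fine and the culprit is on the Node side: check that the test configuration really says host 127.0.0.1 and port 3306, since "localhost" can resolve to the IPv6 address ::1, where nothing is forwarded.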

Related

Keep OrientDB server running on AWS EC2

I recently downloaded and managed to start an OrientDB server/database on an AWS EC2 Linux 14.04 server (I think that's the name) for an application I want to set up. I started OrientDB "as usual" by running ./server.sh in the terminal over an SSH connection to the EC2 server. All works fine and I can query the database while I'm at the computer. But as soon as I leave my computer and the SSH connection is broken (for example when closing the computer), the database goes down with it, i.e. it stops.
Is there a way around this, or do I have to set up the database in some other way?
OrientDB is provided as an AWS AMI. Take a look at
http://orientdb.com/orientdb-amazon-web-services/
If you want to DIY, follow the instructions provided at
http://orientdb.com/docs/last/Unix-Service.html
Update: new link to the doc:
https://orientdb.com/docs/last/admin/Unix-Service.html
Hope this helps.
You can try putting the full path to server.sh into /etc/rc.local, before the exit 0 line, and rebooting the instance.
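For example (the path below is a placeholder for wherever OrientDB is installed):

/opt/orientdb/bin/server.sh &   # in /etc/rc.local, above the exit 0 line; & keeps it from blocking boot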
Before running the server, run the command:
screen
This will create a persistent environment which will allow your process to keep running after you disconnect.
When you reconnect, you can use this command to reconnect to that environment:
screen -r
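A named session makes this more manageable if several things run on the box (the session name is arbitrary):

screen -S orientdb    # start a named session
./server.sh           # launch OrientDB inside it, then detach with Ctrl-a d
screen -r orientdb    # reattach to that session later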

How to run a Rails application on another computer while still coding on my own computer?

I have an iMac, and when I develop in Ruby on Rails I run everything on it: the MySQL server, the Redis service, the ElasticSearch service, Guard and, of course, the Rails server itself. By doing so, my computer runs pretty slow.
So I just bought a CPU and installed Linux on it, along with MySQL, Redis & ElasticSearch. Now I connect to those services from my iMac and it runs way faster.
However, RSpec/Guard still take ages to load/run.
So, how do I make the Linux server take the hit and actually run these programs, while I keep editing the code on my Mac?
This may be a crude way, and I haven't tested it:
Place your project in a Dropbox folder on your machine and have it open on the second computer.
Run it on the quick machine. You can use the following command to make it reachable by IP address from computers on the same network:
rails server -b 192.168.1.12 -p 8000
When you save code it should sync from one machine to the other, possibly with a slight delay.
Or set up Vagrant.
Those are the only ways I can see.
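Combining the two ideas above: once the project is synced to the Linux box (via Dropbox or otherwise), the slow parts can also be run there over SSH while the editing stays on the Mac. The user, address and path here are placeholders:

ssh user@192.168.1.12 'cd ~/Dropbox/myapp && bundle exec rspec'   # run the suite on the fast box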

PostgreSQL via pgAdmin III - Server Doesn't Listen

Our company has an old Linux server that runs a few Tomcat web applications. One of those applications connects to PostgreSQL. While I'm a C#.NET/Windows coder, I need to connect to this database from my computer using pgAdmin III (or any suggested equivalent). When attempting the connection, pgAdmin says "Server doesn't listen".
Without knowing much about Linux, I'm using WinSCP to browse the file structure. I have ZERO documentation on the old apps, their data sources, or their data connections. I've been able to determine the following, assuming the location of the web app is actually legitimate and not some non-running copy.
PostgreSQL
In one app's connection information:
jdbc:postgresql://localhost:5432/somename
After some digging, I found the following possible instances of PostgreSQL in the server's file structure:
/etc/postgresql/8.3/main
/etc/postgresql/8.4/main
There's also /etc/postgresql-common, with very different types of files in it.
If there are other instances or related folders, I'm unaware of them and wouldn't know where to look. It's a labyrinthine beast.
I ensured in the config file for both that listen_addresses = '*', which was supposed to be one of the two fixes. It was already set to *, so assuming one of these is the right instance, I should be good there.
I know that at least some instance of PostgreSQL is turned on, because the old app is running and fetching data; that covers the other of the two fixes.
pgAdmin
I heard in a separate thread here that reinstalling pgAdmin might solve the problem, but it did not. I tried with and without SSL.
Here is how I'm trying to set up the connection in pgAdmin III:
Name: SomeName
Host: I've tried a few combinations here: //servername/somename, or just //servername
Port: 5432 (matches what was expected, and the port from the connection string)
Service: blank
Maintenance DB: I tried the pgAdmin default (postgres) and the actual DB I'm trying to connect to.
Username & Password: the credentials from the connection info in the old app.
I'm getting "Server doesn't listen", suggesting either that it's not on (well... some data source is on and working, and the data in WEB-INF suggests it's PostgreSQL), or that it's not accepting TCP/IP connections, which it is according to the instances of PostgreSQL I was able to find.
Long Story Short
At this point I'm assuming one of the following is the problem:
The connection information I'm entering into pgAdmin is not being entered correctly, but I don't know what I'm doing wrong.
The source of the connection information (the web application) is bad/old/not from a running instance (and in this case I don't know how to tell, not knowing Linux).
The instances of PostgreSQL I found are not the instances it's using, and I have no idea how to find the right one.
Something's fishy network-wise, but since both my computer and the Linux server are on the same network, that doesn't seem too likely.
Also, everyone, please document your stuff for the poor souls of the future. I greatly appreciate any assistance you're able to offer.
You may want to use a tunnel:
ssh -L 5432:localhost:5432 user@server
After you log into the remote server, port 5432 on your computer is mapped to port 5432 on the remote one, so you can point pgAdmin at localhost, port 5432. Make sure you don't have anything else running on that port on your computer.
Edit: look at these examples on how to set up tunnels using PuTTY.
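To address the "I have no idea how to find the right instance" part of the question: on the server itself you can check which PostgreSQL processes are actually listening, and on which addresses and ports (a sketch, assuming netstat is available):

sudo netstat -tlnp | grep 5432   # lists listening TCP sockets with the owning process

Two more notes: pgAdmin's Host field expects a bare hostname or IP (servername), not the //servername form from a JDBC URL; and even with listen_addresses = '*', PostgreSQL only accepts a remote connection if pg_hba.conf has a matching host line for the client, and those files are only re-read after a restart/reload.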

Install Neo4j on Azure, cannot browse WebAdmin

I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://:7474/webadmin Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this: it seems to have been due to the size of the compute/VM instance I was creating. The problem appears when running on Extra Small instances. I created a new installation using a Small instance and everything now works :).
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4:
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server auto-starts. The only thing you need to take care of, when constructing your VM, is to open an Input Endpoint with any public port you want (preferably 7474, to stay true to Neo4j) and internal port 7474.
Note that the UI has changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And... once the VM is up and running (it takes about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo', you do not want http://neo:7474 or http://neo.cloudapp.net:7474; you need to use your cloud service name (you had to create a name for the service when you deployed the VM).
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.
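A quick way to verify the endpoint from outside, using the example service name from the answer above:

curl -i http://yourservicename.cloudapp.net:7474   # a running server answers with an HTTP status line

If this times out, the endpoint/port mapping is the problem; if it returns a response but the browser does not, look at the client side instead.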

Collectd server not writing down received client data

I have a pretty strange problem with Collectd. I'm not new to Collectd; I used it for a long time on CentOS-based boxes, but now we have Ubuntu 12.04 LTS boxes and I have a really strange issue.
So: version 5.2 on Ubuntu 12.04 LTS, two boxes residing on Rackspace (maybe important, but I'm not sure). The network plugin is configured with two local IPs, without any firewall in between and without any security (just to try a simple client-server scenario).
On both machines collectd writes to its configured folders as it should, but the server machine never writes the data received from the client.
I troubleshooted with tcpdump, and I can clearly see UDP traffic and collectd data, including the hostname and plugin names from my client machine, arriving at the server, but it is never flushed to the appropriate folder (as configured in collectd). I'm also running everything as root, to rule out permission problems.
Does anyone have any idea, or similar experience with this? Or some idea of what I could do to troubleshoot this, besides crawling the internet (I think I clicked every sensible link Google gave me in the last two days) and checking the network layer (which looks fine)?
And just a small note: exactly the same happened with the official 4.10.2 version from Ubuntu's repo. After trying to troubleshoot that for hours, I moved to upgrading to version five.
I'd suggest trying out the quite generic troubleshooting procedure based on the csv and logfile plugins, as described in this answer. As everything seems to be fine locally, follow this procedure on the server, activating only the network plugin (in addition to logfile, csv and possibly rrdtool).
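As a concrete starting point for that procedure (a sketch; the config path and port are collectd defaults, adjust to the actual setup):

collectd -t -C /etc/collectd.conf   # parse the server config and exit, reporting any errors
tcpdump -n udp port 25826           # watch for client packets on the default network-plugin port

If the config parses, packets arrive, and the csv plugin still writes nothing for the client hostname, the logfile plugin output (at LogLevel info or debug) is the next place to look.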
So, after finding no way to fix this, I upgraded Ubuntu to 12.04.2 LTS (3.2.0-24-virtual) and it just started working fine, without any other intervention.
