I am trying to connect to a MongoDB instance running on a remote Linux server. The connection attempt times out, which leads me to suspect network settings on the Linux server, since I can SSH to the server and connect to MongoDB locally.
/etc/mongod.conf does not have the bind_ip set, so that is not the issue.
Can anyone point me to how I can troubleshoot this problem?
Can you telnet to the MongoDB port from the remote host and connect successfully?
If so, that means the port is open, the connection works, and it's not a firewall/iptables/etc. issue. You should then look into the next piece: do you have authentication enabled?
If you can't connect with telnet, then something is blocking you from getting from the remote host to that port. It could be a firewall, or it could be on the host itself; it really depends on your setup and infrastructure. A quick connectivity test is sketched below.
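A minimal connectivity check, assuming the default MongoDB port 27017 and a placeholder hostname (replace it with your server's address):

telnet mongo-server.example.com 27017    # connects if the port is reachable
nc -vz mongo-server.example.com 27017    # netcat alternative if telnet is not installed
# a timeout here points to a firewall or routing issue; "connection refused" means the host is reachable but nothing is listening on that port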
Remove any lock acquired by MongoDB:
sudo rm -rf /data/configdb/mongod.lock
Create the data directories for the config and shard servers:
sudo mkdir /data
sudo mkdir /data/db
sudo mkdir /data/configdb
Give ownership of and permissions on the folders created above:
sudo chown -R ubuntu:ubuntu /data
sudo chmod -R 755 /data
Now try to connect to MongoDB again; it should connect now.
I have an app that has been successfully running on EC2 for a few years. The system is Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-1032-aws x86_64).
It's a small and simple app with low traffic. I had never made any changes to the server itself until today. I wanted to deal with the "X packages can be updated." login message, so I ran:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
Then I ran sudo reboot. Once rebooted, the app runs perfectly. I can access it as normal via the public URL and look at things, including database data (PostgreSQL running directly on the server), with no issues or surprises.
But when I try to ssh into the machine again, I can't. I run ssh -i "key.pem" -vvv ubuntu@<IP> and get:
debug1: Connecting to <IP> [<IP>] port 22.
debug1: connect to address <IP> port 22: Operation timed out
ssh: connect to host <IP> port 22: Operation timed out
No changes were made to security groups. Also, it's such a small project that I never set up EC2 Instance Connect or anything like that.
I had the thought of launching a new EC2 and just switching the EBS volumes, thinking EBS would bring the app and data, while the instance itself would have configs and permissions.
I do not understand much about this (clearly), and was surprised to learn that the EBS volume itself seems to be the problem and hold all the cards.
I can switch EBS volumes back and forth between the two EC2 instances. At any given time, whichever one has the newest (and therefore blank) EBS volume attached at /dev/sda1 allows SSH but surely does not run the app. And, vice-versa: Whichever EC2 instance has the original EBS volume runs the app perfectly but keeps me locked out of ssh.
In this scenario, the question is: How can I make one of the EC2 instances bypass this EBS issue and make its own decision about allowing me to connect with ssh?
Or: What is the obvious and/or silly thing I'm missing here?
PS: I do have an Elastic IP set up for all of this, so it doesn't seem like DNS would be the source of the problem.
With John Rotenstein's help, I was able to resolve this.
Here are the core steps:
Phase 1 - Attach and mount additional volume
Per John's comment, it's possible to boot the instance from the "good" volume and then attach and mount the "bad" volume after. This allowed me to explore files and look for issues.
AWS panel
Attach the "good" volume to the EC2 instance as root, using /dev/sda1 for the device name
Start the EC2 instance
Attach the other volume after the instance has booted (an AWS CLI equivalent is sketched after this list)
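For reference, the volume-attach step can also be done from the AWS CLI; the volume ID, instance ID, and device name below are placeholders:

# attach the "bad" volume as a secondary device to the already-running instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
# inside the instance the device typically shows up as /dev/xvdf (partition /dev/xvdf1)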
Terminal
SSH into the server
See root volume information:
~$ df -hT /dev/xvda1
Check for mounted volumes:
~$ lsblk
See additional volume information:
~$ df -hT /dev/xvdf1
Switch to root user:
~$ sudo su -
Make a directory to be the mount path:
~$ mkdir /addvol
Mount the additional volume to the path:
~$ mount /dev/xvdf1 /addvol
Check additional volume contents:
~$ ls -la /addvol/home/ubuntu
Now I could see and navigate the additional volume's contents, finding config files, looking at authorized_keys, file permissions, etc.
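For example, to check the SSH-related ownership and permissions on the mounted volume (paths follow the /addvol mount point used above):

ls -ld /addvol/home/ubuntu/.ssh                    # sshd expects mode 700, owned by the user
ls -l /addvol/home/ubuntu/.ssh/authorized_keys     # sshd expects mode 600, owned by the user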
This article from AWS helped a lot to get me here.
After finally getting to this point, I could not find any problems with the keys, or permissions, etc. John pointed me to this article dealing with Ubuntu's firewall things.
Phase 2 - Dealing with the firewall
I ran some commands from the article and tried to understand how they worked.
Once I grasped it a little, I decided to use an existing reboot script I have on the volume to ensure the firewall was ok with SSH connections.
I updated my existing custom reboot script, adding the following lines:
sudo ufw allow ssh
sudo ufw allow 22
sudo ufw disable
sudo ufw --force enable
Basically it allows SSH twice, once by service name and once by port number. I'm a newbie at this stuff and just went for overkill.
Then it disables and re-enables the firewall to ensure it runs with these new rules configured.
Because sudo ufw enable requires an interactive confirmation, I chose to use sudo ufw --force enable.
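Once you can reach the server again, the resulting firewall state can be checked with a standard ufw command (not part of the original script):

sudo ufw status verbose    # should show an ALLOW rule for port 22/tcp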
Phase 3 - Testing and using it!
After the script update, I exited the server.
AWS panel:
Stop the EC2 instance
Detach one volume from the instance
Detach the other volume from the instance
Reattach the "bad" volume, this time as root
Start the EC2 instance
Terminal:
SSH into the instance - Voila!
NOTE: Before truly working 100%, my computer complained about the known_hosts thing. The server key must have changed during the update/upgrade and/or after all of the volume changes. I don't think having to re-confirm hosts is a big deal, so I usually just clear the entire contents of my local .ssh/known_hosts file. If you prefer to be specific, you can find the server's entry in there and delete only the relevant lines (see the command below).
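A more targeted way to drop just that host's entry is ssh-keygen's -R option (replace <IP> with the server's address or hostname):

ssh-keygen -R <IP>    # removes that host's keys from ~/.ssh/known_hosts and keeps a backup in known_hosts.old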
The original Postgres server ran on Ubuntu, but the machine crashed and can't boot.
I tried to mount the disk on an Ubuntu live system with Postgres installed, and then changed data_directory to the path of the original data directory that I mounted, but it failed.
There is no error message when I run systemctl start postgresql, but when I try to log in with psql, it shows:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
It seems that the server is not running. How can I fix it?
Or is there another way to get at the database without booting the original machine?
I have found the problem.
The permissions on data_directory should be 700, but I had set them to 777:
sudo chmod -R 700 <data_dir>
The owner of data_directory should be postgres:
sudo chown -R postgres:postgres <data_dir>
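Putting the fix together, a minimal sketch assuming the crashed disk's cluster is mounted at /mnt/olddisk/var/lib/postgresql/12/main (the mount point and version are assumptions; adjust to your paths):

sudo chown -R postgres:postgres /mnt/olddisk/var/lib/postgresql/12/main    # owner must be postgres
sudo chmod -R 700 /mnt/olddisk/var/lib/postgresql/12/main                  # permissions must be 700
# with data_directory in postgresql.conf pointing at that path, restart the service:
sudo systemctl restart postgresql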
I have one Ubuntu server located in another city (remote). I want to give that server a dummy display/GUI and access it from my local Ubuntu machine. How can I do this? Please suggest a way to create a dummy display on that server so I can access it like my own local machine.
Connect via ssh to your Linux server and do the following in the CLI:
sudo passwd root (set a password for the username you want to connect with)
sudo apt-get install ubuntu-desktop (to install your GUI)
sudo apt-get install xrdp (to install the middleware to connect through)
Open the remote-connection port (3389) on your cloud/host portal.
Then open a Remote Desktop connection to your VM's DNS name on that port and use the username/password you created to go through.
You can use the ssh service. Open a terminal and write:
ssh root@ip_address_of_server
So if the IP address of the server is 192.168.1.22, the command line will be:
ssh root@192.168.1.22
Don't forget to install the ssh service on the server first (see below).
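For completeness, on an Ubuntu server the SSH service is normally provided by the openssh-server package (package and service names assume a standard Ubuntu install):

sudo apt-get install openssh-server    # install the SSH server
sudo systemctl status ssh              # verify the service is running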
I'm getting the following error when I try to run "mongod" in the terminal. I've tried uninstalling, reinstalling, and restarting the machine. Any suggestions on how to get it working would be amazing.
ERROR:
dbpath (/data/db) does not exist.
Create this directory or give existing directory in --dbpath.
See http://dochub.mongodb.org/core/startingandstoppingmongo
Side note:
Node also stopped working on my machine around the same time that I got this error.
events.js:72
throw er; // Unhandled 'error' event
^
Error: failed to connect to [localhost:27017]
Any help would be much appreciated!
This should work to ensure that the directory is set up in the right place so that Mongo can find it:
sudo mkdir -p /data/db/
sudo chown `id -u` /data/db
You need to create the /data/db directory at the filesystem root, or point MongoDB at any other path with the following command:
mongod --dbpath /srv/mongodb/
See the example link. The same path can also be set in the config file, as sketched below.
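For a more permanent setup, the path can be set in the configuration file instead of on the command line; a minimal sketch assuming the standard /etc/mongod.conf YAML format (the path itself is just an example):

# /etc/mongod.conf
storage:
  dbPath: /srv/mongodb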
I solved the problem with:
sudo mongod --dbpath=/var/lib/mongodb, and then mongo to access the MongoDB shell.
Change the owner of the new data directory:
chown mongodb [data_directory]
and then try to start the mongo service again:
service mongod start
I solved the same problem with this.
Daemons (usually ending with d) are normally started as services. Starting the service (daemon) will let MongoDB work as designed, without permission changes, if it integrates well with your distro. I start it using the service named mongodb instead of starting mongod directly. On a distro with systemd enabled on startup, run:
sudo systemctl enable mongodb
sudo systemctl start mongodb
or, on a distro with upstart (if you have /etc/init) or init (if you have /etc/init.d) ( https://www.tecmint.com/systemd-replaces-init-in-linux/ ), instead run:
sudo service mongodb enable
sudo service mongodb start
If you have a distro with rc ("run commands") such as Gentoo (settings in /etc/init.d) (https://forums.gentoo.org/viewtopic-t-854138-start-0.html) run:
rc-update add mongodb default
/etc/init.d/mongodb start
In a distro/version of FreeBSD which still has rc (check whether your version has switched to systemd; if not, use the steps below):
add the following line to /etc/rc.conf:
mongod_enable="YES"
then:
sudo service mongod start
After starting the service, an unprivileged user can use mongo, and each user will have separate data (a quick check is sketched below).
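A quick way to confirm the service-based setup is working (the service name follows the systemd example above; adjust for your distro):

sudo systemctl status mongodb    # should report active (running)
mongo                            # connect to the shell as a regular, unprivileged user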
I also got the "dbpath (/data/db) does not exist" error, and creating the directory with the "mkdir -p /data/db" command (both with and without sudo) did not help. Later, on another site, a person named Emil answered that the path "/data/db" no longer works on Mac, so use "~/data/db" instead.
That is, use the command
mkdir -p ~/data/db
instead of the previous command.
Moreover, use
mongod --dbpath ~/data/db
to run mongod.
It worked for me; hope it works for others facing the same problem.
My Cassandra used to work with no problems.
I was able to connect with no problems but now for some reason it doesn't work anymore.
[default@unknown] connect localhost/9160;
Exception connecting to localhost/9160. Reason: Connection refused.
I am on Ubuntu Server.
Thanks in advance.
The solution to this question was provided to you on the pycassa google group:
https://groups.google.com/d/topic/pycassa-discuss/Bai7bvkHYU4/discussion
This is not a pycassa problem. The problems you are having are specific to starting a Cassandra instance and not following the documentation in the README.txt that is in the root folder of the distribution:
Getting started
This short guide will walk you through getting a basic one node cluster up
and running, and demonstrate some simple reads and writes.
tar -zxvf apache-cassandra-$VERSION.tar.gz
cd apache-cassandra-$VERSION
sudo mkdir -p /var/log/cassandra
sudo chown -R `whoami` /var/log/cassandra
sudo mkdir -p /var/lib/cassandra
sudo chown -R `whoami` /var/lib/cassandra
Note: The sample configuration files in conf/ determine the file-system
locations Cassandra uses for logging and data storage. You are free to
change these to suit your own environment and adjust the path names
used here accordingly.
Now that we're ready, let's start it up!
bin/cassandra -f
Running the startup script with the -f argument will cause Cassandra to
remain in the foreground and log to standard out.
Now let's try to read and write some data using the command line client.
bin/cassandra-cli --host localhost
The command line client is interactive so if everything worked you should
be sitting in front of a prompt...
Connected to: "Test Cluster" on localhost/9160
Welcome to cassandra CLI.
Type 'help;' or '?' for help. Type 'quit;' or 'exit;' to quit.
[default@unknown]
As the banner says, you can use 'help;' or '?' to see what the CLI has to
offer, and 'quit;' or 'exit;' when you've had enough fun.
Verify the following:
Cassandra process is running and thrift is listening on 9160 (netstat -tulpn; example commands below)
Port 9160 is not being blocked by a firewall rule or similar
If the above are true, then check the cassandra log for additional information.
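A sketch of that check, run on the Cassandra server itself (ss is the usual replacement on systems that no longer ship netstat):

sudo netstat -tulpn | grep 9160    # the thrift port should show a LISTEN entry for the Cassandra java process
sudo ss -tlnp | grep 9160          # equivalent using ss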
Other than that your description is pretty vague. So any other information about what may have changed in the environment would be helpful.
I faced the same problem and the reason was that I had configured Cassandra to listen on the Server IP and not on localhost.
/etc/dse/cassandra/cassandra.yaml
listen_address: 10.102.8.71
So try this and check if it works for you:
cassandra-cli --host "your host name"
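With the listen_address from the example config above, the connection would look something like this (the IP is taken from that example and 9160 is the Thrift default):

cassandra-cli --host 10.102.8.71 --port 9160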