I'm trying to start 3 ZooKeeper services on the same host on my development computer.
This is obviously not something I'd do in production; I'm doing it to explore fault tolerance and Kafka's dependency on ZooKeeper on my test/development machine.
I have installed Kafka 2.5.0 on my dev computer, and I was able to successfully run 3 Kafka services on the host with 1 ZooKeeper service on the same host, using the ZooKeeper scripts that come with the Kafka package.
The problems started when I tried to set up 3 ZooKeeper services... I did the following, but I'm not able to start the services successfully.
I have 3 config files:
config/zookeeper.properties
config/zookeeper1.properties
config/zookeeper2.properties
The content of config/zookeeper.properties is:
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
admin.enableServer=false
initLimit=5
syncLimit=2
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
In config/zookeeper1.properties I have clientPort=2182 and dataDir=/tmp/zookeeper1.
In config/zookeeper2.properties I have clientPort=2183 and dataDir=/tmp/zookeeper2.
I also created the files /tmp/zookeeper/myid, /tmp/zookeeper1/myid and /tmp/zookeeper2/myid, and entered the id values 1, 2 and 3 respectively.
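Roughly, the equivalent of (a sketch, assuming the directories don't exist yet):
$ mkdir -p /tmp/zookeeper /tmp/zookeeper1 /tmp/zookeeper2
$ echo 1 > /tmp/zookeeper/myid
$ echo 2 > /tmp/zookeeper1/myid
$ echo 3 > /tmp/zookeeper2/myid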
When I start the 3 ZooKeepers from the command line, they start OK:
$ sudo bin/zookeeper-server-start.sh config/zookeeper.properties
$ sudo bin/zookeeper-server-start.sh config/zookeeper1.properties
$ sudo bin/zookeeper-server-start.sh config/zookeeper2.properties
and I can also see who the leader and the followers are:
$ echo srvr | nc localhost 2181 | grep Mode
Mode: follower
$ echo srvr | nc localhost 2182 | grep Mode
Mode: leader
$ echo srvr | nc localhost 2183 | grep Mode
Mode: follower
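The same check in a single loop:
$ for p in 2181 2182 2183; do echo srvr | nc localhost $p | grep Mode; done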
But when I try setting them up as systemd services, I'm unable to start them properly... here are the unit files I have:
$ cat /etc/systemd/system/zookeeper.service
[Unit]
Description=zookeeper
After=syslog.target network.target
[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
[Install]
WantedBy=multi-user.target
$ cat /etc/systemd/system/zookeeper1.service
[Unit]
Description=zookeeper 1
After=syslog.target network.target
[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper1.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop1.sh
[Install]
WantedBy=multi-user.target
$ cat /etc/systemd/system/zookeeper2.service
[Unit]
Description=zookeeper 2
After=syslog.target network.target
[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper2.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop2.sh
[Install]
WantedBy=multi-user.target
After trying to start them with
$ sudo systemctl daemon-reload
$ sudo systemctl enable zookeeper
$ sudo systemctl enable zookeeper1
$ sudo systemctl enable zookeeper2
$ sudo systemctl start zookeeper
$ sudo systemctl start zookeeper1
$ sudo systemctl start zookeeper2
I don't see them running...
In the system log I see this:
May 17 03:56:20 melly-dev2 kafka-server-start.sh: [2020-05-17 03:56:20,039] INFO Opening socket connection to server localhost/127.0.0.1:2183. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
May 17 03:56:20 melly-dev2 kafka-server-start.sh: [2020-05-17 03:56:20,040] INFO Socket error occurred: localhost/127.0.0.1:2183: Connection refused (org.apache.zookeeper.ClientCnxn)
Here's what I see from sudo journalctl -u zookeeper.service:
[2020-05-17 06:33:33,096] INFO Notification time out: 6400 (org.apache.zookeeper.server.quorum.FastLeaderElection)
[2020-05-17 06:33:39,497] WARN Cannot open channel to 2 at election address localhost/127.0.0.1:3889 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:650)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:707)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:735)
at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:910)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1247)
[2020-05-17 06:33:39,497] WARN Cannot open channel to 3 at election address localhost/127.0.0.1:3890 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:650)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:707)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:735)
at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:910)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1247)
[2020-05-17 06:33:39,497] INFO Notification time out: 12800 (org.apache.zookeeper.server.quorum.FastLeaderElection)
How can I set/find the ZooKeeper log files, and how can I make ZooKeeper start successfully as a service?
Update: journalctl eventually revealed the actual error:
[2020-05-17 07:02:35,248] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Cannot write to data directory /tmp/zookeeper1/version-2
I suspect that the user running the service does not have permission to write under /tmp/zookeeper/.
Also, make sure to change dataDir to a permanent location (i.e. not under /tmp/) as everything will be lost once your machine turns off.
The missing step in my procedure was:
sudo chown -R kafka:kafka /tmp/zookeeper
sudo chown -R kafka:kafka /tmp/zookeeper1
sudo chown -R kafka:kafka /tmp/zookeeper2
sudo chmod -R 777 /tmp/zookeeper
sudo chmod -R 777 /tmp/zookeeper1
sudo chmod -R 777 /tmp/zookeeper2
Part of the problem was that the default ZooKeeper setup that comes with Kafka produces no log file showing the write error. Once I ran this command (based on GiorgosMyrianthous's comment):
journalctl -u zookeeper.service
I could clearly see the error and fix the problem.
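A quick way to confirm the permissions are right before starting the services again (the writetest file name is just illustrative):
sudo -u kafka touch /tmp/zookeeper/writetest && sudo rm /tmp/zookeeper/writetest
If the touch succeeds for all three dataDirs, the kafka user can write where ZooKeeper needs to.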
Related
I want to start my app as service from systemd but the app is not starting.
My unit file appstart.service looks like this:
[Unit]
Description=Application start
[Service]
Type=simple
User=ec2-user
ExecStart=/usr/bin/bash /home/ec2-user/project/restartScript.sh
SyslogIdentifier=App_start
[Install]
WantedBy=multi-user.target
restartScript.sh should start the Java app:
#!/bin/bash
export SPRING_PROFILES_ACTIVE="tst,development"
cd /home/ec2-user/project
pkill java
/usr/bin/java -jar /home/ec2-user/project/app.jar >>/home/ec2-user/project/web.log 2>>/home/ec2-user/project/web-error.log &
I am starting the app as a service this way, using User Data on an AWS EC2 instance:
#!/bin/bash
mkdir /home/ec2-user/project
cd /home/ec2-user/project
sudo wget -P /home/ec2-user/project/ https://tst.s3.eu-west-1.amazonaws.com/app.jar
chown -R ec2-user:ec2-user /home/ec2-user/project
sudo wget -P /home/ec2-user/project/ https://tst.s3.eu-west-1.amazonaws.com/restartScript.sh
sudo chmod 755 /home/ec2-user/project/restartScript.sh
cd /etc/systemd/system/
sudo wget -P /etc/systemd/system/ https://tst.s3.eu-west-1.amazonaws.com/appstart.service
sudo su
systemctl daemon-reload
systemctl enable appstart.service
systemctl start appstart.service
exit
The output I am getting when I start the EC2 instance this way is:
$ systemctl status appstart.service
● appstart.service - Application start
Loaded: loaded (/etc/systemd/system/appstart.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Thu 2022-08-25 13:35:52 UTC; 4min 19s ago
Process: 7328 ExecStart=/usr/bin/bash /home/ec2-user/project/restartScript.sh (code=exited, status=0/SUCCESS)
Main PID: 7328 (code=exited, status=0/SUCCESS)
Aug 25 13:35:52 ip-x-x-x-x.tst.local systemd[1]: Started Application start.
When I try to do
systemctl start appstart.service
Nothing changes. The application is not working.
Any idea why is this happening?
OS on the machine:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
Nothing is actually wrong here from systemd's point of view: the service ran and the script finished successfully. The Active state is inactive (dead); on a failure it would be failed, and the Main PID line shows (code=exited, status=0/SUCCESS). Because restartScript.sh launches java in the background with & and then exits, systemd (with Type=simple) considers the service finished as soon as the script returns.
To verify that the service ran correctly, add this line somewhere in restartScript.sh:
echo "Test" > test.txt
After starting the service you will find the created file next to restartScript.sh.
I managed to resolve the issue by changing the WantedBy= line in the [Install] section of appstart.service to default.target.
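For what it's worth, another way around the background-and-exit problem is to let systemd supervise java directly instead of going through the script; a sketch assuming the same paths and profiles as above:
[Unit]
Description=Application start
[Service]
Type=simple
User=ec2-user
WorkingDirectory=/home/ec2-user/project
Environment=SPRING_PROFILES_ACTIVE=tst,development
ExecStart=/usr/bin/java -jar /home/ec2-user/project/app.jar
SyslogIdentifier=App_start
[Install]
WantedBy=multi-user.target
With this, systemd tracks the java process itself, so the unit stays active while the app runs and options like Restart=on-failure work as expected.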
I'm trying to install a VNC server on my CentOS 7 server by following the steps below:
1) Install the VNC server:
sudo yum install tigervnc-server
After you've installed the package, log in with the user you want to run the VNC server as and issue the command below in a terminal to configure a password for the VNC server.
su - your_user # If you want to configure VNC server to run under this user directly from CLI without switching users from GUI
$ vncpasswd
Next, add a VNC service configuration file for your user via a daemon configuration file placed in the systemd directory tree. To copy the VNC template file, run the following command with root privileges:
cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
Next, edit the copied VNC template configuration file in /etc/systemd/system/ and replace the values to reflect your user, as shown below:
vi /etc/systemd/system/vncserver@\:1.service
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target
[Service]
Type=forking
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/sbin/runuser -l my_user -c "/usr/bin/vncserver %i -geometry 1280x720"
PIDFile=/home/my_user/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
[Install]
WantedBy=multi-user.target
After you've made the changes to the VNC service file, reload systemd to pick up the new configuration and start the TigerVNC server:
systemctl daemon-reload
# systemctl start vncserver@:1
# systemctl status vncserver@:1
# systemctl enable vncserver@:1
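If the unit fails to start, the journal for that instance is the first place to look:
# journalctl -u vncserver@:1.service -e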
I get the following error:
systemctl daemon-reload
[root@ns363691 ~]# systemctl start vncserver@:1
Job for vncserver@:1.service failed because a configured resource limit was exceeded. See "systemctl status vncserver@:1.service" and "journalctl -xe" for details.
[root@ns363691 ~]# systemctl status vncserver@:1
● vncserver@:1.service - Remote desktop service (VNC)
Loaded: loaded (/etc/systemd/system/vncserver@:1.service; disabled; vendor preset: disabled)
Active: failed (Result: resources) since Wed 2019-11-13 02:09:07 CET; 14s ago
Process: 7605 ExecStart=/usr/sbin/runuser -l root -c /usr/bin/vncserver %i -geometry 1280x720 (code=exited, status=0/SUCCESS)
Process: 7593 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill %i > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
Nov 13 02:09:04 ns363691 systemd[1]: Starting Remote desktop service (VNC)...
Nov 13 02:09:07 ns363691 systemd[1]: Can't open PID file /home/root/.vnc/ns363691:1.pid (yet?) after start: No such file or directory
Nov 13 02:09:07 ns363691 systemd[1]: Failed to start Remote desktop service (VNC).
Nov 13 02:09:07 ns363691 systemd[1]: Unit vncserver@:1.service entered failed state.
Nov 13 02:09:07 ns363691 systemd[1]: vncserver@:1.service failed.
Any idea why the service does not start, and what am I doing wrong? :(
I want to make a systemd unit for pgagent.
I found only an init.d script on this page http://technobytz.com/automatic-sql-database-backup-postgres.html, and I don't know how to run start-stop-daemon from systemd.
I have written that unit:
[Unit]
Description=pgagent
After=network.target postgresql.service
[Service]
ExecStart=start-stop-daemon -b --start --quiet --exec pgagent --name pgagent --startas pgagent -- hostaddr=localhost port=5432 dbname=postgres user=postgres
ExecStop=start-stop-daemon --stop --quiet -n pgagent
[Install]
WantedBy=multi-user.target
But I get errors like:
[/etc/systemd/system/pgagent.service:14] Executable path is not absolute, ignoring: start-stop-daemon --stop --quiet -n pgagent
What is wrong with that unit?
systemd expects the ExecStart and ExecStop commands to include the full path to the executable.
start-stop-daemon is not necessary for services under systemd management; you will want the unit to execute the underlying pgagent command directly.
Look at https://unix.stackexchange.com/questions/220362/systemd-postgresql-start-script for an example.
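For example, a minimal unit that runs pgagent directly could look like this (a sketch: the /usr/bin/pgagent path is an assumption, and Type=forking matches the stock pgagent, which daemonizes by default):
[Unit]
Description=pgagent
After=network.target postgresql.service
[Service]
Type=forking
ExecStart=/usr/bin/pgagent hostaddr=localhost port=5432 dbname=postgres user=postgres
[Install]
WantedBy=multi-user.target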
If you installed pgagent with yum or apt-get, it should have created the systemd file for you. For example, on RHEL 7 (essentially CentOS 7), you can install PostgreSQL 12 followed by pgagent
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install postgresql12
sudo yum install postgresql12-server
sudo yum install pgagent_12.x86_64
This installs PostgreSQL to /var/lib/pgsql/12 and pgagent_12 to /usr/bin/pgagent_12
In addition, it creates a systemd file at /usr/lib/systemd/system/pgagent_12.service
View the status of the service with systemctl status pgagent_12
Configure it to auto-start, then start it, with:
sudo systemctl enable pgagent_12
sudo systemctl start pgagent_12
Most likely the authentication will fail, since the default .service file has
ExecStart=/usr/bin/pgagent_12 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT}
Confirm with sudo tail /var/log/pgagent_12.log which will show
Sat Oct 12 19:35:47 2019 WARNING: Couldn't create the primary connection [Attempt #1]
Sat Oct 12 19:35:52 2019 WARNING: Couldn't create the primary connection [Attempt #2]
Sat Oct 12 19:35:57 2019 WARNING: Couldn't create the primary connection [Attempt #3]
Sat Oct 12 19:36:02 2019 WARNING: Couldn't create the primary connection [Attempt #4]
To fix things, we need to create a .pgpass file that is accessible when the service starts. First, stop the service
sudo systemctl stop pgagent_12
Examining the service file with less /usr/lib/systemd/system/pgagent_12.service shows it has
User=pgagent
Group=pgagent
Furthermore, /etc/pgagent/pgagent_12.conf has
DBNAME=postgres
DBUSER=postgres
DBHOST=127.0.0.1
DBPORT=5432
LOGFILE=/var/log/pgagent_12.log
Examine the /etc/passwd file to look for the pgagent user and its home directory: grep "pgagent" /etc/passwd
pgagent:x:980:977:pgAgent Job Schedule:/home/pgagent:/bin/false
Thus, we need to create a .pgpass file at /home/pgagent/.pgpass to define the postgres user's password
sudo su -
mkdir /home/pgagent
chown pgagent:pgagent /home/pgagent
chmod 0700 /home/pgagent
echo "127.0.0.1:5432:postgres:postgres:PasswordGoesHere" > /home/pgagent/.pgpass
chown pgagent:pgagent /home/pgagent/.pgpass
chmod 0600 /home/pgagent/.pgpass
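Before restarting the service, you can check that the .pgpass file is actually picked up (assuming the psql client is installed; -H makes sudo set HOME to pgagent's home so psql finds the file):
sudo -H -u pgagent psql "host=127.0.0.1 port=5432 dbname=postgres user=postgres" -c "SELECT 1"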
The directory and file permissions are important. If you're having problems, you can enable debug logging by editing the service file at /usr/lib/systemd/system/pgagent_12.service and adding -l 2 to the ExecStart command:
ExecStart=/usr/bin/pgagent_12 -l 2 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT}
After changing a .service file, things must be reloaded with sudo systemctl daemon-reload (systemd will inform you of this requirement if you forget it).
Keep starting/stopping the service and checking /var/log/pgagent_12.log. Eventually it will start properly and sudo systemctl status pgagent_12 will show:
● pgagent_12.service - PgAgent for PostgreSQL 12
Loaded: loaded (/usr/lib/systemd/system/pgagent_12.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-10-12 20:18:18 PDT; 13s ago
Process: 6159 ExecStart=/usr/bin/pgagent_12 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT} (code=exited, status=0/SUCCESS)
Main PID: 6160 (pgagent_12)
Tasks: 1
Memory: 1.1M
CGroup: /system.slice/pgagent_12.service
└─6160 /usr/bin/pgagent_12 -s /var/log/pgagent_12.log hostaddr=127.0.0.1 dbname=postgres user=postgres port=5432
Oct 12 20:18:18 prismweb3 systemd[1]: Starting PgAgent for PostgreSQL 12...
Oct 12 20:18:18 prismweb3 systemd[1]: Started PgAgent for PostgreSQL 12.
Context: I've added some scripts to an empty CentOS VM to install some monitoring tools, including Prometheus 2.0.
Problem: once it's installed in the non-root sudo user's home directory, I copy the prometheus.service file that I wrote to /etc/systemd/system, then run sudo systemctl daemon-reload, sudo systemctl enable prometheus.service and sudo systemctl start prometheus.service, but the service fails.
Note: I can run the prometheus binary directly in the terminal using the same command without any problems, but I can't run it as a service.
Here's my .service file:
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=centos
ExecStart=/home/centos/prometheus/prometheus --config.file="/home/centos/prometheus/prometheus.yml" --storage.tsdb.path="/home/centos/prometheus/data"
[Install]
WantedBy=multi-user.target
Here's some of the log:
...
Nov 21 12:41:55 localhost.localdomain prometheus[1554]: level=info ts=2017-11-21T17:41:55.114757834Z caller=main.go:314 msg="Starting TSDB"
Nov 21 12:41:55 localhost.localdomain prometheus[1554]: level=error ts=2017-11-21T17:41:55.114819195Z caller=main.go:323 msg="Opening storage failed" err="mkdir \": permission denied"
Nov 21 12:41:55 localhost.localdomain systemd[1]: prometheus.service: control process exited, code=exited status=1
Nov 21 12:41:55 localhost.localdomain systemd[1]: Failed to start Prometheus Server.
...
I'm new to Linux service management. I've spent a lot of time reading online, but I'm not sure how permissions work for services or why it can't create the directory it needs to create.
I've tried:
Changing "SELINUX=enforcing" to "SELINUX=permissive"
Changing the permissions on the prometheus directory to 777
...
You also have to set up --web.console.templates and --web.console.libraries. You can copy these directories from the extracted archive. For example:
sudo cp -R ~/prometheus-2.0.0.linux-amd64/consoles /etc/prometheus
sudo cp -R ~/prometheus-2.0.0.linux-amd64/console_libraries /etc/prometheus
Example of a working service (change the paths for yours). Note that the paths are unquoted here; judging by the error in the question (mkdir ": permission denied), the quotes in the original --storage.tsdb.path value seem to have been passed to Prometheus literally:
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries
[Install]
WantedBy=multi-user.target
P.S. Inspired by suggestions here.
The data directory for Prometheus must be writable by the user the prometheus process runs as. If you're running it from a container and mounting the data directory from outside, you can set 777 permissions on the original folder.
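Outside a container, the more conventional sketch is a dedicated system user that owns the data directory (the user name and path here just follow common convention):
sudo useradd --no-create-home --shell /bin/false prometheus
sudo mkdir -p /var/lib/prometheus
sudo chown -R prometheus:prometheus /var/lib/prometheus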
If SELinux is stopping startup, always consult journalctl -xe to view the SELinux alerts; they include recommended actions to take.
I have set up Prometheus with SELinux on CentOS 8 without problems, and I don't agree with people who recommend disabling SELinux.
For reference, Red Hat has a good video for you to watch:
https://www.youtube.com/watch?v=_WOKRaM-HI4&t=1464s
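For example, the audit log names each denial, and audit2allow can print the rules that would permit it (assuming the audit and policycoreutils tools are installed; review any suggested rules before loading them):
sudo ausearch -m avc -ts recent
sudo ausearch -m avc -ts recent | audit2allow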
Here is my prometheus.service file.
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=prometheus
#Restart=on-failure
#Change these lines if you downloaded
#Prometheus to a different path or user
ExecStart=/home/prometheus/prometheus-2.22.0.linux-amd64/prometheus \
--config.file=/home/prometheus/prometheus-2.22.0.linux-amd64/prometheus.yml \
--storage.tsdb.path=/home/prometheus/prometheus-2.22.0.linux-amd64/data \
--web.listen-address=0.0.0.0:9091
[Install]
WantedBy=multi-user.target
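After saving the unit, reload systemd and start it (assuming the file is named prometheus.service):
sudo systemctl daemon-reload
sudo systemctl enable --now prometheus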
I've installed mongodb for the very first time on my Debian 8, following this mongodb install guide. The goal is to use mongodb for rocket.chat, for which I follow this guide.
So far, all I did was:
$sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$sudo apt-get update
$sudo apt-get install mongodb-org
$sudo systemctl enable mongod
$sudo vi /etc/mongod.conf
and insert:
replication:
  oplogSizeMB: 1
  replSetName: rs0
$sudo systemctl restart mongod
$export LC_ALL=C
$sudo mongo
MongoDB shell version v3.4.0
connecting to: mongodb://127.0.0.1:27017
2016-12-14T10:21:55.356+0100 W NETWORK [main] Failed to connect to 127.0.0.1:27017 after 5000 milliseconds, giving up.
2016-12-14T10:21:55.356+0100 E QUERY [main] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:234:13
#(connect):1:6
exception: connect failed
I'm monitoring the log file, when attempting to access the mongo shell, but nothing shows up.
The mongod service is running, configured to listen on 127.0.0.1 and I'm working on the server locally.
How do I access the mongo shell from the localhost?
Edit: solved. The issue was an iptables rule that disallowed local connections to mongodb.
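For anyone who hits the same thing, a sketch of how to find and remove such a rule (the rule number 3 below is purely illustrative):
$ sudo iptables -L INPUT -n --line-numbers   # look for a REJECT/DROP matching 127.0.0.1 or port 27017
$ sudo iptables -D INPUT 3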
Run the following commands:
sudo rm /var/lib/mongodb/mongod.lock
sudo service mongod restart
Credit: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
You can access the mongodb shell by changing directory to your MongoDB installation and entering ./bin/mongo.
To recover from an unclean shutdown, run these in a terminal:
killall mongod
cd ~
./mongod --repair
rm -rfv data/mongod.lock
./mongod
If you want to remove the --httpinterface warning, try this (it only needs running once):
echo 'mongod --bind_ip=$IP --dbpath=data --nojournal --rest --httpinterface "$@"' > mongod
before you run
./mongod
I hope this helps. Cheers!