I installed Odoo 15 on a VPS, and when I try to start it by running
systemctl start odoo15
and after that run
systemctl status odoo15
I get:
```
● odoo15.service - Odoo15
   Loaded: loaded (/etc/systemd/system/odoo15.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2023-01-05 12:25:15 UTC; 8s ago
  Process: 19453 ExecStart=/opt/odoo/odoo15-venv/bin/python3 /opt/odoo/odoo15/odoo-bin -c /etc/odoo.conf (code=exited, status=203/EXEC)
 Main PID: 19453 (code=exited, status=203/EXEC)

Jan 05 12:25:15 vps103.com systemd[1]: Started Odoo15.
Jan 05 12:25:15 vps103.com systemd[1]: odoo15.service: main proces...C
Jan 05 12:25:15 vps103.com systemd[1]: Unit odoo15.service entered....
Jan 05 12:25:15 vps103.com systemd[1]: odoo15.service failed.
```
Can anyone help me, please?
Thanks a lot.
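For what it's worth, exit status 203/EXEC is systemd's way of saying it could not execute the ExecStart command at all (wrong path, missing interpreter, or no execute bit), so a quick sanity check on the paths in the unit file is a reasonable first step. A minimal sketch, using the paths from the status output above (adjust them to your install):

```shell
# status=203/EXEC means systemd could not execute the ExecStart binary.
# Verify each path referenced by the unit file exists and is usable:
test -x /opt/odoo/odoo15-venv/bin/python3 || echo "venv python missing or not executable"
test -f /opt/odoo/odoo15/odoo-bin || echo "odoo-bin missing"
test -r /etc/odoo.conf || echo "config not readable"
```

If any of these print a message, fixing that path (or the permissions on it) in the unit file is usually the whole fix.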
Related
My question is similar to this one, but none of the answers given there work for me. My /etc/mongod.conf file looks like this when I start the server:
bindIp: 127.0.0.1,<sameserverIP>,<anotherServerIp>
If I run systemctl status mongod I get the following response:
mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2022-12-27 14:12:42 +0545; 2min 36s ago
Docs: https://docs.mongodb.org/manual
Process: 5493 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=48)
Main PID: 5493 (code=exited, status=48)
If I change my /etc/mongod.conf file like this and restart the server:
bindIp: 127.0.0.1,<sameserverIP>
then the MongoDB server works fine. Here is the output of systemctl status mongod:
mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-12-27 13:47:45 +0545; 10s ago
Docs: https://docs.mongodb.org/manual
Main PID: 4608 (mongod)
Memory: 211.8M
CGroup: /system.slice/mongod.service
└─4608 /usr/bin/mongod --config /etc/mongod.conf
I also checked the journal using journalctl -u mongod:
Dec 27 14:12:42 178-79-149-108.ip.linodeusercontent.com systemd[1]: Started MongoDB Database Server.
Dec 27 14:12:42 178-79-149-108.ip.linodeusercontent.com systemd[1]: mongod.service: Main process exited, code=exited, status=48/n/a
Dec 27 14:12:42 178-79-149-108.ip.linodeusercontent.com systemd[1]: mongod.service: Failed with result 'exit-code'.
This Mongo server is installed on a Linode (Ubuntu 20.04 LTS, from the Linode Marketplace) and the MongoDB version is 5.0.14. I want to give access to only 4 IP addresses on this MongoDB instance. Can anyone help me fix this issue?
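A note on why the longer bindIp line fails: in mongod, net.bindIp lists the local interface addresses the server should listen on, not the clients that are allowed to connect. Including another server's IP (an address this host does not own) makes the bind fail, and mongod exits with a non-zero status, as seen above. Restricting which 4 client IPs may connect is a firewall job rather than a bindIp one. A minimal config sketch under that assumption, keeping the placeholder names from the question:

```yaml
# /etc/mongod.conf (fragment) - listen only on addresses this host owns;
# <sameserverIP> is the question's placeholder for this host's own IP.
# Client access control (the 4 allowed IPs) belongs in the firewall instead.
net:
  port: 27017
  bindIp: 127.0.0.1,<sameserverIP>
```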
I installed a rabbitmq server and it was active and running fine until I entered:
echo "[{loopback_users, []}]}]." > /etc/rabbitmq/rabbitmq.config
I also assigned a username and password, and made that user an administrator. I tried to restart the server after that and it wouldn't restart. I checked the status and it was no longer running. Any idea what I did wrong? Still a rookie, any tips would be great.
[root@rmq01 ~]# systemctl status rabbitmq-server.service -l
● rabbitmq-server.service - RabbitMQ broker
Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2022-08-12 05:47:22 UTC; 1s ago
Process: 1807 ExecStop=/usr/sbin/rabbitmqctl shutdown (code=exited, status=0/SUCCESS)
Process: 7026 ExecStart=/usr/sbin/rabbitmq-server (code=exited, status=1/FAILURE)
Main PID: 7026 (code=exited, status=1/FAILURE)
Status: "Standing by"
Aug 12 05:47:22 rmq01 systemd[1]: rabbitmq-server.service: main process exited, code=exited, status=1/FAILURE
Aug 12 05:47:22 rmq01 systemd[1]: Failed to start RabbitMQ broker.
Aug 12 05:47:22 rmq01 systemd[1]: Unit rabbitmq-server.service entered failed state.
Aug 12 05:47:22 rmq01 systemd[1]: rabbitmq-server.service failed.
I'm sure the answer is right in front of me but I can't seem to pin point it.
Please use the new config file format:
https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example
## Related doc guide: https://rabbitmq.com/access-control.html.
## The default "guest" user is only permitted to access the server
## via a loopback interface (e.g. localhost).
## {loopback_users, [<<"guest">>]},
##
# loopback_users.guest = true
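As a side note on why the original one-liner broke the broker: the classic Erlang-term format needs the settings wrapped in a {rabbit, [...]} tuple with balanced brackets, and the echoed line `[{loopback_users, []}]}].` has a stray `}]`, so the file does not parse and the node fails on startup. The balanced classic form would be `[{rabbit, [{loopback_users, []}]}].` — and assuming the goal was to let the guest user log in from non-loopback interfaces, the new sysctl-style rabbitmq.conf expresses the same intent in one line:

```ini
# /etc/rabbitmq/rabbitmq.conf (new-style format)
# false = do NOT restrict guest to loopback; use with care on exposed hosts
loopback_users.guest = false
```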
pm2 is not resurrecting the services on reboot. I am using an EC2 instance with RHEL 8.
This is how I set up pm2:
1. pm2 start service1
2. pm2 startup
3. run the command it prints
4. pm2 save
5. sudo reboot
Following are the logs from running systemctl -l status pm2-ec2-user:
● pm2-ec2-user.service - PM2 process manager
Loaded: loaded (/etc/systemd/system/pm2-ec2-user.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2021-06-15 21:24:35 UTC; 662ms ago
Docs: https://pm2.keymetrics.io/
Process: 1528 ExecStart=/home/ec2-user/.nvm/versions/node/v12.13.0/lib/node_modules/pm2/bin/pm2 resurrect (code=exited, status=203/EXEC)
Jun 15 21:24:35 ip-172-31-0-225.us-east-2.compute.internal systemd[1]: pm2-ec2-user.service: Control process exited, code=exited status=203
Jun 15 21:24:35 ip-172-31-0-225.us-east-2.compute.internal systemd[1]: pm2-ec2-user.service: Failed with result 'exit-code'.
Jun 15 21:24:35 ip-172-31-0-225.us-east-2.compute.internal systemd[1]: Failed to start PM2 process manager.
When I run the same command (/home/ec2-user/.nvm/versions/node/v12.13.0/lib/node_modules/pm2/bin/pm2 resurrect) manually after reboot, it works, but somehow it does not run automatically.
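One common cause, offered here as an assumption rather than something the logs prove: status=203/EXEC means systemd could not execute the pm2 script, and with an nvm-managed Node the script's `#!/usr/bin/env node` shebang fails whenever node is not on the unit's PATH (systemd does not load your shell profile). A hypothetical drop-in sketch that pins PATH to the nvm node directory visible in the ExecStart line:

```ini
# /etc/systemd/system/pm2-ec2-user.service.d/override.conf  (hypothetical)
[Service]
Environment=PATH=/home/ec2-user/.nvm/versions/node/v12.13.0/bin:/usr/local/bin:/usr/bin:/bin
```

After adding a drop-in like this, run systemctl daemon-reload before rebooting. The unit that pm2 startup generates normally sets a PATH line itself, so comparing that line against where node actually lives is the first thing to check.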
I'm trying to start NFS on Fedora 25:
[root@localhost tftpboot]# systemctl start nfs-server.service
Failed to start nfs-server.service: Unit proc-fs-nfsd.mount is masked.
It gave the error above, and the status also shows inactive:
[root@localhost tftpboot]# systemctl status nfs-server.service
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: inactive (dead) since Thu 2017-03-30 19:02:10 IST; 2min 45s ago
Process: 4886 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
Process: 4883 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
Process: 4880 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
Main PID: 981 (code=exited, status=0/SUCCESS)
I tried the command sudo systemctl unmask proc-fs-nfsd.mount to unmask it, then sudo service nfs restart, and it worked.
I installed nginx on my remote server, but I made a mistake in my nginx.conf file and was not able to revert it, so I tried to remove nginx and reconfigure it. I used the steps given in this link to delete nginx:
http://www.ehowstuff.com/how-to-remove-uninstall-nginx-on-centos-7-rhel-7-oracle-linux-7/
Then I ran yum remove nginx and reinstalled it, but when I try sudo systemctl start nginx or [root@lotto nginx]# service nginx start, it shows:
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
When I run
[root@lotto nginx]# systemctl status nginx.service
it shows:
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2016-07-01 07:48:44 EDT; 18s ago
Process: 30832 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 30830 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 14307 (code=exited, status=0/SUCCESS)
Jul 01 07:48:44 lotto systemd[1]: Starting The nginx HTTP and reverse proxy server...
Jul 01 07:48:44 lotto nginx[30832]: nginx: [emerg] getpwnam("nginx") failed in /etc/nginx/nginx.conf:5
Jul 01 07:48:44 lotto nginx[30832]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jul 01 07:48:44 lotto systemd[1]: nginx.service: control process exited, code=exited status=1
Jul 01 07:48:44 lotto systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Jul 01 07:48:44 lotto systemd[1]: Unit nginx.service entered failed state.
Jul 01 07:48:44 lotto systemd[1]: nginx.service failed.
and [root@lotto nginx]# journalctl -xe shows:
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2016-07-01 07:48:44 EDT; 18s ago
Process: 30832 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 30830 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 14307 (code=exited, status=0/SUCCESS)
Uninstall:
yum remove nginx
Install:
On CentOS you should use yum install, instead of apt-get install as on Ubuntu.
At last I found the solution myself.
I ran nginx -t, which showed that I don't have any syntax error in my config.
Then I set
user nobody;  # in my nginx.conf
This solved my problem.
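For completeness, the earlier [emerg] getpwnam("nginx") failed line means the user named in the config's user directive no longer existed (removing the package can delete the nginx system user), which is exactly why switching to user nobody; worked. A small sketch to check which user the config expects and whether it exists (paths assumed from the question):

```shell
# Show the "user" directive nginx.conf sets (if the file is present)
grep -E '^[[:space:]]*user[[:space:]]' /etc/nginx/nginx.conf 2>/dev/null
# Check whether an "nginx" system user exists on this host
getent passwd nginx >/dev/null && echo "nginx user exists" || echo "nginx user missing"
```

An alternative to user nobody; would be recreating the system user the stock config expects (e.g. useradd --system nginx), but that is an assumption about the original package setup, not something the question confirms.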
Thanks everyone for your help!