PostgreSQL 9.5 doesn't start after reboot with systemd - linux

I'm having a problem with PostgreSQL 9.5+173 on Ubuntu 16.04, and I happened to stumble across the following threads in my research that somewhat describe the behavior I'm seeing:
https://www.postgresql.org/message-id/CAFyxdeT%2B%3Dx-d0oNbFPoe%2B4xnt0Qdfi%2BzAEn%2BrQmEK0AZbJFRtg%40mail.gmail.com
https://www.postgresql.org/message-id/562E4453.5090803%40aklaver.com
Long story short: I have a fresh install of Ubuntu 16.04 with nothing on it and PostgreSQL running. I stopped PostgreSQL, changed the data directory, the port, and a couple of other settings, and it started back up fine.
I can start and stop PostgreSQL manually via systemctl without any problems. I can also connect to the database and verify that it is running via ps ax | grep postgres.
However, after I reboot, PostgreSQL will not start up. Any attempt to start it via systemctl start postgresql.service does nothing and does not fail. The only way I am able to get it started is by calling systemctl start postgresql@9.5-main.service.
I did some investigation and looked at both the postgresql.service and postgresql@9.5-main.service units, and realized that postgresql.service does nothing, as stated in the threads above, and that postgresql@9.5-main.service has the PartOf directive, which I read to mean it should be getting triggered from postgresql.service as the systemd docs state, but it isn't for some reason. Basically I'm at a loss as to why everything works before I reboot and then doesn't work after I reboot. Is there something I'm missing? I'm starting to go CRAZY over something so simple.
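For reference, the relevant wiring in the instance unit looks roughly like this (paraphrased from the Debian/Ubuntu packaging; exact contents vary by version):

[Unit]
Description=PostgreSQL Cluster 9.5-main
PartOf=postgresql.service

Note that per systemd.unit(5), PartOf only propagates stops and restarts from postgresql.service down to the instance, not starts.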
Update: I added an ExecStartPre=/bin/touch /tmp/postgresq.log to postgresql@9.5-main.service to see whether it's actually getting called on boot, and it is not. Manually calling systemctl start postgresql@9.5-main.service creates the file in the /tmp directory.
Update: I have also found that calling systemctl daemon-reload after reboot allows me to start Postgres via the systemctl start postgresql command.

Did you try running systemctl enable postgresql? This tells systemd to start the service at boot. Try rebooting after that.
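For example (assuming the stock Ubuntu unit name):

sudo systemctl enable postgresql
sudo reboot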

Turns out the problem was that I had symlinked /etc/postgresql/9.5/main/ across partitions to a custom partition that wasn't available right away, so when PostgreSQL tried to start on boot it couldn't, because its configuration files were not available. This explains what was happening, since I could start PostgreSQL manually after I logged in.
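One way to declare that dependency explicitly, sketched here assuming the configuration lives on a hypothetical mount point /data, is a drop-in override so the unit waits for the mount:

sudo systemctl edit postgresql@9.5-main.service
# then add:
[Unit]
RequiresMountsFor=/data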

Related

What's the difference between these Redis starting commands?

sudo /etc/init.d/redis-server start
sudo service redis-server start
sudo systemctl start redis-server
sudo redis-server --daemonize yes
The last one is "nearest to the metal": it directly starts the Redis server process with no special options and is "stand-alone". I would use this type of command when just "messing around" in the terminal with quick tests, or when trying to get an initial configuration tested and running.
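A quick stand-alone smoke test might look like this (redis-cli ships alongside the server):

redis-server --daemonize yes
redis-cli ping        # replies PONG if the server is up
redis-cli shutdown    # stop the stand-alone instance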
The first three are all basically wrappers around starting the Redis server process to make it compatible with systemd or other Linux init systems. They potentially add more layers of management, like:
logging to the systemd journal
saving the process id so the process can be killed or restarted
potentially specifying a different config file
potentially waiting for other services to become available before starting Redis
I would prefer one of the first three for routine, everyday, managed starting of Redis on a production system.
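Once Redis is under systemd's management, those extra layers are easy to inspect, for example:

sudo systemctl status redis-server    # state, PID, and recent log lines
journalctl -u redis-server            # full log history from the journal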

What is the difference between 'service apache2 reload' and 'sudo systemctl restart apache2'?

What is the difference between service apache2 reload and sudo systemctl restart apache2?
I understand that one uses sudo and the other doesn't.
I also understand the difference between reloading and restarting.
But what is the major difference between these two commands?
Restart = stop + start
Reload = remain running + re-read configuration files
We could define it like this:
Restart --> STOP the service and then START it again.
Now comes the Reload option.
Reload --> the service keeps running and is asked to re-read its own configuration files (for Apache, its config under /etc/apache2), so a reload is needed each time the service's configuration changes.
Note that the unit (.service) file itself is a separate matter: if you change a .service file, you need systemctl daemon-reload. You may even have seen the message this produces; say you changed a service file and forgot to reload it, then whenever you run any systemctl command against that service it will warn you to run daemon-reload.
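Side by side (on a systemd distribution, service apache2 reload is redirected to the systemctl equivalent anyway):

sudo systemctl restart apache2      # full stop + start; briefly drops connections
sudo systemctl reload apache2       # graceful re-read of Apache's own config
sudo systemctl daemon-reload        # re-read changed systemd unit files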

Systemd service failing on startup

I'm trying to get a nodejs server to run on startup, so I created the following systemd unit file:
[Unit]
Description=TI SensorTag Communicator
After=network.target

[Service]
ExecStart=/usr/bin/node /home/pi/sensortag-comm/sensortag.js
User=root

[Install]
WantedBy=multi-user.target
I'm not sure what I'm doing wrong here. It seems to fail before the nodejs script even starts, as no logging occurs. My script depends on MySQL 5.5 (I think this is where I'm running into an issue). Any insight, or even a different solution, would be appreciated.
Also, it runs fine once I'm logged into the system.
Update
The service is enabled, and is logging through journalctl. I'll update with the results on 7/11/16.
Not sure why it didn't work the first time, but upon checking journalctl the issue was 100% that MySQL hadn't started yet. I once again changed it to After=mysql.service and it worked perfectly!
If there is no mention of the service at all in the output of journalctl, that could indicate that the service was not enabled to start at boot.
Make sure you run systemctl enable my-unit-name before your next boot test.
Also, since you depend on MySQL being up and running, you should declare that with something like: After=mysql.service. The exact service name may depend on your Linux distribution, which you didn't state.
Adding User=root adds nothing, as system units are run by root by default anyway.
When you said "it fails", you didn't specify whether it was failing at boot time, or with a test run by systemctl start my-unit-name.
After attempting to start a service, there should be logging if you run journalctl -u my-unit-name.service.
You might also consider adding StandardOutput=journal to your unit file to make sure you capture output from the service you are running as well.
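Putting those suggestions together, the unit might look like this (assuming the distribution names the MySQL unit mysql.service):

[Unit]
Description=TI SensorTag Communicator
# wait for networking and the database the script depends on
After=network.target mysql.service

[Service]
ExecStart=/usr/bin/node /home/pi/sensortag-comm/sensortag.js
# capture the script's stdout in the journal
StandardOutput=journal

[Install]
WantedBy=multi-user.target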

Unable to restart rogue Jenkins on Ubuntu

I was configuring Jenkins last night to run some reporting plugins (codestyle, findbugs, cobertura). When I ran my build job it got hung up somewhere in codestyle, and the server UI became unresponsive.
Today I logged in to the server, and the Jenkins log is reporting errors that look like the server ran out of memory. More than that, I cannot seem to stop or restart the server. I have limited experience with services on Linux.
Jenkins was installed on Ubuntu with apt. I have tried $ sudo /etc/init.d/jenkins restart but it reports
* Starting Jenkins Continuous Integration Server jenkins
The selected http port (8080) seems to be in use by another program
Please select another port to use for jenkins
When I try to run service jenkins status to get a PID to kill, I get
2 instances of jenkins are running at the moment
but the pidfile /var/run/jenkins/jenkins.pid is missing
Running netstat and ps identified a Jenkins instance as holding the port.
How can I recover from this?
Mostly I was concerned about abruptly killing the Jenkins server while it had gone rogue. A process this tied into server connections and plugins makes me wary of taking a shotgun to it.
That's exactly what I did. service jenkins status didn't work, so I got the process id from netstat -tulpn. kill -15 didn't work, so I did kill -9, waited a respectful grieving period, then restarted the Jenkins service.
I will next be investigating the root problem of running out of memory in my Jenkins installation so hopefully this doesn't happen again while I am firewalled away from my server.
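In concrete terms, that recovery sequence looks something like this (<PID> is a placeholder for whatever netstat reports as holding port 8080):

sudo netstat -tulpn | grep 8080   # find the PID bound to Jenkins' port
sudo kill -15 <PID>               # ask it to shut down cleanly first
sudo kill -9 <PID>                # force it if SIGTERM is ignored
sudo service jenkins restart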
Where is your server hosted?
I had the same issue with an AWS EC2 server.
Command-line attempts to reboot the server did not work.
However, in the AWS admin console I did EC2 -> restart, and it worked like a charm.
This may not be a solution but a workaround.
I was able to do
sudo ps aux | grep jenkins
To find a list of jenkins processes. Then I ran
sudo kill <pid>
And then finally
sudo service jenkins restart

upstart & node.js app "stop: Unknown instance:"

I'm having a bit of trouble with upstart on ubuntu and a node.js app.
Everything was working fine with the upstart script. Starting, stopping, status-ing, etc. all worked as expected until I deployed new code. The changes weren't reflected in the running app; I reasoned that somehow the new code wasn't being loaded by stopping & starting the app.
I did a manual kill on the PID of the running daemon, which is where I believe I went awry.
At the present moment, if I run initctl list I see my app in the list:
mynodejs.app stop/waiting
When I start mynodejs.app it seems to start:
mynodejs.app start/running, process 16228
But when I try to stop it:
stop: Unknown instance:
And...
status mynodejs.app
mynodejs.app stop/waiting
...although the app is up and running.
I'll answer my own question...
Having init re-read its configuration cleared everything up:
sudo /sbin/telinit q
I needed to kill the rogue instance of my app. After that, using start and stop worked as expected.
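The whole recovery, sketched with a hypothetical placeholder for the rogue PID, looks like:

sudo /sbin/telinit q        # have init re-read its job configuration
ps aux | grep mynodejs      # locate the rogue instance
sudo kill <pid>             # remove it
sudo start mynodejs.app     # start/stop now behave as expected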
Using automatic monitoring and restarting can also resolve this issue. Setting up monit to do so is described on howtonode.org, and more effectively elsewhere. I found the comments on the howtonode.org guide very useful for others' approaches to setting up Ubuntu with Upstart, hence its inclusion in this post.
