Postgres connection failure when running node app using supervisor - node.js

I have a Node.js webapp with PostgreSQL. I am running it using supervisord on the server. The problem is that the PostgreSQL login from Node.js is failing. The error message is:
no PostgreSQL user name specified in startup packet
which basically means no user name is being passed from the webapp while connecting to the db.
Note that I am using a Unix socket to connect to Postgres from my webapp.
My webapp1.conf looks like:
[program:webapp1]
user=webapp1
command = node /home/webapp1/projects/webapp1/app.js
directory = /home/webapp1/projects/webapp1
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/webapp1.log
stderr_logfile = /var/log/supervisor/webapp1_err.log
I have confirmed that supervisor is running the webapp under user webapp1.
One more thing - if I start my webapp by logging in as user webapp1, it works.

It sounds like you've got your server set up to use password-less logins to PostgreSQL, i.e. local connections in your pg_hba.conf are set to peer or trust, so that as long as there's a PostgreSQL user with the same name as your Linux user, you don't have to do any further configuration to get Postgres working in your apps; it effectively grants access to the db based on your Linux user account.
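For reference, a peer rule for local Unix-socket connections in pg_hba.conf typically looks like this (your distribution's defaults may vary):
# TYPE  DATABASE        USER            METHOD
local   all             all             peer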
I had the same problem when running a simple Node.js script via cron. It worked fine from the shell, but complained of a missing username when running via cron. Setting the username explicitly in code wasn't an option because I'd built my config to be as automatic as possible; I needed it to figure out privileges from whichever user the script was running as.
It turns out that either the connector library or Postgres itself infers the username from an environment variable. I was able to fix it by setting USER=<cron user name> at the top of my crontab (USER is set in the environment of an interactive shell, which is why running it manually works at all).
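If the app uses node-postgres (the pg package), that is plausibly where the inference happens: when no user is given, the library falls back to the USER environment variable (PGUSER takes precedence if set). A minimal sketch, with the socket path and database name as assumptions:
const { Pool } = require('pg');
// No `user` field: pg falls back to process.env.USER,
// which is unset under cron/supervisor by default.
const pool = new Pool({
  host: '/var/run/postgresql', // Unix-socket directory (assumption)
  database: 'webapp1'          // assumption
});
pool.query('SELECT current_user', (err, res) => {
  if (err) throw err;
  console.log(res.rows[0]); // should print the peer-authenticated user
});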
It looks like the proper syntax to add to your webapp1.conf would be:
environment=USER="<user name here>"
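Applied to the conf from the question, that would look like:
[program:webapp1]
user=webapp1
environment=USER="webapp1"
; ...rest of the conf unchanged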

Related

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .Net 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .Net (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the Web Server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For Deploy, I am using 'Publish Over SSH' plugin, and I have followed all steps to configure this plugin as mentioned here https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the below error:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The below link suggests many different solutions, but none is working in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty. I don't know if I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user as '/home/ec2-user/publish' and then used this path as Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake I'm making with my configuration.
In my case the following steps solved the problem.
The solution is based on Ubuntu 22.04.
Add these two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
(Recent OpenSSH releases disable the ssh-rsa signature algorithm by default, while the SSH library used by the plugin may still offer only ssh-rsa.)
Then restart the sshd service:
sudo service sshd restart
You might consider the following:
a. From the screenshot you’ve provided, it seems that you have checked the Use password authentication, or use different key option which will require you to add your key and password (inputs from these fields will be used in connecting to your server via SSH connection). If you use the same SSH key and passphrase/password on all of your servers, you can uncheck/untick that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified is existing otherwise the connection may fail.
Here's the sample config
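As a sanity check outside Jenkins, you can also verify the key itself from the Jenkins machine; the key path and host below are placeholders:
ssh -i /path/to/key.pem ec2-user@<web-server-ip>
If this prompts for a password or fails with "Permission denied (publickey)", the problem is with the key or the server's sshd configuration rather than with Jenkins.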

PostgreSQL 10: Postgres User /sbin/nologin preventing initdb setup

I'm attempting to install PostgreSQL 10 for the first time and need to run the initdb setup. Unfortunately, this fails and returns an error from the nologin shell.
server# /usr/pgsql-10/bin/postgresql-10-setup initdb
Initializing database ...
failed, see /var/lib/pgsql/10/initdb.log
server# cat /var/lib/pgsql/10/initdb.log
This account is currently not available.
I strace'd the command and verified the su commands are probably what's causing this; it seems the default login shell for the postgres user is /sbin/nologin. In the various examples I've seen, there is no mention of this being a possible issue. How would this work on any other system by default? I feel that temporarily modifying the login shell would work, but I want to understand this issue better, specifically from the application's end.
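For illustration, a sketch of what the setup script may effectively do (an assumption based on the strace), and a common workaround that bypasses the login shell:
# roughly what postgresql-10-setup runs under the hood (assumption):
su - postgres -c "/usr/pgsql-10/bin/initdb -D /var/lib/pgsql/10/data"
# with /sbin/nologin as the shell, su prints: This account is currently not available.
# forcing a shell for this one command avoids the login shell entirely:
su -s /bin/sh postgres -c "/usr/pgsql-10/bin/initdb -D /var/lib/pgsql/10/data"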
CentOS 7.8
SELinux mode: permissive
PostgreSQL 10

Jenkins connection closed after authentication successful

Trying to configure a Linux node for my Windows master Jenkins; it throws the below error after authentication is successful:
SSH connection reports a garbage before a command execution.
Check your .bashrc, .profile, and so on to make sure it is quiet.
The received junk text is as follows:
/usr/bin/id: cannot find name for group ID ******
null
Looking at the error, it looks like SSH is failing because the group doesn't exist on the destination Linux node.
Verify the groups of the SSH user on the Jenkins Windows master that is used to SSH to the Linux node.
Ensure that the SSH user exists on the Linux node and is a member of the groups that appeared on Windows.
If any groups are missing on the Linux node compared to the Windows master, you need to create them (see the sketch below).
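A quick way to check on the Linux node; the user name, group name, and GID below are assumptions:
# list the user's groups; an unmapped GID shows up as a raw number
id jenkins
# create the missing group using the GID from the error message
sudo groupadd -g 1001 jenkinsgrp
# make the SSH user a member of it
sudo usermod -aG jenkinsgrp jenkins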
Do let me know the result for the next step of troubleshooting.

Set Node-red password in root mode

I have a node-red flow on my Raspberry Pi 3 for which I'd like to set a user and password in root mode, but I haven't succeeded yet.
So far I've managed to set it up as a regular user as stated in their own security docs (https://nodered.org/docs/security), but I need to run it as admin in order to save some stuff, and found out there's no guidance for that scenario.
(On the way I found out there are two versions of Node-red on the Raspberry Pi, one for 'sudo start-node-red' and another for 'start-node-red'; I'm interested in the first case.)
What I've managed for the plain user is editing settings.js as follows:
adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN.",
        permissions: "*"
    }]
}
Has anyone managed to do so?
There are not two versions of Node-RED installed on the Raspberry Pi - you are running it in two different ways.
Node-RED is installed as a system service. The service can be started and stopped using node-red-start and node-red-stop commands. By default, the service will run Node-RED as the pi user, and use /home/pi/.node-red as the user directory - where the settings.js file is located.
You can manually run Node-RED by using the node-red command. Rather than start it as a service, it will run in the terminal you ran the command in. It uses ~/.node-red as the user directory. If you run it as the Pi user, that will be /home/pi/.node-red - the same as the service instance. If you run using sudo then you are running as the root user, so the user directory will be /root/.node-red. Following from that, the settings file it will use will be /root/.node-red/settings.js - so it is that file you would need to enable adminAuth in.
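In practice that means generating a bcrypt hash and enabling adminAuth in root's settings file; a sketch, assuming the node-red-admin tool is used to produce the hash:
sudo npm install -g node-red-admin
node-red-admin hash-pw                 # prompts for the password, prints the bcrypt hash
sudo nano /root/.node-red/settings.js  # add the adminAuth block there with that hash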
You can confirm exactly what user directory and settings file it is using by viewing the log on start up where the full paths to both of these things is provided.
Note: we strongly recommend not running as root if you do not need to.

Set password to meteor's mongo database

I have deployed a Meteor project on a staging server, and 2 days ago I found out MongoDB had no password. I was able to connect to MongoDB with Robomongo by only providing the IP (no username, no password).
I want to set a password to protect it. I have been following this documentation, but I get "mongo/mongod not a command" when writing these commands in the application's root directory or after the "meteor mongo" command.
What am I missing here, how can I protect mongodb with a password?
Thanks
I don't think you can, when you are running Meteor's built-in MongoDB server.
The reason for this is that if you put a password on that database, Meteor will not be able to connect to it.
And to specify a password in the MongoDB connection you need to set the MONGO_URL environment variable.
And when you do that Meteor will think you are running an external MongoDB installation and it will not even start the built-in MongoDB server.
So it's kind of a catch-22.
To set a password you need to have a separate MongoDB installed on your server, set a password on that one, and then tell Meteor to use it using a MONGO_URL environment variable in the format:
mongodb://username:password@127.0.0.1:27017/meteor
See https://docs.meteor.com/api/collections.html#mongo_url
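To set the password itself, you would create a user on the standalone MongoDB and enable authorization; a minimal sketch in the mongo shell, where the user name and password are assumptions:
// run in the mongo shell against the standalone MongoDB
use meteor
db.createUser({
  user: "meteoruser",   // assumption
  pwd: "s3cret",        // assumption
  roles: [{ role: "readWrite", db: "meteor" }]
})
// then enable auth (security.authorization: enabled in /etc/mongod.conf)
// and restart mongod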
Writing this as an answer because it is impossible to format text in a comment, which makes it very hard to read.
I assume you are running on an Amazon Linux server, then.
If you really read the install instructions you linked to, you will see that it is not a ton of commands at all.
Install 1: Create the /etc/yum.repos.d/mongodb-org-3.2.repo file with the content given.
Install 2: sudo yum install -y mongodb-org
Run: sudo service mongod start
Done! MongoDB is now running and listening on port 27017.
You can now add a password, and set MONGO_URL as above; an example is sketched below.
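For example, with hypothetical credentials; setting MONGO_URL before starting the app makes Meteor skip its built-in MongoDB:
export MONGO_URL="mongodb://meteoruser:s3cret@127.0.0.1:27017/meteor"
meteor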
