AWS Session Manager welcome message - Linux

I have noticed that .bashrc does not run when launching a session in AWS Systems Manager Session Manager. Switching user with su - user does run .bashrc, but that is not what I am trying to achieve.
My objective is to have a welcome message whenever someone starts a session through the console/browser.
Does anyone have a workaround or ideas on how to do this? /etc/motd didn't seem to work either.

This is one of the limitations of SSM Session Manager reported in several places, e.g.:
SSM Session Manager does not source .bashrc or .bash_profile
Custom Shell for aws ssm start-session
AWS Session Manager is not sourcing bash rc
For now you can only customize the user that the SSM agent starts the session as. So instead of ssm-user you can change it to ec2-user, or map sessions to your individual IAM users.
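For reference, this is roughly what enabling Run As with ec2-user looks like via the regional Session Manager preferences document; the exact inputs block can differ by region and agent version, so treat this as a sketch:
# Sketch: enable "Run As" and default to ec2-user, then start a new session.
# The target account must already exist on the instance.
cat > session-preferences.json <<'EOF'
{
  "schemaVersion": "1.0",
  "description": "Document to hold regional settings for Session Manager",
  "sessionType": "Standard_Stream",
  "inputs": {
    "s3BucketName": "",
    "s3KeyPrefix": "",
    "s3EncryptionEnabled": true,
    "cloudWatchLogGroupName": "",
    "cloudWatchEncryptionEnabled": true,
    "runAsEnabled": true,
    "runAsDefaultUser": "ec2-user"
  }
}
EOF
aws ssm update-document \
  --name "SSM-SessionManagerRunShell" \
  --content file://session-preferences.json \
  --document-version '$LATEST'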
You could also consider using EC2 Instance Connect (browser-based SSH connection) instead of Session Manager. It works as expected, though it's not enabled by default on Ubuntu.

Related

Terraform/RDS: Create DB users after creation inside VPC

Using Terraform and AWS, I've created a Postgres RDS DB within a VPC. During creation, the superuser is created, but it seems like a bad idea to use this user inside my app to run queries.
I'd like to create an additional access-limited DB user during the terraform apply phase after the DB has been created. The only solutions I've found expect the DB to be accessible outside the VPC. For security, it also seems like a bad idea to make the DB accessible outside the VPC.
The first thing that comes to mind is to have Terraform launch an EC2 instance that creates the user as part of its user_data script and then promptly terminates. This seems like a pretty heavy solution, and Terraform would recreate the instance on every terraform apply unless the instance is left running at the end of the script.
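For concreteness, that throwaway-instance approach boils down to a user_data script along these lines; every hostname, database, role and password here is a placeholder, and this is only a sketch of the shape, not a recommendation:
#!/bin/bash
# user_data sketch: create a limited application role from inside the VPC,
# then power the instance off. All values below are placeholders.
set -euo pipefail
yum install -y postgresql            # Amazon Linux; adjust for your AMI

export PGPASSWORD='superuser-password-placeholder'
psql "host=mydb.example.us-east-1.rds.amazonaws.com user=postgres dbname=appdb" <<'SQL'
CREATE ROLE app_user LOGIN PASSWORD 'app-password-placeholder';
GRANT CONNECT ON DATABASE appdb TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
SQL

shutdown -h now                      # stop the instance once the role exists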
Another option is to pass the superuser and limited-user credentials to the server that runs migrations and have that server create the user as necessary. This, however, would mean that the server has access to the superuser credentials and could do some nefarious things if someone got access to it.
Are there other common patterns for solving this? Do people just use the superuser for everything or open the DB to the outside world?

process in GCP VM instance killed automatically

I'm using a GCP VM instance to run my Python script as a background process.
But I found that my script received a SIGTERM.
I checked syslog and daemon.log in /var/log and found that my Python script (PID 2316) was terminated by the system.
What VM settings do I need to check?
Judging from this log line in your screenshot:
Nov 12 18:23:10 ai-task-1 systemd-logind[1051]: Power key pressed.
I would say that your script's process was SIGTERMed as a result of the hypervisor gracefully shutting down the VM, which would happen when a GCP user or service account with admin access to the project performs a GCE compute.instances.stop request.
You can look up the logs for this request for more details on where it came from, either in the Logs Viewer/Explorer or with gcloud logging read --freshness=30d, using filters like:
resource.type="gce_instance"
"ai-task-1"
timestamp>="2020-11-12T18:22:40Z"
timestamp<="2020-11-12T18:23:40Z"
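Put together as a single command it would look roughly like this (the project ID is a placeholder):
gcloud logging read \
  'resource.type="gce_instance" AND "ai-task-1" AND
   timestamp>="2020-11-12T18:22:40Z" AND timestamp<="2020-11-12T18:23:40Z"' \
  --freshness=30d --project=my-project-id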
Though depending on the retention period for your _Default bucket (30 days by default), these logs may have already expired.

Change a redis password without external downtime

I would like to rotate the Redis server password. The issue is that there are some external services using it, so things will eventually stop working until I have propagated this change everywhere.
From my research I have only seen the requirepass directive plus a server restart, but that involves downtime.
With other databases like Postgres, I would create a new user and password, migrate permissions, switch over at the application level and then invalidate the previous credentials.
How can I do this in Redis?
You can change the password without downtime by issuing:
redis> CONFIG SET requirepass <your new password>
To persist the change across restarts, edit your redis.conf file or issue a CONFIG REWRITE.
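For example, from redis-cli (passwords are placeholders; connections that are already authenticated keep working, while new connections must use the new password):
# Change the password on the running server - no restart needed
redis-cli -a 'old-password' CONFIG SET requirepass 'new-password'
# Persist it into the config file so it survives a restart
# (requires the server to have been started with a config file)
redis-cli -a 'new-password' CONFIG REWRITE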

Rundeck authentication error even after adding correct ssh_username and password

Created a project in Rundeck. Added nodes via an Ansible inventory containing two remote nodes, and added ssh_username and password to it.
The two nodes now show up in the "Nodes" area when filtering with "show all nodes".
Created a job with a node filter (selected one remote node based on tags) and added mkdir /etc/test2018 as a command step.
Then I ran the job, but got the error below:
Failed: AuthenticationFailure: Authentication failure connecting to node: "114.12.14.*".
Make sure your resource definitions and credentials are up to date.
Note: I log in to Rundeck as the admin user with the default password.
I am using Amazon Linux servers.
Image: Rundeck Log Error Output
It seems that you have not configured SSH access to your remote nodes. Configure access to them:
Check This:
https://www.youtube.com/watch?v=qOA-kWse22g
And This:
Add a remote node in rundeck
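As a rough illustration, if you define the node in a resources.yaml source (rather than via the Ansible inventory plugin), a password-based SSH entry typically looks something like this; the node name, username and Key Storage path are placeholders you would adapt:
my-remote-node:
  nodename: my-remote-node
  hostname: 114.12.14.x
  username: ec2-user
  osFamily: unix
  tags: 'app'
  ssh-authentication: password
  ssh-password-storage-path: keys/projects/MyProject/nodes/my-remote-node.password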

Slurm srun --uid behavior

We are trying to use Slurm in our university lab, but we can't quite understand the SlurmUser behavior.
For instance:
If I run srun while I'm logged in as the user 'acnazarejr' (srun -n1 id -a), then I would expect something like this:
uid=80000001637(acnazarejr) gid=80000000253(domain user) groups=80000000253(domain user),1001(slurm)
But this is what I get:
uid=1001(slurm) gid=1001(slurm) groups=1001(slurm), 27(sudo), docker(999)
Even if I run srun --uid=80000001637 -n1 id -a I get the same result. We are using LDAP across all nodes, and the 'slurm' user can't access the users' home folders, which is important to us.
Is this the expected behavior? I'm almost sure that in earlier tests I was getting my user as output instead of slurm, but I can't replicate it anymore.
Your slurm.conf probably contains
SlurmdUser=slurm
while it should be
SlurmdUser=root
The SlurmdUser is the user running the slurmd daemon, which must be root, or another account able to demote to the submitting user's account.
Not to be confused with SlurmUser, the user running the slurmctld daemon, which should be a regular account, often named slurm.
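Concretely, the relevant slurm.conf lines would be as follows (restart the slurmd daemons on the compute nodes after changing this):
# slurm.conf
SlurmUser=slurm    # account that runs slurmctld on the controller
SlurmdUser=root    # account that runs slurmd; must be able to drop privileges to the submitting user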
