I have created a docker image where I install the mailutils package using:
RUN apt-get update && apt-get install -y mailutils
As a sample command, I am running:
mail -s 'Hello World' {email-address} <<< 'Message body'
When I execute the same command on my local machine, it sends the mail. In the Docker container, however, no errors are shown, but no mail arrives at the specified email address.
I also tried passing the --net=host argument when spawning my Docker container.
Following is my docker command:
docker run --net=host -p 0.0.0.0:8000:8000 {imageName}:{tagName} {arguments}
Is there anything that I am missing? Could someone explain the networking concepts behind this problem?
Install ssmtp and configure to send all mails to your relayhost.
https://wiki.debian.org/sSMTP
Thanks for the response @pilasguru. ssmtp works for sending mail from within a Docker container.
Just to make the response more verbose, here are the things one would need to do.
Install ssmtp in the container. You could do this with the following command:
RUN apt-get update && apt-get -y install ssmtp
You can configure ssmtp at /etc/ssmtp/ssmtp.conf. A typical configuration:
#
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root={root-name}
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub={smtp-server}
# Where will the mail seem to come from?
rewriteDomain={domain-name}
# The full hostname
hostname=c67fcdc6361d
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
You can copy this file in directly from the root of your Docker build context. For example, say you keep the configuration in a file named my.conf.
You can copy it into your Docker container with:
COPY ./my.conf /etc/ssmtp/ssmtp.conf
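Putting the pieces together, a minimal Dockerfile sketch might look like this (ubuntu:18.04 is an assumed base image; note that newer Debian/Ubuntu releases no longer ship the ssmtp package):
FROM ubuntu:18.04
# Install the mail client and the ssmtp relay agent
RUN apt-get update && apt-get install -y mailutils ssmtp
# Ship the relay configuration kept next to the Dockerfile
COPY ./my.conf /etc/ssmtp/ssmtp.conf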
Send a mail using a simple command such as:
ssmtp recipient_name@gmail.com < filename.txt
You can even send an attachment and specify the To: and From: headers with a command like the following:
echo -e "to: {to-addr}\nFrom: {from-addr}\nsubject: {subject}\n" | (cat - && uuencode /path/to/file/inside/container {attachment-name-in-mail}) | ssmtp recipient_name@gmail.com
uuencode can be installed with the command apt-get install sharutils.
I'm having trouble setting up a fully headless install of Ubuntu Server Focal (ARM) on a Raspberry Pi 4 using cloud-init config. The whole purpose of doing this is to simplify the SD card swap in case of failure. I'm trying to use cloud-init config files to apply static config for LAN/WLAN, create a new user, add SSH authorized keys for the new user, install Docker, etc. However, whatever I do, the WiFi settings are not applied before the first reboot.
Step 1: Burn the image onto the SD card.
Step 2: Overwrite system-boot/network-config and system-boot/user-data on the SD card with the following config files.
network-config
version: 2
renderer: networkd
ethernets:
  eth0:
    dhcp4: false
    optional: true
    addresses: [192.168.100.8/24]
    gateway4: 192.168.100.2
    nameservers:
      addresses: [192.168.100.2, 8.8.8.8]
wifis:
  wlan0:
    optional: true
    access-points:
      "AP-NAME":
        password: "AP-Password"
    dhcp4: false
    addresses: [192.168.100.13/24]
    gateway4: 192.168.100.2
    nameservers:
      #search: [mydomain, otherdomain]
      addresses: [192.168.100.2, 8.8.8.8]
user-data
chpasswd:
  expire: true
  list:
    - ubuntu:ubuntu
# Enable password authentication with the SSH daemon
ssh_pwauth: true
groups:
  - myuser
  - docker
users:
  - default
  - name: myuser
    gecos: My Name
    primary_group: myuser
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA....
    lock_passwd: false
    passwd: $6$rounds=4096$7uRxBCbz9$SPdYdqd...
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - git
runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
## TODO: add git deployment and configure folders
power_state:
  mode: reboot
During the first boot cloud-init always applies the fallback network config.
I also tried to apply the headless config for wifi as described here.
I created wpa_supplicant.conf and copied it to the SD card's system-boot folder:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=RO
network={
ssid="AP-NAME"
psk="AP-Password"
}
I also created an empty ssh file and copied it to system-boot.
The runcmd commands always fail, since during the first boot cloud-init applies the fallback network config. After a reboot, the LAN/WLAN settings are applied, the user is created, and the SSH authorized keys are added. However, I still need to SSH into the Pi and install the remaining packages (Docker etc.), which I wanted to avoid. Am I doing something wrong?
I'm not sure if you ever found a workaround, but I'll share some information I found when researching options.
Ubuntu's Raspberry Pi WiFi Setup Page states the need for a reboot when using network-config with WiFi:
Note: During the first boot, your Raspberry Pi will try to connect to this network. It will fail the first time around. Simply reboot (sudo reboot) and it will work.
There's an interesting workaround & approach in this repo.
It states it was created for 18.04, but it should work with 20.04 as both Server versions use netplan and systemd-networkd.
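If you want everything done in a single boot without pulling in that repo, one possible direction (an untested sketch, reusing the repository and package commands from the question) is to stop relying on the packages module, which runs before the WLAN is up on first boot, and instead bring the network up yourself at the start of runcmd:
runcmd:
  # Re-apply netplan so wlan0 comes up during this same boot
  - netplan generate
  - netplan apply
  # Wait until the network is actually reachable before installing anything
  - 'until ping -c 1 archive.ubuntu.com >/dev/null 2>&1; do sleep 5; done'
  # Docker repository setup and installs, moved here from packages/runcmd
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io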
Personally, I've gone a different route.
I create custom images that contain my settings & packages, then burn to uSD or share via a TFTP server. I was surprised at how easy this was.
There's a good post on creating custom images here.
Some important additional info is here.
I am using Packer with Ansible to create an AWS EC2 image (AMI). Ansible is used to install Java 8, install the database (Cassandra), install Ansible, and upload an Ansible playbook (I know I should push the playbook to git and pull it, but I will do that once this is working). I am installing Ansible and uploading the playbook because I have to change some of the Cassandra properties when an instance is launched from the AMI (for example, adding the current instance IP to the Cassandra options). To accomplish this I wrote a simple bash script that is added via the user_data_file property. This is the script:
#cloud-boothook
#!/bin/bash
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
ansible-playbook -i "localhost," -c local /usr/local/etc/replace_cassandra.yaml
As you can see, I am executing ansible-playbook in local mode.
The problem is that when I start the instance, I find an error in the /var/log/cloud-init.log file stating that ansible-playbook could not be found. So I added an ls line to the user-data script to check the contents of the /usr/bin/ folder (the folder where Ansible is installed), and Ansible was not there. Yet when I access the instance over SSH, Ansible is present in /usr/bin/ and ansible-playbook executes without problems.
Has anyone encountered a similar problem? I think that this should be a quite popular use case for Ansible with EC2.
EDIT
After some logging I found out that not only is Ansible missing during the execution of the user data, but the database is missing as well.
Is it possible that some of the code (or all of it) in the Ansible provisioner in Packer is executed when the instance is launched?
EDIT2
I have found out what is happening here. When I add the user data via Packer through the user_data_file property, the user data is executed when Packer launches an instance to build the AMI. The script runs before the Ansible provisioner is executed, and that is why Ansible is missing.
What I want to do is automatically add user data to the AMI, so that when an instance is launched from the AMI, the user data is executed then, and not when Packer builds the AMI.
Any ideas on how to do this?
Just run multiple provisioners and don't try to run ansible via cloud-init.
I'm making an assumption here that your playbook and roles are stored locally where you are starting the Packer run from. Instead of shoehorning the Ansible stuff into user data, run a shell provisioner to install Ansible, then run the ansible-local provisioner to run the playbook/role you want.
Below is a simplified example of what I'm talking about. It won't run without some more values in the builder config but I left those out for the sake of brevity.
In the example JSON, install-prereqs.sh just adds the Ansible PPA apt repo, runs apt-get update, and then installs Ansible.
#!/bin/bash
sudo apt-get install -y software-properties-common
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
The second provisioner will then copy the playbook and roles you specify to the target host and run them.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ssh_username": "ubuntu",
      "image_name": "some-name",
      "source_image": "some-ami-id",
      "ssh_pty": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/install-prereqs.sh"
    },
    {
      "type": "ansible-local",
      "playbook_file": "path/to/playbook.yml",
      "role_paths": ["path/to/roles"]
    }
  ]
}
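With both files in place, the image build is started with the usual commands (template.json is an assumed filename):
# Check the template for syntax errors, then build the AMI
packer validate template.json
packer build template.json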
This is possible! Please make sure of the following.
An Ansible server (install Ansible via CloudFormation user data if it is not baked into the AMI) and your target have SSH access in the security groups you create in CloudFormation.
After you install Ansible on the Ansible server, your ansible.cfg file points to a private key on the Ansible server (see the sketch below).
The matching public key for the Ansible private key is copied to the authorized_keys file in the root user's .ssh directory on the server(s) you wish to run playbooks on.
You have enabled root SSH access between the Ansible server and the target server(s); this can be done by editing the /etc/ssh/sshd_config file and making sure there is nothing preventing SSH access for the root user in the root authorized_keys file on the target server(s).
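As an illustration of the ansible.cfg point, here is a minimal sketch (the key path and user are assumptions, not values from the original answer):
[defaults]
# Private key whose public half sits in root's authorized_keys on the targets
private_key_file = /home/ec2-user/.ssh/ansible_id_rsa
remote_user = root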
I have a Magento website set up on a Linux machine based on a Bitnami ready-made image.
The main goal is to be notified by email whenever there might be a potential attack on the site.
To achieve that, I decided to install the Snort IDS and email the alerts that reach syslog using Swatch.
I've installed snort by following this tutorial from Snort's official website.
I've just finished section 9 of that tutorial, which means I have:
Installed all the prerequisites.
Installed the Snort IDS on the machine.
Set up a test rule to alert when ICMP requests (pings) occur.
Next, to allow Snort to log alerts to syslog, I uncommented this line in the snort.conf file:
output alert_syslog: LOG_AUTH LOG_ALERT
I've tested the installation by running this command:
sudo /usr/local/bin/snort -A console -q -u snort -g snort -c /etc/snort/snort.conf -i eth0
While Snort is running, I made a ping request from another system.
I can see alerts registering in Snort's log file, but nothing is added to the syslog.
Trial and error:
Ran Snort as the root user.
Set syslog to bounce logs to another server (remote syslog).
I don't have a great deal of experience with Linux, so any help pointing me in the right direction will be very much appreciated.
Some facts:
Bitnami Magento Stack 1.9.1.0-0
Ubuntu 14.04.3 LTS
Snort 2.9.7.5
I've posted this question on linuxquestions.org as well and got an answer.
Following unSpawn's reply, I reviewed the rsyslog conf files and found that auth logs are sent to the auth.log file.
That led to a quick fix: adding an additional .conf file to /etc/rsyslog.d with the content:
auth.* /var/log/syslog
Also, as suggested, I made some changes to the Snort execution command (omitting -q and -A console):
sudo /usr/local/bin/snort -u snort -g snort -c /etc/snort/snort.conf -i eth0
After restarting the rsyslog service, I found the missing Snort alerts in syslog.
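To verify the rsyslog routing independently of Snort, you can emit a test message on the auth facility yourself (the message text is arbitrary):
logger -p auth.alert "rsyslog auth routing test"
tail -n 5 /var/log/syslog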
I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to set up an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to set up SSH key authentication to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps and create a passphrase (one you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privilege to the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project and find the SSH link in the top right.
5. Now back to your development server
Navigate to the directory where you'd like to work and run the following:
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine with DigitalOcean, but they shouldn't differ when using Google Compute Engine.
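To confirm the key is accepted before touching any repository, you can test the SSH handshake directly (replace the host with your GitLab server); GitLab should greet you by username instead of opening a shell:
ssh -T git@my-gitlab-server.com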
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
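Note that semanage only records the context rule; applying it to files that already exist typically also requires restorecon, for example:
sudo restorecon -R -v /var/opt/gitlab/.ssh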
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
In my situation the git user wasn't set up completely. If you get messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat this is /var/log/secure), then you simply need to activate the user via passwd -d git.
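To check whether the account is locked before changing anything, passwd -S prints the status (an L in the second field indicates a locked password):
sudo passwd -S git   # e.g. "git L ..." means the password is locked
sudo passwd -d git   # delete the password, unlocking the account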
I have a couple of local apt repositories which are setup using reprepro.
I want to make them available on the local network and I've got next to everything setup.
However, the repos sit behind HTTPS (this is out of my control), so when I try to pull from them from another server the request just hangs. I think it's waiting for the username/password to be supplied.
I'm not sure how to supply these. Do they go in the sources.list file on the pulling server? What would the format be?
Cheers
Instead of hardcoding the password in sources.list, it's better to create an auth.conf file and supply your credentials there, as documented on the Debian wiki page:
/etc/apt/auth.conf:
machine example.org
login apt
password debian
for an entry like the one below in sources.list:
deb https://example.org/debian buster main
For more info, refer to:
Debian Page Reference
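Since auth.conf holds the credentials in plain text, it is generally advised to keep it owned by root and not world-readable, along these lines:
sudo chown root:root /etc/apt/auth.conf
sudo chmod 600 /etc/apt/auth.conf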
In order to supply a password to a Debian-style repository over HTTPS, add this line to /etc/apt/sources.list:
deb https://user:password@repo.server.com/debian ./
If you want per-user authentication, you can pass a custom auth.conf for each apt call:
trap "rm ./auth.conf.tmp" EXIT
cat << EOF > ./auth.conf.tmp
machine example.org
login apt
password debian
EOF
# overrule /etc/apt/auth.conf with Dir::Etc::netrc config
# see all options with
# zcat /usr/share/doc/apt/examples/configure-index.gz
sudo apt -o Dir::Etc::netrc=./auth.conf.tmp update
Sources:
https://manpages.debian.org/bullseye/apt/apt_auth.conf.5.en.html#FILES
https://manpages.ubuntu.com/manpages/xenial/man8/apt-get.8.html