How to attach a Cloud Block Storage volume to an OnMetal server with pyrax? - rackspace-cloud

I would like to automate the attachment of a Cloud Block Storage volume to an OnMetal server running CentOS 7 by writing a Python script that makes use of the pyrax Python module. Do you know how to do it?

Attaching a Cloud Block Storage volume to an OnMetal server is a bit more complicated than attaching it to a normal Rackspace virtual server. You will notice this when you try to attach a Cloud Block Storage volume to an OnMetal server in the Rackspace web interface (Cloud Control Panel), where you would see this text:
Note: When attaching volumes to OnMetal servers, you must log into the OnMetal server to set the initiator name, discover the targets and then connect to the target.
So you can attach the volume in the web interface, but additionally you need to log in to the OnMetal server and run a few commands. The actual commands can be copied and pasted from the web interface into the terminal of the OnMetal server.
Also, before detaching, you need to run a command.
But the web interface is actually not needed. It can be done with the Python module pyrax.
First install the RPM package iscsi-initiator-utils on the OnMetal server
[root@server-01 ~]# yum -y install iscsi-initiator-utils
Assuming the volume_id and the server_id are known, the Python code below first attaches the volume and then detaches it. Unfortunately, the mountpoint argument of attach_to_instance() is not working for OnMetal servers, so we would need to run the command lsblk -n -d before and after attaching the volume. By comparing the outputs we could then deduce the device name used for the attached volume. (Deducing the device name is not taken care of by the following Python code, but see the update at the end of this answer.)
#!/usr/bin/python
# Disclaimer: Use the script at your own risk!

import json
import os

import paramiko
import pyrax

# Replace server_id and volume_id
# with your settings
server_id = "cbdcb7e3-5231-40ad-bba6-45aaeabf0a8d"
volume_id = "35abb4ba-caee-4cae-ada3-a16f6fa2ab50"

# Just to demonstrate that the mountpoint argument for
# attach_to_instance() is not working for OnMetal servers
disk_device = "/dev/xvdd"


def run_ssh_commands(ssh_client, remote_commands):
    for remote_command in remote_commands:
        stdin, stdout, stderr = ssh_client.exec_command(remote_command)
        print("")
        print("command: " + remote_command)
        for line in stdout.read().splitlines():
            print(" stdout: " + line)
        exit_status = stdout.channel.recv_exit_status()
        if exit_status != 0:
            raise RuntimeError("The command:\n{}\n"
                               "exited with exit status: {}\n"
                               "stderr: {}".format(remote_command,
                                                   exit_status,
                                                   stderr.read()))


pyrax.set_setting("identity_type", "rackspace")
pyrax.set_default_region('IAD')
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)

server = pyrax.cloudservers.servers.get(server_id)
vol = pyrax.cloud_blockstorage.find(id=volume_id)

vol.attach_to_instance(server, mountpoint=disk_device)
pyrax.utils.wait_until(vol, "status", "in-use", interval=3, attempts=0,
                       verbose=True)

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server.accessIPv4, username='root', allow_agent=True)

# The new metadata is only available if we get() the server once more
server = pyrax.cloudservers.servers.get(server_id)

metadata = server.metadata["volumes_" + volume_id]
parsed_json = json.loads(metadata)
target_iqn = parsed_json["target_iqn"]
target_portal = parsed_json["target_portal"]
initiator_name = parsed_json["initiator_name"]

run_ssh_commands(ssh_client, [
    "lsblk -n -d",
    "echo InitiatorName={} > /etc/iscsi/initiatorname.iscsi".format(initiator_name),
    "iscsiadm -m discovery --type sendtargets --portal {}".format(target_portal),
    "iscsiadm -m node --targetname={} --portal {} --login".format(target_iqn, target_portal),
    "lsblk -n -d",
    "iscsiadm -m node --targetname={} --portal {} --logout".format(target_iqn, target_portal),
    "lsblk -n -d"
])

vol.detach()
pyrax.utils.wait_until(vol, "status", "available", interval=3, attempts=0,
                       verbose=True)
Running the Python code looks like this:
user@ubuntu:~$ python attach.py 2> /dev/null
Current value of status: attaching (elapsed: 1.0 seconds)
Current value of status: in-use (elapsed: 4.9 seconds)
command: lsblk -n -d
stdout: sda 8:0 0 29.8G 0 disk
command: echo InitiatorName=iqn.2008-10.org.openstack:a24b6f80-cf02-48fc-9a25-ccc3ed3fb918 > /etc/iscsi/initiatorname.iscsi
command: iscsiadm -m discovery --type sendtargets --portal 10.190.142.116:3260
stdout: 10.190.142.116:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50
stdout: 10.69.193.1:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50
command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --login
stdout: Logging in to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] (multiple)
stdout: Login to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.
command: lsblk -n -d
stdout: sda 8:0 0 29.8G 0 disk
stdout: sdb 8:16 0 50G 0 disk
command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --logout
stdout: Logging out of session [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260]
stdout: Logout of [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.
command: lsblk -n -d
stdout: sda 8:0 0 29.8G 0 disk
Current value of status: detaching (elapsed: 0.8 seconds)
Current value of status: available (elapsed: 4.7 seconds)
user@ubuntu:~$
Just one additional note:
Although not mentioned in the official Rackspace documentation
https://support.rackspace.com/how-to/attach-a-cloud-block-storage-volume-to-an-onmetal-server/
in a forum post from 5 Aug 2015, Rackspace Managed Infrastructure Support
also recommends running
iscsiadm -m node -T $TARGET_IQN -p $TARGET_PORTAL --op update -n node.startup -v automatic
to make the connection persistent so that it automatically restarts the iscsi session upon startup.
Update
Regarding deducing the new device name:
Major Hayden writes in a blog post that
[root@server-01 ~]# ls /dev/disk/by-path/
could be used to find a path to the new device.
If you would like to dereference any symlinks, I guess this would work:
[root@server-01 ~]# find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 -exec realpath {} \;
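Alternatively, since the answer compares lsblk -n -d output before and after the attach, here is a minimal shell sketch of that comparison, run on the OnMetal server (variable names are just for illustration, and this assumes only one new disk appears):
before=$(lsblk -n -d -o NAME | sort)
# ... attach the volume and log in to the iSCSI target here ...
after=$(lsblk -n -d -o NAME | sort)
new_disk=$(comm -13 <(echo "$before") <(echo "$after"))
echo "the volume appeared as /dev/${new_disk}"
In the example run above, that difference would be sdb.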


How to control host from docker container?
For example, how to execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
and check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f), it should display "hello world"
PART 2 - Run commands through the pipe
On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command that will execute it as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that; see the sketch below)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
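For instance, a possible way to keep the loop running in the background after you log out (the pipe path is the example one from above):
nohup bash -c 'while true; do eval "$(cat /path/to/pipe/mypipe)"; done' >/dev/null 2>&1 &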
PART 4 - Make it work even when reboot happens
The only caveat is that if the host reboots, the "while" loop will stop running.
To handle reboots, here is what I've done (a consolidated sketch follows at the end of this part):
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done in a file called execpipe.sh with a #!/bin/bash header
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
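Putting the pieces of this part together, a minimal sketch of execpipe.sh could look like this (paths are the example ones used above):
#!/bin/bash
# execpipe.sh - read and execute commands from the named pipe forever,
# capturing stdout and stderr of each batch into output.txt
while true; do
    eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt
done
Don't forget the chmod +x and the @reboot crontab line described above.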
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the parent folder of mypipe as /hostpipe in your container.
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
- /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"
console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)
console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()
console.log("waiting for output.txt...") //there are better ways to do that than setInterval
let timeout = 10000 //stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop);
        console.log("timed out")
    } else {
        //if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop);
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) //delete the output file
            console.log(data) //log the output of the command
        }
    }
}, 300);
Use a named pipe.
On the host OS, create a script to loop and read commands, and then you call eval on that.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should have no permissions at all, except for executing that script as sudo (which can be done in the sudoers file).
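As a hedged sketch of that restriction (the user name and script path here are hypothetical), a sudoers drop-in file could look like this:
# /etc/sudoers.d/deploy -- allow the deploy account to run exactly one script as root, nothing else
deploy ALL=(root) NOPASSWD: /usr/local/bin/host-task.sh
The SSH call then becomes something like ssh -l deploy ${HOSTNAME} "sudo /usr/local/bin/host-task.sh".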
UPDATE: Named Pipes
The solution I suggested above was only the one I used while I was relatively new to Docker. Now in 2021, take a look at the answers that talk about named pipes. This seems to be a better solution.
However, nobody there mentioned anything about security. The script that will evaluate the commands sent through the pipe (the script that calls eval) must actually not eval the whole pipe output, but handle specific cases and call the required commands according to the text sent; otherwise any command that can do anything can be sent through the pipe.
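A minimal sketch of that idea on the host, assuming the pipe path from the other answers and two made-up request keywords (the service name and output path are placeholders):
# only react to known keywords read from the pipe instead of eval'ing arbitrary text
while true; do
    read -r request < /path/to/pipe/mypipe
    case "$request" in
        restart-webapp) systemctl restart webapp.service ;;      # hypothetical service
        disk-usage)     df -h > /somepath/output.txt ;;
        *)              echo "ignoring unknown request: $request" ;;
    esac
done
The container then writes only those keywords into the pipe, e.g. echo "disk-usage" > /hostpipe/mypipe.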
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command will then provide an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple Python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen, and run a curl or write code to make the HTTP request: curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class will handle any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):

    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        p_status = p.wait()
        (output, err) = p.communicate()
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the
    # incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ' , PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
  ci:
    command: ...
    image: ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
As Marcus reminds, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container, see the doc of docker cp
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
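As a hedged sketch of that copy-then-run flow (container name and paths are made up):
docker cp mycontainer:/opt/myscript.sh /tmp/myscript.sh   # copy the script out of the container
bash /tmp/myscript.sh                                     # run it on the host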
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the below inside your container. The key part here is --network host, as this will execute from the host context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain some commands (ifconfig, netstat, etc.), whatever you need.
Now you will be able to get host context output.
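For reference, a minimal test.sh along those lines (the contents are just an example):
#!/bin/sh
# commands whose output should reflect the host's view, thanks to --network host
ifconfig
netstat -tuln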
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario I just SSH into the host (via the host IP) from within a container, and then I can do anything I want on the host machine.
I found the answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested, my need was to use a failover IP on the host from the container, so I created this simple Ruby method:
def fifo_exec(cmd)
  exec_in = '/path/to/pipe/exec_in'
  exec_out = '/path/to/pipe/exec_out'
  %x[ echo #{cmd} > #{exec_in} ]
  %x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host, commands/data could be passed to this through Redis (or rabbitmq). In the example below, this is occurring in a django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service reachable using some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

Problems running shell script from within .Net Core service daemon on Linux

I'm trying to execute a .sh script from within a .Net Core service daemon and getting weird behavior. The purpose of the script is to create an encrypted container, format it, set some settings, then mount it.
I'm using .Net Core version 3.1.4 on Raspbian on a Raspberry Pi 4.
The problem: I have the below script which creates the container, formats it, sets some settings, then attempts to mount it. It all seems to work fine, but the last command, the mount call, never actually works. The mount point is not valid.
The kicker: After the script is run via the service, if I open a terminal and issue the mount command there manually, it mounts correctly. I can then go to that mount point and it shows ~10GB of space available, meaning it's using the container.
Note: Make sure the script is chmod +x when testing. Also, you'll need cryptsetup installed for it to work.
Thoughts:
I'm not sure if some environment or PATH variables are missing for the shell script to properly function. Since this is a service, I can edit the Unit to include this information, if I knew what it was.
In previous attempts at issuing bash commands, I've had to set the DISPLAY variable like below for it to work correctly (because of needing to work with the desktop). For this issue that doesn't seem to matter, but if I need to set the script as executable, then this command is used as an example:
string chmodArgs = string.Format("DISPLAY=:0.0; export DISPLAY && chmod +x {0}", scriptPath);
chmodArgs = string.Format("-c \"{0}\"", chmodArgs);
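For clarity, a sketch of the full command those two lines end up producing, assuming the process FileName is /bin/bash and using a placeholder script path:
/bin/bash -c "DISPLAY=:0.0; export DISPLAY && chmod +x /path/to/script.sh"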
I'd like to see if someone can take the below and test on their end to confirm and possibly help come up with a solution. Thanks!
#!/bin/bash
# variables
# s0f4e7n4r4h8x4j4
# /usr/sbin/content1
# content1
# /mnt/content1
# 10240
# change the size of M to what the size of container should be
echo "Allocating 10240MB..."
fallocate -l 10240M /usr/sbin/content1
sleep 1
# using echo with -n passes in the password required for cryptsetup command. The dash at the end tells cryptsetup to read in from console
echo "Formatting..."
echo -n s0f4e7n4r4h8x4j4 | cryptsetup luksFormat /usr/sbin/content1 -
sleep 1
echo "Opening..."
echo -n s0f4e7n4r4h8x4j4 | cryptsetup luksOpen /usr/sbin/content1 content1 -
sleep 1
# create without journaling
echo "Creating filesystem..."
mkfs.ext4 -O ^has_journal /dev/mapper/content1
sleep 1
# enable writeback mode
echo "Tuning..."
tune2fs -o journal_data_writeback /dev/mapper/content1
sleep 1
if [ ! -d "/mnt/content1" ]; then
echo "Creating directory..."
mkdir -p /mnt/content1
sleep 1
fi
# mount with no access time to stop unnecessary writes to disk for just access
echo "Mounting..."
mount /dev/mapper/content1 /mnt/content1 -o noatime
sleep 1
This is how I'm executing the script in .Net
var proc = new System.Diagnostics.Process {
    StartInfo =
    {
        FileName = pathToScript,
        WorkingDirectory = workingDir,
        Arguments = args,
        UseShellExecute = false
    }
};

if (proc.Start())
{
    while (!proc.HasExited)
    {
        System.Threading.Thread.Sleep(33);
    }
}
The Unit file use for service daemon
[Unit]
Description=Service name
[Service]
ExecStart=/bin/bash -c 'PATH=/sbin/dotnet:$PATH exec dotnet myservice.dll'
WorkingDirectory=/sbin/myservice/
User=root
Group=root
Restart=on-failure
SyslogIdentifier=my-service
PrivateTmp=true
[Install]
WantedBy=multi-user.target
The problem was not being able to run the mount command from within a service directly. From extensive trial and error, even verbose output of the mount command would show that there were NO errors, and yet it would NOT be mounted. Very misleading to not provide some failure message for users.
The solution is to create a unit file "service" to handle the mount/umount. The steps below explain how, with a link to the inspiring article that brought me here.
Step 1: Create the Unit File
The key is the .mount file needs to be named in a pattern that matches the Where= in the Unit file. So if you're mounting /mnt/content1, your file would be:
sudo nano /etc/systemd/system/mnt-content1.mount
Here is the Unit file details I used.
[Unit]
Description=Mount Content (/mnt/content1)
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=/dev/mapper/content1
Where=/mnt/content1
Type=ext4
Options=noatime
[Install]
WantedBy=multi-user.target
Step 2: Reload systemctl
systemctl daemon-reload
Final steps:
You can now issue start/stop on the new "service" that is dedicated just to mount and unmount. This will not auto-mount on reboot; if you need that, you'll need to enable the unit (see below).
systemctl start mnt-content1.mount
systemctl stop mnt-content1.mount
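If you do want the volume mounted automatically at boot, enabling the unit should be enough, since the [Install] section is already present (a standard systemd step, not tested here):
systemctl enable mnt-content1.mount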
Article: https://www.golinuxcloud.com/mount-filesystem-without-fstab-systemd-rhel-8/

Different output of command over ssh

I want to dump the CPU usage of a particular process over ssh using top, and I want the full command line to be shown.
When I ssh to the server and execute command locally, I see following:
remote-server$ top -c -b -n 1 |grep redis-server
5137 redis-user 20 0 83.5g 23g 884 S 13.7 29.3 13388:28 ./bin/redis-server *:11000
But when I execute the same command over ssh, I see following:
local-desktop$ ssh news-cache1 "top -c -b -n 1 |grep redis-server"
5137 redis-user 20 0 83.5g 23g 884 S 13.7 29.4 13388:55 ./bin/redis-server
I don't understand why I don't get complete command line (with host and port arguments *:11000) when I run the command over ssh.
Can anyone tell me what I am doing wrong?
My local desktop is OS X, El Capitan while remote server is centos 6.
Rerun the command with the -t option of ssh.
local-desktop$ ssh -t news-cache1 "top -c -b -n 1 |grep redis-server"
ssh client assigns a tty terminal with limited width when you run commands remotely. The width of the terminal assigned was not enough to show the full line that you are interested in. Adding -t forces a pseudo-terminal allocation. From http://man.openbsd.org/ssh
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

How to run tshark commands from root mode in Linux using a TCL script? - linux

I am using a Linux PC and have installed tshark, and I have to capture packets on the eth1 interface using a TCL script. But tshark has to run in root mode. The capturing and script-running PCs are the same. How do I log in as root and how do I run tshark commands using TCL? Please provide me a solution for this.
#!/usr/bin/tclsh
set out [exec tshark -V -i eth1 arp -c 1 ]
puts $out
Output
test@test:~$ tclsh pcap.tcl
Capturing on eth1
tshark: The capture session could not be initiated (eth1: You don't have permission to capture on that device (socket: Operation not permitted)).
Please check to make sure you have sufficient permissions, and that you have the proper interface or pipe specified.
0 packets captured
while executing
"exec tshark -V -i eth1 arp -c 1 "
invoked from within
"set out [exec tshark -V -i eth1 arp -c 1 ]"
(file "pcap.tcl" line 5)
test@test:~$
Please try the steps below and also refer to this link: http://packetlife.net/blog/2010/mar/19/sniffing-wireshark-non-root-user/
root@test:/usr/bin# setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap
root@test:/usr/bin# getcap /usr/bin/dumpcap
/usr/bin/dumpcap = cap_net_admin,cap_net_raw+eip
root@test:/usr/bin# exit
exit
test@test:/usr/bin$ tshark -V -i eth1
Capturing on eth1
Frame 1 (60 bytes on wire, 60 bytes captured)
Arrival Time: Aug 8, 2013 13:54:27.481528000
[Time delta from previous captured frame: 0.000000000 seconds]
[Time delta from previous displayed frame: 0.000000000 seconds]
You have to either elevate the privileges of your tshark process via sudo (or any other available means) or run your whole script with elevated privileges.
One way to do that, which might be simpler than sudo as it requires zero customization, is to write a super-simple C program which would just run /usr/bin/tshark with the necessary arguments, then make that program setuid root and distribute it along with your Tcl program. That is only needed if you need portability. Otherwise sudo is much simpler.
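For the sudo route, a hedged sketch of the sudoers side (the user name and tshark path are assumptions, written as a drop-in file):
# /etc/sudoers.d/tshark -- let the test user run only tshark as root, without a password
test ALL=(root) NOPASSWD: /usr/bin/tshark
The Tcl script's exec call would then invoke sudo tshark instead of tshark.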

Nagios/NRPE giving a "No output returned from plugin" error

Getting a "No output returned from plugin" error message from a Nagios/NRPE script
1) Running Nagios v3.2.3 and NRPE v2.12
2) The script:
OK_STATE=0
UNAME=`/bin/uname -r`
echo "OK: Kernel Version=$UNAME"
exit $OK_STATE
3) Command line results on the Nagios Server using NRPE
Same OK results for both the root and nagios users:
[nagios@cmonmm03 libexec]$ ./check_nrpe -H dappsi01b.dev.screenscape.local -c check_kernel
OK: Kernel Version=2.6.18-194.11.3.el5
When I run the check_kernel.sh script on the machine's local command line, it works there too.
Any thoughts or known solutions regarding this would be appreciated.
Thank you
Your command does not take any arguments, but it is likely the command definition for check_nrpe does define an argument parameter, for example:
define command{
    command_name    check_nrpe
    command_line    /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
}
Therefore, try placing a dummy argument in the service definition:
define service{
    use                     normal-service
    host_name               hostname
    service_description     Description
    check_command           check_nrpe!check_foo!placeholder
}
Did you add the nagios host to the xinetd configuration for NRPE (typically /etc/xinetd.d/nrpe)? Specifically, the only_from line typically includes only the localhost (on the remote system). Make sure to add the IPs of your nagios host there as well:
# default: on
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
    flags           = REUSE
    socket_type     = stream
    port            = 5666
    wait            = no
    user            = nagios
    group           = nagios
    server          = /usr/local/nagios/bin/nrpe
    server_args     = -c /usr/local/nagios/etc/nrpe.cfg --inetd
    log_on_failure  += USERID
    disable         = no
    only_from       = 127.0.0.1 192.168.1.61
}
