Check_MK - configuring a legacy check - Linux

I am running OMD 1.20 (the latest according to the official website) with Check_MK 1.2.4p5 community edition on an Ubuntu 14.04.3 LTS machine.
I need to configure an FTP check that also verifies credentials and reading/writing a file. The standard plugins do not offer such a feature as far as I know, so I am trying to use a custom plugin, specifically: https://exchange.nagios.org/directory/Plugins/Network-Protocols/FTP/check_ftp_rw/details
So the monitoring server is supposed to test an external FTP server that does not have the agent installed.
I have the plugin in /usr/lib/nagios/plugins and have run it manually; it works fine.
Now I am trying to configure it as a check in Check_MK, so I did the following
in /opt/omd/sites/monitoring/etc/check_mk/main.mk:
# Put your host names here
# all_hosts = [ 'localhost' ]
all_hosts = [ ]
extra_nagios_conf += r"""
define command {
    command_name check_ftprw
    command_line /usr/lib/nagios/plugins/check_ftp_rw --host ftp.test.com --user test --password 'test123' --dir pub
}
"""
legacy_checks = [
    # (command name, service description, has perfdata)
    ( ( "check_ftprw", "FTP", True ), [ "localhost" ] ),
]
I restart the OMD site and check the inventory, but it never picks up this check.

I have solved this problem. The config is OK; we then need to reload Check_MK: cmk -O
Then we need to check that the command is registered in check_mk_objects.cfg:
# extra_nagios_conf
define command {
    command_name check_ftprw
    command_line /usr/lib/nagios/plugins/check_ftp_rw --host ftp.xxx --user test --password 'test' --dir pub --file test.txt --write
}
We can check the agent output using: check_mk_agent
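For reference, the whole round trip can be done from the OMD site user. A minimal sketch, assuming the site is named monitoring and a Nagios-based core (the object-file path is an assumption from a default OMD layout):
su - monitoring
cmk -O                                                        # regenerate the config and restart the core
grep -A 2 check_ftprw etc/nagios/conf.d/check_mk_objects.cfg  # command definition present?
/usr/lib/nagios/plugins/check_ftp_rw --host ftp.test.com --user test --password 'test123' --dir pub ; echo $?  # plugin exit code
Note that legacy_checks entries are static active checks executed by the core, not inventorized services, which is why the inventory never picks them up.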

Related

No passwd entry for user 'mssql' sqlsrv on Linux

I use the mcr.microsoft.com/mssql/server:2017 docker container to run a mssql server. I tried to change the collation like this:
echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Unfortunately I get this error:
No passwd entry for user 'mssql'
How is it possible to fix this error?
I created a new user with useradd mssql, but now I get this error if I run the command:
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: File: pal.cpp:566 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
/opt/mssql/bin/sqlservr: PAL initialization failed. Error: 101
It looks like the latest mcr.microsoft.com/mssql/server image fixes this issue. If you insist on the old one, the following procedure can fix the user/permission issues:
cake@cake:~/20211012$ docker run --rm -it mcr.microsoft.com/mssql/server:2017-latest /bin/bash
SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
root@4fd0bdf1d21c:/# useradd mssql
root@4fd0bdf1d21c:/# mkdir -p /var/opt/mssql
root@4fd0bdf1d21c:/# chmod -R 777 /var/opt/mssql
root@4fd0bdf1d21c:/# echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Enter the collation: Configuring SQL Server...
The SQL Server End-User License Agreement (EULA) must be accepted before SQL
Server can start. The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388.
You can accept the EULA by specifying the --accept-eula command line option,
setting the ACCEPT_EULA environment variable, or using the mssql-conf tool.
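For what it's worth, the EULA prompt itself can be avoided by passing the documented ACCEPT_EULA environment variable when starting the container. A hedged sketch against the old image (the SA_PASSWORD value is just a placeholder):
docker run --rm -it -e ACCEPT_EULA=Y -e SA_PASSWORD='YourStrong!Passw0rd' \
  mcr.microsoft.com/mssql/server:2017-latest /bin/bash
# inside the container, the collation change should now get past the EULA check:
echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation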

Hammer command in Puppet exec fails (foreman 1.20.1)

I am trying to use Hammer in Foreman 1.20.1 on CentOS 7.6 to refresh proxy features (or just about any other command other than --version) in a Puppet exec. The command I am using works fine at the shell. It fails in a Puppet exec with:
Error: undefined local variable or method `dotfile' for
Notice: /Stage[main]/Profiles::Test/Exec[test]/returns: Did you mean?
@@dotfile Notice: /Stage[main]/Profiles::Test/Exec[test]/returns:
Error: No such sub-command 'proxy'.
The code I am using is:
class profiles::test {
  exec { 'test':
    command => '/usr/bin/hammer proxy refresh-features --name $(hostname)',
  }
}
include profiles::test
I'm not concerned about idempotency, as it will have refreshonly set; I just want to get the command to work.
I have tried adding other options such as path, user, environment etc to no avail. Any help appreciated.
From clues I found at https://github.com/awesome-print/awesome_print/issues/316 and https://grokbase.com/t/gg/puppet-users/141mrjg2bw/problems-with-onlyif-in-exec, it turns out that the HOME environment variable has to be set. So the working code is:
exec { 'test':
  command     => '/usr/bin/hammer proxy refresh-features --name $(hostname)',
  environment => ["HOME=/root"],
  refreshonly => true,
}
f'ing ruby!

How do I grant the mlock syscall to a container invoked via "sudo rkt run" on CoreOS

Running my app as below:
sudo rkt run --insecure-options=image --interactive --net=host ./myapp.aci
I get the message:
Failed to lock memory: cannot allocate memory
Which after some digging would seem to indicate that the container does not have the CAP_IPC_LOCK capability passed to it. I have dug into some of the documentation, but cannot find where I need to add configuration or any option to enable this. How do I do this?
ACIs can specify which caps they need in their manifest with an isolator of type os/linux/capabilities-retain-set.
To check if the manifest contains such an isolator, you can use actool:
$ actool cat-manifest --pretty-print ./myapp.aci
You might see the following:
"isolators": [
{
"name": "os/linux/capabilities-retain-set",
"value": {
"set": [
"CAP_IPC_LOCK"
]
}
}
]
To add CAP_IPC_LOCK, you can use:
$ actool patch-manifest --capability=CAP_IPC_LOCK --replace ./myapp.aci
It is currently not possible to add a capability directly on the rkt run command line. I filed an issue on GitHub for this feature request: coreos/rkt#2371
You can use acbuild to give your container the right capabilities.
If you're already using acbuild to make your ACI, just add this line to the build script:
echo '{ "set": ["CAP_IPC_LOCK"] }' | acbuild isolator add "os/linux/capabilities-retain-set" -
Or if you're not already using acbuild to make your ACI, you can modify an existing ACI by using the --modify flag. So the command would be:
echo '{ "set": ["CAP_IPC_LOCK"] }' | acbuild --modify path/to/your/app.aci isolator add "os/linux/capabilities-retain-set" -

How can I run a Docker container in AWS Elastic Beanstalk with non-default run parameters?

I have a Docker container that runs great on my local development machine. I would like to move this to AWS Elastic Beanstalk, but I am running into a small bit of trouble.
I am trying to mount an S3 bucket to my container by using s3fs. I have the Dockerfile:
FROM tomcat:7.0
MAINTAINER me@example.com
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml++2.6-dev libssl-dev mime-support automake libtool wget tar
# Add the java source
ADD . /path/to/tomcat/webapps/
ADD run_docker.sh /root/run_docker.sh
WORKDIR $CATALINA_HOME
EXPOSE 8080
CMD ["/root/run_docker.sh"]
And I install s3fs, mount an S3 bucket, and run the Tomcat server after the image has been created, by running run_docker.sh:
#!/bin/bash
#run_docker.sh
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip -O /usr/src/master.zip;
cd /usr/src/;
unzip /usr/src/master.zip;
cd /usr/src/s3fs-fuse-master;
autoreconf --install;
CPPFLAGS=-I/usr/include/libxml2/ /usr/src/s3fs-fuse-master/configure;
make;
make install;
cd $CATALINA_HOME;
mkdir /opt/s3-files;
s3fs my-bucket /opt/s3-files;
catalina.sh run
When I build and run this Docker container using the command:
docker run --cap-add mknod --cap-add sys_admin --device=/dev/fuse -p 80:8080 -d username/mycontainer:latest
it works well. Yet when I remove --cap-add mknod --cap-add sys_admin --device=/dev/fuse, s3fs fails to mount my S3 bucket.
Now, I would like to run this on AWS Elastic Beanstalk, and when I deploy the container (and run run_docker.sh), all the steps execute fine, except that the s3fs my-bucket /opt/s3-files step in run_docker.sh fails to mount the bucket.
Presumably, this is because whatever Elastic Beanstalk does to run a Docker container, it doesn't add any additional flags like --cap-add mknod --cap-add sys_admin --device=/dev/fuse.
My Dockerrun.aws.json file looks like:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "tomcat:7.0"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
Is it possible to add additional docker run flags to an AWS EB Docker deployment?
An alternative option is to find another way to mount an S3 bucket, but I suspect I'd run into similar permission errors regardless. Has anyone seen any way to accomplish this?
UPDATE:
For people trying to use @Egor's answer below: it works when the EB configuration is set to v1.4.0 running Docker 1.6.0. Anything past v1.4.0 fails. So to make it work, build your environment as normal (which should give you a failed build), then rebuild it with the v1.4.0 running Docker 1.6.0 configuration. That should do it!
If you are using the latest version of the AWS Docker stack (Docker 1.7.1, for example), you'll need to slightly modify the above answer. Try this:
commands:
  00001_add_privileged:
    cwd: /tmp
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
Notice the change of location and name of the run script.
Add file .ebextensions/01-commands.config:
container_commands:
  00001-docker-privileged:
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh'
I am also using s3fs
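If you want to confirm the sed hook worked, you can SSH into the EB instance after a deploy and inspect the running container. A hedged sketch (the container ID is a placeholder):
docker ps                                                             # find the application container's ID
docker inspect --format '{{.HostConfig.Privileged}}' <container-id>   # should print true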
Thanks elijahchancey for the answer, it was very helpful. I would just like to add a small comment:
Elastic Beanstalk now uses ECS tasks to deploy and manage application clusters. There is a very important paragraph in the Multicontainer Docker Configuration
docs (which I originally missed).
The following examples show a subset of parameters that are commonly used. More optional parameters are available. For more information on the task definition format and a full list of task definition parameters, see Amazon ECS Task Definitions in the Amazon ECS Developer Guide.
So the document is not a complete reference; it just shows typical entries, and you are supposed to find the rest elsewhere. This has quite a major impact, because now (2018) you can specify many more options and you don't need to hack ebextensions any more. The only thing you need to do is use task definition parameters in the containerDefinitions of your multicontainer Dockerrun.aws.json.
This is not mentioned for single-container Docker environments, but one can try and verify...
Example of a multicontainer Dockerrun.aws.json with an extra capability:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "service1",
      "image": "myapp/service1:latest",
      "essential": true,
      "memoryReservation": 128,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ],
      "linuxParameters": {
        "capabilities": {
          "add": [
            "SYS_PTRACE"
          ]
        }
      }
    }
  ]
}
You can now add capabilities using the task definition. Here are the docs:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
This is specifically what you would add to your task definition:
"linuxParameters": {
"capabilities": {
"add": [
"SYS_PTRACE"
]
}
},
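Once the environment is up, you can verify that the capability made it into the generated ECS task definition with the AWS CLI. A hedged sketch (the task family name is a placeholder):
aws ecs describe-task-definition --task-definition <family> \
  --query 'taskDefinition.containerDefinitions[0].linuxParameters'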

NPM private git module on Heroku

I am trying to deploy my app to Heroku; however, I rely on some private git repos as modules. I do this for code reuse between projects, e.g. I have a custom logger I use in multiple apps.
"logger":"git+ssh://git@bitbucket.org..............#master"
The problem is that Heroku obviously does not have SSH access to this code, and I can't find anything on this problem. Ideally Heroku would have a public key I could just add to the modules.
Basic auth
GitHub has support for basic auth:
"dependencies" : {
"my-module" : "git+https://my_username:my_password#github.com/my_github_account/my_repo.git"
}
As does BitBucket:
"dependencies" : {
"my-module": "git+https://my_username:my_password#bitbucket.org/my_bitbucket_account/my_repo.git"
}
But having plain passwords in your package.json is probably not desired.
Personal access tokens (GitHub)
To make this answer more up-to-date, I would now suggest using a personal access token on GitHub instead of a username/password combo.
You should now use:
"dependencies" : {
"my-module" : "git+https://<username>:<token>#github.com/my_github_account/my_repo.git"
}
For Github you can generate a new token here:
https://github.com/settings/tokens
App passwords (Bitbucket)
App passwords are primarily intended as a way to provide compatibility with apps that don't support two-factor authentication, and you can use them for this purpose as well. First, create an app password, then specify your dependency like this:
"dependencies" : {
"my-module": "git+https://<username>:<app-password>#bitbucket.org/my_bitbucket_account/my_repo.git"
}
[Deprecated] API key for teams (Bitbucket)
For BitBucket you can generate an API Key on the Manage Team page and then use this URL:
"dependencies" : {
"my-module" : "git+https://<teamname>:<api-key>#bitbucket.org/team_name/repo_name.git"
}
Update 2016-03-26
The method described no longer works if you are using npm3, since npm3 fetches all modules described in package.json before running the preinstall script. This has been confirmed as a bug.
The official node.js Heroku buildpack now includes heroku-prebuild and heroku-postbuild, which will be run before and after npm install respectively. You should use these scripts instead of preinstall and postinstall in all cases, to support both npm2 and npm3.
In other words, your package.json should resemble:
"scripts": {
"heroku-prebuild": "bash preinstall.sh",
"heroku-postbuild": "bash postinstall.sh"
}
I've come up with an alternative to Michael's answer, retaining the (IMO) favourable requirement of keeping your credentials out of source control, whilst not requiring a custom buildpack. This was borne out of frustration that the buildpack linked by Michael is rather out of date.
The solution is to setup and tear down the SSH environment in npm's preinstall and postinstall scripts, instead of in the buildpack.
Follow these instructions:
Create two scripts in your repo, let's call them preinstall.sh and postinstall.sh.
Make them executable (chmod +x *.sh).
Add the following to preinstall.sh:
#!/bin/bash
# Generates an SSH config file for connections if a config var exists.
if [ "$GIT_SSH_KEY" != "" ]; then
  echo "Detected SSH key for git. Adding SSH config" >&1
  echo "" >&1
  # Ensure we have an ssh folder
  if [ ! -d ~/.ssh ]; then
    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
  fi
  # Load the private key into a file.
  echo "$GIT_SSH_KEY" | base64 --decode > ~/.ssh/deploy_key
  # Change the permissions on the file to
  # be read-only for this user.
  chmod 400 ~/.ssh/deploy_key
  # Set up the ssh config file.
  echo -e "Host github.com\n"\
    " IdentityFile ~/.ssh/deploy_key\n"\
    " IdentitiesOnly yes\n"\
    " UserKnownHostsFile=/dev/null\n"\
    " StrictHostKeyChecking no"\
    > ~/.ssh/config
fi
Add the following to postinstall.sh:
#!/bin/bash
if [ "$GIT_SSH_KEY" != "" ]; then
  echo "Cleaning up SSH config" >&1
  echo "" >&1
  # Now that npm has finished running, we shouldn't need the ssh key/config anymore.
  # Remove the files that we created.
  rm -f ~/.ssh/config
  rm -f ~/.ssh/deploy_key
  # Clear that sensitive key data from the environment
  export GIT_SSH_KEY=0
fi
Add the following to your package.json:
"scripts": {
"preinstall": "bash preinstall.sh",
"postinstall": "bash postinstall.sh"
}
Generate a private/public key pair using ssh-keygen (see the sketch after this list).
Add the public key as a deploy key on Github.
Create a base64 encoded version of your private key, and set it as the Heroku config var GIT_SSH_KEY.
Commit and push your app to Github.
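A hedged sketch of the key-related steps as shell commands (the deploy_key filename is an assumption; base64 flags differ between Linux and macOS):
ssh-keygen -t rsa -f deploy_key -N ""                  # key pair with no passphrase
# add deploy_key.pub as a deploy key on the GitHub repo, then:
heroku config:set GIT_SSH_KEY="$(base64 deploy_key)"   # on macOS use: base64 --input deploy_key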
When Heroku builds your app, before npm installs your dependencies, the preinstall.sh script is run. This creates a private key file from the decoded contents of the GIT_SSH_KEY environment variable, and creates an SSH config file to tell SSH to use this file when connecting to github.com. (If you are connecting to Bitbucket instead, then update the Host entry in preinstall.sh to bitbucket.org). npm then installs the modules using this SSH config. After installation, the private key is removed and the config is wiped.
This allows Heroku to pull down your private modules via SSH, while keeping the private key out of the codebase. If your private key becomes compromised, since it is just one half of a deploy key, you can revoke the public key in GitHub and regenerate the keypair.
As an aside, since GitHub deploy keys have read/write permissions, if you are hosting the module in a GitHub organization, you can instead create a read-only team and assign a 'deploy' user to it. The deploy user can then be configured with the public half of the keypair. This adds an extra layer of security to your module.
It's a REALLY bad idea to have plain-text passwords in your git repo; using an access token is better, but you will still want to be super careful.
"my_module": "git+https://ACCESS_TOKEN:x-oauth-basic#github.com/me/my_module.git"
I created a custom nodeJS buildpack that will allow you to specify an SSH key that is registered with ssh-agent and used by npm when dynos are first set up. It seamlessly allows you to specify your module as an ssh url in your package.json as shown:
"private_module": "git+ssh://git#github.com:me/my_module.git"
To setup your app to use your private key:
Generate a key: ssh-keygen -t rsa -C "your_email@example.com" (Enter no passphrase. The buildpack does not support keys with passphrases)
Add the public key to github: pbcopy < ~/.ssh/id_rsa.pub (in OS X) and paste the results into the github admin
Add the private key to your heroku app's config: cat id_rsa | base64 | pbcopy, then heroku config:set GIT_SSH_KEY=<paste_here> --app your-app-name
Set up your app to use the buildpack as described in the Heroku nodeJS buildpack README included in the project. In summary, the simplest way is to set a special config value with heroku config:set to the GitHub URL of the repository containing the desired buildpack. I'd recommend forking my version and linking to your own GitHub fork, as I'm not promising not to change my buildpack.
My custom buildpack can be found here: https://github.com/thirdiron/heroku-buildpack-nodejs and it works for my system. Comments and pull requests are more than welcome.
Based on the answer from @fiznool I created a buildpack to solve this problem using a custom ssh key stored as an environment variable. As the buildpack is technology agnostic, it can be used to download dependencies using any tool like composer for php, bundler for ruby, npm for javascript, etc: https://github.com/simon0191/custom-ssh-key-buildpack
Add the buildpack to your app:
$ heroku buildpacks:add --index 1 https://github.com/simon0191/custom-ssh-key-buildpack
Generate a new SSH key without a passphrase (let's say you named it deploy_key).
Add the public key to your private repository account. For example:
Github: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/
Bitbucket: https://confluence.atlassian.com/bitbucket/add-an-ssh-key-to-an-account-302811853.html
Encode the private key as a base64 string and add it as the CUSTOM_SSH_KEY environment variable of the heroku app.
Make a comma separated list of the hosts for which the ssh key should be used and add it as the CUSTOM_SSH_KEY_HOSTS environment variable of the heroku app.
# MacOS
$ heroku config:set CUSTOM_SSH_KEY=$(base64 --input ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
# Ubuntu
$ heroku config:set CUSTOM_SSH_KEY=$(base64 ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
Deploy your app and enjoy :)
I was able to setup resolving of Github private repositories in Heroku build via Personal access tokens.
Generate Github access token here: https://github.com/settings/tokens
Set access token as Heroku config var: heroku config:set GITHUB_TOKEN=<paste_here> --app your-app-name or via Heroku Dashboard
Add heroku-prebuild.sh script:
#!/bin/bash
if [ "$GITHUB_TOKEN" != "" ]; then
  echo "Detected GITHUB_TOKEN. Setting git config to use the security token" >&1
  git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf git@github.com:
fi
Add the prebuild script to package.json:
"scripts": {
  "heroku-prebuild": "bash heroku-prebuild.sh"
}
For the local environment we can also use git config ... or we can add the access token to the ~/.netrc file:
machine github.com
login PASTE_GITHUB_USERNAME_HERE
password PASTE_GITHUB_TOKEN_HERE
and installing private github repos should work.
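Before relying on it in a build, you can sanity-check the token locally: git's HTTPS transport reads ~/.netrc, so listing the refs of the private repo should work without a prompt (OWNER/REPO are placeholders):
git ls-remote https://github.com/OWNER/REPO.git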
npm install OWNER/REPO --save will appear in package.json as: "REPO": "github:OWNER/REPO"
and resolving private repos in Heroku build should also work.
Optionally, you can set up a postbuild script to unset the GITHUB_TOKEN.
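A hedged sketch of such a heroku-postbuild.sh, mirroring the prebuild script above (unsetting the config var itself would still have to happen on the Heroku side):
#!/bin/bash
if [ "$GITHUB_TOKEN" != "" ]; then
  echo "Cleaning up git config" >&1
  # drop the insteadOf rewrite that embedded the token
  git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf
fi
Register it as "heroku-postbuild": "bash heroku-postbuild.sh" in the scripts section, alongside the prebuild entry.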
This answer https://stackoverflow.com/a/29677091/6135922 is good, but I changed the preinstall script a little bit. Hope this will help someone.
#!/bin/bash
# Generates an SSH config file for connections if a config var exists.
echo "Preinstall"
if [ "$GIT_SSH_KEY" != "" ]; then
  echo "Detected SSH key for git. Adding SSH config" >&1
  echo "" >&1
  # Ensure we have an ssh folder
  if [ ! -d ~/.ssh ]; then
    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
  fi
  # Load the private key into a file.
  echo "$GIT_SSH_KEY" | base64 --decode > ~/.ssh/deploy_key
  # Change the permissions on the file to
  # be read-only for this user.
  chmod o-w ~/
  chmod 700 ~/.ssh
  chmod 600 ~/.ssh/deploy_key
  # Set up the ssh config file.
  echo -e "Host bitbucket.org\n"\
    " IdentityFile ~/.ssh/deploy_key\n"\
    " HostName bitbucket.org\n"\
    " IdentitiesOnly yes\n"\
    " UserKnownHostsFile=/dev/null\n"\
    " StrictHostKeyChecking no"\
    > ~/.ssh/config
  echo "eval `ssh-agent -s`"
  eval `ssh-agent -s`
  echo "ssh-add -l"
  ssh-add -l
  echo "ssh-add ~/.ssh/deploy_key"
  ssh-add ~/.ssh/deploy_key
  # uncomment to check that everything works just fine
  # ssh -v git@bitbucket.org
fi
You can use a private repository with authentication in package.json, as in the example below:
https://usernamegit:passwordgit@github.com/reponame/web/tarball/branchname
In short, it is not possible. The best solution to this problem I came up with is to use the new git subtrees. At the time of writing they are not in the official git source and so need to be installed manually, but they will be included in v1.7.11. At the moment subtree is available via Homebrew and apt-get. It is then a case of doing:
git subtree add -P node_modules/someprivatemodule git@github.......someprivatemodule {master|tag|commit}
This bulks out the repo size, but an update is easy by doing the command above with git subtree pull.
I have done this before with modules from GitHub. npm currently accepts the name of the package or a link to a tar.gz file which contains the package.
For example, if you want to use express.js directly from GitHub (grab the link via the download section) you could do:
"dependencies" : {
  "express" : "https://github.com/visionmedia/express/tarball/2.5.9"
}
So you need to find a way to access your repository as a tar.gz file via http(s).
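A quick way to check that a candidate URL really serves a tarball before pinning it in package.json, using the express URL above as the example:
curl -L https://github.com/visionmedia/express/tarball/2.5.9 -o express.tar.gz
tar -tzf express.tar.gz | head    # list the first few entries of the archive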
