I am at my wits' end trying to figure this out.
When I execute the following command:
sudo -u icinga '/usr/lib//nagios/plugins/check_db2_health' '--database' 'mydatabase' '--environment' 'DB2DIR=/opt/IBM/db2/V11.1.4fp5a' '--environment' 'DB2INSTANCE=mydatabase' '--environment' 'INSTHOME=/srv/db2/home/mydatabase' '--report' 'short' '--username' 'icinga' '--mode' 'connection-time' '--warning' '50'
The output is as follows:
[DBinstance : mydatabase] Status : CRITICAL - cannot connect to mydatabase. install_driver(DB2) failed: Can't load '/usr/lib/nagios/plugins/PerlLib/lib/perl5/site_perl/5.18.2/x86_64-linux-thread-multi/auto/DBD/DB2/DB2.so' for module DBD::DB2: libdb2.so.1: cannot open shared object file: No such file or directory at /usr/lib/perl5/5.18.2/x86_64-linux-thread-multi/DynaLoader.pm line 190.
at (eval 10) line 3.
Compilation failed in require at (eval 10) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /usr/lib//nagios/plugins/check_db2_health line 2627.
But when I log in as the user icinga using su - icinga
And run
'/usr/lib//nagios/plugins/check_db2_health' '--database' 'mydatabase' '--environment' 'DB2DIR=/opt/IBM/db2/V11.1.4fp5a' '--environment' 'DB2INSTANCE=mydatabase' '--environment' 'INSTHOME=/srv/db2/home/mydatabase' '--report' 'short' '--username' 'icinga' '--mode' 'connection-time' '--warning' '50'
It works fine.
How do I set up environment variables when the sudo -u icinga command is run?
I am on SUSE Linux.
Essentially, I am trying to set up global environment variables, similar to the environment variables in Icinga, that work across all commands executed on the server without having to use sudo -E etc., because I cannot change the way Icinga calls the plugin.
You need to source the db2profile when you sudo. It has to be sourced in the same shell that runs the check, otherwise the environment is lost, so wrap both in a single shell invocation:
sudo -u icinga sh -c '. /srv/db2/home/mydatabase/sqllib/db2profile && /usr/lib/nagios/plugins/check_db2_health ...'
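If you truly cannot change how Icinga invokes the plugin, another option is a small wrapper script at the path Icinga already calls; this is only a sketch, assuming you can rename the real plugin and that INSTHOME is /srv/db2/home/mydatabase as in the question:
#!/bin/sh
# Hypothetical wrapper replacing /usr/lib/nagios/plugins/check_db2_health
# (the real plugin renamed to check_db2_health.real). It sources the DB2
# environment so libdb2.so.1 can be found, then execs the real check.
. /srv/db2/home/mydatabase/sqllib/db2profile
exec /usr/lib/nagios/plugins/check_db2_health.real "$@"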
Related
I use the mcr.microsoft.com/mssql/server:2017 Docker container to run an MSSQL server. I tried to change the collation like this:
echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Unfortunately I get this error:
No passwd entry for user 'mssql'
How is it possible to fix this error?
I created a new user with useradd mssql, but now I get this error if I run the command:
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: File: pal.cpp:566 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
/opt/mssql/bin/sqlservr: PAL initialization failed. Error: 101
It looks like the latest mcr.microsoft.com/mssql/server image fixes this issue. If you want to stay on the old one, the following procedure fixes the user/permission issues:
cake@cake:~/20211012$ docker run --rm -it mcr.microsoft.com/mssql/server:2017-latest /bin/bash
SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
root@4fd0bdf1d21c:/# useradd mssql
root@4fd0bdf1d21c:/# mkdir -p /var/opt/mssql
root@4fd0bdf1d21c:/# chmod -R 777 /var/opt/mssql
root@4fd0bdf1d21c:/# echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Enter the collation: Configuring SQL Server...
The SQL Server End-User License Agreement (EULA) must be accepted before SQL
Server can start. The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388.
You can accept the EULA by specifying the --accept-eula command line option,
setting the ACCEPT_EULA environment variable, or using the mssql-conf tool.
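Based on the options that message lists, accepting the EULA via the environment variable when invoking mssql-conf should get past the prompt (a sketch, assuming the container is otherwise set up as above):
# ACCEPT_EULA is one of the acceptance methods named in the output above
echo "SQL_Latin1_General_CP1_CI_AS" | ACCEPT_EULA=Y /opt/mssql/bin/mssql-conf set-collation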
I have a task to install Oracle 11g on CentOS 8 in a VM (I'm new to Linux/Oracle).
I downloaded the Oracle files and unzipped them, then I tried to run ./runInstaller, but I get an error. Here is the full terminal output with the error:
login as: admin
admin@192.168.163.129's password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Thu May 21 09:26:48 2020 from 192.168.163.1
[admin@oracledb ~]$ cd Downloads
[admin@oracledb Downloads]$ cd database
[admin@oracledb database]$ ls
doc linux.x64_11gR2_database_1of2 response runInstaller stage
install linux.x64_11gR2_database_2of2 rpm sshsetup welcome.html
[admin@oracledb database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 2027 MB Passed
Checking swap space: must be greater than 150 MB. Actual 1759 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Continue? (y/n) [n] y
>>> Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-05-21_09-43-58AM. Please wait ...
DISPLAY not set. Please set the DISPLAY and try again.
Depending on the Unix Shell, you can use one of the following commands as examples to set the DISPLAY environment variable:
- For csh: % setenv DISPLAY 192.168.1.128:0.0
- For sh, ksh and bash: $ DISPLAY=192.168.1.128:0.0; export DISPLAY
Use the following command to see what shell is being used:
echo $SHELL
Use the following command to view the current DISPLAY environment variable setting:
echo $DISPLAY
- Make sure that client users are authorized to connect to the X Server.
To enable client users to access the X Server, open an xterm, dtterm or xconsole as the user that started the session and type the following command:
% xhost +
To test that the DISPLAY environment variable is set correctly, run a X11 based program that comes with the native operating system such as 'xclock':
% <full path to xclock.. see below>
If you are not able to run xclock successfully, please refer to your PC-X Server or OS vendor for further assistance.
Typical path for xclock: /usr/X11R6/bin/xclock
[admin@oracledb database]$
I am using PuTTY and Xming, but I still get this error.
Make sure your PuTTY session is connecting with "X11 forwarding" enabled (Connection > SSH > X11).
After you set this, be sure to scroll back up to 'Session' and then 'Save' so the setting persists.
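Once X11 forwarding is enabled in PuTTY and Xming is running on your PC, sshd sets DISPLAY for the session automatically; a quick sanity check before re-running the installer (xclock may need to be installed separately on CentOS 8):
echo $DISPLAY      # with forwarding active this is typically something like localhost:10.0
xclock             # a clock window should appear in Xming; if it does, ./runInstaller can open its GUI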
I am using Ubuntu 18.04 and I want to launch a Spark cluster on EC2.
I used the export command to set the environment variables:
export AWS_ACCESS_KEY_ID=MyAccesskey
export AWS_SECRET_ACCESS_KEY=Mysecretkey
but when I run the command to launch the Spark cluster, I get:
ERROR: The environment variable AWS_ACCESS_KEY_ID must be set
I put all the commands I used in case I made a mistake:
sudo mv ~/Downloads/keypair.pem /usr/local/spark/keypair.pem
sudo mv ~/Downloads/credentials.csv /usr/local/spark/credentials.csv
# Make sure the .pem file is readable by the current user.
chmod 400 "keypair.pem"
# Go into the spark directory and set the environment variables with the credentials information
cd spark
export AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=SECRET_KEY
# To install Spark 2.0 on the cluster:
sudo spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch
I am new to these things, and any help is really appreciated.
You can also retrieve the values of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY by using the get subcommand of aws configure:
AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
On the command line:
sudo AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id) AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key) spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch
source: AWS Command Line Interface User Guide
Environment variables can simply be passed after sudo in the form ENV=VALUE, and they will be visible to the command that follows. I am not aware of any restrictions on this usage, so my example problem can be solved with:
sudo AWS_ACCESS_KEY_ID=ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=SECRET_KEY spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch
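An equivalent sketch, if your sudoers policy allows preserving the environment (SETENV or a permissive Defaults), is to export the keys and pass -E:
export AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=SECRET_KEY
# -E keeps the caller's exported environment for the spark-ec2 run (same arguments as above)
sudo -E spark-ec2/spark-ec2 -k keypair --identity-file=keypair.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials --instance-type t2.micro --worker-instances 1 launch project-launch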
I've installed an npm package / script in a jail on FreeNAS 9.10 (FreeBSD-based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to be auto-starting when the jail starts. I don't know how to do that. Do I need to create an rc script?
Basically, all I need to do is run "npm start" in the correct directory on startup. How do I do that?
thanks
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
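For that quick-and-dirty route, the rc.local inside the jail can be as small as this sketch (the user name, directory, and npm path are placeholders/assumptions):
#!/bin/sh
# /etc/rc.local in the jail: switch to the app directory and start it in the
# background as an unprivileged user
cd /path/to/your/app && su -m youruser -c '/usr/local/bin/npm start' &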
I don't know about npm start, but for node.js I made this rc script:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
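For completeness, a typical install-and-start sequence once the placeholders are filled in (standard FreeBSD tooling, run as root inside the jail):
chmod 555 /usr/local/etc/rc.d/SERVICENAME   # make the rc script executable
sysrc SERVICENAME_enable="YES"              # enable it at boot, as above
service SERVICENAME start                   # start it now
service SERVICENAME status                  # confirm the daemon is running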
I'm setting up Puppet Dashboard for the first time. I have it running with the Passenger module in Apache.
sudo rake RAILS_ENV=production reports:import
When I run this command, the tasks appear in the dashboard as failed.
630 new failed tasks
The details for each failure look something like this:
Importing report 201212270754.yaml at 2012-12-27 09:21 UTC
Permission denied - /var/lib/puppet/reports/rb-db1/201212270754.yaml
Backtrace
/usr/share/puppet-dashboard/app/models/report.rb:86:in `read'
/usr/share/puppet-dashboard/app/models/report.rb:86:in `create_from_yaml_file'
The report files were owned by puppet:puppet with a 640 permission by default.
I ran chmod a+rw on the reports directory, but I still get the same errors.
Any ideas on what I might be doing wrong here?
If you are running the puppet-dashboard server as the puppet-dashboard user instead of as root, you will see this error. My system is using /usr/share/puppet-dashboard/script/server on CentOS 6.4 with the puppet-dashboard-1.2.23-1.el6.noarch RPM from Puppet Labs.
[root@hadoop01 puppet-dashboard]# cat /etc/sysconfig/puppet-dashboard
#
# path to where you installed puppet dashboard
#
DASHBOARD_HOME=/usr/share/puppet-dashboard
#DASHBOARD_USER=puppet-dashboard
DASHBOARD_USER=root
DASHBOARD_RUBY=/usr/bin/ruby
DASHBOARD_ENVIRONMENT=production
DASHBOARD_IFACE=0.0.0.0
DASHBOARD_PORT=3000
Edit the file as shown above and then run:
/etc/init.d/puppet-dashboard restart && /etc/init.d/puppet-dashboard-workers restart
My puppet-dashboard version is 1.2.23.
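If you would rather keep the dashboard running as the puppet-dashboard user instead of root, a hedged alternative is to give that user read access to the reports (group and path taken from the question; adjust to your layout):
usermod -a -G puppet puppet-dashboard    # reports are owned by puppet:puppet with mode 640
chmod -R g+rX /var/lib/puppet/reports    # ensure group read on files and traverse on directories
/etc/init.d/puppet-dashboard restart && /etc/init.d/puppet-dashboard-workers restart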