How are configuration files read from the /etc/sysctl.d/ directory?

I was reading the CIS CentOS Linux 7 Benchmark v2.2.0, section 1.5.1 (Ensure core dumps are restricted). Its remediation section says:
Add the following line to /etc/security/limits.conf or a /etc/security/limits.d/*
file:
hard core 0
I have a remediation script (taken from GitHub) to achieve the same. Its contents are:
echo "hard core 0" >> /etc/security/limits.d/CIS.conf
echo "fs.suid_dumpable = 0" >> /etc/sysctl.d/CIS.conf
My question is: why does the script add these lines to CIS.conf? I know it is not a standard system conf file, so how does the OS know to read it? Does the OS read all conf files present under the /etc/sysctl.d/ path?
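For reference, a minimal check (assuming a CentOS 7 host) of how drop-in files under /etc/sysctl.d/ get picked up; the file name CIS.conf is the one used by the script above:

# Any *.conf file dropped into /etc/sysctl.d/ is read at boot by systemd-sysctl
# and whenever the settings are re-applied by hand:
echo "fs.suid_dumpable = 0" > /etc/sysctl.d/CIS.conf
# "sysctl --system" walks /etc/sysctl.d/*.conf, /run/sysctl.d/*.conf,
# /usr/lib/sysctl.d/*.conf and /etc/sysctl.conf, printing each file as it applies it:
sysctl --system
# verify the value took effect
sysctl fs.suid_dumpable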

Related

How to create and fill a file via kickstart?

Hi, I'm currently writing an Oracle Linux 8 Kickstart file for our company and I need to automatically create a file after installation and write to it. This is what I tried:
cat <<\EOF >>/etc/sysctl.d/disableipv6.conf
net.ipv6.conf.all.disable_ipv6 =1
net.ipv6.conf.default.disable_ipv6 =1
EOF
%end
but it somehow messes up, because the created file looks like this:
'disableipv6.conf'$'\r'
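The trailing $'\r' in the file name suggests the Kickstart file itself has DOS (CRLF) line endings, so the carriage return at the end of the redirection line becomes part of the file name. A minimal sketch of a fix (the Kickstart file name oracle8.ks is only a placeholder):

# strip carriage returns from the Kickstart file before feeding it to the installer
sed -i 's/\r$//' oracle8.ks        # or: dos2unix oracle8.ks
# with clean line endings the heredoc produces the intended file:
cat <<\EOF >/etc/sysctl.d/disableipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF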

Moving a mount's origin directory

I have experienced some weird behaviour in our production environment.
We have an NFS server and a Linux server which runs our application.
On the Linux server there is a mount of the NFS,
from: /configuration/data (on the NFS)
to: /software (on the Linux server).
The application modifies the files there periodically.
Some time ago someone accidentally moved the "data" folder to /configuration/other/data.
The application kept running without any side effects and kept modifying the files periodically, and the files inside /configuration/other/data also changed, even though the mount source (/configuration/data) now points to nothing.
I guess there is some kind of shortcut to the origin of the mount that gets updated on the folder relocation, but that is just a guess.
I would like to know why and how this behaviour is possible, and how it works internally.
A file descriptor refers to a file. You can move the file, you can remove the file - the file descriptor still refers to the same "entity". So in a shell you can, for example:
# open fd 10 to refer to /tmp/10
$ exec 10>/tmp/10
# Just write something so that it works
$ echo abc >&10
$ cat /tmp/10
abc
# Move the file to some dir
$ mkdir /tmp/dir
$ mv /tmp/10 /tmp/dir/10
# now it still writes to the same "file", even when moved
$ echo def >&10
$ cat /tmp/dir/10
abc
def
# You can remove the file and still access it
# The file still "exists"
$ exec 11</tmp/dir/10
$ rm /tmp/dir/10
$ echo 123 >&10
$ cat <&11
abc
def
123
Creating a file descriptor to a file and then removing the file is a common pattern in C programs:
char tmpl[] = "/tmp/exampleXXXXXX";
int fd = mkstemp(tmpl);   /* creates the file and returns an open descriptor */
unlink(tmpl);             /* removes the name; the open fd keeps the file alive */
/* ... use fd ... */
The file is really "removed" only when there are no links to it and the last file descriptor referring to it is closed. Research POSIX file descriptors and see for example man 2 unlink and other resources explaining what a file descriptor is.
Most probably your application keeps file descriptors open to files inside /configuration/data, so after the folder was moved the data became available at the new location, but the application still uses the same file descriptors.
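A way to verify this on the running system (a sketch; <pid> stands for the application's process ID):

# the symlinks under /proc/<pid>/fd show the files' current names,
# so after the move they point at the /configuration/other/data paths
ls -l /proc/<pid>/fd | grep configuration
# alternatively, lsof lists the files the process still holds open
lsof -p <pid> | grep configuration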

Adapt command to create a csv file from storage content, including date (time) and file size

According to the thread:
Linux: fast creating of formatted output file (csv) from find command
there is a suggested bash command, including awk (which I don't understand):
find /mnt/sda2/ | awk 'BEGIN{FS=OFS="/"}!/.cache/ {$2=$3=""; new=sprintf("%s",$0);gsub(/^\/\/\//,"",new); printf "05;%s;/%s\n",$NF,new }' > $p1"Seagate-4TB-S2-BTRFS-1TB-Dateien-Verzeichnisse.csv"
With this command, I am able to create a csv file containing "05;file name;full path and file name" for the directories and files of my device mounted on /mnt/sda2. Thanks again to -> tink
How must I adapt the above command to also include the date (and time) and the file size?
Thank you in advance,
-Linuxfluesterer
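One possible direction (a sketch, not a drop-in replacement for the command above): GNU find can print the modification date/time and the size itself via -printf, which avoids most of the awk post-processing. Note that this prints the full path rather than the path with the first two components stripped, and the -not -path test only approximates the awk !/.cache/ filter:

find /mnt/sda2/ -not -path '*/.cache/*' \
    -printf '05;%f;%p;%TY-%Tm-%Td %TH:%TM;%s\n' \
    > "$p1"Seagate-4TB-S2-BTRFS-1TB-Dateien-Verzeichnisse.csv
# %f = file name, %p = full path, %T... = modification date/time, %s = size in bytes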

Need help - Getting an error: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)

Checking if anybody else has had a similar issue.
Code in the shell script:
## Convert file into Unix format first.
## THIS is IMPORTANT.
#####################
dos2unix "${file}" "${file}";
#####################
## Actual DB Change
db_change_run_op="$(ssh -qn ${db_ssh_user}@${dbserver} "sqlplus $dbuser/${pswd}@${dbname} <<ENDSQL
@${file}
ENDSQL
")";
Summary:
1. From a shell script (on a SunOS source server) I'm running a sqlplus session via ssh on a target machine to run a .sql script.
2. The output of this target ssh session (running sqlplus) is stored in a variable within the shell script. Variable name: db_change_run_op (as shown above in the code snippet).
3. For most of the .sql scripts (whose name the "${file}" variable holds), the shell script runs fine and returns the output of the .sql file (run on the target server via ssh from the source server), provided the .sql file contains something that doesn't take much time to complete or generates a reasonable number of output lines.
For example, if the .sql I want to run does the following, then it runs fine:
select * from database123;
update table....
alter table..
insert ....
...some procedure .... which doesn't take much time to create....
...some more sql commands which complete..within few minutes to an hour....
4. Now, the issue I'm facing is:
Let's assume I have a .sql file where a single select command on a table returns a couple of hundred thousand up to 1-5 million lines, i.e.
select * from database321;
and assume the above generates that amount of output.
In this case, I'm getting the following error message thrown by the shell script (running on the source server).
Error:
*./db_change_load.sh: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)*
My questions:
1. Did the .sql script complete? I assume yes. But how can I directly get the output LOG file of the .sql run generated on the target server? If this can be done, then I won't need the variable to hold the output of the whole ssh sqlplus session and then create a log file on the source server by doing [ echo "${db_change_run_op}" > sql.${file}.log ].
2. I assume the error comes up because the output (number of lines) generated by the ssh session, i.e. by sqlplus, is so big that it exceeds what a Bash variable can hold, hence the xrealloc error.
Please advise on the above two points if you have any experience, or how I can solve this.
I assume I'll try using " | tee /path/on.target.ssh.server/sql.${file}.log " right after <<ENDSQL or after the final closing ENDSQL (heredoc keyword); wondering whether that would work or not.
OK, got it working. No more storing the output in a variable and then echoing $var to a file.
Luckily, I had the same mount point on both the source and target servers, i.e. if I go to /scm on the source and on the target, the mount (df -kvh .) shows the same Share/NAS mount:
Filesystem size used avail capacity Mounted on
ServerNAS02:/vol/vol1/scm 700G 560G 140G 81% /scm
Now, instead of using a variable to store the whole output of the ssh session calling sqlplus, all I did was create a file on the remote server using the following code.
## Actual DB Change
#db_change_run_op="$(ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
#set echo off
#set echo on
#set timing on
#set time on
#set serveroutput on size unlimited
#@${file}
#ENDSQL
#")";
ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
It seems like "unlimited" doesn't work in 11g, so I had to use the 1000000 value (these small sql commands help to show each command with its output, show the clock time for each output line, etc.).
But basically, in the above code, I'm calling the ssh command directly instead of using the variable="$(.....)" approach, and right after <<ENDSQL the output is piped through tee straight to a log file on the remote server.
Even if I didn't have the same mount, I could have tee'd the output to a file on a remote server path (not available from the source server), but at least I can then see up to what level the .sql command completed or generated output, as the output now goes directly to a file on the remote server and Unix/Linux doesn't care much about the file size until there's no space left.
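For the case without a shared mount, one sketch (reusing the variable names from the script above; the /tmp path and the scp step are assumptions about how you would retrieve the log) is to tee to a purely remote path and copy the log back afterwards:

ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee /tmp/${sql_run_output_file}.ssh.log
@${file}
ENDSQL
"
# pull the log back to the source server once the run has finished
scp ${pdt_usshu_dbs}@${dbs}:/tmp/${sql_run_output_file}.ssh.log .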

Compressing the core files during core generation

Is there a way to compress core files during core dump generation?
If storage space is limited on the system, is there a way to conserve it when core dumps are needed, by compressing them immediately?
Ideally the method would work on older versions of Linux such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
For embedded Linux systems, the following change works well to generate compressed core files, in two steps.
step 1: create a handler script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - >"/var/core/core-$1.$2.gz"
(finish the cat with Ctrl+D)
step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(finish the cat with Ctrl+D)
As suggested by the other answer, the Linux kernel's /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the core file to a program. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the pattern; however, it doesn't seem to work for me. I expect the reason is that the kernel doesn't treat the > character as output redirection; it probably just passes it as a parameter to gzip.
To avoid this problem, as others suggested, you can create your own script somewhere; I am using /home/<username>/crashes/core.sh. Create it using the following command, replacing <username> with your user. Alternatively you can of course change the entire path.
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
Now this script will take 5 input parameters, concatenate them and append them to the core path. The full paths must be specified in ~/crashes/core.sh, and the location of this script can be changed. Now let's tell the kernel to use our executable with parameters when generating the core file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path changed to match the location and name of your core.sh script). The next step is to crash some program; let's create an example crashing .cpp file:
int main() {
    int *a = nullptr;
    int b = *a;
}
After compiling and running it, there are two options; either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
In case we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c shows the current limit for core files
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote; I suggest first testing with a basic dump path, to check that the other items aren't the reason. The following should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
I know there is already an answer to this question, but it wasn't obvious to me why it wasn't working "out of the box", so I wanted to summarize my findings; hope it helps someone.
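As a quick end-to-end check of the pieces above (the ulimit, the core.sh handler and the core_pattern), here is a short sketch; it assumes the same paths as in this answer and that the crashing example was compiled to ./a.out:

ulimit -c unlimited                    # make sure core file size is not limited to 0
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
./a.out                                # run the crashing example program
ls ~/crashes/                          # a core-<exe>-<pid>-<host>-<time>.gz file should appear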
