Can I increase Linux entropy by using an RNG daemon without a hardware generator?

I want to continuously top up /proc/sys/random/entropy_avail whenever it drops.
I first checked rngd (https://wiki.archlinux.org/index.php/Rng-tools).
It says /dev/random is very slow, since it only collects entropy from device drivers and other (slow) sources, which I take to be the reason for using rngd.
It also says rngd mainly uses hardware true random number generators (TRNGs), present in modern hardware like recent AMD/Intel processors, VIA Nano or even the Raspberry Pi.
However, when I start rngd it says
[root@localhost init.d]# rngd
can't open entropy source(tpm or intel/amd rng)
Maybe RNG device modules are not loaded
But my CPU doesn't have Intel RDRAND, as confirmed by cat /proc/cpuinfo | grep rdrand:
[root@localhost init.d]# cat /proc/cpuinfo | grep rdrand | wc -l
0
Are there any other entropy sources I can use?
Alternatively, is it possible to write a script that increases /proc/sys/random/entropy_avail?

Try this:
sudo apt-get install haveged
haveged feeds the kernel entropy pool using the HAVEGE algorithm, which harvests timing jitter from the processor itself, so it works without a dedicated hardware RNG.
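To check that it helps, compare the pool before and after starting the daemon; a minimal sketch, assuming a systemd-based distribution (the service name may differ):
cat /proc/sys/random/entropy_avail    # note the value while the pool is low
sudo systemctl start haveged          # or: sudo service haveged start
cat /proc/sys/random/entropy_avail    # should climb back up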

Related

Measure LLC/L3 Cache Miss Rate on AMD Zen2 CPU

I have a question related to this one.
I want to (programmatically) measure L3 hits (accesses) and misses on an AMD EPYC 7742 CPU (Zen2). I run Linux kernel 5.4.0-66-generic on Ubuntu Server 20.04.2 LTS. According to the question linked above, the events rFF04 (L3LookupState) and r0106 (L3CombClstrState) should represent the L3 accesses and misses, respectively. Furthermore, kernel 5.4 should support these events.
However, when measuring this with perf, I run into issues. Similar to the question linked above, if I run numactl -C 0 -m 0 perf stat -e instructions,cycles,r0106,rFF04 ./benchmark, I only measure 0 values. If I try to use numactl -C 0 -m 0 perf stat -e instructions,cycles,amd_l3/r8001/,amd_l3/r0106/, perf complains about "unknown terms". If I use the perf event names, i.e. numactl -C 0 -m 0 perf stat -e instructions,cycles,l3_request_g1.caching_l3_cache_accesses,l3_comb_clstr_state.request_miss, perf outputs <not supported> for these events.
Furthermore, I actually want to measure this using perf's C API. Currently, I set up a perf_event_attr with type PERF_TYPE_RAW and config set to, e.g., 0x8001. How do I get the amd_l3 PMU into my perf_event_attr object? Otherwise it is equivalent to numactl -C 0 -m 0 perf stat -e instructions,cycles,r0106,rFF04 ./benchmark, which measures undefined values.
Thank you so much for your help.
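No answer is recorded here, but as a starting point: dynamic PMUs such as amd_l3 advertise a numeric type id in sysfs, and that id (rather than PERF_TYPE_RAW) goes into perf_event_attr.type. A minimal sketch; the 0x0106 encoding is taken from the question above and is not verified here:

/* Sketch only: open one raw event on the amd_l3 PMU via perf_event_open(2). */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    /* Dynamic PMUs advertise their numeric type id in sysfs. */
    FILE *f = fopen("/sys/bus/event_source/devices/amd_l3/type", "r");
    if (!f) {
        perror("amd_l3 PMU not available");
        return 1;
    }
    int pmu_type;
    if (fscanf(f, "%d", &pmu_type) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);

    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = pmu_type;   /* the amd_l3 PMU, instead of PERF_TYPE_RAW */
    attr.config = 0x0106;   /* raw event encoding, taken from the question */

    /* amd_l3 is an uncore-style PMU: it typically only supports
     * system-wide counting (pid = -1) pinned to one CPU. */
    int fd = perf_event_open(&attr, -1, 0, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    /* ... run the workload on CPU 0 here, then read the count ... */
    long long count;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("count: %lld\n", count);
    close(fd);
    return 0;
}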

cpupower utility on Linux - how to get a list of available frequencies

I am trying to get a list of available frequencies for my CPU using the cpupower tool.
I am executing the following command:
cpupower -c 0,1,4,5 frequency-info
This gives me a lot of information, but I need a list of the available frequencies to which I can set these CPUs.
On older versions of Fedora, I used to do this:
$ cat /sys/devices/system/cpu/cpu3/cpufreq/scaling_available_frequencies
2201000 2200000 2100000 2000000 1800000 1700000 1600000 1500000 1300000 1200000 1100000
but on Fedora 20, cpufreq is obsolete. I googled and found that cpupower offers the same functionality as cpufreq.
How do you use it to get a list of available frequencies?
cpupower frequency-info
should give you that information; with drivers that expose discrete steps (e.g. acpi-cpufreq), look for the "available frequency steps" line in its output.
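If the driver exposes discrete steps (intel_pstate, for example, does not), a sketch for pulling out just the list:
cpupower -c 0,1,4,5 frequency-info | grep -i "available frequency steps"
# where the driver supports it, the sysfs file also still exists:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies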

Disabling LRO using ethtool?

My NIC driver does not support H/W LRO but emulates LRO in the driver. GRO (which is a Linux network-stack feature) can be disabled using 'ethtool -K ethX gro off'. Is the same available for LRO? I know most distros have either LRO or GRO. So if LRO is disabled using ethtool, does that turn off the H/W LRO feature or the LRO emulation that I do in my driver?
Yes, if your ethtool is new enough.
On my Ubuntu box (Natty Narwhal), "man ethtool" and "ethtool --help" show that LRO is controlled just like GRO; i.e.
ethtool -K ethX lro off
Run "ethtool --help | grep lro" to see if yours supports it.
ethtool -K ethX lro off
should prevent the system from performing LRO. However, many of the drivers that still use LRO are broken and do not honor ethtool.
http://www.spinics.net/lists/netdev/msg149013.html
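To verify that the toggle took effect, check the feature listing (a sketch; feature names can vary slightly across ethtool versions):
ethtool -k ethX | grep large-receive-offload
# expected after disabling: large-receive-offload: off
# a trailing "[fixed]" means the driver does not allow changing it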

What is the significance of the numbers in the name of the flush processes for newer linux kernels?

I am running kernel 2.6.33.7.
Previously, I was running v2.6.18.x. On 2.6.18, the flush processes were named pdflush.
After upgrading to 2.6.33.7, the flush processes have a name of the form "flush-<major>:<minor>".
For example, I currently see the flush process "flush-8:32" popping up in top.
While googling for an answer, I saw examples like "flush-8:38", "flush-8:64" and "flush-253:0", to name a few.
I understand what the flush process itself does; my question is: what is the significance of the numbers at the end of the process name? What do they represent?
Thanks
Those are the device numbers (major:minor) used to identify block devices. A kernel thread may be spawned to handle a particular device.
(On one of my systems, block devices are currently numbered as shown below. They may change from boot to boot or hotplug to hotplug.)
$ grep ^ /sys/class/block/*/dev
/sys/class/block/dm-0/dev:254:0
/sys/class/block/dm-1/dev:254:1
/sys/class/block/dm-2/dev:254:2
/sys/class/block/dm-3/dev:254:3
/sys/class/block/dm-4/dev:254:4
/sys/class/block/dm-5/dev:254:5
/sys/class/block/dm-6/dev:254:6
/sys/class/block/dm-7/dev:254:7
/sys/class/block/dm-8/dev:254:8
/sys/class/block/dm-9/dev:254:9
/sys/class/block/loop0/dev:7:0
/sys/class/block/loop1/dev:7:1
/sys/class/block/loop2/dev:7:2
/sys/class/block/loop3/dev:7:3
/sys/class/block/loop4/dev:7:4
/sys/class/block/loop5/dev:7:5
/sys/class/block/loop6/dev:7:6
/sys/class/block/loop7/dev:7:7
/sys/class/block/md0/dev:9:0
/sys/class/block/md1/dev:9:1
/sys/class/block/sda/dev:8:0
/sys/class/block/sda1/dev:8:1
/sys/class/block/sda2/dev:8:2
/sys/class/block/sdb/dev:8:16
/sys/class/block/sdb1/dev:8:17
/sys/class/block/sdb2/dev:8:18
/sys/class/block/sdc/dev:8:32
/sys/class/block/sdc1/dev:8:33
/sys/class/block/sdc2/dev:8:34
/sys/class/block/sdd/dev:8:48
/sys/class/block/sdd1/dev:8:49
/sys/class/block/sdd2/dev:8:50
/sys/class/block/sde/dev:8:64
/sys/class/block/sdf/dev:8:80
/sys/class/block/sdg/dev:8:96
/sys/class/block/sdh/dev:8:112
/sys/class/block/sdi/dev:8:128
/sys/class/block/sr0/dev:11:0
/sys/class/block/sr1/dev:11:1
/sys/class/block/sr2/dev:11:2
You should also be able to figure this out by searching for those numbers in /proc/self/mountinfo, eg:
$ grep 8:32 /proc/self/mountinfo
25 22 8:32 / /var rw,relatime - ext4 /dev/mapper/sysvg-var rw,barrier=1,data=ordered
This has the side benefit of working with NFS as well:
$ grep 0:73 /proc/self/mountinfo
108 42 0:73 /foo /mnt/foo rw,relatime - nfs host.domain.com:/volume/path rw, ...
Note, the data I included here is fabricated, but the mechanism works just fine.
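Another quick way to resolve a major:minor pair, assuming a kernel new enough to have /sys/dev (2.6.27 or later): each pair is a symlink to the device's sysfs directory.
$ basename "$(readlink /sys/dev/block/8:32)"
sdc
(The sdc result follows from the 8:32 entry in the listing above.)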

Get details of RAID configuration on Linux

How do I get the details of a RAID configuration on Linux?
mdadm -D /dev/mdXX will give you the details of the RAID configuration.
cat /proc/mdstat will give details about the RAID algorithm, level, chunk size, etc.
That applies if the RAID is software.
For hardware RAID, you could run this command:
lspci -vv | grep -i raid
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 01)
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
If you're talking about a running array:
cat /proc/mdstat
If you're talking about the mdadm config file, it's usually in /etc or /etc/mdadm depending on the distribution you're running on. The following command should find it in any event:
find /etc -name '*mdadm*'
ETA: Also, I would strongly recommend that you carefully study the mdadm man page so that you are very familiar with that utility. Knowing that utility well will save your bacon at some point.
mdadm --detail /dev/md0
(or whatever /dev/mdXXX you are using)
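If you don't know which md devices exist in the first place, a quick sketch for enumerating them:
mdadm --detail --scan    # prints one ARRAY line per active array
cat /proc/mdstat         # shows which /dev/mdX nodes are assembled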
