I use pv -f in a pipeline, as in 'command | pv -f | command2'. pv shows a progress bar like this:
83.6MB 0:00:03 [27.9MB/s] [ <=> ]
When pv exits it shows more information. How can I show only the progress bar?
500+0 records in28.3MB/s] [ <=> ]
500+0 records out
524288000 bytes (524 MB) copied, 17.6912 s, 29.6 MB/s
500MB 0:00:17 [28.3MB/s] [ <=> ]
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 17.6956 s, 29.6 MB/s
Edit:
My test case was
dd if=/dev/zero bs=100k count=80000 | pv -f | dd of=/dev/null
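The extra lines in the test case come from dd on both ends of the pipe, not from pv. A minimal sketch, assuming GNU dd (which supports status=none) and pv installed, that silences both dd invocations so only pv's bar reaches the terminal:

```shell
# status=none (GNU dd) suppresses the "records in/out" and summary lines;
# pv -f still draws its progress bar on stderr
dd if=/dev/zero bs=100k count=80000 status=none | pv -f | dd of=/dev/null status=none
```

On dd versions without status=none, redirecting each dd's stderr to /dev/null (2>/dev/null) has the same effect, at the cost of hiding real errors too.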
I have a question that is probably simple, but not for me.
I created a bbclass named spi-nand-ubi.bbclass. In this file, I have a few lines like
dd if=${DEPLOY_DIR_IMAGE}/${KERNEL_DEVICETREE} of=${SPINAND} bs=1k seek=512 conv=notrunc
dd if=${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} of=${SPINAND} bs=1k seek=640 conv=notrunc
The lines above work fine, but the following one doesn't:
dd if=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ubi of=${SPINAND} bs=1k seek=5504 conv=notrunc
I always get an error like
dd: failed to open '/home/mw/yocto/tmp/work/indus-poky-linux-gnueabi/console-image/1.0-r0/deploy-console-image-image-complete/console-image-indus-20230120084720.rootfs.ubi': No such file or directory
When I manually cd into
cd /home/mw/yocto/tmp/work/indus-poky-linux-gnueabi/console-image/1.0-r0/deploy-console-image-image-complete
the file with the provided name exists.
What did I miss or misunderstand? Please give me a hint on how to resolve this issue.
Thanks
Directory content of
/home/mw/yocto/tmp/work/indus-poky-linux-gnueabi/console-image/1.0-r0/deploy-console-image-image-complete
console-image-indus-20230120082905.rootfs.ext4
console-image-indus-20230120082905.rootfs.jffs2
console-image-indus-20230120082905.rootfs.manifest
console-image-indus-20230120082905.rootfs.sunxi-spinand
console-image-indus-20230120082905.rootfs.tar
console-image-indus-20230120082905.rootfs.tar.gz
console-image-indus-20230120082905.rootfs.tar.xz
console-image-indus-20230120082905.rootfs.ubi
console-image-indus-20230120082905.rootfs.ubifs
console-image-indus-20230120082905.testdata.json
console-image-indus-20230120084720.rootfs.sunxi-spinand
console-image-indus-20230120084720.rootfs.tar
console-image-indus-20230120084720.rootfs.tar.gz
console-image-indus-20230120084720.rootfs.tar.xz
console-image-indus.ext4
console-image-indus.jffs2
console-image-indus.manifest
console-image-indus.testdata.json
console-image-indus.ubi
console-image-indus.ubifs
ubinize-console-image-indus-20230120082905.cfg
The content of my spi-nand-ubi.bbclass:
inherit image_types
#
# Create an image that can be written to a SPI NAND (128 MB) flash using dd.
# Written for Indus board to simplify programming process and to write only one combined image file.
#
# The image layout used (valid for 128 MB flashes) is:
#
# OFFSET PARTITION SIZE PARTITION
# 0 -> 458752(0x80000) - SPL U-Boot with NAND offset
# 512*1024 -> 65536(0x20000) - The dtb file
# 640*1024 -> 4980736(0x4C0000) - Kernel
# 5504*1024 -> 122494976(0x7AA0000) - Ubifs rootfs (*.ubi for NAND)
#
# The sum of all partitions should equal the total flash size,
# SUM = 0x70000+0x10000+0x4C0000+0x7AA0000= 0x7A12000(128000000)
#
# Before changing partition offsets here, change them first in the U-Boot DTS, defconfig, and kernel DTS
# This image depends on the rootfs image
RDEPENDS_mtd-utils-tests += "bash"
SPINAND_ROOTFS_TYPE ?= "ubifs"
IMAGE_TYPEDEP_sunxi-spinand= "${SPINAND_ROOTFS_TYPE}"
do_image_sunxi_spinand[depends] += " \
mtools-native:do_populate_sysroot \
dosfstools-native:do_populate_sysroot \
virtual/kernel:do_deploy \
virtual/bootloader:do_deploy \
"
# The NAND SPI Flash image name
SPINAND = "${IMGDEPLOYDIR}/${IMAGE_NAME}.rootfs.sunxi-spinand "
IMAGE_CMD:sunxi-spinand () {
${DEPLOY_DIR_IMAGE}/mknanduboot.sh ${DEPLOY_DIR_IMAGE}/${SPL_BINARY} ${DEPLOY_DIR_IMAGE}/${NAND_SPL_BINARY}
dd if=/dev/zero of=${SPINAND} bs=1M count=16
dd if=${DEPLOY_DIR_IMAGE}/${NAND_SPL_BINARY} of=${SPINAND} bs=1k conv=notrunc
dd if=${DEPLOY_DIR_IMAGE}/${KERNEL_DEVICETREE} of=${SPINAND} bs=1k seek=512 conv=notrunc
dd if=${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} of=${SPINAND} bs=1k seek=640 conv=notrunc
mkfs.ubifs -F -r ${WORKDIR}/rootfs -m 2048 -e 126976 -c 2048 -o ${DEPLOY_DIR_IMAGE}/rootfs.ubifs
#ubinize -vv -o ${DEPLOY_DIR_IMAGE}/rootfs.ubi -m 2048 -p 131072 -s 2048 ${DEPLOY_DIR_IMAGE}/config.ini
#ubinize -vv -o ${DEPLOY_DIR_IMAGE}/rootfs.ubi -m 2048 -p 131072 -s 2048 ${DEPLOY_DIR_IMAGE}/ubinize${vname}-${IMAGE_NAME}.cfg
dd if=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.ubi of=${SPINAND} bs=1k seek=5504 conv=notrunc
#dd if=${DEPLOY_DIR_IMAGE}/rootfs.ubi of=${SPINAND} bs=1k seek=5504 conv=notrunc
}
I did a few experiments and I think I found the reason: the ubi and ubifs images are generated later than the recipe that uses my bbclass, which is why it doesn't find the required files. The question is: what should I add to make it wait until the ubi and ubifs images are finalized?
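A sketch of a possible fix, assuming a Yocto release that uses the colon override syntax (as the class's IMAGE_CMD:sunxi-spinand line suggests): the IMAGE_TYPEDEP_sunxi-spinand assignment in the class uses the old underscore syntax, so the dependency on the ubi/ubifs image types may never be registered, and BitBake is then free to schedule the sunxi-spinand image before the .ubi file exists. Declaring the dependency with colon syntax, and including ubi explicitly, should make the image task wait for it:

```bitbake
# Sketch: register the ubi type as a prerequisite of sunxi-spinand
# (colon override syntax, required since Yocto 3.4 / Honister),
# so the .ubi file is built before IMAGE_CMD:sunxi-spinand runs
IMAGE_TYPEDEP:sunxi-spinand = "ubi ${SPINAND_ROOTFS_TYPE}"
```

With the dependency registered, do_image_sunxi_spinand is scheduled after the ubi image task, so the file under ${IMGDEPLOYDIR} exists when dd runs.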
Inside a Docker container I test the
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 2> >( grep copied )
line execution. I get into the container in two ways.
1) The classical one is docker exec -it 2b65c84ddce2 /bin/sh
When executing the line inside the container (inherited from Alpine), I'm getting
/bin/sh: syntax error: unexpected redirection because of something near >(
2) When I enter the container with the bash executor, docker exec -it 2b65c84ddce2 /bin/bash:
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 2> >( grep copied ) returns no output
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 returns only 2 lines of output, while the expectation is 3:
1+0 records in
1+0 records out
At the host level, the same dd command returns 3 lines, like this:
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 0.0109968 s, 46.6 MB/s
and with redirection the output is the last line:
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 2> >( grep copied )
512000 bytes (512 kB, 500 KiB) copied, 0.0076261 s, 67.1 MB/s
So how can I get the last line of dd output from inside the Docker container?
PS.
Redirecting stderr to stdout doesn't help in general:
/ # dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 2>&1 | grep copied
/ # dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 2>&1
1000+0 records in
1000+0 records out
while at the host system it works
$ dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 2>&1 | grep copied
512000 bytes (512 kB, 500 KiB) copied, 0.00896706 s, 57.1 MB/s
host:
dd --v
dd (coreutils) 8.30
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Paul Rubin, David MacKenzie, and Stuart Kemp.
container:
/ # dd --v
BusyBox v1.31.1 () multi-call binary.
Usage: dd [if=FILE] [of=FILE] [ibs=N obs=N/bs=N] [count=N] [skip=N] [seek=N]
[conv=notrunc|noerror|sync|fsync]
[iflag=skip_bytes|fullblock] [oflag=seek_bytes|append]
Copy a file with converting and formatting
if=FILE Read from FILE instead of stdin
of=FILE Write to FILE instead of stdout
bs=N Read and write N bytes at a time
ibs=N Read N bytes at a time
obs=N Write N bytes at a time
count=N Copy only N input blocks
skip=N Skip N input blocks
seek=N Skip N output blocks
conv=notrunc Don't truncate output file
conv=noerror Continue after read errors
conv=sync Pad blocks with zeros
conv=fsync Physically write data out before finishing
conv=swab Swap every pair of bytes
iflag=skip_bytes skip=N is in bytes
iflag=fullblock Read full blocks
oflag=seek_bytes seek=N is in bytes
oflag=append Open output file in append mode
status=noxfer Suppress rate output
status=none Suppress all output
N may be suffixed by c (1), w (2), b (512), kB (1000), k (1024), MB, M, GB, G
They are, in fact, different.
For anyone searching for this question:
the dd used in the container is BusyBox's. The third line is optional output that is enabled when compiling BusyBox from source; the precompiled versions have it disabled:
ENABLE_FEATURE_DD_THIRD_STATUS_LINE must be defined.
See https://git.busybox.net/busybox/tree/coreutils/dd.c, line 166.
#if ENABLE_FEATURE_DD_THIRD_STATUS_LINE
# if ENABLE_FEATURE_DD_STATUS
	if (G.flags & FLAG_STATUS_NOXFER) /* status=noxfer active? */
		return;
	//TODO: should status=none make dd stop reacting to USR1 entirely?
	//So far we react to it (we print the stats),
	//status=none only suppresses final, non-USR1 generated status message.
# endif
	fprintf(stderr, "%llu bytes (%sB) copied, ",
			G.total_bytes,
			/* show fractional digit, use suffixes */
			make_human_readable_str(G.total_bytes, 1, 0)
	);
	/* Corner cases:
	 * ./busybox dd </dev/null >/dev/null
	 * ./busybox dd bs=1M count=2000 </dev/zero >/dev/null
	 * (echo DONE) | ./busybox dd >/dev/null
	 * (sleep 1; echo DONE) | ./busybox dd >/dev/null
	 */
	seconds = (now_us - G.begin_time_us) / 1000000.0;
	bytes_sec = G.total_bytes / seconds;
	fprintf(stderr, "%f seconds, %sB/s\n",
			seconds,
			/* show fractional digit, use suffixes */
			make_human_readable_str(bytes_sec, 1, 0)
	);
#endif
}
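Given the above, one practical workaround is to replace the BusyBox applet rather than recompile it. A sketch, assuming an Alpine-based image with network access: installing GNU coreutils puts a dd in the container that always prints the final statistics line:

```dockerfile
FROM alpine:3.18
# GNU coreutils installs a dd that always prints the
# "bytes ... copied" summary line the BusyBox applet omits
RUN apk add --no-cache coreutils
```

After rebuilding, dd ... 2>&1 | grep copied inside the container behaves the same as on the host. The alpine:3.18 tag is only an example.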
I am looking for a way to create an Oracle ASM udev rules file for Linux. I have two input files: file1 has the ASM disk requirements and file2 has the disk information.
For example, line 2 of file1 shows that DATA12 needs 3 disks (DATA12_01, DATA12_02, DATA12_03) of 128 GB each. file2 lists all disks with their sizes. From these two input files I need to create the output file shown below.
cat file1
Count - size - name
3 - 128 GB DATA12
1 - 128 GB TEMP02
2 - 4 GB ARCH03
2 - 1 GB ARCH04
1 - 3 GB ORAC01
cat file2
UUID Size
360060e80166ef70000016ef700006700 128.00 GiB
360060e80166ef70000016ef700006701 128.00 GiB
360060e80166ef70000016ef700006702 128.00 GiB
360060e80166ef70000016ef700006703 128.00 GiB
360060e80166ef70000016ef700006730 4.00 GiB
360060e80166ef70000016ef700006731 4.00 GiB
360060e80166ef70000016ef700006733 1.00 GiB
360060e80166ef70000016ef700006734 1.00 GiB
360060e80166ef70000016ef700006735 3.00 GiB
Output File
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006700", SYMLINK+="udevlinks/DATA12_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006701", SYMLINK+="udevlinks/DATA12_02"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006702", SYMLINK+="udevlinks/DATA12_03"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006703", SYMLINK+="udevlinks/TEMP02_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006730", SYMLINK+="udevlinks/ARCH03_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006731", SYMLINK+="udevlinks/ARCH03_02"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006733", SYMLINK+="udevlinks/ARCH04_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006734", SYMLINK+="udevlinks/ARCH04_02"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006735", SYMLINK+="udevlinks/ORAC01_01"
Here is one in AWK:
$ cat > test.awk
BEGIN {FS="([.]| +)"}   # field separator, to deal with the "." in file2's 128.00
FNR==1 {next}           # skip header
NR==FNR {               # read available disks into the pool from file1
    for(i=1; i<=$1; i++)
        a[$5"_0"i]=$3   # name the disks and put them into the pool
    next
}
{
    for(i in a) {       # look for a disk of the right size
        if(a[i]==$2) {  # when found, print...
            printf "ACTION==\"add|change\", ENV{DM_NAME}==\"%s\", SYMLINK+=\"udevlinks/%s\"\n", $1, i
            delete a[i] # ... and remove it from the pool
            break
        }
    }                   # if no device was found:
    old=len; len=length(a); if(old==len) {print "No device found for ",$0}
}
$ awk -f test.awk file1 file2
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006700","SYMLINK+="udevlinks/DATA12_01"
ACTION=="add|change", ENV{DM_NAME}=="360060e80166ef70000016ef700006701","SYMLINK+="udevlinks/DATA12_02"
...
No device found for THIS_IS_AN_EXAMPLE_OF_MISSING_DISK 666.00 GiB
Because the disk search uses for(i in a), the order in which disks are taken from the pool is not guaranteed.
This question already has answers here:
Pipe only STDERR through a filter
(7 answers)
Closed 8 years ago.
I've been trying to use the following command on my server
dd if=/dev/zero bs=1M count=1024 | md5sum
The output:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.92245 s, 367 MB/s
cd573cfaace07e7949bc0c46028904ff -
How do I make it show only the speed (367 MB/s)? The status is printed to stderr.
I'm currently using awk, but it showed the md5 hash instead.
Any help is appreciated :)
First, a function to simulate your command:
simulation() {
echo "1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.92245 s, 367 MB/s" >&2
echo "cd573cfaace07e7949bc0c46028904ff -"
}
$ simulation >/dev/null
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.92245 s, 367 MB/s
$ simulation 2>/dev/null
cd573cfaace07e7949bc0c46028904ff -
Then, the solution: redirecting stderr to a process substitution that displays the desired output back to stderr, capturing stdout in a variable.
$ md5sum=$( simulation 2> >(sed -n '/MB\/s/ {s/.*, //p; q}' >&2) )
367 MB/s
$ echo $md5sum
cd573cfaace07e7949bc0c46028904ff -
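An alternative sketch using only POSIX redirections, with no bashisms: swap dd's stdout and stderr through a third file descriptor, so grep filters the status line while the data still reaches md5sum:

```shell
# fd 3 is a copy of the group's stdout (the pipe to md5sum).
# For dd: 2>&1 sends stderr into the grep pipe, then 1>&3 sends
# the actual data on to md5sum; grep's match goes to the terminal's stderr.
{ dd if=/dev/zero bs=1M count=1024 2>&1 1>&3 | grep -o '[0-9.]* [A-Z]B/s' >&2; } 3>&1 | md5sum
```

This prints the speed (e.g. 367 MB/s) on stderr and the checksum on stdout, so the two streams can still be captured independently.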
How can I see memory usage by user in Linux (CentOS 6)?
For example:
USER USAGE
root 40370
admin 247372
user2 30570
user3 967373
This one-liner worked for me on at least four different Linux systems with different distros and versions. It also worked on FreeBSD 10.
ps hax -o rss,user | awk '{a[$2]+=$1;}END{for(i in a)print i" "int(a[i]/1024+0.5);}' | sort -rnk2
About the implementation, there are no shell loop constructs here; this uses an associative array in awk to do the grouping & summation.
Here's sample output from one of my servers that is running a decent sized MySQL, Tomcat, and Apache. Figures are in MB.
mysql 1566
joshua 1186
tomcat 353
root 28
wwwrun 12
vbox 1
messagebus 1
avahi 1
statd 0
nagios 0
Caveat: like most similar solutions, this is only considering the resident set (RSS), so it doesn't count any shared memory segments.
EDIT: A more human-readable version.
echo "USER RSS PROCS" ; echo "-------------------- -------- -----" ; ps hax -o rss,user | awk '{rss[$2]+=$1;procs[$2]+=1;}END{for(user in rss) printf "%-20s %8.0f %5.0f\n", user, rss[user]/1024, procs[user];}' | sort -rnk2
And the output:
USER RSS PROCS
-------------------- -------- -----
mysql 1521 1
joshua 1120 28
tomcat 379 1
root 19 107
wwwrun 10 10
vbox 1 3
statd 1 1
nagios 1 1
messagebus 1 1
avahi 1 1
Per-user memory usage in percent using standard tools:
for _user in $(ps haux | awk '{print $1}' | sort -u)
do
ps haux | awk -v user=${_user} '$1 ~ user { sum += $4} END { print user, sum; }'
done
or for more precision:
TOTAL=$(free | awk '/Mem:/ { print $2 }')
for _user in $(ps haux | awk '{print $1}' | sort -u)
do
ps hux -U ${_user} | awk -v user=${_user} -v total=$TOTAL '{ sum += $6 } END { printf "%s %.2f\n", user, sum / total * 100; }'
done
The first version just sums up the memory percentage for each process as reported by ps. The second version sums up the memory in bytes instead and calculates the total percentage afterwards, thus leading to a higher precision.
If your system supports it, install and use smem:
smem -u
User Count Swap USS PSS RSS
gdm 1 0 308 323 820
nobody 1 0 912 932 2240
root 76 0 969016 1010829 1347768
or
smem -u -t -k
User Count Swap USS PSS RSS
gdm 1 0 308.0K 323.0K 820.0K
nobody 1 0 892.0K 912.0K 2.2M
root 76 0 937.6M 978.5M 1.3G
ameskaas 46 0 1.2G 1.2G 1.5G
124 0 2.1G 2.2G 2.8G
In Ubuntu, smem can be installed by typing
sudo apt install smem
This will return the total RAM usage by user in GB, reverse sorted:
sudo ps --no-headers -eo user,rss | awk '{arr[$1]+=$2}; END {for (i in arr) {print i,arr[i]/1024/1024}}' | sort -nk2 -r
You can use the following Python script to find per-user memory usage using only the os module (the original du-based version measured home-directory disk usage, not RAM; this one sums each process's resident set size as reported by ps):
import os

# Sum resident set size (RSS, in KiB) per user across all processes
usage = {}
for line in os.popen('ps hax -o rss,user').read().splitlines():
    rss, user = line.split(None, 1)
    usage[user.strip()] = usage.get(user.strip(), 0) + int(rss)

# Print users sorted by descending memory usage, in MB
for user, rss in sorted(usage.items(), key=lambda kv: -kv[1]):
    print('%-20s %8d MB' % (user, rss // 1024))