I have a list of files that looks like this:
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:37 SRX016372-SRR037477.est_count
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:34 SRX016372-SRR037478.est_count
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:41 SRX016372-SRR037479.est_count
0 -rw-r--r-- 1 neversaint hgc0746 0 Apr 27 11:16 SRX003838-SRR015096.est_count
0 -rw-r--r-- 1 neversaint hgc0746 0 Apr 27 11:32 SRX004765-SRR016565.est_count
What I want to do is find files whose size is exactly 53 bytes. But why does this command fail?
$ find . -name "*.est_count" -size 53 -print
It works fine, though, if I just want to find files of size 0 with this command:
$ find . -name "*.est_count" -size 0 -print
You need to suffix the size 53 with 'c'. As per find's manpage:
-size n[cwbkMG]
File uses n units of space. The following suffixes can be used:
`b' for 512-byte blocks (this is the default if no suffix is used)
`c' for bytes
`w' for two-byte words
`k' for Kilobytes (units of 1024 bytes)
`M' for Megabytes (units of 1048576 bytes)
`G' for Gigabytes (units of 1073741824 bytes)
The size does not count indirect blocks, but it does count
blocks in sparse files that are not actually allocated. Bear in
mind that the `%k' and `%b' format specifiers of -printf handle
sparse files differently. The `b' suffix always denotes
512-byte blocks and never 1 Kilobyte blocks, which is different
to the behaviour of -ls.
-size n[ckMGTP]
True if the file's size, rounded up, in 512-byte blocks is n. If
n is followed by a c, then the primary is true if the file's size
is n bytes (characters). Similarly if n is followed by a scale
indicator then the file's size is compared to n scaled as:
k kilobytes (1024 bytes)
M megabytes (1024 kilobytes)
G gigabytes (1024 megabytes)
T terabytes (1024 gigabytes)
P petabytes (1024 terabytes)
You need to use -size 53c.
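A quick way to see the difference is a hypothetical demo with scratch files in /tmp (the file names here are made up for illustration):

```shell
# Create a 53-byte file and an empty file in a scratch directory
mkdir -p /tmp/sizedemo && cd /tmp/sizedemo
head -c 53 /dev/zero > a.est_count   # exactly 53 bytes
: > b.est_count                      # 0 bytes

# Without a suffix, 53 means 53 512-byte blocks -- prints nothing,
# because a 53-byte file rounds up to 1 block
find . -name "*.est_count" -size 53 -print

# With the 'c' suffix, 53 means 53 bytes -- matches a.est_count
find . -name "*.est_count" -size 53c -print
```

Note that `-size 0` happens to work either way, since a 0-byte file is 0 blocks and 0 bytes at the same time, which is why the second command in the question succeeded.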
This is what I get on Mac OS X 10.5:
> man find
...
-size n[c]
True if the file's size, rounded up, in 512-byte blocks is n. If n
is followed by a c, then the primary is true if the file's size is n
bytes (characters).
I was running a simulation in a terminal, and the simulation did not go through due to a disk space issue (it reported "No space left on device").
Then we cleaned up some space and ran the simulation again in the same terminal.
However, it still complained about the space issue.
When we ran it in a new terminal, the simulation went through.
Hence I want to understand the cause of this.
Please help
Thank you.
This is a common issue on Linux.
If a process has opened a file and not closed it, removing the file only removes the directory entry (think: the name) from the directory it is in. Until the file is closed by the process, or the process terminates, the disk space will not be reclaimed.
To find these files you can look through the /proc file system. Every running process can be found in there by its process id (pid).
Here I'm running a Python program that opened a file and is doing nothing. If I use ps to find the pid of the process and cd into /proc/<pid>/fd, I can see the open file descriptors and the names of the files that are open:
$ pwd
/proc/38246/fd
$ ls -l
total 0
lrwx------ 1 x x 64 Sep 8 15:39 0 -> /dev/pts/0
lrwx------ 1 x x 64 Sep 8 15:39 1 -> /dev/pts/0
lrwx------ 1 x x 64 Sep 8 15:39 2 -> /dev/pts/0
lr-x------ 1 x x 64 Sep 8 15:39 3 -> /tmp/test
If I remove the file /tmp/test I see this:
$ rm /tmp/test
$ ls -l
total 0
lrwx------ 1 x x 64 Sep 8 15:39 0 -> /dev/pts/0
lrwx------ 1 x x 64 Sep 8 15:39 1 -> /dev/pts/0
lrwx------ 1 x x 64 Sep 8 15:39 2 -> /dev/pts/0
lr-x------ 1 x x 64 Sep 8 15:39 3 -> /tmp/test (deleted)
Search through /proc/*/fd/ for files that say deleted.
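One way to do that search in a single command is to scan the fd symlinks whose targets end in "(deleted)" (with GNU find; you may need root to see other users' processes):

```shell
# List every open-but-deleted file, with the owning pid visible in the path.
# -lname matches the symlink's target; deleted targets end in " (deleted)".
find /proc -maxdepth 3 -path '/proc/[0-9]*/fd/*' -type l \
     -lname '* (deleted)' -printf '%p -> %l\n' 2>/dev/null
```

Killing (or restarting) the process that holds the descriptor is what finally frees the space, which also explains the question above: the old terminal's session still held the deleted file open, while a new terminal did not.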
When I type vmstat -m on the command line, it shows:
Cache Num Total Size Pages
fuse_request 0 0 424 9
fuse_inode 0 0 768 5
pid_2 0 0 128 30
nfs_direct_cache 0 0 200 19
nfs_commit_data 0 0 704 11
nfs_write_data 36 36 960 4
nfs_read_data 0 0 896 4
nfs_inode_cache 8224 8265 1048 3
nfs_page 0 0 128 30
fscache_cookie_jar 2 48 80 48
rpc_buffers 8 8 2048 2
rpc_tasks 8 15 256 15
rpc_inode_cache 17 24 832 4
bridge_fdb_cache 14 59 64 59
nf_conntrack_expect 0 0 240 16
For the nfs_write_data line (line 7), why is "pages" less than "total"?
For some of the caches, "total" is always equal to "pages".
Taken from the vmstat man page:
...
The -m switch displays slabinfo.
...
Field Description For Slab Mode
cache: Cache name
num: Number of currently active objects
total: Total number of available objects
size: Size of each object
pages: Number of pages with at least one active object
totpages: Total number of allocated pages
pslab: Number of pages per slab
Thus, "total" is the total number of slab objects (objects used by the kernel for inodes, buffers and so on), and a single page can hold more than one object, which is why "pages" can be smaller than "total".
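As a quick sanity check with shell arithmetic (assuming the usual 4 KiB page size): each nfs_write_data object is 960 bytes, so several of them fit on one page, and the 36 objects above need far fewer than 36 pages.

```shell
# Objects of 960 bytes that fit in one 4096-byte page
echo $((4096 / 960))   # prints 4
```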
I took apart an ICC file with a lookup table from http://www.brucelindbloom.com/index.html?MunsellCalcHelp.html using ICC Profile Inspector. The ICC file is supposed to convert Lab to Uniform LAB.
The files it outputs include headers, a matrix (3x3 identity matrix), Input and Output curves, and a lookup table. What do these files mean? And how are they related to the color transform?
The header contents are:
InputChan: 3
OutputChan: 3
Input_Entries: 258
Output_Entries: 256
Clut_Size: 51
The InputCurves file has entries like:
0 0 0 0
1 256 255 255
2 512 510 510
...
256 65535 65280 65280
257 65535 65535 65535
The OutputCurves file has entries like:
0 0 0 0
1 256 257 257
2 512 514 514
...
254 65024 65278 65278
255 65280 65535 65535
And the lookup table entries look like:
0 0 0 25968
1 0 0 26351
2 0 0 26789
...
132649 65535 65535 49667
132650 65535 65535 50603
I'd like to understand how an input LAB color maps to an output value. I'm especially confused because a and b values can be negative.
I believe I understand how this works after skimming through http://www.color.org/specification/ICC1v43_2010-12.pdf
This explanation may have some off-by-one errors, but it should be generally correct.
The input values are LAB; they are mapped using tables 39 and 40 in section 10.8 (lut16Type). The 258 values in the input curves are then uniformly spaced across those L, a, and b ranges. The output values are 16-bit, so 0-65535.
The same goes for the CLUT. There are 51^3 entries (51 was chosen by the ICC file author). Each dimension (L, a, b) is split uniformly across this space as well, so index 0 maps to 0 and index 50 maps to 65535 from the previous section (note that 0-50 is 51 entries). The first 51 rows are for L = 0 and a = 0, incrementing b. Every 51 rows, the a value increases by 1, and every 51*51 rows, the L value increases by 1.
So given L, a, and b values adjusted by the input curves, figure out their indices (0-50) and look them up in the CLUT (l_ind*51*51 + a_ind*51 + b_ind), which will give you 3 more values.
Now the output curves come in. It's another set of curves that work just like the input curves. The outputs can then get mapped back using the same values from Tables 39 & 40.
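The flattened-index formula can be checked with shell arithmetic. Using the highest grid indices (50, 50, 50) should land on the last row of the CLUT dump above:

```shell
# Hypothetical grid indices in 0..50 for L, a, b;
# the row in the flattened 51x51x51 lookup table is
# l_ind*51*51 + a_ind*51 + b_ind
l_ind=50; a_ind=50; b_ind=50
echo $(( l_ind*51*51 + a_ind*51 + b_ind ))   # prints 132650
```

132650 is exactly the index of the last CLUT entry shown in the question, which supports the row-ordering described above.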
I'm analysing the X-Loader settings for the POP mDDR on the Beagleboard xM.
The amount of mDDR POP memory in the BB xM is 512MB (according to the Manual).
More precisely the Micron variant: 256MB on CS0 + 256MB on CS1 = 512MB total.
The bus width is 32 bits, this can be verified in the SDRC_MCFG_p register settings in the X-Loader.
The type of memory used is the MT46H128M32L2KQ-5 as mentioned in this group:
https://groups.google.com/forum/#!topic/beagleboard/vgrq2bOxXrE
Reading the data sheet of that memory, the 32 bit configuration with the maximum capacity is 16Meg x 32 x 4 = 64Meg x 32.
So 64 MB is not 256 MB; 128 MB would be feasible, but only with a 16-bit bus width, and even then we are still not at 256 MB.
The guy in the group mentioned above says that the memory is a 4Gb, but the data sheet says that it is a 2Gb.
My question:
How can 512MB be achieved by using 2 memory chips of the above type and 32 bit bus width?
Thanks in advance for your help.
Martin
According to the datasheet, the MT46H128M32L2KQ-5 has the following configuration:
MT46H128M32L2 – 16 Meg x 32 x 4 Banks x 2
16 Meg x 32 x 4 Banks x 2 = 4096 Meg (bits, not bytes)
4096 Meg (bits) / 8 = 512 MB (Megabytes)
More from datasheet:
The 2Gb Mobile low-power DDR SDRAM is a high-speed CMOS, dynamic
random-access memory containing 2,147,483,648 bits.
Each of the x32’s 536,870,912-bit banks is organized as 16,384 rows by 1024
columns by 32 bits. (p. 8)
So, if you multiply the number of rows by the number of columns by the number of bits (it's specified in the datasheet), you will get the size of a bank in bits. Bank size is = 16384 x 1024 x 32 = 16 Megs x 32 = 536870912 (bits).
Next, you need to multiply the bank size (in bits) by the number of banks in chip: chip size = 536870912 x 4 = 2147483648 (bits).
In order to get the result in bytes, you have to divide it by 8:
chip size (bytes) = 2147483648 (bits) / 8 = 268435456
In order to get the result in megabytes, you have to divide it by 1024 x 1024:
chip size = 268435456 / 1024 / 1024 = 256 MB (megabytes)
This is a dual LPDDR chip, internally organized as 2 x 256 MB dies (it has two chip selects, CS0# and CS1#; this is specified in the datasheet). The single package contains two memory dies, 256 MB each. For the BB, this package must be configured as two memories of 256 MB each in order to get 512 MB. So you have to set up CS0 as 256 MB and CS1 as 256 MB.
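The whole calculation can be replayed with shell arithmetic:

```shell
# Bank size: 16384 rows x 1024 columns x 32 bits
echo $((16384 * 1024 * 32))                # 536870912 bits per bank

# Per-die size: 4 banks, converted from bits to megabytes
echo $((536870912 * 4 / 8 / 1024 / 1024))  # 256 MB per die

# Package total: two dies (one per chip select)
echo $((2 * 256))                          # 512 MB
```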
I want to find the size of all the sections/segments of libc.a.
When I run size on it, I get many lines of output, with different file names. Here's a snippet of a couple of lines I get:
text data bss dec hex filename
244 4 0 248 f8 init-first.o (ex /usr/lib64/libc.a)
720 0 0 720 2d0 libc-start.o (ex /usr/lib64/libc.a)
67 0 0 67 43 sysdep.o (ex /usr/lib64/libc.a)
942 0 0 942 3ae version.o (ex /usr/lib64/libc.a)
Wouldn't it be possible to just output the total size of all the segments of libc.a using the size command?
size -t /usr/lib/libc.a should do it.
The total is the last line of the output once the -t option is added. So to extract only the last line, pipe it to tail -n 1:
$ size -t /usr/lib/libc.a | tail -n 1
1534448 3764 19567 1557779 17c513 (TOTALS)
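If you only want the decimal total as a bare number (say, for a script), awk can pick out the fourth field of that last line; adjust the path to wherever your libc.a lives:

```shell
# Print only the "dec" column of the (TOTALS) line
size -t /usr/lib64/libc.a | awk 'END { print $4 }'
```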