Check size of big directory [closed] - linux

I have 6 very big directories and once a day I would like to check the size of each of them for my monitoring. Right now I'm using the du -s command, but it takes a long time and significantly slows down my server. Is there a better way to do this?

Depending on circumstances you could put those directories on separate partitions, the "used" size of which you can check very quickly with df.
This, of course, means that the directories are limited to the size of their respective partitions, which could be a pain. Hence the "depending on circumstances".
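For example, if one of those directories is the only thing on its mount point (the path here is just an illustration), a near-instant check could look like:

df -h /srv/bigdir1    # the "Used" column shows the partition's consumed space

df reads the filesystem's own usage counters rather than walking every file, which is why it returns immediately while du -s has to traverse the whole tree.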

Related

What is the "1" in this column "drwxr-xr-x 1 bash bash 4.0K Dec ..." [closed]

I read that "1" is the number of hard links to the specific file, but what exactly are hard links?
In computing, a hard link is a directory entry that associates a name with a file on a file system. All directory-based file systems must have at least one hard link giving the original name for each file. The term "hard link" is usually only used in file systems that allow more than one hard link for the same file.
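As a small illustration (file names made up for the example), the link count reported by ls -l goes up when a second hard link is created:

touch report.txt
ls -l report.txt              # link count is 1
ln report.txt report-alias.txt
ls -l report.txt              # link count is now 2; both names refer to the same inode

For directories the count traditionally equals 2 plus the number of subdirectories (the entry in the parent, the directory's own "." entry, and one ".." per subdirectory), though some filesystems such as Btrfs simply report 1, as in the listing you quoted.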

What happens when a multiple file transfer is interrupted? [closed]

I used mv to move some files from /source_dir/ to /target_dir/, which looked like: mv /source_dir/*some_regex* /target_dir/.
One of the files which started to move, file1, is now in both target_dir and source_dir.
target_dir/file1 weighs considerably less than source_dir/file1.
My question is: is source_dir/file1 broken, or is it unaffected (in which case I can delete target_dir/file1 and rerun the mv)?
When moving across filesystems, mv copies the data and removes the source file only after the copy has finished, so the source file stays unaffected until the operation completes.
When moving within the same filesystem, a different mechanism is used: the file is simply renamed, and the data stays in place.
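A cautious way to clean up (the checksum tool and exact paths are only examples) is to confirm the source copy still reads back fully, delete the partial target, and rerun the move:

md5sum /source_dir/file1              # reads the whole source file; it should complete without errors
rm /target_dir/file1                  # discard the truncated partial copy
mv /source_dir/file1 /target_dir/     # redo the interrupted move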

Is there an argument for the "top" command to get a permanent result? [closed]

The top command runs live and keeps updating rather than producing a fixed result. Is there an argument that makes it print a single, final snapshot, or a different command that gives a definite, final answer?
top -n1
should do the job. If you want to store the output in a file, you should add the -b option for batch mode.
Note that this is just a sample of usage at one point in time, not anything like a final answer, as all the numbers in top vary over time even on the most stable of systems.
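For example, to save one batch-mode iteration to a file (the filename is arbitrary):

top -b -n1 > top-snapshot.txt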

/tmp usage in linux [closed]

How can I find out how much /tmp space an application requires? Sometimes I see that /tmp is full and get an error saying it is not able to write to /tmp. So is there any way to find out how much /tmp space an application needs?
There is no way. Programs use /tmp on an ad-hoc basis.
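One rough empirical workaround (the interval is chosen arbitrarily) is to watch total /tmp usage while the application runs and note the peak; this only shows the combined usage of everything writing to /tmp, not that one application in isolation:

watch -n 5 'du -sh /tmp'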

Join AVCHD .mts files on linux [closed]

I have a Lumix camera which, like most new cameras, records video in AVCHD format. The files get segmented into 2 or 4 GiB segments because of the limitations of the filesystem used on the memory card.
When I transfer the files to my linux computer to edit them I naturally want to have each video in a single file, which is no problem at all for linux's filesystems. So, how can I losslessly join these segments, maintaining a/v-sync?
(With Avidemux 2.6.8 I can append these segments, but it leads to nasty distortions at the cut point.)
The solution, which seems to work with my files at least, turned out to be very simple:
ffmpeg -i "concat:00000.MTS|00001.MTS|00002.MTS" -c copy output.mts
One still has to figure out which of the files belong together, though.
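If the segments of one recording sort in order by name, the concat argument can be built from a glob instead of being typed by hand; a small sketch (the glob and output name are just examples, and it assumes the usual digits-only AVCHD file names):

ffmpeg -i "concat:$(ls [0-9]*.MTS | paste -sd'|' -)" -c copy output.mts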

Resources