Join AVCHD .mts files on Linux [closed]

I have a Lumix camera which, like most new cameras, records video in AVCHD format. The files get split into 2 or 4 GiB segments because of the file-size limit of the filesystem used on the memory card.
When I transfer the files to my Linux computer to edit them, I naturally want to have each video in a single file, which is no problem at all for Linux filesystems. So, how can I losslessly join these segments while maintaining a/v sync?
(With Avidemux 2.6.8 I can append these segments, but it leads to nasty distortions at the cut point.)

The solution, which seems to work with my files at least, turned out to be very simple:
ffmpeg -i "concat:00000.MTS|00001.MTS|00002.MTS" -c copy output.mts
One still has to figure out which of the files belong together, though.
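If the segments follow the camera's sequential numbering, the concat string can be built from a sorted glob. A minimal sketch, assuming all segments of a single clip (and nothing else) sit in the current directory:
# builds "00000.MTS|00001.MTS|00002.MTS" from every .MTS file, in name order
inputs=$(printf '%s|' *.MTS)
ffmpeg -i "concat:${inputs%|}" -c copy output.mts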

Related

JPEG size made smaller by MSPAINT, why? [closed]

I have lots of JPEGs from a DSLR, roughly 5-6 MB each. When I open any of them in MSPaint and click Save, the size immediately drops to 2-3 MB.
Why? Is MSPaint doing lossy or lossless compression?
Things Paint may be doing:
Using different quantization tables.
Subsampling the Cb and Cr color components.
Using optimal Huffman tables.
Stripping out metadata.
The first two changes are lossy; the last two are lossless. You can run a JPEG dumping program on the two versions and compare the output to see which changes were actually made.
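For example, with ImageMagick installed, identify estimates the quality setting and reports the chroma subsampling (the file names here are placeholders):
identify -verbose original.jpg | grep -Ei 'quality|sampling'
identify -verbose resaved.jpg | grep -Ei 'quality|sampling'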

Check size of big directory [closed]

I have 6 very big directories, and once a day I would like to check the size of each of them for my monitoring. Right now I'm using the du -s command, but it takes a long time and significantly slows down my server. Is there a better way to do this?
Depending on circumstances you could put those directories on separate partitions, the "used" size of which you can check very quickly with df.
This, of course, means that the directories are limited to the size of their respective partitions, which could be a pain. Hence the "depending on circumstances".
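df is fast because it reads the usage counters the filesystem already maintains, instead of walking the whole directory tree the way du does. A quick sketch with GNU df (the mount points are placeholders for wherever the six directories would live):
df -h --output=target,used /srv/data1 /srv/data2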

How does mv compare to cp in terms of speed and safety when the transfer is across different filesystems? [closed]

By safety I mean if the transfer gets interrupted, how does that impact the data in both source and dest? Is it also dependent on the specific types of filesystems?
When working across filesystems mv really has no choice but to copy the file, in effect doing whatever cp does and then unlinking the original file.
A simple strace shows this:
rename("/tmp/file.rand", "./file.rand") = -1 EXDEV (Invalid cross-device link)
The EXDEV error tells mv that a plain rename cannot cross device boundaries, so it falls back to copying.
After this point mv reads 65536 bytes at a time from one file descriptor and writes them to the other, and does an unlinkat on the source at the end. That ordering answers the safety question: if the transfer is interrupted, the source file is still intact, because the unlink only happens after the copy has completed; the destination, however, may be left as a partial file.
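You can reproduce the trace yourself. A sketch, assuming /tmp and the current directory are on different filesystems and file.rand is a placeholder name:
strace mv /tmp/file.rand ./file.rand 2>&1 | grep -E 'rename|read|write|unlink'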

Is there something like USN Journal on Linux filesystem? [closed]

I often use Everything (a search tool) on Windows. It uses the USN Journal to speed up file-name searches.
Do Linux filesystems (ext4, xfs, btrfs, etc.) have a similar function to the USN Journal?
The USN journal lets a Windows program keep track of changes to files.
A program on Linux can do the same by using inotify, which notifies a program about every change to the files it watches.
It is not a function of any particular filesystem, but of the kernel's filesystem layer, so it works with any filesystem.
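For example, the inotifywait tool from the inotify-tools package can watch a whole directory tree (the path is a placeholder):
inotifywait -m -r -e create,delete,move,modify /path/to/watch
An Everything-style indexer would do one initial scan and then keep its index current from events like these.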

How to shrink an ext4 partition without formatting it? [closed]

Recently I installed Ubuntu 13.04 and allocated 20 GB for it. The installed system takes up less than 10 GB. Can I shrink the partition to 10 GB without formatting it?
That is to say, I don't want to have a large empty space in the partition.
You could use the resize2fs command to shrink the filesystem; it must be unmounted, and e2fsck must be run on it first. The partition itself then has to be shrunk to match, e.g. with GParted from a live USB.
However, I would suggest backing up the most important files first (to e.g. a USB key), such as /etc/ and some of /home/.
By the way, 20 GB for the system partition is not that much.
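A sketch of the usual sequence, run from a live USB with the filesystem unmounted (/dev/sda2 is a placeholder for the Ubuntu partition):
e2fsck -f /dev/sda2        # mandatory consistency check before resizing
resize2fs /dev/sda2 10G    # shrink the filesystem to 10 GiB
Afterwards the partition itself can be shrunk (e.g. in GParted), as long as it stays at least as large as the filesystem.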
