I was editing a file in nano and, when I saved it, I got a "disk is full" error. However, when I opened the file again, all its content was gone, including everything that had been there before the disk filled up. How can I recover the file's content? The file system is ext4. I've already tried recovering it with debugfs, without success.
Thanks in advance.
I've managed to get my file back by dumping an image of the SD card and then grepping the strings output of the block device for "signatures" I remembered from the file. After getting the line number of the match, I cropped that part of the output and saved it to a file.
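For reference, a rough sketch of that approach (the device name, the search phrase, and the START/END line numbers are placeholders for my own values; always work on a copy of the image, never on the original device):

dd if=/dev/mmcblk0 of=sdcard.img bs=4M status=progress   # dump the whole card to an image
strings sdcard.img > dump.txt                            # extract printable text from the image
grep -n "phrase I remembered" dump.txt                   # find the line number of the match
sed -n 'START,ENDp' dump.txt > recovered.txt             # crop the surrounding lines into a file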
I have downloaded the following file on my Linux computer:
wget https://github.com/tomwhite/hadoop-book/blob/master/input/ncdc/all/1901.gz
I tried to decompress the file with gunzip 1901.gz, but it did not work. I checked the file format with the file command, and it reports:
1901.gz: HTML document, UTF-8 Unicode text, with very long lines
I am quite new to Linux. How can I successfully extract the data?
You have downloaded a regular HTML file and named it something.gz, hoping that this would turn it into a gzipped file, but that is not how it works: your file is not gzipped, so there is no point trying to gunzip it.
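The blob URL returns GitHub's HTML page that displays the file, not the file itself. To get the actual gzip data, fetch the raw version instead (this follows GitHub's usual raw-URL scheme; you can verify the link via the "Raw" button on the page):

wget https://raw.githubusercontent.com/tomwhite/hadoop-book/master/input/ncdc/all/1901.gz
file 1901.gz     # should now report gzip compressed data
gunzip 1901.gz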
I read that the "1" is the number of hard links to the file, but what exactly are hard links?
In computing, a hard link is a directory entry that associates a name with a file on a file system. All directory-based file systems must have at least one hard link giving the original name for each file. The term “hard link” is usually only used in file systems that allow more than one hard link for the same file.
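A quick way to see this in a shell (the file names are arbitrary examples): creating a second hard link raises the link count that ls -l shows, and both names point at the same inode and data:

echo hello > original.txt
ln original.txt alias.txt       # create a second hard link (not a symlink)
ls -li original.txt alias.txt   # same inode number for both; link count is now 2
rm original.txt
cat alias.txt                   # the data is still reachable through the remaining name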
Suppose I don't have the whole file: I have downloaded only the first part, and it contains the file signature/magic bytes. Can I use the file command in Linux to get its type? I think this command detects the file signature at the beginning, but I am not sure whether it does any further validation of the rest of the file.
file(1) looks at the first 1 MB of the file by default.
If you're using it as a library (libmagic) from your own program, you can change that limit with magic_setparam() and the MAGIC_PARAM_BYTES_MAX parameter; see its man page.
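In practice, a fragment that includes the magic bytes is usually enough. A quick way to convince yourself (the file names here are placeholders): feed only the first bytes of a known file to file on stdin and compare with the truncated download:

head -c 1024 complete-file.png | file -   # identified from the first bytes alone
file partial-download.bin                 # same idea for a truncated download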
I have a problem using the dd command. Suppose I am writing a 20 MB file to a 100 MB partition. After the write, I am not able to access the remaining 80 MB.
dd if=temp_file of=/dev/sdb1
Is there a way to tell dd to adapt to the file system of the partition I am writing to?
All I am interested in knowing is whether there is a way to use the remaining 80 MB without disturbing the initial 20 MB.
By using the dd command the way you do, you overwrite the file system on the partition, including its important metadata. If temp_file contains a file system image built for a 20 MB partition, then a 20 MB file system is what you will get, no matter how large the partition is.
If you want to use the full 100 MB partition, you need to write a 100 MB image to it.
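Alternatively, if temp_file holds an ext2/3/4 file system image, one common approach is to write it and then grow the file system to fill the partition (a sketch; the device name is the one from the question, and you should back up first):

dd if=temp_file of=/dev/sdb1
e2fsck -f /dev/sdb1     # resize2fs requires a clean file system
resize2fs /dev/sdb1     # without a size argument, grows to the size of the partition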
I have a script which uses lpr to print its output to a PDF file.
I would like to change the location, or even the file name, of the saved file.
I have read several forums about lpr and did not find anything on how to specify the name and the directory of the printed PDF.
Instead, I always get a default name in my PDF directory.
Thank you!
Take a look at cups-pdf (https://help.ubuntu.com/community/PDFPrinting). You can configure the output directory and the file name with it; the configuration file is /etc/cups/cups-pdf.conf.
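As a sketch, the relevant directives in /etc/cups/cups-pdf.conf look roughly like this (the path is an example; the shipped file documents each option, so check the comments for your version):

Out /home/youruser/PDF   # directory the generated PDFs are written to
Label 0                  # do not prefix the file name with the job number

The generated file name is typically derived from the print job's title, so giving the job a descriptive title when submitting it is one way to influence the name.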