I am running a program in the Linux terminal (in this case a predictor for protein localization). The output/results are printed in the terminal, one after another. However, if I want to copy them into a simple text file, the scrollback of the terminal is not long enough to hold all the results.
Instead of using smaller separate files, I would like to print the output of the program to a file. I tried to Google this, but it didn't really help me further.
1. dmesg seems to be for system output?
2. /var/log/syslog doesn't show the stuff I need, only other technical kernel messages.
3. I saw something with printf(), but didn't quite understand the mechanics or whether it was usable for my problem.
Could someone explain how to do this, or where to look for the right info?
I think I found out how to do it, by using > fileToBeNamed.txt at the end of the command. Sorry :(
Suppose I have two files, 100 GB each, and I want to merge them into one and then delete them. On Linux we can use
cat file1 file2 > final_file
But that requires reading two big files and then writing an even bigger one. Is it possible to just append one file to the other, so that no I/O is required? Since a file's metadata contains its location and length, I am wondering whether it is possible to change the metadata to do the merge, so that no I/O happens.
Can you merge two files without writing one file onto the other?
Only in obscure theory. Since disk storage is always based on blocks, and filesystems therefore store things on block boundaries, you could only append one file to another without rewriting if the first file ended perfectly on a block boundary. There are some rare filesystem configurations that use tail packing, but that would only help if the first file were already using the tail block of the previous file.
Unless that perfect scenario occurs, or your filesystem is able to mark a partial block in the middle of a file (I've never heard of this), this won't work. Just to kick the edge case around, there's also no way, short of changing the kernel interface, to make such a call (re: Link to a specific inode)
Can we make this better than doubling the size of both files?
Yes, we can use the append (>>) operation instead.
cat file2 >> file1
That will still result in using all the space consumed by file2 twice over until we can delete it.
Can we avoid using extra space?
No. Unless somebody comes back with something I don't know, you're basically out of luck there. It's possible to truncate a file, forgetting about the existence of its end, but there is no way to forget about the existence of its start, unless we get back to modifying inodes directly and altering the kernel's interface to the filesystem, since that's definitely not a POSIX operation.
What about writing a little bit at a time, then deleting what we wrote?
No again. Since we can't chop the start of a file off, we'd have to rewrite everything from the point of interest all the way to the end of the file. This would be very costly for IO and only useful after we've already read half the file.
What about sparse files?
Maybe! Sparse files allow us to store a long string of zeroes without using up nearly that much space. If we were to read file2 in large chunks starting at the end, we could write those blocks to the end of file1. file1 would immediately look (and read) as if it were as large as both files combined, but it would be corrupted until we were done, because everything we hadn't written yet would be full of zeroes.
Explaining all this is another answer in itself, but if you can do a sparse allocation, you would be able to perform this operation using only your chunk read size plus a little extra disk space. For a reference on sparse blocks in the middle of files, see http://lwn.net/Articles/357767/ or do a search involving the term SEEK_HOLE.
Why is this "maybe" instead of "yes"? Two parts: you'd have to write your own tool (at least we're on the right site for that), and sparse files are not universally respected by file systems and other processes alike. Fortunately you probably won't have to worry about other processes respecting your file, but you will have to worry about setting the right flags and making sure your filesystem is amenable. Last of all, you'll still be reading and re-writing the length of file2, which isn't what you want. This method does mean you can append with just a small amount of disk space, though, rather at using at least 2*file2 amount of space.
You can do it like this
cat file2 >> file1
file1 will then contain the full content.
No, it is not possible to merge (on Linux) two big files by working on their meta-data.
You might consider some kind of database for your work.
As Alexandre noticed, you can append one big file to another, but this still requires a lot of data copying.
I've got a lab assignment relating to Linux's file system. The thing is that I need to retrieve all the directories and files in a floppy image. It is required to use the C language to get the directories and file names of a FAT12-formatted floppy image.
Below is what I've done so far:
I created a floppy.img using the Linux dd command and added some files to it;
I gathered some information about the FAT12 file system and learned how data is arranged on a floppy disk;
Below is what I have no idea about:
I know very little about Linux system calls.
Now I need your help to show me a way into this problem. Some hints or documents would also be a great help!
What you will need to do is open your copied image in C, read the data in while understanding the FAT12 format, modify it, and then write it back out.
Look at fopen, fread, and fclose using "man". You will probably need to make an array of FAT structures, read in each FAT entry using fread, then modify your array with new entries and write it back out using fwrite. You will probably want to use fseek to jump around. I expect you want to write a new entry to the disk's FAT table and, knowing where the free space is, write the actual file there. A rough sketch follows the steps below.
1) fopen
2) fread FAT into arrays (using fseek as needed)
3) modify arrays with new entries
4) fwrite new files' data to the appropriate free area of the image
5) fwrite the updated array back to the FAT
6) fclose
7) test the image
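As a rough illustration of the fopen/fseek/fread plumbing in steps 1, 2 and 6 (and of the listing half of the original question, not the writing half), here is a sketch that opens an image, seeks to the root directory, and prints the 8.3 names found there. The name floppy.img and the fixed offsets are assumptions based on the standard 1.44 MB floppy layout; a more robust tool should read the geometry from the boot sector instead.

/* Sketch: list the root directory of a FAT12 floppy image.
 * Offsets assume the standard 1.44 MB layout (512-byte sectors,
 * 1 reserved sector, 2 FATs of 9 sectors each, 224 root entries). */
#include <stdio.h>
#include <stdint.h>

#define SECTOR       512
#define ROOT_SECTOR  19            /* 1 reserved + 2 * 9 FAT sectors */
#define ROOT_ENTRIES 224

struct dir_entry {                 /* 32-byte FAT directory entry; every field
                                      is naturally aligned, so no padding */
    uint8_t  name[8];
    uint8_t  ext[3];
    uint8_t  attr;
    uint8_t  reserved[14];
    uint16_t first_cluster;
    uint32_t size;
};

int main(void)
{
    FILE *img = fopen("floppy.img", "rb");   /* placeholder file name */
    if (!img) { perror("fopen"); return 1; }

    fseek(img, ROOT_SECTOR * SECTOR, SEEK_SET);

    for (int i = 0; i < ROOT_ENTRIES; i++) {
        struct dir_entry e;
        if (fread(&e, sizeof e, 1, img) != 1) break;

        if (e.name[0] == 0x00) break;        /* no more entries */
        if (e.name[0] == 0xE5) continue;     /* deleted entry */
        if (e.attr == 0x0F)    continue;     /* long-file-name entry */

        printf("%.8s.%.3s  attr=0x%02X  size=%u\n",
               (const char *)e.name, (const char *)e.ext,
               e.attr, (unsigned)e.size);
    }

    fclose(img);
    return 0;
}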
You can test the image by something like this:
mkdir /mnt/test -p
mount -o loop -t vfat test.img /mnt/test
If this fails, then you messed up somewhere. Use hexdump to examine your file.
If it works, make sure to umount it before modifying the file again.
From what I understand, you should read data from the disk image in a pure user-space program. In other words, you're not required to write any kernel code or driver.
This task is possible using only the standard C API; no Linux-specific calls should be necessary.
I've written a complete implementation of this for a previous employer. Fortunately, FAT is very well documented as binary formats and filesystems go.
The canonical reference is the Microsoft specification (FAT was conceived by Bill Gates in 1976), which covers FAT12, FAT16, and FAT32; all three are very similar.
As you are working in user space, the C stdio library will meet all your needs.
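As a small, hedged example of that (mine, not the answerer's code), the sketch below pulls the basic geometry fields out of the boot sector with plain stdio, so that the root-directory and data-area offsets can be computed instead of hard-coded. The image name is a placeholder and the error handling is minimal.

/* Sketch: read the BPB fields of a FAT12 boot sector with stdio.
 * Field offsets follow the Microsoft FAT specification mentioned above. */
#include <stdio.h>
#include <stdint.h>

static unsigned rd16(const uint8_t *p)   /* little-endian 16-bit read */
{
    return p[0] | (p[1] << 8);
}

int main(void)
{
    FILE *img = fopen("floppy.img", "rb");   /* placeholder file name */
    if (!img) { perror("fopen"); return 1; }

    uint8_t boot[512];
    if (fread(boot, sizeof boot, 1, img) != 1) { perror("fread"); return 1; }

    unsigned bytes_per_sector    = rd16(boot + 11);
    unsigned sectors_per_cluster = boot[13];
    unsigned reserved_sectors    = rd16(boot + 14);
    unsigned fat_count           = boot[16];
    unsigned root_entries        = rd16(boot + 17);
    unsigned sectors_per_fat     = rd16(boot + 22);

    /* Byte offsets of the root directory and the data area.  The data area
     * starts right after the root directory (exact here because 224 entries
     * of 32 bytes each is a whole number of sectors). */
    unsigned long root_start = (unsigned long)(reserved_sectors +
                               fat_count * sectors_per_fat) * bytes_per_sector;
    unsigned long data_start = root_start + root_entries * 32UL;

    printf("bytes/sector=%u  sectors/cluster=%u  FATs=%u  root entries=%u\n",
           bytes_per_sector, sectors_per_cluster, fat_count, root_entries);
    printf("root dir at byte %lu, data area at byte %lu\n",
           root_start, data_start);

    fclose(img);
    return 0;
}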
Good day,
I am working on a Stratix III FPGA which contains M9K block memories, the contents of which are conveniently initialised to zero on power-on. This suits my application very well.
Is there a way to reset the contents back to zero without power-cycling/reflashing/etc. the FPGA? There seems to be no such option in the MegaWizard Plug-In Manager, and I would like to avoid wasting a bunch of logic that just sequentially writes zero to every address...
I have looked around and found no reference to such a mechanism, but I thought I'd ask just in case someone knew a handy trick :] By the way, I'm working in VHDL, but I should be able to translate any Verilog.
Datasheet (does not contain the answer!) : http://www.altera.com/literature/hb/stx3/stx3_siii51004.pdf
Thanks in advance,
- Thomas
PS: This is my first post here, so if I've violated any etiquette please let me know :)
Sorry, the conventional ways to do that are:
to re-configure the FPGA (you could trigger that from within your hardware if you don't mind the whole thing "disappearing" while it reconfigures)
explicitly write zeros in (as you already suggested)
At the wackier end of the solution space, I guess you could also wire something up to the JTAG port if you already have a microcontroller either in the FPGA or outside - you might be able to overwrite the RAM contents that way too.
Are there any editors that can edit multi-gigabyte text files, perhaps by only loading small portions into memory at once? It doesn't seem like Vim can handle it =(
Ctrl-C will stop the file load. If the file is small enough, you may have been lucky enough to have loaded all the contents and just killed any post-load steps. Verify that the whole file has been loaded when using this tip.
Vim can handle large files pretty well. I just edited a 3.4GB file, deleting lines, etc. Three things to keep in mind:
Press Ctrl-C: Vim tries to read in the whole file initially, to do things like syntax highlighting and number of lines in file, etc. Ctrl-C will cancel this enumeration (and the syntax highlighting), and it will only load what's needed to display on your screen.
Readonly: Vim will likely start read-only when the file is too big for it to make a swap-file copy to perform the edits on. I had to :w! to save the file, and that's when it took the most time.
Go to line: Typing :115355 will take you directly to line 115355, which is much faster than moving through these large files. Vim seems to start scanning from the beginning every time it loads a buffer of lines, and holding down Ctrl-F to scan through the file gets really slow near the end.
Note: if your Vim instance is read-only because you hit Ctrl-C, it is possible that Vim did not load the entire file into the buffer. If that happens, saving will only save what is in the buffer, not the entire file. You might quickly check with a G to skip to the end and make sure all the lines in your file are there.
If you are on *nix (and assuming you have to modify only parts of file (and rarely)), you may split the files (using the split command), edit them individually (using awk, sed, or something similar) and concatenate them after you are done.
cat file2 file3 >> file1
It may be plugins (syntax highlighting, folds, etc.) that are causing it to choke.
You can run vim without plugins.
vim -u "NONE" hugefile.log
It's minimalist but it will at least give you the vi motions you are used to.
syntax off
is another obvious one. Prune your install down and source only what you need. You'll find out what it's capable of and whether you need to accomplish a task via other means.
A slight improvement on the answer given by @Al pachio with the split + vim solution: you can read the files in with a glob, effectively using file chunks as a buffer, e.g.
$ split -l 5000 myBigFile
xaa
xab
xac
...
$ vim xa*
#edit the files
:wn #write and skip forward
:n! #skip forward and don't save
:wN #write and skip back
:N! #skip back and don't save
You might want to check out this VIM plugin which disables certain vim features in the interest of speed when loading large files.
I've tried to do that, mostly with files around 1 GB when I needed to make some small change to an SQL dump. I'm on Windows, which makes it a major pain. It's seriously difficult.
The obvious question is "why do you need to?" I can tell you from experience having to try this more than once, you probably really want to try to find another way.
So how do you do it? There are a few ways I've done it. Sometimes I can get vim or nano to open the file, and I can use them. That's a real pain, but it works.
When that doesn't work (as in your case), you only have a few options. You can write a little program to make the changes you need (for example, search and replace). You could use a command-line program that may be able to do it (maybe it could be accomplished with sed/awk/grep/etc.?).
If those don't work, you can always split the file into chunks (something like split being the obvious choice, but you could use head/tail to get the part you want) and then edit the part(s) that need it, and recombine later.
Trust me though, try to find another way.
I think it is reasonably common for hex editors to handle huge files. On Windows, I use HxD, which claims to handle files up to 8 EB (8 billion gigabytes).
I'm using vim 7.3.3 on Win7 x64 with the LargeFile plugin by Charles Campbell to handle multi-gigabyte plain text files. It works really well.
I hope you come right.
Wow, never managed to get vim to choke, even with a GB or two. I've heard that UltraEdit (on Windows) and BBEdit (on Macs) are even more suitable for even-larger files, but I have no personal experience.
In the past I opened up to a 3 gig file with this tool http://csved.sjfrancke.nl/
Personally, I like UltraEdit. Here is their little spiel on large files.
I've used FAR Commander's built-in editor/viewer for super-large log files.
I have used TextPad for large log files; it doesn't have an upper limit.
The only thing I've been able to use for something like that is my favorite Mac hex editor, 0XED. However, that was with files that I considered large at tens of megabytes. I'm not sure how far it will go. I'm pretty sure it only loads parts of the file into memory at once, though.
In the past I've successfully used a split/edit/join approach when files get very large. For this to work, you have to know roughly where the to-be-edited text is in the original file.