I have a bunch of arbitrary text files in my project in a directory which I need to concatenate into a single text file. The order of concatenation isn't important. Is it possible to do this using brunch?
It is possible to achieve with the after-brunch plugin (execute a shell command such as cat app/file-{1,2,3}.txt > public/result.txt).
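Since the order does not matter, the command itself can just be cat with a glob. A minimal sketch of that shell step, assuming the text files live in app/ and the combined file should end up in public/ (both paths are illustrative):

cat app/*.txt > public/result.txt

You would then register that command with after-brunch so it runs at the end of every build.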
I have multiple .tsv files named choochoo1.tsv, choochoo2.tsv, ..., choochoo(nth).tsv. I also have a main.tsv file. I want to extract the header line from main.tsv and paste it over all the choochoo(nth).tsv files. Please note that there are other .tsv files in the directory that I don't want to change or paste a header into, so I can't use *.tsv to select all the .tsv files (I need to match the choochoo string for the wanted files). This is what I have tried using a bash script, but I could not make it work. Please suggest the right way to do it.
for x in *choochoo; do
head -n1 main.tsv > $x
done
You have a problem with the file glob, as well as the redirect:
- the file glob will catch things like AAchoochoo, but not choochoo1.tsv and not even AAchoochoo.tsv
- the redirect will overwrite the existing files instead of adding to them. The redirect operator for adding to a file is >>, but that appends text to the end, and you want to prepend text at the beginning.
The problem with prepending text to an existing file is that you have to open the file for both reading and writing and then stream the prepended text and the original text, in order. That is usually where people fail, because a plain redirect can't open the file like that (there is a slightly more complex way of doing this directly, by opening the file for both reading and writing, but I'm not going to address that further).
You might want to use a temporary file, something like this:
for x in choochoo[0-9]*.tsv; do
    mv "$x"{,.orig}
    (head -n1 main.tsv; cat "$x.orig") > "$x"
    rm "$x.orig"
done
I have build_config (and other *_config files), GdbRun, and build.txt files, which are basically bash shell scripts.
How can I associate these files with shell syntax? That is, I'd like to place a pattern like
'if filename is *_config or GdbRun or build.txt' somewhere.
The BufferScroll plugin can remember most view settings for a particular file, including its syntax.
Just install BufferScroll and change the syntax manually using the status bar or the command palette, and it will be remembered the next time you open that file.
When we use the redirect I/O operator for a shell script, does the operator keep all the data to be written in memory and write it all at once, or does it write to the file line by line?
Here is what I am working on.
I have about 200 small files, ~1000 lines each, in a specific format. I want to process each line in all the files (apply a regex and change the format a little) and have the new transformed lines in a single combined file.
I have a transformscript.sh that takes a single file and applies the transformation. I run it in the following manner
sh transformscript.sh somefile.txt > newfile.txt
This works fine and fast for a single file.
How do I extend this to all the files? Would it be efficient to change transformscript.sh to take a directory as an argument instead of a filename and add a for loop to transform all the lines of all the files together? Or should I run transformscript.sh for each file, creating a new file for each one, and then combine them separately?
Thanks.
The redirect operator simply opens the file for writing and attaches that file descriptor to the script's standard output. The script then writes to the file directly as it produces output; the shell does not collect everything in memory first.
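As a rough illustration (the file name out.txt and the loop are purely illustrative), you can watch the output file fill up while the command is still running, which shows the data is not held back in memory and dumped at the end:

# write three lines, one per second, to out.txt in the background
for i in 1 2 3; do echo "line $i"; sleep 1; done > out.txt &
sleep 1; cat out.txt    # already shows the first line(s) before the loop finishes
wait; cat out.txt       # shows all three lines once the loop is done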
You probably do NOT want to run the script separately for each file since you will incur the overhead of bash process creation for each pass. For example:
# don't do it this way
for somefile in somefiles*.txt; do
    newfile=${somefile//some/new}
    sh transformscript.sh "$somefile" > "$newfile"
done
The above starts one shell for every file found, which is pretty inefficient. It would be better to rewrite transformscript.sh to handle multiple files if possible. Depending on how complicated your transform is and whether you need to keep the original filenames, you might be able to use a single sed process. For example, assume you have 200 files named test1.txt through test200.txt, all with a "Hello world" line you want to change to "Hello joe". You could do something as simple as this:
sed -i.save 's/Hello world/Hello joe/' test*.txt
The -i tells sed to do an "in place" edit (edit the original file), and the optional ".save" argument to -i makes a backup copy of the original file with a .save extension before editing it. Note that this will leave the original contents in the .save files and the new content in the files with the original names, which may not be what you want.
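If what you ultimately want is the single combined file from the original question, note that you can also put one redirect on the whole loop rather than one per file. A minimal sketch, assuming transformscript.sh takes a single input file as its argument as shown above (combined.txt is an illustrative name):

# still one shell per input file, but only one output file
for somefile in somefiles*.txt; do
    sh transformscript.sh "$somefile"
done > combined.txt

The output file is opened once, each invocation of the script inherits it as standard output, and the transformed lines from all the files end up concatenated in combined.txt.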
Suppose a directory contains 100 files with names like file.pcap1, file.pcap2, file.pcap3, ..., file.pcap100. In a shell script, to read these files one by one, I have written a loop like:
for $file in /root/*pcap*
do
Something
done
What is the order in which the files are read? Are they read in increasing order of the numbers at the end of the file names? Is this the same on all types of Linux machines?
The list is sorted by file name, just like the default ls output (with no flags).
Also, you need to remove the $ in your for loop:
for file in /root/*pcap*
A POSIX shell returns the paths sorted according to the current locale:
"If the pattern matches any existing filenames or pathnames, the pattern shall be replaced with those filenames and pathnames, sorted according to the collating sequence in effect in the current locale."
This means file.pcap10 comes before file.pcap2. You probably want a natural sorting order instead, e.g., the analog of Python's natsort function (sort a list using a "natural order" algorithm).
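If you do want numeric order, one option is GNU sort's version sort. A minimal sketch, assuming GNU coreutils is available and the file names contain no newlines (true for names like file.pcap1 ... file.pcap100):

printf '%s\n' /root/*pcap* | sort -V | while IFS= read -r file; do
    echo "processing $file"    # replace with the real work
done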
I would like to search for a specific command I've previously used. Is it possible to do a free text search on MATLAB command history?
Yes. MATLAB stores your command history in a file called history.m in the "preferences folder", a directory containing preferences, history, and layout files. You can find the preferences folder using the prefdir command:
>> prefdir
ans =
/home/tobin/.matlab/R2010a
Then search the history.m file in that directory using the mechanism of your choice. For instance, using grep on unix:
>> chdir(prefdir)
>> !grep plot history.m
plot(f, abs(tf))
doc biplot
!grep plot history.m
You can also simply use the search function in the command history window if you just want to use the GUI.
If you want to accomplish this in a programmatic and platform-independent manner, you can first use MATLAB's Java internals to get the command history as a character array:
history = com.mathworks.mlservices.MLCommandHistoryServices.getSessionHistory;
historyText = char(history);
Then you can search through the character array however you like, using functions like STRFIND or REGEXP. You can also turn the character array into a cell array of strings (one line per cell) with the function CELLSTR, since they can sometimes be easier to work with.