How to extract each line from a file, into separate files in Linux
Example:
If the file contains 10 rows, there will be 10 files.
If the file contains:
12345
1uthste
128766
The first line should go to 1.txt,
the second line should go to 2.txt,
and so on.
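One compact way to do this is with awk (a sketch; the input name file is an example):

```shell
# Write line N of "file" into N.txt.
# close() keeps the number of simultaneously open files small.
awk '{ print > (NR ".txt"); close(NR ".txt") }' file
```

After running this on a 10-line file you get 1.txt through 10.txt, one line each.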
I split a file using split -n r/12 file; now how do I concatenate those 12 files again? I've tried cat <files> and paste <files>, but according to diff the result differs from the original.
How do I concatenate the 12 files so that cmp/diff shows no differences? Are there special arguments to paste/cat I should use?
Is round-robin splitting an absolute requirement? If not, you might just split into sections:
$ split --number=12 file
This creates 12 files:
$ ls x*
xaa xab xac xad xae xaf xag xah xai xaj xak xal
Now you can concatenate without any difference:
$ cat x* > file.new
$ diff file file.new
But if there is no way around the round-robin requirement, I would write a small script. It is not pretty; here is the pseudocode:
Create a working directory
Copy all x* files into the working directory
Change to the working directory
Touch the new concatenated file
While not all x* files are empty:
    Iterate over the files in alphabetical order:
        Remove the first line from the file
        Append that line to the new concatenated file
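The pseudocode above can also be collapsed into a single awk call (a sketch; it assumes the round-robin pieces are named x* so the shell glob lists them in split order):

```shell
# Interleave the pieces line by line: line 1 of every piece, then
# line 2 of every piece, and so on. Pieces near the end of the glob
# may be one line shorter, which the max-line counter handles.
awk '{ out[FNR] = out[FNR] $0 ORS; if (FNR > max) max = FNR }
     END { for (i = 1; i <= max; i++) printf "%s", out[i] }' x* > file.new
```

This avoids repeatedly rewriting the pieces, since each file is read exactly once.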
According to the thread
Linux: fast creating of formatted output file (csv) from find command
there is a suggested shell command using awk (which I don't understand):
find /mnt/sda2/ | awk 'BEGIN{FS=OFS="/"}!/.cache/ {$2=$3=""; new=sprintf("%s",$0);gsub(/^\/\/\//,"",new); printf "05;%s;/%s\n",$NF,new }' > $p1"Seagate-4TB-S2-BTRFS-1TB-Dateien-Verzeichnisse.csv"
With this command I am able to create a CSV file containing "05;file name;full path and file name" for the directories and files on my device mounted on /mnt/sda2. Thanks again to tink.
How must I adapt the above command to also receive the date (and time) and the file size?
Thank you in advance,
-Linuxfluesterer
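One possible adaptation (a sketch, not the original thread's command: it uses GNU find's -printf to emit the date, time, and size directly, so no awk reformatting is needed; the field order is an assumption):

```shell
# Emit "05;file name;date time;size in bytes;full path" per entry.
# %f = basename, %TY-%Tm-%Td %TH:%TM = modification date/time, %s = size.
# ! -path '*.cache*' mirrors the .cache filter of the original command.
find /mnt/sda2/ ! -path '*.cache*' \
     -printf '05;%f;%TY-%Tm-%Td %TH:%TM;%s;%p\n' \
     > "$p1"Seagate-4TB-S2-BTRFS-1TB-Dateien-Verzeichnisse.csv
```

Note that -printf is a GNU findutils extension; on other systems you would need stat(1) or a similar tool per file.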
How can I copy lines from a .csv file that contain "D,1", "D,2", or "D,3" into a .txt file, keeping them in the same order as in the .csv file? Should I consider using grep? I'm new to the Linux command line and have only used sed and head so far.
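grep is indeed a good fit here (a sketch; input.csv and output.txt are example names). grep prints matching lines in the order they appear in the input, so the original order is preserved automatically:

```shell
# -F: treat the patterns as fixed strings, not regular expressions
# -e: supply several patterns; a line matching any of them is printed
grep -F -e 'D,1' -e 'D,2' -e 'D,3' input.csv > output.txt
```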
I have a text file; each line is one or more file paths separated by spaces, and every file has the suffix .dl, e.g.
/some/path/file.dl
/some/other/path/file2.dl /some/other/path2/file3.dl
/some/other/path3/file4.dl /some/other/path4/file5.dl ...
...
Now I need to transform the above file into another text file. Only the first file on every line should be changed to /out/P{fileName}.h:, where {fileName} is the original file name without directory and suffix, e.g.
/out/Pfile.h:
/out/Pfile2.h: /some/other/path2/file3.dl
/out/Pfile4.h: /some/other/path4/file5.dl ...
...
So how can I write a shell script that does this?
Try this command:
$ sed -r 's#^\S*/(\S*)\.dl#/out/P\1.h:#' input
/out/Pfile.h:
/out/Pfile2.h: /some/other/path2/file3.dl
/out/Pfile4.h: /some/other/path4/file5.dl
I am new to Linux and have a challenging task.
I have 3 data files and need to do the following:
Go to line 31 of file 1 and delete it.
Read one line from file 2 and insert it in place of the deleted line.
Go to line 97 of file 1, delete it, then read the next line from file 2 and insert it in place of that deleted line.
It is also important to keep the same file, i.e. file 1 must be edited in place rather than written out as a new file.
I tried different versions of sed and perl with buffer-copying tricks, but was not successful.
I am open to all suggestions and would appreciate the experts' advice.
I cannot find a reference to the 3rd file in your question, but if you mean: replace line 31 of file 1 with the 1st line of file 2, and replace line 97 of file 1 with the 2nd line of file 2, then:
sed -i -e '30R f2
31d;96R f2
97d' f1
The newlines after f2 are important so that sed knows where the file name ends.
Note that the R command is a GNU extension; it is not standard.
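A quick way to convince yourself it works (a throw-away demonstration; f1 and f2 are scratch files):

```shell
seq 1 100 > f1            # f1: lines "1" .. "100"
printf 'A\nB\n' > f2      # f2: the two replacement lines

# Replace line 31 of f1 with line 1 of f2, and line 97 with line 2 of f2.
sed -i -e '30R f2
31d;96R f2
97d' f1

sed -n '31p;97p' f1       # prints: A then B
```

R queues one line of f2 to be emitted after the addressed line, and the following d deletes the original line, so the file keeps its 100-line length.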