How can I provide stdin to ed, which needs a filename? - linux

I need some basic Unix shell knowledge here:
For a command that has no "-" target for stdin, say ed:
print '%-2p\nq' | ed -s FILE
can I provide a stream from the stdout of some command, rather than a FILE name, as the data to be processed?
SomeCMD | ed -s SOMETHING_MAGICAL <<< $'%-2p\nq'
Is this possible?

ed reads its commands from stdin, so if your file is also on stdin, how do you work?
In fact, you can feed the file's contents over stdin, if you prefix them with a single line
i
at the beginning, to start entering the data, then append a single . to end the input, followed by any commands. You can even write the results to stdout. Do remember that it will break if the file contains a line with nothing but a single . on it.
So if a file input.file contains this:
First line
Second line
Third line
And a file commands.list contains this:
.
1d
1,$w /dev/stdout
Then this command line...
echo i | cat - input.file commands.list | ed -s
Will output this:
Second line
Third line
Dare I say tadaaaaa!?
Note: you can probably protect against the case of single . lines in the file by piping the file through a filter that escapes any such lines and then unescaping them again with ed commands. I leave that to your ingenuity.
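For instance, a minimal sketch of such a filter (my own, not part of the original answer): prefix every data line with a | on the way in, so that no line can consist of a lone ., then strip the prefix again with an ed substitution before printing:
{ echo i; sed 's/^/|/' input.file; printf '%s\n' . ',s/^|//' ,p; } | ed -s
The sed stage turns a lone . into |. (which no longer terminates input mode), and the ,s/^|// command restores every line before ,p prints the buffer.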
Another note: you really should use sed for this, but I couldn't let the "it can't be done" comments go by.

You use r to read a command output into the text buffer. So, portable:
printf '%s\n' 'r !df -h' g/tmpfs/d ,p q | ed -s
or
ed -s << IN
r !df -h
g/tmpfs/d
,p
q
IN
The above reads in the output of df -h, deletes the lines matching tmpfs and prints the result.
If your shell supports process substitution:
printf '%s\n' g/tmpfs/d ,p q | ed -s <(df -h)
With GNU ed, that SOMETHING_MAGICAL is called !.
As per the man page:
Start edit by reading in 'file' if given. If 'file' begins with a
'!', read output of shell command.
printf '%s\n' g/tmpfs/d ,p q | ed -s '!df -h'
or, with herestring:
ed -s '!df -h' <<< $'g/tmpfs/d\n,p\nq\n'

Yes. Effectively, instead of 'piping' into ed, you can use 'process substitution' to pass the output of your command as the input to be edited, leaving the standard pipe free to take pre-scripted ed commands.
Example:
echo '#
,s/\/dev\/\(\w*\) .* \b\(.*\)%.*$/DEVICE \1 is \2% full!/
,p
Q' | ed -s <(df 2> /dev/null | tail -n +2 | egrep "^/dev/")
DEVICE sda6 is 90% full!
DEVICE sda2 is 88% full!
Explanation:
Process substitution (the <() part) turns the output of df 2> /dev/null | tail -n +2 | egrep "^/dev/" into the contents of a temporary file descriptor, which is then used as an input file to ed -s.
At the same time, ed commands are passed via echo into a pipe.
echo here is used in multiline single-quote mode, without interpretation of escape sequences; if you don't mind the commands not all appearing on a single line, this is the most straightforward way to pass ed commands without descending into escape-sequence hell (a single-line printf alternative is sketched after the list below).
Specifically, we are passing four ed commands:
A comment (just to align the remaining commands on the console)
A substitution command
A 'print all' command
A 'quit unconditionally' command, to prevent any warning messages that could have been printed on the terminal.
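For comparison, here is a single-line equivalent of the multiline echo above (a sketch, using printf to emit one ed command per argument):
printf '%s\n' '#' ',s/\/dev\/\(\w*\) .* \b\(.*\)%.*$/DEVICE \1 is \2% full!/' ',p' 'Q' | ed -s <(df 2> /dev/null | tail -n +2 | egrep "^/dev/")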

Related

Bash add line numbers to a file and save the output to the input file itself [duplicate]

Basically I want to take as input text from a file, remove a line from that file, and send the output back to the same file. Something along these lines if that makes it any clearer.
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > file_name
however, when I do this I end up with a blank file.
Any thoughts?
Use sponge for this kind of task. It's part of moreutils.
Try this command:
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | sponge file_name
You cannot do that because bash processes the redirections first, then executes the command. So by the time grep looks at file_name, it is already empty. You can use a temporary file though.
#!/bin/sh
tmpfile=$(mktemp)
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > ${tmpfile}
cat ${tmpfile} > file_name
rm -f ${tmpfile}
Note that mktemp, used above to create the temporary file, is not POSIX, though it is widely available.
Use sed instead:
sed -i '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name
try this simple one
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | tee file_name
Your file will not be blank this time :) and your output is also printed to your terminal.
You can't use a redirection operator (> or >>) to the same file, because the redirection is processed first and creates/truncates the file before the command is even invoked. To avoid that, you should use appropriate tools such as tee, sponge, sed -i, or any other tool that can write results back to the file (e.g. sort file -o file).
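A quick illustration of the truncation (demo.txt is a throwaway file of my choosing):
printf 'a\nb\n' > demo.txt
grep a demo.txt > demo.txt
cat demo.txt
The final cat prints nothing: the shell truncated demo.txt before grep ever ran.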
Basically redirecting input to the same original file doesn't make sense and you should use appropriate in-place editors for that, for example Ex editor (part of Vim):
ex '+g/seg[0-9]\{1,\}\.[0-9]\{1\}/d' -scwq file_name
where:
'+cmd'/-c - run any Ex/Vim command
g/pattern/d - remove lines matching a pattern using global (help :g)
-s - silent mode (man ex)
-c wq - execute :write and :quit commands
You may use sed to achieve the same (as already shown in other answers); however, in-place editing (-i) is a non-standard FreeBSD extension (it may work differently between Unix/Linux implementations), and sed is fundamentally a stream editor, not a file editor. See: Does Ex mode have any practical use?
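For reference, the in-place flag is spelled differently between the two major implementations; a sketch of both forms:
sed -i 's/old/new/' file
sed -i '' 's/old/new/' file
The first form is GNU sed, where the backup suffix is optional and attached to -i; the second is BSD/macOS sed, where the suffix is a separate, possibly empty, argument.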
One-liner alternative: set the content of the file as a variable:
VAR=`cat file_name`; echo "$VAR"|grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' > file_name
Since this question is the top result in search engines, here's a one-liner based on https://serverfault.com/a/547331 that uses a subshell instead of sponge (which often isn't part of a vanilla install like OS X):
echo "$(grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name)" > file_name
The general case is:
echo "$(cat file_name)" > file_name
Edit: the above solution has some caveats:
printf '%s' <string> should be used instead of echo <string> so that files containing -n don't cause undesired behavior.
Command substitution strips trailing newlines (this is a bug/feature of shells like bash) so we should append a postfix character like x to the output and remove it on the outside via parameter expansion of a temporary variable like ${v%x}.
Using a temporary variable $v stomps the value of any existing variable $v in the current shell environment, so we should nest the entire expression in parentheses to preserve the previous value.
Another bug/feature of shells like bash is that command substitution strips unprintable characters like null from the output. I verified this by calling dd if=/dev/zero bs=1 count=1 >> file_name and viewing it in hex with cat file_name | xxd -p. But echo $(cat file_name) | xxd -p is stripped. So this answer should not be used on binary files or anything using unprintable characters, as Lynch pointed out.
The general solution (albeit slightly slower, more memory intensive and still stripping unprintable characters) is:
(v=$(cat file_name; printf x); printf '%s' "${v%x}" > file_name)
Test from https://askubuntu.com/a/752451:
printf "hello\nworld\n" > file_uniquely_named.txt && for ((i=0; i<1000; i++)); do (v=$(cat file_uniquely_named.txt; printf x); printf '%s' ${v%x} > file_uniquely_named.txt); done; cat file_uniquely_named.txt; rm file_uniquely_named.txt
Should print:
hello
world
Whereas calling cat file_uniquely_named.txt > file_uniquely_named.txt in the current shell:
printf "hello\nworld\n" > file_uniquely_named.txt && for ((i=0; i<1000; i++)); do cat file_uniquely_named.txt > file_uniquely_named.txt; done; cat file_uniquely_named.txt; rm file_uniquely_named.txt
Prints an empty string.
I haven't tested this on large files (probably over 2 or 4 GB).
I have borrowed this answer from Hart Simha and kos.
This is very much possible, you just have to make sure that by the time you write the output, you're writing it to a different file. This can be done by removing the file after opening a file descriptor to it, but before writing to it:
exec 3<file ; rm file; COMMAND <&3 >file ; exec 3>&-
Or line by line, to understand it better:
exec 3<file # open a file descriptor reading 'file'
rm file # remove file (but fd3 will still point to the removed file)
COMMAND <&3 >file # run command, with the removed file as input
exec 3>&- # close the file descriptor
It's still a risky thing to do, because if COMMAND fails to run properly, you'll lose the file contents. That can be mitigated by restoring the file if COMMAND returns a non-zero exit code:
exec 3<file ; rm file; COMMAND <&3 >file || cat <&3 >file ; exec 3>&-
We can also define a shell function to make it easier to use:
# Usage: replace FILE COMMAND
replace() { exec 3<"$1"; rm "$1"; "${@:2}" <&3 >"$1" || cat <&3 >"$1"; exec 3>&-; }
Example:
$ echo aaa > test
$ replace test tr a b
$ cat test
bbb
Also, note that this will keep a full copy of the original file (until the third file descriptor is closed). If you're using Linux, and the file you're processing is too big to fit twice on the disk, you can check out this script that will pipe the file to the specified command block-by-block while freeing the already-processed blocks. As always, read the warnings in the usage page.
The following will accomplish the same thing that sponge does, without requiring moreutils:
shuf --output=file --random-source=/dev/zero
The --random-source=/dev/zero part tricks shuf into doing its thing without doing any shuffling at all, so it will buffer your input without altering it.
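For example, used in place of sponge in the earlier pipeline (a sketch, reusing the grep from the question):
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | shuf --output=file_name --random-source=/dev/zero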
However, it is true that using a temporary file is best, for performance reasons. So, here is a function that I have written that will do that for you in a generalized way:
# Pipes a file into a command, and pipes the output of that command
# back into the same file, ensuring that the file is not truncated.
# Parameters:
# $1: the file.
# $2: the command. (With $3... being its arguments.)
# See https://stackoverflow.com/a/55655338/773113
siphon()
{
local tmp file rc=0
[ "$#" -ge 2 ] || { echo "Usage: siphon filename [command...]" >&2; return 1; }
file="$1"; shift
tmp=$(mktemp -- "$file.XXXXXX") || return
"$#" <"$file" >"$tmp" || rc=$?
mv -- "$tmp" "$file" || rc=$(( rc | $? ))
return "$rc"
}
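A hypothetical invocation, again reusing the grep from the question:
siphon file_name grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}'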
There's also ed (as an alternative to sed -i):
# cf. http://wiki.bash-hackers.org/howto/edit-ed
printf '%s\n' H 'g/seg[0-9]\{1,\}\.[0-9]\{1\}/d' wq | ed -s file_name
You can use slurp with POSIX Awk:
!/seg[0-9]\{1,\}\.[0-9]\{1\}/ {
q = q ? q RS $0 : $0
}
END {
print q > ARGV[1]
}
Example
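Assuming the program above is saved as strip.awk (a filename of my choosing), running it rewrites the file in place, since the END block only opens ARGV[1] for writing after the whole input has been slurped into q:
awk -f strip.awk file_name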
This does the trick pretty nicely in most of the cases I faced:
cat <<< "$(do_stuff_with f)" > f
Note that while $(…) strips trailing newlines, <<< ensures a final newline, so generally the result is magically satisfying.
(Look for “Here Strings” in man bash if you want to learn more.)
Full example:
#! /usr/bin/env bash
get_new_content() {
sed 's/Initial/Final/g' "${1:?}"
}
echo 'Initial content.' > f
cat f
cat <<< "$(get_new_content f)" > f
cat f
This does not truncate the file and yields:
Initial content.
Final content.
Note that I used a function here for the sake of clarity and extensibility, but that’s not a requirement.
A common use case is JSON editing:
echo '{ "a": 12 }' > f
cat f
cat <<< "$(jq '.a = 24' f)" > f
cat f
This yields:
{ "a": 12 }
{
"a": 24
}
Try this
echo -e "AAA\nBBB\nCCC" > testfile
cat testfile
AAA
BBB
CCC
echo "$(grep -v 'AAA' testfile)" > testfile
cat testfile
BBB
CCC
I usually use the tee program to do this:
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | tee file_name
Be aware, though, that unlike sponge, tee does not buffer the whole input first: it truncates file_name as soon as it starts, so this is a race that happens to work for small files but can lose data on larger ones.

Linux: Append variable to end of line using line number as variable

I am new to shell scripting. I am using ksh.
I have this particular line in my script, which I use to append the text in a variable q to the end of a particular line given by the variable a, which contains the line number.
sed -i ''$a's#$#'"$q"'#' test.txt
Now the variable q can contain a large amount of text, with all sorts of special characters, such as !##$%^&*()_+:"<>.,/;'[]= etc etc, no exceptions. For now, I use a couple of sed commands in my script to remove any ' and " in this text (sed "s/'/ /g" | sed 's/"/ /g'), but still when I execute the above command I get the following error
sed: -e expression #1, char 168: unterminated `s' command
Any sed, awk, perl, suggestions are very much appreciated
The difficulty here is to quote (escape) the substitution separator characters # in the sed command:
sed -i ''$a's#$#'"$q"'#' test.txt
For example, if q contains # it will not work. The # will terminate the replacement pattern prematurely. Example: q='a#b', a=2, and the command expands to
sed -i 2s#$#a#b# test.txt
which will not append a#b to the end of line 2, but rather a#.
This can be solved by escaping the # characters in q:
sed -i 2s#$#a\#b# test.txt
However, this escaping could be cumbersome to do in shell.
Another approach is to use another level of indirection. Here is an example using a Perl one-liner. First, q is passed to the script in quoted form. Then, within the script, its value is assigned to an internal Perl variable $q. Using this approach there is no need to escape the substitution separator characters:
perl -pi -E 'BEGIN {$q = shift; $a = shift} s/$/$q/ if $. == $a' "$q" "$a" test.txt
Do not bother trying to sanitize the string. Just put it in a file, and use sed's r command to read it in:
echo "$q" > tmpfile
sed -i -e ${a}rtmpfile test.txt
Ah, but that creates an extra newline that you don't want. You can remove it with:
sed -e ${a}rtmpfile test.txt | awk 'NR=='$a'{printf "%s", $0; next}1' > output
Another approach is to use the patch utility if present in your system.
patch test.txt <<-EOF
${a}c
$(sed "${a}q;d" test.txt)$q
.
EOF
${a}c will be replaced with the line number followed by c, which means the operation is a change at line ${a}.
The second line is the replacement text for the change: the concatenated value of the original line and the added text.
The lone . terminates the replacement text.
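For instance, if line 2 of test.txt reads Second line, then with a=2 and q=' EDITED' (values mine) the here-document expands to this ed-style patch:
2c
Second line EDITED
.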

parse grep output and run vim with result

I'm currently using the command line to grep for a pattern in a source tree. A line of grep output is in the form:
path/to/a/file.java:123: some text here
If I want to open the file at the location specified in the grep output, I would have to manually enter the vim command as:
$ vim +123 path/to/a/file.java
Is there an easier method that would let me use the raw grep output, have the relevant components parsed, and run vim on the file at that line number?
I am interested in a command line solution. I am aware that I can do greps inside vim.
Thanks
The file-line plugin is exactly what you want. With that installed, you can just run
vim path/to/a/file.java:123
You could simply run grep from Vim itself and benefit from the quickfix list/window:
:grep -Rn foo **/*.h
:cw
(scroll around)
<CR>
Or you could pass your grep output to Vim for the same benefits:
$ vim -q <(grep -Rn foo **/*.h)
:cw
(scroll around)
<CR>
Or, if you are already in Vim, you could insert the output of your grep in a buffer and use gF to jump to the right line of the right file:
:r !grep -Rn foo **/*.h
(scroll around)
gF
Or, from your shell:
$ vim <(grep -Rn foo **/*.h)
(scroll around)
gF
Or, if you just ran your grep, you can reuse it like so:
$ vim <(!!)
(scroll around)
gF
Or, if you know its number in history:
$ vim <(!884)
(scroll around)
gF
> vim $(cat the.file | grep xxx)
This evaluates the $(): it finds xxx in the.file and passes the matching text to vim as arguments.
Also possible with backticks ``:
> vim `cat the.file | grep xxx`
Try this:
grep -nr --null pattern | { IFS= read -rd "" f; IFS=: read -d "" n match; vim +$n "$f" </dev/tty; }
grep does a recursive search for pattern. For the first file that it finds, vim is started with the +linenum parameter to put you on the line of interest.
This approach uses NUL-separated i/o. It should be safe for all file names, even ones that contain white space or other difficult characters.
This was tested on GNU tools (Linux). It may work on BSD/OSX as well.
Multiline version
For those who prefer their commands spread over multiple lines:
grep -nr --null pattern | {
IFS= read -rd "" f
IFS=: read -d "" n match
vim +$n "$f" </dev/tty
}
Convenience function
Because the above command is long, one may want to put it in a shell function:
vigrep() { grep -nr --null "$1" | { IFS= read -rd "" f; IFS=: read -d "" n match; vim +$n "$f" </dev/tty; }; }
Once this has been defined, it can be used to search for a file containing any pattern. For example:
vigrep 'some text here'
To make the definition of vigrep permanent, put it in your ~/.bashrc file.
How it works
grep -nr --null pattern
-r tells grep to search recursively.
-n tells grep to return line number of the matches.
--null tells grep to use NUL-separated output
pattern is the regex to search for.
IFS= read -rd "" f
This reads the first NUL-separated section of input (which will be a file name) and assigns it to the shell variable f.
IFS=: read -d "" n match
This reads the next NUL-separated section of input using : as the word separator. The first word (which is the line number) is assigned to shell variable n. The rest of this line will be ignored.
vim +$n "$f" </dev/tty
This starts vim on line number $n of file $f using the terminal, /dev/tty, for input.
Generally, when running vim, one really wants to have vim accept input from the keyboard. That is why, for this case, we hard-coded input from /dev/tty.
Using cut-and-paste to launch vim
Start the following and cut-and-paste a line of grep -n output to it:
IFS=: read f n rest; vim +$n "$f"
The read command will wait for a line on standard input. The type of input it expects looks like:
path/to/a/file.java:123: some text here
Because IFS=:, it divides up the line on colons and assigns the file name to shell variable f and the line number to shell variable n. When this is done, it launches the vim command.
This command could also, if desired, be saved as a shell function:
grvim() { IFS=: read f n rest; vim "+$n" "$f"; }
I have this function in my .bashrc:
grep_edit(){
grep "$#" | sed 's/:/ +/;s/:/ /';
}
So, the output is in the form:
path/to/a/file.java +123 some text here
Then I can directly use
$ vi path/to/a/file.java +123
Note: I have also heard of the file-line plugin, but I was not sure how it would work with the netrw plugin.
e.g. vi can open remote files with this syntax:
vi scp://root@remote-system//var/log/daemon.log
But if that is not a concern, the file-line plugin may be the better option.

Append text to file from command line without using io redirection

How can we append text in a file via a one-line command without using io redirection?
If you don't mind using sed then,
$ cat test
this is line 1
$ sed -i '$ a\this is line 2 without redirection' test
$ cat test
this is line 1
this is line 2 without redirection
As the documentation may be a bit long to go through, some explanations :
-i means an inplace transformation, so all changes will occur in the file you specify
$ is used to specify the last line
a means append a line after
\ is simply used as a delimiter
If you just want to tack something on by hand, then the sed answer will work for you. If instead the text is in file(s) (say file1.txt and file2.txt):
Using Perl:
perl -e 'open(OUT, ">>", "outfile.txt"); print OUT while (<>);' file*.txt
N.B. while the >> may look like an indication of redirection, it is just the file open mode, in this case "append".
You can use the --append feature of tee:
cat file01.txt | tee --append bothFiles.txt
cat file02.txt | tee --append bothFiles.txt
Or shorter,
cat file01.txt file02.txt | tee --append bothFiles.txt
I assume the request for no redirection (>>) comes from the need to use this in xargs or similar. So if that doesn't count, you can mute the output with >/dev/null.
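For example, to append without echoing anything to the terminal:
cat file01.txt | tee --append bothFiles.txt > /dev/null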
You can use Vim in Ex mode:
ex -sc 'a|BRAVO' -cx file
a append text
x save and close
On Linux/GNU systems, the simplest and cleanest solution is:
dd of=oldfile oflag=append conv=notrunc
Simple and clean, because no quoting or backslashitis is required.
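A minimal usage sketch (file name and text mine):
echo 'this is line 2' | dd of=oldfile oflag=append conv=notrunc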
Unfortunately, this also doesn't work on BSD (and so, on Darwin), because their dd has no oflag. Argh! Can anyone suggest how to do it with the BSD dd?

Add a prefix string to beginning of each line

I have a file as below:
line1
line2
line3
And I want to get:
prefixline1
prefixline2
prefixline3
I could write a Ruby script, but it is better if I do not need to.
prefix will contain /. It is a path, /opt/workdir/ for example.
# If you want to edit the file in-place
sed -i -e 's/^/prefix/' file
# If you want to create a new file
sed -e 's/^/prefix/' file > file.new
If prefix contains /, you can use any other character not in prefix, or
escape the /, so the sed command becomes
's#^#/opt/workdir#'
# or
's/^/\/opt\/workdir/'
awk '$0="prefix"$0' file > new_file
In awk the default action is '{print $0}' (i.e. print the whole line), so the above is equivalent to:
awk '{print "prefix"$0}' file > new_file
With Perl (in-place replacement):
perl -pi -e 's/^/prefix/' file
You can use Vim in Ex mode:
ex -sc '%s/^/prefix/|x' file
% select all lines
s replace
x save and close
If your prefix is a bit complicated, just put it in a variable:
prefix=path/to/file/
Then, you pass that variable and let awk deal with it:
awk -v prefix="$prefix" '{print prefix $0}' input_file.txt
Here is a highly readable one-liner solution using the ts command from moreutils
$ cat file | ts prefix | tr -d ' '
And how it's derived step by step:
# Step 0. create the file
$ cat file
line1
line2
line3
# Step 1. add prefix to the beginning of each line
$ cat file | ts prefix
prefix line1
prefix line2
prefix line3
# Step 2. remove spaces in the middle
$ cat file | ts prefix | tr -d ' '
prefixline1
prefixline2
prefixline3
If you have Perl:
perl -pe 's/^/PREFIX/' input.file
Using & (which refers to the whole part of the input that was matched by the pattern):
cat in.txt | sed -e "s/.*/prefix&/" > out.txt
OR using back references:
cat in.txt | sed -e "s/\(.*\)/prefix\1/" > out.txt
Using the shell:
#!/bin/bash
prefix="something"
file="file"
while read -r line
do
echo "${prefix}$line"
done <$file > newfile
mv newfile $file
While I don't think pierr had this concern, I needed a solution that would not delay output from the live "tail" of a file, since I wanted to monitor several alert logs simultaneously, prefixing each line with the name of its respective log.
Unfortunately, sed, cut, etc. introduced too much buffering and kept me from seeing the most current lines. Steven Penny's suggestion to use the -s option of nl was intriguing, and testing proved that it did not introduce the unwanted buffering that concerned me.
There were a couple of problems with using nl, though, related to the desire to strip out the unwanted line numbers (even if you don't care about the aesthetics of it, there may be cases where using the extra columns would be undesirable). First, using "cut" to strip out the numbers re-introduces the buffering problem, so it wrecks the solution. Second, using "-w1" doesn't help, since this does NOT restrict the line number to a single column - it just gets wider as more digits are needed.
It isn't pretty if you want to capture this elsewhere, but since that's exactly what I didn't need to do (everything was being written to log files already, I just wanted to watch several at once in real time), the best way to lose the line numbers and have only my prefix was to start the -s string with a carriage return (CR or ^M or Ctrl-M). So for example:
#!/bin/ksh
# Monitor the widget, framas, and dweezil
# log files until the operator hits <enter>
# to end monitoring.
PGRP=$$
for LOGFILE in widget framas dweezil
do
(
tail -f $LOGFILE 2>&1 |
nl -s"^M${LOGFILE}> "
) &
sleep 1
done
read KILLEM
kill -- -${PGRP}
Using ed:
ed infile <<'EOE'
,s/^/prefix/
wq
EOE
This substitutes, for each line (,), the beginning of the line (^) with prefix. wq saves and exits.
If the replacement string contains a slash, we can use a different delimiter for s instead:
ed infile <<'EOE'
,s#^#/opt/workdir/#
wq
EOE
I've quoted the here-doc delimiter EOE ("end of ed") to prevent parameter expansion. In this example, it would work unquoted as well, but it's good practice to prevent surprises if you ever have a $ in your ed script.
Here's a wrapped up example using the sed approach from this answer:
$ cat /path/to/some/file | prefix_lines "WOW: "
WOW: some text
WOW: another line
WOW: more text
prefix_lines
function show_help()
{
IT=$(cat <<EOF
Usage: PREFIX {FILE}
e.g.
cat /path/to/file | prefix_lines "WOW: "
WOW: some text
WOW: another line
WOW: more text
EOF
)
echo "$IT"
exit
}
# Require a prefix
if [ -z "$1" ]
then
show_help
fi
# Check if input is from stdin or a file
FILE=$2
if [ -z "$2" ]
then
# If no stdin exists
if [ -t 0 ]; then
show_help
fi
FILE=/dev/stdin
fi
# Now prefix the output
PREFIX=$1
sed -e "s/^/$PREFIX/" $FILE
You can also achieve this using the backreference technique
sed -i.bak 's/\(.*\)/prefix\1/' foo.txt
You can also use with awk like this
awk '{print "prefix"$0}' foo.txt > tmp && mv tmp foo.txt
Using Pythonize (pz):
pz '"preix"+s' <filename
Simple solution using a for loop on the command line with bash (note that $(cat yourfile.txt) splits on whitespace, so this only works line-by-line when lines contain no spaces):
for i in $(cat yourfile.txt); do echo "prefix$i"; done
Save the output to a file:
for i in $(cat yourfile.txt); do echo "prefix$i"; done > yourfilewithprefixes.txt
You can do it using AWK
echo example| awk '{print "prefix"$0}'
or
awk '{print "prefix"$0}' file.txt > output.txt
For suffix: awk '{print $0"suffix"}'
For prefix and suffix: awk '{print "prefix"$0"suffix"}'
For people on BSD/OSX systems there's utility called lam, short for laminate. lam -s prefix file will do what you want. I use it in pipelines, eg:
find -type f -exec lam -s "{}: " "{}" \; | fzf
...which will find all files, exec lam on each of them, giving each file a prefix of its own filename. (And pump the output to fzf for searching.)
If you need to prepend a text at the beginning of each line that has a certain string, try following. In the following example, I am adding # at the beginning of each line that has the word "rock" in it.
sed -i -e 's/^.*rock.*/#&/' file_name
SETLOCAL ENABLEDELAYEDEXPANSION
YourPrefix=blabla
YourPath=C:\path
for /f "tokens=*" %%a in (!YourPath!\longfile.csv) do (echo !YourPrefix!%%a) >> !YourPath!\Archive\output.csv
