sed: -i may not be used with stdin on Mac OS X - linux

I am using a bison parser in my project. When I run the following command:
sed -i y.tab.c -e "s/ __attribute__ ((__unused__))$/# ifndef __cplusplus\n __attribute__ ((__unused__));\n# endif/"
I get this error:
sed: -i may not be used with stdin
The command works fine on Linux machines. I am using Mac OS X 10.9, and it throws this error only on Mac OS X. I am not sure why. Can anyone help?
Thanks

The problem is that Mac OS X uses the BSD version of sed, which treats the -i option slightly differently. The GNU version used in Linux takes an optional argument with -i: if present, sed makes a backup file whose name consists of the input file plus the argument. Without an argument, sed simply modifies the input file without saving a backup of the original.
In BSD sed, the argument to -i is required. To avoid making a backup, you need to provide a zero-length argument, e.g. sed -i '' y.tab.c ....
Your command, which on Linux simply edits y.tab.c with no backup, is interpreted by BSD sed as a request to save a backup using 'y.tab.c' as the extension. With no other file left on the command line, sed then thinks you want to edit standard input in place, which is not allowed.
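To make that concrete, the corrected invocation on OS X needs an explicit (here empty) suffix after -i and the file name as a separate operand; roughly (with s/.../.../ standing in for your original expression, which may also need adjusting, since BSD sed does not interpret \n in the replacement text):
sed -i '' -e 's/.../.../' y.tab.c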

From the sed manpage:
-i extension
    Edit files in-place, saving backups with the specified extension. If a zero-length extension is given, no backup will be saved. It is not recommended to give a zero-length extension when in-place editing files, as you risk corruption or partial content in situations where disk space is exhausted, etc.
The solution is to pass a zero-length extension, like this:
sed -i '' 's/apples/oranges/' file.txt
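If you do want a backup, attaching the suffix directly to -i happens to parse the same way under both GNU and BSD sed, so it is a reasonably portable middle ground:
sed -i.bak 's/apples/oranges/' file.txt   # edits file.txt, keeps the original as file.txt.bak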

You need to put the input file as the last parameter.
sed -i -e "s/ __attribute__ ((__unused__))$/# ifndef __cplusplus\n __attribute__ ((__unused__));\n# endif/" y.tab.c

Piggy-backing off of @chepner's explanation for a quick-and-dirty solution:
Install the version of sed that'll get the job done with brew install gnu-sed, then replace usages of sed in your script with gsed.
(The homebrew community is fairly cognizant of issues that can arise if OS X built-ins are overridden unexpectedly, and has worked to avoid doing that for most alternate-distro commands.)
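In practice that looks something like the following (Homebrew installs the GNU version under the name gsed by default, precisely so the system sed isn't shadowed), with your original command unchanged apart from the name:
brew install gnu-sed
gsed -i y.tab.c -e "s/ __attribute__ ((__unused__))$/# ifndef __cplusplus\n __attribute__ ((__unused__));\n# endif/"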

Related

How to use sed to replace variable value declared in a text file

I would like to edit the /etc/environment file to change MY_VARIABLE from VALUE_01 to VALUE_02.
Here is the content of the /etc/environment file:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
JAVA_HOME="/usr/java/jdk8/jdk1.8.0_92-1"
MY_VARIABLE=VALUE_01
Ideally I would like to use the sed command to edit it, for example (please note this is not a working command):
sed -e 'MY_VARIABLE=VALUE_02' -i /etc/environment
How can I achieve it?
sed -- 's/MY_VARIABLE=.*/MY_VARIABLE=VALUE_02/' /etc/environment
Once you've checked that it works, add the -i option:
sed -i -- 's/MY_VARIABLE=.*/MY_VARIABLE=VALUE_02/' /etc/environment
You will probably need root access.
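For example, on a typical Linux box with GNU sed, something along these lines (keeping a .bak backup, since you are editing a system file):
sudo sed -i.bak -- 's/MY_VARIABLE=.*/MY_VARIABLE=VALUE_02/' /etc/environment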
Instead of trying to use sed -i, hoping your version of sed implements that option, and working out whether it takes a mandatory argument or not (I have a feeling you're not using GNU sed, as the linux tag suggests you should be), just use ed to edit files in scripts.
ed -s /etc/environment <<EOF
/^MY_VARIABLE=/c
MY_VARIABLE=VALUE_02
.
w
EOF
This changes the first line starting with MY_VARIABLE= to the given new text and writes the file back to disk.
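If you want to preview the result before touching the file, one option (a sketch) is to print the modified buffer instead of writing it, using ,p and an unconditional Q:
ed -s /etc/environment <<EOF
/^MY_VARIABLE=/c
MY_VARIABLE=VALUE_02
.
,p
Q
EOF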

Improve performance of Bash loop that removes Windows line endings

Editor's note: This question was always about loop performance, but the original title led some answerers - and voters - to believe it was about how to remove Windows line endings.
The bash loop below just removes the Windows line endings and converts them to Unix, and appears to be running, but it is slow. The input files are small (4 files ranging from 167 bytes to 1 KB), all with the same structure (a list of names); the only thing that varies is the length (i.e. some files have 10 names, others 50). Is it supposed to take over 15 minutes to complete this task on a Xeon processor? Thank you :)
for f in /home/cmccabe/Desktop/files/*.txt ; do
bname=`basename $f`
pref=${bname%%.txt}
sed 's/\r//' $f - $f > /home/cmccabe/Desktop/files/${pref}_unix.txt
done
Input .txt files
AP3B1
BRCA2
BRIP1
CBL
CTC1
EDIT
This is not a duplicate, as I was asking why my bash loop that uses sed to remove Windows line endings was running so slowly, not how to remove them. I was asking for ideas that might speed up the loop, and I got many. Thank you :). I hope this helps.
Use the utilities dos2unix and unix2dos to convert between Unix- and Windows-style line endings.
Your 'sed' command looks wrong. I believe the trailing $f - $f should simply be $f. Running your script as written hangs for a very long time on my system, but making this change causes it to complete almost instantly.
Of course, the best answer is to use dos2unix, which was designed to handle this exact thing:
cd /home/cmccabe/Desktop/files
for f in *.txt ; do
pref=$(basename -s '.txt' "$f")
dos2unix -q -n "$f" "${pref}_unix.txt"
done
This always works for me:
perl -pe 's/\r\n/\n/' inputfile.txt > outputfile.txt
You can use dos2unix as stated before, or use this small sed command:
sed 's/\r//' file
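That prints the converted text to standard output; to keep the result you would redirect it or edit in place (a sketch; note that \r inside a sed regex is a GNU extension, and the -i syntax differs between GNU and BSD sed, as discussed in the main question):
sed 's/\r//' file > file.unix   # write the converted copy to a new file
sed -i 's/\r//' file            # GNU sed: convert in place
tr -d '\r' < file > file.unix   # portable alternative without sed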
The key to performance in Bash is to avoid loops in general, and in particular those that call one or more external utilities in each iteration.
Here is a solution that uses a single GNU awk command:
awk -v RS='\r\n' '
BEGINFILE { outFile=gensub("\\.txt$", "_unix&", 1, FILENAME) }
{ print > outFile }
' /home/cmccabe/Desktop/files/*.txt
-v RS='\r\n' sets CRLF as the input record separator; by leaving ORS, the output record separator, at its default of \n, simply printing each input line terminates it with \n.
the BEGINFILE block is executed every time processing of a new input file starts; in it, gensub() is used to insert _unix before the .txt suffix of the input file at hand to form the output filename.
{print > outFile} simply prints the \n-terminated lines to the output file at hand.
Note that use of a multi-character RS value, the BEGINFILE block, and the gensub() function are GNU extensions to the POSIX standard.
Switching from the OP's sed solution to a GNU awk-based one was necessary in order to provide a single-command solution that is both simpler and faster.
Alternatively, here's a solution that relies on dos2unix for conversion of Windows line endings (for instance, you can install dos2unix with sudo apt-get install dos2unix on Debian-based systems); except for requiring dos2unix, it should work on most platforms (no GNU utilities required):
It uses a loop only to construct the array of filename arguments to pass to dos2unix - this should be fast, given that no call to basename is involved; Bash-native parameter expansion is used instead.
then uses a single invocation of dos2unix to process all files.
# cd to the target folder, so that the operations below do not need to handle
# path components.
cd '/home/cmccabe/Desktop/files'
# Collect all *.txt filenames in an array.
inFiles=( *.txt )
# Derive output filenames from it, using Bash parameter expansion:
# '%.txt' matches '.txt' at the end of each array element, and replaces it
# with '_unix.txt', effectively inserting '_unix' before the suffix.
outFiles=( "${inFiles[@]/%.txt/_unix.txt}" )
# Create an interleaved array of *input-output filename pairs* to be passed
# to dos2unix later.
# To inspect the resulting array, run `printf '%s\n' "${fileArgs[@]}"`
# You'll see pairs like these:
# file1.txt
# file1_unix.txt
# ...
fileArgs=(); i=0
for inFile in "${inFiles[@]}"; do
fileArgs+=( "$inFile" "${outFiles[i++]}" )
done
# Now, use a *single* invocation of dos2unix, passing all input-output
# filename pairs at once.
dos2unix -q -n "${fileArgs[@]}"
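To spot-check the conversion afterwards, the file utility reports the line-ending style on most systems:
file /home/cmccabe/Desktop/files/*.txt
# the originals are reported as e.g. "ASCII text, with CRLF line terminators",
# while the *_unix.txt copies should show up as plain "ASCII text"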

sed command that works for Solaris, Linux and HPUX

I need to change a directive in a config file. I got it working on Linux, but on Solaris it says "command garbled".
Here is the directive
enable-cache passwd yes
I need to simply change the yes to no. How can I do this with sed in a way that will work on Solaris, HP-UX and Linux?
Here is the sed command that worked on Linux. Solaris doesn't like the -r:
sed -r 's/^([[:space:]]*check-files[[:space:]]+passwd[[:space:]]+)yes([[:space:]]*)$/\1no\2/' inputfile
The end goal is to put this command in a script and run it across the enterprise.
Thanks
Greg
I also posted something similar yesterday which worked for Linux but not for the others.
Solaris has /usr/bin/sed and /usr/xpg4/bin/sed. Neither supports an -r option, which on Linux (GNU sed) enables extended regular expressions; Solaris sed has no option to switch regex flavors like that. You can use other tools, specifically awk, if you want simpler portability. Otherwise you will have to use two flavors of regex: one with -r and an extended regex, one without -r and a different regex. And you probably want to use /usr/xpg4/bin/sed on Solaris boxes only:
#!/bin/bash
if [ "$(uname -s)" = SunOS ] ; then
/usr/xpg4/bin/sed [longer regex here ]
else
/usr/bin/sed -r [ extended regex here ]
fi
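If you prefer the awk route for portability (with nawk or /usr/xpg4/bin/awk on Solaris rather than the old /usr/bin/awk), a rough, untested sketch of the same edit might look like this, assuming the directive really is enable-cache passwd yes as shown in the question:
awk '$1 == "enable-cache" && $2 == "passwd" && $3 == "yes" { sub(/yes[ \t]*$/, "no") }
     { print }' inputfile > inputfile.new
The field comparisons select the directive line regardless of spacing, and sub() only rewrites the trailing yes, so the original whitespace is preserved.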
This is not strictly equivalent, as [[:space:]] matches more characters, but I assume only space and tab are to be expected in your file. I'm only using standard shell and sed commands, so this should work on Solaris, Linux and HP-UX:
space=$(printf " \t")
sed 's/^\(['"$space"']*check-files['"$space"']['"$space"']*passwd['"$space"']['"$space"']*\)yes\(['"$space"']*\)$/\1no\2/' inputfile
Note that your script doesn't match your sample directive as it expects check-files but is given enable-cache.
[[:space:]] is usually equivalent to just [ \t] here. You also need to escape the parentheses, and + is not supported in basic regular expressions. With these replacements, your working sed command becomes:
sed 's/^\([ \t]*check-files[ \t][ \t]*passwd[ \t][ \t]*\)yes\([ \t]*\)$/\1no\2/' inputfile
Further note: older seds don't have a -i option for in-place changes, so you might first have to copy your target to a temporary file, apply sed to that, and redirect the output to the target.
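For example, along the lines of (using the expression from above):
cp inputfile inputfile.orig
sed 's/^\([ \t]*check-files[ \t][ \t]*passwd[ \t][ \t]*\)yes\([ \t]*\)$/\1no\2/' inputfile.orig > inputfile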

Change a string in a file with sed?

I have an input file with the template shown below. I want to change the Version: line using sed.
Package: somename
Priority: extra
Section: checkinstall
Maintainer: joe@example.com
Architecture: i386
Version: 3.1.0.2-1
Depends:
Provides: somename
Description: some description
Currently I am getting the current version using grep -m 1 Version inputfile | sed 's/[:_#a-zA-Z\s"]*//g' and I am trying to replace the current version with sed 's/3.1.0.2-1/3.1.0.2/' inputfile
However, this does not seem to work, although when I try it on the command line using echo, it works:
echo 'Version: 3.0.9.1' | sed 's/3.0.9.1/3.2.9.2/'
Output: Version: 3.2.9.2
Any help on how I can accomplish this would be appreciated. Preferably I would like to change the version without having to look up the current version in the file first.
Thanks In Advance
You don't need the grep.
sed -i '/Version/s/3\.1\.0\.2-1/3.1.0.2/' <files>
You want to use the "-i" switch to sed for "edit file [I]n place."
See sed man page: http://unixhelp.ed.ac.uk/CGI/man-cgi?sed
The name sed literally comes from "Stream EDitor" - the behavior you're seeing is the way it was designed. When you say:
sed 'some commands' file
it reads the file, executes the commands and prints the result - it doesn't save it back to the file (although some versions of sed have some options to tell it to do that). You probably want to do this:
sed 'some commands' file > newfile
Then verify that newfile is correct, and then mv newfile file. If you're absolutely certain your edit script is correct, and you can deal with the consequences of overwriting your file with wrong data if it's not, then you might consider using the in-place editing flags, but it's generally safer to save to a temporary file so you can test/validate.
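Applied to your file, and since you said you'd rather not look up the current version first, a sketch of that approach could be (assuming 3.1.0.2 is the version you want to end up with):
sed 's/^Version:.*/Version: 3.1.0.2/' inputfile > newfile
# inspect newfile, then:
mv newfile inputfile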
You have a typo: the last dot should be a dash. Try this:
sed 's/3.1.0.2-1/3.1.0.2-2/'

Linux command to replace string in LARGE file with another string

I have a huge SQL file that gets executed on the server. The dump is from my machine, and in it there are a few settings relating to my machine. So basically, I want every occurrence of "c://temp" to be replaced by "//home//some//blah".
How can this be done from the command line?
sed is a good choice for large files.
sed -i.bak -e 's%C://temp%//home//some//blah%' large_file.sql
It is a good choice because it doesn't read the whole file at once to change it. Quoting the manual:
A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). While in some ways similar to an editor which permits scripted edits (such as ed), sed works by making only one pass over the input(s), and is consequently more efficient. But it is sed's ability to filter text in a pipeline which particularly distinguishes it from other types of editors.
The relevant manual section is here. A small explanation follows:
-i.bak enables in-place editing, leaving a backup copy with a .bak extension.
s%foo%bar% uses s, the substitution command, which substitutes matches of the first string between the % signs ('foo') with the second string ('bar'). It's usually written as s//, but because your strings have plenty of slashes, it's more convenient to change the delimiter to something else so you avoid having to escape them.
Example
vinko@mithril:~$ sed -i.bak -e 's%C://temp%//home//some//blah%' a.txt
vinko@mithril:~$ more a.txt
//home//some//blah
D://temp
//home//some//blah
D://temp
vinko@mithril:~$ more a.txt.bak
C://temp
D://temp
C://temp
D://temp
Just for completeness: in-place replacement using Perl.
perl -i -p -e 's{c://temp}{//home//some//blah}g' mysql.dmp
No backslash escapes required either. ;)
Try sed? Something like:
sed 's/c:\/\/temp/\/\/home\/\/some\/\/blah/' mydump.sql > fixeddump.sql
Escaping all those slashes makes this look horrible, though; here's a simpler example which changes foo to bar:
sed 's/foo/bar/' mydump.sql > fixeddump.sql
As others have noted, you can choose your own delimiter, which would prevent the leaning toothpick syndrome in this case:
sed 's|c://temp|//home//some//blah|' mydump.sql > fixeddump.sql
The clever thing about sed is that it operates on a stream rather than on the whole file at once, so you can process huge files using only a modest amount of memory.
There's also a non-standard UNIX utility, rpl, which does the exact same thing that the sed examples do; however, I'm not sure whether rpl operates streamwise, so sed may be the better option here.
The sed command can do that.
Rather than escaping the slashes, you can choose a different delimiter (_ in this case):
sed -e 's_c://temp/_/home//some//blah/_' file1.txt > file2.txt
perl -pi -e 's#c://temp#//home//some//blah#g' yourfilename
The -p flag treats this script as a loop: it reads the specified file line by line, running the regex search and replace.
-i This flag should be used in conjunction with the -p flag. This commands Perl to edit the file in place.
-e Just means execute this perl code.
Good luck
gawk
awk '{gsub("c://temp","//home//some//blah")}1' file
