Using Perl on Win7 to write a file for Linux and having only Linux line endings

This Perl script is running on Win7, modifying a Clearcase config spec that will be read on a Linux machine. Clearcase is very fussy about its line endings: they must be precisely and only \n (0x0A). However, try as I may, I cannot get Perl to spit out only \n endings; they usually come out as \r\n (0x0D 0x0A).
Here's the Perl snippet, running over an array of config spec elements and converting element /somevob/... bits into element /vobs/somevob/... and printing to a file handle.
$fh = new FileHandle;
foreach my $line (@cs_array)
{
    $line =~ s/(element|load)(\s+\/)(.+)/$1$2vobs\/$3/g;
    $line =~ s/[\r\n]/\n/g; # tried many things here
    $fh->print($line);
}
$fh->close();
Sometimes the elements in the array are multi-line and separated by \n
element /vob1/path\nelement\n/vob2/path\nload /vob1/path\n element\n
/vob3/path
load /vob3/path
When I look at the file written on Win7 in a binary viewer, there is always a 0x0D 0x0A newline sequence, which Clearcase on Linux complains about. This appears to come from the print.
Any suggestions? I thought this would be a 10 minute job...

Try
$fh->binmode;
Otherwise you're probably in text mode, and for Windows this means that \n is translated to \r\n.

You are running afoul of the :crlf IO layer that is the default for Perl on Windows.
You can use binmode after the fact to remove this layer, or you can open the filehandle with :raw (the default layer on *nix) or some other appropriate IO layer in the first place.
Sample:
$fh = FileHandle->new($FileName, '>:raw');
Check perldoc open for more details on IO layers.


sed command working on command line but not in perl script

I have a file in which I have to replace all the words like $xyz, and for them I have to make substitutions like these:
$xyz with ${xyz}.
$abc_xbs with ${abc_xbs}
$ab,$cd with ${ab},${cd}
The file also has some words like ${abcd} which I don't have to change.
I am using this command
sed -i 's?\$([A-Z_]+)?\${\1}?g' file
It works fine on the command line but not inside a Perl script as
sed -i 's?\$\([A-Z_]\+\)?\$\{\1\}?g' file;
What am I missing?
I think adding some backslashes would help. I tried adding some, but with no success.
Thanks
In a Perl script you need valid Perl language, just like you need valid C text in a C program. In the terminal sed.. is understood and run by the shell as a command but in a Perl program it is just a bunch of words, and that line sed.. isn't valid Perl.
You would need this inside qx() (backticks) or system() so that it is run as an external command. Then you'd indeed need "some backslashes," which is where things get a bit picky.
But why run a sed command from a Perl script? Do the job with Perl
use warnings;
use strict;
use File::Copy 'move';

my $file = 'filename';
my $out_file = 'new_' . $file;

open my $fh, '<', $file or die "Can't open $file: $!";
open my $fh_out, '>', $out_file or die "Can't open $out_file: $!";

while (<$fh>)
{
    s/\$( [^{] [a-z_]* )/\${$1}/gix;
    print $fh_out $_;
}

close $fh_out;
close $fh;

move $out_file, $file or die "Can't move $out_file to $file: $!";
The regex uses a negated character class, [^...], to match any character other than { following $, thus excluding already braced words. Then it matches a sequence of letters or underscore, as in the question (possibly none, since the first non-{ already provides at least one).
With 5.14+ you can use the non-destructive /r modifier
print $fh_out s/\$([^{][a-z_]*)/\${$1}/gir;
with which the changed string is returned (and original is unchanged), right for the print.
The output file, in the end moved over the original, should be made using File::Temp. Overwriting the original this way changes $file's inode number; if that's a concern see this post for example, for how to update the original inode.
A one-liner (command-line) version, to readily test
perl -wpe's/\$([^{][a-z_]*)/\${$1}/gi' file
This only prints to console. To change the original add -i (in-place), or -i.bak to keep backup.
A reasonable question of "Isn't there a shorter way" came up.
Here is one, using the handy Path::Tiny for a file that isn't huge so we can read it into a string.
use warnings;
use strict;
use Path::Tiny;

my $file = 'filename';

my $new_content = path($file)->slurp =~ s/\$([^{][a-z_]*)/\${$1}/gir;
path($file)->spew( $new_content );
The first line reads the file into a string, on which the replacement runs; the changed text is returned and assigned to a variable. Then that variable with new text is written out over the original.
The two lines can be squeezed into one, by putting the expression from the first instead of the variable in the second. But opening the same file twice in one (complex) statement isn't exactly solid practice and I wouldn't recommend such code.
However, since the module's version 0.077 you can nicely do
path($file)->edit_lines( sub { s/\$([^{][a-z_]*)/\${$1}/gi } );
or use edit to slurp the file into a string and apply the callback to it.
So this cuts it to one nice line after all.
I'd like to add that shaving off lines of code mostly isn't worth the effort, and it can certainly lead to trouble if it disturbs, even a bit, the focus on code structure and correctness. However, Path::Tiny is a good module, this use of it is legitimate, and it does shorten things quite a bit.

perl output messed up in fedora, ubuntu

I wrote a perl script for mapping two data sets. When I run the program using the Linux terminal, the output is messed up. It seems like the output is overlapping. I am using Fedora 25. I have tried the code on Windows and it works fine.
Same problem is there on Ubuntu as well.
DESIRED:
ADAM 123 JOHN 321
TOM 473 BENTLY 564
and so on....
OUTPUT that i am getting:
ADAM 123N 321
TOM 473TLY 564
and so on......
The same problem remains on Ubuntu 16.04 LTS.
Please help.
code:
use warnings;

open F, '<', "friendship_network_wo_weights1.txt" or die;
open G, '<', "username_gender_1.txt" or die;

my %list;
while (<G>){
    chomp $_;
    my @a = split /\t/, $_;
    $list{$a[0]} = $a[1];
}
close G;

while (<F>){
    chomp $_;
    my @b = split /\t/, $_;
    if ((exists $list{$b[0]}) && (exists $list{$b[1]})){
        my $get = "$b[0]\t$list{$b[0]}\t$b[1]\t$list{$b[1]}\n";
        $get =~ s/\r//g;
        print "$get";
    }
}
close F;
The problem is that on Windows the newline is \r\n, while on everything else it's \n. Assuming these files were created on Windows, when you read them on Unix each line will still have a trailing \r after the chomp.
\r is the "carriage return" character. It's like on an old typewriter how you had to move the whole typehead back to the left side at the end of a line, computer displays used to be fancy typewriters called Teleprinters. When you print it, the cursor moves back to the beginning of the line. Anything you print after that gets overwritten. Here's a simple example.
print "foo\rbar\r\n";
What you'll see is bar. This is because it prints...
foo
\r sends the cursor back to the start of the line
bar overwrites foo
\r sends the cursor back to the start of the line
\n goes to the start of the next line (doesn't matter where the cursor is)
chomp will only remove whatever is in $/ off the end of the string. On Unix that's \n. On Windows it's \r\n.
There are a number of ways to solve this. One of the safest is to manually remove newlines of both types with a regex.
# \015 is octal character 015 which is carriage return.
# \012 is octal character 012 which is newline
$line =~ s{\015?\012$}{};
That says to remove maybe a \r and definitely a \n at the end of the line.

Concatenating string read from file with string literals creates jumbled output

My problem is that the result is jumbled. Consider this script:
#!/bin/bash
INPUT="filelist.txt"
i=0;
while read label
do
i=$[$i+1]
echo "HELLO${label}WORLD"
done <<< $'1\n2\n3\n4'
i=0;
while read label
do
i=$[$i+1]
echo "HELLO${label}WORLD"
done < "$INPUT"
filelist.txt
5
8
15
67
...
The first loop, with the immediate input (through something I believe is called a here-string, the <<< operator), gives the expected output
HELLO1WORLD
HELLO2WORLD
HELLO3WORLD
HELLO4WORLD
The second loop, which reads from the file, gives the following jumbled output:
WORLD5
WORLD8
WORLD15
WORLD67
I've tried echo $label: This works as expected in both cases, but the concatenation fails in the second case as described. Further, the exact same code works on my Win 7, git-bash environment. This issue is on OSX 10.7 Lion.
Well, just as I was about to hit post, the solution hit me. Sharing here so someone else can find it - it took me 3 hours to debug this (despite being on SO for almost all that time) so I see value in addressing this specific (common) use case.
The problem is that filelist.txt was created in Windows. This means it has CRLF line endings, while OSX (like other Unix-like environments) expects LF only line endings. (See more here: Difference between CR LF, LF and CR line break types?)
I used the answer here to convert the file before consumption. With sed I only managed to replace the final line's carriage return, so I stuck to known guns and went for the perl approach. The final script is below:
#!/bin/bash
INPUTFILE="filelist.txt"
INPUT=$(perl -pe 's/\r\n|\n|\r/\n/g' "$INPUTFILE")
i=0;
while read label
do
    i=$[$i+1]
    echo "HELLO${label}WORLD"
done <<< "$INPUT"
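For what it's worth, a bash-only alternative sketch is to trim a trailing CR from each line as it is read, using ${var%...} with $'\r' (no conversion pass needed):

```shell
# Trim the trailing CR inside the loop instead of converting up front.
printf '5\r\n8\r\n' | while IFS= read -r label
do
    label=${label%$'\r'}       # drop one trailing CR, if present
    echo "HELLO${label}WORLD"
done
```

This prints HELLO5WORLD and HELLO8WORLD even though the input lines end in CRLF.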
Question has been asked in a different form at Bash: Concatenating strings fails when read from certain files

How to clean a data file from binary junk?

I have this data file, which is supposed to be a normal ASCII file. However, it has some junk at the end of the first line. It only shows when I look at it with vi or less:
y mon d h XX11 XX22 XX33 XX44 XX55 XX66^#
2011 6 6 10 14.0 15.5 14.3 11.3 16.2 16.1
grep is also saying that it's a binary file: Binary file data.dat matches
This is causing some trouble in my parsing script. I'm splitting each line and putting the fields into an array. The last element (XX66) in the first array is corrupted because of the junk, and I can't match against it.
How do I clean that line or the array? I have tried running dos2unix on the file and substituting array members with s/\s+$//. What is that junk anyway? Unfortunately I have no control over the data; it's third-party data.
Any ideas?
Grep is trying to be smart and, when it sees an unprintable character, switches to "binary" mode. Add "-a" or "--text" to force grep to stay in "text" mode.
As for sed, try sed -e 's/\([^ -~]*\)//g', which says, "change everything not between space and tilde (chars 0x20 and 0x7E, respectively) into nothing". That'll strip tabs, too, but you can insert a tab character before the space to include them (or any other special character).
The "^#" is one way to represent an NUL (aka "ascii(0)" or "\0"). Some programs may also see that as an end-of-file if they were implemented in a naive way.
If it's always the same codes (eg ^# or related) then you can find/replace them.
In Vim for example:
:%s/^#//g typed on Vim's command line will clear out any of those characters.
To enter a character such as ^#, hold down the Ctrl key, press 'v', and then press the character you need; in the above case, remember to hold Shift down to get the # key. The Ctrl key should be held down until the end.
The ^# looks like it's a control character. I can't figure out what character it should be, but I guess that's not important.
You can use s/^#//g to get rid of them, but you have to actually COPY the character; just putting ^ and # together won't do it.
I created this small script to remove all binary, non-ASCII and some annoying characters from a file. Note that the character ranges are octal-based:
#!/usr/bin/perl
use strict;
use warnings;

my $filename = $ARGV[0];

open my $fh, '<', $filename or die "File not found: $!";
open my $fh2, '>', 'report.txt' or die "Cannot open report.txt: $!";
binmode($fh);

my ($xdr, $buffer) = ("", "");

# read 1 byte at a time until end of file ...
while (read ($fh, $buffer, 1) != 0) {
    # append the buffer value to the xdr variable
    $xdr .= $buffer;
    # keep the byte unless it falls in one of the unwanted octal ranges
    if ($xdr !~ /[\0-\11\13-\14\16-\37\41-\55\176-\177]/) {
        print $fh2 $xdr;
    }
    $xdr = "";
}

# finally, clean all the characters that are not ASCII.
system("perl -plne 's/[^[:ascii:]]//g' report.txt > $filename.clean.txt");
Stripping individual characters using sed is going to be very slow, perhaps several minutes for a 100 MB file.
As an alternative, if you know the format/structure of the file, e.g. a log file where the "good" lines of the file start with a timestamp, then you can grep out the good lines and redirect those to a new file.
For example, if we know that all good lines start with a timestamp with the year 2021, we can use this expression to only output those lines to a new file:
grep -a "^2021" mylog.log > mylog2.log
Note that you must use the -a or --text option with grep to force grep to output lines when it detects that the file is binary.

How to remove ^M (CRLF) from a file sent from Windows to a Linux FTP server in Perl?

I'm sending a comma-delimited file (in ASCII) via Net::FTP in Perl (generated on Windows) to a Linux-based FTP account. The issue is that my file on the Linux side has ^M at the end of each line. I know I can remove these by running a "dos2unix" command on that file, but how do I remove the ^M on the Windows side so that I send a correct file in the first place?
I tried doing the below but that doesn't affect the file on the Linux side.
$content =~ s/^M//g;
If you had "^","M", then s/\^M//g would work. ("^" is special in regex patterns.) If you had a CR, then s/\r\n/\n/g (or just s/\r//g) would work.
If neither work, please provide a portion of "od -c" of your data file.
When you are writing the file:
open my $fh, '>:raw', $file or die "could not open $file: $!\n";
See perldoc -f binmode.
