Preserving accented letters when running a Perl script from the Linux terminal - linux

I want to get a plain text file from the French Wikipedia dump XML file.
To that end, I am applying a Perl script.
I can give the full file if necessary; I only added the line
tr/a-zàâéèëêîôûùç-/ /cs;
to the script here: http://mattmahoney.net/dc/textdata.html
However, when I run the following in a Linux terminal:
perl filterwikifr.pl frwiki.xml > frwikiplaintext.txt
the output text file does not print accented letters correctly. For example, I get catÃ©gorie instead of catégorie...
I also tried:
perl -CS filterwikifr.pl frwiki.xml > frwikiplaintext.txt
with no better success (and other variants instead of -CS...).

The problem is with the text editor gedit.
If, instead of opening the file directly, I open gedit first, then use "Open" and, under "Character encoding" at the bottom of the dialog, choose UTF-8 instead of "Automatically Detected", the accents are displayed correctly.
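To confirm that the output itself is valid UTF-8, so that the problem really is gedit's auto-detection rather than the script, a quick check with the file utility (using the filenames from the question):

file -bi frwikiplaintext.txt
# expected: text/plain; charset=utf-8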

Related

Unicode character not visible while doing cat

I have a CSV file generated by a Windows system. The file is then moved to Linux. The Linux environment is NAME="Red Hat Enterprise Linux Server", VERSION="7.3 (Maipo)", ID="rhel".
When I use the vi editor, all characters are visible. For example, one line reads: "Sarah--bitte nicht löschen".
But when I cat the file, I get something like "Sarah--bitte nicht l▒schen".
This file is consumed by the DataStage application, and these Unicode characters come through as "?" in DataStage. Since cat is not showing the characters properly, I believe the issue is on the Linux server. Any help is appreciated.
vi reads the file using the encoding given by its fenc setting and displays the content according to your locale setting (the $LANG environment variable). If fenc differs from the locale, vi handles the conversion.
But cat does no such conversion; it always outputs the exact byte stream unchanged.
Your terminal then renders the output of both vi and cat using your local machine's locale settings.
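A sketch of how to diagnose and fix this on the Linux server; the filename report.csv and the source encoding are assumptions, so check them with the first two commands:

locale          # what the terminal expects, e.g. LANG=en_US.UTF-8
file -bi report.csv          # guess the file's actual charset
iconv -f WINDOWS-1252 -t UTF-8 report.csv > report.utf8.csv          # convert if it is a Windows encoding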

How to open a "-" dashed filename using terminal?

I tried gedit, nano, vi, leafpad and other text editors; it won't open. I tried cat and other file-viewing commands, and I assure you it's a file, not a directory!
This kind of filename causes a lot of misunderstanding, because using - as an argument conventionally refers to STDIN/STDOUT, i.e. /dev/stdin or /dev/stdout. So if you want to open this type of file, you have to specify an explicit path to it, such as ./-. For example, if you want to see what is in that file, use cat ./-
Both cat < - and cat ./- will give you the output.
You can use redirection:
cat < -file_name
It looks like the rev command doesn't treat - as a special character.
From the man page
The rev utility copies the specified files to standard output, reversing the order of characters in every line.
so
rev - | rev
should show what's in the file in the correct order.
I tried the pico and vi commands. pico readme allowed me to open the file in an editor and read the contents.
cat ./- is the syntax that reveals the correct password for bandit; rev - reveals something else.
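Putting the suggestions together, a minimal demonstration; the file named - and its contents here are made up:

echo 'hello' > ./-          # create a file literally named "-"
cat ./-          # an explicit path, so cat does not treat it as stdin
cat < -          # or let the shell open it via redirection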

syntax error near unexpected token ' - bash

I have written a sample script on my Mac
#!/bin/bash
test() {
echo "Example"
}
test
exit 0
and it works fine, displaying Example.
When I run this script on a RedHat machine, it says
syntax error near unexpected token '
I checked that bash is available using
cat /etc/shells
which bash shows /bin/bash
Did anyone come across the same issue?
Thanks in advance!
It could be a file encoding issue.
I have encountered file encoding issues when moving files between different operating systems and editors, in my case particularly between Linux and Windows systems.
I suggest checking your file's encoding to make sure it is suitable for the target Linux environment. An encoding issue is less likely given you are using a Mac than if you had used a Windows text editor, but I think file encoding is still worth considering.
--- EDIT (Adding an actual solution, as recommended by @Potatoswatter)
To demonstrate how file encoding could be the issue, I copy/pasted your example script into Notepad in Windows (I don't have access to a Mac), then copied it to a Linux machine and ran it:
jdt@cookielin01:~/windows> sh ./originalfile
./originalfile: line 2: syntax error near unexpected token `$'{\r''
./originalfile: line 2: `test() {
In this case, Notepad saved the file with carriage returns and linefeeds, causing the error shown above. The \r indicates a carriage return (Linux systems terminate lines with linefeeds \n only).
On the Linux machine, you could test this theory by running the following to strip carriage returns from the file, if they are present:
cat originalfile | tr -d "\r" > newfile
Then try to run the new file: sh ./newfile. If this works, the issue was carriage returns as hidden characters.
Note: This is not an exact replication of your environment (I don't have access to a Mac); however, it seems likely that the issue is that an editor, somewhere, saved carriage returns into the file.
--- /EDIT
To elaborate a little, operating systems and editors can have different file encoding defaults. Typically, the application or editor will influence the file encoding used; for instance, I think Microsoft Notepad and Notepad++ default to Windows-1252. There may be newline differences to consider too (in Windows environments, a carriage return and linefeed pair is often used to terminate lines in files, whilst in Linux and OSX only a linefeed is usually used).
A similar question and answer that references file encoding is here: bad character showing up in bash script execution
Try something like:
$ sudo apt-get install dos2unix
$ dos2unix offendingfile
An easy way to convert the example.sh file to Unix format if you are working in Windows is to use Notepad++ (Edit > EOL Conversion > UNIX/OSX Format).
You can also set the default EOL in Notepad++ (Settings > Preferences > New Document/Default Directory > select Unix/OSX under the Format box).
Thanks @jdt for your answer.
Following that, and since I keep running into this carriage return issue, I wrote this small script. Just run carriage_return and you'll be prompted for the file to "clean".
https://gist.github.com/kartonnade/44e9842ed15cf21a3700
alias carriage_return=remove_carriage_return
remove_carriage_return(){
    # cygwin throws an error like:
    #   syntax error near unexpected token `$'{\r''
    # due to carriage returns; this function runs the equivalent of
    #   cat originalfile | tr -d "\r" > newfile
    read -p "File to clean ? " file_to_clean
    temp_file_to_clean="${file_to_clean}_"
    # file to clean => temporary clean file
    tr -d "\r" < "$file_to_clean" > "$temp_file_to_clean"
    # temporary clean file => new clean file
    tr -d "\r" < "$temp_file_to_clean" > "$file_to_clean"
    # remove temporary clean file
    rm "$temp_file_to_clean"
}
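To use it, save the snippet somewhere your shell sources at startup (the gist above), then type carriage_return and enter the name of the file to clean at the prompt.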
I want to add to the answers above how to check whether it is a carriage return issue in a Unix-like environment (I tested on macOS).
1) Using cat
cat -e my_file_name
If you see lines ending with ^M$, then yes, it is a carriage return issue.
2) Find the first line with a carriage return character
grep -r $'\r' Grader.sh | head -1
3) Using vim
vim my_file_name
Then in vim, type
:set ff
If you see fileformat=dos, then the file came from a DOS/Windows environment and contains carriage returns.
Once you have confirmed it, you can use the methods mentioned above by other people to correct your file.
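For completeness, a common one-liner fix once the problem is confirmed, assuming GNU sed (dos2unix, suggested above, does the same job):

sed -i 's/\r$//' my_file_name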
I had the same problem when I was working with Armbian Linux and Windows.
I was trying to copy my code from Windows to Armbian, and this error popped up when I ran it. My problem was solved this way:
1- Copy your files from Windows using WinSCP.
2- Make sure that your file name does not contain () characters.

Linux replace ^M$ with $ in csv

I have received a CSV file from an FTP server which I am ingesting into a table.
While ingesting the file I am receiving the error "File was a truncated file".
The actual reason is that the data in the file contains $ and ^M$ at the end of the lines.
e.g.:
ACT_RUN_TM, PROG_RUN_TM, US_HE_DT^M$
"CONFIRMED","","3600"$
How can I remove these $ and ^M$ from the ends of the lines using a Linux command?
The ultimately correct solution is to transfer the file from the FTP server in text mode rather than binary mode, which does the appropriate end-of-line conversion for you. Change your download scripts or FTP application configuration to enable text transfers to fix this in future.
Assuming this is a one-shot transfer and you have already downloaded the file and just want to fix it, you can use tr(1) to translate characters. To remove all control-M characters from a file, pipe it through tr -d '\r'. Or, if you want to replace them with control-J (newline) instead (for example, if the file came from a pre-OSX Mac system), use tr '\r' '\n'.
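Spelled out as full commands, with input.csv and output.csv as placeholder names:

tr -d '\r' < input.csv > output.csv          # drop CRs: CRLF endings become plain LF
tr '\r' '\n' < input.csv > output.csv          # or map CR to LF (pre-OSX Mac endings)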
It's odd to see ^M as not-the-last character, but:
sed -e 's/^M*\$$//g' <badfile >goodfile
Or use "sed -i" to update in-place.
(Note that "^M" is entered on the command line by pressing CTRL-V CTRL_M).
Update: It's been established that the question is wrong as the "^M$" are not in the file but displayed with VI. He actually wants to change CRLF pairs to just LF.
sed -e 's/^M$//g' <badfile >goodfile

encoding problem?

I work with txt files, and I recently found e.g. these characters in a few of them:
http://pastebin.com/raw.php?i=Bdj6J3f4
What could these characters be? Wrong character encoding? I just want to use normal UTF-8 TXT files, but when I use:
iconv -t UTF-8 input.txt > output.txt
it's still the same.
When I open the files in gedit and copy+paste them into other txt files, there are no characters like the ones in the pastebin. So gedit can solve this problem; it encodes the TXT files well. But there are too many txt files.
Why are there http://pastebin.com/raw.php?i=Bdj6J3f4 -like chars in the text files? Can they be converted to "normal chars"? I can't see e.g. the "Ì" char when I open the files with vim, only after I "work with them" (e.g. awk, etc.).
It would help if you posted the actual binary content of your file (perhaps by using the output of od -t x1). The pastebin returns this as HTML:
"Ì"
"Ã"
"é"
The first line corresponds to U+00C3 U+0152. The last line corresponds to U+00C3 U+00A9, which is the character U+00E9 ("é") encoded in UTF-8 as "\xc3\xa9" with those bytes then reinterpreted as Latin-1.
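A small demonstration of how that double encoding arises, assuming a UTF-8 terminal: printf emits the UTF-8 bytes for "é" (0xc3 0xa9), and iconv, told they are Latin-1, re-encodes each byte as UTF-8.

printf 'é' | od -t x1          # c3 a9 : the UTF-8 encoding of é
printf 'é' | iconv -f ISO-8859-1 -t UTF-8          # prints Ã©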
From man iconv:
The iconv program converts text from one encoding to another encoding. More precisely, it converts from the encoding given for the -f option to the encoding given for the -t option. Either of these encodings defaults to the encoding of the current locale.
Because you didn't specify the -f option, it assumes the file is encoded in your current locale's encoding (probably UTF-8), which apparently is not true. Your text editors (gedit, vim) do some encoding detection: you can check which encoding they detect (I don't know how, as I don't use either of them) and pass that as the -f option to iconv (or save the open file with your desired encoding using one of those text editors).
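For example, if the editors (or chardet below) report the files as ISO-8859-2, treat that as the guess to try:

iconv -f ISO-8859-2 -t UTF-8 input.txt > output.txt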
You can also use a tool for encoding detection, like the Python chardet module:
$ python -c "import chardet as c; print c.detect(open('file.txt').read(4096))"
{'confidence': 0.7331842298102511, 'encoding': 'ISO-8859-2'}
... solved!
How: I just right-clicked on the folders containing the TXT files and pasted them into another folder... :O and presto, there are no more ugly chars.
