Linux console: how to change the codepage to DOS CP437

I want to view some ANSI art on the local Linux console (my setup: Raspberry Pi 3 / newest Raspbian, no X11).
I've tried many different settings in raspi-config, dpkg-reconfigure console-setup, /etc files, and environment variables, but I've had no luck yet. Do I need a special PCF font to get it working?
A reliable way to enable it for remote terminals would also be great.
Thanks in advance.

It depends on what your data uses (see chart). Codes 0..31 are a problem unless you have a program that can map those codes to printable values (as noted in Why does showconsolefont have different output in tmux?, the showconsolefont program does this mapping of 0..31).
Most of the usable fonts for the Linux console are "psf" fonts, which have a header telling which Unicode values each glyph corresponds to. Using that, along with a known character set (CP437), you could convert the data or "play" it using an application which knows how to do this:
You could convert it using iconv or recode, or
the line-drawing range (128..255) could be displayed using luit in a UTF-8 console.
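As a minimal sketch of the conversion route (the filename art.ans is hypothetical, and a UTF-8 console is assumed), Python's built-in cp437 codec does the same job as iconv:

# Minimal sketch: convert CP437 ANSI art for a Unicode console.
# "art.ans" is a hypothetical input file; this is the Python
# equivalent of: iconv -f CP437 -t UTF-8 art.ans
with open("art.ans", "rb") as f:
    raw = f.read()

# Bytes 128..255 map to CP437's line-drawing and symbol glyphs;
# bytes 0..31 stay control codes, which is the problem noted above.
print(raw.decode("cp437"))

Redirecting the output to a file gives you a UTF-8 copy you can cat on any Unicode-capable terminal, local or remote.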

Related

How to disable encoding in a text-editor?

This is such a basic question I am surprised I could not easily find an answer to it:
I use Notepad++ to write my scripts in. Someone sent me some code for a shell script (.sh) that I could modify to suit my needs. I simply changed a small bit of text using Notepad++ (on Windows) and used FileZilla (SFTP) to upload it to my server (Debian Linux).
There were a few problems with this that it took my server admin an hour to find, namely:
FileZilla, for whatever reason, defaults to ASCII rather than binary! (changed it to binary and removed the .sh association with ASCII)
The permissions were wrong, chmod took care of this
The problem is that it STILL did not work. To fix it, my server admin simply copied the text right on the server (using vim or nano) into a new shell script file and saved that. Before that he kept saying the problem was Windows (which he loves to hate on), but it seems it is the encoding the text editors are using that is corrupting the files.
He said my text-editor encoding needs to be set to "None". However, that is not an option - only ANSI, UTF and UCS variants are options!
How can I create a shell script on Windows with no encoding whatsoever so that it doesn't get corrupted?
I need to be able to simply transfer the file to the server; messing around with modifying it once it is on the server is wholly impractical.
To fix the end-of-line characters and encoding in Notepad++:
At the bottom right of Notepad++, right-click the area just to the left of the encoding indicator ("UTF-8") and click Convert to UNIX (LF) format. Be sure to change the encoding to UTF-8 if that is not already the case.
In FileZilla:
Transfer mode: Auto
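The corruption in a case like this is usually Windows CRLF line endings, which break the #! line when the kernel reads the script. If you would rather fix the file than the editor, here is a small sketch (hypothetical filename) that rewrites it with Unix line endings:

# Minimal sketch: strip Windows line endings from a script.
# "myscript.sh" is a hypothetical filename; dos2unix does the same job.
with open("myscript.sh", "rb") as f:
    data = f.read()

# Replace CRLF (and any stray CR) with plain LF.
data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

with open("myscript.sh", "wb") as f:
    f.write(data)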

D Language fails to display German Umlaute on Windows?

As you can see, D fails to output German Umlaute, at least on Windows. On Linux or BSD, the same program outputs the string as I saved it.
I have already tried wstring and dstring, but the output is the same.
What am I doing wrong?
D will output UTF-8 regardless of the operating system. How the output is interpreted depends on how it is displayed. In this particular case, it looks like your IDE is interpreting the output as if it were encoded in the Windows-1252 encoding.
For the standard Windows console, you could change the output encoding by calling SetConsoleOutputCP(65001), but note that this may have some undesired side effects (you should restore the codepage before your program exits, and batch files may not run while the console output codepage is set to 65001).
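SetConsoleOutputCP is a plain Win32 call, so the save-and-restore pattern described above can be sketched without D; here it is in Python via ctypes (Windows only; the Umlaut string is just sample data):

# Minimal sketch (Windows only): switch the console to UTF-8 output
# and restore the original codepage on exit, as cautioned above.
import ctypes
import sys

kernel32 = ctypes.windll.kernel32
original = kernel32.GetConsoleOutputCP()   # remember the current codepage
kernel32.SetConsoleOutputCP(65001)         # 65001 = UTF-8
try:
    # Write raw UTF-8 bytes; a console set to codepage 65001
    # should decode them as UTF-8.
    sys.stdout.buffer.write("Umlaute: äöüß\n".encode("utf-8"))
    sys.stdout.buffer.flush()
finally:
    kernel32.SetConsoleOutputCP(original)  # restore before exiting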
CyberShadow's post guided me to an acceptable answer. :-)
In Eclipse it is possible to change the output-encoding without changing global settings of the OS.
Go to Run --> Run Configurations...
There, select the Common tab and change the encoding to UTF-8. Now German Umlaute are displayed correctly - at least in Eclipse. :-)
Another possibility is to use https://babun.github.io/ . It is a Cygwin-based shell that outputs UTF-8.

How to convert old Japanese text encodings?

I run a MacBook Pro under 10.6.7, and I am competent in Unix. I have old Japanese text files in various encodings (EUC, SJS, New-JIS) that I can no longer read or display. The old program jconv.c does not help, since it only converts among those encodings. Is there a way to convert them (or any one of them) to the current "normal" Japanese text that can be seen in TextEdit, etc.? I have set the Terminal preferences to SJS and EUC (I can't find New-JIS), among others, including UTF-8. Eleanor
I recommend you look into iconv for doing such conversions.
nkf is a Linux command-line program which can meet your requirement.
I am sure it is available in Debian, and you can also download the source code from the net.
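If you prefer to script it yourself, Python ships codecs for all three encodings the question names (New-JIS is, in practice, the 7-bit ISO-2022-JP encoding), so a conversion sketch with a hypothetical filename looks like this:

# Minimal sketch: convert a legacy Japanese text file to UTF-8.
# "old.txt" is a hypothetical filename; iconv and nkf do the same job.
# EUC -> "euc_jp", SJS -> "shift_jis", New-JIS -> "iso2022_jp"
with open("old.txt", "rb") as f:
    raw = f.read()

text = raw.decode("euc_jp")  # pick the codec matching the source file

with open("old-utf8.txt", "w", encoding="utf-8") as f:
    f.write(text)

The resulting UTF-8 file opens normally in TextEdit and friends.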

I exported via mysqldump to a file. How do I find out the file encoding of the file?

Given a text file on Ubuntu (or Debian, or Unix in general), how do I find out the file's encoding? Can I run od or hexdump on it to fingerprint its encoding? What should I be looking out for?
There are many tools to do this. Try a web search for "detect encoding". Here are some of the tools I found:
The International Components for Unicode (ICU) are a great place to start. See especially their page on Character Set Detection.
Chardet is a Python module to guess the encoding of a file (a short usage sketch follows below). See chardet.feedparser.org
The *nix command-line tool file detects file types, but might also detect encodings if mentioned in the file (e.g. if there's a mime-type notation in the file). See man file
Perl modules Encode::Detect and Encode::Guess.
Someone asked a similar question in StackOverflow. Search for the question, PHP: Detect encoding and make everything UTF-8. That's in the context of fetching files from the net and using PHP, but you could write a command-line PHP script.
Note well what the ICU page says about character set detection: "Character set detection is ..., at best, an imprecise operation using statistics and heuristics...." In my experience the problem domain makes a big difference in how easy or difficult the job is. Don't forget that it's possible that the octets in a file can be of ambiguous encoding, i.e. sensibly interpreted using multiple different encodings. They can also be of mixed encoding, i.e. different subsets of the octets make sense interpreted in different encodings. This is why there's not a single command-line tool I can recommend which always does the job.
If you have a single file and you just want to get it into a known encoding, my trick is to open the file with a text editor which can import using a bunch of different encodings, such as TextWrangler or OpenOffice.org. First, open the file and let the editor guess the encoding. Take a look at the result. If you aren't satisfied with it, guess an encoding, open the file with the editor specifying that encoding, and take a look at the result. Then save as a known encoding, e.g. UTF-16.
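For instance, the chardet module from the list above takes only a few lines to drive (the filename is hypothetical; install the module with pip install chardet first):

# Minimal sketch: guess a file's encoding with chardet.
import chardet

with open("dump.sql", "rb") as f:  # hypothetical mysqldump output
    raw = f.read()

guess = chardet.detect(raw)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
print(guess["encoding"], guess["confidence"])

Treat the confidence value with the skepticism the ICU quote above recommends.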
You can use enca. Enca is a small command-line tool for encoding detection and conversion.
You can install it on Debian/Ubuntu with:
apt-get install enca
In order to use it, just call
enca FILENAME
Also see the manpage for more information.

How do I have Emacs load a font from a file?

In the interest of making my emacs setup more portable, I'd like to be able to set the current font by specifying a file rather than a font name, i.e. "Load ~/config/myfont.ttf and use size 12". Is there a way to do that in my .emacs? All the instructions I've found assume the font is already installed on the system. I'm using the XFT support on Linux, so a linux specific hack would be OK but I'd prefer something that would work on all targets.
Update: To be clear, I'm using a font that isn't standard on Windows / OS X / Linux. I'm not just looking to set a different font based on platform, but to specify a specific font file that I have (TTFs work on Windows and Linux, if not on Mac I'll get another version of the file but I still want to specify the font via file rather than name).
Unfortunately, you can't.
Emacs on different platforms uses different windowing toolkits, all of which take care of font handling for it. I don't believe you can specify a font filename in Emacs on any platform - it just doesn't work that way.
As for how to find the font:
On Linux, you could use XFT's support for a user-specific font config file, usually ~/.fonts.conf (but check /etc/fonts/fonts.conf to be sure), to add whatever directory you place your fonts into.
On a Mac, you can add the font into ~/Library/Fonts. TTFs work fine on Macs, BTW.
On Windows, I think you'd just have to add it to the system fonts directory.
From there, you then go and tell Emacs (through customize or not) to use your font. You'll find the naming schemes to be different on each platform (not sure what Windows looks like), but customize should help take care of this for you - just keep a separate customize file per machine if need be.
...so basically your portable Emacs setup has to encompass more than just an Emacs config file (which, given that you're carrying a font file around, it already does).
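That said, the Linux side can be automated from the portable config itself. Here is a sketch, assuming fontconfig scans ~/.fonts (a common default) and fc-cache is on PATH; the source path comes straight from the question:

# Minimal sketch: copy a bundled TTF into the per-user font directory
# and refresh the fontconfig cache so Emacs/XFT can find it by name.
import pathlib
import shutil
import subprocess

font_src = pathlib.Path.home() / "config" / "myfont.ttf"  # path from the question
font_dir = pathlib.Path.home() / ".fonts"                 # scanned by fontconfig

font_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(font_src, font_dir)
subprocess.run(["fc-cache", str(font_dir)], check=True)   # rebuild the cache

After that, Emacs can refer to the font by name in the usual platform-specific way.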
