Node.js console.log ASCII symbol - node.js

Hi, I am using Node.js for my app, and I want to print ASCII symbols in the terminal. Here is a table of ASCII symbols; please check the Extended ASCII Codes section. I want to print a square or circle, for example 178 or 219.
Can anyone tell me how to do this? Thank you.

Like several other languages, JavaScript suffers from The UTF-16
Curse. Except that JavaScript has an even worse form of it, The UCS-2
Curse. Things like charCodeAt and fromCharCode only ever deal with
16-bit quantities, not with real, 21-bit Unicode code points.
Therefore, if you want to print out something like 𝒜, U+1D49C,
MATHEMATICAL SCRIPT CAPITAL A, you have to specify not one character
but two “char units”: "\uD835\uDC9C".
Please refer to this link: https://dheeb.files.wordpress.com/2011/07/gbu.pdf
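For illustration, both of the following one-liners print the same 𝒜 (assuming your terminal font can display it). String.fromCodePoint, standard since ES2015 and available in any recent Node although not mentioned above, takes the full 21-bit code point directly, while the string literal needs the surrogate pair:
node -e "console.log('\uD835\uDC9C')"
node -e "console.log(String.fromCodePoint(0x1D49C))"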
Your desired character is not a printable ASCII character. On Linux you can print all the printable ASCII characters by running this command:
for((i=32;i<=127;i++)) do printf \\$(printf '%03o\t' "$i"); done;printf "\n"
or
man ascii
So what you can do is print Unicode characters instead. Here is a list of all the available Unicode characters; you can pick one that looks almost identical to your desired character.
http://unicode-table.com/en/#2764
I've tested this on a Windows terminal and it still doesn't show the desired character, but it works on Linux. If it still doesn't work, make sure LANGUAGE="en_US.UTF-8" is set in /etc/rc.conf and LANG="en_US.UTF-8" in /etc/locale.conf.
So printing out something like this on node console:
console.log('\u2592 start typing...');
will output this result:
▒ start typing...
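For the codes from the question specifically: 178 and 219 are CP437 "extended ASCII" codes, and their Unicode equivalents are U+2593 (dark shade) and U+2588 (full block). So, assuming a UTF-8-capable terminal:
console.log('\u2593 \u2588'); // CP437 178 and 219 rendered via their Unicode equivalents
which prints:
▓ █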

Actually, if you only care about ASCII, that should not be a real problem at all. You only have to escape the characters properly. A good reference for this is https://mathiasbynens.be/notes/javascript-escapes
console.log('\xB2 \xDB')
This works for me with a recent-ish Node under Windows (cmd shell) and macOS. For these characters you can just convert the code to hex and prefix it with \x in your strings. Give it a try with node -e "console.log('\xB2')"

And when you try this answer and it works, you might want to try:
node -e "console.log('\x07')"
(\x07 is the ASCII BEL control character, so this one beeps instead of printing anything visible.)

Related

my bashrc contains strange characters (if Ä -f ü/.bash_aliases Å; then . ü/.bash_aliases fi)

On a GCP Compute Engine Linux instance, I accidentally did cat filebeat instead of cat filebeat.yaml.
After that, my bashrc displays the characters below, and if I type '~' bash prints 'ü'.
I need help fixing this:
if Ä -f ü/.bash_aliases Å; then
. ü/.bash_aliases
fi
This looks like your terminal was accidentally configured for legacy ISO-646-SE or a variant. Your file is probably fine; it's just that your terminal remaps the display characters according to a scheme from the 1980s.
A quick hex dump should verify that the characters in the file are actually correct. Here's an example of what you should see.
bash$ echo '[\]' | xxd
00000000: 5b5c 5d0a                                [\].
Even if the characters are displayed as ÄÖÅ, they are correct if you see the hex codes 5B, 5C, and 5D. (If you don't have xxd, try hexdump or od -t x1.)
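For example, the od variant shows the same bytes like this:
bash$ echo '[\]' | od -t x1
0000000 5b 5c 5d 0a
0000004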
Probably
bash$ tput reset
can set your terminal back to sane settings. Maybe stty sane might work too (but less likely, in my experience). Else, try logging out and back in.
Back when ASCII was the only game in town and American (or really any) hardware was exported to places where its character repertoire was insufficient, the local vendor would replace the ROM chips in terminals to remap some of the slightly less common character codes so they displayed the missing local glyphs. Over time, this became standardized; the ISO-646 standard was updated to document these local overrides. (The linked Wikipedia page has a number of tables with details.)
Eventually, 8-bit character sets became the norm, and then most locales switched to Latin-1 or some other suitable character set which no longer needed this hack. However, it was still rather prevalent even in the early 1990s. In the early 2000s, Unicode started taking over, and so now this seems like an absurd arrangement.
I'm guessing the file you happened to cat contained some control characters which instructed your terminal to switch to this legacy character set. It's not entirely uncommon (though usually when it happens to me, it switches to some "graphical" character set where some characters display box-drawing characters or mathematical symbols).

How to echo/print actual file contents on a unix system

I would like to see the actual file contents without them being formatted for display. For example, to show:
\n0.032,170\n0.034,290
Instead of:
0.032,170
0.034,290
Is there a command to echo the file's actual data in bash? I've tried using head, cat, more, etc. but all those seem to echo the "print-formatted" text. For example:
$ cat example.csv
0.032,170
0.034,290
How can I print the actual characters within the file?
This reads as if you misunderstand what the "actual characters in the file" are. You will not find the characters \ and n in that file, only a line feed, which is a single specific character. So utilities like cat do in fact output exactly the characters in the file.
Putting it the other way around: if you really had those two characters literally in the file, then a utility like cat would output them. I just checked, to be sure.
You can easily verify this yourself by opening the file in a hex editor. There you will see the byte 0A (decimal 10), which is the line feed character. You will not see the pair of characters \ and n anywhere in that file.
Many programming languages, and also shell environments, use escape sequences like \n in string literals to denote control characters that would otherwise be untypable. Maybe that is where your impression comes from that your file should contain those two characters.
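You can also check this from the command line; piping one of your lines through xxd (if you have it) shows the line feed as the single trailing byte 0a:
bash$ echo '0.032,170' | xxd
00000000: 302e 3033 322c 3137 300a                 0.032,170.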
To display newlines as \n, you might try:
awk 1 ORS='\\n' input-file
This is not the "actual characters in the file", as \n is merely a conventional method of displaying a newline, but this does seem to be what you want.
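For example, with the example.csv from the question:
bash$ awk 1 ORS='\\n' example.csv
0.032,170\n0.034,290\n
(The shell prompt will continue on the same line, since ORS replaced the final newline as well.)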

unidentified characters in terminal

I encountered these strange, garbled characters when trying to calculate pi in the terminal over a Beowulf cluster.
How can I convert these characters into something legible?
Interestingly, when I use fewer processes, the result is normal.
Thanks in advance.
Edit:
This was done with MPICH 1 and 1000 processes over a 3-computer cluster.
Because the output has lots of Unicode replacement characters, it looks as if the locale settings on your machine are not set to use UTF-8 encoding.
Of course, it could simply be from attempting to print binary data on the terminal. But locale is a possibility. In either case, the terminal is running with UTF-8 encoding and your output is not valid UTF-8 text.
Resetting the terminal will not be helpful; it is the application (or your use of it) which is the problem.
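To check the locale possibility, something along these lines should do (en_US.UTF-8 is just an example value):
locale                    # look for a UTF-8 codeset in LANG / LC_CTYPE
export LANG=en_US.UTF-8   # one way to switch the current session to UTF-8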
Further reading:
Overcoming frustration: Correctly using unicode in python2
Avoid printing unicode replacement character in Java

Text to hexadecimals using hexdump - æøå characters

Using a shell/bash script, I need to convert some text to hexadecimals, so I pipe the source text into hexdump; so far so good. The problem is the æøå characters. They show up fine in the console (UTF-8), but the hexadecimal values hexdump provides for them aren't correct. All other standard Latin letters are fine. I use echo -en "Some text containing æøåÆØÅ" | hexdump -v -e '"xx" 1/1 "%02X"', then I use sed to replace the xx with %. Everything, i.e. all letters, punctuation, newlines, etc., comes out right, just not the non-standard Latin letters.
So, how do I go about solving this? Is it the input codepage that is the problem, or is there some limitation in hexdump? Thanks!
EDIT: By codepage, I mean character encoding. Not 100% sure it is the same thing. Bear with me please! :)
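(For what it's worth: hexdump is byte-oriented, and æøå are each two bytes in UTF-8, so getting two hex values per letter is expected rather than a hexdump limitation. A minimal check, assuming a UTF-8 terminal:
echo -n 'æ' | hexdump -v -e '"xx" 1/1 "%02X"'
prints xxC3xxA6, where C3 A6 is the two-byte UTF-8 encoding of æ, U+00E6.)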

Removing lines containing encoding errors in a text file

I must warn you, I'm a beginner. I have a text file in which some lines contain encoding errors. By "error", I mean what I get when viewing the file in my Linux console: question marks instead of characters.
I want to remove every line showing those question marks. I tried to grep -v the problematic character, but it doesn't work. The file itself is UTF-8, and I guess some of the lines come from texts encoded in another format. I know I could find a way to convert them properly, but I just want them gone for now.
Do you have any ideas about how I could do this, please?
PS: Some lines contain diacritics which are displayed fine. The strings command seems to remove too many "good" lines.
When dealing with mojibake on character encodings other than ANSI, you must check two things:
Is the file really encoded in X? (X being UTF-8 without BOM in your case. You could be trying to read UTF-8 with BOM, UTF-16, Latin-1, etc. as UTF-8, and that would be the problem.) Try reading the file in (not converting it to) other encodings and see if any of them fits.
Is your locale or text editor set to read the file as UTF-8? If not, that may be the problem. Check for support and figure out how to change the setting. On Linux, use the locale command to check the current settings; the LANG and LC_* environment variables control them.
I like how Notepad++ for Windows (which also runs perfectly on Linux under Wine) lets you pick any encoding in which to read the file without trying to convert it (of course, if you pick any encoding other than the one the file is actually in, you will only see those weird characters), and also has a separate option to convert from one encoding to another. That has been pretty useful to me.
If you are a beginner you may be interested in this article. It explains briefly and clearly the whats, whys and hows of character encoding.
[EDIT] If the above fails, even with windows-1252 and similar ANSI encodings, I've just learned here how to remove non-ASCII characters using the tr Unix command, turning the file into plain ASCII (but be aware that the information in the extra characters is lost in this output and there is no coming back, so keep the input file around in case you find a better fix):
tr -cd '\11\12\40-\176' < $INPUT_FILE > $OUTPUT_FILE
or, if you want to get rid of the whole line:
grep -v -P "[^\11\12\40-\176]" $INPUT_FILE > $OUTPUT_FILE
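As a quick illustration of the difference between the two (the sample lines are made up; the é is the two-byte UTF-8 sequence C3 A9):
bash$ printf 'caf\xc3\xa9 ok\nplain line\n' | tr -cd '\11\12\40-\176'
caf ok
plain line
bash$ printf 'caf\xc3\xa9 ok\nplain line\n' | grep -v -P "[^\11\12\40-\176]"
plain line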
[EDIT 2] This answer here gives a pretty good guess at what could be happening if none of the encodings works on your file (unfortunately, the only straightforward solution seems to be removing the problematic characters).
You can use a micro-Perl script like the following, which strips the non-ASCII characters themselves (writing the result to standard output):
perl -pe 's/[^[:ascii:]]+//g;' my_utf8_file.txt
