I have a failing mocha test that outputs my string with the "Actual" and "Expected" highlighting... except that nothing's highlighted.
After some head-bashing, I think I've determined that my actual string contains some whacky UTF-8 characters that are completely hidden from me, and Mocha doesn't seem to know to highlight them.
I figured this out by writing out my expected and actual values to raw text files and loading them up in Kaleidoscope, which shows that they differ by highlighting what appears to be empty spaces between words.
I tried loading the utf8 library (on npm) and encoding one of the strings with utf8.encode(str). The test still failed, but now the characters appear as something more than blank spaces, and Mocha does highlight them.
But either way, my tests are failing. How can I encode/decode/whatever these strings so that they match and my tests pass?
Btw, the comparison string I'm using in my test looks like this:
Make sure that either your text editor is saving your source code as proper UTF-8, or convert those copy/pasted chars to escaped literals, as @loganfsmyth correctly comments.
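For illustration, here is a minimal sketch of the escaped-literal approach, assuming the hidden character turns out to be a non-breaking space (U+00A0); substitute whatever code point your diff actually reveals:

const assert = require('assert');
// Write the invisible character as an escape so it is visible in the source
// and the comparison is byte-for-byte identical.
const expected = 'some\u00A0words';   // U+00A0 no-break space, escaped
const actual = 'some\u00A0words';     // stand-in for whatever your code returns
assert.strictEqual(actual, expected); // passes once both sides use the same bytes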
What did I want to do?
I was reading file names that have various organ names in their endings; there are many such files, collected using glob.glob('filename/**/blabla').
Later, I tried to check whether a particular string is present inside a filename using the in operator, like
"ADRENALGLAND(LEFT).NRRD" IN "blabla/blabla/blabla/blablabla_ADRENALGLAND(LEFT).NRRD"
It worked for other filenames with the same ending whereas it did not work for a few.
To debug, I tried to check whether filename endings that look visually identical are also programmatically equal, and they are not! Why?
So I compared string to string, as below, and saw a peculiar thing while comparing strings in Python.
Can anyone tell me what is the difference here?
'ADRENALGLAND(LEFT).NRRD' == 'АDRENALGLAND(LEFT).NRRD' => False !!!
I narrowed it down to the leading 'A's, which do not match, while the other characters match properly.
As mentioned by @canbax, I checked the underlying code point of both characters and found that they are different. One gave 65 (the ASCII code for the Latin letter 'A') whereas the other gave 1040 (the code point of the Cyrillic letter 'А', U+0410).
You can use ord() to get the integer code point of a character.
Although the int values are different, the two characters look identical, which is why the difference was invisible in the Jupyter notebook.
Final solution: I replaced the look-alike А with the normal A in the file.
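For illustration, a minimal sketch of the check described above; the second literal starts with the Cyrillic capital А (U+0410) instead of the Latin A:

latin = 'ADRENALGLAND(LEFT).NRRD'
cyrillic = 'АDRENALGLAND(LEFT).NRRD'      # first letter is Cyrillic А (U+0410)
print(latin == cyrillic)                  # False
print(ord(latin[0]), ord(cyrillic[0]))    # 65 1040
# One way to normalize: replace the look-alike with the real Latin A
fixed = cyrillic.replace('\u0410', 'A')
print(latin == fixed)                     # True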
Hi, I am using Node.js for my app, and I want to print ASCII symbols in the terminal. Here is a table of ASCII symbols; please check the Extended ASCII Codes section. I want to print a square or a circle, for example 178 or 219.
Can anyone tell me how I can do it? Thank you.
Like several other languages, Javascript suffers from The UTF-16 Curse. Except that Javascript has an even worse form of it, The UCS-2 Curse. Things like charCodeAt and fromCharCode only ever deal with 16-bit quantities, not with real, 21-bit Unicode code points. Therefore, if you want to print out something like 𝒜, U+1D49C, MATHEMATICAL SCRIPT CAPITAL A, you have to specify not one character but two “char units”: "\uD835\uDC9C".
Please refer to this link: https://dheeb.files.wordpress.com/2011/07/gbu.pdf
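For illustration, a short Node sketch (assuming a UTF-8 capable terminal); String.fromCodePoint, available since ES2015, builds the surrogate pair for you:

console.log('\uD835\uDC9C');                  // 𝒜, written as two char units
console.log(String.fromCodePoint(0x1D49C));   // same character on ES2015+ engines
console.log('\uD835\uDC9C'.length);           // 2, because length counts char units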
Your desired character is not a printable ASCII character. On Linux you can print all the printable ASCII characters by running this command:
for((i=32;i<=127;i++)) do printf \\$(printf '%03o\t' "$i"); done;printf "\n"
or
man ascii
So what you can do is print Unicode characters. Here is a list of all the available Unicode characters; you can select one that looks almost identical to your desired character.
http://unicode-table.com/en/#2764
I've tested on a Windows terminal and it still does not show the desired character, but it works on Linux. If it's still not working, make sure to set LANGUAGE="en_US.UTF-8" in /etc/rc.conf and LANG="en_US.UTF-8" in /etc/locale.conf.
So printing out something like this on node console:
console.log('\u2592 start typing...');
will output this result:
▒ start typing...
Actually, if you only care about ASCII, that should not be a real problem at all. You only have to escape the characters properly. A good reference for this is https://mathiasbynens.be/notes/javascript-escapes
console.log('\xB2 \xDB')
Works for me with recentish node under Windows (cmd shell) and mac OS. For ASCII characters you can just convert them to hex and prepend them with \x in your strings. Give it a try with node -e "console.log('\xB2')"
And when you try this answer, and it works, you might want to try:
node -e "console.log('\x07')"
Right now I'm dealing with a weird problem when trying to match two Scala strings, trying to determine whether the following two strings are the same:
SM8lz5IEIWs7TUhR3ke27pnY3XsjojxqaMEg+ARCGs1nm3sVkwA+CM+XJfdsUxqzqH7LZdkflvny
z621tYkmXA==

and

SM8lz5IEIWs7TUhR3ke27pnY3XsjojxqaMEg+ARCGs1nm3sVkwA+CM+XJfdsUxqzqH7LZdkflvny
z621tYkmXA==
Scala returns false: if I do if (hash1 == hash2), it evaluates to false.
I suspect this is either a whitespace or character encoding issue, since hash matching only fails when trying to match a hash that was produced on a computer of a different operating system. I already tried stripping whitespace using regex, but it still failed.
What have I overlooked? And are there better ways to clean and match hashes in Scala?
Update
After comparing the two strings, Scala thinks hash2 is a single character longer than hash1. So I ran the following functions on both hashes: .trim.replaceAll("""(?m)\s+$""", ""). Still, it says they're not the same. What other characters could be interfering?
I have found the cause of this particular problem. Apparently when processing strings on Macintosh, \r is added in addition to any line breaks. Even though line break characters don't print out on a console, they're still inside the string.
The remedy was to do the following: .trim.replaceAll("\r", "")
And now both strings match.
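Putting the pieces together, a minimal Scala sketch of that cleanup; the exact position of the \r in hash2 is an assumption for illustration:

val hash1 = "SM8lz5IEIWs7TUhR3ke27pnY3XsjojxqaMEg+ARCGs1nm3sVkwA+CM+XJfdsUxqzqH7LZdkflvny\nz621tYkmXA=="
val hash2 = "SM8lz5IEIWs7TUhR3ke27pnY3XsjojxqaMEg+ARCGs1nm3sVkwA+CM+XJfdsUxqzqH7LZdkflvny\r\nz621tYkmXA=="

def clean(s: String): String = s.trim.replaceAll("\r", "")   // strip CRs and outer whitespace

println(hash1 == hash2)                 // false: hash2 carries a hidden \r
println(clean(hash1) == clean(hash2))   // true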
I have the following string, read from an XML attribute:
"OnTrak 4-3/4”, 6-3/4”, 8-1/4” / MPR"
In my C# application it shows up nicely formatted like this
"OnTrak 4-3/4”, 6-3/4”, 8-1/4” / MPR"
This is the form I see in the debugger, a combobox, or on this forum (if I don't indent to specify code).
What I want to do is specify the same string as a C# variable and have it show up nicely formatted when the application runs. Unfortunately, all I get is the string as I literally typed it.
I have tried to play around with converting the encoding from ASCII to UTF8 with no luck. How can I get this special character properly formatted, and where can I find a list of these symbols?
Those are called XML entities. Use HttpUtility.HtmlDecode to decode them back to the plain text you want. Credit goes to "C#, function to replace all html special characters with normal text characters" for how to convert entities in C#.
Note that converting from ASCII to UTF-8 (and Unicode etc.) is called changing the character set, and it is usually needed when specific characters are in the string. For instance, if your strings contained Chinese characters you couldn't use ASCII. In this simple case you shouldn't need to convert character sets, because C# strings use the Unicode character set by default and XML entities are Unicode based (I believe).
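For illustration, a minimal sketch using HttpUtility.HtmlDecode; it lives in System.Web (add a reference to that assembly), and on newer .NET you can use System.Net.WebUtility.HtmlDecode instead:

using System;
using System.Web;   // HttpUtility lives here

class Demo
{
    static void Main()
    {
        string raw = "OnTrak 4-3/4&#8221;, 6-3/4&#8221;, 8-1/4&#8221; / MPR";
        string decoded = HttpUtility.HtmlDecode(raw);
        Console.WriteLine(decoded);   // OnTrak 4-3/4”, 6-3/4”, 8-1/4” / MPR
    }
}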
Vim's errorformat (for parsing compile/build errors) uses an arcane scanf-like format from C.
Trying to set up an errorformat for NAnt seems almost impossible; I've tried for many hours and can't get it. I also see from my searches that a lot of people seem to be having the same problem. A regex to solve this would take minutes to write.
So why does vim still use this format? It's quite possible that the C parser is faster but that hardly seems relevant for something that happens once every few minutes at most. Is there a good reason or is it just an historical artifact?
It's not that Vim uses an arcane format from C. Rather it uses the ideas from scanf, which is a C function. This means that the string that matches the error message is made up of 3 parts:
whitespace
characters
conversion specifications
Whitespace is your tabs and spaces. Characters are the letters, numbers and other normal stuff. Conversion specifications are sequences that start with a '%' (percent) character. In scanf you would typically match an input string against %d or %f to convert to integers or floats. With Vim's error format, you are searching the input string (error message) for files, lines and other compiler specific information.
If you were using scanf to extract an integer from the string "99 bottles of beer", then you would use:
int i;
scanf("%d bottles of beer", &i); // i would be 99, string read from stdin
Now with Vim's error format it gets a bit trickier but it does try to match more complex patterns easily. Things like multiline error messages, file names, changing directory, etc, etc. One of the examples in the help for errorformat is useful:
1 Error 275
2 line 42
3 column 3
4 ' ' expected after '--'
The appropriate error format string has to look like this:
:set efm=%EError\ %n,%Cline\ %l,%Ccolumn\ %c,%Z%m
Here %E tells Vim that it is the start of a multi-line error message. %n is an error number. %C is the continuation of a multi-line message, with %l being the line number, and %c the column number. %Z marks the end of the multiline message and %m matches the error message that would be shown in the status line. You need to escape spaces with backslashes, which adds a bit of extra weirdness.
While it might initially seem easier with a regex, this mini-language is specifically designed to help with matching compiler errors. It has a lot of shortcuts in there. I mean you don't have to think about things like matching multiple lines, multiple digits, matching path names (just use %f).
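For illustration, a small hypothetical case: a tool that prints lines like "ERROR: src/main.c:42: missing semicolon" can be matched with a one-line format, no regex needed:

:set efm=ERROR:\ %f:%l:\ %m

Here %f captures the file path, %l the line number and %m the message; the backslashes only escape the spaces for :set.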
Another thought: how would you map numbers to mean line numbers, or strings to mean files or error messages, if you were to use just a normal regexp? By group position? That might work, but it wouldn't be very flexible. Another way would be named capture groups, but then this syntax looks a lot like a shorthand for that anyway. You can actually use regexp wildcards such as .*; in this language it is written %.%#.
OK, so it is not perfect. But it's not impossible either and makes sense in its own way. Get stuck in, read the help and stop complaining! :-)
I would recommend writing a post-processing filter for your compiler, that uses regular expressions or whatever, and outputs messages in a simple format that is easy to write an errorformat for it. Why learn some new, baroque, single-purpose language unless you have to?
According to :help quickfix,
it is also possible to specify (nearly) any Vim supported regular
expression in format strings.
However, the documentation is confusing and I didn't put much time into verifying how well it works and how useful it is. You would still need to use the scanf-like codes to pull out file names, etc.
They are a pain to work with, but to be clear: you can use regular expressions (mostly).
From the docs:
Pattern matching
The scanf()-like "%*[]" notation is supported for backward-compatibility
with previous versions of Vim. However, it is also possible to specify
(nearly) any Vim supported regular expression in format strings.
Since meta characters of the regular expression language can be part of
ordinary matching strings or file names (and therefore internally have to
be escaped), meta symbols have to be written with leading '%':
%\ The single '\' character. Note that this has to be
escaped ("%\\") in ":set errorformat=" definitions.
%. The single '.' character.
%# The single '*'(!) character.
%^ The single '^' character. Note that this is not
useful, the pattern already matches start of line.
%$ The single '$' character. Note that this is not
useful, the pattern already matches end of line.
%[ The single '[' character for a [] character range.
%~ The single '~' character.
When using character classes in expressions (see |/\i| for an overview),
terms containing the "\+" quantifier can be written in the scanf() "%*"
notation. Example: "%\\d%\\+" ("\d\+", "any number") is equivalent to "%*\\d".
Important note: The \(...\) grouping of sub-matches can not be used in format
specifications because it is reserved for internal conversions.
lol try looking at the actual vim source code sometime. It's a nest of C code so old and obscure you'll think you're on an archaeological dig.
As for why vim uses the C parser, there are plenty of good reasons, starting with the fact that it's pretty universal. But the real reason is that sometime in the past 20 years someone wrote it to use the C parser and it works. No one changes what works.
If it doesn't work for you the vim community will tell you to write your own. Stupid open source bastards.