What is the smallest URL-friendly encoding?

I am looking for a URL encoding method that is as space-efficient as possible. Raw binary (base2) could be represented in base16, which is smaller and URL-safe, but base64 is even more efficient. However, the usual base64 encoding isn't URL-safe...
So what is the smallest encoding method that is also safe for URLs?

This is what the Base64 URL encoding variant is for.
It uses the same standard Base64 Alphabet except that + is changed to - and / is changed to _.
Most modern Base64 implementations will support this alternate encoding. If yours doesn't, it's usually just a matter of doing a search/replace on the Base64 input prior to decoding, or on the output prior to sending it to a browser.
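For illustration, Python's standard library supports both alphabets directly, and a byte translation shows the search/replace fallback (the sample bytes are chosen purely to produce the two differing characters):

import base64

data = b'\xfb\xff\xfe'  # chosen so the output contains the two differing characters

print(base64.b64encode(data))          # b'+//+' -- standard alphabet uses + and /
print(base64.urlsafe_b64encode(data))  # b'-__-' -- URL-safe alphabet uses - and _

# The search/replace fallback for libraries without the URL-safe variant:
standard = base64.b64encode(data)
print(standard.translate(bytes.maketrans(b'+/', b'-_')))  # b'-__-'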

You can use a 62-character representation instead of the usual base64. This will give you URLs like the YouTube ones:
http://www.youtube.com/watch?v=0JD55e5h5JM
You can use the PHP functions provided on this page if you need to map strings to a numerical database ID:
http://bsd-noobz.com/blog/how-to-create-url-shortening-service-using-simple-php
Or this one if you need to directly convert a numerical ID to a short URL string:
http://kevin.vanzonneveld.net/techblog/article/create_short_ids_with_php_like_youtube_or_tinyurl/
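To illustrate the idea outside PHP, here is a minimal Python sketch (the function names are mine) that maps a numerical database ID to a base62 string and back:

import string

# 62-character alphabet: 0-9, a-z, A-Z (the ordering is a free choice).
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_id(n):
    """Convert a non-negative integer ID to a short base62 string."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n > 0:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return ''.join(reversed(chars))

def decode_id(s):
    """Convert a base62 string back to the integer ID."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

print(encode_id(123456789))  # 8m0Kx
print(decode_id('8m0Kx'))    # 123456789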

"base66" (theoretical, according to spec)
As far as I can tell, the optimal encoding for URLs is a "base66" encoding into the following alphabet:
ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz
0123456789-_.~
These are all the "Unreserved characters" according to the URI specification RFC 3986 (section 2.3), so they will appear as-is in the URL. Using this "base66" encoding could give a URL like:
https://example.org/articles/.3Ja~jkWe
The question is then whether you want . and ~ in your URLs.
On some older servers (ancient by now, I guess), ~joe would mean the "www directory" of the user joe on that server, and thus a user might be confused as to what the ~ character is doing in the middle of your URL.
This is common for academic websites, especially CS professors' pages (e.g. Donald Knuth's website https://www-cs-faculty.stanford.edu/~knuth/).
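A minimal sketch of what such a "base66" encoder could look like in Python, treating the input bytes as one big integer (the alphabet ordering and function name are arbitrary choices, and note that leading zero bytes are lost by this naive big-integer approach):

import string

# The 66 unreserved characters from RFC 3986, section 2.3.
ALPHABET66 = string.ascii_uppercase + string.ascii_lowercase + string.digits + '-_.~'

def encode66(data):
    n = int.from_bytes(data, 'big')  # NB: leading zero bytes vanish here
    if n == 0:
        return ALPHABET66[0]
    chars = []
    while n > 0:
        n, rem = divmod(n, 66)
        chars.append(ALPHABET66[rem])
    return ''.join(reversed(chars))

print(encode66(b'\x12\x34\x56\x78'))  # QGWyM

The same routine works unchanged with the larger alphabet discussed below; only the alphabet string and the divisor change.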
"base80" (in practice, but not battle-tested)
However, in my own testing the following 14 other symbols also do not get percent-encoded (in Chrome 95 and Firefox 93):
!$'()*+,:;=#[]
(see also this StackOverflow answer)
leaving a "base80" URL encoding possible. Some of these (notably + and =) would not work in the query string part of the URL, only in the path part. All in all, this ends up giving you beautiful, hyper-compressed URLs like:
https://example.org/articles/1OWG,HmpkySCbBy#RG6_,
https://example.org/articles/21Cq-b6Ud)txMEW$,hc4K
https://example.org/articles/:3Tx**U9X'd;tl~rR]q+
There's a plethora of reasons why you might not want all of those symbols in your URLs. One example is that StackOverflow's own "linkifier" won't include that ending comma in the link it generates (I've manually made it a part of the link here).
Also, percent-encoding seems to be quite finicky: in some cases, Firefox would initially percent-encode ' and ~] but on later requests would not.

Related

Is there any reason why okhttp3.Credentials uses ISO_8859_1 instead of UTF-8?

I am using okhttp3.Credentials to get a Base64 string in my current project. I spotted an issue with Cyrillic symbols I passed to the server as a Base64 string, and eventually found out that the current implementation of okhttp3.Credentials uses ISO_8859_1.
Is there a deliberate intent to go with ISO_8859_1 here instead of the more universal UTF-8?
Update from answer:
From the referenced spec (RFC 7617):
The original definition of this authentication scheme failed to
specify the character encoding scheme used to convert the user-pass
into an octet sequence. In practice, most implementations chose
either a locale-specific encoding such as ISO-8859-1 ([ISO-8859-1]),
or UTF-8 ([RFC3629]). For backwards compatibility reasons, this
specification continues to leave the default encoding undefined, as
long as it is compatible with US-ASCII (mapping any US-ASCII
character to a single octet matching the US-ASCII character code).
B.3. Why not simply switch the default encoding to UTF-8?
There are sites in use today that default to a local character
encoding scheme, such as ISO-8859-1 ([ISO-8859-1]), and expect user
agents to use that encoding. Authentication on these sites will stop
working if the user agent switches to a different encoding, such as
UTF-8.
Note that sites might even inspect the User-Agent header field
([RFC7231], Section 5.5.3) to decide which character encoding scheme
to expect from the client. Therefore, they might support UTF-8 for
some user agents, but default to something else for others. User
agents in the latter group will have to continue to do what they do
today until the majority of these servers have been upgraded to
always use UTF-8.
The discussion is here: https://github.com/square/okhttp/pull/3134
It's legacy behavior, and you can override it with the optional charset parameter.

Animated icon in email subject

I know about Data URIs, in which base64-encoded data can be used inline, such as for images. Today I received an email (a spam one, actually) in which there was an animated GIF icon in its subject:
Here is the icon alone:
So the only thing that crossed my mind was Data URIs, and whether Gmail allows some sort of emoticons to be inserted in the subject. I viewed the full detailed version of the email and found the subject line shown in the picture below:
So the GIF comes from the encoded string =?UTF-8?B?876Urg==?=, which is similar to the Data URI scheme; however, I couldn't get the icon out of it. Here is the element's HTML source:
Long story short, there are lots of emoticons served from https://mail.google.com/mail/e/XXX, where XXX is a hexadecimal number. They are documented nowhere, or at least I couldn't find anything. If this is about Data URIs, how is it possible to include them in Gmail's email subject? (I forwarded that email to a Yahoo account and saw [?] instead of the icon.) And if it's not, how is that encoded string parsed?
Short description:
They are referred to internally as goomoji, and they appear to be a non-standard UTF-8 extension. When Gmail encounters one of these characters, it is replaced by the corresponding icon. I wasn't able to find any documentation on them, but I was able to reverse engineer the format.
What are these icons?
Those icons are actually the icons that appear under the "Insert emoticons" panel.
While I don't see the 52E icon in the list, there are several others that follow the same convention.
B0C
4F4
Note that there are also some icons whose names are prefixed, such as gtalk.03C. I was not able to determine whether or how these icons can be used in this manner.
What is this Data URI thing?
It's not actually a Data URI, though it does share some similarities. It's actually a special syntax for encoding non-ASCII characters in email subjects, defined in RFC 2047. Basically, it works like this:
=?charset?encoding?data?=
So, in our example string, we have the following data:
=?UTF-8?B?876Urg==?=
charset = UTF-8
encoding = B (means base64)
data = 876Urg==
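As an aside, Python's standard library can parse such encoded-words directly, which makes it easy to recover the raw bytes; a quick sketch:

from email.header import decode_header

parts = decode_header('=?UTF-8?B?876Urg==?=')
print(parts)      # [(b'\xf3\xbe\x94\xae', 'utf-8')]
raw, charset = parts[0]
print(raw.hex())  # f3be94ae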
So, how does it work?
We know that somehow, 876Urg== means the icon 52E, but how?
If we base64 decode 876Urg==, we get 0xf3be94ae. This looks like the following in binary:
11110011 10111110 10010100 10101110
These bits are consistent with a 4-byte UTF-8 encoded character.
11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
So the relevant bits are the following:
011 111110 010100 101110
Or when aligned:
00001111 11100101 00101110
In hexadecimal, these bytes are the following:
FE52E
As you can see, except for the FE prefix, which is presumably there to distinguish the goomoji icons from other UTF-8 characters, it matches the 52E in the icon URL. Some testing proves that this holds true for other icons.
Sounds like a lot of work, is there a converter?
This can of course be scripted. I created the following Python code for my testing. These functions can convert the base64 encoded string to and from the short hex string found in the URL. Note, this code is written for Python 3, and is not Python 2 compatible.
Conversion functions:
import base64

def goomoji_decode(code):
    # Base64 decode.
    binary = base64.b64decode(code)
    # UTF-8 decode.
    decoded = binary.decode('utf8')
    # Get the Unicode code point value.
    value = ord(decoded)
    # Hex encode, trim the 'FE' prefix, and uppercase.
    return format(value, 'x')[2:].upper()

def goomoji_encode(code):
    # Add the 'FE' prefix and parse as hex.
    value = int('FE' + code, 16)
    # Convert to the corresponding Unicode character.
    encoded = chr(value)
    # Encode the character to UTF-8 bytes.
    binary = bytearray(encoded, 'utf8')
    # Base64 encode and return a UTF-8 string.
    return base64.b64encode(binary).decode('utf-8')
Examples:
print(goomoji_decode('876Urg=='))
print(goomoji_encode('52E'))
Output:
52E
876Urg==
And, of course, finding an icon's URL simply requires creating a new draft in Gmail, inserting the icon you want, and using your browser's DOM inspector.
If you use the correct hex code point (e.g. fe4f4 for 'pile of poo') and if it is correctly encoded within the subject line header, be it base64 (see #AlexanderOMara's answer) or quoted-printable (=?utf-8?Q?=F3=BE=93=B4?=), then Gmail will automatically parse and replace it with the corresponding emoji.
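A minimal sketch of producing such a subject header with Python's standard email.header module (the fe4f4 code point is the one quoted above):

from email.header import Header

codepoint = 0xFE4F4  # 'pile of poo' goomoji, per the example above
subject = Header(chr(codepoint), 'utf-8').encode()
print(subject)       # =?utf-8?b?876TtA==?=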
Here's a Gmail emoji list for copying and pasting into subject lines - or email bodies. Animated emojis, which will grab even more attention in the inbox, are placed on a yellow background:
Many thanks to Alexander O'Mara for such a well-researched answer about the goomoji-tagged HTML images!
I just wanted to add three things:
There are still many, many emoji (and other Unicode sequences generating pictures) that spammers and other would-be marketers are starting to use in email subject lines and that Gmail does not convert to HTML images. In some browsers these show up bold and colored, which is almost as bad as animation. Browsers could also choose to animate these, but I don't know whether any do. These Unicode sequences get displayed by the browser as Unicode text, so the exact appearance (colored or not, animated or not, ...) depends on the text rendering system the browser is using. The appearance of a given Unicode emoji also depends on any Unicode variation selectors and emoji modifiers that appear near it in the code point sequence. Unlike the image-based emoji spam, these sequences can be copied and pasted out of the browser and into other apps as Unicode text.
I hope the many marketers reading this StackOverflow question will just say no. It is a horrible idea to include these sequences in your email subject lines and it will immediately tarnish you and your brand as lowlife spammers. It is not worth the "attention" your email will get.
Of course the first question coming to everyone's mind is: "how do I get rid of these things?" Fortunately there is this open-source Greasemonkey/Tampermonkey/Violentmonkey userscript:
Gmail Subject Line Emoji Roach Motel
This userscript eliminates both HTML-image (thanks to awesome work of Alexander O'Mara) and pure-Unicode types.
For the latter type, the userscript includes a regular expression designed to capture the Unicode sequences likely to be abused by marketers. The regex looks like this in ES6 JavaScript (the userscript translates this to widely-supported pre-ES6 regex using the amazing ES6 Regex Transpiler):
var re = /(\p{Emoji_Modifier_Base}\p{Emoji_Modifier}?|\p{Emoji_Presentation}|\p{Emoji}\uFE0F|[\u{2100}-\u{2BFF}\u{E000}-\u{F8FF}\u{1D000}-\u{1F5FF}\u{1F650}-\u{1FA6F}\u{F0000}-\u{FFFFF}\u{100000}-\u{10FFFF}])\s*/gu
// which includes the Unicode Emoji pattern from
// https://github.com/tc39/proposal-regexp-unicode-property-escapes
// plus also these blocks frequently used for spammy emojis
// (see https://en.wikipedia.org/wiki/Unicode_block ):
// U+2100..U+2BFF Arrows, Dingbats, Box Drawing, ...
// U+E000..U+F8FF Private Use Area (gmail generates them for some emoji)
// U+1D000..U+1F5FF Musical Symbols, Playing Cards (sigh), Pictographs, ...
// U+1F650..U+1FA6F Ornamental Dingbats, Transport and Map symbols, ...
// U+F0000..U+FFFFF Supplementary Private Use Area-A
// U+100000..U+10FFFF Supplementary Private Use Area-B
// plus any space AFTER the discovered emoji spam

Why do we need metadata specifying the encoding?

I sense a bit of a chicken-and-egg problem if I write an HTML meta tag specifying the charset as, say, UTF-16: how do we decode the entire HTTP request in the first place if we don't know it's UTF-16 data? I believe the request header needs to handle this, and by the time we try to read metadata like the HTML tag charset="utf-16", we already know it's UTF-16.
Besides, think one level higher about header information like request headers: are they passed in ASCII as a standard?
I mean, at some level we have to agree on an encoding, and you can't store the data needed for decoding as metadata inside the data itself. Can anyone clarify this?
I am a bit confused by the idea of specifying the information needed to interpret the data as metadata inside that very data.
In general, how can any form of encoding work if we don't have a standard, agreed-upon language/encoding to convey the data about the data itself?
For example, I am informed that Apache defaults to ISO-8859-1. So would all clients need to assume that for HTTP headers and then interpret the real content as UTF-8 if we want UTF-8 for the Content-Type?
What character encoding should I use for a HTTP header? is a closely related question
UTF-16 (and other) encodings use a BOM (Byte Order Mark) that is read at the start of the file and that signals which encoding is being used. Only after that, the encoded part of the file begins.
For example, for UTF-16, you'll have the bytes FE FF if big-endian and FF FE if little-endian words are being used.
You also often see UTF-8 BOMs, although they don't need to be used (and may confuse some XML parsers).
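A small sketch of how a consumer might sniff the BOM before decoding, using the BOM constants from Python's codecs module (falling back to UTF-8 is my own assumption, not mandated by any spec):

import codecs

def sniff_bom(data):
    """Guess the encoding from a leading BOM; fall back to UTF-8."""
    # Check the longer UTF-32 marks first: BOM_UTF32_LE begins with
    # the same two bytes (FF FE) as BOM_UTF16_LE.
    if data.startswith((codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)):
        return 'utf-32'
    if data.startswith(codecs.BOM_UTF16_LE):  # FF FE
        return 'utf-16-le'
    if data.startswith(codecs.BOM_UTF16_BE):  # FE FF
        return 'utf-16-be'
    if data.startswith(codecs.BOM_UTF8):      # EF BB BF
        return 'utf-8-sig'
    return 'utf-8'

print(sniff_bom('<html>'.encode('utf-16')))  # utf-16-le on little-endian machines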

Node.js URL-encoding for pre-RFC3986 URLs (using + vs %20)

Within Node.js, I am using querystring.stringify() to encode an object into a query string for usage in a URL. Values that have spaces are encoded as %20.
I'm working with a particularly finicky web service that will only accept spaces encoded as +, as used to be commonly done prior to RFC3986.
Is there a way to set an option for querystring so that it encodes spaces as +?
Currently I am simply doing a .replace() to replace all instances of %20 with +, but this is a bit tedious if there is an option I can set ahead of time.
If anyone is still facing this issue, the qs npm package has a feature to encode spaces as +:
qs.stringify({ a: 'b c' }, { format : 'RFC1738' })
I can't think of any library doing that by default, and unfortunately, I'd say your implementation may be the more efficient way to do this, since any other option would probably either do what you're already doing, or will use slower non-compiled pure JavaScript code.
What about asking the web service provider to follow the RFC?
https://github.com/kvz/phpjs is a Node.js package that provides ports of the PHP functions. The http_build_query implementation at the time of writing only supports urlencode (the query string includes + instead of spaces), but hopefully it will soon include the enc_type parameter / rawurlencode (%20s for spaces).
See http://php.net/http_build_query.
RFC1738 (+'s) will be the default enc_type either way, so you can use it immediately for your purposes.

Problem using unicode in URLs with cgi.PATH_INFO in ColdFusion

My ColdFusion (MX7 on IIS 6) site has search functionality which appends the search term to the URL e.g. http://www.example.com/search.cfm/searchterm.
The problem I'm running into is this is a multilingual site, so the search term may be in another language e.g. القاهرة leading to a search URL such as http://www.example.com/search.cfm/القاهرة
The problem arises when I come to retrieve the search term from the URL. I'm using cgi.PATH_INFO to retrieve the path of the search page along with the search term, and extracting the search term from this, e.g. /search.cfm/searchterm; however, when unicode characters are used in the search, they are converted to question marks, e.g. /search.cfm/??????.
These appear to be actual question marks, rather than the browser being unable to render unicode characters, or the characters being mangled on output.
I can't find any information about whether ColdFusion supports unicode in the URL, or how I can go about resolving this and getting hold of the complete URL in some way - does anyone have any ideas?
Cheers,
Tom
Edit: Further research has led me to believe the issue may be related to IIS rather than ColdFusion, but my original query still stands.
Further edit
The result of GetPageContext().GetRequest().GetRequestUrl().ToString() is http://www.example.com/search.cfm/searchterm/????? so it appears the issue goes fairly deep.
Yeah, it's not really ColdFusion's fault. It's a common problem.
It's mostly the fault of the original CGI specification, which specifies that PATH_INFO has to be %-decoded, thus losing the original %xx byte sequences that would have allowed you to work out which real characters were meant.
And it's partly IIS's fault, because it always tries to read submitted %xx bytes in the path part as UTF-8-encoded Unicode (unless the path isn't a valid UTF-8 byte sequence in which case it plumps for the Windows default code page, but gives you no way to find out this has happened). Having done so, it puts it in environment variables as a Unicode string (as envvars are Unicode under Windows).
However most byte-based tools using the C stdio (and I'm assuming this applies to ColdFusion, as it does under Perl, Python 2, PHP etc.) then try to read the environment variables as bytes, and the MS C runtime encodes the Unicode contents again using the Windows default code page. So any characters that don't fit in the default code page are lost for good. This would include your Arabic characters when running on a Western Windows install.
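A small Python sketch of the lossy round-trip described above, assuming cp1252 as the Windows default code page:

# IIS has already decoded the %xx bytes into a Unicode string...
term = 'القاهرة'
# ...and the C runtime re-encodes it with the default code page when a
# byte-oriented tool reads the environment variable:
print(term.encode('cp1252', errors='replace'))  # b'???????'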
A clever script that has direct access to the Win32 GetEnvironmentVariableW API could call that to retrieve a native-Unicode environment variable, which it could then encode to UTF-8 or whatever else it wanted, assuming that the input was also UTF-8 (which is what you'd generally want today). However, I don't think ColdFusion gives you this access, and in any case it only works from IIS6 onwards; IIS5.x will throw away any non-default-codepage characters before they even reach the environment variables.
Otherwise, your best bet is URL-rewriting. If a layer above CF can convert that search.cfm/القاهرة to search.cfm/?q=القاهرة then you don't face the same problem, as the QUERY_STRING variable, unlike PATH_INFO, is not specified to be %-decoded, so the %xx bytes remain where a tool at CF's level can see them.
Here's what you could do:
<cfset url.searchTerm = URLEncodedFormat("القاهر", "utf-8") >
<cfset myVar = URLDecode(url.searchTerm , "utf-8") >
Of course, I'd recommend that you work with something like this in that case:
yourtemplate.cfm?searchTerm=%C3%98%C2%A7%C3%99%E2%80%9E
And then you do URL rewriting in IIS (if not already done by framework/rest of the app) http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ to match your pattern.
You can set the character encoding of the URL and FORM scope using the setEncoding() function:
http://www.adobe.com/livedocs/coldfusion/7/htmldocs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=ColdFusion_Documentation&file=00000623.htm
You need to do this before you access any of the variables in this scope.
But, the default encoding of those scopes is already UTF-8, so this may not help. Also, this would probably not affect the CGI scope.
Is the IIS Server logging the correct characters into the request log?