I have a Java agent (running on a Linux server) that manages document attachments, but something is wrong with accented characters in their names (ò, è, ù, etc.).
I wrote this code to display the charset used:
OutputStreamWriter writer = new OutputStreamWriter(new ByteArrayOutputStream());
String enc = writer.getEncoding();
System.out.println("CHARSET: " + enc);
This displays:
CHARSET: ASCII
On a server where everything works fine, the same line prints:
CHARSET: UTF8
Both servers have the same configuration (they work with Internet Site documents, where "Use UTF-8 for output" is set to "Yes").
Any idea which parameter to set (on Domino or Linux)?
UPDATE
I'll try to explain better...
I call the agent through an Ajax call.
As a parameter I pass the string "ààà". When I try to decode it as UTF-8 inside the agent, the string resolves to
"???"
instead of
"ààà"
This is what System.out.println() shows in the console.
On another Domino server everything works, and I can't tell whether it is a matter of server settings or OS settings.
Just a suggestion, but you could change the first line in your example to be:
OutputStreamWriter writer = new OutputStreamWriter(new ByteArrayOutputStream(),
Charset.forName("UTF-8"));
That will force the OutputStreamWriter to UTF8, and your sample code will show consistent output on both servers. Without knowing more details, I can't say for sure if that's relevant to the real problem.
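To make that concrete, here is a minimal, self-contained sketch (not Domino-specific; the byte array below just simulates the UTF-8 bytes of "ààà" arriving from the Ajax call) of decoding and writing with explicit charsets, so the platform default (ASCII on the failing server, typically inherited from a POSIX/C locale) never comes into play:

import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class CharsetSketch {
    public static void main(String[] args) throws Exception {
        // Simulated request bytes: the UTF-8 encoding of "ààà".
        byte[] requestBytes = "ààà".getBytes(StandardCharsets.UTF_8);

        // Decode with an explicit charset instead of relying on new String(bytes).
        String decoded = new String(requestBytes, StandardCharsets.UTF_8);

        // Write with an explicit charset instead of the platform default.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        OutputStreamWriter writer = new OutputStreamWriter(out, StandardCharsets.UTF_8);
        writer.write(decoded);
        writer.flush();

        // Prints the historical name "UTF8" on both servers, regardless of the locale.
        System.out.println("CHARSET: " + writer.getEncoding());

        // Caution: System.out itself uses the platform default charset, so printing
        // the decoded value on the ASCII server can still show "???" even though the
        // String in memory is correct.
    }
}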
Although this might not directly answer your question, you might be interested in this article about encoding.
Related
Hi, I am trying to upload data to a table with HeidiSQL, but it returned "SQL Error (1366): Incorrect string value: '\xE3\x82\xA8\xE3\x83\xBC...'".
The issue is triggered by this string, "エーペックスレジェンズ", and the source data file contains a number of special characters. Is there a way to get around this so that all characters can be uploaded?
My default setting is utf8, and I have also tried utf8mb4, but neither works.
That happens when you select the wrong file encoding in HeidiSQL's open-file dialog:
Never select "Auto-detect" - I wrote that auto-detection, and I can tell you it often detects the wrong encoding. Use the right encoding instead, which is mostly utf-8 nowadays.
I am able to see the file name in the URL in lower environments like SIT and UAT, but in the Production environment some junk value replaces the file name. Any help would be great.
The file name is replaced with this junk value -> "bWFzdGVyfGltYWdlc3w4OTM1fGltYWdlL3BuZ3xpbWFnZXMvaDk4L2g4My84ODA0MTAxMDk1NDU0LnBuZ3xjMWY2OTZmOGQ5ZGM2MTIxMmQxMmUwODI5ZGQwYTg5YzNhMjIyYjQzMTJlMzc1MTU0ZmUyZWFjOGE5MjUyMGFj"
If you are asking about the media URL:
In hybris, the SEO-friendly URL is called prettyURL. It can be enabled by setting media.legacy.prettyURL = true in local.properties.
With prettyURL disabled, the URL looks something like this:
/medias/fileName.jpg?context=NAYDCL3IGAZC6ZTPN4XGU4DHHI5DU4LXMVZHI6JRGIZTINI.....
Above, the context request parameter is the Base64-encoded media details.
With prettyURL enabled, the URL looks something like this:
/medias/sys_master/images/h98/h83/8804101095454/yourFileName.jpg
Now verify that you have the same value for media.legacy.prettyURL in all environments. By default, prettyURL is disabled (media.legacy.prettyURL = false).
Refer to the LocalMediaWebURLStrategy class and help.hybris for more detail.
This is not a junk value; it is Base64-encoded text. The underlying value contains characters that are not allowed in a URL, so the system encodes it automatically:
master|images|8935|image/png|images/h98/h83/8804101095454.png|c1f696f8d9dc61212d12e0829dd0a89c3a222b4312e375154fe2eac8a92520ac
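If you want to check such a value yourself, a plain Base64 decode is enough. A minimal sketch (assuming standard Base64, which is what the value above appears to be; the class name is just illustrative):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeMediaContext {
    public static void main(String[] args) {
        // Pass the encoded value from the URL as the first argument.
        byte[] raw = Base64.getDecoder().decode(args[0]);
        // Prints the pipe-separated media details shown above.
        System.out.println(new String(raw, StandardCharsets.UTF_8));
    }
}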
We are experiencing a very annoying encoding problem which started with LoopBack but seems to be Node.js related.
Basically, we just finished developing an API with LoopBack on top of an existing SQL_ASCII-encoded PostgreSQL database. Since the API has to be in UTF-8, we try to convert the data sent through our API routes to ISO-8859-15 in order to insert it correctly into our database.
No matter which iconv, utf8, iso-8859, etc. modules we tried, we couldn't manage to pass ISO-8859-15-converted strings; we ended up with very strange results. For example:
var Iconv = require('iconv').Iconv;
var iconv = new Iconv('UTF-8','ISO-8859-1');
var label = iconv.convert("bébé").toString();
If we then insert label into our database, we end up with something like "b�b�"!
So we tried to see directly in the Terminal how a basic Node.js application behaved (without LoopBack or any other framework), but it wasn't any better.
With the Terminal encoding set to "ISO Latin 1", the following code:
console.log('bébé');
was displayed this way in the Terminal:
bÃ©bÃ©
As if Node.js were completely unable to handle ISO-8859 strings.
Are we missing something here?
Are we doomed to use UTF-8 strings in order to make this work?
I want to load audio files with names like здраво.mp3, using a NodeJS server. (That's "zdravo" or "hello" in Serbian, if you were wondering).
However, NodeJS makes a request for %D0%B7%D0%B4%D1%80%D0%B0%D0%B2%D0%BE.mp3 instead, which results in the file not being found.
If I drag the file into a browser window from my desktop, the browser is happy to load it as file:///path/здраво.mp3, so the issue is not with the way the browser is treating the Unicode string.
The HTML page containing the link to the file has this meta tag in the head section...
<meta charset="utf-8" />
... and it is quite happy to display the text "Здраво" on the page, so the Unicode strings are properly formed within the browser.
I am guessing that the browser is converting the name to ISO-8859-1 before sending the request, and that the NodeJS server somehow needs to convert it back to Unicode before looking for it in the file system.
My question is: is there already a module that I can use to do this conversion, and are there examples of how to use it?
SOLUTION: Following the reply from Edwin Dalorzo, here is the one-line fix that I made to my handleRequest() function:
function handleRequest(request, response) {
  request.url = decodeURIComponent(request.url) // the fix
  var pathname = url.parse(request.url).pathname
It is not clear how you are receiving the encoded string, but you can certainly decode it by simply doing:
decodeURIComponent("%D0%B7%D0%B4%D1%80%D0%B0%D0%B2%D0%BE")
And this will give you back your string "здраво"
When I set up a site in Dreamweaver and configure an FTP server for it, I often forget the password, so I want to find a way to recover it.
Dreamweaver only gives you the option to export the site manager data:
Site => Manage Sites => select the site => Export
This saves a readable XML file with the extension .ste to your computer.
Just open it in WordPad or any other text editor, find the server you need, and take the value of the pw attribute in the server tag.
Then use this JavaScript function to decode the password from that value:
function decodeDreamWaverPass(hash){
  var pass = '';
  // Each pair of hex digits encodes one character; subtract the pair's index
  // before converting the value back to a character.
  for (var i = 0; i < hash.length; i += 2){
    pass += String.fromCharCode(parseInt(hash[i] + '' + hash[i+1], 16) - (i/2));
  }
  return pass;
}
Hope that it will be helpful for you...
To decode the Dreamweaver password, you break it into pairs of hexadecimal digits (0-9, A-F), subtract each pair's position in the sequence (starting with 0) from its value, then convert the result back to ASCII.
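For example, the value 616365 splits into the pairs 61, 63 and 65; subtracting 0, 1 and 2 from their values gives 0x61, 0x62, 0x63, which is "abc" in ASCII.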
Example on this stackoverflow post - Encode and decode an Adobe Dreamweaver password in *.ste file
We also have a page on our website you can use to convert the passwords...
http://www.mywebsitespot.com/dreamweaver-password-decode
Sorry to wake up an old thread, but I thought I'd post my solution. It's a single HTML5 page that can load and parse Dreamweaver STE files, decoding the password as part of that. I wanted something simple and offline/local (so details aren't transmitted when decoding) that I could just load an STE file into and it would give me all the FTP details:
Blog Post:
http://bobmckay.com/web/dreamweaver-password-ste-file-decoder
Decoder Page:
http://bobmckay.com/dreamweaver-password-decoder/
Hope it's useful to someone!
Bob