The Edge browser, of its own volition, converts an XSD file from encoding="UTF-8" to ISO-8859-1.
My questions are: 1. Is this an Edge peculiarity, or is Edge doing it because UTF-8 was deprecated while I was not looking and replaced by ISO-8859-1? 2. Why the processing differences between browsers? 3. Is there a globally preferred encoding? I have read the W3Schools and Stack Overflow articles; informative, yes, but no recommendations.
I would like to adopt a single standard for processing all my XML files (of which there will be many) across my application. Should I just use ISO-8859-1? Please give direct guidance; I really feel my own illiteracy is getting the upper hand here.
The facts are: a. Visual Studio created the file using UTF-8. b. I have no special characters/symbols in the associated XML file, nor will I need them in the future. c. The Chrome browser ignores (does not display) the XML declaration, which is displayed in Edge. d. Both browsers process the file:/// ... file.
The URL used to test the XSD reference is file:///D:\prjClimateControl\prgrms\slnsVsTo\prjV00G00Jun0619\sysdocXml1.xsd
The encoding attribute in the XML declaration reads encoding="UTF-8" in the source file,
but in the Edge display it is shown as encoding="ISO-8859-1".
There are no error messages; the file displays correctly, but the encoding is changed.
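One way to see what the file on disk actually contains, independent of how either browser renders it, is to inspect the raw bytes. Below is a minimal sketch (using the path from the question) that checks for a UTF-8 byte-order mark and prints the declaration line exactly as it is stored:

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Path taken from the question; adjust to your own file.
    const char* path =
        "D:\\prjClimateControl\\prgrms\\slnsVsTo\\prjV00G00Jun0619\\sysdocXml1.xsd";

    std::ifstream in(path, std::ios::binary);
    if (!in) {
        std::cerr << "Could not open file\n";
        return 1;
    }

    // Look for the UTF-8 byte-order mark (EF BB BF) at the start of the file.
    std::vector<unsigned char> head(3, 0);
    in.read(reinterpret_cast<char*>(head.data()), 3);
    bool hasUtf8Bom = (in.gcount() == 3 &&
                       head[0] == 0xEF && head[1] == 0xBB && head[2] == 0xBF);
    std::cout << "UTF-8 BOM present: " << (hasUtf8Bom ? "yes" : "no") << "\n";

    // Rewind (past the BOM if there is one) and print the XML declaration
    // line exactly as it is stored on disk.
    in.clear();
    in.seekg(hasUtf8Bom ? 3 : 0);
    std::string firstLine;
    std::getline(in, firstLine);
    std::cout << "Declaration as stored: " << firstLine << "\n";
    return 0;
}
```

If the bytes on disk still read encoding="UTF-8", the browser is only choosing a different encoding for its display; it is not rewriting the file.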
I am looking at Datalogics' Adobe PDF Library to repair and optimize PDF files for printing. The APDFL v15.0.0PlusP1a (5/18/2016) release notes make reference to a PDFProcessor sample for C++, but that seems to be missing from the sample files. The PDFOptimizer looks promising, but it does not repair known badly formed PDF files.
The Adobe PDF Library PDDocOpenWithParams() method allows you to set a doRepair flag:
doRepair: If true, attempt to repair the file if it is damaged; if false, do not attempt to repair the file if it is damaged.
Will it fix a badly formed PDF? How bad is bad? If Acrobat is able to resolve the issues and display the document, then the Adobe PDF Library should be able to deal with the document as well.
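For what it's worth, here is a rough sketch of opening a document with that flag set via the C API. The struct fields used (size, fileSys, pathName, doRepair), the path-creation call, and the header names are written from memory of the Acrobat/APDFL headers, so treat them as assumptions to verify against your APDFL version:

```cpp
#include <cstring>
#include "ASCalls.h"   // APDFL/Acrobat SDK headers; exact names may differ by version
#include "PDCalls.h"

// Open a possibly damaged PDF with the repair flag set (library init omitted).
PDDoc OpenWithRepair(const char* path)
{
    // Build an ASPathName from a plain C-string path using the default file system.
    ASPathName pathName = ASFileSysCreatePathName(NULL,
                                                  ASAtomFromString("Cstring"),
                                                  path, NULL);

    PDDocOpenParamsRec params;
    memset(&params, 0, sizeof(params));
    params.size = sizeof(params);
    params.fileSys = NULL;        // default file system
    params.pathName = pathName;
    params.doRepair = true;       // ask the library to attempt a repair if damaged

    PDDoc doc = PDDocOpenWithParams(&params);

    ASFileSysReleasePath(NULL, pathName);
    return doc;
}
```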
Regarding the PDFProcessor sample: in release v15.0.4PlusP2b the samples were restructured. The samples listed on our website reflect those changes. Some of the old samples were removed or rewritten. PDFProcessor has been temporarily removed but is available if needed for evaluations or customer use. The PDFProcessor sample shows how to convert PDF documents to PDF/A and PDF/X compliant PDF files.
With the Windows 8.1 release there are some new API changes and additions. Among the additions is a new feature called "XAML Binary Format" (XBF), which improves on-screen rendering performance. The XamlBinaryWriter class is responsible for converting into the XAML Binary Format; all XAML files will be converted to XBF. Has anyone tried converting an XBF file back into a XAML file? I have a dependency on the XAML file and cannot proceed without it in XAML format. Please let me know how to convert an XBF file to a XAML file.
As a starting point, download and install Microsoft's .NET Native; the ReducerEngine.dll installed as part of it includes a primitive implementation of such a decompiler.
However, Microsoft's implementation is very poor; it doesn't even support XAML namespaces. You can use it to learn the structure of an XBF file, but for decompiling I suggest you implement your own solution. It's not that hard; mine is about 1000 lines of code in 12 C# files.
XBF files are rather simple. They contain a fixed header, followed by 6 lookup tables (strings, assemblies, type namespaces, types, properties, XML namespaces), followed by the DOM tree part, where objects reference values from those tables by integer keys.
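To make that layout concrete, here is a sketch of the kind of in-memory model a decompiler ends up building. It mirrors the description above, but every type and field name here is my own illustration, not anything taken from the XBF format itself or from Microsoft's headers:

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Illustrative model only: one entry per lookup table described above.
struct XbfTables {
    std::vector<std::string> strings;
    std::vector<std::string> assemblies;
    std::vector<std::string> typeNamespaces;
    std::vector<std::string> types;
    std::vector<std::string> properties;
    std::vector<std::string> xmlNamespaces;
};

// A node in the DOM-like tree: everything is an integer key into the tables.
struct XbfNode {
    uint32_t typeIndex;                                      // index into XbfTables::types
    std::vector<std::pair<uint32_t, uint32_t>> attributes;   // (property index, string index)
    std::vector<XbfNode> children;
};

// Turning a node back into XAML text is then mostly a matter of resolving
// the integer keys through the tables.
std::string ToXaml(const XbfTables& tables, const XbfNode& node, int indent = 0)
{
    std::string pad(indent * 2, ' ');
    std::string xaml = pad + "<" + tables.types.at(node.typeIndex);
    for (const auto& attr : node.attributes) {
        xaml += " " + tables.properties.at(attr.first) +
                "=\"" + tables.strings.at(attr.second) + "\"";
    }
    if (node.children.empty())
        return xaml + " />\n";
    xaml += ">\n";
    for (const auto& child : node.children)
        xaml += ToXaml(tables, child, indent + 1);
    xaml += pad + "</" + tables.types.at(node.typeIndex) + ">\n";
    return xaml;
}
```

Parsing the binary tables themselves is where the real work is; the sketch only shows why the integer-key scheme makes the reverse direction (XBF back to XAML text) largely a table-lookup exercise.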
P.S. The most interesting question I have about this is why Microsoft chose to reinvent the wheel instead of using their .NET Binary XML format, or a subset of it. They have had a binary XML implementation for many years, and technically it's a better format than XBF.
Imagine an environment in which users can upload images to a website, either by uploading them from their PC or by referring to a remote URL.
As part of some security checks I'd like to make sure that the referenced object is indeed an image.
In the case of a remote URL, I of course check the Content-Type, but this isn't bullet-proof.
I figured I could use ImageMagick for the task: perhaps execute ImageMagick's identify method, and if no error is returned and the returned type is JPG, GIF, etc., the content is an image. (In a quick check I noticed that TXT files are identified without error as well, so I would have to blacklist those.)
Is there any better way in doing this?
You could probably simply load the image via ImageMagick's appropriate function for your language of choice. If the image isn't formatted properly (in terms of internal formatting, not its aesthetic properties, that is), I would expect ImageMagick to refuse to load it and report an error. In PHP, for example, readImage returns false if the image fails to load.
Alternatively, you could read the first few hundred bytes of the file and determine if the expected image file format headers are present; e.g., "GIF89" etc.
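A minimal sketch of that header check; the signatures tested below (GIF87a/GIF89a, the JPEG FF D8 FF prefix, and the eight-byte PNG signature) are the standard ones, while the function itself is just illustrative:

```cpp
#include <algorithm>
#include <fstream>
#include <string>
#include <vector>

// Returns "gif", "jpeg", "png", or "" if none of the known signatures match.
std::string SniffImageType(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> head(8, 0);
    in.read(reinterpret_cast<char*>(head.data()), 8);
    if (in.gcount() < 8)
        return "";

    static const unsigned char png[8] = {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};

    if (std::equal(png, png + 8, head.begin()))
        return "png";
    if (head[0] == 0xFF && head[1] == 0xD8 && head[2] == 0xFF)   // JPEG SOI + marker
        return "jpeg";
    if (std::string(head.begin(), head.begin() + 6) == "GIF87a" ||
        std::string(head.begin(), head.begin() + 6) == "GIF89a")
        return "gif";
    return "";
}
```

A signature match only says the file starts like an image; it does not rule out the decompression-bomb or embedded-content issues raised below, so treat it as a pre-filter rather than the whole check.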
These checks may backfire if your image is in a compressed format (PNG, GIF) and it is constructed in a way similar to a zip bomb: https://en.wikipedia.org/wiki/Zip_bomb
Some examples at ftp://ftp.aerasec.de/pub/advisories/decompressionbombs/pictures/ (nothing special about that site, I just googled decompression bombs)
Another related issue is that formats like SVG are in fact XML, and some image processing tools are prone to a variant of the "billion laughs" attack: https://en.wikipedia.org/wiki/Billion_laughs
You should not store the original file. The generally recommended approach is to always re-process the image and convert it to an entirely new file. There have been vulnerabilities exploited from inside valid image files (see GIFAR), so format checking alone would have been useless against those.
Never expose your visitors to an image file that you have not written out yourself and for which you did not choose the file name yourself.
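A minimal sketch of that re-processing step using ImageMagick's Magick++ API (the output format and the file names are arbitrary choices here); decoding and re-encoding drops anything in the original byte stream that is not actual pixel data:

```cpp
#include <Magick++.h>
#include <string>

// Decode the uploaded file and write a brand-new image file from the pixels.
// Throws Magick::Exception if the input cannot be decoded as an image.
void ReprocessUpload(const std::string& uploadedPath, const std::string& safePath)
{
    Magick::Image image;
    image.read(uploadedPath);   // throws if the file is not a decodable image
    image.magick("PNG");        // force a known output format of our choosing
    image.write(safePath);      // write a freshly encoded file under a name we chose
}

int main(int, char** argv)
{
    Magick::InitializeMagick(argv[0]);
    try {
        ReprocessUpload("upload.tmp", "safe-copy.png");
    } catch (const Magick::Exception&) {
        // Reject the upload: it could not be decoded as an image.
        return 1;
    }
    return 0;
}
```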
What does the "x" stand for in file extensions such as .aspx? Or at least, describe about.aspx.
For .aspx I assumed it stands for:
Active Server Page eXtended format
Though another opinion is that:
these files typically contain static (X)HTML markup, as well as markup defining server-side Web Controls and User Controls
Apparently it was the cool thing to do at the time (the quote actually talks about the original name XSP, but doesn't rule it out as an option):
The initial prototype was called "XSP"; Guthrie explained in a 2007 interview that, "People would always ask what the X stood for. At the time it really didn't stand for anything. XML started with that; XSLT started with that. Everything cool seemed to start with an X, so that's what we originally named it."
For the office documents, since they are in XML format, it stands for XML.
I guess it stands for XML, since XML was used heavily in the .NET Framework and later on in the Open XML formats for Excel and Word.
If I was correctly informed, it stands for 'XML'; these files are renamed, zipped XML documents. That goes for .docx, .xlsx, etc.; I don't know about .aspx, since that's web stuff.
They usually contain static XHTML.
The extension indicates that the page contains (X)HTML markup, while the rest of the code lives in the code-behind file (e.g. about.aspx.cs or about.aspx.vb).
Some files are uploaded with a reported MIME type:
image/x-citrix-pjpeg
They are valid jpeg files and I accept them as such.
I was wondering however: why is the MIME type different?
Is there any difference in the format? Or was this MIME type invented by some light bulb at Citrix for no apparent reason?
Update:
OK, I did some more searching and testing on this question, and it turns out they're all lying about the MIME type (never trust any info sent by the client, I know).
I've checked a bunch of files with different encodings (created with libjpeg).
The official MIME type for JPEG files is image/jpeg.
But some applications (most notably MS Internet Explorer, but also Yahoo! Mail) send JPEG files as image/pjpeg.
I thought I knew that pjpeg stood for 'progressive' JPEG. It turns out that progressive versus standard encoding has nothing to do with it.
MS Internet Explorer sends all JPEG files as image/pjpeg regardless of the contents of the file.
The same goes for Citrix: all JPEG files sent from a Citrix client are reported with the image/x-citrix-pjpeg MIME type.
The files themselves are untouched (identical before and after upload). So it turns out that the difference in MIME type is only an indication of the software used to send the file?
Why would people invent a new MIME type if there is no difference in the file contents?
image/x-citrix-pjpeg seems to be the MIME type sent for images which are exported from a Citrix session.
I haven't come across any format differences between them and regular JPEGs - most image conversion utilities will handle them the same as a regular pjpeg, once the appropriate mime-type rule is added.
It's possible that in a Citrix session there is some internal magic done when managing jpegs which led them to create this mime-type, which they leave on the file when it's exported from their systems, but that's only my guess. As I say, I haven't noticed any actual format differences from the occasional files in this format we receive.
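In practice the receiving side then needs little more than a small alias map plus a content check. Here is a sketch along those lines (the alias list simply collects the variants mentioned in this question):

```cpp
#include <cstddef>
#include <set>
#include <string>

// JPEG MIME type variants seen in the wild, per the discussion above.
// All of them describe ordinary JPEG data, so normalize them to image/jpeg.
std::string NormalizeJpegMime(const std::string& reported)
{
    static const std::set<std::string> jpegAliases = {
        "image/jpeg",
        "image/pjpeg",            // Internet Explorer, Yahoo! Mail
        "image/x-citrix-pjpeg",   // Internet Explorer running inside a Citrix session
    };
    if (jpegAliases.count(reported))
        return "image/jpeg";
    return reported;  // leave other types untouched
}

// The reported type is only a hint; verify the bytes start with the JPEG
// SOI marker (FF D8 FF) before treating the upload as a JPEG.
bool LooksLikeJpeg(const unsigned char* data, std::size_t size)
{
    return size >= 3 && data[0] == 0xFF && data[1] == 0xD8 && data[2] == 0xFF;
}
```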
The closest I have come to finding out what this is, is this thread. Hope it helps:
http://forums.citrix.com/message.jspa?messageID=713174
For some reason, when people are running Internet Explorer via Citrix, it changes the mime type for GIF and JPG files.
JPG: image/x-citrix-pjpeg
GIF: image/x-citrix-gif
Based on my testing, PNG files are not affected. I don't know if this is an Internet Explorer issue or Citrix.
It's to do with a feature of Citrix called SpeedBrowse, which intercepts jpegs and gifs in webpages on the [Citrix] server side, so that it can send them whole via ICA (the Citrix remoting protocol) -- this is more efficient than screen-scraping them. As a previous poster suggested, this is implemented by marking the images with a changed mime type.
IIRC it hooks FindMimeFromData in IE to change the mime type on the fly, but this is being applied to uploaded files as well as downloaded ones - surely a bug.
From what I recall, the progressive JPEG format is the one that allows the image to be shown at progressively higher quality as the download of the file progresses. I am not entirely aware of the details, but if you remember back in the days of dial-up, some images would show up blurry, then better, and eventually complete as they were downloaded. For this to work, the data needs to be sent in a different order than a JPEG would typically be sent.
The actual image, once you view it, is identical; the data is just sent in a different order. The JPEG encoding itself may very well group the data differently, I forget.
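If you ever need to tell the two apart, the difference is visible in the JPEG markers: a baseline image carries an SOF0 segment (FF C0) while a progressive one carries SOF2 (FF C2). A rough sketch that walks the segment headers and reports which it finds (it ignores the rarer SOF variants):

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Walk the JPEG segment list and report baseline (SOF0) vs progressive (SOF2).
std::string JpegScanType(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    unsigned char b0 = in.get(), b1 = in.get();
    if (!in || b0 != 0xFF || b1 != 0xD8)        // must start with SOI (FF D8)
        return "not a JPEG";

    while (in) {
        int marker = in.get();
        if (marker != 0xFF)                     // segment markers are FF xx
            return "unexpected data";
        int type = in.get();
        while (type == 0xFF)                    // skip fill bytes
            type = in.get();
        if (type == 0xC0) return "baseline (SOF0)";
        if (type == 0xC2) return "progressive (SOF2)";
        if (type == 0xD9) break;                // EOI: end of image
        // Every other segment carries a two-byte big-endian length that
        // includes the length field itself; skip over the payload.
        int hi = in.get(), lo = in.get();
        int length = (hi << 8) | lo;
        if (length < 2)
            return "corrupt segment length";
        in.seekg(length - 2, std::ios::cur);
    }
    return "no SOF0/SOF2 marker found";
}

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    std::cout << JpegScanType(argv[1]) << "\n";
    return 0;
}
```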