I have extracted LaTeX content from a .tex file and put it on a website, choosing SVG as the output format because it produces the smallest possible size; I consider it the best choice also because of its speed and wide versatility. I know that the .js configuration files are cached on disk by the browser for a few days (depending on the site's configuration), or served from a CDN, in which case there could be a problem with that CDN's availability. But what about the SVG content?
Is it also cached on disk?
Thank you
Yes and no, depending on what you mean.
For the SVG output, MathJax encodes its "fonts" as path data in JS files, see the code. These paths are cached like any other resource.
But the actual output is generated on the fly from these paths, so the individual equations will not be cached (because making MathJax aware of them would be difficult).
They are stable enough to be reused, though, e.g. via localStorage, and you can also generate the SVG server-side using MathJax-node.
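As a sketch of that localStorage idea (the function and key names here are my own, not MathJax APIs): cache each equation's generated SVG markup keyed by its TeX source, and only fall back to re-rendering on a miss.

```javascript
// Illustrative cache for per-equation SVG output; renderToSvg stands in for
// whatever actually typesets the TeX (MathJax in the page, or MathJax-node).
const CACHE_PREFIX = 'mjx-svg:';

function getCachedSvg(tex, renderToSvg, storage) {
  const key = CACHE_PREFIX + tex;
  const hit = storage.getItem(key);
  if (hit !== null) return hit;   // reuse previously generated markup
  const svg = renderToSvg(tex);   // the expensive step: typeset from path data
  storage.setItem(key, svg);
  return svg;
}

// Minimal in-memory stand-in for window.localStorage, so the sketch runs anywhere.
const memoryStorage = {
  data: new Map(),
  getItem(k) { return this.data.has(k) ? this.data.get(k) : null; },
  setItem(k, v) { this.data.set(k, String(v)); },
};
```

In the browser you would pass `window.localStorage` instead of the in-memory stand-in; keep its quota in mind (commonly around 5 MB) if the document has many equations.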
I'm new to web development and currently learning Pixi.js, and I noticed there are a few files in the distribution:
Pixi.js
Pixi.js.map
Pixi.min.js
Pixi.min.js.map
May I know whether there are any differences between these in terms of features?
They're not different versions at all; it's just a common convention in JavaScript programming. The "min.js" files are the "minified" library source: the same ".js" file with spaces, line endings, and other characters unnecessary for the browser stripped out, and with variable names replaced by shorter ones, to obtain a lighter source file that is quicker to download and easier for the browser to parse. This file is typically generated automatically by tools like gulp or grunt, and it is the file you upload to the server and finally serve to the user. The ".map" files are, well, source maps that link the minified files back to their full-size counterparts; with just the min and map files, the Chrome debugger can reconstruct the original source and show it to you in the inspector.
To answer your question: no, they are exactly the same when it comes to features.
A *.min.js file is just the original .js file, but "minified".
That means names were shortened (e.g. a function getName changed to n).
Its size (in bytes) is also reduced.
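To make the "same features, smaller text" point concrete, here is a hand-minified version of a toy function (real minifiers, such as the ones gulp or grunt invoke, do this automatically and more aggressively):

```javascript
// Readable original: returns a user's name, or a fallback.
function getName(user) {
  if (user && user.name) {
    return user.name;
  }
  return 'anonymous';
}

// The same logic after stripping whitespace and shortening identifiers,
// roughly what ends up in a *.min.js file. Behaviour is identical.
function n(u){return u&&u.name?u.name:'anonymous'}
```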
In the reveal.js documentation, the MathJax section says that normally, MathJax is loaded from a remote server, and if you want to use MathJax offline you'll need to download a copy of the library and adjust the mathjax configuration value.
The configuration I'm trying is
math: {
    mathjax: 'MathJax/MathJax.js',
    config: 'TeX-AMS_HTML-full'
}
where MathJax/MathJax.js is the relative address of the MathJax.js file in a clone of the MathJax repository I have locally on my computer.
When I first load a page of the presentation that has equations on it (or reload it by pressing F5 in the browser), I briefly see the raw LaTeX code for the equations, then a rendering that looks very basic but still acceptable, and finally what looks like a nice rendering of the equations, except with thick frames around every character, which completely messes everything up.
How should MathJax be configured properly, so that it looks identical to when MathJax is loaded from a remote server?
I decided to start a project which is essentially a website. This website will be published through Github Pages.
My website will include an SVG file. The SVG file is generated by Graphviz from a DOT-file. The idea is that to modify the information displayed in the SVG, users can change the definition of the DOT-file, then Graphviz will re-generate the SVG, and the new SVG image will automatically be displayed once the web page is served.
However, I am left in the uncomfortable situation of requiring contributors who edit the DOT-file to run a script that calls Graphviz, and then commit changes to both the SVG and the DOT file.
If a contributor changes the DOT-file, but forgets to run the Graphviz script, then commits, the repository will contain a DOT-file and an SVG which are inconsistent with each other.
I can't leave the DOT-file untracked, because the SVG is gibberish to a human reader; the DOT-file is the human-editable definition. I can't leave the SVG untracked, because how else would it stay up to date and available for GitHub Pages to consume? And yet, with both of them tracked, I am essentially keeping track of changes in a redundant manner and introducing opportunities for conflicts. It's a bit like versioning both your C code and the compiled .exe, which is silly.
What's the best way of making sure that whenever the DOT-file is edited, the SVG stays consistent with it? Or do I need to seriously rethink my strategy?
You might consider setting up a Jenkins instance to do this. Create a job that is triggered by a change to the DOT-file (using the Git plugin). The job would execute the dot command and then commit the regenerated SVG file.
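A lighter-weight variant of the same idea can run locally as a Git pre-commit hook (a sketch; it assumes dot is on the PATH and the files are named graph.dot and graph.svg):

```shell
#!/bin/sh
# Save as .git/hooks/pre-commit and make it executable (chmod +x).
# Regenerates graph.svg whenever graph.dot is staged, so the two cannot drift apart.
if git diff --cached --name-only | grep -q '^graph\.dot$'; then
    dot -Tsvg graph.dot -o graph.svg || exit 1  # a failed render aborts the commit
    git add graph.svg                           # include the fresh SVG in the same commit
fi
```

Note that hooks are not copied by git clone, so contributors still have to install the hook themselves; a CI job like the Jenkins one above is the only way to enforce this on the server side.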
Generated files should not be committed to your repository.
By default, GitHub Pages uses Jekyll to create sites.
If you are using this workflow, then I would suggest taking a look at writing/using a Jekyll Generator plugin to dynamically create this SVG from your DOT-file.
We are considering allowing user-uploaded SVGs in our web app. We've been hesitant to do this before, due to a large number of complex vulnerabilities that we know exist in untrusted SVGs. A coworker found the --vacuum-defs option to Inkscape, and believes that it renders all untrusted SVGs safe for processing.
According to the manpage, that option "Removes all unused items from the <defs> section of the SVG file. If this option is invoked in conjunction with --export-plain-svg, only the exported file will be affected. If it is used alone, the specified file will be modified in place." However, according to my coworker, "Scripting is removed, XML transformations are removed, malformations are not tolerated, encoding is removed and external imports are removed."
Is this true? If so, is it enough that we should feel safe accepting untrusted SVGs? Is there any other preprocessing we should do?
As I understand it, the main concern with serving untrusted SVGs is the fact that SVG files can contain JavaScript. This is obvious for SVG, because embedded JavaScript is part of the format, but it can happen with every type of uploaded file if the browser is not careful.
Therefore, and even though modern browsers do not execute scripts found in <img> tags, I think it's good to serve the images from a different domain with no cookies/auth attached to it, just in case, so that any executed script cannot compromise users' data. That would be my first concern.
Of course, if the user downloads the SVG, opens it from the desktop, and happens to open it with the browser, it might execute the potentially malicious payload. So, back to the original question: --export-plain-svg does remove scripting, but as I don't know of other SVG-specific vulnerabilities, I haven't checked for them.
I'm working on an application primarily targeted at Linux, which uses a TTF font. I need the font's file name and path, because I have to load it with the SDL_ttf function TTF_OpenFont(char *file, ...). The problem is that there are many different directories for TTF fonts on different distributions. What is the best way to deal with this problem? I've come up with some solutions, but each of them seems suboptimal to me:
pack the font along with the application, and install it to the application's own /usr/share/ directory.
check the font path with fc-list : file.
hardcode every path variation in the application and try them in turn when loading the file.
Your first and second solutions are quite good, except that in the second case it may be better to call the FcFontList function. The third is quite unreliable, though it highly depends on the application type (it can be OK in some cases, e.g. if you make this path user-configurable).
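For the second approach, the command-line side of fontconfig can resolve a concrete file path at run time instead of hardcoding directories (a sketch; the font name is just an example):

```shell
# Ask fontconfig for the file backing the best match for a font name.
# fc-match falls back to the closest installed font, so on a working
# fontconfig setup this always prints some usable path.
FONT_FILE=$(fc-match --format='%{file}' 'DejaVu Sans')
echo "$FONT_FILE"   # this is the path to hand to TTF_OpenFont()
```

In C you would get the same result with FcFontMatch() and FcPatternGetString(..., FC_FILE, ...), which avoids shelling out.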