Compress SVG or Convert SVG to SVGz [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Is there a way to compress SVG files?
I've been told that SVGz is the best option, but I haven't found a converter for my SVG file.
Or is there an app where I can import an SVG file and export to SVGz?

SVGz is simply a gzip-compressed SVG:
http://en.wikipedia.org/wiki/Scalable_Vector_Graphics#Compression
If you host SVG files on a properly configured web server, it will already compress the files it sends to the client, so converting the SVG to SVGz is unnecessary (and in fact undesirable).
To test whether your web server is configured correctly, refer to: How can I tell if my server is serving GZipped content?
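For a quick scripted check, here is a minimal sketch (Node.js assumed; the URL is a placeholder) that requests the file while advertising gzip support and prints the Content-Encoding header that comes back:

import * as https from "node:https";

// Ask for the SVG with gzip allowed and report whether the server compressed it.
// https://example.com/image.svg is a placeholder URL.
https.get(
  "https://example.com/image.svg",
  { headers: { "Accept-Encoding": "gzip" } },
  (res) => {
    console.log("Content-Encoding:", res.headers["content-encoding"] ?? "(none)");
    res.resume(); // discard the body; only the header matters here
  },
);

A "gzip" value means the server is already compressing the SVG on the wire.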

I know this is an older post, but I thought it might be helpful to actually list apps that can save in SVGz format, as requested. Adobe Illustrator's Save As dialog will let you save to SVGz, as will Inkscape and 7-Zip. And if you are a fan of the command line and use Windows, GZip for Windows is a fairly simple solution.
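If you'd rather script the conversion yourself, gzip from any language produces a valid SVGz; here is a minimal sketch using Node's built-in zlib (filenames are placeholders):

import { createReadStream, createWriteStream } from "node:fs";
import { createGzip } from "node:zlib";
import { pipeline } from "node:stream/promises";

// Gzip-compress an existing SVG into an .svgz file.
async function svgToSvgz(src: string, dest: string): Promise<void> {
  await pipeline(
    createReadStream(src),
    createGzip({ level: 9 }), // maximum compression
    createWriteStream(dest),
  );
}

svgToSvgz("drawing.svg", "drawing.svgz").catch(console.error);

The output is an ordinary gzip stream, which is all the SVGz format is.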

Related

Handling files on the production server [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 days ago.
I'm creating a website using the MERN stack that will help people create videos with a voice-over and subtitles.
The user uploads a video and writes some text; both are sent to the server, which returns the uploaded video with subtitles and a voice-over of the text they wrote.
I have built this functionality, but what I'm stuck on is how to handle the files (uploaded videos, audio files, subtitles, and the final generated videos).
My current implementation handles everything on the production server, as the client prefers that everything stay on the server (and he doesn't mind paying extra for that).
The website is expected to handle between 1,000 and 3,000 requests per day. Could a single server handle that load, or should I get one server for the files and another for my functionality?
I'm currently finishing my code so I can test this, but I thought someone here might help me think about it differently; maybe I'm doing something naive and I should insist on using a third-party service to handle the database and the files.
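To make the current setup concrete, here is a stripped-down sketch of the kind of upload handler I mean, not my actual code (Express and multer assumed; names are illustrative):

import express from "express";
import multer from "multer";

// Keep uploads on the production server's local disk, as described above.
const upload = multer({ dest: "uploads/" });
const app = express();

app.post("/videos", upload.single("video"), (req, res) => {
  // req.file?.path is where the uploaded video was stored;
  // the narration text arrives as an ordinary form field.
  const narration = req.body.text as string;
  // ...queue the subtitle / voice-over job here...
  res.json({ stored: req.file?.path, narration });
});

app.listen(3000);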

How to create a dump/mirror of an external wiki? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
I am trying to create a local exact copy of a Wiki on my Linux machine, but no matter what I try, it won't work properly in the end.
The challenge is that I have no access other than web-access to the Wiki, but it would be sufficient to have just a snapshot of the current state. I tried to use wget, but it fails to download files properly and does not convert links inside those pages.
I tried to use websucker.py but again it did not properly convert links, and since most Wiki files have no extension, I could not get my web-server (lighttpd) to serve them as text/html.
Does anyone have a working tool or can tell me what parameters to use with either wget or websucker.py to create a working clone of an existing Wiki?
Since nobody seemed to know, I spent a few more hours on Google and found the answer myself. I'm putting it here in case others have the same issue.
Every MediaWiki wiki has an API that, among other things, offers a dump feature. You can use that API to get a full or current dump of any such wiki. See here for a tutorial on how to use dumpgenerator.py, created by the WikiTeam.
You can later import that XML dump either through the Special:Import page or with the importDump.php script, as explained in the MediaWiki manual.
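For example, a single page can be pulled straight from the API as export XML, which is the same mechanism dumpgenerator.py builds on; a minimal sketch (Node 18+ fetch assumed; the wiki URL and page title are placeholders):

// Fetch one page as MediaWiki export XML via the API.
async function exportPage(apiUrl: string, title: string): Promise<string> {
  const params = new URLSearchParams({
    action: "query",
    titles: title,
    export: "1",
    exportnowrap: "1",
  });
  const response = await fetch(`${apiUrl}?${params}`);
  return response.text(); // export XML, ready for Special:Import or importDump.php
}

exportPage("https://wiki.example.org/w/api.php", "Main Page")
  .then((xml) => console.log(xml.slice(0, 200)))
  .catch(console.error);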

Are Modern Browsers susceptible to HTTP Splitting Attacks [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I am trying to reproduce the HTTP splitting attack on my machine.
For that, I wrote this PHP code:
<?php
header("Location: " . $_GET['page']);
?>
And then I enter the following URL:
http://localhost/webgoat/httpsplitting.php?page=index%0aContent-Length:%200%0a%0aHTTP/1.1%20200%20OK%0aContent-Type:%20text/html%0aContent-Length:%2017%0a%0a<html>Hacked</html>
But when I intercept the exchange using WebScarab, I see that these headers are not included in the web server's response.
Additionally, I saw in Wireshark that the LF sequence (i.e. %0a) is not converted to its ASCII character; it is treated as a literal string rather than a line feed.
So I concluded that modern web browsers are not susceptible to this attack. Am I correct?
The attack is not only at the browser level, but also at caching and proxy servers. So even if browsers add protection, it might not be enough.
See the paper (search for proxy, for example):
http://www.securityfocus.com/archive/1/425593
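On the application side, the usual defence is to refuse CR/LF in anything that ends up in a response header; modern PHP's header() already rejects values containing newlines, which is likely part of what you observed. A minimal sketch of that check (TypeScript here, but the idea is language-agnostic):

// Reject any redirect target containing CR or LF before it reaches a
// Location header, so a split response can never be constructed.
function safeRedirectTarget(page: string): string {
  if (/[\r\n]/.test(page)) {
    throw new Error("CR/LF in redirect target: possible response splitting");
  }
  return page;
}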

Why do different web browsers interpret web pages differently? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
Why does every web browser interpret the same web page differently? Is there a standard for interpreting HTML, CSS, or JavaScript, or does it depend on the company that develops the web browser?
The browser is what interprets the HTML. Browser engineers do have standards to go by, but in the end they choose how their browser will interpret and display the HTML, CSS, etc., and how it will function.
There is a standard specification set by the World Wide Web Consortium, and most browsers follow it pretty well. Firefox, Opera, et al. follow it pretty much to the letter, but Internet Explorer does not in some cases.
Actually, yes: it depends on each browser's interpretation of CSS and, in turn, of many tags.
This article provides some more insight on the matter.

Playing audio files on a website [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Can you suggest a good way to play audio files on a website?
I am building a browser-based soundboard with HTML5 audio and JavaScript. I am testing it in Safari, Chrome, IE and Firefox, and so far there are no problems. The only issue is the formats the browsers can play; for example, Firefox will only play .ogg.
To solve this, I have a user-agent detector that directs the browser to a version of the site that has .ogg files on it.
I would recommend using the audio tag. As for fallback, 'Yi Jiang' is dead right. You could use something like jPlayer, which is an HTML5 audio plugin with a fallback.
Have you tried an HTML5 Audio Google search?
<audio src="elvis.ogg" controls autobuffer></audio>
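Rather than sniffing the user agent, you can also ask the audio element itself which formats it supports; a minimal sketch (filenames are placeholders):

// Pick a source based on what the browser reports it can play.
const audio = document.createElement("audio");
audio.src = audio.canPlayType("audio/ogg") !== "" ? "elvis.ogg" : "elvis.mp3";
audio.controls = true;
document.body.appendChild(audio);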
