DNN 9 SVG file upload

Is anyone familiar with the DNN 9 platform here? If so, could someone direct me on how to upload an SVG file to the server? In older versions of DNN (7, 8, etc.) there was a setting in the Host Lists settings where you could enable the file type, but in DNN 9 those pages have been removed from the user interface.
The following command has been run in SSMS:
INSERT INTO Lists (ListName, Value, Text, DefinitionID, SystemList)
VALUES ('ImageTypes', 'svg', 'Scalable Vector Graphics', '-1', 'True');
This created a new row in the database, but when I try to upload an SVG file it still shows a wrong-format error (the allowed file types are: "bmp,gif,jpeg,jpg,png").
Can someone direct me to where the SVG file type can be enabled?
Many thanks!

The option to add file types is still there; it has been moved and renamed. Go to:
Settings > Security > More
On the tab called More Security Settings you will find the Allowable File Extensions setting.

Some DNN sites allow users to upload certain files to their sites. A malicious user can upload an SVG file containing script code that steals other users' sensitive data (cookies, etc.).
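As a rough illustration of why SVG uploads are risky, here is a minimal Python sketch (my own illustration, not part of DNN) that flags SVGs containing script elements or on* event-handler attributes. It is deliberately incomplete: a real sanitizer must also handle javascript: hrefs, foreignObject, external references, and more.

```python
import xml.etree.ElementTree as ET

def svg_looks_unsafe(svg_bytes):
    """Return True if the SVG contains <script> elements or
    event-handler attributes (onload, onclick, ...)."""
    try:
        root = ET.fromstring(svg_bytes)
    except ET.ParseError:
        return True  # reject anything that is not well-formed XML
    for el in root.iter():
        # tags may carry a namespace, e.g. '{http://www.w3.org/2000/svg}script'
        if el.tag.rsplit('}', 1)[-1].lower() == 'script':
            return True
        for attr in el.attrib:
            if attr.lower().startswith('on'):
                return True
    return False

safe = b'<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>'
evil = b'<svg xmlns="http://www.w3.org/2000/svg"><script>alert(1)</script></svg>'
print(svg_looks_unsafe(safe))   # False
print(svg_looks_unsafe(evil))   # True
```

This is why simply enabling the svg extension deserves some thought: unlike bmp/gif/jpeg/png, an SVG is an active document, not just pixels.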

Related

Prevent Cross Site Scripting but still support HTML file upload

I have a web application where users can upload and view files. Each user has a link next to the files he or she has uploaded; clicking the link opens the file in the browser (if possible) or shows the browser's download dialog. That is, an HTML/PDF/TXT file will be rendered in the browser, while a Word document will be downloaded.
It has been identified that rendering the HTML file in the browser could be a vulnerability - cross-site scripting.
What is the right solution to this problem? The two options I am currently looking at are:
to put a Content-Disposition header in the response so that HTML files are downloaded instead of viewed in the browser;
to find an HTML scrubbing/sanitizing library to remove any JavaScript from the file before I serve it.
Looking at Gmail, they take the second approach (scrubbing), combined with a separate domain for file downloads - maybe to minimize the attack surface. However, with that approach the receiver gets a different file than what was sent, which is not 'right' in my opinion (maybe I am biased). In my case the first option is easy to implement, but I wonder whether it is enough, or whether there is anything I am overlooking.
What are your thoughts on these approaches? Or do you have any other suggestions?
Based on your description, I can see three possible attack types (there may be more):
Client side code execution
As you said, your web server may serve a file as HTML and run JavaScript code on the client. This can be avoided with Content-Disposition, but I would go with MIME-type control through Content-Type: define your known file types (e.g. PDF, JPEG) and serve them with their respective MIME types (application/pdf, image/jpeg, etc.), and serve anything else as application/octet-stream.
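The MIME-type control described above might be sketched like this (a hypothetical handler; the whitelist is illustrative, though X-Content-Type-Options: nosniff is a real header worth adding):

```python
# Hypothetical download-handler sketch: only a whitelist of known types
# is served inline; everything else (html, svg, js, ...) is forced to
# download as an opaque binary so the browser never renders it.
SAFE_INLINE = {
    '.pdf': 'application/pdf',
    '.jpg': 'image/jpeg',
    '.jpeg': 'image/jpeg',
    '.png': 'image/png',
    '.gif': 'image/gif',
    '.txt': 'text/plain',
}

def response_headers(filename):
    ext = '.' + filename.rsplit('.', 1)[-1].lower() if '.' in filename else ''
    if ext in SAFE_INLINE:
        return {'Content-Type': SAFE_INLINE[ext],
                'Content-Disposition': 'inline',
                # Real header: stops browsers from second-guessing the type.
                'X-Content-Type-Options': 'nosniff'}
    return {'Content-Type': 'application/octet-stream',
            'Content-Disposition': 'attachment; filename="download"',
            'X-Content-Type-Options': 'nosniff'}

print(response_headers('report.pdf')['Content-Type'])  # application/pdf
print(response_headers('page.html')['Content-Type'])   # application/octet-stream
```

Note that the whitelist deliberately omits text/html, so an uploaded HTML file falls through to the download branch.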
Server side code execution
Although I see this as somewhat off-topic (since it involves other parts of your application and your server), be sure to avoid executing uploaded files on the server (e.g. PHP code through LFI). Your web server should not access the files directly (again, e.g. PHP); it is better to store them somewhere not accessible through a URL and retrieve them on request.
Consider whether you can reject certain files here (e.g. reject .exe uploads) and ask the user to zip them first.
Trust issues
Since the files are under the same domain, they will be accessible from JavaScript (via AJAX or loaded as a script), and other programs (or people) may trust your links. This is also related to the previous point: if you don't need unzipped .exe files, don't allow them. Using another domain may mitigate some trust problems.
Other ideas:
Zip all files uploaded
Scan each file with antivirus software
PS: For me, sanitization would not work in your case; the risk of missing something is too high.

How to hide the actual file name when accessing files through Amazon CloudFront?

There is an S3 bucket with millions of MP3 files in it. Each music file, along with its preview MP3 file, is in a folder inside that bucket. For example, for a given track:
/music/123456/file_master.mp3
/music/123456/file_preview.mp3
I want to let end users access the preview file through CloudFront and its web streaming feature. So I have set up CloudFront, and users can click on a link which points to the file on CloudFront:
http://blahblah.cloudfront.net/music/123456/file_preview.mp3
It works perfectly, except that a user can grab the file URL, replace the _preview part with _master, and listen to the entire track. Unfortunately, moving the master file and preview file to two different locations is not an option: not only are there millions of them, but an ingestion system is also constantly publishing files with that structure.
Is there a way to hide the file name and/or file path? e.g. something like http://blahblah.cloudfront.net/music/123456/ABC would be perfect.
I don't think CloudFront supports the kind of URL rewriting you propose, but you might be able to solve the problem by adding a new behavior to the CloudFront distribution. Use the "Path Pattern" in that behavior to match only, for example, "*_preview.mp3", and use behavior precedence to put the new behavior in front of the distribution's default behavior (behaviors are evaluated in order, first match wins). Finally, set "Restrict Viewer Access" to "Yes" on the default behavior, while leaving it set to "No" on the new behavior that matches only "*_preview.mp3".
See http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesPathPattern for more information in regards to the Path Pattern.
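As an abridged sketch, the two behaviors might look like this in the distribution config, using field names from the CloudFront API (the TargetOriginId "music-s3-origin" is hypothetical, and a real config requires additional fields such as forwarded values and TTLs):

```json
{
  "CacheBehaviors": {
    "Quantity": 1,
    "Items": [
      {
        "PathPattern": "*_preview.mp3",
        "TargetOriginId": "music-s3-origin",
        "ViewerProtocolPolicy": "allow-all",
        "TrustedSigners": { "Enabled": false, "Quantity": 0 }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "music-s3-origin",
    "ViewerProtocolPolicy": "allow-all",
    "TrustedSigners": { "Enabled": true, "Quantity": 1, "Items": ["self"] }
  }
}
```

With this arrangement, previews are served openly while any other path (including the _master files) requires a signed URL.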

Why save file with original name is dangerous in an upload?

Currently I'm working on a web project (Classic ASP) and I'm going to build an upload form.
Folklore says:
"Don't use the real name to save the uploaded files."
What are the problems and dangers, from a security point of view?
Proper directory permissions should stop most of this, but a potential danger with file names is that a user could name a file something like "../Default.asp" or "../Malware.asp", or some other malicious path, attempting to overwrite files and/or place an executable script on your server.
If I'm using a single upload folder, I always save my users' uploads with a GUID file name, simply because users aren't very original and you get name conflicts very often otherwise.
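The GUID-naming idea can be sketched in a few lines of Python (the extension whitelist is my own illustration, not part of Classic ASP):

```python
import os
import uuid

# Whitelist of extensions we preserve; anything else gets a neutral one.
ALLOWED_EXTS = {'.jpg', '.jpeg', '.png', '.gif', '.pdf', '.txt'}

def stored_name(original_name):
    """Discard the user-supplied name (and any '../' tricks in it)
    and keep only a whitelisted extension."""
    ext = os.path.splitext(original_name)[1].lower()
    if ext not in ALLOWED_EXTS:
        ext = '.bin'
    return uuid.uuid4().hex + ext

print(stored_name('../Default.asp'))  # random 32-hex name ending in '.bin'
print(stored_name('holiday.JPG'))     # random 32-hex name ending in '.jpg'
```

This sidesteps both the path-traversal danger and the name-collision nuisance at once; if you need the original name back for display, store it separately in a database column.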

Loading data on Unidata Integrated Data Viewer (IDV) from a THREDDS catalog?

I am trying to load some data into IDV from a THREDDS server via the catalog, but I get error messages such as
Server does not support Content-Length
I can add NetCDF data from my local folders, but I could not get this one to work. It seems like I am missing a basic step, because I could not find the information I am looking for in the user's manual either. I wonder if anybody has had a similar issue. I am trying the catalog below.
http://opendap.co-ops.nos.noaa.gov/thredds/catalog/NOAA/GBOFS/MODELS/201302/catalog.html
That is the error you will get in IDV if you try to open a catalog using the .html extension instead of the .xml extension.
In IDV, you can open datasets several ways:
"Data=>Choose Data=>From a Catalog": specify the thredds catalog
(using the xml extension) and then navigate to the dataset you want.
I usually use a regular web browser (like Chrome) to locate the
thredds catalog I'm interested in, then change the catalog URL from
the .html extension to the .xml extension.
For the Galveston Bay data, specify
"http://opendap.co-ops.nos.noaa.gov/thredds/gbofs_catalog.xml".
You will see a list of folders. You need to click the dot to the left
of the folder name to expand the folder so you can see the datasets.
Then select a dataset name and click the "add source" button at the
bottom of the page.
"Data=>Choose Data=>From a Web Server": specify the DAP URL of the
dataset you want.
I usually use a regular web browser (like Chrome) to navigate on the
opendap server until I reach an OPeNDAP Dataset Access Form, like:
http://opendap.co-ops.nos.noaa.gov/thredds/dodsC/NOAA/GBOFS/MODELS/201302/nos.gbofs.fields.forecast.20130205.t00z.nc.html
and then I cut-and-paste the "Data URL" near the top of the form into
the IDV URL box.
"Data=>Choose Data=>From the File System": specify a local NetCDF,
Grib, HDF5, or NcML file.
Loading a local NcML file can be particularly handy when the dataset you are trying to load doesn't meet CF conventions, and you need to make some fixes so that the dataset can be read in IDV.
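The .html-to-.xml swap described in option 1 is trivial, but as a quick sketch:

```python
def catalog_xml_url(url):
    """IDV wants the THREDDS catalog's .xml form, not the .html page."""
    if url.endswith('.html'):
        return url[:-len('.html')] + '.xml'
    return url

print(catalog_xml_url(
    'http://opendap.co-ops.nos.noaa.gov/thredds/catalog/NOAA/GBOFS/'
    'MODELS/201302/catalog.html'))
```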

Drupal 7: how to restrict file access to specific user roles

I need to develop a site on Drupal 7. I have some content types with File fields in CCK, and access to nodes of these types should be granted only to a specific Drupal user role. At any moment, the site administrator should be able to make these nodes 'public' or 'private'.
I can make nodes visible only to specific user roles, but this is not secure enough: if an anonymous user knows the path to a file (www.mysite.org/hidden_files/file1), he can download it.
What is the most elegant way to solve this problem?
Thanks in advance.
Check out this documentation here: http://drupal.org/documentation/modules/file
Specifically, see the section titled "Managing file locations and access", which talks about setting up a private data store (supported out of the box by Drupal 7; it just needs to be configured).
To paraphrase, create a folder such as:
sites/default/files/private
Put a .htaccess file in that folder with the following to prevent direct access to the files via the web:
Deny from all
(The documentation claims that the following step performs the two steps above automatically. I haven't tested that, but you may be able to save some time by skipping them.)
Log into Drupal's admin interface, go to /admin/config/media/file-system, configure the private file system path, and select "Private files served by Drupal" as the default download method.
In order to define the fine-grained access to nodes and fields, you can use Content Access: http://drupal.org/project/content_access
You will also need to edit your content types and set the file / image upload fields to save the uploaded files into Private Files instead of Public Files.
At this point, the node- and field-level permissions will determine whether or not users are allowed to access the files, which will be served through menu hooks that verify credentials before serving each file.
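As a language-neutral illustration of that pattern (this is not Drupal API code; the directory, role name, and return convention are all hypothetical), a private-file handler checks credentials and path safety before streaming anything:

```python
# Sketch of "private files served by the application": files live
# outside the web root, and a handler verifies the user's role and
# the requested path before streaming a file.
import os

PRIVATE_DIR = '/var/www/private_files'   # not reachable by any URL

def serve_private_file(user_roles, relative_path):
    """Return an (http_status, body) pair."""
    # Refuse any path that resolves outside the private directory.
    full = os.path.realpath(os.path.join(PRIVATE_DIR, relative_path))
    if not full.startswith(os.path.realpath(PRIVATE_DIR) + os.sep):
        return (403, None)
    # Role check stands in for Drupal's node/field permissions.
    if 'member' not in user_roles:
        return (403, None)
    if not os.path.isfile(full):
        return (404, None)
    with open(full, 'rb') as f:
        return (200, f.read())
```

The key point is the ordering: the web server never maps the directory to a URL, so every request has to pass through this check first.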
Hope this helps.
