I'm using this code inside a server block to force the download of mp3 files:
location ~ /mp3folder/.+\.mp3$ {
    types {
        application/octet-stream;
    }
}
I want to specify multiple extensions like mp4, wmv, flv ... what changes should I make for this?
You can use
/mp3folder/.+\.(mp3|mp4|wmv|flv)$
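Putting that into the same location block, a minimal sketch (reusing the types directive from the question as-is):
location ~ /mp3folder/.+\.(mp3|mp4|wmv|flv)$ {
    types {
        application/octet-stream;
    }
}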
I've created a web app and analyzed it with a Google site analyzer.
In most cases I need to configure an .htaccess file. As I understand it, this file can be used only on an Nginx or Apache server, but I don't want to use either of these.
I want to do this .htaccess-style configuration using only Go tools. Currently my app is running on a VPS.
This project lets you support the HTTP auth standard in Go, with zero Apache code.
You can even use a password file created with the Apache htpasswd (bad) or htdigest (good) commands:
https://github.com/abbot/go-http-auth
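A minimal sketch of how that library is typically wired up, based on its documented API; the realm name, port, and secrets file path are placeholders:
package main

import (
	"fmt"
	"net/http"

	auth "github.com/abbot/go-http-auth"
)

// handle only runs once the request has been authenticated.
func handle(w http.ResponseWriter, r *auth.AuthenticatedRequest) {
	fmt.Fprintf(w, "hello, %s!", r.Username)
}

func main() {
	// secrets.htdigest is a placeholder file created with:
	//   htdigest -c secrets.htdigest "example.com" someuser
	secrets := auth.HtdigestFileProvider("secrets.htdigest")
	authenticator := auth.NewDigestAuthenticator("example.com", secrets)
	http.HandleFunc("/", authenticator.Wrap(handle))
	http.ListenAndServe(":8080", nil)
}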
You don't need .htaccess as it's only meant for Apache:
http://httpd.apache.org/docs/2.2/howto/htaccess.html
Even with Apache, external services like a Google site analyzer can't see .htaccess, since Apache never serves that file; it stays private to the server.
Everything Apache can do with .htaccess, Go can do with net/http, or with a third-party package like Gorilla to help.
If you want to enforce some constraints, you can use something like the following.
package main

import (
	"io/fs"
	"net/http"
	"path/filepath"

	"github.com/gorilla/mux"
)

// TxtFileSystem wraps an http.FileSystem and refuses to open anything but .txt files.
type TxtFileSystem struct {
	http.FileSystem
}

func (txtFS TxtFileSystem) Open(path string) (http.File, error) {
	if filepath.Ext(path) != ".txt" {
		return nil, &fs.PathError{Op: "open", Path: path, Err: fs.ErrNotExist}
	}
	return txtFS.FileSystem.Open(path)
}

func main() {
	m := mux.NewRouter()
	// StripPrefix is needed so /doc/a.txt maps to ./doc/a.txt rather than ./doc/doc/a.txt.
	m.PathPrefix("/doc/").Handler(http.StripPrefix("/doc/", http.FileServer(TxtFileSystem{http.Dir("./doc")})))
	http.ListenAndServe(":8080", m)
}
That will only allow you to access files whose extension is .txt.
I have an "upload" directory where users can upload confidential files (jpg, png, pdf). Each user gets assigned a folder inside upload, ex: /001/, /002/, ..., /999/, etc.
I want these files to be accessible only through SFTP, so the url http://example.com/upload/259/image.jpg should return a 403 error message.
I tried many variations, but still the files can be accessed through the url.
location ~ /upload/\.(jpe?g|png|gif|ico)$ {
    deny all;
    return 403;
}
Any thoughts?
Your pattern still needs to match the '/259/image' part between /upload/ and the extension.
This should work:
location ~ /upload/.*\.(jpe?g|png|gif|ico)$ {
    deny all;
    return 403;
}
If access to /upload is only via SFTP, then this is all you should need:
location ^~ /upload/ { return 403; }
By skipping regex evaluation with ^~ you'll improve performance. Your configuration will also scale with fewer problems by not using a regex location: a prefix location can go anywhere in the config, but a regex location cannot, because the first regex that matches is used, which can lead to confusion down the road.
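For illustration, a rough sketch of how that prefix location short-circuits regex matching inside a server block (the server name and the second location are placeholders):
server {
    listen 80;
    server_name example.com;

    # ^~ : when this prefix is the longest match, regex locations are never consulted
    location ^~ /upload/ {
        return 403;
    }

    # without the block above, this regex would also match /upload/259/image.jpg
    location ~ \.(jpe?g|png|gif|ico)$ {
        expires 30d;
    }
}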
This is most likely an anti-pattern, but I'd like to know nonetheless:
I need to extract a tgz that is stored in Puppet and then move the contents somewhere else. Is it possible, in a Puppet exec { }, to refer to the file where it is stored on disk?
For example, Puppet is available at /usr/local/puppet, and the tgz file I need is at /usr/local/puppet/modules/components/files/file.tgz. In the exec { } can I do something like command => "/bin/cp $modules/components/files/file.tgz /somewhere_else"? Or do I have to declare a file { source => "..." } block first?
Both approaches are correct if you run Puppet with puppet apply.
In a master-agent architecture, using exec to copy the file will probably not work at all, because the module files live on the master, not on the agent's local disk.
In my opinion, using a file resource is more "puppet-like", but it has one significant drawback.
You can use:
file { '/some_path/somewhere_else':
  source => '/usr/local/puppet/modules/components/files/file.tgz',
}
This will create the file /some_path/somewhere_else with the same content as /usr/local/puppet/modules/components/files/file.tgz (it makes a copy of the original file).
But if /some_path does not exist in the file system, the resource will fail.
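One way around that drawback, sketched with the same placeholder paths, is to manage the parent directory too:
file { '/some_path':
  ensure => directory,
}

file { '/some_path/somewhere_else':
  source  => '/usr/local/puppet/modules/components/files/file.tgz',
  require => File['/some_path'],
}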
If you are working with tgz files, you can also consider using one of the archive Puppet modules, e.g. gini's.
UPDATE:
I can propose two approaches:
Use the Puppet file server to serve the files (or define a module path for old Puppet versions), then just use it, e.g.:
file { '/some_path/somewhere_else':
  source => 'puppet:///modules/components/file.tgz',
}
Define a custom Facter fact that points to a path in your filesystem containing the required files, e.g.:
file { '/some_path/somewhere_else':
  source => "${::my_custom_fact}/some_path/file.tgz",
}
I do not think that any of the core facts will be useful for you here.
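For completeness, a custom fact like the my_custom_fact used above is just a small Ruby file shipped in a module's lib/facter directory; the module name and returned path below are assumptions:
# modules/components/lib/facter/my_custom_fact.rb
Facter.add(:my_custom_fact) do
  setcode do
    # directory on the agent that holds the required files (assumed location)
    '/usr/local/puppet/modules/components/files'
  end
end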
I'm using the config var plugin for Heroku (see https://devcenter.heroku.com/articles/config-vars).
It allows me to use a .env file, which I do not push to source control, to define the Heroku environment.
So when I'm on Heroku, I can access sensitive information via System.properties.
In dev, I would like to read from this file, so it would be best if it were on my classpath.
The .env file is at the root of my project, so I cannot use something like this:
sourceSets {
    main {
        resources {
            srcDirs = ['src/main/resources', '/']
        }
    }
}
What is the simplest way to include a single file in the Gradle resources?
The earlier answers seemed more complicated than I was hoping for, so I asked on the Gradle forums and got a reply from Sterling Green, one of the Gradle core devs.
He suggested configuring processResources directly:
processResources {
    from(".env")
}
He also mentioned that it might be a bad idea to include the project root as an input directly.
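If the end goal is just reading those values in dev once processResources has copied .env onto the classpath, a rough sketch in Groovy (the key name is only an example):
def props = new Properties()
// after the build, .env sits at the root of the runtime classpath
getClass().getResourceAsStream('/.env')?.withStream { props.load(it) }
println props.getProperty('DATABASE_URL')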
After merging the previous answer with other tips from other sites, I came up with the following solution:
sourceSets {
    specRes {
        resources {
            srcDir 'extra-dir'
            include 'extrafiles/or-just-one-file.*'
        }
    }
    main.resources {
        srcDir 'src/standard/resources'
        srcDir specRes.resources
    }
}
processResources {
    rename 'some.file', 'META-INF/possible-way-to-rename.txt'
}
I still wonder whether there is some better way.
What I ended up doing was to add the project root directory as a resource folder, including only the file I was interested in:
sourceSets.main.resources {
    srcDir file('.')
    include '.env'
}
Seems to do the trick. I wonder if it's the best solution, though.
I am facing an issue while configuring something with CFEngine 3.5. I have created a policy to install some packages from source: it downloads tarballs from a URL, untars them, and then runs make and make install. Everything works fine, except that when the tarballs are downloaded they end up in /etc; I want CFEngine to put these files in /tmp.
Is there any way to customize this default behavior of CFEngine so that all temporarily downloaded files go to /tmp instead of /etc?
Here is the policy snippet:
bundle agent install
{
  vars:
    "packages" slist => {
      "Algorithm-Diff-1.1902",
      "Apache-DB-0.13",
      "Apache-DBI-1.06",
      "Apache-Session-1.83",
      "Apache-SessionX-2.01",
      "AppConfig-1.65",
      "Archive-Tar-1.32",
    };

  commands:
    "/usr/bin/wget http://10.X.X.X/downloads/perl-modules/$(packages).tar.gz;
     /usr/bin/gunzip $(packages).tar.gz;
     tar -xf $(packages).tar;
     cd $(packages);
     /usr/bin/perl Makefile.PL;
     /usr/bin/make;
     /usr/bin/make install;"
      contain => standard,
      classes => satisfied(canonify("$(packages)-installed"));
}
body contain standard
{
  useshell   => "true";
  exec_owner => "root";
}
Thanks in advance.
You can add the directory in which the commands should be executed to the contain body, like this:
body contain standard
{
  useshell   => "true";
  exec_owner => "root";
  chdir      => "/tmp";
}
Please note there are already a few contain bodies in the standard library (lib/3.5/commands.cf); maybe one of those can be used so you don't have to write your own. Also note that CFEngine already executes as root, so exec_owner => "root" is not strictly necessary.
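If your copy of the standard library includes the in_dir_shell contain body (it should be in lib/3.5/commands.cf, but it's worth double-checking), the promise can use it directly instead of a hand-written body:
  commands:
    # in_dir_shell("/tmp") sets chdir => "/tmp" and useshell => "true"
    "/usr/bin/wget http://10.X.X.X/downloads/perl-modules/$(packages).tar.gz;
     /usr/bin/gunzip $(packages).tar.gz;
     tar -xf $(packages).tar;
     cd $(packages);
     /usr/bin/perl Makefile.PL;
     /usr/bin/make;
     /usr/bin/make install;"
      contain => in_dir_shell("/tmp"),
      classes => satisfied(canonify("$(packages)-installed"));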