To explain quickly, I have an nginx server running within Docker, acting as a reverse proxy for my Flask web app. A part of my configuration is using proxy_set_header Authorization to pass some credentials into the proxy_pass website.
This all works fine - but I want to push all this stuff to GitHub, and of course I don't want my creds, encoded or not, to show up.
What is my best option here? All I can think of is having something similar to dotenv, but for nginx.conf files rather than .py files.
Does anyone have any ideas on what I could do in order to pass my creds in without hardcoding them explicitly in the config?
You can create variables with NGINX using set in a separate configuration file, and add that file to .gitignore.
conf.d/creds.include
set $apiuser "user";
set $apipass "pass";
http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set
app.conf
server {
    include conf.d/creds.include;
    ...
    location / {
        proxy_pass ...
        # note: if the upstream expects HTTP Basic auth, the value must be base64("user:pass")
        proxy_set_header Authorization "$apiuser:$apipass";
    }
}
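If the upstream expects HTTP Basic auth, the header value has to be base64("user:pass"), not the raw pair. A quick way to generate it (a sketch, with placeholder credentials):

```shell
# Basic auth wants base64 of "user:pass" (placeholder values shown)
printf 'user:pass' | base64
# → dXNlcjpwYXNz
```

You would then set the generated string in creds.include rather than the plain user:pass pair.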
You should mention this in the README of your repo so that anybody who clones it knows how to set it up.
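To keep the dotenv analogy, you can also generate the gitignored include at container startup from environment variables, so no secret is ever committed. A sketch (the APIUSER/APIPASS variable names are made up):

```shell
# generate conf.d/creds.include from environment variables;
# in practice APIUSER/APIPASS would come from your secret store or .env
export APIUSER=user APIPASS=pass
mkdir -p conf.d
printf 'set $apiuser "%s";\nset $apipass "%s";\n' "$APIUSER" "$APIPASS" > conf.d/creds.include
```

In a Docker setup, this could run in the entrypoint script just before nginx starts.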
I have a Node 10 app running on Elastic Beanstalk, and it throws 413 errors when the request payload is larger than ~1MB.
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.16.1</center>
</body>
</html>
The request is not hitting my app at all; it's being rejected by nginx.
I have tried configuring AWS to increase the size of the allowed request body based on this answer, to no avail.
I've tried adding a file at .ebextensions/01_files.config with the contents:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
That didn't work, so I tried adding the file directly to .ebextensions/nginx/conf.d/proxy.conf with only:
client_max_body_size 20M;
And this also didn't work. Then I SSH'ed into the instance and added the file directly. Upon re-deploy, the entire conf.d directory was deleted and re-written, without this file.
How can I get AWS Elastic Beanstalk with Node.js 10 running on 64bit Amazon Linux 2/5.1.0 to accept nginx configuration?
The nginx setting you are trying to use (/etc/nginx/conf.d/proxy.conf) is for Amazon Linux 1.
Since you are using Amazon Linux 2, you should be using different paths for your nginx settings. For AL2, the nginx configuration should live in .platform/nginx/conf.d/, not in .ebextensions, as shown in the docs.
Therefore, you could have the following .platform/nginx/conf.d/myconfig.conf with content:
client_max_body_size 20M;
The above is only an example of the config file. I can't verify that the setting itself will work, but you are definitely using the wrong folders to set nginx options.
My recommendation would be to first make it work manually over ssh, as you are attempting now. You may find that, if nothing else works, you need to override the entire nginx configuration by providing your own .platform/nginx/nginx.conf file.
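Concretely, the layout can be created from the project root before deploying (a sketch; the myconfig.conf filename is arbitrary):

```shell
# AL2 picks up nginx overrides from .platform/nginx/conf.d/ inside the app bundle
mkdir -p .platform/nginx/conf.d
printf 'client_max_body_size 20M;\n' > .platform/nginx/conf.d/myconfig.conf
```

The file then ships with the application bundle on every deploy, so it is not lost when the instance is replaced.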
Adding client_max_body_size 20M; in the file
.platform/nginx/conf.d/proxy.conf fixed it for me. Please make sure to check first whether you are using Amazon Linux 2.
I run caddy with docker. I have my website loaded to /etc/license inside the docker container.
When I serve from the root, with the following Caddyfile:
$MYDOMAIN {
root * /etc/license
file_server
}
It works as expected: my website loads when I go to $MYDOMAIN.
Now I want to put this website under the route /license, so when I go to $MYDOMAIN/license I see my website. It seems like it should be straightforward, but I have tried everything I could think of and I can't get it to work.
This is my latest attempt Caddyfile:
$MYDOMAIN {
handle /license {
root * /etc/license
file_server
}
# handle other routes
}
Does anybody know how to make it work the way I want, and why the current setup doesn't? Thank you.
Your config has a little mistake: your folder structure needs to match your route. If your subroute is $MYDOMAIN/license/ and your website resides in /etc/license, you need to point your root to the directory one level higher (/etc). But I would recommend creating a new directory inside license with the same name and putting your website there: /etc/license/license.
You could also solve it like this:
$MYDOMAIN {
# root * /var/wwwroot
root /license/* /etc/license
file_server
#reverse_proxy /api/ localhost:5000
}
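For completeness, an untested alternative sketch (assuming Caddy v2): handle_path works like handle, but strips the matched prefix before the file server resolves the path, so no extra license/license directory nesting is needed:

```
$MYDOMAIN {
    handle_path /license/* {
        root * /etc/license
        file_server
    }
}
```

With this, a request for $MYDOMAIN/license/index.html is served from /etc/license/index.html.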
Sources: AWS Elastic Beanstalk Linux 2 .NET Core API
After receiving the 413 Request Entity Too Large error, I did some research into a solution. Everything works when I connect to the machine over ssh and add client_max_body_size 20M; by editing the file with nano /etc/nginx/nginx.conf.
But with every new deploy, it reverts to the old configuration.
Based on my research, I created a .platform/nginx/conf.d/proxy.conf file in the root directory of the project.
proxy.conf content:
client_max_body_size 1024M;
I also created the .platform/00_myconf.config file.
00_myconf.config content:
container_commands:
  01_reload_nginx:
    command: "service nginx reload"
I keep getting the same error (413 Request Entity Too Large) when I upload and deploy after adding these.
When I connect to the machine over ssh, I can't find the client_max_body_size line in the file opened with nano /etc/nginx/nginx.conf.
This is old and hopefully you figured it out already.
If not, the reason you can't find the client_max_body_size line is that it isn't there by default, you have to add it.
ssh into the instance then:
cd /etc/nginx/conf.d/
sudo nano proxy.conf
paste in:
client_max_body_size 1024M;
and save and exit.
finally run:
sudo service nginx restart
Unfortunately, the files stored on EC2 are ephemeral and will be lost on each deploy, or if an instance goes down. As yet I have not been able to find an in-code solution for dotnet, but I will update when I do.
Edit: OK, so here is an in-code solution (i.e. one not dependent on EC2's ephemeral storage):
Make a file in /.platform/nginx/conf.d/proxy.conf with the content:
client_max_body_size 100M;
Make sure in the file's properties in Visual Studio that Copy to output directory is set to "Always" and that the build action is "Content".
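The same Visual Studio setting can also be written by hand in the project file (a sketch, assuming an SDK-style .csproj):

```xml
<ItemGroup>
  <!-- publish the nginx override with the app bundle on every deploy -->
  <Content Include=".platform\nginx\conf.d\proxy.conf" CopyToOutputDirectory="Always" />
</ItemGroup>
```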
I followed the Toran Proxy documentation step by step, but I ran into a problem.
I have configured toran_host and toran_http_port in parameters.yml, but the generated packages.json still uses the domain example.org.
I tried to find the source of the issue, but the application is too complex for me.
This is my parameters.yml:
parameters:
    # this secret should be changed to something unique and random if possible
    secret: ThisTokenIsNotSoSecret-Change-It
    # http or https depending on your hosting setup
    toran_scheme: http
    # in case you use non-standard ports you can update them here
    toran_http_port: 91
    toran_https_port: 443
    # the hostname toran is hosted at
    toran_host: 121.199.35.34:91
    # e.g. /foo if toran is hosted in a sub-directory, or leave it empty if it is on its own domain, no trailing slash!
    toran_base_url:
But have you tried clearing the cache? Delete everything in app/cache/ and try again, because the parameters need to be set correctly before the cache is created, otherwise it is not rebuilt.
This answer comes from an email Jordi sent me; he is the author of Toran. Thanks for the nice and hard work!
You have to modify your config file at app/config/parameters.yml:
# the hostname toran is hosted at
toran_host: example.org
Then delete the production cache at app/cache/prod, and finally run the cron job again: php bin/cron -v
For further instructions follow official installation documentation here: https://toranproxy.com/download
So I've just pushed my app to Dokku (Digital Ocean) - and get the following error returned from an ajax post:
POST http://example.com/foo 413 (Request Entity Too Large)
A quick google shows this problem is due to client_max_body_size being too low. So I've SSH'd into the server, opened up the app's nginx.conf and increased it as per the instructions here: https://github.com/progrium/dokku/issues/802
client_max_body_size 100M;
However, I still have the same problem... Do I need to restart a process or something? I tried restarting the dokku app - but all this did was to overwrite my nginx.conf file.
@Rob's answer is correct, but has the problem that the changes are not persisted, because the nginx.conf might be regenerated, e.g. when deploying.
The solution I use is outlined in this github commit https://github.com/econya/dokku/commit/d4ea8520ac3c9e90238e75866906a5d834539129 .
Basically, Dokku's default nginx templates include every file in the nginx.conf.d/ subfolder into the main server configuration block, so:
mkdir /home/dokku/myapp/nginx.conf.d/
echo 'client_max_body_size 50M;' > /home/dokku/myapp/nginx.conf.d/upload.conf
chown dokku:dokku /home/dokku/myapp/nginx.conf.d/upload.conf
service nginx reload
will create a file that is merged into the nginx.conf (at nginx startup time, I believe) and kept untouched by Dokku as long as you do not use interfering plugins or define another nginx template (as of 2017/08).
This has been updated in Dokku and can be done from the CLI: dokku nginx:set node-js-app client-max-body-size 50m. https://dokku.com/docs/networking/proxies/nginx/#specifying-a-custom-client_max_body_size
I figured it out - I had to cd into the app's directory (as per the GitHub instructions: https://github.com/progrium/dokku/issues/802).
The right file to modify is /home/dokku//nginx.conf, and as @dipankar mentioned, you should add a client_max_body_size 20M; line to the server scope.
and then I typed
reload nginx
into the command line. All works :)