How to use nginx as proxy to s3 aws? - node.js

I'm trying to run nginx in Docker as a proxy to AWS S3.
The problem is that the config uses variables I don't understand; where do they come from?
First, this is what my bucket looks like:
Inside this bucket I have a pic.png file.
I start nginx from Docker using docker-compose:
web:
  image: nginx
  volumes:
    - ./example.com.conf:/etc/nginx/conf.d/default.conf
  ports:
    - '8080:80'
Then I start the container with docker-compose up.
I have my aws_access_key and aws_secret_key from IAM key.
This is how I defined my example.com.conf file:
server {
    listen 80;
    server_name localhost;

    location ~ '^/([^/]+)/(.*)$' {
        set $bucket 'my-bucket';
        set $key '';
        # Setup AWS Authorization header
        set $aws_signature '';
        # the only reason we need lua is to get the current date
        set_by_lua $now "return ngx.cookie_time(ngx.time())";
        # the access key
        set $aws_access_key 'AKIA6*******';
        set $aws_secret_key '1wLXpiNN0***********';
        # the actual string to be signed
        # see: http://docs.amazonwebservices.com/AmazonS3/latest/dev/RESTAuthentication.html
        set $string_to_sign "$request_method\n\n\n\nx-amz-date:$now\n/$bucket/$key";
        # create the hmac signature
        set_hmac_sha1 $aws_signature $aws_secret_key $string_to_sign;
        # encode the signature with base64
        set_encode_base64 $aws_signature $aws_signature;
        proxy_set_header x-amz-date $now;
        proxy_set_header Authorization "AWS $aws_access_key:$aws_signature";
        rewrite .* /$key break;
        # we need to set the host header here in order to find the bucket
        proxy_set_header Host $bucket.s3.amazonaws.com;
        # another solution would be to use the bucket in the url
        # rewrite .* /$bucket/$key break;
        proxy_pass http://s3.amazonaws.com;
    }
}
But I get an error when I run nginx in Docker:
nginx: [emerg] unknown directive "set_by_lua" in /etc/nginx/conf.d/default.conf:13
So I'm not sure I'm doing this right.
I need an explanation and an example of how to do it right, please.
For example, what is $key? And what should the request look like: http://localhost:8080/pic.png?

Try using an nginx image with Lua installed:
web:
  image: firesh/nginx-lua
  volumes:
    - ./example.com.conf:/etc/nginx/conf.d/default.conf
  ports:
    - '8080:80'
The problem is that set_by_lua needs nginx to be compiled with ngx_devel_kit.
UPDATE
It seems that you are missing many modules; I suggest you use this Dockerfile
example:
docker run -v /path/to/example.com.conf:/etc/nginx/conf.d/default.conf openresty/openresty:centos
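As a sanity check, the Authorization value that config builds is AWS Signature V2: a base64-encoded HMAC-SHA1 of the string-to-sign. Here is a minimal sketch of the same computation with openssl, using placeholder credentials and the bucket/object from the question (note the exact date string from ngx.cookie_time uses dashes, so the value here won't match byte-for-byte, but the signing procedure is the same):

```shell
# Placeholder credentials -- substitute your real IAM key pair.
aws_access_key='AKIAEXAMPLE'
aws_secret_key='EXAMPLESECRET'
now=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')

# The string-to-sign for a GET with no Content-MD5/Content-Type headers,
# signing the x-amz-date header plus the canonicalized resource.
string_to_sign=$(printf 'GET\n\n\n\nx-amz-date:%s\n/my-bucket/pic.png' "$now")

# HMAC-SHA1 then base64 -- the equivalent of set_hmac_sha1 + set_encode_base64.
signature=$(printf '%s' "$string_to_sign" | openssl dgst -sha1 -hmac "$aws_secret_key" -binary | base64)

echo "Authorization: AWS $aws_access_key:$signature"
```

If the header nginx sends differs from what S3 expects, S3 returns a SignatureDoesNotMatch error that echoes back its own string-to-sign, which is handy for debugging.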

An alternative to using Lua for proxying S3 is to use njs. This project shows an example of nginx proxying S3 while running in Docker.

Related

Can't redirect traffic to localhost with nginx and docker

I'm new to Docker and nginx, so this may be a simple question, but I've been searching through questions/answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through Docker to reroute all requests from my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the API on localhost:3000 this works fine, but when I run against my.example.com I get a network error that the host isn't found. To be clear, the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its API for development.
This isn't working because your nginx is proxying the request to localhost, which is the container itself, but your app is running on the host's port 3000, outside of the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000/;
and add 127.0.0.1 example.com my.example.com to /etc/hosts.
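Put together, a minimal sketch of the default.conf might look like this (host.docker.internal resolves to the host from inside Docker Desktop containers; on Linux you may need to add it with --add-host=host.docker.internal:host-gateway when starting the container):

```nginx
server {
    listen 80;
    server_name my.example.com;

    location / {
        # Reach the API running on the *host*, not inside the container.
        proxy_pass http://host.docker.internal:3000/;
        proxy_set_header Host $host;
    }
}
```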

How to configure docker-compose.yml and nginx conf file to read an external drive?

I have nginx in a docker container. My docker-compose.yml is like this (simplified):
nginx:
  volumes:
    - /var/www/html:/www/:rw
    - /media/storage:/storage/:rw
Where /var/www/html is my website root and /media/storage is an external drive in my host machine (Azure).
Now I'm trying to point the website URL example.com/downloads to /storage, but without success. My nginx/conf.d/example.com.conf is as follows (simplified):
server {
    listen 80 default;
    server_name example.com;

    # this works
    root /www;
    index index.php;

    # this gets a 404 error
    location /downloads {
        root /storage;
    }
}
But I get a 404 error for example.com/downloads. What am I forgetting here? The file permissions and owner to both paths are the same. I don't know if the bad configuration is in example.com.conf or in docker-compose.yml. How should I configure these?
I solved this myself using alias /storage; instead of root /storage.
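The reason this works: with root, the full request URI is appended to the path, so /downloads/file.zip would be looked up at /storage/downloads/file.zip, which doesn't exist. With alias, the matched location prefix is replaced instead. A sketch of the working block:

```nginx
location /downloads {
    # With alias, /downloads/file.zip is served from /storage/file.zip.
    # With "root /storage;" it would map to /storage/downloads/file.zip.
    alias /storage;
}
```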

How to setup nginx with docker compose

I have nginx config and docker compose but I am not able to link them together properly.
default.conf:
worker_processes auto;

events { worker_connections 1024; }

http {
    upstream docker_nginx {
        server nginx:6000;
    }

    server {
        location / {
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://localhost:4000;
            proxy_set_header Host $host;
        }

        location /jwt/user/ {
            proxy_set_header X-Forwarded-For $remote_addr;
            pproxy_pass http://localhost:4000;
            proxy_set_header Host $host;
        }
    }
}
Docker compose:
version: '2'
services:
  reverseproxy:
    image: nginx
    container_name: docker_nginx
    ports:
      - 6000:6000
    volumes:
      - ./services:/default.conf
    restart: always
  jwt:
    build: ./jwt
    image: jwt:1.0
    ports:
      - 4000:4000
    restart: always
    volumes:
      - ./jwt
I want to listen on port 6000 and route requests to a service on port 4000. I tried http://nginx:6000/ and http://localhost:6000/, but I get this:
This site can’t be reached. The web page at http://localhost:6000/ might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT
Let me give you my repo; you can check my docker-compose.yml. It runs several services that are proxied by Nginx and work together.
At this link you'll find a default.conf file containing several Nginx upstream configurations; please take a look at how I used docker-compose service references there.
Inside the client/ directory, I also have another Nginx acting as the web server for a React.js project.
The server/ directory has a Node.js API; it connects to Redis and a Postgres SQL database, also built from docker-compose.yml.
I think this use case can be useful for you and other users!
There are a few things misconfigured:
The config file is mapped to the wrong location. It should be ./services/default.conf:/etc/nginx/nginx.conf
You have to explicitly tell nginx (in the conf file) to listen on port 6000, or otherwise map your port 6000 to port 80 inside nginx (the default for nginx): "6000:80"
Yes, and as suggested in the other answer, instead of proxying to localhost:4000 you have to proxy to jwt:4000.
For more on nginx in Docker, see its docs: https://hub.docker.com/_/nginx/
Should - ./services:/default.conf be something like - ./default.conf:/etc/nginx/sites-enabled/default.conf?
Also try replacing proxy_pass http://localhost:4000; with proxy_pass http://jwt:4000; and you probably don't need the upstream section.
The proxy_pass target is the address of the proxied service from within the network created by docker-compose, and each service's hostname is the same as its name in docker-compose.yml.
I have never used upstream, but if you want to use it, I think you have to put this:
upstream docker-jwt {
    server jwt:4000;
}
# and
proxy_pass http://docker-jwt;
But as I said above, I would skip that and just put http://jwt:4000 as the proxied address.
You also seem to have a typo here: pproxy_pass http://localhost:4000;
But if you leave the typo and look at the docker-compose logs, you will hopefully see an error from nginx, which means you successfully mapped the config file.
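Putting those fixes together, a minimal sketch of a working conf might be the following, assuming it is mounted to /etc/nginx/nginx.conf and nginx listens on port 80 inside the container:

```nginx
worker_processes auto;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location / {
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            # 'jwt' is the docker-compose service name, resolvable
            # on the network docker-compose creates.
            proxy_pass http://jwt:4000;
        }
    }
}
```

with a port mapping such as "8080:80" on the reverseproxy service. Publishing on 8080 instead of 6000 also sidesteps the ERR_UNSAFE_PORT error above, since Chrome refuses to connect to port 6000.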

Reroute specific outgoing url to another url - linux ubuntu 16.04

I am working on building a very old program that has many outdated links to dependencies.
These links might be in other dependencies downloaded from the web, which takes away the option of changing the URL.
I am able to find all of these dependencies at other links, but changing the paths has become an endless task.
Is it possible to create a list of rules for outgoing URLs that map one to one?
For example:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
It does not matter what tool is used for the routing (iptables or something else); I am willing to set anything up for this.
This needs to happen at the OS level because the paths are inside tar files.
I was able to get this working using a local nginx.
The best solution here was a dockerized nginx container.
I will use the example above:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
Steps:
Edit your hosts file to route the host to your localhost:
$ sudo vim /etc/hosts
Add this line to your hosts file:
127.0.0.1 Oldserver.com
Pull the nginx docker container:
docker pull nginx
Save this nginx configuration file to some path:
events {
    worker_connections 4096; ## Default: 1024
}

http {
    server {
        listen 80;
        server_name Oldserver.com;

        location = /this/is/one/OldDependency.jar {
            proxy_pass http://Newserver.com/this/is/one/with/other/url/NewDependency.jar;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location / {
            proxy_pass http://Oldserver.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
If you have more paths, add them above the wildcard location /.
The location / forwards all paths not matched to the original server, with the path preserved.
Set permissions on the config:
chmod 600 /some/path/to/nginx.conf
Start up an nginx docker container with the configuration file:
docker run --name proxy -v /some/path/to/nginx.conf:/etc/nginx/nginx.conf:ro -p 80:80 -d nginx
Now every request to Oldserver.com will go through your nginx proxy and be rerouted if it matches any of your location configurations.
I had a similar problem to this, needing to rewrite an outgoing url. In my case it was in a docker container running on kubernetes.
In my case it was because of this issue: https://stackoverflow.com/a/63712440
The app runtime's (.NET Core 3.1) crypto code only checks the first URL in the list of certificate revocation endpoints. I was doing an SSL client certificate setup (mTLS).
The PKI cert I was issued listed an internal domain address first and second, and then a publicly addressable URL third:
X509v3 CRL Distribution Points:
    Full Name:
        URI:http://some.domain.1/CRL/Cert.crl
        URI:http://some.domain.2/CRL/Cert.crl
        URI:http://x509.domain.com/CRL.crl
Because the domain addresses use a 'CRL' folder in the path, but the public url does not, just mapping the public IP address to the local domain host via /etc/hosts (or k8s hostAliases) didn't work.
To solve this in k8s, I added a sidecar to my pod; here are the details.
First, start with an nginx.conf:
events { }

http {
    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:5555;
        }

        location /CRL/ {
            proxy_pass http://x509.domain.com/;
        }
    }
}
This kind of looks like a reverse proxy, but really it is just an actual proxy. My dotnet app serves on port 5555 inside the pod; 127.0.0.1 will route to the pod, not the nginx container. Note that the second proxy_pass value doesn't include the 'CRL' path, which allows the URL to be rewritten, not just redirected.
I then built an nginx docker image called crl-rewrite-proxy:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I then added the image into my pod yaml as a sidecar:
- name: crl-rewrite-proxy
  image: $(image):crl-rewrite-proxy
  ports:
    - containerPort: 80
And then added an alias for the internal domain address, so outgoing calls to it from the app would route back into the pod:
hostAliases:
  - ip: "127.0.0.1"
    hostnames:
      - "some.domain.1"
Lastly I defined an ingress in my k8s yaml, so the aliased calls will be routed to the sidecar:
- kind: Ingress
  apiVersion: networking.k8s.io/v1
  metadata:
    name: $(name)-crl-rewrite-proxy
    namespace: $(namespace)
  spec:
    rules:
      - host: $(name).$(host)
        http:
          paths:
            - path: /CRL
              pathType: ImplementationSpecific
              backend:
                service:
                  name: $(name)
                  port:
                    number: 80
The app runtime makes a call to http://some.domain.1/CRL/Cert.crl; the host alias routes that to 127.0.0.1:80; k8s passes that to the sidecar; the sidecar passes that to nginx; nginx rewrites the host and URL to a public IP on a different path; the resource then gets fetched successfully.
Thanks to thor above for the local setup; I used it to verify this would work locally before doing the k8s bits.

Nginx + node.js configuration

I need the right configuration of nginx for my problem.
Suppose the nginx and Node.js server programs are running on the same Debian machine.
The domain name for my website is, for simplicity, just webserver.com (with www.webserver.com as an alias).
Now, when someone surfs to webserver.com/, nginx should pass the request to the Node.js application, which runs on a specific port, say 3000. But the images and CSS files should be served by nginx as static files, with a file structure like webserver.com/images or webserver.com/css; images and CSS should be served by nginx like a static server.
Now it gets tricky:
But when someone surfs to webserver.com/staticsite001 or webserver.com/staticsite002, it should be served by nginx only; no need for Node.js then.
For the Node.js side, I am just setting up my application on port 3000, for example, to receive the proxied requests from nginx for webserver.com/.
To put it more plainly: when someone surfs to webserver.com/staticsite001, nginx should NOT pass the request to the Node application. It should only pass requests to the Node application if they are inside the first webserver.com/ directory that outsiders can see. webserver.com/staticsite001 should only be served by nginx.
How do I do that? And what should the http and server blocks of the nginx configuration look like?
I am familiar with Node.js, but I am new to nginx and reverse proxying.
thanks
The file structure on the Debian hard drive looks like:
/home/wwwexample/staticsite001 (for www.webserver.com/staticsite001/, handled only by nginx)
/home/wwwexample/staticsite002 (for www.webserver.com/staticsite002/, handled only by nginx)
/home/wwwexample/images
/home/wwwexample/css
and my Node.js application is in /home/nodeapplication.
This server block should work:
server {
    listen 80;
    server_name webserver.com www.webserver.com;
    root /home/wwwexample;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }

    location /staticsite001 { }
    location /staticsite002 { }
    location /images { }
    location /css { }
}
The first location makes nginx proxy everything to localhost:3000. The following empty locations instruct nginx to use its default behavior, that is, to serve static files.
Put this code into the file /etc/nginx/sites-available/my-server and create a symlink to it in /etc/nginx/sites-enabled. There is a default config which you can use as a reference.
After that, use the command sudo /usr/sbin/nginx -t to check the configuration. If everything is OK, use /etc/init.d/nginx reload to apply the new configuration.
