Nginx cache for Azure Blob authentication is not working and I am getting the error below:
HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Server: nginx/1.23.3
Date: Tue, 14 Feb 2023 04:50:24 GMT
Content-Type: application/xml
Content-Length: 408
Connection: keep-alive
x-ms-request-id: fb70e59e-701e-0040-172f-40ef6a000000
Access-Control-Expose-Headers: content-length
Access-Control-Allow-Origin: *
Below is my nginx.conf:
events {
    worker_connections 1024;
}

# Nginx configuration file
http {
    # Define the proxy cache settings
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m inactive=60m;
    proxy_cache_valid 200 60m;
    proxy_cache_valid 404 1m;
    proxy_cache_bypass $http_pragma;
    proxy_cache_revalidate on;

    server {
        listen 80;
        server_name <nginx-host-name>;

        location / {
            proxy_pass https://<my-blob-account>.blob.core.windows.net/nasunifiler64e2bc43-41f8-424d-8f4d-ed85d8fd1ab1-5/;
            proxy_set_header Host <my-blob-account>.blob.core.windows.net;
            proxy_set_header Authorization "SharedKey my-blob-account:<access-key>";
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
There is no network issue; I have verified with azcopy and curl (a curl request made directly to the Azure blob worked). Is my Authorization header correct, or do any additional headers need to be set?
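For context, the SharedKey scheme signs every request: the value after the account name must be an HMAC-SHA256 signature computed over a canonicalized string (verb, standard headers, x-ms-date, resource path) with the decoded access key, so a fixed Authorization value set via proxy_set_header cannot produce a valid signature. A rough sketch of how such a header is derived (illustrative only; the helper name and the x-ms-version value are assumptions, and plain nginx cannot run this logic by itself):

// Illustrative only: how a SharedKey Authorization value is derived for a
// simple GET. The signature covers the x-ms-date header and the resource
// path, so it changes on every request; a fixed header cannot work.
const crypto = require('crypto');

function sharedKeyHeaders(account, accessKeyBase64, resourcePath) {
  const date = new Date().toUTCString();
  const stringToSign = [
    'GET',                 // VERB
    '', '', '',            // Content-Encoding, Content-Language, Content-Length
    '', '',                // Content-MD5, Content-Type
    '',                    // Date (empty because x-ms-date is signed instead)
    '', '', '', '', '',    // If-Modified-Since, If-Match, If-None-Match, If-Unmodified-Since, Range
    `x-ms-date:${date}\nx-ms-version:2021-08-06`, // CanonicalizedHeaders
    `/${account}${resourcePath}`                  // CanonicalizedResource
  ].join('\n');

  const signature = crypto
    .createHmac('sha256', Buffer.from(accessKeyBase64, 'base64'))
    .update(stringToSign, 'utf8')
    .digest('base64');

  return {
    'x-ms-date': date,
    'x-ms-version': '2021-08-06',
    'Authorization': `SharedKey ${account}:${signature}`
  };
}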
Related
I have a webchat (example.com/chat) listening on tcp://0.0.0.0:50500 on my server.
I've also configured an nginx reverse proxy to forward requests for example.com/chat to 0.0.0.0:50500.
My site's nginx conf looks like this:
map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    server_name example.com www.example.com ;
    listen 5.4.....0:443 ssl http2 ;
    listen [2a0......:a4]:443 ssl http2 ;
    ssl_certificate "/var/www......._51.crt";
    ssl_certificate_key "/var/www/.......51.key";
    add_header Strict-Transport-Security "max-age=31536000" always;
    charset utf-8;

    gzip on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css text/xml application/javascript text/plain application/json image/svg+xml image/x-icon;
    gzip_comp_level 1;

    set $root_path /var/www/user/data/www/example.com;
    root $root_path;
    disable_symlinks if_not_owner from=$root_path;

    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect http://127.0.0.1:81/ /;
        include /etc/nginx/proxy_params;
    }

    location ~ ^/chat {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://0.0.0.0:50500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 300s;
        proxy_buffering off;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpeg|avi|zip|gz|bz2|rar|swf|ico|7z|doc|docx|map|ogg|otf|pdf|tff|tif|txt|wav|webp|woff|woff2|xls|xlsx|xml)$ {
        try_files $uri $uri/ @fallback;
        expires 30d;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect http://127.0.0.1:81/ /;
        include /etc/nginx/proxy_params;
    }

    include "/etc/nginx/fastpanel2-sites/fastuser/example.com.includes";
    include /etc/nginx/fastpanel2-includes/*.conf;

    error_log /var/www/user/data/logs/example.com-frontend.error.log;
    access_log /var/www/user/data/logs/example.com-frontend.access.log;
}

server {
    server_name example.com www.example.com ;
    listen 5.4.....0:80;
    listen [2a.....:a4]:80;
    return 301 https://$host$request_uri;

    error_log /var/www/user/data/logs/example.com-frontend.error.log;
    access_log /var/www/user/data/logs/example.com-frontend.access.log;
}
The webchat is configured to use these settings:
SOCKET_CHAT_URL="wss://example.com"
SOCKET_CHAT_PORT=50500
Since I do set an Upgrade header, the 426 Upgrade Required error looks strange to me.
I know there are a lot of similar threads related to this issue; however, they all suggest adding an Upgrade header, which I already have.
I've tried to:
Use both SOCKET_CHAT_URL="ws://example.com" and SOCKET_CHAT_URL="wss://example.com".
Change the proxy_pass line to https (https://0.0.0.0:50500;) << in this case the /chat page gives an nginx 504 timeout.
Change the WebSocket URL to the server IP: wss://123.312.123.321.
Use the wss://example.com/chat format << in this case the page closes the WebSocket connection instantly.
Also, here are my headers:
General
Request URL: https://example.com/chat
Request Method: GET
Status Code: 426
Remote Address: 5**.***.***.*50:443
Referrer Policy: strict-origin-when-cross-origin
Response Headers
date: Mon, 06 Sep 2021 21:11:50 GMT
sec-websocket-version: 13
server: nginx/1.18.0
strict-transport-security: max-age=31536000
upgrade: websocket
x-powered-by: Ratchet/0.4.3
Request Headers
:authority: example.com
:method: GET
:path: /chat
:scheme: https
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding: gzip, deflate, br
accept-language: uk-UA,uk;q=0.9
cache-control: max-age=0
sec-ch-ua: "Chromium";v="94", "Google Chrome";v="94", ";Not A Brand";v="99"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
sec-fetch-dest: document
sec-fetch-mode: navigate
sec-fetch-site: none
sec-fetch-user: ?1
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.31 Safari/537.36
Okay: the server runs on HTTP/2, and the WebSocket Upgrade handshake is not supported over it.
Also, it is not possible to switch only one site back to HTTP/1.1 with nginx; because http2 is enabled per listen socket, you have to switch the entire server for that.
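For reference, switching that socket back to HTTP/1.1 just means dropping the http2 flag from every listen directive that shares it; a minimal sketch, reusing the placeholder addresses from the config above:

server {
    server_name example.com www.example.com ;
    # Without "http2", nginx negotiates only HTTP/1.1 over TLS, so the
    # Upgrade handshake for /chat can succeed again.
    listen 5.4.....0:443 ssl ;
    listen [2a0......:a4]:443 ssl ;
    ...
}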
We've switched to Socket.IO instead.
I run a home server with nginx reverse-proxying to a Node.js/PM2 upstream. Normally it works perfectly. However, when I want to make changes, I run pm2 reload pname or pm2 restart pname, which results in nginx throwing 502 Bad Gateway for about 10-20 seconds before it finds the new upstream.
My Node.js app starts very fast and I am 99% sure it is not actually taking that long for the upstream to start and bind to the port (when I don't use the nginx layer it is accessible instantly). How can I eliminate the extra time it takes for nginx to figure things out?
From nginx/error.log:
2021/01/29 17:50:35 [error] 18462#0: *85 no live upstreams while connecting to upstream, client: [ip], server: hostname.com, request: "GET /path HTTP/1.1", upstream: "http://localhost/path", host: "www.hostname.com"
From my nginx domain config:
server {
    listen 80;
    server_name hostname.com www.hostname.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name hostname.com www.hostname.com;

    # ...removed ssl stuff...

    gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml;
    gzip_proxied no-cache no-store private expired auth;
    gzip_min_length 1000;

    location / {
        proxy_pass http://localhost:3010;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_read_timeout 240s;
    }
}
This is caused by the default behavior of an upstream; it may not be obvious since you're not explicitly declaring your upstream with the upstream directive. Written with an upstream directive, your configuration would look like this:
upstream backend {
    server localhost:3010;
}

...

server {
    listen 443 ssl;
    ...

    location / {
        proxy_pass http://backend;
        ...
    }
}
In this form it's apparent that you're relying on the default options of the server directive. The server directive has many options, but two of them matter here: max_fails and fail_timeout. These options control failure states and how nginx should handle them. By default max_fails=1 and fail_timeout=10s, which means that after one unsuccessful attempt to communicate with the upstream, nginx marks it as unavailable and waits 10 seconds before trying it again.
To avoid this in your environment you could simply disable this mechanism by setting max_fails=0:
upstream backend {
    server localhost:3010 max_fails=0;
}
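If you would rather keep failure detection but shorten the lockout window, the same server parameters can be tuned instead of disabled; a sketch with illustrative values, not a recommendation:

upstream backend {
    # Mark the upstream as unavailable only after 3 consecutive failures,
    # and retry it after 2 seconds instead of the default 10.
    server localhost:3010 max_fails=3 fail_timeout=2s;
}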
I created an AWS instance and installed an Nginx server for my project. For Angular I created ang.conf and for Node I created node.conf in sites-available. Here are my conf files.
ang.conf
server {
    listen 80;
    server_name IP;

    location / {
        root project_path;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 404 /404.html;
    error_page 403 /403.html;
    # To allow POST on static pages
    error_page 405 =200 $uri;
}
node.conf
server {
    listen 3004;
    server_name IP;

    location / {
        proxy_pass http://IP:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
My Node server is working fine; I can use my API through Postman with the port, for example http://MY_IP:3000. But on the Angular site, when I open the login page in the browser and click the submit button, it does not connect to the Node.js server. When I check the response in the network tab, it looks like this:
Response
HTTP/1.1 200 OK
ETag: W/"5b486d9c-848"
Content-Type: text/html
Date: Fri, 13 Jul 2018 09:56:38 GMT
Last-Modified: Fri, 13 Jul 2018 09:15:08 GMT
Server: nginx/1.10.3 (Ubuntu)
Connection: keep-alive
Content-Encoding: gzip
Transfer-Encoding: Identity
I don't know what's wrong with this configuration. Please suggest how to handle this.
Finally got the answer. I had to change my nginx.conf file:
events {
    worker_connections 4096; ## Default: 1024
}

http {
    # Change this depending on environment
    upstream api {
        server 192.168.0.1:9998;
        # put your Node.js IP and port here
    }

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        include /etc/nginx/mime.types;

        location / {
            # If you want to enable html5Mode(true) in your angularjs app for pretty URLs,
            # then all requests for your angularJS app will be served through index.html
            try_files $uri /index.html;
        }

        # /api will serve your proxied API that is running on the same machine (different port)
        # or on another machine, so your API endpoint is not hit by the public directly
        location /api {
            proxy_pass http://api;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Static file caching: all static files with the following extensions will be cached for 1 day
        location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires 1d;
        }
    }
}
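With this layout the front end calls the API through nginx using a relative /api path instead of hitting port 3000 directly. For example, a sketch of a browser-side call (the /api/login endpoint is hypothetical):

// Hypothetical login request from the Angular app: it goes to the same
// origin that serves index.html, and nginx forwards anything under /api
// to the Node.js upstream defined above.
fetch('/api/login', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'user@example.com', password: 'secret' })
})
  .then((res) => res.json())
  .then((data) => console.log(data));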
I am trying to configure nginx to manage file uploads for a Node.js app.
I have followed this tutorial: https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend
I have set it up with the following configuration:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /upload {
        auth_request /upload/authenticate;
        limit_except POST { deny all; }

        client_body_temp_path /tmp/;
        client_body_in_file_only on;
        client_body_buffer_size 128K;
        client_max_body_size 1000M;

        proxy_pass_request_headers on;
        proxy_set_header X-FILE $request_body_file;
        proxy_set_body off;
        proxy_redirect off;
        proxy_pass http://localhost:3000/uploads;
    }

    location /upload/authenticate {
        internal;
        proxy_set_body off;
        proxy_pass http://localhost:3000/auth/isAuthenticated;
    }
}
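For context, with this configuration the Node side never receives the file body itself, only the temp-file path that nginx puts in the X-FILE header. A minimal sketch of the /uploads handler (assuming an Express app, which the question does not state; the destination path is illustrative):

const express = require('express');
const fs = require('fs');
const app = express();

app.post('/uploads', (req, res) => {
  // nginx has already written the raw request body to a temp file;
  // only its path arrives here, via the X-FILE header.
  const tmpPath = req.headers['x-file'];                  // e.g. /tmp/0000000001
  fs.rename(tmpPath, '/var/uploads/pic.jpg', (err) => {   // illustrative destination
    if (err) return res.sendStatus(500);
    res.sendStatus(201);
  });
});

app.listen(3000);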
And I did the test with Postman as follows:
Upload post request
The POST request:
POST /upload HTTP/1.1
Host: localhost
Cache-Control: no-cache
----WebKitFormBoundaryE19zNvXGzXaLvS5C
Content-Disposition: form-data; name="image"; filename="pic.jpg"
Content-Type: image/jpeg
----WebKitFormBoundaryE19zNvXGzXaLvS5C
It works, and nginx saves the upload in the /tmp directory.
The problem is that the file ends up named "0000000001", and when I rename it manually to "pic.jpg" and try to open it, the viewer reports "Error interpreting JPEG image file (Not a JPEG file: starts with 0x2d 0x2d)".
And when I run the file command (file pic.jpg), it returns: "pic.jpg: data".
Could you check your Postman version?
In my environment, Postman (v3.2.8) has a "binary" radio button for the request body.
According to the blog post, the client_body_in_file_only approach is incompatible with multipart data and supports binary uploads only.
So please retry the request in binary mode (in Postman, or with another method, e.g. XHR2).
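For illustration, a minimal browser-side sketch of such a raw (non-multipart) upload to the /upload location above; the file-input selector is hypothetical:

// Sends the file bytes as-is: no multipart boundary is added, which is
// what client_body_in_file_only expects.
document.querySelector('#picker').addEventListener('change', (event) => {
  const file = event.target.files[0];
  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload');
  xhr.send(file); // XHR2 accepts a File/Blob and sends its raw contents
});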
I'm trying to retrieve a large amount of data (1000+ rows from MongoDB, using the JugglingDB ORM). If I set the limit to be 1001 rows, my data is complete, but once I step up to 1002, the data is incomplete (doesn't matter if I hit it with cURL or the browser). I'm not entirely sure what the issue is, as the console.log shows all of my data, but it sounds like there might be an issue with my response headers or the response itself... here's the code that I'm trying to work with:
function getAllDevices(controller) {
    controller.Device.all({limit: parseInt(controller.req.query.limit)}, function(err, devices) {
        // This shows all of my devices, and the data is correct
        console.log(devices, JSON.stringify(devices).length);

        controller.req.headers['Accept-Encoding'] = '';
        controller.res.setHeader('transfer-encoding', '');
        controller.res.setHeader('Content-Length', JSON.stringify(devices).length);

        return controller.send({success: true, data: devices});
    });
}
Request headers:
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host localhost
Response headers:
Access-Control-Allow-Credentials true
Access-Control-Allow-Headers X-Requested-With,content-type
Access-Control-Allow-Methods GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Origin *
Connection keep-alive
Content-Length 1610106
Content-Type application/json; charset=utf-8
Date Wed, 20 Aug 2014 17:20:51 GMT
Server nginx/1.6.0
X-Powered-By Express
It's good to note that I'm using nginx as a reverse proxy to my Node server, and this is what the config looks like:
location /nodeJs/ {
    proxy_pass http://127.0.0.1:3005/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Connection close;
    proxy_pass_header Content-Type;
    proxy_pass_header Content-Disposition;
    proxy_pass_header Content-Length;
}
I believe I have fixed this issue, and once again, it seems like an nginx change... in my main nginx.conf file, I added:
gzip on;
gzip_types application/json;
Adding application/json to gzip_types was ultimately the solution.