We are using Node.js with nginx as the web server.
We want to mitigate XSS attacks by preventing scripts from reading cookies.
How can I secure cookies by adding the HttpOnly flag in the nginx configuration?
Since nginx version 1.19.3 you can use the proxy_cookie_flags directive to add flags to cookies set by your upstream.
For all cookies, use:
proxy_cookie_flags ~ secure samesite=strict;
For specific cookies (or a regex), use:
proxy_cookie_flags one httponly;
Check more in documentation: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_flags
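As a minimal sketch of where this goes (the server name and upstream port are placeholders, not taken from the question):

```nginx
server {
    listen 443 ssl;
    server_name example.com;              # placeholder

    location / {
        proxy_pass http://127.0.0.1:3000; # assumed Node.js upstream
        # Add HttpOnly, Secure and SameSite to every cookie the upstream sets
        proxy_cookie_flags ~ httponly secure samesite=strict;
    }
}
```

Note that proxy_cookie_flags only rewrites Set-Cookie headers coming from the proxied upstream; cookies created by client-side JavaScript are not affected.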
Edit your virtual host file:
sudo vim /etc/nginx/sites-enabled/example.conf
Add the line below inside the server block:
add_header X-XSS-Protection "1; mode=block";
Example:
server {
server_name a.com;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
root /var/www/a/;
}
I'm having a hard time figuring out how to proxy_pass into a Node.js container from an nginx container.
It seems to me that http://localhost:3000 would resolve inside the nginx container, so I thought this setup would make sense:
nginx container:
podman run -d \
--name nginx.main \
-p 0.0.0.0:8081:8080 \
-p 0.0.0.0:4431:4430 \
-p 0.0.0.0:3001:3000 \
-u root \
-v /home/_secrets/certbot/_certs:/etc/nginx/_cert \
-v /home/mee/_volumes/nginx_main:/etc/nginx \
nginx
Node.js container:
podman run -d \
-v /home/mee/dev/abd/:/usr/src/app -w /usr/src/app \
-p 3000:3000 \
--name next.dev node:latest \
npm run dev
firewalld, routing from 3001 to 3000:
sudo firewall-cmd --add-port=3000/tcp --permanent
sudo firewall-cmd --add-port=3001/tcp --permanent
sudo firewall-cmd --permanent \
--zone=mee_fd \
--add-forward-port=port=3001:proto=tcp:toport=3000
sudo firewall-cmd --reload
nginx config:
location / {
proxy_pass http://localhost:3000;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
I'm really not sure how these should communicate. I've tried using the IP address instead of 'localhost', but I get the same response.
Thanks
To allow communication between containers you need to set up a shared network, e.g. in a docker-compose .yaml file (the same can also be done on the CLI; it is shown in .yaml here only for the sake of the code):
version: '2'
services:
proxy:
build: ./
networks:
- example1
- example2
ports:
- 80:80
- 443:443
networks:
example1:
external:
name: example1_default
example2:
external:
name: example2_default
Then in your nginx config:
location / {
proxy_pass http://myServiceName:3000; # note: not localhost, but the name of the node service
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
Let me know
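Since the question uses podman rather than docker-compose, the same idea can be sketched with a user-defined network (the network name is illustrative, the rest reuses the commands from the question):

```shell
# Create a shared network and attach both containers to it
podman network create webnet

podman run -d --name next.dev --network webnet \
  -v /home/mee/dev/abd/:/usr/src/app -w /usr/src/app \
  node:latest npm run dev

podman run -d --name nginx.main --network webnet \
  -p 8081:8080 -p 4431:4430 \
  -v /home/mee/_volumes/nginx_main:/etc/nginx \
  nginx
```

Then proxy_pass http://next.dev:3000; should work, because containers on the same user-defined network can usually resolve each other by container name.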
My objective is to block my site from being accessed in an iFrame, with the exception of defend.net. I'm able to successfully do it with this line:
Header append X-Frame-Options: "ALLOW-FROM https://*.defend.net/"
However, I read that ALLOW-FROM has been deprecated.
<IfModule mod_headers.c>
Header set X-XSS-Protection "1; mode=block"
Header set X-Frame-Options "SAMEORIGIN"
Header set X-Content-Type-Options "nosniff"
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
Header add Content-Security-Policy "frame-src 'self' 'https://*.defend.net';"
Header set Referrer-Policy "same-origin"
</IfModule>
What is the most effective, secure way I can achieve my objective?
Can I safely remove this and have the same protection?
Header set X-Frame-Options "SAMEORIGIN"
This is what I came up with that seems to work as intended:
<IfModule mod_headers.c>
Header set X-XSS-Protection "1; mode=block"
Header set X-Content-Type-Options "nosniff"
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
Header set Content-Security-Policy "frame-src 'self' https://www.google.com https://www.youtube.com; frame-ancestors 'self' https://*.defend.net;"
Header set Referrer-Policy "same-origin"
</IfModule>
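One way to trial a policy like this without breaking anything is to ship it in report-only mode first, so browsers log violations to the console instead of blocking (same mod_headers syntax as in the snippets above):

```apache
<IfModule mod_headers.c>
    # Violations are reported to the browser console, not enforced
    Header set Content-Security-Policy-Report-Only "frame-src 'self' https://www.google.com https://www.youtube.com;"
</IfModule>
```

Once the console stays clean, move the value into the enforcing Content-Security-Policy header.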
I'm trying to set a CSP for Mapbox GL JS in my Node.js app. The map tiles work properly on localhost, but Chrome DevTools reports problems in the Issues tab, and on the hosted website it throws an error that blob:https://example.com/ violates the Content-Security-Policy.
Issues tab in Chrome DevTools in the local environment:
Indicate whether to send a cookie in a cross-site request by specifying its SameSite
attribute
Because a cookie’s SameSite attribute was not set or is invalid, it defaults to
SameSite=Lax, which prevents the cookie from being sent in a cross-site request.
This behavior protects user data from accidentally leaking to third parties and
cross-site request forgery.
Resolve this issue by updating the attributes of the cookie:
Specify SameSite=None and Secure if the cookie should be sent in cross-site requests.
This enables third-party use.
Specify SameSite=Strict or SameSite=Lax if the cookie should not be sent in
cross-site requests.
8 cookies
Name Domain & Path
_mkto_trk .mapbox.com/
_ga .mapbox.com/
mkjs_group_id .mapbox.com/
optimizelyEndUserId .mapbox.com/
mkjs_user_id .mapbox.com/
_uetvid .mapbox.com/
_cioid .mapbox.com/
_gid .mapbox.com/
Error in console
web_worker.js:9 Refused to create a worker from
'blob:https://example.com/20d2ed71-b218-4a21-b74d-8913226b398e' because it violates
the following Content Security Policy directive: "default-src * data: 'unsafe-eval'
'unsafe-inline'".Note that 'worker-src' was not explicitly set, so 'default-src'
is used as a fallback.
web_worker.js:9 Uncaught (in promise) DOMException: Failed to construct
'Worker': Access to the script at
'blob:https://example.com/20d2ed71-b218-4a21-b74d-8913226b398e'
is denied by the document's Content Security Policy.
nginx-conf file
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
server_tokens off;
#SSL configuration here
resolver 8.8.8.8;
location / {
try_files $uri @nodejs;
}
location @nodejs {
proxy_pass http://nodejs:8080;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
#here is the CSP header
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
I've tried several dozen things but nothing works. I included Helmet.js and also set the CSP header in the server function on the server side, but without success.
You need to add blob: to your "all allowed" Content-Security-Policy.
The below should fix the issue for you:
add_header Content-Security-Policy "default-src * data: blob: 'unsafe-eval' 'unsafe-inline'" always;
(I am assuming this is not the CSP you are actually going to run, because what would be the point?)
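If you later want something tighter than the allow-everything policy, Mapbox's documentation describes the directives its GL JS library needs; a sketch along those lines (verify the exact list against the current Mapbox GL JS docs) would be:

```nginx
add_header Content-Security-Policy "worker-src blob: ; child-src blob: ; img-src 'self' data: blob: ; connect-src 'self' https://*.mapbox.com" always;
```

The worker-src blob: part is what specifically addresses the "Refused to create a worker" error, since without it worker-src falls back to default-src.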
Just wondering what is the difference between
import { getContent } from '@/assets/welcome-content.js'
import Navigation from '@/components/Navigation'
and
import { getContent } from '~/assets/welcome-content.js'
import Navigation from '~/components/Navigation'
Both seem to work,
but when I add the lines below in nuxt.config.js
router: {
base: '/siteA/'
},
I have the following error :
Uncaught (in promise) NavigationDuplicated: Avoided redundant navigation to current location: "/".
Context :
I have 3 nuxt website that I want to put under the same domain
mysite.fr/siteA/
mysite.fr/siteB/
mysite.fr/siteC/
As for my Nginx conf
server {
...
server_name example.com;
...
location /siteA {
root /var/www/siteA/dist;
...
}
location /siteB {
root /var/www/siteB/dist;
...
}
...
}
The error you are getting means that you are trying to navigate to a URL you are already on. It has no relation to the prefixes (aliases) you mentioned; they are just shortcuts to the src directory that let you easily import the components you need.
It seems this approach is not good.
What I ended up doing to host multiple websites/web apps under the same domain, separated by their path/location, is a reverse proxy with nginx.
If you are interested, you'll need Docker and nginx on your VPS.
First, if you haven't done so yet, deploy your apps in Docker. This also works with Nuxt, Vue and other Node apps; if you have something else, that's fine too. The important part is to run your web app/site in bridge mode, e.g. docker run --network=bridge -p 127.0.0.1:<hostport>:<containerport>, where hostport is what you'll expose to nginx and containerport is the port your app listens on inside the container.
For more doc:
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/ https://tecadmin.net/tutorial/docker/docker-manage-ports/
Once all your apps running in each container, time to reverse proxy with nginx.
Inside your nginx.conf
server { # domain.fr
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/domain.fr/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/domain.fr/privkey.pem; # managed by Certbot
server_name domain.art;
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Strict-Transport-Security "max-age=15768000";
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# set max upload size
client_max_body_size 512M;
fastcgi_buffers 64 4K;
location /sitea {
proxy_pass http://127.0.0.1:<hostport>/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 90;
access_log /nginx/sitea/access.log;
error_log /nginx/sitea/error.log;
}
location /siteb {
...
}
} # end of domain.fr
My nginx command: sudo docker run -d --name nginx -v /docker/nginx:/etc/nginx --log-opt max-size=10m --log-opt max-file=5 --network=host nginx
That way /docker/nginx holds nginx.conf, and after every change I run docker restart nginx to apply it.
Enjoy !
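One nginx detail that often bites with path-based setups like the /sitea block above: whether proxy_pass carries a URI part determines whether the matched location prefix is stripped before forwarding. A sketch of the difference (host port is a placeholder):

```nginx
# Trailing slashes on both: the location prefix is replaced by the URI part
location /sitea/ {
    proxy_pass http://127.0.0.1:8080/;   # /sitea/page is forwarded as /page
}

# No URI part on proxy_pass: the path is forwarded unchanged
location /sitea/ {
    proxy_pass http://127.0.0.1:8080;    # /sitea/page is forwarded as /sitea/page
}
```

Which one you want depends on whether the app behind the proxy knows it lives under /sitea (e.g. a Nuxt router base) or expects to be served from /.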
I am using AWS CloudFront with the origin pointing to an AWS load balancer. CloudFront redirects HTTP to HTTPS. The ELB listens on HTTP only.
I have added this to nginx.
# Media: images, icons, video, audio, HTC
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
expires 2M;
access_log off;
add_header Cache-Control "public";
}
# CSS and Javascript
location ~* \.(?:css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
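Note that expires 2M already makes nginx emit a Cache-Control: max-age=... header, so the extra add_header Cache-Control "public" results in two separate Cache-Control headers in the response. If that is not intended, one option (a sketch, with the max-age spelled out explicitly) is a single combined header:

```nginx
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    access_log off;
    # One explicit Cache-Control header instead of expires + add_header
    add_header Cache-Control "public, max-age=5184000";  # roughly 2 months
}
```

CloudFront uses the origin's Cache-Control max-age (within the distribution's min/max TTL settings) to decide how long to cache, so inconsistent headers across objects can produce the mixed Hit/Miss behavior described below.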
Some images are served from CloudFront but some are not, and I don't know why.
x-cache: Hit from cloudfront
curl -X GET -I https://test.ewhale.co/media/cache/attachment/resize/1296/product_gallery_main/5a6bf25ea84a0572610265.jpeg
x-cache: Miss from cloudfront
curl -X GET -I https://test.ewhale.co/media/cache/attachment/resize/914/product_gallery_main/5a6b9f07983ca610410692.png
Here is my CF configuration.