Resolve Strapi.io Content Security Policy Error

The situation so far:
I've got a Strapi instance running on the default port 1337. DNS is set up correctly to route traffic from cms.mysite.com to the server's public IP address, and an IIS website is configured as a reverse proxy to direct traffic for cms.mysite.com to port 1337. Strapi itself is started at server boot via a scheduled task and a cmd command. I've also set up an SSL certificate so that secure communication with https://cms.mysite.com is possible.
The problem:
When I navigate to https://cms.mysite.com from a browser outside of the server, I correctly get the "home" page for the headless CMS.
But if I click "Open the administration", I'm hit with a CSP error
I'm sure I'm missing a step. I have not configured anything beyond following the official hands-on tutorial. I suspect it's something to do with the security middleware, specifically the security headers relating to Content Security Policy, but it's difficult to know exactly what to do with the config/middlewares.js file.
A little help is mighty appreciated.
Edit: I feel like this is actually a reverse proxy issue, since if I replace localhost in the error URL https://localhost:1337/admin/project-type with https://cms.mysite.com/admin/project-type I get a valid response.

Ran into the same issue.
First, I changed the default security middleware in /config/middlewares.js:
// ...
{
  name: 'strapi::security',
  config: {
    contentSecurityPolicy: {
      useDefaults: true,
      directives: {
        'connect-src': ["'self'", 'http:', 'https:'],
        upgradeInsecureRequests: null,
      },
    },
  },
},
// ...
After that I found out that admin assets were still loading from localhost:1337.
So, after looking at a related issue on GitHub, I set the url param in config/server.js. That seems to have helped.
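For reference, a minimal sketch of what that config/server.js could look like (the env var name PUBLIC_URL and the example domain are assumptions, not from the original setup):

module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  // Public URL the admin and its assets should be served from,
  // i.e. the domain behind the IIS reverse proxy.
  url: env('PUBLIC_URL', 'https://cms.mysite.com'),
});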
Hope this helps!
P.S. And don't forget to re-build everything after making changes to the configs.

If you need to configure CORS, here is an example:
//middlewares.js
module.exports = [
  'strapi::errors',
  'strapi::security',
  'strapi::poweredBy',
  {
    name: 'strapi::cors',
    config: {
      enabled: true,
      headers: '*',
      origin: ['http://localhost:1337', 'http://example2'],
    },
  },
  'strapi::logger',
  'strapi::query',
  'strapi::body',
  'strapi::session',
  'strapi::favicon',
  'strapi::public',
];
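In this config, the origin array lists the origins allowed to call the API, so replace the placeholder values with the domains your frontends actually run on, while headers: '*' accepts any request header. The array also defines the order in which Strapi loads its middlewares, so keep the cors object in place in the list rather than appending it at the end.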

Related

How to rewrite returned paths when using a proxy

I have a use case where two applications are hosted on the same system: one externally accessible, and one only accessible via localhost:8081.
I have tried http-proxy and http-proxy-middleware. In both cases I can get things partially working, but when I run into a redirect things usually break.
My internal app is mounted onto my external app via express like this:
app.use('/myproxy', createProxyMiddleware('/myproxy', {
  target: "http://localhost:8081",
  followRedirects: true,
  changeOrigin: true,
  // autoRewrite: true,
  prependPath: true,
  pathRewrite: { '^/myproxy': '/' },
  selfHandleResponse: true,
  onProxyRes: responseHandler(async (responseBuffer, proxyRes, req, res) => {
    let response = responseBuffer.toString('utf8');
    response = response.replace("js/", "myproxy/js/");
    return response;
  })
}))
I have fiddled extensively with the various parameters:
pathRewrite: this changes the path of the incoming request, and I believe it is set correctly
followRedirects: works as expected; I need this enabled as my internal app does a redirect
prependPath: this one I am a bit shaky on; I tried it both ways and can't figure out what it does
onProxyRes: this was my last attempt. I thought I could just find all references in the text of the response and rewrite them, but it does not appear to work, as my references are unchanged in practice. selfHandleResponse was needed to use this.
My ultimate goal is that when I go to http://myServer/myproxy the responding documents that ask for things like /js/somescript.js are updated to /myproxy/js/somescript.js. Same for CSS and other static content.
Is this possible? Am I going about this the wrong way, maybe?
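For what it's worth, here is a sketch of the same body-rewriting idea using http-proxy-middleware v2's responseInterceptor helper (assuming that is what responseHandler refers to above), with a global regex so that every asset reference is rewritten rather than only the first match that String.replace with a string argument touches, and only for HTML responses. The regex and paths are illustrative, not tested against this app:

const { createProxyMiddleware, responseInterceptor } = require('http-proxy-middleware');

app.use('/myproxy', createProxyMiddleware({
  target: 'http://localhost:8081',
  changeOrigin: true,
  followRedirects: true,
  pathRewrite: { '^/myproxy': '/' },
  selfHandleResponse: true, // required so the interceptor can modify the body
  onProxyRes: responseInterceptor(async (responseBuffer, proxyRes, req, res) => {
    const contentType = proxyRes.headers['content-type'] || '';
    if (!contentType.includes('text/html')) {
      return responseBuffer; // leave non-HTML responses untouched
    }
    // Rewrite root-relative asset paths so the browser requests them
    // back through the /myproxy prefix.
    return responseBuffer
      .toString('utf8')
      .replace(/(href|src)="\/(js|css)\//g, '$1="/myproxy/$2/');
  }),
}));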

HTTP cookies are not working for localhost subdomains

I have a node/express API that creates an HTTP cookie and passes it down to my React app for authentication. The setup was based on Ben Awad's JWT HTTP Cookie tutorial on YouTube, if you're familiar with it. Everything works great when I am running the website on localhost (localhost:4444). The issue I am now running into is that my app now uses subdomains for handling workspaces (similar to how JIRA or Monday.com uses a subdomain to specify a workspace/team). Whenever I run my app on a subdomain, the HTTP cookies stop working.
I've looked at a lot of threads regarding this issue and can't find a solution; no matter what I try, the cookie will not save to my browser. Here is what I have tried so far with no luck:
I've tried specifying the domain on the cookie, both with a leading . and without
I've updated my hosts file to use a domain as a mask for localhost, something like myapp.com:4444 pointing to localhost:4444
I tried some fancy configuration I found that hides the port as well, so myapp.com pointed to localhost:4444
I've tried Chrome, Safari, and Firefox
I've made sure there were no CORS issues
I've played around with the security settings of the cookie
I also set up an ngrok server so there was a published domain to run in the browser
None of these attempts have made a difference, so I am a bit lost as to what to do at this point. The only other thing I could do is deploy my app to a proper server and run my development off that, but I really don't want to do that; I should be able to develop from my local machine, I would think.
My cookie knowledge is a bit bare so maybe there is something obvious I am missing?
This is what my setup looks like right now:
On the API I have a route (/refresh_token) that creates a new express cookie like so:
export const sendRefreshToken = (res: Response, token: string): void => {
  res.cookie('jid', token, {
    httpOnly: true,
    path: '/refresh_token',
  });
};
Then on the frontend it will essentially run this call on load:
fetch('http://localhost:3000/refresh_token', {
  credentials: 'include',
  method: 'POST'
}).then(async res => {
  const { accessToken } = await res.json()
  setState({ accessToken, workspaceId })
  setLoading(false)
})
It seems super simple to do, but everything just stops working when on a subdomain. I am completely lost at this point. If you have any ideas, that would be great!
If httpOnly is true, the cookie won't be readable from client-side JS. For cookies to work across subdomains, set the domain to the main domain (xyz.com).
An example on the backend (BE):
res.cookie('refreshToken', refreshToken, {
  domain: authCookieDomain,
  path: '/',
  sameSite: 'None',
  secure: true,
  httpOnly: false,
  maxAge: cookieRefreshTokenMaxAgeMS
});
On the FE, add withCredentials: true to your axios options, or credentials: 'include' with fetch, and that should work.
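Applied to the setup above, a minimal sketch; the .myapp.com value and the URL are assumptions for local development with a hosts-file alias, not values from the original post:

// Parent domain shared by all workspace subdomains, so that
// myapp.com, team1.myapp.com, etc. all receive the cookie.
const authCookieDomain = '.myapp.com';

// Frontend call, sending and accepting cookies across subdomains:
fetch('http://myapp.com:3000/refresh_token', {
  method: 'POST',
  credentials: 'include',
});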

Passport-SAML: entryPoint / auth request header overridden

Alright you gurus, I need some help / understanding of what is happening here. I'm leveraging passport and passport-saml to do single sign-on for my application. I have been able to get things working locally on my development machine, but when I deploy to our staging server, something is amiss and not leveraging the entryPoint URL that I have configured...
Example code:
return new Strategy(
  {
    callbackUrl: "https://my.domain.com/staging/api/login/callback",
    entryPoint: "https://my.idp.com/affwebservices/public/saml2sso",
    issuer: "my.domain.com",
    cert: "THE SECRET SAUCE"
  },
  function(profile, done) {
    // .....
  }
)

// ROUTES ----------------------------------------------------------------------
app.get('/api/login', passport.authenticate("saml", { successRedirect: '/', failureRedirect: '/login' }));

app.post('/api/login/callback', passport.authenticate("saml", { failureRedirect: '/', failureFlash: true }), (request, response) => {
  // .....
});
When I run this locally, as stated, it works and I see the following SAML request being made:
https://my.idp.com/affwebservices/public/saml2sso?SAMLRequest=.....
However, once deployed, the entryPoint URL domain is overridden with the staging domain:
https://my.domain.com/affwebservices/public/saml2sso?SAMLRequest=.....
I'm noticing that the generated request assumes the authority my.domain.com rather than using my.idp.com.
I will say that the only difference between the development server and the staging/prod server is that staging/prod uses IIS as a reverse proxy to route incoming traffic based on the URL path (i.e. my.domain.com/production, my.domain.com/staging). I've enabled CORS on the node server, which was how I got it working on the development server in the first place, and I've also tried configuring IIS to allow for it...
Stumped at this point. Any ideas?
Well, after enough headbanging, I found a way to resolve the issue. As suspected, it was with IIS, and the solution I implemented was a URL redirect in IIS.
I don't know if this is the most robust solution, so if someone else stumbles upon this, feel free to reach out. Either way, this works.

nodejs keycloak-connect-graphql not working while in docker

I have hit an issue that I have struggled to figure out for the last little while regarding docker. Here is a shortened version of the story
In my development environment (everything running on localhost) the code works great: whenever I add my authorization token to my headers, keycloak-connect detects it and works as usual. The problem occurs when I use it in Docker and add custom network interfaces.
When the Docker container boots up, I have it connect to Keycloak via http://keycloak:8080/auth and generate the realm. That works fine, so I know that the Keycloak network is up and running as expected. I am able to remote into the container, communicate with Keycloak, get a token, etc.
In my GraphQL application, I use keycloak-connect-graphql to set the context of the GraphQL application, which in turn uses keycloak-connect to set up all the headers. The problem is that keycloak-connect-graphql tells me that there is no header. If I simply print out the request, I can clearly tell that there is a token being passed in; it's just that, for some reason, keycloak-connect-graphql/keycloak-connect does not want to pick it up because I am using a network other than localhost.
I was actually able to side-step this problem in production by setting the Keycloak URL to the public URL (https://keycloak.DOMAINNAME.com), which makes absolutely no sense to me because http://keycloak:8080/auth does not work. I have dug through the keycloak-connect and keycloak-connect-graphql code and could not find anything relating to a CORS issue or anything else suspicious. If anyone has any ideas, please let me know. This bug has been driving me crazy.
Code snippet:
keycloak configuration (app.config)
keycloak: {
  realm: process.env.KEYCLOAK_REALM,
  'auth-server-url': 'http://keycloak:8080/auth',
  'ssl-required': 'none',
  resource: process.env.KEYCLOAK_RESOURCE,
  'public-client': true,
  'use-resource-role-mappings': true,
  'confidential-port': 0,
},
configurekeycloak.js
function configureKeycloak(app, graphqlPath) {
  const keycloakConfig = require('../config/app.config').keycloak;
  const memoryStore = new session.MemoryStore();

  app.use(
    session({
      secret: process.env.SESSION_SECRET_STRING || 'this should be a long secret',
      resave: false,
      saveUninitialized: true,
      store: memoryStore,
    }),
  );

  const keycloak = new Keycloak(
    {
      store: memoryStore,
    },
    keycloakConfig,
  );

  // Install general keycloak middleware
  app.use(
    keycloak.middleware({
      admin: graphqlPath,
    }),
  );

  // Protect the main route for all graphql services
  // Disable unauthenticated access
  app.use(graphqlPath, keycloak.middleware());

  return { keycloak };
}
index.js
// perform the standard keycloak-connect middleware setup on our app
const { keycloak } = configureKeycloak(app, graphqlPath);

// Ensure entire GraphQL Api can only be accessed by authenticated users
app.use(playgroundPath, keycloak.protect());

const server = new ApolloServer({
  gateway,
  // uploads: false,
  // Apollo Graph Manager (previously known as Apollo Engine)
  // When enabled and an `ENGINE_API_KEY` is set in the environment,
  // provides metrics, schema management and trace reporting.
  engine: false,
  // Subscriptions are unsupported but planned for a future Gateway version.
  subscriptions: false,
  // Disable default playground
  playground: true,
  context: ({ req }) => {
    return {
      kauth: new KeycloakContext({ req }),
    };
  },
  // Tracing must be enabled for this plugin to add the headers
  tracing: true,
  // Register the plugin
  plugins: [ServerTimingPlugin],
});
Edit:
Setting network_mode to host fixes the issue, but I much prefer using Docker's integrated networks rather than network_mode: host, especially for production.
Edit 2:
I was able to reproduce the error in my local development environment (without shelling into Docker and debugging from there). I did this by using the IP address provided by Docker, which yields the same results. I have narrowed the issue down to keycloak-connect, which for some reason does not want to connect to Keycloak. Still not sure why.
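One small, hedged tweak that follows from the observation above (the public URL works where the internal hostname does not): drive auth-server-url from the environment so the same code can point at whichever address actually works in a given deployment. The variable name KEYCLOAK_AUTH_SERVER_URL is an assumption, not from the original config:

// app.config - auth-server-url taken from the environment, with the
// in-cluster hostname as a fallback
keycloak: {
  realm: process.env.KEYCLOAK_REALM,
  'auth-server-url':
    process.env.KEYCLOAK_AUTH_SERVER_URL || 'http://keycloak:8080/auth',
  'ssl-required': 'none',
  resource: process.env.KEYCLOAK_RESOURCE,
  'public-client': true,
  'use-resource-role-mappings': true,
  'confidential-port': 0,
},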

How to properly configure Browsersync to proxy backend

I'm struggling with the proper configuration of Browsersync (and maybe some middleware?).
My configuration is like this:
local.example.com is my local address, configured via /etc/hosts.
devel.example.com is our company's devel environment (backend).
staging.example.com is our company's staging environment (backend).
As I'm a UI developer, I want to use my local code but work against one of the backend environments.
I'm using gulp to build my project, etc. It also has a task to run browser-sync and watch file changes. But of course there is a problem with the cookie domains coming from the backend: the CSRF token cookie domain is set by the browser to the currently used backend.
I have tried:
To use the http-proxy-middleware middleware with this configuration:
server: {
  baseDir: './build',
  middleware: [
    proxyMiddleware('/api', {
      target: 'http://devel.example.com',
      changeOrigin: true,
    })
  ]
}
But the problem I have is that it does non-transparent redirects, which are visible in the browser console. I thought it would work so that the proxy masks those requests and the browser thinks all requests and responses are coming from local.example.com. But it seems it doesn't work like this (or maybe I configured it badly).
Also, a big problem with this solution is that it somehow changes my POST HTTP requests to GET (WTF?!).
To use the built-in browser-sync proxy option. In many tutorials I saw the proxy option used together with the server option, but that doesn't seem to work anymore. So I have tried to use it with serveStatic like this:
serveStatic: ['./build'],
proxy: {
  target: 'devel.example.com',
  cookies: {
    stripDomain: false
  }
}
But this doesn't work at all...
I would really appreciate any help on this topic.
Thanks
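For the first approach, http-proxy-middleware also accepts a cookieDomainRewrite option (passed through to node-http-proxy) aimed at exactly the cookie-domain problem described above. A minimal sketch of how it could be combined with the existing middleware setup; the option value is an assumption, not tested against this project:

// Browsersync server block using http-proxy-middleware, rewriting the
// Set-Cookie domain so the CSRF cookie is stored for local.example.com
server: {
  baseDir: './build',
  middleware: [
    proxyMiddleware('/api', {
      target: 'http://devel.example.com',
      changeOrigin: true,
      cookieDomainRewrite: 'local.example.com',
    })
  ]
}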
