I have built a REST API, and POSTing or GETting data works correctly, for example posting a comment, posting a status, or fetching data.
But every time I upload a picture using Fine Uploader, the request is rejected by the server with a 404 error, even though the same upload works fine from Postman. Here are the error messages:
Firefox:
(Reason: CORS preflight channel did not succeed).
Chrome:
Response for preflight has invalid HTTP status code 404
This is my controller's behaviors() method with CORS enabled:
public function behaviors()
{
    $behaviors = parent::behaviors();

    // Move the contentNegotiator filter to the top so that errors are rendered in the correct format.
    $content_negotiator = $behaviors['contentNegotiator'];
    unset($behaviors['contentNegotiator']);
    $content_negotiator['formats'] = Yii::$app->params['formats'];

    $behaviors = ArrayHelper::merge(
        [
            'contentNegotiator' => $content_negotiator,
            'oauth2access' => [ // should come before the "authenticator" filter
                'class' => OAuth2AccessFilter::className(),
            ],
            'exceptionFilter' => [
                'class' => ErrorToExceptionFilter::className(),
            ],
            'corsFilter' => [
                'class' => \yii\filters\Cors::className(),
                'cors' => [
                    'Origin' => ['*'],
                    'Access-Control-Allow-Headers' => 'X-Requested-With,Content-Type',
                    'Access-Control-Request-Method' => ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'HEAD', 'OPTIONS'],
                    'Access-Control-Request-Headers' => ['*'],
                    'Access-Control-Allow-Credentials' => true,
                ],
            ],
        ],
        $behaviors,
        [
            'access' => [
                'class' => AccessControl::className(),
                'rules' => $this->accessRules(),
                'ruleConfig' => ['class' => 'api\components\AccessRule'],
            ],
        ]
    );

    return $behaviors;
}
And my .htaccess is set up like this:
<IfModule mod_rewrite.c>
    <IfModule mod_headers.c>
        # Define the root domain that is allowed
        SetEnvIf Origin .+ ACCESS_CONTROL_ROOT=mydomain.com
        SetEnvIf Authorization .+ HTTP_AUTHORIZATION=$0

        # Check that the Origin: matches the defined root domain and capture it in
        # an environment var if it does
        RewriteEngine On
        RewriteCond %{ENV:ACCESS_CONTROL_ROOT} !=""
        RewriteCond %{ENV:ACCESS_CONTROL_ORIGIN} =""
        RewriteCond %{ENV:ACCESS_CONTROL_ROOT}&%{HTTP:Origin} ^([^&]+)&(https?://(?:.+?\.)?\1(?::\d{1,5})?)$
        RewriteRule .* - [E=ACCESS_CONTROL_ORIGIN:%2]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Otherwise forward it to index.php
        RewriteRule ^(.*)$ /index.php/$1 [L]

        # Set the response header to the captured value if there was a match
        Header set Access-Control-Allow-Origin %{ACCESS_CONTROL_ORIGIN}e env=ACCESS_CONTROL_ORIGIN
        Header set Access-Control-Allow-Credentials "true"
    </IfModule>
</IfModule>
And my Fine Uploader setup is like this:
function createEventUploader() {
    var uploader = new qq.FineUploader({
        debug: false,
        multiple: false,
        element: document.getElementById('eventuploader'),
        request: {
            endpoint: apiUrl + '/uploads/event?access_token=' + access_token,
            method: 'POST'
        },
        resume: {
            enabled: true
        },
        retry: {
            enableAuto: false,
            maxAutoAttempts: 1,
            showButton: true
        },
        validation: {
            allowedExtensions: ['jpeg', 'jpg', 'png'],
            // sizeLimit: 800000,// 200 kB = 200 * 1024 bytes
        },
        cors: {
            allowXdr: true,
            expected: true,
            sendCredentials: true
        },
        callbacks: {
            onUpload: function (id, name) {},
            onComplete: function (id, name, responseJSON) {
                var data = JSON.stringify(responseJSON);
                data = JSON.parse(data);
                if (data.success) {
                    $('#eventcover').val(data.uploadName);
                }
            },
            onError: function (id, name, errorReason, xhrOrXdr) {
                bootbox.alert(qq.format("Error on file number {} - {}. Reason: {}", id, name, errorReason));
            }
        }
    });
}
The problem happens only on uploads. Can somebody please help me with this?
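Postman never triggers this because it sends the POST directly, without an Origin header, so no preflight OPTIONS request is ever made; the browser, on the other hand, sends an OPTIONS request first, and that is the request getting the 404. One way to reproduce the failing preflight outside the browser is a small Node.js sketch like the one below (the domain names and header values are placeholders, not taken from this setup):

// Diagnostic sketch only: send the same kind of OPTIONS preflight the browser would
// send and print the raw status and CORS headers the API returns. If this prints 404,
// the preflight is being rejected by routing/authentication before the CORS filter runs.
const https = require('https');

const req = https.request('https://api.mydomain.com/uploads/event', {
    method: 'OPTIONS',
    headers: {
        'Origin': 'https://app.mydomain.com',
        'Access-Control-Request-Method': 'POST',
        'Access-Control-Request-Headers': 'x-requested-with, content-type'
    }
}, (res) => {
    console.log('status:', res.statusCode);
    console.log('allow-origin:', res.headers['access-control-allow-origin']);
});
req.on('error', (err) => console.error(err));
req.end();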
Here is my Vite config:
server: {
    host: "0.0.0.0",
    proxy: {
        "/api": {
            target: "https://xxx.xxx.inc",
            changeOrigin: true,
            rewrite: (path) => {
                return path.replace(/^\/api/, "");
            },
            secure: false,
        },
    },
},
When I POST data, something weird happens: the POST receives a "301 Moved Permanently" and is then replayed as a GET!
vite:proxy /api/core/login -> https://xxx.xxx.inc +0ms
vite:time 475.67ms /core/login +10s
vite:spa-fallback Not rewriting GET /core/login/ because the client prefers JSON. +10s
vite:time 0.73ms /core/login/ +7ms
vite:proxy /api/core/login -> https://xxx.xxx.inc +15m
vite:time 476.08ms /core/login +15m
vite:spa-fallback Not rewriting GET /core/login/ because the client prefers JSON. +15m
vite:time 0.66ms /core/login/ +5ms
For a proxy server, a POST should always stay a POST after being forwarded.
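Judging from the debug output, the upstream answers the proxied POST to /core/login with a 301 to /core/login/ (with a trailing slash), and the redirect is then followed as a GET against the dev server itself, which is why the spa-fallback lines show up. If the backend really just expects a trailing slash (this is only a guess from the log), one possible workaround is to add the slash in the rewrite so the redirect never happens:

// Sketch only: assumes the upstream 301 is caused by the missing trailing slash.
// Adding the slash to the rewritten path means the backend no longer redirects,
// so the POST is never downgraded to a GET. Query strings are not handled here.
proxy: {
    "/api": {
        target: "https://xxx.xxx.inc",
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/api/, "").replace(/\/?$/, "/"),
        secure: false,
    },
},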
Curious what others are doing with SvelteKit adapter-node builds to put them into production.
For example...
Serving pre-compressed files
Setting a cache TTL
Maybe something like helmet
Is it better to define an entryPoint for the adapter, such as a server.js that uses polka/express/connect, like this...
// src/server.js
import { assetsMiddleware, prerenderedMiddleware, kitMiddleware } from '../build/middlewares.js'
import polka from 'polka'
import compression from 'compression'
import helmet from 'helmet'

const app = polka()
app.use(helmet())
app.use(compression()) // compression must be registered before the handlers that send the response
app.use(assetsMiddleware, prerenderedMiddleware, kitMiddleware)
app.listen(3000)
or is it better to implement similar functionality in the handle() hook in hooks.js?
Interested to know what people are doing to go from a build via adapter-node to production.
After examining what adapter-node generates in the build folder, I decided to set the entryPoint property in the adapter's options in svelte.config.js to ./src/server.mjs, which then gets added to the build. The handle() hook in hooks.js/ts doesn't allow any control over the static content.
In the code below, I set a redirect for non-https and use helmet to beef up security.
// /src/server.mjs
import polka from 'polka'
import helmet from 'helmet'
import { assetsMiddleware, prerenderedMiddleware, kitMiddleware } from '../build/middlewares.js'

const { PORT = 3000, DOMAIN } = process.env

const isHttpPerHeroku = (req) =>
    req.headers['x-forwarded-proto'] &&
    req.headers['x-forwarded-proto'] !== 'https'

polka()
    // On Heroku (only), redirect http to https
    .use((req, res, next) => {
        if (isHttpPerHeroku(req)) {
            let url = `${DOMAIN}${req.url}`
            let str = `Redirecting to ${url}`
            res.writeHead(302, {
                Location: url,
                'Content-Type': 'text/plain',
                'Content-Length': str.length
            })
            res.end(str)
        } else next()
    })
    // Apply all but two helmet protections
    .use(helmet({
        contentSecurityPolicy: false, // override below
        referrerPolicy: false // breaks "Sign in with Google"
    }))
    // Set the Content Security Policy on top of defaults
    .use(helmet.contentSecurityPolicy({
        useDefaults: true,
        directives: {
            scriptSrc: [
                "'self'",
                `'unsafe-inline'`,
                'https://accounts.google.com/gsi/',
                'https://assets.braintreegateway.com/web/',
                'https://platform.twitter.com/',
                'https://www.google-analytics.com/',
                'https://www.google.com/recaptcha/',
                'https://www.googletagmanager.com/',
                'https://www.gstatic.com/recaptcha/'
            ],
            connectSrc: [
                "'self'",
                'https://accounts.google.com/gsi/',
                'https://api.sandbox.braintreegateway.com/merchants/',
                'https://api.braintreegateway.com/merchants/',
                'https://origin-analytics-sand.sandbox.braintree-api.com/',
                'https://payments.sandbox.braintree-api.com/',
                'https://payments.braintree-api.com/',
                'https://stats.g.doubleclick.net/',
                'https://www.google-analytics.com/',
                'https://platform.twitter.com/',
                'https://assets.braintreegateway.com/web/',
                'https://www.googletagmanager.com/',
                'https://www.google.com/recaptcha/',
                'https://www.gstatic.com/recaptcha/',
                'https://fonts.gstatic.com/',
                'https://client-analytics.braintreegateway.com/'
            ],
            childSrc: [
                "'self'",
                'https://accounts.google.com/gsi/',
                'https://assets.braintreegateway.com/web/',
                'https://platform.twitter.com/',
                'https://syndication.twitter.com/i/jot',
                'https://www.google.com/maps/',
                'https://www.google.com/recaptcha/'
            ],
            fontSrc: [
                "'self'",
                'https:',
                'data:',
                'https://fonts.gstatic.com'
            ],
            imgSrc: [
                "'self'",
                'data:',
                'https://www.google-analytics.com/',
                'https://www.googletagmanager.com/',
                'www.w3.org/2000/svg',
            ],
            frameSrc: [
                'https://accounts.google.com/gsi/',
                'https://www.google.com/recaptcha/',
                'https://platform.twitter.com/',
                'https://assets.braintreegateway.com/web/',
                'https://www.google.com/maps/',
                'https://syndication.twitter.com/i/jot'
            ],
            workerSrc: [
                "'self'"
            ]
        }
    }))
    // Load the SvelteKit build
    .use(assetsMiddleware, prerenderedMiddleware, kitMiddleware)
    // Listen on the appropriate port
    .listen(PORT)
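For reference, the svelte.config.js wiring mentioned above might look roughly like the sketch below. It assumes an adapter-node version that still exposes the entryPoint option and the ../build/middlewares.js exports used here; newer adapter-node releases dropped that option in favour of importing a handler from the build, so treat this as an illustration of the approach rather than the current API:

// svelte.config.js (sketch, assuming an adapter-node version with entryPoint support)
import adapter from '@sveltejs/adapter-node';

export default {
    kit: {
        adapter: adapter({
            // Copied into the build and used as the server entry instead of the default index.js
            entryPoint: './src/server.mjs'
        })
    }
};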
I am creating an application in Next.js, and I set the authorization cookie when the user logs in:
res.setHeader("Set-Cookie", [
    cookie.serialize("authorization", `Bearer ${jwtGenerated}`, {
        httpOnly: true,
        secure: process.env.NODE_ENV !== "development",
        sameSite: true,
        maxAge: 60 * 60 * 12,
        path: "/",
    })
]);
This part of the code works perfectly; it sets the cookie in the browser. However, when I log out, I make a request to the URL /api/logout, which executes this code:
import cookie from "cookie";

export default (req, res) => {
    res.setHeader("Set-Cookie", [
        cookie.serialize("authorization", "false", {
            httpOnly: true,
            secure: process.env.NODE_ENV !== "development",
            sameSite: true,
            maxAge: 5,
            path: "/",
        })
    ]);
    return res.status(200).json({ roles: null, auth: false });
};
However, it does not seem to work in production. On localhost it removes the cookie and changes its value, but in production nothing changes: the expiration, the value, and everything else stay the same.
Am I doing something wrong? Is there another way to remove this cookie when the user logs out?
Are you using Vercel as the deployment platform? This bug happens because Next.js's serverless functions always return a 304 Not Modified. Quite frankly, I don't know why this happens on the server, but I believe it has something to do with how Next.js handles HTTP requests internally.
To fix the problem, I made the logout request a POST with a static key, which prevents the 304 Not Modified from happening:
import cookie from "cookie";

export default (req, res) => {
    if (req.method !== 'POST') return res.status(405).json({ status: 'fail', message: 'Method not allowed here!' });

    if (req.body.key === 'static_key') {
        res.setHeader("Set-Cookie", [
            cookie.serialize("authorization", "false", {
                httpOnly: true,
                secure: process.env.NODE_ENV !== "development",
                sameSite: true,
                maxAge: 5,
                path: "/",
            })
        ]);
        return res.status(200).json({ roles: null, auth: false });
    }

    return res.status(400).json({ status: 'fail', message: 'Bad request happened!' });
};
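For completeness, here is a sketch of the matching client-side call: the route above expects a POST whose JSON body carries the static key, and Next.js API routes parse a JSON body automatically when the Content-Type is application/json. The 'static_key' value is just a placeholder for whatever constant the server checks:

// Client-side logout call matching the POST handler above (sketch).
async function logout() {
    const res = await fetch('/api/logout', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ key: 'static_key' })
    });
    return res.json();
}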
This error occurs in my Next.js app when I send a GET request using axios in getInitialProps of the _app.js file.
if (typeof window === "undefined") {
    // user = await checkAuth(ctx);
    // const token = ctx.req.headers.cookie;
    console.log("TOKEN", ctx.req.headers);
    if (ctx.req && ctx.req.headers.cookie) {
        try {
            res = await axiosClient("get", { cookie: ctx.req.headers.cookie }).get(
                "/auth/currentuser"
            );
            user = res.data;
            console.log("USER IN SERVER SIDE", user);
            ctx.store.dispatch(setAuthenticatedUser(res.data));
        } catch (err) {
            console.log("ERROR in APP", err);
            // console.log("USER FOUND IN APP.JS", res.data);
            ctx.store.dispatch(removeAuthenticatedUser());
        }
    }
} else {
    try {
        res = await axiosClient("get").get("/auth/currentuser");
        user = res.data;
        // await checkAuth(ctx);
        // await checkAuth(ctx,)
        console.log("IN CLIENT", res.data);
    } catch (err) {}
}
The error occurs when the page is refreshed, but only on the server side, not on the client side.
ERROR in APP Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27) {
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read',
config: {
url: '/auth/currentuser',
method: 'get',
headers: {
Accept: 'application/json, text/plain, */*',
cookie: 'token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiI1ZjNhYTJlMmQxN2YxMzAxYTA0NGUxYTIiLCJpYXQiOjE1OTgyODUyMDMsImV4cCI6MTU5ODI4ODgwM30.qtaW-D9P6tJHzL1uHZs3wlzF39UPVkPTLEieuqaVEJY',
'User-Agent': 'axios/0.19.2'
},
baseURL: 'https://tatkaladda.com/api/',
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
adapter: [Function: httpAdapter],
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
validateStatus: [Function: validateStatus],
data: undefined
},
The error only occurs in the production app, not in development mode.
Node.js is not aware of any baseURL. Your browser is. So on the server side you have to provide the full path, on the client side when using relative links they're relative to the base url, eg https://example.com
By: Tim Neutkens - Co-author of Next.js and MDX
One of the suggested workarounds is to use the full path in getInitialProps when using axios. In your case, change this:
res = await axiosClient("get").get("/auth/currentuser");
to
res = await axiosClient("get").get("http://localhost:3000/auth/currentuser");
// or use an external domain if you are not on localhost, e.g. https://api.example.com/auth/currentuser
If this still does not work, use the axios API directly and set the full path as the url or baseURL, as follows:
// axios call
await axios({
    method: 'GET',
    url: 'http://abc.herokuapp.com/api/' // or baseURL: 'http://abc.herokuapp.com/api/'
});
If you have time, kindly read this; it might help: https://github.com/vercel/next.js/issues/5009
UPDATE
You can also try to construct your baseURL from the getInitialProps context:
async getInitialProps({ req }) {
    const protocol = req.headers['x-forwarded-proto'] || 'http'
    const baseUrl = req ? `${protocol}://${req.headers.host}` : ''
    const res = await fetch(baseUrl + '/api/recent/1')
    ...
}
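Since the question uses axios rather than fetch, here is the same idea as a sketch with axios: build an absolute base URL from the incoming request on the server and fall back to a relative path in the browser. fetchRecent and /api/recent/1 are illustrative names, not from the original code:

// Sketch: same absolute-URL construction as above, but with axios.
// "ctx" is the Next.js context object passed to getInitialProps.
import axios from 'axios';

export async function fetchRecent(ctx) {
    const { req } = ctx;
    const protocol = req ? req.headers['x-forwarded-proto'] || 'http' : '';
    const baseURL = req ? `${protocol}://${req.headers.host}` : '';
    // On the server this becomes an absolute URL; in the browser it stays relative.
    const res = await axios.get(baseURL + '/api/recent/1');
    return res.data;
}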
I am currently setting up a reverse proxy in Puppet so that I can authenticate against Active Directory.
I have the following in my Puppet module:
class { 'apache::mod::ldap': }
class { 'apache::mod::authnz_ldap': }

apache::vhost { 'reverse-proxy':
  port                => '443',
  docroot             => '/var/www/html',
  ssl                 => true,
  ssl_cert            => '/etc/httpd/ssl/cert.crt',
  ssl_key             => '/etc/httpd/ssl/cert.key',
  require             => [ File['/etc/httpd/ssl/cert.crt'], File['/etc/httpd/ssl/cert.key'] ],
  rewrites            => [
    {
      comment      => 'Eliminate Trace and Track',
      rewrite_cond => ['%{REQUEST_METHOD} ^(TRACE|TRACK)'],
      rewrite_rule => [' .* - [F]'],
    },
  ],
  proxy_preserve_host => true,
  proxy_pass          => {
    path => '/',
    url  => 'http://127.0.0.1:5601/',
  },
  directories         => [
    {
      path                    => '/',
      provider                => 'location',
      auth_name               => 'Kibana Authentication',
      auth_type               => 'Basic',
      auth_basic_provider     => 'ldap',
      auth_ldap_bind_dn       => 'cn=serviceuser,ou=Users,dc=example,dc=com',
      auth_ldap_bind_password => 'supersecretpassword',
      auth_ldap_url           => 'ldaps://ldap.example.com/dc=example,dc=com?CN?sub?(objectClass=user)',
      require                 => 'ldap-group cn=application_users,ou=application_groups,ou=groups,dc=example,dc=com',
    },
  ],
}
The problem I'm running into is that when I apply this configuration, auth_ldap_bind_dn, auth_ldap_bind_password, and auth_ldap_url are not copied over to my Apache server. Puppet isn't throwing any errors and Apache runs fine, but it isn't authenticating against LDAP.
Old thread, but for the benefit of anyone else with the same issue:
I've taken a look at the apache module's code on GitHub, and it doesn't appear to support the parameters you've mentioned (auth_ldap_bind_dn, auth_ldap_bind_password, and auth_ldap_url).
However, the directories resource allows you to include custom fragments, which you can use to inject any configuration outside the apache module's scope into your config.
In your case, this should work:
class { 'apache::mod::ldap': }
class { 'apache::mod::authnz_ldap': }

apache::vhost { 'reverse-proxy':
  port                => '443',
  docroot             => '/var/www/html',
  ssl                 => true,
  ssl_cert            => '/etc/httpd/ssl/cert.crt',
  ssl_key             => '/etc/httpd/ssl/cert.key',
  require             => [ File['/etc/httpd/ssl/cert.crt'], File['/etc/httpd/ssl/cert.key'] ],
  rewrites            => [
    {
      comment      => 'Eliminate Trace and Track',
      rewrite_cond => ['%{REQUEST_METHOD} ^(TRACE|TRACK)'],
      rewrite_rule => [' .* - [F]'],
    },
  ],
  proxy_preserve_host => true,
  proxy_pass          => {
    path => '/',
    url  => 'http://127.0.0.1:5601/',
  },
  directories         => [
    {
      path                => '/',
      provider            => 'location',
      auth_name           => 'Kibana Authentication',
      auth_type           => 'Basic',
      auth_basic_provider => 'ldap',
      custom_fragment     => "AuthLDAPURL 'ldaps://ldap.example.com/dc=example,dc=com?CN?sub?(objectClass=user)'
        AuthLDAPBindDN 'cn=serviceuser,ou=Users,dc=example,dc=com'
        AuthLDAPBindPassword supersecretpassword",
      require             => 'ldap-group cn=application_users,ou=application_groups,ou=groups,dc=example,dc=com',
    },
  ],
}