Nginx rewrite rule without changing the URL - Linux

I need a rewrite rule for my nginx server that does not change the URL.
For example, the following link:
http://example.com/old/path/index.php?cid=XXhhHHnmmm
should become:
http://example.com/newpath/index.php?cid=XXhhHHnmmm
and still point to that specific folder (/old/path).
So far I've tried the following, which works if I open http://example.com/newpath,
but not if I try
http://example.com/newpath/index.php?cid=XXhhHHnmmm
location ~ /old/path {
rewrite "/old/path" http://www.example.com/newpath$2 break;
}
I've also tried with proxy_pass:
location /newpath {
proxy_pass http://www.example.com/old/path;
}
but it still doesn't work as desired.

Try
location ~ ^/newpath {
rewrite ^/newpath/(.*) /old/path/$1 break;
}
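As a point of comparison, here is a minimal sketch of a full internal-rewrite location (nothing beyond the two paths is taken from the post above): using `last` instead of `break` makes nginx re-run location matching on the rewritten URI, which matters if a separate `location ~ \.php$` block has to pick up index.php. Query arguments like ?cid=... are carried over automatically.

```
# The browser keeps /newpath/... in the address bar; nginx serves
# the request from /old/path/... instead. No redirect is sent.
location /newpath/ {
    rewrite ^/newpath/(.*)$ /old/path/$1 last;
}
```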

For reference: this did not work for a file-to-file redirect, which is as simple as:
rewrite ^/terms-of-use.html /legal/some-terms-of-use-v3.html break;


How to do a redirect in Laravel for ads.txt

Google wants to access the ads.txt file at "example.com/ads.txt", but in Laravel it sits in the "public" folder, so we need to redirect to it.
How do I do that?
I tried RedirectMatch 301 ^ads\.txt$ \/public\/ads\.txt, but it doesn't work.
Put your ads.txt file in the public folder (for Laravel); it will then be accessible at example.com/ads.txt. I have done this on my Laravel website and it works fine.
I have solved the issue in the following way.
In my web routes file:
Route::get('/ads.txt', function () {
    $content = view('ads');
    return response($content, 200)
        ->header('Content-Type', 'text/plain');
});
and I renamed the ads.txt file to ads.blade.php.
Or simply add your ads.txt file inside the public folder of your application; it will work fine.
You can create a blade file named ads.blade.php, put all the text from ads.txt into it, and create a route like this:
Route::get('/ads.txt', function () {
    return view('ads');
});
I tried that when registering my website with Google.
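If you'd rather not convert the file into a Blade template, Laravel can also stream the file directly. A sketch, assuming ads.txt sits in the public folder; response()->file() and public_path() are standard Laravel helpers, but the exact location of your ads.txt is an assumption (swap in storage_path() if it lives elsewhere):

```php
// routes/web.php — serve ads.txt verbatim with a plain-text content type.
Route::get('/ads.txt', function () {
    return response()->file(public_path('ads.txt'), [
        'Content-Type' => 'text/plain',
    ]);
});
```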

PM2, Nginx or Node.js doesn't serve the newest version of a deployed route

Today I stumbled across a big issue with one of our internal software solutions. I have never been desperate enough to open a ticket here before, but even after consulting with another co-worker we couldn't figure out what is wrong.
I guess this will turn out to be a good lesson on Nginx, PM2 or Node.js caches, which might be interesting for anyone developing with these applications/frameworks.
The Facts:
Setup
Ubuntu 16.04 server
deployment to the server via PM2 from a Windows 10 computer
the application is written in Node.js v8.9.4
Story before the actual error
The application worked for about one and a half months after the first deployment without any problem.
Today the application displayed the error: TypeError: Path must be a string. Received undefined
To get the current state of the program onto the machine, I updated the application with PM2's update functionality.
The application worked again after the update. npm install is executed again during deployment, and I thought that had fixed the issue.
The actual error
Through further testing, I found out that one of the download functions for a zip file isn't working anymore.
After trying/testing (with mocha) the same route on my local computer, I came to the conclusion that something is wrong with the path to the location where the file is saved and retrieved from.
Testing around to get further logging out of the application didn't work.
When entering the route's webpage, Nginx, PM2 or the application seems to serve the old page from before I edited it, even though the code on the server is definitely the same as the one on my computer.
Incognito mode doesn't show/use the new page either.
I became really sure of this after deploying multiple times with different path names for the zip file, as you can see in the example. It always tries to find a file at this path: '/web/l18n/archive.zip'
Even though I provide a different path in the route file, the zip file is always saved to '/web/archive.zip'.
The Code
Keep in mind that I've already played around a lot with the code to rule out some obvious issues. Neither branch of the if statement setting the path where archive.zip should be saved has any effect on the error message.
var zip = new JSZip()
const dictionaryName = process.env.dictionaryNames
const splitDictionaryName = dictionaryName.split(';')
for (const dictionary of splitDictionaryName) {
const tempDictionaryPath = path.join(__dirname, '/../../' + process.env.dictionaryFolder + '/dictionary.' + dictionary + '.json')
const dictionaryContent = fs.readFileSync(tempDictionaryPath, 'utf8')
zip.file('dictionary.' + dictionary + '.json', dictionaryContent)
}
await zip.generateAsync({
type: 'nodebuffer'
})
.then(async function (content) {
const stateString = process.env.state || 'local' // '||', not the bitwise '|'
let archivePath = ''
if (stateString === 'staging') {
archivePath = '/web/l18n/current/afasfsagf.zip'
} else {
archivePath = path.join(__dirname, '/../aasdasfgasfgs.zip')
}
await fs.writeFile(archivePath, content, function (err) {
if (err) {
global.log.info(err)
res.send({})
} else {
res.download(archivePath)
}
})
}).catch(function (err) {
global.log.err(err)
})
Routing
The route is loaded directly from the file provided above. Every other route, even the more advanced file-handling ones, works fine.
const downloadAll = require('./routes/downloadAll.js')
this.http = express()
this.http.get('/download_all', downloadAll)
Handlebars "Request"
The route is accessed via a new window, opened when a button is pressed on the main home route.
$('button').click(function (event) {
let buttonId = $(this).attr('id')
if (buttonId === 'downloadAllDictionaries') {
var win = window.open('/download_all')
window.location.reload()
}
})
Nginx
server {
listen 80;
server_name ****;
access_log /var/log/nginx/****.access.log;
error_log /var/log/nginx/****.error.log;
# pass the request to the node.js server with the correct headers
# and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:7000;
proxy_set_header Host **********;
proxy_redirect off;
}
listen 443 ssl; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
if ($scheme != "https") {
return 301 https://$host$request_uri;
} # managed by Certbot
}
Make sure nginx isn't caching the proxied requests.
Add these two lines to the proxy location:
location / {
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
}
I had this problem.
My Node.js app modifications were not activated.
Many thought it was the Nginx or browser cache, but in fact it is PM2, which loads all the JS files into memory. This provides a nice acceleration feature, but with a standard PM2 installation the files are not reloaded when they change.
Here is the solution: https://pm2.keymetrics.io/docs/usage/watch-and-restart/
So instead of launching your app with
pm2 start MyApp.js
do
pm2 start MyApp.js --watch
and during the development phase you are safe from this mysterious caching phenomenon.

Get project base url that works in XAMPP and on production, considering .htaccess

I have a project that is not in the root of the XAMPP folder:
htdocs/my-folder/projects/my-project
htdocs/my-folder/projects/my-project/index.php
htdocs/my-folder/projects/my-project/js/
htdocs/my-folder/projects/my-project/css/
That folder contains an index.php file from which I try to load stylesheets and scripts. Initially, I'd just do it like this:
<script src="js/myscript.js"></script>
which worked. However, the project has expanded and now a user can "save" the current page (similar to how JSFiddle does it), and the URL will look different. Upon the first save, a random string is appended as a conf parameter, which results in something like this (locally) and should have a public equivalent:
localhost/my-folder/projects/my-project?conf=abcd # locally
http://mywebsite.com/projects/my-project?conf=abcd # publicly
Upon a second save, the URL gets an additional parameter, a version number:
localhost/my-folder/projects/my-project?conf=abcd&v=2 # locally
http://mywebsite.com/projects/my-project?conf=abcd&v=2 # publicly
But, to get a nice URL I use this .htaccess:
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule (\w+)[/](\d+)$ ?conf=$1&v=$2 [L,NC]
which will result in something like this:
localhost/my-folder/projects/my-project/abcd/2 # locally
http://mywebsite.com/projects/my-project/abcd/2 # publicly
The thing is, when the URL is changed to such a structure (without the parameters, but with the rewritten URLs, e.g. localhost/my-folder/projects/my-project/abcd/2), the initial calls to the resources (scripts, styles) in my index file are no longer correct.
In other words, if the URL is localhost/my-folder/projects/my-project/abcd/2, the server will look for a script file at localhost/my-folder/projects/my-project/abcd/2/js/myscript.js, which is obviously wrong.
The question, thus, is: how can I get the absolute path to the current file in a way that works both in XAMPP (so not __FILE__ or __DIR__, which dig into the file structure and return file:// paths) and on production environments?
You'll have to use the base element in your pages. The usage will be something like:
<base href="/projects/my-project/" />
for the public server and
<base href="/my-folder/projects/my-project/" />
locally.
I figured it out with a hack. By checking the end of the URL, we determine whether we are at the root or in a specific instance (ending with a digit, e.g. /1).
<?php
// Is connection secure? Do we need https or http?
// See http://stackoverflow.com/a/16076965/1150683
$isSecure = false;
if (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on') {
$isSecure = true;
} elseif (!empty($_SERVER['HTTP_X_FORWARDED_PROTO'])
&& $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https'
|| !empty($_SERVER['HTTP_X_FORWARDED_SSL'])
&& $_SERVER['HTTP_X_FORWARDED_SSL'] == 'on') {
$isSecure = true;
}
$REQUEST_PROTOCOL = $isSecure ? 'https' : 'http';
$REQUEST_PROTOCOL .= '://';
// Create path variable to the root of this project
$path_var = $REQUEST_PROTOCOL.$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
// If URL ends in a slash followed by one or more digits (e.g. http://domain.com/abcde/1),
// returns a cleaned version of the URL, e.g. http://domain.com/
if (preg_match("/\/\d+\/?$/", $path_var)) {
$path_var = preg_replace("/\w+\/\d+\/?$/", '', $path_var);
}
?>
Now we can use it in our PHP file!
<link href="<?php echo $path_var; ?>css/styles.css" rel="stylesheet">

Purge a resource from referer VARNISH

I'm trying to PURGE an HTML page if a CSS file is no longer in the Varnish cache. This is what I'm doing:
if (beresp.status == 404 && req.url ~ "\.css$") {
ban("obj.http.x-url ~ "+ req.http.referer);
}
If I get a 404 on a CSS file, I would like to ban the referer. The problem is that "req.http.referer" has "http://" in front of the URL, so it doesn't work. (It works without the "http://".)
I tried:
ban(req.http.referer);
but that doesn't work either.
Any idea on how to remove the "http://", or how to do this job in a different way?
Thanks.
EDIT
Found a solution to remove the "http://":
ban("obj.http.x-url ~ "+ regsub(req.http.referer, "^http://", ""));
Thank you ;)
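For context, here is a sketch of how the pieces could fit together in VCL (Varnish 3-style syntax assumed; the x-url header is a convention you must set yourself at fetch time, and whether you strip only the scheme or also the host depends on what you store in x-url):

```
sub vcl_fetch {
    # Store the request URL on the object so later bans can match it.
    set beresp.http.x-url = req.url;

    # A CSS file 404'd: ban the HTML page that referenced it.
    if (beresp.status == 404 && req.url ~ "\.css$" && req.http.referer) {
        # Strip the scheme (and, if x-url holds only the path, the host too):
        ban("obj.http.x-url ~ " + regsub(req.http.referer, "^https?://[^/]*", ""));
    }
}

sub vcl_deliver {
    # Optionally hide the helper header from clients.
    unset resp.http.x-url;
}
```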

CouchDB vhost rewrite to access root API

I wish the following rewrite rule worked:
{
"from": "api/*",
"to": "../../../*"
}
… in a vhost rewrite like the following:
[vhosts]
myapp = /myapp/_design/myapp/_rewrite
Then it would be possible to access the root API in a following manner:
$.couch.urlPrefix = '/api';
var dbs = $.couch.allDbs({
success: function (data) {
console.log(data);
}
})
But unfortunately the request to http://myapp:5984/api/_all_dbs results in:
{"error":"insecure_rewrite_rule","reason":"too many ../.. segments"}
Am I missing something? Is something wrong with the rewrite? Does anyone know how to overcome that?
My CouchDB is 1.1.1.
I'm acquainted with this advice, but don't like any of the suggested ways.
Add
[httpd]
secure_rewrites=false
to your server's local.ini to disable this protection against cross-database rewrites.
