Can I force .htaccess to refresh? - .htaccess

We are moving a site from one CMS to another. The .htaccess file has been changed and needs to be refreshed for the new site to work right. From what I understand, the new .htaccess file will only take effect once the browser cache is cleared? It is fine for those of us creating the site to clear our own caches, but is there a way to get users' browsers to pick up the new .htaccess rules without each user clearing their cache manually on their own initiative?

If you're using RewriteRule, just use R instead of R=301; a plain R defaults to a temporary 302 redirect, which browsers generally don't cache. For other purposes, you'll have to clear your browser cache whenever you change a redirect.
from https://stackoverflow.com/a/7749784/1066234
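A minimal sketch of that advice (old-page and new-page are hypothetical placeholders): keep the redirect temporary while you are still testing, then switch to 301 once the rule is final.
RewriteEngine On
# Temporary (302) redirect while testing; browsers won't cache it
RewriteRule ^old-page$ /new-page [R,L]
# Once the rule is stable, switch to a permanent, cacheable redirect:
# RewriteRule ^old-page$ /new-page [R=301,L]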

Some servers will reload as soon as you replace the .htaccess file.
If so, it will instantly be used for all subsequent requests. You do not need to refresh any caches.
Some servers only check htaccess periodically.

If you already had an R=301 redirect and you just changed where it redirects to in .htaccess, but it isn't updating for you, then it's probably cached in your browser. You can't do a hard refresh on that page because it redirects. Try it in another browser; if it works there, you just need to clear the cache in Chrome: Settings > Privacy > Clear browsing data, then check only "Cached images and files". This will not clear your logins or anything else.

I had a RewriteRule in my .htaccess file like this:
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://example.com%{REQUEST_URI} [L,R=301]
and once I opened the site on localhost it never gave me a chance to hard-refresh the page.
Solution: I added a random query string as a cache-buster, like localhost/mywebsite/index.php?1234
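An alternative during development (a sketch, assuming the same redirect rule as above and that the local site is served from localhost; adjust the host check to your setup) is to exempt local requests from the HTTPS redirect so it never fires:
RewriteEngine On
# Skip the HTTPS redirect for local development requests
RewriteCond %{HTTP_HOST} !^localhost [NC]
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://example.com%{REQUEST_URI} [L,R=301]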

Related

Automatically change domain name in URLs

I'm creating a mirror for my website and I need to automatically change domain name in URLs on my new website, using some .htaccess command on my new site.
What I need is to change all links from www.old.com/any_link.html to www.new.com/any_link.html for all users on www.new.com without changing anything in database (having the same database for both sites). So that I get two independent websites www.old.com and www.new.com working at the same time.
I know about redirect 301 from old site to the new one, but I need redirecting INSIDE new website, without changing anything on the old one.
Is that possible at all?
Try writing this in your .htaccess:
RewriteEngine on
RewriteRule ^(.*)$ http://www.new.com/$1 [R=301,L]
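Note that if this .htaccess lives on www.new.com itself, the rule as written will redirect every request back to www.new.com and loop. A sketch of a guard (assuming www.new.com is the canonical host) that only redirects requests that did not already arrive on the new host:
RewriteEngine on
# Only redirect when the request came in under a different host
RewriteCond %{HTTP_HOST} !^www\.new\.com$ [NC]
RewriteRule ^(.*)$ http://www.new.com/$1 [R=301,L]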

Rewrite htaccess with shared ssl

My shared SSL URL is: https://server.domain.com/~username/folder/product.php?products_id=123
I am using this RewriteRule in .htaccess:
RewriteRule ^(.*)-p-([0-9]+).html$ product.php?products_id=$2&%{QUERY_STRING}
The problem is that the link appears as: https://server.domain.com/~username/folder/name-p-123.html
But clicking it gives a "page not found" error, because of the ~username/folder/ part.
How can I get the rewrite to work correctly in this case?
Thank you.
You can use http://www.visiospark.com/mod-rewrite-rule-generator/
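Beyond a rule generator, the 404 in a per-user (~username) setup is often down to where the relative substitution resolves. A sketch of one possible fix, assuming the .htaccess sits in that folder directory (the paths here are placeholders): set RewriteBase so the target resolves against the shared-SSL path, and let QSA carry the original query string.
RewriteEngine On
# Resolve relative substitutions against the shared-SSL folder
RewriteBase /~username/folder/
# QSA appends the existing query string instead of hand-building it
RewriteRule ^(.*)-p-([0-9]+)\.html$ product.php?products_id=$2 [QSA,L]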
It's fairly simple. Keep in mind that I'm not an expert. I was playing around with .htaccess a few days ago and the rules were not updating on the client side, i.e. my browser. The reason was that it was loading from cache. I deleted the cache from Firefox using Ctrl+Shift+Delete, restarted httpd/apache, and reloaded the page with Ctrl+F5 (it must be Ctrl+F5). Voila, fixed.

new to .htaccess, how to redirect specific page to mainpage

I'm new to the .htaccess file.
My site is hosted on 1and1 and by default it shows www.mydomain.com/defaultsite when nothing is uploaded to my account. Now I've uploaded my WordPress site and have managed to make it go to the index, but if someone enters www.domain.com/defaultsite in the URL they will still end up in the wrong place.
How can I manage this issue with .htaccess file so that any request to defaultsite will take the user to www.mydomain.com ?
I'm not a 1and1 user, but this could be a DNS cache issue. First, check your document root for the presence of a directory called defaultsite. If it exists, remove it. If not, you can redirect the URL using mod_rewrite. Insert this rule immediately after RewriteEngine On in your .htaccess file:
RewriteRule ^defaultsite/?$ http://yourdomain.com/ [R=302,L]
If it's working for you, you can safely change 302 to 301 to make it permanent and cacheable.
I have also seen comments referring to an index.html file in the document root. If you see one, delete it - it could be that, internally, 1and1 maps defaultsite to index.html.
Also, it will help to clear your browser cache when testing. If using Chrome, you can open the Developer Tools (Ctrl+Shift+I), click the settings cog at the top right of the panel that opens, and check "Disable cache (while DevTools is open)".
I had a similar issue and was pulling out my hair trying to figure this out. 1&1 is hosting while Namecheap holds my domain. I was able to access my page without /defaultsite on Safari and mobile Chrome. But on desktop Chrome I was being redirected to /defaultsite.
To remedy this I cleared my cache, flushed my DNS cache, and cleared my browsing history. I'm not sure the latter two were necessary, but having done all three resolved the issue.

htaccess auto redirect while attempting to view direct content

I currently administer an art website that contains lots of photos and other content files, and it bugs me that people find a way around the scripting and access the files directly, downloading our copyright-protected materials.
I was thinking of an .htaccess file that does the following:
someone types an address directly into the browser: http://www.mydomain.com/photos/photo.jpg
.htaccess triggers and, instead of showing the content, redirects right away to: http://www.mydomain.com/ (it is important that the redirect happens before the picture is displayed)
The redirect is extremely important, not just blocking without a redirect; if someone attempts to use software to download content by providing a direct link to it, the request should be rejected.
My knowledge of .htaccess is really thin, so I could use some help on this one.
This should work:
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http://www\.mydomain\.com/ [NC]
RewriteRule \.(jpg|gif)$ /nolinking.html [R]
If you enter http://www.mydomain.com/photos/photo.jpg directly, it will redirect you to http://www.mydomain.com/nolinking.html, but it will still allow the images to load on your own pages that link to them.
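If you would rather send people straight to the homepage, as the question describes, the same pattern works with the site root as the target (a sketch; the domain and extensions are placeholders to adjust):
RewriteEngine On
# Redirect direct or hotlinked image requests to the homepage
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?mydomain\.com/ [NC]
RewriteRule \.(jpe?g|gif|png)$ http://www.mydomain.com/ [R,L]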

Force HTTPS for specific URL

This should be a quick one... here is my current .htaccess file:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
What I need to do is make sure that if http://www.mydomain.com/cart/ is reached, it needs to force HTTPS ... so /cart/ and anything within /cart/
Once the request has been sent to http://www.mydomain.com/cart/, if there is any sensitive data in the request, it's too late. Force it to break! At least, it will give you an indication that there's something wrong with your links. More details in previous answers:
https://stackoverflow.com/a/8765067/372643
https://stackoverflow.com/a/8964190/372643
[ ... ] by the time the request reaches the server, it's too late. If there is a MITM, he has done his attack (or part of it) before you got the request.
The best you can do by then is to reply without any useful content. In this case, a redirection (using 301 or 302 and the Location header) could be appropriate. However, it may hide problems if the user (or even you as a developer) ignores the warnings (in this case, the browser will follow the redirection and retry the request almost transparently).
Therefore, I would simply suggest returning a 404 status:
http://yoursite/ and https://yoursite/ are effectively two distinct sites. There is no reason to expect a 1:1 mapping of all resources from the URI spaces from one to the other (just in the same way as you could have a completely different hierarchy for ftp://yoursite/).
More importantly, this is a problem that should be treated upstream: the link that led your user to this resource using http:// should be considered as broken. Don't make it work automatically.
Having a 404 status for a resource that shouldn't be there is fine. In addition, returning an error message when there is an error is good: it will force you (or at least remind you) as a developer that you need to fix the page/form/link that led to this problem.
EDIT: (Example)
Let's say you have http://example.com/, the non-secure section of your site that allows the user to browse items. They're not logged in at that stage, so it's fine to do it over plain HTTP.
Now, it's cart/payment time. You want HTTPS. You send the user to https://example.com/cart/. If one of the links that sends the user to the cart part is using plain HTTP (i.e. http://example.com/cart/), it's a development mistake. It just shouldn't be there. Making the process break when you thought you were going to be sent to https://example.com/cart/ allows the developer to see it (and, once fixed, the user should never have the problem).
If it's just about the point to the HTTPS section of your site (typically, an HTTP GET via a link somewhere), it's not necessarily that big a risk.
Where automatic redirects become even more dangerous is when they hide bigger problems.
For example, you're on https://example.com/cart/creditcarddetails and you've filled in some information that should really just stay over SSL. However, the developer has made a mistake and a plain http:// link is used in the form. In addition, the developer (a user/human after all) has clicked on "don't show me this message again" in Firefox when it says "Warning: you're going from a secure page to a non-secure page" (by the way, unfortunately, Firefox warns a posteriori: it has already made the insecure request by the time it shows the user that message). Now, that GET/POST request with sensitive data is sent first to that incorrect plain http:// link and the automatic rewrites tells the browser to try the request again over https://. It looks fine because, as far as the user is concerned, this all happened in a fraction of a second. However, it's not: sensitive data was sent in clear.
Making the plain HTTP section of what should only be over HTTPS not do anything useful actually helps you see what's wrong more clearly. Since the users should never end up there anyway if the links are correctly implemented, this isn't really an issue for them.
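A minimal sketch of the 404 approach described above, assuming Apache 2.4's mod_rewrite (where an R= status outside the 3xx range drops the substitution and returns that status) and the /cart/ path from the question:
RewriteEngine On
# Plain-HTTP requests under /cart/ get a 404 instead of a silent upgrade to HTTPS
RewriteCond %{HTTPS} off
RewriteRule ^cart/ - [R=404,L]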
Try adding this before the other rules (but after RewriteBase):
RewriteCond %{HTTPS} off
RewriteRule ^cart/(.*)$ https://www.mydomain.com/cart/$1 [R,L]
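For placement, here is a sketch of the full block with the cart rule slotted in before the WordPress rules but after RewriteBase, as suggested above (same domain placeholder as in the question):
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
# Force HTTPS for /cart/ and anything under it
RewriteCond %{HTTPS} off
RewriteRule ^cart/(.*)$ https://www.mydomain.com/cart/$1 [R,L]
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress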
