How can I show a picture before a link in google sites - google-sites

I have a website that I've set up through Google Sites. I have a link to an external webpage. What I'd really like to have happen is, if someone clicks the link, it shows a JPG picture for about 5 seconds and then forwards them off to the linked website. Is there a way to do that?
Thanks,
Rich

Adding the following tag:
<META http-equiv="refresh" content="5;URL=http://example.com">
to the <head> section of a webpage will redirect the user to example.com (or whatever the URL value is) after 5 seconds. You can display an image in the <body> section of this page. This seems like the simplest way to accomplish what you want.
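Put together, a minimal version of that page might look like this (the image file name and destination URL are placeholders):

```html
<html>
  <head>
    <!-- After 5 seconds, send the visitor on to the real destination -->
    <META http-equiv="refresh" content="5;URL=http://example.com">
    <title>One moment...</title>
  </head>
  <body>
    <!-- The picture shown during the 5-second pause; the path is a placeholder -->
    <img src="picture.jpg" alt="Loading...">
  </body>
</html>
```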

Related

Setting custom referer in Express app before redirecting

I am working on this simple app which requires me to set custom referer value before redirecting.
Suppose user A clicks on link X, which I posted on Facebook. If I check the referer value on my server, it shows 'facebook.com'. After clicking link X, A is redirected to B, and B also sees the referer 'facebook.com'. But I want B to see 'mywebsite.com' instead of 'facebook.com'. How can I achieve this?
Please note that I read on MDN about 'Forbidden header names', but there's a website called Hitleap, a traffic-exchange site, that lets users set custom referer values for the traffic they send. So I guess it's possible to do it.
This is my route:
router.get('/:id', (req, res) => {
  res.set('Referer', 'https://mywebsite.com');
  res.redirect('https://boomboom.com');
});
UPDATE
I've found that it's not possible with conventional methods of setting the header. So I have been thinking of achieving this result using the following two methods, but I don't know if they are going to work. So I'm looking for feedback.
Method 1:
So when a user clicks on my link, he will visit a page on my server before redirecting to the final destination. The page on my server will just say "redirecting". And when that happens I will also set the full header for the user, including "Referer" field. Then redirect to the actual page.
Method 2:
Same approach as Method 1, but this time I would copy the full header from the client, change the referer value while the user is on my "redirecting" page, and then redirect to the final destination.
Are any of these processes possible? If you have any other solution please share it here. Thanks
Referer headers in the HTTP protocol go from browser to server, not the other direction. If your server sends one to a browser, the browser ignores it.
Standard commercial browsers make it hard to mess around with the value of the Referer header from browser Javascript. Because cybercreeps. Your plan might be perceived by some websites as an attempt to do a cross-site request forgery attack. So think through your goal carefully.
You could, from your site, serve a page that causes your user's browser to redirect immediately to the desired site. A page something like this may do the trick for you: it tells the browser to replace the current page after 0 seconds with the URL https://example.com.
The title tag sets the browser-tab caption to "Redirecting..." while the refresh is in progress. I've found that useful in single-signon redirection. It lets a user know something is coming.
<html>
<head>
<meta http-equiv="Refresh" content="0; URL=https://example.com/">
<title>Redirecting...</title>
</head>
</html>
If that doesn't set the correct Referer, which it might not in all browsers, you can use a little bit of Javascript to load an invisible form and then submit it immediately.
This tiny page might do it for you:
<html>
<head>
<title>Redirecting...</title>
</head>
<body>
<form method="GET" action="https://example.com/">
</form>
<script>
window.onload = function() {
  document.forms[0].submit();
};
</script>
</body>
</html>
This second approach won't work if your user disables browser Javascript. But, then again, most websites won't work in that case.
You can troubleshoot all this with your browser devtools Network tab. It shows headers for each request.

Have a domain bring up a page hosted on another domain

I need a domain--without hosting--to bring up content from another server. I.e. I have access to the domain's DNS, but nothing else.
How do I make a page--hosted on my server--load when my clients' URL shows up?
To clarify, I want a visitor:
To type sl.wellnessandyourself.com
For the page www.ceramiclion.com/drops/index.html to show up
For sl.wellnessandyourself.com to still be the url in the user's browser.
I've heard mixed comments on this. Websites like Weebly are able to do this. How can I do this as well?
EDITED:
See the comment about CNAME as a far better solution than what I just offered.
I am not fully certain, but I think the only way to really do this is to use an IFRAME tag embedded in your own page code:
<html>
<head>
<title>IFRAME Example</title>
</head>
<body>
<iframe style="min-width: 100%; min-height: 100%; outline: none;" src="http://www.ceramiclion.com/drops/index.html"></iframe>
</body>
</html>
This sort of hearkens back to the good ole days of frames in HTML. Ahh, memories.
Why not use a CNAME? This is pretty much what it's used for. All you need is to edit the DNS records for the domain name. Obviously, you have to have access to these, but, it's pretty simple to do. No hosting required on your end.
You'll set up the CNAME for sl.wellnessandyourself.com to point to www.ceramiclion.com. Note that a CNAME can only target a hostname, not a URL with a path like /drops, so the server at ceramiclion.com also needs to serve (or redirect to) the right content when it receives requests for sl.wellnessandyourself.com.
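In zone-file terms, the record would look something like this (a sketch based on the hostnames in the question; the trailing dot marks a fully-qualified name):

```
; in the zone for wellnessandyourself.com:
sl    IN    CNAME    www.ceramiclion.com.
```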
Here is more information:
https://serverfault.com/questions/65712/using-cname-to-point-to-another-domain-to-save-ip-addresses
https://support.dnsimple.com/articles/cname-record/

When using Video.js is there any way I can avoid the webpage content alert?

I'm trying to use Video.js to display a small video on our team sharepoint page (Sharepoint 2007). It works great, but the 'Do you want to view only the webpage content that was delivered securely?' alert always displays when you navigate to the page. The security settings on the computers cannot be changed, but is there any way I can avoid this alert being displayed?
Thanks in advance for any help.
Thanks,
Simon
This is happening because the page being accessed is served over HTTPS, but some content loaded remotely comes from HTTP. If you're using the Video.js CDN-hosted files, you'd want to change the include tags to look like this:
<link href="//vjs.zencdn.net/4.3/video-js.css" rel="stylesheet">
<script src="//vjs.zencdn.net/4.3/video.js"></script>
These are called protocol-relative URLs, meaning the browser will use whatever protocol the page itself was loaded over to fetch the files.
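As a quick sketch of how that resolution works, the WHATWG URL API (available in modern Node and browsers) resolves a protocol-relative URL against the scheme of the base page:

```javascript
// A protocol-relative URL ("//host/path") inherits the scheme of the
// base page it is resolved against.
const fromHttpsPage = new URL('//vjs.zencdn.net/4.3/video.js', 'https://example.com/page.html');
const fromHttpPage  = new URL('//vjs.zencdn.net/4.3/video.js', 'http://example.com/page.html');

console.log(fromHttpsPage.href); // https://vjs.zencdn.net/4.3/video.js
console.log(fromHttpPage.href);  // http://vjs.zencdn.net/4.3/video.js
```

So on the HTTPS SharePoint page, both files come over HTTPS and the mixed-content alert goes away.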
Hope that helps!

Security check to avoid direct visitor on a page

I have a page that I want to be loaded only from an iFrame.
I use the solution on this link to do it.
How to identify if a webpage is being loaded inside an iframe or directly into the browser window?
but it doesn't work in IE, so I use conditional comments like this:
<!--[if IE 9]>
<script type="text/javascript">
window.location = "http://mysite.com/nodirectvisit.html";
</script>
<![endif]-->
Together they are now working in IE, Chrome, FF, and Opera.
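For reference, the non-IE check from the linked question boils down to comparing window.self with window.top; a sketch (the redirect URL is the one from my conditional comment):

```javascript
// In a plain browser tab, window.self and window.top are the same object;
// inside an iframe they differ. Cross-origin parents may throw on access,
// which also means the page is framed.
function isInIframe(win) {
  try {
    return win.self !== win.top;
  } catch (e) {
    return true;
  }
}

// Usage in the protected page:
// if (!isInIframe(window)) {
//   window.location = "http://mysite.com/nodirectvisit.html";
// }
```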
I just want to ask if there are still other ways to visit my page directly?
Thanks
If a user disables JavaScript, then your code will not run. Maybe use sessions to track where users are coming from?
Anyone with JavaScript disabled or NoScript would be able to open the page directly. The JavaScript check should stop the majority of users, though.
You can check the referrer of the iframe page to make sure it's the page that contains the iframe. The referrer isn't the most reliable field, but if you're more worried about people visiting the page not in an iframe than people not being able to see it at all, that might be the way to go.
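A sketch of that referrer check as Express-style middleware (the parent-page and redirect URLs are hypothetical placeholders):

```javascript
// Serve the page only when the request's Referer is the page that embeds
// the iframe; otherwise send the visitor to the "no direct visit" page.
// ALLOWED_PARENT is a made-up URL for illustration.
const ALLOWED_PARENT = 'http://mysite.com/parent.html';

function requireIframeParent(req, res, next) {
  const referer = req.headers['referer'] || '';
  if (referer.startsWith(ALLOWED_PARENT)) {
    next(); // request came from the embedding page
  } else {
    res.redirect('http://mysite.com/nodirectvisit.html');
  }
}
```

Keep in mind the Referer header can be stripped or forged, so treat this as a deterrent, not real security.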

Bot function after NoFollow rule

I was just wondering what the function of googlebot or any other search engine spider/bot was after you use the no follow rule in a meta tag. Presumably the bot is on your site and gets to a page through link redirection, etc but if the linked page includes the code <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">, where does the bot go after that? Does it go back to the previous page or does it do some other function? Hope this doesn't sound like a stupid question but I was just curious.
Usually, a web crawler does not visit links found on a given webpage as soon as it encounters them. Instead, these links are added to a waiting list. When the spider finishes loading the current page, it looks at this list and pops another URL from it. The new link is not necessarily from the last fetched page; it can be from a previous page or even another website, depending on how the list is organized.
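The waiting-list behaviour described above can be sketched as a simple FIFO frontier (a toy model, not how any particular crawler is implemented):

```javascript
// Toy crawl frontier: discovered links go into a queue; the next URL the
// spider fetches is simply the front of the queue, which is not
// necessarily a link from the page it just finished.
const frontier = [];
const seen = new Set();

function enqueueLinks(links) {
  for (const url of links) {
    if (!seen.has(url)) {   // skip URLs already queued or fetched
      seen.add(url);
      frontier.push(url);
    }
  }
}

function nextUrl() {
  return frontier.shift();  // undefined when the frontier is empty
}

enqueueLinks(['http://a.example/1', 'http://a.example/2']); // links from page 1
enqueueLinks(['http://b.example/1']);                       // links from page 2
console.log(nextUrl()); // http://a.example/1 — still a page-1 link
```

A page marked NOFOLLOW simply contributes no new entries to the queue; the bot carries on popping whatever is already there.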
