I don't want my site's address bar to change when I go to subpages; it should keep showing my index.html even when I navigate to sub pages.
For example, if I open www.xyz.com and navigate to any page, it should still show www.xyz.com.
I heard this can be done with .htaccess. Is it possible?
You really should think about why you want this, because this way of working has a couple of drawbacks:
Users can't see that they are on a different page
Users can't bookmark your pages for fast access
Users can't share links with each other
Search engines may have trouble crawling your site
But basically, there are two main ways to do this:
Use frames. Put the page into a frame, and have all the links stay in this frame.
Use Javascript. Have each page "load" into the current page, using AJAX.
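For the JavaScript route, here is a minimal sketch of the idea (it assumes your links are plain <a> tags pointing at same-origin pages and that a container with the id "content" exists; both are assumptions, not details of your site):

    <div id="content"></div>
    <script>
    // Intercept every link click, fetch the target page in the background,
    // and inject its HTML into the current page, so the address bar never changes.
    document.addEventListener('click', function (event) {
      var link = event.target.closest('a');
      if (!link) {
        return;
      }
      event.preventDefault();
      fetch(link.href)
        .then(function (response) { return response.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
        });
    });
    </script>

The frames variant achieves the same effect declaratively: the outer page at www.xyz.com holds a single frame, all links target that frame, and only the frame's location changes while the address bar keeps showing the outer page.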
I've configured my IIS (asp.net site) to use URL Rewrite.
In particular, this is my rule (a dynamic one): any URL in the format number/string is redirected to a special .aspx page.
So whatever URL matches mysite/id/Name is redirected to showprof.aspx?id=id&title=Name. This works perfectly.
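For reference, the rule looks roughly like this in web.config (an approximation of my rule, not the exact one; the rule name is made up):

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="ShowProfile" stopProcessing="true">
            <match url="^(\d+)/(.+)$" />
            <action type="Rewrite" url="showprof.aspx?id={R:1}&amp;title={R:2}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>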
My question is about search engines. I don't have any "fixed" page that contains links like mysite/id/Name that a spider can scan, so I'm trying to figure out how search engines could index my dynamic pages. Should I create a sitemap.xml? If yes, in which way? Or should I create a "hidden" page that contains every link to all my dynamic content, like mysite/id1/Name1, mysite/id2/Name2, and so on?
Thank you.
A starting point is definitely a sitemap.xml. You could try, for example, the IIS SEO Toolkit and see if it is able to index any of your pages: http://www.iis.net/downloads/microsoft/search-engine-optimization-toolkit
It also has functionality to generate a sitemap.xml, although I'm guessing in your case you have some dynamic content, so a better approach would be to have a "handler" that generates it dynamically on demand (and maybe caches it for performance reasons).
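The generated file itself is plain XML following the sitemap protocol; here is a minimal sketch, with placeholder URLs standing in for your mysite/id/Name entries:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://mysite/1/Name1</loc>
      </url>
      <url>
        <loc>http://mysite/2/Name2</loc>
      </url>
    </urlset>

A dynamic handler would simply loop over the ids in your database and emit one <url> element per page.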
I would also recommend having some pages that are actually accessible through normal links. For example, your home page could link to a "site map" page (not sitemap.xml) that renders the set of links you want indexed (at least the ones that are most important to you); that will make them easy to discover.
If I go to this url
http://sppp.rajasthan.gov.in/robots.txt
I get
User-Agent: *
Disallow:
Allow: /
That means that crawlers are allowed to fully access the website and index everything, so why does site:sppp.rajasthan.gov.in on Google search show me only a few pages, when the site contains lots of documents, including PDF files?
There could be a lot of reasons for that.
You don't need a robots.txt for blanket allowing crawling; everything is allowed by default (see the minimal example after this list).
http://www.robotstxt.org/robotstxt.html doesn't allow blank Disallow lines:
Also, you may not have blank lines in a record, as they are used to delimit multiple records.
Check Google Webmaster Tools to see if some pages have been disallowed for crawling.
Submit a sitemap to Google.
Use "Fetch as Google" to see if Google can even see the site properly.
Try manually submitting a link through the Fetch as Google interface.
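On the robots.txt point: if the intent is simply "allow everything", either remove the file entirely or use the standard permissive form from robotstxt.org, without the extra Allow line:

    User-agent: *
    Disallow: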
Looking closer at it:
Google doesn't know how to navigate some of the links on the site. Specifically, on http://sppp.rajasthan.gov.in/bidlist.php the bottom navigation uses onclick JavaScript that loads content dynamically without changing the URL, so Google couldn't link to page 2 even if it wanted to.
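To illustrate the pagination problem (this is not the site's actual markup; loadPage and the URL are made up), a link like

    <a href="#" onclick="loadPage(2); return false;">2</a>

gives Google nothing to follow, whereas a plain link with a real, crawlable URL would:

    <a href="bidlist.php?page=2">2</a>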
From the bid list you can click through to a detail page for each tender. These detail pages don't have public URLs, so Google has no way of linking to them.
The PDFs I looked at were image scans in Sanskrit put into PDF documents. While Google does OCR PDF documents (http://googlewebmastercentral.blogspot.sg/2011/09/pdfs-in-google-search-results.html), it's possible they can't do it with Sanskrit. You'd be more likely to find them if they contained proper text as opposed to images.
My original points remain though. Google should be able to find http://sppp.rajasthan.gov.in/sppp/upload/documents/5_GFAR.pdf which is on the http://sppp.rajasthan.gov.in/actrulesprocedures.php page. If you have a question about why a specific page might be missing, I'll try to answer it.
But basically the website does some bizarre, non-standard things, and this is exactly what you need a sitemap for. Contrary to popular belief, sitemaps are not for SEO; they're for when Google can't locate your pages.
I'm setting up a website using Google Sites.
If I add a hyperlink to a page and select 'Web Address', then enter http://www.example.com as the link, the actual page ends up being rendered with
http://www.google.com/url?q=http://www.example.com
as the hyperlink address. This injects an annoying 'redirecting you to www.example.com' Google page and a two-second delay into following hyperlinks off my page.
Is there any way to turn this behaviour off?
"If the site you're linking to isn't public, it will automatically redirect through www.google.com/url when opened to keep the site's anonymity."
Source: support.google.com/sites/answer/90538
Whatever was causing this behaviour, it seems to have stopped after a few days. No idea why, but I'll call that a fix. The site was very new at the time I posted, so possibly it has something to do with Google tracking people filling pages with dubious links?
I have an application that utilizes rather unfriendly dynamic URLs most of the time. I am providing friendly URLs to some content, but these are used only as an entry point into the application, after which point all of the generated URLs will be the unfriendly variety.
My question is, if I know that the user is on a page for which a friendly URL could be generated and they choose to bookmark it, is there a way to tell the browser to bookmark the friendly one instead of what is in the address bar?
I had hoped that rel="canonical" would help here, but it seems as if it's only used for indexing. Maybe one day browsers will utilise it.
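For reference, the hint I tried is the standard canonical link element in the page head (the URL here is just a placeholder):

    <link rel="canonical" href="http://www.example.com/friendly-url" />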
No. This is by design, and a Good Thing.
Imagine the following scenario: Piskvor surfs to http://innocentlookingpage.example.com/ and clicks "bookmark". He doesn't notice that the bookmark he saved points to http://evilsite.example.net/ instead. Next time he opens that bookmark, he might get a bit of a surprise.
Another example without cross-domain issues:
Piskvor the site admin clicks "bookmark" on the homepage of http://security-holes-r-us.example.org/ - unfortunately, the page is vulnerable to script injection, and the injected code changes the bookmark to http://security-holes-r-us.example.org/admin?action=delete&what=everything&sure=absolutely . If he's still logged in the next time he opens the bookmark, he may find his site purged of data. (Granted, it was his fault not to prevent script injection AND to have non-idempotent GET resources, but that is all too common.)
I am starting to create a site that uses Drupal. One of my requirements is that nobody will see any "real" content until they log in. The home page will basically be a static page with a logo, some basic "this is what the site does" copy, and then a login form. If you don't log in, you can only see some other static pages (FAQ, legal, privacy, etc.), but you can't use the actual site. Think Facebook's login page: basically just fluff with a login form.
From searching around, I have found 3 different methods for this:
Create a page that is basically separate from the Drupal installation, but when the form submits, check it against the Drupal DB and proceed if the login is successful. This would be done with Apache, maybe an .htaccess file directive to change the first served page (see the sketch after this list).
Use the Front Page extension. I haven't looked at this too extensively, has anyone used it? Pros/Cons?
Somehow finagle the default Drupal "Home Page" functionality to allow this to happen. I would rather not do this, unless someone knows an easy way to do it.
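For option 1, the .htaccess part I have in mind is just something like this (a sketch, assuming a static landing page named login.html; mod_dir's DirectoryIndex serves the first listed file that exists):

    DirectoryIndex login.html index.php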
One of my requirements is that nobody will see any "real" content until they log in.
There is a permission that users need to have in order to access content on Drupal (access content); if anonymous users don't have that permission, they will not be able to see any content.
Using the module you mentioned, you can create a different home page for anonymous users.
Solution #1 is not the ideal one, as it requires more work for something that can be obtained from inside Drupal. Keep in mind that the correct way to access the Drupal DB is to use the DB API Drupal comes with.