Coming from the Python/Django universe, I discovered Node.js, Express and EJS, and I have a big crush on those techs (mostly for the deployment part).
To practice and keep enjoying discovering them, I'd like to make a blog-like project. It will be mostly static content with big articles for SEO and some dynamic functions, and each article page must have a "true" slug like www.site.com/my-article.
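For reference, a minimal sketch of that kind of slug-based, server-rendered route in Express + EJS might look like this (the in-memory article store and the view name are just placeholders, not anything from a real project):

```js
// Minimal Express + EJS sketch: render an article page server-side by slug.
// The hard-coded "articles" object stands in for whatever data source is used.
const express = require('express');
const app = express();

app.set('view engine', 'ejs'); // expects templates in ./views, e.g. views/article.ejs

const articles = {
  'my-article': { title: 'My Article', body: 'Big static content for SEO…' }
};

// www.site.com/my-article -> HTML rendered on the server, no client-side JS required
app.get('/:slug', (req, res) => {
  const article = articles[req.params.slug];
  if (!article) return res.status(404).send('Not found');
  res.render('article', { article });
});

app.listen(3000);
```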
I just found documentation on how to do SSR with Express + EJS: https://www.geeksforgeeks.org/node-js-server-side-rendering-ssr-using-ejs/
I'd like to know if any of you have feedback on that type of implementation for live projects. Is it indexed well by Google? Did you see any SEO issues?
Many thanks :)
Googlebot can crawl, process and render JavaScript-based websites, whether client-side or server-side rendered. Googlebot crawls the web using a headless, evergreen Chrome. It's still best to render server-side for performance, depending on your specific site's needs.
EJS is SEO friendly. You can customise all the core SEO elements of your page/app so that Googlebot can crawl it. In the end it all depends on how good your technical/on-site SEO is. I haven't had a problem with Googlebot crawling my site. Just be careful to get the technical SEO right at the beginning of development so you won't have problems later once the site is visible to the public.
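Concretely, those core elements can just be template variables that your route fills in per page. A sketch of what the head of a views/article.ejs might look like (the field names here are assumptions, not anything specific to your project):

```ejs
<!-- views/article.ejs (sketch): title, meta description and canonical URL
     are passed in from the Express route as ordinary template locals -->
<head>
  <title><%= article.title %></title>
  <meta name="description" content="<%= article.metaDescription %>">
  <link rel="canonical" href="https://www.site.com/<%= article.slug %>">
</head>
```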
If you aren't sure about the quality of your technical SEO, you can use Google Lighthouse in DevTools (F12 or Ctrl+Shift+I) to check for problems. Lighthouse will give you a better perspective on how Googlebot crawls your page.
You only need to check the SEO category; the device setting is optional. Then click "Generate report" and wait for the results.
More on how to use Lighthouse: https://developers.google.com/web/tools/lighthouse#devtools
I set up a very basic headless browser implementation with Puppeteer on a server, and the way I have it configured currently, I have the system scrape arbitrary websites based on user input. I then have the server send the html code of the page to a client using response.write. (I'm not actually deploying this as a solution to anything - it's really just a proof of concept.)
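For reference, the basic shape of that setup is roughly the following (a sketch only; input validation and error handling are left out):

```js
// Rough sketch of the setup described: Puppeteer loads an arbitrary page,
// and the Express handler writes the resulting HTML straight back to the client.
const express = require('express');
const puppeteer = require('puppeteer');

const app = express();

app.get('/scrape', async (req, res) => {
  const url = req.query.url; // user-supplied target, e.g. /scrape?url=https://example.com
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const html = await page.content(); // serialized DOM after the page's own JS has run
  await browser.close();

  // Relative links and stylesheets in this HTML now resolve against *our* server,
  // not the original site, which is one reason they tend to break.
  res.write(html);
  res.end();
});

app.listen(3000);
```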
The results are mixed based on what website the system attempts to scrape from - but one thing they all have in common is that things like links and external stylesheets either work sporadically or not at all. My question is, is there a way to view the entire website, with clickable links and all, using Puppeteer? Or is this ridiculously impractical and totally hopeless?
If there is a way to approach this, some example code would be great.
Thanks!
I'm working on a project for class: to create a website and a separate website for mobile users. The site is to recognize the type of device/browser accessing the page and send the appropriate form. So if I were to visit the site in IE8 it would direct me to the main page for IE8, and if I were to access the site with a mobile device it would direct me to the mobile site's main page automatically.
Also, I need to design the website for at least two different screen sizes.
I'm coding in HTML5, and I do not know the type of server the site will be hosted on. The use of JavaScript is extra credit. The project details are to "design a small mobile web site. The web site should be tested on one or more mobile devices. The iPod Touch device will be used as the base for testing."
I know how to do 8/10 of the requirements (except the two mentioned). I looked at W3C and didn't find anything.
Any help would be much appreciated. Thank you!
Do a Google for:
CSS Browser Detection
JavaScript Browser Detection
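For the JavaScript side, the kind of thing those searches turn up is a simple user-agent check that redirects obvious mobile browsers (a rough sketch only; the regex and the mobile URL are illustrative, not a complete detection scheme):

```js
// Crude client-side device detection: redirect well-known mobile user agents
// to a separate mobile page. The path and the pattern are placeholders.
(function () {
  var isMobile = /iPhone|iPod|iPad|Android|BlackBerry|IEMobile/i.test(navigator.userAgent);
  if (isMobile && window.location.pathname.indexOf('/mobile/') !== 0) {
    window.location.replace('/mobile/index.html');
  }
}());
```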
Also, you should think twice about creating multiple sites with basically the same content, versus creating proper stylesheets that are referenced from the same site.
Hope that gets you the other 2 requirements.
NOTE: Since this is homework I won't post any links...
I suspect that ServerFault isn't the best place for this question...but aside from that, your question is a little vague. A google search for "designing a mobile website" turns up what looks to be several pages of relevant information. If you first try working with the information in those documents and then come back with specific questions (e.g., "I tried this and it behaved this way instead of the way I expected") you're apt to get better answers.
I would like to use this JS plugin to get the CSS template layout working:
http://code.google.com/p/css-template-layout/
But I know that it is recommended to first have a website working without JS. My project is a tourism website... will I lose a lot of potential users if JS is required to visit my website?
Thanks,
About 4% of my visitors don't have JavaScript support. That figure includes bots, though, which would explain quite a few of those percentage points. There are a few classes of browser that won't run your JavaScript as intended:
Screen readers/accessible browsers (like for blind people)
Mobile browsers
Console-based browsers (Used sometimes by sysadmins from servers with no gui installed)
Off-brand browsers or older browsers with buggy javascript engines
Search Engines (Google)
I don't think that many people just turn JavaScript off anymore. However, things like NoScript (where JavaScript is disabled for a site initially and must be explicitly enabled) are becoming more popular.
The problem is more apparent on mobile browsers, but you will likely serve different content to them anyway.
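One middle ground, if you do use the plugin, is to keep a plain CSS layout as the default and only opt into the script-driven layout when JavaScript actually runs, for example by flagging the root element (a sketch; the class names are arbitrary and nothing here is specific to the css-template-layout plugin's API):

```js
// Progressive-enhancement sketch: the page ships with a simple CSS fallback layout;
// this flag lets the stylesheet target script-capable browsers only.
document.documentElement.className += ' js';

// Corresponding CSS (for illustration):
//   .content        { /* basic fallback layout, works without JS */ }
//   .js .content    { /* richer layout that the template-layout script enhances */ }
```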
We're building a mobile-friendly site to work in tandem with our client's MOSS 2007 internet site. We need to be able to redirect users who hit the home page and are using a mobile device.
Our original intention was to add a custom control to the home page's page layout that would detect the current user's device and redirect to the mobile site accordingly. We quickly realised that this would not work as we are using the Output Caching functionality provided by SharePoint/ASP.NET. This means that the detection code will only run for the first visitor to the home page until the cache expires.
Our next idea was to build a custom HTTP Module and process the detection there. However, we are finding that the Output Caching is not allowing that either. If the cache is set while a mobile device is visiting, all browsers are subsequently redirected to the mobile site (until the cache expires).
If we turn off output caching it works just fine, but we cannot turn output caching off, especially for the home page. We did investigate Substitution (Donut) Caching, but this is not working due to the fact that we are filtering the ASP.NET response within another HTTP Module that tidies up the rendered HTML for XHTML compatibility reasons. I've also experimented with the output cache profile by setting its vary-by-header property to "User-Agent", but I am getting mixed results and am also concerned about the memory implications of caching multiple versions of pages (we already have memory issues now and then).
It's possible we could run the redirection code in JavaScript but then we risk not detecting a lot of devices that don't have JavaScript enabled. This is a government website so the usage of JavaScript has to abide by accessibility guidelines.
Does anyone have any other ideas as to how we can solve this issue? Has anyone done this before? Perhaps in a different way?
Hope you can help, thanks.
p.s. I have also asked this question on SharePoint.SE but wanted to get as many eyes on this as possible.
I would suggest you try ISAPI filters.
I've actually solved this one, I think. I've pretty much followed this article here: http://msdn.microsoft.com/en-us/library/ms550239.aspx. We have updated the code in that article to build a cache key based on whether the current page is the home page, whether the current user is using a mobile device, and whether or not a cookie exists forcing the user to the full site. I will probably write this up as a blog post. When I do, I will update this answer providing a link.
If I have an iPhone version of my site, what are the things I need to make sure of so it doesn't interfere with SEO?
I've read quite a bit now about cloaking and sneaky JavaScript redirects, and am wondering how this fits into iPhone and desktop websites playing together.
If my iPhone site has a totally different layout, where say the desktop site has a page with 3 posts and 10 images all on one page and my iPhone site makes that 2 pages, one with the posts and one with the images (trying to think up an example where the structure is decently different), that's probably not best practice for SEO, so should I just tell Google not to look at the mobile site? If so, and assuming my client would like to automatically redirect mobile users to the iPhone site (I'm familiar with the idea of taking them to the regular page with a link to the mobile version instead), how do I not make this look like cloaking?
Google actually has a separate index and crawler for mobile content. So all you need to do is design your URLs in such a way that you can exclude googlebot from the mobile pages and googlebot-mobile from the regular pages in robots.txt.
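For example, if the mobile pages live under their own path, the split might look something like this (a sketch only; the /mobile/ prefix is just an assumed URL structure, and Allow is a directive Googlebot honours):

```text
# robots.txt (sketch, assuming mobile pages live under /mobile/)

# Keep the regular crawler out of the mobile pages
User-agent: Googlebot
Disallow: /mobile/

# Keep the mobile crawler out of everything except the mobile pages
User-agent: Googlebot-Mobile
Allow: /mobile/
Disallow: /
```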
Certainly you have the option of telling the search engines not to look at the mobile page. I would leave it, though, because you never know who is looking for something specific, and maybe Google will prefer certain pages over others for mobile users.
If the 2 mobile pages make sense to the visitor, then I would not worry about it for SEO. If you are redirecting based on the visitor being on mobile, then I don't see how the search engines could think you are cloaking, but if you want to be totally sure I suggest using CSS to show different information based on media type.
The only problem I can think of would be duplicate content. The search engines may see both pages and not rank one as highly because they like what they see on the other page. There is no penalty other than the fact that one page is more interesting than the other and may get better rankings, whereas the other drops in rank. If you are making two separate pages it is an opportunity to tune your information to specific details and maybe get hits for both, but if you are using CSS then it will rank as one page.