How to: allow users to create their own pages on a web app? - node.js

I'm relatively new to webdev, and essentially I'm trying to create functionality that allows users to create their own event pages (similar to Facebook Events, Meetup etc.).
My question is to do with the surrounding architecture of this, namely:
Given each event page will have its own url, do you need a separate file for each event in the backend?
Or is there some clever way of templating and directing traffic? (Sorry if this is vague, I'm trying to probe if there is a better way of doing this?)
I notice that most of the FB/Meetup events all have their own URLs, does this mean that they all have separate files in the backend?
I'm using Node.js for the backend, btw.
I've been Googling around but haven't been able to figure it out, I think I might be using the wrong wording... so even a point in the direction of the right wording would be much appreciated!
Thanks y'all

Generally this is handled with a database-driven solution. The "pages" are dynamic: the URL contains a parameter (see Konrad's comment) that is used to look up the event in a database, and that record's content is loaded into a single template. One route and one template handle the complexity of every page looking unique, so you do not need a separate file per event.
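For example, a single Express route with a URL parameter can serve every event page. Here is a minimal sketch, assuming Express with a view engine already configured and a hypothetical getEventById() database lookup:

const express = require('express');
const app = express();

// One route handles every event page; the ID in the URL selects the record.
app.get('/events/:id', async (req, res) => {
  const event = await getEventById(req.params.id); // hypothetical DB query
  if (!event) return res.status(404).send('Event not found');

  // A single template is rendered with this event's data,
  // so no per-event file ever exists on disk.
  res.render('event', {
    title: event.title,
    date: event.date,
    description: event.description,
  });
});

app.listen(3000);

Creating an event is then just an insert into the same table or collection; the new page "exists" as soon as the row does.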

Related

Can server side rendering in React be helpful with dynamic presentation?

My issue: For my thesis I am creating an auction site. I have an admin panel in which I would like to have some configuration so that an admin can specify that, if there are 10 days left before the end of an auction, some components should be displayed differently, some should not be visible at all, etc. That’s what I call dynamic presentation.
My question: Right now I am working on architecture and wondering if SSR can be helpful in any way? I am already aware that it can shorten download time of some collections from my database even by half, but I am wondering if there is any way how it can be helpful with dynamic presentation itself?
What I already know: I have read all about the advantages and disadvantages of SSR (universal rendering) in React. Now I am only wondering whether it can be in any way helpful with dynamic presentation, or whether it won't matter if I choose SSR or CSR.
Small side question: I don’t have the whole architecture ready yet. What I know is that I would like to have a database, one separate app for an admin, a backend and a frontend (either SSR or CSR). My first thought on how to manage this dynamic presentation was to store some rules in the database. The rules could then be configured in the admin app should an admin want to change anything. The rules would be sent to the backend and evaluated together with some additional data from the frontend. The backend could then send some flag to the frontend indicating which components to display, etc. In theory I could move the evaluation to e.g. a Node.js server should I go with SSR. What I'm wondering about is: can you think of any better way to handle dynamic presentation? What I am most afraid of is numerous ifs in the frontend. I would like a more elegant solution but I have no other idea so far. For some time I thought about a scoring system but I believe it would be too complicated (instead of sending a flag, send a score and the frontend will display the correct things based on the score). Also it wouldn’t solve the issue of ifs on the frontend.
I am aware that on StackOverflow questions which can be answered rather than discussed are preferred but I am really stuck and would appreciate help.
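One way to keep the "numerous ifs" out of the frontend is roughly the rule-in-database approach described in the question above: evaluate the rules on the backend and send plain flags down. A minimal sketch in Node.js; the rule shape, the condition names and the evaluateRules() helper are all hypothetical:

// Hypothetical rule records as the admin panel might store them:
// { component: 'bidButton', condition: 'daysLeftAtMost', value: 10, effect: 'highlight' }

const conditions = {
  // endsAt is assumed to be a millisecond timestamp
  daysLeftAtMost: (auction, value) =>
    (auction.endsAt - Date.now()) / 86400000 <= value,
  priceAbove: (auction, value) => auction.currentPrice > value,
};

function evaluateRules(rules, auction) {
  const flags = {};
  for (const rule of rules) {
    const check = conditions[rule.condition];
    if (check && check(auction, rule.value)) {
      flags[rule.component] = rule.effect;
    }
  }
  return flags; // e.g. { bidButton: 'highlight', gallery: 'hidden' }
}

The frontend then only maps a flag to a presentation (hidden, highlighted, and so on) instead of re-implementing the conditions, and new rules need no frontend changes.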
Basically, SSR can make the page faster because the data does not have to be fetched with a separate API call after the React bundle has loaded. The data is fetched from the database when the page is requested and passed to the components, which are rendered on the server together with the script.
Another basic advantage, and the reason many teams go the SSR way, is SEO. It is hard to get good SEO with a purely client-side-rendered React app, because crawlers may fetch the page without rendering it; it is like doing "view source" on the page. With CSR the initial HTML has no content, only the empty React root div. You need SSR to have real data in the response to the user's first request.
SSR delivers the data on the user's first request and on full reloads; in between, React Router fetches data from the API.
Let me know if that helps.
PS: also a helpful link https://medium.com/walmartlabs/the-benefits-of-server-side-rendering-over-client-side-rendering-5d07ff2cefe8
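A minimal sketch of what that first request could look like with Express and react-dom/server; AuctionPage, fetchAuction(), fetchRules() and the flags from the rule sketch above are hypothetical:

const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');

const app = express();

app.get('/auction/:id', async (req, res) => {
  const auction = await fetchAuction(req.params.id);        // hypothetical DB call
  const flags = evaluateRules(await fetchRules(), auction);  // flags computed server-side

  // The component tree is rendered to HTML on the server,
  // so the first response already contains the data and presentation decisions.
  const html = renderToString(React.createElement(AuctionPage, { auction, flags }));

  res.send('<!doctype html><div id="root">' + html + '</div>' +
    '<script>window.__INITIAL_STATE__ = ' + JSON.stringify({ auction, flags }) + '</script>' +
    '<script src="/bundle.js"></script>');
});

On the client the same bundle hydrates against window.__INITIAL_STATE__, so no second fetch is needed for the first page.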

Hiding route parameters

I am learning Node.js with Express and I am creating my first single-page application with the help of Knockout.js. I have a lot of routes and I am looking for a way to hide the parameters in the URL other than encoding them. If I have header links like:
http://www.mywebapp.com/login
http://www.mywebapp.com/logout
http://www.mywebapp.com/signup
http://www.mywebapp.com/users/username
http://www.mywebapp.com/users/1-8
can I still make those links appear as
http://www.mywebapp.com
no matter which route is being called?
If that is not possible, can someone please explain why?
My application is completely AJAX-driven.
Well, if your app is AJAX-driven then the URL won't change as you navigate. You could also wrap the entire thing in an iframe and only navigate the inner frame, but keep in mind that this is generally bad practice, since history and bookmarks stop working.
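If the goal is simply to keep identifiers out of the address bar, another option is to send them in the request body instead of the path, so every AJAX call hits the same URL. A rough sketch assuming Express; the /users endpoint shape is made up for illustration:

// Server side: read parameters from the JSON body instead of the path,
// so nothing user-specific ever appears in the URL.
app.use(express.json());
app.post('/users', (req, res) => {
  const { username } = req.body;
  // look up the user here and respond with JSON
  res.json({ username });
});

// Client side, e.g. from a Knockout view model:
// $.ajax({ url: '/users', method: 'POST', contentType: 'application/json',
//          data: JSON.stringify({ username: 'alice' }) }).done(renderUser);

The browser address bar stays at http://www.mywebapp.com the whole time; the trade-off, as above, is that individual pages are no longer linkable or bookmarkable.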

How to make an asynchronous app with Node, Express

I'm working on an app that is being built using Node and Express. All is fine; however, the app is currently not asynchronous and I'd like it to be, so I'm currently investigating the best way to do it.
As far as I can tell, socket.io seems to be the preferred choice to go with Node.
My question is, is socket.io's methodology the best way to move data between the server and client or is there a better, more robust way to do it? Maybe something accomplished with Node only?
PS: I think socket.io sounds really nice. It's just that I'm new to Node and thought there would be a simpler way to move data back and forth.
Many thanks
EDIT:
Ok, I've seen the term "realtime" used before and it was frowned upon. The commenter implied that technically there is no "realtime" application, hence me choosing "asynchronous"; however, realtime does describe what I'm after: an app that is all ajax-like. For instance, in my app, when I need to edit a saved document (MongoDB records are called documents), I currently redirect the page, passing the document id as an argument. I don't want that. I want to do it all through ajax. I can achieve this with jQuery, but behind the scenes the server will still be moving through URLs (I'll need to create loads of app.get('/product/:id/edit', ...) and app.post('/product/:id/edit', ...) routes and then use $.ajax to get and post stuff), so I was wondering what's the best way to achieve this.
PS: I might be looking at this completely wrong. Like I said, I'm new to Node and for app development for that matter.
EDIT2: An example: let's say I have a page with a table in it where I list all products. Each product will have an EDIT/DELETE button. At the moment, when I click EDIT, I'm redirected to another page where I can edit the product and save it, then I'm redirected back to the product listing. I'd prefer to load the product into a modal window, make whatever edits I need, then update the product/listing without leaving the page.
Using $.ajax I can take the product ID, query the db for that particular product, populate the fields in the modal with the product details and display it to the user, then allow the user to make the changes and update the product. However, the part where I need to query the db in order to populate the modal is muddy, because the id needs to be passed through the url...
I don't know how to pass the id to the application unless it is through app.get('/product/:id/edit', ...) and then app.post('/product/:id/edit').
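Passing the id through the route is exactly how this is normally done, and it doesn't force a page redirect: the routes can return JSON and the listing page stays put. A rough sketch, assuming Express and a hypothetical Mongoose-style Product model:

// Server: plain JSON endpoints -- no redirects, the page never reloads.
app.get('/product/:id', async (req, res) => {
  const product = await Product.findById(req.params.id); // hypothetical model
  res.json(product);
});

app.post('/product/:id/edit', express.json(), async (req, res) => {
  const updated = await Product.findByIdAndUpdate(req.params.id, req.body, { new: true });
  res.json(updated);
});

On the client, the EDIT button calls $.getJSON('/product/' + id, fillModal) to populate the modal, and saving posts the form back with $.ajax to '/product/' + id + '/edit'. socket.io only becomes necessary if other users need to see the change pushed to them without asking for it; for a single user editing their own data, plain ajax against routes like these is enough.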

Ember.js on the server

I'm developing a very dynamic web application via ember.js. The client-side communicates with a server-side JSON API. A user can make various choices and see diced & filtered data from all kinds of perspectives, where all of this data is brought from said API.
Thing is, I also need to generate static pages (that Google can understand) from the same data. These static pages represent pre-defined views and don't allow much interaction; they are meant to serve as landing pages for users arriving from search engines.
Naturally, I'd like to reuse as much as I can from my dynamic web application to generate these static pages, so the natural direction I thought of going for is implementing a server-side module to render these pages which would reuse as much as possible of my Ember.js views & code.
However - I can't find any material on that. Ember's docs say "Although it is possible to use Ember.js on the server side, that is beyond the scope of this guide."
Can anyone point out what would be possible to reuse on the server-end, and best practices for designing the app in a way to enable maximal such reuse?
Of course, if you think my thinking here doesn't make sense, I'd be glad to hear this (and why) too :-)
Thanks!
C.
Handlebars - Ember's templating engine - does run on the server (at least under Node.js). I've used it in my own projects.
When serving an HTTP request for a page, you could quite possibly use your existing templates: pull the relevant data from the DB, massage it into a JSON object, feed it to handlebars along with the right template, then send the result to the client.
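A minimal sketch of that flow under Node; the template string and data here are placeholders (in practice you would load a template file and real DB results):

const Handlebars = require('handlebars');

// Compile once, reuse for every request for this landing page.
const source = '<h1>{{title}}</h1><ul>{{#each items}}<li>{{name}}</li>{{/each}}</ul>';
const template = Handlebars.compile(source);

// Pull the data from the DB, massage it into a plain object, render, send.
const html = template({ title: 'Landing page', items: [{ name: 'First item' }] });
// res.send(html) from an Express handler, for example

One caveat: Ember templates may rely on Ember-specific helpers and bindings, so only the plainer templates will render unmodified this way.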
Have a look at http://phantomjs.org/
You could use it to render the pages on the server and return a plain html version.
You have to make it follow Google's AJAX crawling guidelines: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
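A rough sketch of the PhantomJS side (run with the phantomjs binary; the URL is a placeholder, and in practice you would wait until the Ember app has finished rendering before reading the content):

// render.js -- run as: phantomjs render.js
var page = require('webpage').create();

page.open('http://localhost:3000/some-landing-page', function (status) {
  if (status !== 'success') {
    phantom.exit(1);
    return;
  }
  // page.content is the HTML after the client-side app has run;
  // the server can cache this and serve it to crawlers.
  console.log(page.content);
  phantom.exit();
});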

How to check if my website is being accessed using a crawler?

How can I check whether a certain page is being accessed by a crawler, or by a script that fires continuous requests?
I need to make sure that the site is only being accessed from a web browser.
Thanks.
This question is a great place to start:
Detecting 'stealth' web-crawlers
Original post:
It would take a bit of engineering to build a full solution.
I can think of three things to look for right off the bat:
One, the user agent. If the spider is Google or Bing or anything else legitimate, it will identify itself.
Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if it claims to be IE, use JavaScript to check for an ActiveX object.
Three, take note of what it's accessing and how regularly. If the content takes the average human X seconds to view, you can use that as a starting point for deciding whether it's humanly possible to consume the data that fast. This is tricky; you'll most likely have to rely on cookies, since an IP can be shared by multiple users.
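For the user-agent part, a minimal sketch of what that first, cheap check could look like in an Express app; the pattern list is illustrative, not exhaustive:

// Well-behaved crawlers identify themselves in the User-Agent header.
const BOT_PATTERN = /googlebot|bingbot|yandexbot|baiduspider|duckduckbot|slurp/i;

app.use((req, res, next) => {
  const ua = req.get('User-Agent') || '';
  req.isDeclaredBot = BOT_PATTERN.test(ua);
  // Malicious scrapers usually fake a browser UA, so treat this only as a
  // first signal and combine it with the timing/cookie checks described above.
  next();
});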
You can use a robots.txt file to block access for crawlers, or you can use JavaScript to detect the user agent and switch based on that. If I understood correctly, the first option is more appropriate, so:
User-agent: *
Disallow: /
Save that as robots.txt at the site root. Keep in mind it is only advisory: well-behaved crawlers will stop requesting your site, but nothing forces a malicious one to obey it.
I had a similar issue in my web application, because I created some bulky data in the database for each user that browsed the site, and the crawlers were causing loads of useless data to be created. However, I didn't want to deny access to crawlers because I wanted my site indexed and found; I just wanted to avoid creating useless data and to reduce the time taken to crawl.
I solved the problem the following ways:
First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (available since 2.0), which indicates whether the browser is a search engine web crawler. You can access it from anywhere in the code:
ASP.NET C# code behind:
bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
ASP.NET HTML:
Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
ASP.NET Javascript:
<script type="text/javascript">
var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>
</script>
The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but it may be useful in your case.
After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking on a button. Some crawlers do process JavaScript, and it is very likely they would trigger the onclick event of a button element, but not of a non-interactive element such as a div. The following is the HTML / JavaScript code I used in my web application www.so-much-to-do.com to implement this feature:
<div
class="all rndCorner"
style="cursor:pointer;border:3;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
onclick="$TodoApp.$AddSampleTree()">
Please click here to create your own set of sample tasks to do
</div>
This approach has been working impeccably so far, although crawlers could be made even more clever, maybe after reading this article :D
