Hey guys, is there any way to circumvent the Twitter rate limit by using a Twitter widget and embedding it in the end user's browser? In other words, I would run the Twitter Search widget as part of the user's browser session (while they are using my app) so that their calls to Twitter are made through their IP address (and not the IP address of my app) - I would do this to avoid getting my app's IP blacklisted. Is that fine, or would it violate Twitter's terms of use?
I would use the Twitter Search widget. Would using the Twitter stream be a better idea?
Depending on your implementation, you may want to consider the Streaming API for this purpose. It's probably considered more "kosher". You can query for a particular set of phrases and open what's called a firehose, and Twitter will push updates to your application; it's not really bound by rate limits, although there is a rate-limit system in place there. For my particular use case this didn't work, and I had to do what you described in your question. But if you want to use the Twitter Streaming API and are using PHP in conjunction, I would highly recommend looking at the 140 Twitter Server framework from the start. It will make the Streaming API a lot easier to implement from the get-go.
This is fine, and it is the solution I'm using. Use jQuery or something similar for the Ajax calls and send the response to the server for processing. The load is carried by the IP of each user of your application, so if a particular user is spamming Twitter with requests, they would get blacklisted, not your application.
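For illustration, a minimal jQuery sketch of that pattern; the search URL and the /api/tweets endpoint are placeholders, not real Twitter or application endpoints:

```js
// Sketch only: searchUrl and /api/tweets are hypothetical placeholders.
var searchUrl = 'https://example-twitter-search.invalid/search.json?q=' +
                encodeURIComponent('#nodejs');

// The request goes out from the visitor's browser, so it counts against
// their IP address, not your server's.
$.getJSON(searchUrl).done(function (results) {
  // Hand the raw results to your own backend for processing/storage.
  $.ajax({
    url: '/api/tweets',              // hypothetical endpoint on your server
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(results)
  });
});
```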
I'm currently working on an analytics web app with a React frontend and Node (Express) backend.
Describing the functionality in a nutshell:
A user can log in on the website and search for a YouTube username; the YouTube API is then called, the data is stored in a MySQL DB via my Express API, and the data is also used to calculate some statistics which are afterwards displayed in a dashboard.
Now I was wondering if I should:
Call the YouTube API from the frontend, i.e. inside my React code, do the calculations, display them, and then store the results in the DB via my Express API.
Or, from the React app, call an endpoint in my Express API that will then call the YouTube API, store the data in the DB and pass the data on to my React app.
Are there any best practices or up-/downsides to either approach?
When answering questions like these, it's important to remember that the client side is different for each and every user who visits your website (their internet speed, their GPU and CPU power, etc.), whereas the server usually runs in a stable environment and is much more powerful than a client.
The proper way would be the following:
1. Obtain a search query from a client
Meaning you should get the user's search query from an input or any other form of control (text area, checkbox, etc.). This way the client is doing the least business logic, as it should: the client should always focus more on UI/UX than on business logic.
2. Send query to the server
Let the server use the query you've just obtained from the client, call the YouTube API from the server (either explicitly using Axios, or via a Node.js YouTube library), and do all the necessary calculations on the backend (see the sketch after this list).
3. Send processed data to the client
Let the client receive the data in a form which is ready for use (iterations, mappings, etc.) - again separating concerns: server - business logic, client - UI/UX.
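As a rough sketch of step 2, assuming an Express backend using Axios and the YouTube Data API v3 channels endpoint (the route name, error handling, and computed statistics are illustrative, not a definitive implementation):

```js
const express = require('express');
const axios = require('axios');

const app = express();

// Hypothetical route the React app calls with ?username=<search query>.
app.get('/api/channel-stats', async (req, res) => {
  try {
    const { username } = req.query;
    const { data } = await axios.get(
      'https://www.googleapis.com/youtube/v3/channels',
      {
        params: {
          part: 'statistics',
          forUsername: username,
          key: process.env.YOUTUBE_API_KEY, // keep the API key on the server
        },
      }
    );

    if (!data.items || data.items.length === 0) {
      return res.status(404).json({ error: 'Channel not found' });
    }

    // Do the heavy lifting here, store the raw data in MySQL if needed,
    // and return only what the dashboard actually renders.
    const stats = data.items[0].statistics;
    res.json({
      subscribers: Number(stats.subscriberCount),
      viewsPerVideo: Number(stats.viewCount) / Number(stats.videoCount),
    });
  } catch (err) {
    res.status(502).json({ error: 'YouTube API request failed' });
  }
});

app.listen(3000);
```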
Now, to be fair, an example like yours will most commonly be done entirely on the client side, since it is not as computationally heavy as other enterprise examples, but this is a good rule to follow for big projects, and no one would really complain if you did it this way, since it is the proper way.
I'm trying to work out the architecture of an application based on Node.js. I have some ideas, but I'm wondering about your thoughts.
Well, my application will store activities from customer websites - activities defined by the customer, e.g. logging in on the website, clicking on a product, clicking on the category menu, etc. These events will be passed to my application, and after that I will show the activities on the customer's dashboard for analytics or other purposes.
First of all, I'm thinking about sending requests via Ajax from the websites to my server and, after parsing the data on my Node.js server, sending that data to the dashboard account via socket.io. The user will see every event from his website on the dashboard "on the fly". Do you think that concept is OK? Won't every Ajax request be too costly? Maybe I should think about another way to send data to my server?
You are trying to design a classic analytical application which assembles data from different sources and shows some analytics upon this data, like Google Analytics and so on.
Won't every Ajax request be too costly?
Maybe I should think about another way to send data to my server?
For web applications that's the standard way to send data from client to server. Of course, HTTP has more overhead compared to UDP, WebSockets or protocols built on them, but it doesn't need additional security configuration and is easy to use and scale. WebSockets require keeping a TCP connection open all the time and are harder to scale. You can find many posts comparing these protocols, like here.
According to this question, your app is going to be a proxy of events from web sites to other web apps where they will be analyzed somehow. Every event will have some information like type (login, click), URL, user, date and so on. This can work up to a certain load. Apparently you will have problems on the browser side accepting more than 100 events (requests) per second, even via WebSockets, and JS with its single-threaded execution model is not so good for analytics (filtering, calculation, aggregation). I think you don't need to pass events to the dashboard as they are; it would be more useful to send some aggregated data like new-event counts, event histograms and so on. So, in this case, you need to perform the analytics on the backend side (the Node.js app).
I also recommend looking at the CQRS approach, where storing data (events) and retrieving it (queries) are separated in order to achieve good performance for both write (log events) and read (retrieve events or some analytics) operations. You may also need a database suited to this kind of analytics, such as MongoDB or maybe Redis.
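A minimal sketch of that split, assuming an Express endpoint that ingests events via Ajax and a socket.io broadcast that pushes only aggregated counts to the dashboards (the endpoint name and the aggregation are illustrative):

```js
const express = require('express');
const http = require('http');
const socketIo = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = socketIo(server);

app.use(express.json());

// In-memory counters just for illustration; a real app would persist the
// raw events (the write side) and query aggregates separately (the read side).
const countsByType = {};

// Customer websites POST events here via Ajax, e.g. { "type": "login" }.
app.post('/events', (req, res) => {
  const { type } = req.body;
  countsByType[type] = (countsByType[type] || 0) + 1;
  res.sendStatus(202);
});

// Push aggregates to connected dashboards once per second instead of
// forwarding every single raw event.
setInterval(() => {
  io.emit('event-counts', countsByType);
}, 1000);

server.listen(3000);
```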
I read in an article that OData can be used for different combinations of clients/servers.
Say I would like to develop a web application where I store data (say, information about all mobile products on the market) using MongoDB, and use Python as the backend with the Bottle framework to access the data through the browser via GET.
Then I decide to extend the web app as an Android app. I can extend it to Android without any code change on the server side.
My doubt is: does using OData here help in any way? Say if I want to extend it to other clients?
Yes, you are right: you don't need to change even a single line of code on the server side if you change the client app. OData defines many conventions for the communication between the client and the server, such as:
What the URL looks like if you want to query some data
http://services.odata.org/V4/OData/OData.svc/Products?$filter=ID gt 2&$select=ID,Name,Rating,Price&$orderby=Price desc
Which HTTP method should be used to Create/Retrieve/Update/Delete an entity
Generally speaking, POST for Create, GET for Retrieve, PATCH/PUT for Update, DELETE for Delete.
What the payload looks like.
How to invoke a function/action
As long as the requests conform to these conventions, the server side always returns predictable responses, regardless of whether the client is a browser or a mobile device.
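For illustration, here is roughly how a browser client could issue the sample query above with fetch; an Android client would simply build the same URL. The query options are taken from the example URL; the rest is a sketch:

```js
// Query options mirror the example URL above.
const base = 'http://services.odata.org/V4/OData/OData.svc/Products';
const query =
  '?$filter=' + encodeURIComponent('ID gt 2') +
  '&$select=ID,Name,Rating,Price' +
  '&$orderby=' + encodeURIComponent('Price desc');

fetch(base + query)
  .then((res) => res.json())
  .then((payload) => {
    // OData V4 JSON responses put the result set in the "value" array.
    payload.value.forEach((p) => console.log(p.Name, p.Price));
  });
```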
I also found some examples for OData:
https://aspnet.codeplex.com/SourceControl/latest#Samples/WebApi/OData/v4/ .
Hope this helps you.
I'm developing a very dynamic web application via ember.js. The client-side communicates with a server-side JSON API. A user can make various choices and see diced & filtered data from all kinds of perspectives, where all of this data is brought from said API.
Thing is, I also need to generate static pages (that Google can understand) from the same data. These static pages represent pre-defined views and don't allow much interaction; they are meant to serve as landing pages for users arriving from search engines.
Naturally, I'd like to reuse as much as I can from my dynamic web application to generate these static pages, so the natural direction I thought of is implementing a server-side module to render these pages, reusing as much as possible of my Ember.js views & code.
However - I can't find any material on that. Ember's docs say "Although it is possible to use Ember.js on the server side, that is beyond the scope of this guide."
Can anyone point out what would be possible to reuse on the server-end, and best practices for designing the app in a way to enable maximal such reuse?
Of course, if you think my thinking here doesn't make sense, I'd be glad to hear this (and why) too :-)
Thanks!
C.
Handlebars - Ember's templating engine - does run on the server (at least under Node.js). I've used it in my own projects.
When serving an HTTP request for a page, you could quite possibly use your existing templates: pull the relevant data from the DB, massage it into a JSON object, feed it to handlebars along with the right template, then send the result to the client.
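A minimal sketch of that flow under Node.js; the template source and the data object are placeholders, not the asker's actual Ember templates:

```js
const Handlebars = require('handlebars');

// In practice you would read the template from the same file your app uses.
const source =
  '<h1>{{title}}</h1><ul>{{#each items}}<li>{{name}}</li>{{/each}}</ul>';
const template = Handlebars.compile(source);

// Massage your DB rows into a plain JSON object, then render.
const html = template({
  title: 'Top products',
  items: [{ name: 'Widget A' }, { name: 'Widget B' }],
});

// Send `html` as the response body for the landing-page request.
console.log(html);
```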
Have a look at http://phantomjs.org/
You could use it to render the pages on the server and return a plain HTML version.
You have to make it follow Google's AJAX crawling guide: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
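For example, a minimal PhantomJS script along those lines, assuming the page URL is passed on the command line (the one-second wait is an arbitrary placeholder for "let the client-side app finish rendering"):

```js
// render.js - run with: phantomjs render.js http://localhost:3000/#/some-route
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
  } else {
    // Give the client-side app a moment to render before snapshotting.
    setTimeout(function () {
      console.log(page.content); // plain HTML snapshot for the crawler
      phantom.exit();
    }, 1000);
  }
});
```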
I already asked a similar question but this one is a bit different/specific:
I'm about to start development of a social community site (for a local user group) with features like timeline, IM/chat, forums, ...
Node.js and socket.io (or now.js) on the backend. jQuery (and maybe backbone.js or similar) on the front end. Content is loaded via socket.io or ajax and navigation via url hash.
There are 2 things where I just can't decide which way to go. I hope there are some people here who can share good or bad experiences.
Templating on the server or in the browser? I'm not sure whether it's better to load a complete HTML site + live updates (also in HTML) for the timeline, forum posts, IM/chat, ... or to use something like a REST API via Ajax or socket.io and do the templating on the client side. I've never done that before; you need to download the templates, etc. Does anyone have experience with this? There are also 2 ways to implement a REST-like API: e.g. request a forum post, then request the user associated with that post, and so on (just like server-side MVC) - or - request a forum post and have the server answer with all the needed information.
Load content via Ajax or socket.io? I'm definitely using socket.io or now.js for real-time communication (IM, chat) and pub/sub (on the main page -> subscribe to new timeline updates; on a forum topic -> subscribe to new posts). But should I also load HTML (or provide a REST-like API, see question 1) through the socket? When people open forum posts in tabs (which I usually do a lot), that would mean a lot of socket connections, and I'm not sure how long it takes for a WebSocket to establish a connection.
So there are 4 ways to do this:
HTML via AJAX - probably the most stable way; it doesn't need a lot of JavaScript for templating, and the browser can use open HTTP connections to request stuff.
HTML via socket.io - the WebSocket must be established before content can load (may be slower).
API via AJAX - since it probably needs more requests than HTML via AJAX, there might be some HTTP header overhead, plus you need to authenticate each request - I'm not a friend of too many Ajax requests.
API via socket.io - the socket only has to be authenticated once and you can request API objects on the fly. However, I would still load templates and JS via HTTP for browser caching.
I know this is a huge post, but I've been debating this for many days now and just can't decide, as it would be a lot of work to switch the approach once development has started. This is not a public project; it's limited to ~10k-15k local people and thus doesn't have to be perfect - a good opportunity to learn new things in my opinion (I'm completely new to Node; classic PHP MVC + jQuery dev here).
I think you should use a RESTful API on the backend, do the templating only on the frontend (maybe with Backbone), and only use Socket.IO for truly real-time stuff (such as chat). It doesn't make any sense to use WebSockets for something like loading HTML, because it most likely never changes.
So my vote is:
1) HTML via AJAX
2) API via AJAX
3) Realtime communication, such as chat messaging (or other stuff that constantly changes) via Socket.IO
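A rough sketch of how those three pieces can live side by side in a single Express app (route and event names are illustrative):

```js
const express = require('express');
const http = require('http');
const socketIo = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = socketIo(server);

// 1) HTML via AJAX: server-rendered fragments the client can inject directly.
app.get('/forum/:topicId', (req, res) => {
  res.send('<article>rendered forum topic ' + req.params.topicId + '</article>');
});

// 2) API via AJAX: plain JSON for data the client templates itself.
app.get('/api/timeline', (req, res) => {
  res.json([{ user: 'alice', text: 'hello' }]);
});

// 3) Real-time via Socket.IO: chat/IM and pub/sub style updates only.
io.on('connection', (socket) => {
  socket.on('chat:message', (msg) => {
    io.emit('chat:message', msg); // broadcast to every connected client
  });
});

server.listen(3000);
```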
Though there really isn't a definitive answer, as it depends.
If you need to be search engine crawlable, you can NOT rely only on client-side processing. If your individual views are light, and/or you need to support mobile, you should have initial rendering server-side.
Currently, I would suggest using an API that both your client application and server-side can use. If you use node for the server-side rendering you can re-use a lot of the same logic, including the API client.
Going a few steps further, if you start by looking at the Yahoo flux examples project on GitHub, you can use the same logic both client- and server-side, including rendering with React views. This is not an easy solution, and it will take some work.
For interactive elements, server-side rendering can be minimal, with your stores wiring up events via sockjs/socket.io when the client starts for your chat/IM bits.
You will have scalability issues when it comes to running across multiple processes, and you will likely need a pub/sub chain backed by a DB for longer reconnect cycles or missed IM messages. There isn't a magic bullet.
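For the multi-process part, one common approach is to back Socket.IO with its Redis adapter so that all node processes share the same pub/sub channel; missed messages during reconnects still need to be persisted separately, as noted above. A sketch, assuming the socket.io-redis package and a local Redis instance:

```js
const http = require('http');
const socketIo = require('socket.io');
const redisAdapter = require('socket.io-redis');

const server = http.createServer();
const io = socketIo(server);

// All worker processes pointed at the same Redis instance share broadcasts,
// so an IM sent to a socket on one worker reaches clients on the others.
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('im', (msg) => io.emit('im', msg));
});

server.listen(3000);
```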
Right now, I like flux+react... When Angular2 comes out, it may have a better story for server-side rendering.