I'm quite new to Node.js and I've encountered a problem with cookies in my client-side browser.
To sum up: I am trying to set a cookie for a page (which is not yet opened) from a Node HTTP server. My goal is to set the cookie for this page, then open the URL with Node (the opn module), and then have the page read the cookie with document.cookie.
Also, the URL I'm opening is hosted on the Node server (localhost/mail/sendingMail/index.html) on port 32126.
I've tried a lot of modules and requests but couldn't find one that did the job. Is this possible, and if so, how?
If this isn't possible, any way of sending data to this page would be nice!
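For reference, here is roughly what I have in mind (just a sketch; the cookie name/value and file path are placeholders from my setup):

    const http = require('http');
    const fs = require('fs');
    const opn = require('opn');

    http.createServer((req, res) => {
      if (req.url === '/mail/sendingMail/index.html') {
        // Set the cookie when the page itself is served; no HttpOnly flag,
        // so the page can read it later with document.cookie.
        res.writeHead(200, {
          'Content-Type': 'text/html',
          'Set-Cookie': 'mailData=hello; Path=/mail'   // placeholder cookie
        });
        fs.createReadStream(__dirname + '/mail/sendingMail/index.html').pipe(res);
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(32126, () => {
      // Open the page once the server is listening; the browser stores the
      // cookie from the response headers.
      opn('http://localhost:32126/mail/sendingMail/index.html');
    });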
I think you should use Redis for this. I also use Redis, and it is very fast because it stores data in RAM.
In Redis you can store strings, hashes, or lists, and it is very easy to use with Node by installing the redis npm package.
This is the documentation I prefer; it describes how to use Redis with Node, but remember you have to install Redis first.
About Redis:
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
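For example, a minimal sketch with the node redis client's older callback-style API (it assumes a Redis server is already running on localhost; the key name is just a placeholder):

    const redis = require('redis');
    const client = redis.createClient();   // defaults to 127.0.0.1:6379

    // Store the data you want another process (or the served page's backend) to read.
    client.set('pageData', JSON.stringify({ user: 'foo' }), (err) => {
      if (err) throw err;
    });

    // Later, from the same or another process, read it back.
    client.get('pageData', (err, reply) => {
      if (err) throw err;
      console.log(JSON.parse(reply));
    });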
Related
I want to make sessions persistent on the server. I am using Node.js with Express, and I first read about connect-redis and connect-mongo. I read that Redis is faster than Mongo, which is why I decided to use it, but now I have also found a module named memcached and I don't know which will be better for my project. Also, does Memcached store its data in memory? Because if it does, it must be the fastest.
If you have already set up Redis then I would stick with it, as it is very fast and easy to manage. Memcached and Redis are very similar when used for caching; the key difference is that Redis can be set to persist to disk in the background, meaning that if the server goes down the data in memory can be reloaded.
Personally, I would not use MongoDB for session persistence for speed reasons; however, if I were using Memcached I'd possibly use Mongo as a backup for the sessions, e.g. write session data to both Memcached and Mongo but only read from Memcached, and use Mongo to restore if an error occurs.
Bottom line, I think your choice to use Redis is the best one for what you've described.
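For instance, a minimal sketch of Redis-backed sessions with express-session and connect-redis (the secret is a placeholder, and the exact wiring varies a bit between connect-redis versions):

    const express = require('express');
    const session = require('express-session');
    const RedisStore = require('connect-redis')(session);
    const redis = require('redis');

    const app = express();
    const client = redis.createClient();      // assumes Redis on 127.0.0.1:6379

    app.use(session({
      store: new RedisStore({ client }),       // sessions live in Redis, not process memory
      secret: 'change-me',                     // placeholder secret
      resave: false,
      saveUninitialized: false
    }));

    app.listen(3000);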
I have an app that receives data from several sources in realtime using logins and passwords. After the data is received it is stored in an in-memory store and replaced when new data is available. I also use sessions backed by MongoDB to authenticate user requests. The problem is that I can't scale this app using pm2, since I can use only one connection to my data source per login/password pair.
Is there a way to use different login/password for each cluster or get cluster ID inside app?
Are in-memory values/sessions shared between clusters, or are they separate? Thank you.
So if I understood this question, you have a Node.js app that connects to a 3rd party using HTTP or another protocol, and since you only have a single credential, you cannot connect to said 3rd party from more than one instance. To answer your question: yes, it is possible to set up your clusters to use a unique user/password combination; the tricky part would be how to assign these credentials to each cluster (assuming you don't want to hard-code them). You'd have to do this assignment when the servers start up, and perhaps use a data store to hold the credentials and introduce some sort of locking mechanism for each credential (so that each credential is unique to a particular instance).
If I were in your shoes, however, I would create a new server whose sole job is to get this "realtime data" and store it somewhere available to the cluster, such as Redis or some persistent store. It would be a standalone server, just getting this data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
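A rough sketch of that standalone fetcher under those assumptions (the polling interval, key, channel, and fake data are all invented for illustration):

    const http = require('http');
    const redis = require('redis');

    const client = redis.createClient();

    // The single place that holds the one credential and talks to the data source.
    function fetchRealtimeData(callback) {
      // Placeholder: replace with the real data-source request (HTTP, socket, etc.).
      callback(null, { value: Math.random(), ts: Date.now() });
    }

    setInterval(() => {
      fetchRealtimeData((err, data) => {
        if (err) return console.error(err);
        const payload = JSON.stringify(data);
        client.set('latestData', payload);        // cluster workers can read this key
        client.publish('data-updates', payload);  // or subscribe to be pushed updates
      });
    }, 1000);

    // Minimal REST endpoint so other services can also pull the data over HTTP.
    http.createServer((req, res) => {
      client.get('latestData', (err, reply) => {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(reply || '{}');
      });
    }).listen(4000);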
'Realtime' is vague; are you using WebSockets? If HTTP requests are being made often enough, that could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSockets) apps, where the persistent connection requires that the same requests be routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to use fork mode with one process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up using pm2 just for its 'always-on' feature; the sticky-session module handles the Node clustering stuff.
I may post an example later.
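In the meantime, here is a rough sketch along the lines of the sticky-session README, if I remember its API correctly (the port and response body are placeholders):

    const http = require('http');
    const sticky = require('sticky-session');

    const server = http.createServer((req, res) => {
      // With sticky sessions, requests from the same client keep hitting the same worker.
      res.end('handled by worker ' + process.pid);
    });

    if (!sticky.listen(server, 3000)) {
      // Master process: sticky.listen() returned false, so just log when we're up.
      server.once('listening', () => {
        console.log('server started on port 3000');
      });
    } else {
      // Worker process: the request handler above does the actual work.
    }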
Introduction
My current project has a mix of common RESTful API concepts and modern realtime WebSocket/long-polling. I'm using MongoDB to store persistent data such as users, products, and aggregated social content. The social content is basically links to Tumblr posts, Twitter tweets, and Facebook posts which are compiled into what I call a "shout".
Implementation
What I'm trying to accomplish is rating "shouts" based on how many likes or follows the post has out of the combined total from all the social media platforms used. I want the data to change on the frontend as the backend updates. The backend calls all the social media APIs after checking an expiration date on the data. The server checks for new data whenever a request is made for it; a request is made for the data every time a client connects, or every time someone posts a new shout through my app. If there is no activity within a given period of time, the shout is archived and only updated every so often by scheduled jobs. I use socket.io to send realtime updates.
What I'm Using Redis For
The reason I need Redis is to message all my servers when one of them starts requesting data from the social media sources, so I don't run into the issue where all of my servers are essentially doing the same thing when the task only needs to be done once. I also need to message my other services once a change is made. For these cases I'm currently using Redis pub/sub. Since I'm already using Redis, I also store session tokens in it and use it as a cache.
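Roughly what that coordination looks like with the node redis client (the channel name, message fields, and lock key are invented for the sketch):

    const redis = require('redis');

    const pub = redis.createClient();
    const sub = redis.createClient();   // a subscribing client can't issue other commands

    sub.subscribe('shout-refresh');
    sub.on('message', (channel, message) => {
      // Every server hears that a refresh started, so the others skip the duplicate work.
      console.log('on %s: %s', channel, message);
    });

    function maybeRefresh(shoutId) {
      // SET ... NX EX acts as a short-lived lock so only one server calls the social APIs.
      pub.set('lock:' + shoutId, '1', 'NX', 'EX', 60, (err, ok) => {
        if (err || !ok) return;   // someone else already holds the lock
        pub.publish('shout-refresh', JSON.stringify({ shoutId, startedAt: Date.now() }));
        // ...call the social media APIs here and publish the result when done.
      });
    }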
What I'm Using Mongo For
I use MongoDB to persist data, and I've set up indexing to tune performance specifically for my application.
The Problem
My problem is that I feel like my stack is too big; using both Redis and Mongo may be overkill. Should I cut out Redis, use an MQ system, and store my sessions and cache in Mongo, just indexing them for fast lookups? If so, what MQ system would be suitable for my application?
Should I cut out MongoDB and use Redis for everything? Would this be cost-effective for relatively large amounts of data? I would be storing hundreds of thousands (maybe more) of shouts (essentially just URIs), thousands of users, and hundreds of thousands of products.
I need to perform some background tasks periodically in CouchDB (I guess that could be done through a cron job; I'm just curious about native CouchDB approaches). I also need to retrieve some resources over HTTP on the server (e.g. to authenticate through OAuth2 and store the token permanently in some document). Could this be achieved somehow (e.g. Node.js integrated with CouchDB)? I don't really like the idea of putting a Node.js web server in front of CouchDB; I'm trying to avoid that additional layer and use CouchDB as the HTTP server, the database backend, and the home of the server-side business logic.
CouchDB is a database. Its primary job is to store data. Yes, it has some JavaScript parts but those are to help it build indexes, or convert to and from JSON.
Asking CouchDB to run periodic cron-style tasks, or to fetch HTTP resources, is similar to asking MySQL to run periodic cron-style tasks, or to fetch HTTP resources. Unfortunately, it's not possible.
You do not necessarily need an HTTP server. You can build a 2.1-tier architecture, with direct browser-to-CouchDB connections as before, but run your periodic or long-running back-end programs yourself; they simply read and write CouchDB data as a normal user (perhaps an admin user).
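For example, a minimal sketch of such a back-end job using the nano CouchDB client (the database name, document id, credentials, and the token-fetching step are all placeholders):

    // Placeholder credentials and database; this script runs alongside CouchDB, not inside it.
    const nano = require('nano')('http://admin:password@localhost:5984');
    const db = nano.db.use('config');

    function refreshToken() {
      // Placeholder: call your OAuth2 endpoint however your provider requires.
      const newToken = 'token-' + Date.now();

      db.get('oauth-token', (err, doc) => {
        const next = err ? { _id: 'oauth-token' } : doc;   // create the doc on first run
        next.value = newToken;
        next.updatedAt = new Date().toISOString();
        db.insert(next, (err) => {
          if (err) console.error('failed to store token', err);
        });
      });
    }

    // Plain setInterval instead of cron; hourly is just an example.
    refreshToken();
    setInterval(refreshToken, 60 * 60 * 1000);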
I can't set up a Redis server because I'm on Windows.
How can I store the sessions on disk so they will persist through node restarts?
Also, do I have to restart Node every time I modify a JS file for the changes to go through?
Btw, I'm already using Express for Node. Express uses the MemoryStore by default, which means that sessions reset every time Node restarts.
There are multiple solutions:
Redis actually got a patch so it can be built on Windows (https://github.com/antirez/redis/issues/238); it may not be perfect, but it works.
Make an account on https://redistogo.com/; they provide a free 5 MB database (which is OK if you just want to test out some things).
You can use something like connect-cookie-session, so that you store the session in the cookie itself (this is OK while you are just developing and need durable sessions; you can then switch to Redis in production). See the sketch below.
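As an illustration of the session-in-cookie idea, here is a sketch using the cookie-session middleware (a close relative of connect-cookie-session; the cookie name and keys are placeholders):

    const express = require('express');
    const cookieSession = require('cookie-session');

    const app = express();

    app.use(cookieSession({
      name: 'session',
      keys: ['placeholder-key-1', 'placeholder-key-2']   // used to sign the cookie
    }));

    app.get('/', (req, res) => {
      // The whole session object is serialized into the cookie, so it survives restarts.
      req.session.views = (req.session.views || 0) + 1;
      res.send('views: ' + req.session.views);
    });

    app.listen(3000);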
Also, do I have to restart Node every time I modify a JS file for the changes to go through?
There are dedicated modules for that, one of the most popular being node-supervisor. Read the docs on its official page; it's really easy to use.
How can I store the sessions on disk so they will persist through Node restarts?
To be honest I have only used Redis as my session store, but you could also try the following (found by searching http://search.npmjs.org):
MongoDB as your session store (see the sketch after this list).
supermarket-cart, which I believe uses supermarket under the covers.
connect-fs: https://www.npmjs.com/search?q=connect-fs
connect-mysql-session: A MySQL session store for node.js connect.
connect-cookie-session: Connect middleware to allow you to store your sessions directly in the client's cookie.
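For the MongoDB option, a minimal sketch with express-session and connect-mongo (the connection URL and secret are placeholders, and the exact API differs between connect-mongo versions):

    const express = require('express');
    const session = require('express-session');
    const MongoStore = require('connect-mongo')(session);

    const app = express();

    app.use(session({
      store: new MongoStore({ url: 'mongodb://localhost/sessions' }),   // placeholder URL
      secret: 'change-me',                                              // placeholder secret
      resave: false,
      saveUninitialized: false
    }));

    app.listen(3000);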
Also, do I have to restart Node every time I modify a JS file for the changes to go through?
Will answer this later!
You can use a cloud-based Redis service like http://redis4you.com/