Connection pool with the fog gem? - fog

I notice that the fog gem takes a long time to connect to Amazon S3 and upload files.
Is it possible to use a connection pool gem with fog, so that it reduces the time Sidekiq takes to upload the files?

Definitely, e.g. https://github.com/mperham/connection_pool - note this gem requires self-healing connections, and I am not sure if that is the case here.
Edit: I misinterpreted the question; if you are asking about connection pooling of the fog object itself, that might not work correctly.
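For what it's worth, a minimal sketch of that combination, assuming a Fog::Storage object can safely be reused across Sidekiq jobs (the edit above notes this may not hold) and that the bucket name, key, and credentials are placeholders:

```ruby
# Minimal sketch: pool Fog::Storage objects so each job reuses an
# already-established S3 connection instead of opening a new one.
require 'fog/aws'
require 'connection_pool'
require 'sidekiq'

S3_POOL = ConnectionPool.new(size: 5, timeout: 5) do
  Fog::Storage.new(
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  )
end

class UploadWorker
  include Sidekiq::Worker

  def perform(key, path)
    # Check out a connected storage object from the pool for the upload.
    S3_POOL.with do |storage|
      storage.directories.get('my-bucket').files.create(key: key, body: File.open(path))
    end
  end
end
```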

Related

NodeJS Monitoring Website (Worker Threads?/Multi Process?)

I am doing a small project: an application that will monitor some servers.
It will be based on telnet port checks and ping, and it will also use libraries to connect directly to databases (MSSQL, Oracle, MySQL) to check their status.
I wonder what the most effective solution for this would be. Currently, with around 30 servers, it works quite smoothly: around 2.5 s to check the status of all of them (running async). However, I am worried that it might get worse in the future with more servers. Hence I'm thinking about alternatives, like worker threads maybe, or some multiprocessing. Any ideas? Everything happens on an internal network, so I do not expect huge latency.
Thank you in advance.
Have you tried PM2's cluster mode?
https://pm2.keymetrics.io/docs/usage/cluster-mode/
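If you want to try it, a minimal ecosystem file could look like this (app name and script path are placeholders):

```js
// ecosystem.config.js - a minimal PM2 cluster-mode sketch; name and script are placeholders
module.exports = {
  apps: [
    {
      name: 'server-monitor',
      script: './monitor.js',
      exec_mode: 'cluster', // fork one process per instance behind PM2's built-in round-robin
      instances: 'max',     // one process per available CPU core
    },
  ],
};
```

Then start it with `pm2 start ecosystem.config.js`.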
The telnet stuff is TCP, which Node.js does very well using OS-level networking events. The connections to databases can vary. In the case of Oracle, you'll likely be using node-oracledb. Those are SQL*Net connections that rely on the OCI libs and Node.js' thread pool. The thread pool defaults to four threads, but you can grow it up to 128 per Node.js process. See this doc for info:
https://oracle.github.io/node-oracledb/doc/api.html#-143-connections-threads-and-parallelism
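As a rough illustration (host names, ports, and the pool size are placeholders): the TCP checks stay on the event loop, while the thread pool is grown via UV_THREADPOOL_SIZE for the database drivers that need it.

```js
// UV_THREADPOOL_SIZE must be set before the thread pool is first used;
// setting it in the shell (UV_THREADPOOL_SIZE=16 node monitor.js) is the
// safest option, and on Windows it is the only one that works.
process.env.UV_THREADPOOL_SIZE = 16;

const net = require('net');

// A plain TCP "telnet-style" port check. This uses OS-level networking
// events, so it never touches the thread pool at all.
function checkPort(host, port, timeoutMs = 2000) {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once('connect', () => { socket.destroy(); resolve(true); });
    socket.once('timeout', () => { socket.destroy(); resolve(false); });
    socket.once('error', () => resolve(false));
  });
}

// Run all checks concurrently, as you already do:
// Promise.all([checkPort('db1.internal', 1521), checkPort('web1.internal', 80)])
//   .then(console.log);
```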
Having said all that, other than increasing the size of the thread pool, I wouldn't recommend you make any changes. Why fight fires before they're burning? No need to over-engineer things. You're getting acceptable performance given the current number of servers you have.
How many servers do you plan to add in, say, 5 years? What's the difference in timing if you run the status checks for half of the servers vs all of them? Perhaps you could use that kind of data to make an educated guess as to where things would go.
As you add new ones, keep track of the total time to check the status. Is it slipping? If so, look into where the time is being spent and write the solution that will help.

NodeJS Performance Issue

I'm running an API server using NodeJS 6.10.3 LTS on Ubuntu 14.04 (trusty). I've noticed that my API server tops out at ~600 reqs/min running on a c4.large EC2 instance. By "tops out" I mean I see the CPU go up to 100%. Note, I know that I'm not fully utilizing the instance since I'm not using the cluster module, but that's OK for now.
I took a .cpuprofile dump of my API server for 10 seconds, and noticed that every second, for ~300ms, the profiler shows my NodeJS code is sitting (idle).
Does anyone know what that (idle) implies? Is it a GC issue? Or is it an internal (to V8) lock that I'm triggering? Any help or pointers to tools to help debug this would be nice. I'm working on anonymizing some of the stack traces in the cpuprofile so I can share.
The packages I'm using are mainly ExpressJS 4, the Couchbase NodeJS SDK, and Socket.IO. The code paths are mainly reading requests and pushing to Couchbase, and finally querying Couchbase via the Views API and pushing some aggregated data on a Socket.IO channel. So it's all pretty I/O-async-friendly stuff. I've made sure that I'm not calling any synchronous functions. There are no patterns of function calls before the (idle) in the CPU profile.
It could also just be I/O wait, meaning none of the sockets have data ready to read yet and so the time is spent idle. If you are using a load testing library you should check that the requests are evenly distributed within a second.
Take a look at https://www.npmjs.com/package/gc-stats to check GC data. There are flags to increase heap space, and to change when GC runs, if the problem turns out to be GC related.
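A small sketch of what that could look like with gc-stats (the field names below are taken from that package's 'stats' event as I recall them; check its README):

```js
// Minimal sketch: log each GC pause with gc-stats. If these pauses line up
// with the (idle) gaps in the .cpuprofile, GC is the likely culprit.
const gcStats = require('gc-stats')();

gcStats.on('stats', (stats) => {
  // pauseMS is the wall-clock duration of the collection; gctype
  // distinguishes scavenges (minor) from mark-sweep-compact (major).
  console.log(
    `GC type=${stats.gctype} pause=${stats.pauseMS}ms ` +
    `heapUsed=${stats.after.usedHeapSize}`
  );
});
```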

How to share Azure Redis Cache between environments?

We want to save a few bucks and share our 1GB dedicated Azure Redis Cache between Development, Test, QA and maybe even production.
Is there a better way than prefixing all keys with an environment string like "Dev_[key]", "Test_[key]", etc.?
We are using the StackExchange Redis client for .NET.
PS: We tried using the cheap 250MB cache (shared infrastructure), but had very slow performance. Read operations were consistently 600-800 ms... without any load (for a ~300KB object). Upgrading to the dedicated 1GB service changed that to 30-40 ms. See more here: StackExchange.Redis with Azure Redis is unusably slow or throws timeout errors
One approach is to use multiple Redis databases. I'm assuming this is available in your environment :)
Some advantages over prefixing your keys might be:
Data is kept separate; you can FLUSHDB in Test and not touch the Production data.
Keys are smaller and consume less memory.
The main disadvantage would be not taking advantage of multiple cores, like you could do if you ran multiple instances of Redis on the same server. Obviously not an issue in this case. Also note that this feature is not deprecated, like one of the answers suggests.
Another thing I've seen people complain about is that databases are numbered, they don't have meaningful names. Some people create a hash in database 0 that maps each number to a name.
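As a client-agnostic illustration (sketched here with the node_redis client that comes up later in this thread; with StackExchange.Redis the equivalent is ConnectionMultiplexer.GetDatabase(n)), where the environment-to-database mapping and the host are placeholders:

```js
// Sketch: point each environment at its own numbered Redis database
// instead of prefixing keys.
const redis = require('redis');

const DB_BY_ENV = { production: 0, development: 1, test: 2, qa: 3 };

async function clientForEnv(env) {
  const client = redis.createClient({
    url: 'rediss://your-cache.redis.cache.windows.net:6380', // Azure Redis needs TLS and an access key
  });
  await client.connect();                // node_redis v4 style
  await client.select(DB_BY_ENV[env]);   // switch to this environment's database
  return client;
}

// In Test you can now wipe everything without touching Production:
// const client = await clientForEnv('test');
// await client.flushDb();
```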
Here is another idea to save some bucks: use separate Redis cache machines for each environment - so no problems with the keys - but stop them when you don't use them, like on weekends and during nights. You're probably not using them more than 50% of the time. I think it would be easy to start and stop them with some PowerShell script; we are using AWS and it is possible there.
From what I see, Redis persistence in Azure is not enabled yet, but they have started working on it (http://feedback.azure.com/forums/169382-cache/status/191763) - it would be nice to take an RDB snapshot before stopping and load it again on start. So for now, if you need to save some values and reload them on start, you have to do it manually (with your own service).

How do I share a cache across Node workers with Redis?

Forgive me if this is a really dumb question. I have been googling for the past hour and can't seem to find it answered anywhere.
Our application needs to query our CMS database every hour or so to update all of its non-user-specific CMS content. I would like to store that data in one place and have all the workers have access to it - w/o each worker having to call the API every hour. Also, I would like this cache to persist in the event of a node worker crash. Since we're pretty new to Node here, I predict we might have some of those.
I will handle all the cache expiration logic. I just want a store that can be shared between users, can handle worker crashing and restarting, and is at the application level - not the user level. So user sessions are no good for this.
Is Redis even what I'm looking for? Sadly it may be too late to install mongo on our web layer for this release anyway. Pub/sub looks promising but really seems like it's made for messaging - not a shared cache. Maybe I am reading that wrong though.
Thank you so much stack overflow! I promise to be a good citizen now that I have registered.
Redis is a great solution for your problem. Not sure why you are considering pub/sub, though. It doesn't sound like the workers need to be notified when the cache is updated; they just need to be able to read the latest value written to the cache. You can use a simple string value in Redis for this, stored under a consistent key.
In summary, you'd have a process that would update a redis key (say, cms-cache-stuff) every hour. Each worker which needs that data will just GET cms-cache-stuff from redis every time it needs that cached info.
This solution will survive both the cache refresh process crashing or workers crashing, since the key in redis will always have data in it (though that data will be stale if the refresh process doesn't come back up).
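A bare-bones sketch of that pattern with the node_redis client (the key name cms-cache-stuff comes from above; fetchCmsContent is a placeholder for your hourly CMS/API call):

```js
const redis = require('redis');
const client = redis.createClient(); // defaults to localhost:6379

// Refresh process: overwrite the key every hour.
async function refreshCache() {
  const content = await fetchCmsContent(); // placeholder for your CMS query
  await client.set('cms-cache-stuff', JSON.stringify(content));
}

// Any worker, any time: read whatever the latest refresh wrote.
async function readCache() {
  const raw = await client.get('cms-cache-stuff');
  return raw ? JSON.parse(raw) : null;
}

async function main() {
  await client.connect();
  await refreshCache();
  setInterval(refreshCache, 60 * 60 * 1000); // every hour
}
```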
If for some wild reason you don't want the workers continually reading from Redis (why not? it's plenty fast enough), you could still store the latest cached data in cms-cache-stuff and then publish a message through pub/sub to your workers letting them know the cache is updated, so they can read cms-cache-stuff again. This gives you durability and recovery, since crashed workers can just read cms-cache-stuff again at startup and then start listening on the pub/sub channel for additional updates.
Pub/sub alone is pretty useless for caching since it provides no durability. If a worker is crashed and not listening on the channel, the messages are simply discarded.
Well, as I suspected, my problem was a super-basic noob mistake that's hard to even explain well enough to get the "duh" answer. I was using the connect-redis package, which is really designed for sessions, not a cache. Once someone pointed me to the node_redis client, I was able to pretty easily get it set up and do what I wanted to do.
Thanks a lot - hopefully this helps some redis noob in the future!

How to Scale Node.js WebSocket Redis Server?

I'm writing a chat server for Acani, and I have some questions about scaling Node.js and WebSockets with a load balancer.
What exactly does it mean to load balance Node.js? Does that mean there will be n independent versions of my server application running, each on a separate server?
To allow one client to broadcast a message to all the others, I store a set of all the webSocketConnections opened on the server. But, if I have n independent versions of my server application running, each on a separate server, then will I have n different sets of webSocketConnections?
If the answers to 1 & 2 are affirmative, then how do I store a universal set of webSocketConnections (across all servers)? One way I think I could do this is use Redis Pub/Sub and just have every webSocketConnection subscribe to a channel on Redis.
But, then, won't the single Redis server become the bottleneck? How would I then scale Redis? What does it even mean to scale Redis? Does that mean I have m independent versions of Redis running on different servers? Is that even possible?
I heard Redis doesn't scale. Why would someone say that? What does that mean? If that's true, is there a better solution for pub/sub and/or storing a list of all broadcasted messages?
Note: If your answer is that Acani would never have to scale, even if each of all seven billion people (and growing) on Earth were to broadcast a message every second to everyone else on earth, then please give a valid explanation.
Well, a few answers to your questions:
To load balance Node.js means exactly what you thought it does, except that you don't necessarily need separate servers; you can run more than one process of your Node server on the same machine.
Each server/process of your Node server will have its own connections. The default store for WebSockets (for example in Socket.IO) is MemoryStore, which means all the connections are kept in that machine's memory; you have to use RedisStore in order to use Redis as the connection store.
Redis Pub/Sub is a good way to achieve this task (see the sketch after this answer).
You are right about what you said here: Redis doesn't scale at this moment, and a lot of processes/connections hitting Redis can make it a bottleneck.
Redis doesn't scale yet, that is correct, but according to this presentation you can see that cluster development is a top priority for Redis, and Redis does have a cluster, it's just not stable yet (the following is taken from http://redis.io/download):
Where's Redis Cluster?
Redis development is currently focused on Redis 2.6 that will bring you support for Lua scripting and many other improvements. This is our current priority, however the unstable branch already contains most of the fundamental parts of Redis Cluster. After the 2.6 release we'll focus our energies on turning the current Redis Cluster alpha in a beta product that users can start to seriously test.
It is hard to make forecasts since we'll release Redis Cluster as stable only when we feel it is rock solid and useful for our customers, but we hope to have a reasonable beta for summer 2012, and to ship the first stable release before the end of 2012.
See the presentation here: http://redis.io/presentation/Redis_Cluster.pdf
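To make the Pub/Sub point above concrete, here is a rough sketch (channel name, port, and the plain ws library are assumptions; with Socket.IO you would use its Redis store/adapter instead of wiring this by hand): each process keeps only its own local WebSocket connections and lets Redis fan broadcasts out to every process.

```js
// Rough sketch: each server process holds only its own WebSocket connections
// and relies on a Redis channel to reach connections held by other processes.
const redis = require('redis');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const pub = redis.createClient();
const sub = redis.createClient(); // a subscribing connection must be dedicated

async function start() {
  await pub.connect();
  await sub.connect();

  // Whatever arrives on the channel is pushed to this process's local clients.
  await sub.subscribe('chat', (message) => {
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) client.send(message);
    });
  });

  // A message from any local client is published so every process sees it.
  wss.on('connection', (socket) => {
    socket.on('message', (message) => pub.publish('chat', message.toString()));
  });
}

start();
```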
2) Using Redis might not work to store connections: Redis stores data as strings, and if the connection object has circular references (e.g. in Engine.IO) you won't be able to serialise it.
3) Creating a new Redis client for each client might not be a good approach, so avoid that trap if you can.
Consider using the ZMQ node library to have processes communicate with each other over TCP (or IPC if they are clustered, as in master-worker).
