We're running several apps against the same Memcached instance, so I'd like to configure a different prefix for each app using Rack::Attack. By default, the apps would overwrite each other's cache keys.
I've seen the prefix accessor in Rack::Attack::Cache, and there's even a low-level spec for it, but there are no examples of how to use it.
According to the README and the introductory blog post, I never have to deal with Rack::Attack::Cache directly, only with the higher-level Rack::Attack.
So, how can two or more apps use the same Memcached instance for Rack::Attack without overwriting each other's cache keys?
```ruby
Rack::Attack.cache.prefix = "custom-prefix"
```

Rack::Attack.cache is an instance of the Rack::Attack::Cache class.
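To see why the prefix solves the collision, here is a minimal sketch of the idea. A plain Hash stands in for the shared Memcached instance, and the `write_count` helper and `app-one`/`app-two` prefixes are made up for illustration; Rack::Attack's real key format has more parts, but the namespacing principle is the same:

```ruby
# A plain Hash stands in for the Memcached instance shared by both apps.
shared_store = {}

# Each app writes its keys under its own prefix, the way Rack::Attack
# prepends cache.prefix to every key it stores.
def write_count(store, prefix, key, value)
  store["#{prefix}:#{key}"] = value
end

# The same logical throttle key, written by two differently prefixed apps:
write_count(shared_store, "app-one", "throttle:req/ip:1.2.3.4", 1)
write_count(shared_store, "app-two", "throttle:req/ip:1.2.3.4", 7)

# Two distinct entries in the shared store; no overwrite.
raise unless shared_store.keys.sort == [
  "app-one:throttle:req/ip:1.2.3.4",
  "app-two:throttle:req/ip:1.2.3.4"
]
```

Setting a distinct `Rack::Attack.cache.prefix` per app achieves exactly this separation in the shared Memcached.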
I think that ArangoDB is presently the best NoSQL database and that Foxx microservices are a great resource.
Alas, the related docs that come with the 3.x version only help build a minimalistic service.
Also, many of the example apps in the ArangoDB store were developed with deprecated tools (e.g. controllers, repositories).
And while the wizard in the web interface makes it easy to create a new service, I don't understand why a new collection, prefixed with the mount point, has to be created. A complete REST API is generated, with great documentation, but it is useless unless I change the name of an already existing collection. Why is that?
The generator is meant as a quick way to produce boilerplate so you can build prototypes more easily. In practice it's not a great starting point for real-world projects (especially if you have already created collections manually), but if you just need a REST API that you can expand with your own logic, it can come in handy.
As you've read the docs I'm sure you've followed this Getting Started guide: https://docs.arangodb.com/3/Manual/Foxx/GettingStarted.html
In it, the reasoning for prefixed vs non-prefixed collection names is given as such:
Because we have hardcoded the collection name, multiple copies of the service installed alongside each other in the same database will share the same collection. Because this may not always be what you want, the Foxx context also provides the collectionName method which applies a mount point specific prefix to any given collection name to make it unique to the service. It also provides the collection method, which behaves almost exactly like db._collection except it also applies the prefix before looking the collection up.
On the technical side the documentation for the Context#collection method further specifies what the method does:
Passes the given name to collectionName, then looks up the collection with the prefixed name.
The documentation for Context#collectionName:
Prefixes the given name with the collectionPrefix for this service.
And finally Context#collectionPrefix:
The prefix that will be used by collection and collectionName to derive the names of service-specific collections. This is derived from the service's mount point, e.g. /my-foxx becomes my_foxx.
So, yes, if you just want to use a collection shared by all your services, the unprefixed version (using the db object directly) is the way to go. But this often encourages tight coupling between different services, defeating the purpose of having them as separate services in the first place, and it becomes problematic when you need multiple instances of the same service that must not share data. That is why most examples encourage you to use the module.context.collection method instead.
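As a small illustration of the naming rule quoted above (mount point /my-foxx becomes prefix my_foxx), here is a hypothetical helper that mimics the derivation. The authoritative logic lives inside ArangoDB itself, so treat this purely as a sketch of the documented behaviour:

```javascript
// Sketch of how a mount point maps to a collection prefix:
// strip the leading slash, replace remaining special characters with
// underscores. This mimics the documented example (/my-foxx -> my_foxx).
function mountPointToPrefix(mountPoint) {
  return mountPoint.replace(/^\//, '').replace(/[^a-zA-Z0-9]/g, '_');
}

// A prefixed collection name in the style of module.context.collectionName:
function prefixedCollectionName(mountPoint, name) {
  return mountPointToPrefix(mountPoint) + '_' + name;
}

console.log(prefixedCollectionName('/my-foxx', 'users')); // my_foxx_users
```

So two copies of the same service mounted at /my-foxx and /my-foxx-2 would end up with distinct collections, which is exactly the isolation the prefixing is meant to provide.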
I'm currently evaluating using Hazelcast for our software. Would be glad if you could help me elucidate the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose) services on-demand, and their configuration possibly to change in-between.
The version I'm evaluating is 3.6.2.
The documentation I have available (the Reference Manual, the Deployment Guide, and the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem to be possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will go look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would go reload from the Config. Furthermore, that config is writeable, so if it has been changed in-between, it should pick up those changes.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration of an already created distributed object on the fly is not possible in the current version, though there is a plan to add this feature in a future release. Once created, map configs stay at the node level, not the cluster level.
As long as you create the distributed map fresh from the config, use it, and then destroy it, your approach should work without any issues.
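To make the lifecycle concrete, here is a configuration sketch of the create, use, destroy, reconfigure cycle described above, written against the 3.6 API. It is illustrative only: it needs a running Hazelcast member to do anything, and the map name "sessions" and the backup counts are made up:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class DynamicMapSketch {
    public static void main(String[] args) {
        Config config = new Config();
        // Initial configuration for the map named "sessions".
        config.addMapConfig(new MapConfig("sessions").setBackupCount(1));
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // The proxy is created from the config on first lookup.
        IMap<String, String> sessions = hz.getMap("sessions");
        sessions.put("k", "v");

        // Destroy removes the proxy (and the data) cluster-wide.
        sessions.destroy();

        // Change the config before the next lookup; per the behaviour
        // described above, the freshly created proxy should pick it up.
        config.getMapConfig("sessions").setBackupCount(2);
        IMap<String, String> sessions2 = hz.getMap("sessions");
        sessions2.put("k2", "v2");

        hz.shutdown();
    }
}
```

The key constraint from the answer above is that you must not change the config while a proxy for that name still exists on any node; destroy first, reconfigure, then look the object up again.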
I am using kue.js, a Redis-backed priority queue for Node, for pretty straightforward job-queue work (sending mails, tasks for database workers).
As part of the same application (albeit in a different service), I now want to use Redis to manually store some mappings for a URL shortener. Does concurrent manual use of the same Redis instance and database as kue.js interfere with kue, i.e., does kue require exclusive access to its Redis instance?
Or can I use the same Redis instance manually as long as I, e.g., avoid certain key prefixes?
I do understand that I could use multiple databases on the same instance, but I have found a lot of chatter from various sources discouraging use of the database feature, as well as talk of it being deprecated in the future, which is why I would like to use the same database for now if safely possible.
Any insight on this as well as considerations or advice why this might or might not be a bad idea are very welcome, thanks in advance!
I hope I am not too late with this answer; I just came across this post.
It should be perfectly safe. See the README, especially the section on redis connections.
You will notice that each queue can have its own prefix (the default is q), so as long as you are aware of how prefixes are used in your system, you should be fine. I am not sure why it would be a bad idea as long as you know about the prefixes and the load the various apps put on the Redis server. Can you reference a post/page where this was described as a bad idea?
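To make the prefix reasoning concrete, here is a self-contained sketch. A Map stands in for the single shared Redis database; the q: keys mirror the shape of kue's default prefix, and shorturl: is a made-up namespace for the manual URL-shortener keys:

```javascript
// A Map stands in for the single shared Redis database.
const db = new Map();

// kue keeps all of its keys under its prefix (default "q").
db.set('q:job:1', JSON.stringify({ type: 'email' }));
db.set('q:jobs:active', '1');

// Manual URL-shortener keys live under their own (made-up) prefix.
db.set('shorturl:abc123', 'https://example.com/some/long/path');

// As long as the prefixes differ, the two uses never touch
// each other's keys in the shared keyspace.
const kueKeys = [...db.keys()].filter(k => k.startsWith('q:'));
const manualKeys = [...db.keys()].filter(k => k.startsWith('shorturl:'));
console.log(kueKeys.length, manualKeys.length); // 2 1
```

In the real setup this means: let kue keep its default prefix (or set one explicitly via createQueue), pick a distinct prefix for your manual keys, and both can safely share one Redis database.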
I wonder if there's a tool for automatic migrations between different DBMSs using the Persistent package. In theory it should be relatively easy to do, so I thought a tool might already have been written for it.
One (maybe hacky) solution is to just create a program that uses mkPersist twice in different modules with the same definitions but different backend configurations, and to then manually perform the copy operation.
There is, however, not a tool currently available to do this as far as I know.
I started to use Jackrabbit in my project. As I found out, there is no full-fledged LoginModule or AccessManager provided. I mean, we can find SimpleLoginModule, but it is just a mock.
What I need is a simple LoginModule that can be configured, e.g. from a file with users, passwords, and groups. I know that I can implement my own classes, but it is hard to believe that after so many years there is no ready-made solution...
There are a couple of Jackrabbit-based open-source and closed-source projects out there that use JCR as their reference implementation and ship their own security implementations. Most probably you're best off choosing one of them in order not to reinvent the wheel. For a complete list: http://en.wikipedia.org/wiki/Apache_Jackrabbit
Are you running inside an app server or web container? If so, you would usually expect the container to provide a JAAS implementation. For example, for instructions on how to set it up with Jetty, storing user information in a database, a properties file, or LDAP, see:
http://www.eclipse.org/jetty/documentation/current/jaas-support.html
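For example, a minimal JAAS login configuration for Jetty backed by a plain properties file might look like the following sketch. The realm name myrealm and the file path are placeholders; the login module class is the property-file one described in the Jetty documentation linked above:

```
myrealm {
    org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
    file="/etc/myapp/users.properties";
};
```

The referenced users.properties then lists one user per line in Jetty's realm-properties format, username: password[,role ...], e.g. alice: secret,admin. Jackrabbit can then authenticate against the container-provided JAAS realm instead of SimpleLoginModule.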