I'm new to CouchDB and was configuring it for a new project, and I have a doubt: what is the difference between the httpd and chttpd sections in CouchDB's main config? When do you use one or the other?
From the documentation regarding chttpd:
In CouchDB 2.x, the chttpd section refers to the standard, clustered
port. All use of CouchDB, aside from a few specific maintenance tasks
as described in this documentation, should be performed over this
port.
chttpd is essentially for address and port bindings.
httpd, on the other hand, configures behavior over that connection (i.e. over chttpd), for example CORS.
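To make that concrete, here is a minimal local.ini sketch (the section and option names come from the CouchDB 2.x docs; the values are placeholders):

```ini
[chttpd]
; clustered interface: where CouchDB actually listens
port = 5984
bind_address = 0.0.0.0

[httpd]
; per-connection behavior, e.g. enabling CORS
enable_cors = true

[cors]
origins = https://example.com
```

So the binding lives under [chttpd], while the CORS switch lives under [httpd] (with its details in [cors]).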
Problem:
I have an AWS EC2 instance running FreeBSD. In there, I'm running a NodeJS TLS/TCP server. I'd like to create a set of rules (in my NodeJS application) to be able to individually block IP addresses programmatically based on a few logical conditions.
I'd like to run an external (not on the same machine/instance) firewall or load-balancer, that I can control from NodeJS programmatically, such that when certain conditions are given, I can block a specific remote-address(IP) before it reaches the NodeJS instance.
Things I've tried:
I initially looked into nginx as an option, running it on a second instance and placing my NodeJS server behind it, but after skimming through the NGINX Cookbook ("Advanced Recipes for High Performance Load Balancing") I learned that only NGINX Plus (the paid version) allows remote/API control and customization. While I believe that $3500/license is not too much (considering all of NGINX Plus's features), I simply cannot afford it at this point in time; in addition, the only features I'd be using (at this point) would be the remote API control and the IP address blocking.
My second thought was to go with AWS ELB (Elastic Load Balancer) by integrating the AWS SDK into my project. That sounded feasible; unfortunately, after reading a few forum threads and part of the documentation (unless I'm mistaken), it seems the two features I need are not available on ELB. AWS offers an entirely different service called WAF that I honestly don't understand very well (both as a service and from a feature standpoint).
I have also (briefly) looked into Cloudflare, as it was recommended in one of the posts here on Stack Overflow, though I can't really tell whether their firewall would allow this level of (remote) control.
Question:
What are my options? What would you recommend?
I think nginx provides this kind of functionality; please refer to the link.
If you want to block an IP in front of your Node TCP server, you can simply edit an nginx config file and deny that IP address.
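For example, a hypothetical server block using nginx's deny directive (from ngx_http_access_module); the address and backend port are placeholders, and you would reload nginx after editing:

```nginx
server {
    listen 80;

    # Drop requests from a specific remote address; allow everyone else.
    deny 203.0.113.7;
    allow all;

    location / {
        proxy_pass http://127.0.0.1:8080;  # your Node backend (placeholder)
    }
}
```

Note this covers HTTP proxying; for a raw TCP server you would need the stream module, which also supports allow/deny.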
Frankly speaking, if I were you I would use AWS WAF, but if you don't want to use it, you can simply do it in Node.js.
In Node.js you could keep a global array (or set) of blocked IP addresses and, upon each connection, check whether the connecting host's IP is in it. The problem is that when the machine or application restarts, you lose all information about blocked IPs. As a solution, you can set up a Redis database (a key-value store that also supports other data types) and keep the blocked IPs there. Since Redis holds its data in RAM, all interaction with it is nearly instant, and when the machine or node restarts, Redis restores from its on-disk backup and continues working in RAM with the old data.
AWS is new to me. I want to configure three VMs on AWS to run one Node.js app.
I want to set up the three VMs to run MongoDB, Memcached, and Node separately.
The question description says that you
should also carefully configure the security groups inside of AWS, so that only your node instance can access your mongo and mcd instances, and so that your node instance is only reachable on port 8080.
When I am setting up the security groups, I feel really confused. Can somebody tell me how to configure this?
PS: I wanted to comment to OP's question, but I can't as I don't have enough reputation.
You need to go through some docs on AWS to understand this. If you are building an enterprise-level app, you will want to look into the docs, where you can get more info on security groups and how to set up your architecture on AWS securely.
Secondly, security groups are rules applied at the instance level; think of them as a firewall (more info here). In your case, open the MongoDB (27017/27018 TCP) and Memcached (11211 TCP) ports only to your Node.js instance, since only Node needs to connect to MongoDB and Memcached; you can also set up a NAT if you want to keep those instances in a private subnet.
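The rules above can be sketched with the AWS CLI (`aws ec2 authorize-security-group-ingress`); the security group IDs here are placeholders for your own:

```shell
# Node instance: only port 8080 open to the world.
aws ec2 authorize-security-group-ingress \
  --group-id sg-node1234 --protocol tcp --port 8080 --cidr 0.0.0.0/0

# MongoDB instance: port 27017 open only to the node security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-mongo5678 --protocol tcp --port 27017 \
  --source-group sg-node1234

# Memcached instance: port 11211 open only to the node security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-mcd9012 --protocol tcp --port 11211 \
  --source-group sg-node1234
```

Using `--source-group` instead of a CIDR is what restricts access to instances in the node group specifically.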
I am using the client-server mode of Hazelcast. Is it possible to control the logging level of the Hazelcast server dynamically from a Hazelcast client? My intention is to start the Hazelcast server at ERROR level by default and, in case of any problem, change the log level to DEBUG without restarting the server.
Thanks
JK
Hazelcast does not depend on any custom logging frameworks and makes use of adaptors to connect to a number of existing logging frameworks. See some details here:
http://docs.hazelcast.org/docs/3.5/manual/html/logging.html
Most current logging frameworks allow you to change log levels dynamically/programmatically. I'm at a loss here, since you haven't given any details of the logging framework you use.
For example, with Log4j 1.x:
LogManager.getLogger("loggername").setLevel(newLogLevel);
will achieve what you are looking for. (In Log4j 2 the equivalent is Configurator.setLevel.) You can also change the Log4j configuration file (log4j.xml) at runtime, and the changes will take effect without restarting any of the Hazelcast servers.
I want to install a major upgrade of an application. The application uses services that listen on ports given as input by the user. The check for "port in use" is done programmatically before the previous version of the application is uninstalled and the upgrade is installed.
I want my installer to allow its services to listen on the given ports when the only processes listening on them are the previous version's own services, since those will be removed during the major upgrade.
If you are serious about this approach, determine the PID of your running service and then look for it in the result of GetExtendedTcpTable called with TCP_TABLE_OWNER_PID_ALL.
However, I suppose you will have a much easier time forgetting about the actual listen sockets and just looking for the configuration information in the registry or config files, so maybe you should rethink your approach.
We would like to protect Cassandra against man-in-the-middle attacks. Is there a way to configure Cassandra so that both client-server and server-server (replication) communications are SSL encrypted?
thank you
short answer: no :)
For client-server: THRIFT-151
Edit: you might want to follow this thread on the mailing list.
Encrypted server-server communication seems to be available now:
https://issues.apache.org/jira/browse/CASSANDRA-1567
Provide configurable encryption support for internode communication
Resolution: Fixed
Fix Version/s: 0.8 beta 1
Resolved: 19/Jan/11 18:11
The strategy I employ is to have Apache Cassandra nodes communicate through a site to site VPN tunnel.
Specific configurations for the cassandra.yaml file:
listen_address: 10.x.x.x   # VPN network IP
rpc_address: 172.16.x.x    # non-VPN network for client access (although I leave it blank so that it listens on all interfaces)
The benefit of this approach is that you can deploy Apache Cassandra to many different environments and you become provider-agnostic: for example, hosting nodes in various Amazon EC2 environments, in your own physical data center, and a few others under your desk!
Is cost an issue preventing you from looking into this approach? Check out Vyatta...
As KajMagnus pointed out, there is a resolved JIRA ticket available in the stable version of Apache Cassandra: https://issues.apache.org/jira/browse/CASSANDRA-1567, which enables TLS/SSL for internode communication. But there are a few ways to accomplish what you would like.
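For reference, the internode and client encryption sections of cassandra.yaml look roughly like this in 1.x/2.x (keystore paths and passwords are placeholders; check the option names for your exact version):

```yaml
server_encryption_options:
  internode_encryption: all        # none | all | dc | rack
  keystore: conf/.keystore         # placeholder path
  keystore_password: changeit
  truststore: conf/.truststore
  truststore_password: changeit

client_encryption_options:
  enabled: true
  keystore: conf/.keystore
  keystore_password: changeit
```

The dc and rack settings let you encrypt only cross-datacenter or cross-rack traffic, which can be a reasonable middle ground if intra-rack links are trusted.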
Finally, if you want to host your instances on Amazon EC2, region-to-region communication can be problematic, and although there is a patch available in 1.x.x, is it really the right approach? I have found that the VPN approach reduces latency between nodes in different regions while still maintaining the necessary level of security.
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Running-across-multiple-EC2-regions-td6634995.html
Finally -- part 2 --
If you want to secure client to server communications, have your clients (web servers) communicate through the same VPN .. The configuration I have:
Front end webservers communicate via internal network to application servers
Application servers sit on their own internal network and VPN network and communicate to the Data Layer via the VPN tunnel and between each other on the internal network
The Data Layer exists on its own network per data centre/rack and receives requests via the VPN network
Node-to-node (gossip) communication can be secured per the issue above. Client and server will both soon support Kerberos (in the Hector master as of commit https://github.com/rantav/hector/commit/08149a03c81b559cba5680d115943dbf334f58fa; it should hit the Cassandra side shortly).