Using the rabbitmq plugin in Grails, is it possible to define fallback/failover addresses? - grails-plugin-rabbitmq

I am converting a Grails project from ActiveMQ to RabbitMQ. Is it possible to configure RabbitMQ with multiple fallback/failover addresses?

I put a load balancer in front of the RabbitMQ servers.
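Alternatively, the plugin builds on Spring AMQP, whose CachingConnectionFactory accepts a comma-separated list of broker addresses and fails over to the next address when a connection attempt fails. A minimal sketch in plain Spring AMQP (the hostnames are placeholders, and where you override the connection factory bean depends on your plugin version):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;

public class FailoverConnectionFactoryExample {
    public static void main(String[] args) {
        CachingConnectionFactory factory = new CachingConnectionFactory();
        // the factory tries these host:port pairs in order and fails over
        // to the next one when a connection attempt fails (placeholder hosts)
        factory.setAddresses("rabbit1.example.com:5672,rabbit2.example.com:5672");
        factory.setUsername("guest");
        factory.setPassword("guest");
        // inject this factory into your RabbitTemplate / listener containers
    }
}

If the plugin version you use only exposes a single hostname in its config, the load-balancer approach above is the simpler route.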

Related

Deploy a MEAN stack application to an existing server

I have an Ubuntu server on DigitalOcean which hosts a website, and a Windows server on AWS which hosts another website.
I just built a MEAN.js stack app on my Mac, and I plan to deploy it to production.
It seems that most of the existing threads discuss using a new dedicated server. For example, this thread is about deploying on a new AWS EC2 instance; this video is about deploying on a new Windows Azure server; this one is about creating a new droplet on DigitalOcean.
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server? If yes, will there be any difference in terms of performance?
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server?
Yes. Both Windows and Ubuntu allow you to deploy multiple applications on the same instance.
For Ubuntu you can read this post, which will help you serve multiple apps.
That example uses Nginx, but you can follow it and run without a web server like Apache or Nginx in front. If you need subdomains, I would suggest using Apache virtual hosts with the reverse-proxy module and pm2, as in the sketch below.
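For instance, a virtual host along these lines proxies one subdomain to one Node app (a sketch: app1.example.com and port 3000 are placeholders, and mod_proxy plus mod_proxy_http must be enabled):

<VirtualHost *:80>
    # placeholder subdomain; each app gets its own VirtualHost block
    ServerName app1.example.com
    ProxyPreserveHost On
    # placeholder port; point at whatever local port the Node app listens on
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>

pm2 then keeps each Node process alive behind its own port (e.g. pm2 start server.js --name app1).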
For Windows and its IIS I would suggest iisnode; you can find plenty of articles on how to configure it.
will there be any difference in terms of performance?
It depends on your applications. If you are already serving applications that handle heavy traffic and need the CPU and memory, I would not suggest running multiple apps on the same instance; but if these are simple web apps, you can easily use the same instance.
Hope this answer helps you!

Open a port for Kafka communication to the outside world

I have a VM (Linux OS) in Azure with Hortonworks on it, which launches Kafka.
The Kafka service is running and I am able to create a producer and consumer inside the VM.
I have the server IP and I'm also able to log into Ambari using port 8080.
When I try to send a message to Kafka from my Java application I get a TimeoutException after 60 seconds.
What do I need to do in order to set the right port for Kafka communication from outside the VM?
I think that the main issue here is that Kafka is listening on the local IP and not on the VM's public IP (WAN).
Any help will be really appreciated...
If you have used the Azure Resource Manager workflow to create the VM you have a Network Security Group that has been created automatically. You need to create rules in the NSG to make Kafka available. See : https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/
If you have used the Azure classic deployment workflow, you need to define an endpoint to expose Kafka. See: https://azure.microsoft.com/fr-fr/documentation/articles/virtual-machines-windows-classic-setup-endpoints/
Hope this helps,
Julien
Did you set the Kafka advertised.host.name and advertised.port broker properties? That's how you present yourself to the outside world.
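For example, in server.properties (the hostname is a placeholder for your VM's public DNS name):

# placeholder public hostname (these two properties are deprecated in
# newer Kafka versions; see the next answer)
advertised.host.name=mykafka.cloudapp.net
advertised.port=9092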
(Copying and pasting my response to a similar post)
For recent versions of Kafka (0.10.0 as of this writing), you don't want to use advertised.host.name at all. In fact, even the documentation states that advertised.host.name is already deprecated. Moreover, Kafka will use this not only as the "advertised" host name for the producers/consumers, but for other brokers as well (in a multi-broker environment)...which is kind of a pain if you're using a different (perhaps internal) DNS for the brokers...and you really don't want to get into the business of adding entries to the individual /etc/hosts of the brokers (ew!)
So, basically, you would want the brokers to use the internal name, but use the external FQDNs for the producers and consumers only. To do this, you will update advertised.listeners instead.
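A sketch of the corresponding server.properties entries for 0.10.x (the public hostname is a placeholder; the Azure NSG or endpoint must also allow port 9092 as described above):

# bind on all interfaces, but advertise the public address to clients;
# mykafka.cloudapp.net is a placeholder for your VM's DNS name
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://mykafka.cloudapp.net:9092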

Spring Integration without OSGi?

Some years ago we deployed several OSGi-based Spring Integration (SI) applications in Virgo. However, SI has apparently moved away from OSGi. So, in the absence of the Virgo container, what is the best way to run an SI app in production now? Say, a simple app that monitors a file system location & loads file data into Oracle? Is it just java -jar?
Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can "just run" — so yes, essentially java -jar. You can run small production applications on it, and you can consider using Spring Cloud on top of it.
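For the file-monitoring example, a minimal sketch with Spring Boot and the Spring Integration Java DSL might look like this (assuming Spring Integration 5.x with spring-boot-starter-integration and spring-integration-file on the classpath; the directory, file pattern, and poll interval are placeholders, and the Oracle insert is left as a stub):

import java.io.File;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.dsl.Files;

@SpringBootApplication
public class FilePollerApplication {

    public static void main(String[] args) {
        // "just run": the executable jar boots an embedded application context
        SpringApplication.run(FilePollerApplication.class, args);
    }

    @Bean
    public IntegrationFlow fileReadingFlow() {
        // poll a placeholder directory every 5 seconds for new *.csv files
        return IntegrationFlows
                .from(Files.inboundAdapter(new File("/tmp/inbox"))
                           .patternFilter("*.csv"),
                      e -> e.poller(Pollers.fixedDelay(5000)))
                .transform(Files.toStringTransformer())
                .handle(message -> {
                    // stub: hand the file contents to a JdbcTemplate insert here
                    System.out.println("Loaded: " + message.getPayload());
                })
                .get();
    }
}

Packaged with the Spring Boot Maven or Gradle plugin, this is indeed started with just java -jar.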
If you are looking for a container, then think about Pivotal tc Server (formerly SpringSource tc Server), an enterprise version of Apache Tomcat. It is a drop-in replacement for Apache Tomcat that's optimized for Spring.

distributed logging: JMS and log4j?

Been doing some searching for a solution to this problem: I need log entries from apps running on several machines to be sent to & aggregated on a remote server. Requirements:
logging in the app needs to be asynchronous (can't wait for log entry to traverse network)
logging in the app needs to be queued; if the network fails, log entries need to be queued locally and sent to the centralized server when the network becomes available again
I'm looking at using log4j and a JMSAppender. Assuming that's a suitable solution, are there any examples available? What process would be running on the centralized server to receive log entries in this scenario?
Thanks.
One simple setup that comes to mind is Apache ActiveMQ.
It is an open-source messaging broker (JMS compatible) that is able to cluster queues among several physical machines, and the ActiveMQ installation is rather lightweight. You simply install one ActiveMQ broker on each of your application machines, and another one on the logging server. Your application would use a JMS appender, and you could actually just use the bundled Apache Camel to read from the queue and write a log to file or database without needing to write an application for that task.
It could be as simple as adding something like the following to the camel.xml in the ActiveMQ conf/ directory and importing camel.xml in the activemq.xml configuration.
<route>
  <from uri="activemq:queue:LogQueue"/>
  <!-- "&" must be escaped as "&amp;" inside an XML attribute -->
  <to uri="file:target/folder/?fileName=logfile.log&amp;fileExist=Append"/>
</route>
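On the application side, the log4j configuration could look roughly like this (a sketch: the broker URL and topic name are placeholders, and the AsyncAppender wrapper keeps the application thread from blocking on the network):

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- JMSAppender publishes serialized logging events to JMS;
       tcp://localhost:61616 assumes a local broker on each app machine -->
  <appender name="jms" class="org.apache.log4j.net.JMSAppender">
    <param name="InitialContextFactoryName"
           value="org.apache.activemq.jndi.ActiveMQInitialContextFactory"/>
    <param name="ProviderURL" value="tcp://localhost:61616"/>
    <!-- ActiveMQ's JNDI resolves dynamicTopics/* without extra setup -->
    <param name="TopicBindingName" value="dynamicTopics/LogTopic"/>
    <param name="TopicConnectionFactoryBindingName" value="ConnectionFactory"/>
  </appender>
  <!-- AsyncAppender hands events to JMS on a background thread -->
  <appender name="async" class="org.apache.log4j.AsyncAppender">
    <appender-ref ref="jms"/>
  </appender>
  <root>
    <priority value="info"/>
    <appender-ref ref="async"/>
  </root>
</log4j:configuration>

One caveat: log4j's JMSAppender publishes to a topic, not a queue, so the Camel route above would consume from activemq:topic:LogTopic (or you would bridge the topic to a queue on the broker).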
You could use a myriad of other frameworks, JMS servers, and technologies, but I think this is a rather easy approach to achieve with very low cost and high stability.

Securing Cassandra communication with TLS/SSL

We would like to protect Cassandra against man-in-the-middle attacks. Is there any way to configure Cassandra so that the client-server and server-server (replication) communications are SSL-encrypted?
Thank you.
short answer: no :)
For client-server: THRIFT-151
Edit: You might want to follow this thread on the mailing list.
Encrypted server-server communication seems to be available now:
https://issues.apache.org/jira/browse/CASSANDRA-1567
Provide configurable encryption support for internode communication
Resolution: Fixed
Fix Version/s: 0.8 beta 1
Resolved: 19/Jan/11 18:11
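In versions where that ticket has landed, the relevant cassandra.yaml sections look roughly like this (a sketch using the option names of later releases; internode encryption arrived in 0.8 and client-side encryption later, and the keystore paths and passwords are placeholders):

server_encryption_options:
    # all / none / dc / rack -- "all" encrypts every node-to-node connection
    internode_encryption: all
    keystore: conf/.keystore          # placeholder path
    keystore_password: changeit       # placeholder password
    truststore: conf/.truststore
    truststore_password: changeit

client_encryption_options:
    enabled: true                     # encrypt client-to-node traffic
    keystore: conf/.keystore
    keystore_password: changeit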
The strategy I employ is to have Apache Cassandra nodes communicate through a site-to-site VPN tunnel.
Specific configurations for the cassandra.yaml file:
listen_address: 10.x.x.x # VPN network IP
rpc_address: 172.16.x.x # non-VPN network for client access (although I actually leave it blank so that it listens on all interfaces)
The benefit of this approach is that you can deploy Apache Cassandra to many different environments and stay provider-agnostic: for example, hosting nodes in various Amazon EC2 environments, in your own physical data center, and a few others under your desk!
Cost an issue preventing you from looking into this approach? Check out Vyatta ...
As KajMagnus pointed out, there is a JIRA ticket resolved and available in the stable version of Apache Cassandra: https://issues.apache.org/jira/browse/CASSANDRA-1567, which enables you to accomplish this via TLS/SSL. But there are a few ways to accomplish what you would like.
Finally, if you want to host your instances on Amazon EC2, region-to-region communication can be problematic, and although there is a patch available in 1.x.x, is it really the right approach? I have found the VPN approach reduces latency between nodes in different regions and still maintains the necessary level of security.
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Running-across-multiple-EC2-regions-td6634995.html
Finally -- part 2 --
If you want to secure client-to-server communications, have your clients (web servers) communicate through the same VPN. The configuration I have:
Front-end web servers communicate via the internal network with the application servers
Application servers sit on their own internal network and the VPN network, and communicate with the Data Layer via the VPN tunnel and with each other on the internal network
The Data Layer exists on its own network per Data Centre / Rack and receives requests via the VPN network
Node-to-node (gossip) communication can be secured per the issue above. Client and server will both soon support Kerberos (in Hector master as of commit https://github.com/rantav/hector/commit/08149a03c81b559cba5680d115943dbf334f58fa; it should hit the Cassandra side shortly).
