Solace PubSub+ Software Message Broker AMI, how do we get updates? - amazon-ami

I am running the Solace PubSub+ Software Message Broker Standard Edition (AMI) in one AWS VPC. Recently I deployed another instance in a second AWS VPC and noticed that the two version numbers are different.
May I know how to upgrade/update the AMI in the original AWS VPC to the latest version?
If I need to redeploy the new AMI to the older VPC, is there a quick way for me to save all the current settings, including all the authentication certificates, and push them to the new AMI?
Thank you.
Cheers~
Dylan

For upgrading the event broker on AWS, please follow the instructions at https://docs.solace.com/Solace-SW-Broker-Upgrade/AWS-Upgrade.htm.
The event broker settings, including the authentication certificates, will be available to the new broker when you follow the upgrade procedure.

Related

RabbitMQ support in Azure

I need to read and publish messages to a RabbitMQ instance from multiple app services on Azure.
Could anyone please suggest the Azure service that I should be using to host the RabbitMQ instance?
Check out RabbitMQ as a service at https://www.cloudamqp.com/.
It's available on Azure; the free plan is somewhat restricted in regions, but the paid plans are much better supported across Azure regions.
There is no managed RabbitMQ option available in Azure itself, so you can consider installing it in two ways:
Create individual Linux VMs, install RabbitMQ on each, and connect the RabbitMQ nodes installed in the VMs into a cluster.
Install the RabbitMQ Cluster package provided by Bitnami in Azure.
Either way, your Azure app services talk to the broker through a standard AMQP client, as sketched below.
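A minimal sketch of what that client code could look like from any of the app services, using the amqplib npm package; the broker URL, credentials, and queue name are placeholders, not values from the question:

```typescript
// Hedged sketch: publishing and consuming with amqplib (npm install amqplib).
// The connection URL and queue name below are illustrative placeholders.
import * as amqp from "amqplib";

async function main(): Promise<void> {
  // Works the same whether RabbitMQ runs on CloudAMQP, your own VMs, or a Bitnami cluster.
  const conn = await amqp.connect("amqps://user:password@my-rabbit.example.com/vhost");
  const channel = await conn.createChannel();

  const queue = "orders";
  await channel.assertQueue(queue, { durable: true });

  // Publish a message from one app service...
  channel.sendToQueue(queue, Buffer.from(JSON.stringify({ id: 1 })), { persistent: true });

  // ...and consume it from another.
  await channel.consume(queue, (msg) => {
    if (msg) {
      console.log("received:", msg.content.toString());
      channel.ack(msg);
    }
  });
}

main().catch(console.error);
```

Each app service would hold its own connection and channel; only the connection URL changes between the CloudAMQP, VM, and Bitnami hosting options.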

Is it mandatory to use the Cloud9 IDE to interact with AWS?

I would like to know whether it is mandatory to use the Cloud9 IDE to interact with Amazon Web Services, or whether it is enough to SSH to the AWS service from a local machine terminal.
What is the major difference between using the Cloud9 IDE and a local SSH terminal?
I would like to build a Hyperledger Fabric network, i.e., to create a Fabric network and provision a peer node in Amazon Managed Blockchain.
Here is the source where I came across Cloud9 IDE: https://github.com/aws-samples/non-profit-blockchain/blob/master/ngo-fabric/README.md, where they mentioned AWS Cloud9 IDE is one of the pre-requisites.
You don't need to use Cloud9 to connect with AWS services.
AWS provides several ways of connecting with them:
Web Management Console
Command Line Interface (CLI)
AWS SDKs
CloudFormation
REST API (which is used for example by Terraform)
To use the AWS CLI on your local computer, you need to configure it with the AWS Access Key ID and Secret Access Key of an IAM user that has programmatic access. https://aws.amazon.com/cli/
AWS Cloud9 comes with the AWS CLI preinstalled and a preconfigured IAM role associated with it: https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html
From an AWS perspective, the IAM role associated with Cloud9 has less access than the IAM administrator user you would probably create for the AWS CLI on your local computer.
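As an illustration of the "AWS SDKs" option above, here is a minimal sketch using the AWS SDK for JavaScript v3 that simply checks which identity the configured credentials resolve to; the region is an example value. The same code works on a local machine set up with `aws configure` and inside Cloud9 with its preconfigured role:

```typescript
// Hedged sketch: verifying programmatic access with the AWS SDK for JavaScript v3.
// Credentials are picked up from the environment, ~/.aws/credentials, or the Cloud9 role.
import { STSClient, GetCallerIdentityCommand } from "@aws-sdk/client-sts";

async function whoAmI(): Promise<void> {
  const sts = new STSClient({ region: "us-east-1" }); // region is an example value
  const identity = await sts.send(new GetCallerIdentityCommand({}));
  console.log(`Account: ${identity.Account}, ARN: ${identity.Arn}`);
}

whoAmI().catch(console.error);
```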

How does one deploy multiple micro-services in Node on a single AWS EC2 instance?

We are pretty new to AWS and looking to deploy multiple services into one EC2 instance.
Each micro-service is developed in its own repository.
Each service will have its own endpoint URL
Services may talk to each other
Services can be updated/deployed separately
Do we need a beanstalk for each? I hope not.
Thank you in advance
The way we tackled a similar issue at our workplace was to leverage the multi-container Docker platform supported by Elastic Beanstalk in most AWS regions.
In brief, we had a dedicated repository for each of our services in ECR (Elastic Container Registry), to which the different versioned images were pushed by a deploy script.
Once that is configured and set up, all you need to deploy is a Dockerrun.aws.json file, which lists all the applications you want to run as part of the Docker cluster on one EC2 instance (make sure it is big enough to handle multiple applications). This file is also where you declare links between applications (so they can talk to one another), port mappings, logging drivers and log groups (yes, we used AWS CloudWatch for logging), and many other fields. The JSON is very similar to the docker-compose.yml you would use to bring up your stack for local development and testing.
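To make that structure concrete, here is a hedged sketch of a minimal two-container Dockerrun.aws.json (version 2), written out by a small TypeScript helper so the fields are easy to see. The ECR image URIs, service names, ports, memory sizes, and the api-to-auth link are illustrative placeholders, not the original setup:

```typescript
// Hedged sketch: generating a minimal multi-container Dockerrun.aws.json (version 2).
// All image URIs, names, ports, and memory values are placeholders.
import { writeFileSync } from "fs";

const dockerrun = {
  AWSEBDockerrunVersion: 2,
  containerDefinitions: [
    {
      name: "api",
      image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:1.0.0", // ECR image URI (placeholder)
      essential: true,
      memory: 256,
      portMappings: [{ hostPort: 80, containerPort: 3000 }],
      links: ["auth"], // lets "api" reach "auth" by hostname
    },
    {
      name: "auth",
      image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/auth:1.0.0",
      essential: true,
      memory: 256,
      portMappings: [{ hostPort: 8080, containerPort: 3001 }],
    },
  ],
};

// The resulting file is what you deploy to the Elastic Beanstalk environment.
writeFileSync("Dockerrun.aws.json", JSON.stringify(dockerrun, null, 2));
```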
I would suggest checking out the sample configuration that Amazon provides for more information. I also found the Docker documentation to be pretty helpful in this regard.
Hope this helps!!
It is not clear whether you have a particular tool in mind. If you are already using a tool to deploy a single micro-service, deploying multiple ones should work the same way.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as each service runs on its own path and port, it should be fine.
Each service will have its own endpoint URL
You can use nginx as a reverse proxy to route requests arriving on port 80 to the port of the required micro-service.
Services may talk to each other
This again should not be an issue. Services can either call each other directly on their port numbers, or via the fully qualified name and back through nginx.
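As a hedged sketch of that setup, here are two tiny Node services listening on their own ports; the service names, ports, and paths are placeholders. nginx location blocks would then map, say, /users to port 3001 and /orders to port 3002:

```typescript
// Hedged sketch: two independent Node services on separate ports on one EC2 instance.
// In practice each lives in its own repository and is deployed/restarted on its own;
// they are started together here only for brevity.
import * as http from "http";

function startService(name: string, port: number): void {
  http
    .createServer((req, res) => {
      // A service can also call its neighbour directly, e.g. http://localhost:3002/orders
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ service: name, path: req.url }));
    })
    .listen(port, () => console.log(`${name} listening on ${port}`));
}

startService("users", 3001);
startService("orders", 3002);
```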

What tools can I use to migrate infra from AWS to Azure automatically?

I have my Application running on AWS containing component as:
Multiple EC2 Instances (3 RHEL as Application Server, 1 Ubuntu as a File Server, 1 Ubuntu as a CronJob Server, 1 Windows as Bastion).
MySQL RDS Instance.
Barracuda WAF as an Instance (Implemented from Marketplace).
Route 53.
Now I want to migrate to Azure. Is there any tool available (free or paid) with which I can migrate the whole infrastructure?
I know there are separate steps to move each type of resource individually, like ASR for VMs. But I want to know whether there is any standalone tool that will do it all for me, with all the data. If not, what are the best steps for migrating each resource separately?
Tools are good, but they are not magical; we can also follow some best practices to migrate resources from AWS to Azure.
1) Multiple EC2 Instances (3 RHEL as Application Server, 1 Ubuntu as a File Server, 1 Ubuntu as a CronJob Server, 1 Windows as Bastion)
For the Windows and Red Hat Enterprise Linux instances on EC2, you can migrate VMs from AWS to Azure with Azure Site Recovery.
However, these EC2 instances should be running the 64-bit version of Windows Server 2008 R2 SP1 or later, Windows Server 2012, Windows Server 2012 R2, or Red Hat Enterprise Linux 6.7 (HVM virtualized instances only). The server must have only Citrix PV or AWS PV drivers; instances running RedHat PV drivers aren't supported.
For Ubuntu on EC2, you can refer to this blog to migrate the VMs from AWS to Azure.
2) MySQL RDS Instance
You can use common tools such as MySQL Workbench, Toad, or Navicat to remotely connect and import or export data into Azure Database for MySQL.
Use such tools on your client machine with an Internet connection to connect to Azure Database for MySQL. Use an SSL-encrypted connection for best security practices, as described in Configure SSL connectivity in Azure Database for MySQL.
You can create Amazon RDS read replicas for your database instance so that you don't need to shut down your database. However, I'm not sure how much downtime you will have, because this only covers the database.
See more details about Migrating your MySQL database by using import and export in this document.
There is also a blog for this.
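After the import, application code simply points at the new server. A hedged sketch of an SSL connection to Azure Database for MySQL using the mysql2 package; the server name, user, and database are placeholders (the user@servername format applies to the single-server offering):

```typescript
// Hedged sketch: connecting to Azure Database for MySQL over SSL after the import,
// using the mysql2 package. Server name, user, and database are placeholders.
import * as mysql from "mysql2/promise";

async function check(): Promise<void> {
  const conn = await mysql.createConnection({
    host: "myserver.mysql.database.azure.com",
    user: "myadmin@myserver",          // placeholder admin login
    password: process.env.MYSQL_PASSWORD,
    database: "appdb",
    ssl: { rejectUnauthorized: true }, // enforce the SSL connection the docs recommend
  });
  const [rows] = await conn.query("SELECT VERSION() AS version");
  console.log(rows);
  await conn.end();
}

check().catch(console.error);
```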
3) Barracuda WAF as an Instance (Implemented from Marketplace)
Barracuda WAF is also available in the Azure Marketplace. You can just go to the Azure portal and search for Barracuda WAF; you will see there are several Barracuda WAF offerings to choose from.
4) Route 53
On Azure, you can use Azure DNS for this. You can refer to this blog for details on how to delegate a DNS domain from AWS to Azure.
Hope this helps!
Sure, export and import will work, but it can mean significant downtime depending on the size of the data.
If you want zero downtime, you should first create a read replica from AWS to Azure and then promote the read replica to master.
I think the best bet would be to do it on your own, so that you understand how it works, which will further enhance your ability to troubleshoot.
That being said, just as AWS has CloudFormation, Azure has Azure Resource Manager: you can create a template in JSON, as you do in AWS CloudFormation, and deploy it. For example:
Where CloudFormation has AWS::EC2::Instance, Azure has Microsoft.Compute/virtualMachines.
You can refer to this very informative blog post and the Azure documentation for the same.
Hope this helps!

Consuming Amazon SQS (AMQP) from Azure

The need has come up to consume data from a 3rd party that has an Amazon SQS instance set up on top of the AMQP protocol. They have given us the following:
queue name
user name
password
port
virtualhost
host
We are a cloud-born company in which we host everything in the Azure cloud, e.g. web services, web apps, databases, etc.
I would like to find out the following:
What "service" should I design or develop on from Azure that can consume messages from an Amazon SQS?
If Azure Service Bus supports AMQP 1.0 and Amazon SQS supports AMQP 0.9.3, can this be a plausible path?
I guess my question is more related on how to architect my solution. I know there are frameworks like RabbitMQ, but would like to avoid the VM path. If solutions like RabbitMQ are the way to go, can only the "consumer" pieces be utilized and not the "server" pieces of RabbitMQ implemented?
Any and all advice will be greatly appreciated.
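If the endpoint really does speak AMQP 0-9-1 (which the virtualhost, port, and queue details suggest), then only a client library is needed on the consuming side, not a broker. A hedged sketch of a consumer-only client using amqplib, runnable from an Azure WebJob, Function, or container; every value below is a placeholder for the details the 3rd party provided:

```typescript
// Hedged sketch: a consumer-only AMQP 0-9-1 client built from the provided details
// (host, port, virtualhost, user, password, queue). No broker is run on our side.
// All values are placeholders.
import * as amqp from "amqplib";

async function consume(): Promise<void> {
  const conn = await amqp.connect({
    protocol: "amqps",
    hostname: "broker.thirdparty.example.com",
    port: 5671,
    vhost: "their-vhost",
    username: "their-user",
    password: process.env.AMQP_PASSWORD,
  });
  const channel = await conn.createChannel();

  await channel.consume("their-queue", (msg) => {
    if (msg) {
      // Forward the payload into our own Azure-side pipeline (Service Bus, storage, etc.) here.
      console.log("received:", msg.content.toString());
      channel.ack(msg);
    }
  });
}

consume().catch(console.error);
```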