Spring Cloud Stream Kinesis configuration - Amazon

I am trying to integrate Spring Cloud Stream Kinesis in my app, but I can't find all the configuration options in the manual. I have seen this link:
https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis/blob/master/spring-cloud-stream-binder-kinesis-docs/src/main/asciidoc/overview.adoc
There are a few properties mentioned, like:
spring.cloud.stream.instanceCount=
I would like to know how I can set some of the properties which I can't see in the documentation:
hostname
port number
access key
secret key
username
I am looking for something like:
spring.cloud.stream.binder.host=
spring.cloud.stream.binder.port=
spring.cloud.stream.binder.access_key=

There is no host or port for AWS services; you only connect to AWS via auto-configuration. The Spring Cloud Kinesis Binder is fully based on the auto-configuration provided by the Spring Cloud AWS project, so you need to follow its documentation on how to configure the accessKey and secretKey: https://cloud.spring.io/spring-cloud-static/spring-cloud-aws/2.1.2.RELEASE/single/spring-cloud-aws.html#_spring_boot_auto_configuration:
cloud.aws.credentials.accessKey
cloud.aws.credentials.secretKey
You may also consider using cloud.aws.region.static if you don't run your application in the EC2 environment.
There is no more magic than the standard AWS connection settings and the auto-configuration provided by Spring Cloud AWS.
Or you can rely on the standard AWS credentials file instead.
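Putting that together, a minimal application.properties sketch with placeholder values (the property names are the ones documented by Spring Cloud AWS above):

# Static credentials picked up by the Spring Cloud AWS auto-configuration
cloud.aws.credentials.accessKey=YOUR_ACCESS_KEY
cloud.aws.credentials.secretKey=YOUR_SECRET_KEY
# Region is needed when the application does not run on EC2
cloud.aws.region.static=us-east-1

The binder-specific settings (such as spring.cloud.stream.instanceCount) stay under spring.cloud.stream.*, while the connection itself is configured only through these cloud.aws.* properties or the standard AWS credentials file.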

Related

Spring Integration AWS Local SQS

I want to implement spring-integration-aws to send and receive messages with SQS. I am looking at Localstack and would like to know the recommendation of the Spring team.
Which tool/API should I use for a local setup of Spring Integration flows for SQS inbound and outbound adapters?
Also, will there be examples of AWS in spring-integration-samples in the future? I am looking for an example with XML config that reads the AWS config from credentials and sends and receives messages via the outbound adapters.
Not sure what recommendation you expect from us, but I see an answer in your own question - Localstack: https://github.com/localstack/localstack.
In the project tests we indeed use this tool via a Docker container:
https://github.com/spring-projects/spring-integration-aws/blob/master/src/test/java/org/springframework/integration/aws/lock/DynamoDbLockRegistryTests.java#L62
We don't have such a test against SQS, but the configuration technique is similar.
I recall hearing that the Testcontainers project can be used for testing AWS services locally as well: https://www.testcontainers.org/modules/localstack/
We don't have resources to write samples for this project.
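For illustration only, a hedged sketch of what such an SQS test setup could look like with the Testcontainers LocalStack module and the AWS SDK v1 client (the queue name is made up, and the Spring Integration flow wiring is omitted):

import org.testcontainers.containers.localstack.LocalStackContainer;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClientBuilder;

public class SqsLocalStackSketch {

    public static void main(String[] args) {
        // Start LocalStack in a Docker container with only the SQS service enabled
        try (LocalStackContainer localstack =
                new LocalStackContainer().withServices(LocalStackContainer.Service.SQS)) {
            localstack.start();

            // Point the SQS client at the container instead of real AWS
            AmazonSQSAsync sqs = AmazonSQSAsyncClientBuilder.standard()
                    .withEndpointConfiguration(
                            localstack.getEndpointConfiguration(LocalStackContainer.Service.SQS))
                    .withCredentials(localstack.getDefaultCredentialsProvider())
                    .build();

            // This client can then back the SQS inbound/outbound channel adapters;
            // here it is exercised directly as a smoke test
            String queueUrl = sqs.createQueue("test-queue").getQueueUrl();
            sqs.sendMessage(queueUrl, "hello from LocalStack");
            System.out.println(sqs.receiveMessage(queueUrl).getMessages());
        }
    }
}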

Google Cloud Platform to AWS migration [PostgreSQL 9.6]

I'm working on a Django (2.1) project that is hosted on Google Cloud Platform with a PostgreSQL (9.6) database of roughly 7 GB.
The documentation doesn't cover this specific version of PostgreSQL, so I'm stuck on the endpoint configuration needed to connect the old database and perform the instance replication with AWS DMS (Database Migration Service).
I've followed this tutorial, but there are no details about the endpoint configuration. There is nothing in the documentation either (I've spent a lot of time searching), only a few other specific databases like Oracle and MySQL.
I need to know how to configure the Source and Target endpoints of the instance on AWS DMS, so I can connect my database on GCP and start the replication.
I've found my answer by trial and error.
Actually the configuration is pretty straightforward, once I found out that I had not created the RDS instance first:
RDS - First you need to create the DB instance that will host your database. After creating it you can see its endpoint and port, e.g. endpoint your-database.xxxxxxxxxxxx.sa-east-1.rds.amazonaws.com, port 5432;
DMS - On the Database Migration Service panel, go to Replication Instances and create a new one. Set the VPC to the one you've created, or the default one if it works for you.
Source Endpoint - Configure it with the Google Cloud Platform IP set in your Django project's settings.py. The source endpoint reaches your DB on GCP through that IP;
Target Endpoint - Set this one to the RDS endpoint and port created in step 1;
Test connection.
After many attempts I completed my database migration successfully.
Hope this helps someone who's going through the same problems.
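For reference, the same endpoint setup can also be scripted with the AWS CLI; this is only a sketch with placeholder identifiers, hosts and credentials, assuming the databases are reachable from the replication instance:

# Source endpoint: the PostgreSQL 9.6 database on GCP, reached by its IP
aws dms create-endpoint \
  --endpoint-identifier gcp-postgres-source \
  --endpoint-type source \
  --engine-name postgres \
  --server-name GCP_DATABASE_IP \
  --port 5432 \
  --username DB_USER --password DB_PASSWORD --database-name DB_NAME

# Target endpoint: the RDS instance created in the first step
aws dms create-endpoint \
  --endpoint-identifier rds-postgres-target \
  --endpoint-type target \
  --engine-name postgres \
  --server-name your-database.xxxxxxxxxxxx.sa-east-1.rds.amazonaws.com \
  --port 5432 \
  --username DB_USER --password DB_PASSWORD --database-name DB_NAME

# The console's "Test connection" corresponds to:
aws dms test-connection \
  --replication-instance-arn REPLICATION_INSTANCE_ARN \
  --endpoint-arn ENDPOINT_ARN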

Implementation of Service Broker API for Cassandra and Aerospike in open source Cloud Foundry

I need to use Cassandra and Aerospike as services with open source Cloud Foundry. Service Broker implementations for PCF, i.e. Pivotal Cloud Foundry, are available as tiles for most DBs. Can these be used with open source Cloud Foundry? If yes, how? If not, how should I go about implementing the Service Broker API for my use case? Are there any already available?
Aerospike is available on PCF - https://discuss.aerospike.com/t/aerospike-now-available-on-pivotal-cloud-foundry-pcf-aerospike-blog-post-october-19-2016/3524
The write-up then refers to:
https://network.pivotal.io/products/aerospike

Is it possible to connect to a third-party websocket using Azure Functions? How to connect from Azure Functions to locally installed Kafka?

I am trying to use Azure Functions to connect to a websocket to get the data and send it to Kafka installed locally on my laptop, but I am unable to send to Kafka. Is there any way to send data from Azure Functions to a locally installed Kafka?
There is nothing in Azure Functions that will do that for you, so you need to send your messages to Kafka the same way you would from any other .NET code.
The steps:
Make sure Kafka is accessible from the internet, as @Dhinesh suggested.
Pick a Kafka client library to use (the official one is here, but there are others too).
Write a simple console app which sends some sample messages to your Kafka and make sure it works (see the sketch below).
Use exactly the same code inside your Azure Function body to send real messages to Kafka.
If you need something more elaborate than this basic guide, feel free to adjust your question.
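The answer above talks about .NET; purely as an illustration of the "simple console app" step, here is the same producer logic sketched with the official Apache Kafka Java client (the broker address and topic name are assumptions):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class KafkaSendSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        // The broker must be reachable from wherever this code runs,
        // i.e. from the internet if it runs inside an Azure Function
        props.put("bootstrap.servers", "my-kafka-host:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one sample message; the same call belongs in the Function body later
            producer.send(new ProducerRecord<>("websocket-data", "hello from the sample app"));
            producer.flush();
        }
    }
}
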
Azure App Service supports Virtual Networks (VNets). Functions run on top of an App Service, so your Functions can securely access your on-prem Kafka instance over VPN. For this to work you would need to:
Create a site-to-site or a point-to-site VPN using the VNet integration - instructions here: https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-site-to-site-create
Ensure your Function and App Service are inside this VNet - instructions here: https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-integrate-with-vnet
I hope this helps

Do I need to cache the IAM role credentials when using the AWS Node.JS SDK

We're using role-based IAM credentials in our AWS VPC. This means that you never pass keys to the AWS SDK client.
Previously we used the PHP SDK. Amazon specifically recommends caching the credentials when using role-based authentication with the PHP SDK:
https://docs.aws.amazon.com/aws-sdk-php/guide/latest/credentials.html#caching-iam-role-credentials
I'm now writing a Node.js application using the S3 client. I'm wondering if I need to cache the credentials (as with the PHP SDK), or is this something that the Node.js SDK does automatically for us?
The docs for the Node.js SDK do not specifically mention anything about caching role-based credentials:
http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html#Credentials_from_IAM_Roles_for_EC2_Instances
Thanks
No, you do not need to cache IAM role credentials when using the AWS Node.js SDK.
I believe that the recommendation to cache credentials when using PHP is related to the request/CGI model that PHP uses. Without caching, your per-request PHP process would have to call out to the EC2 instance metadata service to retrieve credentials. Not ideal if you're handling high load.
With Node.js, you have a Node process constantly running, and it can persist credentials, so it only needs to call out to the EC2 instance metadata service once to retrieve the initial credentials and then periodically to renew them when they are auto-rotated (every few hours, I believe).
As far as I can work out, unless you keep the client object around, the SDK will go back to the instance metadata service when it's instantiated again (except in the case where you instantiate a bunch of clients at the same time, in which case they all use the same instance metadata request event - odd).
i.e. cache your Amazon client objects (but this is not PHP, so you don't need an on-disk cache :) ).
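A minimal sketch of that advice with the v2 aws-sdk package (the region and the bucket listing are just placeholders): create the client once at module scope so every invocation reuses it, along with the role credentials it has already resolved.

// Created once at module load; the SDK resolves the IAM role credentials
// behind this client and refreshes them itself when they expire.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

// Reuse the same client in every request handler
async function listMyBuckets() {
  const result = await s3.listBuckets().promise();
  return result.Buckets.map(b => b.Name);
}

module.exports = { listMyBuckets };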
