I'm working on a CloudFormation template which includes an RDS database, and I want to attach a security group to the RDS instance. There is a resource AWS::RDS::DBSecurityGroup where I would like to write my own ingress rules that allow MySQL traffic from the front-end instances by attaching the resource AWS::RDS::DBSecurityGroupIngress, but it doesn't show any properties like "FromPort", "ToPort", "Protocol", etc.
I'm unsure whether the above listed properties are supported or not.
From Working with DB Security Groups:
A DB security group controls network access to a DB instance that is not inside a VPC.
If you are using a VPC (which should always be the case unless your systems were set up many years ago), you should use an AWS::EC2::SecurityGroup to control security. It supports the properties you desire, e.g.:
"InstanceSecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Allow http to client host",
"VpcId" : {"Ref" : "myVPC"},
"SecurityGroupIngress" : [{
"IpProtocol" : "tcp",
"FromPort" : "80",
"ToPort" : "80",
"CidrIp" : "0.0.0.0/0"
}],
"SecurityGroupEgress" : [{
"IpProtocol" : "tcp",
"FromPort" : "80",
"ToPort" : "80",
"CidrIp" : "0.0.0.0/0"
}]
}
}
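For the MySQL case in the question, a minimal sketch could look like the following, assuming the front-end instances belong to a security group referenced here by the hypothetical name FrontEndSecurityGroup; referencing the source security group is generally preferable to opening a CIDR range:
"DatabaseSecurityGroup" : {
    "Type" : "AWS::EC2::SecurityGroup",
    "Properties" : {
        "GroupDescription" : "Allow MySQL from the front-end instances",
        "VpcId" : {"Ref" : "myVPC"},
        "SecurityGroupIngress" : [{
            "IpProtocol" : "tcp",
            "FromPort" : "3306",
            "ToPort" : "3306",
            "SourceSecurityGroupId" : {"Ref" : "FrontEndSecurityGroup"}
        }]
    }
}
The group is then attached to the database through the VPCSecurityGroups property of the AWS::RDS::DBInstance resource.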
For our IoT solution we are trying to tackle a synchronization issue with the device twin.
In the normal situation the cloud is in charge. The cloud sets a desired property in the IoT Hub device twin; the device gets a notification, changes the property on the device, and writes the reported property to signal that the device is in sync.
But in our case the user of the device can also change properties locally. In that case the reported property changes and is out of sync with the desired property.
How should we handle this? Update the desired property? Leave it as is?
Another case can be that properties are deleted from either side; see the attached picture.
Written use cases
Here is an example of the JSON twin:
"desired" : {
"recipes" : {
"recipe1" : {
"uri" : "blob.name.csv",
"version" : "1"
},{
"recipe2" : {
"uri" : "blob.name.csv",
"version" : "1"
},{
"recipe3" : {
"uri" : "blob.name.csv",
"version" : "1"
}
}
},
"reported" : {
"recipes" : {
"recipe1" : {
"uri" : "blob.name.csv",
"version" : "1"
},{
"recipe2" : {
"uri" : "blob.name.csv",
"version" : "3"
},{
"recipe3" : {
"uri" : "blob.name.csv",
"version" : "2"
}
}
I hope the question is clear. Thanks in advance.
Kind regards,
Marc
The approach to conflict resolution is specific to the business; it's not possible to define a universal rule. In some scenarios the user's intent is more important than the service's, and vice versa.
For instance, an employee working late wants an office temperature of 76F, while the automatic building management service wants a temperature of 70F out of hours; in this case the user wins (the desired property is discarded). In another example, an employee wants to enter the office building out of hours and turn on all the lights, but the building management service won't allow it (while a building admin would be allowed instead)... etc.
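To make a "user wins" rule concrete, here is a minimal Node.js sketch using the azure-iot-device SDK. The functions userChangeIsFresher and applyLocally are hypothetical placeholders for the device-side business logic, e.g. comparing local change timestamps against $metadata.$lastUpdated in the twin:
'use strict';
var Client = require('azure-iot-device').Client;
var Mqtt = require('azure-iot-device-mqtt').Mqtt;

// Hypothetical business rule: the local (user) change wins if it happened
// after the incoming desired change. How "fresher" is decided is up to the
// application, e.g. local timestamps vs. $metadata.$lastUpdated.
function userChangeIsFresher(name, desiredRecipe) { /* business logic */ return false; }
function applyLocally(name, desiredRecipe) { /* apply the recipe on the device */ }

var client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING, Mqtt);

client.open(function (err) {
    if (err) throw err;
    client.getTwin(function (err, twin) {
        if (err) throw err;
        // Fires on every desired-property change pushed by the cloud.
        twin.on('properties.desired', function (desired) {
            var reportedPatch = { recipes: {} };
            Object.keys(desired.recipes || {}).forEach(function (name) {
                if (userChangeIsFresher(name, desired.recipes[name])) {
                    // User wins: keep the local value; the back end sees the
                    // mismatch in reported and can withdraw its desired value.
                    return;
                }
                applyLocally(name, desired.recipes[name]);
                reportedPatch.recipes[name] = desired.recipes[name];
            });
            twin.properties.reported.update(reportedPatch, function (err) {
                if (err) console.error('could not update reported properties', err);
            });
        });
    });
});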
I have created an application using MongoDB + Node.js.
I have added a new user with the following roles in MongoDB:
{
"_id" : "testdb.testdbuser",
"user" : "testdbuser",
"db" : "testdb",
"roles" : [
{
"role" : "read",
"db" : "testdb"
},
{
"role" : "readWrite",
"db" : "testdb"
}
]
}
I started the server using the --auth option,
and also started the Node application using the MongoDB user's credentials.
But I am not able to read data from the user collection in the testdb database.
I am getting this error:
MongoError: not authorized for query on testdb.user
Any suggestions? Is there anything I am missing?
You need to assign the read role to the user testdbuser;
the testdbuser role does not include read access on non-system collections.
You can grant it like this:
db.grantRolesToUser(
"testdbuser",
[
{ role: "read", db: "testdb" }
]
)
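Once the role is in place, make sure the Node.js application authenticates against the right database. Here is a minimal sketch with the official mongodb driver (2.x-era API); the host and password are placeholders:
var MongoClient = require('mongodb').MongoClient;

// authSource tells the driver which database holds the user's credentials;
// here the user was created in testdb itself.
var url = 'mongodb://testdbuser:secret@localhost:27017/testdb?authSource=testdb';

MongoClient.connect(url, function (err, db) {
    if (err) throw err;
    db.collection('user').find({}).toArray(function (err, docs) {
        if (err) throw err;
        console.log(docs);
        db.close();
    });
});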
So recently we redesigned our MongoDB database cluster to use SSL and replica sets, in addition to the sharding we had already implemented. SSL wasn't too difficult to get working; we just needed to split up the private key and certificate, and then everything worked fine. However, getting my Node.js app to connect to both mongos instances is proving to be more difficult than I anticipated.
Before we implemented replica sets, we just had two shards, each of them running a mongos router, and in mongoose I gave it the following connection string:
mongodb://Host1:27017,Host2:27017/DatabaseName
Then, in the options object to the connection, I passed in the following:
{mongos: true}
This seemed to work just fine. However, after the replica sets were implemented, whenever I pass the mongos option, the application never connects. Our cluster is now set up so that there are 4 MongoDB servers in 2 replica sets of 2 servers each. The master in each replica set is also running a mongos router instance. I assumed I should be able to connect the same way as before; however, it never connects. If I create the connection using just 1 shard with no options, the application connects just fine. However, this is not ideal, as the whole point is to have redundancy among the router instances. Can anyone offer some insight here?
Here is the output of sh.status():
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("57571fc5bfe098f05bbbe370")
}
shards:
{ "_id" : "rs0", "host" : "rs0/mongodb-2:27018,mongodb-3:27018" }
{ "_id" : "rs1", "host" : "rs1/mongodb-4:27018,mongodb-5:27018" }
active mongoses:
"3.2.7" : 4
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "Demo", "primary" : "rs0", "partitioned" : true }
I was asked to output rs.config(); here it is from the first master node:
{
"_id" : "rs0",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "mongodb-2:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "mongodb-3:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("57571692c490a699f61e3784")
}
}
Alright, so I finally figured it out. I went through the logs on the server and saw that the client was trying to connect without SSL, so it kept getting booted by the server. This was confusing to me because I had set SSL in the server options and had the correct keys and cert bundle, as I was able to connect to a single instance just fine. Then I looked through the mongo driver options here. It turns out there are options you need to set for mongos itself regarding SSL. After setting these explicitly, I was able to connect.
In summary, this options object allowed me to connect:
var options = {
"server": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
},
"mongos": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
}
}
while this options object did not:
var options = {
"server": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
},
"mongos": true
}
I think the server object is probably redundant, but I left it in.
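For reference, here is a sketch of how that working options object would be wired into a mongoose connection (mongoose 4.x-era API; the certificate file names are placeholders):
var fs = require('fs');
var mongoose = require('mongoose');

// Certificate material is passed as Buffers; sslCA expects an array.
var sslCAbuffer = [fs.readFileSync('ca.pem')];
var sslCertbuffer = fs.readFileSync('client.crt');
var sslKeybuffer = fs.readFileSync('client.key');

var options = {
    server: { ssl: true, sslCA: sslCAbuffer, sslCert: sslCertbuffer, sslKey: sslKeybuffer },
    mongos: { ssl: true, sslCA: sslCAbuffer, sslCert: sslCertbuffer, sslKey: sslKeybuffer }
};

// Both mongos routers go in the connection string, as in the original setup.
mongoose.connect('mongodb://Host1:27017,Host2:27017/DatabaseName', options);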
I'm trying to create an EC2 instance that will use autoscaling, attached to a load balancer.
Unfortunately, I'm getting the error:
The availability zones of the specified subnets and the AutoScalingGroup do not match
However, this is my current Cloudformation script:
"ApiAutoScaling" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"VPCZoneIdentifier" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ],
"InstanceId" : {
"Ref" : "ApiEC2"
},
"MaxSize" : 3,
"MinSize" : 1,
"LoadBalancerNames" : [ "Api" ]
}
},
"ApiLoadBalancer" : {
"Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties" : {
"LoadBalancerName" : "Api",
"Listeners" : [
{
"InstancePort" : "80",
"InstanceProtocol" : "HTTP",
"LoadBalancerPort" : "80",
"Protocol" : "HTTP"
},
{
"InstancePort" : "80",
"InstanceProtocol" : "HTTP",
"LoadBalancerPort" : "443",
"Protocol" : "HTTPS",
"SSLCertificateId" : "arn:aws:iam::xxx"
}
],
"SecurityGroups" : [ "sg-a88444cc" ],
"Subnets" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ]
}
}
As you can see, my subnet list is the same for both my autoscaling group and my load balancer. Clearly I've misunderstood how this is supposed to work, but I can't work it out.
Try specifying the AvailabilityZones property for the auto scaling group. The default is for it to use all of them, so if your subnets only use a subset of the zones, you would get this error message.
(As pointed out in the comments, "AvailabilityZones" : { "Fn::GetAZs" : "" } should do the trick.)
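Applied to the template above, that might look like the following sketch; the zone list must match the zones the three subnets actually live in, so the names below are only illustrative (or use the Fn::GetAZs form noted above if the subnets cover every zone in the region):
"ApiAutoScaling" : {
    "Type" : "AWS::AutoScaling::AutoScalingGroup",
    "Properties" : {
        "AvailabilityZones" : [ "eu-west-1a", "eu-west-1b", "eu-west-1c" ],
        "VPCZoneIdentifier" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ],
        "InstanceId" : { "Ref" : "ApiEC2" },
        "MaxSize" : 3,
        "MinSize" : 1,
        "LoadBalancerNames" : [ "Api" ]
    }
}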
I have a Beanstalk app which has an app_name.elasticbeanstalk.com domain name by default.
I want a domain name like www.app_name.com that can be accessed from a browser, and I take the following steps:
Register the domain name app_name.com.
Set www.app_name.com as a CNAME to the ELB's public DNS.
In this way, I can access www.app_name.com from the browser.
But once the page is loaded, the URL suddenly changes to app_name.elasticbeanstalk.com.
I do not want to show app_name.elasticbeanstalk.com to anyone. Can I just use www.app_name.com? How?
Help me please.
You can do this by using Route53 and CloudFormation. To do this you would use the Elastic Beanstalk resource inside the CloudFormation template to create your Elastic Beanstalk stack. You would also use the Route53 resource to create your desired domain name. Then inside your Route53 resource you would create an alias that maps to your Elastic Beanstalk endpoint.
This might look something like:
"Resources" : {
"DNS" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com",
"Comment" : "CNAME alias targeted to Elastic Beanstalk endpoint.",
"RecordSets" : [
{
"Name" : "example.example.com",
"Type" : "CNAME",
"TTL" : "900",
"ResourceRecords" : [{ "Fn::GetAtt" : ["sampleEnvironment","EndpointURL"] }]
}]
}
},
"sampleApplication" : {
"Type" : "AWS::ElasticBeanstalk::Application",
"Properties" : {
"Description" : "AWS Elastic Beanstalk Ruby Sample Application",
"ApplicationVersions" : [{
"VersionLabel" : "Initial Version",
"Description" : "Version 1.0",
"SourceBundle" : {
"S3Bucket" : { "Fn::Join" : ["-", ["elasticbeanstalk-samples", { "Ref" : "AWS::Region" }]]},
"S3Key" : "ruby-sample.zip"
}
}],
"ConfigurationTemplates" : [{
"TemplateName" : "DefaultConfiguration",
"Description" : "Default Configuration Version 1.0 - with SSH access",
"SolutionStackName" : "64bit Amazon Linux running Ruby 1.9.3",
"OptionSettings" : [{
"Namespace" : "aws:autoscaling:launchconfiguration",
"OptionName" : "EC2KeyName",
"Value" : { "Ref" : "KeyName" }
}]
}]
}
},
"sampleEnvironment" : {
"Type" : "AWS::ElasticBeanstalk::Environment",
"Properties" : {
"ApplicationName" : { "Ref" : "sampleApplication" },
"Description" : "AWS Elastic Beanstalk Environment running Ruby Sample Application",
"TemplateName" : "DefaultConfiguration",
"VersionLabel" : "Initial Version"
}
}
},
More information on using CloudFormation resources can be found here, and sample templates can be found here.
CloudFormation makes interacting with resources dynamically extremely easy and clean... not to mention completely scripted :)