AWS Auto Scaling Load Balancing with CloudFormation

I'm trying to create an EC2 instance that will use auto scaling, attached to a load balancer.
Unfortunately, I'm getting the error:
The availability zones of the specified subnets and the AutoScalingGroup do not match
However, this is my current CloudFormation script:
"ApiAutoScaling" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"VPCZoneIdentifier" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ],
"InstanceId" : {
"Ref" : "ApiEC2"
},
"MaxSize" : 3,
"MinSize" : 1,
"LoadBalancerNames" : [ "Api" ]
}
},
"ApiLoadBalancer" : {
"Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties" : {
"LoadBalancerName" : "Api",
"Listeners" : [
{
"InstancePort" : "80",
"InstanceProtocol" : "HTTP",
"LoadBalancerPort" : "80",
"Protocol" : "HTTP"
},
{
"InstancePort" : "80",
"InstanceProtocol" : "HTTP",
"LoadBalancerPort" : "443",
"Protocol" : "HTTPS",
"SSLCertificateId" : "arn:aws:iam::xxx"
}
],
"SecurityGroups" : [ "sg-a88444cc" ],
"Subnets" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ]
}
}
As you can see, my subnet list is the same for both my autoscaling group and my load balancer. Clearly I've misunderstood how this is supposed to work, but I can't work it out.

Try specifying the AvailabilityZones property for the auto scaling group. The default is for it to use all of them, so if your subnets only cover a subset of the zones, you would get this error message.
(As pointed out in the comments, "AvailabilityZones" : { "Fn::GetAZs" : "" } should do the trick.)
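A minimal sketch of the group above with that property added, as the comment suggests ("Fn::GetAZs" : "" resolves to the availability zones of the stack's own region):
"ApiAutoScaling" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "AvailabilityZones" : { "Fn::GetAZs" : "" },
    "VPCZoneIdentifier" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ],
    "InstanceId" : { "Ref" : "ApiEC2" },
    "MaxSize" : 3,
    "MinSize" : 1,
    "LoadBalancerNames" : [ "Api" ]
  }
}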

Related

How to detect access denied/unauthorized activity logs in Azure?

My objective is to detect actions performed by users that resulted in an access denied or unauthorized error, using activity logs.
To detect errors I use the "resultType" field. When it is "Failure", I know that the record is an error. I want to go one step further and filter those which are "access denied" or "unauthorized" error records.
I have considered the following fields as potential candidates so far, but haven't found any relevant information in them:
resultDescription
properties.statusCode
Below is a sample schema of the activity log we get on our end. The schema is such because we stream our activity log to a storage account (https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log-schema#schema-from-storage-account-and-event-hubs):
When streaming the Azure Activity log to a storage account or event hub, the data follows the resource log schema.
{
  "callerIpAddress" : "0.0.0.0",
  "resourceGroup" : "group",
  "resourceId" : "dummy",
  "level" : "Information",
  "production" : false,
  "operationName" : "MICROSOFT.WEB/DUMMY",
  "ingestTime" : "time",
  "resultSignature" : "Succeeded.OK",
  "accountId" : "dummyId",
  "identity" : {
    "authorization" : {
      "evidence" : {
        "roleAssignmentScope" : "group",
        "role" : "dummy",
        "roleDefinitionId" : "dummy",
        "roleAssignmentId" : "dummy",
        "principalId" : "dummy",
        "principalType" : "dummy"
      },
      "scope" : "dummy",
      "action" : "dummy"
    },
    "claims" : {
      "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" : "dummy",
      "appid" : "dummy",
      "http://schemas.microsoft.com/identity/claims/objectidentifier" : "dummy"
    }
  },
  "customerID" : "dummy",
  "correlationId" : "dummy",
  "time" : "dummy",
  "category" : "dummy",
  "resultType" : "Failure",
  "resultDescription": "dummy",
  "durationMs" : "dummy",
  "properties" : {
    "eventCategory" : "Administrative",
    "statusCode" : "OK"
  }
}
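To make the goal concrete, here is a sketch of the check I have plus the step I am trying to add, over records exported to the storage account. The "Unauthorized" and "Forbidden" status names are an assumption (by analogy with the "OK" inside "Succeeded.OK" above), not values confirmed by this sample:
// records: parsed activity-log JSON entries read from the storage account (assumed).
function isAccessDenied(record) {
  if (record.resultType !== "Failure") return false;   // the existing error check
  var signature = record.resultSignature || "";        // e.g. "Succeeded.OK"
  var status = (record.properties || {}).statusCode || "";
  // Assumption: authorization failures surface as Unauthorized/Forbidden in one of these.
  return /Unauthorized|Forbidden/i.test(signature) || /Unauthorized|Forbidden/i.test(status);
}

var denied = records.filter(isAccessDenied);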

MongoDB taking too much time for old entries

I am new to MongoDB and I am facing an issue. I have hundreds of millions of documents in my collection, and I am trying to find a single entry using a findOne({}) command. When I look up recent entries the response comes in milliseconds, but when I try to fetch older entries (around the 600 millionth document) it takes around 2 minutes in the mongo shell, and my Node server gives
{ MongoError : connection 1 to 127.0.0.1:27017 timed out }
and sends an empty response. Can anyone tell me what I should do to resolve this issue? Thanks in advance.
explain gives me:
db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
{
  "queryPlanner" : {
    "plannerVersion" : 1,
    "namespace" : "meanApp.contacts",
    "indexFilterSet" : false,
    "parsedQuery" : {
      "phoneNumber" : {
        "$eq" : "9165900137"
      }
    },
    "winningPlan" : {
      "stage" : "COLLSCAN",
      "filter" : {
        "phoneNumber" : {
          "$eq" : "9165900137"
        }
      },
      "direction" : "forward"
    },
    "rejectedPlans" : [ ]
  },
  "executionStats" : {
    "executionSuccess" : true,
    "nReturned" : 1,
    "executionTimeMillis" : 321188,
    "totalKeysExamined" : 0,
    "totalDocsExamined" : 495587806,
    "executionStages" : {
      "stage" : "COLLSCAN",
      "filter" : {
        "phoneNumber" : {
          "$eq" : "9165900137"
        }
      },
      "nReturned" : 1,
      "executionTimeMillisEstimate" : 295230,
      "works" : 495587808,
      "advanced" : 1,
      "needTime" : 495587806,
      "needYield" : 0,
      "saveState" : 3871779,
      "restoreState" : 3871779,
      "isEOF" : 1,
      "invalidates" : 0,
      "direction" : "forward",
      "docsExamined" : 495587806
    }
  },
  "serverInfo" : {
    "host" : "li1025-15.members.linode.com",
    "port" : 27017,
    "version" : "3.2.16",
    "gitVersion" : "056bf45128114e44c5358c7a8776fb582363e094"
  },
  "ok" : 1
}
As indicated in the explain plan results, the current query is doing a collection scan (COLLSCAN). This means it has to scan every document in the collection to produce the match, and you have about half a billion documents.
Try adding the index below. It might take a while to create:
db.contacts.createIndex( { phoneNumber: 1 }, { background: true } )
Run the query once the index creation is successful and you should see a dramatic improvement in performance. To be certain the index got picked up, try explain again; it should no longer say COLLSCAN.
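For illustration only (not actual output from this server), with the index in place the winning plan should look roughly like this, with an IXSCAN feeding a FETCH instead of a COLLSCAN:
"winningPlan" : {
  "stage" : "FETCH",
  "inputStage" : {
    "stage" : "IXSCAN",
    "keyPattern" : { "phoneNumber" : 1 },
    "indexName" : "phoneNumber_1",
    "direction" : "forward",
    "indexBounds" : { "phoneNumber" : [ "[\"9165900137\", \"9165900137\"]" ] }
  }
}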

Editable document fields in Elasticsearch

I have documents that contain an object whose attributes are editable (add/delete/edit) at runtime.
{
  "testIndex" : {
    "mappings" : {
      "documentTest" : {
        "properties" : {
          "typeTestId" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "createdDate" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "designation" : {
            "type" : "string",
            "fields" : {
              "raw" : {
                "type" : "string",
                "index" : "not_analyzed"
              }
            }
          },
          "id" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "modifiedDate" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "stuff" : {
            "type" : "string"
          },
          "suggest" : {
            "type" : "completion",
            "analyzer" : "simple",
            "payloads" : true,
            "preserve_separators" : true,
            "preserve_position_increments" : true,
            "max_input_length" : 50,
            "context" : {
              "typeTestId" : {
                "type" : "category",
                "path" : "typeTestId",
                "default" : [ ]
              }
            }
          },
          "values" : {
            "properties" : {
              "Att1" : {
                "type" : "string"
              },
              "att2" : {
                "type" : "string"
              },
              "att400" : {
                "type" : "date",
                "format" : "dateOptionalTime"
              }
            }
          }
        }
      }
    }
  }
}
The values field is an object that can be edited through typeTest, so if I change something in typeTest it should be reflected here. If I create a new field there's no problem, but it should also be possible to edit or delete existing fields in typeTest. For example, if I delete values.att1, all documentTest documents should lose it, and the mapping should be updated as well.
From what I saw, we cannot do this without reindexing. So for now my solution is to remove the fields in Elasticsearch, just like mentioned in this question, and have a worker do the reindexing from time to time if needed.
This does not seem like a "solution" to me. Is there a better way to have documents of this type in Elasticsearch, with this flexibility, without having to reindex from time to time?
You can use the Update API to delete, add or modify a field.
The issue is that documents are immutable in Elasticsearch, so when you make changes with the Update API it marks the old document as deleted and adds a new one with the updates.
The deletion and the creation of the new document are transparent to you, so you do not have to reindex or do anything else. The downside is that if you are planning to modify a very large number of documents (like an update query to modify 5 million documents), it will be very I/O intensive for the nodes.
BTW, this also applies to deletions.
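As a sketch of what removing one of those attributes might look like with the Update API, using the index, type and field names from the mapping above (the exact script syntax differs between Elasticsearch versions, so treat this as an assumption to verify against yours):
POST /testIndex/documentTest/1/_update
{
  "script" : "ctx._source.values.remove(\"Att1\")"
}
Note that this removes the field from the document source only; the mapping itself keeps the field until the index is recreated, which is why the reindexing concern in the question still applies to the mapping.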

MongoDB remove the lowest score, Node.js

I am trying to remove the lowest homework score.
I tried this,
var a = db.students.find({"scores.type":"homework"}, {"scores.$":1}).sort({"scores.score":1})
but how can I remove this set of data?
I have 200 similar documents like the one below.
{
  "_id" : 148,
  "name" : "Carli Belvins",
  "scores" : [
    {
      "type" : "exam",
      "score" : 84.4361816750119
    },
    {
      "type" : "quiz",
      "score" : 1.702113040528119
    },
    {
      "type" : "homework",
      "score" : 22.47397850465176
    },
    {
      "type" : "homework",
      "score" : 88.48032660881387
    }
  ]
}
You are trying to remove an element, but the statement you provided only finds it.
Use db.students.remove(<query>) instead. Full documentation is here.
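Note that remove() deletes whole documents. If the goal is instead to drop only the lowest homework score from a document's scores array while keeping the document, a $pull update is one way to do it. A sketch for the sample document above (a different technique than the remove() suggested here):
// Find the lowest homework score in one document, then pull that array element.
var doc = db.students.findOne({ "_id" : 148 });
var lowest = null;
doc.scores.forEach(function (s) {
  if (s.type === "homework" && (lowest === null || s.score < lowest)) {
    lowest = s.score;
  }
});
db.students.update(
  { "_id" : doc._id },
  { "$pull" : { "scores" : { "type" : "homework", "score" : lowest } } }
);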

How to set up a customized DNS name for an Elastic Beanstalk app

I have a Beanstalk app which has an app_name.elasticbeanstalk.com domain name by default.
I want a domain name like www.app_name.com that can be accessed by a browser, so I took the following steps.
Register the domain name app_name.com.
Set www.app_name.com as a CNAME of the ELB's public DNS.
This way I can access www.app_name.com from the browser.
But once the page loads, the URL suddenly changes to app_name.elasticbeanstalk.com.
I do not want to show app_name.elasticbeanstalk.com to anyone. Can I just use www.app_name.com? How?
Help me please.
You can do this using Route53 and CloudFormation. You would use the Elastic Beanstalk resources inside the CloudFormation template to create your Elastic Beanstalk stack, and the Route53 resource to create your desired domain name. Then, inside your Route53 resource, you would create a record set (a CNAME in the example below) that maps to your Elastic Beanstalk endpoint.
This might look something like:
"Resources" : {
"DNS" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com",
"Comment" : "CNAME alias targeted to Elastic Beanstalk endpoint.",
"RecordSets" : [
{
"Name" : "example.example.com",
"Type" : "CNAME",
"TTL" : "900",
"ResourceRecords" : [{ "Fn::GetAtt" : ["sampleEnvironment","EndpointURL"] }]
}]
}
},
"sampleApplication" : {
"Type" : "AWS::ElasticBeanstalk::Application",
"Properties" : {
"Description" : "AWS Elastic Beanstalk Ruby Sample Application",
"ApplicationVersions" : [{
"VersionLabel" : "Initial Version",
"Description" : "Version 1.0",
"SourceBundle" : {
"S3Bucket" : { "Fn::Join" : ["-", ["elasticbeanstalk-samples", { "Ref" : "AWS::Region" }]]},
"S3Key" : "ruby-sample.zip"
}
}],
"ConfigurationTemplates" : [{
"TemplateName" : "DefaultConfiguration",
"Description" : "Default Configuration Version 1.0 - with SSH access",
"SolutionStackName" : "64bit Amazon Linux running Ruby 1.9.3",
"OptionSettings" : [{
"Namespace" : "aws:autoscaling:launchconfiguration",
"OptionName" : "EC2KeyName",
"Value" : { "Ref" : "KeyName" }
}]
}]
}
},
"sampleEnvironment" : {
"Type" : "AWS::ElasticBeanstalk::Environment",
"Properties" : {
"ApplicationName" : { "Ref" : "sampleApplication" },
"Description" : "AWS Elastic Beanstalk Environment running Ruby Sample Application",
"TemplateName" : "DefaultConfiguration",
"VersionLabel" : "Initial Version"
}
}
},
More information on using CloudFormation resources can be found here, and sample templates can be found here.
CloudFormation makes interacting with resources dynamically extremely easy and clean... not to mention completely scripted :)
