How to fix a cross-domain (CORS) problem with CAS 6.5

I have a project with separate front end and back end, and I want to redirect to CAS from the backend, but I get a cross-domain (CORS) error in the browser (screenshot omitted).
I have tried to fix it by following the user guide: https://apereo.github.io/cas/6.5.x/services/Configuring-Service-Http-Security-Headers.html
Based on it, I configured the following properties, but it does not work.
# global config
cas.http-web-request.cors.enabled=true
cas.http-web-request.header.enabled=true
# client config
"properties" : {
"#class" : "java.util.HashMap",
"corsAllowedOrigins" : {
"#class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
"values" : [ "java.util.HashSet", [ "Access-Control-Allow-Origin" ] ]
},
"corsAllowedHeaders" : {
"#class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
"values" : [ "java.util.HashSet", [ "Access-Control-Allow-Origin" ] ]
}
}
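For reference, the documented format expects actual origins in corsAllowedOrigins and request header names in corsAllowedHeaders, rather than the "Access-Control-Allow-Origin" response header used above, which is likely why the config has no effect. A sketch of what the service properties might look like instead, assuming a hypothetical front end served from https://front.example.com:
"properties" : {
  "@class" : "java.util.HashMap",
  "corsAllowedOrigins" : {
    "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
    "values" : [ "java.util.HashSet", [ "https://front.example.com" ] ]
  },
  "corsAllowedHeaders" : {
    "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
    "values" : [ "java.util.HashSet", [ "*" ] ]
  }
}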

Related

Write test output to file in NightwatchJS

I want to write the actual test output of NightwatchJS tests (not the browser console) to a file. I can't seem to find any resource about this.
The log_path option is OK; it does log some things to the location you specify after the test, but it's not the same data as the actual test output.
Below is my nightwatch.json file:
{
  "src_folders" : [
    "tests",
    "tests/settings/general"
  ],
  "page_objects_path" : [
    "page_objects/backend",
    "page_objects/frontend",
    "page_objects/backend/settings/general"
  ],
  "globals_path" : "./nightwatch.globals.js",
  "webdriver" : {
    "start_process" : true,
    "log_path" : "./logs"
  },
  "test_settings" : {
    "default" : {
      "webdriver" : {
        "server_path" : "node_modules/.bin/chromedriver",
        "port" : 9515,
        "cli_args" : [ "--log", "debug" ]
      },
      "desiredCapabilities" : {
        "browserName" : "chrome",
        "acceptInsecureCerts" : true,
        "javascriptEnabled" : true,
        "acceptSslCerts" : true
      }
    },
    "firefox" : {
      "webdriver" : {
        "server_path" : "node_modules/.bin/geckodriver",
        "port" : 4444,
        "cli_args" : [ "--log", "debug" ]
      },
      "desiredCapabilities" : {
        "browserName" : "firefox",
        "acceptInsecureCerts" : true,
        "javascriptEnabled" : true,
        "acceptSslCerts" : true
      }
    }
  }
}
Hope you can help me on this.
Thanks in advance.
You could add '> tests_output/Test_filename.txt' after the command that runs the tests. For example, I have my package.json set up with a 'test' script that runs Nightwatch. My terminal input to save the output to a file would look like this:
npm test > tests_output/testRun100720.txt
This will place the output file, named testRun100720.txt, into the tests_output folder.
Can you provide a little more detail about what you mean by 'output'?
In the meantime, try adding "output_folder" : "reports/" to your JSON file; this should generate XML output from the Nightwatch logger, which may provide more detail.
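If redirecting stdout still doesn't capture what you need, Nightwatch also supports pointing its --reporter option at a custom reporter module. A minimal sketch, assuming the write(results, options, done) reporter interface (the exact shape of the results object varies by Nightwatch version):
// custom_reporter.ts - compile to JS, then run: npx nightwatch --reporter ./custom_reporter.js
import * as fs from 'fs';

module.exports = {
  // Nightwatch calls write() once, after all tests have finished.
  write(results: unknown, options: unknown, done: () => void): void {
    // Dump the raw results object so nothing from the run is lost;
    // assumes the tests_output folder already exists.
    fs.writeFileSync('tests_output/results.json', JSON.stringify(results, null, 2));
    done();
  }
};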

Firebase Database: Get only one child node, out of many child nodes

I am using the Firebase Realtime Database. I don't want to fetch all the child nodes of a particular parent node; I am only concerned with one particular child node, not its siblings. Fetching all the sibling nodes increases my Firebase billing, as an extra XXX MB of data is fetched. I am using the Node.js Admin SDK for this.
Here is a sample JSON:
{
  "phone" : {
    "shsjsj" : {
      "battery" : {
        "isCharging" : true,
        "level" : 0.25999999046325684,
        "updatedAt" : "2018-05-15 12:45:29"
      },
      "details" : {
        "deviceCodeName" : "sailfish",
        "deviceManufacturer" : "Google",
        "deviceName" : "Google Pixel"
      },
      "downloadFiles" : {
        "7bfb21ff683f8652ea390cd3a380ef53" : {
          "uploadedAt" : 1526141772270
        }
      },
      "token" : "cgcGiH9Orbs:APA91bHDT3mI5L3N62hqUT2LojqsC0IhwntirCd6x0zz1CmVBz6CqkrbC",
      "uploadServer" : {
        "createdAt" : 1526221336542
      }
    },
    "hshssjjs" : {
      "battery" : {
        "isCharging" : true,
        "level" : 0.25999999046325684,
        "updatedAt" : "2018-05-15 12:45:29"
      },
      "details" : {
        "deviceCodeName" : "sailfish",
        "deviceManufacturer" : "Google",
        "deviceName" : "Google Pixel"
      },
      "downloadFiles" : {
        "7bfb21ff683f8652ea390cd3a380ef53" : {
          "uploadedAt" : 1526141772270
        }
      },
      "token" : "cgcGiH9Orbs:APA91bH_oC18U56xct4dRuyw9qhI5L3N62hqUT2LojqsC0IhwntirCd6x0zz1CmVBz6CqkrbC",
      "uploadServer" : {
        "createdAt" : 1526221336542
      }
    }
  }
}
In the above sample JSON, I want to fetch phone -> $deviceId -> token for every device. Currently, I fetch the whole phone object and then iterate over all the phone IDs to get each token. This spikes my database download usage and increases billing. I only need the token of each device; the siblings of token are unnecessary.
All queries to the Realtime Database fetch everything under the requested location. There is no way to limit the result to certain children under that location. If you want only certain children of a location, but not everything under it, you'll have to query for each one separately. Or you can restructure or duplicate your data to support the specific queries you want to perform; duplication is common for NoSQL databases.
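For example, with the Node.js Admin SDK you can read just the token path for each device instead of the whole phone subtree. A sketch; it assumes admin.initializeApp() has already been called and that the device IDs are known (e.g. from a separate, lightweight index node):
import * as admin from 'firebase-admin';

// Reads only phone/<deviceId>/token for each device, so the battery,
// details, and downloadFiles siblings are never downloaded.
async function fetchTokens(deviceIds: string[]): Promise<string[]> {
  const db = admin.database();
  const snapshots = await Promise.all(
    deviceIds.map(id => db.ref(`phone/${id}/token`).once('value'))
  );
  return snapshots.map(snap => snap.val() as string);
}
Alternatively, duplicate each token into a flat tokens/<deviceId> node on write; then a single read of tokens returns everything you need.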

Analyzing apache access logs with elasticsearch Watcher

I am using the ELK Stack to analyze logs, and I need to analyze Apache access logs and detect anomalies in them. What can I analyze with Apache access logs, and how should I specify the conditions in the watch I PUT (via curl -XPUT) to Watcher?
If you haven't found it already, there's a decent tutorial at https://www.elastic.co/guide/en/watcher/watcher-1.0/watch-log-data.html. It provides a basic example of creating a log watch.
You can analyze/watch anything that you can query in Elasticsearch. It's just a matter of formatting the query with the correct JSON syntax. The guide for crafting the conditions is at https://www.elastic.co/guide/en/watcher/watcher-1.0/condition.html.
You'll also want to look at https://www.elastic.co/guide/en/watcher/watcher-1.0/actions.html to get an idea of the possible actions Watcher can take when a query meets a condition.
As for posting to Watcher: each watch is essentially a JSON object. Because watches can get pretty elaborate, I have found it's best to create a file for each watch you want to create, and post them like this:
curl -XPUT http://my_elasticsearch:9200/_watcher/watch/my_watch_name -d @/path/to/my_watch_name.json
my_watch_name.json should have these basic elements (as described in the first link above):
{
  "trigger" : { ... },
  "input" : { ... },
  "condition" : { ... },
  "actions" : { ... }
}
The actions section is going to be specific to your use case, but here's a basic example of the other sections that I'm using successfully:
{
  "trigger" : {
    "schedule" : { "interval" : "5m" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "logstash" ],
        "body" : {
          "query" : {
            "filtered" : {
              "query" : {
                "match" : { "message" : "error" }
              },
              "filter" : {
                "range" : { "@timestamp" : { "gte" : "now-5m" } }
              }
            }
          }
        }
      }
    }
  },
  "condition" : {
    "compare" : { "ctx.payload.hits.total" : { "gt" : 0 } }
  },
  "actions" : {
    ...
  }
}
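For completeness, a minimal actions block using Watcher's built-in logging action might look like the sketch below; email, webhook, and index actions follow the same pattern (see the actions link above), and the action name "log_error" here is just a placeholder:
"actions" : {
  "log_error" : {
    "logging" : {
      "text" : "Found {{ctx.payload.hits.total}} error events in the last 5 minutes"
    }
  }
}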

AWS Autoscale Load Balancing with Cloudformation

I'm trying to create an EC2 instance that uses autoscaling and is attached to a load balancer.
Unfortunately, I'm getting this error:
The availability zones of the specified subnets and the AutoScalingGroup do not match
However, this is my current CloudFormation script:
"ApiAutoScaling" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"VPCZoneIdentifier" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ],
"InstanceId" : {
"Ref" : "ApiEC2"
},
"MaxSize" : 3,
"MinSize" : 1,
"LoadBalancerNames" : [ "Api" ]
}
},
"ApiLoadBalancer" : {
"Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties" : {
"LoadBalancerName" : "Api",
"Listeners" : [
{
"InstancePort" : "80",
"InstanceProtocol" : "HTTP",
"LoadBalancerPort" : "80",
"Protocol" : "HTTP"
},
{
"InstancePort" : "80",
"InstanceProtocol" : "HTTP",
"LoadBalancerPort" : "443",
"Protocol" : "HTTPS",
"SSLCertificateId" : "arn:aws:iam::xxx"
}
],
"SecurityGroups" : [ "sg-a88444cc" ],
"Subnets" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ]
}
}
As you can see, my subnet list is the same for both my autoscaling group and my load balancer. Clearly I've misunderstood how this is supposed to work, but I can't work it out.
Try specifying the AvailabilityZones property for the auto scaling group. The default is for it to use all of them, so if your subnets only cover a subset of the zones, you would get this error message.
(As pointed out in the comments, "AvailabilityZones" : { "Fn::GetAZs" : "" } should do the trick.)
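Applied to the template above, that could look like the following sketch; the zone names here are hypothetical, so list the zones your three subnets actually live in, or use Fn::GetAZs as suggested:
"ApiAutoScaling" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "AvailabilityZones" : [ "us-east-1a", "us-east-1b", "us-east-1c" ],
    "VPCZoneIdentifier" : [ "subnet-5ff05206", "subnet-b1109fc6", "subnet-948ce5f1" ],
    "InstanceId" : { "Ref" : "ApiEC2" },
    "MaxSize" : 3,
    "MinSize" : 1,
    "LoadBalancerNames" : [ "Api" ]
  }
}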

How to setup a customized DNS name for an Elastic Beanstalk App

I have a Beanstalk app, which has an app_name.elasticbeanstalk.com domain name by default.
I want a domain name like www.app_name.com that can be accessed by browser, so I took the following steps:
Register the domain name app_name.com.
Set www.app_name.com as a CNAME of the ELB's public DNS.
This way, I can access www.app_name.com from the browser.
But once the page loads, the URL suddenly changes to app_name.elasticbeanstalk.com.
I do not want to show app_name.elasticbeanstalk.com to anyone. Can I just use www.app_name.com? How?
Help me, please.
You can do this using Route53 and CloudFormation: use the Elastic Beanstalk resources inside the CloudFormation template to create your Elastic Beanstalk stack, and the Route53 resource to create your desired domain name. Then, inside your Route53 resource, create an alias that maps to your Elastic Beanstalk endpoint.
This might look something like:
"Resources" : {
"DNS" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com",
"Comment" : "CNAME alias targeted to Elastic Beanstalk endpoint.",
"RecordSets" : [
{
"Name" : "example.example.com",
"Type" : "CNAME",
"TTL" : "900",
"ResourceRecords" : [{ "Fn::GetAtt" : ["sampleEnvironment","EndpointURL"] }]
}]
}
},
"sampleApplication" : {
"Type" : "AWS::ElasticBeanstalk::Application",
"Properties" : {
"Description" : "AWS Elastic Beanstalk Ruby Sample Application",
"ApplicationVersions" : [{
"VersionLabel" : "Initial Version",
"Description" : "Version 1.0",
"SourceBundle" : {
"S3Bucket" : { "Fn::Join" : ["-", ["elasticbeanstalk-samples", { "Ref" : "AWS::Region" }]]},
"S3Key" : "ruby-sample.zip"
}
}],
"ConfigurationTemplates" : [{
"TemplateName" : "DefaultConfiguration",
"Description" : "Default Configuration Version 1.0 - with SSH access",
"SolutionStackName" : "64bit Amazon Linux running Ruby 1.9.3",
"OptionSettings" : [{
"Namespace" : "aws:autoscaling:launchconfiguration",
"OptionName" : "EC2KeyName",
"Value" : { "Ref" : "KeyName" }
}]
}]
}
},
"sampleEnvironment" : {
"Type" : "AWS::ElasticBeanstalk::Environment",
"Properties" : {
"ApplicationName" : { "Ref" : "sampleApplication" },
"Description" : "AWS Elastic Beanstalk Environment running Ruby Sample Application",
"TemplateName" : "DefaultConfiguration",
"VersionLabel" : "Initial Version"
}
}
},
More information on using CloudFormation resources can be found here, and sample templates can be found here.
CloudFormation makes interacting with resources dynamically extremely easy and clean... not to mention completely scripted :)
