I'm somewhat surprised that I'm running into this problem. I created an aws_wafregional_regex_pattern_set to block incoming requests that contain php in their URI. I expected all requests with php in them to be blocked, but the requests are still making it through. Perhaps I'm misunderstanding what this resource actually does? I have attached some sample code below.
resource "aws_wafregional_rule" "block_uris_containining_php" {
name = "BlockUrisContainingPhp"
metric_name = "BlockUrisContainingPhp"
predicate {
data_id = "${aws_wafregional_regex_match_set.block_uris_containing_php.id}"
negated = false
type = "RegexMatch"
}
}
resource "aws_wafregional_regex_match_set" "block_uris_containing_php" {
name = "BlockUrisContainingPhp"
regex_match_tuple {
field_to_match {
type = "URI"
}
regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.block_uris_containing_php.id}"
text_transformation = "NONE"
}
}
resource "aws_wafregional_regex_pattern_set" "block_uris_containing_php" {
name = "BlockUrisContainingPhp"
regex_pattern_strings = [ "php$" ]
}
This code creates a string and regex matching condition in AWS WAF, so I know it's at least getting created. I used CloudWatch to check for blocked requests as I sent requests containing php to the load balancer, but each request went through successfully. Any help with this would be greatly appreciated.
I can't tell from the snippet, but did you add the rule to a web ACL and set the rule action to block?
Also, you should try using wafv2 instead of wafregional, as wafv2 comes with new features and makes rules easier to express.
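For reference, a WAF rule on its own doesn't block anything; it has to be attached to a web ACL whose action for that rule is BLOCK, and the ACL has to be associated with the load balancer. A rough sketch of what that could look like in your config (aws_lb.my_alb is a hypothetical placeholder for however your ALB is defined):

resource "aws_wafregional_web_acl" "block_uris_containing_php" {
  name        = "BlockUrisContainingPhp"
  metric_name = "BlockUrisContainingPhp"

  default_action {
    type = "ALLOW"
  }

  rule {
    action {
      type = "BLOCK"
    }
    priority = 1
    rule_id  = "${aws_wafregional_rule.block_uris_containining_php.id}"
  }
}

# The ACL does nothing until it is associated with the load balancer.
resource "aws_wafregional_web_acl_association" "php_block" {
  resource_arn = "${aws_lb.my_alb.arn}"
  web_acl_id   = "${aws_wafregional_web_acl.block_uris_containing_php.id}"
}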
Here is my scenario:
I have an Azure website, https://[my app on azure].azurewebsites.us. A user of this application wishes to schedule a future action, reached via a URI. The application keeps a record of this desire in an Azure SQL database table having a "When to act" datetime column and an "Action GUID" column.
Example: User "bob" will go to the scheduling page in my website and enter a date and time he wishes the action to execute. Bob enters 2020-07-11 @ 11:11 am (PDT). Upon successful submission, a record gets added to the database with an application-generated GUID, "AC5ECA4B-FB4F-44AE-90F9-56931329DB2E", and a "When to act" value of 2020-07-11 11:11:00.00 -07:00.
The action URL will be a URL in my website: https://[my app on azure].azurewebsites.us/PerformAction/AC5ECA4B-FB4F-44AE-90F9-56931329DB2E
The SQL database is NOT a Managed Instance.
Would this be possible using SQL CLR? I'm thinking not.
Is there a way to do this using an Azure Function and/or Logic App?
Any other ideas?
Much appreciated!
The pattern you are putting together is a simple polling pattern. You need a function that polls your data store every x minutes (or seconds); the function cron expression would be 0 */x * * * *. It would then check whether the stored time has been reached or is about to be. If so, it makes your call. Don't forget to write the success or failure of the call back to the data store. If there are multiple steps, you should record where in the full life-cycle that command is, so you can take the appropriate action.
To make your function app better, I would introduce a message queue (an Azure Storage Queue) that receives the parts required to make the call, and then have a function (triggered by the messages on the queue) send your web call. This way each function operates independently of the others.
To reduce the need to poll all of the records, you could filter your records in SQL to be "between" certain timestamps, guided by the current time of the polling function.
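Putting those pieces together, here is a minimal sketch of such a timer-triggered function. The table and column names (dbo.ScheduledActions, WhenToAct, ActionGuid, Processed) are illustrative placeholders, not from your question:

using System;
using System.Data.SqlClient;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class PollScheduledActions
{
    private static readonly HttpClient Http = new HttpClient();

    // NCRONTAB fields: {second} {minute} {hour} {day} {month} {day-of-week};
    // this runs at the top of every 5th minute.
    [FunctionName("PollScheduledActions")]
    public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        var connStr = Environment.GetEnvironmentVariable("SqlConnectionString");
        using (var conn = new SqlConnection(connStr))
        {
            await conn.OpenAsync();

            // Pull only due, unprocessed records so each pass stays small.
            var cmd = new SqlCommand(
                @"SELECT ActionGuid FROM dbo.ScheduledActions
                  WHERE WhenToAct <= SYSDATETIMEOFFSET() AND Processed = 0", conn);

            using (var reader = await cmd.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    var guid = reader.GetGuid(0);

                    // Placeholder host from the question; in real code, write
                    // the success/failure back to the table afterwards.
                    var response = await Http.GetAsync(
                        $"https://[my app on azure].azurewebsites.us/PerformAction/{guid}");
                    log.LogInformation("{Guid} -> {Status}", guid, (int)response.StatusCode);
                }
            }
        }
    }
}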
If you need more clarity, leave comments and I'll try to provide more context.
Store or convert times to UTC to avoid timezone mismatches.
Logic Apps have a built-in step action named "Delay until".
Using this step action in a Logic App workflow solved the issue.
Here are further details.
The application calls the logic app via the HTTP POST URL in the first step, passing in these properties:
{
    "properties": {
        "GUID": { // an identifier of the app's schedule
            "type": "string"
        },
        "ID": { // a simpler identifier of the app's schedule
            "type": "integer"
        },
        "MessageID": { // yet another identifier
            "type": "integer"
        },
        "WhenToSend": { // the UTC datetime for when to Delay Until
            "type": "string"
        }
    },
    "type": "object"
}
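Since the designer screenshot isn't reproduced here, this is roughly what the "Delay until" step looks like in the workflow's code view - a sketch assuming it waits on the WhenToSend property from the trigger schema above:

"Delay_until": {
    "type": "Wait",
    "inputs": {
        "until": {
            "timestamp": "@triggerBody()?['WhenToSend']"
        }
    },
    "runAfter": {}
}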
Here is a snippet of the app calling the LA:
newMessageSchedule.MessageScheduleStatuses.Add(new MessageScheduleStatus()
{
    MessageScheduleStatusID = (int)ApplicationEnums.MessageScheduleStatuses.New
});
messageGettingSent.MessageSchedules.Add(newMessageSchedule);
await _dbContext.SaveChangesAsync();

string schedulePayloadJSON = JsonConvert.SerializeObject(new
{
    ID = newMessageSchedule.ID,
    GUID = newMessageSchedule.MSGuid,
    MessageID = newMessageSchedule.MessageID,
    WhenToSend = newMessageSchedule.WhenToSend.UtcDateTime
});

HttpClient httpClient = new HttpClient();
HttpContent c = new StringContent(schedulePayloadJSON, Encoding.UTF8, "application/json");
var response = await httpClient.PostAsync($"{dto.SchedulerUri}", c);
if (response.StatusCode == HttpStatusCode.OK)
{
    var content = await response.Content.ReadAsStringAsync();
    //dynamic httpResponse = JsonConvert.DeserializeObject<dynamic>(content);
    newMessageSchedule.MessageScheduleStatuses.Add(new MessageScheduleStatus()
    {
        MessageScheduleStatusID = (int)ApplicationEnums.MessageScheduleStatuses.Scheduled
    });
    messageGettingSent.MessageSchedules.Add(newMessageSchedule);
    await _dbContext.SaveChangesAsync();
}
The dto.SchedulerUri is set from an appsettings setting, TextMessaging:Scheduler, and has a value that looks something like this:
"https://prod-123.usgovarizona.logic.azure.us:443/workflows/1d8.../triggers/request/paths/invoke?api-version=2016-10-01&sp=%2Ftriggers%2Frequest%2Frun&sv=1.0&sig=CJt..."
Once the "Delay Until" time occurs, the LA calls back into the app w/ the IDs:
The controller action performs all the things.
Voila!
I am using Service Bus Explorer as a quick way of testing a rule that does not work when deployed via ARM.
In JavaScript in the Azure Function I am setting the Topic message to:
context.bindings.outputSbMsg = { Indicator: 'Itinerary'};
In Service Bus Explorer I am setting a Rule on a Subscription with this string:
Indicator = 'Itinerary'
But messages sent to the Topic do not go to this Subscription (they go to another subscription with the rule 1 = 1).
Question: What am I missing here?
Supplementary info:
I do not seem to have access to the Indicator property. As a test, I created an action on the 1 = 1 rule that appended to the Indicator property, and the result was empty.
I am able to access the Indicator property in JavaScript if I have a Function that is triggered by the 1 = 1 rule, so the property is there.
The rule doesn't work because rules match against system or user-defined properties rather than the message body.
What the JS function outputs is merely the message body; i.e., context.bindings.outputSbMsg = { Indicator: 'Itinerary' }; sends a message whose body is { Indicator: 'Itinerary' }, with no properties set by us.
And the default rule with the 1 = 1 true filter selects all messages into its subscription, which is why you see the messages go there every time. Check the docs on topic filters for more details.
For now, it's by design that JS function output can't populate message properties. To make the filter work, we have to send messages with properties using the SDK instead. Install the azure-sb package, then try the sample code below.
const azuresb = require("azure-sb");

const connStr = "ServiceBusConnectionString";
const mytopic = "mytopic";
const serviceBus = azuresb.createServiceBusService(connStr);

// The payload goes in `body`; filterable properties go in `customProperties`,
// which is what the subscription rule matches against.
const msg = {
    body: "Testing",
    customProperties: {
        Indicator: 'Itinerary'
    }
};

serviceBus.sendTopicMessage(mytopic, msg, function (error) {
    if (error) {
        context.log(error); // `context` assumes this runs inside the Azure Function
    } else {
        context.log("Message Sent");
    }
});
I'm creating an AWS CodeBuild project using the following (partial) Terraform configuration:
resource "aws_codebuild_webhook" "webhook" {
project_name = "${aws_codebuild_project.abc-web-pull-build.name}"
branch_filter = "master"
}
resource "github_repository_webhook" "webhook" {
name = "web"
repository = "${var.github_repo}"
active = true
events = ["pull_request"]
configuration {
url = "${aws_codebuild_webhook.webhook.payload_url}"
content_type = "json"
insecure_ssl = false
secret = "${aws_codebuild_webhook.webhook.secret}"
}
}
For some reason, two webhooks are created on GitHub for that project: one with the events pull_request and push, and a second with pull_request only (the one I expected).
I've tried removing the first block (aws_codebuild_webhook), even though the Terraform documentation gives an example with both:
https://www.terraform.io/docs/providers/aws/r/codebuild_webhook.html
But then I'm in a pickle, because there is no other way to acquire the payload_url the GitHub webhook requires; I currently take it from aws_codebuild_webhook.webhook.payload_url.
I'm not sure what the right approach is here. I'd appreciate any suggestions.
We have a job hosted in an Azure website; the job reads entries from a topic subscription. Everything works fine when we only have one instance hosting the website. Once we scale out to more than one instance, we observe that each message is processed as many times as we have instances. Each instance points to the same subscription. From what we read, once an item is read, it won't be available to any other process. The duplicated processing happens inside the same instance: if we have two instances, the item is processed twice in one of the instances; it is not split across them.
What could possibly be wrong in the way we are doing things?
This is how we configure the connection to the queue; if the subscription does not exist, it is created:
var serviceBusConfig = new ServiceBusConfiguration
{
    ConnectionString = transactionsBusConnectionString
};
config.UseServiceBus(serviceBusConfig);

var allRule1 = new RuleDescription
{
    Name = "All",
    Filter = new TrueFilter()
};
SetupSubscription(transactionsBusConnectionString, "topic1", "subscription1", allRule1);

private static void SetupSubscription(string busConnectionString, string topicNameKey, string subscriptionNameKey, RuleDescription newRule)
{
    var namespaceManager =
        NamespaceManager.CreateFromConnectionString(busConnectionString);
    var topicName = ConfigurationManager.AppSettings[topicNameKey];
    var subscriptionName = ConfigurationManager.AppSettings[subscriptionNameKey];
    if (!namespaceManager.SubscriptionExists(topicName, subscriptionName))
    {
        namespaceManager.CreateSubscription(topicName, subscriptionName);
    }
    var subscriptionClient = SubscriptionClient.CreateFromConnectionString(busConnectionString, topicName, subscriptionName);
    var rules = namespaceManager.GetRules(topicName, subscriptionName);
    foreach (var rule in rules)
    {
        subscriptionClient.RemoveRule(rule.Name);
    }
    subscriptionClient.AddRule(newRule);
    rules = namespaceManager.GetRules(topicName, subscriptionName);
    rules.ToString();
}
Example of the code that processes the topic item:
public void SendInAppNotification(
    [ServiceBusTrigger("%eventsTopicName%", "%SubsInAppNotifications%"), ServiceBusAccount("OutputServiceBus")] Notification message)
{
    this.valueCalculator.AddInAppNotification(message);
}
This method is inside a Functions static class; I'm using the Azure WebJobs SDK.
Whenever the Azure website is scaled to more than one instance, all the instances share the same configuration.
It sounds like you're creating a new subscription each time a new instance runs, rather than hooking into an existing one. Topics are designed to allow multiple subscribers to attach that way as well - usually, though, each subscriber has a different purpose, so they each see a copy of the message.
I can't verify this from your code snippet, but that's my guess - are the config files identical? You should add some trace output to see whether your processes are calling CreateSubscription() each time they run.
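For example, one hypothetical way to add that trace inside the SetupSubscription method from the question, using System.Diagnostics.Trace:

if (!namespaceManager.SubscriptionExists(topicName, subscriptionName))
{
    // If this line is logged once per instance (or on every restart), the
    // instances are creating subscriptions instead of attaching to one.
    Trace.TraceInformation(
        "Instance {0} is creating subscription {1}/{2}",
        Environment.MachineName, topicName, subscriptionName);
    namespaceManager.CreateSubscription(topicName, subscriptionName);
}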
I think I can access the message ID. I'm using the Azure WebJobs SDK, but I think I can find a way to get it. Let me check it and I'll let you know.
I've created and registered a custom HTTP module to show a maintenance message to users after an administrator turns on maintenance mode via a configuration change.
When I send a request for HTML, it should return custom HTML loaded from a file, but instead it returns the message "The service is unavailable." I can't find that string anywhere in my solution. The custom log message from the maintenance module is written to the log4net logs:
... INFO DdiPlusWeb.Common.MaintenanceResponder - Maintenance mode is on. Request rejected. RequestUrl=...
It seems something is misconfigured in IIS on Azure; something intercepts my 503 response. How do I fix it?
Module code:
void context_BeginRequest(object sender, EventArgs e)
{
    HttpApplication application = (HttpApplication)sender;
    HttpContext context = application.Context;
    if (AppConfig.Azure.IsMaintenance)
    {
        MaintenanceResponder responder = new MaintenanceResponder(context, MaintenaceHtmlFileName);
        responder.Respond();
    }
}
The interesting part of the responder code:
private void SetMaintenanceResponse(string message = null)
{
    _context.Response.Clear();
    _context.Response.StatusCode = 503;
    _context.Response.StatusDescription = "Maintenance";
    if (string.IsNullOrEmpty(message))
    {
        _context.Response.Write("503, Site is under maintenance. Please try again a bit later.");
    }
    else
    {
        _context.Response.Write(message);
    }
    _context.Response.Flush();
    _context.Response.End();
}
EDIT: I was wrong, sorry. The maintenance module returns the same message for requests that expect JSON or HTML.
This answer led me to the solution.
I added another line to the SetMaintenanceResponse method:
_context.Response.TrySkipIisCustomErrors = true;
It works now. Here is more about what exactly it means.
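For anyone who prefers configuration over code: the same effect can be achieved site-wide in web.config, which tells IIS to pass custom error responses through untouched instead of substituting its own error pages:

<system.webServer>
  <httpErrors existingResponse="PassThrough" />
</system.webServer>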