Custom maintenance mode module does not work on Azure Web Role - IIS

I've created and registered a custom HTTP module to show a maintenance message to users after an administrator turns on maintenance mode via a configuration change.
When I send a request for HTML, it should return custom HTML loaded from a file, but instead it returns the message: "The service is unavailable." I can't find that string anywhere in my solution. The custom log message from the maintenance module is written to the log4net logs:
... INFO DdiPlusWeb.Common.MaintenanceResponder - Maintenance mode is on. Request rejected. RequestUrl=...
It seems something is misconfigured in IIS on Azure; something intercepts my 503 response. How do I fix it?
Module code:
void context_BeginRequest(object sender, EventArgs e)
{
    HttpApplication application = (HttpApplication)sender;
    HttpContext context = application.Context;
    if (AppConfig.Azure.IsMaintenance)
    {
        MaintenanceResponder responder = new MaintenanceResponder(context, MaintenaceHtmlFileName);
        responder.Respond();
    }
}
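For context, a minimal sketch of how such a module might be wired up; the Init registration isn't shown in the question, and MaintenanceModule is a hypothetical name:

public class MaintenanceModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Inspect every incoming request for maintenance mode.
        application.BeginRequest += context_BeginRequest;
    }

    public void Dispose() { }

    // context_BeginRequest as shown above...
}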
The interesting part of the responder code:
private void SetMaintenanceResponse(string message = null)
{
    _context.Response.Clear();
    _context.Response.StatusCode = 503;
    _context.Response.StatusDescription = "Maintenance";
    if (string.IsNullOrEmpty(message))
    {
        _context.Response.Write("503, Site is under maintenance. Please try again a bit later.");
    }
    else
    {
        _context.Response.Write(message);
    }
    _context.Response.Flush();
    _context.Response.End();
}
EDIT: I lied. Sorry. The maintenance module returns the same message for requests that expect JSON or HTML.

This answer led me to the solution.
I've added one more line to the SetMaintenanceResponse method:
_context.Response.TrySkipIisCustomErrors = true;
It works now. Here is more about what it actually means: TrySkipIisCustomErrors tells IIS not to replace the application's response body with its own error page for error status codes, which is evidently where the generic "The service is unavailable." text was coming from.
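For completeness, a sketch of the updated method (only the TrySkipIisCustomErrors line is new):

private void SetMaintenanceResponse(string message = null)
{
    _context.Response.Clear();
    _context.Response.StatusCode = 503;
    _context.Response.StatusDescription = "Maintenance";
    // Ask IIS to pass our 503 body through instead of its own error page.
    _context.Response.TrySkipIisCustomErrors = true;
    // ... rest of the method as above ...
}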

Related

Azure Web API giving a 404 when deployed but works locally

We have a .NET Core Web API deployed as an Azure Web App. All endpoints work locally; however, once deployed, one controller gives us a 404 for every endpoint we hit within it.
We have checked and triple-checked that the URL we are calling is correct, and from what we can tell, there is nothing different about this controller relative to the others in our application.
This is our BinController that is giving us 404s:
namespace API.Controllers
{
    [Route("api/[controller]")]
    [Authorize]
    [ApiController]
    public class BinController : ControllerBase
    {
        private readonly IBinRepository _binRepo;
        private readonly ILogger _logger;

        public BinController(IBinRepository binRepo, ILogger<BinController> logger)
        {
            _binRepo = binRepo;
            _logger = logger;
        }

        [HttpGet("{locationId}/{binId}")]
        public async Task<IActionResult> CheckBinExists(int locationId, string binId)
        {
            try
            {
                bool result = await _binRepo.CheckBinExists(locationId, binId);
                return Ok(result);
            }
            catch (Exception e)
            {
                return StatusCode(StatusCodes.Status500InternalServerError, e.Message);
            }
        }

        [HttpGet("findAll/{locationId}/{itemId}")]
        public async Task<IActionResult> FindAllBinsWithItem(int locationId, string itemId)
        {
            try
            {
                var result = await _binRepo.FindAllBinsWithItem(locationId, itemId);
                return Ok(result);
            }
            catch (Exception e)
            {
                _logger.LogError(e.Message);
                return StatusCode(StatusCodes.Status500InternalServerError, e.Message);
            }
        }

        [HttpGet("contents/{locationId}/{bin}")]
        public async Task<IActionResult> GetBinContents(int locationId, string bin)
        {
            try
            {
                List<BatchLine> contents = await _binRepo.GetBinContents(locationId, bin);
                return Ok(contents);
            }
            catch (Exception e)
            {
                _logger.LogError(e.Message);
                return StatusCode(StatusCodes.Status500InternalServerError, e.Message);
            }
        }
    }
}
We are calling https://ourapiname.azurewebsites.net/api/Bin/1234/TestBin.
To summarize:
All endpoints work locally.
All controllers work when deployed, except for one.
We have multiple other controllers in our application with a similar, if not identical, setup, but this one returns a 404 when deployed.
We saw these similar posts, but they did not resolve this issue:
Web API interface works locally but gets 404 after deployed to Azure Website
Web api call works locally but not on Azure
I wish I could provide more insight, but we are really at a loss for what could be going on here. Any ideas would be greatly appreciated.
You can deploy your web project in self-contained mode, then find project_name.exe and double-click it. Test it locally; your test URL should be https://localhost:5001/api/Bin/1234/TestBin.
If it runs well locally, the second step is to create a new web app and deploy to it as usual. This rules out causes specific to your original web app (like a failed deployment).
If it still doesn't work, my suggestion is to manually drag and drop the published files onto the Kudu site of the Azure web app.
The steps above should be useful to you, but I think the easiest way is to go to the Kudu site, grab the DLL file, and decompile it, so the root cause of the problem can be found.
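For reference, a self-contained publish can be produced with something like the following (the win-x64 runtime identifier is an assumption; adjust it to your target):

dotnet publish -c Release -r win-x64 --self-contained true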
This does not make any sense, but I simply changed the name of the Controller from "BinController" to "BinsController" and now it works...
Must not be able to name a controller "Bin".
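A likely explanation, for anyone hitting this: IIS request filtering treats "bin" as a hidden segment, so on Azure App Service a URL containing a Bin path segment (such as /api/Bin/...) can be rejected with a 404 before it ever reaches the application, while a local Kestrel run does no such filtering. If renaming the controller isn't an option, removing the hidden segment in web.config is a possible workaround (a sketch, assuming the standard request-filtering section):

<system.webServer>
  <security>
    <requestFiltering>
      <hiddenSegments>
        <!-- Allow URLs containing a "bin" segment through to the app -->
        <remove segment="bin" />
      </hiddenSegments>
    </requestFiltering>
  </security>
</system.webServer>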

Application Insights for WebAPI application

Is it possible to tell Application Insights to use a different InstrumentationKey depending on the request URL?
Our application works with different clients and we want to separate the logs for them in different instances of Application Insights.
Url format: https://webapi.com/v1/{client_name}/bla/bla
It would be great to set up configuration to select the InstrumentationKey by client_name from the request.
If the goal is to send different telemetry items to different instrumentation keys, the correct way to achieve that is by modifying each individual item with a TelemetryInitializer to give it the correct ikey.
An initializer would contain a line like the following:
item.Context.InstrumentationKey = ikey;
This initializer should access the HttpContext and decide the ikey dynamically from the request route or other parameters.
Modifying TelemetryConfiguration.Active is not recommended for this purpose, as it's a global, shared setting.
(This is not a very common use case, but there are teams inside Microsoft who do this for production-scale apps.)
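A minimal sketch of such an initializer for a classic ASP.NET app, assuming the URL format from the question; ClientIKeyInitializer and LookupIKey are hypothetical names, and the class is registered like any other telemetry initializer:

using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class ClientIKeyInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // HttpContext.Current is available while server telemetry is being initialized.
        var path = HttpContext.Current?.Request?.Url?.AbsolutePath;
        if (path == null) return;

        // Url format: /v1/{client_name}/bla/bla -- client_name is the second segment.
        var segments = path.Trim('/').Split('/');
        if (segments.Length >= 2 && segments[0] == "v1")
        {
            telemetry.Context.InstrumentationKey = LookupIKey(segments[1]);
        }
    }

    // Hypothetical lookup that maps a client_name to its instrumentation key.
    private static string LookupIKey(string clientName)
    {
        return clientName == "client_name_1" ? "ikey_1" : "ikey_default";
    }
}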
You can do that. If you have a logger, have the ApplicationInsightsKey parameterized and pass the key for the client on every call, or inject it on load if your application is tenant-based.
Check out the docs here: Separating telemetry from Development, Test, and Production
Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = <App-Insights-Key-for-the-client>;
Just change the Application Insights key before logging and it will do the job.
It would be great to set up configuration to select the InstrumentationKey by client_name from the request.
You can dynamically select the ikey according to the client_name from the request. First, you need to get the request URL, then check the client_name.
To do that, you can add the following code to the Global.asax file:
void Application_BeginRequest(Object source, EventArgs e)
{
    var app = (HttpApplication)source;
    //get the request url
    var uriObject = app.Context.Request.Url.ToString();
    if (uriObject.Contains("/client_name_1"))
    {
        Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = "ikey_1";
    }
    else if (uriObject.Contains("/client_name_2"))
    {
        Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = "ikey_2";
    }
    else
    {
        Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = "ikey_3";
    }
}
But I want to say that we rarely use more than one ikey in a single environment. If your goal is to keep the data from getting cluttered, I suggest using only one ikey and then using Kusto queries for your purpose.
Thanks to the answers from #cijothomas and #danpop (link) I was able to understand the whole picture.
Step 1: Create a custom ITelemetryInitializer (Microsoft documentation):
public class MyTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var appKey = CallContext.LogicalGetData("ApplicationKey")?.ToString();
        switch (appKey)
        {
            case "App1":
                telemetry.Context.InstrumentationKey = "d223527b-f34e-4c47-8aa8-1f21eb0fc349";
                return;
            default:
                telemetry.Context.InstrumentationKey = "f8ceb6cf-4357-4776-a2b6-5bbed8d2561c";
                return;
        }
    }
}
Step 2: Register the custom initializer:
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <TelemetryInitializers>
    <Add Type="Application.WebAPI.MyTelemetryInitializer, Application.WebAPI"/>
  </TelemetryInitializers>
  <!--<InstrumentationKey>f8ceb6cf-4357-4776-a2b6-5bbed8d2561c</InstrumentationKey>-->
</ApplicationInsights>
OR
protected void Application_Start()
{
    // ...
    TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
}
Step 3: Make some adjustments to the logger (source code taken from #danpop's answer, Logger target configuration):
var config = new LoggingConfiguration();
ConfigurationItemFactory.Default.Targets.RegisterDefinition("ai", typeof(ApplicationInsightsTarget));
ApplicationInsightsTarget aiTarget = new ApplicationInsightsTarget();
aiTarget.InstrumentationKey = "your_key";
aiTarget.Name = "ai";
config.AddTarget("ai", aiTarget);
LogManager.Configuration = config;
ILogger configuration examples: Log4Net, NLog, System.Diagnostics

Handling Acumatica timeout on API Invoke action

I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;
    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };
            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            //Wait for the update to complete before we attempt to retrieve the files
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); //pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    //Report Parameters
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;
    IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
    exportTrayLabelReport(adapter, "SFP");
    return reportFileInfo;
}
The problem here is that your action is synchronous, so it tries to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler; then your client code should work properly, as it already handles the waiting and checking.
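A sketch of what that could look like for the handler from the question (this assumes the report calls from the question behave correctly inside the background delegate; adapt as needed):

protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    PXLongOperation.StartOperation(Base, delegate
    {
        //Report Parameters
        Dictionary<String, String> parameters = new Dictionary<String, String>();
        parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
        parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;
        // The work now runs in the background: Invoke returns quickly, and
        // GetProcessStatus reports InProcess until this delegate completes.
        ExportReport(adapter, "IN619217", parameters);
        exportTrayLabelReport(adapter, "SFP");
    });
    return adapter.Get();
}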
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client will periodically ping the server to reset the time-out period.
That would be the best way. However, Acumatica connections are rather high-level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(this /* or Base */, delegate
{
    // your code here
});
When I do encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure whether your use case with an external application will play well with an async PXLongOperation: the handler would return prematurely, and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know, there's no real practical drawback to doing this unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SaaS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be a lost connection after exactly 5 minutes, which is the default 300-second value. You can edit the Web.Config file to increase the executionTimeout value. It's not a bad idea to increase maxRequestLength too if you are requesting a large amount of data from the Acumatica API, as this is also a common cause of failure that you miss in testing but that occurs in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />

Azure Autoscale Restarts Running Instances

I've been using Autoscale to shift between 2 and 1 instances of a cloud service in a bid to reduce costs. This mostly works, except that from time to time (I'm not sure what the pattern is here), the act of scaling up (1->2) causes both instances to recycle, generating a service outage for users.
Assuming nothing fancy is going on in RoleEntry in response to topology changes, why would scaling from 1->2 restart the already running instance?
Additional notes:
It's clear both instances are recycling by looking at the Instances tab in the Management Portal. The outage can also be confirmed by hitting the public site.
It doesn't happen consistently, but I'm not sure what the pattern is. It feels like when the 1-instance configuration has been running for multiple days, attempts to scale up recycle both; if the 1-instance configuration has only been running for a few hours, you can scale up and down without outages.
The first instance always comes back much faster than the 2nd instance being introduced.
This has always been this way. When you have 1 server running and you go to 2+, the initial server is restarted. In order to have a full SLA, you need to have 2+ servers at all times.
Nariman, see my comment on Brent's post for some information about what is happening. You should be able to resolve this with the following code:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        // Find this instance's own IPv4 address...
        IPHostEntry ipEntry = Dns.GetHostEntry(Dns.GetHostName());
        string ip = null;
        foreach (IPAddress ipaddress in ipEntry.AddressList)
        {
            if (ipaddress.AddressFamily.ToString() == "InterNetwork")
            {
                ip = ipaddress.ToString();
            }
        }

        // ...and make one HTTP request to this instance before OnStart returns.
        string urlToPing = "http://" + ip;
        HttpWebRequest req = HttpWebRequest.Create(urlToPing) as HttpWebRequest;
        WebResponse resp = req.GetResponse();

        return base.OnStart();
    }
}
You should be able to control this behavior. In the RoleEntryPoint, there's an event you can trap, RoleEnvironment.Changing.
A shell of the code to put into your solution will look like...
RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
}
Then, inside the RoleEnvironmentChanging method (note: Changing, not Changed, since only the Changing event args expose Cancel), we can detect what the change is and tell Azure whether or not we want to recycle.
if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    e.Cancel = true; // don't recycle the role
}
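Putting the pieces together, a minimal sketch of the Changing handler with the check in place:

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // Cancel the recycle only for plain configuration-setting changes;
    // topology changes will still restart the role.
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        e.Cancel = true;
    }
}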

How can I disable CakePHP security?

I used
public function beforeFilter() {
    parent::beforeFilter();
    $this->Security->validatePost = false;
    $this->Security->csrfCheck = false;
    $this->Security->unlockedActions = array('my_action');
}
but it doesn't work and still reports:
Security Error
The requested address was not found on this server.
Request blackholed due to "auth" violation.
I remember that it was working normally and I could post my data, but it suddenly stopped. I'm not sure what happened; I've tried everything from my search results, but nothing works. How can I stop the Security component in CakePHP?
I even used
public function beforeFilter() {
    parent::beforeFilter();
    $this->Components->disable('Security');
}
You could try the SecurityComponent::validatePost configuration option, which is what it is for.
Here I've disabled it just for a particular action; you can change it as per your need:
if (in_array($this->action, array('some_action'))) {
    $this->Security->validatePost = false;
}
