I'm implementing CORS handling on a Device which exposes a basic HTTP server for control. To define how the Device should respond, I've been trying to understand how the browser would be configured to perform the requests. In particular, I'm trying to better understand a browser's behaviour around Fetch's Request credentials mode (and, to a lesser extent, XMLHttpRequest.withCredentials).
I'm aware that it would be problematic for the Device to respond with Access-Control-Allow-Origin: * if the Initial Server has set the request's credentials mode to include. This is per the table here.
I don't quite understand when / why the Initial Server would actually set this mode.
Is this...
a flag to the browser so it's aware the Initial Server has explicitly added credentials in its request to the Device
a request to the browser's internal mechanisms to find any credentials for Device it has stored, and to include them in the CORS Request
a request to the browser's internal mechanisms to forward on any credentials it has used when talking to Initial Server onto the Device
other (?)
I'm assuming it's intended to enable option 2, due to the reference to the Confused Deputy problem in the above link, but my knowledge of browser behaviour is non-existent.
If certain URIs on the Device required an Authorization header, would that have a definitive effect on the value of the credentials mode specified with the request (even if I assume the Initial Server is likely the one specifying the value for that header)? This is of interest because, if include mode is required for these paths, I believe it would necessitate adding an Access-Control-Allow-Credentials header to the CORS Response.
The following diagram is an overview of the expected interactions. It's assumed the CORS Preflight request is necessary for the resources being retrieved.
Browser Initial Server Device
| | |
| GET | |
| ----------------> | |
| | |
| Also GET Device | |
| <---------------- | |
| | |
| CORS Preflight Request (OPTIONS) |
| ----------------------------------------> |
| | |
| CORS Preflight Response |
| <---------------------------------------- |
| | |
| CORS Request (GET) |
| ----------------------------------------> |
| | |
| CORS Response |
| <---------------------------------------- |
I have an application for which I am writing tests using pytest to validate its behavior. The application is structured as follows:
+-------------------+
| App under test |
+-------------------+ +------------------+
| external library | | remote endpoint |
| (socket client)-+--->| (socket server) |
+-------------------+ +------------------+
I am writing tests for the "App Under Test". The App imports the external library, and the socket client is an integral part of that library. I want to mock the remote endpoint so that it responds with mock data whenever a request is made, and then validate the "App Under Test"'s behavior. How can I achieve this?
P.S. I am only interested in the behavior of the "App Under Test". The behavior of the app depends on the data it receives from the socket server. I am NOT interested in the underlying "external library".
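One way to approach this is to stand up a fake socket server on an ephemeral local port and point the external library's host/port configuration at it, so the real library code runs but talks to your mock. A sketch under that assumption (how the library receives its host and port depends on the library, which isn't specified here):

```python
import socket
import threading

class FakeEndpoint:
    """A minimal fake remote endpoint: accepts one connection and
    answers every request with canned bytes, so the App Under Test
    can be exercised without the real server."""

    def __init__(self, response: bytes):
        self.response = response
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind(("127.0.0.1", 0))  # let the OS pick a free port
        self.sock.listen(1)
        self.port = self.sock.getsockname()[1]
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        try:
            conn, _ = self.sock.accept()
            with conn:
                while conn.recv(1024):           # any request...
                    conn.sendall(self.response)  # ...gets the mock data
        except OSError:
            pass  # listening socket closed during teardown

    def close(self):
        self.sock.close()

def test_app_receives_mock_data():
    ep = FakeEndpoint(b"MOCK-DATA")
    # Here the App Under Test would be configured to use
    # 127.0.0.1:ep.port; for illustration we connect directly.
    client = socket.create_connection(("127.0.0.1", ep.port))
    client.sendall(b"PING")
    assert client.recv(1024) == b"MOCK-DATA"
    client.close()
    ep.close()
```

In pytest you would typically wrap FakeEndpoint in a fixture that yields the instance and closes it on teardown, so every test gets a fresh port and canned response.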
How can I set an alert in OMS when a server is powered off or unavailable? I have searched on Google, but the alerts either don't work or too many get sent. I need the alert to be generated as soon as the server goes offline.
I found the answer myself:
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where Computer == "server1"
| where LastHeartbeat < ago(5m)
I cannot find a way to fail an HTTP call to a Node.js Azure Function and include a custom error response.
Calling context.done() allows for a custom response (but it is not indicated as a failure in Application Insights).
Calling context.done(true, XXX) does create a failure, but returns a generic error to the user (no matter what I put in XXX):
{"id":"b6ca6fb0-686a-4a9c-8c66-356b6db51848","requestId":"7599d94b-d3f2-48fe-80cd-e067bb1c8557","statusCode":500,"errorCode":0,"message":"An error has occurred. For more information, please check the logs for error ID b6ca6fb0-686a-4a9c-8c66-356b6db51848"}
This is just the latest headache I have run into while trying to get a fast web API running on Azure Functions. If you can't track errors, then it should hardly be called "Application Insights". Any ideas?
If you return the error via context.res with a custom status code, Success will be true, but resultCode will be set to your value.
Try an AppInsights query like this:
// Get all errors
requests
| where toint(resultCode) >= 400
| limit 10
[Update]
The Id value in Requests is the 'function instance id', which uniquely identifies that invocation.
There is also a 'traces' table that contains the logging messages from your Azure Function. You can join requests and traces via the operation_Id.
requests
| where toint(resultCode) >= 400
| take 10
| join (traces) on operation_Id
| project id, message, operation_Id
The response body is not automatically logged to AppInsights. You'll need to add some explicit log statements to capture that.
Why not use context.res to return a custom response for an HTTP trigger function?
Framework: Laravel 4.2
Server: Amazon ec2 Linux
Yesterday \Mailing::send was still working, and then suddenly today it stopped sending emails. I don't know what has happened.
Here is my mail.php config:
<?php
return array(
/*
|--------------------------------------------------------------------------
| Mail Driver
|--------------------------------------------------------------------------
|
| Laravel supports both SMTP and PHP's "mail" function as drivers for the
| sending of e-mail. You may specify which one you're using throughout
| your application here. By default, Laravel is setup for SMTP mail.
|
| Supported: "smtp", "mail", "sendmail", "mailgun", "mandrill", "log"
|
*/
'driver' => 'mail',
/*
|--------------------------------------------------------------------------
| SMTP Host Address
|--------------------------------------------------------------------------
|
| Here you may provide the host address of the SMTP server used by your
| applications. A default option is provided that is compatible with
| the Mailgun mail service which will provide reliable deliveries.
|
*/
'host' => 'mail.1fx.cash',
/*
|--------------------------------------------------------------------------
| SMTP Host Port
|--------------------------------------------------------------------------
|
| This is the SMTP port used by your application to deliver e-mails to
| users of the application. Like the host we have set this value to
| stay compatible with the Mailgun e-mail application by default.
|
*/
'port' => 587,
/*
|--------------------------------------------------------------------------
| Global "From" Address
|--------------------------------------------------------------------------
|
| You may wish for all e-mails sent by your application to be sent from
| the same address. Here, you may specify a name and address that is
| used globally for all e-mails that are sent by your application.
|
*/
'from' => array('address' => 'info@philwebservicescloud.net', 'name' => '1Fx Cash'),
/*
|--------------------------------------------------------------------------
| E-Mail Encryption Protocol
|--------------------------------------------------------------------------
|
| Here you may specify the encryption protocol that should be used when
| the application send e-mail messages. A sensible default using the
| transport layer security protocol should provide great security.
|
*/
'encryption' => 'tls',
/*
|--------------------------------------------------------------------------
| SMTP Server Username
|--------------------------------------------------------------------------
|
| If your SMTP server requires a username for authentication, you should
| set it here. This will get used to authenticate with your server on
| connection. You may also set the "password" value below this one.
|
*/
'username' => 'info@philwebservicescloud.net',
/*
|--------------------------------------------------------------------------
| SMTP Server Password
|--------------------------------------------------------------------------
|
| Here you may set the password required by your SMTP server to send out
| messages from your application. This will be given to the server on
| connection so that the application will be able to send messages.
|
*/
'password' => '******',
/*
|--------------------------------------------------------------------------
| Sendmail System Path
|--------------------------------------------------------------------------
|
| When using the "sendmail" driver to send e-mails, we will need to know
| the path to where Sendmail lives on this server. A default path has
| been provided here, which will work well on most of your systems.
|
*/
'sendmail' => '/usr/sbin/sendmail -bs',
/*
|--------------------------------------------------------------------------
| Mail "Pretend"
|--------------------------------------------------------------------------
|
| When this option is enabled, e-mail will not actually be sent over the
| web and will instead be written to your application's logs files so
| you may inspect the message. This is great for local development.
|
*/
'pretend' => false,
);
I have tried restarting the instance. It's still not sending.
First, find out whether any mails are stuck in the email spool. If it is empty, then it is Laravel's problem. Otherwise, mail is stuck and could be affected by an antispam filter. To resolve it, fill in this form.
This is what I understand of it:
The .x file defines the interface and the parameters that are shared by the server and client. When you compile it with rpcgen, it generates the .h, _xdr.c, _clnt.c and _svc.c files. The _clnt.c would be the stub and the _svc.c the skeleton, right?
I understand that they mediate the communication between the two, but how? Also, the example I've seen running had you specify the IP address of the machine to connect to (in the example it was the same one, 127.0.0.1), but you don't specify the port. Does it have a reserved port?
The procedure has two steps. There is a port mapper running on port 111; an RPC service registers with, and is discovered through, this service, but may itself run on an arbitrary port.
See RFC 1833 - Binding Protocols for ONC RPC Version 2 for details.
On an RPC server machine, there is a process running called the endpoint mapper (this applies specifically to ONC RPC but other RPC mechanisms will be similar). This process runs on a known port so anyone can connect to it (security and existence allowing, of course).
An RPC server will start up and register itself with the endpoint mapper, giving its code (e.g., MULT) and port number, and the endpoint mapper will dutifully store that information for later use:
+---------+ +--------+
| Mapper, | <- Register MULT, port Y -- | Server |
| known | | for |
| port X | | MULT |
+---------+ +--------+
When a client subsequently connects to the endpoint mapper using the IP address, it gives the desired code (MULT) and the endpoint mapper then provides the final destination - now the client knows both the IP address and port for the MULT service:
+--------+ +---------+
| Client | -- Request MULT -> | Mapper, |
| | <- Return port Y -- | known |
| | | port X |
+--------+ +---------+
At that point, the endpoint mapper can step out of the way and let the client open up a session directly with the MULT service itself.
+--------+ +--------+
| Client | -- Connect to MULT -> | Server |
| | <- Do stuff -> | for |
| | | MULT |
+--------+ +--------+
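The "Request MULT" step in the middle diagram is an ordinary RPC call to the portmapper itself (procedure 3, GETPORT). To make the wire format concrete, here is a sketch in Python that builds that XDR-encoded request by hand, using the field layout from RFC 5531 and RFC 1833; the MULT program number used in the example is made up:

```python
import struct

# Constants from the ONC RPC / portmapper specifications.
CALL = 0                # msg_type for a call message
RPC_VERSION = 2
PMAP_PROG = 100000      # program number of the portmapper itself
PMAP_VERS = 2
PMAPPROC_GETPORT = 3    # "on which port is this program listening?"
AUTH_NONE = 0
IPPROTO_TCP = 6

def getport_request(xid: int, prog: int, vers: int,
                    proto: int = IPPROTO_TCP) -> bytes:
    """Build the PMAPPROC_GETPORT call a client sends to port 111."""
    header = struct.pack(
        ">6I",
        xid,               # transaction id, echoed back in the reply
        CALL,              # msg_type = CALL
        RPC_VERSION,       # RPC protocol version 2
        PMAP_PROG,         # program being called: the portmapper
        PMAP_VERS,
        PMAPPROC_GETPORT,  # procedure 3 = GETPORT
    )
    # Credentials and verifier: AUTH_NONE with zero-length bodies.
    auth = struct.pack(">4I", AUTH_NONE, 0, AUTH_NONE, 0)
    # Arguments: program, version, protocol, and an unused port field.
    args = struct.pack(">4I", prog, vers, proto, 0)
    return header + auth + args

# Ask where a hypothetical MULT service (program 0x20000001, version 1)
# is listening over TCP; the reply would carry port Y from the diagram.
packet = getport_request(xid=1, prog=0x20000001, vers=1)
```

The reply contains a single unsigned integer: the port the service registered, which the client then uses for the direct connection shown in the last diagram.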