I have a user receiving the following error in response to an ItemQueryRq with the QuickBooks Web Connector and IIS 7.
Version: 1.6
Message: ReceiveResponseXML failed
Description: QBWC1042: ReceiveResponseXML failed
Error message: There was an exception running the extensions specified in the config file. --> Maximum request length exceeded. See QWCLog for more details. Remember to turn logging on.
The log shows the prior request to be
QBWebConnector.SOAPWebService.ProcessRequestXML() : Response received from QuickBooks: size (bytes) = 3048763
In IIS 7, the max allowed content length is set to 30000000, so I'm not sure what I need to change to allow this response through. Can someone point me in the right direction?
Chances are, your web server is rejecting the Web Connector's HTTP request because you're trying to POST too much data to it. It's tough to tell for sure, though: it doesn't look like you have the Web Connector in VERBOSE mode, you didn't post enough of the log to see the rest of what happened, and you didn't post the ItemQueryRq you sent or say how many items you're getting back in the response.
If I had to guess, you're sending a very generic ItemQueryRq to try to fetch ALL items, which has a high likelihood of returning A LOT of data and thus having IIS reject the HTTP request.
Whenever you're fetching a large amount of data using the Web Connector, you should be using iterators. Iterators allow you to break up the result set into smaller chunks.
qbXML Iterator example
other qbXML examples
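For reference, a minimal iterator request could look something like the sketch below (the qbXML version and the page size of 100 are assumptions; follow-up requests would use iterator="Continue" together with the iteratorID returned in the previous response):
<?xml version="1.0" encoding="utf-8"?>
<?qbxml version="13.0"?>
<QBXML>
  <QBXMLMsgsRq onError="stopOnError">
    <!-- First page: start the iterator and cap the number of items returned -->
    <ItemQueryRq requestID="1" iterator="Start">
      <MaxReturned>100</MaxReturned>
    </ItemQueryRq>
  </QBXMLMsgsRq>
</QBXML>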
If you just need to determine whether an item exists in QuickBooks, you can simply add IncludeRetElement to your ItemQueryRq.
So you would post something like:
<ItemQueryRq requestID="55">
  <FullName>Prepay Discount</FullName>
  <IncludeRetElement>ListID</IncludeRetElement>
</ItemQueryRq>
Then, in the ItemQuery response, just check the status code: if it equals 500, the item was not found and you should push it into QuickBooks; if it equals 0, the item already exists.
That workaround will save plenty of bytes in your response.
I want to treat 4xx HTTP responses from a function app (e.g., a 400 response after sending an HTTP request to my function app) as failures in Application Insights. The function app is called by another service I control, so a 4xx response probably means an implementation error, and I'd like to capture that so I can ultimately run an alert on it (and get an email instead of checking Azure every time).
If possible, I'd like it to appear here:
If not, are there any alternative approaches that might fit my use case?
Unless an unhandled exception occurs, the function runtime will mark the invocation as successful, whether or not the status code actually denotes an error. Since this behavior is defined by the runtime, there are two things you can do: throw an exception in the code of the function, and/or remove exception handling so the invocation is marked as not successful.
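As a rough illustration of the first option, here is a minimal in-process C# sketch (the function name and the validation check are made up); note that letting the exception escape means the caller receives a 500 rather than the original 400:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ValidateInput
{
    [FunctionName("ValidateInput")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string body = await new StreamReader(req.Body).ReadToEndAsync();

        if (string.IsNullOrWhiteSpace(body))
        {
            // An unhandled exception makes the runtime record this invocation
            // as failed in Application Insights instead of a "successful" 400.
            throw new ArgumentException("Request body was empty.");
        }

        return new OkObjectResult("processed");
    }
}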
Since you ultimately want to create an alert, you'd be better off alerting on the HTTP status code directly using a "Custom log search" alert:
requests
| where toint(resultCode) >= 400
I wrote an Azure Function in Python that does some data processing. When I test it on a large dataset (150 lines), Chrome raises a 502 HTTP error (I tested the function on 10 lines and everything was OK).
I think the problem is that the Chrome browser waits so long without a response from the Azure Function that it automatically raises the 502 error. I checked that the function logic executes to the end, but I don't get my JSON response when the code completes. Here is the HTTP response I should get:
return func.HttpResponse(
    json.dumps({"file": file.name.split('/')[2]}),
    mimetype="application/json",
)
Expected output:
{"file": "filename.json"}
In production I have to process more than 1,500 lines, and with 150 lines the Azure Function takes about 2 minutes to complete.
How can I force the Chrome client, or any client that hits the URL of my Azure Function, to wait for it to complete? Is there any workaround?
For this problem: since we are not the client, we can't control the client's timeout value.
As for forcing the Chrome client to wait for the function to complete, I'm afraid there is no setting for that. You can refer to this post (also shown in the screenshot below).
According to that screenshot, Chrome does not allow changing its timeout, although other browsers do.
If the client is not a browser but code (such as .NET) calling the function, the code could look like this:
HttpClient httpClient = new HttpClient();
httpClient.Timeout = TimeSpan.FromMinutes(10);
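For completeness, a minimal console sketch of calling the function with the longer timeout (the URL is a placeholder):
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var httpClient = new HttpClient();

        // The default HttpClient timeout is 100 seconds; raise it so the
        // long-running function call is not aborted on the client side.
        httpClient.Timeout = TimeSpan.FromMinutes(10);

        // Placeholder function URL, for illustration only.
        var response = await httpClient.GetAsync(
            "https://myfunctionapp.azurewebsites.net/api/process");

        string json = await response.Content.ReadAsStringAsync();
        Console.WriteLine(json);
    }
}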
I am using an HTTP connector to download a file with a GET request. I have set "Allow chunking" to On but am still getting the error:
"Http request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600.'."
I tested the endpoint with a HEAD request and it returns the Range header as bytes, which means it supports chunking, but if I send the header "Range": "bytes=0-1023" I get the exception below:
BadRequest. The provided workflow action input is not valid.
How do I read this file from Logic Apps and write it to Data Lake? Is the restriction coming from the connector or from Logic Apps? How can this be accomplished?
I want to do something when/if an insert operation on Azure Table Storage fails. Assume that I want to return false from the code below when I receive an error. _table is of type CloudTable, and the code below works.
public bool InsertEntity(TableEntity entity)
{
    var insertOperation = TableOperation.Insert(entity);
    var result = _table.Execute(insertOperation);
    return (result.HttpStatusCode == (int)System.Net.HttpStatusCode.OK);
}
I get the result 203 when the operation succeeds. But there are other possible results like "200 OK".
How can I write a piece of code that will allow me to understand from the status code that something went wrong?
Using the .NET SDK, any situation that needs to be handled will throw an exception, i.e., any status code that is not 2xx will cause an exception.
To handle situations where something went wrong, I don't have to manually check the status code of the result for every request. All I have to do is write exception-handling code, like below:
try
{
    var result = _table.Execute(insertOperation);
}
catch (Exception)
{
    Log("Something went wrong in table operation.");
}
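Applied to the InsertEntity method from the question, a minimal sketch (assuming the classic Microsoft.WindowsAzure.Storage table client) could look like this:
public bool InsertEntity(TableEntity entity)
{
    try
    {
        var insertOperation = TableOperation.Insert(entity);
        _table.Execute(insertOperation);
        return true;   // any 2xx status code reaches this point without throwing
    }
    catch (StorageException)
    {
        return false;  // non-2xx responses surface as a StorageException
    }
}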
From this page:
REST API operations for Azure storage services return standard HTTP status codes, as defined in the HTTP/1.1 Status Code Definitions.
So every successful operation against the Table service will return a 2xx status code. To find the exact code returned, I would recommend checking each operation on the REST API documentation page. For example, the Create Table operation returns a 201 status code if the operation is successful.
Similarly, for errors in the Table service you will get an error code in the 400 range (meaning you provided incorrect data, e.g., a 409 (Conflict) error if you're trying to create a table which already exists) or in the 500 range (for example, the Table service is unavailable). You can find the list of all Table service error codes here: https://msdn.microsoft.com/en-us/library/azure/dd179438.aspx.
Basically, any return code in the 2xx range is "OK". In this example:
https://msdn.microsoft.com/en-us/library/system.net.httpstatuscode%28v=vs.110%29.aspx
203 Non-Authoritative Information: Indicates that the returned metainformation is from a cached copy instead of the origin server and therefore may be incorrect.
This Azure white paper elaborates further:
http://go.microsoft.com/fwlink/?LinkId=153401
9.6.5 Error handling and reporting
The REST API is designed to look like a standard HTTP server interacting with existing HTTP clients (e.g., browsers, HTTP client libraries, proxies, caches, and so on). To ensure the HTTP clients handle errors properly, we map each Windows Azure Table error to an HTTP status code.
HTTP status codes are less expressive than Windows Azure Table error codes and contain less information about the error. Although the HTTP status codes contain less information about the error, clients that understand HTTP will usually handle the error correctly.
Therefore, when handling errors or reporting Windows Azure Table errors to end users, use the Windows Azure Table error code along with the HTTP status code as it contains more information about the error. Additionally, when debugging your application, you should also consult the human readable element of the XML error response.
These links are also useful:
Microsoft Azure: Status and Error Codes
Clean way to catch errors from Azure Table (other than string match?)
If you are using the Azure Storage SDK to access Azure Table Storage, the SDK throws a StorageException on the client side for unexpected HTTP status codes returned from the Table storage service. To extract the actual HttpStatusCode you need to wrap your code in a try {} catch (StorageException ex) {} block and then inspect the exception object to extract the HttpStatusCode embedded in it.
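A minimal sketch of that pattern (assuming the classic Microsoft.WindowsAzure.Storage client and reusing the Log helper from the earlier answer):
try
{
    _table.Execute(TableOperation.Insert(entity));
}
catch (StorageException ex)
{
    // RequestInformation carries both the HTTP status code and the
    // storage-specific error code returned by the Table service.
    int statusCode = ex.RequestInformation.HttpStatusCode;
    string errorCode = ex.RequestInformation.ExtendedErrorInformation?.ErrorCode;

    Log("Table operation failed: HTTP " + statusCode + ", error code '" + errorCode + "'.");
}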
Have a look at the Azure Storage exception parser I implemented as a NuGet package:
https://www.nuget.org/packages/AzureStorageExceptionParser/
This extracts the HttpStatusCode and many other useful fields from Azure StorageExceptions. You can use the same library across table, blob, queue clients, etc., as they all follow the same StorageException pattern.
Note that some exceptions thrown by the Azure Storage SDK are not StorageExceptions; those are mostly client-side request-validation exceptions and naturally do not contain any HttpStatusCode. (Hence you need to catch StorageException specifically to extract the HttpStatusCode.)
As a separate note, the Azure Storage SDK has a fairly robust retry mechanism for failed requests. Below is the snippet from the SDK source code where it decides whether a failed response is retryable or not.
https://github.com/Azure/azure-storage-net/blob/master/Lib/Common/RetryPolicies/ExponentialRetry.cs
if ((statusCode >= 300 && statusCode < 500 && statusCode != 408)
    || statusCode == 501 // Not Implemented
    || statusCode == 505 // Version Not Supported
    || lastException.Message == SR.BlobTypeMismatch)
{
    return false; // i.e., do not retry if we get here; otherwise retry while within the max retry count.
}
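If you want to tune that behavior rather than rely on the defaults, the classic SDK lets you set the retry policy yourself; a sketch (the back-off interval and attempt count are arbitrary examples, and storageAccount is assumed to be an existing CloudStorageAccount):
// Override the default retry policy for all table requests made through this client.
var tableClient = storageAccount.CreateCloudTableClient();
tableClient.DefaultRequestOptions.RetryPolicy =
    new ExponentialRetry(deltaBackoff: TimeSpan.FromSeconds(2), maxAttempts: 5);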
We are trying to crawl the content of a small, single-node SharePoint 2010 installation. The crawling is partially successful: we are able to retrieve data delivered from the web services (_vti_bin/sitedata.asmx), but when the crawler tries to access the full page contents, it fails. The error message shown in the Crawl Log is:
The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly.
The error which is logged in the ULS is:
08/27/2010 01:52:02.92 mssdmn.exe (0x0A7C) 0x03E4 SharePoint Server Search HTTP Protocol Handler du54 High CHttpAccessorHelper::InitRequestInternal - unexpected status (500) on request for 'http://staging.dsr.dk/_layouts/error.aspx' Authentication 1. [httpacchelper.cxx:657] d:\office\source\search\native\gather\protocols\http\httpacchelper.cxx
08/27/2010 01:52:02.92 mssdmn.exe (0x0A7C) 0x03E4 SharePoint Server Search PHSts dv44 High CSTS3Accessor::Init: InitRequest failed for URL http://staging.dsr.dk/Pages/Forside.aspx Return error to caller, hr=80041206 [sts3acc.cxx:546] d:\office\source\search\native\gather\protocols\sts3\sts3acc.cxx
08/27/2010 01:52:02.92 mssdmn.exe (0x0A7C) 0x03E4 SharePoint Server Search PHSts dvb1 High CSTS3Accessor::Init fails, Url sts4://staging.dsr.dk/siteurl=/siteid={a78b7d4f-059f-4484-8564-449cd12a97cf}/weburl=/webid={1189e380-76fd-44b7-99a2-ebd4f7245c3d}, hr=80041206 [sts3handler.cxx:312] d:\office\source\search\native\gather\protocols\sts3\sts3handler.cxx
08/27/2010 01:52:02.92 mssdmn.exe (0x0A7C) 0x03E4 SharePoint Server Search PHSts dvb2 High CSTS3Handler::CreateAccessorExD: Return error to caller, hr=80041206 [sts3handler.cxx:330] d:\office\source\search\native\gather\protocols\sts3\sts3handler.cxx
We have configured the system according to http://support.microsoft.com/kb/896861 (method 1).
We have used Fiddler2 to look at the HTTP traffic, which seems normal, i.e., we can see all the requests to _vti_bin/... But the request shown above, to the sts4 protocol, is not caught by Fiddler2. Hints on how to debug the STS4 traffic would be welcome.
Any suggestions on how to make the crawler successfully crawl the full page contents?
Thank you!
Thomas
It turned out the hint was a little higher up in the ULS log:
Unexpected System.FormatException: Input string was not in a correct format. at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal) at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info) at System.Convert.ToInt32(String value) at DSR.Portal.Core.Service.Identity.IdentityUtility.GetMember(String memberNumberOrCPR) at DSR.Portal.Core.Service.Identity.DSRMembershipProvider.GetUser(String username, Boolean userIsOnline) at DSR.Portal.Core.Service.Identity.DSRMembershipUser.get_Current()
We had implemented a custom MembershipProvider which expected user IDs to be numbers. This failed for Windows-authenticated users, throwing the stack trace above. As a result, the crawler account was not able to retrieve pages, and this caused the problem for the “gatherer”.
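For illustration only (the member lookup helper is hypothetical; the method signature comes from the stack trace above), a more defensive GetUser avoids Convert.ToInt32 on names that aren't numeric:
public override MembershipUser GetUser(string username, bool userIsOnline)
{
    // Windows-authenticated accounts (e.g. DOMAIN\user) are not numeric,
    // so don't assume the username can be parsed as a member number.
    int memberNumber;
    if (!int.TryParse(username, out memberNumber))
    {
        return null; // not one of our numeric member IDs
    }

    return LookupMemberByNumber(memberNumber); // hypothetical lookup helper
}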
So the moral of the story: ALWAYS make sure Windows Authentication works.
Regards
Thomas