I'm pretty new here, so I hope I can get some help with a basic question I haven't been able to get my head around yet.
I'm using Node.js, and I have followed the RabbitMQ Get Started tutorial and understood the flow; however, my question is about HTTP requests.
What I need:
Manage HTTP (POST, PUT, GET, DELETE) requests to another server.
What I was expecting:
RabbitMQ manages the request queue, so if a request fails it is retried. When it succeeds, it calls another API on my end to flag that the request was successful.
What my question is:
I couldn't find any example showing how I would set up this kind of request, providing the target URL, METHOD, and PAYLOAD, as well as the callback URL, METHOD, HEADERS, and PAYLOAD.
Is that something RabbitMQ handles, or am I getting it wrong?
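For illustration, here is a minimal Node.js sketch of the flow described above, assuming the amqplib package, Node 18+ (for the global fetch), a queue named http_requests, and a made-up message shape of { url, method, payload, callback: { url, method, headers, payload } }:

    // Minimal sketch: RabbitMQ only stores the messages; the consumer below
    // performs the HTTP call, notifies the callback, and requeues on failure.
    const amqp = require('amqplib');

    async function startWorker() {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();
      await ch.assertQueue('http_requests', { durable: true });

      ch.consume('http_requests', async (msg) => {
        const job = JSON.parse(msg.content.toString());
        try {
          // Forward the request described in the message to the other server.
          const res = await fetch(job.url, {
            method: job.method,
            headers: { 'Content-Type': 'application/json' },
            body: ['GET', 'DELETE'].includes(job.method) ? undefined : JSON.stringify(job.payload),
          });
          if (!res.ok) throw new Error(`Upstream returned ${res.status}`);

          // On success, call the callback endpoint to flag the request as done.
          await fetch(job.callback.url, {
            method: job.callback.method,
            headers: job.callback.headers,
            body: JSON.stringify(job.callback.payload),
          });
          ch.ack(msg);
        } catch (err) {
          // On failure, requeue the message so it is retried later.
          ch.nack(msg, false, true);
        }
      });
    }

    startWorker().catch(console.error);

In practice you would also add a delay or a dead-letter queue so a permanently failing request doesn't get requeued in a tight loop.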
I am trying to send a GET request to my API to get a list of users, but there is an exclude list of users that the response must leave out. How can I send this exclude list in my GET request?
You can send a body with the request, but query parameters are probably the best way to do it. The folks at Elastic.co say:
The truth is that RFC 7231—the RFC that deals with HTTP semantics and content—does not define what should happen to a GET request with a body! As a result, some HTTP servers allow it, and some—especially caching proxies—don’t.

The authors of Elasticsearch prefer using GET for a search request because they feel that it describes the action—retrieving information—better than the POST verb. However, because GET with a request body is not universally supported, the search API also accepts POST requests.
In the browser (with fetch or XMLHttpRequest) you cannot send a request body with a GET request, so in practice you would add the exclude list as a query parameter, or make a POST request instead.
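For example, a rough Node.js sketch of both approaches (Node 18+ global fetch; the endpoint paths and the exclude parameter name are assumptions):

    // Both options assume Node 18+ (global fetch); URLs and parameter names are made up.
    async function getUsers(excludeIds) {
      // Option 1: pass the exclude list as a query parameter on a GET request.
      const url = new URL('https://example.com/api/users');
      url.searchParams.set('exclude', excludeIds.join(','));
      const viaGet = await (await fetch(url)).json();

      // Option 2: send the exclude list in the body of a POST request
      // (useful when the list is too long to fit comfortably in a URL).
      const viaPost = await (
        await fetch('https://example.com/api/users/search', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ exclude: excludeIds }),
        })
      ).json();

      return { viaGet, viaPost };
    }

In an Express handler, for example, the list would then arrive as req.query.exclude (a comma-separated string) or req.body.exclude, respectively.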
I'm setting up a website that will be mobile-focused, and one of the features I want to implement is letting users log an entry just by scanning a QR code.
From what I've read, it's not really possible to make a POST request directly from a QR code, so I was thinking of two different options:
1. Make a GET request and then redirect it inside my server to a POST route in my routes.
So the URL would be something like https://example.com/user/resources/someresourceid123/logs/new, and this would then issue a POST request to https://example.com/user/resources/someresourceid123/logs/, create the new entry, and send a response to the user. But I'm not really sure this is the best approach, or whether it's possible at all.
My POST request only requires the resourceid, which I should be able to get from req.params, and the userid, which I get from req.user.
2. Do my logic and log the entry in my DB directly from the GET request to https://example.com/user/resources/someresourceid123/logs/new.
This would mean that my controller for that request does everything needed from the GET request, without having to make an additional POST request afterwards. I should be able to get both the resourceid and the userid from the req object, but I'm not sure whether being a GET request limits what I can do with it.
If any of those are possible, which would be the best approach?
I'd propose going with the second option, simply for the sake of performance. But you need to make sure your requests are not cached by any proxy, which commonly happens with GET requests.
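As a rough Express-style sketch of that second option (the route shape follows the question, while the auth middleware, the Cache-Control value, and the DB call are assumptions):

    const express = require('express');
    const router = express.Router();

    // GET route that the QR code points at; it does the logging itself.
    router.get('/user/resources/:resourceId/logs/new', async (req, res) => {
      // Ask browsers and proxies not to cache the response, so every scan reaches the server.
      res.set('Cache-Control', 'no-store');

      const { resourceId } = req.params;
      const userId = req.user.id; // assumes your auth middleware populates req.user

      // ... create the log entry in your DB here, e.g. await Log.create({ resourceId, userId }) ...

      res.redirect(`/user/resources/${resourceId}/logs`);
    });

    module.exports = router;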
If I understand correctly, a preflight OPTIONS request is sent as a way of asking "what's allowed here?". Then, once the response comes back, if the request is allowed, the calling site sends the POST request (or GET, but in my case it's a POST). I have found that, at least with Azure Function Apps, the OPTIONS request executes the code that I expected only the POST to execute. I believe this to be the case because once I added some null checking (since the OPTIONS request doesn't have a payload in the body), everything worked fine.
I'm wondering if this is standard.
It seems to me that if I had written the API without using Azure Function Apps, I'd have the OPTIONS request sent down a path that sets the appropriate headers and returns a 200 response, and the POST request sent down a different path that expects a payload in the body. If that's how it usually works, then I've just found an idiosyncrasy of the Azure functionality. But if not, it means I have something to learn about the OPTIONS preflight request.
Thanks in advance for your advice.
Denise
As sideshowbarker mentioned, the OPTIONS request is sent automatically by the browser to check if the cross-origin request can be made.
In the case of Azure Functions, this is handled by Azure when running in the cloud.
If your function is being triggered, that would mean that you have "options" listed as a supported method for your HTTP trigger:
In the HttpTrigger attribute for C# functions
In function.json for non-C# functions
If you want to customize the CORS responses and/or you are running functions in a container, you could always include "options" as a supported method and respond differently when the incoming HTTP method is OPTIONS.
Also, if you are using Azure API Management with Azure Functions, you could offload CORS handling to it instead or even use Functions Proxies as shown here.
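As a sketch of the "respond differently when the incoming method is OPTIONS" idea, a JavaScript Azure Function might look like this (assuming function.json lists both "options" and "post" under methods; the header values are placeholders, not a recommendation):

    module.exports = async function (context, req) {
      if (req.method === 'OPTIONS') {
        // Answer the preflight here and return before any business logic runs.
        context.res = {
          status: 204,
          headers: {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'POST, OPTIONS',
            'Access-Control-Allow-Headers': 'Content-Type',
          },
        };
        return;
      }

      // Only real POSTs (which carry a payload) reach this point.
      context.res = {
        status: 200,
        body: { received: req.body },
      };
    };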
Thanks y'all! Sorry I was unclear. And sorry it took me a while to get back. Things have been a bit crazy on this end.
Yes, the function being called is mine. And now I understand the browser doesn't have much choice as to whether or not it makes the OPTIONS call.
And yes, I could make my Azure function handle an OPTIONS call differently; thanks for that suggestion too. That's sort of what I ended up doing, but basically I did it by handling an empty payload. I didn't follow that best practice originally because I thought any valid request would have a payload; accordingly, any request that did not have a payload was invalid and should be turned away as a failure of some sort. This was before I knew that the OPTIONS call was actually executing that function.
My remaining question is if I had NOT been using Azure... if I had rolled my own solution and hosted it somewhere, I'd have a class or at least methods that handle calls to this particular API. (This is something I'm new to so bear with me if my terms aren't quite right and please do correct me). So if I'd done my own API, I'd have one method to handle a POST call and a different method to handle an OPTIONS call, wouldn't I? And the method that handles the OPTIONS call would return information about what's legally do-able with this API. And the method that handles a POST call would handle the payload sent with it. And the method that handles the POST wouldn't get executed when an OPTIONS request is sent. At least that's how I figured it would work. And that's my question -- is that how it's done when not letting something like Azure handle some of the infrastructure?
I'm just trying to learn if the OPTIONS request executing a POST's function is standard practice or if it's some kind of idiosyncrasy of working with Azure Functions.
Thanks again for the advice and for helping me understand these questions.
I would like to build an integration using the Acumatica REST API. However, before logging in or doing anything else, I would like to know if there is a way to test that the server is up and running.
I've tried logging in, and I looked at the swagger.json to see all the endpoints, but I think they require you to be logged in.
I would expect a 200 response when the server is up, a 5xx response if there are server issues, and a connection error if it is completely down.
There is no test function that I know of. I would recommend doing an HTTP GET request on the endpoint URL. If the request succeeds, it will return the WSDL schema with a 200 success code.
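For example, a small Node.js reachability probe could be just an unauthenticated GET against the endpoint URL (this is not an official Acumatica health check, and the URL is an assumption you would replace with your own instance):

    // Assumes Node 18+ (global fetch). Returns true if the endpoint answered with 2xx.
    async function isServerUp(endpointUrl) {
      try {
        const res = await fetch(endpointUrl, { method: 'GET' });
        return res.ok; // 2xx: the server is up and serving the endpoint
      } catch (err) {
        return false; // DNS failure, connection refused, timeout, etc.
      }
    }

    isServerUp('https://example.acumatica.com/entity/Default/20.200.001/')
      .then((up) => console.log(up ? 'Server reachable' : 'Server unreachable'));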
I have a RequestLog feature completely decoupled from the application logic.
I capture the request/response in a pre-request filter. To accomplish this, I instantiate a request-scoped object that keeps the request context, and before everything gets disposed (at the AppHost's OnEndRequest handler) I write it to the DB: one line per HTTP request.
I'm able to access the response code, the path, the method, the request body, the request headers, etc.
However, the response stream is not available, as it has already been disposed. What's the logic behind this? Is it that IIS writes the stream content to the wire and releases the resource immediately? Is there any way I can capture the response body in the OnEndRequest handler?
Thanks
No, ServiceStack doesn't buffer the response stream; it gets written directly to the ASP.NET response.
You can add a Global Response Filter or a custom ServiceRunner to capture the service's response if the request reaches that far, but the request can be short-circuited at any time throughout the request pipeline by closing the response.
I had the same issue with capturing the response in ServiceRequestLogger. Read this post - it is solved in 4.0.39+ (currently pre-release)
ServiceStack response filter "short circuiting" causing problems