I am using the Domino Data Access (via RESTClient) to update docs in a database. I'm using PATCH and PUT. In both cases (PATCH is a header override) I don't get a response back from the Domino server. RESTClient gives me a "processing data" and that's it. If I abort, I can see the replace or update has been done. So DDA is working, except I'm not getting a 200 or other response back. Default and Anonymous can create/edit (database is still in testing), and I've tried with and without the form and computewithform parameters. I'm not seeing anything in the server log.
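For reference, the update I'm sending is equivalent to this jQuery sketch (the server, database path, and UNID are placeholders, not my real ones):

// PATCH tunneled through POST via the X-HTTP-Method-Override header
$.ajax({
    url: "http://myserver/testdb.nsf/api/data/documents/unid/AAAABBBBCCCCDDDDAAAABBBBCCCCDDDD",
    type: "POST",
    headers: { "X-HTTP-Method-Override": "PATCH" },
    contentType: "application/json",
    data: JSON.stringify({ Subject: "Updated via DDA" }),
    success: function () { console.log("got a response"); },
    error: function (xhr) { console.log("error: " + xhr.status); }
});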
Could someone give me a pointer on where to look? It seems that something is keeping the completion acknowledgment from being sent, but I don't know what that would be. Other tests, using GET, respond fine.
Thanks,
Brian
Turns out this was a problem with RESTClient, not DDA/DDS. Thanks to Fotios Hatzis, who figured it out. I tried with a Chrome extension, and the response displays as expected.
I started working with the WebDAVSharp.Server library to make it work in a custom project of mine.
The number of changes I had to make was breathtaking.
I have reached the point of making it work acceptably for my case, but I still have an issue with Office 2003 opening the files as read-only.
I have fixed the non-root problem I had, and my server responds to the PROPFIND and OPTIONS methods at all levels of the WebDAV link, but Office 2003 opens the document as read-only, and when I try to save it to the WebDAV URL manually, it stops itself from saving the document after my PROPFIND response. I cannot find why it does that, because I have checked almost everything. Also, newer Office versions (e.g. 2013, 2016) don't have this problem.
Here is the request from MS Word:
And here is the WebDAV response:
And here are the response headers:
I am not an expert on WebDAV, but it asks for all the properties of the documents and the response gives it just that. What else does it want?
Also, it never sends a LOCK method, so I doubt it is a LOCK problem.
And the request flow is like this:
The unauthorized requests are because it first sends the request without credentials and then retries with credentials.
The first PROPFIND is for the containing folder/collection. I give a proper response to that, signifying it is a folder.
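That is, in my multistatus response the resourcetype marks it as a collection, along these lines (a trimmed sketch, not my exact XML):

<D:response xmlns:D="DAV:">
    <D:href>/webdav/myfolder/</D:href>
    <D:propstat>
        <D:prop>
            <D:displayname>myfolder</D:displayname>
            <D:resourcetype><D:collection/></D:resourcetype>
        </D:prop>
        <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
</D:response>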
What is weird is that Office 2003 never seems to issue an OPTIONS request, so I never send the MS-Author-Via header. This could be what causes the problem, but what can I do to force it to send this request?
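For reference, my understanding is that an OPTIONS response for authoring would need headers along these lines (a sketch of what I would send, not a capture):

HTTP/1.1 200 OK
Allow: OPTIONS, GET, HEAD, PUT, DELETE, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK
DAV: 1, 2
MS-Author-Via: DAV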
Any ideas or hints are welcome. I am sure there are people out there that can find the problem by simply looking at this.
UPDATE:
After seeing this, I added the Win32FileAttributes property and others from the same Microsoft namespace, but I still don't see any improvement in the behavior. This is my new PROPFIND response XML:
But I think I should approach it the other way around: find out why, after the GET method, the document is in read-only mode.
One thing I also fixed is trusting my proxy certificate, which changed the request flow to this:
When I go through the linked documents, I get a 401 Unauthorized. How can I fix this issue?
It shows up like this in the terminal:
"HTTP request sent, awaiting response ... 401 unauthorized"
These look like files uploaded to an issue, merge request, comment, etc. You can only access those files within the context of the issue, merge request, or comment where they were originally posted. This is a security measure: you wouldn't want these files to be accessible to someone who wasn't part of your project, or when the issue is confidential, etc. Instead, you should either give the issue/merge request link to the person you wish to share with, or download the file and send it through other means.
When trying to post a WakeUp event with a JSON body to the Alexa events API using Node.js with axios or request-promise, the API always returns a 500 error.
I posted to an online endpoint to actually see what gets posted and learned that the POST body gets truncated, which obviously results in invalid JSON. I abstracted the problem and tried to run it from a virgin Node.js installation by using repl.it, and the result is the same.
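The abstracted test is essentially this (a trimmed sketch; the endpoint and token values are placeholders for the real, much longer ones, and the exact event schema isn't the point, since the truncation happens regardless of the payload shape):

const axios = require('axios');

// the long bearer/correlation tokens are what seem to trigger the truncation
const body = {
    event: {
        header: {
            namespace: 'Alexa.WakeOnLANController',
            name: 'WakeUp',
            payloadVersion: '3',
            messageId: 'message-id-001',
            correlationToken: 'AAAA...several-hundred-characters...'
        },
        endpoint: {
            scope: { type: 'BearerToken', token: 'Atza|...several-hundred-characters...' },
            endpointId: 'wol-device-001'
        },
        payload: {}
    }
};

axios.post('https://api.amazonalexa.com/v3/events', body, {
    headers: {
        'Content-Type': 'application/json',
        Authorization: 'Bearer Atza|...several-hundred-characters...'
    }
}).then(function (res) {
    console.log(res.status);
}).catch(function (err) {
    console.log(err.response ? err.response.status : err.message);
});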
Interestingly enough, there seems to be a relation between the length of the header and the body. So when I shorten the auth token in the header, more characters of the body get transferred. If I shorten the long tokens in the body to about 450 to 500 characters (it seems to vary) the whole request gets through. Obviously this is not a solution, because the tokens are needed for authentication.
When I experimented with the axios version, lowering it to 0.10, I once got a result, but posting again led to another 500. If I post often enough, some requests get through complete, even on the current axios version. I also tried using request-promise, with the same outcome.
I got the feeling that I made a really stupid mistake but I can't find it and I really couldn't find anything on this topic, so it's driving me crazy. Any help would be greatly appreciated!
This looks like a tricky one... first of all, I don't think you're making a really stupid mistake. It looks to me like one of the low-level modules doesn't like something in the POST body for some reason (really weird)... I've played about with this and I'm getting exactly the same behaviour with both axios and request... if I comment out the tokens (correlationToken and bearer token), everything works fine.
If I test this locally, everything works as it should (e.g. set up an Express server and log the POST body).
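Something along these lines (a minimal sketch):

const express = require('express');
const app = express();

app.use(express.json());

// log the received body size so it can be compared with what the client sent
app.post('/test', function (req, res) {
    console.log('received body length:', JSON.stringify(req.body).length);
    res.sendStatus(200);
});

app.listen(3000);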
Also posting to https://postman-echo.com/post works as expected (with the original post data)..
I've created this here: https://repl.it/repls/YoungPuzzlingMonad
It looks to me like the original request to http://posthere.io is failing because of the request size only. If you try a very basic POST with a large JSON body you get the same result.
I get the same result with superagent too... this leads me to believe this is something server-side...
This was not related to the POST request at all. The reason for the error after sending the WakeUp event was the missing configuration parameter containing the MACAddresses in the Alexa.WakeOnLANController interface.
I used the AlexaResponse class to add the capability via createPayloadEndpointCapability, which had not been modified to support the "new" WakeOnLANController interface yet.
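For anyone hitting the same thing: the discovery capability needs a configuration block carrying the MAC addresses, along these lines (the address is a placeholder):

{
    "type": "AlexaInterface",
    "interface": "Alexa.WakeOnLANController",
    "version": "3",
    "configuration": {
        "MACAddresses": ["00:1A:2B:3C:4D:5E"]
    }
}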
It's a pity that the discovery was accepted and my WOL-capable device was added to my smart home devices although a required parameter was missing :(
posthere.io cutting off long POST bodies cost me quite a few hours... On the upside, I got to know many different ways of issuing a POST request in Node ;)
Thanks again Terry for investigating!
I've recently started using Restangular for making cross domain requests to a RESTful service, and so far everything works great.
But with IE10, when I make a GET request, it gets data from the server only the first time; subsequent calls don't hit the server and return what is presumably cached data. I need the data refreshed from the server. I tried setting the defaultHttpFields cache to false, but no luck. Please help!
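For reference, my setup is roughly this (the base URL is a placeholder):

var app = angular.module('myApp', ['restangular']);

// tried to disable caching for all requests made through Restangular
app.config(function (RestangularProvider) {
    RestangularProvider.setBaseUrl('http://api.example.com');
    RestangularProvider.setDefaultHttpFields({ cache: false });
});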
Thanks,
Lakshmi
I'm the creator of Restangular.
Could you please post an example? If you didn't set cache to true in defaultHttpFields, Restangular shouldn't cache this at all.
Have you checked whether the requests are going out in the Network tab of the developer console? Does it work in other browsers? Check RestangularResource in the Restangular library to see if it's making the $http call.
Hope it helps!
I just hit this one too. Seems that IE10 is particularly keen on caching results from RESTful calls.
One workaround I used was to just provide some unique value as a parameter to each request and then IE10 seems happy not to cache it. I used the current timestamp in ms since I've seen jQuery use similar workarounds in the past.
var postsApi = Restangular.all("posts");
$scope.allPosts = postsApi.getList({ nocache : new Date().getTime() });
Works for now.
I created a userscript for myself which is active on all webpages I visit. It sends data to my debugger/app via jQuery's post ($.post).
I noticed one site not allowing me to send data even though it worked before, and after a quick look it appears there is some kind of error via xhr-src. The response headers have an 'X-Content-Security-Policy' header which lists a bunch of sites (Google being one). So when I try to do a post to localhost:myport/ it violates the rule and thus doesn't post.
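The call itself is nothing special; essentially this (the port and payload are placeholders):

// send basic page data to my local debugger app
$.post('http://localhost:1234/log', {
    url: location.href,
    title: document.title
});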
What can I do to get this working again? I can't exactly edit the headers (unless I write my own HTTP proxy?). Would I be able to create an iframe using localhost:1234/workaround and post via that? But the issue is I still don't know if that's a violation, or how to give it the data.