I am adding a message to a gmail folder using this (example) URL:
https://www.googleapis.com/gmail/v1/users/user@domain.com/messages/import?uploadType=multipart
The body of the request looks like this:
--test_abc123
Content-Type: application/json; charset=UTF-8

{
  "labelIds": [ "Label_525" ],
  "raw": "RnJvbTogIlNlY3RpZ28gQ2VydGlmaWNh..."
}
--test_abc123--
The raw data is a base64-encoded standard MIME message that looks normal to me. The result of this POST is HTTP error 400 with the error response "Payload parts count different from expected 2. Request payload parts count: 1".
I can supply the original MIME text if that is helpful, but let me emphasize that I have been running this code for several years without a problem. I've tried different messages to test this out, but it appears that Google has changed something to break my software.
Is Google objecting to my raw data, or something about the MIME encoding? Any ideas what the problem could be?
---- Addendum ----
I have gotten a few messages to work; they all seem to have image or data attachments. However, I really don't see any problem with the messages that are failing: I can import them into Office 365 or Thunderbird or anything else and they render just fine. As a test, I tried importing the message below, which was taken from the MIME RFC. It fails with the same error. I think that Google has changed something to make their MIME parser very fussy, but I don't see how I can fix my input data.
From: Nathaniel Borenstein <nsb@bellcore.com>
To: Ned Freed <ned@innosoft.com>
Subject: Sample message
MIME-Version: 1.0
Content-type: multipart/mixed; boundary="simple boundary"

This is the preamble. It is to be ignored, though it
is a handy place for mail composers to include an
explanatory note to non-MIME compliant readers.

--simple boundary

This is implicitly typed plain ASCII text.
It does NOT end with a linebreak.
--simple boundary
Content-type: text/plain; charset=us-ascii

This is explicitly typed plain ASCII text.
It DOES end with a linebreak.

--simple boundary--

This is the epilogue. It is also to be ignored.
Addendum 2: I tried a simple upload (using the content-type header message/rfc822) and it worked, except the message was unlabeled. How
would I specify what label I want applied to a message? I was originally trying to follow the documentation here
link
which tells me to create the JSON body that I gave above. This allows me to specify the label. But I cannot seem to use
this body in a simple upload. The content type is either invalid, or what Gmail imports is just literally the JSON body;
it does not parse out the raw data. If you could point me to a specific example showing the URI, message body, and HTTP headers
(not Java code), that would be very useful to me.
OK never mind, I got it working by adding an empty message/rfc822 part to the body of the multipart upload. That satisfies Google, and the empty part is ignored in favor of the raw data.
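Roughly, the body Google accepts now looks like this (a reconstruction from the description above, reusing the example boundary and label; the second part is intentionally empty):

--test_abc123
Content-Type: application/json; charset=UTF-8

{
  "labelIds": [ "Label_525" ],
  "raw": "RnJvbTogIlNlY3RpZ28gQ2VydGlmaWNh..."
}
--test_abc123
Content-Type: message/rfc822

--test_abc123--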
You are doing a multipart upload, see here:
The body of the request is formatted as a multipart/related content
type [RFC2387] and contains exactly two parts. The parts are
identified by a boundary string, and the final boundary string is
followed by two hyphens.
This is why it works only for your messages with images or attachments, since your --test_abc123 request body above contains only one part.
In the past there was no check that this condition was fulfilled, so you might have gotten away with using multipart for one-part messages.
But now it's not possible anymore, so if you have a single-part message, you should use a simple upload.
If you do not know in advance how many parts your message has, you can always try the multipart upload first inside a try...catch statement, and fall back to a simple upload request in the catch block if it fails.
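A rough sketch of that fallback in Node (the fetch calls, URI shape, and parameter names here are illustrative, not the asker's actual code; the simple upload sends the raw RFC 822 message with Content-Type message/rfc822, as in Addendum 2):

async function importMessage(accessToken, boundary, multipartBody, rawRfc822) {
  const base = 'https://www.googleapis.com/gmail/v1/users/user@domain.com/messages/import';
  try {
    // Multipart upload: JSON metadata plus message part, as described above.
    const res = await fetch(base + '?uploadType=multipart', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'multipart/related; boundary=' + boundary
      },
      body: multipartBody
    });
    if (!res.ok) throw new Error('multipart import failed: ' + res.status);
    return await res.json();
  } catch (e) {
    // Fallback: simple upload, where the body is just the RFC 822 message itself.
    const res = await fetch(base + '?uploadType=media', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'message/rfc822'
      },
      body: rawRfc822
    });
    return await res.json();
  }
}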
I am building a web server from an ESP8266 that will send environmental data to any web client as a web page. I'm using the Arduino IDE.
The problem is that the data can get rather large at times, and all of the examples I can find show assembling a web page in memory and sending it all at once to the client via ESP8266WebServer.send(). This is ok for small web pages, but won't work with the amount of data I need to send.
What I want to do is send the first part of the web page, then send the data out directly as I gather it, then send the closing parts of the web page. Is this even possible? I've looked unsuccessfully for documentation and there doesn't seem to be any examples anywhere.
For future reference, I think I figured out how to do it, with help from this page: https://gist.github.com/spacehuhn/6c89594ad0edbdb0aad60541b72b2388
The gist of it is that you still use ESP8266WebServer.send(), but you first send an empty string with the Content-Length header set to the size of your data, like this:
server.sendHeader("Content-Length", (String)fileSize);
server.send(200, "text/html", "");
Then you send buffers of data using ESP8266WebServer.sendContent() repeatedly until all of the data is sent.
Hope this helps someone else.
I was having a big issue and a headache serving big strings, concatenated together with other string variables, to the ESP32 Arduino web server with
server.send(200, "text/html", BIG_WEBPAGE);
and it often resulted in a blank page, as I reported in my initial error.
What was happening was this error:
E (369637) uart: uart_write_bytes(1159): buffer null
I don't recommend using the above server.send() function.
After quite a lot of research I found this piece of code that simply works like a charm. I just chunked my webpage into six pieces (WEBPAGE_BIG_0 through WEBPAGE_BIG_5), as you see below.
server.sendHeader("Cache-Control", "no-cache, no-store, must-revalidate");
server.sendHeader("Pragma", "no-cache");
server.sendHeader("Expires", "-1");
server.setContentLength(CONTENT_LENGTH_UNKNOWN);
// here begin chunked transfer
server.send(200, "text/html", "");
server.sendContent(WEBPAGE_BIG_0);
server.sendContent(WEBPAGE_BIG_1);
server.sendContent(WEBPAGE_BIG_2);
server.sendContent(WEBPAGE_BIG_3);
server.sendContent(WEBPAGE_BIG_4);
server.sendContent(WEBPAGE_BIG_5);
server.client().stop();
I really owe much to this post. Hope the answer helps someone else.
After some more experiments I realized the code is faster and more efficient if you do not feed a string variable into the server.sendContent function. Instead you just paste the actual string value there:
server.sendContent("<html><head>my great page</head><body>");
server.sendContent("my long body</body></html>");
It is very important that when you chunk the webpage you don't split HTML tags and you don't split a JavaScript expression (like cutting a while or an if in half); when chunking scripts, split after a semicolon or, better, between two function declarations.
Chunked transfer encoding is probably what you want, and it's helpful in the situation where the web page you are sending is being dynamically created on-the-fly and is also too large to fit into memory. In this situation, you have two problems. One, you can't send it all at once, and two, you don't know ahead of time how big the result is going to be. Both problems can be fixed like this:
String webPageChunk = "some html";
server.setContentLength(CONTENT_LENGTH_UNKNOWN); // unknown length switches the server to chunked transfer
server.send(200, "text/html", webPageChunk);     // headers plus the first chunk go out here
while (<page is being generated>) {              // pseudocode condition: loop while you still have content
  webPageChunk = "some more html";
  server.sendContent(webPageChunk);
}
server.sendContent("");                          // an empty chunk terminates the response
Sending an empty string tells the client the response is complete. Be careful not to send one in your loop before you're done generating the whole page.
I want to parse interaction message requests coming from Slack. This is what Slack says in their docs:
The body of that request will contain a payload parameter. Your app
should parse this payload parameter as JSON.
That seemed straightforward, so I parsed it like so:
JSON.parse(decodeURIComponent(body.split('=')[1]))
However, in the string-fields of the resulting object, I see pluses instead of spaces:
"There+should+not+be+pluses+here"
What am I doing wrong here?
Took a look at their library here, and it turns out they use Node's querystring.parse().
So the parsing procedure should look like this:
JSON.parse(querystring.parse(body).payload)
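For reference, a minimal sketch in Node (assuming body holds the raw application/x-www-form-urlencoded request body as a string):

const querystring = require('querystring');

// Unlike decodeURIComponent(), querystring.parse() also turns '+' back into spaces.
const payload = JSON.parse(querystring.parse(body).payload);
console.log(payload);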
I'm having an issue with a NodeJS REST api created using express.
I have two calls, a get and a post set up like this:
router.get('/:id', (request, response) => {
  console.log(request.params.id);
});

router.post('/:id', (request, response) => {
  console.log(request.params.id);
});
Now, I want the ID to be able to contain special characters (UTF-8).
The problem is, when I use postman to test the requests, it looks like they are encoded very differently:
GET http://localhost:3000/api/â outputs â
POST http://localhost:3000/api/â outputs â
Does anyone have any idea what I am missing here?
I must mention that the POST call also contains a file upload, so the content type will be multipart/form-data.
You should encode your URL on the client and decode it on the server. See the following articles:
What is the proper way to URL encode Unicode characters?
Can urls have UTF-8 characters?
Which characters make a URL invalid?
For JavaScript, encodeURI may come in handy.
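For example, on the client side (a quick sketch using the id from the question):

// encodeURI keeps the URL structure intact but percent-encodes the non-ASCII id.
const url = encodeURI('http://localhost:3000/api/â');
// -> "http://localhost:3000/api/%C3%A2"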
It looks like Postman does UTF-8 encoding but NOT proper URL encoding. Consequently, what you type in the request URL box translates to something different from what would happen if you typed that URL in a browser.
I'm requesting: GET localhost/ä but it encodes it on the wire as localhost/ä
(This is now an invalid URL because it contains non-ASCII characters.)
But when I type localhost/ä in to google chrome, it correctly encodes the request as localhost/%C3%A4
So you could try manually url encoding your request to http://localhost:3000/api/%C3%A2
In my opinion this is a bug (perhaps a regression). I am using the latest version of Postman, v7.11.0, on macOS.
Does anyone have any idea what I am missing here?
Yeah, it doesn't output â, it outputs â, but whatever you're checking the result with thinks it's reading something else (ISO-8859-1 maybe?), not UTF-8, and renders it as â.
Most likely, you're viewing the result in a web browser, and the web server is sending the wrong Content-Type header. Try doing header("Content-type: text/plain;charset=utf-8"); or header("Content-type: text/html;charset=utf-8");, then your browser should render your â correctly.
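The header() calls above look like PHP; in the Express handlers from the question, a rough equivalent would be:

// Sketch: declare the charset explicitly so the browser decodes the bytes as UTF-8.
router.get('/:id', (request, response) => {
  response.set('Content-Type', 'text/plain; charset=utf-8');
  response.send(request.params.id);
});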
I'm trying to make a request with Content-Type x-www-form-urlencoded that works perfectly in Postman but does not work in an Azure Logic App. I receive a Bad Request response for missing parameters, as if I had not sent anything.
I'm using the HTTP action.
The body value is param1=value1&param2=value2, but I tried other formats.
HTTP Method: POST
URI : https://xxx/oauth2/token
In the Headers section, add the below Content-Type:
Content-Type: application/x-www-form-urlencoded
And in the Body, add:
grant_type=xxx&client_id=xxx&resource=xxx&client_secret=xxx
Try out the below solution. It's working for me.
concat(
  'grant_type=', encodeUriComponent('authorization_code'),
  '&client_id=', encodeUriComponent('xxx'),
  '&client_secret=', encodeUriComponent('xxx'),
  '&redirect_uri=', encodeUriComponent('xxx'),
  '&scope=', encodeUriComponent('xxx'),
  '&code=', encodeUriComponent(triggerOutputs()['relativePathParameters']['code'])
)
Here, code is a dynamic parameter coming from the previous flow's query parameter.
NOTE: Do not forget to set the Content-Type header to application/x-www-form-urlencoded.
Answering this one, as I needed to make a call like this myself, today.
As Assaf mentions above, the request indeed has to be URL-encoded, and a lot of times you want to compose the actual message payload.
Also, make sure to add the Content-Type header in the HTTP action with the value application/x-www-form-urlencoded.
Therefore, you can use the following expression to combine variables that get URL-encoded:
concat('token=', encodeUriComponent(body('ApplicationToken')?['value']), '&user=', encodeUriComponent(body('UserToken')?['value']), '&title=Stock+Order+Status+Changed&message=to+do')
When using the concat function (in composing), the curly braces are not needed.
First of all the body needs to be:
{ param1=value1&param2=value2 }
(i.e. surround with {})
That said, value1 and value2 should be URL-encoded. If they are a simple string (e.g. a_b) then this would be fine as is, but if it is for example https://a.b it should be converted to https%3A%2F%2Fa.b.
The easiest way I found to do this is to use https://www.urlencoder.org/ to convert it. Convert each param separately and put the converted value in place of the original one.
Here is the screenshot from the solution that works for me; I hope it will be helpful. This is an example with the Microsoft Graph API but it will work in any other scenario.
I am trying to return text/plain from a CouchDB list function but it is not working. The content is returned correctly but the content type seems to be forced to application/json.
The code snippet is:
start({ "headers" : {"Content-type" : "text/plain"}});
send("Nono, you can't do this");
Before this code there is one getRow() call. If I remove that, the text/plain content type is returned as expected.
I'm not sure what is going wrong here, and I can't really avoid the getRow() call, as its result determines the content type to return.
Any guidance warmly welcomed!
CouchDB starts the response (including sending the default Content-Type header) when you first call getRow(), so what you are seeing is expected behaviour.
Submit a ticket to our JIRA (http://issues.apache.org/jira/browse/COUCHDB), though, and perhaps it can be delayed, allowing the useful effect you are attempting.
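For reference, a list function only keeps a custom Content-Type when start() runs before the first getRow(), so the usual shape is roughly this (a sketch, not the asker's actual code):

function (head, req) {
  // start() must run before the first getRow(); otherwise the default
  // application/json headers have already been sent.
  start({ "headers": { "Content-Type": "text/plain" } });
  var row;
  while ((row = getRow())) {
    send(row.key + "\n");
  }
}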