curl chunky parser error - linux

"Received problem 3 in the chunky parser"
I can't for the life of me find what "problem 3" in curl refers to. I'm sure it has to do with the format of the chunk I'm sending from the app server to curl, but I can't figure out what is wrong with the chunk because I can't tell what "problem 3" is.
Any ideas?

The number you see there is CHUNKE_BAD_CHUNK from the CHUNKcode enum in lib/http_chunks.h in the libcurl source code. From a quick look, it seems to be used mostly when a CR or LF is missing from the chunked data.
I would recommend you investigate the raw HTTP content stream to see what is wrong with the chunked format. RFC 2616, section 3.6.1, documents it.
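If you can capture the raw response body (for example with a proxy or tcpdump), a small script can report where the framing breaks. This is only a rough sketch in Python of the checks involved, not libcurl's actual parser:

# Walk a captured chunked body (headers already stripped) and report
# the first place where the framing is wrong.
def check_chunked(body):
    pos = 0
    while True:
        end = body.find(b"\r\n", pos)
        if end == -1:
            raise ValueError("no CRLF after chunk size at offset %d" % pos)
        size = int(body[pos:end].split(b";")[0], 16)   # the size field may carry extensions
        pos = end + 2
        if size == 0:
            print("final chunk reached, framing looks OK")
            return
        data_end = pos + size
        if body[data_end:data_end + 2] != b"\r\n":
            raise ValueError("chunk declared %d bytes but no CRLF at offset %d" % (size, data_end))
        pos = data_end + 2

check_chunked(b"5\r\nhello\r\n0\r\n\r\n")   # a well-formed example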

There is a similar post to yours. Again, I'm not sure what you're trying to send across, so I can't point out the exact problem, but have a look at this:
Why is this warning being shown: "Received problem 2 in the chunky parser"?
Hope this helps!

So, I ran into this with a CGI program.
Long story short, the CGI script was written in Python; it printed the chunk header using the length of the string, then sent the data to the client using:
print data,
This appends a space, making the data one byte longer than the chunk header says it is. I fixed this by changing that line to:
sys.stdout.write(data)
A hexdump of the data out of the CGI script was the tool that finally told me what was going on.
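For reference, each chunk has to be emitted with exactly the byte count its header declares. A minimal Python 3 sketch of a writer that gets this right (the helper name is mine, not from the original script):

import sys

def write_chunk(data):
    # Size in hex, CRLF, exactly len(data) bytes, CRLF -- nothing extra.
    out = sys.stdout.buffer
    out.write(b"%x\r\n" % len(data))
    out.write(data)
    out.write(b"\r\n")

write_chunk(b"hello")
sys.stdout.buffer.write(b"0\r\n\r\n")   # terminating zero-length chunk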

Related

^HV Only returning some values

I'm trying to print some RFID tags and retrieve their TIDs to store them in my system and know which tags have been printed. Right now I'm reading the TID and sending it back to my computer (connected via USB to my ZT421 printer) with the following code:
^RFR,H,0,12,2^FN0^FS^FH_^HV0,24,,_0D_0A,L^FS
^RFW,H,2,12,1^FD17171999ABABABAAAAAAAAAB^FS
This is repeated for each tag that I'm printing. However, when printing 10 tags, I only get 9 TIDs. If I then try to print 7 tags, I still get 9 TIDs. To be honest, I'm a bit lost now, because even when I try the code examples from the ZPL manual (I've also tried the ^RI instruction), it doesn't seem to work.
The communication with the printer is being done through Zebra Setup Utilities' direct communication tool.
I tried to retrieve each printed tag TID with:
^RFR,H,0,12,2^FN0^FS^FH_^HV0,24,,_0D_0A,L^FS
^RFW,H,2,12,1^FD17171999ABABABAAAAAAAAAB^FS
but I always get 9 TIDs.
I also tried getting the TID with the ZPL manual example for the ^RI command:
^XA
^FO20,120^A0N,60^FN0^FS
^RI0,,5^FS
^HV0,,Tag ID:^FS
^XZ
And I got absolutely nothing returned to the computer, just a message saying "Tag ID:" with no value shown.
I would really appreciate some help with this...
Thanks in advance!
I've fixed the issue, but I'm going to leave the solution here just in case someone else is facing the same problem.
I thought that maybe it wasn't a code issue but something related to the computer-printer communication, and that turned out to be the case. The Zebra Setup Utilities program has a button labeled "Options". If you click it, a new screen opens where you can configure how many seconds the program will wait for the printer's response (in this case over USB). By default it's set to 5; I changed this value to 100, which is the maximum. This means that instead of printing and retrieving the TIDs of only 6-9 tags, I can now do it for about 100.
This is not ideal, because in my case it meant creating 25 files for the 2500 tags whose TIDs I had to print and store; however, it's far better than before.

XML data format for a query on Nextcloud OCS

The problem is about getting a direct download link to a Nextcloud file.
Documentation:
To obtain a direct link:
POST /ocs/v2.php/apps/dav/api/v1/direct
With the fileId in the body (so fileId=42 for example).
I want to POST this URL with Node.js. What is the XML (I guess) format to set "fileId=42" in the body of my request?
Everything I try returns
"Invalid query, please check the syntax. API specifications are here: http://www.freedesktop.org/wiki/Specifications/open-collaboration-services." but there is no way there to find the format.
Same thing with curl; no syntax works.
More generally, what is the XML (I guess) format when querying the OCS API?
Answering my own question:
At last I succeeded in making it work with curl; single/double quotes on Windows were the problem, fixed using the "{" escape character.
This told me the issue was not the OCS data format itself, as the data is sent as-is, i.e. "fileId=42".
Switching from the Node.js https lib to Axios fixed the problem. Probably some Content-Type header issue, although I had tried 'application/x-www-form-urlencoded', which fits the data being sent.
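For comparison, the body is just ordinary form-encoded data, not XML. A rough sketch with Python's requests library (the server URL, credentials and the OCS-APIRequest header are my assumptions, not from the original post):

import requests

# Hypothetical values -- replace with your own server, user and file id.
url = "https://cloud.example.com/ocs/v2.php/apps/dav/api/v1/direct"
resp = requests.post(
    url,
    data={"fileId": 42},                   # form-encoded body, no XML needed
    auth=("username", "app-password"),     # basic auth, e.g. with an app password
    headers={"OCS-APIRequest": "true"},    # OCS expects this header on API calls
)
print(resp.status_code)
print(resp.text)                           # the response itself is XML (or JSON with ?format=json)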

How Do I resolve "Illuminate\Queue\InvalidPayloadException: Unable to JSON encode payload. Error code: 5"

Trying out the queue system for a better user upload experience with Laravel-Excel.
.env has been changed from 'sync' to 'database' and the migrations have been run. All the necessary use statements are in place, yet the error above persists.
The exact error happens here:
Illuminate\Queue\Queue.php:97
$payload = json_encode($this->createPayloadArray($job, $queue, $data));
if (JSON_ERROR_NONE !== json_last_error()) {
throw new InvalidPayloadException(
If I drop ShouldQueue, the file imports perfectly in-session (it's a large file, so a long wait for the user).
I've read many Stack Overflow, GitHub, etc. comments on this, but I don't have the technical skills to deep-dive into my particular situation (most of them mention UTF-8, but I don't know if that's the issue here; I changed the Excel save format to UTF-8 and it didn't fix it).
P.S. Whilst running the migration, I got the error:
SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `jobs` add index `jobs_queue_index`(`queue`))
I bypassed it by dropping the 'add index' part, so my jobs table is not indexed on queue, but I don't think this is the cause.
One thing you can do when looking into json_encode() errors is to use the json_last_error_msg() function, which gives you a slightly more readable error message.
In your case you're getting a '5' back, which is the JSON_ERROR_UTF8 error code. The error message back for this is a slightly more informative one:
'Malformed UTF-8 characters, possibly incorrectly encoded'
So we know it's encountering non-UTF-8 characters, even though you're saving the file specifically with UTF-8 encoding. At first glance you might think you need to convert the encoding yourself in code (like this answer), but in this case, I don't think that'll help. For Laravel-Excel, this seems to be a limitation of trying to queue-read .xls files - from the Laravel-Excel docs:
You currently cannot queue xls imports. PhpSpreadsheet's Xls reader contains some non-utf8 characters, which makes it impossible to queue.
In this case you might be stuck with a slow, non-queueable option, or need to convert your spreadsheet into a queueable format e.g. .csv.
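If a one-off conversion is acceptable, a quick Python sketch along these lines would do it (assumes pandas with an .xls-capable engine such as xlrd is installed; the file names are placeholders):

import pandas as pd

# Read the legacy .xls and re-save it as UTF-8 CSV, which can be queued.
df = pd.read_excel("import.xls")
df.to_csv("import.csv", index=False, encoding="utf-8")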
The key length error on running the migration is unrelated. It has been around for a while and is a side-effect of using an older version of MySQL/MariaDB. Check out this answer and the Laravel documentation around index lengths - you need to add this to your AppServiceProvider::boot() method:
Schema::defaultStringLength(191);

The XML parser detected error code 302

I am using the XML-INTO op-code to parse a web service request. Every now and then I get errors in the logs
(RNX0351 - "The XML parser detected error code 302").
The help for a 302 is
302 The parser does not support the requested CCSID value or
the first character of the XML document was not '<'
To the best of my knowledge, the first character is "<", and the request is generated from a previous web service call, so I would be very surprised if the CCSID had changed.
The error is repeatable for the specific query, so it is almost certainly data related; I am just unsure how to go about identifying the offending item.
Any thoughts on how to determine the issue, or better yet, how to overcome it?
cheers
CCSID is an AS400/iSeries/Power Systems attribute, and it applies across the whole IFS. It's like a declaration of what is inside the file, or in other words what its internal encoding "should be".
The encoding of the data content and the file's CCSID (the envelope) are supposed to match, and the box uses this attribute to display and handle the corresponding characters.
It sounds like you are receiving data in one encoding, but the file's CCSID doesn't match it.
Try changing the CCSID on your file (only the envelope), e.g. 37 (US EBCDIC), 500 (international EBCDIC), 819 (ISO 8859-1), 850 (DOS), 1252 (Windows), and display the file afterwards. You can check it first using ls -Sla yourfile in QSH or QP2TERM, or with EDTF as well. CHGATTR allows you to change the CCSID, as does setccsid in QSH (again).
This approach helped me find related issues. Remember that although the data may display correctly on the 400, it may not display correctly through a shared folder in Windows. That means the file's CCSID and the content encoding don't match.
Hope it helps.
I've seen this error with XML data uploaded to AS400/iSeries/IBM i via FTP with CCSID 819 (ISO 8859-1 ASCII), where the file had some binary garbage in its first few positions. Changing the encoding to CCSID 1208 (UTF-8 with IBM PUA) using the FTP command "quote type c 1208" cleared the problem, and XML-INTO was successful.
So my suggestion for XML parser error 302 when using XML-INTO is to look at the file (WRKLNK ...); if the first character is not "<" but some binary garbage (a quick check is sketched below), try CCSID 1208 for UTF-8.
What this answer says about CCSID 819 and about which CCSID represents UTF-8 is correct according to IBM's documentation:
https://www-01.ibm.com/software/globalization/ccsid/ccsid819.html
https://www-01.ibm.com/software/globalization/ccsid/ccsid1208.html
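If you want a quick way to see whether a file really starts with "<" or has a BOM or other garbage up front, a small Python sketch like this shows the first bytes in hex (the path is a placeholder):

# Dump the first bytes of the file so BOMs or binary garbage before the
# opening "<" become visible.
with open("/home/user/request.xml", "rb") as f:
    head = f.read(32)

print(head.hex(" "))   # e.g. "ef bb bf 3c 3f 78 6d 6c ..."
print("starts with '<':", head.lstrip(b" \t\r\n").startswith(b"<"))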
I worked on this problem for a couple of hours; for me, the solution was to use the option ccsid=UCS2 when using a data structure or variable to store the XML. Something like this:
XML-INTO customer %XML( xmlSource : 'ccsid=UCS2');
My program runs with CCSID 870, and no CCSID conversion on the xmlSource field worked.
The strange thing is that when I use the file with CCSID 850, everything works fine.
I mention this because this is the first page you find when searching for this problem.
Maybe this helps someone.

Is it possible to read only first N bytes from the HTTP server using Linux command?

Here is the question.
Given the URL http://www.example.com, can we read the first N bytes out of the page?
Using wget, we can download the whole page.
Using curl, there is -r: 0-499 specifies the first 500 bytes. That seems to solve the problem.
You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.
Using urllib in Python: there is a similar question here, but according to Konstantin's comment, is that really true?
Last time I tried this technique it failed because it was actually impossible to read from the HTTP server only specified amount of data, i.e. you implicitly read all HTTP response and only then read first N bytes out of it. So at the end you ended up downloading the whole 1Gb malicious response.
So the problem is: how can we read the first N bytes from the HTTP server in practice?
Regards & Thanks
You can do it natively with the following curl command (no need to download the whole document). According to the curl man page:
RANGES
HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only one or more subparts of a specified document. curl
supports this with the -r flag.
Get the first 100 bytes of a document:
curl -r 0-99 http://www.get.this/
Get the last 500 bytes of a document:
curl -r -500 http://www.get.this/
curl also supports simple ranges for FTP files as well.
Then you can only specify start and stop position.
Get the first 100 bytes of a document using FTP:
curl -r 0-99 ftp://www.get.this/README
It works for me even with a Java web app deployed to GigaSpaces.
curl <url> | head -c 499
or
curl <url> | dd bs=1 count=499
should do
There are also simpler utilities with perhaps broader availability, like
netcat host 80 <<"HERE" | dd bs=1 count=499 of=output.fragment
GET /urlpath/query?string=more&bloddy=stuff
HERE
You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.
You will have to fetch the whole response anyway, so you can get the page with curl and pipe it to head, for example.
head
-c, --bytes=[-]N
    print the first N bytes of each file; with the leading '-', print all but the last N bytes of each file
I came here looking for a way to measure the server's processing time, which I thought I could do by telling curl to stop downloading after 1 byte or something.
For me, the better solution turned out to be a HEAD request, since this usually lets the server process the request as normal but does not return a response body:
time curl --head <URL>
Make a socket connection. Read the bytes you want. Close, and you're done.
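A rough Python sketch of that idea (host, path and byte count are placeholders; note the first N bytes you read include the response headers):

import socket

HOST, N = "www.example.com", 500

s = socket.create_connection((HOST, 80), timeout=10)
s.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")

data = b""
while len(data) < N:                 # stop reading once we have N bytes
    chunk = s.recv(N - len(data))
    if not chunk:
        break
    data += chunk
s.close()
print(data)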

Resources