I'm using the grammar posted here:
https://github.com/antlr/grammars-v4/tree/master/python3
It sometimes returns partial tokens, or multiple tokens, before emitting the correct token. I'm using the TestRig tool to print the following output.
Is this expected behavior? Thank you.
[#0,0:4='#3.31',<92>,channel=2,1:0]
****[#1,6:7='de',<34>,2:0]
[#2,6:8='def',<1>,2:0]****
[#3,10:23='reverse_string',<35>,2:4]
[#4,24:24='(',<47>,2:18]
[#5,25:30='answer',<35>,2:19]
[#6,31:31=')',<48>,2:25]
[#7,32:32=':',<50>,2:26]
****[#8,38:39='an',<34>,3:4]
[#9,38:42='answe',<94>,3:4]
[#10,38:43='answer',<35>,3:4]****
[#11,45:45='=',<53>,3:11]
[#12,47:51='input',<35>,3:13]
[#13,52:52='(',<47>,3:18]
[#14,53:82=''Enter a three-letter string:'',<36>,3:19]
[#15,83:83=')',<48>,3:49]
*[#16,89:90='re',<34>,4:4]
[#17,89:94='return',<2>,4:4]*
[#18,96:101='answer',<35>,4:11]
....
[#25,114:118='#3.32',<92>,channel=2,6:4]
*[#26,124:125='de',<34>,7:4]
[#27,124:126='def',<1>,7:4]*
...
**[#42,183:184='re',<34>,9:12]
[#43,183:195='return rate *',<94>,9:12]
[#44,183:188='return',<2>,9:12]
[#45,190:193='rate',<35>,9:19]
[#46,195:195='*',<46>,9:24]**
No, this is not expected behaviour.
During the creation of tokens, I assigned the wrong start and stop indices to custom-created tokens (see private CommonToken commonToken(int type, String text) in Python3.g4).
These re and de tokens were really NEWLINE and INDENT tokens: only their inner text held misleading data; their token types were the correct NEWLINE and INDENT.
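The gist of the fix is the index arithmetic. Here is a minimal sketch, in Python, of what the corrected helper computes (the real grammar action is written in Java; the function name here is illustrative):

def common_token_bounds(char_index, text):
    # char_index is the lexer's current character index, one past the
    # last character consumed; the synthesized token ends just before it.
    stop = char_index - 1
    # Derive start from the token's own text length instead of reusing a
    # stale index from the surrounding input.
    start = stop if not text else stop - len(text) + 1
    return start, stop

With a stale start index, the token's reported text window overlapped the following identifier, which is presumably why fragments like 're' and 'de' showed up in the dump.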
Fixed in Pull Request: https://github.com/bkiers/python3-parser/pull/5, which will be merged into master shortly. I've also proposed the change on the official ANTLR4 grammar repo: https://github.com/antlr/grammars-v4/pull/155
I'm making an IMDb movie-searching tool with Streamlit. I've completed the code; however, my URL query line is not working.
url = f'http://www.omdbapi.com/?t={title}&apikey={APIKEY}'
The code can be seen above.
The issue seems to be that the braces {} in the URL are being percent-encoded into their ASCII escape sequences instead of being interpolated with the movie title and API key.
The issue can be seen below:
http://www.omdbapi.com/?t=%7Btitle%7D&apikey=%7BAPIKEY%7D
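For illustration, one common way to get exactly that output is for the braces to reach the URL literally, i.e. a plain string rather than an f-string; a minimal sketch with hypothetical values:

title = 'Heat'       # hypothetical movie title
APIKEY = 'abc123'    # hypothetical key
plain  = 'http://www.omdbapi.com/?t={title}&apikey={APIKEY}'    # no f prefix: braces stay literal
interp = f'http://www.omdbapi.com/?t={title}&apikey={APIKEY}'   # f prefix: values substituted
print(plain)    # http://www.omdbapi.com/?t={title}&apikey={APIKEY}
print(interp)   # http://www.omdbapi.com/?t=Heat&apikey=abc123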
Any advice would be appreciated.
Cheers
Trying out the queue system for a better user upload experience with Laravel-Excel.
.env was changed from 'sync' to 'database' and the migrations were run. All the necessary use statements are in place, yet the error persists.
The exact error happens here:
Illuminate\Queue\Queue.php:97
$payload = json_encode($this->createPayloadArray($job, $queue, $data));

if (JSON_ERROR_NONE !== json_last_error()) {
    throw new InvalidPayloadException(
If I drop ShouldQueue, the file imports perfectly in-session (it's a large file, so there's a long wait for the user).
I've read many Stack Overflow, GitHub, etc. comments on this, but I don't have the technical skills to deep-dive into my particular situation (most of them speak of UTF-8, but I don't know if that's an issue here; I changed the Excel save format to UTF-8, but it didn't fix it).
P.S. While running the migration, I got this error:
SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `jobs` add index `jobs_queue_index`(`queue`))
I bypassed it by dropping the 'add index' part, so my jobs table is not indexed on queue, but I don't feel this is the cause.
One thing you can do when looking into json_encode() errors is to use the json_last_error_msg() function, which will give you a slightly more readable error message.
In your case you're getting a '5' back, which is the JSON_ERROR_UTF8 error code. The corresponding message is a little more informative:
'Malformed UTF-8 characters, possibly incorrectly encoded'
So we know it's encountering non-UTF-8 characters, even though you're saving the file specifically with UTF-8 encoding. At first glance you might think you need to convert the encoding yourself in code (like this answer), but in this case, I don't think that'll help. For Laravel-Excel, this seems to be a limitation of trying to queue-read .xls files - from the Laravel-Excel docs:
You currently cannot queue xls imports. PhpSpreadsheet's Xls reader contains some non-utf8 characters, which makes it impossible to queue.
In this case you might be stuck with the slow, non-queueable option, or need to convert your spreadsheet into a queueable format, e.g. .csv.
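If conversion is viable, here is a minimal sketch of an .xls-to-.csv step in Python using pandas (assuming pandas plus an .xls-capable engine such as xlrd is installed; the file names are hypothetical):

import pandas as pd

# Read the legacy .xls and re-save it as UTF-8 CSV, a format the queued import can handle.
df = pd.read_excel('upload.xls')
df.to_csv('upload.csv', index=False, encoding='utf-8')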
The key length error on running the migration is unrelated. It has been around for a while and is a side-effect of using an older version of MySQL/MariaDB. Check out this answer and the Laravel documentation around index lengths - you need to add this to your AppServiceProvider::boot() method:
Schema::defaultStringLength(191);
I have working code using re.compile that searches for a given key and extracts specified bytes from that line.
Working cypher
S011=re.compile(r"S0\w*\W*11\b")
It searches for 'S0' at the start and '11' further in (the intervening alphanumerics change with each file):
S012PA041 11 1001650953.34N 72627.05E 426930.97227906.7 285.3227033224
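A quick sanity check of that working pattern against the sample line (variable names as in the question):

import re

S011 = re.compile(r"S0\w*\W*11\b")
line = "S012PA041 11 1001650953.34N 72627.05E 426930.97227906.7 285.3227033224"
print(bool(S011.search(line)))  # True: 'S0', then '12PA041', whitespace, then '11' at a word boundary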
I am trying to use the same method for a different input file, but I can't work out the correct mask/cypher. There are several lines starting with 'P1', so that alone is not exclusive enough; the 'P1....,V0' pairing is the exclusive key. Again, the numbers between the keys change with each event and file.
P1,0,01169-72-063,,1001,,1,2020:07:31:12:48:01.7,1,V01,2,,436389.57,7196330.69,,64.88354429,7.65691702,,64.88327349,7.65520631,,0.00,0.00,0.00,0.00,,248.04
I have tried these but with no success:
V0=re.compile(r"^P1\w*\W*V0")
V0=re.compile(r"^P1\w*\W*V0\w*\W*")
V0=re.compile(r"^P1\w*V0\w*")
After running more combinations than a safe-cracker on Red Bull, I've finally got the right sequence for the regex.
Line to be identified, using 'P1' and, further in, 'V01' as search keys:
P1,0,01169-72-063,,1001,,1,2020:07:31:12:48:01.7,1,V01,2,,436389.57,7196330.69,,64.88354429,7.65691702,,64.88327349,7.65520631,,0.00,0.00,0.00,0.00,,248.04
re.compile code to identify it:
V0=re.compile(r"^P1\s*,*:*\S*V01\s*,*:*\S*\b")
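A short check that this pattern picks out the sample line (assuming the lines are read from a file; the file name is hypothetical):

import re

V0 = re.compile(r"^P1\s*,*:*\S*V01\s*,*:*\S*\b")

with open('events.txt') as fh:      # hypothetical input file
    for line in fh:
        if V0.match(line):          # match() anchors at the start, which the ^ also enforces
            print(line.rstrip())    # only 'P1...,V01,...' lines survive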
I am writing code in Python 3.7 to test an application with Appium.
I am trying to send text to an input field of the application. The text is in French, with special characters (é, è, à, etc.).
My code manages to type character by character, one by one, but when it reaches an accented character like 'é', it fails. Here is the error message:
Encountered internal error running command: io.appium.uiautomator2.common.exceptions.InvalidArgumentException: KeyCharacterMap.getEvents is unable to synthesize KeyEvent sequence out of '233' key code. Consider applying a patch to UiAutomator2 server code or try to synthesize the necessary key event(s) for it manually
I read the docs and forums and added this capability:
desired_caps['unicodeKeyboard'] = 'true'
But it didn't change anything. I still have the same issue.
Try sending the keys as a Unicode string:
self.driver.find_element(...).send_keys(u'éèà')
Change the string 'true' to the boolean True:
desired_caps['unicodeKeyboard'] = True
And this might help you:
http://appium.io/docs/en/writing-running-appium/other/unicode/
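Putting those together, a minimal sketch of the desired capabilities (unicodeKeyboard/resetKeyboard are the documented Appium capabilities; the device name, server URL, and locator are placeholders):

from appium import webdriver

desired_caps = {
    'platformName': 'Android',
    'deviceName': 'emulator-5554',   # placeholder device
    'unicodeKeyboard': True,         # use Appium's Unicode IME so accented characters can be typed
    'resetKeyboard': True,           # restore the device's original IME after the session
}
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
driver.find_element_by_accessibility_id('input_field').send_keys(u'éèà')  # placeholder locator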
I am using the XML-INTO op-code to parse a web service request. Every now and then I get errors in the logs
(RNX0351 - "The XML parser detected error code 302").
The help text for a 302 is:
302 The parser does not support the requested CCSID value or
the first character of the XML document was not '<'
To the best of my knowledge, the first character is "<", and the request is generated from a previous web service call, so I would be very surprised if the CCSID has changed.
The error is repeatable for the specific query, so it is almost certainly data-related; I am just unsure how I would go about identifying the offending item.
Any thoughts on how to determine the issue, or better yet, how to overcome it?
cheers
CCSID is an AS400/iSeries/Power Systems attribute, and it applies across the whole IFS. It's a declaration of what is inside the file, or in other words what its internal encoding "should be".
The encoding of the data content and the file's declared CCSID (the envelope) are supposed to match, and the box uses this attribute to display and handle the corresponding characters.
It sounds like you are receiving data in one encoding, but the file's CCSID doesn't match it.
Try changing the CCSID on your file (only the envelope), e.g. 37 (American), 500 (Latin-1), 819 (UTF-8), 850 (DOS), 1252 (Windows), and display the file afterwards. You can check first using ls -Sla yourfile in QSH or QP2TERM, or with EDTF as well. CHGATTR allows you to change the CCSID, as does setccsid in QSH (again).
This approach helped me find related issues. Remember that although the data may display correctly on the four hundred, it may not display correctly through a shared folder in Windows; that means the file's CCSID and the content encoding don't match.
Hope it helps.
Hi, I've seen this error with XML data uploaded to AS400/iSeries/IBM i via FTP with CCSID 819 (ISO 8859-1 ASCII), where the file had some binary garbage in its first few positions. Changing the encoding to CCSID 1208 (UTF-8 with IBM PUA) using FTP "quote type c 1208" cleared the problem, and XML-INTO was successful.
So, my suggestion for XML parser error 302 received when using XML-INTO is to look at the file (wrklnk ...), and if the first character is not "<" but instead some binary garbage, then try CCSID 1208 for UTF-8.
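To see what is actually at the front of the file rather than guessing, here is a small sketch that dumps the first bytes (Python purely for illustration; the IFS path is hypothetical). 0x3C is '<' in ASCII/UTF-8, 0x4C is '<' in EBCDIC, and EF BB BF is a UTF-8 byte order mark:

# Dump the first few bytes of the document in hex.
with open('/home/me/request.xml', 'rb') as f:   # hypothetical IFS path
    head = f.read(8)
print(' '.join(f'{b:02X}' for b in head))       # '3C 3F 78 6D ...' means the file starts with '<?xm'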
The statements in this answer about what 819 is and which CCSID represents UTF-8 do not agree with the previous answer, but they are correct according to IBM documentation:
https://www-01.ibm.com/software/globalization/ccsid/ccsid819.html
https://www-01.ibm.com/software/globalization/ccsid/ccsid1208.html
I worked on this problem for a couple of hours; for me, the solution was to use the option ccsid=UCS2 when using a data structure or variable to store the XML.
Something like this:
XML-INTO customer %XML( xmlSource : 'ccsid=UCS2');
My program runs with CCSID 870, and every CCSID conversion on the xmlSource field didn't work. The strange thing is that when I use the file with CCSID 850, everything works fine.
I mention this because this is the first page that comes up when you search for this problem.
Maybe this helps someone.