I am having an issue where I cannot access the contents of a JSON array because one of the keys is named "0".
I have tried data.0.username and it just shows as red (an error) in my editor, even though the path is 100% correct.
Sorry if this is super basic; I couldn't find the same issue online. Thanks.
For example, if the JSON data looks like data = {0: {'name': 'John'}, 1: {'name': 'Doe'}},
you can access it with data[0].name or data[1].name.
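Dot notation in JavaScript only works when the property name is a valid identifier, and an identifier can't start with a digit, so a key like 0 has to be read with bracket notation. A minimal sketch, reusing the username field from the question (the rest of the data shape is assumed):

const data = { 0: { username: 'John' }, 1: { username: 'Doe' } };

// data.0.username  -> SyntaxError: dot notation can't be used with a numeric key
console.log(data[0].username);    // 'John' (bracket notation; the number is coerced to the key '0')
console.log(data['0'].username);  // 'John' (equivalent, since object keys are strings)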
Trying out the queue system for a better user upload experience with Laravel-Excel.
The queue driver in .env has been changed from 'sync' to 'database' and the migrations have been run. All the necessary use statements are in place, yet the error above persists.
The exact error happens here:
Illuminate\Queue\Queue.php:97
$payload = json_encode($this->createPayloadArray($job, $queue, $data));

if (JSON_ERROR_NONE !== json_last_error()) {
    throw new InvalidPayloadException(
If I drop ShouldQueue, the file imports perfectly in-session (it's a large file, so the user faces a long wait).
I've read many Stack Overflow, GitHub, etc. comments on this, but I don't have the technical skills to deep-dive and fix my particular situation (most of them mention UTF-8, but I don't know if that's an issue here; I changed the Excel save format to UTF-8 and it didn't fix it).
P.S. While running the migration, I got the error:
SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `jobs` add index `jobs_queue_index`(`queue`))
I bypassed it by dropping the 'add index' part, so my jobs table is not indexed on queue, but I don't feel this is the cause.
One thing you can do when looking into json_encode() errors is use the json_last_error_msg() function, which will give you a slightly more readable error message.
In your case you're getting a '5' back, which is the JSON_ERROR_UTF8 error code. The error message back for this is a slightly more informative one:
'Malformed UTF-8 characters, possibly incorrectly encoded'
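For illustration, a minimal sketch of that check outside of Laravel (the payload here is just a made-up array containing an invalid byte sequence):

$payload = ['name' => "John \xB1"];   // "\xB1" is not valid UTF-8

$json = json_encode($payload);        // returns false on failure

if (json_last_error() !== JSON_ERROR_NONE) {
    var_dump(json_last_error());      // int(5), i.e. JSON_ERROR_UTF8
    var_dump(json_last_error_msg());  // "Malformed UTF-8 characters, possibly incorrectly encoded"
}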
So we know it's encountering non-UTF-8 characters, even though you're saving the file specifically with UTF-8 encoding. At first glance you might think you need to convert the encoding yourself in code (like this answer), but in this case, I don't think that'll help. For Laravel-Excel, this seems to be a limitation of trying to queue-read .xls files - from the Laravel-Excel docs:
You currently cannot queue xls imports. PhpSpreadsheet's Xls reader contains some non-utf8 characters, which makes it impossible to queue.
In this case you might be stuck with a slow, non-queueable option, or need to convert your spreadsheet into a queueable format, e.g. .csv.
The key length error on running the migration is unrelated. It has been around for a while and is a side-effect of using an older version of MySQL/MariaDB. Check out this answer and the Laravel documentation around index lengths - you need to add this to your AppServiceProvider::boot() method:
Schema::defaultStringLength(191);
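For reference, a sketch of where that call lives, assuming the stock app/Providers/AppServiceProvider.php:

namespace App\Providers;

use Illuminate\Support\Facades\Schema;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // 191 chars * 4 bytes (utf8mb4) stays under the 767-byte index limit
        // of older MySQL/MariaDB.
        Schema::defaultStringLength(191);
    }
}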
Hi, I am trying to upload data to a table in HeidiSQL, but it returned "SQL Error (1366): Incorrect string value: '\xE3\x82\xA8\xE3\x83\xBC...'".
The issue is triggered by this string, "エーペックスレジェンズ", and the source data file has a number of special characters. Is there a way to override this, so that all forms of characters can be uploaded?
My default setting is utf8 and I have also tried utf8mb4, but neither of them worked.
That happens when you select the wrong file encoding in HeidiSQL's open-file dialog:
Never select "Auto-detect" - I wrote that auto-detection, and I can tell you it often detects the wrong encoding. Use the right encoding instead, which is mostly utf-8 nowadays.
I've compressed the text file in gzip format using PowerShell and uploaded it into Azure Blob storage. When I query the external table I get the following error, but the offending value is empty. Can anyone tell me what the issue is and how I can find the error row?
Note: after the error I decompressed the file and checked it, but I did not see any issue with the rows.
[screenshot of the error message]
My read on that error is that one row has a blank value for (I think) the first column. Since you have it declared as SMALLINT NOT NULL, it fails. Can you try changing that column to NULL?
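To illustrate the suggestion, a hypothetical external-table definition with that first column made NULLable (the table, column, data source and file format names are placeholders, not taken from the question):

-- Hypothetical sketch only; all names below are placeholders.
CREATE EXTERNAL TABLE dbo.StagingData
(
    Id          SMALLINT NULL,        -- previously SMALLINT NOT NULL
    Description NVARCHAR(100) NULL
)
WITH
(
    LOCATION    = '/input/',
    DATA_SOURCE = MyAzureBlobStorage,
    FILE_FORMAT = MyGzipTextFileFormat
);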
Upon further troubleshooting, I believe we determined the issue was special characters; fixing the file encoding and removing the special characters solved it.
I'm having trouble finding examples/troubleshooting tips online, and am not quite sure that I'm interpreting the documentation correctly. Any assistance would be greatly appreciated.
I'm connecting to an e-mail server and want to read the e-mail subjects and bodies. I first make my connection like so:
import datetime
import imaplib

c = imaplib.IMAP4_SSL(hostname, port)
c.login(username, password)

foldername = 'INBOX/SSR'
c.select(str.encode(foldername), readonly=True)

# Search for messages sent today, e.g. (SENTON "01-Jan-2021")
today = datetime.date.today().strftime('%d-%b-%Y')
searchcriteria = '(SENTON "{}")'.format(today)
typ, msg_ids = c.search(None, searchcriteria)
msg_ids = [s.decode('ascii') for s in msg_ids]
for idnumber in msg_ids:
    print(c.fetch(idnumber, "(BODY.PEEK[HEADER])"))
The code works and the output looks as expected up until the last line, at which point I get
imaplib.error: FETCH command error: BAD [b' Command Argument Error. 12']
My line of thought and subsequent testing examined the following possible issues:
bytes vs. string: I converted the input back to bytes, but the error remained constant.
improper syntax: I tried other commands, such as BODY, SUBJECT, and ENVELOPE, but still got the same message.
I'm not sure how to interpret the error, and don't really know where to start. Referencing https://www.rfc-editor.org/rfc/rfc3501.html from pp. 102+, I noticed that the values are labeled differently, but don't understand what the issue is with my implementation. How should I interpret the error? What is wrong with my syntax?
P.S. Correct me if I'm wrong, but c.search shouldn't change my directory, right? As in, by selecting foldername I "navigate" to the selected folder, but searching only returns values and shouldn't change my location?
I encountered the same problem (BAD [b' Command Argument Error. 12']) while trying to list or select a new mailbox. In my case it didn't work with "Sent Box" but worked fine with "Outbox", so the space character is the problem.
So it worked with c.select('"{}"'.format("Sent Box")...
Hope this information helps.
Your last line is not correct: msg_ids = [s.decode('ascii') for s in msg_ids]
msg_ids is a list containing a single space-separated bytes string, not a list of individual IDs, for example: [b'123 124 125']
Change the last line into msg_ids = msg_ids[0].split(b' ') and it will work as expected.
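Putting that together, the tail of the script from the question would look roughly like this (a sketch that reuses c and searchcriteria from the code above):

typ, msg_ids = c.search(None, searchcriteria)   # e.g. ('OK', [b'123 124 125'])

# Split the single space-separated bytes string into individual message IDs
# (assumes the search matched at least one message).
for idnumber in msg_ids[0].split(b' '):
    typ, header_data = c.fetch(idnumber, "(BODY.PEEK[HEADER])")
    print(header_data)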
I have a script that opens a folder and does some processing on the data present. Say, there's a file "XYZ.tif".
Inside this tif file, there are two groups of datasets, which show up in the workspace as
data.ch1eXYZ
and
data.ch3eXYZ
If I want to continue with the 2nd set, I can use
A=data.ch3eXYZ
However, XYZ usually is much longer and varies per file, whereas data.ch3e is consistent.
Therefore I tried
A = strcat('data.ch3e', origfilename);
where origfilename of course is XYZ, which has (automatically) been extracted before.
However, that gives me a string A, since I have effectively typed
A = 'data.ch3eXYZ'
instead of getting the matrix that data.ch3eXYZ actually is.
I think it's just a problem with ()'s, []'s, or {}'s, but I can't seem to figure it out.
Thanks in advance!
If you know the string, dynamic field references should help you here, and are far better than eval.
Slightly modified example from the linked blog post:
fldnm = 'fred';
s.fred = 18;
y = s.(fldnm)
Returns:
y =
18
So for your case:
test = data.(['ch3e' origfilename]);
Should be sufficient
Edit: Link to the documentation
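If the exact suffix ever needs to be discovered rather than built from the file name, another option is to scan the struct's field names for the fixed ch3e prefix (a sketch; it assumes exactly one field matches):

% Find the field whose name starts with 'ch3e' and grab its contents.
flds = fieldnames(data);
idx  = find(strncmp(flds, 'ch3e', 4), 1);
A    = data.(flds{idx});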