I'm importing some JSON files into my Parse.com project, and I keep getting the error "invalid key:value pair".
It states that there is an unexpected "8".
Here's an example of my JSON:
{
"Manufacturer":"Manufacturer",
"Model":"THIS IS A STRING",
"Description":"",
"ItemNumber":"Number12345",
"UPC":083456789012,
"Cost":"$0.00",
"DealerPrice":" $0.00 ",
"MSRP":" $0.00 ",
}
If I update the JSON by either removing the 0 from "UPC":083456789012, or converting it to "UPC":"083456789012", it becomes valid.
Can JSON really not accept an integer that begins with 0, or is there a way around the problem?
A leading 0 indicates an octal number in JavaScript. An octal number cannot contain an 8; therefore, that number is invalid.
Moreover, JSON doesn't officially support octal numbers at all, so the JSON would be formally invalid even if the number contained no 8. Some parsers accept octal literals anyway, which can cause confusion; others correctly reject the sequence as invalid, though the exact error message they give may differ.
Solution: If you have a number, don't ever store it with leading zeroes. If you have a value that needs to have a leading zero, don't treat it as a number, but as a string. Store it with quotes around it.
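For illustration, here's a minimal Python sketch (the field name follows the example above) showing how a strict parser such as Python's json module reacts to the leading zero, and how quoting the value avoids the problem:
import json

# A number with a leading zero is rejected by a strict JSON parser.
bad = '{"UPC": 083456789012}'
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("rejected:", err)

# Quoting the value makes it a string, so the leading zero is preserved.
good = '{"UPC": "083456789012"}'
print(json.loads(good)["UPC"])   # 083456789012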
In this case, you've got a UPC which needs to be 12 digits long and may contain leading zeroes. I think the best way to store it is as a string.
It is debatable, though. If you treat it as a barcode, seeing the leading 0 as an integral part of it, then string makes sense. Other types of barcodes can even contain alphabetic characters.
On the other hand, a UPC is a number, and the fact that it's left-padded with zeroes to 12 digits could be seen as a display property. In fact, if you left-pad it to 13 digits by adding an extra 0, you've got an EAN code, because EAN is a superset of UPC.
If you have a monetary amount, you might display it as € 7.30, while you store it as 7.3, so it could also make sense to store a product code as a number.
But that decision is up to you. I can only advise you to use a string, which is my personal preference for these codes; if you choose a number, then you'll have to remove the leading 0 to make it work.
One of the more confusing parts of JavaScript is that if a number starts with a 0 that isn't immediately followed by a ., it represents an octal, not a decimal.
JSON borrows from JavaScript syntax but avoids its confusing features, so it simply bans numbers with leading zeros (unless they are immediately followed by a .) outright.
Even if this weren't the case, there would be no reason to expect the 0 to still be in the number after it was parsed, since 02 and 2 are just different representations of the same number (if you force decimal).
If the leading zero is important to your data, then you probably have a string and not a number.
"UPC":"083456789012"
A product code is an identifier, not something you do maths with. It should be a string.
Formally, it is because JSON uses DecimalIntegerLiteral in its JSONNumber production:
JSONNumber ::
-_opt DecimalIntegerLiteral JSONFraction_opt ExponentPart_opt
And DecimalIntegerLiteral may only start with 0 if it is 0:
DecimalIntegerLiteral ::
0
NonZeroDigit DecimalDigits_opt
The rationale behind this is probably:
In the JSON Grammar - to reuse constructs from the main ECMAScript grammar.
In the main ECMAScript grammar - to make it easier to distinguish DecimalIntegerLiteral from HexIntegerLiteral and, above all, OctalIntegerLiteral.
See these productions:
HexIntegerLiteral ::
0x HexDigit
0X HexDigit
HexIntegerLiteral HexDigit
...
OctalIntegerLiteral ::
0 OctalDigit
OctalIntegerLiteral OctalDigit
The UPC should be stored as a string. In the future you may also get other types of UPC, such as GS128, or string-based product identification codes. Set your DB column to be a string.
If an integer literal starts with 0 in JavaScript, it is treated as an octal (base 8) value instead of a decimal (base 10) value. For example:
var a = 065; //Octal Value
var b = 53; //Decimal Value
a == b; //true
I think the easiest way to send your number via JSON is to send it as a string.
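A minimal sketch of that approach in Python, with field names borrowed from the example above:
import json

# Keep the UPC as a string so the leading zero survives serialization.
item = {"ItemNumber": "Number12345", "UPC": "083456789012"}
payload = json.dumps(item)
print(payload)                        # {"ItemNumber": "Number12345", "UPC": "083456789012"}
print(json.loads(payload)["UPC"])     # 083456789012 -- leading zero intact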
I am trying to find the occurrences of an Arabic character together with its harakat in a string, such as "رَّ" in "بِسْمِ ٱللَّهِ ٱلرَّحْمَٰنِ ٱلرَّحِيمِ".
Arabic characters can take harakat; for example, "ر" is the plain Arabic character, but with harakat it can look something like "رَّ". I am using Python 3 to find occurrences of the character with a specific harakat but could not do it. I have tried a for loop and tried converting the string to Unicode, but neither worked.
str = "مرة رجل حكيم قال بِسْمِ ٱللَّهِ ٱلرَّحْمَٰنِ ٱلرَّحِيمِ"
i = 0
for s in str:
    if s == "رَّ":
        i = i + 1
print(i)
Expected output is 2 but 0 is what I get.
len("رَّ") returns 3, which means the glyph is represented by three characters. Your loop checks a single character at a time and so never finds a match.
You need to be looking for substrings, which is exactly what .count() is for.
i = str.count('رَّ')
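To see why the loop never matches, here's a small sketch (assuming the same string as in the question) that lists the code points making up the glyph and then counts it as a substring:
import unicodedata

text = "مرة رجل حكيم قال بِسْمِ ٱللَّهِ ٱلرَّحْمَٰنِ ٱلرَّحِيمِ"
target = "رَّ"   # one visible glyph, but three code points

# Show the base letter and its combining marks individually.
for ch in target:
    print(hex(ord(ch)), unicodedata.name(ch))

# Count the sequence as a substring rather than character by character.
print(text.count(target))   # 2, per the expected output above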
Can someone please help me deal with byte-order mark (BOM) bytes versus UTF8 characters in the first line of an XHTML file?
Using Python 3.5, I opened the XHTML file as UTF8 text:
inputTopicFile = open(inputFileName, "rt", encoding="utf8")
Viewed in a hex editor, the first line of that UTF-8-encoded XHTML file begins with the three-byte UTF-8 BOM EF BB BF.
I wanted to remove the UTF8 BOM from what I supposed were equivalent to the three initial character positions [0:2] in the string. So I tried this:
firstLine = firstLine[3:]
Didn't work -- the characters <? were no longer present at the start of the resulting line.
So I did this experiment:
for charPos in range(0, 3):
print("charPos {0} == {1}".format(charPos, firstLine[charPos]))
Which printed:
charPos 0 ==
charPos 1 == <
charPos 2 == ?
I then added .encode to that loop as follows:
for charPos in range(0, 3):
print("charPos {0} == {1}".format(charPos, eachLine[charPos].encode('utf8')))
Which gave me:
charPos 0 == b'\xef\xbb\xbf'
charPos 1 == b'<'
charPos 2 == b'?'
Evidently Python 3 in some way "knows" that the 3-byte BOM is a single unit of non-character data? Meaning that one cannot process the first three 8-bit bytes of the line as if they were UTF-8 characters?
At this point I know that I can "trick" my code into giving me what I want by specifying firstLine = firstLine[1:]. But it seems wrong to do it that way(?)
So what's the correct way to discard the first three BOM bytes in a UTF8 string on the way to working with only the UTF8 characters?
EDIT: The solution, per the comment made by Anthony Sottile, turned out to be as simple as using encoding="utf-8-sig" when I opened the source XHTML file:
inputTopicFile = open(inputFileName, "rt", encoding="utf-8-sig")
That strips out the BOM. Voila!
As you mentioned in your edit, you can open the file with the utf-8-sig encoding, but to answer your question of why it was behaving this way:
Python 3 distinguishes between byte strings (the ones with the b prefix) and character strings (without the b prefix), and prefers to use character strings whenever possible. A byte string works with the actual bytes; a character string works with Unicode codepoints. The BOM is a single codepoint, U+FEFF, so in a regular string Python 3 will treat it as a single character (because it is a single character). When you call encode, you turn the character string into a byte string.
Thus the results you were seeing are exactly what you should have: Python 3 does know what counts as a single character, which is all it sees until you call encode.
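Here's a minimal sketch of the difference (example.xhtml is just a throwaway file created for the demonstration):
# Write a small file with a BOM, then read it back two ways.
with open("example.xhtml", "w", encoding="utf-8-sig") as f:
    f.write('<?xml version="1.0"?>\n')

with open("example.xhtml", "r", encoding="utf8") as f:
    firstLine = f.readline()
print(repr(firstLine[0]))     # '\ufeff' -- the BOM comes through as one character

with open("example.xhtml", "r", encoding="utf-8-sig") as f:
    firstLine = f.readline()
print(repr(firstLine[:2]))    # '<?' -- the BOM has been stripped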
In Python 3, Unicode characters can be printed like this:
print('\uFFFF')
But how can I print higher Unicode characters like 001FFFFF? print('\u001FFFFF') will just print 001F as a Unicode character followed by four literal Fs. Trying print('\u001F\uFFFF') results in two Unicode characters instead of the wanted one. Is it possible to print the Unicode character 001FFFFF in Python 3 somehow?
Use an upper-case U (the \U escape takes eight hex digits):
print('\U001FFFFF')
Note, though, that U+1FFFFF is above the Unicode maximum U+10FFFF, so Python will reject this particular escape as out of range; with a valid code point such as '\U0001F600' the character prints fine.
There is another way in Python 3, using the built-in function chr(i), which
Return the string representing a character whose Unicode code point is
the integer i.
and
The valid range for the argument is from 0 through 1,114,111 (0x10FFFF in base 16).
so you are not limited to the four hex digits of a \u escape.
print(chr(97))
print(chr(0xFFFF))
print(chr(0x10080))
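A short sketch of that limit in practice:
print(chr(0x10FFFF) == '\U0010FFFF')   # True -- highest valid code point

try:
    chr(0x110000)                      # one past the maximum
except ValueError as err:
    print(err)                         # chr() arg not in range(0x110000)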
I started coding with Groovy today and I noticed that if I take the following code:
int aaa = "6"
log.info(aaa)
The output I get is:
54 <-- (ASCII Code for '6')
If I assign aaa any number beyond the range 0..9, I get a class cast exception.
It looks like, if the string is actually a single character, Groovy converts it to its ASCII code/hashCode.
I tried this code:
int aaa = "A"
log.info(aaa)
And the output I got was:
65 <-- (ASCII code for 'A')
What is the official reason for this?
Is it because groovy automatically changes "A" into 'A'?
As Jochen says here in the JIRA: strings of length 1 are converted to chars if needed (and by putting it into an int variable, Groovy assumes that is what you want to do).
If you want to accept bigger numbers, you can do:
int a = '12345' as int
And that will convert the whole number to an int.