How does Zipkin generate and store the 16-char trace ID used in the GET API /traces/{traceId}?

I am using Zipkin for distributed tracing. I have added the zipkin-storage-mysql dependency in order to save the traces in a MySQL DB. When I query the ZIPKIN_SPANS table, I don't find the 16-char trace ID in the TRACE_ID column that I use to load the trace in the Zipkin UI.
For example: localhost:9411/traces/4bcdd0bd5d2f70c0
Please help me understand how I can figure this out. Also, how can I add a new column to the table to associate an application-specific ID with it?

There are 2 columns of interest in the MySQL zipkin_spans table:
trace_id_high -> decimal representation of the upper 16 hex characters
id -> decimal representation of the lower 16 hex characters
Example
The 32-character hex trace ID 5ec92d0240cd9dee0421f4763e9f674f displayed in the Zipkin UI corresponds to:
trace_id_high = 6830039797584469486 in MySQL (5EC92D0240CD9DEE -> upper 16 hex characters)
id = 297787839077115727 in MySQL (0421F4763E9F674F -> lower 16 hex characters)
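For reference, a minimal Python sketch of that conversion (plain integer arithmetic; the example IDs are the ones above, and the high/low split follows the description in this answer):

def split_trace_id(hex_trace_id):
    # Pad 64-bit (16-char) IDs to 32 chars so the split below always works.
    hex_trace_id = hex_trace_id.lower().rjust(32, "0")
    high = int(hex_trace_id[:16], 16)   # upper 16 hex chars -> trace_id_high
    low = int(hex_trace_id[16:], 16)    # lower 16 hex chars -> lower 64-bit value
    return high, low

# 128-bit trace ID from the example above
print(split_trace_id("5ec92d0240cd9dee0421f4763e9f674f"))
# (6830039797584469486, 297787839077115727)

# 64-bit trace ID from the question: the high half is simply 0
print(split_trace_id("4bcdd0bd5d2f70c0"))
# (0, 5462251434801066176)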

Related

cantools.database.errors.Error: Standard frame id is more than 11 bits

I used the cantools Python package to decode a CAN bus message, using a DBC file I created for testing (copied from a sample file). When I use a CAN ID like 419358976, I get an error, but for smaller CAN IDs like 350 it works. Does cantools fail for extended frame IDs? How do I get this working?
My code, which fails for extended IDs, is as follows:
import cantools

db = cantools.database.load_file('.\\src\\test\\resources\\j1939.dbc')
print(db.decode_message(419358976, b'\xff\xff\xff\xc0\x0c\xff\xff\xff'))

The error: cantools.database.errors.Error: Standard frame id 0x18fee900 is more than 11 bits in message EEC1.
I found the answer to my question. A CAN ID like 419358976 is an extended (29-bit) ID. To map it to the ID in the DBC file, I need to add the 32-bit hex value 0x80000000 to the hex CAN ID (i.e. set the extended-frame bit), convert the resulting hex number to decimal, and use that as the ID field in the DBC file. It works perfectly after that, and the error message is gone.
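For illustration, a small sketch of that mapping (the DBC path is the one from the question):

import cantools

can_id = 419358976                  # 0x18FEE900, an extended (29-bit) CAN ID
dbc_id = can_id + 0x80000000        # DBC files mark extended IDs by setting bit 31
print(hex(dbc_id), dbc_id)          # 0x98fee900 2566842624 -> use this as the ID in the .dbc

# After the message ID in the .dbc is changed to dbc_id, decoding works again:
db = cantools.database.load_file('.\\src\\test\\resources\\j1939.dbc')
print(db.decode_message(can_id, b'\xff\xff\xff\xc0\x0c\xff\xff\xff'))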

Amazon Athena partition with colon(:) is not working

When creating a partition in Athena, I tried to use a date in the format yyyy-MM-dd'T'HH:mm:ssZ, but then I am not able to query the data.
Step 1: Create table
CREATE EXTERNAL TABLE my_info (
  id STRING,
  name STRING
)
PARTITIONED BY (part STRING)
STORED AS ORC
LOCATION 's3://bucket1/data'
TBLPROPERTIES ("orc.compress" = "SNAPPY");
Step 2: Create a folder like the one below and add the files.
s3://bucket1/data/part=2019-11-12T14:15:16Z
Step 3: Refresh partition
MSCK REPAIR TABLE my_info
Step 4: Query the data
SELECT *
FROM my_info
With this, I am not able to query any data.
If I change the folder to the format yyyy-MM-ddTHH (without ':') in Step 2:
s3://bucket1/data/part=2019-11-12T14
then I am able to get the results.
Any idea why this is not working?
This is because when you create the partitioned table, the partitioning is implemented as part of the S3 path, e.g. for s3://bucket1/data/part=2019-11-12T14:15:16Z the part=2019-11-12T14:15:16Z section is a path element that Athena interprets as a partition when querying the data.
S3 key names have some restrictions on the characters that can be used:
The following characters in a key name might require additional code handling and likely need to be URL encoded or referenced as HEX. Some of these are non-printable characters and your browser might not handle them, which also requires special handling:
Ampersand ("&")
Dollar ("$")
ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
'At' symbol ("@")
Equals ("=")
Semicolon (";")
Colon (":")
Plus ("+")
Space – Significant sequences of spaces may be lost in some uses (especially multiple spaces)
Comma (",")
Question mark ("?")
In this case it's probably the colons in the path that are not being handled by Presto/Athena. To work around this, you can use an alternative dividing character in the timestamp, e.g. part=2019-11-12--14-15-16, or omit it altogether.
It seems you can use a URL-encoded colon (%3A).
Further, if you wish to use timestamp as the partition type instead of string, make sure to use a "java.sql.Timestamp compatible format" as documented for the CREATE TABLE statement.
So the final URL would be s3://bucket1/data/part=2019-11-12 14%3A15%3A16/.
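For illustration, a tiny Python sketch of building that key by percent-encoding only the colons (plain string handling, nothing Athena-specific; the bucket and prefix are the ones from the question):

timestamp = "2019-11-12 14:15:16"
partition_value = timestamp.replace(":", "%3A")   # ":" -> "%3A"
print("s3://bucket1/data/part=" + partition_value + "/")
# s3://bucket1/data/part=2019-11-12 14%3A15%3A16/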

How to increase limit of 'decimal' field in expressionengine

I've got a Text Input field with 'Allowed Content' set to 'Decimal'. It won't let me set it to anything over a million on an entry, giving the error number_exceeds_limit.
I've thought about saving it as a string rather than using the decimal content type, but I need to display the entries in order using orderby on the field, and if it is a string it will treat 9 as being greater than 100, since "9" sorts after "1".
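A quick illustration of that lexicographic-ordering problem (Python, purely for demonstration; the values are made up):

values = ["9", "100", "25"]
print(sorted(values))            # ['100', '25', '9'] -> "9" sorts as the greatest
print(sorted(map(int, values)))  # [9, 25, 100]       -> the numeric order that is actually wanted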
Is there a way to either increase or get around the million limit?
ExpressionEngine version 3.5.2, in case it's relevant.
You would need to edit the logic in cp/ee/EllisLab/Addons/text/ft.text.php (line 62 in 5.1.2) and also edit the column type in the DB structure (I just tried this locally, setting the field length to 20,4 instead of 10,4).
It is probably worth raising an issue about this on the EE GitHub, as that limit seems restrictive to me (MySQL supports up to 65 digits): https://github.com/ExpressionEngine/ExpressionEngine/issues
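For context, a rough sketch of where the million ceiling likely comes from, assuming the underlying column is DECIMAL(10,4) as the "10,4" above suggests (Python used only to show the arithmetic):

precision, scale = 10, 4
max_value = 10 ** (precision - scale) - 10 ** -scale
print(max_value)   # 999999.9999 -> only 6 integer digits, so anything >= 1,000,000 is rejected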

How to generate a pin code in Symfony

I'm working right now on a Symfony2 web app and I need to automatically generate a random PIN code composed of 6 alphanumeric characters, for example:
14gkf8
kfgh88
This code will be sent by mail to the user; that's how they will connect to the platform.
Does anyone have an idea how to do this, or is there maybe a ready-made tool for it? Thank you.
You can generate random codes with the following code:
substr(bin2hex(openssl_random_pseudo_bytes(100)), 0, 6)
openssl_random_pseudo_bytes() generates random binary data, bin2hex() transforms this binary data into hexadecimal (e.g. 5c3aa…e55), and substr(…, 0, 6) keeps only the first 6 characters. Since hexadecimal uses the values 0 to 9 and a to f, there are 16 different characters available at each position, which gives 16^6 = 16,777,216 possibilities (with 0 to 9 and a to z it would be 36^6 = 2,176,782,336, only about 130 times more). If the user doesn't need to type the key, you can use more characters; for example, with 12 characters you get 16^12 ≈ 2.81×10¹⁴ possibilities.
You can use uniqid() to generate a unique alphanumeric string

Questions about EXIF in hexadecimal form

I am trying to understand the EXIF header portion of a JPEG file (in hex) and how to interpret it so I can extract data, specifically GPS information. For better or worse, I am using VB.Net 2008 (sorry, it is what I can grasp right now). I have extracted the first 64K of a JPG into a byte array and have a vague idea of how the data is arranged.
Using the EXIF specification documents, versions 2.2 and 2.3, I see that there are tags that are supposed to correspond to actual byte sequences in the file. I see that there is a "GPS IFD" tag that has a value of 8825 (in hex). I search for the hex string 8825 in the file (which I understand to be the two bytes 88 and 25), and I believe that there is a sequence of bytes following the 8825. I suspect that those subsequent bytes denote where in the file, by way of an offset, the GPS data is located.
For example, I have the following hex bytes, starting with 88 25: 88 25 00 04 00 00 00 01 00 00 05 9A 00 00 07 14. Is the string that I am looking for longer than 16 bytes? I get the impression that this string of data should be telling me where to find the actual GPS data in the file.
Looking at http://search.cpan.org/~bettelli/Image-MetaData-JPEG-0.153/lib/Image/MetaData/JPEG/Structures.pod#Exif_and_DCT, halfway down the page, it talks about “Each IFD block is a structured sequence of records, called, in the Exif jargon, Interoperability arrays. The beginning of the 0th IFD is given by the 'IFD0_Pointer' value. The structure of an IFD is the following:”
So, what is an IFD0_Pointer? Does it have to do with an offset? I presume an offset is so many bytes from a beginning point. If that is true, where is that beginning point?
Thanks for any responses.
Dale
I suggest you read the Exif specification (PDF); it is clear and quite easy to follow. For a short primer, here is a summary of an article I wrote:
A JPEG/Exif file starts with the start of the image marker (SOI). The SOI consists of two magic bytes 0xFF 0xD8, identifying the file as a JPEG file. Following the SOI, there are a number of Application Marker sections (APP0, APP1, APP2, APP3, ...) typically including metadata.
Application Marker Sections
Each APPn section starts with a marker. For the APP0 section, the marker is 0xFF 0xE0, for the APP1 section 0xFF 0xE1, and so on. Marker bytes are followed by two bytes for the size of the section (excluding the marker, including the size bytes). The length field is followed by variable size application data. APPn sections are sequential, so that you can skip entire sections (by using the section size) until you reach the one you are interested in. Contents of APPn sections vary, the following is for the Exif APP1 section only.
The Exif APP1 Section
Exif metadata is stored in an APP1 section (there may be more than one APP1 section). The application data in an Exif APP1 section consists of the Exif marker 0x45 0x78 0x69 0x66 0x00 0x00 ("Exif\0\0"), the TIFF header and a number of Image File Directory (IFD) sections.
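A rough Python sketch of that section walk, stopping at the Exif APP1 segment ("photo.jpg" is a placeholder path, and the offsets follow the description above):

import struct

with open("photo.jpg", "rb") as f:
    data = f.read()

assert data[0:2] == b"\xFF\xD8"               # SOI marker
offset = 2
tiff_start = None
while offset + 4 <= len(data) and data[offset] == 0xFF:
    marker = data[offset + 1]
    size = struct.unpack(">H", data[offset + 2:offset + 4])[0]   # includes the size bytes
    if marker == 0xE1 and data[offset + 4:offset + 10] == b"Exif\x00\x00":
        tiff_start = offset + 10              # the TIFF header follows "Exif\0\0"
        break
    if marker == 0xDA:                        # start of scan: no Exif APP1 before the image data
        break
    offset += 2 + size                        # skip this section (marker + size + data)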
The TIFF Header
The TIFF header contains information about the byte-order of IFD sections and a pointer to the 0th IFD. The first two bytes are 0x49 0x49 (II for Intel) if the byte-order is little-endian or 0x4D 0x4D (MM for Motorola) for big-endian. The following two bytes are magic bytes 0x00 0x2A (42 ;)). And the following four important bytes will tell you the offset to the 0th IFD from the start of the TIFF header.
Important: The JPEG file itself (what you have been reading until now) will always be in big-endian format. However, the byte order of the IFD subsections may be different and needs to be converted (you know the byte order from the TIFF header above).
Image File Directories
Once you get this far, you have your pointer to the 0th IFD section and you are ready to read actual metadata. The remaining IFDs are referenced in different places. The offset to the Exif IFD and the GPS IFD are given in the 0th IFD fields. The offset to the first IFD is given after the 0th IFD fields. The offset to the Interoperability IFD is given in the Exif IFD.
IFDs are simply sequential records of metadata fields. The field count is given in the first two bytes of the IFD. Following the field count are 12-byte fields. Following the fields, there is a 4 byte offset from the start of the TIFF header to the start of the first IFD. This value is meaningful for only the 0th IFD. Following this, there is the IFD data section.
IFD Fields
Fields are 12-byte subsections of IFD sections. The first two-bytes of each field give the tag ID as defined in the Exif standard. The next two bytes give the type of the field data. You will have 1 for byte, 2 for ascii, 3 for short (uint16), 4 for long (uint32), etc. Check the Exif Specification for the complete list.
The following four bytes may be a little confusing. For byte arrays (ascii and undefined types), the byte length of the array is given. For example, for the Ascii string: "Exif", the count will be 5 including the null terminator. For other types, this is the number of field components (eg. 4 shorts, 3 rationals).
Following the count, we have the 4-byte field value. However, if the length of the field data exceeds 4 bytes, it will be stored in the IFD Data section instead. In this case, this value will be the offset from the start of the TIFF header to the start of the field data. For example, for a long (uint32, 4 bytes), this will be the field value. For a rational (2 x uint32, 8 bytes), this will be an offset to the 8-byte field data.
This is basically how metadata is arranged in a JPEG/Exif file. There are a few caveats to keep in mind (remember to convert the byte order as needed, offsets are from the start of the TIFF header, jump to the data section to read long fields, ...), but the format is quite easy to read. [Image: color-coded hex view of a JPEG/Exif file, with the SOI, the TIFF header, the IFD size and offset bytes, the IFD fields, and the field data each highlighted in a different color.]
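Tying this back to the original question, here is a rough sketch (not production code) that reads the TIFF header and scans the 0th IFD for the GPS IFD pointer, tag 0x8825; it assumes data and tiff_start from a section walk like the sketch above:

import struct

endian = "<" if data[tiff_start:tiff_start + 2] == b"II" else ">"   # II = little-endian, MM = big-endian
magic, ifd0_offset = struct.unpack(endian + "HI", data[tiff_start + 2:tiff_start + 8])
assert magic == 42                                                  # TIFF magic bytes 0x00 0x2A

ifd0 = tiff_start + ifd0_offset               # IFD0_Pointer is relative to the TIFF header
field_count = struct.unpack(endian + "H", data[ifd0:ifd0 + 2])[0]

gps_ifd_offset = None
for i in range(field_count):
    field = ifd0 + 2 + i * 12                 # each IFD field is 12 bytes
    tag, ftype, count, value = struct.unpack(endian + "HHII", data[field:field + 12])
    if tag == 0x8825:                         # GPS IFD pointer (type 4 = LONG, count 1)
        gps_ifd_offset = value                # again counted from the start of the TIFF header
        break

if gps_ifd_offset is not None:
    print("GPS IFD starts at file offset", tiff_start + gps_ifd_offset)

Reading the bytes quoted in the question this way (and assuming big-endian order, which the 88 25 ordering suggests): 88 25 is the tag, 00 04 the type (LONG), 00 00 00 01 the count, and 00 00 05 9A the offset of the GPS IFD from the start of the TIFF header. In other words, the GPS data is not in those 16 bytes themselves; they point to where it lives.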
Here is a PHP script I wrote to modify Exif headers; it locates the embedded Exif thumbnail and overwrites it in place with a same-length placeholder.
<?php
// Resolve the target file; allow it to be overridden via the request.
$filename = "torby.jpg";
if (isset($_REQUEST['filename'])) {
    $filename = $_REQUEST['filename'];
}
$full_image_string = file_get_contents($filename);

// Extract the embedded Exif thumbnail (either from an explicit 'file'
// parameter or from the resolved filename).
if (array_key_exists('file', $_REQUEST)) {
    $thumb_image = exif_thumbnail($_REQUEST['file'], $width, $height, $type);
} else {
    $thumb_image = exif_thumbnail($filename, $width, $height, $type);
}

if ($thumb_image !== false) {
    echo $thumb_image;
    $thumblen = strlen($thumb_image);
    // How many times the thumbnail bytes occur in the file (should be 1).
    echo substr_count($full_image_string, $thumb_image);
    // Replace the thumbnail bytes with a same-length placeholder so the
    // surrounding offsets in the file stay valid.
    $filler = str_pad("%%%THUMB%%%", $thumblen);
    $full_image_string = str_replace($thumb_image, $filler, $full_image_string);
    file_put_contents($filename, $full_image_string);
    exit;
} else {
    // No thumbnail available; handle the error here.
    echo 'No thumbnail available';
}
?>
