I am connecting to a FANUC CNC machine from Python using a C library. I learned C++ about 20 years ago (and haven't used it since) and am not super strong in Python, so I'm struggling with the data types.
I have data coming from the C library in the format described by the typedef at the end of this question.
I am using the following code to read it:
cnc_ids = (ctypes.c_uint32 * 11)()
ret = focas.cnc_rddynamic2(libh, 1, 44, cnc_ids)
if ret != 0:
    raise Exception(f"Failed to read cnc id! ({ret})")
for i in range(11):
    print(f"{i}: {cnc_ids[i]}")
If all of the pieces of data were 4 bytes long this would be easy, but the first two are only 2 bytes each.
I imagine I could just split the first 4-byte group into two with some extra code, but I will be interfacing with about 300 different functions, all with similar structures, so this will be a recurring issue. Some of these functions will have several different lengths of data that need to be processed.
What is the best way to process this data?
My end goal is to output the data as JSON to be used for an API within Flask.
Additional info - the typedef is also provided. If there is a way I can just use that directly, that would be awesome. Below is the example they give for this function.
typedef struct odbdy2 {
    short dummy ;    /* not used */
    short axis ;     /* axis number */
    long  alarm ;    /* alarm status */
    long  prgnum ;   /* current program number */
    long  prgmnum ;  /* main program number */
    long  seqnum ;   /* current sequence number */
    long  actf ;     /* actual feedrate */
    long  acts ;     /* actual spindle speed */
    union {
        struct {
            long absolute[MAX_AXIS] ;  /* absolute */
            long machine[MAX_AXIS] ;   /* machine */
            long relative[MAX_AXIS] ;  /* relative */
            long distance[MAX_AXIS] ;  /* distance to go */
        } faxis ;                      /* In case of all axes */
        struct {
            long absolute ;            /* absolute */
            long machine ;             /* machine */
            long relative ;            /* relative */
            long distance ;            /* distance to go */
        } oaxis ;                      /* In case of 1 axis */
    } pos ;
} ODBDY2 ;  /* MAX_AXIS is the maximum controlled axes. */
With ctypes, the structure can be declared and used directly. If you receive the data as a buffer of 32-bit values, you can cast that buffer into the structure as shown below:
import ctypes as ct

MAX_AXIS = 3  # Not provided by OP, guess...

class FAXIS(ct.Structure):
    _fields_ = (('absolute', ct.c_long * MAX_AXIS),
                ('machine', ct.c_long * MAX_AXIS),
                ('relative', ct.c_long * MAX_AXIS),
                ('distance', ct.c_long * MAX_AXIS))

    def __repr__(self):
        return f'FAXIS({list(self.absolute)}, {list(self.machine)}, {list(self.relative)}, {list(self.distance)})'

class OAXIS(ct.Structure):
    _fields_ = (('absolute', ct.c_long),
                ('machine', ct.c_long),
                ('relative', ct.c_long),
                ('distance', ct.c_long))

    def __repr__(self):
        return f'OAXIS({self.absolute}, {self.machine}, {self.relative}, {self.distance})'

class POS(ct.Union):
    _fields_ = (('faxis', FAXIS),
                ('oaxis', OAXIS))

    def __repr__(self):
        return f'POS({self.faxis!r}, {self.oaxis!r})'

class ODBDY2(ct.Structure):
    _fields_ = (('dummy', ct.c_short),
                ('axis', ct.c_short),
                ('alarm', ct.c_long),
                ('prgnum', ct.c_long),
                ('prgmnum', ct.c_long),
                ('seqnum', ct.c_long),
                ('actf', ct.c_long),
                ('acts', ct.c_long),
                ('pos', POS))

    def __repr__(self):
        return f'ODBDY2({self.dummy}, {self.axis}, {self.alarm}, {self.prgnum}, {self.prgmnum}, {self.seqnum}, {self.actf}, {self.acts}, {self.pos!r})'

cnc_one_axis = (ct.c_uint32 * 11)(1,2,3,4,5,6,7,8,9,10,11)
cnc_all_axis = (ct.c_uint32 * 19)(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19)

p = ct.pointer(cnc_all_axis)
data = ct.cast(p, ct.POINTER(ODBDY2))
print(data.contents)

p = ct.pointer(cnc_one_axis)
data = ct.cast(p, ct.POINTER(ODBDY2))  # Notice only OAXIS has valid data
print(data.contents)
Output:
ODBDY2(1, 0, 2, 3, 4, 5, 6, 7, POS(FAXIS([8, 9, 10], [11, 12, 13], [14, 15, 16], [17, 18, 19]), OAXIS(8, 9, 10, 11)))
ODBDY2(1, 0, 2, 3, 4, 5, 6, 7, POS(FAXIS([8, 9, 10], [11, 0, 1], [0, -1838440016, 32762], [1, 0, -1]), OAXIS(8, 9, 10, 11)))
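Since the stated end goal is JSON for a Flask API, it may also help to pass the structure itself to the library call and then flatten it into plain Python types. The following is only a sketch under a couple of assumptions: cnc_rddynamic2 is assumed to accept any writable buffer of the requested length (just like the c_uint32 array in the question), the length argument uses ct.sizeof(odbdy) where the OP passed 44, and the union will report both the faxis and oaxis views, so pick whichever matches the axis count you requested.

import ctypes as ct
import json

def ctypes_to_py(obj):
    # Recursively convert a ctypes Structure/Union/Array into plain Python types.
    if isinstance(obj, (ct.Structure, ct.Union)):
        return {name: ctypes_to_py(getattr(obj, name)) for name, _ in obj._fields_}
    if isinstance(obj, ct.Array):
        return [ctypes_to_py(item) for item in obj]
    return obj  # simple fields (c_short, c_long, ...) already come back as Python ints

# Hypothetical usage with the handle and library object from the question.
odbdy = ODBDY2()
ret = focas.cnc_rddynamic2(libh, 1, ct.sizeof(odbdy), ct.byref(odbdy))
if ret != 0:
    raise Exception(f"Failed to read dynamic data! ({ret})")

print(json.dumps(ctypes_to_py(odbdy)))  # ready to return from a Flask endpoint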
Related
I am trying to render a glTF model with ash (Vulkan) in Rust.
I sent all my data to the GPU and I am seeing this:
Naturally my suspicion is that the normal data is wrong, so I checked with RenderDoc:
Those seem ok, maybe the attributes are wrong?
All those normals seem like they add to 1, should be fine. Maybe my pipeline is wrong?
Seems like the correct format and binding (I am sending 3 buffers and binding one to binding 0, one to 1 and the last to 2, binding 1 has the normals).
The only thing I find that is weird is, if I go to the vertex input pipeline stage and look at the buffers:
This is what the buffer at index 1 shows:
This does not happen for the buffer at index 0 (positions), which also renders properly. So whatever is causing the normals to show up as hex codes here is likely the cause of the bug, but I have no idea why this is happening. As far as I can see, the pipeline and buffers were all set up properly.
You presumably want to use one separate buffer for each vertex attribute (aka non-interleaved vertex buffers, SoA),
but your VkVertexInputAttributeDescription::offset values [0, 12, 24] are what you would use for one vertex buffer interleaving all attributes (provided that their binding values point to one and the same VkVertexInputBindingDescription).
e.g.
// Interleaved:
// Buffer 0: |Position: R32G32B32_FLOAT, Normal: R32G32B32_FLOAT, Uv: R32G32B32_FLOAT| * vertex count
VkVertexInputBindingDescription {
    .binding = 0,
    .stride = 12 * 3,  // 3 `R32G32B32_FLOAT`s !
    .inputRate = VK_VERTEX_INPUT_RATE_VERTEX
};

// All attributes in the same `binding` == `0`
VkVertexInputAttributeDescription[3] {
    {
        .location = 0,
        .binding = 0,
        .format = VK_FORMAT_R32G32B32_SFLOAT,
        .offset = 0   // [0, 11] portion
    },
    {
        .location = 1,
        .binding = 0,
        .format = VK_FORMAT_R32G32B32_SFLOAT,
        .offset = 12  // [12, 23] portion
    },
    {
        .location = 2,
        .binding = 0,
        .format = VK_FORMAT_R32G32B32_SFLOAT,
        .offset = 24  // [24, 35] portion
    }
};
Your VkVertexInputBindingDescription[1].stride == 12 tells Vulkan that your vertex buffer 1 uses 12 bytes for each vertex, and your VkVertexInputAttributeDescription[1].offset == 12 says the normal value is at offset 12, which is out of bounds.
Same deal with your VkVertexInputAttributeDescription[2].offset == 24 overstepping (by a large amount) VkVertexInputBindingDescription[2].stride == 12.
To use one tightly packed buffer for each vertex attribute, you need to set your VkVertexInputAttributeDescription[n].offset values to 0, which looks something like:
// Non-interleaved:
// Buffer 0: |Position: R32G32B32_FLOAT| * vertex count
// Buffer 1: |Normal: R32G32B32_FLOAT| * vertex count
// Buffer 2: |Uv: R32G32B32_FLOAT| * vertex count
VkVertexInputBindingDescription[3] {
    {
        .binding = 0,
        .stride = 12,
        .inputRate = VK_VERTEX_INPUT_RATE_VERTEX
    },
    {
        .binding = 1,
        .stride = 12,
        .inputRate = VK_VERTEX_INPUT_RATE_VERTEX
    },
    {
        .binding = 2,
        .stride = 12,
        .inputRate = VK_VERTEX_INPUT_RATE_VERTEX
    }
};

// Each attribute in its own `binding` == `location`
VkVertexInputAttributeDescription[3] {
    {
        .location = 0,
        .binding = 0,
        .format = VK_FORMAT_R32G32B32_SFLOAT,
        .offset = 0  // Whole [0, 11]
    },
    {
        .location = 1,
        .binding = 1,
        .format = VK_FORMAT_R32G32B32_SFLOAT,
        .offset = 0  // Whole [0, 11]
    },
    {
        .location = 2,
        .binding = 2,
        .format = VK_FORMAT_R32G32B32_SFLOAT,
        .offset = 0  // Whole [0, 11]
    }
};
Also worth noting is the comment line // vertex stride 12 less than total data fetched 24 that RenderDoc generates in the Buffer Format section, and how it does so: it detects when a vertex attribute description oversteps its binding description's stride:
if(i + 1 == attrs.size())
{
    // for the last attribute, ensure the total size doesn't overlap stride
    if(attrs[i].byteOffset + cursz > stride && stride > 0)
        return tr("// vertex stride %1 less than total data fetched %2")
            .arg(stride)
            .arg(attrs[i].byteOffset + cursz);
}
I came across this link but am still struggling to construct an answer.
This is what one of the complex structs I have looks like. It is actually deeply nested within other structs :)
/*
* A domain consists of a variable length array of 32-bit unsigned integers.
* The domain_val member of the structure below is the variable length array.
* The domain_count is the number of elements in the domain_val array.
*/
typedef struct domain {
    uint32_t domain_count;
    uint32_t *domain_val;
} domain_t;
The test code in C is doing something like this:
uint32_t domain_seg[4] = { 1, 9, 34, 99 };
domain_val = domain_seg;
The struct defined in Python is:
class struct_domain(ctypes.Structure):
    _pack_ = True # source:False
    _fields_ = [
        ('domain_count', ctypes.c_uint32),
        ('PADDING_0', ctypes.c_ubyte * 4),
        ('domain_val', POINTER_T(ctypes.c_uint32)),
    ]
How do I populate domain_val in that struct? Can I use a Python list?
I am thinking of something along the lines of dom_val = c.create_string_buffer(c.sizeof(c.c_uint32) * domain_count), but then how do I iterate through the buffer to populate or read the values?
Will dom_val[0], dom_val[1] be able to iterate through the buffer with the correct length? Maybe I need some typecast while iterating to write/read the correct number of bytes.
Here's one way to go about it:
import ctypes as ct

class Domain(ct.Structure):
    _fields_ = (('domain_count', ct.c_uint32),
                ('domain_val', ct.POINTER(ct.c_uint32)))

    def __init__(self, data):
        size = len(data)
        # Create array of fixed size, initialized with the data
        self.domain_val = (ct.c_uint32 * size)(*data)
        self.domain_count = size

    # Note you can slice the pointer to the correct length to retrieve the data.
    def __repr__(self):
        return f'Domain({self.domain_val[:self.domain_count]})'

x = Domain([1, 9, 34, 99])
print(x)

# Just like in C, you can iterate beyond the end
# of the array and create undefined behavior,
# so make sure to index only within the bounds of the array.
for i in range(x.domain_count):
    print(x.domain_val[i])
Output:
Domain([1, 9, 34, 99])
1
9
34
99
To make it safer, you could add a property that casts the pointer-to-single-element into a pointer to a sized array of elements, so that length checking happens:
import ctypes as ct

class Domain(ct.Structure):
    _fields_ = (('_domain_count', ct.c_uint32),
                ('_domain_val', ct.POINTER(ct.c_uint32)))

    def __init__(self, data):
        size = len(data)
        self._domain_val = (ct.c_uint32 * size)(*data)
        self._domain_count = size

    def __repr__(self):
        return f'Domain({self._domain_val[:self._domain_count]})'

    @property
    def domain(self):
        return ct.cast(self._domain_val, ct.POINTER(ct.c_uint32 * self._domain_count)).contents

x = Domain([1, 9, 34, 99])
print(x)
for i in x.domain:  # now knows the size
    print(i)
x.domain[2] = 44    # Can mutate the array,
print(x)            # and it reflects in the data.
x.domain[4] = 5     # IndexError!
Output:
Domain([1, 9, 34, 99])
1
9
34
99
Domain([1, 9, 44, 99])
Traceback (most recent call last):
  File "C:\demo\test.py", line 27, in <module>
    x.domain[4] = 5
IndexError: invalid index
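Since the OP mentions this struct actually sits deep inside other structs, here is a hypothetical continuation of the snippet above (Outer and its field names are invented for illustration) showing that the same Domain class also works as an embedded field:

class Outer(ct.Structure):
    _fields_ = (('id', ct.c_uint32),
                ('dom', Domain))

d = Domain([1, 9, 34, 99])  # keep `d` referenced while `o` is in use,
o = Outer(7, d)             # so the array its pointer targets stays alive
print(o.id, o.dom)          # 7 Domain([1, 9, 34, 99])
print(list(o.dom.domain))   # [1, 9, 34, 99] via the sized-array property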
Hi, folks!
I'm trying to convert BCD-coded date-time data I get from an I2C-connected RTC into time.struct_time so I can use it later as a native type. The RTC returns an array that has to be decoded before it can be used. The source code is something like this:
import time
...
def bcd2dec(bcd):
    return 10 * (bcd >> 4) + (bcd & 0x0F)
...
rtcdata = [36, 54, 35, 48, 35, 36, 32]  # BCD coded datetime from RTC

rtc_time = time.struct_time(
    tm_year = bcd2dec(rtcdata[6]),
    tm_mon  = bcd2dec(rtcdata[5] & 0x1F),  # last 5 bits
    tm_mday = bcd2dec(rtcdata[3] & 0x3F),  # last 6 bits
    tm_hour = bcd2dec(rtcdata[2] & 0x3F),  # last 6 bits
    tm_min  = bcd2dec(rtcdata[1] & 0x7F),  # last 7 bits
    tm_sec  = bcd2dec(rtcdata[0] & 0x7F),  # last 7 bits
    tm_wday = bcd2dec(rtcdata[4] & 0x07)   # last 3 bits
)
But this doesn't work. I spent several hours trying to figure out how to fill this 'named tuple', with no luck. Can somebody suggest how to declare such a variable and fill all the corresponding properties by their names?
After several more painful hours of struggling and searching the Internet I was able to compose a solution. But actually I found it more readable to just go with a plain rtc_time = time.struct_time(()) solution. Maybe this will help someone.
import time
from collections import namedtuple
...
def bcd2dec(bcd):
    return 10 * (bcd >> 4) + (bcd & 0x0F)
...
struct_time_ = namedtuple('struct_time', ['tm_year', 'tm_mon', 'tm_mday', 'tm_hour', 'tm_min', 'tm_sec', 'tm_wday', 'tm_yday', 'tm_isdst'])

rtcdata = [36, 54, 35, 48, 35, 36, 32]  # BCD coded datetime from RTC

rtc_time = struct_time_(
    tm_sec  = bcd2dec(rtcdata[0] & 0x7F),  # last 7 bits
    tm_min  = bcd2dec(rtcdata[1] & 0x7F),  # last 7 bits
    tm_hour = bcd2dec(rtcdata[2] & 0x3F),  # last 6 bits
    tm_mday = bcd2dec(rtcdata[3] & 0x3F),  # last 6 bits
    tm_wday = bcd2dec(rtcdata[4] & 0x07),  # last 3 bits
    tm_mon  = bcd2dec(rtcdata[5] & 0x1F),  # last 5 bits
    tm_year = bcd2dec(rtcdata[6]),
    tm_yday = 0,
    tm_isdst = -1
)
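For reference, the real time.struct_time can also be constructed directly: it does not accept keyword arguments, but it does accept a 9-element sequence in the fixed field order (tm_year, tm_mon, tm_mday, tm_hour, tm_min, tm_sec, tm_wday, tm_yday, tm_isdst). A minimal sketch reusing the decoder above; the +2000 assumes the RTC reports a two-digit year:

import time

def bcd2dec(bcd):
    return 10 * (bcd >> 4) + (bcd & 0x0F)

rtcdata = [36, 54, 35, 48, 35, 36, 32]  # BCD coded datetime from RTC

rtc_time = time.struct_time((
    2000 + bcd2dec(rtcdata[6]),  # tm_year (assumes a 2-digit year register)
    bcd2dec(rtcdata[5] & 0x1F),  # tm_mon
    bcd2dec(rtcdata[3] & 0x3F),  # tm_mday
    bcd2dec(rtcdata[2] & 0x3F),  # tm_hour
    bcd2dec(rtcdata[1] & 0x7F),  # tm_min
    bcd2dec(rtcdata[0] & 0x7F),  # tm_sec
    bcd2dec(rtcdata[4] & 0x07),  # tm_wday
    0,                           # tm_yday, not provided by the RTC
    -1,                          # tm_isdst unknown
))
print(rtc_time)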
I have a list with 4 elements which contain integers:
data = [134, 2, 4, 170]
hexdata = [0x86, 0x2, 0x4, 0xAA]
I need to get the hex data from the two last elements, e.g. 0x04 and 0xAA,
concatenate them into the form 0x04AA, and convert that to an int.
In the end I need to get an integer with the value 1194.
I am stuck on this task.
data = [134, 2, 4, 170]

for x in data:
    print("0x%x" % (x), end=" ")
print()

c = "0x%x%x" % (data[2], data[3])
print(c)
print(int(c))
Traceback (most recent call last):
  File "123.py", line 7, in <module>
    print(int(c))
ValueError: invalid literal for int() with base 10: '0x4aa'
You don't need to bother with string formatting here - use int.from_bytes instead, e.g.:
data = [134, 2, 4, 170]
res = int.from_bytes(data[-2:], 'big')
# 1194
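As a side note, the ValueError in the question happens because int() defaults to base 10; the original string-formatting approach also works if base 16 is passed explicitly (and %02x is safer than %x, so single-digit bytes do not collapse):

data = [134, 2, 4, 170]
c = "0x%02x%02x" % (data[-2], data[-1])  # zero-pad each byte, e.g. 4 -> '04'
print(c)           # 0x04aa
print(int(c, 16))  # 1194 - int() accepts the 0x prefix when base 16 is given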
data = [134, 2, 4, 170]
result = data[-2] << 8 | data[-1]
Simply multiply the 4 accordingly and add it up? No need to go hex on it ...
data = [134, 2, 4, 170]
rv = data[2]*256 + data[3] # 0x04AA == 0x04*256 + 0xAA
print(rv)
Output:
1194
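Another option, not mentioned above, is the standard struct module, which scales better once more multi-byte fields are involved; this assumes the last two list entries form one big-endian 16-bit value:

import struct

data = [134, 2, 4, 170]
value = struct.unpack('>H', bytes(data[-2:]))[0]  # '>H' = big-endian unsigned short
print(value)  # 1194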
I am making a Python program that converts numbers to binary with a tkinter GUI. When using e.insert on an Entry, instead of a normal string like
0101010011
it comes out as
[0, 1, 0, 1...]
The function which converts a number to binary (I am aware the bin() alternative exists, I just wanted to create my own):
def dec2bin(decnum):
    binarylist = []
    while (decnum > 0):
        binarylist.append(decnum % 2)
        decnum = decnum // 2
    binarylist.reverse()
    binarylist = str(binarylist)
    return "".join(binarylist)
The function that is called when a button in the tkinter GUI is pressed; it is intended to replace one of the entry boxes' text with the binary output:
def convert():
    decimal = entrydec.get()
    decimal = int(decimal)
    entrybin.delete(0, END)
    entrybin.insert(0, dec2bin(decimal))
I expect the output to be 010101, but the actual output is [0, 1, 0, 1, 0, 1].
You can't use str() on a list - str([0, 1, 0, 0]) - to get a list of strings - ["0", "1", "0", "0"]; it gives you the single string '[0, 1, 0, 0]' instead.
You can use list comprehension:
binarylist = [str(x) for x in binarylist]
or map():
binarylist = map(str, binarylist)
Or you can convert the numbers 0, 1 to strings when you add them to the list:
binarylist.append( str(decnum % 2) )
And later you can use join():
def dec2bin(decnum):
    binarylist = []
    while (decnum > 0):
        binarylist.append( str(decnum % 2) )  # <-- use str()
        decnum = decnum // 2
    binarylist.reverse()
    #binarylist = str(binarylist)             # <-- remove it
    return "".join(binarylist)
dec2bin(12)
Result:
"1100"