Suppose I have the following C++ function:
int summap(const map<int,int>& m) {
    ...
}
I try to call it from Python using cppyy, by sending a dict:
import cppyy
cppyy.include("functions.hpp")
print(cppyy.gbl.summap({55:1,66:2,77:3}))
I get an error:
TypeError: int ::summap(const map<int,int>& v) =>
TypeError: could not convert argument 1
How can I call this function?
There is no relation between Python's dict and C++'s std::map (the two have completely different internal structures), so this requires a conversion. There is currently no automatic one in cppyy, so do something like this:
cppm = cppyy.gbl.std.map[int, int]()
for key, value in {55: 1, 66: 2, 77: 3}.items():
    cppm[key] = value
then pass cppm to summap.
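Putting it together, a minimal sketch (assuming functions.hpp declares summap as above):
import cppyy
cppyy.include("functions.hpp")

cppm = cppyy.gbl.std.map[int, int]()
for key, value in {55: 1, 66: 2, 77: 3}.items():
    cppm[key] = value

print(cppyy.gbl.summap(cppm))  # call summap with the converted std::map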
Automatic support for Python list/tuple -> std::vector is available, but there, too, it's no smarter than copying (likewise, because the internal structure is completely different), so any automatic std::map <-> Python dict conversion would internally still have to do a copy like the above.
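For comparison, a quick sketch of the list-to-vector conversion mentioned above; the copy still happens, just implicitly:
import cppyy

v = cppyy.gbl.std.vector[int]([1, 2, 3])  # copies the Python list into a std::vector<int>
print(list(v))  # [1, 2, 3]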
Intro
I've started taking a look at Nim for hobby game modding purposes. Yet, I found it difficult to work with Nim compared to C when it comes to machine-specific low-level memory layout, and I would like to know if Nim actually has better support here.
I need to control byte order and be able to de/serialize arbitrary plain-old-data (POD) objects to custom binary file formats. I couldn't directly find a Nim library that allows flexible storage options, such as representing enums and pointers as big-endian 32-bit values. Or maybe I just don't know how to use the feature.
std/marshal: just JSON, i.e. not an efficient, flexible, or binary format, but cross-compatible
nim-serialization: seems to be made for human-readable formats
nesm: flexible cross-compatibility? (It has some options and a good interface)
flatty: no flexible cross-compatibility, no byte order?
msgpack4nim: no flexible cross-compatibility, byte order?
bingo: ?
Flexible cross-compatibility means it must be able to de/serialize fields independently of Nim's ABI, but with customization options.
Maybe "Kaitai Struct" is more what I'm looking for: a file parser with experimental Nim support.
TL;DR
As a workaround for a serialization library, I tried writing a recursive "member field reverser" that makes use of std/endians, which is almost sufficient.
But I didn't succeed in implementing byte reversal of arbitrarily long objects in Nim. It's not practically relevant, but I still wonder whether Nim has a solution.
I found reverse() and reversed() in std/algorithm, but I need a byte array to reverse and then turn back into the original object type. In C++ there would be reinterpret_cast, in C there is a void* cast, and in D there is a void[] cast (D allows defining array slices from pointers), but I couldn't get this working with Nim.
I tried cast[ptr array[value.sizeof, byte]](unsafeAddr value)[] but I can't assign it to a new variable. Maybe there was a different problem.
How to "byte reverse" arbitrary long Plain-Old-Datatype objects?
How to serialize to binary files with byte order, member field size, pointer as file "offset - start offset"? Are there bitfield options in Nim?
It is indeed possible to use algorithm.reverse and the appropriate cast invocation to reverse bytes in-place:
import std/[algorithm, strutils, strformat]

type
  LittleEnd {.packed.} = object
    a: int8
    b: int16
    c: int32
  BigEnd {.packed.} = object
    c: int32
    b: int16
    a: int8

# just so we can see what's going on:
proc `$`(l: LittleEnd): string = &"(a:0x{l.a.toHex}, b:0x{l.b.toHex}, c:0x{l.c.toHex})"
proc `$`(b: BigEnd): string = &"(c:0x{b.c.toHex}, b:0x{b.b.toHex}, a:0x{b.a.toHex})"

var lit = LittleEnd(a: 0x12, b: 0x3456, c: 0x789a_bcde)
echo lit  # (a:0x12, b:0x3456, c:0x789ABCDE)

var big: BigEnd
copyMem(big.addr, lit.addr, sizeof(lit))

# here's the reinterpret_cast you were looking for:
cast[var array[sizeof(big), byte]](big.addr).reverse
echo big  # (c:0xDEBC9A78, b:0x5634, a:0x12)
For C-style bitfields there is also the {.bitsize.} pragma, but using it causes Nim to lose sizeof information, and of course bitfields won't be reversed within bytes:
import std/[algorithm, strutils, strformat]

type
  LittleNib {.packed.} = object
    a {.bitsize: 4.}: int8
    b {.bitsize: 12.}: int16
    c {.bitsize: 20.}: int32
    d {.bitsize: 28.}: int32
  BigNib {.packed.} = object
    d {.bitsize: 28.}: int32
    c {.bitsize: 20.}: int32
    b {.bitsize: 12.}: int16
    a {.bitsize: 4.}: int8

const nibsize = 8

proc `$`(l: LittleNib): string = &"(a:0x{l.a.toHex(1)}, b:0x{l.b.toHex(3)}, c:0x{l.c.toHex(5)}, d:0x{l.d.toHex(7)})"
proc `$`(b: BigNib): string = &"(d:0x{b.d.toHex(7)}, c:0x{b.c.toHex(5)}, b:0x{b.b.toHex(3)}, a:0x{b.a.toHex(1)})"

var lit = LittleNib(a: 0x1, b: 0x234, c: 0x56789, d: 0x0abcdef)
echo lit  # (a:0x1, b:0x234, c:0x56789, d:0x0ABCDEF)

var big: BigNib
copyMem(big.addr, lit.addr, nibsize)
cast[var array[nibsize, byte]](big.addr).reverse
echo big  # (d:0x5DEBC0A, c:0x8967F, b:0x123, a:0x4)
Copying the bytes over and then rearranging them with reverse is less than optimal anyway, so you might just want to copy the bytes over in a loop. Here's a proc that can swap the endianness of any object (including ones for which sizeof is not known at compile time):
template asBytes[T](x: var T): ptr UncheckedArray[byte] =
  cast[ptr UncheckedArray[byte]](x.addr)

proc swapEndian[T, U](src: var T, dst: var U) =
  assert sizeof(src) == sizeof(dst)
  let len = sizeof(src)
  for i in 0 ..< len:
    dst.asBytes[len - i - 1] = src.asBytes[i]
Bit fields are supported in Nim as a set of enums:
type
  MyFlag* {.size: sizeof(cint).} = enum
    A
    B
    C
    D
  MyFlags = set[MyFlag]

proc toNum(f: MyFlags): int = cast[cint](f)
proc toFlags(v: int): MyFlags = cast[MyFlags](v)

assert toNum({}) == 0
assert toNum({A}) == 1
assert toNum({D}) == 8
assert toNum({A, C}) == 5
assert toFlags(0) == {}
assert toFlags(7) == {A, B, C}
For arbitrary bit operations you have the bitops module, and for endianness conversions you have the endians module. But you already know about the endians module, so it's not clear what problem you are trying to solve with the so-called byte reversal. Usually you have an integer, so you first convert it to a specific endianness (big-endian, for instance) and save that; when you read it back, you convert from that endianness and you have your int again. The endianness procs should already be dealing with whether bytes get reversed, so why do you need to do it yourself? In any case, you can follow the source hyperlink in the documentation and see how the endian procs are implemented; this can give you an idea of how to cast values in case you need to do some yourself.
Since you know C, perhaps the last resort would be to write a few serialization functions in C and call them from Nim, or directly embed them using the emit pragma. However, this looks like the least cross-platform and least pain-free option.
I can't answer anything about generic data-structure serialization libraries. I stay away from them because they tend to require hand-holding and impose certain limitations on your code, and, depending on the feature set, a simple refactoring (changing field order in your POD) may destroy the binary compatibility of the generated output without you noticing it until runtime. So you end up spending additional time writing unit tests to verify that the black box you brought in to save yourself some time behaves as you want (and keeps doing so across refactorings and version upgrades!).
I have a function that receives a list of strings and concatenates them into a new string; I do that using Enum.join. But when I try this operation, I get the following error:
** (Protocol.UndefinedError) protocol Enumerable not implemented for "int main(){return 2;}" of type BitString. This protocol is implemented for the following type(s): Date.Range, File.Stream, Function, GenEvent.Stream, HashDict, HashSet, IO.Stream, List, Map, MapSet, Range, Stream
My way around this was to try to convert the BitString into a String, but I can't find anything for doing this in Elixir's documentation.
My other solution was to try to not get that BitString at all, but I don't even know why I'm getting a BitString to begin with.
The process I'm following is to receive a list like this: [{"int main(){return 2;}", 1}]
Then I make a list using only the strings: text = Enum.map(words, fn {string, _} -> string end)
I tried printing the result so I'm sure I'm giving the correct argument; using IO.inspect(text), I got ["int main(){return 2;}"], which looks like a list of strings to me.
Then I pass that to a function using Enum.flat_map(text, &lex_raw_tokens(&1, line))
Inside that function, I do:
def lex_raw_tokens(program, line) when program != "" do
  textString = Enum.join(program, " ")
This is where I get the error. Is there any way of turning that BitString back into a String, or of not getting that BitString in the first place?
Sorry, I'm still learning Elixir, and honestly so far it's the most difficult language I've learned; I'm having a lot of trouble with it. Also, this whole thing is part of a small C compiler I'm writing as a school project.
You have text bound to ["int main(){return 2;}"], then you're doing an Enum.flat_map/2 over text, so inside lex_raw_tokens/2, program is bound to "int main(){return 2;}". You're then trying to do an Enum.join/2 on program, but since it's a string (which is a kind of BitString), it's not enumerable.
I am trying to use compile to generate, at runtime, a Python function accepting arguments, as follows.
import types
import ast
code = compile("def add(a, b): return a + b", '<string>', 'exec')
fn = types.FunctionType(code, {}, name="add")
print(fn(4, 2))
But it fails with
TypeError: <module>() takes 0 positional arguments but 2 were given
Is there any way to compile a function accepting arguments this way, or is there some other way to do it?
compile returns the code object for creating a module. In Python 3.6, if you were to disassemble your code object:
>>> import dis
>>> dis.dis(fn)
0 LOAD_CONST 0 (<code object add at ...., file "<string>" ...>)
2 LOAD_CONST 1 ('add')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (add)
8 LOAD_CONST 2 (None)
10 RETURN_VALUE
That literally translates to: make a function; name it 'add'; return None.
This means that calling your function runs the code that creates the module; it does not return the module or the function itself. So essentially, what you're actually doing is equivalent to the following:
def f():
    def add(a, b):
        return a + b

print(f(4, 2))  # raises the same TypeError: f() takes 0 positional arguments but 2 were given
As for how to work around this, the answer is that it depends on what you want to do. For instance, if you want to compile a function using compile, the simple answer is that you won't be able to without doing something similar to the following.
# 'code' is the result of the call to compile.
# In this case we know the function's code object is the first constant
# (as seen in the dis output above), so we extract its value:
f_code = code.co_consts[0]
add = types.FunctionType(f_code, {}, "add")
>>> add(4, 2)
6
Alternatively, since defining a function in Python just requires running Python code (there is no static compilation by default other than compiling to bytecode), you can execute the module's code with custom globals and locals dictionaries, and then extract the values from those:
glob, loc = {}, {}
exec(code, glob, loc)
>>> loc['add'](4, 2)
6
But the real answer is that if you want to do this, the simplest way is generally to generate an abstract syntax tree using the ast module, compile that into module code, and evaluate or execute the module.
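For illustration, a minimal sketch of that AST route (the SwapAddToSub transformer is just a toy rewrite to show where a transformation would slot in):
import ast

# parse source into a tree, optionally rewrite it, then compile the tree as a module
tree = ast.parse("def add(a, b): return a + b")

class SwapAddToSub(ast.NodeTransformer):  # toy transformation for illustration
    def visit_Add(self, node):
        return ast.Sub()

tree = ast.fix_missing_locations(SwapAddToSub().visit(tree))

namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
print(namespace["add"](4, 2))  # 2, because Add was rewritten to Sub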
If you want to do bytecode transformation, I'd suggest looking at the codetransformer package on PyPi.
TL;DR: using compile in 'exec' mode will only ever return code for a module, and most serious code generation is done either with ASTs or by manipulating bytecode.
is there any other way to do that?
For what it's worth: I recently created a @compile_fun goodie that considerably eases the process of applying compile to a function. It relies on compile, so it's nothing different from what was explained in the above answers, but it provides an easier way to do it. Your example becomes:
from makefun import compile_fun  # assuming the decorator comes from the makefun library referenced below

@compile_fun
def add(a, b):
    return a + b

assert add(1, 2) == 3
You can see that you now can't debug into add with your IDE. Note that this does not improve runtime performance, nor does it protect your code from reverse-engineering, but it might be convenient if you do not want your users to see the internals of your function when they debug. Note that the obvious drawback is that they will not be able to help you debug your lib, so use with care!
See the makefun documentation for details.
I think this accomplishes what you want in a better way:
import types

text = "lambda a, b: a + b"
code = compile(text, '<string>', 'eval')
body = types.FunctionType(code, {})
fn = body()
print(fn(4, 2))
The function being anonymous resolves the implicit namespace issues.
And returning it as a value by using the mode 'eval' is cleaner than lifting it out of the code object's constants, since it does not rely upon the specific habits of the compiler.
More usefully, as you seem to have noticed (you import ast) but not gotten around to using yet, the text passed to compile can actually be an ast object, so you can apply ast transformations to it.
import types
import ast
from somewhere import TransformTree  # placeholder for whatever AST transformer you use

text = "lambda a, b: a + b"
tree = ast.parse(text, mode='eval')  # parse as an expression so it can be compiled in 'eval' mode
tree = TransformTree().visit(tree)
code = compile(tree, '<string>', 'eval')  # compile the (transformed) tree, not the original text
body = types.FunctionType(code, {})
fn = body()
print(fn(4, 2))
So I have a need to pass around a numpy array in my PyQt Application. I first tried using the new-style signals/slots, defining my signal with:
newChunkToProcess = pyqtSignal(np.array()); however, this gives the error:
TypeError: Required argument 'object' (pos 1) not found
I have worked out how to do this with the old-style signals and slots using
self.emit(SIGNAL("newChunkToProcess(PyQt_PyObject)"), np.array([5,1,2])) - (yes, that's just testing data :), but I was wondering, is it possible to do this using the new-style system?
The type you're looking for is np.ndarray
You can tell this from the following code:
>>> arr = np.array([]) # create an array instance
>>> type(arr) # ask 'what type is this object?'
<type 'numpy.ndarray'>
So your signal should look more like:
newChunkToProcess = pyqtSignal(np.ndarray)
(Notice I'm passing the type np.ndarray, rather than an array instance as you tried).
If you don't want to worry about the type of the argument, you could instead use:
newChunkToProcess = pyqtSignal(object)
This should let you send any data type at all through the signal.
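For context, here is a minimal sketch of the new-style wiring (assumes PyQt5; the class and handler names are just illustrative):
import numpy as np
from PyQt5.QtCore import QObject, pyqtSignal

class Processor(QObject):
    newChunkToProcess = pyqtSignal(np.ndarray)  # typed new-style signal

def handle_chunk(chunk):
    print("received chunk:", chunk)

processor = Processor()
processor.newChunkToProcess.connect(handle_chunk)
processor.newChunkToProcess.emit(np.array([5, 1, 2]))  # the test data from the question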
Also: numpy and Qt do not share any major functionality that I know of. In fact, the two are quite complementary and make a very powerful combination.
You are doing it wrong: you have to pass the data type (int, str, ...), in your case list.
Like I am doing:
images = pyqtSignal(int, str)
failed = pyqtSignal(str, str)
finished = pyqtSignal(int)
If I use the inline function in MATLAB I can create a single function name that could respond differently depending on previous choices:
if (someCondition)
    p = inline('a - b', 'a', 'b');
else
    p = inline('a + b', 'a', 'b');
end
c = p(1, 2);
d = p(3, 4);
But the inline functions I'm creating are becoming quite epic, so I'd like to change them to other types of functions (i.e. m-files, subfunctions, or nested functions).
Let's say I have m-files like Mercator.m, KavrayskiyVII.m, etc. (all taking a value for phi and lambda), and I'd like to assign the chosen function to p in the same way as I have above so that I can call it many times (with variable sized matrices and things that make using eval either impossible or a total mess).
I have a variable, type, that will be one of the names of the functions required (e.g. 'Mercator', 'KavrayskiyVII', etc.). I figure I need to make p into a pointer to the function named inside the type variable. Any ideas how I can do this?
Option #1:
Use the str2func function (assumes the string in type is the same as the name of the function):
p = str2func(type); % Create function handle using function name
c = p(phi, lambda); % Invoke function handle
NOTE: The documentation mentions these limitations:
Function handles created using str2func do not have access to variables outside of their local workspace or to nested functions. If your function handle contains these variables or functions, MATLAB® throws an error when you invoke the handle.
Option #2:
Use a SWITCH statement and function handles:
switch type
    case 'Mercator'
        p = @Mercator;
    case 'KavrayskiyVII'
        p = @KavrayskiyVII;
    ...  % Add other cases as needed
end
c = p(phi, lambda);  % Invoke function handle
Option #3:
Use EVAL and function handles (suggested by Andrew Janke):
p = eval(['@' type]);  % Concatenate '@' with the function name and evaluate
c = p(phi, lambda); % Invoke function handle
As Andrew points out, this avoids the limitations of str2func and the extra maintenance associated with a switch statement.