There are two kinds of prefixes in the gRPC source code: grpc_ and gpr_. It seems that the gpr_ prefix is used for internal functions, but what exactly does it stand for?
"gRPC Portable Runtime." There's really nowhere that spells that out. There's only one comment in one PR that mentions it.
If you've ever heard of APR (Apache Portable Runtime), then it might not be too surprising.
This question is about design / is fairly open-ended.
I'd like to use OpenCV, a large C++ library, from Haskell.
The closest solution at the moment is probably Arjun Comar's attempt to adapt the Python / Java binding generator.
See here, here, and here.
His approach generates a C interface, which is then wrapped using hsc2hs.
Due to OpenCV's lack of referential transparency in its API, as well as its frequent use of call parameters for output, for Arjun's approach to fully succeed he'll need to define a new API for OpenCV, and implement it in terms of the existing one.
So, it seems it might not be too much extra work to go whole-hog and define an API using an interface description language (IDL), such as SWIG, protobuf-with-RPC, or Apache Thrift.
This would provide interfaces to a number of languages besides Haskell.
My questions:
Is there anything better than SWIG for a server-free solution?
(I just want to call into C++; I'd rather not go through a local server.)
If there's no good server-free solution, should I use protobuf-with-RPC or Thrift?
Related: How good is Thrift's Haskell support?
From the code, it looks like it needs updating (I see references to GHC 6).
Related: What's a good protobuf-with-RPC solution?
With Apache Thrift, you get Haskell support. You are correct that the generated code is not always the latest, but that rarely matters; you can do the complex things at other abstraction levels and keep the messaging level as simple as possible.
Google Protobuf has no support for Haskell, nor does SWIG. With Protobuf you get C++, Java, JavaScript and Python, to my knowledge the main languages at Google. Have a look at this presentation. Without contest, Thrift and Protobuf are the best of the bunch.
It seems in your case you have to go with Thrift, as it supports Haskell.
It sounds like the foreign function interface for C++ is what you want:
Hackage,
Github
Disclaimer: I haven't used it, only heard good things about it.
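To give a flavor of what the low-level layer looks like, here is a minimal hand-written sketch using the plain Haskell FFI against a hypothetical C shim (cv_blur and CMat are invented names for illustration; a binding generator such as the package above would produce richer bindings, plus memory management around the Mat objects):

    {-# LANGUAGE ForeignFunctionInterface #-}
    module CV.Blur where

    import Foreign.Ptr (Ptr)
    import Foreign.C.Types (CInt(..))

    -- Opaque handle to a cv::Mat that lives on the C++ side.
    data CMat

    -- Hypothetical C shim, e.g. extern "C" CMat *cv_blur(CMat *src, int ksize);
    foreign import ccall unsafe "cv_blur"
      c_blur :: Ptr CMat -> CInt -> IO (Ptr CMat)

    -- Thin Haskell wrapper over the shim.
    blur :: Ptr CMat -> Int -> IO (Ptr CMat)
    blur src ksize = c_blur src (fromIntegral ksize)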
I found that the naming convention of Node is a little strange. For instance, in the file system module, the read-link function's name is all lowercase:
fs.readlink
But the read-file function's name is camelCased:
fs.readFile
It confused me. After mistyping these names several times, I decided I should ask: is there a naming convention that can help me memorize the API names?
Node's default convention is camelCase.
But functions in the file system module are named after their respective POSIX C interface functions, for example readdir and readlink.
These function names are well known to Linux developers, so they are often used as-is (as a single word), without camelCasing.
Always go with camelCase; almost everyone does.
Node core does have various inconsistencies here, like the one you mentioned; process has some too (process.get*() vs. process.memoryUsage()), among others. But the majority of the core methods are camelCased.
Until you memorize the ones that are not camelCased, I'd say it's a good tip to always develop with the docs open ;)
Suppose the following:
I have a type called World representing some simulation state. I also have this type synonym:
type Update = World -> World
Is Haskell capable of serializing the Update type such that it can be passed over the network? Or is there any other means of doing so? Maybe I'm not looking for a serialization of the code's logic so much as some kind of pointer or identifier that can be read on the other end. Both the sending and receiving processes are running the same Haskell program.
The distributed-process package is exactly what you described. Since each program already has the same set of functions, a pointer to the function is passed from one process to another. Serializing the function itself has been mentioned as a potential future goal, but it sounds like it might require a change to GHC. The GitHub page is a good resource for what backends exist. The GitHub pages look very nice, with some examples, but I did not know about them until a moment ago.
The few Haskell Parallel digests around issue 11 are where I remember learning the most. Definitely take some time to explore the GitHub pages; I know I will be.
If I remember correctly, there are simple examples in the Hackage package or in the GitHub repo exploring work stealing vs. work sharing and similar strategies.
I suggest making a DSL data structure. Sadly, Haskell does not have Lisp's runtime compilation features, but a DSL should be sufficient.
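For example, instead of shipping a World -> World closure, you can ship a small description of the update and interpret it on the receiving end. A minimal sketch, assuming the binary package with its generic deriving (the World fields and Update constructors here are invented for illustration):

    {-# LANGUAGE DeriveGeneric #-}
    module UpdateDSL where

    import Data.Binary (Binary, encode, decode)
    import GHC.Generics (Generic)

    -- Stand-in for the real simulation state.
    data World = World { tick :: Int, score :: Int }
      deriving (Show, Generic)

    -- A first-order description of an update instead of a bare function.
    data Update
      = AdvanceTick
      | AddScore Int
      | Compose [Update]
      deriving (Show, Generic)

    instance Binary Update

    -- Interpret the description back into a World -> World function.
    applyUpdate :: Update -> World -> World
    applyUpdate AdvanceTick  w = w { tick = tick w + 1 }
    applyUpdate (AddScore n) w = w { score = score w + n }
    applyUpdate (Compose us) w = foldl (flip applyUpdate) w us

    -- encode on the sender, decode and apply on the receiver.
    roundTrip :: Update -> World -> World
    roundTrip u = applyUpdate (decode (encode u))

Since both processes run the same program, they share the interpreter, and only the small Update value has to cross the wire.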
I'm a reasonably competent programmer who knows Haskell, but who hasn't used it in any major projects. I know enough about C, systems programming, and network programming that I believe I can pick apart tsocks from the source code.
I don't have any experience with the low-level systems interfaces Haskell provides. I'm looking for any advice people can offer me on the topic, including "Don't do it; you'll hate yourself for it," provided there is an explanation.
I really wouldn't do this, except as an experiment. I'm a Haskell guy, but not a deep systems guy, so there's a caveat there. But nonetheless, I see the following on the tsocks page:
tsocks is based on the 'shared library interceptor' concept. Through use of the LD_PRELOAD environment variable or the /etc/ld.so.preload file tsocks is automatically loaded into the process space of every executed program. From there it overrides the normal connect() function by providing its own. Thus when an application calls connect() to establish a TCP connection it instead passes control to tsocks. tsocks determines if the connection needs to be made via a SOCKS server (by checking /etc/tsocks.conf) and negotiates the connection if so (through use of the real connect() function).
It is possible to call Haskell from C, and vice versa. And it's relatively easy, in fact. For shared libraries, see this: http://www.haskell.org/ghc/docs/6.12.1/html/users_guide/using-shared-libs.html.
But when you invoke Haskell from C, you need to A) link in the runtime and B) initialize and invoke the runtime.
So that works when the C code knows that it's calling Haskell. But it's trickier when the C code doesn't know it's calling Haskell, so you'd need to wrap the Haskell shared library with a C library that invokes and manages the runtime transparently to the program that is preloading the haskell-tsocks library to intercept its normal connect calls.
So I'm sure this can be done, but it sounds rather painful and complicated, and somewhat expensive in terms of having to link the whole GHC runtime in for this one feature. And frankly, I imagine the code you'd be writing (I haven't inspected the tsocks code itself yet) would largely be FFI calls anyway.
So a Haskell implementation of some element of SOCKS (a proxy, a client, etc.) sounds interesting and potentially useful. But the exact preload magic that tsocks does sounds like a rather poor fit.
Bear in mind that there are Haskell hackers that are much better at this stuff than me, more knowledgeable, and more experienced. So they might say otherwise.
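To make the "C calls Haskell" direction concrete: the Haskell half of such a shim is just a foreign export, but the preloaded C wrapper still has to start and stop the GHC runtime (hs_init/hs_exit) around it. A minimal sketch, with an invented should_proxy helper standing in for the real tsocks logic:

    {-# LANGUAGE ForeignFunctionInterface #-}
    module ConnectShim where

    import Foreign.C.Types (CInt(..))

    -- Invented placeholder: decide whether the connection on this socket
    -- should go through a SOCKS server (real tsocks consults /etc/tsocks.conf).
    shouldProxy :: CInt -> IO CInt
    shouldProxy _fd = return 1

    -- Exposed to C as "should_proxy"; the C wrapper that LD_PRELOAD loads
    -- must call hs_init() before using this symbol and hs_exit() afterwards.
    foreign export ccall "should_proxy"
      shouldProxy :: CInt -> IO CInt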
(Posting as a separate answer, since this is advice unrelated to the FFI)
You probably know this stuff, but in case it's useful for anyone...
Read up on the Network.Socket module (see the sketch after this list)
Search the Haskell Wiki for pages that might help you (like Applications and libraries/Network)
Check out System Programming in Haskell and other chapters from RWH
Ignore the people that say "Haskell is terrible for I/O" - protip: you can just scare them away by saying fancy words like "endofunctor"
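Here is that Network.Socket sketch: opening a plain TCP connection from Haskell looks roughly like this (using the network package; the host and port are arbitrary examples):

    import Network.Socket

    main :: IO ()
    main = do
      let hints = defaultHints { addrSocketType = Stream }
      addr:_ <- getAddrInfo (Just hints) (Just "example.org") (Just "80")
      sock   <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
      connect sock (addrAddress addr)   -- the call tsocks would normally intercept
      close sock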
This may not be exactly the answer you were looking for, but instead of re-writing it in Haskell, you could just use the Foreign Function Interface to wrap the already-existing C implementation in Haskell types.
Note that one of the few major changes in Haskell 2010 was to officially include the FFI as a language feature. Link: Haskell 2010 FFI
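As a taste of that approach, importing the existing connect(2) symbol takes only a few lines (a sketch; the sockaddr structure is left opaque here rather than marshalled properly, and socklen_t is approximated as CUInt):

    {-# LANGUAGE ForeignFunctionInterface #-}
    module RawConnect where

    import Foreign.Ptr (Ptr)
    import Foreign.C.Types (CInt(..), CUInt(..))

    -- Opaque view of struct sockaddr; a real binding would marshal its fields.
    data CSockAddr

    -- int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
    -- "safe" because connect may block.
    foreign import ccall safe "connect"
      c_connect :: CInt -> Ptr CSockAddr -> CUInt -> IO CInt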
I'm thinking of running an experiment to track DNS values in different ways (like how often they change and whatnot). To do this I will need to be able to make a DNS request directly to a server, so that 1) I know which server the answer came from, 2) I can request responses from several servers, and 3) I can avoid the cache run by the local OS.
Does anyone know of a library (C#, D, C, or C++, in that order of preference) that will let me directly query a DNS server? Failing that, does anyone know of an easy-to-understand description of the DNS protocol from which I could implement such a system?
I have experience only with C, so here is my list:
libresolv is the old, traditional and standard way. It is available on every Unix (type man 3 resolver) and includes routines like
res_query which does more or less what you want. To query a specific name server, you typically update the global variable _res.nsaddr_list (do note that, apparently, it does not work with IPv6).
ldns is the modern and shiny solution. There is good documentation online.
A very common library, though apparently unmaintained, is adns.
For C, I'd go with http://cr.yp.to/djbdns/blurb/library.html (the low-level parts if you need total control, i.e. dns_transmit* and friends) -- for C#, maybe http://www.c-sharpcorner.com/UploadFile/ivxivx/DNSClient12122005234612PM/DNSClient.aspx (can't test that one right now, whence the "maybe"!).
The DNS specification is spread over many RFCs (see a nice graph) and I would strongly advise against implementing a stub resolver from scratch. There are many opportunities to get it wrong; the DNS has evolved a lot over the years. If you are brave and crazy, here are the most important RFCs:
RFC 1034, concepts
RFC 1035, format
RFC 2181, update to the specification, to fix many errors or ambiguities
RFC 2671, EDNS (mandatory today)
RFC 3597, handling the unknown resource record types
and many others...
libdns (I think it's part of BIND). There's a Cygwin port which may be useful for Windows environments.
http://rpm2html.osmirror.nl/libdns.so.21.html