I am working on a new type of database, using Go. One of the things I would like to do is have a distributed disk so that I can distribute queries over multiple machines (think Pi-type architectures). This means building my own structures on raw disk.
My challenge is that I can't find a Go package that will let me write N bytes from a pointer to a structure. All the IO packages limit access to []byte slices.
That's nice for protection, but if I have to buffer everything through a byte array via some form of encoding, it will slow down access to a specific object.
Anyone got any idea on how to do raw IO? Or am I going to have to handle GOBs as my unit of IO and suffer the penalty for encoding/decoding?
Big warning first: don't do it. It is neither safe nor portable.
For a given struct, you can reflect over it to figure out its in-memory size, then cast it to a []byte using package unsafe.
e.g.: (*[size]byte)(unsafe.Pointer(&mystruct))
This will give you something C-ish with absolutely no safety guarantees or portability.
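As a sketch of that cast, using unsafe.Slice (available since Go 1.17); Record here is a hypothetical pointer-free struct, not anything from the question:

```go
package main

import (
	"fmt"
	"unsafe"
)

// Record is a hypothetical struct with a fixed, pointer-free layout.
type Record struct {
	ID    int64
	Score float64
}

// rawBytes reinterprets r's memory as a byte slice. This is exactly the
// C-ish trick described above: no safety guarantees, and the layout
// depends on field order, alignment, and target architecture.
func rawBytes(r *Record) []byte {
	return unsafe.Slice((*byte)(unsafe.Pointer(r)), unsafe.Sizeof(*r))
}

func main() {
	r := Record{ID: 42, Score: 3.14}
	fmt.Println(len(rawBytes(&r))) // 16: two 8-byte fields, no padding
}
```

Note that this only "works" at all because Record contains no pointers; the moment a field is a string, slice, or map, the bytes you get back are headers pointing elsewhere.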
I'll quote the Go spec:
A package using unsafe must be vetted manually for type safety and may not be portable.
You can find a lot more details in this Go and Memory layout post, including all the steps you need to unsafely treat structs as just bytes.
Overall, it's fascinating to examine how Go functions on a low level, but this is absolutely the wrong thing to do in your case. Any real data infrastructure will need storage logic way more complicated than just dumping in-memory structs to disk anyway.
In general, you cannot do raw IO of a Go struct (i.e. memdump). This is because many things in Go contain pointers, and the actual data is not contiguous in memory.
For example, a struct like this:
type Person struct {
    Name string
}
contains a string, which in turn contains a pointer to the bytes of the string. A raw memdump would only dump the pointer.
The solution is serialization. This is never free, although some implementations do a pretty good job.
The closest to what you are describing is something like go-memdump, but I wouldn't recommend it for production.
Otherwise, I recommend looking at a performant serialization technique. (Go's gob encoding is not the best.)
...Or am I going to have to handle GOBs as my unit of IO and suffer the penalty for encoding/decoding?
Just use GOBs.
Premature optimization is the root of all evil.
Copy means that the struct could be copied just by copying bytes as is. As a result, it should be easily possible to re-interpret such a struct as [u8]. What's the most idiomatic way to do so, preferably without involving unsafe?
I want to have an optimized struct which could easily be sent via processes/wire/disk. I understand that there are a lot of details which need to be taken care of, like alignment, and I am looking for a solution for such a high-performance use case, i.e. close-to-zero-copy, high-performance serialization.
Copy means that the struct could be copied just by copying bytes as is.
This is true.
As a result, it should be easily possible to re-interpret such a struct as [u8].
This is not true, because Copy structs can still contain padding, which is not permitted to be read except incidentally while copying.
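The padding point is easy to check with std alone. The struct below (a hypothetical example, not from the question) is Copy, yet three of its eight bytes are padding with undefined contents:

```rust
// Copy, but not safely viewable as bytes: 3 padding bytes sit
// between `a` and `b` to align the u32.
#[repr(C)]
#[derive(Clone, Copy)]
struct Padded {
    a: u8,
    b: u32,
}

fn main() {
    // 1 byte + 3 padding bytes + 4 bytes = 8.
    assert_eq!(std::mem::size_of::<Padded>(), 8);
    // Reading those padding bytes is undefined behavior, which is
    // why a blanket "Copy implies viewable-as-[u8]" rule cannot hold.
    println!("size = {}", std::mem::size_of::<Padded>());
}
```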
What's the most idiomatic way to do so, preferably without involving unsafe?
You should start with bytemuck. It is a library which provides trivial conversion to and from [u8] when it is safe to do so. In particular, it checks that there is no padding in the struct, and that the representation is well-defined (not subject to the whims of the compiler).
You will still need to consider alignment, and for that purpose may need to introduce explicit “padding” fields (whose value is explicitly set rather than being left undefined) so that the alignment of other fields is satisfied.
Your program's data will also not be compatible with machines of different endianness unless you take care. (However, it is possible to do so, in ways which have zero run-time overhead if not necessary, and most machines are little-endian today so that cost will almost never actually apply.)
I would like to create a new data type in Rust on the "bit-level".
For example, a quadruple-precision float. I could create a structure that has two double-precision floats and arbitrarily increase the precision by splitting the quad into two doubles, but I don't want to do that (that's what I mean by on the "bit-level").
I thought about using a u8 array or a bool array, but in both cases I'd waste 7 bits of memory per element (because a bool is also a byte in size). I know there are several crates that implement something like bit-arrays or bit-vectors, but looking through their source code didn't help me understand their implementation.
How would I create such a bit-array without wasting memory, and is this the way I would want to choose when implementing something like a quad-precision type?
I don't know how to implement new data types other than structures that combine the basic types, and I haven't been able to find a solution on the internet yet; maybe I'm not searching with the right keywords.
The question you are asking has no direct answer: just like any other programming language, Rust has a basic set of rules for type layouts. This is because (most) real-world CPUs can't address individual bits, need certain alignments when referencing memory, have rules about how pointer arithmetic works, and so on.
For instance, if you create a type of just two bits, you'll still need an 8-bit byte to represent it, because there is simply no way to address two individual bits in most CPUs' instruction sets; there is also no way to take the address of such a type, because addressing works at the byte level at minimum. More useful information regarding this can be found here, in section 2, The Anatomy of a Type. Be aware that the non-wasting bit-level type you are thinking about would need to fulfill all the rules mentioned there.
It's a perfectly reasonable approach to represent what you want as, e.g., a single wrapped u128 and implement all the arithmetic on top of that type. Another, more generic approach would be to use a Vec<u8>. Either way, you'll be doing a relatively large amount of bit-masking, shifting and indirection.
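A sketch of the wrapped-u128 approach (the name Bits128 and its helpers are illustrative, not from any crate):

```rust
/// 128 bits with no wasted space, plus bit-level accessors on top.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Bits128(u128);

impl Bits128 {
    /// Read bit i (0 = least significant).
    fn get(self, i: u32) -> bool {
        (self.0 >> i) & 1 == 1
    }
    /// Set or clear bit i.
    fn set(&mut self, i: u32, v: bool) {
        if v {
            self.0 |= 1u128 << i;
        } else {
            self.0 &= !(1u128 << i);
        }
    }
}

fn main() {
    let mut b = Bits128(0);
    b.set(100, true);
    assert!(b.get(100) && !b.get(99));
    // A quad-precision float type would layer sign/exponent/mantissa
    // masking and arithmetic on top of accessors like these.
}
```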
Having a look at rust_decimal or similar crates might also be a good idea.
I am trying to develop an HTTP client using the http-simple library. Some of the library's implementation seems confusing to me.
The library makes heavy use of Conduit; however, there is also this setRequestBodyLBS function, and interestingly, the function setRequestBodyBS is missing. It is documented that Conduit and lazy IO do not work well together. So my question is: why not the other way around, i.e. implement the BS version of the function instead of the LBS version? What is the idea behind the choice made here?
Internally, a lazy bytestring is like a linked list of strict bytestrings. Moving from a strict bytestring to a lazy one is cheap (you build a linked list of one element) but going in the reverse direction is costlier (you need to allocate a contiguous chunk of memory for the combined bytes, and then copy each chunk from the list).
Lazy IO uses lazy bytestrings, but they're also useful in other contexts, for example when you have strict chunks arriving from an external source and you want an easy way of accumulating them without having to preallocate a big area of memory or perform frequent reallocations/copies. Instead, you just keep a list of chunks that you later present as a lazy bytestring. (When list concatenations start getting expensive or the granularity is too small, you can use a Builder as a further optimization.)
Another frequent use case is serialization of some composite data structure (say, aeson's Value). If all you are going to do is dump the generated bytes into a file or a network request, it doesn't make much sense to perform a relatively costly consolidation of the serialized bytes of each sub-component. If needed, you can always perform it later with toStrict anyway.
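The cheap and costly directions, concretely (just the bytestring package; the chunk contents are arbitrary):

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  let chunk = BS.pack [1, 2, 3]
      -- Cheap: a lazy bytestring of one strict chunk, no copying.
      lazy = BL.fromStrict chunk
      -- Accumulate chunks via a Builder, still no consolidation.
      built = B.toLazyByteString (B.byteString chunk <> B.byteString chunk)
  -- Costly: toStrict allocates one contiguous buffer and copies
  -- every chunk into it.
  print (BL.length lazy, BS.length (BL.toStrict built))
```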
I have to pick a type for a sequence of floats with 16K elements. The values will be updated frequently, potentially many times a second.
I've read the wiki page on arrays. Here are the conclusions I've drawn so far. (Please correct me if any of them are mistaken.)
IArrays would be unacceptably slow in this case, because they'd be copied on every change. With 16K floats in the array, that's 64KB of memory copied each time.
IOArrays could do the trick, as they can be modified without copying all the data. In my particular use case, doing all updates in the IO monad isn't a problem at all. But they're boxed, which means extra overhead, and that could add up with 16K elements.
IOUArrays seem like the perfect fit. Like IOArrays, they don't require a full copy on each change. But unlike IOArrays, they're unboxed, meaning they're basically the Haskell equivalent of a C array of floats. I realize they're strict. But I don't see that being an issue, because my application would never need to access anything less than the entire array.
Am I right to look to IOUArrays for this?
Also, suppose I later want to read or write the array from multiple threads. Will I have backed myself into a corner with IOUArrays? Or is the choice of IOUArrays totally orthogonal to the problem of concurrency? (I'm not yet familiar with the concurrency primitives in Haskell and how they interact with the IO monad.)
A good rule of thumb is that you should almost always use the vector library instead of arrays. In this case, you can use mutable vectors from the Data.Vector.Mutable module.
The key operations you'll want are read and write which let you mutably read from and write to the mutable vector.
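A minimal sketch for the use case in the question (16K floats, updated in place):

```haskell
import qualified Data.Vector.Unboxed.Mutable as VM

main :: IO ()
main = do
  -- Unboxed mutable vector: a flat buffer of floats, like a C array.
  v <- VM.replicate (16 * 1024) (0 :: Float)
  VM.write v 0 3.5   -- in-place update; no copy of the other elements
  x <- VM.read v 0
  print (x, VM.length v)
```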
You'll of course want to benchmark (with criterion), or you might be interested in browsing some benchmarks I did here (if that link works for you; it's broken for me).
The vector library is a nice interface (crazy understatement) over GHC's more primitive array types, which you can get at more directly in the primitive package, as can the things in the standard array package; for instance, an IOUArray is essentially a MutableByteArray#.
Unboxed mutable arrays are usually going to be the fastest, but you should compare them in your application to IOArray or the vector equivalent.
My advice would be:
if you probably don't need concurrency first try a mutable unboxed Vector as Gabriel suggests
if you know you will want concurrent updates (and feel a little brave) then first try a MutableArray and then do atomic updates with these functions from the atomic-primops library. If you want fine-grained locking, this is your best choice. Of course concurrent reads will work fine on whatever array you choose.
It should also be theoretically possible to do concurrent updates on a MutableByteArray (equivalent to IOUArray) with those atomic-primops functions too, since a Float should always fit into a word (I think), but you'd have to do some research (or bug Ryan).
Also be aware of potential memory reordering issues when doing concurrency with the atomic-primops stuff, and help convince yourself with lots of tests; this is somewhat uncharted territory.
I realize this may be a rather heretical question, but I wonder whether I can mmap a file of data via System.IO.Posix.MMap and then cast the resulting ByteString into a strict array of some other type. E.g., if I know that the file contains doubles, can I somehow get this mmapped data into a UArr Double so I can do sumU etc. on it, and have the virtual memory system take care of IO for me? This is essentially how I deal with multi-GB data sets in my C++ code. Alternative, more idiomatic ways to do this are also appreciated, thanks!
Supreme extra points for ways I can also do multicore processing on the data :-) Not that I'm demanding or anything.
I don't think it is safe to do this. A UArr is unpinned memory allocated on the Haskell heap, and the GC will move it. ByteStrings (including mmapped ones) are ForeignPtrs to pinned memory. They're different kinds of objects in the runtime system.
You will need to copy for this to be safe, since you're changing the underlying representation from a ForeignPtr to a Haskell value.
I'm afraid I don't know how to cast a ByteString to a UArr T, but I'd like to claim some "extra points" by suggesting you take a look at Data Parallel Haskell; from the problem you've described it could be right up your street.
You probably want Foreign.Marshal here, and especially Foreign.Marshal.Array. It was designed to do exactly this.
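For example, a copying (and therefore safe) read of the doubles out of a ByteString could look like this sketch; doublesFromBS is a made-up helper, and it assumes the bytes really are host-endian Doubles:

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Unsafe as BSU
import Foreign.Marshal.Array (peekArray)
import Foreign.Ptr (castPtr)

-- Copy the Doubles out of a ByteString into Haskell-managed values.
-- Assumes the bytes are host-endian Doubles (8 bytes each).
doublesFromBS :: BS.ByteString -> IO [Double]
doublesFromBS bs =
  BSU.unsafeUseAsCStringLen bs $ \(ptr, len) ->
    peekArray (len `div` 8) (castPtr ptr)

main :: IO ()
main = do
  -- 16 zero bytes decode as two 0.0 Doubles.
  xs <- doublesFromBS (BS.replicate 16 0)
  print xs
```

The same shape works on an mmapped ByteString, since the copy happens while the pinned ForeignPtr is held alive by unsafeUseAsCStringLen.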