Why does Go use cgo on Windows for a simple File.Write?

Rewriting a simple program from C# to Go, I found the resulting executable 3 to 4 times slower. In particular, the Go version uses 3 to 4 times more CPU. This is surprising because the code does a lot of I/O and is not supposed to consume a significant amount of CPU.
I made a very simple version that only does sequential writes and benchmarked it. I ran the same benchmarks on Windows 10 and Linux (Debian Jessie). The absolute times can't be compared (different systems, disks, ...), but the result is interesting.
I'm using the same Go version on both platforms: 1.6.
On Windows, os.File.Write uses cgo (see runtime.cgocall below), but not on Linux. Why?
Here is the disk.go program:
package main
import (
"crypto/rand"
"fmt"
"os"
"time"
)
const (
// size of the test file
fullSize = 268435456
// size of read/write per call
partSize = 128
// path of temporary test file
filePath = "./bigfile.tmp"
)
func main() {
buffer := make([]byte, partSize)
seqWrite := func() error {
return sequentialWrite(filePath, fullSize, buffer)
}
err := fillBuffer(buffer)
panicIfError(err)
duration, err := durationOf(seqWrite)
panicIfError(err)
fmt.Printf("Duration : %v\n", duration)
}
// It's just a test ;)
func panicIfError(err error) {
if err != nil {
panic(err)
}
}
func durationOf(f func() error) (time.Duration, error) {
startTime := time.Now()
err := f()
return time.Since(startTime), err
}
func fillBuffer(buffer []byte) error {
_, err := rand.Read(buffer)
return err
}
func sequentialWrite(filePath string, fullSize int, buffer []byte) error {
desc, err := os.OpenFile(filePath, os.O_WRONLY|os.O_CREATE, 0666)
if err != nil {
return err
}
defer func() {
desc.Close()
err := os.Remove(filePath)
panicIfError(err)
}()
var totalWrote int
for totalWrote < fullSize {
wrote, err := desc.Write(buffer)
totalWrote += wrote
if err != nil {
return err
}
}
return nil
}
The benchmark test (disk_test.go):
package main
import (
"testing"
)
// go test -bench SequentialWrite -cpuprofile=cpu.out
// Windows : go tool pprof -text -nodecount=10 ./disk.test.exe cpu.out
// Linux : go tool pprof -text -nodecount=10 ./disk.test cpu.out
func BenchmarkSequentialWrite(t *testing.B) {
buffer := make([]byte, partSize)
err := sequentialWrite(filePath, fullSize, buffer)
panicIfError(err)
}
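As an aside, the benchmark above performs the full 256 MB write exactly once and ignores t.N, which is fine for collecting a CPU profile but means the ns/op numbers printed by go test -bench are not meaningful. A b.N-aware sketch (assuming "os" is added to the test file's imports) might look like this:
// Sketch: let the testing package control the iteration count and report
// throughput per written byte.
func BenchmarkWritePart(b *testing.B) {
	buffer := make([]byte, partSize)
	desc, err := os.OpenFile(filePath, os.O_WRONLY|os.O_CREATE, 0666)
	if err != nil {
		b.Fatal(err)
	}
	defer func() {
		desc.Close()
		os.Remove(filePath)
	}()
	b.SetBytes(partSize)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := desc.Write(buffer); err != nil {
			b.Fatal(err)
		}
	}
}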
The Windows result (with cgo):
11.68s of 11.95s total (97.74%)
Dropped 18 nodes (cum <= 0.06s)
Showing top 10 nodes out of 26 (cum >= 0.09s)
flat flat% sum% cum cum%
11.08s 92.72% 92.72% 11.20s 93.72% runtime.cgocall
0.11s 0.92% 93.64% 0.11s 0.92% runtime.deferreturn
0.09s 0.75% 94.39% 11.45s 95.82% os.(*File).write
0.08s 0.67% 95.06% 0.16s 1.34% runtime.deferproc.func1
0.07s 0.59% 95.65% 0.07s 0.59% runtime.newdefer
0.06s 0.5% 96.15% 0.28s 2.34% runtime.systemstack
0.06s 0.5% 96.65% 11.25s 94.14% syscall.Write
0.05s 0.42% 97.07% 0.07s 0.59% runtime.deferproc
0.04s 0.33% 97.41% 11.49s 96.15% os.(*File).Write
0.04s 0.33% 97.74% 0.09s 0.75% syscall.(*LazyProc).Find
The Linux result (without cgo):
5.04s of 5.10s total (98.82%)
Dropped 5 nodes (cum <= 0.03s)
Showing top 10 nodes out of 19 (cum >= 0.06s)
flat flat% sum% cum cum%
4.62s 90.59% 90.59% 4.87s 95.49% syscall.Syscall
0.09s 1.76% 92.35% 0.09s 1.76% runtime/internal/atomic.Cas
0.08s 1.57% 93.92% 0.19s 3.73% runtime.exitsyscall
0.06s 1.18% 95.10% 4.98s 97.65% os.(*File).write
0.04s 0.78% 95.88% 5.10s 100% _/home/sam/Provisoire/go-disk.sequentialWrite
0.04s 0.78% 96.67% 5.05s 99.02% os.(*File).Write
0.04s 0.78% 97.45% 0.04s 0.78% runtime.memclr
0.03s 0.59% 98.04% 0.08s 1.57% runtime.exitsyscallfast
0.02s 0.39% 98.43% 0.03s 0.59% os.epipecheck
0.02s 0.39% 98.82% 0.06s 1.18% runtime.casgstatus

Go does not perform file I/O itself; it delegates the task to the operating system. See the operating-system-dependent Go syscall packages.
Linux and Windows are different operating systems with different OS ABIs. Linux makes system calls via syscall.Syscall; Windows calls into Windows DLLs. On Windows, the DLL call is a C call. It doesn't use cgo, but it does go through the same runtime call path that cgo uses, runtime.cgocall; there is no separate runtime.wincall alias.
In summary, different operating systems have different OS call mechanisms.
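To make the Windows mechanism concrete, here is a minimal sketch (my illustration, not part of the original answer) of calling a function in a Windows DLL from Go without any cgo; os.File.Write on Windows ultimately goes through the same kind of DLL call (WriteFile in kernel32.dll):
// Windows-only sketch: calling a kernel32.dll function directly, no cgo involved.
package main
import (
	"fmt"
	"syscall"
)
func main() {
	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	getTickCount := kernel32.NewProc("GetTickCount64")
	// Call dispatches a C/stdcall call into the DLL via the runtime, the same
	// path the profile above shows as runtime.cgocall.
	ms, _, _ := getTickCount.Call()
	fmt.Println("uptime (ms):", ms)
}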
Command cgo
Passing pointers
Go is a garbage collected language, and the garbage collector needs
to know the location of every pointer to Go memory. Because of this,
there are restrictions on passing pointers between Go and C.
In this section the term Go pointer means a pointer to memory
allocated by Go (such as by using the & operator or calling the
predefined new function) and the term C pointer means a pointer to
memory allocated by C (such as by a call to C.malloc). Whether a
pointer is a Go pointer or a C pointer is a dynamic property
determined by how the memory was allocated; it has nothing to do with
the type of the pointer.
Go code may pass a Go pointer to C provided the Go memory to which it
points does not contain any Go pointers. The C code must preserve this
property: it must not store any Go pointers in Go memory, even
temporarily. When passing a pointer to a field in a struct, the Go
memory in question is the memory occupied by the field, not the entire
struct. When passing a pointer to an element in an array or slice, the
Go memory in question is the entire array or the entire backing array
of the slice.
C code may not keep a copy of a Go pointer after the call returns.
A Go function called by C code may not return a Go pointer. A Go
function called by C code may take C pointers as arguments, and it may
store non-pointer or C pointer data through those pointers, but it may
not store a Go pointer in memory pointed to by a C pointer. A Go
function called by C code may take a Go pointer as an argument, but it
must preserve the property that the Go memory to which it points does
not contain any Go pointers.
Go code may not store a Go pointer in C memory. C code may store Go
pointers in C memory, subject to the rule above: it must stop storing
the Go pointer when the C function returns.
These rules are checked dynamically at runtime. The checking is
controlled by the cgocheck setting of the GODEBUG environment
variable. The default setting is GODEBUG=cgocheck=1, which implements
reasonably cheap dynamic checks. These checks may be disabled entirely
using GODEBUG=cgocheck=0. Complete checking of pointer handling, at
some cost in run time, is available via GODEBUG=cgocheck=2.
It is possible to defeat this enforcement by using the unsafe package,
and of course there is nothing stopping the C code from doing anything
it likes. However, programs that break these rules are likely to fail
in unexpected and unpredictable ways.
"These rules are checked dynamically at runtime."
Benchmarks:
To paraphrase, there are lies, damn lies, and benchmarks.
For valid comparisons across operating systems you need to run on identical hardware; differences in CPU, memory, and spinning-rust versus silicon disk I/O all matter. I dual-boot Linux and Windows on the same machine.
Run benchmarks at least three times back-to-back. Operating systems try to be smart. For example, caching I/O. Languages using virtual machines need warm-up time. And so on.
Know what you are measuring. If you are doing sequential I/O, you spend almost all your time in the operating system. Have you turned off malware protection? And so on.
And so on.
Here are some results for disk.go from the same machine using dual-boot Windows and Linux.
Windows:
>go build disk.go
>/TimeMem disk
Duration : 18.3300322s
Elapsed time : 18.38
Kernel time : 13.71 (74.6%)
User time : 4.62 (25.1%)
Linux:
$ go build disk.go
$ time ./disk
Duration : 18.54350723s
real 0m18.547s
user 0m2.336s
sys 0m16.236s
Effectively, they are the same: about 18 seconds for the disk.go run. There is just some variation between operating systems as to what is counted as user time and what is counted as kernel or system time. Elapsed or real time is the same.
In your tests, kernel or system time was 93.72% runtime.cgocall versus 95.49% syscall.Syscall.
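Side note, not from the original answers: since nearly all of the time is one system/DLL call per 128-byte Write, the usual way to cut that overhead is to batch writes in user space, for example with bufio.Writer. A sketch, assuming "bufio" is added to the imports; it changes what is being measured, of course:
func sequentialWriteBuffered(filePath string, fullSize int, buffer []byte) error {
	desc, err := os.OpenFile(filePath, os.O_WRONLY|os.O_CREATE, 0666)
	if err != nil {
		return err
	}
	defer func() {
		desc.Close()
		os.Remove(filePath)
	}()
	// 1 MiB user-space buffer: ~8192 small Writes per actual write syscall.
	w := bufio.NewWriterSize(desc, 1<<20)
	var total int
	for total < fullSize {
		n, err := w.Write(buffer)
		if err != nil {
			return err
		}
		total += n
	}
	return w.Flush()
}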

Related

string vs integer as map key for memory utilization in golang?

I have the read function below, which is called by multiple goroutines to read S3 files, and it populates two concurrent maps as shown below.
During server startup, it calls the read function to populate the two concurrent maps.
Then, periodically every 30 seconds, it calls the read function again to read new S3 files and populate the two concurrent maps with new data.
So basically, at any given time during the whole lifecycle of this app, both of my concurrent maps have some data and are also being updated periodically.
func (r *clientRepository) read(file string, bucket string) error {
var err error
//... read s3 file
for {
rows, err := pr.ReadByNumber(r.cfg.RowsToRead)
if err != nil {
return errs.Wrap(err)
}
if len(rows) <= 0 {
break
}
byteSlice, err := json.Marshal(rows)
if err != nil {
return errs.Wrap(err)
}
var productRows []ParquetData
err = json.Unmarshal(byteSlice, &productRows)
if err != nil {
return errs.Wrap(err)
}
for i := range productRows {
var flatProduct definitions.CustomerInfo
err = r.ConvertData(spn, &productRows[i], &flatProduct)
if err != nil {
return errs.Wrap(err)
}
// populate first concurrent map here
r.products.Set(strconv.FormatInt(flatProduct.ProductId, 10), &flatProduct)
for _, catalogId := range flatProduct.Catalogs {
strCatalogId := strconv.FormatInt(int64(catalogId), 10)
// upsert second concurrent map here
r.productCatalog.Upsert(strCatalogId, flatProduct.ProductId, func(exists bool, valueInMap interface{}, newValue interface{}) interface{} {
productID := newValue.(int64)
if valueInMap == nil {
return map[int64]struct{}{productID: {}}
}
oldIDs := valueInMap.(map[int64]struct{})
// value is irrelevant, no need to check if key exists
oldIDs[productID] = struct{}{}
return oldIDs
})
}
}
}
return nil
}
In the above code, flatProduct.ProductId and strCatalogId are integers, but I am converting them into strings because the concurrent map only works with string keys. And then I have the below three functions, which are used by my main application threads to get data from the concurrent maps populated above.
func (r *clientRepository) GetProductMap() *cmap.ConcurrentMap {
return r.products
}
func (r *clientRepository) GetProductCatalogMap() *cmap.ConcurrentMap {
return r.productCatalog
}
func (r *clientRepository) GetProductData(pid string) *definitions.CustomerInfo {
pd, ok := r.products.Get(pid)
if ok {
return pd.(*definitions.CustomerInfo)
}
return nil
}
I have a use case where I need to populate the maps from multiple goroutines and then read data from those maps from a bunch of main application threads, so it needs to be thread safe, and it should be fast enough as well, without much locking.
Problem Statement
I am dealing with a lot of data, like 30-40 GB worth from all these files, which I am reading into memory. I am using a concurrent map here, which solves most of my concurrency issues, but the key of the concurrent map is a string and it has no implementation where the key can be an integer. In my case the key is just a product id, which can be an int32, so is it worth storing all those product ids as strings in this concurrent map? I think string allocation takes more memory compared to storing those keys as integers; at least it does in C/C++, so I am assuming it should be the same here in Go too.
Is there anything I can do to improve here w.r.t. map usage, so that I reduce memory utilization without losing performance when reading data from these maps on the main threads?
I am using the concurrent map from this repo, which has no implementation with integer keys.
Update
I am trying to use cmap_int in my code to try it out.
type clientRepo struct {
customers *cmap.ConcurrentMap
customersCatalog *cmap.ConcurrentMap
}
func NewClientRepository(logger log.Logger) (ClientRepository, error) {
// ....
customers := cmap.New[string]()
customersCatalog := cmap.New[string]()
r := &clientRepo{
customers: &customers,
customersCatalog: &customersCatalog,
}
// ....
return r, nil
}
But I am getting this error:
Cannot use '&products' (type *ConcurrentMap[V]) as the type *cmap.ConcurrentMap
What do I need to change in my clientRepo struct so that it works with the new version of the concurrent map, which uses generics?
I don't know the implementation details of concurrent map in Go, but if it's using a string as a key I'm guessing that behind the scenes it's storing both the string and a hash of the string (which will be used for actual indexing operations).
That is going to be something of a memory hog, and there'll be nothing that can be done about that, as the concurrent map uses only strings for keys.
If there were some sort of map that did use integers, it'd likely be using hashes of those integers anyway. A smooth hash distribution is a necessary feature for good and uniform lookup performance, in the event that key data itself is not uniformly distributed. It's almost like you need a very simple map implementation!
I'm wondering if a simple array would do, if your product IDs fit within 32 bits (or can be munged to do so, or down to some other acceptable integer length). Yes, that way you'd have a large amount of memory allocated, possibly with large tracts unused. However, indexing is super-rapid, and the OS's virtual memory subsystem would ensure that areas of the array that you don't index aren't swapped in. Caveat - I'm thinking very much in terms of C and fixed-size objects here - less so Go - so this may be a bogus suggestion.
To persevere, so long as there's nothing about the array that implies initialisation-on-allocation (e.g. in C the array wouldn't get initialised by the compiler), allocation doesn't automatically mean it's all in memory, all at once, and only the most commonly used areas of the array will be in RAM courtesy of the OS's virtual memory subsystem.
EDIT
You could have a map of arrays, where each array covered a range of product Ids. This would be close to the same effect, trading off storage of hashes and strings against storage of null references. If product ids are clumped in some sort of structured way, this could work well.
Also, just a thought, and I'm showing a total lack of knowledge of Go here. Does Go store objects by reference? In which case wouldn't an array of objects actually be an array of references (so, fixed in size) and the actual objects allocated only as needed (ie a lot of the array is null references)? That doesn't sound good for my one big array suggestion...
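To make the array idea concrete in Go terms (my sketch; maxProductID is an assumed upper bound on the ID range, definitions.CustomerInfo is the type from the question): a slice of pointers is exactly "an array of references", so an unused slot costs one nil pointer (8 bytes on 64-bit), not a whole struct.
// Sketch: direct indexing by product ID instead of a hashed map.
// Concurrent writers would still need their own synchronization.
var byProductID = make([]*definitions.CustomerInfo, maxProductID+1)
func put(p *definitions.CustomerInfo) {
	byProductID[p.ProductId] = p
}
func get(id int64) *definitions.CustomerInfo {
	if id < 0 || id >= int64(len(byProductID)) {
		return nil
	}
	return byProductID[id]
}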
The library you use is relatively simple, and you can just replace string with int32 everywhere (and modify the hashing function) and it will still work fine.
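For illustration, the rough shape of such an int32-keyed sharded map (my sketch, not the library's actual code):
package intmap
import "sync"
const shardCount = 32
type shard struct {
	sync.RWMutex
	items map[int32]interface{}
}
// IntMap spreads keys across shards so that goroutines hitting different
// shards don't contend on the same lock.
type IntMap []*shard
func New() IntMap {
	m := make(IntMap, shardCount)
	for i := range m {
		m[i] = &shard{items: make(map[int32]interface{})}
	}
	return m
}
func (m IntMap) getShard(key int32) *shard {
	// trivial "hash": good enough for densely distributed product IDs
	return m[uint32(key)%shardCount]
}
func (m IntMap) Set(key int32, v interface{}) {
	s := m.getShard(key)
	s.Lock()
	s.items[key] = v
	s.Unlock()
}
func (m IntMap) Get(key int32) (interface{}, bool) {
	s := m.getShard(key)
	s.RLock()
	v, ok := s.items[key]
	s.RUnlock()
	return v, ok
}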
I ran a tiny (and not that rigorous) benchmark against the replaced version:
$ go test -bench=. -benchtime=10x -benchmem
goos: linux
goarch: amd64
pkg: maps
BenchmarkCMapAlloc-4 10 174272711 ns/op 49009948 B/op 33873 allocs/op
BenchmarkCMapAllocSS-4 10 369259624 ns/op 102535456 B/op 1082125 allocs/op
BenchmarkCMapUpdateAlloc-4 10 114794162 ns/op 0 B/op 0 allocs/op
BenchmarkCMapUpdateAllocSS-4 10 192165246 ns/op 16777216 B/op 1048576 allocs/op
BenchmarkCMap-4 10 1193068438 ns/op 5065 B/op 41 allocs/op
BenchmarkCMapSS-4 10 2195078437 ns/op 536874022 B/op 33554471 allocs/op
Benchmarks with the SS suffix are the original string version. So using integers as keys takes less memory and runs faster, as anyone would expect. The string version allocates about 50 bytes more per insertion. (This is not the actual memory usage, though.)
Basically, a string in go is just a struct:
type stringStruct struct {
str unsafe.Pointer
len int
}
So on a 64-bit machine, it takes at least 8 bytes (pointer) + 8 bytes (length) + len(underlying bytes) bytes to store a string. Turning it into an int32 or int64 will definitely save memory. However, I assume that CustomerInfo and the catalog sets take the most memory, so I don't think there will be a great improvement.
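You can check the header sizes directly (the values in the comments are for 64-bit platforms):
package main
import (
	"fmt"
	"unsafe"
)
func main() {
	var s string
	fmt.Println(unsafe.Sizeof(s))        // 16: pointer + length, header only
	fmt.Println(unsafe.Sizeof(int32(0))) // 4
	fmt.Println(unsafe.Sizeof(int64(0))) // 8
	// the string's backing bytes are counted on top of the header,
	// once per distinct string value
}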
(By the way, tuning the SHARD_COUNT in the library might also help a bit.)

How do I skip the filesystem cache when reading a file in Golang?

Assume that the contents of the file Foo.txt are as follows.
Foo Bar Bar Foo
Consider the following short program.
package main
import "syscall"
import "fmt"
func main() {
fd, err := syscall.Open("Foo.txt", syscall.O_RDONLY, 0)
if err != nil {
fmt.Println("Failed on open: ", err)
}
data := make([]byte, 100)
_, err = syscall.Read(fd, data)
if err != nil {
fmt.Println("Failed on read: ", err)
}
syscall.Close(fd)
}
When we run the program above, we get no errors, which is correct behavior.
Now, I modify the syscall.Open line to be the following.
fd, err := syscall.Open("Foo.txt", syscall.O_RDONLY | syscall.O_SYNC | syscall.O_DIRECT, 0)
When I run the program again, I get the following (undesirable) output.
Failed on read: invalid argument
How can I correctly pass the flags syscall.O_SYNC and syscall.O_DIRECT, as specified by the open man page, to skip the filesystem cache?
Note that I am using the syscall file interface directly instead of the os file interface because I could not find a way to pass those flags into the functions provided by os, but I am open to solutions that use os provided that they work correctly to disable the filesystem cache on reads.
Note also that I am running on Ubuntu 14.04 with ext4 as my filesystem.
Update: I tried to use @Nick Craig-Wood's package in the code below.
package main
import "io"
import "github.com/ncw/directio"
import "os"
import "fmt"
func main() {
in, err := directio.OpenFile("Foo.txt", os.O_RDONLY, 0666)
if err != nil {
fmt.Println("Error on open: ", err)
}
block := directio.AlignedBlock(directio.BlockSize)
_, err = io.ReadFull(in, block)
if err != nil {
fmt.Println("Error on read: ", err)
}
}
The output is the following
Error on read: unexpected EOF
You may enjoy my directio package which I made for exactly this purpose.
From the site
This is a library for the Go language to enable use of Direct IO under all supported OSes of Go (except openbsd and plan9).
Direct IO does IO to and from disk without buffering data in the OS. It is useful when you are reading or writing lots of data you don't want to fill the OS cache up with.
See here for package docs
http://go.pkgdoc.org/github.com/ncw/directio
From the open man page, under NOTES:
The O_DIRECT flag may impose alignment restrictions on the length and address of user-space buffers and the file offset of I/Os. In Linux alignment restrictions vary by file system and kernel version and might be absent entirely.
So you could have alignment issues, of either the memory or the file offset, or your buffer size could be "wrong". What the alignments and sizes should be is not obvious. The man page continues:
However there is currently no file system-independent interface for an application to discover these restrictions for a given file or file system.
And even Linus weighs in, in his usual understated manner:
"The thing that has always disturbed me about O_DIRECT is that the whole interface is just stupid, and was probably designed by a deranged monkey on some serious mind-controlling substances." —Linus
Good luck!
p.s. Stab in the dark: why not read 512 bytes?
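For what it's worth, here is a sketch of the aligned-read approach (essentially what directio's AlignedBlock does for you); the 4096-byte alignment and read size are assumptions and, as quoted above, the real requirements vary by filesystem and kernel:
package main
import (
	"fmt"
	"syscall"
	"unsafe"
)
const alignSize = 4096 // assumed alignment; filesystem/kernel dependent
// alignedBlock carves an alignSize-aligned slice of length size out of a
// slightly larger allocation.
func alignedBlock(size int) []byte {
	buf := make([]byte, size+alignSize)
	off := alignSize - int(uintptr(unsafe.Pointer(&buf[0]))&uintptr(alignSize-1))
	if off == alignSize {
		off = 0
	}
	return buf[off : off+size]
}
func main() {
	fd, err := syscall.Open("Foo.txt", syscall.O_RDONLY|syscall.O_SYNC|syscall.O_DIRECT, 0)
	if err != nil {
		fmt.Println("Failed on open: ", err)
		return
	}
	defer syscall.Close(fd)
	block := alignedBlock(alignSize)
	n, err := syscall.Read(fd, block)
	// a file shorter than the block still returns fewer bytes than requested
	fmt.Println(n, err)
}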
You can try to use fadvise and madvise, but there is no guarantee. Both will more likely work with larger files/data, because:
Partial pages are deliberately preserved on the expectation that it is better to preserve needed memory than to discard unneeded memory.
See the Linux source code for what actually does something and what doesn't. POSIX_FADV_NOREUSE, for example, doesn't do anything.
http://lxr.free-electrons.com/source/mm/fadvise.c#L62
http://lxr.free-electrons.com/source/mm/madvise.c
package main
import "fmt"
import "os"
import "syscall"
import "golang.org/x/sys/unix"
func main() {
advise := false
if len(os.Args) > 1 && os.Args[1] == "-x" {
fmt.Println("setting file advise")
advise = true
}
data := make([]byte, 100)
handler, err := os.Open("Foo.txt")
if err != nil {
fmt.Println("Failed on open: ", err)
}; defer handler.Close()
if advise {
unix.Fadvise(int(handler.Fd()), 0, 0, 4) // 4 == POSIX_FADV_DONTNEED
}
read, err := handler.Read(data)
if err != nil {
fmt.Println("Failed on read: ", err)
os.Exit(1)
}
if advise {
syscall.Madvise(data, 4) // 4 == MADV_DONTNEED
}
fmt.Printf("read %v bytes\n", read)
}
/usr/bin/time -v ./direct -x
Command being timed: "./direct -x"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1832
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 2
Minor (reclaiming a frame) page faults: 149
Voluntary context switches: 2
Involuntary context switches: 2
Swaps: 0
File system inputs: 200
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

Go: Thread-Safe Concurrency Issue with Sparse Array Read & Write

I'm writing a search engine in Go in which I have an inverted index of words to the corresponding results for each word. There is a set dictionary of words and so the words are already converted into a StemID, which is an integer starting from 0. This allows me to use a slice of pointers (i.e. a sparse array) to map each StemID to the structure which contains the results of that query. E.g. var StemID_to_Index []*resultStruct. If aardvark is 0 then the pointer to the resultStruct for aardvark is located at StemID_to_Index[0], which will be nil if the result for this word is currently not loaded.
There is not enough memory on the server to store all of this in memory, so the structure for each StemID will be saved as separate files and these can be loaded into the StemID_to_Index slice. If StemID_to_Index is currently nil for this StemID then the result is not cached and needs to be loaded, otherwise it's already loaded (cached) and so can be used directly. Each time a new result is loaded the memory usage is checked and if it's over the threshold then 2/3 of the loaded results are thrown away (StemID_to_Index is set to nil for these StemIDs and a garbage collection is forced.)
My problem is the concurrency. What is the fastest and most efficient way in which I can have multiple threads searching at the same time without having problems with different threads trying to read and write to the same place at the same time? I'm trying to avoid using mutexes on everything as that would slow down every single access attempt.
Do you think I would get away with loading the results from disk in the working thread and then delivering the pointer to this structure to an "updater" thread using channels, which then updates the nil value in the StemID_to_Index slice to the pointer of the loaded result? This would mean that two threads would never attempt to write at the same time, but what would happen if another thread tried to read from that exact index of StemID_to_Index while the "updater" thread was updating the pointer? It doesn't matter if a thread is given a nil pointer for a result which is currently being loaded, because it will just be loaded twice; while that is a waste of resources, it would still deliver the same result, and since it is unlikely to happen very often, it's forgivable.
Additionally, how would the working thread which sends the pointer to the "updater" thread know when the "updater" thread has finished updating the pointer in the slice? Should it just sleep and keep checking, or is there an easy way for the updater to send a message back to the specific thread which pushed to the channel?
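To sketch the updater-goroutine idea with a reply channel (illustrative names, building on the resultStruct type from above; note this only answers how the worker learns the slot was written, it does not by itself make concurrent reads of StemID_to_Index safe):
type update struct {
	stemID int
	result *resultStruct
	done   chan struct{} // closed by the updater once the slot is written
}
func updater(index []*resultStruct, updates <-chan update) {
	for u := range updates {
		index[u.stemID] = u.result
		close(u.done)
	}
}
// in a worker goroutine:
//	u := update{stemID: id, result: loaded, done: make(chan struct{})}
//	updates <- u
//	<-u.done // no polling or sleeping needed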
UPDATE
I made a little test script to see what would happen if attempting to access a pointer at the same time as modifying it... it seems to always be OK. No errors. Am I missing something?
package main
import (
"fmt"
"sync"
)
type tester struct {
a uint
}
var things *tester
func updater() {
var a uint
for {
what := new(tester)
what.a = a
things = what
a++
}
}
func test() {
var t *tester
for {
t = things
if t != nil {
if t.a < 0 {
fmt.Println(`Error1`)
}
} else {
fmt.Println(`Error2`)
}
}
}
func main() {
var wg sync.WaitGroup
things = new(tester)
go test()
go test()
go test()
go test()
go test()
go test()
go updater()
go test()
go test()
go test()
go test()
go test()
wg.Add(1)
wg.Wait()
}
UPDATE 2
Taking this further, even if I read and write from multiple threads to the same variable at the same time... it makes no difference, still no errors:
From above:
func test() {
var a uint
var t *tester
for {
t = things
if t != nil {
if t.a < 0 {
fmt.Println(`Error1`)
}
} else {
fmt.Println(`Error2`)
}
what := new(tester)
what.a = a
things = what
a++
}
}
This implies I don't have to worry about concurrency at all... again: am I missing something here?
This sounds like a perfect use case for a memory mapped file:
package main
import (
"log"
"os"
"unsafe"
"github.com/edsrzf/mmap-go"
)
func main() {
// Open the backing file
f, err := os.OpenFile("example.txt", os.O_RDWR|os.O_CREATE, 0644)
if err != nil {
log.Fatalln(err)
}
defer f.Close()
// Set its size
f.Truncate(1024)
// Memory map it
m, err := mmap.Map(f, mmap.RDWR, 0)
if err != nil {
log.Fatalln(err)
}
defer m.Unmap()
// m is a byte slice
copy(m, "Hello World")
m.Flush()
// here's how to use it with a pointer
type Coordinate struct{ X, Y int }
// first get the memory address as a *byte pointer and convert it to an unsafe
// pointer
ptr := unsafe.Pointer(&m[20])
// next convert it into a different pointer type
coord := (*Coordinate)(ptr)
// now you can use it directly
*coord = Coordinate{1, 2}
m.Flush()
// and vice-versa
log.Println(*(*Coordinate)(unsafe.Pointer(&m[20])))
}
The memory map can be larger than real memory and the operating system will handle all the messy details for you.
You will still need to make sure that separate goroutines never read/write to the same segment of memory at the same time.
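For reference, a minimal sketch of doing that with a sync.RWMutex around the StemID_to_Index slice from the question (readers take the read lock, the loader takes the write lock; assumes "sync" is imported and resultStruct is the question's type):
type index struct {
	mu    sync.RWMutex
	slots []*resultStruct // StemID -> cached result, nil if not loaded
}
func (ix *index) get(id int) *resultStruct {
	ix.mu.RLock()
	r := ix.slots[id]
	ix.mu.RUnlock()
	return r
}
func (ix *index) set(id int, r *resultStruct) {
	ix.mu.Lock()
	ix.slots[id] = r
	ix.mu.Unlock()
}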
My top answer would be to use elasticsearch with a client like elastigo.
If that's not an option, it would really help to know how much you care about race-y behavior. If you don't care that a write could happen right after a read finishes, so that the user finishing the read gets stale data, you can just have a queue of write and read operations, have multiple threads feed into that queue, and have one dispatcher issue the operations to the map one at a time as they come in. In all other scenarios, you will need a mutex if there are multiple readers and writers. Maps aren't thread safe in Go.
Honestly though, I would just add a mutex to make things simple for now and optimize by analyzing where your bottlenecks actually lie. Checking a threshold and then purging 2/3 of your cache seems a bit arbitrary, and I wouldn't be surprised if you kill performance by doing something like that. Here's one situation where that would break down:
Requesters 1, 2, 3, and 4 are frequently accessing many of the same words on files A & B.
Requester 5, 6, 7 and 8 are frequently accessing many of the same words stored on files C & D.
Now when requests interleaved between these requesters and files happen in rapid succession, you may end up purging 2/3 of your cache over and over again, throwing away results that may be requested again shortly after. There are a couple of other approaches:
Cache words that are frequently accessed at the same time on the same box and have multiple caching boxes.
Cache on a per-word basis with some sort of ranking of how popular that word is. If a new word is accessed from a file while the cache is full, see if other more popular words live in that file and purge less popular entries in the cache in hopes that those words will have a higher hit rate.
Both approaches 1 & 2.

Parallel processing strings Delphi full available CPU usage

The goal is to achieve full usage of the available cores when converting floats to strings in a single Delphi application. I think this problem applies to string processing in general, but in my example I am specifically using the FloatToStr method.
What I am doing (I've kept this very simple so there is little ambiguity around the implementation):
Using Delphi XE6
Create thread objects which inherit from TThread, and start them.
In the thread Execute procedure, convert a large number of doubles into strings via the FloatToStr method.
To simplify, these doubles are just the same constant, so there is no shared or global memory resource required by the threads.
Although multiple cores are used, the total CPU usage % always maxes out at the equivalent of a single core. I understand this is an established issue, so I have some specific questions.
The same operation could simply be done by multiple app instances and thereby achieve fuller usage of the available CPU. Is it possible to do this effectively within the same executable?
I.e. assign threads different process ids at the OS level, or some equivalent division recognised by the OS? Or is this simply not possible in out-of-the-box Delphi?
On scope:
I know there are different memory managers available & other groups have tried changing some of the lower level asm lock usage http://synopse.info/forum/viewtopic.php?id=57
But, I am asking this question in the scope of not doing things at such a low level.
Thanks
Hi J. My code is deliberately very simple:
TTaskThread = class(TThread)
public
procedure Execute; override;
end;
procedure TTaskThread.Execute;
var
i: integer;
begin
Self.FreeOnTerminate := True;
for i := 0 to 1000000000 do
FloatToStr(i*1.31234);
end;
procedure TfrmMain.Button1Click(Sender: TObject);
var
t1, t2, t3: TTaskThread;
begin
t1 := TTaskThread.Create(True);
t2 := TTaskThread.Create(True);
t3 := TTaskThread.Create(True);
t1.Start;
t2.Start;
t3.Start;
end;
This is test code, where the CPU (via Performance Monitor) maxes out at 25% (I have 4 cores). If the FloatToStr line is swapped for a non-string operation, e.g. Power(i, 2), then Performance Monitor shows the expected 75% usage.
(Yes there are better ways to measure this, but I think this is sufficient for the scope of this question)
I have explored this issue fairly thoroughly. The purpose of the question was to put forth the crux of the issue in a very simple form.
I am asking about limitations when using the FloatToStr method, and whether there is an implementation that permits better usage of the available cores.
Thanks.
I second what everyone else has said in the comments. It is one of the dirty little secrets of Delphi that the FastMM memory manager is not scalable.
Since memory managers can be replaced you can simply replace FastMM with a scalable memory manager. This is a rapidly changing field. New scalable memory managers pop up every few months. The problem is that it is hard to write a correct scalable memory manager. What are you prepared to trust? One thing that can be said in FastMM's favour is that it is robust.
Rather than replacing the memory manager, it is better to remove the need to replace the memory manager. Simply avoid heap allocation. Find a way to do your work without the need for repeated calls to allocate dynamic memory. Even if you had a scalable heap manager, heap allocation would still cost.
Once you decide to avoid heap allocation the next decision is what to use instead of FloatToStr. In my experience the Delphi runtime library does not offer much support. For example, I recently discovered that there is no good way to convert an integer to text using a caller supplied buffer. So, you may need to roll your own conversion functions. As a simple first step to prove the point, try calling sprintf from msvcrt.dll. This will provide a proof of concept.
If you can't change the memory manager (MM) the only thing to do is to avoid using it where MM could be a bottleneck.
As for float to string conversion (Disclamer: I tested the code below with Delphi XE) instead of
procedure Test1;
var
i: integer;
S: string;
begin
for i := 0 to 10 do begin
S:= FloatToStr(i*1.31234);
Writeln(S);
end;
end;
you can use
procedure Test2;
var
i: integer;
S: string;
Value: Extended;
begin
SetLength(S, 64);
for i := 0 to 10 do begin
Value:= i*1.31234;
FillChar(PChar(S)^, 64, 0);
FloatToText(PChar(S), Value, fvExtended, ffGeneral, 15, 0);
Writeln(S);
end;
end;
which produce the same result but does not allocate memory inside the loop.
And take attention
function FloatToStr(Value: Extended): string; overload;
function FloatToStr(Value: Extended; const FormatSettings: TFormatSettings): string; overload;
The first form of FloatToStr is not thread-safe, because it uses localization information contained in global variables. The second form of FloatToStr, which is thread-safe, refers to localization information contained in the FormatSettings parameter. Before calling the thread-safe form of FloatToStr, you must populate FormatSettings with localization information. To populate FormatSettings with a set of default locale values, call GetLocaleFormatSettings.
Many thanks for your knowledge and help so far. As per your suggestions, I've attempted to write an equivalent of the FloatToStr method in a way that avoids heap allocation, with some success. This is by no means a solid, foolproof implementation, just a nice and simple proof of concept which could be extended to achieve a more satisfying solution.
(Should also note using XE6 64-bit)
Experiment result/observations:
the CPU usage % was proportional to the number of threads started
(i.e. each thread = 1 core maxed out via performance monitor).
as expected, with more threads started, performance degraded somewhat for each individual one (i.e. time measured to perform task - see code).
times are just rough averages
8 cores 3.3GHz - 1 thread took 4200ms. 6 threads took 5200ms each.
8 cores 2.5GHz - 1 thread took 4800ms. 2=>4800ms, 4=>5000ms, 6=>6300ms.
I did not calculate the overall time for a total multi-thread run, just observed CPU usage % and measured individual thread times.
Personally I find it a little hilarious that this actually works :) Or perhaps I have done something horribly wrong?
Surely there are library units out there which resolve these things ?
The code:
unit Main;
interface
uses
Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics,
Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls,
Generics.Collections,
DateUtils;
type
TfrmParallel = class(TForm)
Button1: TButton;
Memo1: TMemo;
procedure Button1Click(Sender: TObject);
private
{ Private declarations }
public
{ Public declarations }
end;
TTaskThread = class(TThread)
private
Fl: TList<double>;
public
procedure Add(l: TList<double>);
procedure Execute; override;
end;
var
frmParallel: TfrmParallel;
implementation
{$R *.dfm}
{ TTaskThread }
procedure TTaskThread.Add(l: TList<double>);
begin
Fl := l;
end;
procedure TTaskThread.Execute;
var
i, j: integer;
s, xs: shortstring;
FR: TFloatRec;
V: double;
Precision, D: integer;
ZeroCount: integer;
Start, Finish: TDateTime;
procedure AppendByteToString(var Result: shortstring; const B: Byte);
const
A1 = '1';
A2 = '2';
A3 = '3';
A4 = '4';
A5 = '5';
A6 = '6';
A7 = '7';
A8 = '8';
A9 = '9';
A0 = '0';
begin
if B = 49 then
Result := Result + A1
else if B = 50 then
Result := Result + A2
else if B = 51 then
Result := Result + A3
else if B = 52 then
Result := Result + A4
else if B = 53 then
Result := Result + A5
else if B = 54 then
Result := Result + A6
else if B = 55 then
Result := Result + A7
else if B = 56 then
Result := Result + A8
else if B = 57 then
Result := Result + A9
else
Result := Result + A0;
end;
procedure AppendDP(var Result: shortstring);
begin
Result := Result + '.';
end;
begin
Precision := 9;
D := 1000;
Self.FreeOnTerminate := True;
//
Start := Now;
for i := 0 to Fl.Count - 1 do
begin
V := Fl[i];
// //orignal way - just for testing
// xs := shortstring(FloatToStrF(V, TFloatFormat.ffGeneral, Precision, D));
//1. get float rec
FloatToDecimal(FR, V, TFloatValue.fvExtended, Precision, D);
//2. check sign
if FR.Negative then
s := '-'
else
s := '';
//2. handle negative exponent
if FR.Exponent < 1 then
begin
AppendByteToString(s, 0);
AppendDP(s);
for j := 1 to Abs(FR.Exponent) do
AppendByteToString(s, 0);
end;
//3. count consecutive zeroes
ZeroCount := 0;
for j := Precision - 1 downto 0 do
begin
if (FR.Digits[j] > 48) and (FR.Digits[j] < 58) then
Break;
Inc(ZeroCount);
end;
//4. build string
for j := 0 to Length(FR.Digits) - 1 do
begin
if j = Precision then
Break;
//cut off where there are only zeroes left up to precision
if (j + ZeroCount) = Precision then
Break;
//insert decimal point - for positive exponent
if (FR.Exponent > 0) and (j = FR.Exponent) then
AppendDP(s);
//append next digit
AppendByteToString(s, FR.Digits[j]);
end;
// //use just to test agreement with FloatToStrF
// if s <> xs then
// frmParallel.Memo1.Lines.Add(string(s + '|' + xs));
end;
Fl.Free;
Finish := Now;
//
frmParallel.Memo1.Lines.Add(IntToStr(MillisecondsBetween(Start, Finish)));
//!YES LINE IS NOT THREAD SAFE!
end;
procedure TfrmParallel.Button1Click(Sender: TObject);
var
i: integer;
t: TTaskThread;
l: TList<double>;
begin
//pre generating the doubles is not required, is just a more useful test for me
l := TList<double>.Create;
for i := 0 to 10000000 do
l.Add(Now/(-i-1)); //some double generation
//
t := TTaskThread.Create(True);
t.Add(l);
t.Start;
end;
end.
FastMM4, by default, on thread contention (when one thread cannot acquire access to data locked by another thread) calls the Windows API function Sleep(0), and then, if the lock is still not available, enters a loop calling Sleep(1) after each check of the lock.
Each call to Sleep(0) experiences the expensive cost of a context switch, which can be 10000+ cycles; it also suffers the cost of ring 3 to ring 0 transitions, which can be 1000+ cycles. As about Sleep(1) – besides the costs associated with Sleep(0) – it also delays execution by at least 1 millisecond, ceding control to other threads, and, if there are no threads waiting to be executed by a physical CPU core, puts the core into sleep, effectively reducing CPU usage and power consumption.
That’s why, in your case, CPU use never reached 100% - because of the Sleep(1) issued by FastMM4.
This way of acquiring locks is not optimal.
A better way would have been a spin-lock of about 5000 pause instructions, and, if the lock was still busy, calling SwitchToThread() API call. If pause is not available (on very old processors with no SSE2 support) or SwitchToThread() API call was not available (on very old Windows versions, prior to Windows 2000), the best solution would be to utilize EnterCriticalSection / LeaveCriticalSection, that don’t have latency associated by Sleep(1), and which also very effectively cedes control of the CPU core to other threads.
I have modified FastMM4 to use a new approach to waiting for a lock: CriticalSections instead of Sleep(). With these options, the Sleep() will never be used but EnterCriticalSection / LeaveCriticalSection will be used instead. Testing has shown that the approach of using CriticalSections instead of Sleep (which was used by default before in FastMM4) provides significant gain in situations when the number of threads working with the memory manager is the same or higher than the number of physical cores. The gain is even more evident on computers with multiple physical CPUs and Non-Uniform Memory Access (NUMA). I have implemented compile-time options to take away the original FastMM4 approach of using Sleep(InitialSleepTime) and then Sleep(AdditionalSleepTime) (or Sleep(0) and Sleep(1)) and replace them with EnterCriticalSection / LeaveCriticalSection to save valuable CPU cycles wasted by Sleep(0) and to improve speed (reduce latency) that was affected each time by at least 1 millisecond by Sleep(1), because the Critical Sections are much more CPU-friendly and have definitely lower latency than Sleep(1).
When these options are enabled, FastMM4-AVX checks:
whether the CPU supports SSE2 and thus the "pause" instruction, and
whether the operating system has the SwitchToThread() API call, and,
and in this case it uses a "pause" spin-loop for 5000 iterations and then SwitchToThread() instead of critical sections. If the CPU doesn't have the "pause" instruction or Windows doesn't have the SwitchToThread() API function, it will use EnterCriticalSection / LeaveCriticalSection.
I have made available the fork called FastMM4-AVX at https://github.com/maximmasiutin/FastMM4
Here is a comparison of the original FastMM4 version 4.992, with default options, compiled for Win64 by Delphi 10.2 Tokyo (Release with Optimization), against the current FastMM4-AVX branch. In some scenarios, the FastMM4-AVX branch is more than twice as fast as the original FastMM4. The tests were run on two different computers: one with two Xeon E6-2543v2 CPU sockets, each with 6 physical cores (12 logical threads), with only 5 physical cores per socket enabled for the test application; the other with an i7-7700K CPU.
I used the "Multi-threaded allocate, use and free" and "NexusDB" test cases from the FastCode Challenge Memory Manager test suite, modified to run under 64-bit.
Xeon E6-2543v2 2*CPU i7-7700K CPU
(allocated 20 logical (allocated 8 logical
threads, 10 physical threads, 4 physical
cores, NUMA) cores)
Orig. AVX-br. Ratio Orig. AVX-br. Ratio
------ ----- ------ ----- ----- ------
02-threads realloc 96552 59951 62.09% 65213 49471 75.86%
04-threads realloc 97998 39494 40.30% 64402 47714 74.09%
08-threads realloc 98325 33743 34.32% 64796 58754 90.68%
16-threads realloc 116708 45855 39.29% 71457 60173 84.21%
16-threads realloc 116273 45161 38.84% 70722 60293 85.25%
31-threads realloc 122528 53616 43.76% 70939 62962 88.76%
64-threads realloc 137661 54330 39.47% 73696 64824 87.96%
NexusDB 02 threads 122846 90380 73.72% 79479 66153 83.23%
NexusDB 04 threads 122131 53103 43.77% 69183 43001 62.16%
NexusDB 08 threads 124419 40914 32.88% 64977 33609 51.72%
NexusDB 12 threads 181239 55818 30.80% 83983 44658 53.18%
NexusDB 16 threads 135211 62044 43.61% 59917 32463 54.18%
NexusDB 31 threads 134815 48132 33.46% 54686 31184 57.02%
NexusDB 64 threads 187094 57672 30.25% 63089 41955 66.50%
Your code that calls FloatToStr is OK, since it allocates a result string using the memory manager, then reallocates it, etc. Even better idea would have been to explicitly deallocate it, for example:
procedure TTaskThread.Execute;
var
i: integer;
s: string;
begin
for i := 0 to 1000000000 do
begin
s := FloatToStr(i*1.31234);
Finalize(s);
end;
end;
You can find better tests of the memory manager in the FastCode challenge test suite at https://github.com/maximmasiutin/FastCodeBenchmark
Also, please note that reference counters in Delphi strings use locking operations, which are inherently slow. For example, on an Intel 2400MHz processor with Tiger Lake microarchitecture (released in October 2020), LOCK ADD is about 18 CPU cycles (7.5ns), while a non-locked simple ADD is about 0.75 CPU cycles (0.3ns). If your code ensures that the strings are not assigned and modified from different threads, then you may not need this locking. One of the approaches to ensure that a string with multiple references is not manipulated from different threads is to call UniqueString() before such use. Therefore, to improve speed, you may modify System.pas to remove the LOCK prefix from the assembly instructions that operate on the string reference counters. For example, instead of
LOCK INC [EDX-skew].StrRec.refCnt
there will be
INC [EDX-skew].StrRec.refCnt
However, compiling and using your own, custom version of System.pas may not be an easy task. You can find more information about reference counter locking in Delphi strings in a separate answer.

Why FFTW on Windows is faster than on Linux?

I wrote two identical programs on Linux and Windows using the FFTW libraries (fftw3.a, fftw3.lib) and measured the duration of the fftwf_execute(m_wfpFFTplan) statement (a 16-point FFT).
For 10000 runs:
On Linux: average time is 0.9
On Windows: average time is 0.12
I am confused as to why this is nine times faster on Windows than on Linux.
Processor: Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz
Both OSes (Windows XP 32-bit and Linux openSUSE 11.4 32-bit) are installed on the same machine.
I downloaded the fftw.lib (for Windows) from the internet and don't know its configuration. I then built FFTW on Linux with this config:
./configure --enable-float --enable-threads --with-combined-threads --disable-fortran --with-slow-timer --enable-sse --enable-sse2 --enable-avx
and it results in a lib that is four times faster than the default config (0.4 ms).
A 16-point FFT is very small. What you will find is that FFTs smaller than, say, 64 points will be hard-coded assembler with no loops, to get the highest possible performance. This means they can be highly susceptible to variations in instruction sets, compiler optimisations, and even 64- or 32-bit words.
What happens when you run a test of FFT sizes from 16 -> 1048576 in powers of 2? I say this as a particular hard-coded asm routine on Linux might not be the best optimized for your machine, whereas you might have been lucky on the Windows implementation for that particular size. A comparison of all sizes in this range will give you a better indication of the Linux vs. Windows performance.
Have you calibrated FFTW? When first run, FFTW guesses the fastest implementation per machine; however, if you have special instruction sets, a particular cache size, or other processor features, these can have a dramatic effect on execution speed. As a result, performing a calibration will test the speed of various FFT routines and choose the fastest per size for your specific hardware. Calibration involves repeatedly computing the plans and saving the FFTW "Wisdom" file generated. The saved calibration data (this is a lengthy process) can then be re-used. I suggest doing it once when your software starts up and re-using the file each time. I have noticed 4-10x performance improvements for certain sizes after calibrating!
Below is a snippet of code I have used to calibrate FFTW for certain sizes. Please note this code is pasted verbatim from a DSP library I worked on so some function calls are specific to my library. I hope the FFTW specific calls are helpful.
// Calibration FFTW
void DSP::forceCalibration(void)
{
// Try to import FFTw Wisdom for fast plan creation
FILE *fftw_wisdom = fopen("DSPDLL.ftw", "r");
// If wisdom does not exist, ask user to calibrate
if (fftw_wisdom == 0)
{
int iStatus2 = AfxMessageBox("FFTw not calibrated on this machine."\
"Would you like to perform a one-time calibration?\n\n"\
"Note:\tMay take 40 minutes (on P4 3GHz), but speeds all subsequent FFT-based filtering & convolution by up to 100%.\n"\
"\tResults are saved to disk (DSPDLL.ftw) and need only be performed once per machine.\n\n"\
"\tMAKE SURE YOU REALLY WANT TO DO THIS, THERE IS NO WAY TO CANCEL CALIBRATION PART-WAY!",
MB_YESNO | MB_ICONSTOP, 0);
if (iStatus2 == IDYES)
{
// Perform calibration for all powers of 2 from 8 to 4194304
// (most heavily used FFTs - for signal processing)
AfxMessageBox("About to perform calibration.\n"\
"Close all programs, turn off your screensaver and do not move the mouse in this time!\n"\
"Note:\tThis program will appear to be unresponsive until the calibration ends.\n\n"
"\tA MESSAGEBOX WILL BE SHOWN ONCE THE CALIBRATION IS COMPLETE.\n");
startTimer();
// Create a whole load of FFTw Plans (wisdom accumulates automatically)
for (int i = 8; i <= 4194304; i *= 2)
{
// Create new buffers and fill
DSP::cFFTin = new fftw_complex[i];
DSP::cFFTout = new fftw_complex[i];
DSP::fconv_FULL_Real_FFT_rdat = new double[i];
DSP::fconv_FULL_Real_FFT_cdat = new fftw_complex[(i/2)+1];
for(int j = 0; j < i; j++)
{
DSP::fconv_FULL_Real_FFT_rdat[j] = j;
DSP::cFFTin[j][0] = j;
DSP::cFFTin[j][1] = j;
DSP::cFFTout[j][0] = 0.0;
DSP::cFFTout[j][1] = 0.0;
}
// Create a plan for complex FFT.
// Use the measure flag to get the best possible FFT for this size
// FFTw "remembers" which FFTs were the fastest during this test.
// at the end of the test, the results are saved to disk and re-used
// upon every initialisation of the DSP Library
DSP::pCF = fftw_plan_dft_1d
(i, DSP::cFFTin, DSP::cFFTout, FFTW_FORWARD, FFTW_MEASURE);
// Destroy the plan
fftw_destroy_plan(DSP::pCF);
// Create a plan for real forward FFT
DSP::pCF = fftw_plan_dft_r2c_1d
(i, fconv_FULL_Real_FFT_rdat, fconv_FULL_Real_FFT_cdat, FFTW_MEASURE);
// Destroy the plan
fftw_destroy_plan(DSP::pCF);
// Create a plan for real inverse FFT
DSP::pCF = fftw_plan_dft_c2r_1d
(i, fconv_FULL_Real_FFT_cdat, fconv_FULL_Real_FFT_rdat, FFTW_MEASURE);
// Destroy the plan
fftw_destroy_plan(DSP::pCF);
// Destroy the buffers. Repeat for each size
delete [] DSP::cFFTin;
delete [] DSP::cFFTout;
delete [] DSP::fconv_FULL_Real_FFT_rdat;
delete [] DSP::fconv_FULL_Real_FFT_cdat;
}
double time = stopTimer();
char * strOutput;
strOutput = (char*) malloc (100);
sprintf(strOutput, "DSP.DLL Calibration complete in %d minutes, %d seconds\n"\
"Please keep a copy of the DSPDLL.ftw file in the root directory of your application\n"\
"to avoid re-calibration in the future\n", (int)time/(int)60, (int)time%(int)60);
AfxMessageBox(strOutput);
isCalibrated = 1;
// Save accumulated wisdom
char * strWisdom = fftw_export_wisdom_to_string();
FILE *fftw_wisdomsave = fopen("DSPDLL.ftw", "w");
fprintf(fftw_wisdomsave, "%s", strWisdom);
fclose(fftw_wisdomsave);
DSP::pCF = NULL;
DSP::cFFTin = NULL;
DSP::cFFTout = NULL;
fconv_FULL_Real_FFT_cdat = NULL;
fconv_FULL_Real_FFT_rdat = NULL;
free(strOutput);
}
}
else
{
// obtain file size.
fseek (fftw_wisdom , 0 , SEEK_END);
long lSize = ftell (fftw_wisdom);
rewind (fftw_wisdom);
// allocate memory to contain the whole file.
char * strWisdom = (char*) malloc (lSize);
// copy the file into the buffer.
fread (strWisdom,1,lSize,fftw_wisdom);
// import the buffer to fftw wisdom
fftw_import_wisdom_from_string(strWisdom);
fclose(fftw_wisdom);
free(strWisdom);
isCalibrated = 1;
return;
}
}
The secret sauce is to create the plan using the FFTW_MEASURE flag, which specifically measures hundreds of routines to find the fastest for your particular type of FFT (real, complex, 1D, 2D) and size:
DSP::pCF = fftw_plan_dft_1d (i, DSP::cFFTin, DSP::cFFTout,
FFTW_FORWARD, FFTW_MEASURE);
Finally, all benchmark tests should also be performed with a single FFT Plan stage outside of execute, called from code that is compiled in release mode with optimizations on and detached from the debugger. Benchmarks should be performed in a loop with many thousands (or even millions) of iterations and then take the average run time to compute the result. As you probably know the planning stage takes a significant amount of time and the execute is designed to be performed multiple times with a single plan.
