I have a structure where I store all the hours and minutes in MongoDB. When I get a request to modify a value, I receive the hour and minute as strings. Is there a way to find the field name from the string that is given as input?
You can see it here
package main

import "fmt"

type Min struct {
    v01 int `bson:"01" json:"01"`
    v02 int `bson:"02" json:"02"`
}

type Hour struct {
    v01 Min `bson:"01" json:"01"`
    v02 Min `bson:"02" json:"02"`
}

func main() {
    fmt.Println("Hello, playground")
    var h Hour
    h.v01.v01 = 1
    h.v02.v01 = 2
    fmt.Println(h)
    h.Set("01", "01", 10)
    fmt.Println(h)
}

func (h *Hour) Set(hour string, min string, value int) {
    h.v01.v01 = 10 // Here I have hardcoded it.
    // Is there a way to do this from the given input?
    // e.g. h.Set("01", "01", 100)
}
If you notice, the input is "01", "01". I would like to turn that input into the field access h.v01.v01. Is that possible in Go?
Note: I am currently using maps for this. I would like to change this to struct access, if possible, so that I can use goroutines to speed up my program. Currently, maps are not safe for concurrent writes from multiple goroutines.
Currently, maps are not safe for concurrent writes from multiple goroutines.
That might need to be revisited with Go 1.9 (August 2017) and the concurrent map type in the sync package:
The new Map type in the sync package is a concurrent map with amortized-constant-time loads, stores, and deletes.
It is safe for multiple goroutines to call a Map's methods concurrently.
Combine that with the oleiade/reflections package that I mentioned, as commented, in "GoLang: Access struct property by name", and you should be able to do both: access a property by name and keep using maps.
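For the "access property by name" part, here is a rough sketch using only the standard reflect package (rather than oleiade/reflections). The fields are renamed to exported V01/V02, which reflection requires in order to set them, so treat the names as illustrative:
package main

import (
    "fmt"
    "reflect"
)

type Min struct {
    V01 int `bson:"01" json:"01"`
    V02 int `bson:"02" json:"02"`
}

type Hour struct {
    V01 Min `bson:"01" json:"01"`
    V02 Min `bson:"02" json:"02"`
}

// Set looks up the hour and minute fields by name at runtime.
func (h *Hour) Set(hour, min string, value int) error {
    hv := reflect.ValueOf(h).Elem().FieldByName("V" + hour)
    if !hv.IsValid() {
        return fmt.Errorf("no hour field %q", hour)
    }
    mv := hv.FieldByName("V" + min)
    if !mv.IsValid() {
        return fmt.Errorf("no minute field %q", min)
    }
    mv.SetInt(int64(value))
    return nil
}

func main() {
    var h Hour
    if err := h.Set("01", "01", 10); err != nil {
        fmt.Println(err)
    }
    fmt.Println(h) // {{10 0} {0 0}}
}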
Related
Is assigning a pointer atomic in Go?
Do I need to assign a pointer in a lock? Suppose I just want to assign the pointer to nil, and would like other threads to be able to see it. I know in Java we can use volatile for this, but there is no volatile in Go.
The only things which are guaranteed to be atomic in Go are the operations in the sync/atomic package.
So if you want to be certain, you'll either need to take a lock, e.g. sync.Mutex, or use one of the atomic primitives. I don't recommend using the atomic primitives, though, as you'll have to use them everywhere you use the pointer, and they are difficult to get right.
Using a mutex is perfectly good Go style - you could define a function to return the current pointer with locking very easily, e.g. something like:
import "sync"
var secretPointer *int
var pointerLock sync.Mutex
func CurrentPointer() *int {
pointerLock.Lock()
defer pointerLock.Unlock()
return secretPointer
}
func SetPointer(p *int) {
pointerLock.Lock()
secretPointer = p
pointerLock.Unlock()
}
These functions return a copy of the pointer to their clients which will stay constant even if the master pointer is changed. This may or may not be acceptable depending on how time critical your requirement is. It should be enough to avoid any undefined behaviour - the garbage collector will ensure that the pointers remain valid at all times even if the memory pointed to is no longer used by your program.
An alternative approach would be to only do the pointer access from one goroutine and use channels to command that goroutine into doing things. That would be considered more idiomatic Go, but may not suit your application exactly.
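For illustration, here is a minimal sketch of that channel-based approach (the type and channel names are mine, not from the answer): a single goroutine owns the pointer and serves gets and sets, so no lock is needed.
package main

import "fmt"

type getReq struct{ reply chan *int }

// ownPointer is the only goroutine that ever touches the pointer.
func ownPointer(sets <-chan *int, gets <-chan getReq) {
    var current *int
    for {
        select {
        case p := <-sets:
            current = p
        case g := <-gets:
            g.reply <- current
        }
    }
}

func main() {
    sets := make(chan *int)
    gets := make(chan getReq)
    go ownPointer(sets, gets)

    v := 42
    sets <- &v // "SetPointer"

    r := getReq{reply: make(chan *int)} // "CurrentPointer"
    gets <- r
    p := <-r.reply
    fmt.Println(*p) // 42
}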
Update
Here is an example showing how to use atomic.StorePointer. It is rather ugly due to the use of unsafe.Pointer. However, unsafe.Pointer casts compile to nothing, so the runtime cost is small.
package main

import (
    "fmt"
    "sync/atomic"
    "unsafe"
)

type Struct struct {
    p unsafe.Pointer // some pointer
}

func main() {
    data := 1
    info := Struct{p: unsafe.Pointer(&data)}
    fmt.Printf("info is %d\n", *(*int)(info.p))

    otherData := 2
    atomic.StorePointer(&info.p, unsafe.Pointer(&otherData))
    fmt.Printf("info is %d\n", *(*int)(info.p))
}
Since the spec doesn't specify, you should assume it is not. Even if it is currently atomic, it's possible that it could change without ever violating the spec.
In addition to Nick's answer, since Go 1.4 there is the atomic.Value type. Its Store(interface{}) and Load() interface{} methods take care of the unsafe.Pointer conversion.
Simple example:
package main

import (
    "sync/atomic"
)

type stats struct{}

type myType struct {
    stats atomic.Value
}

func main() {
    var t myType
    s := new(stats)
    t.stats.Store(s)
    s = t.stats.Load().(*stats)
}
Or a more extended example from the documentation on the Go playground.
Since Go 1.19, an atomic.Pointer type has been added to the sync/atomic package:
The sync/atomic package defines new atomic types Bool, Int32, Int64, Uint32, Uint64, Uintptr, and Pointer. These types hide the underlying values so that all accesses are forced to use the atomic APIs. Pointer also avoids the need to convert to unsafe.Pointer at call sites. Int64 and Uint64 are automatically aligned to 64-bit boundaries in structs and allocated data, even on 32-bit systems.
Sample
package main

import (
    "fmt"
    "net"
    "sync/atomic"
    "time"
)

type ServerConn struct {
    Connection net.Conn
    ID         string
}

func ShowConnection(p *atomic.Pointer[ServerConn]) {
    for {
        time.Sleep(10 * time.Second)
        fmt.Println(p, p.Load())
    }
}

func main() {
    c := make(chan bool)
    p := atomic.Pointer[ServerConn]{}
    s := ServerConn{ID: "first_conn"}
    p.Store(&s)

    go ShowConnection(&p)
    go func() {
        for {
            time.Sleep(13 * time.Second)
            newConn := ServerConn{ID: "new_conn"}
            p.Swap(&newConn)
        }
    }()
    <-c // block forever
}
Please note that atomicity has nothing to do with "I just want to assign the pointer to nil, and would like other threads to be able to see it". The latter property is called visibility.
The answer to the former, as of right now, is yes: assigning (loading/storing) a pointer is atomic in Go. This follows from the updated Go memory model:
Otherwise, a read r of a memory location x that is not larger than a machine word must observe some write w such that r does not happen before w and there is no write w' such that w happens before w' and w' happens before r. That is, each read must observe a value written by a preceding or concurrent write.
Regarding visibility, the question does not have enough information to be answered concretely. If you merely want to know whether you can dereference the pointer safely, then a plain load/store would be enough. However, the most likely case is that you want to communicate some information based on the nullness of the pointer. This requires sync/atomic, which provides synchronisation capabilities.
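To make the distinction concrete, here is a small sketch (assuming Go 1.19+ for atomic.Pointer; the variable names are mine) where a nil store in one goroutine is guaranteed to become visible to a reader in another goroutine:
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

func main() {
    var p atomic.Pointer[int]
    v := 42
    p.Store(&v)

    // Writer: clear the pointer after a moment.
    go func() {
        time.Sleep(10 * time.Millisecond)
        p.Store(nil) // atomic store, visible to other goroutines
    }()

    // Reader: poll until the nil store becomes visible.
    for p.Load() != nil {
        time.Sleep(time.Millisecond)
    }
    fmt.Println("pointer is now nil")
}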
What is the "right" way to stuff an arbitrary, odd sized struct into a swift 3 Data object ?
I think that I have got there, but it seems horribly convoluted for what from prior experience was no than
dataObject.append(&structInstance, sizeof(structInstance))
My case is as follows:
The structure of interest:
public struct CutEntry {
    var itemA : UInt64
    var itemB : UInt32
}
I have an array of these things that I want to stuff into a data object, in a specific manner as the data object becomes a file which is eventually read by a different application on a different architecture.
The function to put them into a Data object
open func encodeCutsData() -> Data
{
    var data = Data()
    for entry in cutsArray
    {
        // bigendian stuff, as a var, just so that you can get the address
        var entryCopy = CutEntry(itemA: entry.itemA.bigEndian, itemB: entry.itemB.bigEndian)
        // step 1: get the address of the item as an UnsafePointer
        let d2 = withUnsafePointer(to: &entryCopy) { return $0 }
        // step 2: cast it to a raw pointer
        let d3 = UnsafeRawPointer(d2)
        // step 3: create a temp data object
        let d4 = Data(bytes: d3, count: MemoryLayout<CutEntry>.size)
        // step 4: add the temp to the main data object
        data.append(d4)
    }
    return data
}
Earlier, when we only had NSMutableData, it was:
let item = NSMutableData()
for entry in cutsArray
{
    var entryCopy = CutEntry(cutPts: entry.cutPts.bigEndian, cutType: entry.cutType.bigEndian)
    item.append(&entryCopy, length: MemoryLayout<CutEntry>.size)
}
I've spent a few hours searching for examples of manipulating struct and Data objects. I thought that I was close when I found references to UnsafeBufferPointer. That blew up in my face when I discovered that the "buffer" bit uses core memory alignment (which can be useful), and it was stuffing 16 bytes into the data object instead of the expected 12.
I am quite prepared to say that I have missed the blindingly obvious bit of RTFM somewhere. Can anyone offer a cleaner solution? Or has Swift really gone backwards here?
If I could find a way of getting a pointer to the item as a UInt8 pointer, that would remove a couple of lines, but that looks just as difficult.
Checking the reference for Data, I found two things which may be useful for you:
init(bytes: UnsafeRawPointer, count: Int)
func append(Data)
You can write something like this:
var data = Data()
for entry in cutsArray {
    var entryCopy = CutEntry(cutPts: entry.cutPts.bigEndian, cutType: entry.cutType.bigEndian)
    data.append(Data(bytes: &entryCopy, count: MemoryLayout<CutEntry>.size))
}
In Go, let's say I have this struct:
type Job struct {
    totalTime        int
    timeToCompletion int
}
and I initialize a struct object like:
j := Job{totalTime: 10, timeToCompletion: 10}
where the constraint is that timeToCompletion is always equal to totalTime when the struct is created (they can change later). Is there a way to achieve this in Go so that I don't have to initialize both fields?
You can't avoid specifying the value twice in the composite literal itself, but an idiomatic way is to create a constructor-like creator function for it:
func NewJob(time int) Job {
    return Job{totalTime: time, timeToCompletion: time}
}
And using it you only have to specify the time value once when passing it to our NewJob() function:
j := NewJob(10)
I'm writing a search engine in Go in which I have an inverted index of words to the corresponding results for each word. There is a set dictionary of words and so the words are already converted into a StemID, which is an integer starting from 0. This allows me to use a slice of pointers (i.e. a sparse array) to map each StemID to the structure which contains the results of that query. E.g. var StemID_to_Index []*resultStruct. If aardvark is 0 then the pointer to the resultStruct for aardvark is located at StemID_to_Index[0], which will be nil if the result for this word is currently not loaded.
There is not enough memory on the server to store all of this in memory, so the structure for each StemID will be saved as separate files and these can be loaded into the StemID_to_Index slice. If StemID_to_Index is currently nil for this StemID then the result is not cached and needs to be loaded, otherwise it's already loaded (cached) and so can be used directly. Each time a new result is loaded the memory usage is checked and if it's over the threshold then 2/3 of the loaded results are thrown away (StemID_to_Index is set to nil for these StemIDs and a garbage collection is forced.)
My problem is the concurrency. What is the fastest and most efficient way in which I can have multiple threads searching at the same time without having problems with different threads trying to read and write to the same place at the same time? I'm trying to avoid using mutexes on everything as that would slow down every single access attempt.
Do you think I would get away with loading the results from disk in the working thread and then delivering the pointer to this structure to an "updater" thread using channels, which then updates the nil value in the StemID_to_Index slice to the pointer of the loaded result? This would mean that two threads would never attempt to write at the same time, but what would happen if another thread tried to read from that exact index of StemID_to_Index while the "updater" thread was updating the pointer? It doesn't matter if a thread is given a nil pointer for a result which is currently being loaded, because it will just be loaded twice and while that is a waste of resources it would still deliver the same result and since that is unlikely to happen very often, it's forgiveable.
Additionally, how would the working thread which sends the pointer to the "updater" thread know when the "updater" thread has finished updating the pointer in the slice? Should it just sleep and keep checking, or is there an easy way for the updater to send a message back to the specific thread which pushed to the channel?
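For illustration only, here is a minimal sketch of that updater-goroutine idea (the type and field names are hypothetical): each request carries a done channel, which the updater closes once the slice slot has been written, so the worker knows exactly when the update has landed. Note that any goroutine reading the slice without going through this synchronisation is still a data race, which the race detector (go run -race) will flag.
package main

import "fmt"

type resultStruct struct{ hits []int }

type update struct {
    stemID int
    result *resultStruct
    done   chan struct{} // closed by the updater when the slot is written
}

// updater is the only goroutine that writes StemID_to_Index.
func updater(index []*resultStruct, updates <-chan update) {
    for u := range updates {
        index[u.stemID] = u.result
        close(u.done) // acknowledge to the worker
    }
}

func main() {
    StemID_to_Index := make([]*resultStruct, 100)
    updates := make(chan update)
    go updater(StemID_to_Index, updates)

    // A worker loads a result from disk, then hands it to the updater.
    u := update{stemID: 0, result: &resultStruct{hits: []int{1, 2, 3}}, done: make(chan struct{})}
    updates <- u
    <-u.done // wait for the acknowledgement

    fmt.Println(StemID_to_Index[0].hits) // [1 2 3]
}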
UPDATE
I made a little test script to see what would happen if attempting to access a pointer at the same time as modifying it... it seems to always be OK. No errors. Am I missing something?
package main

import (
    "fmt"
    "sync"
)

type tester struct {
    a uint
}

var things *tester

func updater() {
    var a uint
    for {
        what := new(tester)
        what.a = a
        things = what
        a++
    }
}

func test() {
    var t *tester
    for {
        t = things
        if t != nil {
            if t.a < 0 {
                fmt.Println(`Error1`)
            }
        } else {
            fmt.Println(`Error2`)
        }
    }
}

func main() {
    var wg sync.WaitGroup
    things = new(tester)
    go test()
    go test()
    go test()
    go test()
    go test()
    go test()
    go updater()
    go test()
    go test()
    go test()
    go test()
    go test()
    wg.Add(1)
    wg.Wait()
}
UPDATE 2
Taking this further, even if I read and write from multiple threads to the same variable at the same time... it makes no difference, still no errors:
From above:
func test() {
    var a uint
    var t *tester
    for {
        t = things
        if t != nil {
            if t.a < 0 {
                fmt.Println(`Error1`)
            }
        } else {
            fmt.Println(`Error2`)
        }
        what := new(tester)
        what.a = a
        things = what
        a++
    }
}
This implies I don't have to worry about concurrency at all... again: am I missing something here?
This sounds like a perfect use case for a memory mapped file:
package main

import (
    "log"
    "os"
    "unsafe"

    "github.com/edsrzf/mmap-go"
)

func main() {
    // Open the backing file
    f, err := os.OpenFile("example.txt", os.O_RDWR|os.O_CREATE, 0644)
    if err != nil {
        log.Fatalln(err)
    }
    defer f.Close()

    // Set its size
    f.Truncate(1024)

    // Memory map it
    m, err := mmap.Map(f, mmap.RDWR, 0)
    if err != nil {
        log.Fatalln(err)
    }
    defer m.Unmap()

    // m is a byte slice
    copy(m, "Hello World")
    m.Flush()

    // here's how to use it with a pointer
    type Coordinate struct{ X, Y int }

    // first get the memory address as a *byte pointer and convert it to an
    // unsafe pointer
    ptr := unsafe.Pointer(&m[20])

    // next convert it into a different pointer type
    coord := (*Coordinate)(ptr)

    // now you can use it directly
    *coord = Coordinate{1, 2}
    m.Flush()

    // and vice versa
    log.Println(*(*Coordinate)(unsafe.Pointer(&m[20])))
}
The memory map can be larger than real memory and the operating system will handle all the messy details for you.
You will still need to make sure that separate goroutines never read/write to the same segment of memory at the same time.
My top answer would be to use elasticsearch with a client like elastigo.
If that's not an option, it would really help to know how much you care about racy behavior. If you don't care, a write could happen right after a read finishes and the user finishing the read will get stale data. In that case you can just have a queue of write and read operations, have multiple threads feed into that queue, and have one dispatcher issue the operations to the map one at a time as they come in. In all other scenarios, you will need a mutex if there are multiple readers and writers. Maps aren't thread safe in Go.
Honestly though, I would just add a mutex to make things simple for now (see the sketch after the list below) and optimize by analyzing where your bottlenecks actually lie. It seems like checking a threshold and then purging 2/3 of your cache is a bit arbitrary, and I wouldn't be surprised if you kill performance by doing something like that. Here's one situation where that would break down:
Requesters 1, 2, 3, and 4 are frequently accessing many of the same words on files A & B.
Requester 5, 6, 7 and 8 are frequently accessing many of the same words stored on files C & D.
Now, when requests from these requesters and files interleave in rapid succession, you may end up purging 2/3 of your cache over and over again, throwing away results that will be requested again shortly after. There are a couple of other approaches:
Cache words that are frequently accessed at the same time on the same box and have multiple caching boxes.
Cache on a per-word basis with some sort of ranking of how popular that word is. If a new word is accessed from a file while the cache is full, see if other more popular words live in that file and purge less popular entries in the cache in hopes that those words will have a higher hit rate.
Both approaches 1 & 2.
I'm having a look at Go, which looks quite promising.
I am trying to figure out how to get the size of a Go struct, for example something like
type Coord3d struct {
    X, Y, Z int64
}
Of course I know that it's 24 bytes, but I'd like to know it programmatically.
Do you have any ideas how to do this ?
Roger already showed how to use the Sizeof function from the unsafe package. Make sure you read this before relying on the value it returns:
The size does not include any memory possibly referenced by x. For
instance, if x is a slice, Sizeof returns the size of the slice
descriptor, not the size of the memory referenced by the slice.
In addition to this I wanted to explain how you can easily calculate the size of any struct using a couple of simple rules. And then how to verify your intuition using a helpful service.
The size depends on the types it consists of and on the order of the fields in the struct (because different padding will be used). This means that two structs with the same fields can have different sizes.
For example, on a 64-bit platform this struct will have a size of 32 bytes:
struct {
    a bool
    b string
    c bool
}
and a slight modification will have a size of 24 (a 25% difference just due to a more compact ordering of fields)
struct {
    a bool
    c bool
    b string
}
In the second example we removed one run of padding by moving a field to take advantage of the padding that was already there. An alignment can be 1, 2, 4, or 8 bytes. Padding is the space inserted so that the next field starts at its required alignment (basically wasted space).
Knowing this rule and remembering that (on a 64-bit platform):
bool, int8/uint8 take 1 byte
int16, uint16 - 2 bytes
int32, uint32, float32 - 4 bytes
int64, uint64, float64, pointer - 8 bytes
string - 16 bytes (2 words of 8 bytes: a pointer and a length)
any slice takes 24 bytes (3 words of 8 bytes: a pointer, a length, and a capacity). So []bool, [][][]string are the same size (do not forget to reread the citation I added in the beginning)
an array of length n takes n times the size of its element type
Armed with the knowledge of padding, alignment and sizes in bytes, you can quickly figure out how to improve your struct (but still it makes sense to verify your intuition using the service).
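If the linked service isn't handy, you can also verify the numbers programmatically. Here is a small sketch (the struct names are just for illustration) using unsafe.Sizeof, unsafe.Offsetof, and unsafe.Alignof:
package main

import (
    "fmt"
    "unsafe"
)

type loose struct {
    a bool
    b string
    c bool
}

type packed struct {
    a bool
    c bool
    b string
}

func main() {
    var l loose
    var p packed

    fmt.Println(unsafe.Sizeof(l)) // 32 on a 64-bit platform
    fmt.Println(unsafe.Sizeof(p)) // 24 on a 64-bit platform

    // Offsetof shows where the padding went.
    fmt.Println(unsafe.Offsetof(l.b)) // 8: seven bytes of padding after a
    fmt.Println(unsafe.Offsetof(p.b)) // 8: a and c share the space before b

    fmt.Println(unsafe.Alignof(l.b)) // 8: a string must start on an 8-byte boundary
}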
import unsafe "unsafe"
/* Structure describing an inotify event. */
type INotifyInfo struct {
Wd int32 // Watch descriptor
Mask uint32 // Watch mask
Cookie uint32 // Cookie to synchronize two events
Len uint32 // Length (including NULs) of name
}
func doSomething() {
var info INotifyInfo
const infoSize = unsafe.Sizeof(info)
...
}
NOTE: The OP is mistaken. The unsafe.Sizeof does return 24 on the example Coord3d struct. See comment below.
binary.TotalSize is also an option, but note there's a slight difference in behavior between that and unsafe.Sizeof: binary.TotalSize includes the size of the contents of slices, while unsafe.Sizeof only returns the size of the top-level descriptor. (Note that binary.TotalSize was later replaced by binary.Size, which takes an interface{} instead of a reflect.Value, so this example only builds on old Go releases.) Here's an example of how to use TotalSize.
package main

import (
    "encoding/binary"
    "fmt"
    "reflect"
)

type T struct {
    a uint32
    b int8
}

func main() {
    var t T
    r := reflect.ValueOf(t)
    s := binary.TotalSize(r)
    fmt.Println(s)
}
This is subject to change, but last I looked there is an outstanding compiler bug (bug260.go) related to structure alignment. The end result is that packing a structure might not give the expected results. That was for the 6g compiler, version 5383, release.2010-04-27. It may not be affecting your results, but it's something to be aware of.
UPDATE: The only bug left in go test suite is bug260.go, mentioned above, as of release 2010-05-04.
Hotei
In order not to incur the overhead of initializing a structure, it would be faster to use a pointer to Coord3d:
package main

import (
    "fmt"
    "unsafe"
)

type Coord3d struct {
    X, Y, Z int64
}

func main() {
    var dummy *Coord3d
    fmt.Printf("sizeof(Coord3d) = %d\n", unsafe.Sizeof(*dummy))
}
import (
    "bytes"
    "encoding/gob"
)

// getRealSizeOf returns the gob-encoded size of an object in bytes.
// Note: this is the serialized size, not the in-memory size reported
// by unsafe.Sizeof.
func getRealSizeOf(v interface{}) (int, error) {
    b := new(bytes.Buffer)
    if err := gob.NewEncoder(b).Encode(v); err != nil {
        return 0, err
    }
    return b.Len(), nil
}