How to make a shared-state object thread safe - multithreading

I am dealing with a class implementing shared-data copy semantics, so that a copy of an object internally shares the same data as the original object.
This class is essentially a wrapper around a std::shared_ptr with some convenience accessors. A stripped-down version would be:
class ControlParameters {
    using ControlsMap = std::map<std::string, float>;

public:
    ControlParameters() : controls_(std::make_shared<ControlsMap>()) {}
    ControlParameters(const ControlParameters& other) = default;
    ControlParameters& operator=(const ControlParameters& other) = default;
    ControlParameters(ControlParameters&& other) = delete;
    ControlParameters& operator=(ControlParameters&& other) = delete;

    void setValue(float value, const std::string& key) const
    {
        auto& currentValue = (*controls_)[key];
        if (currentValue != value)
            currentValue = value;
    }

    float operator[](const std::string& key) const
    {
        return (*controls_)[key];
    }

private:
    std::shared_ptr<ControlsMap> controls_;
};
Now I realise I need to make the class thread-safe as well, to ensure atomic reads and writes. Note that concurrent access here means access from different threads through different instances that share the same underlying data. How can I go about it?
Note: I am aware of C++17's std::shared_mutex, which could serve my purpose, but I cannot use C++17 features yet.
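For reference, one pre-C++17 approach (a minimal sketch, not necessarily the solution the asker ended up with) is to bundle a std::mutex with the map inside the shared state, so every instance that shares the data also shares the lock; the SharedState name is illustrative:
#include <map>
#include <memory>
#include <mutex>
#include <string>

class ControlParameters {
    // The map and its mutex live together, so all copies share one lock.
    struct SharedState {
        std::map<std::string, float> controls;
        std::mutex mutex;
    };

public:
    ControlParameters() : state_(std::make_shared<SharedState>()) {}

    void setValue(float value, const std::string& key) const
    {
        std::lock_guard<std::mutex> lock(state_->mutex);
        state_->controls[key] = value;
    }

    float operator[](const std::string& key) const
    {
        std::lock_guard<std::mutex> lock(state_->mutex);
        return state_->controls[key];
    }

private:
    std::shared_ptr<SharedState> state_;
};
A plain std::mutex serializes readers as well as writers; if reader concurrency matters and C++14 is available, std::shared_timed_mutex with std::shared_lock on the read path is the pre-C++17 counterpart of shared_mutex.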

Related

std::list<int> predicate call to function: Error C3867 function call missing argument list

I am using std::list's remove_if with a predicate to update the list. But calling it in OnInitDialog() throws a compilation error. My code is as follows:
Below is the .h:
class CDlgWindow : public CDialog
{
private:
    bool single_digit(const int& value);
    int _days;
};
Below is the .cpp:
CDlgWindow::CDlgWindow(CWnd* pParent, CString strInfo, int days) // ctor
{
    _days = days;
    //_strInfo = strInfo
}

bool CDlgWindow::single_digit(const int& value)
{
    return (value >= _days);
}

BOOL CDlgWindow::OnInitDialog()
{
    CDialog::OnInitDialog();
    CenterWindow();
    .
    .
    .
    int numArr[] = {10, 20, 30, 40};
    int size = sizeof(numArr) / sizeof(numArr[0]);
    std::list<int> numList(numArr, numArr + size);
    numList.remove_if(single_digit); // Error C3867 here!
    .
    .
}
Complete error message:
Error C3867 function call missing argument list, use '&CDlgWindow::single_digit' to create a pointer to member.
I am trying to understand the functor concept. As far as I can tell, C++11 lambdas make this easier to implement. Please help me understand this issue better. Thanks!
std::list's remove_if member needs a unary predicate p that operates on values v. The expression p(v) must be valid, which it isn't when p is a non-static member function, because a member function cannot be called without an object.
There are two options:
Make the predicate (single_digit) a static class member:
class CDlgWindow : public CDialog
{
private:
    static bool single_digit(const int& value);
    // ...
};
Make the predicate a free function:
bool single_digit(int const& value) {
    static int days_ = ...;
    return (value >= days_);
}
If you go with option 1, you will have to make _days static as well, since a static member function cannot access non-static instance data. If _days is a compile-time constant, make sure to mark it const as well; that will open up some compiler optimizations.
This is all hoping that things haven't significantly changed between C++98 and C++11. It's hard to find a C++98 compiler to verify this.
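Since the question mentions C++11 lambdas: if C++11 is available, a lambda capturing this is a third option, because the captured this keeps the member _days accessible (a minimal sketch, untested against the MFC specifics):
// Inside CDlgWindow::OnInitDialog(), assuming C++11 support:
numList.remove_if([this](const int& value) {
    return value >= _days; // member access works through the captured this
});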

What is the default value to be given for a std::thread reference in Thread_Guard class

I'm writing a Thread_Guard class as explained in the book C++ Concurrency in Action, Listing 2.3 (Using RAII to wait for a thread to complete).
Below is the class:
class Thread_Guard
{
    thread& gThread;
public:
    Thread_Guard() : gThread(NULL) {};
    explicit Thread_Guard(thread& t1) : gThread(t1)
    {
    }
    ~Thread_Guard()
    {
        if (gThread.joinable())
            gThread.join();
    }
    Thread_Guard(const Thread_Guard& gT1) = delete;
    Thread_Guard& operator=(const Thread_Guard& gT1) = delete;
};
What value should the std::thread reference be initialized with in the default constructor? I will be calling it from main as below, where MyInt is a sample class I have written.
int main()
{
    int a = 100;
    MyInt customInt(a);
    thread worker(customInt);
    Thread_Guard(worker);
    return 0;
}
The original version of thread_guard does not contain a default constructor; probably by design. It would be interesting to see the use-case motivating your question (perhaps there's an alternative?).
However, if you do want to do this, you could simply store a thread * in the class object, make the default constructor set it to nullptr, and have the destructor check this first:
class Thread_Guard
{
    thread *gThread;
public:
    Thread_Guard() : gThread(nullptr) {};
    explicit Thread_Guard(thread& t1) : gThread(&t1)
    {
    }
    ~Thread_Guard()
    {
        if (gThread && gThread->joinable())
            gThread->join();
    }
    Thread_Guard(const Thread_Guard& gT1) = delete;
    Thread_Guard& operator=(const Thread_Guard& gT1) = delete;
};
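A quick usage sketch with the pointer-based guard (MyInt is the callable from the question; the variable names are illustrative):
int main()
{
    int a = 100;
    MyInt customInt(a);

    thread worker(customInt);
    Thread_Guard guard(worker); // joins worker when guard goes out of scope

    Thread_Guard idle;          // default-constructed: the destructor does nothing
    return 0;
}
Note that the guard should be a named object; a statement like Thread_Guard(worker); is parsed as a declaration of a variable named worker, not as the construction of a temporary guard.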

How to check for correct type when calling ObjectWrap::Unwrap in a Nodejs add-on?

I have 2 C++ classes exposed as JavaScript classes, VanillaOption and NoomraEngine, both inheriting from ObjectWrap.
In the following method in NoomraEngine, I'm supposed to receive a previously "wrapped" VanillaOption:
Handle<Value>
NoomraEngine::Price(const Arguments& args) {
    HandleScope scope;
    Local<Object> object = args[0]->ToObject(); // VanillaOption expected in args[0]
    VanillaOption* equityOption = ObjectWrap::Unwrap<VanillaOption>(object);
    Local<Number> x = Number::New(this->price(equityOption));
    return scope.Close(x);
}
Everything works fine except that if I pass the wrong type to the method, node crashes in ObjectWrap::Unwrap.
My question is how can I make sure that I've received the correct type in args[0] ?
EDIT: a better method than the bare V8 one below is to use NanHasInstance (https://github.com/rvagg/nan#api_nan_has_instance)
In MyObject::Init:
Local<FunctionTemplate> tpl = NanNew<FunctionTemplate>(New);
tpl->SetClassName(NanNew<String>("MyObject"));
...
NanAssignPersistent(prototype, tpl);
where prototype is a static Persistent<FunctionTemplate> member of MyObject.
Use like this:
if (NanHasInstance(prototype, handle)) {
    MyObject* obj = ObjectWrap::Unwrap<MyObject>(handle);
    ...
}
With the caveat that this is my first go at writing a Node addon, I solved this exact problem by checking the prototype of the object with my own wrapper around UnWrap.
Here's a patch to the addon factory class demo showing the method:
https://github.com/petli/node-addon-examples/commit/d3e92cd060a26da2623690718e78f9005db060a8
It will only support factory-generated objects, and not ones where a constructor is exposed so that users could inherit from the base class. However, that could be generalised by walking the prototype chain.
In summary, it grabs hold of the reference to the expected class prototype in MyObject::Init:
Local<Object> obj = constructor->NewInstance();
prototype = Persistent<Value>::New(obj->GetPrototype());
And then checks that before dereferencing the object:
MyObject* MyObject::CheckedUnWrap(Handle<Object> handle)
{
    if (!handle.IsEmpty() && handle->InternalFieldCount() == 1) {
        Handle<Value> objproto = handle->GetPrototype();
        if (objproto == prototype) {
            // OK, this is us
            return ObjectWrap::Unwrap<MyObject>(handle);
        }
    }

    ThrowException(Exception::TypeError(String::New("<this> is not a MyObject")));
    return NULL;
}
All functions then use CheckedUnWrap instead:
Handle<Value> MyObject::PlusOne(const Arguments& args) {
    HandleScope scope;

    MyObject* obj = CheckedUnWrap(args.This());
    if (obj) {
        obj->counter_ += 1;
        return scope.Close(Number::New(obj->counter_));
    }
    else {
        // Invalid type, an exception has been thrown so return an empty value
        return Handle<Value>();
    }
}
I was also considering adding an internal field and setting it to some magic pointer, but then the code would have depended on node::ObjectWrap not changing how it uses the internal fields.
Update: Since NanHasInstance has been deprecated, the new solution to this answer is to use bool FunctionTemplate::HasInstance(Local<Value> object). This function returns true if the given object is an instance of this function template.
Persistent<Function> Wrapper::constructor;
Persistent<FunctionTemplate> Wrapper::tpl;
Then in your Wrapper::Init() function, set the public Persistent object:
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
// ...
Wrapper::tpl.Reset(isolate, tpl);
Now to unwrap:
Local<FunctionTemplate> wrapper_tpl = Wrapper::tpl.Get(isolate);
if (!wrapper_tpl->HasInstance(args[0])) {
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Argument must be a Wrapper object")));
    return;
}
// Now we are safe to call ObjectWrap::Unwrap
Also see node::Buffer::HasInstance, which works the same way:
http://www.lamedoc.com/node.js/classnode_1_1Buffer.html#a4cc6fa734527df461641c6e93e6d1197
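Putting the check and the unwrap together, a small helper in the same style (a sketch only; Wrapper and Wrapper::tpl are the names from the snippet above, and the exact conversion calls vary between Node/V8 versions):
// Returns the unwrapped pointer, or nullptr after throwing a TypeError.
Wrapper* CheckedUnwrap(Isolate* isolate, Local<Value> value) {
    Local<FunctionTemplate> wrapper_tpl = Wrapper::tpl.Get(isolate);
    if (!wrapper_tpl->HasInstance(value)) {
        isolate->ThrowException(Exception::TypeError(
            String::NewFromUtf8(isolate, "Argument must be a Wrapper object")));
        return nullptr;
    }
    return node::ObjectWrap::Unwrap<Wrapper>(value.As<Object>());
}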

Is there an additional runtime cost for using named parameters?

Consider the following struct:
public struct vip
{
    string email;
    string name;
    int category;

    public vip(string email, int category, string name = "")
    {
        this.email = email;
        this.name = name;
        this.category = category;
    }
}
Is there a performance difference between the following two calls?
var e = new vip(email: "foo", name: "bar", category: 32);
var e = new vip("foo", 32, "bar");
Is there a difference if there are no optional parameters defined?
I believe none. It's only a language/compiler feature; call it syntactic sugar if you like. The generated CLR code should be the same.
There's a compile-time cost, but not a runtime one, and the compile-time cost is very, very small.
Like extension methods or auto-implemented properties, this is just magic the compiler does, but in reality generates the same IL we're all familiar with and have been using for years.
Think about it this way, if you're using all the parameters, the compiler would call the method using all of them, if not, it would generate something like this behind the scenes:
var e = new vip(email: "foo", category: 32); // calling
// generated, this is what it's actually saving you from writing
public vip(string email, int category) : this(email, category, "") { }
No, it is a compile-time feature only. If you inspect the generated IL you'll see no sign of the named parameters. Likewise, optional parameters are a compile-time feature.
One thing to keep in mind regarding named parameters is that the names become part of the signature for calling a method (if used, obviously) at compile time. That is, if the names change, the calling code must be changed as well when you recompile. A deployed assembly, on the other hand, will not be affected until it is recompiled, as the names are not present in the IL.
There shouldn't be any. Basically, named parameters and optional parameters are syntactic sugar; the compiler writes the actual values or the default values directly into the call site.
EDIT: Note that because they are a compiler feature, this means that changes to the parameters only get updated if you recompile the "clients". So if you change the default value of an optional parameter, for example, you will need to recompile all "clients", or else they will use the old default value.
Actually, there is a cost on the x64 CLR.
Look here: http://www.dotnetperls.com/named-parameters
I am able to reproduce the result: the named call takes 4.43 ns, and the normal call takes 3.48 ns
(program runs in x64).
However, in x86, both take around 0.32 ns.
The code is attached below; compile and run it yourself to see the difference.
Note that in VS2012 the default target is AnyCPU (x86 preferred), so you have to switch to x64 to see the difference.
using System;
using System.Diagnostics;

class Program
{
    const int _max = 100000000;

    static void Main()
    {
        Method1();
        Method2();

        var s1 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
        {
            Method1();
        }
        s1.Stop();

        var s2 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
        {
            Method2();
        }
        s2.Stop();

        Console.WriteLine(((double)(s1.Elapsed.TotalMilliseconds * 1000 * 1000) /
            _max).ToString("0.00 ns"));
        Console.WriteLine(((double)(s2.Elapsed.TotalMilliseconds * 1000 * 1000) /
            _max).ToString("0.00 ns"));
        Console.Read();
    }

    static void Method1()
    {
        Method3(flag: true, size: 1, name: "Perl");
    }

    static void Method2()
    {
        Method3(1, "Perl", true);
    }

    static void Method3(int size, string name, bool flag)
    {
        if (!flag && size != -1 && name != null)
        {
            throw new Exception();
        }
    }
}

How to use stdext::hash_map?

I would like to see a simple example of how to override stdext::hash_compare properly, in order to define a new hash function and comparison operator for my own user-defined type. I'm using Visual C++ (2008).
This is how you can do it:
class MyClass_Hasher {
public:
    // mean bucket size that the container should try not to exceed
    static const size_t bucket_size = 10;
    // minimum number of buckets, power of 2, > 0
    static const size_t min_buckets = (1 << 10);

    MyClass_Hasher() {
        // should be default-constructible
    }

    size_t operator()(const MyClass &key) const {
        size_t hash_value;
        // do fancy stuff here with hash_value
        // to create the hash value. There's no specific
        // requirement on the value.
        return hash_value;
    }

    bool operator()(const MyClass &left, const MyClass &right) const {
        // this should implement a total ordering on MyClass, that is
        // it should return true if "left" precedes "right" in the ordering
    }
};
Then, you can just use:
stdext::hash_map<MyClass, MyValue, MyClass_Hasher> my_map;
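A minimal usage sketch (MyClass and MyValue stand in for your own types, as above):
#include <hash_map>  // stdext::hash_map, Visual C++ specific
#include <utility>   // std::make_pair

stdext::hash_map<MyClass, MyValue, MyClass_Hasher> my_map;

MyClass key;    // assuming MyClass and MyValue can be constructed like this
MyValue value;
my_map.insert(std::make_pair(key, value)); // uses MyClass_Hasher for hashing and ordering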
Here you go, example from MSDN
I prefer using a non-member function.
The method expained in the Boost documentation article Extending boost::hash for a custom data type seems to work.
