I have a function with a large switch statement, and because there are so many cases clang-tidy warns about the function's size. I want to reduce the size of the function. Is there any way to do that?
Since an enum class can be used as a key for std::map, you can use a map to keep the enum <-> string relation, like this:
#include <map>

enum class test_enum { first, second, third };

const char* to_string(test_enum val) {
    static const std::map<test_enum, const char*> dict = {
        { test_enum::first,  "first"  },
        { test_enum::second, "second" },
        { test_enum::third,  "third"  }
    };
    auto tmp = dict.find(val);
    return (tmp != dict.end()) ? tmp->second : "<unknown>";
}
C++ has no reflection, so the map cannot be filled automatically; however, using compiler-specific extensions (e.g. GCC's __PRETTY_FUNCTION__) it can be done, as the magic_enum library demonstrates.
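For illustration, here is a minimal sketch of the magic_enum approach. It assumes the header-only library is on the include path and that its enum_name function is available (the exact header name may differ between library versions):
#include <string_view>
#include <magic_enum.hpp> // third-party, header-only library (assumed include path)

enum class test_enum { first, second, third };

std::string_view to_string(test_enum val) {
    // enum_name returns the enumerator's identifier, or an empty
    // string_view if the value has no named enumerator.
    auto name = magic_enum::enum_name(val);
    return name.empty() ? std::string_view{"<unknown>"} : name;
}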
I'm trying to add a map to my data type that maps member-name strings to the offset of the corresponding member variable, like this:
struct E
{
    B memberX;
    B memberY;
    constexpr static entry map[] = {
        { "memberX", offsetof( E, memberX ) },
        { "memberY", offsetof( E, memberY ) }
    };
};
This doesn't compile with VS2015. It fails at { "memberX", offsetof( E, memberX ) } with error C2227.
Besides, I know that offsetof doesn't work reliably for non-POD types.
Do you have a suggestion for how to do what I want in a compatible, modern way?
Thanks!
Not that this way is modern, but offsetof is often defined as follows:
#define offsetof(type, memb) ((size_t)&(((type *)0)->memb))
so you can try using that expression form as an alternative.
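For illustration, a minimal sketch of how the table from the question could be built with offsetof; entry and B are hypothetical placeholder types, and the array is defined outside the class so that E is already a complete type at that point (offsetof needs a complete type, and it is only guaranteed to work for standard-layout types):
#include <cstddef>

struct B { int value; };
struct entry { const char* name; std::size_t offset; };

struct E
{
    B memberX;
    B memberY;
    static const entry map[2];
};

// E is complete here, so offsetof is well-formed (for standard-layout types).
const entry E::map[2] = {
    { "memberX", offsetof(E, memberX) },
    { "memberY", offsetof(E, memberY) }
};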
I am assuming that you want to use the offsets only to access the members later. In that case and given that all members have the same type, a pointer-to-data-member is probably safer and more general:
#include <map>
#include <stdexcept>
#include <string>

struct E
{
    B memberX;
    B memberY;

    static const auto& getMemberMap() {
        static const std::map<std::string, B E::*> memberMap {
            { "memberX", &E::memberX },
            { "memberY", &E::memberY }
        };
        return memberMap;
    }

    B& getMember(const std::string& str) {
        auto it = getMemberMap().find(str);
        if (it == getMemberMap().end()) {
            throw std::out_of_range("no such member: " + str);
        }
        return this->*(it->second);
    }
};
std::map does not have a constexpr constructor, so the map will be built at runtime rather than at compile time, but you can replace it with your own implementation.
I used a local static variable instead of a static data member because you required the initialization to be contained in the class definition.
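For illustration, a minimal usage sketch; B is a hypothetical stand-in type (it would have to be defined before E in a real translation unit):
// Hypothetical stand-in for B, defined before E:
struct B { int value = 0; };

int main() {
    E e;
    e.getMember("memberX").value = 42; // resolves to e.memberX via the pointer-to-member
    e.getMember("memberY").value = 7;  // resolves to e.memberY
}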
I would like to replace my global string constants with a nested enum for the keys I'm using to access columns in a database.
The structure is as follows:
enum DatabaseKeys {
    enum User: String {
        case Table = "User"
        case Username = "username"
        ...
    }
    ...
}
Each table in the database is an inner enum, with the name of the table being the enum's title. The first case in each enum will be the name of the table, and the following cases are the columns in its table.
To use this, it's pretty simple:
myUser[DatabaseKeys.User.Username.rawValue] = "Johnny"
But I will be using these enums a lot. Having to append .rawValue to every instance will be a pain, and it's not as readable as I'd like it to be. How can I access the String value without having to use rawValue? It'd be great if I can do this:
myUser[DatabaseKeys.User.Username] = "Johnny"
Note that I'm using Swift 2. If there's an even better way to accomplish this I'd love to hear it!
While I didn't find a way to do this using the desired syntax with enums, this is possible using structs.
struct DatabaseKeys {
    struct User {
        static let identifier = "User"
        static let Username = "username"
    }
}
To use:
myUser[DatabaseKeys.User.Username] = "Johnny"
Apple uses structs like this for storyboard and row type identifiers in the WatchKit templates.
You can use the CustomStringConvertible protocol for this.
From the documentation:
String(instance) will work for an instance of any type, returning its
description if the instance happens to be CustomStringConvertible.
Using CustomStringConvertible as a generic constraint, or accessing a
conforming type's description directly, is therefore discouraged.
So, if you conform to this protocol and return your rawValue from the description property, you will be able to use String(_:) on a case, for example String(DatabaseKeys.User.Username), to get the value.
enum DatabaseKeys {
    enum User: String, CustomStringConvertible {
        case Table = "User"
        case Username = "username"

        var description: String {
            return self.rawValue
        }
    }
}

var myUser = [String: String]()
myUser[String(DatabaseKeys.User.Username)] = "Johnny"
print(myUser) // ["username": "Johnny"]
You can use callAsFunction (new in Swift 5.2) on your enum that has a String raw value.
enum KeychainKey: String {
    case userId
    case email

    func callAsFunction() -> String {
        return self.rawValue
    }
}
Usage:
KeychainKey.userId()
You can do this with a custom class:
enum Names: String {
    case something, thing
}

class CustomData {
    subscript(key: Names) -> Any? {
        get {
            return self.customData[key.rawValue]
        }
        set(newValue) {
            self.customData[key.rawValue] = newValue
        }
    }

    private var customData = [String: Any]()
}
...
let cData = CustomData()
cData[Names.thing] = 56
Edit:
I found another solution that works with Swift 3:
enum CustomKey: String {
    case one, two, three
}

extension Dictionary where Key: ExpressibleByStringLiteral {
    subscript(key: CustomKey) -> Value? {
        get {
            return self[key.rawValue as! Key]
        }
        set {
            self[key.rawValue as! Key] = newValue
        }
    }
}
var dict: [String: Any] = [:]
dict[CustomKey.one] = 1
dict["two"] = true
dict[.three] = 3
print(dict["one"]!)
print(dict[CustomKey.two]!)
print(dict[.three]!)
If you are able to use User as the dictionary key instead of String (enums are Hashable by default), that would be a solution.
If not, you should use your approach with a nested struct and static variables/constants.
I was wondering how I could change the code below so that bmBc is computed at compile time. The version below works at runtime, but that is not ideal since I need the bmBc table to be available at compile time. I would appreciate advice on how I could improve this.
import std.conv : to;
import std.stdio;

int[string] bmBc;
immutable string pattern = "GCAGAGAG";
const int size = to!int(pattern.length);

struct king {
    void calculatebmBc(int i)()
    {
        static if (i < size - 1)
            bmBc[to!string(pattern[i])] = to!int(size - i - 1);
        // bmBc[pattern[i]] ~= i-1;
        calculatebmBc!(i + 1)();
    }

    void calculatebmBc(int i : size - 1)() {
    }
}

void main() {
    king myKing;
    const int start = 0;
    myKing.calculatebmBc!(start)();
    //1. enum bmBcTable = bmBc;
}
The variables bmBc and bmh can't be read at compile time because you define them as regular runtime variables.
You need to define them as enums, or possibly immutable, to read them at compile time, but that also means that you cannot modify them after initialization. You need to refactor your code to return values instead of using out parameters.
Alternatively, you can initialize them at runtime inside of a module constructor.
Consider the following struct:
public struct vip
{
    string email;
    string name;
    int category;

    public vip(string email, int category, string name = "")
    {
        this.email = email;
        this.name = name;
        this.category = category;
    }
}
Is there a performance difference between the following two calls?
var e = new vip(email: "foo", name: "bar", category: 32);
var e = new vip("foo", 32, "bar");
Is there a difference if there are no optional parameters defined?
I believe none. It's only a language/compiler feature, call it syntactic sugar if you like. The generated CLR code should be the same.
There's a compile-time cost, but not a runtime one, and the compile-time cost is very, very small.
Like extension methods or auto-implemented properties, this is just magic the compiler does; in reality it generates the same IL we're all familiar with and have been using for years.
Think about it this way: if you're using all the parameters, the compiler calls the method using all of them; if not, it generates something like this behind the scenes:
var e = new vip(email: "foo", category: 32); //calling
//generated, this is what it's actually saving you from writing
public vip(string email, int category) : this(email, category, "") { }
No, it is a compile-time feature only. If you inspect the generated IL you'll see no sign of the named parameters. Likewise, optional parameters are also a compile-time feature.
One thing to keep in mind regarding named parameters is that the names are now part of the signature for calling a method (if used obviously) at compile time. I.e. if names change the calling code must be changed as well if you recompile. A deployed assembly, on the other hand, will not be affected until recompiled, as the names are not present in the IL.
There shouldn't be any. Basically, named parameters and optional parameters are syntactic sugar; the compiler writes the actual values or the default values directly into the call site.
EDIT: Note that because they are a compiler feature, this means that changes to the parameters only get updated if you recompile the "clients". So if you change the default value of an optional parameter, for example, you will need to recompile all "clients", or else they will use the old default value.
Actually, there is a cost on the x64 CLR.
See http://www.dotnetperls.com/named-parameters
I was able to reproduce the result: the named call takes 4.43 ns and the normal call takes 3.48 ns
(program runs as x64).
However, in x86 both take around 0.32 ns.
The code is attached below; compile and run it yourself to see the difference.
Note that in VS2012 the default target is AnyCPU (32-bit preferred), so you have to switch to x64 to see the difference.
using System;
using System.Diagnostics;

class Program
{
    const int _max = 100000000;

    static void Main()
    {
        Method1();
        Method2();

        var s1 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
        {
            Method1();
        }
        s1.Stop();

        var s2 = Stopwatch.StartNew();
        for (int i = 0; i < _max; i++)
        {
            Method2();
        }
        s2.Stop();

        Console.WriteLine(((double)(s1.Elapsed.TotalMilliseconds * 1000 * 1000) /
            _max).ToString("0.00 ns"));
        Console.WriteLine(((double)(s2.Elapsed.TotalMilliseconds * 1000 * 1000) /
            _max).ToString("0.00 ns"));
        Console.Read();
    }

    static void Method1()
    {
        Method3(flag: true, size: 1, name: "Perl");
    }

    static void Method2()
    {
        Method3(1, "Perl", true);
    }

    static void Method3(int size, string name, bool flag)
    {
        if (!flag && size != -1 && name != null)
        {
            throw new Exception();
        }
    }
}
I would like to see a simple example of how to override stdext::hash_compare properly, in order to define a new hash function and comparison operator for my own user-defined type. I'm using Visual C++ (2008).
This is how you can do it:
class MyClass_Hasher {
public:
    // mean bucket size that the container should try not to exceed
    static const size_t bucket_size = 10;
    // minimum number of buckets, power of 2, > 0
    static const size_t min_buckets = (1 << 10);

    MyClass_Hasher() {
        // should be default-constructible
    }

    size_t operator()(const MyClass &key) const {
        size_t hash_value = 0;
        // do fancy stuff here with hash_value
        // to create the hash value. There's no specific
        // requirement on the value.
        return hash_value;
    }

    bool operator()(const MyClass &left, const MyClass &right) const {
        // this should implement a total ordering on MyClass, that is
        // it should return true if "left" precedes "right" in the ordering
        return false; // placeholder: replace with a real strict ordering
    }
};
Then, you can just use:
stdext::hash_map<MyClass, MyValue, MyClass_Hasher> my_map;
Here you go, example from MSDN
I prefer using a non-member function.
The method explained in the Boost documentation article Extending boost::hash for a custom data type seems to work.
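For illustration, a minimal sketch of that approach, assuming a hypothetical MyClass with two members; boost::hash picks up the free hash_value function via argument-dependent lookup:
#include <cstddef>
#include <string>
#include <boost/functional/hash.hpp>

struct MyClass {
    int id;
    std::string name;
};

// Free function in the same namespace as MyClass, found by boost::hash<MyClass>.
std::size_t hash_value(const MyClass& c) {
    std::size_t seed = 0;
    boost::hash_combine(seed, c.id);
    boost::hash_combine(seed, c.name);
    return seed;
}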