Is there a way to convert a string value into time_t format?
Thanks in advance.
One approach could be to parse the string into the various components and use them to fill a tm structure. Once you have the structure filled with data you can use the C function mktime to convert the structure into a time_t type.
There might be more idiomatic ways to do this in Visual C++, but in plain C/C++ this is probably the way I'd do it until I found a better approach.
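For example, a minimal sketch (not part of the original answer) that assumes the string uses a fixed "YYYY-MM-DD HH:MM:SS" local-time format:

#include <stdio.h>
#include <string.h>
#include <time.h>

time_t parse_time(const char *s)
{
    struct tm tm;
    memset(&tm, 0, sizeof tm);

    if (sscanf(s, "%d-%d-%d %d:%d:%d",
               &tm.tm_year, &tm.tm_mon, &tm.tm_mday,
               &tm.tm_hour, &tm.tm_min, &tm.tm_sec) != 6)
        return (time_t)-1;

    tm.tm_year -= 1900;   /* tm_year counts from 1900 */
    tm.tm_mon  -= 1;      /* tm_mon is 0-based */
    tm.tm_isdst = -1;     /* let mktime determine DST */

    return mktime(&tm);   /* returns (time_t)-1 on failure */
}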
The Haskell base documentation says that "A Word is an unsigned integral type, with the same size as Int."
How can I take an Int and cast its bit representation to a Word, so I get a Word value with the same bit representation as the original Int (even though the number values they represent will be different)?
I can't use fromIntegral because that will change the bit representation.
I could loop through the bits with the Bits class, but I suspect that will be very slow - and I don't need to do any kind of bit manipulation. I want some kind of function that will be compiled down to a no-op (or close to it), because no conversion is done.
Motivation
I want to use IntSet as a fast integer set implementation - however, what I really want to store in it are Words. I feel that I could create a WordSet which is backed by an IntSet, by converting between them quickly. The trouble is, I don't want to convert by value, because I don't want to truncate the top half of Word values: I just want to keep the bit representation the same.
int2Word#/word2Int# in GHC.Prim perform the bit cast. You can easily implement wrapper functions that convert between the boxed Int and Word using them.
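A minimal sketch of such wrappers (using GHC.Exts, which re-exports the primops; the module and function names are my own):

{-# LANGUAGE MagicHash #-}
module BitCast (intToWord, wordToInt) where

import GHC.Exts (Int (I#), Word (W#), int2Word#, word2Int#)

-- Reinterpret the bits of an Int as a Word; no value conversion is done.
intToWord :: Int -> Word
intToWord (I# i) = W# (int2Word# i)

-- Reinterpret the bits of a Word as an Int.
wordToInt :: Word -> Int
wordToInt (W# w) = I# (word2Int# w)

For example, intToWord (-1) gives maxBound :: Word: the bit pattern is preserved while the interpreted value changes.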
I'm working on a NodeJs script that handles strings with exponential values.
Something like this:
1.070000000000000e+003
What is the best way to convert (or parse) this string and obtain a floating-point value?
Thanks for the help.
You can convert by using parseFloat or Number.
If you prefer to parse, maybe the best way is by a regular expression:
/-?(?:0|[1-9]\d*)(?:\.\d*)?(?:[eE][+\-]?\d+)?/
as suggested here, and convert the match.
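For example (a minimal sketch; variable names are illustrative):

const s = "1.070000000000000e+003";

// Direct conversion: both understand exponent notation.
console.log(parseFloat(s)); // 1070
console.log(Number(s));     // 1070

// Or extract the number with the regular expression first, then convert it.
const re = /-?(?:0|[1-9]\d*)(?:\.\d*)?(?:[eE][+\-]?\d+)?/;
const match = s.match(re);
if (match) {
  console.log(parseFloat(match[0])); // 1070
}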
I've seen several questions/answers here that suggest the best way to get a string representation of an integer in Objective-C is to use [NSString stringWithFormat:@"%d", x]. I'm afraid the C/C++ programmer in me is having a hard time believing I want to bring all the formatting code into play for such a simple task. That is, I assume stringWithFormat needs to parse the format string looking for all the different type specifiers, field widths, and options I could possibly use; then it has to interpret that variable-length list of parameters and use the format specifier to coerce x to the appropriate type; then it goes through a lengthy conversion, accounting for signed/unsigned values and negation along the way.
Needless to say, in C/C++ I could simply use itoa(x), which does exactly one thing and does it extremely efficiently.
I'm not interested in arguing the relative merits of one language over another, but rather just asking the question: is the incredibly powerful [NSString stringWithFormat:@"%d", x] really the most efficient way to do this very, very simple task in Objective-C? It seems like I'm cracking a peanut with a sledgehammer.
You could use itoa() followed by any of +[NSString stringWithUTF8String:], -[NSString initWithBytes:length:encoding:], or +[NSString stringWithCString:encoding:] if it makes you feel better, but I wouldn't worry about it unless you're sure this is a performance problem.
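For example, a minimal sketch (not from the answer) using snprintf, since itoa() is not part of standard C and may not be available in the iOS/macOS SDKs:

int x = -42;                                         // value to convert
char buf[12];                                        // enough for a 32-bit int, sign, and terminator
snprintf(buf, sizeof buf, "%d", x);                  // plain C conversion into the buffer
NSString *s = [NSString stringWithUTF8String:buf];   // wrap the C string in an NSString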
You could also use the description method: box the int as an NSNumber with the @() literal, and description converts it to an NSString.
int intVariable = 1;
NSString* stringRepresentation = [@(intVariable) description];
I have to multiply two big numbers, saved as strings. Any hint on how to do that?
Think back to grade school, and how you would solve the problem long-hand.
It depends on the language and how large the numbers are. For example, in C you can convert the strings to int with atoi and then multiply, provided the product fits in a 32-bit int. If the numbers are too large for that, you'll probably have to use a third-party bignum library. Some languages (Python, Haskell) have built-in support for arbitrary-precision integers, so you can multiply numbers of any size.
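To make the grade-school (long-hand) approach concrete, here is a minimal C sketch, not from either answer; it assumes both inputs are non-empty strings of decimal digits with no sign:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Multiply two non-negative decimal strings the grade-school way. */
char *multiply(const char *a, const char *b)
{
    int la = (int)strlen(a), lb = (int)strlen(b), lr = la + lb;
    int *acc = calloc(lr, sizeof *acc);   /* one slot per result digit */
    char *res = malloc(lr + 1);
    int i, j, k, p = 0;

    /* multiply every digit pair, accumulating into the right column */
    for (i = la - 1; i >= 0; i--)
        for (j = lb - 1; j >= 0; j--)
            acc[i + j + 1] += (a[i] - '0') * (b[j] - '0');

    /* propagate carries from the least significant column upward */
    for (k = lr - 1; k > 0; k--) {
        acc[k - 1] += acc[k] / 10;
        acc[k] %= 10;
    }

    k = 0;
    while (k < lr - 1 && acc[k] == 0) k++;            /* skip leading zeros */
    while (k < lr) res[p++] = (char)('0' + acc[k++]);
    res[p] = '\0';

    free(acc);
    return res;                                       /* caller frees */
}

int main(void)
{
    char *r = multiply("123456789", "987654321");
    printf("%s\n", r);                                /* 121932631112635269 */
    free(r);
    return 0;
}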
I have an MFC application in C++ that uses std::string and std::wstring, and frequently casts from one to the other, and a whole lot of other nonsense. I need to standardize everything to a single format, so I was wondering if I should go with CString or std::wstring.
In the application, I'll need to generate strings from a string table, work with a lot of Windows calls that require constant TCHAR or wchar_t pointers, deal with edit controls, and interact with a COM object's API that requires BSTR.
I also have vectors of strings, so is there any problem with a vector of CStrings?
Which one is better? What are the pros and cons of each?
Examples
BSTR to wstring
CComBSTR tstr;
wstring album;
if( (trk->get_Info((BSTR *)&tstr)) == S_OK && tstr != NULL)
album = (wstring)tstr;
wstring to BSTR
CComBSTR tstr = path.c_str();
if(trk->set_Info(tstr) == S_OK)
return true;
String resource to wstring
CString t;
wstring url;
t.LoadString(IDS_SCRIPTURL);
url = t;
GetProfileString() returns a CString.
Integer to wstring
wchar_t total[32];
swprintf_s(total, 32, L"%d", trk->getInt());
wstring tot(total);
std::basic_string<> (or rather its specialisations) is horrible to work with; it's IMO one of the major shortcomings of the STL (and I'd say C++ in general). It doesn't even know about encodings - c'mon, this is 2010. Being able to define the size of your character isn't enough, because there's no way to indicate variable-size characters in a basic_string<>. Now, UTF-8 isn't nice to work with in a CString, but it's not as bad as trying to do it with basic_string. While I agree with the spirit of the above posters that a standard solution is better than the alternatives, CString is (if your project uses MFC or ATL anyway) much nicer to work with than std::string/wstring: conversions between ANSI/Unicode (through CStringA and CStringW), BSTR, loading from the string table, cast operators to TCHAR (.c_str()? really?), and so on.
CString also has Format(), which, although not safe and somewhat ugly, is convenient. If you prefer safe formatting libraries, you'll be better off with basic_string.
Furthermore, CString offers as member functions some algorithms, such as trim and split, that you'd need the Boost string utilities for with basic_string.
Vectors of CString are no problem.
Guard against a dogmatic dismissal of CString on the basis of it being Windows-only: if you use it in a Windows GUI, the application is Windows-only anyway. That being said, if there's any chance that your code will need to be cross-platform in the future, you're going to be stuck with basic_string<>.
I personally would go with CString in this case, since you state that you're working with BSTRs, using COM, and writing this in MFC. While wstring would be more standards-compliant, you'll run into issues with constantly converting from one to another. Since you're working with COM and writing it in MFC, there's no real reason to worry about making it cross-platform: no other OS has COM like Windows does, and MFC is already locking you into Windows.
As you noted, CStrings also have built-in functions to help load strings and convert to BSTRs and the like, all pre-made and already built to work with Windows. So while you need to standardize on one format, why not make it easier to work with?
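For example, a rough sketch of those built-in helpers (reusing the IDS_SCRIPTURL resource and the trk object from the question; not code from this answer):

CString url;
url.LoadString(IDS_SCRIPTURL);            // load straight from the string table

CString total;
total.Format(_T("%d"), trk->getInt());    // printf-style integer formatting

BSTR info = url.AllocSysString();         // convert to a BSTR for the COM call
trk->set_Info(info);                      // hypothetical use, as in the question
SysFreeString(info);                      // the caller owns the BSTR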
std::wstring would be much more portable and benefit from a lot of existing prewritten code in the STL and in Boost. CString would probably go better with the Windows APIs.
wchar_t: remember that you can get the data out of a wstring at any time using the data() or c_str() functions, so you get the needed wchar_t pointer anyway.
BSTR: use SysAllocString to get the BSTR out of wstring.c_str() (c_str() guarantees null termination, which SysAllocString needs); see the sketch below.
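A rough sketch of those two points (the path value is illustrative; SysAllocString/SysFreeString come from <windows.h>):

std::wstring path = L"C:\\music\\track.mp3";  // hypothetical value

const wchar_t *raw = path.c_str();            // null-terminated wchar_t* for Windows calls

BSTR b = SysAllocString(path.c_str());        // copies the contents into a new BSTR
// ... pass b to the COM API ...
SysFreeString(b);                             // free it once the call is done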
As for the platform dependence, remember that you can use std::basic_string<T> to define your own string, based on what you want the length of a single character to be.
I'd go for wstring every day....