I'm trying to convert a character that I have specified in my application configuration file to an XNA keyboard key. How would I parse my character value to a key?
While harryovers has given you an exact answer to your question, perhaps a better solution for configuration files is to convert from a string instead of from a character. That way your configuration file may specify any key by name, not just alpha-numeric ones.
You could use Enum.Parse to convert the string to an enumeration value (see the Enum.Parse documentation on MSDN).
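For instance, a minimal sketch of that approach, assuming the configuration file stores the key by its enum member name (e.g. "Space" or "A"):
string configValue = "Space"; // hypothetical value read from your configuration file
// Parse the configured name into an XNA key; the 'true' argument makes it case-insensitive.
Keys key = (Keys)Enum.Parse(typeof(Keys), configValue, true);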
This should work:
char c = 'a';
Keys cAsKey = (Keys)((int)(char.ToUpper(c)));
bool compareKeys = (cAsKey == Keys.A); //true
If you're targeting Windows, take a look at the KeysConverter class in System.Windows.Forms. Technically XNA Keys != Windows Forms Keys, but internally they use the same integer values.
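A rough sketch of that approach, assuming a Windows target and a reference to System.Windows.Forms:
// KeysConverter understands key names such as "A", "F1", or "Enter".
var converter = new System.Windows.Forms.KeysConverter();
var formsKey = (System.Windows.Forms.Keys)converter.ConvertFromString("F1");
// The underlying integer values match, so a cast to XNA's Keys is enough.
var xnaKey = (Microsoft.Xna.Framework.Input.Keys)(int)formsKey;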
I got a List of String. I am losing information (the dot) when I try to convert an entry to type Double. What am I doing wrong?
Dim list As New List(Of String)
Dim a As Double
list.Add("309.69686")
a = CDbl(list(0))
MsgBox(a)
'Output: 30969686
This happens because in your locale the separator for decimal numbers is probably not a point but something else (usually a comma).
You are using the old VB6 method (CDbl) to convert this string to a Double, and that method has no way to specify a different locale.
So, in its most basic form, the fix is to switch to the native .NET method:
a = Double.Parse(list(0), CultureInfo.InvariantCulture)
Here we tell Parse which locale settings to use when converting the input string to a Double; InvariantCulture uses the point as the decimal separator.
Of course, if the input string is obtained from user input, you could face other problems (like invalid numeric strings). In that case you should not use Double.Parse but Double.TryParse, as sketched below.
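A minimal sketch, assuming an Imports System.Globalization at the top of the file:
Dim value As Double
' TryParse returns False instead of throwing when the string is not a valid number.
If Double.TryParse(list(0), NumberStyles.Float, CultureInfo.InvariantCulture, value) Then
    MsgBox(value)
Else
    MsgBox("Not a valid number: " & list(0))
End If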
If you have a German Windows, the dot will be interpreted as a thousands separator. You must specify the culture explicitly if you need a different behaviour.
Dim d = Double.Parse("309.69686", CultureInfo.InvariantCulture)
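For illustration, a small sketch of the difference (de-DE stands in here for any locale that treats the dot as a grouping separator):
Dim german As New CultureInfo("de-DE")
Dim wrongValue As Double = Double.Parse("309.69686", german)                        ' 30969686, the behaviour from the question
Dim rightValue As Double = Double.Parse("309.69686", CultureInfo.InvariantCulture)  ' 309.69686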
I'm trying to get a substring from an initial string in Smalltalk. I'm wondering if there's a way to do it. For example in Java, the method aStringObject.substring(index), allows you to trim a String object using an index (or its position in the array). I've been looking in the browser for something that works in a similar way, but couldn't find it. So far every trimming method uses a character or string to do the separation.
As an example of what I'm looking for:
initialString:='Hello'.
finalString:=initialString substring: 1
The value of finalString should be 'ello'.
In Smalltalk a String is a kind of SequenceableCollection, so you can use the copying protocol messages as well.
For example you could use:
copyFrom: start to: stop
allButFirst (will not copy the first character)
allButFirst: n (more generally, answers a copy of the receiver containing all but the first n elements)
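For the question's example, a short sketch (assuming a Pharo/Squeak-style image; Smalltalk indices are 1-based):
| initialString finalString |
initialString := 'Hello'.
finalString := initialString allButFirst.                          "'ello'"
finalString := initialString copyFrom: 2 to: initialString size.   "also 'ello'"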
In the ATEasy software development environment, I have a char[100] which contains a sentence and I want to convert it to String type.
How can I do that? Is there a special function for that?
The solution was pretty easy.
I just assigned the char array variable to the string variable, as follows:
sStringType = acCharType;
and it worked.
I assume that an implicit cast is performed.
I have an EditText object (et_travel) on my screen that's asking for miles traveled. I grab that data like this:
float travel = Float.parseFloat(et_travel.getText().toString());
if(travel > 40000){
I just discovered that if someone puts 40000 in the EditText, everything works fine, but if they put 40,000 (adding a comma to the number), I get a force close on the float travel = ... statement.
How can I evaluate the number without having a problem from the user adding a comma?
Is this Java? It appears to be, but I may be mistaken. Regardless, I would suggest removing all characters from the string that are not numeric. One way to do this is with a regular expression.
A way to do this in Java may be the following:
String input = et_travel.getText().toString();
input = input.replaceAll("[^0-9]", "");
float travel = Float.parseFloat(input);
...
This way, you strip anything non-numeric from the string first, and then attempt the parse. Obviously do some error checking before this (e.g. that the input is not null). One change that may be needed: if you are given non-integer values you will want to keep the '.' character, which requires changing the regex a bit.
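For example, a variant of the regex that also keeps the decimal point:
input = input.replaceAll("[^0-9.]", "");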
Check here: http://download.oracle.com/javase/1.5.0/docs/api/java/lang/String.html#replaceAll(java.lang.String, java.lang.String)
What you need is some validation on the input. Before converting the string into a float, check it: if there are any ','s, remove them, and if it is just junk, reject the input. Otherwise someone could put a word or anything else in the field and cause havoc in your program.
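A minimal sketch of that idea (reusing the names from the question):
String raw = et_travel.getText().toString().replace(",", "").trim();
float travel;
try {
    travel = Float.parseFloat(raw);
} catch (NumberFormatException e) {
    // Reject junk input here, e.g. show a validation message to the user.
    return;
}
if (travel > 40000) {
    // ...
}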
Check out the inputType attribute to restrict user input:
android:inputType="number"
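For example, a hypothetical layout entry for the field from the question (numberDecimal also allows a decimal point, while number allows digits only):
<EditText
    android:id="@+id/et_travel"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="numberDecimal" />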
I'm using the following code to read files from a folder in Windows. However, since this is an MFC application, I have to convert the char array to Unicode. For example, if I hard-code the path as "C:\images3\test\" as shown below, the code works.
WIN32_FIND_DATA FindFileData;
HANDLE hFind = INVALID_HANDLE_VALUE;
hFind = FindFirstFile(_T("C:\\images3\\test\\"), &FindFileData);
What I want is to get this working as follows:
char* pathOfFileType;
hFind = FindFirstFile(_T(pathOfFileType), &FindFileData);
Can anyone tell me how to fix this problem?
Thanks
Thanks a lot for all your responses. I learnt a lot from those answers, because I didn't have much of an idea about what was happening underneath. Meanwhile, I managed to get rid of the issue by converting to Unicode using the following code, with minimal changes to my existing code.
#include <atlconv.h>
USES_CONVERSION;
//An ANSI string
LPSTR lpsz_ANSI_String = pathOfFileType;
//ANSI string being converted to a UNICODE string
LPWSTR lpUnicodeStr = A2W( lpsz_ANSI_String );
hFind = FindFirstFile(lpUnicodeStr, &FindFileData);
You can use the MultiByteToWideChar function to convert a string from chars to UTF-16, but you would be better off getting pathOfFileType directly in Unicode from the user or from wherever you take it; otherwise you may still experience problems with paths that contain characters not included in the current code page.
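A rough sketch of that route (assuming pathOfFileType holds an ANSI string in the current code page, and that <windows.h> and <vector> are included; error checking omitted for brevity):
// First call asks for the required buffer size in wide characters, including the terminator.
int len = MultiByteToWideChar(CP_ACP, 0, pathOfFileType, -1, NULL, 0);
std::vector<wchar_t> widePath(len);
// Second call performs the actual conversion.
MultiByteToWideChar(CP_ACP, 0, pathOfFileType, -1, &widePath[0], len);
hFind = FindFirstFileW(&widePath[0], &FindFileData);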
Your question demonstrates a confusion of several issues. First, using MFC doesn't mean you have to convert the character array to Unicode; one has nothing to do with the other. Furthermore, FindFirstFile is a Win32 API, not an MFC function. Finally, _T("abc") is not necessarily Unicode; rather, _T(X) is a macro that expands to X in multi-byte builds and to L##X (a wide-character literal) in Unicode builds. This is designed so that your code can compile in a Unicode or multi-byte configuration. To achieve the same flexibility when declaring a variable, you use the TCHAR type instead of char or wchar_t. So your second snippet should look like:
TCHAR* pathOfFileType;
hFind = FindFirstFile(pathOfFileType, &FindFileData);
Note there is no _T macro; it is only applied to string literals, not identifiers.
"since this a MFC application I have to convert the char array to UNICODE"
Not so. If you wish, you can change the project to use the Multi-Byte Character Set instead.
In the project properties, under General, change Character Set to 'Use Multi-Byte Character Set'.
Now this will work:
char* pathOfFileType;
hFind = FindFirstFile(pathOfFileType, &FindFileData);
Supposing you want to keep using UNICODE (Visual Studio's name for the 2-byte encoding of Unicode characters native to Windows), then to pass a char array you have to explicitly call the ANSI (A) version of the API:
char* pathOfFileType;
hFind = FindFirstFileA(pathOfFileType, &FindFileData);