LabVIEW VISA Read doesn't read data after system settings change - io

I have greatly simplified my VI to the basic one below and it still doesn't work. I want to read the gain setting on my LakeShore 330 temperature controller. This is the way to do it, and it is the way it worked before I changed some PC system settings. The buffer result should be an integer with a value between 000 and 999. In my case it should be 020, but it shows 000 no matter what, even if I change it to a different value on the controller. VISA Read still gives 000 as output.
I had some issues with system settings such as the delimiter, and commas versus dots, for csv files. Basically my PC is now set to US standards instead of European, and all my other software packages work accordingly now. VISA Read still works fine with doubles etc. coming from the controller, but integers have all turned to 000.
Can anyone explain to me how VISA Read is affected by system settings, especially for integers? I am quite confused, because integers have no decimals, commas or other symbols.

Open your LabVIEW.ini file and look in the [LabVIEW] group for the key useLocaleDecimalPt: if that is true, LabVIEW uses your local computer's decimal sign; otherwise it uses a period (.).
VISA itself does not deal with integers; as your example shows, it outputs a string.
But the driver code for your LakeShore 330 might use an incorrect formatter; is the VI code viewable? If so, try to debug it. Here is an overview of all the format specifier codes for Format Into String, especially the %.; (point), %,; (comma) and %; (system default) codes.
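Since VISA only hands you text, the locale can only matter in the step that scans that text into a number. A rough sketch of the idea in Python (not LabVIEW; the reply strings here are made up for illustration):

import locale

# An explicit integer parse never touches the decimal sign, so a reply
# such as "020" survives any locale setting:
print(int("020\r\n".strip()))             # -> 20

# A locale-sensitive scan of a fractional reply is another story: with a
# US locale (period as decimal sign) the ',' below is treated as a
# thousands separator and the value is silently misread.
locale.setlocale(locale.LC_NUMERIC, "en_US.UTF-8")   # locale name varies by platform
print(locale.atof("20,5"))                # -> 205.0, not 20.5

If a plain integer reply like the gain turns into 000, that points at the scan format used in the driver VI rather than at VISA Read itself.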

Related

How to read a csv file that has points as thousand separator on excel

So, I've got this huge csv file that contains numbers that use "." as a thousands separator (I guess this is how they roll in Germany). Some of them are negative numbers.
I have to check that the sum matches a certain amount, just to be sure they sent me the correct data. When I simply replace the dots with nothing I get an incorrect total (close to the total they sent me, but still incorrect). And since I can't review the whole file to find whether something is wrong somewhere, I can't be certain whether the issue lies with the data or with something I didn't expect (like a line that uses "." as a decimal separator, for example, but maybe there are more exotic cases that I can't quite imagine).
I'm pretty sure there must be a way to make Excel understand that "." is a thousands separator, but so far I haven't managed to make a custom format understand what I'm trying to say.
Well, this is actually half true: I can make it understand that it should write 1.000.000 instead of 1000000, but I can't make it understand that it should read 1.000.000 as 1000000.
I also tried my luck at changing the separator in File > Options > Advanced > Use system separator, but it doesn't seem to work (at all: when I change it, nothing changes; maybe this feature is bugged).
NB: I'm French and my default separator is a space. Though I could change the language to English, I can't change it to German because the language pack is not installed and I can't install anything on my work computer (because of "security and blah blah blah").
Thank you for your kind help.
Regards.
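If a scripting step outside Excel is an option, here is a minimal sketch of the cleanup in Python (the file name data.csv, the semicolon delimiter and the amount column index are all assumptions; anything that does not match the 3-digit grouping is flagged for manual review instead of being summed):

import csv
import re

total = 0
with open("data.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter=";"):
        value = row[2]                                  # hypothetical amount column
        if re.fullmatch(r"-?\d{1,3}(\.\d{3})*", value):
            total += int(value.replace(".", ""))        # strip the thousands dots
        else:
            print("check this value by hand:", value)   # e.g. a real decimal point
print(total)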

Unicode Codepoints for special characters in MS Keyboard Layout Creator

My goal:
I am trying to get the MS Keyboard Layout Creator to allow me to perform a carriage return/enter whenever I hit the [R-Arrow] key in combination with the [Control] key, but still have the [R-Arrow] key perform as normal (i.e. move one character right) when hit alone. I'm doing this because my laptop keyboard's [Enter] key is busted, and I want to use this hack for a short time before I go ahead and get another keyboard. Yes, I know it might be easier to just get a new one. :)
As far as I can tell, I have almost figured everything out. The only pieces of information I still need are the exact hexadecimal codepoints for both 1) right-arrow navigation and 2) enter/carriage return. I am hoping someone can direct me to this info. I have found the Unicode reference, but I am unable to discern which codes I might use for the carriage return and the right-arrow navigation (not the right-arrow character →; I don't care about that).
Example code in my existing KLC file:
KBD Layout01 "Layout01 Description"
COPYRIGHT "(c) 2017 Company"
COMPANY "Company"
LOCALENAME "en-US"
LOCALEID "00000409"
VERSION 1.0
SHIFTSTATE
0 //Column 4
1 //Column 5 : Shft
2 //Column 6 : Ctrl
LAYOUT ;an extra '#' at the end is a dead key
//SC VK_ Cap 0 1 2
//-- ---- ---- ---- ---- ----
39 SPACE 0 0020 0020 -1 // SPACE, SPACE, <none>
53 DECIMAL 0 002e 002e -1 // FULL STOP, FULL STOP,
My understanding of the code (SPACEBAR example)
Looking at the preexisting examples in the file, (the space and the decimal) I have figured out the following:
Note: the examples in parentheses below refer only to the spacebar.
The first number is the keyboard key (e.g. 39 above)
The word which follows that number is the designated label to refer to that key (e.g. SPACE above)
the next three numbers are hexadecimal codepoints/symbols which refer to "SHIFTSTATES"
The first is the codepoint for what the key will output if pressed on its own (shift state 0, no modifier).
The second is the codepoint for what the key will output if pressed simultaneously with the SHIFT key.
The third is the codepoint for what the key will output if pressed simultaneously with the CONTROL key.
The goal: figuring out the codes for right-arrow navigation and enter
I have figured out this much of the line of code that I want to add so that pressing the right key alone will still navigate right, but the combination "control-right" will instead trigger a carriage-return/enter:
4d RIGHT 0 ??I don't know?? ??I don't know?? -1
I believe I know the following:
4d (in the 1st column) is the key code for the right arrow key
the handle RIGHT (in the 2nd column) is the handle/name for the right arrow
0 (in the 3rd column) means don't change the key if the capslock is pressed
What I need your help to figure out
What the codepoint/hexadecimal/unicode symbol is for performing a right arrow navigation (I think that is what goes in the fourth column if I want [Shift]-[Right-Arrow] to make the cursor move one character to the right).
What the codepoint/hexadecimal/unicode symbol is for performing a carriage-return/enter (I think that is what goes in the fifth column if I want [Control]-[Right-Arrow] to trigger an enter/carriage-return).
It may be that I am mistaken and the symbols I need are not unicode codepoints; if I am wrong, please correct me, as that info will help me get closer to my goal. Any help would be greatly appreciated!
I don't know if you still need this, but as I had already written down most of it, I'm posting it anyway.
I looked into it for a while; I haven't found an actual definitive answer, but I can give you some hints (I post this as an answer nonetheless because it would have been too unwieldy to use comments).
I have a strong feeling that what you ask is not possible (that control keys such as the arrows cannot be mapped to different keys/characters/functions when a modifier such as ctrl is pressed).
I'm not really a huge expert in these things but I can give you some pointers:
(in the following there is a good deal of information not much related to your problem, but it might help you understand better)
When you press a key in Windows there are at least 3 sets of codes that are involved:
Scan codes: these are the codes that are actually generated by the hardware and sent to the PC. I have little knowledge of them; I never had a need to use them and I was too young when they were more relevant. They can theoretically vary from keyboard to keyboard but they're largely standardized; USB keyboards are really standardized, from what I could understand, and their scan codes ought to be those listed in these HID Usage Tables (section 10). Wikipedia has some info but not a full list of the traditional codes. Most likely you won't need these, though (but maybe you will). By the way, these scan codes are also passed to the applications (I'm not sure how reliably), but they hardly ever use them.
Virtual-key codes: the scan codes in Windows are translated by the keyboard driver into a common set of key codes specified by Microsoft: the virtual-key codes. These are independent of the keyboard and are what is (normally) used by applications when they need to handle individual key presses.
Unicode, or other charset, characters: Windows recognizes when the keys being pressed are supposed to produce printable characters and passes these characters to the applications. When an application is only interested in printable characters it only looks at these, although when it needs to do more complex things (shortcuts...) it also has access to the virtual-key codes (and, if it really wants, to the scan codes). Unicode is a character set, not a "key-codes set", so it generally contains only printable characters. To facilitate interoperability with ASCII and other legacy charsets it also includes the control characters defined in previous standards, but among these control characters the arrow keys are not present, so there are no Unicode codepoints for the keyboard's arrows.
In the second column of the klc it would appear that you have to put the name of the virtual-key constant with VK_ removed. Quite weird indeed.
Several Microsoft documentation pages say that the WDK kbd.h file that you can also find in the inc directory of the Microsoft Keyboard Layout Creator has the detailed information about this stuff. Personally I couldn't make too much out of it, though.
If you really want to dig into this the late Michael Kaplan's blog has probably the information you're looking for, somewhere.
Your best bet is most likely to use some other application. I stumbled upon KbdEdit, which does handle the arrow keys, but it really seems that it can't assign a different function to the key when it is used with a modifier (though you can change the effect of the key altogether, irrespective of the pressed modifier).
For the Enter key you would likely need to use the virtual key, which is 0D (VK_RETURN).
The sequence of characters used to indicate line breaks on Windows is CR LF, which have (in Unicode and almost every other existing charset) codepoints 0D 0A, respectively.
The Windows message that notifies applications of entered characters (the third point above - I mean the WM_CHAR message, by the way) reports only a CR (0D) when you press Enter; so if those klc files use Unicode codepoints in some part, there's a good chance that they use that (CR) to indicate an Enter key.
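For reference, a small sketch of the concrete values involved (the constants are the Win32 virtual-key codes from WinUser.h; the Python here is only used to print them, the klc file itself stays unchanged):

# Win32 virtual-key codes (values from WinUser.h).
VK_RETURN = 0x0D   # Enter key
VK_RIGHT  = 0x27   # Right-arrow key

# The Windows line-break sequence and its Unicode codepoints.
CR, LF = "\r", "\n"
print(f"{ord(CR):04X} {ord(LF):04X}")   # 000D 000A -- the 0D/0A mentioned above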
All in all, your best bet is probably to just assign the Enter to a different key (for example a function key, the right ctrl or win key if you have them or the caps-lock).

PL/I character set and IBM Personal Communications - wrong characters are displayed

Some characters that I enter in the editor are not displayed identically to how they appear on the keyboard. So I get error messages like this:
Character with decimal value 176 does not belong to the PL/I character
set. It will be ignored.
when trying to compile a PL/I program.
Sometimes the character is even displayed properly, but I still get a similar error message.
Examples of such characters are the ones that represent logical OR and logical NOT.
How do I solve this problem? Is it a setting of the editor, or a setting of IBM Personal Communications? Or maybe it is better to enter the hexadecimal codes of those symbols (how do I do that, if possible, and how do I determine which code I need)?
There are a lot of places where this can go wrong...
The keyboard driver on your client machine has to be configured correctly for the keyboard you use. But if other programs work correctly and only the mainframe emulation behaves strangely, then this should be OK.
The PCOMM session has to be configured to use the correct host codepage. Ask your mainframe technical people what is used and configure your terminal emulation accordingly. Since we don't use PCOMM I can't help you with this; you will have to look around the session settings a bit.
In PL/I most characters are taken from the range that is identical in most EBCDIC codepages. The main exceptions are the characters for the OR and NOT operators, which may differ. The IBM default for OR is '4F'X, which is a pipe character '|' in codepage 1140 (English) but an exclamation mark '!' in codepage 1141 (German). The default for NOT is '5F'X, which is a logical NOT sign '¬' in 1140 but a caret '^' in 1141.
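A quick way to see the difference, sketched in Python (its cp1140 codec matches codepage 1140, and cp273 is the German EBCDIC page that 1141 extends with the euro sign, so it stands in for 1141 here):

# Decode the two operator bytes under an English and a German EBCDIC codepage.
for byte in (b"\x4f", b"\x5f"):
    print(byte.hex(), byte.decode("cp1140"), byte.decode("cp273"))
# expected output:
#   4f | !
#   5f ¬ ^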
Since these problems are well known, the compiler offers two options, OR() and NOT(), to set the characters to be used for these operators. So you might check in your compile listing whether these parameters are set in your installation and what their values are, since those are the characters you have to use.

String wired to Case Structure always goes to Default - LabVIEW

I am trying to take the readout from an instrument, and if the readout matches certain strings (from the instrument programming manual) I want to set an indicator to a specific value, different for each possible string. A case structure seems like the best option, with all the possible readouts as the cases. I did this and added "" as the default case to send out a value for the no-match case. The trouble is that if I wire the readout string to the case structure it always executes the default case, no matter what the readout is (and yes, before anyone asks, I verified that the readout strings match my cases exactly). To check that the case structure was working I wired a constant to the case structure and it works fine, even when I copy and paste the value from the readout string into the constant. Also, I made sure that case-insensitive matching was selected, so that's not the issue. Anyone have an idea why this is happening? I can post sample VIs if necessary.
Found the problem. I converted the strings into byte arrays and looked at the ASCII values. Apparently one had a newline character at the end even though there was no new line visible on the indicator. I fixed the comparison by trimming whitespace on the strings. Look out for that.
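The failure mode is easy to reproduce outside LabVIEW; a tiny sketch in Python (the readout string is made up):

readout = "GAIN 020\n"                   # hypothetical readout with a hidden newline
print(readout == "GAIN 020")             # False -- the trailing '\n' breaks the match
print(readout.strip() == "GAIN 020")     # True  -- trim whitespace before comparing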
To check exactly what's in your string you can wire it to an indicator, right-click on that indicator and choose '\' Codes Display. This will then show codes such as \n for newline, \00 for ASCII 0, \FF for ASCII 255, etc.

Is any software decent at importing column-aligned text?

Here's something that's really irked me over the years. I've never used any software that, when importing data from a column-aligned text file, can figure out the column breaks in a correct manner.
Excel 2K3 and a lot of other Microsoft components that seem to share a common codebase (like the import options for SQL2K) attempt to figure out the column breaks for you. Unfortunately, they only look at the first n rows, and are often completely wrong.
OpenOffice.org 3.1 has an import dialog almost exactly like Excel 2K3's, but it doesn't even attempt to guess the column breaks for you. And the latest version of Numbers doesn't appear to handle column-aligned imports at all.
Obviously column-aligned data is undesirable for a number of reasons, but a lot of older software (particularly in-house software various companies have floating around) exports data in this format so I do need to handle it every so often. Surely, somewhere, SOME software imports it well without me coding an import utility myself or manually specifying where twelve zillion columns start and stop?
OSX, Windows, whatever. I'm open to suggestions. The ultimate goal is to get it into a SQL Server table, but simply getting it into an Excel/XML/tab-delimited/etc. file in the meantime would be fine, because it's easy enough to get into SQL Server from there.
I tend to normalize such data with awk -- perhaps generating a csv file -- before trying to import it into Excel.
See the awk user's manual.
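Along the same lines, a minimal sketch in Python (assuming the pandas library; the file name and column offsets are made up) that turns a fixed-width export into a CSV you can feed to Excel or the SQL Server import wizard:

import pandas as pd

# (start, stop) character positions of each column -- made-up offsets.
colspecs = [(0, 10), (10, 25), (25, 33)]

df = pd.read_fwf("export.txt", colspecs=colspecs,
                 names=["id", "name", "amount"])
df.to_csv("export.csv", index=False)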
I don't think there is a silver bullet for your request. I think the best you can hope for is to define your input format once and be able to reuse that format when you receive a file with the same format again.
As one poster mentioned, you could use awk or, if .NET is more your thing, FileHelpers. It's an open-source .NET library that does a good job reading and writing both fixed-length and delimited files. The downside is that you would be creating a .NET application to do the work (either inserting directly into a DB or perhaps creating an output file). On the plus side, once created, you could reuse the mapping classes again if you get the same file format.
Well, obviously no software can be entirely correct in guessing the layout of a fixed-column file, since there is no separator (though variable-width columns with higher maximum lengths will often produce enough space at the end to start guessing). For example, the following could be anywhere from 1 to 9 columns (I have personally had to figure out some super-packed fixed-column layouts like this, only much longer):
135464876
647873159
345467575
If SQL Server is the ultimate destination, have you looked into the SQL Server import wizard?
Right-click your database in Management Studio and select Tasks -> Import Data. Proceed through and select "Flat File" as your data source. In the format dropdown, change from Delimited to Fixed Width. On the left you can now use the Columns screen to draw the column separators. There is also an advanced and a preview screen.
Try out this demo (I was on development team):
Personator 4
Install, run the program, go to Tools | ASCII Conversion | Import from ASCII.
The import will be to DBF/FoxPro, but you can then export that file into one of the formats you mentioned.
The start/stop guesser uses a few statistical formulas to try to get the boundaries correct; you get to verify and/or correct with a graphical editor after analysis.
If you save your file as a text file and attempt to open it in Microsoft Excel 2007 and select "Fixed Width", Excel will "guess" where the breaks occur (based on whitespace), but you can actually change where the column field breaks will occur. The wizard shows vertical lines that can be moved left or right by any number of characters, so if Excel guesses incorrectly you can still change where the field breaks should be. On step 2 of the wizard, just move the vertical lines to the left or right to adjust Excel's guesses; you can see which character number each field break occurs at before importing.
