ENTER keyword in Basic

I am going over some ancient 1975 BASIC code and found what looks like a keyword in the line:
ENTER #P,B2,B1,C$
I know that B2, B1 and C$ are all variables, and, though it would not make much sense, P may be a variable also. Does anyone know what the ENTER keyword is and what these parameters refer to? Thanks in advance.

The ENTER statement is extraordinarily obscure; it does not appear in either the first or third editions of David Lien’s BASIC Handbooks, for example.
By searching on “ENTER #P” I was able to find one BASIC variant that does include it: HP 2000/Access BASIC. This particular manual is dated September 1975, so it fits loosely with the date of the program you’re looking at.
In HP 2000/Access BASIC (pages 2-15 and 11-33 of the manual), the ENTER statement is a variation of INPUT, one that provides no prompt and also stores more information about how and where the input was provided. The significance of the pound sign (#) is that it assigns the user port number to that variable. In your case, the “user port number” is assigned to the variable P.
In your example:
P will be assigned the user port number.
B2 is the number of seconds the user has to respond (an integer in the range 1 to 255).
B1 is the number of seconds the user actually took to respond. If the user does not respond in time, B1 is set to -256; -257 and -258 indicate error conditions.
C$ is the variable which will be assigned the value the user enters, if they enter it in the appropriate time.
The “user port number” does not appear to be defined in the manual; however, contextually it appears to be the port on which the user is logged into the system. It can be a number “in the range from 0 to 31.” The port number appears to be mainly, or only, used for logging purposes. I don’t see anywhere in the manual where the port number can be targeted by the BASIC program.
From the manual:
The ENTER statement provides the program with more control over the input operation. The statement can limit the amount of time allowed to respond with data, provide the program with the actual time taken to respond, indicate whether the data was acceptable, and return the port number of the user's terminal. The port number can be obtained separately, without involving the user at all, or together with a single data value. Data can also be requested without asking for the port number. If a number sign (#) follows the keyword ENTER, then the variable it precedes is assigned the user port number (an integer in the range 0 to 31). Otherwise (or following the port return variable) the first expression is evaluated and rounded to an integer (which must be in the range 1 to 255) specifying the number of seconds permitted the user to respond. No prompt is printed, the program must notify the user that input is expected by a message in a preceding PRINT statement. The variable following the expression is set to the approximate time, in seconds, that the user took to respond. If the constant was not legal, the time is negated. If the allotted time has elapsed, the value -256 is returned; the values -257 and -258 indicate a transmission problem occurred and the user should be asked to respond again. The last variable in the list is assigned the single value which the user is expected to enter. Unlike the INPUT statement, ENTER does not respond to the return (which completes the user's answer) with a line feed.
Examples:
10 ENTER #P
20 ENTER 255,R,A
30 ENTER #P,T1,R,A(I)
Fortunately P is the variable used for the port number in this example as it is in yours; otherwise, the search would not have found this manual.
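Not HP BASIC, but if it helps to see the parameter roles in modern terms, here is a rough Python analogue of ENTER #P,B2,B1,C$ (a sketch assuming a Unix terminal; the port number is simply passed in, since modern systems have no direct equivalent):
import select
import sys
import time

def enter(port, allowed_seconds):
    # port            : stands in for P, the user port number (0-31) that ENTER # returns
    # allowed_seconds : stands in for B2, the 1-255 second response limit
    start = time.time()
    ready, _, _ = select.select([sys.stdin], [], [], allowed_seconds)
    if not ready:
        return port, -256, ""                  # B1 = -256 when the allotted time elapses
    value = sys.stdin.readline().rstrip("\n")  # C$ : the single value the user typed
    took = round(time.time() - start)          # B1 : seconds the user actually took
    return port, took, value

# No prompt is printed, just as with ENTER; the program must announce itself first.
print("Type a value within 10 seconds:")
print(enter(port=7, allowed_seconds=10))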

Related

Addition of STOP statement changes output value

A value obtained for a (double precision) variable is written to the screen as part of my Fortran program. Following the write statement, the program continues uninterrupted.
However, when I add a stop statement immediately after the write, I get a different (screen printed) value for this variable. The change in values begins at the 6th significant digit.
The code includes:
enertot=energy0(x,y,z,size) !energy0 is a function
write (*,*) 'The initial energy is', enertot
This outputs some (plausible) value for enertot on the screen, and the program goes on.
Now adding the stop:
enertot=energy0(x,y,z,size) !energy0 is a function
write (*,*) 'The initial energy is', enertot
stop
This gives me a different value for enertot.
The same problem happens regardless of which compiler I use (f90/95 compilers). Could it have to do with the machine being very old, running an outdated Fedora Linux OS?
Even weirder - when I run the exact same program on my Windows laptop with the Silverfrost compiler, I get a third result for enertot altogether, differing from the previous results starting from the 5th significant digit. In this case, however, addition of the stop doesn't change the printed value at all.
Any thoughts?

Variable length messages in Verilog (serial CRC-32)

I'm working with a serial protocol. Messages are of variable length that is known in advance. On both transmission and reception sides, I have the message saved to a shift register that is as long as the longest possible message.
I need to calculate the CRC32 of these registers, the same as for Ethernet, as fast as possible. Since messages are of variable length (anything from 12 to 64 bits), I chose a serial implementation that should already run in parallel with reception/transmission of the message.
I ran into a problem with organizing the data before the calculation. As specified here, the data needs to be bit-reversed, padded with 32 zeros, and complemented before the calculation.
Even if I set aside the part about running in parallel with receiving or transmitting data, how can I efficiently extract only my relevant message from the max-length register so that I can pad it before the calculation? I know that ideas like
newregister[31:0] <= oldregister[X:0] // X is my variable length
don't work. It's also impossible to have the generate for-loop that I use to bit-reverse the old vector run a variable number of times. I could use a counter to serially move the data to the desired length, but I cannot afford to lose that much time.
Alternatively, is there an operation that would directly give me the padded and complemented result? I do not even have an idea how to start developing such an idea.
Thanks in advance for any insight.
You've misunderstood how to do a serial CRC; the Python question you quote isn't relevant. You only need a 32-bit shift register, with appropriate feedback taps. You'll get a million hits if you do a Google search for "serial crc" or "ethernet crc". There's at least one Xilinx app note that does the whole thing for you. You'll need to be careful about preloading the 32-bit register with the correct value, and about whether or not you invert the 32-bit result on completion.
EDIT
The first hit on 'xilinx serial crc' is xapp209, which has the basic answer in fig 1. On top of this, you need the taps, the preload value, whether or not to invert the answer, and the value to check against on reception. I'm sure they used to do all this in another app note, but I can't find it at the moment. The basic references are the Ethernet 802.3 spec (3.2.8 Frame check Sequence field, which was p27 in the original book) and the V42 spec (8.1.1.6.2 32-bit frame check sequence, page 311 in the old CCITT Blue Book). Both give the taps. V42 requires a preload to all 1's and inversion on completion, and gives the test value on reception. Warren has a (new) chapter in Hacker's Delight, which shows the taps graphically; see his website.
You only need the online generators to check your solution. Be careful, though: they will generally have different preload values, and may or may not invert the result, and may or may not be bit-reversed.
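If you want a software golden model to check your hardware against in simulation, here is a minimal bit-serial sketch of the Ethernet CRC-32 in Python (assuming the usual reflected polynomial 0xEDB88320, preload of all ones, and final inversion); it is a checking aid, not synthesizable code:
def crc32_serial(bits):
    # Bit-serial Ethernet CRC-32: LSB-first bits, reflected polynomial,
    # register preloaded to all ones, result inverted on completion.
    crc = 0xFFFFFFFF
    for b in bits:
        feedback = (crc ^ b) & 1
        crc >>= 1
        if feedback:
            crc ^= 0xEDB88320
    return crc ^ 0xFFFFFFFF

def bytes_to_bits(data):
    # Ethernet transmits each byte least-significant bit first.
    return [(byte >> i) & 1 for byte in data for i in range(8)]

# Standard check value for the ASCII string "123456789":
assert crc32_serial(bytes_to_bits(b"123456789")) == 0xCBF43926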
Since X is a variable, you will need to do the bit assignments with a for-loop. The for-loop needs to be inside an always block, and the for-loop must statically unroll (i.e. the starting index, ending index, and step value must be constants).
integer i;
always @(posedge clk) begin  // assuming clk is the design's clock
  for (i = 0; i < 32; i = i + 1) begin
    if (i < X)
      newregister[i] <= oldregister[i]; // copy only the relevant message bits
    else
      newregister[i] <= 1'b0;           // pad zeros beyond the message length
  end
end

How to work with the COUNTER in Nagios or RRD?

I have the following problem:
I want to keep statistics for data that should be constantly increasing, for example the number of visits to a link. After some time the count is reset and starts again from the beginning, so to get a continuously increasing total I want to record the statistics somewhere. For this purpose I use a system that supports data types such as COUNTER, GAUGE, AVERAGE, and so on, and I want to use the COUNTER. The system is built on Nagios.
My question is how to use this COUNTER. I assume it works the same as the one in RRD, but I ran into some strange behaviour when creating such a COUNTER.
I submit the value 1, then 2, and expect the chart to show 3; when I do this, it doesn't work. After a restart, for example, I submit 1 again and expect the total to become 4.
Can anyone who has dealt with this tell me briefly how this COUNTER works?
I have seen that the COUNTER is used for traffic on routers and the like, but I want to use it for a regular graph that just keeps increasing.
The RRD data type COUNTER will convert the input data into a rate, by taking the difference between this sample and the last sample, and dividing by the time interval (note that data normalisation also takes place and this is dependent on the Interval setting of the RRD)
Thus, updating with a constantly increasing count will result in a rate-of-change value to be graphed.
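As a rough sketch of that arithmetic (ignoring data normalisation and counter wrap, so not literally what rrdtool does):
def counter_rate(prev, cur):
    # COUNTER stores the rate of change, not the submitted value itself.
    (t0, v0), (t1, v1) = prev, cur
    return (v1 - v0) / (t1 - t0)

samples = [(0, 0), (300, 10), (600, 25), (900, 55)]  # (time in seconds, cumulative visits)
rates = [counter_rate(a, b) for a, b in zip(samples, samples[1:])]
print(rates)  # [0.033..., 0.05, 0.1] -> visits per second, not the running total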
If you want to see your graph actually constantly increasing, i.e. showing the actual count of packets transferred (for example) rather than the rate of transfer, you would need to use type GAUGE, which assumes any rate conversion has already been done.
If you want to submit the rate values (e.g., 2 in the last minute) but display the overall constantly increasing total (in other words, the inverse of how the COUNTER data type works), then you would need to store the values as GAUGE and use a CDEF in your RRDgraph command of the form CDEF:x=y,PREV,+ to obtain the ongoing total. Of course you would only have this relative to the start of the graph time window; maybe a separate call would let you determine what base value to use.
As you use Nagios, you may like to investigate Nagios add-ons such as pnp4nagios which will handle much of the graphing for you.

explain me a difference of how MRTG measures incoming data

Everyone knows that MRTG needs at least one value to be passed on its input.
In the per-target options, MRTG has 'gauge', 'absolute' and the default (no option) behaviour for what to do with incoming data, or how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics of 'how many packets were received by the interface'.
We take it from '/proc/net/dev' or look at the 'ifconfig' output for a certain network interface. The number of received bytes increases every time. It's cumulative.
So, as I imagine it, there could be two types of possible statistics:
1. How fast this value changes over the time interval. In other words - activity.
2. A simple, as-is growing graph that just draws every new value per minute (or any other time interval).
The first graph will be jumpy (activity). The second will just keep growing over time.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), and that the default or 'absolute' behaviour tries to calculate the speed between neighbouring measurements, but what's the difference between the last two?
Can you explain in a simple manner which behaviour corresponds to which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate)
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, possibly one that wraps around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 32 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs)
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
For information on this, see Alex van der Bogaerdt's excellent tutorial
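To make the distinction concrete, here is an illustrative sketch in Python (not MRTG's actual code, and ignoring data normalisation and counter wrap) of how a single update is turned into the stored rate under each type:
def as_rate(kind, value, prev_value, seconds_since_last):
    if kind == "gauge":      # value is already a rate; stored as-is
        return value
    if kind == "absolute":   # value is the count since the last read
        return value / seconds_since_last
    if kind == "counter":    # value is an ever-growing total
        return (value - prev_value) / seconds_since_last
    raise ValueError(kind)

# Interface byte counter went from 1,000,000 to 1,300,000 over 300 s:
print(as_rate("counter", 1300000, 1000000, 300))  # 1000.0 bytes/s
# A log-truncating script reported 600 new lines since the last read:
print(as_rate("absolute", 600, None, 300))        # 2.0 lines/s
# CPU usage already measured as a percentage:
print(as_rate("gauge", 37.5, None, 300))          # 37.5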

Why should checking a wrong password take longer than checking the right one?

This question has always troubled me.
On Linux, when asked for a password, if your input is the correct one, it checks right away, with almost no delay. But, on the other hand, if you type the wrong password, it takes longer to check. Why is that?
I observed this in all Linux distributions I've ever tried.
It's actually to prevent brute force attacks from trying millions of passwords per second. The idea is to limit how fast passwords can be checked and there are a number of rules that should be followed.
A successful user/password pair should succeed immediately.
There should be no discernible difference in reasons for failure that can be detected.
That last one is particularly important. It means no helpful messages like:
Your user name is correct but your password is wrong, please try again
or:
Sorry, password wasn't long enough
Not even a time difference in response between the "invalid user and password" and "valid user but invalid password" failure reasons.
Every failure should deliver exactly the same information, textual and otherwise.
Some systems take it even further, increasing the delay with each failure, or only allowing three failures then having a massive delay before allowing a retry.
This makes it take longer to guess passwords.
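As a purely illustrative sketch of those rules (this is not how PAM or login actually implement it), a checker can respond identically to every failure and add a fixed delay before answering:
import hmac
import time

def login(username, password, accounts):
    stored = accounts.get(username, "")         # unknown users take the same path as bad passwords
    ok = hmac.compare_digest(stored, password)   # constant-time comparison
    if not ok:
        time.sleep(3)                            # same fixed delay for every failure
        return "Login incorrect"                 # same message for every failure reason
    return "Welcome"

accounts = {"alice": "hunter2"}                  # plaintext only to keep the sketch short
print(login("alice", "hunter2", accounts))       # Welcome, immediately
print(login("alice", "wrong", accounts))         # Login incorrect, after 3 seconds
print(login("mallory", "x", accounts))           # Login incorrect, after 3 seconds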
I am not sure, but it is quite common to add a delay after a wrong password is entered to make attacks harder. This makes an attack practically infeasible, because it will take you a long time to check even a few passwords.
Even trying a few passwords (birthdates, the name of the cat, and things like that) is turned into no fun.
Basically, it is to mitigate brute-force and dictionary attacks.
From The Linux-PAM Application Developer's Guide:
Planning for delays
extern int pam_fail_delay(pam_handle_t *pamh, unsigned int micro_sec);
This function is offered by Linux-PAM to facilitate time delays following a failed call to pam_authenticate() and before control is returned to the application. When using this function the application programmer should check if it is available with,
#ifdef PAM_FAIL_DELAY
....
#endif /* PAM_FAIL_DELAY */
Generally, an application requests that a user is authenticated by Linux-PAM through a call to pam_authenticate() or pam_chauthtok(). These functions call each of the stacked authentication modules listed in the relevant Linux-PAM configuration file. As directed by this file, one or more of the modules may fail, causing the pam_...() call to return an error. It is desirable for there to also be a pause before the application continues. The principal reason for such a delay is security: a delay acts to discourage brute force dictionary attacks primarily, but also helps hinder timed (covert channel) attacks.
It's a very simple, virtually effortless way to greatly increase security. Consider:
System A has no delay. An attacker has a program that creates username/password combinations. At a rate of thousands of attempts per minute, it takes only a few hours to try every combination and record all successful logins.
System B generates a 5-second delay after each incorrect guess. The attacker's efficiency has been reduced to 12 attempts per minute, effectively crippling the brute-force attack. Instead of hours, it can take months to find a valid login. If hackers were that patient, they'd go legit. :-)
Failed authentication delays are there to reduce the rate of login attempts. The idea is that if somebody is running a dictionary or brute force attack against one or many user accounts, the attacker will be required to wait out the fail delay, forcing them to take more time and giving you more chance to detect the attack.
You might also be interested to know that, depending on what you are using as a login manager, there is usually a way to configure this delay.
In GDM, the delay is set in the gdm.conf file (usually in /etc/gdm/gdm.conf). You need to set RetryDelay=x, where x is a value in seconds.
Most Linux distributions these days also support having FAIL_DELAY defined in /etc/login.defs, allowing you to set a wait time after a failed login attempt.
Finally, PAM also allows you to set a nodelay attribute on your auth line to bypass the fail delay. (Here's an article on PAM and Linux.)
I don't see that it can be as simple as the responses suggest.
If the response to a correct password is (more or less) immediate, don't you only have to wait longer than that to know the password is wrong? (At least probabilistically, which is fine for cracking purposes.) And anyway you'd be running this attack in parallel... is this all one big DoS welcome mat?
What I tried before appeared to work, but actually did not; if you care you must review the wiki edit history...
What does work (for me) is, to both lower the value of pam_faildelay.so delay=X in /etc/pam.d/login (I lowered it to 500000, half a second), and also add nodelay (preceded by a space) to the end of the line in common-auth, as described by Gabriel in his answer.
auth [success=1 default=ignore] pam_unix.so nullok_secure nodelay
At least for me (debian sid), only making one of these changes will not shorten the delay appreciably below the default 3 seconds, although it is possible to lengthen the delay by only changing the value in /etc/pam.d/login.
This kind of crap is enough to make a grown man cry!
On Ubuntu 9.10, and I think newer versions too, the file you're looking for is located at
/etc/pam.d/login
Edit the line:
auth optional pam_faildelay.so delay=3000000
changing the 3 to whatever number you want (the value is in microseconds, so 3000000 is three seconds).
Note that to have a 'nodelay' authentication, I THINK you should edit the file
/etc/pam.d/common-auth
too. On the line:
auth [success=1 default=ignore] pam_unix.so nullok_secure
add 'nodelay' at the end (without quotes).
But this last part about 'nodelay' is only what I think.
I would like to add a note from a developer's perspective. Though it wouldn't be obvious to the naked eye, a smart developer would break out of a matching query as soon as the match is found. As a result, a successful match would complete faster than a failed match, because the matching function compares the credentials against the known accounts only until it finds the correct one. In other words, let's say there are 1,000,000 user accounts ordered by ID: 001, 002, 003 and so on. Your ID is 43,001. So, when you put in a correct username and password, the scan stops at 43,001 and logs you in. If your credentials are incorrect then it scans all 1,000,000 records. The difference in processing time on a dual-core server might be in the milliseconds. On Windows Vista with 5 user accounts it would be in the nanoseconds.
I agree. This is an arbitrary programming decision. Setting the delay to one second instead of three doesn't really hurt the crackability of the password, but does make it more user-friendly.
Technically, this deliberate delay is to prevent attacks like the "Linearization attack" (there are other attacks and reasons as well).
To illustrate the attack, consider a program (without this deliberate delay) which checks an entered serial to see whether it matches the correct serial, which in this case happens to be "xyba". For efficiency, the programmer decided to check one character at a time and to exit as soon as an incorrect character is found; before the comparison begins, the lengths are also checked.
The correct serial length will take longer to process than an incorrect serial length. Even better (for the attacker), a serial number that has the first character correct will take longer than any that has an incorrect first character. The successive steps in waiting time arise because each additional correct character means one more loop iteration and comparison to go through.
So, the attacker can try four-character strings and find (by timing the guesses) that strings beginning with x take the most time.
The attacker can then fix the first character as x and vary the second character, in which case they will find that y takes the longest.
The attacker can then fix the first two characters as xy and vary the third character, in which case they will find that b takes the longest.
The attacker can then fix the first three characters as xyb and vary the fourth character, in which case they will find that a takes the longest.
Hence, the attackers can recover the serial one character at a time.
Linearization.java.
Linearization.docx, sample output
The serial number is four characters long and each character has 128 possible values. Then there are 128^4 = 2^28 = 268,435,456 possible serials. If the attacker must randomly guess complete serial numbers, she would guess the serial number in about 2^27 = 134,217,728 tries, which is an enormous amount of work. On the other hand, by using the linearization attack above, an average of only 128/2 = 64 guesses are required for each letter, for a total expected work of about 4 * 64 = 2^8 = 256 guesses, which is a trivial amount of work.
Much of the written material is adapted from this (taken from Mark Stamp's "Information Security: Principles and Practice"). Also, the calculations above do not take into account the amount of guesswork needed to figure out the correct serial length.
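For a concrete illustration (a hypothetical Python demo, not the Linearization.java linked above; the secret, alphabet and per-character delay are invented for the example), timing an early-exit comparison recovers the serial one character at a time:
import time

SECRET = "xyba"                          # the serial the checker knows
ALPHABET = "abcdefghijklmnopqrstuvwxyz"  # 26 values here instead of 128

def check_serial(guess):
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False                 # early exit leaks how many characters matched
        time.sleep(0.002)                # stand-in for per-character work
    return True

def recover_serial(length):
    known = ""
    for _ in range(length):
        def elapsed(c):
            probe = (known + c).ljust(length, "?")
            start = time.perf_counter()
            check_serial(probe)
            return time.perf_counter() - start
        known += max(ALPHABET, key=elapsed)  # the slowest candidate matched one more character
    return known

print(recover_serial(len(SECRET)))       # prints "xyba" after about 4 * 26 guesses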
