Why did we bother with line numbers at all? [closed] - basic

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
When you write something in BASIC, you are required to use line numbers. Like:
10 PRINT "HOME"
20 PRINT "SWEET"
30 GOTO 10
But I wonder: who came up with the idea to use line numbers at all? It is such a nuisance, and left quite an "echo" in the developing (pun intended) world!

The idea back then was that you could easily add code anywhere in your program by using an appropriate line number. That's why everybody used line numbers 10, 20, 30... so that there was room left in between:
10 PRINT "HOME"
20 PRINT "SWEET"
30 GOTO 10
25 PRINT "HOME"
After typing that last line, a LIST shows line 25 slotted in between 20 and 30, so the program now prints HOME SWEET HOME in a loop.
On the first interfaces BASIC was available for, there was no shiny editor, not even something like vi or emacs (or DOS edit, heh). You could only print out your program on the console, and then you would add new lines or replace existing ones by giving the appropriate line number first. You could not navigate through the "file" (the program was kept in memory, although you could save a copy on disk) with the cursor like you are used to nowadays.
Therefore the line numbers weren't only needed as labels for the infamous GOTO; they were also needed to tell the interpreter where in the program the line you were typing belonged.

It has a long, long history.
Line numbering actually comes from Dartmouth BASIC, which was the original version of the BASIC programming language and an integral part of the Dartmouth Time Sharing System (DTSS). That DTSS had a rudimentary IDE, which was nothing more than an interactive command line.
So every line typed inside this "IDE" that began with a line number was added to the program, replacing any previously stored line with the same number; anything else was assumed to be a DTSS command and executed immediately.
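Purely as an illustration of that split, here is a hypothetical Python sketch (not DTSS code; the dict-based store and the LIST handling are my own stand-ins):
# Toy sketch of a line-number-driven editor loop (illustrative only)
program = {}  # maps line number -> statement text

def handle(entry):
    first, _, rest = entry.strip().partition(" ")
    if first.isdigit():
        # A leading line number means "store (or replace) this program line"
        program[int(first)] = rest
    elif entry.strip().upper() == "LIST":
        # Listing always comes out in line-number order, however lines were typed
        for num in sorted(program):
            print(num, program[num])
    else:
        print("would execute immediately:", entry.strip())

for entry in ['10 PRINT "HOME"', '30 GOTO 10', '20 PRINT "SWEET"', 'LIST']:
    handle(entry)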

Before there was such a thing as a VDT (video display terminal), we old-timers programmed on punch cards. Punch cards reserved columns 73-80 for sequence numbers - if you dropped your card deck and it all got out of order, you could put the deck in a card sorter that would order the cards based on those sequence numbers. In many ways, the BASIC line numbers were similar to those sequence numbers.
Another advantage in the BASIC world is that, in the old days, BASIC was interpreted as it was run. Using labels rather than sequential line numbers for branches would require a first pass to pick up all the labels and their locations, whereas with line numbers the interpreter knows whether it needs to start scanning forwards or backwards for the destination.
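A rough sketch of that idea in Python (hypothetical, not any real interpreter's code): comparing the GOTO target with the current line number tells you which direction to scan.
# Sketch: a target greater than the current line number means "scan forward",
# otherwise "scan backward" (illustrative only)
lines = [(10, 'PRINT "HOME"'), (20, 'PRINT "SWEET"'), (30, 'GOTO 10')]

def find_target(current_index, target):
    step = 1 if target > lines[current_index][0] else -1
    i = current_index
    while 0 <= i < len(lines):
        if lines[i][0] == target:
            return i
        i += step
    raise ValueError("Undefined line %d" % target)

print(find_target(2, 10))  # from line 30, scan backwards and land on index 0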

Back in the day you didn't have a 2 dimensional editor like emacs or vi. All you had was the command line.
Your program was stored in memory and you would type in single line commands to edit single lines.
If you were a Unix god you could do it with ed or something, but for BASIC on a C-64, VIC-20, or TRS-80 you'd just overwrite the line.
So a session might look like:
$10 PRINT "Hellow World"
$20 GOTO 10
$10 PRINT "Hello World"
And now the program would work correctly.
Some older mainframes even had line terminals without a screen. Your whole session was printed on paper in ink!

The "Who?" would be the inventors, Kemeney and Kurtz.
After reading the replies, I checked the Wikipedia entry for "Dartmouth BASIC", and was surprised to learn
The first compiler was produced before the time-sharing system was ready. Known as CardBASIC, it was intended for the standard card-reader based batch processing system.
So, it looks like Paul Tomblin "gets the square".

Paul Tomblin's answer is the most comprehensive, but I'm surprised no one has mentioned that a big part of the BASIC project's initial goal was to provide a beginner-friendly interactive environment using timesharing. (Kurtz and Kemeny's vision for "universal access for all students" was far ahead of its time in this regard.)
The BASIC system that was developed to fulfill this goal featured Teletype ASR-33 (and later other) printing terminals. When connected to a timesharing-capable OS, these allowed editing and running BASIC programs interactively (unlike working with punched cards), but they were not cursor-addressable. Line numbers were a beginner-friendly way to both specify the order of program statements and allow unambiguous editing in the absence of a screen editor. The Wikipedia entry for "line editor" explains further, and anyone who's ever tried to use a line editor (such as the Un*x 'ed') can appreciate why Kurtz and Kemeny should be thanked for sparing beginners the cryptic command sequences required for editing text in this manner.

They originated in FORTRAN, from which BASIC was derived. However, in FORTRAN only lines referenced by other lines (like GOTO targets) needed numbers. In BASIC they had a secondary use, which was to allow editing of specific lines.

Back in the fifties, when high-level programming languages were in their infancy, there were no terminals, no editors, no monitors (yes, no monitors), just card punches and readers (for writing cards and reading their contents into the computer's memory) and printers (for printing results, naturally).
Later, tape was introduced, but that's another story.
Each punch card had its own number. There were several reasons for that, from simply keeping the cards in order to determining the sequence of execution. Each card was one line of code (in today's terms). Since, at that time, there were no constructs like if..then..else, or any variant of the like, the sequence of execution had to be determined somehow. So GOTO statements were introduced. They were the basis of loops. The term "spaghetti code" comes from that time period as well, since badly written code was relatively hard to follow, like spaghetti on a plate :)

I'd guess it comes from assembler, where each instruction has an address which may be jumped to by another instruction.
Additionally, the first computers didn't have much memory, and storing a line number takes only two bytes (if done properly). Writing a label takes more memory: first where the label is defined, and then again in every jump command that references it.
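A back-of-the-envelope comparison (Python here just for the arithmetic; the exact storage format varied from interpreter to interpreter):
import struct

# Any line number up to 65535 fits in two bytes...
print(len(struct.pack("<H", 64000)))  # 2
# ...while a textual label costs a byte per character where it is defined,
# and again at every GOTO/GOSUB that references it.
print(len(b"MAIN_LOOP"))              # 9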
Finally, in the good old days there weren't any fancy editors. The only "editor" was a simple command-line interface, which treated everything starting with a number as part of the program and everything else as a command to be executed immediately. The most prominent example is probably the Commodore 64.
Newer dialects of BASIC no longer need line numbers.

In BASIC, if you didn't have line numbers, how could you perform a
GOTO 10
? That was a way to jump between lines, and a good one, found more than 20 years ago!
Today, line numbers help us catch errors and exceptions, because debuggers are made to tell us that we got an exception on line xxx so we can jump right to it!
Imagine a world without line numbers... how could a reporter be paid without lines?
"Now that you have read the novel, you have to write a summary of no more than 50 lines."
Remember that? Even at school we learn about line numbers!
If they hadn't been invented, someone would have invented them again so we could use them nicely :)

Not all versions of BASIC required line numbers. QBasic, for instance, supported labels. You could then jump to those with GOTO (ignoring Dijkstra's "Go To Statement Considered Harmful," for the moment).

The answer is already above. Paul Tomblin wrote it (with a caveat to zabzonk). Actually, I would argue that any answer which does not mention punch cards is incomplete, and if it mentions neither punch cards nor FORTRAN, it is wrong. I can say this with confidence because my parents both used punch cards on a regular basis (they started with FORTRAN 66 and 77), then migrated to BASIC and COBOL in the 80s.

In the early days, most programs were entered with punch cards. The punch cards were usually entered in sequence, usually one instruction per card, with labels (JMP/JSR targets) being a separate instruction card.
To edit your program, you replaced the card.
Later implementations added an optional sequence number on the right end of the line, so that when/if they got out of order, they could be resequenced by an automated reader.
Fortran used both numeric target labels on the left (col 1-5) and left a reserved block on the right (73-80) for sequence or comment.
When BASIC was initially written, it was decided to move the sequence numbers to the left, into FORTRAN's label field, and to allow overwriting prior cards' memory footprint... as an editing mode. This was intended for the interactive dev environment, but worked just as well with cards. And cards were used in some early implementations for a variety of reasons.
Keep in mind: many computers had only a card-reader and printer interface right through the late 1970s. Even though interactive-mode BASICs were available, card-punched BASIC programs were frequently used. Since many of them simply fed into the same IDE, they worked exactly the same way, including needing a "RUN" card at the end. In such cases, one could simply tack on a correction card and another RUN card to rerun with a variation on some variable; likewise, in complex programs, adding a corrected line on a new card before the RUN card was enough to patch out problems without spending precious time finding the errant card itself.

I like the robot church in Futurama; on the walls was written stuff like
10 SIN
20 GOTO HELL
On the Speccy you couldn't edit a line without the line number.

I find them very helpful when pairing. I don't have to point at a line when my pair has the keyboard, I can just say, "on line 74, shouldn't that really be getMoreBeer()?"

The original editor for DOS was a wonderful utility called edlin. You could only edit a single line at a time. To make life even more interesting, in many versions of BASIC you could type lines out of order (line 10, 20, 30, 25, 5) and execution would follow the line numbers, not the order of appearance.

Related

Best workflow using Vim buffers

In the 2-3 years since I started using Vim as my main editor, I've learned to use windows (splits) when working with multiple files (because in every task, I need lots of files to work with).
But a few days ago, I ran into this question and it blew my mind (and my workflow :))
So I tried to use buffers and no windows and it's really hard. Imagine having multiple blocks (folders), each with a model.php and a controller.php in it.
At the start of a task, I don't know which block I need, so after a few minutes I'll have multiple model.phps and controller.phps open.
Now, if I don't have every file that I need in my buffer list, I have to search the buffers first, and when I see that I haven't loaded a file, I have to use the explorer to load it into a buffer. So it's something like this:
:ls<CR>
{If the file that I need is here then}
:b num<CR>
{else}
:FZF {and finding that file}
So it's a lot harder than just working with windows (where I can see which files are loaded right in front of me).
(And of course the overhead of finding buffers and searching them by name/number feels like opening the file every time you need it.)
But as said in that question and lots of other places, buffers should make your workflow easier than windows, and windows should be used only for diffs and the like.
So are there any better ways to use buffers, or am I doing something wrong?
(BTW, I'm currently using :Buffers from fzf.vim)
So I tried to use buffers and no windows and it's really hard.
This means that you misunderstood both the spirit and the letter of the linked answer.
To recap, Vim's exact equivalent of "documents" in regular document-based applications is buffers. Vim also gives you a first layer of abstraction on top of buffers (windows), and another one on top of windows (tab pages), in order to give you more flexibility in building your workflow.
Forcing oneself to use buffers instead of windows, or instead of tab pages, or whatever, makes no sense, as there is value in all three, and such an attitude would only decrease the overall value of your editor. Use the interaction model that best suits your needs, not the interaction model that you convinced yourself is the purest.
As for the confusion between files and buffers, how about the confusion between files and tab pages or between buffers and windows? When you are dealing with abstractions built on top of other abstractions you have to have commands specific to one layer or another and learning how that layered cake works gives you the necessary intuition for deciding what command to use and when.
Basically, you have 3 cases:
Case #    Is a buffer    Is a file
1         Y              Y
2         Y              N
3         N              Y
In case #1, the buffer is associated with a file so you can use both file-centric and buffer-centric commands to reach your target.
In case #2, the buffer is not associated with a file so you can only use buffer-centric commands to reach your target.
In case #3, there is no buffer so you can only use file-related commands to reach your target.
Another way to think about it is to ask the question "Have I already been there?". If the answer is "no", then use file-centric commands, if the answer is "yes", use buffer-centric commands. If you have no idea or if you don't want to think about any of this, just use file-centric commands as a fallback.
Note that the context of that answer was "buffers vs windows vs tab pages". Abstracting yourself away from the notions of files or documents is the real deal.
When speaking of "best workflow" we inevitably speak of our personal habits and tastes. So just remember that it's your editor, your workflow and your "best practices". Not someone else's.
Windows/tabs and buffers do not prohibit using each other. It is not a problem to open a buffer in the current window/tab even if it is already open in another one (or even in a dozen of them).
If you feel uncomfortable searching through the buffer list, then try doing it with some alternative tools. For example, if you like clicking with the mouse, run GVim and browse through the "Buffers" menu; if you are good at memorizing numbers, make buffer numbers show in the status line and switch buffers by typing NN<Ctrl-^> directly; if you prefer seeing file contents, find some plugin that also shows a "buffer preview", etc.

Get text around input caret on Linux

Motivation: I'm trying to write scripts which send keystrokes to the currently focused window. Right now I use xdotool, which lets me send raw keystrokes. However, I want the exact keystrokes to be a function of the current text around the input caret in the focused window.
Problem: Is there a generic way of reading the state of the text input caret -- both its current position as well as the text around it? Intuitively, I want the content of the current "text box" as well as the location of the cursor within that text box. Perhaps this is not possible in the general case, but is there a way of doing it which would work for emacs and firefox? I'm running Ubuntu Linux.
Further motivation: due to a bad case of RSI I control my computer by voice rather than typing. This works by setting up voice-activated scripts that are triggered by saying different phrases. When dictating English prose, it would be helpful to automatically capitalize words at the beginning of sentences. This automatic capitalization can be accomplished by reading the characters immediately before the input caret, checking if they contain a period, and if so, capitalizing the start of the next phrase that I dictate by voice.
Thanks so much! If anybody can help me here, it would greatly increase my day-to-day accessibility.
Since there is no standard widget toolkit for X11, but only a bunch of independently developed arbitrary toolkits, there is no generic way to implement this.
As far as X11 and tools operating at its level (like xdotool) are concerned, there are only windows, either of the InputOutput variety (i.e. visible windows that receive events and can be drawn to) or Input-only windows, which are invisible and only receive events. There are no further refined "widgets", so to speak. You get a pixel grid, which you can draw to.
Accessibility interfaces are the toolkits' burden to implement (or, if you don't use a toolkit -- then you're a badass -- yours, the developer's): https://www.freedesktop.org/wiki/Accessibility/
The absolute generic way would be to take a screenshot of the currently focused window, employ a computer vision / machine learning based solution to identify the caret, then OCR the line of text around it. And to be honest, IMHO doing it that way would probably be a lot more reliable than hoping for the accessibility interfaces to be properly implemented.
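As a very rough sketch of that pipeline (assuming xdotool and ImageMagick's import are on the PATH, plus the Pillow and pytesseract Python packages; spotting the caret itself, the hard part, is left out):
# Rough sketch: grab the currently focused X11 window and OCR its text.
# Caret detection (the hard part) is intentionally omitted.
import subprocess
import tempfile

from PIL import Image
import pytesseract

def focused_window_text():
    window_id = subprocess.check_output(
        ["xdotool", "getactivewindow"], text=True
    ).strip()
    with tempfile.NamedTemporaryFile(suffix=".png") as shot:
        # ImageMagick's `import` can capture a single window by its id
        subprocess.run(["import", "-window", window_id, shot.name], check=True)
        return pytesseract.image_to_string(Image.open(shot.name))

print(focused_window_text())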

Open a new window for a millisecond in Python (3.x)

I'm making a small game where you have to guess if the answer is cp1 or cp2, and I want to give the user some sort of hint. One way of doing that, I think, is by flashing the answer to the user for a nano- or millisecond, i.e. a new window would open with a : . Any ideas as to how to do this?
[...] flashing the answer for a nano or millisecond to user [....] in a new window [...]
A millisecond is too short (both for the human player -read about the persistence of vision- and for the Python interpreter); your screen is probably refreshed at 60Hz. Consider flashing for at least one tenth of a second (and probably more than that, you'll need to experiment, and you might make the flashing delay or period configurable). How to do that depends upon the widget toolkit you are using.
If using something above GTK, you'll need to find the Python binding to g_timeout_add, see also this.
If you use something above libSDL (e.g. pygame_sdl2), you need something related to its timers.
There are many other widgets or graphical frameworks usable from Python, and you need to choose one (look also into PyQt). Each of them has its own way to deal with timing, delays, windows, graphical display of text inside a window, etc...
If your system is Linux, see also time(7) for a general overview of time related things. Event loops (like those in graphics libraries) are built above a multiplexing system call such as poll(2) (or the old select, etc...).
You need to spend several days in reading more, choosing your graphical toolkit, before coding a single line of code of your game (which might need more code than what you imagine now).
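For instance, once a toolkit is chosen, a minimal tkinter sketch might look like this (tkinter ships with CPython; the 100 ms value is just a starting point to experiment with):
# Minimal tkinter sketch: show a hint window and close it after a short delay
import tkinter as tk

def flash_hint(text, delay_ms=100):
    root = tk.Tk()
    root.title("Hint")
    tk.Label(root, text=text, font=("TkDefaultFont", 32), padx=40, pady=20).pack()
    root.after(delay_ms, root.destroy)  # schedule the window to close itself
    root.mainloop()

flash_hint("cp1")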
I think the closest you can easily get to this effect is to just print to the console, but without the newline (\n), just a carriage return (\r). This way you can write over the text after a couple of milliseconds. We do need to know the length of the thing we are printing so that we can be sure to completely overwrite it with the next print.
The code for this would look something like:
import time

ms = 100  # how long to show the hint, in milliseconds
s = 'cp1'

print(s, end='\r', flush=True)  # show the hint without a newline; flush so it appears immediately
time.sleep(ms / 1000)           # wait...
print(' ' * len(s))             # ...then overwrite the hint with spaces

filter lines starting with ; using batch

Hi, I have a script (AutoLISP, for AutoCAD). The rule of this scripting language is that comments start with the ; character. Is it possible to write a batch file that filters out all lines starting with ;? I then encrypt the file from the LSP type to the FAS type, which renders the commentary useless (it can't be read once encrypted), yet AutoCAD still encrypts that text, which means a fairly heavy file size (double what it should be). The current method is to delete every comment line by hand, but try doing that a few hundred times. And I need the commentary in place to keep a neat record of what's happening, because I work from the unencrypted LISP file itself.
I also want the encryption because it's my hard work and my right to keep it secure, which also means more job security; it lets me block some smart-alec self-proclaimed staff from making edits; and in addition, file encryption is recommended for stability reasons by AutoCAD itself.
All in all, even if I wanted this just because I like it, without a good reason, that should be valid enough.
I'm looking to achieve this through a batch script, as that is one of the few languages I feel competent enough in... outside of the AutoCAD frame.
The following will convert a file named "source.lsp" and produce "noComment.lsp". It will strip out lines that start with a ; (including comment lines indented with spaces).
findstr /rvc:"^ *;" "source.lsp" >"noComment.lsp"
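If you ever outgrow the one-liner, a small Python sketch does the same filtering (it reuses the source.lsp / noComment.lsp names from above, and makes the same assumption that comments are whole lines whose first non-blank character is ;):
# Sketch: drop AutoLISP comment lines, i.e. lines whose first non-blank character is ';'
with open("source.lsp", encoding="utf-8") as src, \
     open("noComment.lsp", "w", encoding="utf-8") as dst:
    for line in src:
        if not line.lstrip().startswith(";"):
            dst.write(line)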

Pivotal Suboptimal Decisions in the History of Software [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
Throughout the history of software development, it sometimes happens that some person (usually unknown, probably unwittingly) made what, at the time, seemed a trivial, short-term decision that changed the world of programming. What events of this nature come to mind, and what have been our industry's response to mitigate the pain?
Illustration (the biggest one I can think of): When IBM designed the original PC, and decided to save a couple dollars in manufacturing costs by choosing the half-brain-dead 8088 with 8-bit-addressable memory, instead of one of the 16-bit options (8086, 680n, etc.), dooming us to 20 years of address offset calculations.
(In response, a lot of careers in unix platform development were begun.)
Somewhere toward the other end of the scale lies the decision someone made to have a monster Shift Lock key at the left end of the keyboard, instead of a Ctrl key.
Paul Allen deciding to use the / character for command line options in MS DOS.
Allocating only 2 digits for the year field.
And the mitigation was to spend huge amounts of money and time just before the fields overflowed to extend them and fix the code.
Ending Alan Turing's career when he was only 42.
Microsoft deciding to use backslash rather than forward slash as the path delimiter. And failing to virtualize the drive letter.
Actually, the 8088 and 8086 have the same memory model and the same number of address bits (20). The only difference is the width of the external data bus, which is 8 bits for the 8088 and 16 bits for the 8086.
I would say that use of inconsistent line endings by different operating systems (\n - UNIX, \r\n - DOS, \r - Mac) was a bad decision. Eventually Apple relented by making \n default for OS-X but Microsoft is stubbornly sticking to \r\n. Even in Vista, Notepad can not properly display a text file using \n as line ending.
The best example of this problem is the ASCII mode of FTP, which just adds a \r to each \n in a file transferred from a UNIX server to a Windows client, even though the file already contained \r\n.
There were a lot of suboptimal decisions in the design of C (operator precedence, the silly case statement, etc.), that are embedded in a whole lot of software in many languages (C, C++, Java, Objective-C, maybe C# - not familiar with that one).
I believe Dennis Ritchie remarked that he rethought precedence fairly soon, but wasn't going to change it. Not with a whole three installations and hundreds of thousands of lines of source code in the world.
Deciding that HTML should be used for anything other than marking up hypertext documents.
Microsoft's decision to use "C:\Program Files" as the standard folder name where programs should be installed in Windows. Suddenly working from a command prompt became much more complicated because of that wordy location with an embedded space. You couldn't just type:
cd \program files\MyCompany\MyProgram
Anytime you have a space in a directory name, you have to encase the entire thing in quotes, like this:
cd "\program files\MyCompany\MyProgram"
Why couldn't they have just called it c:\programs or something like that?
Apple ousting Steve Jobs (the first time) to be led by a succession of sugar-water salesmen and uninspired and uninspiring bean counters.
Gary Kildall not making a deal with IBM to license CP/M 86 to them, so they wouldn't use MS-DOS.
HTML as a browser display language.
HTML was originally designed as a content markup language, whose goal was to describe the contents of a document without making too many judgments about how that document should be displayed. Which was great, except that appearance is very important for most web pages and especially important for web applications.
So, we've been patching HTML ever since with CSS, XHTML, Javascript, Flash, Silverlight and Ajax all in order to provide consistent cross-browser display rendering, dynamic content and the client-side intelligence that web applications demand.
How many times have you wished that browser control languages had been done right in the first place?
Microsoft's decision not to add *NIX-like execute/noexecute file permissions and security in MS-DOS. I'd say that ninety percent of the windows viruses (and spyware) that we have today would be eliminated if every executable file needed to be marked as executable before it can even execute (and much less wreak havoc) on a system.
That one decision alone gave rise to the birth of the Antivirus industry.
Using 4 bytes for time_t and in the internet protocols' timestamps.
This has not bitten us yet - give it a bit more time.
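The arithmetic is easy to check with a quick Python sketch:
from datetime import datetime, timezone

# A signed 32-bit time_t runs out 2**31 - 1 seconds after the 1970 epoch
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00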
Important web sites like banks still using "security questions" as secondary security for people who forget their passwords. Ask Sarah Palin how well that works when everybody can look up your mother's maiden name on Wikipedia. Or better yet, find the blog post that Bruce Schneier wrote about it.
EBCDIC, the IBM "standard" character set for mainframes. The collation sequence was "insane" (the letters of the alphabet are not contiguous).
Lisp's use of the names "CAR" and "CDR" instead of something reasonable for those basic functions.
Null References - a billion dollar mistake.
Netscape's decision to rewrite their browser from scratch. This is arguably one of the factors that contributed to Internet Explorer running away with browser market share between Netscape 4.0 and Netscape 6.0.
DOS's 8Dot3 file names, and Windows' adoption of using the file extension to determine what application to launch.
Using the qwerty keyboard on computers instead of dvorak.
Thinking that a password would be a neat way to control access.
Every language designer who has made their syntax different when the only reason was "just to be different". I'm thinking of S and R, where comments start with #, and _ is an assignment operator.
Microsoft copying the shortcut keys from the original Mac but using Ctrl instead of a Command key for Undo, Cut, Copy, Paste, etc. (Z, X, C, V, etc.), and adding a nearly worthless Windows key in the thumb position that does almost nothing compared to the pinky's numerous Ctrl-key duties. (Modern Macs get a useful Ctrl key (for terminal commands), a Command key in the thumb position (for program or system shortcuts), and an Alt (Option) key for typing unusual characters.)
(See this article.)
Null-terminated strings
7-bits for text. And then "fixing" this with code pages. Encoding issues will kill me some day.
Deciding that "network order" for multi-byte numbers in the Internet Protocol would be high order byte first.
(At the time, the heterogeneous nature of the net meant this was a coin-toss decision. Thirty years later, Intel-derived processors so completely dominate the marketplace that it seems low-order-byte-first would have been a better choice.)
Netscape's decision to support Java in their browser.
Microsoft's decision to base Window NT on DEC VMS instead of Unix.
The term Translation Lookaside Buffer (which should be called something along the lines of Page Cache or Address Cache).
Having a key for Caps Lock instead of for Shift Lock: in effect it's a caps-reverse key, whereas with Shift Lock its behaviour would have been predictable.
