Programming: How to alter printed text

In many programs, including many brute-forcers and the output of many commands, text which has already been printed to the screen is altered in front of you.
Using the brute-force example: the password which the program is currently attempting cycles extremely fast.
My question is: how can text which has already been printed be altered?
Is there perhaps a function, or a specific method, by which this can be achieved? I have never encountered any code like it, and it doesn't even seem possible from a programming perspective; however, I am counting on being wrong.
Thank you in advance.

A carriage return character (\r) should be printed, and the output should be flushed.
In Python, this can be accomplished by setting the end parameter to '\r' and flush to True:
print("sample text", end='\r', flush=True)
More generically, any language should let you access the standard output stream directly, in which case the \r should be added manually and the stream's flush() function called manually.
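For instance, here is a minimal sketch of the generic approach in Python, writing to sys.stdout directly; the candidate passwords are invented for illustration:

import sys
import time

# Overwrite one line in place by returning the cursor to column 0 with \r.
# Assumes a terminal that honors carriage returns.
for candidate in ["aaaa", "aaab", "aaac", "hunter2"]:
    sys.stdout.write("\rTrying: " + candidate)
    sys.stdout.flush()   # push the partial line to the terminal right away
    time.sleep(0.2)
sys.stdout.write("\n")   # move past the overwritten line when finished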

Related

How does the Ctrl + Z keyboard shortcut actually work in general?

I mean, it's not about some specific code, but how does that shortcut work in general? Like, when I am working on something and accidentally delete a chunk of text, how does the shortcut revert it back onto the screen? Don't give me the code, but explain it with instances of elements from the coding world, like whiles, ifs, etc. How did the creator get the idea that something like this should even exist?
There are multiple ways this can be achieved; the decision of how is up to the developer.
One way is to make use of a stack, where the state of the program is stored in such a structure.
Another way is to use a design pattern called the command pattern, which is often used to implement undo/redo functionality. This is very similar to a stack, but instead of storing the program state, you save the action done to the program together with a matching action that undoes the executed one.
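Here is a minimal sketch of the command-pattern approach in Python; the class name InsertText and the list-based buffer are invented for illustration:

# Each command knows how to perform its action and how to undo it.
class InsertText:
    def __init__(self, buffer, text):
        self.buffer = buffer
        self.text = text

    def execute(self):
        self.buffer.append(self.text)   # perform the action

    def undo(self):
        self.buffer.pop()               # perform the inverse action

document = []
history = []                            # stack of executed commands

cmd = InsertText(document, "hello")
cmd.execute()
history.append(cmd)

history.pop().undo()                    # Ctrl+Z: undo the latest command
print(document)                         # -> []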

Open a new window for a millisecond in Python (3.x)

I'm making a small game where you have to guess if the answer is cp1 or cp2, and I want to give the user some sort of hint. One way of doing that, I think, is by flashing the answer to the user for a nano- or millisecond, i.e. a new window would open with a : . Any ideas as to how to do this?
[...] flashing the answer for a nano or millisecond to user [...] in a new window [...]
A millisecond is too short (both for the human player, read about the persistence of vision, and for the Python interpreter); your screen is probably refreshed at 60Hz. Consider flashing for at least one tenth of a second (and probably more than that; you'll need to experiment, and you might make the flashing delay or period configurable). How to do that depends upon the widget toolkit you are using.
If you're using something above GTK, you'll need to find the Python binding to g_timeout_add.
If you use something above libSDL (e.g. pygame_sdl2), you need something related to its timers.
There are many other widgets or graphical frameworks usable from Python, and you need to choose one (look also into PyQt). Each of them has its own way to deal with timing, delays, windows, graphical display of text inside a window, etc.
If your system is Linux, see also time(7) for a general overview of time-related things. Event loops (like those in graphics libraries) are built above a multiplexing system call such as poll(2) (or the old select, etc.).
You need to spend several days reading more and choosing your graphical toolkit before coding a single line of your game (which might need more code than you imagine now).
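As one concrete possibility, here is a minimal sketch using tkinter (assuming that toolkit; the 100 ms delay is only a starting point to tune by hand):

import tkinter as tk

root = tk.Tk()
root.withdraw()                  # hide the main window

hint = tk.Toplevel(root)         # the window that flashes the answer
tk.Label(hint, text="cp1").pack(padx=40, pady=20)

root.after(100, hint.destroy)    # close the hint after ~100 ms
root.after(200, root.destroy)    # then end the event loop
root.mainloop()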
I think the closest you can easily get to this effect is to just print to the console, but without the newline (\n), just the carriage return (\r). This way you can write over the text after a couple of milliseconds. We do need to know the length of the thing we are printing, so that we can be sure to completely overwrite it with the next print.
The code for this would look something like:
import time

ms = 100                          # how long the hint stays visible
s = 'cp1'

print(s, end='\r', flush=True)    # show the hint; flush so it appears now
time.sleep(ms / 1000)
print(' ' * len(s))               # overwrite the hint with spaces

Overlaid text?

Quick text-processing question. It's not necessarily related to programming, but this is the best place I figured I should go.
Rate down to tell me this kind of question is not welcome here. (Though, I really like my one little reputation point.)
Anyways, how can I encode text so that two characters get rendered in the same character space?
NOTE: this is for plain-text -- nothing particularly complex.
The best you can do is put a backspace character between the two. However, the outcome isn't likely to be useful to you; it will depend on what software is being used to display the text. The most likely outcome is that the backspace will be ignored or shown as some generic "unavailable" glyph. The second most likely is that the second character will completely erase the first. You'd have to be very lucky for the two characters to be displayed one over the other in the same space.
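A quick sketch to see what your own terminal and editors do with it (the file name is arbitrary):

# Print a backspace (0x08) between two characters: many terminals show
# just "b", others show both characters or a placeholder glyph.
print("a\bb")

# Write the same bytes to a file and open it in different viewers to
# compare how each one renders the backspace.
with open("overlay.txt", "w") as f:
    f.write("a\bb")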
If it's plain text to be processed by any editor, as far as I know you can't. Even if your text is encoded in Unicode, I don't think it provides combining characters for normal letters, just for accents and similar symbols which are intended to be combined with other glyphs.
BTW, I'm not sure that Stack Overflow is the right place for this kind of stuff; I'd see it better on superuser.com.

Why is the software world full of status codes?

Why did programmers ever start using status codes? I mean, I guess I could imagine this might be useful back in the days when a text string was an expensive resource. WAYYY back then. But even after we had megabytes of memory to work with, we continued to use them. What possible advantage could there be for obfuscating the meaning of an error message or status message behind a status code?
It's easy to provide different translations of a status code. Having to look up a string to find the translation in another language is a little silly.
Besides, status codes are often used in code, and typing:
var result = OpenFile(...);
if (result == "File not fond") {
    ...
}
cannot be detected as a mistake by the compiler, whereas
var result = OpenFile(...);
if (result == FILE_NOT_FOND) {
    ...
}
will be.
It allows for localization and changes to the text of an error message.
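For example, a minimal sketch of code-keyed localization in Python; the codes and translations here are invented for illustration:

# Map each status code to a message per language; the code itself stays
# stable while the wording can change or be translated freely.
MESSAGES = {
    "en": {404: "File not found", 500: "Internal error"},
    "de": {404: "Datei nicht gefunden", 500: "Interner Fehler"},
}

def describe(code, lang="en"):
    return MESSAGES.get(lang, MESSAGES["en"]).get(code, "Error %d" % code)

print(describe(404, "de"))   # -> Datei nicht gefunden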
It's the same reason as ever. Numbers are cheap, strings are expensive, even in today's mega/gigabyte world.
I don't think status codes constitute obfuscation; it's simply an abstraction of a state.
A great use of integer status codes is in a finite-state machine. Having the states be integers allows for an efficient switch statement to jump to the correct code.
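A minimal sketch of such a machine in Python, using the match statement (Python 3.10+) as the switch; the states and events are invented for illustration:

# Integer-coded states; literal integers are used in the case arms
# because a bare name in a case pattern would capture, not compare.
IDLE, RUNNING, DONE = 0, 1, 2

def step(state, event):
    match state:
        case 0:  # IDLE
            return RUNNING if event == "start" else state
        case 1:  # RUNNING
            return DONE if event == "finish" else state
        case _:
            return state

state = IDLE
for event in ("start", "finish"):
    state = step(state, event)
print(state == DONE)   # -> True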
Integers also allow for more efficient use of bandwidth, which is still a very important issue in mobile applications.
Yet another example of integer codes over strings is comparison. If you have similar statuses grouped together (say, statuses 10000-10999), performing range comparisons to determine the type of status is a major win. Could you imagine doing string comparisons just to know whether an error code is fatal or just a warning? Eww.
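A minimal sketch of such range checks, assuming a made-up convention where 10000-10999 are warnings and 20000 and up are fatal:

def is_warning(code):
    return 10000 <= code <= 10999   # one range check, no string parsing

def is_fatal(code):
    return code >= 20000

print(is_warning(10404))   # -> True
print(is_fatal(20001))     # -> True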
Numbers can be easily compared, including by another program (e.g. was there a failure). Human readable strings cannot.
Consider some of the things you might include in a string comparison, and sometimes might not:
Encoding options
Range of supported characters (compare ASCII and Unicode)
Case Sensitivity
Accent Sensitivity
Accent encoding (decomposed or composed Unicode forms; see the sketch after this list).
And that is before allowing for the majority of humans who don't speak English.
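To make the accent-encoding point concrete, here is a small Python demonstration of two visually identical strings that compare unequal:

import unicodedata

# Composed "é" (one code point) versus "e" plus a combining acute accent.
a = "caf\u00e9"
b = "cafe\u0301"
print(a == b)                                  # -> False
print(unicodedata.normalize("NFC", b) == a)    # -> True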
404 is universal on the web. If there were no status codes, imagine if every piece of server software had its own error string:
"File not found"
"No file exists"
"Couldn't access file"
"Error presenting file to user"
"Rendering of file failed"
"Could not send file to client"
etc...
Even when we disregard data length, it's still better to have standard numeric representations that are universally understood, even when accompanied by actual error messages.
Computers are still binary machines and numeric operations are still cheaper and faster than string operations.
Integer representation in memory is a far more consistent thing than string representation. To begin with, just think about all those null-terminated and Pascal strings. Then you can think of ASCII and the characters from 128 to 255 that differed between codepages, and end up with Unicode characters and all of their little-endian and big-endian forms, UTF-8, etc.
As it turns out, returning an integer and having a separate resource stating how to interpret those integers is a far more universal approach.
Well, when talking to a customer over the telephone, a number is much better than a string. A string can be in many different languages; a number can't. Try googling some error text in, let's say, Swedish, and then try googling it in English, and guess where you get the best hits.
Because not everyone speaks English. It's easier to translate error codes to multiple languages than to litter your code base with strings.
It's also easier for machines to understand codes, as you can assign classes of errors to a range of numbers. E.g. 1-10 are I/O issues, 11-20 are database issues, etc.
Status codes are unique, whereas strings may not be. There is just one status code, for example "213", but there may be many interpretations of, for example, "file not found", like "File not found", "File not found!", "Datei nicht gefunden", "File does not exist"...
Thus, status codes keep the information as simple as possible!
How about backwards compatibility with 30+ years of software libraries? After all, some code is still written in C ...
Also ... having megabytes of memory available is no justification for using them. And that's assuming you're not programming an embedded device.
And ... it's just pointless busy work for the CPU. If a computer is blindingly fast at processing strings, imagine the speed boost from efficient coding techniques.
I work on mainframes, and there it's common for applications to prepend every message with a code (usually 3-4 letters identifying the product, 4-5 digits identifying the specific message, and then a letter indicating the severity of the message). I wish this were standard practice on PCs too.
There are several advantages to this aside from translation (as mentioned by others):
It's easy to find the message in the manual; usually, the software is accompanied by a message manual explaining all the messages (and possible solutions, etc.).
It's possible for automation software to react to specific messages in the log in a specific way (see the sketch after this list).
It's easy to find the source of the message in the source code. You can have further error codes per specific message; in that case, this is again helpful in debugging.
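For the automation point, a minimal sketch in Python; the message format follows the convention described above, but the sample log lines and the reaction are invented:

import re

# Product code (3-4 letters), message number (4-5 digits), severity letter.
pattern = re.compile(r"^([A-Z]{3,4})(\d{4,5})([IWE])\b")

for line in ["DFS0845I REGION STARTED", "IEF4501E JOB ABENDED"]:
    m = pattern.match(line)
    if m and m.group(3) == "E":        # react only to error severity
        print("paging the operator about", m.group(0))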
For all practical purposes, numbers are the best representation for statuses, even today, and I imagine they will be for a while.
The most important things about status codes are probably conciseness and acceptance. Two different systems can talk to each other all they want using numbers, but if they don't agree on what the numbers mean, it's going to be a mess. Conciseness is another important aspect, as a status code must not be more verbose than the meaning it's trying to convey. The world might agree on using
The resource that you asked for in the HTTP request does not exist on this server
as a status code in lieu of 404, but that would be a plain nuisance.
So why do we still use numbers, specifically Arabic numerals? Because they are the most concise form of representation today, and all computers are built upon them.
There might come a day when we start using images, or even videos, or something totally unimaginable for representing status codes in a highly abstracted form, but we are not there yet. Not even for strings.
It makes it easier for an end user to understand what is happening when things go wrong.
It helps to have a very basic method of giving statuses clearly and universally. Whereas strings can easily be typed differently depending on dialect and can also have grammatical variations, numbers have no grammatical formatting and do not change with dialect. There is also the storage and transfer issue: a string is larger and thus takes longer to transfer over a network and to store (even if only by a few thousandths of a millisecond). Because of this, we can assign numbers as universal identifiers for statuses: they transfer quicker, they are unambiguous, and the programs that read them can present them however they wish (including in other languages).
Plus, it is easier to read computationally:
switch ($status) {
    case 404:
        echo 'File not found!';
        break;
    case 500:
        echo 'Broken server!';
        break;
}
etc.

Why did we bother with line numbers at all? [closed]

When you write something in BASIC, you are required to use line numbers. Like:
10 PRINT "HOME"
20 PRINT "SWEET"
30 GOTO 10
But I wonder: who came up with the idea to use line numbers at all? It is such a nuisance, and left quite an "echo" in the developing (pun intended) world!
The idea back then was that you could easily add code anywhere in your program by using an appropriate line number. That's why everybody uses line numbers 10, 20, 30... so there is room left in between:
10 PRINT "HOME"
20 PRINT "SWEET"
30 GOTO 10
25 PRINT "HOME"
On the first interfaces BASIC was available for, there was no shiny editor, not even something like vi or emacs (or DOS edit, heh). You could only print out your program on the console, and then you would add new lines or replace existing ones by giving the appropriate line number first. You could not navigate through the "file" (the program was kept in memory, although you could save a copy on disk) with the cursor like you are used to nowadays.
Therefore the line numbers weren't only needed as labels for the infamous GOTO, but were indeed needed to tell the interpreter at what position in the program you were editing.
It has a loong-loong history.
Line numbering actually comes from Dartmouth BASIC, which was the original version of the BASIC programming language and an integral part of the so-called Dartmouth Time Sharing System (DTSS). That DTSS had a rudimentary IDE, which was nothing more than an interactive command line.
So every line typed inside this "IDE" that began with a line number was added to the program, replacing any previously stored line with the same number; anything else was assumed to be a DTSS command and immediately executed.
Before there was such a thing as a VDT (video display terminal), we old-timers programmed on punch cards. Punch cards reserved columns 73-80 for sequence numbers: if you dropped your card deck and the cards got out of order, you could put the deck in a card sorter that would order the cards based on those sequence numbers. In many ways, BASIC's line numbers were similar to those sequence numbers.
Another advantage in the BASIC world is that in the old days BASIC was interpreted as it was run. Using labels rather than sequential line numbers for branches would require a first pass to pick up all the labels and their locations, whereas if you use line numbers the interpreter knows whether it needs to start scanning forwards or backwards for the destination.
Back in the day, you didn't have a two-dimensional editor like emacs or vi. All you had was the command line.
Your program was stored in memory and you would type in single line commands to edit single lines.
If you were a Unix god you could do it with ed or something, but for BASIC on a C-64, VIC-20, or TRS-80 you'd just overwrite the line.
So a session might look like:
$10 PRINT "Hellow World"
$20 GOTO 10
$10 PRINT "Hello World"
And now the program would work correctly.
Some older mainframes even had line terminals without a screen. Your whole session was printed on paper in ink!
The "Who?" would be the inventors, Kemeney and Kurtz.
After reading the replies, I checked the Wikipedia entry for "Dartmouth BASIC", and was surprised to learn
The first compiler was produced before the time-sharing system was ready. Known as CardBASIC, it was intended for the standard card-reader based batch processing system.
So, it looks like Paul Tomblin "gets the square".
Paul Tomblin's answer is the most comprehensive, but I'm surprised no one has mentioned that a big part of the BASIC project's initial goal was to provide a beginner-friendly interactive environment using timesharing. (Kurtz and Kemeny's vision for "universal access for all students" was far ahead of its time in this regard.)
The BASIC system that was developed to fulfill this goal featured Teletype ASR-33 (and later other) printing terminals. When connected to a timesharing-capable OS, these allowed editing and running BASIC programs in an interactive mode (unlike working with punched cards), but they are not cursor-addressable. Line numbers were a beginner-friendly way to both specify the order of program statements and allow unambiguous editing in the absence of a screen editor. The Wikipedia entry for "line editor" explains further, and anyone who's ever tried to use a line editor (such as the Un*x 'ed') can appreciate why Kurtz and Kemeny should be thanked for sparing the beginner having to learn the cryptic command sequences required for editing text in this manner.
They originated in FORTRAN, from which BASIC was derived. However, in FORTRAN only lines referenced by other lines (like GOTO targets) needed numbers. In BASIC they had a secondary use, which was to allow editing of specific lines.
Back in the fifties, when high-level programming languages were in their early beginnings, there were no terminals, no editors, no monitors (yes, no monitors), just card punchers and readers (for writing and reading the contents of cards into the memory of a computer) and printers (for printing results, naturally).
Later, tape was introduced, but that's another story.
Each punch card had its own number. There were several reasons for that, from purely keeping them in order to determining the sequence of execution. Each card was one line of code (in today's terms). Since, at that time, there were no constructs like if..then..else, or whatever variant of the like, the sequence of execution had to be determined somehow. So GOTO statements were introduced. They were the basis of loops. The term "spaghetti code" comes from that time period as well, since badly written code was relatively hard to follow, like spaghetti on a plate :)
I'd guess it comes from assembler, where each instruction has an address which may be jumped to by another instruction.
Additionally, the first computers didn't have much memory, and storing a line number takes only two bytes (if done properly). Writing a label takes more memory: first at the location where that label is defined, then in any jump command.
Finally, in the good old days there weren't any fancy editors. The only "editor" was a simple command-line interface, which treated everything starting with a number as part of a program and everything else as a command to be executed immediately. The most prominent example would be the Commodore 64.
Newer dialects of BASIC no longer need line numbers.
In BASIC, if you didn't have line numbers, how could you perform a
GOTO 10
? That was a way to jump between lines, a good way that was found... more than 20 years ago!
Today, line numbers help us catch errors/exceptions, because debug engines are made to tell us in the message that we got an exception on line xxx, and we can jump right to it!
Imagine a world without line numbers... how could a reporter be paid without lines?
"Now that you know the novel, you have to write a summary with no more than 50 lines"
Remember this? Even at school we learn about line numbers!
If they hadn't been invented, someone would have invented them again, so that we could use them nicely :)
Not all versions of BASIC required line numbers. QBasic, for instance, supported labels. You could then jump to those with GOTO (ignoring Dijkstra's "Go To Statement Considered Harmful," for the moment).
The answer is already above; Paul Tomblin wrote it (with a caveat to zabzonk). Actually, I would argue that any answer which does not mention punch cards is incomplete, and if it mentions neither punch cards nor FORTRAN, it is wrong. I can say that this is definitively right because my parents both used punch cards on a regular basis (they started with FORTRAN 66 and 77), then migrated to BASIC and COBOL in the 80's.
In the early days, most programs were entered on punch cards. The cards were usually entered in sequence, typically one instruction per card, with labels (JMP/JSR targets) being a separate instruction card.
To edit your program, you replaced the card.
Later implementations added an optional sequence number on the right end of the line, so that when/if they got out of order, they could be resequenced by an automated reader.
Fortran used both numeric target labels on the left (col 1-5) and left a reserved block on the right (73-80) for sequence or comment.
When BASIC was initially written, it was decided to move the sequence numbers to the left, into FORTRAN's label field, and to allow overwriting prior cards' memory footprint... as an editing mode. This was intended for the interactive dev environment, but worked just as well with cards. And cards were used in some early implementations for a variety of reasons.
Keep in mind: many computers used a card-reader-and-printer interface right through the late 1970's. Even tho' interactive-mode BASICs were available, card-punched BASIC programs were frequently used. Since many simply fed into the IDE, they worked exactly the same way, including needing a "Run" card at the end. In such cases, one could simply tack on a correction card and another Run card to rerun with a variation on some variable; likewise, in complex programs, simply adding a corrected line on a card before the Run card was adequate to edit out problems without spending precious time finding the errant card itself.
I like the robot church in Futurama, where the walls had stuff like this written on them:
10 SIN
20 GOTO HELL
On the Speccy you couldn't edit a line without the line number.
I find them very helpful when pairing. I don't have to point at a line when my pair has the keyboard, I can just say, "on line 74, shouldn't that really be getMoreBeer()?"
The original editor for DOS was a wonderful utility called edlin, with which you could only edit a single line. To make life even more interesting, in many versions of BASIC you could type lines out of order: line 10, 20, 30, 25, 5. Execution would follow the line numbers, not the order of appearance.
