In the 2-3 years since I started using Vim as my main editor, I've learned to use windows (splits) when working with multiple files (because every task requires me to work with lots of files).
But a few days ago, I ran into this question and it blew my mind (and my workflow :))
So I tried to use buffers without windows, and it's really hard. Imagine having multiple blocks (folders), each containing a model.php and a controller.php.
At the start of a task, I don't know which block I need, so after a few minutes I'll have opened several model.php and controller.php files.
Now, if a file I need isn't already loaded in a buffer, I have to search the buffer list first, and when I see that it isn't there, I have to use the explorer to load the file into a buffer. So it's something like this:
:ls<CR>
{If the file that I need is here then}
:b num<CR>
{else}
:FZF {and finding that file}
So it's a lot harder than just working with windows (where I can see which files are loaded right in front of me).
(And of course the overhead of finding buffers and searching them by name/number is about the same as opening the file from scratch every time you need it.)
But as said in that question and in lots of other places, buffers should make your workflow easier than windows, and windows should be used only for diffs and the like.
So is there any better way to use buffers, or am I doing something wrong?
(BTW, I'm currently using :Buffers from fzf.vim)
So I tried to use buffers and no windows and it's really hard.
This means that you misunderstood both the spirit and the letter of the linked answer.
To recap, Vim's exact equivalent of "documents" in regular document-based applications is buffers. Vim also gives you a first layer of abstraction on top of buffers (windows), and another one on top of windows (tab pages), in order to give you more flexibility in building your workflow.
Forcing oneself to use buffers instead of windows or instead of tab pages or whatever makes no sense as there is value in all three and such an attitude would only decrease the overall value of your editor. Use the interaction model that best suits your needs, not the interaction model that you convinced yourself is the purest.
As for the confusion between files and buffers, how about the confusion between files and tab pages or between buffers and windows? When you are dealing with abstractions built on top of other abstractions you have to have commands specific to one layer or another and learning how that layered cake works gives you the necessary intuition for deciding what command to use and when.
Basically, you have 3 cases:
Case #    Is a buffer    Is a file
1         Y              Y
2         Y              N
3         N              Y
In case #1, the buffer is associated with a file so you can use both file-centric and buffer-centric commands to reach your target.
In case #2, the buffer is not associated with a file so you can only use buffer-centric commands to reach your target.
In case #3, there is no buffer so you can only use file-related commands to reach your target.
Another way to think about it is to ask the question "Have I already been there?". If the answer is "no", then use file-centric commands (:edit, :find, and so on); if the answer is "yes", use buffer-centric commands (:buffer, :bnext, and so on). If you have no idea or if you don't want to think about any of this, just use file-centric commands as a fallback.
Note that the context of that answer was "buffers vs windows vs tab pages". Abstracting yourself away from the notions of files or documents is the real deal.
When speaking of "best workflow" we inevitably speak of our personal habits and tastes. So just remember that it's your editor, your workflow and your "best practices". Not someone else's.
Windows/tabs and buffers do not prohibit using each other. It is not a problem to open a buffer in the current window/tab even if it is already open in another one (or even in a dozen of them).
If you feel uncomfortable searching through the buffer list, then try doing it with some alternative tools. For example, if you like clicking with the mouse, then run GVim and browse through the "Buffers" menu; or if you are good at memorizing numbers, then make buffer numbers show up in the status line and switch buffers by typing NN<Ctrl-^> directly; or if you like reading file contents, then find some plugin that also shows a "buffer preview", etc.
I often come across the situation where I would like to read a file's original content in a human-readable way. When opening this kind of file in a text editor, why is it usually gibberish mixed with some complete and comprehensible text? I would think that if the file is converted to something other than its original written format, there would be no comprehensible text remaining, yet I often find it is somewhere in between.
For example, I know that if I open a binary in a text format, there will be nothing comprehensible left that isn't purely accidental.
[Example screen capture of partially readable text mixed with gibberish]
Why is there complete text in here mixed with gibberish? Does that mean if I open the file with some sort of different encoding (I don't know what's possible), the file will come through as fully readable text? I would understand if it were all-or-nothing (either gibberish-non-readable OR human language) but I don't understand the in-between.
Please provide educational responses, rather than "because that's the way it is" type answers.
Those are formatting characters; there is no standard use, and they vary by the format of the file in question. You can still extract the text as needed with a fair knowledge of grep and regex, but it won't be fun. The best bet is to open the file with software that can read it properly, since a text editor like gedit or Notepad++ will read the raw data and display that. Adobe's PDF format has text embedded, for instance, and all that gibberish is instructions for the Reader software for displaying it correctly on the screen, while still allowing for relatively straightforward text extraction when required.
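As an illustration of that kind of extraction, here is a rough Java sketch that pulls out runs of printable ASCII from a file, much like the Unix strings utility; it is not tied to any particular file format.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PrintableRuns {
    // Prints every run of 4 or more printable ASCII characters found in a file;
    // those runs are the "complete text" that shows up amid the gibberish.
    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        StringBuilder run = new StringBuilder();
        for (byte b : data) {
            if (b >= 0x20 && b < 0x7F) {   // printable ASCII range
                run.append((char) b);
            } else {
                if (run.length() >= 4) System.out.println(run);
                run.setLength(0);
            }
        }
        if (run.length() >= 4) System.out.println(run);
    }
}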
Editors have no real way to interpret the special formatting characters, and would need to be loaded with APIs for every conceivable program. They would also need to be updated constantly, since the formatting changes regularly for a variety of reasons. Many times, it is just to keep the files from being backward compatible with their own or other products, forcing an upgrade path. Microsoft is rather famous for that, but they are by far not the only company to do so.
In our business, we are required to log every request/response coming to our server.
For the time being, we are using XML as the standard implementation.
Log files are used if we need to debug/trace some error.
I am kind of curious: if we switch to protocol buffers, since it is a binary format, what will be the best way to log requests/responses to a file?
For example:
FileOutputStream output = new FileOutputStream("\\files\\log.txt");
request.build().writeTo(output);
For anyone who has used protocol buffers in an application: how do you log your requests/responses, in case you need them for debugging purposes?
TL;DR: write debugging logs in text, write long-term logs in binary.
There are at least two ways you can do this logging (and maybe, in fact, you should do both):
Writing your logs in text format. This is good for debugging and quickly checking for problems with your eyes.
Writing your logs in binary format. This will make future analysis much quicker, since you can load the data using the same protocol buffer code and do all kinds of things with it (see the sketch just below this list).
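For illustration, here's a minimal Java sketch of both approaches. It leans only on the standard protobuf-java API (TextFormat and writeDelimitedTo); the surrounding class and method names are made up for the example, and the message type is the generic Message interface rather than any specific generated class.
import com.google.protobuf.Message;
import com.google.protobuf.TextFormat;
import java.io.IOException;
import java.io.OutputStream;
import java.io.Writer;

public class RequestLogger {
    // Text log: human-readable, easy to eyeball and grep while debugging.
    static void logAsText(Message request, Writer textLog) throws IOException {
        textLog.write(TextFormat.printer().printToString(request));
        textLog.write('\n');
    }

    // Binary log: one length-delimited record per message, readable later with
    // YourMessage.parseDelimitedFrom(InputStream) for programmatic analysis.
    static void logAsBinary(Message request, OutputStream binaryLog) throws IOException {
        request.writeDelimitedTo(binaryLog);
    }
}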
Quite honestly, this is more or less the way this is done at the place this technology came from.
We use the ShortDebugString() method on the C++ object to write down a human-readable version of all incoming and outgoing messages to a text-file. ShortDebugString() returns a one-line version of the same string returned by the toString() method in Java. Not sure how easy it is to accomplish the same thing in Java.
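For what it's worth, the Java library appears to have a close equivalent: TextFormat.shortDebugString() produces the same kind of one-line rendering. A minimal sketch (the wrapper class is just for illustration):
import com.google.protobuf.Message;
import com.google.protobuf.TextFormat;

public final class OneLineLog {
    // One-line, human-readable rendering of a protobuf message,
    // analogous to ShortDebugString() in the C++ API.
    static String oneLine(Message message) {
        return TextFormat.shortDebugString(message);
    }
}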
If you have competing needs for logging and performance then I suppose you could dump your binary data to the file as-is, with perhaps each record preceded by a tag containing a timestamp and a length value so you'll know where this particular bit of data ends. But I hasten to admit this is very ugly. You will need to write a utility to read and analyze this file, and will be helpless without that utility.
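A rough Java sketch of that framing, assuming a record layout of an 8-byte timestamp, a 4-byte length, and then the raw message bytes (this layout is just one possible convention, not an established format):
import com.google.protobuf.Message;
import java.io.DataOutputStream;
import java.io.IOException;

public final class FramedBinaryLog {
    // Writes one record as: [timestamp millis (8 bytes)][payload length (4 bytes)][payload].
    // A reader must assume the same layout to find record boundaries.
    static void writeRecord(Message message, DataOutputStream out) throws IOException {
        byte[] payload = message.toByteArray(); // raw serialized protobuf bytes
        out.writeLong(System.currentTimeMillis());
        out.writeInt(payload.length);
        out.write(payload);
    }
}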
A more reasonable solution would be to dump your binary data in text form. I'm thinking of "lines" of text, again starting with whatever tagging information you find relevant, followed by some length information in decimal or hex, followed by as many hex bytes as needed to dump your buffer - thus you could end up with some fairly long lines. But since the file is line structured, you can use text-oriented tools (an editor in the simplest case) to work with it. Hex dumping essentially means you are using two bytes in the log to represent one byte of data (plus a bit of overhead). Heh, disk space is cheap these days.
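Here is a rough Java sketch of such a line-oriented hex dump; the timestamp-and-length prefix is an arbitrary choice for illustration.
import com.google.protobuf.Message;
import java.time.Instant;

public final class HexLineLog {
    // Renders one message as a single text line: timestamp, length in bytes, then hex.
    // Roughly doubles the payload size on disk but stays grep- and editor-friendly.
    static String toHexLine(Message message) {
        byte[] payload = message.toByteArray();
        StringBuilder line = new StringBuilder();
        line.append(Instant.now()).append(' ').append(payload.length).append(' ');
        for (byte b : payload) {
            line.append(String.format("%02X", b));
        }
        return line.toString();
    }
}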
If those binary buffers have a fairly consistent structure, you could even break out and label fields (or something like that) so your data becomes a little more human readable and, more importantly, better searchable. Of course it's up to you how much effort you want to sink into making your log records look pretty; but the time spent here may well pay off a little later in analysis.
If you have non-ASCII character strings in your messages, simply logging them via an implicit or explicit call to toString() will escape the characters.
"오늘은 무슨 요일입니까?" becomes "\354\230\244\353\212\230\354\235\200 \353\254\264\354\212\250 \354\232\224\354\235\274\354\236\205\353\213\210\352\271\214?"
If you want to retain the non-ASCII characters, use TextFormat.printer().escapingNonAscii(false).printToString(message).
See this answer for more details.
When you write something in BASIC, you are required to use line numbers. Like:
10 PRINT "HOME"
20 PRINT "SWEET"
30 GOTO 10
But I wonder: who came up with the idea to use line numbers at all? It is such a nuisance, and left quite an "echo" in the developing (pun intended) world!
The idea back then was that you could easily add code anywhere in your program by using an appropriate line number. That's why everybody used line numbers 10, 20, 30, ... so there was room left:
10 PRINT "HOME"
20 PRINT "SWEET"
30 GOTO 10
25 PRINT "HOME"
On the first interfaces BASIC was available for, there was no shiny editor, not even something like vi or emacs (or DOS edit, heh). You could only print out your program on the console and then you would add new lines or replace them, by giving the appropriate line number first. You could not navigate through the "file" (the program was kept in memory, although you could save a copy on disk) with the cursor like you are used to nowadays.
Therefore the line numbers weren't only needed as labels for the infamous GOTO, but were indeed needed to tell the interpreter at what position in the program flow you were editing.
It has a long, long history.
Line numbering actually comes from Dartmouth BASIC, which was the original version of the BASIC programming language and an integral part of the so-called Dartmouth Time Sharing System (DTSS). That DTSS had a rudimentary IDE, which was nothing more than an interactive command line.
So every line typed inside this "IDE" that began with a line number was added to the program, replacing any previously stored line with the same number; anything else was assumed to be a DTSS command and executed immediately.
Before there was such a thing as a VDT (video display terminal), we old-timers programmed on punch cards. Punch cards reserved columns 72-80 for sequence numbers - if you dropped your card deck and they all got out of order, you could put the deck in a card sorter that would order the cards based on those sequence numbers. In many ways, the BASIC line numbers were similar to those sequence numbers.
Another advantage in the BASIC world is that in the old days BASIC was interpreted as it was run. Using labels rather than sequential line numbers for branches would require a first pass to pick up all the labels and their locations, whereas if you use line numbers the interpreter knows whether it needs to start scanning forwards or backwards for the destination.
Back in the day you didn't have a 2 dimensional editor like emacs or vi. All you had was the command line.
Your program was stored in memory and you would type in single line commands to edit single lines.
If you were a Unix god you could do it with ed or something, but for BASIC on a C-64, VIC-20, or TRS-80 you'd just overwrite the line.
So a session might look like:
$10 PRINT "Hellow World"
$20 GOTO 10
$10 PRINT "Hello World"
And now the program would work correctly.
Some older mainframes even had line terminals without a screen. Your whole session was printed on paper in ink!
The "Who?" would be the inventors, Kemeney and Kurtz.
After reading the replies, I checked the Wikipedia entry for "Dartmouth BASIC", and was surprised to learn
The first compiler was produced before the time-sharing system was ready. Known as CardBASIC, it was intended for the standard card-reader based batch processing system.
So, it looks like Paul Tomblin "gets the square".
Paul Tomblin's answer is the most comprehensive, but I'm surprised no one has mentioned that a big part of the BASIC project's initial goal was to provide a beginner-friendly interactive environment using timesharing. (Kurtz and Kemeny's vision for "universal access for all students" was far ahead of its time in this regard.)
The BASIC system that was developed to fulfill this goal featured Teletype ASR-33 (and later other) printing terminals. When connected to a timesharing-capable OS, these allowed editing and running BASIC programs in an interactive mode (unlike working with punched cards), but they are not cursor-addressable. Line numbers were a beginner-friendly way to both specify the order of program statements and allow unambiguous editing in the absence of a screen editor. The Wikipedia entry for "line editor" explains further, and anyone who's ever tried to use a line editor (such as the Un*x 'ed') can appreciate why Kurtz and Kemeny should be thanked for sparing the beginner having to learn the cryptic command sequences required for editing text in this manner.
They originated in FORTRAN, from which BASIC was derived. However, in FORTRAN only lines referenced by other lines (like GOTO targets) needed numbers. In BASIC they had a secondary use, which was to allow editing of specific lines.
Back in the fifties, when high-level programming languages were in their early beginnings, there were no terminals, no editors, no monitors (yes, no monitors), just card punches and readers (for writing and reading the contents of cards into the memory of a computer) and printers (for printing results, naturally).
Later, tape was introduced, but that's another story.
Each punch card had its own number. There were several reasons for that; from purely keeping them in order, to determining the sequence of execution. Each card was one line of code (in today's terms). Since, at that time, there were no constructs like if..then..else, or whatever variant of the like, the sequence of execution had to be determined somehow. So GOTO statements were introduced. They were the basis of loops. The term "spaghetti code" comes from that time period also, since badly written code was relatively hard to follow, like spaghetti on a plate :)
I'd guess it comes from assembler, where each instruction has an address which may be jumped to by another instruction.
Additionally, the first computers didn't have much memory, and storing a line number only takes two bytes (if done properly). Writing a label takes more memory: first in the location where the label is defined, then in every jump command that refers to it.
Finally, in the good old days there weren't any fancy editors. The only "editor" was a simple command-line interface, which treated everything starting with a number as part of the program and everything else as a command to be executed immediately. The most prominent example would be the Commodore 64.
Newer dialects of Basic don't have the need for line numbers any longer.
In BASIC, if you didn't have line numbers, how could you perform a
GOTO 10
That was a way to jump between lines, a good way that was found ... more than 20 years ago!
Today, line numbers help us catch errors/exceptions, because debug engines are made to tell us that we got an exception on line xxx, and we can jump right to it!
Imagine a world without line numbers... how could a reporter be paid without lines?
"Now that you know the novel, you have to write a summary with no more than 50 lines"
Remember this? Even at school we learn about line numbers!
If it hadn't been invented, someone would have invented it again so we could use it nicely :)
Not all versions of BASIC required line numbers. QBasic, for instance, supported labels. You could then jump to those with GOTO (ignoring Dijkstra's "Go To Statement Considered Harmful," for the moment).
The answer is already above. Paul Tomblin wrote it (with a caveat to zabzonk). Actually, I would argue that any answer which does not mention "punch cards" is incomplete, if it mentions neither punch cards nor FORTRAN, it is wrong. I can say that this is definitively right because my parents both used punch cards on a regular basis (they started with FORTRAN 66 and 77), then migrated to Basic and COBOL in the 80's.
In the early days, most programs were entered with punch cards. The punch cards were usually entered in sequence, usually one instruction per card, with labels (JMP/JSR targets) being a separate instruction card.
To edit your program, you replaced the card.
Later implementations added an optional sequence number on the right end of the line, so that when/if they got out of order, they could be resequenced by an automated reader.
Fortran used both numeric target labels on the left (col 1-5) and left a reserved block on the right (73-80) for sequence or comment.
When BASIC was initially written, it was decided to move the sequence numbers to the left, into FORTRAN's label field, and to allow overwriting prior cards' memory footprint... as an editing mode. This was intended for the interactive dev environment, but worked just as well with cards. And cards were used in some early implementations for a variety of reasons.
Keep in mind: many computers had only a card-reader and printer interface right through the late 1970s. Even though interactive-mode BASICs were available, card-punched BASIC programs were frequently used. Since many were simply fed into the IDE, they worked exactly the same way, including needing a "Run" card at the end. In such cases, one could simply tack a correction card and another Run card onto the deck to rerun with a variation on some variable; likewise, in complex programs, simply adding a corrected line on a card before the Run card was adequate to edit out problems without spending precious time finding the errant card itself.
I like the robot church on Futurama, on the walls were written stuff like
10 SIN
20 GOTO HELL
On the Speccy you couldn't edit a line without the line number.
I find them very helpful when pairing. I don't have to point at a line when my pair has the keyboard, I can just say, "on line 74, shouldn't that really be getMoreBeer()?"
The original editor for DOS was a wonderful utility called edlin. You could only edit a single line. To make life even more interesting, in many versions of BASIC you could type lines out of order: line 10, 20, 30, 25, 5. Execution would be by line number, not by the order of appearance.
Does my question make sense? Using either Vim or Emacs, you come to understand that the interface exposes the program's internal representation: the state of the file you are editing lives in the buffer, while the file is the on-disk storage you can fill a buffer from or write a buffer to. All these things a programmer would know, but when just editing text, why is it exposed? Any newer editor just tells you "Here is a file. Edit it."
Yes, I understand the technical meanings, but that isn't my question. This is a question not even about if it is a good idea to do it or not. Vim and Emacs are our two oldest editors in common use today, and they share this behavior. I know of no new editor that does the same. When did editors stop doing this and why?
For starters, Emacs uses plenty of buffers that aren't associated with any file. Any time you open a directory, read your mail, open a terminal, compile a program, launch an interactive Python session, or connect to a database, you get a buffer. Hence, Emacs's basic unit of work is a buffer and not a file, and the same logic holds for Vim.
New applications that only edit files make no distinction because every screen or window or tab directly represents a file. More capable applications like Emacs and Vim are a lot more flexible in that respect.
OK, here's my weird philosophical answer:
because late binding between the buffer in the editor and the actual concrete thing you're working on gives the editing environment more flexibility and power.
Think this is out of date? One place where the idea is back with a vengeance is the browser, where you don't have a 1-1 correspondence between tabs and web pages. Instead, inside each tab you can navigate forwards and backwards between multiple pages. No one would try to make an MDI-type interface to the web, where each page had its own inner window. It would be impossibly fiddly to use. It just wouldn't scale.
Personally, I think IDEs are getting way too complicated these days, and the static binding between documents and buffers is one reason for this. I expect at some point there'll be a breakthrough as they move to the browser-like tabbed-buffer model where:
a) you'll be able to hyperlink between multiple files within the same buffer/tab (and there'll be a back-button etc.)
b) the generic buffers will be able to hold any type of data: source code, command line, dynamically generated graphic output, project outline, etc.
In other words, much of the Vim / Emacs model, except tweaked to be more in line with discoveries that browsers are making.
Because several buffers can show you different views of the same file. I do not know about other editors, but this is true of Emacs. And what exactly do you mean by "old"?
When applications started becoming used heavily by non-geeks who didn't want to trouble themselves with irrelevant detail.
I think the new editors quit doing it for the reasons you stated, that it is an abstraction that just gets in the way. Also most modern editors have unlimited undo, so the idea of the "buffer" is sort of implicit.
I guess I'm just an old fogey (in the die-hard Vim camp), but the other editing packages I use, such as MS Word or Open Office, preserve the distinction between the copy of the file that I'm editing and the last saved version. That is utterly invaluable; I don't want the editor to trample over my last good version until I'm ready for it to do so. Indeed, there's a decent chance (say one in a thousand) that I'll create a new file from the buffer I'm editing.
On the other hand, the ability to create a file image by reading multiple files (either several copies of the same file, or copies of several different files) is also useful. There are faintly similar facilities in other documents.
So, I could be missing the point - I don't know which editors you are referring to as removing the distinction. But I think that all editors preserve the distinction between the copy of the file that you're editing and the last version saved to disk.
Because developers of those editors didn't care to hide implementation details from users.