I want to change the way text is represented internally in a text editor, e.g. vim

I want to use an algorithm to reduce the memory used to save a particular text file. I don't really know how text is stored, but I have an idea in mind.
Would it be better to extend an open-source text editor (if so, which one) or to write a text editor myself?
It would be nice if someone could also give me a link or tutorial on the basics of how text editors work and the way data is stored.
Update
To clarify, what I wanted to do is: instead of saving duplicates of a word, build a hash table and store the addresses where each word needs to be placed. That way I wouldn't be storing the duplicates.
Thanks everyone, I get what you are all saying: this would have become specific to one particular text editor. I never realized that.
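For the record, the idea is essentially string interning. A minimal sketch of what the table could look like, written here in Haskell and not tied to any editor:

import qualified Data.Map.Strict as M

-- Store each distinct word once; the body of the text becomes a list
-- of indices into the word table instead of repeated copies of words.
intern :: String -> (M.Map String Int, [Int])
intern doc = go M.empty [] (words doc)
  where
    go table acc []       = (table, reverse acc)
    go table acc (w : ws) =
      case M.lookup w table of
        Just i  -> go table (i : acc) ws
        Nothing -> let i = M.size table
                   in go (M.insert w i table) (i : acc) ws

As the answers below point out, though, a file saved in this table-plus-indices layout is no longer a plain text file that other programs can open.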

I want to use an algorithm to reduce the memory used to save a particular text file
If you did this you would no longer have a text editor; instead you would have created some sort of binary file editor.
The whole point of the text file format is that it is universal, meaning any text file can be opened in any other text editor.

Emacs handles compression transparently. Just create a text file with a .gz extension. Emacs will automatically compress the contents of the file during the save operation, and decompress them when you open the file next time.

Text is basically stored as-is, i.e., every character takes up a byte or two (for wide characters), and no conversion is done on it when it's saved. An end-of-file character or something similar might be added, though.
Don't try coming up with your own algorithm to compress these files. That's why zip files and other archives were created; they're really good at compressing text. If you wanted to add this feature to your text editor, you'd have to add some sort of post-save hook that zips the file, and a hook on the open command that unzips it, unless you wanted to do it by hand every time.
Don't try writing a text editor yourself from scratch, unless (maybe) you're writing Notepad. Text editors with syntax highlighting aren't very easy to make, even with the proper libraries. I'd say write a plugin for something like Visual Studio or what have you, or find an open-source text editor.
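If you did want the save/open hook approach, the shape of it is small. A sketch in Haskell using the zlib package's gzip bindings (the editor-integration part depends entirely on your editor's plugin API):

import qualified Data.ByteString.Lazy as BL
import Codec.Compression.GZip (compress, decompress)

-- Post-save hook: gzip the buffer contents before they hit the disk.
saveCompressed :: FilePath -> BL.ByteString -> IO ()
saveCompressed path = BL.writeFile path . compress

-- Open hook: gunzip the file contents before handing them to the editor.
loadCompressed :: FilePath -> IO BL.ByteString
loadCompressed path = decompress <$> BL.readFile path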

Related

How can I convert a text file to UCS-2 LE, from whatever the default is?

I am looking for a way to convert or save a text file in the UCS-2 LE format, specifically without a BOM... I guess.
I actually have zero knowledge of what any of that means, but I know I need it because of this wiki page on what I am trying to accomplish: https://developer.valvesoftware.com/wiki/Closed_Captions
In other words:
This is for a specific game engine, "Source Engine," which requires the format in order to compile in-game closed captions for sounds.
I have tried saving the file in Notepad++ using the "UCS-2 LE BOM" option under the encoding menu. There is no option for just "UCS-2 LE", however, and because of this the captions cannot be compiled for the game engine. I need to save without a BOM, "I guess" (because, again, I don't know what I'm talking about and am assuming, based on logical conclusions, that I need to not have a BOM, whatever that actually means).
I would like to know about a way to either save a txt file in that encoding format, or a way to convert one.
In my specific case, it appears that my problem boils down to "the program is weird."
What I mean by this is that Notepad++ actually does save in the correct format, but I failed to realize that because of a quirk in the caption compiler: it only works if you drag the file onto it, not via the command line as I previously thought.
I will accept this as the answer when I am allowed to in 2 days.
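For anyone who still needs to do the conversion programmatically: for characters in the Basic Multilingual Plane, UCS-2 LE is byte-identical to UTF-16 LE, so re-encoding without writing a BOM is enough. A minimal sketch in Haskell (the file names are placeholders, and the input is assumed to be UTF-8):

import qualified Data.ByteString as BS
import qualified Data.Text.Encoding as TE

-- Re-encode a UTF-8 text file as UTF-16 LE; encodeUtf16LE writes no BOM,
-- which for BMP-only text is exactly "UCS-2 LE without BOM".
main :: IO ()
main = do
  bytes <- BS.readFile "captions_english.txt"
  BS.writeFile "captions_english_ucs2.txt"
               (TE.encodeUtf16LE (TE.decodeUtf8 bytes))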

Data structure for a grammar checker for LaTeX sources

Let me start by acknowledging that this is a rather broad question, but I need to start somewhere and reduce the design space a bit.
The problem
Grammarly is an online app that provides grammar and spell checking as a browser plugin. Currently there is no support for text editors, nor for LaTeX sources. Grammarly is apparently often confused when forced to deal with annotated text or text that is formatted (e.g. contains wrapped lines). I guess many people could use that tool when writing up scientific papers or pretty much any other LaTeX document. I also presume that other solutions exist or will soon pop up that work similarly.
The solution
In principle, it is not necessary to support Grammarly directly in, e.g., emacs. It suffices to provide a convenient interface to check multiple source files at once. To that end, a simple web app could walk through a directory, read all .tex sources, remove all formatting and markup, and expose the files as an HTML document. The user could open that document, run Grammarly, and apply any fixes. The app would then have to take the corrected text and reapply formatting, markup, etc. to save the now-fixed source file.
The question(s)
While it is reasonably simple to create such a web application, there are other requirements to consider. For LaTeX parsing (up to "standard" syntax), a library like HaTeX could deal with parsing and interpretation. But the process of editing needs some thought: presuming that the removal of formatting can be implemented by only deleting content, it should be possible to take a correction as a diff and reapply it to the formatted document.
In Haskell, is there a data structure for text editing that supports this use case? That is, a representation of text that can store deletions, find diffs, undo deletions, and move a diff accordingly? If not in Haskell, does something like this exist somewhere else?
Bonus question 2: What is the simplest (as in lines of code required) web framework in Haskell to set up such a web app? It would serve one HTML document and accept updated versions of the text files. No database is required.
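On the bonus question: Scotty is usually the first name mentioned when minimal line count is the goal. A sketch of the two routes such an app would need (the route and file names here are invented; "document.txt" stands in for the text extracted from the .tex sources):

{-# LANGUAGE OverloadedStrings #-}
import Web.Scotty
import Control.Monad.IO.Class (liftIO)
import qualified Data.ByteString.Lazy as BL
import qualified Data.Text.Lazy.IO as TLIO

main :: IO ()
main = scotty 3000 $ do
  get "/" $ do
    doc <- liftIO (TLIO.readFile "document.txt")
    html doc                 -- a real app would wrap this in proper HTML
  post "/save" $ do
    updated <- body          -- raw request body: the corrected text
    liftIO (BL.writeFile "document.txt" updated)
    text "saved"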
Instead of removing and then adding the text formatting, you could parse the source text into a stream of annotated tokens:
data AnnotatedChar = AC
  { char       :: Char
  , formatting :: String
  }
The following source:
Is \emph{good}.
would be parsed as:
[AC 'I' "", AC 's' "", AC ' ' "\\emph{", AC 'g' "", ...
Then, extract only the chars from this list, send them to Grammarly, and get back the result. Now, diff the list of annotated characters against the list of characters you got back from Grammarly. This way, you only have to deal with a list of characters, but you keep the annotations.
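A sketch of that diff-and-merge step, using getDiff from the Diff package (the policy of giving inserted characters an empty annotation is a naive choice for illustration):

import Data.Algorithm.Diff

data AnnotatedChar = AC   -- as defined above
  { char       :: Char
  , formatting :: String
  }

-- Merge the corrected plain text back into the annotated stream:
-- unchanged characters keep their annotations, deleted ones drop out,
-- inserted ones get an empty annotation.
reannotate :: [AnnotatedChar] -> String -> [AnnotatedChar]
reannotate annotated corrected =
    go (getDiff (map char annotated) corrected) annotated
  where
    go (Both _ _ : ds) (a : as) = a : go ds as       -- unchanged
    go (First _  : ds) (_ : as) = go ds as           -- deleted by checker
    go (Second c : ds) as       = AC c "" : go ds as -- inserted by checker
    go _               _        = []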

How can I edit a DXF in node.js?

I'd like to make a custom lasered label from a user's input on a website. I have a template DXF file and I'd like to replace placeholder text with the user input. My problem is that the DXF file format is very hard to read in its text form. Is there any way to make sense of the numeric data? If not, are there any other formats (SVG, etc.) that would be easier to work with?
EDIT: The reason I've found it unreadable as text is that the program (SolidWorks) converted the text to curves. At this point I'm trying to figure out how to prevent that.
AutoDesk was nice enough to document DXF syntax in great detail. Spend a couple of hours understanding the documentation from the link below, and I think you will find it quite easy to parse and edit using code.
To just replace some placeholder text, it should be as simple as reading the DXF file into a string (a DXF file is no different from a txt file), performing a text-replace operation, and saving it back to file. Just make sure that your placeholder text is very unique and is not contained in any of the keywords in the document below (otherwise your DXF file will get corrupted). Something like "PlaceHolderText" will do the trick.
http://images.autodesk.com/adsk/files/autocad_2012_pdf_dxf-reference_enu.pdf
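The question asks about node.js, but the replace step is ordinary string manipulation in any language; a sketch of the idea (shown in Haskell here, with made-up file names; the same read/replace/write applies in node.js):

import qualified Data.Text as T
import qualified Data.Text.IO as TIO

-- A DXF is plain text, so swapping the placeholder for the user's
-- input is an ordinary substring replacement.
main :: IO ()
main = do
  template <- TIO.readFile "template.dxf"
  TIO.writeFile "label.dxf"
                (T.replace "PlaceHolderText" "USER INPUT HERE" template)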
Edit: More Info
I do a lot of work with AutoDesk Inventor, which is in direct competition with SolidWorks, so they are effectively the same tool. We were faced with a similar problem of needing to place text onto sheet-metal flat-pattern DXFs that came out of Inventor in order to identify the part, but Inventor simply could not do it (see, exactly the same!).
One of our developers had the idea to place a very precise geometry punch onto the flat pattern. After the DXF was generated, he wrote some code that parsed the DXF file and replaced the geometry with a text entity. More specifically, we used a triangle with sides having each length defined to something like the 7th decimal place. You can then use one of the vertices of the triangle to position the text, including rotation.
This process would be automatic, so once you write the code with the help of the document above (which won't take that long), it will just work. If your engraver can handle text the way you want it, I'd say this is a very good solution. We generate hundreds of parts every day using this code. Hope this helps.

Linux pdftotext returns a blank text file

I've used a Linux command to convert a list of PDF files to text.
Command:
pdftotext -htmlmeta input.pdf
This works well for most of my files, but for a small number of them it returns a blank text file.
The unsuccessful PDF files were not encrypted, not secured by a user/password, and they were not read-only.
Converting PDFs to text is not a well-defined process. It can work great or not at all, depending on the PDF input.
Why is this? Because a PDF's task is mainly to represent the visual appearance of a document, not its textual contents. PDFs can be everything from pure text with positional information up to pure graphics of the glyphs of the letters of the text. In the latter case one would need to run OCR on the input in order to recover text information. This is not done by tools like pdftotext.
Sometimes the text in the PDF is scattered throughout the file, e.g. because first all standard-font letters are mentioned in the PDF, and then, later in the file, all the italics-font letters are mentioned (of course with positional information, so a reader of the optical representation won't notice this, even if standard and italics are mixed throughout the text on the page). Rearranging this mess into fluent text is a major task that not very many converters are capable of.
So I guess all you can do is try some more PDF-to-text converters (some are better than others, and some are better just for some specific input) or see whether you can get the text from a source other than the PDF files.

Serialized Printing Method

I am looking for a method by which I can print one document and have a field that is incremented on each copy printed. I currently run Linux, so bash in concert with several programs might be the way to go, but I'm just not sure where to start.
I have a document used in our business that is currently hand-stamped for serialization... We would like to simply print the copies, but can't find a method by which to increment a specific field. I would like to use either a PDF or an ODF/ODT for the document.
Thanks for any help you can give!
How is the document produced in the first place?
If you master that process, you could certainly add serialization at that level. For instance, if you are using LibreOffice, you could do it in LibreOffice. If you are using a text formatter (like LaTeX, Lout, ...), just emit the formatting instructions (e.g. the .tex or .lout source file) with some unique counter (perhaps simpler to do in some scripting language like Python or OCaml).
Then run the relevant tool to get a .pdf file.
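A sketch of that emit-then-format loop, here in Haskell (the template is an invented minimal document; any scripting language works just as well):

import System.Process (callProcess)
import Text.Printf (printf)

-- An invented minimal LaTeX document carrying the serial number field.
template :: Int -> String
template n = unlines
  [ "\\documentclass{article}"
  , "\\begin{document}"
  , printf "Serial No. %06d" n
  , "\\end{document}"
  ]

-- Emit one .tex source per serial number and run pdflatex on each.
main :: IO ()
main = mapM_ buildOne [1 .. 50]   -- e.g. a batch of 50 copies
  where
    buildOne :: Int -> IO ()
    buildOne n = do
      let base = printf "label-%06d" n :: String
      writeFile (base ++ ".tex") (template n)
      callProcess "pdflatex" [base ++ ".tex"]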
