The CSS3PIE .htc filetype in Perforce seems to cause issues when requesting the latest version. Changing the filetype from Text to Binary directly in Perforce fixed the problem.
Out of curiosity, does anyone have an idea why the PIE.htc format causes these problems? Could it be the encoding, the filetype, or attributes? If the PIE.htc file type is set to text in Perforce (as happens by default), it won't work when you submit it.
I don't know what a '.htc' file is, but if you want all your '.htc' files to be treated as binary files by Perforce rather than as text files, you can use the Perforce 'typemap' feature: run 'p4 typemap'.
This blog: http://blogs.encodo.ch/news/view_article.php?id=79 has a nice writeup.
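For reference, a minimal sketch of what the typemap entry might look like; the //... depot path pattern is only an example and should be adjusted to your own depot layout. Running p4 typemap opens the spec in an editor, where you add a line under the TypeMap field:

TypeMap:
        binary //....htc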
Generally, the reason you do this is that Perforce normalizes the line-ending characters of text files, so that line ends are a single line-feed on Unix machines but carriage-return/line-feed pairs on Windows machines.
And if your file contains 0x0A bytes, but it is NOT textual data and those bytes do NOT indicate line ends, then turning them into CRLF pairs will corrupt the file.
So then binary is the better filetype for such files.
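To see what that normalization does to raw bytes, here is a small illustration (not Perforce itself, just the same LF-to-CRLF rewrite, using the common unix2dos and xxd tools):

printf 'ABC\nDEF\n' | xxd
printf 'ABC\nDEF\n' | unix2dos | xxd

The second hexdump shows 0d 0a wherever the first had a bare 0a; if those 0a bytes were part of image or font data rather than line ends, the file is now corrupt.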
I have a document with a non-standard encoding. Some of its bytes aren't valid UTF-8, etc. However, a lot of the section starts/ends still line up, so I would still like to do some basic things with this document in vim. One of those things is copy and paste. However, this seems to fail when my document isn't in a standard encoding.
I can't provide my original document, but one can easily experiment with this using other arbitrary files. For example, if you open a .jpg file in vim, then (y)ank all of it, (P)aste it into a new file, and save that file as a .jpg, it won't open. Somewhere in the yank-paste pipeline, data got lost. What is going on with my vim clipboard, and is there a way to work around this?
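Spelled out as concrete commands (image.jpg and copy.jpg are placeholder names): open the JPEG as a regular, non-binary buffer, yank the whole buffer, paste it into a new buffer, and write that buffer out; the resulting copy is no longer a valid JPEG:

vim image.jpg
ggyG
:enew
P
:w copy.jpg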
What is the best way to view an EBCDIC formatted file in VIM?
On a mainframe
First of all, if Vim was compiled and is running on a system whose default encoding is EBCDIC (e.g. an IBM mainframe with an ANSI C compiler), then Vim will open EBCDIC files in the system's code page by default. Such an instance of Vim will have:
has("ebcdic")
evaluating to 1. And when invoked with --version Vim will print:
+ebcdic
Instances of Vim not compiled in an EBCDIC environment will never evaluate has("ebcdic") to true. This feature of Vim is needed because other features behave differently in a purely EBCDIC environment.
Not on a mainframe
Yet, most systems today do not use EBCDIC code pages. For the situation where a file encoded in an EBCDIC code page needs to be edited in Vim on a more popular system, Vim uses the iconv library. In essence, to be able to open a file encoded in an EBCDIC code page Vim needs to be compiled with iconv support. iconv support can be tested by evaluating:
has("iconv")
Or searching for the
+iconv
string in the output of vim --version.
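From a shell on a Unix-like system, the same check can be done without opening Vim:

vim --version | grep '+iconv'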
EBCDIC has several code pages, and Vim will only be capable of using the code pages supported by the iconv library it was compiled against. To check which code pages are available you can use the iconv utility that comes together with the iconv library:
iconv --list
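The list is long, so it helps to filter it for the EBCDIC entries:

iconv --list | grep -i ebcdic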
Now, let's assume that we have a file called myfile encoded with the EBCDIC-US code page (also called the EBCDIC-037 code page) and that the iconv installed on the system supports this code page.
Before opening the file in Vim we need to set Vim's encoding to utf-8, in ~/.vimrc we need:
set enc=utf-8
:h fenc advises that the encoding must be set to utf-8 if any file conversion (through iconv) is performed. Otherwise data loss is possible when writing the file back.
Now we open the file in Vim with vim myfile and see mangled characters. That is expected; we now need to perform the conversion, using iconv, with:
:e ++enc=EBCDIC-US
Vim will now display the file in utf-8 but will save the file in EBCDIC-US, both accomplished using iconv conversions on the fly.
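As a quick sanity check (not part of the original recipe), the buffer-local 'fileencoding' should now reflect the EBCDIC code page while 'encoding' stays utf-8:

:set fileencoding? encoding?

This should print something like fileencoding=ebcdic-us and encoding=utf-8.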
Closing notes
The mapping between IBM's naming of code pages:
EBCDIC-037
EBCDIC-273
EBCDIC-500
...
and iconv's names:
EBCDIC-US
EBCDIC-AT-DE-A
EBCDIC-BE
...
is often non-trivial. Yet, if the display encoding (enc) is set to utf-8, there should be no issue in trying different code pages with
:e ++enc=EBCDIC-US
:e ++enc=EBCDIC-AT-DE-A
until the correct conversion is found.
Extra note: consider using vi.SE if you have more questions related to Vim.
I have been in charset hell for days, and vim somehow always shows the right charset for my files, even when I'm not sure what they are (I'm dealing with files with identical content encoded in both charsets, mixed together).
I can tell which encoding I'm in by inspecting the ü (u-umlaut) character in UTF-8 vs ISO-8859-1, but I don't understand how vim figures it out - in those character sets only the 'special characters' really look any different.
If the encoding/charset information is recorded somewhere else, I would love to know about it.
The explanation can be found under :help 'fileencodings':
This is a list of character encodings considered when starting to edit
an existing file. When a file is read, Vim tries to use the first
mentioned character encoding. If an error is detected, the next one
in the list is tried. When an encoding is found that works,
'fileencoding' is set to it. If all fail, 'fileencoding' is set to
an empty string, which means the value of 'encoding' is used.
So, there's no magic involved. When there's a Byte Order Mark in the file, that's easy. Otherwise, Vim tries the listed encodings in order: UTF-8 has strict rules about which byte sequences are valid, so a file that isn't valid UTF-8 fails that attempt and falls through to the next entry (typically latin1, which accepts any byte sequence). You can influence the list with that option; e.g. Japanese users will probably include something like sjis if they frequently edit files in that encoding.
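A hypothetical example of such a list in a vimrc (the exact entries are just an illustration):

set fileencodings=ucs-bom,utf-8,sjis,latin1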
If you want a more intelligent detection, there are plugins for that, e.g. AutoFenc - Tries to automatically detect and set file encoding.
This could be a problem with a Windows-saved file being opened in Unix/Linux, and I am not quite sure how to solve it.
When I open a file which was previously saved by another developer using Windows, my vim buffer sometimes shows
Trying char-by-char conversion...
in the middle of my file, and I am unable to edit the code/text/characters right below this message in my buffer.
Why does it do that and how do I prevent this from happening?
This message comes from the Vim function mac_string_convert() in src/os_mac_conv.c. It is accompanied by the following comment:
conversion failed for the whole string, but maybe it will work for each character
It seems the file you're editing contains a byte sequence that cannot be converted to Vim's internal encoding. It's hard to offer help without more details, but the following often help (a combined sketch follows the list):
Ensure that you have :set encoding=utf-8
Check :set fileencodings? and ensure that the encoding of the file you're trying to open is covered, or explicitly specify an encoding with :edit ++enc=... file
The 8g8 command can find an illegal UTF-8 sequence, so that you can remove it, in case the file is corrupted. Binary mode :set binary / :edit ++bin may also help.
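Tying these together, a session might look roughly like this (cp1252 and broken.txt are placeholder names, not taken from the question): check or set the internal encoding, inspect 'fileencodings', reopen the file with an explicit encoding, use 8g8 to jump to the next illegal UTF-8 byte sequence, and as a last resort reopen in binary mode:

:set encoding=utf-8
:set fileencodings?
:edit ++enc=cp1252 broken.txt
8g8
:edit ++bin broken.txt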
I'm using Vim 7.3 on WinXP. I use XML files that are generated by an application at my work, which writes them with UCS-2le encoding. After reading several articles on encoding on the vim wiki, I found the following advice, namely to set my file encodings in vimrc:
set fileencodings=ucs-bom,utf-8
The file in question has FF FE as its first two bytes (confirmed by viewing it in HxD), but Vim doesn't open it properly. I can open my UCS-2le files properly with this in my vimrc:
set fileencodings=ucs-2le, utf-8
But now my UTF-8 files are a mess!
Any advice on how to proceed? I typically run Gvim without behave mswin (if that matters). I use very few plugins. My actual vimrc settings regarding file encodings are:
set encoding=utf-8
set fileencodings=ucs-bom,utf-8,ucs-2le,latin1
The entry for ucs-2le in the third spot seems to make no difference. As I understand it, the first setting (set encoding) is the encoding Vim uses internally in its buffer, while the second (set fileencodings) deals with the encoding of the file when vim reads and writes it. So, it seems to me that since the file has a byte order mark, ucs-bom as the first entry in fileencodings should catch it. As far as I can tell, vim doesn't recognize that this file uses two bytes per character.
Note: I can/do solve the problem in the meantime by manually setting the file encoding when I open my ucs-2le files:
:edit ++enc=ucs-2le
Cheers.
Solved it. I am not sure exactly what did it, but the fixes noted below now work perfectly for reading and writing my UCS-2 files - though for some unknown reason not immediately (did I just restart Vim?). I could try reversing the fixes to see which one was the critical change, but here's what I've done:
Put the AutoFenc.vim plugin in my plugins folder (it automatically detects the file encoding).
Added iconv.dll and a new version of libintl.dll to my vim73 folder (from vim.org).
Edited vimrc as below
vimrc now contains (the last bits just make it easier to see what's happening with file encodings by showing the file encoding in the status line):
"use utf-8 by default
set encoding=utf-8
set fileencodings=ucs-bom,utf-8,ucs-2le,latin1
"always show status line
set laststatus=2
"show encoding in status line http://vim.wikia.com/wiki/Show_fileencoding_and_bomb_in_the_status_line
if has("statusline")
set statusline=%<%f\ %h%m%r%=%{\"[\".(&fenc==\"\"?&enc:&fenc).((exists(\"+bomb\")\ &&\ &bomb)?\",B\":\"\").\"]\ \"}%k\ %-14.(%l,%c%V%)\ %P
endif
And all is well.