How to improve the completion speed of clang_complete?

I'm using the clang_complete plugin in Vim. The plugin can complete C++ STL code accurately, but its completion speed is unacceptable. Is there any way to improve clang_complete's completion speed?
Update: Yesterday I found this, and now omnicppcomplete basically meets my needs, so I decided to continue using omnicppcomplete.vim. Thank you for your answers!

I've heard that using libclang.so instead of the clang executable is much faster. However, for reliable completion you need to ignore errors, and I'm somewhat lost in the libclang API; it's really not that easy.
I don't know which version of clang_complete you are using, but there is a follow-up plugin of the same name that is still being updated. Its maintainer tried using libclang and pre-filled databases for speedup, and not only for completion but also for context-sensitive navigation. See here:
http://blog.wuwon.id.au/2011/10/vim-plugin-for-navigating-c-with.html
It does have some problems: it doesn't work correctly when there is something in the code that the clang compiler doesn't like. The old clang_complete could ignore such errors, but this version cannot (at least when you are using libclang; you are still free to use the old clang executable instead).
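For reference, switching clang_complete over to libclang usually comes down to a couple of vimrc settings; a minimal sketch, assuming libclang.so lives somewhere like the path below (adjust it for your system):

    " Use libclang directly instead of spawning the clang executable
    let g:clang_use_library = 1
    " Directory that contains libclang.so; this path is an assumption
    let g:clang_library_path = '/usr/lib/llvm/lib'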

If you're not using Clang 3.0 (rc) or trunk, you may be running slower code. Completion performance has recently been worked on, precisely because it was unacceptable, so just updating Clang might give you the boost you need.

Related

GNSS-SDR on Windows?

I know the answer might be negative, but is there any way to run GNSS-SDR on Windows instead of Linux/Mac OS?
I already use it on Linux, but I have wondered whether it can be done.
Only related answers, please.
It's possible; I'm doing exactly that right now. The problem is that some code fragments are written for Linux, and the build system and library-search logic assume Linux as well. For a first pass I had to cut out the TCP data transfer and heavily correct some of the CMake files. I build it with the help of MSYS2 under MinGW. The biggest problem is linking the files; at this stage I can build most of the individual components. I also had to build all the libraries by hand. With my limited experience porting programs between systems, it was hard.
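For a rough idea of the shape of such a build, here is a sketch only; the package names and the generator choice are my assumptions, not details from the answer above:

    # Inside an MSYS2 shell: install a MinGW toolchain and common dependencies
    pacman -S mingw-w64-x86_64-toolchain mingw-w64-x86_64-cmake mingw-w64-x86_64-boost
    # Configure and build; expect to patch CMake files that assume Linux
    mkdir build && cd build
    cmake -G "MSYS Makefiles" ..
    make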

Handling autoconf with Android after NDK16

I'm trying to update an existing configuration we have; we are cross-compiling for a number of targets, and the question here is specifically about Android. More specifically, we are building code using cmake and the hunter package manager. However, we are building ICU through a step that uses autoconf/configure, called from cmake. Not sure that matters specifically, except that we have less control over how configure is invoked than is generally the case.
OK: we have a version that builds against an old NDK, but I am updating and have hit a problem identified by https://android.googlesource.com/platform/ndk/+/master/docs/UnifiedHeaders.md: with NDK16 and later, the value of the sysroot parameter needs to differ between compilation and linking. As it stands, the configure script tries to build a small program, conftest.c, and that program fails to link. Manually I can compile the code in two stages, using -c and then linking the resulting .o, but that is not what configure is trying to do.
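For the record, the two-stage workaround looks roughly like this (paths, API level, and target are illustrative; the split sysroot is the point the UnifiedHeaders document makes):

    # Compile against the unified headers...
    $CC --sysroot=$NDK/sysroot \
        -isystem $NDK/sysroot/usr/include/arm-linux-androideabi \
        -c conftest.c -o conftest.o
    # ...but link against the platform sysroot for the target API level
    $CC --sysroot=$NDK/platforms/android-21/arch-arm conftest.o -o conftest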
Now, the reality is that when I build this code I don't actually need to link it; I am generating a library which is used elsewhere. However, that is not currently how configure sees it.
I may look at redoing the configuration script to check only that the code compiles when cross-compiling. However, I am curious to know whether anybody has managed to handle this sort of thing while keeping the existing config files and just changing the parameters with which the scripts are called.
When r19 goes stable this problem will go away on its own (https://github.com/android-ndk/ndk/issues/780), but since that's still in beta it's not a good solution just yet.
Prior to r19 (this isn't really unique to r16+; it has always been the case and was merely asymptomatic previously), autoconf builds should be done with a standalone toolchain.
However, you should not use a standalone toolchain for CMake, so odds are something about your configuration will need to change until r19 is released. Depending on the effort involved, it may make sense to stay on r15 until r19 is available.
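A minimal sketch of the standalone-toolchain route for the autoconf part (arch, API level, and install directory are assumptions; adjust them to your targets):

    # Create a standalone toolchain with a single, consistent sysroot
    $NDK/build/tools/make_standalone_toolchain.py \
        --arch arm --api 21 --install-dir /tmp/android-toolchain
    # Then let configure use it; compile and link now agree on the sysroot
    export PATH=/tmp/android-toolchain/bin:$PATH
    ./configure --host=arm-linux-androideabi CC=arm-linux-androideabi-clang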

Is haskellmode-vim dead?

I just disabled haskellmode-vim from my plugin configurations. Basically this was for three reasons:
I prefer neocomplcache for my auto completion needs.
Apparently it hasn't been updated since 2010.
It doesn't seem to be compatible with cabal.
I hope that someone jumps in and points out that I have just misconfigured the whole thing (as in, I only configured the most basic thing in the readme). To make this a question:
Is it possible to setup haskellmode such that ...
... it gets its configuration from cabal?
... it doesn't set 'completefunc', so that neocomplcache still works?
Author here. I haven't had much chance to work with Haskell since 2010, so haskellmode for Vim has not been developed since then, either.
I used to think someone must have written something better since, or that my old code probably doesn't work with newer releases, but every few months someone mails me to say that they are still using this plugin and it still works for them (which is a mix of pleasant surprise and an uncomfortable reminder of the lack of development/maintenance).
Some of them have created clones on GitHub (last time I checked there were about a dozen), usually to accommodate the latest fashion in Vim plugin management (there may have been small hacks to make it build via cabal, but I recall no complete integration). Vim gives you a lot of control over the order of plugin loading if you want another plugin to override the completefunc; see the sketch below.
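One way to exploit that load order, as a sketch: files in after/ftplugin run after a plugin's own ftplugin, so they get the last word on buffer-local options. The neocomplcache function name below is an assumption; check what your setup actually uses with :verbose set completefunc?

    " ~/.vim/after/ftplugin/haskell.vim
    " Runs after haskellmode's ftplugin, so this assignment wins.
    " The function name is an assumption about neocomplcache.
    setlocal completefunc=neocomplcache#manual_complete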
I still expect haskellmode-vim to drop out of usage sooner or later. However, if someone were to step forward willing to take on maintenance of one of the GitHub clones, that would be fine, too.
As long as credit is given, and modified plugins are marked as such, I'm also happy to see ideas from haskellmode-vim used in other plugins (there used to be a happy exchange of such ideas between vim and emacs haskell plugins), so more modern and active plugins could absorb any missing features from haskellmode-vim.

Will writing C in both Windows and Linux cause compiling problems?

I work from 2 different machines. One is Windows and the other is Linux. If I alternately work on the same project but switch between both OSes, will I eventually run into compiling errors? I ask because maybe there are standards supported by one but not by the other.
That question is a pretty broad one, and the answer depends, strictly speaking, on your toolchain. If you use the same toolchain on both systems (e.g. GCC/MinGW or Clang), you minimize the chance of this class of errors. If you use Visual Studio on Windows and GCC or Clang on the Linux side, you'll run into more issues, if only because some of the headers differ. Once your program leaves the realm of strict ANSI C (C89), you'll be on your own.
However, if you aren't careful you may run into a lot of other, more mundane errors, such as the compiler on Linux choking on the line endings if you didn't tell your editor on the Windows side to use Unix line endings.
Also keep in mind that if you actually want to cross-compile, GCC may be the best choice, which makes the first part of my answer a moot point: GCC is a proven choice on both ends. And given your question, it's unlikely that you are trying to write something like a kernel-mode driver, which would be fundamentally different.
That will only be a problem if your application uses some platform-specific API.
It is entirely possible to write code that compiles on both platforms without issues, but not without some difficulty. Compilers let you use non-standard extensions, and it's often hard to build fancier user interfaces (even purely textual ones), because as soon as you want to do more than "read a line of text as it is entered in a shell", you are in non-standard territory.
If you do find yourself needing more than the standard C library can do, make sure you isolate those parts of the code into a separate file (or a couple of files: one for Linux/Unix-style systems and one for Windows), as in the sketch below.
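A minimal sketch of that isolation; the function and the use of system() are illustrative only, not a recommendation from the answer:

    /* clear_screen.c: one platform-specific operation hidden behind a
       portable interface; the rest of the program just calls clear_screen() */
    #include <stdlib.h>

    #ifdef _WIN32
    void clear_screen(void) { system("cls"); }   /* Windows console */
    #else
    void clear_screen(void) { system("clear"); } /* Unix-style terminals */
    #endif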
Using the same compiler (gcc) on both systems helps avoid problems of the kind "compiler B doesn't compile code that works fine in compiler A".
But that's far from an absolute necessity. Just make sure you compile the code on both platforms, and with all of your "supported" compilers, often enough that you discover "it's not working on the other system" before you have dug a very deep hole that is hard to climb out of. It certainly helps to have (at least) a virtual machine running the other OS, so you can easily try both variants.
Ideally, you want an automated system set up such that when you change the code [and feel that the changes are "complete"], it automatically gets built on both platforms and with all the compilers you want to support. And, if possible, automatically tested too!
I would also seriously consider using version control. That way, when something breaks on one side or the other, you can go back and look at what the code looked like before it stopped working, and (hopefully) find the reason it broke much more quickly than by guessing: "Hmm, I think it's the change I made to foo.c, let's take that out... no, not that one; OK, how about the change here..." With version control you can instead say: "OK, version 1234 doesn't work; let's try version 1220 - OK, that works. Now try 1228 - still works, so the change is between 1229 and 1234. Try 1232 - ah, it's broken..." No editing of files, and you can still go to any other version you like with very little difficulty. I have used Mercurial quite a bit, git a little, some Subversion, and worked on a project in Perforce for a few years. All of these are good; personally, I think I prefer Mercurial.
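With git, for instance, that binary search is even automated as git bisect; a sketch, with placeholder revision identifiers:

    git bisect start
    git bisect bad HEAD      # the current version is broken
    git bisect good v1220    # the last version known to work
    # git checks out a commit halfway in between; build, test, then repeat
    git bisect good          # ...or 'git bisect bad', until the culprit is found
    git bisect reset         # return to where you started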
As a side effect, most version control systems also handle filenames and line endings more sanely than doing it manually.
If you combine your version control system with an automated build-and-test system such as Jenkins, you can automate nearly everything. Jenkins is free, runs on both Windows and Linux, and can automatically build and test your code whenever you submit it to version control.
It will not create a problem as long as you recompile the source code on each OS. If you want to run a compiled file generated on Windows (.exe or .obj) on Linux, or vice versa, that will definitely be a problem; it isn't possible. But you can move your source code (.c/.cpp files) to either OS. Different header files can sometimes cause problems too, so take care of that as well. Best practice is to use a single OS for your entire project and avoid multiple OSes unless it is absolutely necessary.

gccsense vs. clang_complete

I've been using omniCppComplete + ctags for a while, and want to further improve code completion.
According to the suggestions here [1], gccsense and clang_complete seem to be the alternatives. However, I am not sure which one is better. Any idea of their performance?
Thanks!
Update: After trying clang_complete, I found the completion speed unacceptably slow.
I then tried it with libclang.dylib, which speeds things up a lot but still feels laggy.
I think I'll stick with ctags for now.
You should probably use clang_complete, not gccsense.
The main point here is the architecture of the two. The idea behind both solutions is very similar: you can't get decent C++ completion without access to the compiler's internal information (the Abstract Syntax Tree), and gcc doesn't provide sufficient interfaces for that. The implementations differ in how they access this information: gccsense is a kind of hack, a custom build of gcc capable of storing the necessary information and later providing it to the plugin, while clang_complete goes the other way and uses an alternative compiler, clang, one of whose main design goals was precisely to make the AST easily accessible to external tools.
So with gccsense you'll need to compile your code with a sort of custom gcc, which is already a little outdated (gccsense uses gcc 4.4) and will constantly need developer support in the future. By contrast, clang_complete doesn't depend so tightly on the clang compiler; it simply uses it as an external tool.
As for performance: again, clang was designed to be faster than gcc, and it is. clang_complete can be slightly slower on Windows than on MacOS/Linux; gccsense, however, can't even be compiled for Windows at this time.
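To make the "AST easily accessible to external tools" point concrete, here is a minimal sketch against libclang's C API (the source file name is a placeholder; link with -lclang):

    /* parse.c: parse a file with libclang and count its diagnostics */
    #include <clang-c/Index.h>
    #include <stdio.h>

    int main(void) {
        CXIndex index = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            index, "hello.c", NULL, 0, NULL, 0, CXTranslationUnit_None);
        if (tu) {
            printf("parsed with %u diagnostics\n", clang_getNumDiagnostics(tu));
            clang_disposeTranslationUnit(tu);
        }
        clang_disposeIndex(index);
        return 0;
    }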
GCCsense can be built on Windows.
See my patch against gcc 4.5.2 here:
http://forums.codeblocks.org/index.php/topic,13812.msg94824.html#msg94824
I admit that gccsense is just a hack on top of gcc, while clang has had a much better design from the beginning.
I hope someone can improve gcc/gccsense.
