I am using Node.js Tools for Visual Studio.
When I open a project, it takes some time to load because of the Node.js analysis process.
Another problem is that .ntvs_analysis.dat is growing larger and larger.
What is it, and do I need it?
To my understanding, the NTVS extension analyzes your code to provide IntelliSense support. The result of this analysis is stored in .ntvs_analysis.dat. However, it doesn't only analyze your code but also all installed node_modules and their dependencies (and theirs, and theirs...). So installing more modules will make your .ntvs_analysis.dat grow really fast.
There is an open issue on GitHub about this: https://github.com/Microsoft/nodejstools/issues/88. The file is getting really big for some people, including myself.
One proposed solution in the discussion is to reduce the depth of scanned folders. According to the discussion, turning off IntelliSense would also help keep the file smaller.
I'm looking for an open-source tool or an NPM package which can be run using Node (for example, by spawning a process and calling a command-line tool).
As a result, I need a PDF file converted/split into images, where each page of the PDF becomes an image file.
I checked
https://npmjs.com/package/pdf-image -- seems to be last maintained 3 years ago.
same for https://npmjs.com/package/pdf-img-convert
Please advise which package/tool I can use.
Thanks in advance.
Be aware that, generally, https://npmjs.com/package/pdf-img-convert is frequently updated and thus the better of the two, but it has 3 pending pull requests, so review whether they impact your usage. (Note that https://npmjs.com/package/pdf-image has a significantly heavier set of dependencies that can break, and also a much bigger list of pending pull requests, confirming your assumption: the older it is ....)
However, the current pdf-img-convert 1.0.3 has a breaking dependency that needs a manual correction, due to a change in Mozilla naming earlier this year from es5 to legacy.
See https://github.com/olliet88/pdf-img-convert.js/issues/10
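For context, a minimal usage sketch of pdf-img-convert, assuming its documented convert() API (the file names here are hypothetical):

    // minimal sketch: pdf-img-convert's convert() resolves to an array of
    // Uint8Array images, one per PDF page; file names are hypothetical
    const fs = require('fs');
    const pdf2img = require('pdf-img-convert');

    (async () => {
      const pages = await pdf2img.convert('input.pdf');
      pages.forEach((img, i) => {
        // write each page out as its own image file
        fs.writeFileSync(`page-${i + 1}.png`, img);
      });
    })();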
For a cross-platform, open-source CLI tool, I would suggest Artifex MuTool (AGPL is not free for commercial use, but you're getting quality support). It has continuous daily commits, and it can be programmed via mutool run and ECMAScript files.
Out of the box, a simple mutool convert -o out%4d.png in.pdf will attempt to fix broken PDFs, but it may reject some that need a more forgiving secondary approach such as the one above.
Go ahead with the second one.
https://npmjs.com/package/pdf-img-convert
I am in the process of updating an older Windows driver. I am using Build.exe and the associated toolset included in the WinDDK (7600.16385.1). Reviewing the SOURCES file, I came across the following macro: USE_CTRLDLL=1. I cannot find any documentation related to this on MSDN (https://msdn.microsoft.com/en-us/library/ms910176.aspx) or third-party sites. Any idea as to what this macro actually tells the toolset to do?
The following answer was provided by Don Burn in the Windows Dev Center Forums (What does USE_CTRLDLL=1 in SOURCES file do?):
I suspect someone typo'd meaning to put in USE_CRTDLL which is
obsolete and instead should be USE_MSVCRT.
Removing this macro has no apparent effect on the compilation, linking, or execution of the driver. As Don implies, it is likely the result of a typo made during a maintenance update.
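For reference, a minimal SOURCES sketch (the target and file names are hypothetical) showing where the corrected macro would sit:

    # minimal sketch of a WDK SOURCES file; names are hypothetical
    TARGETNAME=mydriver
    TARGETTYPE=DRIVER

    # USE_CTRLDLL=1 removed; per Don's suggestion, the intended
    # (non-obsolete) macro would be:
    USE_MSVCRT=1

    SOURCES=driver.c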
In a way, I am looking for best practice here.
I have a common project that is shared by many of my apps. This project has the FlurryAnalytics and ATMHud DLLs as references.
If I do not also reference these DLLs in the main project, the apps will often, but not always, fail in the debug-to-device test. In debug-to-simulator, I don't need to add these DLLs to the main project.
So, the question is: do I always have to include references in the main project to DLLs that I have in subprojects?
Whenever possible I use references to project files (csproj files) over references to assemblies (.dll). It makes a lot of things easier, like:
code navigation (IDE);
automatic build dependency (the source code you're reading is the one you're building, not something potentially out of sync);
source-level debugging (even if you can have it without project references, this way you're sure to be in sync);
(easier) switching between Debug|Release|... configurations;
changing defines (or any project-level option).
E.g.
Solution1.sln
    Project1a.csproj
    MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)
Solution2.sln
    Project2a.csproj
    MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)
Common.sln
    MonoTouch.Dialog.csproj
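To illustrate the difference inside a .csproj (a minimal sketch; the paths here are hypothetical), a project reference versus an assembly reference looks like this:

    <!-- project reference: built from source, always in sync -->
    <ItemGroup>
      <ProjectReference Include="..\Common\MonoTouch.Dialog.csproj">
        <Name>MonoTouch.Dialog</Name>
      </ProjectReference>
    </ItemGroup>

    <!-- assembly reference: a prebuilt binary, potentially out of sync -->
    <ItemGroup>
      <Reference Include="MonoTouch.Dialog">
        <HintPath>..\Common\bin\Debug\MonoTouch.Dialog.dll</HintPath>
      </Reference>
    </ItemGroup>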
Large solutions might suffer a bit from doing this (build performance, searching across files...). The larger they get, the less likely it is that everyone has to know about every part of them. So there's a diminishing return on the advantages, while the inconvenience grows with each project being added.
E.g. I would not want to have project references to every framework assembly inside Mono (but personally I could live with all the SDK assemblies of MonoTouch ;-)
Note: Working with assembly references should not cause you random errors while debugging on device. If you can create such a test case, please file a bug report :-)
I need to implement a memory cache with Node. It looks like there are currently two packages available for doing this:
node-memcached (https://github.com/3rd-Eden/node-memcached)
node-memcache (https://github.com/vanillahsu/node-memcache)
Looking at both GitHub pages, it looks like both projects are under active development with similar features.
Can anyone recommend one over the other? Does anyone know which one is more stable?
At the moment of writing this, the project 3rd-Eden/node-memcached doesn't seem to be stable, according to the GitHub issue list (e.g. see issue #46). Moreover, I found its code quite hard to read (and thus hard to update), so I wouldn't suggest using it in your projects.
The second project, elbart/node-memcache, seems to work fine, and I feel good about the way its source code is written. So if I were to choose between only these two options, I would prefer elbart/node-memcache.
But as of now, both projects suffer from a problem with storing BLOBs. There's an open issue for the 3rd-Eden/node-memcached project, and elbart/node-memcache simply doesn't support the option. (It would be fair to add that there's a fork of the project that is said to add the option of storing BLOBs, but I haven't tried it.)
So if you need to store BLOBs (e.g. images) in memcached, I suggest using the overclocked/mc module. I'm using it now in my project and have no problems with it. It has nice documentation, and it's highly customizable but still easy to use. At the moment, it seems to be the only module that works fine with storing and retrieving BLOBs.
Since this is an old question/answer (2 years ago), and I got here by googling and then researching, I feel that I should tell readers that I definitely think 3rd-Eden's memcached package is the one to go with. It seems to work fine, and based on the usage by others and the recent updates, it is the clear winner: almost 20K downloads for the month, 1,300 just today, and the last update was made 21 hours ago. No other memcache package even comes close. https://npmjs.org/package/memcached
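For context, a minimal usage sketch of that memcached package (the server address and keys here are hypothetical):

    var Memcached = require('memcached');
    var memcached = new Memcached('localhost:11211'); // hypothetical server

    // store a value for 60 seconds, then read it back
    memcached.set('foo', 'bar', 60, function (err) {
      if (err) throw err;
      memcached.get('foo', function (err, data) {
        if (err) throw err;
        console.log(data);  // 'bar'
        memcached.end();    // close the connection pool
      });
    });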
The best way I know of to see which modules are the most robust is to look at how many projects depend on them. You can find this on npmjs.org's search page. For example:
memcache has 3 dependent projects
memcached has 31 dependent projects
... and in the latter, I see connect-memcached, which would seem to lend some credibility there. Thus, I'd go with the latter, barring any other input or recommendations.
As the title says, I want a build tool that pretty much stays out of my way.
I would rather specify rules than steps in the build process. I want to say that I want a binary with a given name placed in the root directory of my project, that .o files should go in an obj/tmp dir, and that the source is in the Source directory.
I do NOT want to tell it about each and every file, as I keep adding new files rather quickly; it should just scan the source directory (and its subdirectories) looking for Ragel (.rl) and C++ (.cxx) code and do what's necessary to make it all into an executable.
I have looked into many tools, like auto{make,conf,header} (I did not really like that it made me place the files it wanted in a subdirectory of the project root, and Eclipse did not like that either) and CMake (it seems like I have to add all source files myself, and it is pretty much a variation of autotools in my eyes). I have also read about Ant and Maven (I am also allergic to XML; it's a good format for serializing data for applications, not so much for humans. I would prefer YAML) and others on Wikipedia. And I have seen tools which seem good but which require being set up as a web server, which is kind of overkill.
Also, I really need to be able to work offline, without an internet connection!
Right now it seems like the best option is to make a little script that finds all .cxx files, writes a Unity.cxx, and builds that one with G++, which probably is quite fast but too much of an ugly hack, I guess.
Bonus Points:
Fast builds
Ability to type build test-1 or something and it will build and directly run test-1
Multi-core builds (i.e. faster builds)
Really does not interrupt my train of thought
CMake is great. It's free, cross-platform, and reasonably well documented. It supports "out-of-source builds", meaning none of the build files are placed in the source directory. That makes source control a bit easier. It can be set up to find new files (globbing). Fast? It generates makefiles; after that, it's up to your compiler. Multi-core? Again, more a function of the compiler. I've used CMake on Windows, Linux, and Mac; it just works.
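For example, a minimal CMakeLists.txt sketch along those lines (the project name is hypothetical; the Source/ layout is assumed from the question) that globs for sources instead of listing them one by one:

    # minimal sketch, assuming sources live under Source/ as in the question
    cmake_minimum_required(VERSION 2.8)
    project(MyApp CXX)

    # pick up .cxx files automatically (note: cmake must be re-run
    # for the glob to notice newly added files)
    file(GLOB_RECURSE APP_SOURCES "${CMAKE_SOURCE_DIR}/Source/*.cxx")

    add_executable(myapp ${APP_SOURCES})

Run it from a separate build directory (e.g. mkdir build && cd build && cmake .. && make -j4) to keep objects out of the source tree and to use multiple cores.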
Another tool that I haven't tried, but have read about and plan to test, is premake... http://industriousone.com/sample-script
cake from CoffeeScript is quite good, and I'm writing a similar tool using Lua myself.
CMake and premake aren't build/make tools; they are build/make-descriptor generators, which may fit a large number of projects that aren't changing too much, but not projects where rapid prototyping is key.
Right now, I'm doing a project where the browser updates when you hit the save button in your text editor; you do not need to go to the browser and hit F5 (which would cause a small delay while the browser loads everything again, and you would most likely lose the state of the page; say you have a menu open and wish to tweak its look, you would be forced to navigate there again in your RIA).