What is the recommended way of having several cabal packages in one directory?
Why: I have an old project with many separable modules. Since they originally formed just one program, it was, and still is, handy to have them in the same directory for easy compiling.
Options
1. Just suffer and split everything, including the VCS holding it, into different directories?
2. Hack Cabal until it is happy with multiple .cabal files in the same directory?
3. Make another subdirectory for each module and put the .cabal files there, along with symlinks to the original pieces of code?
4. Something smarter? What?
I'd have to recommend option 1 or 3 for cleanliness. I'm not sure how to get around this, or whether there even is a way.
I'd say a modified version of option 1: subdirectories for everything, no symlinks, but keep everything under a single VCS.
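For example, such a layout might look like this (the package names here are made up):

project-root/        (one VCS repository)
    foo/
        foo.cabal
        src/ ...
    bar/
        bar.cabal
        src/ ...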
This problem is on the issue list for Cabal 2.
This is exactly what workspaces in Leksah were designed to do. Just get your hands on Leksah and the rest will sort itself out.
I need to obfuscate my source code as well as possible, so I decided to use uglifyjs2. My project structure has nested directories; how can I run uglifyjs2 over the whole project instead of giving it all the input files one by one?
I wouldn't mind if it minified the whole project into a single file or something similar.
I've done something very similar to this in a project I worked on. You have two options:
1. Leave the files in their directory structure.
This is by far the easier option, but provides a much lower level of obfuscation since someone interested enough in your code basically has a copy of the logical organization of files.
An attacker can simply pretty-print all the files and rename the obfuscated variable names in each file until they have an understanding of what is going on.
To do this, use fs.readdir and fs.stat to recursively go through the folders, read in every .js file and output the mangled code (a small sketch of this appears at the end of this answer).
2. Compile everything into a single JS file.
This is much more difficult for you to implement, but does make life harder on an attacker since they no longer have the benefit of your project's organization.
Your main problem is reconciling your require calls with files that no longer exist (since everything is now in the same file).
I did this by using Uglify to perform static analysis of my source code by analyzing the AST for calls to require. I then loaded the source code of the required file and repeated.
Once all code was loaded, I replaced the require calls with calls to a custom function, wrapped each file's source code in a function that emulates how node's module system works, and then mangled everything and compiled it into a single file.
My custom require function does most of what node's require does except that rather than searching the disk for a module, it searches the wrapper functions.
Unfortunately, I can't really share any code for #2 since it was part of a proprietary project, but the gist is:
Parse the source text into an AST using UglifyJS.parse.
Use the TreeWalker to visit every node of the AST and check if
node instanceof UglifyJS.AST_Call && node.start.value == 'require'
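Roughly, those two pieces, the directory walk from option 1 and the require detection from the gist, might look like this (a sketch against the uglify-js 2.x API; collectJsFiles and findRequires are made-up names, and the synchronous fs calls are used only to keep it short):

var fs = require('fs');
var path = require('path');
var UglifyJS = require('uglify-js'); // uglify-js 2.x

// Recursively collect every .js file under a directory.
function collectJsFiles(dir, out) {
    out = out || [];
    fs.readdirSync(dir).forEach(function (name) {
        var full = path.join(dir, name);
        if (fs.statSync(full).isDirectory()) {
            collectJsFiles(full, out);
        } else if (/\.js$/.test(name)) {
            out.push(full);
        }
    });
    return out;
}

// Parse one file's source and return the module names it requires.
function findRequires(code) {
    var requires = [];
    var ast = UglifyJS.parse(code);
    var walker = new UglifyJS.TreeWalker(function (node) {
        if (node instanceof UglifyJS.AST_Call && node.start.value == 'require') {
            var arg = node.args[0];
            if (arg instanceof UglifyJS.AST_String) {
                requires.push(arg.value);
            }
        }
    });
    ast.walk(walker);
    return requires;
}

From there you would load each required file, wrap its source in a loader function, swap the require calls for your custom function, and hand the combined source to UglifyJS for mangling, as described above.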
Having just completed a huge pure Node.js project spanning 80+ files, I had the same problem as the OP. I needed at least minimal protection for my hard work, but it seems this very basic need has not been covered by the npmjs open-source community. To add salt to the wound, the JXCore package encryption system was cracked last week in a few hours, so it's back to obfuscation...
So I created a complete solution that handles file merging and uglifying. You also have the option of excluding specified files/folders from merging; those files are then copied to the output location of the merged file, and references to them are rewritten automatically.
NPMjs link of node-uglifier
Github repo of node-uglifier
PS: I would be glad if people contributed to make it even better. This is a war between thieves and hard-working coders like yourself. Let's join forces and increase the pain of reverse engineering!
This isn't supported natively by uglifyjs2.
Consider using webpack to package up your entire app into a single minified .js file, excluding node_modules:
http://jlongster.com/Backend-Apps-with-Webpack--Part-I
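If you go that route, a minimal config might look like this (a sketch in the spirit of that article; the entry and output names are just placeholders):

// webpack.config.js
var fs = require('fs');
var webpack = require('webpack');

// Mark everything under node_modules as external so it is not bundled.
var nodeModules = {};
fs.readdirSync('node_modules')
    .filter(function (name) { return name !== '.bin'; })
    .forEach(function (name) { nodeModules[name] = 'commonjs ' + name; });

module.exports = {
    entry: './app.js',
    target: 'node',
    output: { path: __dirname + '/build', filename: 'app.bundle.js' },
    externals: nodeModules,
    plugins: [new webpack.optimize.UglifyJsPlugin()]
};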
I had the same need - for which I created node-optimize and grunt-node-optimize.
https://www.npmjs.com/package/grunt-node-optimize
As the title says, I want a build tool that pretty much stays out of my way.
I would rather specify rules than steps in the build process. I want to say that I want a binary with a given name placed in the root directory of my project, that .o files should go in an obj/tmp dir, and that the source is in the Source directory.
I do NOT want to tell it about this and that file, as I keep adding new files rather quickly; it should just scan the source directory (and its subdirectories) for Ragel (.rl) and C++ (.cxx) code and do what's necessary to turn it all into an executable.
I have looked into many tools, like auto{make,conf,header} (I did not really like that I had to put the files it wanted in a subdirectory of the project root, and Eclipse did not like that either) and CMake (it seems I have to add all source files myself, and it is pretty much a variation on autotools in my eyes). I have also read about Ant, Maven (I am also allergic to XML; it's a good format for serializing data for applications, not so much for humans, I would prefer YAML) and others on Wikipedia. And I have seen tools that seem good but require being set up as a web server, which is kind of overkill.
Also, I really need to be able to work offline, without an internet connection!
Right now it seems like the best option is to write a little script that finds all .cxx files, writes a Unity.cxx, and builds that with G++, which would probably be quite fast but is too much of an ugly hack, I guess.
Bonus Points:
Fast builds
Ability to type build test-1 or something and it will build and directly run test-1
Multi-core builds (i.e. faster builds)
Does really not interrupt my train of thought
CMake is great. It's free, cross-platform, and reasonably well documented. It supports "out of source builds", meaning none of the build files are placed in the source directory. That makes source control a bit easier. It can be set up to find new files (globbing). Fast? It generates makefiles; after that it's up to your compiler. Multi-core? That's more a function of the generated build tool (e.g. make -j) than of CMake itself. I've used CMake on Windows, Linux, and Mac; it just works.
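For instance, a globbed setup can be as small as this (a sketch; the project and file names are placeholders, and you need to re-run cmake after adding files so the glob is refreshed):

cmake_minimum_required(VERSION 2.8)
project(myproject CXX)

# Pick up every .cxx under Source/ automatically.
file(GLOB_RECURSE SOURCES "Source/*.cxx")

# Put the resulting binary in the project root instead of the build directory.
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR})

add_executable(myprogram ${SOURCES})

# (Ragel .rl files would need an add_custom_command step, not shown here.)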
Another that I haven't tried but have read about and plan to test is premake... http://industriousone.com/sample-script
cake from CoffeeScript is quite good, and I'm writing a similar tool using Lua myself.
CMake and premake aren't build/make tools, they are build/make-descriptor generators, which may fit a large number of projects that aren't changing too much, but not projects where rapid prototyping is key.
Right now, I'm doing a project where the browser updates when you hit the save button in your text editor; you do not need to go to the browser and hit F5 (which would cause a small delay while the browser loads everything again, and you would most likely lose the state of the page: say you have a menu open and wish to tweak its look, you would be forced to navigate there again in your RIA).
I am facing the following bug:
https://bugzilla.samba.org/show_bug.cgi?id=4531
rsync will always let the older symlink on the other side overwrite the newer one on the local side.
Wayne has suggested using unison; however, it is an old project that is no longer being developed, which makes me wary of using it.
What can you suggest instead?
My main aim is to synchronize files, directories, and links between 2 nodes.
unison is OK, as long as your file/folder names don't use Unicode, especially cross-platform. Can't hurt to give it a try.
See here for the limitation on Unicode in filenames.
I want to create a setup for my project so that it can be installed on any PC without installing the header files.
How can I do that?
There are two general ways to distribute programs:
Source Distribution (source code to be built). The most common way is to use GNU autotools to generate a configure script so that your project can be installed by doing ./configure && make install (a minimal sketch follows below).
Binary Distribution (prebuilt). Instead of shipping source, you ship binaries. There are a couple of competing standards although the two main ones are RPM and DEB file.
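For the autotools route mentioned in the first item, the minimal set of files might look like this (a sketch; the program and file names are placeholders):

# configure.ac
AC_INIT([myprog], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
bin_PROGRAMS = myprog
myprog_SOURCES = main.c

Running autoreconf --install then generates the configure script, after which ./configure && make && make install works as described.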
You just changed your question (appreciated, it was kind of vague), so my answer no longer applies...
make sure you have a C compiler
I'd be surprised if you didn't, Linux normally has one
find an editor you are comfortable with
vi and emacs are the classics
write your first program and compile
learn about makefiles (a small example follows after this list)
learn about sub projects and libraries
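For the makefile step, a first makefile can be as small as this (a sketch; the program and source file names are made up, and recipe lines must start with a tab):

CC = gcc
CFLAGS = -Wall -g

# Build 'myprog' from main.c.
myprog: main.c
	$(CC) $(CFLAGS) -o myprog main.c

# 'make clean' removes the binary.
clean:
	rm -f myprog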
In many respects, your question is too vague to be answerable. You will need to describe more what you have in mind. All else apart, if you are using an integrated development environment (IDE), then what you do should be coloured strongly by what the IDE encourages you to do. (Fighting your IDE is counter-productive; I've just never found an IDE that doesn't make me want to fight it.)
However, for a typical project on Linux, you will create a directory to hold the materials. For a small project (up to a few thousand lines of code in a few - say 5-20 - files), you might not need any more structure than a single directory. For bigger projects, you will segregate sub-sections of the project into separate sub-directories under the main project directory.
Depending on your build mechanisms, you may have a single makefile at the top of the project hierarchy (or the only directory in the 'hierarchy'). This is in line with the 'Recursive Make Considered Harmful' paper (P. Miller). Alternatively, you can create a separate makefile for each sub-directory and have the top-level makefile simply coordinate the builds across directories.
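If you take the per-directory approach, the coordinating top-level makefile might be no more than this (a sketch; the subdirectory names are made up, and recipe lines must start with a tab):

SUBDIRS = lib src tests

all:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir all || exit 1; done

clean:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir clean || exit 1; done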
You should also consider which version control system (VCS) you will use.
Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
@dF comments that the technique might prevent you passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command line arguments that I type, including options ('-u' on my home-brew version) and targets 'someprogram' and macros CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
Another problem is that by using the Makefile this way you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic for building Makefile-based projects. Imagine if, for every project you built, you had to search for the differently named makefile someone created instead of just typing "make && make install".
You could use a shell alias for this too.
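For example (the alias name is arbitrary):

alias rebuild='make -f rebuild.mk'

Anything you type after the alias, including targets and variable overrides, is passed straight through to make.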
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed filename if you don't give it one? What if python looked for Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.