Output the binaries in the project's root bin subfolder using CMake - Linux

I'm currently working on a BF (Brainfuck) interpreter project. I decided to use CMake, and it works properly.
I settled on an out-of-source build, with the following tree:
+ Project root
|
+-- src/
+-- bin/
+-- build/ <- holds the "rubbish" generated by CMake when generating the Makefile
+-- CMakeLists.txt
When I want to build the project, I run the following from the project's root folder:
cd build
cmake ..
make
In the CMakeLists.txt, I added the following line:
SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_BINARY_DIR}/bin)
But I found that it outputs the binaries in the build/bin folder, so I edited the line to:
SET(EXECUTABLE_OUTPUT_PATH "../bin/")
It works perfectly fine, but it is, IMHO, kind of ugly. Is there any "clean" way of doing this, that is, without making assumptions about the project's structure, instead using something like set(EXECUTABLE_OUTPUT_PATH "${PROJECT_ROOT}/bin")?
Thanks in advance for your replies, and sorry for any English errors I may have made, as English isn't my first language.

You can set the variable CMAKE_RUNTIME_OUTPUT_DIRECTORY to achieve this - something like:
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/bin)
Basically, the ${PROJECT_ROOT} variable you are looking for is PROJECT_SOURCE_DIR, CMAKE_SOURCE_DIR, or CMAKE_CURRENT_SOURCE_DIR. Each has a slightly different meaning, but for a basic project, these could well all point to the same directory.
Note that for the CMAKE_RUNTIME_OUTPUT_DIRECTORY variable to take effect, it must be set before the target is created, i.e. before the add_executable call in the CMakeLists.txt.
Also note that multi-configuration generators like MSVC will still append a per-configuration directory to this project root/bin folder.
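For reference, here is a minimal CMakeLists.txt putting the answer together (the project name, target name, and source file are illustrative, not from the question):

```cmake
cmake_minimum_required(VERSION 3.10)
project(bf_interpreter C)

# Must be set before add_executable() so it applies to the target.
# PROJECT_SOURCE_DIR points at the project root, not the build directory.
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/bin)

add_executable(bf src/main.c)
```

With this, running cmake .. && make from build/ still keeps all generated files in build/, but the binary lands in the project root's bin/ folder.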

Related

NestJS incorrect dist folder structure with monorepo mode

How come it generates dist/apps/twitter and puts everything into this folder, when everything should go directly into the dist folder? What can lead to this?
Expected dist folder structure:
dist:
--apps:
----twitter/src
--libs
This isn't "incorrect" per se. When Nest is in monorepo mode, it by default uses webpack to bundle the application code into a single file. This can of course be overridden, and tsc can be used instead to output all of the compiled TS code, if that is preferred. When tsc compiles code that contains sources outside of a single directory (e.g. apps/twitter/src is the base directory but libs/whatever is imported as well), TypeScript does its best to maintain the directory structure in the resulting dist so that import paths do not end up getting messed up.
The reason for having apps/twitter twice is that Nest sets the output directory of the build to dist/apps/<app-name>, similarly to how Nx does. This is done so that if you end up having multiple applications, say apps/google, you can have dist/apps/twitter and dist/apps/google without their dists interfering with each other.
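If you do prefer plain tsc output over a single webpack bundle, the switch lives in nest-cli.json. A hedged sketch (the exact keys depend on your Nest CLI version; newer versions also accept a "builder" option, so check the CLI docs for your release):

```json
{
  "collection": "@nestjs/schematics",
  "monorepo": true,
  "sourceRoot": "apps/twitter/src",
  "compilerOptions": {
    "webpack": false
  }
}
```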

WebStorm custom source roots

I am using WebStorm 2017.3 for Node.js development, and I have a question about source roots.
I use app-module-path to reduce the number of relative paths between my files in require statements, to get rid of stuff like require('../../../utils/common/stuff'). What I do is, on the first line of my app, I call
require('app-module-path').addPath(__dirname);
which makes it possible to do things like require('src/utils/common/stuff'), as long as the src folder is located directly under the path added to app-module-path.
However, when using these custom source roots, while the code itself works, WebStorm seems unable to resolve the required files. Let's say I have this structure:
app.js
node_modules
|- some_module
   |- xyz.js
src
|- routes
   |- foo.js
   |- bar.js
|- utils
   |- stuff.js
WebStorm can of course successfully resolve the following "normal" require statements from foo.js:
require('./bar')
require('../utils/stuff')
require('some_module/xyz')
The first two are of course relative; the last one works because node_modules is a source root (or whatever the term is).
If I now add the root path to app-module-path, all my files can successfully do:
require('src/utils/stuff')
...which I think looks much nicer than endless relative ../../../../ everywhere. However, while the code itself works fine, since src/utils/stuff is found relative to the root folder, the file is not resolved by the IDE: my required object/function becomes grey instead of purple/yellow, and I cannot Ctrl+Click on any symbols within it or use any of the other IDE niceties, which is very annoying.
I've tried marking directories as "Resource roots", but that doesn't make any difference; files required with absolute paths still cannot be resolved relative to resource roots. Basically, what I'm after is the ability to configure additional folders to behave just like node_modules. In the Directories setting, node_modules is marked as "Excluded", so there surely must be a completely separate setting controlling which folders are used as "require root folders" or "require search paths".
I've also tried making a symlink from <root>/node_modules/src to <root>/src, and while that makes it possible to Ctrl+Click on the actual file path 'src/utils/stuff' in the require statement, it still doesn't resolve the object that was required, and it also causes WebStorm to issue a warning about the module not being listed in package.json (which, of course, is correct).
Is there any way to configure WebStorm with additional "require root folders"? Or is there a better way than app-module-path to get rid of "relative path hell" in require statements while still preserving WebStorm resolving/indexing capabilities?
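One workaround worth trying (a sketch, not a guaranteed fix for WebStorm 2017.3 specifically): WebStorm understands the paths mapping from a jsconfig.json (or tsconfig.json) at the project root, which can teach the IDE's resolver the same prefix that app-module-path adds at runtime:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "src/*": ["./src/*"]
    }
  }
}
```

This only affects IDE resolution; the runtime behaviour still comes from the addPath() call.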

keep node_modules outside source tree in development (not production)

I prefer to keep all generated files and dependencies outside my source tree while I work on it.
npm and grunt make that difficult: npm will never allow moving local node_modules, so I have to use --global and --prefix. But grunt does not support such a scheme, apparently.
How can I achieve my objective given the above constraints?
So, if I have a project:
foo/
.git/
src/
gruntfile.js
package.json
I want no extra files in my tree, specifically, node_modules. (Also bower_components and build etc but this is about npm.) This directory should remain untouched while I am working on it and running it. That is all.
Apparently npm link is supposed to do this, but when I tried it still installed all the dependencies in ./node_modules. Any other invocation I cannot fathom; the docs are not clear.
A related suggestion was to use another directory with symlink to my gruntfile or package.json, but grunt just resolved the symlink and continued to work in my original directory!
So far the closest I have come is to link to e.g. ~/.cache/foo/node_modules from my project. Although it achieves keeping the deps out of my tree, I still have this link cluttering my workspace.
I want to know if there is a better way. Will some combination of npm install, npm link, ln, ~/.cache, NODE_PATH and PWD allow me to run my project, from my source tree, and keep it clean of all non-source artefacts?
Swimming against standards is a Very Bad Idea ®.
What you can (and should) do is add node_modules/ to your .gitignore (or whatever ignore file you have for your given source control system) so you don't version these files.
Also, you can use a directory like src/ to organize your code and "shelter" it from the mandatory configuration files (package.json, Gruntfile.coffee, etc).
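For example, a minimal ignore file for the layout in the question (the bower_components and build entries cover the other generated directories mentioned):

```
node_modules/
bower_components/
build/
```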

How to get SCons Install to behave the same way as build at different hierarchy levels?

My project has the following tree:
|-- A
| `-- SConscript
|-- B
| `-- SConscript
`-- SConstruct
and I want to install A's content into /install/A, and B's into /install/B. I achieve this with two similar-looking SConscripts called from the top SConstruct. SConstruct sets up env['INSTALL_DIR'] = '/install' and exports it.
The A SConscript looks like this:
import os
Import('env')
env = env.Clone(
    INSTALL_DIR = os.path.join(env['INSTALL_DIR'], "A"))
env.Alias('install', env['INSTALL_DIR'])
build_result_obj = Program(...)
env.Install(env['INSTALL_DIR'], build_result_obj)
and similar for B.
When both A and B are outdated and I am in the A subdirectory, I can run scons -u there, and it will only build A. But if I run scons -u install there, it tries to install B as well, causing it to build B too.
I could resolve this by having different Alias names for install (install-A, install-B) and a combined one for the two, but I don't want to remember all such names. I just want install to behave the same as build with respect to the current location. How do I achieve that?
You'll have to add your install targets to the Default target list. There is a method env.Default() for this; please check the SCons docs. Note how you can add Aliases to the Default list, too (once defined, they're pretty much treated like file targets).
Another thing to note here is that you shouldn't define the install Aliases simply as
Alias('name', path_to_folder)
Like in every other build system, SCons will regard your install folder as up to date as soon as it exists... and then no updates of your to-be-installed files happen.
Instead, define the Alias after you have called the Install builder, and add its return value, which represents the installed "program" node:
build_result_obj = Program(...)
instobj = env.Install(env['INSTALL_DIR'], build_result_obj)
env.Alias('install', instobj)
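Putting both points together, the A SConscript might look like this (a sketch; the program name and source file are illustrative, and B is analogous):

```python
# A/SConscript (sketch)
import os

Import('env')
env = env.Clone(INSTALL_DIR=os.path.join(env['INSTALL_DIR'], 'A'))

build_result_obj = env.Program('a_prog', ['a.c'])

# Alias the installed nodes, not the bare folder, so changed files
# actually get re-installed.
instobj = env.Install(env['INSTALL_DIR'], build_result_obj)
env.Alias('install', instobj)

# Adding the install nodes to the Default list makes `scons -u` from
# inside A/ consider only A's targets.
env.Default(instobj)
```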

Linking separate projects in GHC

OK, this should be simple, but I can't seem to figure it out. I have two projects, ProjectA and ProjectB. ProjectB depends on the old project, ProjectA. Now I want to build ProjectB, and I do not want to change the directory structure of ProjectA. The problem is, I always used -outputdir bin with ProjectA.
ProjectA looked like this:
ProjectA/
bin/
(*.o, *.hi in proper paths, sometimes also *.p_o and *.p_hi)
Foo/
ModuleX.hs
ModuleA.hs
ModuleB.hs
Now I have a different folder with ProjectB, with its own separate -outputdir. I just need to link against the old project's object files (without having ProjectA's files recompiled). I realize that I can probably cabalize ProjectA ... but is there no simpler way?
The "simple way" is to use Cabal. Once you've installed Project A, you never need to worry about where the hell it's actually stored ever again; GHC will just find it.
If you don't want to do this, try using the -i switch to GHC to tell it which folders to search for your compiled stuff.
http://www.haskell.org/ghc/docs/7.0.1/html/users_guide/separate-compilation.html
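A hedged sketch of the -i route (the relative paths are assumed; adjust to where ProjectA actually lives relative to ProjectB):

```shell
# From inside ProjectB/: -i<dir> adds <dir> to the search path for
# interface (.hi) files. Since ProjectA was built with -outputdir bin,
# its .hi and .o files both live under ProjectA/bin.
ghc --make -outputdir bin -i../ProjectA/bin Main.hs
```

If GHC still insists on recompiling ProjectA's modules, check that the old .hi files were produced by the same GHC version and with compatible flags.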
