Add include dir for node-gyp - node.js

I am deploying a Node.js app to Heroku that requires the npm package imagemagick-native.
I made the buildpack install libmagick++-dev and export the include path:
export INCLUDE_PATH="$BUILD_DIR/.apt/usr/include:$INCLUDE_PATH"
export CPATH="$INCLUDE_PATH"
export CPPPATH="$INCLUDE_PATH"
When the imagemagick-native package is installed with npm install, node-gyp is invoked to compile its binaries. However, I get this error:
remote: > imagemagick-native@1.7.0 install /tmp/build_720834c3a32b65d69ae603d7c618e20f/node_modules/imagemagick-native
remote: > node-gyp rebuild
remote:
remote: make: Entering directory `/tmp/build_720834c3a32b65d69ae603d7c618e20f/node_modules/imagemagick-native/build'
remote: CXX(target) Release/obj.target/imagemagick/src/imagemagick.o
remote: In file included from ../src/imagemagick.cc:9:
remote: ../src/imagemagick.h:1:22: warning: Magick++.h: No such file or directory
This suggests that gcc doesn't see the header files for libmagick++, because the exported $CPATH/$CPPPATH is not available to it.
How can I make npm install add the path to the list of include_dirs that node-gyp uses?
More detail about my use case is here: Using Magick++ in a node.js application on heroku

Try:
setting the environment variable CXX to /path/to/g++ -Ipath/to/include
and then restarting the process. If you're using bash, this is done with
export CXX="/path/to/g++ -Ipath/to/include"
where /path/to/include is the directory that contains the missing header Magick++.h.
If that doesn't work, you may have to manually add the -I flag to CXX in the makefile at /tmp/build_720834c3a32b65d69ae603d7c618e20f/node_modules/imagemagick-native/build, then cd into that directory and run make.
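On Heroku, that export would go into the buildpack before npm install runs. A rough sketch, assuming the .apt layout from the question (the exact ImageMagick include subdirectory varies by version and is an assumption here):
# Sketch: point node-gyp's C++ compiler at the apt-installed headers.
# The ImageMagick subdirectory name is an assumption; adjust it to wherever Magick++.h actually lands.
export CXX="g++ -I$BUILD_DIR/.apt/usr/include -I$BUILD_DIR/.apt/usr/include/ImageMagick"
npm install imagemagick-native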

I've spent some time trying to answer the same question. In the end, I found the proper way to do this here. You need to set the 'include_dirs' property in ~/.node-gyp/x.x.x/common.gypi.
This is how I've set the include dir on Mac OS to /opt/local/include/ (which is where all MacPorts installs go):
...
['OS=="mac"', {
  'defines': ['_DARWIN_USE_64_BIT_INODE=1'],
  'include_dirs': ['/opt/local/include'],
  'xcode_settings': {
    'ALWAYS_SEARCH_USER_PATHS': 'NO',
...
Though I'm not sure it's applicable to the Heroku environment.
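To locate that file on your own machine, something like the following should work (a sketch; the version directory under ~/.node-gyp matches whichever Node headers node-gyp has downloaded):
# Sketch: find the common.gypi that node-gyp uses for the current Node version.
ls ~/.node-gyp/
# Check whether an include_dirs entry is already present:
grep -n include_dirs ~/.node-gyp/$(node -v | sed 's/^v//')/common.gypi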

You can also use the "include_dirs" option in your project binding.gyp file. Read more about available options on the format description page.
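For example, a minimal binding.gyp with an extra include directory could look like this (a sketch; the target name, source file, and include path are placeholders, not taken from the imagemagick-native package):
# Sketch: write a minimal binding.gyp that adds an include directory, then build.
cat > binding.gyp <<'EOF'
{
  "targets": [
    {
      "target_name": "imagemagick",
      "sources": ["src/imagemagick.cc"],
      "include_dirs": ["/usr/include/ImageMagick"]
    }
  ]
}
EOF
node-gyp configure build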

You can now do OTHER_CFLAGS='-I/usr/local/include' supposedly. See https://github.com/nickdesaulniers/node-nanomsg/pull/144
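An untested sketch of that approach; it only has an effect if the addon's gyp file actually forwards the variable, as in the linked pull request:
# Sketch: pass an extra include path via OTHER_CFLAGS (only works if the package's gyp file reads it).
OTHER_CFLAGS='-I/usr/local/include' npm install imagemagick-native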

Related

VS Code error `failed to run build scripts` when using Rust-Analyzer only on one specific project

My company just moved the Rust project I'm working on to a new repo to merge it with a Tauri project, and VS Code now gives me this error:
Failed to run build scripts
I can compile and run my project, or use cargo check to see the warnings/errors, but rust-analyzer is not working and it's very annoying to have to run cargo check every time I need to check my code.
This error occurs only in this repo, which contains a Rust workspace split into two folders: one Tauri/Rust project and one basic Rust project.
Cargo.toml at the root:
[workspace]
members = [
    "builder",
    "studio/src-tauri"
]
Error details:
[ERROR rust_analyzer::lsp_utils] failed to run build scripts:
The following warnings were emitted during compilation:
error: failed to run custom build command for `cairo-sys-rs v0.15.1`
Caused by:
process didn't exit successfully: `/home/korocouille/Sogilis/IReflex/reflex2/target/debug/build/cairo-sys-rs-decf14405d906ced/build-script-build` (exit status: 1)
--- stdout
cargo:rerun-if-env-changed=CAIRO_NO_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
cargo:rerun-if-env-changed=PKG_CONFIG_PATH
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
cargo:warning=`"pkg-config" "--libs" "--cflags" "cairo" "cairo >= 1.14"` did not exit successfully: exit status: 1
error: could not find system library 'cairo' required by the 'cairo-sys-rs' crate
--- stderr
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
I tried uninstalling and reinstalling rust-analyzer, switching to the pre-release or an older version, deleting my target folder, and running cargo clean, but nothing changed and I can't find a solution.
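For reference, the error itself is about pkg-config not finding cairo.pc, so a first check is to run the same pkg-config query the build script uses; the package name in the install line below is an assumption for Debian/Ubuntu systems:
# Sketch: check whether pkg-config can see cairo, which is what cairo-sys-rs's build script does.
pkg-config --libs --cflags cairo 'cairo >= 1.14'
# If that fails, install the cairo development package (Debian/Ubuntu package name assumed):
sudo apt-get install libcairo2-dev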

How can I avoid rebuilding dependencies when `cargo install` fails due to a system configuration issue?

I'm trying to cargo install a project with many dependencies. One of the later dependencies fails to build due to some system configuration issue:
cargo install diesel_cli
... many dependencies here...
Compiling diesel_cli v1.4.1
error: linking with `cc` failed: exit code: 1
|
= note: ...large output removed...
= note: ld: library not found for -lmysqlclient
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Once I think I've solved the system configuration issue, I need to re-run cargo install, wait a while for the first set of dependencies to build, then see if I get past the failure.
How can I avoid rebuilding all of those dependencies?
The error message gives the directory containing the failed build artifacts:
error: failed to compile `diesel_cli v1.4.1`, intermediate artifacts can be found at `/var/folders/_b/d4_bd15x7s5g99cjvyhpw26w0000gp/T/cargo-installDQOdPD`
You can pass that directory via the --target-dir option (or set it with the CARGO_TARGET_DIR environment variable) to reuse it, avoiding rebuilding the dependencies:
cargo install diesel_cli --target-dir=/var/folders/_b/d4_bd15x7s5g99cjvyhpw26w0000gp/T/cargo-installDQOdPD
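The environment-variable form mentioned above looks like this (same temporary directory as in the error message):
# Sketch: reuse the intermediate artifacts via CARGO_TARGET_DIR instead of --target-dir.
CARGO_TARGET_DIR=/var/folders/_b/d4_bd15x7s5g99cjvyhpw26w0000gp/T/cargo-installDQOdPD cargo install diesel_cli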

macOS & dyld: Symbol not found: _usdt_create_provider

In short, I'm unable to install @pact-foundation/pact-node on my development computer and from what I gather it seems to be loosely related to being on macOS 10.14. When I say loosely, this does not affect my other non-development computer running the same stack.
Within nvm I've tried using Node 8.14.0, 8.15.0, 9.4.0, 10.14.2, 10.15.0 and 11.6.0, in addition to system Node, which is also 11.6.0. Each version results in the same error messages, regardless of whether I'm in my team's project directory or in an otherwise empty sandbox directory.
Until a few minutes ago I was running macOS 10.14.1 and am seeing the same problems on 10.14.2. There are no updates that haven't been installed.
The package installation output is as follows.
$ npm install @pact-foundation/pact-node
> dtrace-provider@0.8.7 install /Users/andrewgould/www/sandbox/node_modules/dtrace-provider
> node-gyp rebuild || node suppress-error.js
ACTION binding_gyp_ndtp_target_build_ndtp .
TOUCH Release/obj.target/ndtp.stamp
> spawn-sync@1.0.15 postinstall /Users/andrewgould/www/sandbox/node_modules/spawn-sync
> node postinstall
> caporal@0.10.0 postinstall /Users/andrewgould/www/sandbox/node_modules/caporal
> (test -f ./node_modules/husky/bin/install.js && node ./node_modules/husky/bin/install.js) || exit 0
> @pact-foundation/pact-node@6.20.0 postinstall /Users/andrewgould/www/sandbox/node_modules/@pact-foundation/pact-node
> node postinstall.js
dyld: lazy symbol binding failed: Symbol not found: _usdt_create_provider
Referenced from: /Users/andrewgould/www/sandbox/node_modules/dtrace-provider/src/build/Release/DTraceProviderBindings.node
Expected in: flat namespace
dyld: Symbol not found: _usdt_create_provider
Referenced from: /Users/andrewgould/www/sandbox/node_modules/dtrace-provider/src/build/Release/DTraceProviderBindings.node
Expected in: flat namespace
Abort trap: 6
Has anyone seen errors like these before? Is there a known solution?
It turns out this issue was caused by binutils, which I had installed via Homebrew. Uninstalling that fixed the problem.
According to the GNU binutils website, the main tools it includes are ld, the GNU linker, and as, the GNU assembler. Both already ship with macOS; however, the Homebrew versions of these tools caused the conflicts shown in the question above.
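If you suspect the same conflict, a quick check and fix might look like this (assuming binutils was installed through Homebrew):
# Sketch: see which ld/as binaries are first on the PATH, then remove Homebrew's binutils.
which -a ld as
brew uninstall binutils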

Package Node-Red using zeit/pkg

I am having some issues using zeit/pkg on my node-red project. Here are the steps to replicate the issue:
git clone https://github.com/node-red/node-red.git
cd node-red
npm install
npm run build
After the build command, I added the following to my package.json file:
"pkg": {
"assets": [
"./red/**/*"
],
"scripts": [
"./red/**/*.js"
]
}
Run the command pkg .
After running pkg . I get the following errors:
C:\xampp\htdocs\node-red>pkg .
> pkg@4.3.4
> Targets not specified. Assuming:
node8-linux-x64, node8-macos-x64, node8-win-x64
> Warning Cannot resolve 'path.join(__dirname, '..', '..', 'package.json')'
C:\xampp\htdocs\node-red\red\runtime\index.js
Dynamic require may fail at run time, because the requested file
is unknown at compilation time and not included into executable.
Use a string literal as an argument for 'require', or leave it
as is and specify the resolved file name in 'scripts' option.
> Warning Cannot resolve ''./' + aSettings.storageModule'
C:\xampp\htdocs\node-red\red\runtime\storage\index.js
Dynamic require may fail at run time, because the requested file
is unknown at compilation time and not included into executable.
Use a string literal as an argument for 'require', or leave it
as is and specify the resolved file name in 'scripts' option.
> Warning Cannot resolve 'relPath'
C:\xampp\htdocs\node-red\red\runtime\nodes\registry\loader.js
Dynamic require may fail at run time, because the requested file
is unknown at compilation time and not included into executable.
Use a string literal as an argument for 'require', or leave it
as is and specify the resolved file name in 'scripts' option.
Running the exe after the pkg command gives me this error:
Node-RED has not been built. See README.md for details
Any help would be greatly appreciated. It seems like a simple path issue but I can't seem to get it working.

chromium gyp config failure on linux

I am trying to build Chromium on Linux, but at the moment I fail at this command:
GYP_GENERATORS="ninja" build/gyp_chromium
I get the following error:
Updating projects from gyp files...
gyp: conditions chromecast==1 must be length 2 or 3, not 4 while loading dependencies of /home/code/git/src/base/base.gyp while loading dependencies of /home/code/git/src/build/all.gyp while trying to load /home/code/git/src/build/all.gyp
I couldn't find any solution on the net...
Can somebody please help me?
You shouldn't have to run build/gyp_chromium. In fact, on Linux you should now be using GN, a new and improved meta-build system that replaces GYP.
The basic steps from the beginning are:
Install depot_tools
fetch chromium
(In your Chromium src dir) ./build/install-build-deps.sh
gclient runhooks
gn gen out/Default
ninja -C out/Default chrome
Once you're already set up, the update/build cycle looks like this:
git fetch && git checkout origin/master && gclient sync
ninja -C out/Default chrome
You can get all the details here: https://www.chromium.org/developers/how-tos/get-the-code
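If you need to tweak the build configuration, GN arguments are set per output directory; a sketch (the specific argument shown is a common example, not a requirement):
# Sketch: configure a GN output directory with explicit build arguments, then build.
gn gen out/Default --args='is_debug=false'
# List all available GN arguments with their documentation:
gn args out/Default --list
ninja -C out/Default chrome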
