zsh: illegal hardware instruction npm run start-server - node.js

Goal: To build a Node.js web server for a training and classification model on the server-side using TensorFlow.js
I am trying to do this tutorial to learn TensorFlow.js.
Expected results: Server should run locally on an appropriate port, like so.
$ npm run start-server
...
> Running socket on port: 8001
Epoch 1 / 1
eta=0.0 ========================================================================================================>
2432ms 34741us/step - acc=0.429 loss=1.49
Actual results: Server does not run.
MacBook-Pro baseball % npm run start-server
> tfjs-examples-baseball-node@0.2.0 start-server
> node server.js
zsh: illegal hardware instruction npm run start-server
My hardware and software configuration:
MacBook Pro (13-inch, M1, 2020)
Chip: Apple M1
macOS Big Sur version 11.4
@tensorflow/tfjs-node: ^1.3.2
Node.js version: v14.17.5
Xcode 12.5.1 / Build version 12E507
What have I tried?
At this stage, I can't remember the number of GitHub issues and Stack Overflow questions I have read trying to solve this problem, without success.
Tried prior versions of @tensorflow/tfjs-node, like 1.2.0 and 1.0.0.
Tried the most recent version of @tensorflow/tfjs-node, i.e. 3.8.0.
Checked that Python 2.7.16 and 3.9.6 are installed on my Mac. They are.
Deleted node_modules and package-lock.json and ran npm install.
Updated Node.js to its most recent version.
Made sure Xcode is installed.
It seems to be a hardware compatibility issue, but I can't figure out the solution. Please note that I am trying to use the JavaScript implementation of TensorFlow, not the Python one.

First, tfjs-node includes a binary TensorFlow implementation (the same one as Python); the JS part is just a wrapper (the tfjs-node installer actually builds N-API bindings to the binary).
Second, this is an M1 CPU, and its x86 emulation layer (Rosetta 2) does not support advanced instructions such as AVX.
Since you've already tried the old tfjs-node v1.2 (the last release before AVX), the only proper solution is to build tfjs-node from scratch on M1 hardware. It's quite a painful process, but not impossible.
You might get lucky if you search GitHub for someone else's port.
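As a quick sanity check before going down that road, it's worth confirming which architecture your Node binary itself reports; a minimal check using only standard tools (process.arch is a built-in Node API, and uname -m reports what the shell sees):
$ node -p "process.arch"    # arm64 = native Apple Silicon build, x64 = running under Rosetta
$ uname -m                  # x86_64 here would mean the whole shell is under Rosetta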
UPDATE:
Apple has created a fork of TF, based on TF 2.4RC, that uses Apple's ML libs (and optionally the Metal libs),
but it doesn't seem well maintained; the last update was in March 2021.
https://github.com/apple/tensorflow_macos
https://developer.apple.com/metal/tensorflow-plugin/
So the first step is to get TF working in Python;
then it's a question of rebuilding the @tensorflow/tfjs-node package to use that library instead of the prebundled one.
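For the Python half, the second link above boils down to roughly the following; treat this as a sketch, since the package names (tensorflow-deps, tensorflow-macos, tensorflow-metal) come from Apple's page and the exact steps change between releases:
$ conda install -c apple tensorflow-deps
$ python -m pip install tensorflow-macos
$ python -m pip install tensorflow-metal    # optional Metal GPU plugin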

Related

NVM for Windows successfully installed, but CMD prompt says my version of Node.exe is not compatible with my Windows version

I'm on a Microsoft Surface Pro X (it features Windows 10 Home on an ARM 64-bit processor) and am trying to install Node.js. I've decided to use NVM for Windows. I can successfully install and get NVM for Windows running correctly. To my understanding, I'm able to install different versions of Node (16.16.0 and 16.13.1, for example, both LTS), as shown in the graphic below. But when I try to run Node, I get the error "This version of C:\Program Files\Nodejs\node.exe is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher."
I am not clear on why this is happening. Am I not downloading a version of Node that is compatible with my ARM 64-bit processor? I've read through several closed issues on the GitHub page, but I haven't encountered anyone bringing up this same error. I'm pretty confident it does NOT have to do with the integrity of my symlinks or my system environment variables.
Your help and insight are appreciated. Thanks.

Anaconda and upgrading to new M1 Mac

Background
I've just got a new M1 Mac mini dev machine and migrated from my old x86 Mac using Apple's Migration Assistant.
Doing that also copied over all my conda environments to the new machine (they were all in my home directory).
I installed the latest version of Anaconda, and Anaconda plus all my Python code and environments seem to work fine (this includes a bunch of wheel modules, notably numpy/scipy).
I did a bunch of googling for my questions below but couldn't find any good answers anywhere, so I thought I'd ask SO, as this seems like quite a common situation that others will run into.
Questions
Does anyone know the status of M1-native versions of python/numpy/scipy etc. provided by conda-forge?
I presume that all the binaries in my environments for python/numpy etc. are still the old x86 versions, as they were all in environments in my home directory, and are running via emulation. So how do you go about changing/updating those to an M1 ARM native version if/when available?
A quick update as of July 2021.
TLDR
The conda-forge group have an M1-native conda installer here.
Installation is simple: run the installer, and you have conda up and running.
This installs an M1-native conda, and that conda's default environment will install M1-native Python versions and M1-native versions of modules (where available).
There seem to be native osx-arm64 wheels for most common modules now available on the conda-forge channel.
Current status
It seems Anaconda still does not have a native M1 version, nor does Miniconda. I can't figure out why it's taken so long and why neither has native M1 support yet, but that's a separate issue.
Alternative
However, as steff mentioned above, conda-forge (as in the group responsible for maintaining the conda-forge channel) do have an installer for their version of conda that is itself native M1, and that also sets up your environment to pull M1-native wheels where available. They call it Miniforge.
Their github is here.
Various installers for Miniforge (via direct download, curl, or Homebrew) can be found on their GitHub page (above); the direct link to the ARM-native Miniforge installer is here.
A quick search on conda-forge shows almost all common modules now have native M1 wheels available (look for the supported platform 'osx-arm64', e.g. numpy).
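Putting it together, the whole setup is roughly this; a sketch that assumes the Miniforge3-MacOSX-arm64.sh installer name currently used on the conda-forge GitHub releases page:
$ curl -LO https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
$ bash Miniforge3-MacOSX-arm64.sh
$ conda create -n native python numpy scipy
$ conda activate native
$ python -c "import platform; print(platform.machine())"    # should print arm64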
Caveats
I've not tested this too extensively yet, and I'm not sure exactly what happens if an M1 wheel is not available (I believe it will default to downloading a no-arch version).
I'm also not sure, and haven't tested, whether you can mix and match M1 wheels with x86 Mac wheels (I'm guessing this would work, but I haven't tried).
I have also done only minimal testing of conda's pip, and of how well it recognizes, downloads, and resolves M1 vs x86 pip packages.
The answer here is going to evolve over time, so here is the most up-to-date knowledge I have as of 27 Jan 2021.
Installing conda in emulation mode works completely fine. All you need to do is to install it in a Terminal run in emulation mode, or else install it using a Terminal emulator that has not been ported over yet.
Once your conda environments are up and running, everything else looks and feels like it did on x86 Macs.
If you'd like a bit more detail, I blogged about my experience. Hopefully it helps you here.
I got my M1 about 2 weeks ago and managed to install absolutely everything I need natively from conda-forge and pip. You can download the installer here.
As of 5 Feb, Homebrew is also officially supported on osx-arm64.
2022/03/02 answers
Native M1 installations are pretty simple now. Here are a few options for Miniforge and Miniconda.
(1) Using Apple's instructions for TensorFlow with Miniforge
This uses the same Miniforge solution mentioned above but includes an M1-optimized TensorFlow install, meaning TF has access to the M1 GPU cores.
Look for the "arm64: Apple Silicon" section at:
https://developer.apple.com/metal/tensorflow-plugin/
(2) Running native M1 with Miniforge and Rosetta with Miniconda side-by-side (Jeff Heaton's tutorial from 2021/11)
Jeff basically uses Apple's solution above for the native Miniforge install.
https://www.youtube.com/watch?v=w2qlou7n7MA
(3) Using native M1 Miniconda
There was a native M1 Miniconda installer published in 2021/11: Miniconda3 macOS Apple M1 64-bit bash (Py38 conda 4.10.1 2021-11-08)
https://docs.conda.io/en/latest/miniconda.html
My Experiences
I successfully ran the side-by-side installation from Jeff's tutorial with a few changes. It was very easy, and I verified that in the native M1 Miniforge environment Numpy is using the optimized BLAS/LAPACK linear algebra libraries and that TensorFlow has GPU access. I will update here after I run the native M1 Miniconda installer.
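For reference, a quick way to verify both of those claims yourself (numpy.show_config() and tf.config.list_physical_devices() are standard APIs; this is just a sanity check, not part of the install):
$ python -c "import numpy; numpy.show_config()"    # shows which BLAS/LAPACK numpy linked against
$ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"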
I installed the native version of python3 through Miniforge (the Apple version) and Spyder (the Intel version) through Homebrew, and everything is working just fine for me, with one exception: I've observed one strange behaviour when setting the "graphics backend" option to "automatic" instead of "inline".
Spyder >>> Preferences >>> IPython Console >>> Graphics >>> Graphics Backend >>> inline, or automatic
When I start Spyder with the "inline" option and switch to "automatic", the already-open kernels function just as expected. However, if I open new consoles, they don't work at all. The issue also persists after restarting Spyder. The only way I manage to plot graphics in a separate window is to start Spyder with the IPython console "graphics backend" set to "inline" and then change it to "automatic".
If I run python3 through the terminal, plotting graphics works just fine as well.
My installation commands were:
brew install --cask miniforge
conda init zsh
conda activate
brew install --cask spyder
brew install pyqt@5
pip3 install matplotlib
You can check out this announcement by Anaconda. You can now use Anaconda on your M1 Mac directly.
"The 2022.05 release of Anaconda Distribution features native compiling for Apple M1’s ARM64 architecture (boasting 20% faster compute), Anaconda Navigator 2.1.4, conda 4.12.0, as well as several new and updated packages. 2022.05 is also the last release that will support win32."

Angular compiler is slow between two identical model laptops

A coworker and I are trying to figure out why our compilation times differ. We have the exact same Dell 7030 laptop model: same SSD, same hard drive, same memory, same specs. Our Task Manager processes look similar.
Corporate orders identical computer models.
We are downloading from the Angular Git repository, with the same config and package.json, and the same Node memory settings.
The question is: the initial build takes 2 minutes on his laptop compared to 8 minutes on mine.
When we edit a single word in the same file, his takes only 5 seconds to recompile; mine takes 20 seconds.
The only program running on these identical computers is the Angular command ng serve.
Does anyone have ideas to resolve this slow compile time?
Is there anything I can change on my workstation to make the compile speed similar?
Attempted the following solutions with my coworkers; still slow:
We have the same Node.js versions
Updated from Angular 8 to 10 in the company project Git repo
Tried npm cache clear
Attempted uninstalling and reinstalling Angular and Node.js
ScanDisk from Windows does not show errors on the SSD drive
Compared package-lock.json with my coworker's; they are exactly the same (compared in a source-control diff)
Added a Windows Defender exclusion for the Angular Git folder
Resources:
Angular compilation slow
How to speed up the Angular build process
Update:
Just noticed my laptop really stalls on the styles.scss file at 48%.
As you described, everything is the same: Node and Angular versions, hardware models, software and configuration. You have also ensured that the build pipelines are the same.
The only thing that comes to mind is a small difference in dependencies, i.e. version mismatches inside the node_modules packages. When you don't specify exact versions, a range like ^x.x.x (semantic versioning) lets npm or yarn install anything in the same major range at or above the given version. There's a tool, npm semver, that shows this. So you have to check the installed package versions too, by opening each one and looking inside its package.json file, especially for those you think affect performance, like the Sass loader. Performance may differ from release to release.
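A quicker route than opening each package.json by hand (npm ls and npm outdated are standard npm commands; sass-loader here is just an example package to inspect):
npm ls sass-loader    # shows the resolved version(s) of one package in the dependency tree
npm outdated          # compares installed versions against the ranges in package.json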
If you are running exactly the same codebase, I would say with a high degree of confidence that the problem is differing Node.js or npm versions.
Try running
node --version
npm --version
yarn --version
to see which versions of them you have installed.
For Angular 9 I would suggest running Node 12, which is the current LTS (Long-Term Support) version.
Node 14 is also out now, but its LTS release is not until October 2020.
I find yarn to be much faster than npm, so if you want a speed boost, try it if you haven't already.
Also try clearing the npm cache, as mentioned in the comments.
npm cache clear
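On newer npm versions the cache command is spelled differently; npm cache clear only worked as an alias on older releases, so the modern equivalent would be:
npm cache clean --force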

Electron throwing error %1 is not a valid win32 application with custom node addon

I've written a custom node addon that works perfectly fine when running the 64-bit version of Electron.
I tried setting the architecture to ia32 and everything builds, but I get the "not a valid win32 application" error no matter what I do.
My environment settings are:
set npm_config_disturl=https://atom.io/download/atom-shell
set npm_config_target=1.0.1
set npm_config_arch=ia32
set npm_config_runtime=electron
set HOME="C:\Users\myHome\.electron-gyp"
set VCTargetsPath=C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140
I have been building the addon by calling npm install.
Here's how I set my Node environment to target 32-bit and install all packages in 32-bit form. It works for me; you may try it.
npm set npm_config_arch ia32
npm clean-install --arch=ia32
The first command sets the Node environment to 32-bit.
The second command reinstalls all the Node packages in their 32-bit-compatible versions.
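As a sanity check (process.arch is a standard Node API; note it reports the architecture of the node binary itself, not of Electron's bundled runtime):
node -p "process.arch"    # prints ia32 for a 32-bit Node, x64 for 64-bit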
I was trying to compile for Windows from my Mac and had that problem too, but after some reading I figured out how to proceed, and in the end I got it working. Yesterday I spent all day setting up a Windows virtual machine on my (other) Linux laptop (I used the Linux laptop just because my Mac was out of storage...). I also had a problem on Windows with the preload script from the Electron main process ("Can't find the script"); that was solved too.
Anyway, I think the node printer library from @tojocky is well maintained; on the other hand, the electron-builder documentation says you should compile natively, for natural reasons. Once you have that set up, you'll see that it's a cleaner and more pragmatic solution...
This was my entire process; I hope it helps someone having the same issue:
Get VirtualBox (or Parallels, but that is not free)
Get an ISO for W10
Create a VM with this W10 ISO; you should give the VM plenty of storage (because of some dependencies that you'll need to compile). I assigned 60 GB to this VM.
Once I had that VM running, I installed Visual Studio 2017 on it (with its build tools included; they're necessary)
And then I used CMD to do the rest:
Install Node.js (and npm, which comes with it)
Install node-gyp globally
Install Python 2.7
Clone your project from Git (in my case)
npm i (in your project); you should of course have the module electron-builder as an npm dependency in your package.json. (Here I had some trouble, because when node-gyp tried to rebuild printer to generate the binary for Windows it failed: it could not find the Python executable. If you face this problem, add it, as in my case: npm config set python "c:\Python27\python.exe")
Then try npm i again, and voilà!
After all that, you should make the build using electron-builder; in my case my npm script command was build --win --x64, but you can use the --ia32 flag as well for 32-bit.
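For reference, those flags map directly onto electron-builder's CLI, roughly like this (--win, --x64, and --ia32 are documented electron-builder flags; npx just runs the locally installed binary):
npx electron-builder --win --ia32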

Where is linux-tick-processor on node.js ubuntu native package installation?

I have installed Node.js on an Ubuntu 64-bit server using a standard apt-get and would like to profile scripts via the --prof flag.
Web searching shows there should be a tool to process the v8.log output, located at "deps/v8/tools/linux-tick-processor", but I don't seem to have any of those directories. Do they come with the native install? Should they be installed separately? If so, how?
Thank you
You need to download the source package with sudo apt-get source nodejs. The path you mentioned is in there.
You'll need to run scons prof=on d8 in deps/v8 first to build the debugger, which might give you some trouble on a 64-bit machine (v8 is 32-bit only); see here for more info.
Here's how I did it for Node.js 0.10.25 and 0.10.26:
I downloaded the source for the Node.js version corresponding to the binaries I'm using. (I'm on Debian testing, which is a bit behind the releases from the Node.js web site.)
I checked the version of v8 bundled in the node sources. (Look at deps/v8/ChangeLog. It was 3.14.5 for Node.js 0.10.25 and 0.10.26.)
I downloaded this exact version of v8 from the v8 site.
Why? I tried running make native in Node's deps/v8 directory, but the Makefile complained about a missing test directory. From this we can infer that the Node developers do not include the entire v8 distribution. Once upon a time, with an earlier version of Node (0.8.something), I did build v8 from what was available in deps/v8, but this time I decided on a different approach.
As explained in v8's build/README.txt, in the top level of the source tree for v8, I did:
$ svn co http://gyp.googlecode.com/svn/trunk build/gyp
(Linking my installed gyp to build/gyp as suggested in OrangeDog's answer did not work. That's why I did the above.)
I ran:
$ CXX=g++-4.7 make native
Why the CXX setting? I ran into a compilation problem right away when I tried with the default gcc. I checked the version. It was 4.8 and I remembered a story on Slashdot about how 4.8 was giving people trouble. So I installed 4.7. Worked fine.
I linked out/native/d8 to a location which is in my PATH. This is because the linux-tick-processor script does a poor job at finding d8. The simplest solution was to make it available in my PATH. Your mileage may vary.
After all this, linux-tick-processor can be used with the v8.log files that Node produces.
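Putting it together, the profiling workflow then looks roughly like this (app.js is just a placeholder name for your script):
$ node --prof app.js                        # writes v8.log in the current directory
$ linux-tick-processor v8.log > profile.txt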
Either install the source package (sudo apt-get source nodejs), or switch to the official source, as the Ubuntu packages are very out of date.
To build d8, go to the deps/v8 directory.
Create a symlink at build/gyp to the directory where gyp can be found (e.g. /usr/bin).
Run make native.
Copy/symlink out/native/d8 to somewhere on your PATH.
