Node and CPU architecture - node.js

I have a Node app that is going to run on a small touch-screen device with an ARM CPU. The app itself is pretty simple: it reads data from syslog and sends an IPC message to another process if it finds a log entry containing some specific data.
My concern is whether there will be any issues with installing the npm dependencies on a build machine running a different architecture and then copying them onto the ARM device. The build machine is likely to be a 64-bit Mac or Linux box.
The app seems to work fine when I run npm install on my Mac and then copy the resulting node_modules folder onto the ARM device. However, I had written Electron apps for this same ARM device that required us to use electron-packager with a target architecture of
--platform=linux --arch=armv7l
for it to run. Simply installing node_modules on a Mac and then copying them over did not work in that case.
So what is the difference? Is it just the use of Electron itself that requires the platform-specific build, or is it something else I might run into with this new app I'm writing?

Native addons (compiled .node files) are built for a specific platform and CPU architecture and will not load on the ARM device, while pure-JavaScript dependencies are portable; Electron itself always ships a prebuilt, platform-specific binary, which is why electron-packager needed the explicit --platform and --arch. You can find the platform-specific files by executing:
find node_modules -name "*.node" | xargs file
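If that command does list native addons, one option (a sketch, assuming npm and a compiler toolchain are available on the device) is to install or rebuild the dependencies on the ARM device itself:
# run on the ARM device, inside the app directory
npm ci         # clean install from package-lock.json, compiling native addons locally
# or, if node_modules was already copied over from the Mac:
npm rebuild    # recompiles native addons for the current platform and architecture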

Related

xcodebuild issues when a Linux filesystem is mounted on a Mac (since /var/root comes into the picture when mounting this way)

The use case is something like this: we need to use the BlueZ BT stack on Linux, and there is also a dependency on an iOS app that controls BT testing on iOS (on a Mac). The execution flow triggers mounting of the filesystem from Linux onto the Mac, then tries to build the Xcode project and use the .app file that gets generated after the build succeeds.
If the xcodebuild command is run manually on the Mac directly, there is no problem:
xcodebuild test-without-building -project ios_bluetooth/ios_bluetooth.xcodeproj/ -scheme ios_bluetooth \
  -destination id=uuid -only-testing:ios_bluetoothUITests \
  CONFIGURATION_BUILD_DIR=./Build/Products/Debug-iphoneos \
  -derivedDataPath ./ios_bluetooth/DerivedData/ arguments=TESTS_STA_BLUETOOTH_ON
From Linux, after the mount, the filesystem is mounted by default under /var/root/ProjectFolder/Dependencies. Running the xcodebuild command from there results in permission issues. The issue can be seen even when logged in as root on the Mac. Is there a way to circumvent this issue and get the Xcode project to build? Any help in this regard is appreciated.
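One workaround to try (just a sketch; the local path below is a placeholder) is to copy the project off the /var/root mount into a directory owned by the build user and run the same xcodebuild command from there:
# copy the mounted project to a local, user-owned working directory
rsync -a /var/root/ProjectFolder/Dependencies/ "$HOME/bt_build/"
cd "$HOME/bt_build"
# then re-run the xcodebuild command shown above from this directory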

One of the IoT Edge modules is in Backoff state on a Raspberry Pi 4 with Raspbian OS

I have developed a module and built the image for the arm64v8 architecture, as my Edge device is a Raspberry Pi 4. I got the file deployment.arm64v8.json in the config folder correctly. But when I right-click the device in Visual Studio Code and select Create Deployment for Single Device, the modules get added, but one of them shows the Backoff state. What could be the problem here? I was strictly following this doc.
I also tried restarting the services.
Device Information
Host OS: Raspberry OS
Architecture: Arm64v8
Container OS: Linux containers
Runtime Versions
iotedged: iotedge 1.0.9.4
Docker/Moby [run docker version]:
Update:
I am trying to build an arm32 image on my 64-bit Windows dev machine; I guess that is the reason I am getting this issue. Now I have 3 options:
1. Install the 64-bit version of Raspberry OS from here
2. Set up a 32-bit virtual machine, use it as a dev machine, and build 32-bit images
3. I already have WSL running; maybe run the Visual Studio Code solution there?
Could you please tell me what would be the better way?
There were a couple of things I was doing wrong. The first was that I was trying to build an arm64 image on my 64-bit Windows dev machine and then deploy that image to the arm32 Raspbian OS, which will never work. You can see the architecture and other details by running the command below on the device.
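For example (uname -m is an assumption here; it matches the outputs described below):
uname -m    # prints aarch64 on a 64-bit OS and armv7l on a 32-bit one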
If it says aarch64, then it is 64-bit. If it says armv7l, then it is 32-bit. In my case, it was armv7l. So now I had to build an arm32 container image on my 64-bit Windows host machine and use it on my Raspberry Pi 4. According to this doc, that is definitely possible:
You can build ARM32 and ARM64 images on x64 machines, but you will not be able to run them.
Running was not my problem, as I just had to build the image and then use it on my Raspberry Pi. To make it work, I had to change my Dockerfile.arm32v7, specifically the lines where we pull the base images.
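# Build stage: runs on the host (x64 dev machine) architecture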
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
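# Runtime stage: arm32v7 runtime image matching the target Raspberry Pi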
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7
WORKDIR /app
COPY --from=build-env /app/out ./
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
ENTRYPOINT ["dotnet", "SendTelemetry.dll"]
The "build-env" image should be the same architecture as the host OS, the final image should be the target OS architecture. Once I made the changes to the docker file, I changed the version in the module.json file inside my module folder so that the new image with a new tag will be added to the Container Registry when I use the option Build and Push IoT Edge Solution after right-clicking deployment.template.json, and then I used Create Deployment for Single Device option after right-clicking on the device name in Visual Studio Code. And then when I monitor the device (Start Monitoring Built-in Event Endpoint option), I am getting this output.
Support from Microsoft was really helpful with this issue. They really helped me solve the GitHub issue that I had posted.
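As an aside, another way to cross-build an arm32 image on an x64 machine is Docker Buildx with QEMU emulation; this is not the approach used above, and the registry name and tag below are placeholders:
docker buildx build --platform linux/arm/v7 \
  -f Dockerfile.arm32v7 \
  -t <registry>/sendtelemetry:0.0.2-arm32v7 \
  --push .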

NPM CI Cross-Platform Reliability

Our Node.js application should run on both Linux and Windows servers. We have the following dilemma:
If we run npm i as our CI build, we sometimes get errors due to differences between the npm version on the developer's laptop and the one on the build server.
However, if we run npm ci, then the build will presumably be locked to the platform of the developer's laptop (Windows) and not work on a Linux build server.
Maybe our assumptions are incorrect:
Do we need to build 2 versions of our app: one for each platform?
Does npm ci lock us into the platform of the developer's machine through package-lock.json?
Examples of builds working on developers' Windows laptops and on Windows servers but not on a Linux server are apps like strapi or packages like sharp, which compile things for the platform (.dlls for Windows, god-knows-what for Linux).
Apparently there is no way to create a portable node_modules folder, even if all dependencies are pure-JavaScript ones.
I found this out the hard way: npm creates different scripts on Windows, like ones ending in .ps1, something that does not happen on POSIX platforms.
That means there is no way to track node_modules in git either.
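In practice, the lock file and the installed node_modules behave differently: package-lock.json is committed once and is not tied to the OS that generated it (platform-specific optional packages are skipped or selected at install time), while node_modules is platform-specific and should be recreated on each OS. A minimal sketch of that setup, assuming each build agent has Node and npm and that a build script exists in package.json:
# on the Linux build agent
npm ci          # exact, reproducible install from package-lock.json
npm run build   # assumed script name

# on the Windows build agent (cmd or PowerShell)
npm ci
npm run build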

How to run an Electron .exe app on Linux?

I'm trying to run an application built with Electron on Linux. The app maker offers an .exe installation file, so I figured I'd install it in WINE, but I seem to be missing something the app needs to run.
Since the installer is an .exe, do I need WINE? And if I do need WINE, what do I need to install to make the app work? I have tried two Electron apps, both only downloadable as an .exe installer.
Electron apps make OS-native calls, so the .exe builds usually do not work. WINE is not able to emulate all of those calls, so if it isn't working for you, then you are out of luck, I guess. Look for apps that offer Linux versions, like https://www.electronjs.org/apps/camunda-modeler. If you have access to the repository, chances are they build it using electron-builder, and in most cases you can just build it yourself with the command electron-builder build --linux.
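For example, a rough sketch of building the Linux version yourself, assuming the project uses electron-builder and with a placeholder repository URL:
git clone https://github.com/example/some-electron-app.git   # placeholder URL
cd some-electron-app
npm install                           # installs dependencies, including electron-builder
npx electron-builder build --linux    # output (e.g. an AppImage) lands in dist/ by default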

DNU publish to Linux runtime from Windows OS

I want to publish with dnu from a Windows machine so the output runs on Linux. This is required to make Docker images. I know the usual practice is to push the source to a Linux Docker container and run "dnu restore", but that sounds like a lengthy process and goes completely against the cross-compatibility that DNXCore50 is trying to offer.
The latest dnx runtime now includes "runtime" packages for unix/darwin to target the other operating systems. But how do I run a publish command that targets Linux? Or rather, is there a way to pull the Linux dnx core onto a Windows machine using dnvm install coreclr?
