Why is build.sh not generated inside the Rust code directory in NEAR SDK (near-sdk-rs)?

Steps that I followed
cargo build --target wasm32-unknown-unknown --release
env 'RUSTFLAGS=-C link-arg=-s' cargo build --target wasm32-unknown-unknown --release
Then running ./build.sh throws the error "no such file or directory" on Linux.
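Note that cargo itself never generates a build.sh; the NEAR example repositories simply ship one as a convenience wrapper around the commands above. A minimal sketch of such a script, assuming a crate named my_contract (replace it with the name from your Cargo.toml):
#!/bin/bash
set -e
# same build the question runs by hand, with the size-stripping link flag
RUSTFLAGS='-C link-arg=-s' cargo build --target wasm32-unknown-unknown --release
# copy the compiled contract into ./res for deployment (a convention, not something cargo creates)
mkdir -p ./res
cp target/wasm32-unknown-unknown/release/my_contract.wasm ./res/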

Related

Docker build on node:lts-alpine suddenly not working anymore

Here is my Dockerfile:
FROM node:lts-alpine as node
ARG STAGE='dev'
ARG STAGEPATH='/dev'
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN $(npm bin)/ng build --configuration $STAGE --base-href=${STAGEPATH}/konnektor/
I run the build manually:
docker build --progress=plain --no-cache -t konnektor .
and get the following error:
#14 [node 6/6] RUN $(npm bin)/ng build --configuration dev --base-href=/dev/konnektor/
#14 sha256:0d47aa4d98557f141c02bf395f4e2f6dd49e9cebe6dddad4ea4e5126da5016e7           
#14 10.72 /bin/sh: Unknown: not found
#14 ERROR: executor failed running [/bin/sh -c $(npm bin)/ng build --configuration $STAGE --base-href=${STAGEPATH}/konnektor/]: exit code: 127
------> [node 6/6] RUN $(npm bin)/ng build --configuration dev --base-href=/dev/konnektor/:
The build was working yesterday morning. I figured out that the parent image "node:lts-alpine" got an update a few hours ago, so I strongly suspect that is the problem. My question now is: how can I get my build up and running again? The last image was overwritten by this build, and docker.io does not expose older digest hashes for pulling older images.
I tried to find older image versions on docker.io without success.
I saw that other Alpine-based images were updated at the same time.
For others who may be facing the same problem, here is my fix:
I was able to fix my setup by pinning the Docker image to the digest of the previous version.
The hard part was finding the digest hashes of the older images; docker.io only shows the most recent one.
Fortunately, there is a repository that keeps track of the versions:
https://github.com/docker-library/repo-info/blob/master/repos/node/remote/lts-alpine.md
Following the git history of lts-alpine.md gave me the older image hash. Accordingly, I changed
FROM node:lts-alpine as node
to
FROM node@sha256:fda98168118e5a8f4269efca4101ee51dd5c75c0fe56d8eb6fad80455c2f5827 as node
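If the previously working image is still in the local cache, its digest can also be read directly with docker inspect instead of digging through repo-info (a sketch; the output on your machine will differ):
docker inspect --format '{{index .RepoDigests 0}}' node:lts-alpine
# prints something like node@sha256:<digest>, which can be pasted into the FROM line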

Rust in Docker image: exec no such file or directory

I am trying to create a Docker image with a small Rust application. I want to run it on my Kubernetes cluster running on a Raspberry Pi 4B, so the image must be linux/arm64/v8.
I create the image with this command on macOS:
$ docker build --platform linux/arm64/v8 -t dasralph/ping:arm64_1.0.4 .
But when I run it on the Raspberry Pi, the executable isn't found:
$ sudo docker run dasralph/ping:arm64_1.0.4
Unable to find image 'dasralph/ping:arm64_1.0.4' locally
arm64_1.0.4: Pulling from dasralph/ping
4f4fb700ef54: Pull complete
38f252ce47e1: Pull complete
Digest: sha256:4fbda499e0552bf08bf230db56906d185bd340655c0cc741ad10ee0ea642c626
Status: Downloaded newer image for dasralph/ping:arm64_1.0.4
exec /ping: no such file or directory
This is my Dockerfile:
# STAGE 1 is to build the binary
# Use rust-based image for container
FROM rust:1.61 AS builder
# Adding necessary packages
RUN apt update && apt upgrade -y
RUN apt install -y g++-aarch64-linux-gnu libc6-dev-arm64-cross
RUN rustup target add aarch64-unknown-linux-gnu
RUN rustup toolchain install stable-aarch64-unknown-linux-gnu
ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc \
CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc \
CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++
# Set working directory in container; make directory if not exists
RUN mkdir -p /usr/src/ping
WORKDIR /usr/src/ping
# Copy all Cargo files from local computer to container
COPY Cargo.toml .
COPY Cargo.lock .
COPY .env.docker .env
COPY src src
# Build release application
RUN cargo build --target aarch64-unknown-linux-gnu --release
# STAGE 2 is to have smallest image possible by including only necessary binary
# Use smallest base image
FROM shinsenter/scratch
# Copy application binary from STAGE 1 image to STAGE 2 image
COPY --from=builder /usr/src/ping/target/aarch64-unknown-linux-gnu/release/ping /
EXPOSE 8080
ENTRYPOINT ["/ping"]
Does anyone have a hint about what's going wrong?
Cargo.toml
[package]
name = "ping"
version = "0.2.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
actix-web = "4"
load-dotenv = "0.1.2"
main.rs
use actix_web::{get, App, HttpResponse, HttpServer, Responder};
use std::env;
use load_dotenv::load_dotenv;
load_dotenv!();
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let bind_address = env!("BIND_ADDRESS", "BIND_ADDRESS must be set");
    println!("BIND_ADDRESS: {bind_address}");
    HttpServer::new(|| {
        App::new()
            .service(hello)
    })
    .bind((bind_address, 8080))?
    .run()
    .await
}
#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello Ralph!")
}
My problem was that I needed to make sure Rust compiles to a static binary. It looks like musl is one way to do that.
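A quick way to see the difference, assuming the binaries are available on the build machine (paths as in the two Dockerfiles), is to inspect them with file and look for "dynamically linked, interpreter ..." versus "statically linked" in the output:
file target/aarch64-unknown-linux-gnu/release/ping   # glibc build: needs a dynamic loader that a scratch image does not contain
file target/aarch64-unknown-linux-musl/release/ping  # musl build: statically linked, runs on a bare scratch image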
This is now my updated Dockerfile:
# Build: docker build --platform linux/arm64/v8 -t dasralph/ping:arm64_0.1.0 --push .
# Run: docker run -p 8080:8080 ping
# Test: curl http://localhost:8080/
# STAGE 1 is to build the binary
# Use rust-based image for container
FROM rust:1.61.0-alpine AS builder
# Adding necessary packages
RUN apk update
RUN apk add pkgconfig openssl openssl-dev musl-dev
RUN rustup target add aarch64-unknown-linux-musl
RUN rustup toolchain install stable-aarch64-unknown-linux-musl
# Set working directory in container; make directory if not exists
RUN mkdir -p /usr/src/ping
WORKDIR /usr/src/ping
# Copy all files from local computer to container
COPY Cargo.toml .
COPY Cargo.lock .
COPY .env.docker .env
COPY src src
# Build release application
RUN cargo build --target aarch64-unknown-linux-musl --release
# STAGE 2 is to have smallest image possible by including only necessary binary
# Use smallest base image
FROM shinsenter/scratch
# Copy application binary from STAGE 1 image to STAGE 2 image
COPY --from=builder /usr/src/ping/target/aarch64-unknown-linux-musl/release/ping /
EXPOSE 8080
ENTRYPOINT ["/ping"]
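To double-check that the resulting image really is arm64 before deploying it to the Pi, the manifest can be inspected (a sketch, using the tag from the build comment above):
docker image inspect --format '{{.Os}}/{{.Architecture}}' dasralph/ping:arm64_0.1.0
# expected output: linux/arm64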

What packages do I need to install to accomplish a "vue build <src path> --config <config file path> --dist <dist location> --prod --lib" command?

I've forked an old Vue.js package that has some issues in it (v-money) and made the necessary changes to accomplish what I need. But now when I try to build using the package's original method, I'm getting an error:
npm run build
vue build ./src/index.js --config ./build.config.js --dist ./dist/ --prod --lib "--disable-compress"
Usage: vue build [options]
alias of "npm run build" in the current project
Options:
-h, --help display help for command
Unknown option --config.
I'm guessing I've got the wrong version of Vue.js installed, as the package didn't indicate what version it's supposed to be, but I can't find anything on the web that shows --config, --dist, --prod, and --lib as build options for Vue.js.
I've attempted to build the package as-is without any of my small changes and that fails in the same way.
Install the following dev dependencies from the root of the v-money project:
npm i -D vue-cli@2.8.2 \
uglify-es \
uglifyjs-webpack-plugin@^1
Edit build.config.js to use the Uglify dependencies installed above:
const UglifyJSPlugin = require('uglifyjs-webpack-plugin')
module.exports = {
  webpack: {
    ⋮
    plugins: [
      new UglifyJSPlugin(),
      ⋮
    ]
  }
}
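With those dev dependencies installed and build.config.js updated, the package's existing npm script should work again from the project root (a sketch, assuming the build script from the question is unchanged):
npm run build
# with vue-cli 2.8.2 installed locally, `vue build` should understand --config/--dist/--prod/--lib again, and the bundle should land in ./dist/ as passed via --dist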

Is there something like nodemon in Rust?

In JavaScript, nodemon restarts a server automatically on code changes.
I am using wasm-pack and miniserve with two commands:
build:
wasm-pack build --target web --out-name wasm --out-dir ./static/build
serve:
miniserve ./static --index index.html
I would love these two to be automated just like in javascript with nodemon.
Use cargo-watch and pass the shell command to execute with the -s or --shell flag:
cargo watch -s 'wasm-pack build --target web --out-name wasm --out-dir ./static/build && miniserve ./static --index index.html'
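If rebuilds are re-triggered by the generated output in ./static/build, cargo-watch can also be told what to watch and what to ignore; a sketch using its -w/--watch and -i/--ignore flags with the paths from the question:
cargo watch -w src -w Cargo.toml -i 'static/build/**' -s 'wasm-pack build --target web --out-name wasm --out-dir ./static/build && miniserve ./static --index index.html'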
Thanks for the help Lux and kmdreko
Use chobs. It can execute your command and restart it on changes.
chobs watch -e "cargo run -- -f -e -b"
or any other command.

How to ask CMake to clean the Release configuration for an MSVC build

I have a build tree for different compilers (msvc-2008, mingw-gcc), generated with CMake:
\---buildroot
\---win32
+---mingw-gcc-4.4.0
| +---Debug
| \---Release
\---msvc-2008
+---Debug_Dynamic
+---Debug_Static
+---Release_Dynamic
\---Release_Static
I want to build all configurations with one script. I wrote a simple Python wrapper which iterates over the hierarchy and calls cmake --build. For MSVC builds I need to select the proper configuration for building, cleaning, and installing.
I read the documentation and found the --config parameter.
So the final cmake commands look like this:
cmake --build win32\mingw-gcc-4.4.0\Debug
cmake --build win32\mingw-gcc-4.4.0\Release
cmake --build win32\msvc-2008\Debug_Dynamic --config Debug
cmake --build win32\msvc-2008\Debug_Static --config Debug
cmake --build win32\msvc-2008\Release_Dynamic --config Release
cmake --build win32\msvc-2008\Release_Static --config Release
Here are the cmake commands to clean all targets:
cmake --build win32\mingw-gcc-4.4.0\Debug --target clean
cmake --build win32\mingw-gcc-4.4.0\Release --target clean
cmake --build win32\msvc-2008\Debug_Dynamic --config Debug --target clean
cmake --build win32\msvc-2008\Debug_Static --config Debug --target clean
cmake --build win32\msvc-2008\Release_Dynamic --config Release --target clean
cmake --build win32\msvc-2008\Release_Static --config Release --target clean
So I found the answer to my question.
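A minimal shell sketch of that wrapper idea (the original wrapper was written in Python; directory and configuration names are taken from the tree above, and --target clean can be dropped for a normal build or swapped for install):
for cfg in Debug_Dynamic Debug_Static Release_Dynamic Release_Static; do
  case "$cfg" in
    Debug_*)   conf=Debug ;;
    Release_*) conf=Release ;;
  esac
  cmake --build "win32/msvc-2008/$cfg" --config "$conf" --target clean
done
for cfg in Debug Release; do
  cmake --build "win32/mingw-gcc-4.4.0/$cfg" --target clean
done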
Disclaimer: I must admit that I do not know much about the --build option of CMake. However, CMake also generates 'package' and 'install' targets; perhaps you can specify those with the --target option of cmake --build.
cmake --build my_build_dir --target install
Otherwise, you will need to specify those using devenv or msbuild command-line options. Without cmake this would be something like:
devenv INSTALL.vcproj /Build Release
devenv INSTALL.vcproj /Clean Release
msbuild INSTALL.vcproj /t:Build /p:Configuration=Release
msbuild INSTALL.vcproj /t:Clean /p:Configuration=Release
I guess you can pass everything after '--' through cmake to msbuild/devenv.
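For example, with a generator that drives msbuild, msbuild switches can be forwarded after '--' (a sketch reusing the my_build_dir placeholder from above; /m and /verbosity are msbuild's own options, and a devenv-based generator such as VS 2008 would expect devenv switches there instead):
cmake --build my_build_dir --config Release -- /m /verbosity:minimal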
