I wanted to build TensorFlow Serving from source, optimized for my CPU, and I followed the instructions given on the TensorFlow Serving page.
The instructions felt incomplete. I was only able to find these three lines, which I have run:
git clone -b r2.3 https://github.com/tensorflow/serving.git
cd serving
tools/run_in_docker.sh -d tensorflow/serving:2.3.0-devel \
bazel build --config=nativeopt tensorflow_serving/...
So I'm wondering what to do next after the last step. How can I install it on my Ubuntu machine so that I can access it from the terminal with a command like tensorflow_model_server --port=8500...?
After building TensorFlow Serving you can start testing it; a good starting point is the Serving Basic tutorial on the TensorFlow website.
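To call it from the terminal like any other command, you can copy the server binary that Bazel produced onto your PATH. A minimal sketch, assuming the build left its artifacts under bazel-bin/ in your serving checkout (the model name and path below are placeholders):
# install the freshly built binary system-wide
sudo cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
# then start it from any terminal
tensorflow_model_server --port=8500 \
    --model_name=my_model \
    --model_base_path=/models/my_model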
Environment
I use GitLab CI/CD to bundle my application.
I use node:14-alpine as the image and run yarn to build my app.
After the build finishes, I deploy my app via rsync to the target server, which runs Ubuntu 20.04.
On this server, I use pm2 to start the app and keep it running.
Issue
If I look into the logs, I see an error like this:
I've searched a bit and found that the issue might be caused by a missing musl-dev.
I've installed it on my server and in the Docker container, but with the same result.
BUT, if I delete the node_modules directory from the server and run yarn install right on the server, the app runs as expected.
Question
So why does this issue happen here? Must I have the same distribution and version of Linux in my Docker container to satisfy all dependencies?
Don't use an Alpine image if you're deploying on Ubuntu.
So why does this issue happen here?
The fundamental C standard library implementation is different on the two (Alpine uses musl libc; Ubuntu and more or less all other distros use GNU C Library (glibc)).
Trying to move binaries (such as those that might appear in node_modules for native modules) built against one libc implementation to a system using the other will likely be painful or not work at all (as you noticed).
Must I have the same distribution and version of Linux in my Docker container to satisfy all dependencies?
If none of the dependencies use native code, then you should be able to just move things over without issues, but otherwise it'll be easiest (e.g. considering the versions of other libraries your dependencies may link against) to just use the same version as your target OS – or, if you don't want to think about that, just deploy your application as a Docker container.
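One way to see the mismatch for yourself is to run ldd on a compiled addon inside node_modules. A quick sketch, assuming some dependency ships a native addon (the path below is a placeholder):
# list a few compiled addons shipped in node_modules
find node_modules -name '*.node' | head -n 5
# check which libc one of them was linked against
ldd node_modules/some-dep/build/Release/binding.node
# a line like "libc.musl-x86_64.so.1 => not found" means it was built on Alpine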
Even though the suggestion from @AKX is a good answer, I've played around a bit to figure out how to solve this special case.
Here is my solution:
install musl-dev on the server
link it to /lib
apt-get install musl-dev
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
In my case it's only this single dependency that causes the trouble. If I run into more of these, I will follow AKX's suggestion and choose a Debian/Ubuntu-like distribution to bundle it.
While trying to learn fairseq, I was following the tutorials on the website and implementing:
https://fairseq.readthedocs.io/en/latest/tutorial_simple_lstm.html#training-the-model
However, after following all the steps, when I try to train the model using the following:
! fairseq-train data-bin/iwslt14.tokenized.de-en \
    --arch tutorial_simple_lstm \
    --encoder-dropout 0.2 --decoder-dropout 0.2 \
    --optimizer adam --lr 0.005 --lr-shrink 0.5 \
    --max-tokens 12000
I receive an error:
`fairseq-train: error: argument --arch/-a: invalid choice: 'tutorial_simple_lstm' (choose from 'fconv', 'fconv_iwslt_de_en', 'fconv_wmt_en_ro', 'fconv_wmt_en_de', 'fconv_wmt_en_fr', 'fconv_lm', 'fconv_lm_dauphin_wikitext103', 'fconv_lm_dauphin_gbw', 'transformer', 'transformer_iwslt_de_en', 'transformer_wmt_en_de', 'transformer_vaswani_wmt_en_de_big', 'transformer_vaswani_wmt_en_fr_big', 'transformer_wmt_en_de_big', 'transformer_wmt_en_de_big_t2t', 'bart_large', 'bart_base', 'mbart_large', 'mbart_base', 'mbart_base_wmt20', 'nonautoregressive_transformer', 'nonautoregressive_transformer_wmt_en_de', 'nacrf_transformer', 'iterative_nonautoregressive_transformer', 'iterative_nonautoregressive_transformer_wmt_en_de', 'cmlm_transformer', 'cmlm_transformer_wmt_en_de', 'levenshtein_transformer', 'levenshtein_transformer_wmt_en_de', 'levenshtein_transformer_vaswani_wmt_en_de_big',....
Some additional info: I am using Google Colab. And I am writing the entire code up to the train step into a .py file and uploading it to the fairseq/models/... path, as per my interpretation of the instructions. I am following the exact tutorial in the link.
And, before running it on Colab, I am installing fairseq using:
!git clone https://github.com/pytorch/fairseq
%cd fairseq
!pip install --editable ./
I think this error happens because the command-line argument created as per the tutorial has not been set properly.
Can anyone please explain whether I need to do something else at any step?
I would be grateful for your inputs, as for a beginner such help from the community goes a long way.
It seems you didn't register the SimpleLSTMModel architecture, as follows. Once the model is registered you can use it with the existing command-line tools.
from fairseq.models import FairseqEncoderDecoderModel, register_model

@register_model('simple_lstm')
class SimpleLSTMModel(FairseqEncoderDecoderModel):
    ...
Please note that copying .py files doesn't mean you have registered the model. To do so, you need to execute the .py file that includes the lines of code above. Then you'll be able to run the training process using the existing command-line tools.
You should put your .py file into:
fairseq/fairseq/models
not into fairseq/models
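For reference, a sketch of the expected layout, assuming the tutorial code was saved as tutorial_simple_lstm.py (the file name is hypothetical); fairseq auto-imports every module placed inside its fairseq/models package:
# copy the tutorial file into the package fairseq actually imports from
cp tutorial_simple_lstm.py fairseq/fairseq/models/
# the editable install from above picks the new module up on the next run
fairseq-train data-bin/iwslt14.tokenized.de-en --arch tutorial_simple_lstm ...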
I'm trying to build the frontend of a web application in a Node.js Docker container. As I'm on a Windows PC, I'm very limited in my choice of base images. I chose this one, as it's the only one on Docker Hub with a decent number of downloads. As the application is meant to run in Azure, I'm also limited to Windowsservercore 2016. When I run the following Dockerfile, I get the error message below (on my host system the build runs fine, btw):
FROM stefanscherer/node-windows:10.15.3-windowsservercore-2016
WORKDIR /app
RUN npm install -g @angular/cli@6.2.4
COPY . ./
RUN ng build
#
# Fatal error in , line 0
# API fatal error handler returned after process out of memory on the background thread
#
#
#
#FailureMessage Object: 000000E37E3FA6D0
I tried increasing the memory available to the build process with --max_old_space, up to 16 GB (the entire RAM of my laptop), but that didn't help. I also contacted the author of the base image to find out if that's the issue, but as this doesn't seem to be reproducible with a smaller example application, that wasn't very fruitful either. I've been working on this issue for a week now and I'm seriously out of ideas about what could be the reason. So I hope to get a new impulse from here, or at least a direction I could investigate.
What I also tried was getting Node.js and Angular installed on a Windowsservercore base image. If someone has an idea how to do that, it could be the solution.
EDIT: I noticed that the error message is the only output I get from the build process, it doesn't even get to try building the modules. Maybe that means something...
Alright, I figured it out. Although the official Docker documentation states that Docker has unlimited access to resources, it seems that you need to use the -m option when your build process exceeds a certain amount of memory.
Edit: This question seems to be getting some views so maybe I should clarify this answer a bit. The root of the problem seems to be that under Windows, Docker runs inside a Hyper-V VM. So when the documentation talks about "unlimited access to resources", it doesn't mean your PC's resources, but instead the resources of that VM.
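So passing a memory limit to docker build should work around it. A minimal sketch; 8g is an arbitrary example value you should size to your build:
# -m / --memory raises the limit for the containers created per build step
docker build -m 8g -t my-frontend .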
I’ve written a very small application in Go, and configured an AWS Linux AMI to host it. The application is a very simple web server. I’ve installed Go on the Linux VM by following the instructions in the official documentation to the letter. My application runs as expected when invoked with the “go run main.go” command.
However, I receive an “Invalid argument” error when I attempt to manually launch the binary file generated as a result of running “go install”. If instead I run “go build” (which I understand to be essentially the same thing, with a few exceptions) and then invoke the resulting binary, the application launches as expected.
I’m invoking the file from within the $GOPATH/bin/ folder as follows:
./myapp
I’ve also added $GOPATH/bin to the $PATH variable.
I have also moved the binary from $GOPATH/bin/ to the src folder, and successfully run it from there.
The Linux instance is a 64-bit instance, and I have installed the corresponding Go 64-bit installation.
go build builds everything (that is, all dependent packages), then produces the resulting executable files and then discards the intermediate results (see this for an alternative take; also consider carefully reading outputs of go help build and go help install).
go install, on the contrary, uses precompiled versions of the dependent packages if it finds them; otherwise it builds them as well, and installs them under $GOPATH/pkg. Hence I might suggest that go install sees some outdated packages which break the resulting build.
Consider running go install ./... in your $GOPATH/src.
Or maybe just do a selective go install uri/of/the/package for each dependent package, and then retry building the executable.
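A minimal sketch of that cleanup, assuming $GOPATH is set; go clean -i drops the stale installed archives so the next go install rebuilds everything from source:
cd "$GOPATH/src"
# remove previously installed package archives and binaries
go clean -i ./...
# recompile all packages and reinstall the executable into $GOPATH/bin
go install ./...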
I have a Node.js application that I want to run on a Raspberry Pi.
And I'd like to be able to deploy new versions of my application, as well as new versions of Node.js, to that Raspberry Pi remotely.
Basically, something such as:
$ pi-update 192.168.0.37 node@0.11.4
$ pi-update 192.168.0.37 my-app@latest
I don't have any preferences on how to transfer my app to the Pi, be it pushing or pulling; I don't care (although I should add that the code for the application is available from a private GitHub repository).
Additionally, once Node.js and/or my app have been deployed, I want the potentially running Node.js app to restart.
How could I do this? Which software should I look into? Is this something that can easily be done using tools from Raspbian, or should I look for 3rd party software (devops tools, such as Chef & co.), or ...?
Any help is greatly appreciated :-)
a) For running the script continuously, you can use tools like forever or pm2 (see the pm2 sketch below, after point c); otherwise you can also make the app a Debian daemon on Raspbian that you can run with sudo <servicename> start (if you're running Arch Linux, this is handled differently, I guess).
b) If your Raspberry Pi is reachable from the internet, you can use a GitHub hook (API Documentation) to run a script every time you push a change to your repository. This hook is basically a URL endpoint on your Pi that runs a little shell script locally.
This script should shut down your app gracefully, do a git pull for your repository, and start the app/service again. You could also trigger this shell script over SSH from your local machine, e.g. ssh pi@192.168.0.37 /path/to/your/script
An update script could look like this:
# change the 'service' command to your script runner of choice
service <yourapp> stop
cd /path/to/your/app
git pull
service <yourapp> start
c) The problem with remotely updating Node itself is that official binary builds for the Raspberry Pi appear only very irregularly; otherwise it would be easy to just download/update the binaries with wget or curl. So most of the time you either need to cross-compile Node on your own machine or spend about two hours recompiling it on your Pi. If you want to go with the unofficial builds on GitHub, you can install them with curl -# -L https://gist.github.com/raw/3245130/v0.10.17/node-v0.10.17-linux-arm-armv6j-vfp-hard.tar.gz | tar xzvf - --strip-components=1 -C /usr/local but you need to check the file name for every release.
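For option a), a pm2 setup can be as small as this sketch (app.js and the app name are placeholders for your own entry point):
pm2 start app.js --name my-app   # launch the app and keep it alive
pm2 save                         # remember the current process list
pm2 startup                      # prints the command that installs a boot service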
Look no further than resin.io. All you need to do is flash your RPi with their image and then git push your project. resin.io will compile your code and dependencies for your device's architecture and send the result to your device(s) (in a Docker container).
You can create a very simple continuous integration scheme using supervisor, which does two things:
keeps your process running even if it fails,
and restarts your process if any of the files changes.
It becomes a simple issue to keep your app updated: you just have to run git pull; npm install: when code is downloaded (or even node modules change), supervisor will restart the app automatically for you.
If the Raspberry Pi is visible from the internet you can use a GitHub webhook, pointing it to a very simple page that runs the commands git pull; npm install using child_process.exec(). (One important note: use a non-trivial URL (with a code or something) so that people don't run into it by mistake.) Otherwise just run those commands from the crontab every hour or so, for instance.
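The crontab variant could look like this sketch; the path and service name are placeholders for your own setup (add the line via crontab -e):
# pull and reinstall once an hour, then restart the app
0 * * * * cd /home/pi/my-app && git pull && npm install && service my-app restart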
As for updating Node.js itself, I would use the official Debian package, either from testing or from unstable. Otherwise you would have to create a private repo to host your own packages, which is probably not worth the hassle, but is doable.