I'm trying to deploy a database-backed Rust app on Amazon Lightsail. It uses the Diesel ORM crate and compiles without trouble on my local (Arch) Linux machine.
To compile the app remotely, I SSH into a Lightsail Debian VM. After installing Rust, cloning the repo, and specifying the toolchain, I run cargo build. This compiles a bunch of crates, but when it reaches Diesel it appears to hang: ps shows the cargo and rustc processes still running after 30 minutes.
I've tried Diesel versions 1.4.5 and 2.0.0, stable and nightly Rust toolchains, and Ubuntu as well as Debian VMs.
[Edit: the app also compiles without trouble on a Linode VM.]
What could be the problem? (How can I collect further information for diagnosis?)
What is the CPU graph showing?
Lightsail uses burstable instances: they have a baseline level of CPU performance and can absorb occasional spikes, but if you keep the CPU above the baseline for too long, it gets throttled.
If you check the instance's Metrics tab, you can see whether it is running out of burst capacity (choose "Burst capacity percentage" or "Burst capacity minutes" from the drop-down).
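The same metric is available from the AWS CLI if you want to watch it while the build runs; a sketch, with the instance name and time window as placeholders:

aws lightsail get-instance-metric-data \
    --instance-name my-instance \
    --metric-name BurstCapacityPercentage \
    --unit Percent --statistics Average --period 300 \
    --start-time 2022-09-01T00:00:00Z --end-time 2022-09-01T06:00:00Z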
I am moving my Windows-hosted SPA app into a Linux container. I am somewhat familiar with Ubuntu, so I was going to use that.
The NodeJs page on Docker Hub shows images for several Debian versions and Alpine.
But nothing for Ubuntu.
Is Ubuntu not recommended for use with NodeJs by the NodeJs team?
Or is it just too much work to keep NodeJs images for lots of Linux distros prepped, so the Node team stopped at Debian and Alpine?
Or is there some other reason?
Ubuntu is too heavy to use as a base image for running a Node application as a server. Debian and Alpine are much more lightweight by comparison.
On top of that, if you have some knowledge of Ubuntu, Debian and Alpine won't be a big change. At the end of the day, Ubuntu is built on top of Debian, and they're all Linux distros, so you should be fine. Especially since you only need to do your configuration steps once and save them as part of the container image, and you're done: every build produces the same container with the right setup. The beauty of containers.
Ubuntu is just a really heavy base and is going to pull a ton of packages into the container that are most likely unnecessary. If you're going to be building production-grade containers, Alpine is usually the go-to. It has a minimal set of libraries installed, reducing the overall size of the container, and should be closest to the bare minimum your application needs to run. I'd start there.
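As a rough illustration, a minimal Alpine-based image for a Node server might look like the sketch below (the entry point server.js and port 3000 are placeholders for your app):

FROM node:18-alpine
WORKDIR /app
# install only production dependencies, from the lockfile
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]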
So I am 400 hours into a project at work building an automated image classification pipeline. I have overcome many hurdles and am about finished with the first alpha. Everything runs in Docker containers on my workstation. The only thing left was to build the inference service, so I set up one more Docker container, pulled in my libraries, set up the Flask endpoints, and copied the tflite file to the shared volume. Everything seems to be in order: I can hit the API from Chrome and I get the right responses.
So I very happily report that the project is ready for testing 5 weeks early! I explain that all we have to do is install Docker, build and run the Dockerfile, and we are ready to go. To this my coworker responds: "the target machines are 32-bit! No Docker!"
Upgrading to 64-bit is off the table.
I tried to compile TensorFlow for 32-bit...
I want to add a single-board PC (x64) to the machine network and run the Docker container from there, but management wants a solution that does not require retrofitting.
The target machines have very unstable internet connections managed by other companies in just about every country on earth, so a cloud solution is not going to work (plus I need sub-50 ms latency).
Does anyone have an idea of how to tackle this challenge? At this point I think I am stuck recompiling TF for 32-bit, but I don't know how!
The target machines are running a custom in-house distro of Debian 6, 32-bit.
The target machines are old and their software is outdated, but they were very high-end at the time they were built.
It's not clear which 32-bit architecture you want to target. I guess it's ARM32.
If that's the case, you can build TF or TFLite for ARM32.
Check the following links.
https://www.tensorflow.org/install/source_rpi
https://www.tensorflow.org/lite/guide/build_rpi
Though they're about the Raspberry Pi, they should give you an idea of how to build it for ARM32.
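For example, older TensorFlow source trees ship Makefile-based helper scripts for exactly this; roughly (a sketch, and the script paths vary between TF versions):

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
# fetch third-party dependencies, then cross-compile the TFLite static library for ARM32
./tensorflow/lite/tools/make/download_dependencies.sh
./tensorflow/lite/tools/make/build_rpi_lib.sh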
Problem statement first: how does one properly set up TensorFlow for running on a DSVM using a remote Docker environment? Can this be done in aml_config/*.runconfig?
I receive the following message, and I would like to be able to utilize the increased speed of the extended FMA operations.
tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Background: I use a local Docker environment managed through Azure ML Workbench for initial testing and code validation, so that I'm not running an expensive DSVM constantly. Once I assess that my code is to my liking, I then run it on a remote Docker instance on an Azure DSVM.
I want a consistent conda environment across my compute environments, so this works out extremely well. However, I cannot figure out how to control the TensorFlow build to optimize for the hardware at hand (i.e. my local Docker on macOS vs. a remote Docker on an Ubuntu DSVM).
The notification indicates that you could compile TensorFlow from source to take advantage of this particular CPU's instructions, so it runs faster. You can safely ignore this. If you choose to compile, though, you can build and install TensorFlow from source, and then use the native VM execution mode (vs. using Docker) to run it from Azure Machine Learning.
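For reference, the from-source build with CPU optimizations follows the TensorFlow build docs, roughly like this (the exact wheel name will differ by version):

./configure
# --config=opt compiles with -march=native, enabling AVX2/FMA on hosts that support them
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl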
Hope this helps,
Serina
For our Travis-CI builds of the Jailhouse hypervisor, we have a rather costly environment setup: a partial distribution update to pull in a recent make version (>=3.82; the default one is still only 3.81 - Ubuntu...), a cross toolchain for ARM, and a 100 MB package of prebuilt Linux kernel sources that we need in order to compile an out-of-tree module.
To reduce the build time and the number of kernel downloads, we currently build all configuration variants sequentially in a single run (make; make clean; make...). That was fine for checking for build breakages, but with the addition of a Coverity scan, which depends on the build outputs, it no longer works. Switching to a build matrix seems the obvious solution, at the price of repeating the installation for every job, because Travis-CI seems unable to reuse it across such builds. While we currently only have 3 configuration variants, this number will grow (e.g. every ARM board added will increase it by one), so the approach does not really scale.
Do we have any alternatives? I already looked at caching, available via the Docker-based build, but the lack of sudo support there prevents this approach. Other ideas?
You should change your build to do this:
cov-build --dir <target1> make; make clean
...
Use a different intermediate directory for each build. Then go back later and run
cov-analyze --dir <target1>
cov-commit-defects --dir <target1> --stream <target1>
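Scripted over all variants, that might look like the following sketch (the CONFIG variable is a hypothetical stand-in for however you select a Jailhouse configuration):

for cfg in config1 config2 config3; do
    # capture each variant's build into its own intermediate directory
    cov-build --dir "idir-$cfg" make CONFIG="$cfg"
    make clean
done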
I'm currently looking for a VPS to deploy a Yesod site on, and I was wondering what the system requirements are for running Yesod. I will be using Nginx with Warp as the system configuration.
There are no hard-and-fast rules here, but I comfortably run about 5 Yesod-powered sites with Nginx and PostgreSQL on a micro EC2 instance (micro being the instance size, not a random adjective).
I had a VPS and I had trouble with the glibc version, mainly because a lot of hosting companies are quite conservative and don't offer the latest and greatest versions of the common Linux distributions. GHC won't work with older versions of glibc, although I haven't found an exact definition anywhere of how old is too old.
So one system requirement is: a recent Linux that doesn't have an ancient version of glibc.
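A quick way to see what a candidate machine ships is to ask glibc directly:

ldd --version    # the first line reports the glibc version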
I currently run one Yesod app on Debian Lenny on a VDS with a 500 MHz CPU and 196 MB of RAM. I do not compile the app on the VDS; instead I upload a compiled binary. It only needs a recent libgmp, so I put one (libgmp*.so) from my desktop into the same directory as the application and run
LD_LIBRARY_PATH=. ./my-yesod-app
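A minimal sketch of that copy-the-binary workflow (host name, paths, and the exact libgmp file name are placeholders):

cabal build
scp dist/build/my-yesod-app/my-yesod-app user@vds:~/app/
# ship the desktop's libgmp alongside the binary
scp /usr/lib/libgmp.so.10 user@vds:~/app/
ssh user@vds 'cd app && LD_LIBRARY_PATH=. ./my-yesod-app'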