So I am 400 hours into a project at work building an automated image classification pipeline. I have overcome many hurdles and am about finished with the first alpha. Everything runs in Docker containers on my workstation. The only thing left is to build the inference service. So I set up one more Docker container, pull in my libraries, set up the Flask endpoints, and copy the tflite file to the shared volume; everything seems to be in order, and I can hit the API from Chrome and get the right responses.
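For reference, the endpoint is the usual Flask-plus-TFLite-interpreter pattern; a minimal sketch of that pattern (with placeholder paths, route name, and input handling, not my exact code) looks like this:

```python
# Minimal sketch of the inference endpoint described above.
# Model path, route name, and input format are placeholders.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the .tflite model from the shared volume once at startup.
interpreter = tf.lite.Interpreter(model_path="/shared/model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

@app.route("/classify", methods=["POST"])
def classify():
    # Expect a JSON body like {"image": [[...], ...]}, already resized/normalized.
    data = np.array(request.get_json()["image"], dtype=np.float32)
    data = data.reshape(input_details[0]["shape"])
    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return jsonify({"class": int(np.argmax(scores)), "scores": scores.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```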
So I very happily report that the project is ready for testing 5 weeks early! I explain that all we have to do is install Docker, build and run the Dockerfile, and we are ready to go. To this my coworker responds, "the target machines are 32bit! no docker!"
Upgrading to 64-bit is off the table.
I tried to compile TensorFlow for 32-bit...
I want to add a single-board PC (x64) to the machine network and run the Docker container from there, but management wants a solution that does not require retrofitting.
The target machines have very unstable internet connections managed by other companies in just about every country on Earth, so a cloud solution is not going to work (plus I need sub-50 ms latency).
Does anyone have an idea of how to tackle this challenge? At this point I think I am stuck recompiling TensorFlow for 32-bit, but I don't know how!
The target machines are running a custom in-house distro of Debian 6 (32-bit).
They are old and have outdated software, but were very high-end at the time they were built.
It's not clear which 32-bit architecture you want to use. I guess it's ARM32.
If that's the case, you can build TF or TFLite for ARM32.
Check the following links.
https://www.tensorflow.org/install/source_rpi
https://www.tensorflow.org/lite/guide/build_rpi
Though they're about the Raspberry Pi, they should give you some idea of how to build it for ARM32.
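Also worth noting: those guides produce the small TFLite runtime rather than full TensorFlow, and that is all the inference side needs on the target machine. A rough sketch, assuming you end up with the tflite_runtime package installed (paths are placeholders):

```python
# Sketch of inference with only the lightweight TFLite runtime installed,
# which is what the RPi/ARM32 build guides above produce. Paths are placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="/shared/model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=inp["dtype"])  # replace with real preprocessed input
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```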
Little intro:
I have two OSes on my PC: Linux and Windows. I need Linux for work, but it freezes on my PC, while Windows does not. I've heard that this is a common issue with ASRock motherboards.
That's why I want to switch to Windows for work.
So my idea was to create a Docker image with everything I need for work, such as yarn, make, and a lot of other stuff, and run it on Windows to get Linux functionality. You get the idea.
I know that Docker is designed to do only one thing per image, but I gave this a shot.
But there are constantly problems. For example, right now I'm trying to install nvm in my image, but after building the image, the command 'nvm' is not found in bash. It is a known problem: running source ~/.profile makes the command available in the console, but running it while building the image doesn't affect the shell you get when you run the image, so you need to do that manually every time you use the image.
People suggest putting this in .bashrc, which gives a segmentation fault.
And that's just my problem for today; I've encountered many more, as I've been trying to create this image for a couple of days already.
So my question is basically this: is it possible to create a fully operational OS in one Docker image, or could one connect multiple images to create an OS, or do I just need to stop and use a virtual machine like a sensible person?
I would recommend using a virtual machine for your use case. Since you will be using this for work, modifying settings, and installing new software, these operations are better suited to a virtual machine, where it is expected that you change the state or configuration.
In contrast, Docker containers are generally meant to be immutable: the running instance of the image should not be altered or configured, so that others can pull down the image and it works "out of the box." Additionally, most Docker images available on Docker Hub are made to be lean, with only one or two use cases in mind and nothing extra (for security and image size), so I expect you would frequently run into problems trying to set up a Docker image that you work inside of. Lastly, since this is not done frequently, there is less help available online, and Docker-level virtualization does not really suit your situation.
Problem statement first: How does one properly set up TensorFlow for running on a DSVM using a remote Docker environment? Can this be done in aml_config/*.runconfig?
I receive the following message and I would like to be able to utilize the increased speeds of the extended FMA operations.
tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Background: I use a local Docker environment managed through Azure ML Workbench for initial testing and code validation, so that I'm not running an expensive DSVM constantly. Once my code is to my liking, I then run it on a remote Docker instance on an Azure DSVM.
I want a consistent conda environment across my compute environments, so this works out extremely well. However, I cannot figure out how to control the TensorFlow build to optimize for the hardware at hand (i.e. my local Docker on macOS vs. the remote Docker on an Ubuntu DSVM).
The notification is indicating that you could compile TensorFlow from source to take advantage of these CPU instructions, so it runs faster; you can also safely ignore it. If you choose to compile, though, you can build and install TensorFlow from source, and then use the native VM execution mode (vs. using Docker) to run it from Azure Machine Learning.
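If all you want is to quiet the message while testing in your local Docker container, TensorFlow's TF_CPP_MIN_LOG_LEVEL environment variable filters it (the notice is logged at INFO level). A minimal sketch, not anything AML-specific:

```python
# Silence INFO-level startup messages (including the cpu_feature_guard notice)
# before TensorFlow is imported. This only hides the message; it does not
# enable AVX2/FMA - for that you still need a source build.
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"  # 1 = filter INFO, 2 = also WARNING, 3 = also ERROR

import tensorflow as tf
print(tf.__version__)
```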
Hope this helps,
Serina
There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers. Every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies for making a sandboxed environment. For example, RunKit seems to have a lightweight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another.
The next step was to get CRIU working well with Docker.
Part of that setup is being open-sourced, as mentioned in this Hacker News thread.
It uses Linux containers, currently powered by Docker.
We build firmware using Windows CE (6 and 7) on a Windows XP system. We often install the QFEs (CE patches/updates) from Microsoft as they are released. When we have to go back to a certain release to develop a patch, it can be a real pain because we will need to build a system with the same patch level that existed on the system at the time that the product was released. Is there any easy way to maintain a QFE history that can easily be reverted at any given time? Something along the lines of snapshotting the system state as it pertains to the CE install/QFEs at each release? We don't want to use virtual machine snapshots or anything that controls the state of anything outside of the Windows CE components for this. It is a pretty specific requirement, so I am guessing no, but perhaps someone has tackled this exact problem.
I understand that you're saying you don't want to use VMs, though I'm not entirely sure why. I'd recommend at least thinking about it.
Back when I controlled builds for multiple platforms across multiple OS versions, I used virtual machines for this. Each VM was a bare snapshot of a PC with the tools and SDKs installed. A build script would then pull the source for each BSP and build it nightly. The key is to maintain and archive "clean" VMs (without source) and just pitch the changes after doing builds. It was way faster and way cleaner than trying to keep the WINCEROOT for each QFE level in source control and pulling that - you have to reset the machine to zero in that case anyway to be confident there's no cross-contamination between levels.
Approach (A)
In my experience, a small team has a dedicated server with all development tools (e.g. compiler, debugger, editor, etc.) installed on it. Testing is done on a dedicated per-developer machine.
Approach (B)
At my new place there's a team using a different approach. Each developer has a dedicated PC which is used as both a development and a testing server. For testing, an in-house platform is installed on the PC so the application can run on top of it. The platform executes several modules in kernel space and several processes in user space.
Problem
Now there are 2 additional small teams (~6 developers in total) joining to work on exactly the same OS and development environment. These teams don't use the platform mentioned above and can run the application on plain Linux, so they have no need for a dedicated testing machine. We'd like to adopt approach (A) for all 3 teams, but the server must be stable, and installing the in-house platform described above on it is highly undesirable.
What would you advise?
What is the practice for the development environment at your place - one server per team(s), or a dedicated PC/server per developer?
Thanks
Dima
We've started developing on VMs that run on the individual developers' computers, with a common subversion repository.
Benefits:
Developers work on multiple projects simultaneously; one VM per project.
It's easy to create a snapshot (or simply to copy the VM) at any time, particularly before those "what happens if I try something clever" moments. A few clicks will restore the VM to its previous (working) state. For you, this means you needn't worry about kernel-space bugs "blowing up" a machine.
Similarly, it's trivial to duplicate one developer's environment so, for example, a temporary consultant can help troubleshoot. Best-practices warning: It's tempting to simply copy the VM each time you need a new development machine. Be sure you can reproduce the environment from your repository!
It doesn't really matter where the VMs run, so you can host them either locally or on a common server; the developers can still either collaborate or work independently.
Good luck — and enjoy the luxury of 6 additional developers!