How to add an extra library in MLflow that cannot be added through conda.yaml - mlflow

I'm building a data science project using MLflow. I can install most libraries through conda.yaml, but one library lives in an Azure Artifacts feed and cannot be packaged directly. Is there any way to specify extra libraries for MLflow to use while running, whether we run the project on Databricks or locally? It would be great if you could shed some light.
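One option worth trying: the pip subsection of conda.yaml accepts pip command-line flags as list entries, so you can point pip at the Azure Artifacts feed directly. A minimal sketch, in which the organization, feed name, and package name are all hypothetical placeholders:

```yaml
name: my_mlflow_env
channels:
  - defaults
dependencies:
  - python=3.8
  - pip
  - pip:
      # Hypothetical Azure Artifacts feed URL; replace with your own.
      # Private feeds usually require credentials, e.g. a PAT embedded
      # in the URL or a keyring helper installed on the build machine.
      - --extra-index-url https://pkgs.dev.azure.com/myorg/_packaging/myfeed/pypi/simple/
      - my-private-library==1.0.0
```

On Databricks specifically, an alternative that is sometimes simpler is to install the wheel onto the cluster itself (for example as a cluster library) so it is already present when the MLflow run starts.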

Related

Should I use Docker to deploy a library of functions?

I see that Docker is intended to deploy applications, but what about libraries? For instance, I have a library called RAILWAY that is a set of headers, binary code libraries, and command-line tools.
I was thinking the output of the railway CI/CD pipeline could be a Docker image that is pushed to a registry. Any application that wants to use railway must be built using Docker: it would just put FROM railway:latest and COPY --from=railway ... in its Dockerfile. The application can copy whatever it needs from the library image into its own image.
Is this a normal use case?
I could use a Debian package for railway, but Azure Artifacts does not support Debian packages (only NuGet and npm). And Docker is just so damn easy!
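For concreteness, the pattern described above would look roughly like this; the image name and copied paths are hypothetical placeholders for the real library layout:

```dockerfile
# Multi-stage sketch of the proposed pattern (not an endorsement of it).
FROM railway:latest AS railway

FROM debian:bookworm-slim AS app
# Copy only what the application needs from the library image.
COPY --from=railway /usr/local/include/railway /usr/local/include/railway
COPY --from=railway /usr/local/lib/librailway.a /usr/local/lib/
# ... build the application against the copied headers and libraries ...
```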
Most languages have their own systems for distributing and managing dependencies (like NuGet, which you mentioned), and you should use those instead.
The problem with your suggestion is that it's not as simple as "applications use libraries"; it's rather "applications use libraries which use libraries which use libraries which use...".
E.g. if your app wants to use libraries A and B, but library A also uses library B itself, how do you handle that in your setup? Is there a binary for B in A's Docker image that gets copied over? Does it overwrite the binary for B that you copied earlier? What if they're different versions with different methods in them?

How to build an external class library for use with a solution in a CI Pipeline in Azure DevOps?

I've requested to my Team Lead that we start integrating a CI/CD pipeline into most, if not all, of our projects. Our newest project relies heavily on our own external class library that is referenced in the solution; it is under "Dependencies" as a project reference.
The project runs fine when I build it in my machine using Visual Studio 2019, and before we needed to integrate an external library, it would build and release fine using our Azure DevOps pipelines.
However, with the addition of an external class library, when I try to run a build through Azure DevOps, I get the following error:
The project file ....csproj was not found.
I fully understand why it can't find it - because I need to pull in the external class library and build that first! There doesn't seem to be a lot of online material (not that I could find, anyway!) that describes solutions to this other than "use NuGet"; unfortunately, my Team Lead requires that we not go down that route - which has led to a long couple of days!
With this in mind, I can't find another way to do this in Azure DevOps. I have looked into some sort of PowerShell command, but to no avail thus far.
Has anyone run into this issue before with external class libraries in DevOps and can give me advice on the best way to approach it?
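For reference, if NuGet really is off the table, one workaround is a multi-repository checkout that builds the library before the dependent solution. A sketch of azure-pipelines.yml, assuming the library lives in a separate Azure Repos repository; all repository and path names here are hypothetical:

```yaml
resources:
  repositories:
    # Hypothetical second repository holding the external class library.
    - repository: shared
      type: git
      name: MyTeamProject/SharedLibrary

steps:
  - checkout: self
  - checkout: shared
  # With multiple checkouts, each repo lands in a folder named after it
  # under $(Build.SourcesDirectory).
  - task: DotNetCoreCLI@2
    displayName: Build the external library first
    inputs:
      command: build
      projects: 'SharedLibrary/**/*.csproj'
  - task: DotNetCoreCLI@2
    displayName: Build the main solution
    inputs:
      command: build
      projects: 'MainRepo/**/*.sln'   # 'MainRepo' = name of the self repository
```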
Generally speaking, in 99.99% of cases keeping a direct project reference like this is not a good idea. You can end up with really unmaintainable CI/CD logic and/or DLL version mismatches during deployments. I am actually an architect on a project where I fixed exactly that issue by migrating all dependencies to a NuGet server.
Azure Artifacts
You mentioned that you are using Azure DevOps as your main CI/CD tool, so this is a great opportunity to introduce Azure Artifacts, the internal NuGet server that is part of Azure DevOps. It is free for the first 2 GB; see the pricing details.
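Once the library is published to a feed, consuming it from a pipeline is a couple of tasks. A sketch, where 'MyTeamProject/MyFeed' is a hypothetical project-scoped feed:

```yaml
steps:
  # Authenticates the build agent against Azure Artifacts feeds.
  - task: NuGetAuthenticate@1

  - task: DotNetCoreCLI@2
    displayName: Restore from the internal feed
    inputs:
      command: restore
      projects: '**/*.csproj'
      feedsToUse: select
      vstsFeed: 'MyTeamProject/MyFeed'   # '<project>/<feed>' for project-scoped feeds

  - task: DotNetCoreCLI@2
    displayName: Build
    inputs:
      command: build
      projects: '**/*.csproj'
```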
Alternatives
If for some reason you can't use Azure Artifacts, I recommend some alternatives:
MyGet
ProGet
Your own NuGet server
You can find more information about the alternatives in this article.

Deploy Python app with textract module to Google Cloud Platform

I want to create a Python script that will parse 40,000 PDF files (text and images). Since I saw that there is no easy way to check whether a page contains images, I think I should use the textract module.
Ideally I would deploy to Google App Engine.
My question is: for textract I've also had to install non-Python packages on my system. Can I deploy the script (with a proper requirements.txt file) on Google App Engine without problems, or will I have to use something else?
It is possible to use App Engine, but only with the flexible environment and a custom runtime, which allows you to add non-Python dependencies (and also Python dependencies not installable via pip):
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
See also Building Custom Runtimes.
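A minimal sketch of that setup, assuming a gunicorn-served app in main.py; the system packages shown are common textract dependencies for PDF parsing and OCR, but the exact list depends on which parsers you use:

```yaml
# app.yaml - flexible environment with a custom runtime
runtime: custom
env: flex
```

```dockerfile
# Dockerfile - placed next to app.yaml; the package list is an assumption.
FROM python:3.9-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        poppler-utils tesseract-ocr antiword \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The flexible environment routes requests to port 8080.
CMD ["gunicorn", "-b", ":8080", "main:app"]
```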

Google App Engine - specify custom build dependencies

My app needs cmake, libx11-dev and libpng-dev to build. I came across this documentation, which leads me to believe that I can list these as dependencies for my app to run on the Google App Engine platform, although I cannot figure out how. I was successfully able to run my app in a Compute Engine instance, although this is costly and, if I'm not mistaken, unnecessary. How do I get the packages listed at the beginning of the question installed beyond session end?
You can only list Node.js dependencies that way. From Declaring and managing dependencies (emphasis mine):
You can use any Linux-compatible Node.js package with App Engine flexible environment, including packages that require native (C) extensions.
You can use dependencies other than Node.js (at least cmake in your list) but only in the flexible environment, via a custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
See also Building Custom Runtimes.
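In this case the custom runtime's Dockerfile would install the packages from the question. A sketch, where the base image and build/start commands are assumptions about the app:

```dockerfile
# Sketch only: installs the build dependencies named in the question.
FROM gcr.io/google-appengine/nodejs
RUN apt-get update && apt-get install -y --no-install-recommends \
        cmake libx11-dev libpng-dev \
    && rm -rf /var/lib/apt/lists/*
COPY . /app/
RUN npm install
# The app itself must listen on port 8080 for App Engine to route traffic.
CMD ["npm", "start"]
```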
Keep in mind that the App Engine flexible environment still runs on Compute Engine instances, so you may not gain much by moving across to it:
Based on Google Compute Engine, the App Engine flexible environment automatically scales your app up and down while balancing the load.
The issue is that if you require cmake, libx11-dev and libpng-dev to build your application, you'll still need an underlying Compute Engine VM to run it. This will be the case even if you consider moving across to Kubernetes Engine.
If you're looking to manage costs for your application, perhaps consider downsizing the VM to a smaller instance, or look into modifying your application to suit the App Engine standard environment or Cloud Functions.

XNA 4.0 load external 3D objects on Windows

I'm working on a project where my XNA 4.0 powered 3D engine needs to load external FBX models supplied by the user at run time, rather than in the default compile-time way.
I understand XNA is built to bundle and process complex resources at compile time to make the runtime smaller, but as I only need to target Windows, I wonder if it is possible to load models with textures externally, and if so, how?
Yes. As @Andrew mentioned, using the built-in content pipeline would require a developer install so that the content pipeline is available. Of course, you can parse the file yourself and pull out the information at runtime to avoid that dependency. There are people out there doing it ... for example, the guys at Sandswept Studios have an API to do this and are willing to discuss commercial agreements (just contact them):
http://thunderfist-podium.blogspot.com/2008/09/fbx-and-xna-part-1-fbx-format-and.html
I found the solution here:
http://create.msdn.com/en-US/education/catalog/sample/winforms_series_2
