Why do I encounter an "INVALID_PARAMETER_VALUE" error when opening the "Models" tab in the MLflow UI? - mlflow

I installed MLflow via pip and opened the UI in the browser from the terminal. The Experiments tab displays normally; however, when I switch to the Models tab, the app crashes with the following error:
INVALID_PARAMETER_VALUE: Model registry functionality is unavailable;
got unsupported URI './mlruns' for model registry data storage.
Supported URI schemes are: ['postgresql', 'mysql', 'sqlite', 'mssql'].
See https://www.mlflow.org/docs/latest/tracking.html#storage for how to run
an MLflow server against one of the supported backend storage locations.
I would like to log model data locally without connecting to external servers or databases. Thanks in advance for any help!

According to the MLflow documentation:
If running your own MLflow server, you must use a database-backed backend store in order to access the model registry via the UI or API.
So you should use --backend-store-uri to configure a database-backed store. MLflow supports the MySQL, MS SQL, SQLite, and PostgreSQL database dialects. You can read more here.
For example, --backend-store-uri sqlite:///mlflow.db would use a local SQLite database.
mlflow ui --backend-store-uri sqlite:///mlflow.db
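If you also want artifacts kept in a predictable local folder, the same command accepts --default-artifact-root. A minimal sketch, where mlflow.db and ./artifacts are example paths of your choosing:

```shell
# Back the tracking store (runs + model registry) with a local SQLite
# file, and keep artifacts on local disk. No external server needed.
mlflow ui \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./artifacts
```

Note that runs previously logged to the default ./mlruns file store will not automatically appear in the SQLite-backed UI; new runs must be logged against sqlite:///mlflow.db for registered models to show up in the Models tab.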

Related

Is it ok for a NodeJs API to have both MongoDb and MySQL(with prisma), but for different purposes?

Let me explain: I'm building a web application that visualizes data, and the dataset I'm working with is uploaded to a MongoDB cluster. I'm also planning to build a login system using Prisma and MySQL. Should I build a new API for the whole user part, or work with both databases in the same project?

Connecting to external database using React, Nodejs, Express, and PostgreSQL

I created a course management system using PHP and HTML, with PostgreSQL to store the data. I use my school's server and database: I can put my files there with WinSCP and connect to the database over SSH with PuTTY, where I can run queries. Everything works fine.
I am trying to do the exact same project using React, Node.js, Express, and PostgreSQL. All the tutorials I see online show how to connect to a local database. I want to eventually publish this project and have it communicate with the same database as my PHP project. Is that possible? So far I have been able to build the client side of the application (React), but have not been able to communicate with the database (server side). Any tips?

Migrating MongoDB from localhost to server

I completed development of a web application for my university, doing everything on localhost. Now I want to migrate the backend and the MongoDB database to the stakeholder's (my university's) server environment. I have tried to find ways to do so but haven't found a solution. Please help me resolve this; any links, working examples, or videos would help a lot.
Tech stack used for backend:
1. NodeJS
2. MongoDB
3. Google Maps client API
I would suggest using a database GUI like Studio 3T, Robo 3T, or any other that has a backup, export, or import feature.
If you have access to the machine and can install MongoDB tools, there are also terminal tools for this:
Mongo Import
https://docs.mongodb.com/manual/reference/program/mongoimport/
Mongo Export
https://docs.mongodb.com/manual/reference/program/mongoexport/#bin.mongoexport
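As a sketch, a per-collection migration with those tools might look like this (host, database, and collection names below are placeholders; mongodump/mongorestore are the analogous tools for dumping whole databases in BSON form):

```shell
# Export one collection from the local dev database to JSON.
mongoexport --uri="mongodb://localhost:27017/myapp" \
  --collection=users --out=users.json

# Import it into the university server's MongoDB instance
# (repeat per collection, or use mongodump/mongorestore for everything).
mongoimport --uri="mongodb://dbserver.example.edu:27017/myapp" \
  --collection=users --file=users.json
```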

Thingsboard: change of database

I have installed ThingsBoard on Linux. Initially it was using the Cassandra database, but I have now changed it to PostgreSQL. The issue is that neither ThingsBoard nor PostgreSQL is running. The only error in the PostgreSQL log file is "Incomplete startup Packet", and the ThingsBoard log file has an "all hosts tried query failed(tried: /127.0.0.1:9042)" error.
I have stopped the Cassandra service and also configured the thingsboard.yml file to use the PostgreSQL database.
How can I fix this issue?
Without knowing more log details, I suggest installing a fresh ThingsBoard + PostgreSQL instance and migrating the Cassandra data to the new instance using ThingsBoard's REST API feature; this way you avoid corrupting PostgreSQL.
The steps are:
Install a new instance of ThingsBoard and PostgreSQL
Retrieve data via REST API from old instance
Send data via MQTT or other supported protocols to new instance
You can find a script which does this automatically in this repo.
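A rough sketch of steps 2 and 3 using curl and mosquitto_pub. Host names, the device ID, and the credentials are placeholders; the login and telemetry endpoints and the v1/devices/me/telemetry topic come from ThingsBoard's documented REST and MQTT device APIs:

```shell
# 1. Log in to the old instance and extract the JWT token.
TOKEN=$(curl -s -X POST http://old-instance:8080/api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"tenant@thingsboard.org","password":"tenant"}' \
  | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')

# 2. Retrieve the latest telemetry for one device (DEVICE_ID is a placeholder).
curl -s -H "X-Authorization: Bearer $TOKEN" \
  "http://old-instance:8080/api/plugins/telemetry/DEVICE/$DEVICE_ID/values/timeseries"

# 3. Re-publish a reading to the new instance over MQTT, authenticating
#    with the device's access token on the new instance as the username.
mosquitto_pub -h new-instance -p 1883 -u "$ACCESS_TOKEN" \
  -t 'v1/devices/me/telemetry' -m '{"temperature":21.5}'
```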
I am not sure, but in my experience you should check the following:
Install PostgreSQL on your server.
Create a database (e.g. thingsboard) inside PostgreSQL.
Configure thingsboard.yml to use PostgreSQL.
Run the installation script again:
https://thingsboard.io/docs/user-guide/install/linux/#memory-update-for-slow-machines-1gb-of-ram
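The configuration step above can also be done through environment variables rather than editing thingsboard.yml directly; a sketch, assuming a local PostgreSQL where the database name and credentials are examples you would replace:

```shell
# Switch both the entity and timeseries stores to SQL (PostgreSQL).
export DATABASE_ENTITIES_TYPE=sql
export DATABASE_TS_TYPE=sql
export SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/thingsboard
export SPRING_DATASOURCE_USERNAME=postgres
export SPRING_DATASOURCE_PASSWORD=postgres

# Re-run the installation script so the schema is created in PostgreSQL.
sudo /usr/share/thingsboard/bin/install/install.sh
```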

Suggestions for how to host a public TensorBoard to present an ML prototype?

My current TensorBoard is only available locally. I understand I can SSH into my ML server and access TensorBoard remotely. I am looking for a way to host the TensorBoard files on my web server without a dependency on the ML server, so that I can present the results of my ML prototype to anyone with access to a web URL. Ideas?
You can just install TensorFlow (which also installs TensorBoard) on your web server and then copy the TensorBoard log files over from your ML server, either manually or using rsync. This does not expose your ML server to the outside at all.
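A minimal sketch of that, assuming the event files live in /home/me/runs on the ML server and the web server serves them on port 6006 (hosts and paths are placeholders):

```shell
# Pull the event files from the ML server onto the web server.
rsync -avz ml-server:/home/me/runs/ /srv/tb_logs/

# Serve them; binding to 0.0.0.0 makes the URL reachable from outside.
tensorboard --logdir /srv/tb_logs --host 0.0.0.0 --port 6006
```

If the results should not be fully public, put a reverse proxy with authentication in front of that port rather than exposing TensorBoard directly.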