How to load a custom spaCy NLP model on a server - nlp

How do I load a custom spaCy NER model on an EC2 server? I am getting a "not able to load model" error on the server even though I downloaded the model there.
How do I get my custom model to load on the server?
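For reference, a spaCy pipeline saved to disk is loaded from its directory path, and "not able to load model" errors on a server are often just a path problem. Below is a minimal sketch; the helper function and the model path are hypothetical (not from the original post), and it assumes the model was saved with `nlp.to_disk()`:

```python
import os

# Hypothetical helper: sanity-check a model directory before calling
# spacy.load(), since a load failure on the server often means the path
# does not point at the saved pipeline directory.
def looks_like_spacy_model(path):
    # A pipeline saved with nlp.to_disk() contains a meta.json file
    # at its top level (spaCy v3 also writes a config.cfg).
    return os.path.isdir(path) and os.path.isfile(os.path.join(path, "meta.json"))

# On the EC2 server (the path below is a placeholder), the usual pattern is:
#   import spacy
#   model_dir = "/home/ec2-user/models/my_ner_model"
#   if looks_like_spacy_model(model_dir):
#       nlp = spacy.load(model_dir)
```

Checking for `meta.json` first makes the failure mode explicit: if the check fails, the model files were not downloaded where the loading code expects them.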

Related

Failed to load image in Amazon Sagemaker Ground Truth

I am trying to label a custom dataset that I have, but after labelling a certain number of images, Amazon SageMaker displays an error stating 'Failed to load image, refresh if this persists'. I tried refreshing the page, but the error persists.

Azure Labelling tool throwing error 401 on running layout OCR

I have been trying to train a custom model for a document with some fixed-layout text and information. I have successfully created the project, the connection, and the container, and got the URL for the blob container. When I open the labelling tool to mark text for recognition, it throws error code 401, and I am not sure what's wrong here.
Please note: I have other projects running with different layouts and documents, and I am able to train and use the model there.
What are the likely causes of this error under the same account but with new storage, a new resource group, a different endpoint, and a different API?

I have a question about uploading a trained model to production on GCP

I am trying to follow Google's directions to upload a trained model to the cloud. It is just a portfolio builder, and all I'm doing is MNIST classification. I am getting stuck right here:
Web Link
My question is, how is a model created with a gRPC request as per the above link? I understand the basics of HTTP, but I'm obviously missing something about how this works. Is the project name unique, and is that how the model is created in the string:
POST https://ml.googleapis.com/v1/{parent=projects/*}/models
Thanks so much for your help.
I've gone through the HTTP and gRPC documentation, and I still don't understand how this works.
Here is the request they want me to send:
POST https://ml.googleapis.com/v1/{parent=projects/*}/models
The URI is using gRPC transcoding to provide both REST and gRPC endpoints.
If you're just using REST, this would be:
POST https://ml.googleapis.com/v1/projects/<YOUR PROJECT NAME>/models
where the request body is an instance of a Model.
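To make the transcoding concrete, here is a small sketch of building and sending that REST request. The project ID, model name, and token handling below are placeholders, not values from the original post:

```python
# Build the REST form of the transcoded gRPC method for creating a model.
# The path template {parent=projects/*} in
#   POST https://ml.googleapis.com/v1/{parent=projects/*}/models
# expands to projects/<PROJECT_ID>, so the project ID in the URL routes
# the request to your project; the model itself is described in the body.
def models_create_url(project_id):
    return "https://ml.googleapis.com/v1/projects/%s/models" % project_id

# Sending the request (token is a placeholder; in practice it would come
# from `gcloud auth print-access-token` or a service account):
#   import requests
#   resp = requests.post(
#       models_create_url("my-mnist-project"),
#       headers={"Authorization": "Bearer " + token},
#       json={"name": "mnist_model"},  # request body: a Model resource
#   )
```

In other words, the gRPC method and the REST endpoint are the same operation; transcoding just maps the gRPC request fields onto the URL path and the JSON body.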

Backbone.js & Node.js: Looking for a best practice for application settings

I'm building a web application based on backbone.js as the frontend and node.js as the backend.
I am looking for best practices on loading and saving the application settings/configuration in a backbone/node environment. The idea is to allow an admin user to view and edit the settings; of course, these settings will silently be loaded whenever a user accesses the application through the web.
I was thinking of creating a backbone model called 'settings', which will be loaded once the application starts, and then adding a settings view where admins can view and edit at will. Not all the settings will be pre-loaded, only when the admin tries to access them (e.g. settings that are relevant to the backend will only be shown on the admin edit page, and not pre-loaded on application start).
Note: These settings will be saved in a MongoDB document.
How do you guys manage your web application settings/configurations?
Any data that is going to be accessible through the client and retrieved from your database should be represented by a backbone model. Your intuition of creating a 'settings' backbone model will allow you to display the data retrieved from your MongoDB backend. Then, when the settings are updated in your view, you can save the backbone model, which will in turn update the settings in your db.
Since you are dealing with settings/configurations that can affect your application, you just want to make sure that you properly authorize anyone trying to access that specific page.

Using nodejs as a webservice aside to website webserver

On my local computer, I have a program that scrapes data from a website periodically, once a day. The scraping process transforms the data from the HTML pages into a list of objects (this was developed in C#).
I want to upload the data to a database on a cloud hosting service (such as OpenShift.com)
I also want to create a website whose server side will use the data from this cloud database.
The server side will be written in nodejs.
I will create a nodejs server for the website, that will expose an interface for the website to the server.
Should I create another nodejs webserver that will act as a webservice interface for the database operations?
My local scraping program needs an interface to some server that will enable it to add the scraped list of objects.
The website needs query operations on the data, but I want this query logic to run not directly against the database but through a middleware server.
So the question is more about good practice than technique. How would you design this architecture?
