Flair NLP package on cloud TPU - nlp

Is it possible to use the "flair" NLP library on a Google Cloud TPU? I am trying to use the Google Colab TPU runtime and getting some errors.

I found that if I change the version to 1.5, this error goes away; it was caused by incorrect typecasting in the code. However, it still seems to get stuck, and I am not sure why, whereas on a GPU it works fine.
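When debugging this kind of hang, it can help to first confirm which accelerator the runtime actually exposes to PyTorch. A minimal sketch, assuming a PyTorch/torch_xla stack as on Colab TPU runtimes (`pick_device` is a hypothetical helper, not part of flair):

```python
def pick_device():
    """Return the best available device: TPU, then CUDA, then CPU.

    torch_xla is only importable on TPU runtimes, so an ImportError
    simply means no TPU support is present in this environment.
    """
    try:
        import torch_xla.core.xla_model as xm  # PyTorch/XLA bridge for TPUs
        return xm.xla_device()
    except ImportError:
        pass
    try:
        import torch
        return torch.device("cuda" if torch.cuda.is_available() else "cpu")
    except ImportError:
        return "cpu"  # no PyTorch at all; fall back to a plain label

print(pick_device())
```

flair keeps its active device in `flair.device`, so you could point it at the result before building models; whether flair's training loop actually runs on an XLA device is exactly what this question is probing, so treat TPU support as unverified.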

Related

What is the cause of LIBRARY_MANAGEMENT_FAILED while trying to run a notebook with a custom library on Synapse?

Today, when we tried running our notebooks defined in Synapse, we constantly received the error 'LIBRARY_MANAGEMENT_FAILED'. We are using the approach from https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-manage-python-packages#storage-account to manage custom libraries, and it was working fine up until this point. Additionally, we tried a separate method of providing the Spark pool with a custom library via workspace packages, but after 10 minutes of loading the custom package, it timed out with a failure.
When we remove the python folder completely from storage, the Spark pools run notebooks normally.
Yesterday everything was working properly. The problem also cannot be in the custom library itself, because it fails even with an empty python folder.
There were issues on Microsoft's side, which were resolved, and it started working the next day.

Can't Import Bert_Text after installing it successfully

BERT is a very powerful model for text classification, but implementing BERT requires much more code than most other models. bert-text is a PyPI package that gives developers a ready-to-use solution. I installed it properly, but when I try to import it, it throws ModuleNotFoundError: No module named 'bert_text'. I have written the name bert_text correctly.
I have tried it on Kaggle, Colab, and my local machine, but the error is the same.
Hey, as this is a refactor made by Yan Sun, the issue is already pending; you can go to this link and subscribe for an update when the developers provide a solution: https://github.com/SunYanCN/bert-text/issues/1
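When a package installs successfully but the import fails, it is worth checking whether the import name is actually visible to the interpreter; PyPI distribution names (bert-text) and import names (bert_text) can differ, and a wheel can ship without the module it advertises. A small diagnostic sketch:

```python
import importlib.util

def module_installed(name: str) -> bool:
    """Return True if `name` can be found by the current interpreter."""
    return importlib.util.find_spec(name) is not None

# A False here right after a successful `pip install bert-text` suggests
# the installed distribution does not actually ship a bert_text module.
print("bert_text importable:", module_installed("bert_text"))
```

Running this in the same kernel that raised the ModuleNotFoundError distinguishes a broken package from a mismatched environment (e.g. pip installing into a different Python than the notebook uses).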

better alternatives to sharp or gm for resizing images on aws lambda with node.js

I have been trying to create a working image resizer in Lambda, following the various examples and code I see out there for doing it in Node.js.
I tried gm with the ImageMagick tools, but there seems to be a built-in buffer limit that causes it to fail in the Lambda environment on large images.
I tried using sharp, but it keeps running into errors looking for libvips, the documentation is a cluster##$^, and I can't find a succinct "do this to get it to work" instruction anywhere.
So I'm yet again looking for some kind of tool that can run in Node.js in the Lambda environment, resize an image from an S3 download stream, and re-upload the result to another S3 bucket. I also need to get the image's pixel dimensions while resizing it.
It needs to handle large images without puking, and not require a doctorate in Amazon Linux to install on Lambda. I've already wasted too much time on this aspect of the project.
Any help or suggestions are greatly appreciated.
Alas, after much intermittent banging of my face on the keyboard, I eventually found a magic combination: using the docker run 'npm install' syntax from the sharp installation page, combined with targeting Node.js v10.x for that particular script, got it working on my third attempt. (I have no idea what was different about the first attempts, but I'm still figuring out how serverless deploy works for combined functions as well; too much 'new stuff' in one project, sigh.)
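The fix above refers to sharp's documented approach of running npm install inside an Amazon Linux container, so that the native libvips binding matches the Lambda runtime instead of your local OS. A sketch of that command from the Node.js 10.x runtime era (the image tag is an assumption; check sharp's current installation page, as the recommended image changes between releases):

```shell
# Build node_modules inside a container that mirrors the Lambda
# Node.js 10.x runtime, so sharp's native libvips binary matches.
docker run -v "$PWD":/var/task lambci/lambda:build-nodejs10.x npm install sharp
# Then zip the function together with the resulting node_modules/ folder.
```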

How to solve the Google Colab CUDA malloc failed error?

I am running YOLOv3 custom object detection for my project, but it gives an error when I try to run my custom object detection. I am using Windows 10 with the Chrome browser. I need help solving this error.
Thanks in advance.
This is the command that I am trying to run:
!./darknet detector train "/content/gdrive/My Drive/darknet/obj.data" "/content/darknet/cfg/yolov3.cfg" "/content/gdrive/My Drive/darknet/darknet_53.conv.74"
(Screenshot of the error was attached here.)
The issue was solved when Google Colab assigned me full resources. Now I am training my own YOLO custom object detector.
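For anyone who hits this without getting a better GPU allocation: CUDA malloc failures in darknet are usually out-of-GPU-memory errors, and the usual mitigation is raising `subdivisions` (or lowering `batch`) in the [net] section of yolov3.cfg, which shrinks the mini-batch chunk held on the GPU at once. Illustrative values, not tuned for any particular dataset:

```
[net]
# `batch` images are processed per iteration, split into `subdivisions`
# chunks; only batch/subdivisions images sit on the GPU at a time.
batch=64
subdivisions=32
```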

Using Mongoose with IntelliJ IDEA Causes Unresolved Function or Method

I just downloaded the 30-day trial of IntelliJ IDEA Ultimate Edition and I'm having an issue with the Node.js project I imported.
I am using Mongoose, and I use the find and findById methods of my model classes fairly often. Since these methods are provided by Mongoose rather than defined explicitly in my model classes, IntelliJ does not seem to like them: everywhere I use them, I get the error "Unresolved function or method xxx".
I have read several other users stating they had similar issues with Node and WebStorm, but those all seemed to involve the core Node library, and all of the answers said to use the Node.js Globals library. I have imported that library and tried downloading both mongodb-DefinitelyTyped and mongoose-DefinitelyTyped from the Libraries screen, but I still cannot get the warnings to go away.
I decided it was probably faster to contact JetBrains support, since I didn't seem to be getting any quick solutions here. It turns out this issue is a bug in IntelliJ IDEA 14: they had me try the IntelliJ IDEA 15 EAP, and it worked fine.
So the solution is to use the EAP, or just wait for 15, if anybody else runs into this issue.
