I have been trying to train a custom model for a document with fixed-layout text and information. I have successfully created the project, the connection, and the container, and obtained the URL for the blob container. When I open the labeling tool to mark text for recognition, it throws error code 401, and I'm not sure what's wrong here.
Please note: I have other projects running with a different layout and document, and I am able to train the model and use it.
What could be causing this error under the same account but with a new storage account, a new resource group, a different endpoint, and a different API?
I am learning Azure, specifically Data Factory, through a basic exercise.
1 - I should create an input container and an output container (using Azure Storage Gen2).
2 - After that, I created the datasets for input and output.
3 - Finally, I should connect the data flow to my input dataset.
But:
I can test the connections on the datasets to prove that I created them without problems, but I can't test the connection from my data flow to the input dataset.
I tried:
recreating it with different names;
keeping only the needed file in the storage;
using a different input file (I am using a sample similar to the "movies.csv" expected by the exercise).
I created an Azure blob container and uploaded a file.
I created a linked service with the Azure storage account.
I created a dataset with the above linked service, following the procedure below:
I tested the connection, and it connected successfully.
I didn't get any error. The error you mentioned above is related to dynamic content: if you assign any parameters in the dataset, provide the parameter values correctly. I added parameters in the dataset as below.
When I tried to test the dataset, I got an error:
I added values for the parameters in the debug settings.
I tested the connection, and it connected successfully.
Otherwise, add the sink to the data flow and try to debug it; it may work.
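To make the parameter setup above concrete, here is a rough sketch of what a parameterized delimited-text dataset can look like in its JSON code view. This is an illustration only, not the exact dataset from the screenshots; the linked service name, container, and parameter name are placeholder assumptions.

// Illustrative dataset definition (the JSON shown in the ADF "Code" view)
// with a fileName parameter. All names and the container are placeholders.
const inputDatasetDefinition = {
  name: "InputDataset",
  properties: {
    linkedServiceName: {
      referenceName: "AzureBlobStorageLinkedService", // placeholder
      type: "LinkedServiceReference"
    },
    parameters: {
      fileName: { type: "String" } // value supplied in debug settings or by the pipeline
    },
    type: "DelimitedText",
    typeProperties: {
      location: {
        type: "AzureBlobStorageLocation",
        container: "input", // placeholder container
        fileName: { value: "@dataset().fileName", type: "Expression" }
      },
      columnDelimiter: ",",
      firstRowAsHeader: true
    }
  }
};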
I think I found the solution.
When I am working with debug turned on and for some reason I create another data flow, I can't connect to the new datasets.
But if I restart the debug session (turn it off and on again), the connections start working again.
I'm new to the Security Command Center (SCC) and Data Loss Prevention (DLP). I'm trying to create a job in DLP to check if there is any PII in a BigQuery table, but I'm getting the following error upon creation:
Request violates constraint constraints/gcp.resourceLocations on the project resource.
Learn more https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations.
At the organization level, there's an organization policy (inherited by my project) that allows resources to be created in Europe only, due to GDPR. I suspect that DLP runs in some other region (maybe the US) and that's why I'm getting this error.
I can't seem to choose where the job runs in the options while creating it, and I can't find anything about this in the documentation. Any idea why I'm getting this error and how to fix it?
The answer is copied from here.
Your organization policy is not allowing DLP to run the job; at the moment the DLP API is blocked by the "constraints/gcp.resourceLocations" constraint and there is no workaround. However, there is a feature request to add the possibility of setting a specific location rather than using "global", which is what is in fact causing this issue.
I've been working with Azure Search + Azure Blob Storage for a while, and I'm having trouble indexing the incremental changes for newly uploaded files.
How can I refresh the index after uploading a new file into my blob container? These are my steps after uploading the file (I'm using the REST service to perform these actions): I'm using Microsoft Azure Storage Explorer [link].
Through this app I uploaded my new file to a folder created earlier. After that, I used the HTTP REST API to perform a 'Run' indexer command, as you can see in this [link].
The indexer shows me that my new file was successfully added, but when I search, the content of this new file is not found.
Does anybody know how to add this new file to the index, and how to find this new file by searching for its content?
I'm following the Microsoft tutorials, but I couldn't find a solution for this issue.
Thanks, guys!
Assuming everything is set up correctly, you don't need to do anything special - new blobs will be picked up and indexed the next time the indexer runs according to its schedule, or when you run the indexer on demand.
However, when you run the indexer on demand, successful completion of the Run Indexer API call means that the request to run the indexer has been submitted; it does not mean that the indexer has finished running. To determine when the indexer has actually finished running (and to observe any errors), you should use the Get Indexer Status API.
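For example, a minimal sketch of doing this from Node.js (assuming Node 18+ with the built-in fetch; the service name, indexer name, admin key, and api-version below are placeholders):

// Run an indexer on demand, then check its status. All values in angle
// brackets and the api-version are placeholders for your own service.
const service = "<service-name>";
const indexer = "<indexer-name>";
const apiKey = "<admin-api-key>";
const apiVersion = "2020-06-30";

async function runAndCheckIndexer() {
  // Kick off the indexer run; a successful response only means the run was accepted.
  await fetch(
    `https://${service}.search.windows.net/indexers/${indexer}/run?api-version=${apiVersion}`,
    { method: "POST", headers: { "api-key": apiKey } }
  );

  // Poll the status endpoint to see whether the last run finished and
  // whether it reported any per-document errors.
  const res = await fetch(
    `https://${service}.search.windows.net/indexers/${indexer}/status?api-version=${apiVersion}`,
    { headers: { "api-key": apiKey } }
  );
  const status = await res.json();
  console.log(status.lastResult && status.lastResult.status);
  console.log(status.lastResult && status.lastResult.errors);
}

runAndCheckIndexer().catch(console.error);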
If you still have questions, please let us know your service name and indexer name and we can take a closer look at the telemetry.
I'll try to describe how I figured out this issue.
Firstly, I've created a DataSource through this command:
POST https://[service name].search.windows.net/datasources?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-data-source.
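A rough sketch of the kind of body the create-data-source call expects (the data source name, connection string, container, and folder below are placeholder assumptions, not necessarily what you need):

// Illustrative body for the create-data-source request above.
// All names and the connection string are placeholders.
const dataSourceDefinition = {
  name: "blob-datasource",
  type: "azureblob",
  credentials: {
    connectionString: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;"
  },
  container: {
    name: "my-container", // the blob container to index
    query: "my-folder"    // optional: index only this virtual folder
  }
};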
Secondly, I created the Index:
POST https://[servicename].search.windows.net/indexes?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-index
Finally, I created the indexer. The problem happened at this step, because this is where all the configuration is set.
POST https://[service name].search.windows.net/indexers?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-indexer
After all of this is done, the index starts indexing all content automatically (once there is content in blob storage).
The crucial part comes now: while the indexer is trying to extract the text from your files, issues can occur when the file type is not indexable. For example, there are two properties you must pay attention to: excluded extensions and indexed extensions.
If you don't specify the types properly, the indexer throws an exception. Then the feedback message (which in my opinion is not good and is misleading) says that to avoid this error you should set the indexer to '"dataToExtract" : "storageMetadata"'.
This setting means you are indexing only the metadata and no longer the content of your files, so you cannot search and retrieve by content.
After that, the same message at the bottom says that to avoid this issue you should set two properties (which solved the problem):
"failOnUnprocessableDocument" : false,"failOnUnsupportedContentType" : false
Now everything is working properly. I appreciate your help @Eugene Shvets, and I hope this is useful for someone else.
I have followed all the steps shown in the MSDN documentation to copy a file from FTP.
So far, the datasets are created, the linked services are created, and the pipeline is created. The diagram for the pipeline shows the logical flow. However, when I schedule the ADF pipeline to do the work for me, it fails. The input dataset passes, but when executing the output dataset, I am presented with the following error.
Copy activity encountered a user error at Source side:
ErrorCode=UserErrorFileNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot
find the file specified. Folder path: 'Test/', File filter:
'Testfile.text'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The
remote server returned an error: (500) Syntax error, command
unrecognized.,Source=System,'.
I can physically navigate to the folder and see the file for myself, but when using ADF I am having issues. The firewall is set to allow the connection, yet I am still getting this error. As there is very minimal logging, I am unable to nail down the issue. Could someone help me out here?
PS: Cross Posted at MSDN
I encountered the same error and was able to solve it by adding "enableSsl": true and "enableServerCertificateValidation": true.
I'm trying to delete all the classifiers in my instance of the IBM Watson Visual Recognition service so that I can create only the new classifiers used by my app.
To do this, I wrote Node.js code that lists all the classifiers and sends a delete request for each one.
When I executed it (hundreds of delete requests in parallel), I received a 429 error - too many requests. After that, all of my delete requests (even individual ones) received a 404 error - Cannot delete classifier.
My questions are:
Is there a better way to delete all classifiers than doing it one by one?
Why am I unable to delete individual classifiers now? Is there some policy that blocks me after a 429 too many requests error?
This is the 429 error that I received for the parallel delete requests:
code: 429,
error: '<HTML><BODY><span class=\'networkMessage\'><h2>Wow, is it HOT in here!</h2>My CPU cores are practically burning up thanks to all the great questions from wonderful humans like you.<p>Popularity has its costs however. Right now I just can\'t quite keep up with everything. So I ask your patience while my human subsystems analyze this load spike to get me more Power.<p>I particularly want to <b>thank you</b> for bringing your questions. PLEASE COME BACK - soon and frequently! Not only do I learn from your usage, but my humans learn from workload spikes (like this one) how to better adjust capacity with Power and Elastic Storage.<p>So again, thank you for helping to make me smarter and better. I\'m still young and growing, but with your patience and help, I hope to make you proud someday!</p>Your buddy,<br>Watson<br><span class=\'watsonIcon\'></span>p.s. Please share your experiences in the Watson C
Edit:
I noticed that the error apparently happens only when I try to delete a "default" classifier provided by the service (like "Graphics", "Color", "Black_and_white", etc.). The deletion works fine when I try to delete a classifier that I created with my own pictures.
Is it a characteristic of the service that I'm not allowed to delete the default classifiers? If so, is there any special reason for that? The app I'm building doesn't need all those built-in classifiers, so it's useless to have them all.
I understand that I can pass the list of classifiers I want to use when I request a new classification, but in that case I'll need to keep a separate list of my classifiers and will not be able to request a more generic classification without getting the default classifiers in the result.
I'm using the Node.js module "watson-developer-cloud": "^1.3.1" - I'm not sure which API version it uses internally. I just noticed there is a newer version available. I'll update it and report back here if there is any difference.
This is the JS function that I'm using to delete a single classifier:
function deleteClassifier(classifier_id, callback) {
  // "visualRecognition" is the watson-developer-cloud Visual Recognition
  // client created elsewhere in the app.
  var data = {
    "classifier_id": classifier_id
  };
  // Delete a single classifier and pass the result (or error) to the callback.
  visualRecognition.deleteClassifier(data, function(err, response) {
    if (err) {
      callback(err);
    } else {
      callback(null, response);
    }
  });
}
Edit:
This occurred when I was using the V2 API, but I believe it is not related to the API version. See the accepted answer.
1 - Is there a better way to delete all classifiers than doing it one by one?
No, you must delete them one by one.
2 - Why am I unable to delete individual classifiers now? Is there some policy that blocks me after a 429 too many requests error?
I suspect that when your request to DELETE /classifiers/{classifier_id} returns a 404, it is because the classifier_id was previously successfully deleted. You can verify this by doing a GET /classifiers operation to see the list of all current custom classifiers for your account. 404 is the designed response to an attempt to delete a classifier that cannot be found (which would be the case if it had previously been deleted). There is no policy that would block you after encountering a 429.
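If you do want to clear out every custom classifier, a sequential approach avoids the burst of parallel requests. A rough sketch, assuming the same visualRecognition client as in the question and that your SDK version exposes a listClassifiers method (the method name may differ between SDK versions):

// List the current custom classifiers, then delete them one at a time
// instead of firing hundreds of requests in parallel.
// Assumes the "visualRecognition" client from the question; "listClassifiers"
// is an assumption - check the method name for your SDK version.
function deleteAllCustomClassifiers(done) {
  visualRecognition.listClassifiers({}, function(err, result) {
    if (err) {
      return done(err);
    }
    var classifiers = result.classifiers || [];

    // Delete the next classifier only after the previous delete has finished.
    (function deleteNext(index) {
      if (index >= classifiers.length) {
        return done(null, classifiers.length);
      }
      visualRecognition.deleteClassifier(
        { classifier_id: classifiers[index].classifier_id },
        function(deleteErr) {
          if (deleteErr) {
            return done(deleteErr);
          }
          deleteNext(index + 1);
        }
      );
    })(0);
  });
}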
Could you give an example of the URLs you are using - I am curious if it is the beta service (v2) or the newest version, v3?
I found that the problem is that I was trying to delete the default classifiers, and that is not allowed.
In the later version of the API (V3 as I write this answer), there is only one default classifier, and it cannot be deleted.
The 404 errors I was getting were because I was trying to delete the default classifiers. All my custom classifiers had already been deleted, as Matt Hill mentioned in his answer.