When trying to configure Event Hubs Capture with Data Lake Store, I get the following error in the Azure portal:
SubCode=40000. DataLakeFolderPath. (followed by a tracking GUID and a correlation GUID)
Does anyone know what this error means?
I assume it has something to do with the DataLakeFolderPath, but I have no clue what a SubCode of 40000 means.
After redoing all the steps in this article, it now works.
Retracing my steps, the error was likely caused by me not setting the correct permissions at the ADLS root level, or not setting the correct permissions at the specific folder level.
I don't know which one yet.
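If it helps anyone else: from what I can tell, Capture needs write/execute permissions for the Event Hubs service identity on the target folder (and execute on the folders above it). Below is a minimal sketch of adding such an ACL entry through the ADLS Gen1 WebHDFS-style REST API; the account name, folder path, object ID, and token are all placeholders, and you'd acquire the bearer token from Azure AD beforehand.

    # Hedged sketch: add an ACL entry on an ADLS Gen1 folder via the
    # WebHDFS-compatible REST API. Account, path, object ID, and token
    # are placeholders.
    import requests

    account = "myadlsaccount"      # hypothetical ADLS account name
    path = "eventhub-capture"      # hypothetical capture folder
    object_id = "<service-principal-object-id>"
    token = "<aad-bearer-token>"

    url = (f"https://{account}.azuredatalakestore.net/webhdfs/v1/{path}"
           f"?op=MODIFYACLENTRIES&aclspec=user:{object_id}:rwx")
    resp = requests.put(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()  # 200 OK means the ACL entry was applied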
I'm trying to read data from SharePoint Online. I'm using the KingswaySoft SharePoint drivers in SSIS.
I'm getting an error even though the credentials I provided are valid.
What could be the reason?
Do I have to change the URL pattern?
Resolved
I added /_vti_bin/listdata.svc/ to the end of the site URL, like:
https://company0.sharepoint.com/sites/sfinfo/IT_team/_vti_bin/listdata.svc/
Thanks
The service URL you provide should include the base site and, if necessary, the subsite, and no other details.
Give it a try, and if the error persists, please reach out to our support team with a screenshot of the SharePoint connection manager configuration.
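As a quick sanity check of the URL pattern itself, outside SSIS, you can hit the endpoint with a plain HTTP request. This is only a sketch: an unauthenticated call won't return data, but a 401/403 rather than a 404 suggests the path is right.

    # Hedged sketch: probe the listdata.svc endpoint from the post above.
    # No authentication is attempted; we only check that the path resolves.
    import requests

    url = "https://company0.sharepoint.com/sites/sfinfo/IT_team/_vti_bin/listdata.svc/"
    resp = requests.get(url, allow_redirects=False)
    print(resp.status_code)  # 401/403 = path exists, auth needed; 404 = bad path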
I've been working with Azure Search and Azure Blob Storage for a while, and I'm having trouble indexing incremental changes when new files are uploaded.
How can I refresh the index after uploading a new file into my blob container? Here are my steps after uploading a file (I'm using the REST service to perform these actions): I'm using Microsoft Azure Storage Explorer [link].
Through this app I uploaded my new file to a folder that was already created. After that, I used the HTTP REST API to perform a 'Run' indexer command, as you can see in this [link].
The indexer shows that my new file was successfully added, but when I search for the content of this new file, it is not found.
Does anybody know how to add this new file to the index, and how to find it by searching for its content?
I'm following Microsoft's tutorials, but I couldn't find a solution for this issue.
Thanks, guys!
Assuming everything is set up correctly, you don't need to do anything special - new blobs will be picked up and indexed the next time the indexer runs on its schedule, or when you run the indexer on demand.
However, when you run the indexer on demand, successful completion of the Run Indexer API call means only that the request to run the indexer has been submitted; it does not mean that the indexer has finished running. To determine when the indexer has actually finished (and to observe any errors), use the Indexer Status API.
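For example, here is a minimal sketch of that run-then-poll pattern against the REST API; the service name, indexer name, key, and api-version are all placeholders.

    # Hedged sketch: run an indexer on demand, then poll the status API
    # until the most recent execution finishes. All names are placeholders.
    import time
    import requests

    service = "<service-name>"
    indexer = "<indexer-name>"
    api_version = "2017-11-11"   # use the version your service supports
    headers = {"api-key": "<admin-api-key>"}
    base = f"https://{service}.search.windows.net/indexers/{indexer}"

    # Submitting the run only queues it; it does not wait for completion.
    requests.post(f"{base}/run?api-version={api_version}", headers=headers)

    while True:
        status = requests.get(f"{base}/status?api-version={api_version}",
                              headers=headers).json()
        last = status.get("lastResult") or {}
        if last.get("status") not in (None, "inProgress"):
            print(last.get("status"), last.get("errorMessage"))
            break
        time.sleep(5)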
If you still have questions, please let us know your service name and indexer name and we can take a closer look at the telemetry.
I'll try to describe how I figured out this issue.
First, I created the data source with this request:
POST https://[service name].search.windows.net/datasources?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-data-source.
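For reference, a minimal data source body for blob storage can look like the sketch below; the names and connection string are placeholders.

    # Hedged sketch: create a blob data source. Names and the connection
    # string are placeholders.
    import requests

    service = "<service-name>"
    api_version = "2017-11-11"
    headers = {"api-key": "<admin-api-key>"}

    datasource = {
        "name": "blob-datasource",
        "type": "azureblob",
        "credentials": {"connectionString": "<storage-connection-string>"},
        "container": {"name": "<container-name>"}
    }
    requests.post(
        f"https://{service}.search.windows.net/datasources?api-version={api_version}",
        headers=headers, json=datasource)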
Second, I created the index:
POST https://[servicename].search.windows.net/indexes?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-index
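Again as a sketch: a minimal index for searching blob content needs a key field and a searchable content field. The field names here are just conventions, not requirements.

    # Hedged sketch: a minimal index for blob content.
    import requests

    service = "<service-name>"
    api_version = "2017-11-11"
    headers = {"api-key": "<admin-api-key>"}

    index = {
        "name": "blob-index",
        "fields": [
            # The blob indexer can map the (encoded) storage path here.
            {"name": "id", "type": "Edm.String", "key": True},
            # Extracted document text lands in a searchable string field.
            {"name": "content", "type": "Edm.String", "searchable": True}
        ]
    }
    requests.post(
        f"https://{service}.search.windows.net/indexes?api-version={api_version}",
        headers=headers, json=index)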
Finally, I created the indexer. The problem happened at this step, because this is where all the configuration is set.
POST https://[service name].search.windows.net/indexers?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-indexer
Once all of this is done, the indexer starts indexing all the content automatically (as long as there is content in blob storage).
Now comes the crucial part. While the indexer is trying to extract the text from your files, issues can occur when a file type is not 'indexable'. In particular, there are two properties you must pay attention to: excluded extensions and indexed extensions.
If you don't set these file types properly, the indexer throws an exception. The feedback message (which in my opinion is not good - it was misleading) says that to avoid this error you should set the indexer to '"dataToExtract" : "storageMetadata"'.
That setting means you are indexing only the metadata and no longer the content of your files, so you cannot search on the content and retrieve documents by it.
The same message, at the bottom, says that to avoid this issue you should set two properties (which is what solved the problem):
"failOnUnprocessableDocument" : false, "failOnUnsupportedContentType" : false
Now everything is working properly. I appreciate your help, @Eugene Shvets, and I hope this is useful for someone else.
I am trying to follow the tutorial below:
Azure Tutorial
As noted at the bottom, there appear to have been changes since it was created.
When I get to the part where I create an input for my Stream Analytics job, I cannot select an event hub, even though there is one in my subscription.
So I tried to provide the information manually, and I got an error stating 'invalid token'.
Has anyone got any ideas how to resolve this, or can anyone point me to a better/more recent tutorial?
I am looking to stream data in real time.
Paul
Thanks for the help here - I ended up using the secondary key and that worked fine!
Change to the secondary connection string, or use a different shared access policy altogether.
You can use the primary key of the new shared access policy.
PS: It is a weird error; sometimes removing the trailing ";" worked.
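If you're assembling the connection string in code before pasting it in, that last point amounts to a one-liner; the string here is a placeholder.

    # Hedged sketch: strip the trailing ';' that sometimes breaks validation.
    conn = ("Endpoint=sb://mynamespace.servicebus.windows.net/;"
            "SharedAccessKeyName=mypolicy;SharedAccessKey=<key>;")
    conn = conn.strip().rstrip(";")
    print(conn)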
I have followed all the steps shown in the MSDN documentation to copy a file from FTP.
So far, the datasets are created, the linked services are created, and the pipeline is created. The diagram for the pipeline shows the logical flow. However, when I schedule the ADF pipeline to do the work for me, it fails. The input dataset passes, but when executing the output dataset, I am presented with the following error.
Copy activity encountered a user error at Source side:
ErrorCode=UserErrorFileNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot find the file specified. Folder path: 'Test/', File filter: 'Testfile.text'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The remote server returned an error: (500) Syntax error, command unrecognized.,Source=System,'.
I can physically navigate to the folder and see the file for myself, but when using ADF I am having issues. The firewall is set to allow the connection, yet I still get this error. As there is very minimal logging, I am unable to nail down the issue. Could someone help me out here?
PS: Cross-posted on MSDN.
I encountered the same error and was able to solve it by adding the following to the FTP linked service:
"enableSsl": true,
"enableServerCertificateValidation": true
I've got an Azure app up and running, but various requests generate a 500 error. No other details come back from the server to tell me exactly what the problem is: no stack trace, no error message. The only thing I get back from the server are the HTTP headers indicating there was an error.
I've done a little looking around but can't seem to find a way to retrieve the error details I'm looking for. I've seen some articles that suggest enabling logging, but I'm not sure 1) how to do that, 2) where those log files would go, and 3) how to access said log files. I've seen posts that say to add a whole bunch of code to my application to enable logging, but all I'm looking for is an error message and a stack trace from a 500 error. Do I really have to add a bunch of code to my app to see that information? If not, how can I get at it?
Thanks!
Chris
The best long-term solution is to enable Azure Diagnostics, which I think is what you're referring to. If you want a quick-and-dirty solution, you can log errors out to a file and then RDP into the role instances to view them. This is very similar to what you would do on a server in your own datacenter.
You can create the logs however you like. I've used log4net and RollingFileAppenders with some success. Setting the logfile path to something like "\logs\mylog.txt" will place the logs in the E: drive of the VM. Note you'll still need code somewhere in your app to capture the error and write it to the log - typically the global error handler in Global.asax is a good place for that.
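That advice is .NET/log4net-specific, but as a language-neutral illustration of the same rolling-file idea, here's the equivalent with Python's standard logging module; the file name, size, and backup count are arbitrary.

    # Illustration only: a rolling file log plus a catch-all handler, the
    # Python analogue of a log4net RollingFileAppender + Global.asax hook.
    import logging
    from logging.handlers import RotatingFileHandler

    handler = RotatingFileHandler("mylog.txt", maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

    logger = logging.getLogger("app")
    logger.addHandler(handler)
    logger.setLevel(logging.ERROR)

    try:
        raise RuntimeError("something failed")   # stand-in for a real error
    except Exception:
        logger.exception("Unhandled error")      # writes message + stack trace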
You'll also have to enable RDP access to your role instances. There are many articles detailing how to do that. Here's one.
This is not a generally recommended approach, because the logs may disappear when the role recycles or is recreated. It's also a pain in the butt to RDP into all those different servers to keep an eye on the logs.
One other warning - it's possible that the 500 error is due to some failure in your web.config. If that is the case, all the application-level error logging in the world isn't going to help you. So be sure that your web.config is valid, and also check the Windows Event Logs while you're RDP'd into the server.
A 500 internal server error is most often caused by a problem on the server: it could not understand the incoming request, or there is a problem in the configuration. So try to run the app locally and see if there is a problem. You can record errors in a database from catch blocks or Application_Error, and you can also use tracing. Believe me, they are very helpful and worth a few extra lines of code.
For tracing, have a look here: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx