Is there a delay when using the Box.com search API?

I'm using the Search API as defined here:
https://developers.box.com/docs/#search
It works well, though I noticed that when I make a folder on the site, then immediately call the API searching for that folder name, it doesn't appear in the results for a minute or so. Is there something I'm doing wrong, or some way to force it to do a live search? Thanks.

You're not doing anything wrong. It just takes a little bit of time for the search indexes to be updated with the new file/folder metadata. There's nothing you can do client-side to speed this up.
If you need immediate access to that new folder, consider saving the folder ID that's returned in the response of the Create a New Folder request.
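For illustration, here's a minimal sketch (assuming a Node/TypeScript client with a valid OAuth access token; the folder name and parent ID are placeholders) of creating the folder and keeping the ID from the response instead of searching for it:

// Create a folder via the Box API and capture its ID from the response,
// rather than relying on search (which may lag behind).
async function createFolderAndGetId(accessToken: string): Promise<string> {
  const res = await fetch("https://api.box.com/2.0/folders", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    // "0" is the root folder; adjust the parent ID as needed.
    body: JSON.stringify({ name: "My New Folder", parent: { id: "0" } }),
  });
  if (!res.ok) throw new Error(`Create folder failed: ${res.status}`);
  const folder = await res.json();
  // The ID is available immediately, before the search index catches up.
  return folder.id as string;
}

You can then use that ID directly with the folder endpoints without waiting for the search index to update.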

Related

How to download a file from a website by using a Logic App?

How are you doing?
I'm trying to download an Excel file from a website (specifically DataCamp) in order to use its data in an automated process, but before getting the file it's necessary to sign in on the page. I was thinking this would be possible with a JSON query on the HTTP action, but to be honest I don't know where to start (I'm new to Azure).
The process that I need to emulate to extract the file would be as follows (I know this could be possible with an API or RPA, but I don't have either available for now):
Could you give me some advice (how to get the desired result, or at least where to start researching)? Is this even possible?
Best regards.
If you don't have other options (e.g. your source is on an SFTP, etc.), then using an HTTP action should work; pass the body to your next action (e.g. you might want to persist it to a blob if the content is binary).
If your content is "readable" (e.g. JSON, CSV) and you want to load it for processing, then for large files you need to ensure that you read it in chunks so it is loaded completely before processing.
There is a detailed explanation at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-handle-large-messages#download-content-in-chunks
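If you ever script this outside a Logic App (for example in an Azure Function called from one), the same idea - do an authenticated HTTP GET and persist the response body to a blob - might look roughly like this Node/TypeScript sketch. The download URL, session cookie, container name and connection string below are placeholder assumptions, and a real sign-in flow would first have to obtain the cookie or token.

import { BlobServiceClient } from "@azure/storage-blob";

async function downloadToBlob(): Promise<void> {
  // Authenticated GET; the cookie stands in for whatever the sign-in flow returns.
  const res = await fetch("https://example.com/files/report.xlsx", {
    headers: { Cookie: "session=<value obtained after signing in>" },
  });
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);

  // Buffer the body; for very large files you would stream/chunk instead.
  const body = Buffer.from(await res.arrayBuffer());

  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );
  const container = service.getContainerClient("downloads");
  await container.createIfNotExists();
  await container.getBlockBlobClient("report.xlsx").uploadData(body);
}

downloadToBlob().catch(console.error);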

How to deal with data that is so small I don't want to save it in a database?

The scenario is:
I am working on an Express.js app with MongoDB and EJS.
There is a URL/link on a page which I want to change, let's say using a form.
I am already using a user model to retrieve and update user data.
What I can do is save that link in a collection in MongoDB, i.e. create a model.
But I think it's not good to create a model for just a link and get it to render in EJS.
What should I do? Any suggestions? Any tricks?
*No need to read if you already know what I should do.
I tried something which is not good practice and sometimes causes big issues.
I added a JSON file in the public directory to serve.
I am reading this file to get the link on the client side.
When I want to change the link, I submit the new link using a form, and on the server side I overwrite the content of that file (the JSON file in the public directory), so the next time the file is served with the changed content.
I tried overwriting the contents of that file (using "fs") both synchronously and asynchronously, but because of that the web page sometimes gets stuck for a second; it is not crashing the app, though.
Maybe this was silly; please suggest anything I should do.
*NOTE: Sorry if this question is inappropriate for Stack Overflow, but I am struggling to find any solution.
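For what it's worth, here is a minimal sketch of the approach described above - keeping the link in a single JSON file and overwriting it from a form handler - using the asynchronous fs API so the request doesn't block the event loop. The file name, routes and the "url" form field are made up for illustration.

import express from "express";
import { promises as fs } from "fs";
import path from "path";

const app = express();
app.use(express.urlencoded({ extended: true }));

// The JSON file that stores the single link, e.g. { "url": "https://example.com" }.
const LINK_FILE = path.join(__dirname, "public", "link.json");

// Read the current link (for rendering in EJS or fetching from the client).
app.get("/link", async (_req, res) => {
  const raw = await fs.readFile(LINK_FILE, "utf8");
  res.json(JSON.parse(raw));
});

// Overwrite the file when a new link is submitted via the form.
app.post("/link", async (req, res) => {
  await fs.writeFile(LINK_FILE, JSON.stringify({ url: req.body.url }));
  res.redirect("/");
});

app.listen(3000);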

Pushing documents (blobs) for indexing - Azure Search

I've been working with Azure Search + Azure Blob Storage for a while, and I'm having trouble indexing the incremental changes for newly uploaded files.
How can I refresh the index after uploading a new file into my blob container? These are my steps after uploading the file (I'm using the REST service to perform these actions): I'm using the Microsoft Azure Storage Explorer [link].
Through this app I uploaded my new file to a folder that was already created. After that, I used the REST API to perform a 'Run' indexer command, as you can see in this [link].
The indexer shows that my new file was successfully added, but when I search, the content in this new file is not found.
Does anybody know how to add this new file to the index, and how to find this new file by searching for its content?
I'm following the Microsoft tutorials, but I couldn't find a solution for this issue.
Thanks, guys!
Assuming everything is set up correctly, you don't need to do anything special - new blobs will be picked up and indexed the next time the indexer runs according to its schedule, or when you run the indexer on demand.
However, when you run the indexer on demand, successful completion of the Run Indexer API call means that the request to run the indexer has been submitted; it does not mean that the indexer has finished running. To determine when the indexer has actually finished running (and to observe any errors), you should use the Indexer Status API.
If you still have questions, please let us know your service name and indexer name and we can take a closer look at the telemetry.
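As a rough sketch (the service name, indexer name, API version and admin key below are placeholders), running the indexer on demand and then polling its status over REST could look like this:

const SERVICE = "https://<service-name>.search.windows.net";
const API_VERSION = "2020-06-30";   // use the API version you already target
const headers = { "api-key": "<admin-api-key>", "Content-Type": "application/json" };

// Ask the service to run the indexer; a 202 only means the run was accepted.
async function runIndexer(indexer: string): Promise<void> {
  const res = await fetch(`${SERVICE}/indexers/${indexer}/run?api-version=${API_VERSION}`, {
    method: "POST",
    headers,
  });
  if (res.status !== 202) throw new Error(`Run request failed: ${res.status}`);
}

// Poll the status API until the last run has actually finished (or failed).
async function waitForIndexer(indexer: string): Promise<void> {
  for (;;) {
    const res = await fetch(`${SERVICE}/indexers/${indexer}/status?api-version=${API_VERSION}`, { headers });
    const body = await res.json();
    const status = body.lastResult?.status;   // "inProgress", "success", "transientFailure", ...
    if (status && status !== "inProgress") {
      console.log(`Indexer finished with status: ${status}`, body.lastResult.errors ?? []);
      return;
    }
    await new Promise((r) => setTimeout(r, 5000));  // wait before polling again
  }
}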
I'll try to describe how I figured out this issue.
First, I created a DataSource with this command:
POST https://[service name].search.windows.net/datasources?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-data-source.
Secondly, I created the Index:
POST https://[servicename].search.windows.net/indexes?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-index
Finally, I created the Indexer. The problem happened at this point, because this is where all the configuration is set.
POST https://[service name].search.windows.net/indexers?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-indexer
After all of this is done, the indexer starts indexing all content automatically (once there is content in blob storage).
The crucial part comes now: while your indexer is trying to extract the text from your files, issues can occur when the file type is not 'indexable'. For example, there are two properties that you must pay attention to: excluded extensions and indexed extensions.
If you don't set the types properly, the indexer throws an exception. The error message (which in my opinion is not good and is somewhat misleading) says that to avoid this error you should set the indexer to '"dataToExtract" : "storageMetadata"'.
That setting means you index only the metadata and no longer the content of your files, so you cannot search on the content and retrieve documents by it.
However, the same message says at the bottom that to avoid this issue you should set two properties (which solved the problem):
"failOnUnprocessableDocument": false, "failOnUnsupportedContentType": false
Now everything is working properly. I appreciate your help @Eugene Shvets, and I hope this is useful for someone else.
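For reference, here is a sketch of what that indexer definition might look like when created over REST. The indexer, data source and index names, the admin key and the API version are placeholders, and the data source and index are assumed to already exist.

// Creates the blob indexer with the configuration discussed above.
async function createIndexer(): Promise<void> {
  const definition = {
    name: "my-blob-indexer",                 // placeholder
    dataSourceName: "my-blob-datasource",    // placeholder
    targetIndexName: "my-index",             // placeholder
    parameters: {
      configuration: {
        dataToExtract: "contentAndMetadata",      // keep file content searchable
        failOnUnprocessableDocument: false,       // skip files that can't be cracked
        failOnUnsupportedContentType: false,      // skip unsupported content types
      },
    },
  };

  const res = await fetch(
    "https://<service-name>.search.windows.net/indexers?api-version=2020-06-30",
    {
      method: "POST",
      headers: { "api-key": "<admin-api-key>", "Content-Type": "application/json" },
      body: JSON.stringify(definition),
    }
  );
  if (res.status !== 201) throw new Error(`Create indexer failed: ${res.status} ${await res.text()}`);
}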

SharePoint 2013 REST upload from App works on image, fails on video

Has anyone tried to upload a video via REST to SharePoint 2013 from a SharePoint hosted app?
Below are my two POSTs. The first one, an image, works fine. The second one does add my video, but it throws a 404 (Not Found). Subsequent executions do not overwrite but instead create new video files with some random letters appended. The subsequent executions also return the 404.
I should also point out that the overwrite flag is obviously being ignored, because it always creates a new file. Further, when I tried to use the "manipulated" video URL that you see in a library after uploading, it fails with a server error.
My suspicion is that it's because of the way SP2013 handles videos by creating items that don't retain their extension like an image does. Anyone know for sure?
Or know if there's some sort of RESPONSE that is sent back that would cause the 404?
http://app2-6040b7dbcd33cc.sp13apps-qa.PATH/sites/XDevT/CustomNewsFeedEntry/_api/SP.AppContextSite(@TargetSite)/web/lists/getByTitle(@TargetLibrary)/RootFolder/Files/add(url=@TargetFileName,overwrite='true')?@TargetSite=%27http://teamsites13-qa.PATH/sites/XDevT%27&@TargetLibrary=%27NewsFeedVideos%27&@TargetFileName=%27cg-overlay-img.jpg%27
http://app2-6040b7dbcd33cc.sp13apps-qa.PATH/sites/XDevT/CustomNewsFeedEntry/_api/SP.AppContextSite(@TargetSite)/web/lists/getByTitle(@TargetLibrary)/RootFolder/Files/add(url=@TargetFileName,overwrite='true')?@TargetSite=%27http://teamsites13-qa.PATH/sites/XDevT%27&@TargetLibrary=%27NewsFeedVideos%27&@TargetFileName=%27WMV_Movie.wmv%27
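For comparison, a plain _api Files/add upload issued from script, outside the SP.AppContextSite cross-domain wrapper used above, might look roughly like the following sketch. The site URL, library, file name and request digest are placeholders; the video bytes go in the raw request body.

// Sketch only: a direct Files/add upload with fetch.
async function uploadFile(
  siteUrl: string,
  library: string,
  fileName: string,
  data: ArrayBuffer,
  requestDigest: string   // obtained from POST {siteUrl}/_api/contextinfo
): Promise<unknown> {
  const url =
    `${siteUrl}/_api/web/lists/getByTitle('${library}')` +
    `/RootFolder/Files/add(url='${fileName}',overwrite=true)`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Accept: "application/json;odata=verbose",
      "X-RequestDigest": requestDigest,
    },
    body: data,   // raw binary content of the file
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json();
}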

Issues deleting a site collection

I'm currently doing some tests where I try to delete a site collection programmatically. In doing so, I noticed some strange behavior in SharePoint.
I used the following code to test the site collection deletion.
private static void DeleteSiteCollection(string urlSiteToDelete)
{
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://wssdev06"));
        webApp.Sites.Delete(urlSiteToDelete);
        webApp.Update();
    });
}
So when I call the method with the URL of an existing site collection, the site collection is deleted as expected. But when I call the method with null, the empty string, or a URL which is not connected to a site collection, SharePoint deletes the site collection that resides under the root (e.g. http://wssdev06/).
I'm not sure if I'm misusing the SPSiteCollection.Delete() method or if I misunderstood the concept of site collections and managed paths, but I think this is really strange and alarming behavior.
I could reproduce this behavior on different web applications, but I haven't had the chance to test it on another SharePoint environment yet.
So am I doing something wrong or is this a bug?
UPDATE:
So I did some more investigation and realized that this must have something to do with the indexer of the SPSiteCollection class, which returns the root site collection if there is no site collection located under the given URL. Looks like a bug.
Whenever you ask SharePoint to find a site collection using a URL, it will do its best to return an SPSite, even if that means it has to ignore part of the URL.
Sometimes this is a very good thing, for instance if you have the full URL of a list and want to find the corresponding SPSite and SPWeb.
But it can be very dangerous, for example when you're deleting site collections and make a spelling mistake.
If you want to make sure you get the right site collection, look up the SPSite first and check that the SPSite you get has the URL that you want.
BR
Per
Your code looks right. One thought would be to add a check of the sites collection to make sure the site you want to delete is in the sites collection. I realize this does not answer your question.
This sounds like exactly the issue described in Microsoft's KB 968474 - stsadm can inadvertently delete a root site collection if an erroneous URL path is used. Similar to your symptoms, when using stsadm -o restore, "If the URL path is incorrect then the deletion and restore is attempted against the only valid path which is the root site collection of the URL."
It sounds to me like there is some bug in the underlying site delete API, as you suspected. Possibly, the algorithm looks for a "closest match" rather than "exact match".
Enumerating the Site Collections and validating an exact match might be the best way to avoid this. However, I wouldn't say you're doing anything wrong as this is very close to the Microsoft sample code and the documentation for the function gives no warning about passing invalid URLs.
