In NetSuite, is it possible to create a CSV file from a saved search using SuiteTalk?

Background:
I am a newbie in the NetSuite world. We are trying to integrate NetSuite with our ERP, and I am doing some preliminary research to find out what would be the best option moving forward. The primary objective of this first task is to download a huge volume of data from NetSuite to our end and to evaluate alternative approaches.
I did some research on SuiteScript/SuiteTalk/SuiteAnalytics; the facts I have found so far and my questions are below:
A custom search can be created and saved via SuiteScript or SuiteTalk.
This saved search can be invoked via both SuiteScript and SuiteTalk.
One thing I am confused about: is the saved search the view that SuiteAnalytics can access? (Not my main question, though!)
Using SuiteScript, the results of a saved search execution can be written to a flat file, and that file can be moved to the File Cabinet. By exposing a REST API using a RESTlet, this file can then be downloaded. [But I have not implemented this yet!]
[MAIN QUESTION] Is it possible to do the same using SuiteTalk, i.e., create a flat file at the NetSuite end? And how do I save/move the file to the File Cabinet after that?
I have not researched the File Cabinet topic further yet, or how files created there are indexed.
Or is it better to load the whole result set from the SOAP call?
Your comments are highly appreciated!
Thank you!

You can certainly execute a saved search via SuiteTalk. You can also loop through all the results of the saved search and do whatever you'd like with those results, such as create a text file.
The SuiteTalk API also allows for accessing the File Cabinet to create or retrieve files, with limitations on file size.

SuiteTalk can be used to create a file and to move a file from one folder to another by changing the folder internalId on the file object.
Since you are using SuiteTalk to create/load the saved search, you would have to build and save the CSV at your end from the search results and then upload the file to the File Cabinet.
Since your objective is to get a huge amount of data out of NetSuite, I would recommend the option below:
Use a Scheduled script or Map/Reduce script to build the file and place it in the required folder of the File Cabinet (see the sketch below).
Using SuiteTalk you can then extract that file. (Note: you don't need REST for this job. You can get the fileContents and store the result at your end. You cannot directly store the file; you will have to store the file contents.)
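For illustration, here is a minimal SuiteScript 2.0 Scheduled script along those lines; the saved search id, folder internal id, and file name are assumptions for the sketch, and real code would also need to escape CSV values:

```javascript
/**
 * @NApiVersion 2.0
 * @NScriptType ScheduledScript
 */
define(['N/search', 'N/file'], function (search, file) {
    function execute(context) {
        // Load an existing saved search (id assumed for this sketch).
        var savedSearch = search.load({ id: 'customsearch_my_export' });

        // Page through the full result set; runPaged avoids the
        // 4000-row limit of run().each().
        var lines = [];
        var pagedData = savedSearch.runPaged({ pageSize: 1000 });
        pagedData.pageRanges.forEach(function (pageRange) {
            var page = pagedData.fetch({ index: pageRange.index });
            page.data.forEach(function (result) {
                // One CSV line per result, one field per search column.
                lines.push(result.columns.map(function (col) {
                    return result.getValue(col);
                }).join(','));
            });
        });

        // Write the CSV into the File Cabinet folder SuiteTalk will read from.
        var csvFile = file.create({
            name: 'export.csv',
            fileType: file.Type.CSV,
            contents: lines.join('\n'),
            folder: 123 // internal id of the target folder (assumed)
        });
        csvFile.save();
    }
    return { execute: execute };
});
```

On the SuiteTalk side you would then retrieve the file record; its content comes back base64-encoded, so you decode and store the contents at your end.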

Thanks #netsuite-guru and #suite-resources!
So, after doing some further research and considering your recommendations: the server-side (NetSuite) scripting can be done only using SuiteScript to achieve the goal of automating the read from NetSuite and the write of the file to the File Cabinet.
I also found another good thread discussing an alternative to Map/Reduce (link).
But I would go with the Scheduled script/Map Reduce approach at this time.

Related

Is it possible to add filters to a saved search when using the N/task module to run a saved search

I have a script (2.0) successfully running saved searches and writing the results to CSV files in the File Cabinet via the N/task.searchTask.
The issue is that the full system requires multiple saved searches that vary only by date range, and multiple script deployments of the same script parameterized by Saved Search Id and File Id (for the results). It would be better/simpler to have one saved search with parameters for the date range instead of multiple saved searches.
Is there a path using the N/task.searchTask that allows for the adding of Filters on the Saved Search?
You can use the N/search module to load Saved Searches and modify them however you see fit before executing them. You could certainly load a Search then manipulate its filters property before running the search.
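For instance, here is a minimal sketch of that approach; the saved search id, filter field, and date values are assumptions, and in a real deployment the dates would come from script parameters:

```javascript
/**
 * @NApiVersion 2.0
 * @NScriptType ScheduledScript
 */
define(['N/search'], function (search) {
    function execute(context) {
        // Load the single shared saved search (id assumed).
        var mySearch = search.load({ id: 'customsearch_orders' });

        // Append a date-range filter before running.
        var filters = mySearch.filters;
        filters.push(search.createFilter({
            name: 'trandate',
            operator: search.Operator.WITHIN,
            values: ['1/1/2024', '1/31/2024']
        }));
        mySearch.filters = filters;

        mySearch.run().each(function (result) {
            // Process each result here, e.g. build up CSV lines.
            return true; // keep iterating
        });
    }
    return { execute: execute };
});
```

Note that this runs the search in-script rather than through N/task.searchTask, which only accepts a saved search id; the trade-off is that your script, not the async search task, does the CSV writing.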

Azure Search: approaches for adding documents to an index

I am not sure if I am going to be able to describe this right, but I'll give it a go.
We are working on implementing Azure Search. At the core we have searchable PDF documents, and we want their text added to the index so all of them are searchable.
The initial thought was to just submit each document to the index via the add-document REST API. The thinking was that this would be the simplest and quickest path to getting the text of the document into the index. We also considered using an indexer: putting all the searchable PDF docs in a blob store and having the indexer crawl them every 10-15 minutes.
We also looked into (based on a recommendation) submitting a standalone JSON file with the text from the PDF in it, submitting it to the index either via the same add-document API or by placing the file in a blob store. Within the JSON document we would need document identifiers that give the index the location of the PDF, so that when the text is found via search we can make the hit clickable and, as a result, open the PDF.
It seems to me the best option is pushing in the JSON file with the add-document API, indexing that, and, when it comes up in a search, using the doc id to link back to the PDF and open it.
For those of you who have used Azure Search: how did you implement this?
If you're totally sure that only PDFs will live in this particular index, then the indexer approach is faster to implement, since the native blob indexer can extract the content of the PDF documents and push it to the index for you.
Both approaches will work, but for the push approach you would need to extract the PDF text yourself using an external tool.
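For reference, a rough sketch of the push model using the documented "add documents" REST call; the service name, index name, field names, and API key are placeholders, and your index schema defines the real fields:

```javascript
// Push one PDF's extracted text into an Azure Search index.
// Placeholders: service/index names, field names, and the API key.
const endpoint =
    'https://my-service.search.windows.net/indexes/my-pdf-index/docs/index' +
    '?api-version=2020-06-30';

async function pushExtractedPdfText(docId, extractedText, pdfUrl) {
    const body = {
        value: [{
            '@search.action': 'upload', // insert, or overwrite by key
            id: docId,                  // the index's key field
            content: extractedText,     // searchable text pulled from the PDF
            pdfUrl: pdfUrl              // stored so a hit can link back to the PDF
        }]
    };

    const response = await fetch(endpoint, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'api-key': '<admin-api-key>' // placeholder
        },
        body: JSON.stringify(body)
    });

    if (!response.ok) {
        throw new Error('Indexing failed: ' + response.status);
    }
    return response.json();
}
```

With the indexer approach none of this code is needed; the blob indexer extracts the PDF text and maps blob metadata (including the blob URL) to index fields for you.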

Extract Keywords from Office Documents with SharePoint Flow

I am trying to implement a document management system using SharePoint. One major issue is that colleagues cannot find documents in the current setup (a local file server). They have asked for a system that scans uploaded documents, automatically looks for keywords in them, and then populates a "Meta" column.
I have had some success with OCR on image files, but until now I have had no success getting keywords out of Office documents (doc, xls, etc.).
Is there a way to set up a Flow to do this task for me?
Any help is much appreciated.
I tried "Get file metadata" and Azure "Text analysis", but it seems to take the raw data of the files (XML, I assume) and returns that the document is too large to analyse.
There is something vague about this requirement - how is a keyword defined in a document?
Therefore, the first obvious solution would be to assign keywords to each file upon uploading it. You could create a process for this with Flow: tasks, reminders, and so on.
Automating this with OCR means you need an OCR service that works with MS Flow, and there you have only one choice: ElasticOCR. Then, in your Flow:
- feed the document content to the ElasticOCR action
- keep in mind that OCR is not 100% accurate
- analyze the generated text content according to your keyword definition (see the sketch after this list)
- finally, write the meta back to the corresponding columns in the library
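The keyword-matching step itself can be simple. Here is a rough sketch, where the keyword list and the substring-matching rule are assumptions standing in for your own keyword definition:

```javascript
// Match OCR/extracted text against a predefined keyword list.
var KEYWORDS = ['invoice', 'contract', 'purchase order']; // assumed list

function extractKeywords(documentText) {
    var lower = documentText.toLowerCase();
    return KEYWORDS.filter(function (keyword) {
        return lower.indexOf(keyword) !== -1;
    });
}

// The matches would then be written to the "Meta" column, e.g.:
var sampleText = 'This invoice relates to the March purchase order.';
var meta = extractKeywords(sampleText).join('; '); // "invoice; purchase order"
```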
Having worked on a similar requirement: we asked uploaders to publish their documents with a short abstract (a column from the content type). The assumption is that the abstract contains the keywords; it is stored in a multi-line column, making it searchable site-wide.

Export SharePoint list to .csv and upload to Azure Data Lake using Flow

I am trying to use Microsoft Flow to export a SharePoint list to Azure Data Lake.
I want it so that any time a particular online list is changed, its entire contents are loaded into a file in Data Lake. If the file already exists, I want to overwrite it. Can someone please explain how I can go about doing this? I have tried multiple ways, but they are not getting the job done.
Thanks
I was able to get the items out of the SharePoint list to near perfection. I will post the Flow here in case anyone needs it in the future.
What I did is that every 5 minutes I "create" a file in Azure Data Lake, which overwrites the file if it exists. The content of the file cannot be blank, so I added a newline as the content. Then I use Get Items to retrieve all the items in the SharePoint list. From there, using an Apply to each loop, I append the content of the current row of the SharePoint list to the Data Lake file (fields separated by | and each row ending with a newline). This works to near perfection, with the only caveat being the newline at the beginning of the file, which I eliminate using Power Query.
This is exactly what I needed. If anybody sees a way to make this better, please post so that we can get this to perfection.
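For anyone reading along, the transformation that Flow performs is roughly the following; the column names here are assumptions:

```javascript
// One pipe-delimited line per SharePoint list item, each ending in a newline,
// mirroring what the Apply to each loop appends to the Data Lake file.
var items = [ // stand-in for the "Get Items" output
    { Title: 'Widget A', Status: 'Active', Owner: 'jdoe' },
    { Title: 'Widget B', Status: 'Closed', Owner: 'asmith' }
];

var fileContents = items.map(function (item) {
    return [item.Title, item.Status, item.Owner].join('|');
}).join('\n') + '\n';

console.log(fileContents);
// Widget A|Active|jdoe
// Widget B|Closed|asmith
```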

How do I search attached files stored in a MS Access 2010 database?

How do I search in MS Access (ver 2010) for data in files attached to records? If I do a "Find" and specify text I KNOW is in a txt file attached to a particular record, there are no hits, whereas if I have the same data in a Text or Memo field, Access finds it. I understood from one of the Access help screens that it is possible to search attachments from within Access, but I have not been able to do this yet.
BTW, I did try using the query tool and searching for text I knew was in the attachment, but it was not successful, although it did find the same text within a memo field in another record.
Thx,
jmb
I'm fairly certain that there is no mechanism in Access to find records based on text within a file attachment. A bit of web searching found an earlier question here and the responses seem to agree that there isn't.
One reference from Microsoft here says
By using attachments, you open documents and other non-image files in their parent programs, so from within Access, you can search and edit those files.
but I think that statement could be misinterpreted. I believe what they meant to say was that
"...from within Access you can open an attachment in its parent program and then work on it as usual (e.g., edit it, search it, print it, and so on)."
You can use the FileSystemObject to open the file as a string and search it sequentially (after first saving the attachment out of the database to disk, since attachments live inside the database file). That's as close as you'll get.
