Best way to store response back into NetSuite?

I am a newbie in NetSuite. I am currently consuming a saved search via a RESTlet in my C# code, pushing the result forward to a WSDL-based web service, and receiving a response from it. Now I want to save that response back into NetSuite as well, possibly alongside the Transaction list records from which I created the Saved Search.
Please guide me towards what might be the best approach.

You can always view the response/request in Setup > Integration > SOAP Web Services Usage Log. They are already saved as records; you just need to query them.
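If you would rather attach the response to your own data instead of relying on the usage log, a common pattern is a second RESTlet that writes the payload to a custom record. Below is a minimal C# sketch of the client side; the account URL, script/deploy IDs, and custom record are hypothetical, and the required Token-Based Authentication (OAuth 1.0a) header is elided.

```csharp
// Sketch: POST the web-service response to a NetSuite RESTlet that saves it
// on a custom record. URL, script/deploy IDs, and auth are placeholders.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class NetSuiteWriteBack
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task SaveResponseAsync(string responseJson)
    {
        // Hypothetical RESTlet deployment that creates the custom record server-side.
        var url = "https://ACCOUNT.restlets.api.netsuite.com/app/site/hosting/restlet.nl?script=123&deploy=1";

        using var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Content = new StringContent(responseJson, Encoding.UTF8, "application/json")
        };
        // NetSuite RESTlets require an OAuth 1.0a (TBA) Authorization header;
        // build it with your consumer/token keys (omitted here).
        request.Headers.TryAddWithoutValidation("Authorization", "OAuth ...");

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}
```

The RESTlet itself can then create the custom record and link it to the transactions behind the saved search.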

Related

How to download a file from a website using a Logic App?

I'm trying to download an Excel file from a website (specifically DataCamp) in order to use its data in an automated process, but signing in on the page is required before the file can be downloaded. I was thinking this might be possible with a JSON query on the HTTP action, but to be honest I don't know where to start (I'm new to Azure).
The process I need to emulate to get the file would be as follows (I know this could be done with an API or RPA, but I don't have either available for now):
Could you give me some advice (how to get the desired result, or at least where to research)? Is this even possible?
Best regards.
If you don't have other options (e.g. your source sitting on an SFTP server), an HTTP action should work; pass the Body to your next action (you might want to persist it to a blob if the content is binary).
If your content is "readable" (e.g. JSON or CSV) and you want to load it for processing, then for large files you need to make sure it is read in chunks so it is loaded completely before processing.
Detailed explanation at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-handle-large-messages#download-content-in-chunks
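If you end up doing the download in code instead (say, from an Azure Function called by the Logic App), the same idea applies: stream the body rather than buffering it. A small C# sketch under that assumption; the URL and target path are placeholders, and the sign-in step the question mentions would still have to be handled separately (cookies or an API token).

```csharp
// Sketch: stream a large download straight to disk so the file is never
// held fully in memory -- the code-level analogue of reading in chunks.
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class ChunkedDownload
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task DownloadAsync(string url, string targetPath)
    {
        // ResponseHeadersRead returns as soon as headers arrive, so the body
        // is streamed instead of buffered into memory.
        using var response = await Http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        await using var body = await response.Content.ReadAsStreamAsync();
        await using var file = File.Create(targetPath);
        await body.CopyToAsync(file); // copies in small internal chunks
    }
}
```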

Export Azure Application Insights with REST API for search term

I have a requirement where I need to search for a particular term in App Insights, export this 'report', and send it in an email. Consider this a "report a bug" use case.
So if I search for the key "xxxxxx123", it should retrieve all matching traces/logs and then export them to either Excel or CSV.
So my question is: is it possible with a NuGet package, or even with the REST API?
I tried looking at this, but couldn't find it helpful:
dev.applicationinsights.io/apiexplorer
There is no NuGet package. You can do it by using the Query REST API.
Write any query you need in the Query textbox; on the right side you can see the generated request. Then you can use C# or another programming language to fetch the results, as per this article. (Note: remember to use your real Application ID and API Key.)
After fetching the data, you should write your own logic to export it to CSV or Excel.
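As a rough illustration of both steps, here is a C# sketch that runs a KQL search against the Application Insights Query REST API and dumps the first result table to CSV. The application ID, API key, and output path are placeholders, and the CSV writing is deliberately naive (no quoting of values containing commas).

```csharp
// Sketch: query the Application Insights REST API (x-api-key auth) and
// flatten the first returned table to a CSV file.
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class AppInsightsExport
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task ExportAsync()
    {
        var appId = "APP_ID";   // from the API Access blade
        var query = Uri.EscapeDataString("traces | where message contains 'xxxxxx123'");
        var url = $"https://api.applicationinsights.io/v1/apps/{appId}/query?query={query}";

        using var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("x-api-key", "API_KEY");

        using var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        // Response shape: { "tables": [ { "columns": [...], "rows": [[...], ...] } ] }
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var table = doc.RootElement.GetProperty("tables")[0];

        var header = string.Join(",", table.GetProperty("columns").EnumerateArray()
            .Select(c => c.GetProperty("name").GetString()));
        var rows = table.GetProperty("rows").EnumerateArray()
            .Select(r => string.Join(",", r.EnumerateArray().Select(v => v.ToString())));

        await File.WriteAllLinesAsync("report.csv", new[] { header }.Concat(rows));
    }
}
```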

Querying ArangoDB without leaving page

I'm relatively new to web development and have been using ArangoDB for most of that limited experience. I have a basic understanding of Node.js and of creating Express-based CRUD apps with ArangoDB as the database.
I'm getting to a point, though, where I'd like the ability to query the database from the client. Say I have a datalist-type element where the user types words into a search bar: I'd like to query the database from there, rather than having to fetch all of the documents up front just to build the datalist. I haven't found a single mention of querying the database from the client side, but I can't imagine it isn't possible. Surely when I search Wikipedia through its search bar and it offers suggestions, I didn't just receive the entire list of Wikipedia documents when the page loaded? Please steer me in the right direction; I don't know how to tackle this problem.
Have a look at how to build dynamic forms; this will allow you to perform AJAX-style calls from the browser window to a back-end REST API service. Your back-end web service can then gather the data for the response (from ArangoDB if required) and respond with it, most likely in JSON format.
Your UI can then take that response and dynamically update components in your DOM, so the user sees the data injected into the page without a page reload taking place.
https://www.pluralsight.com/search?q=ajax is a great place to start.
Alternatively you can have a look at free content like https://www.youtube.com/watch?v=tNKD0kfel6o
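To make the back-end piece concrete, here is a minimal sketch (in C#, though the same shape applies to an Express route) of an endpoint the browser can call: it takes the search term from the query string and forwards it to ArangoDB's HTTP cursor API with a bind parameter. The server URL, database, and "pages" collection are hypothetical, and authentication against ArangoDB is elided.

```csharp
// Sketch (ASP.NET minimal API, .NET 6+): a search endpoint the browser
// calls via AJAX; it runs a bind-parameterized AQL query through
// ArangoDB's POST /_api/cursor endpoint and returns the matches as JSON.
using System.Net.Http.Json;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Real deployments also need an Authorization header for ArangoDB.
var arango = new HttpClient { BaseAddress = new Uri("http://localhost:8529") };

app.MapGet("/api/search", async (string term) =>
{
    var cursorRequest = new
    {
        // Bind parameters keep user input out of the query text itself.
        query = "FOR p IN pages FILTER CONTAINS(LOWER(p.title), LOWER(@term)) LIMIT 10 RETURN p.title",
        bindVars = new { term }
    };
    var response = await arango.PostAsJsonAsync("/_db/_system/_api/cursor", cursorRequest);
    response.EnsureSuccessStatusCode();
    var body = await response.Content.ReadFromJsonAsync<CursorResult>();
    return Results.Json(body!.result);
});

app.Run();

record CursorResult(List<string> result);
```

The client side is then a plain fetch('/api/search?term=' + encodeURIComponent(value)) in the search bar's input handler, whose JSON response repopulates the datalist.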

Import data from Clio to Azure database using API v4

Let me start out by saying I am a SQL Server database expert, not a coder, so making API calls is certainly not an everyday task for me.
Having said that, I am trying to use Azure Data Factory's data copy tool to import data from Clio to an Azure SQL Server database. I have had some limited success: data is copied over using the API and inserted into the target table, but paging really seems to be an issue. I am testing this with the billable_clients call, and the first 25 records with the fields I specify are inserted along with the paging record. As I understand it, the billable_clients call is eligible for bulk actions, which may be the solution, although I've not been able to figure out how that works. The URL I am calling is below:
https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name
Using Postman I've tried to make the same call while adding X-BULK: true to the headers, but that returns no results. If anyone can shed some light on how the X-BULK header flag is used when making a call, or has any experience loading Clio data into a SQL Server database, I'd love some feedback on your methods.
If any additional information regarding my attempts or setup would help please let me know.
Thanks!
You need to download the JSON files with the Bulk API and then load them into the database.
It isn't possible to insert the data directly.
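Whichever route you take for bulk downloads, the paging itself can also be handled in code. Below is a C# sketch that follows a next-page URL from each response and then bulk-loads the rows with SqlBulkCopy (Microsoft.Data.SqlClient NuGet package). The meta.paging.next shape, the destination table, and the credentials are assumptions to verify against real Clio responses.

```csharp
// Sketch: page through Clio's billable_clients endpoint by following the
// next-page URL in each response, then bulk-insert into SQL Server.
using System.Data;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class ClioImport
{
    public static async Task RunAsync(string bearerToken, string connectionString)
    {
        var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);

        var table = new DataTable();
        table.Columns.Add("id", typeof(long));
        table.Columns.Add("name", typeof(string));
        table.Columns.Add("unbilled_hours", typeof(decimal));

        string? url = "https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name";
        while (url != null)
        {
            using var doc = JsonDocument.Parse(await http.GetStringAsync(url));
            foreach (var item in doc.RootElement.GetProperty("data").EnumerateArray())
            {
                table.Rows.Add(
                    item.GetProperty("id").GetInt64(),
                    item.GetProperty("name").GetString(),
                    item.GetProperty("unbilled_hours").GetDecimal());
            }
            // Keep following the paging URL until the API stops returning one
            // (assumed shape: { "data": [...], "meta": { "paging": { "next": ... } } }).
            url = doc.RootElement.TryGetProperty("meta", out var meta)
                  && meta.TryGetProperty("paging", out var paging)
                  && paging.TryGetProperty("next", out var next)
                ? next.GetString()
                : null;
        }

        using var bulk = new SqlBulkCopy(connectionString)
        {
            DestinationTableName = "dbo.BillableClients" // hypothetical target table
        };
        await bulk.WriteToServerAsync(table);
    }
}
```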

I need to scrape all the analytics from a Flurry account

Right now, the only project I can see that does this is
https://github.com/lucamartinetti/flurry-scraper
...but it currently is not logging in properly. I suspect this is due to Flurry having made changes to their API, which result in the login not working anymore...
I tried messing with it, but am unable to get it to work.
Can anyone help me, or point me in the direction of a project that will do this? I want to scrape all the data possible and download it.
Any help would be appreciated.
Thanks,
-Mark
You don't need to scrape the website if all you want is analytics metrics of your app and you have the API key.
You just need to access this data using Flurry's reporting APIs.
For instance, you can make a REST call to the AppMetrics API and it will give you data about your apps' users, sessions, pageviews, etc., in XML or JSON. A simple AppMetrics call would be of the form:
http://api.flurry.com/appMetrics/METRIC_NAME?apiAccessCode=APIACCESSCODE&apiKey=APIKEY&startDate=STARTDATE&endDate=ENDDATE&country=COUNTRY&versionName=VERSIONNAME&groupBy=GROUPBY
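Here is a minimal C# sketch of that call; the metric name, access code, API key, and date range are placeholders, and the Accept header is an assumption based on the answer's note that both XML and JSON are supported.

```csharp
// Sketch: call the legacy Flurry AppMetrics endpoint shown above and print
// the raw response body. All query parameters are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class FlurryMetrics
{
    public static async Task Main()
    {
        var url = "http://api.flurry.com/appMetrics/ActiveUsers" +   // METRIC_NAME
                  "?apiAccessCode=APIACCESSCODE&apiKey=APIKEY" +
                  "&startDate=2024-01-01&endDate=2024-01-31";

        using var http = new HttpClient();
        // Ask for JSON; per the answer, the API can also return XML.
        http.DefaultRequestHeaders.Add("Accept", "application/json");

        Console.WriteLine(await http.GetStringAsync(url));
    }
}
```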
