Let me start out by saying I am a SQL Server database expert, not a coder, so making API calls is certainly not an everyday task for me.
Having said that, I am trying to use Azure Data Factory's Copy Data tool to import data from Clio into an Azure SQL Database. I have had some limited success: data is copied over via the API and inserted into the target table, but paging is a real problem. I am testing with the billable_clients call, and only the first 25 records with the fields I specify are inserted, along with the paging record. As I understand it, the billable_clients call is eligible for bulk actions, which may be the solution, although I've not been able to figure out how they work. The URL I am calling is below:
https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name
Using Postman I've tried to make the same call while adding X-BULK: true to the header, but that returns no results. If anyone can shed some light on how the X-BULK header flag is used when making a call, or if anyone has experience loading Clio data into a SQL Server database, I'd love some feedback on your methods.
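If it helps, here is the kind of loop I believe is needed for paging. My understanding from the v4 docs is that each response carries a meta.paging.next URL to follow until it disappears (in Data Factory, the REST source's pagination rules with AbsoluteUrl pointed at that field may express the same thing). A sketch, not tested; the bearer token is a placeholder:

    // Paging sketch. Assumptions: Clio v4 wraps records in "data" and exposes
    // a "meta.paging.next" cursor URL; YOUR_ACCESS_TOKEN is a placeholder.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text.Json;
    using System.Threading.Tasks;

    class ClioPagingDemo
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "YOUR_ACCESS_TOKEN");

            string url = "https://app.clio.com/api/v4/billable_clients.json"
                       + "?fields=id,unbilled_hours,name";

            while (url != null)
            {
                string body = await http.GetStringAsync(url);
                using JsonDocument doc = JsonDocument.Parse(body);

                foreach (JsonElement record in doc.RootElement.GetProperty("data").EnumerateArray())
                    Console.WriteLine(record.GetRawText()); // insert into the SQL table here instead

                // Follow the cursor; stop when "next" is absent or null.
                url = doc.RootElement.TryGetProperty("meta", out var meta)
                      && meta.TryGetProperty("paging", out var paging)
                      && paging.TryGetProperty("next", out var next)
                      && next.ValueKind == JsonValueKind.String
                    ? next.GetString()
                    : null;
            }
        }
    }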
If any additional information regarding my attempts or setup would help, please let me know.
Thanks!
You need to download the JSON files with the Bulk API and then load them into the database. It isn't possible to insert the data directly.
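If I remember the bulk flow correctly, it is asynchronous: you send the same request with an X-BULK: true header, get back 202 Accepted with a polling URL in the Location header, poll until the file has been generated, then download it. A hedged C# sketch of that flow; the polling semantics are my assumption, so verify against Clio's bulk actions documentation:

    // Hedged bulk-download sketch. Assumptions: X-BULK: true yields a 202 with
    // a Location header to poll, and the completed poll returns the data.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ClioBulkDemo
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "YOUR_ACCESS_TOKEN");
            http.DefaultRequestHeaders.Add("X-BULK", "true");

            var kickoff = await http.GetAsync(
                "https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name");

            // Assumption: 202 Accepted with a Location header pointing at a status resource.
            Uri pollUrl = kickoff.Headers.Location;

            while (true)
            {
                var status = await http.GetAsync(pollUrl);
                if (status.StatusCode == System.Net.HttpStatusCode.OK)
                {
                    string json = await status.Content.ReadAsStringAsync();
                    Console.WriteLine(json); // bulk-load this file into SQL Server
                    break;
                }
                await Task.Delay(TimeSpan.FromSeconds(5)); // still generating; poll again
            }
        }
    }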
I am attempting to write an Azure Cosmos DB integration (Core SQL API) that integrates with an external service to provide some of the query data. For example, a query made against Cosmos DB needs to convert some of the data it returns (e.g. IDs) into real data by calling an external service via a REST API. This should only happen when querying certain columns.
I initially investigated using a JS stored procedure and/or a UDF to make this external call, but the JS environment seems to be extremely limited and doesn't provide any way to make external calls. I then tried https://github.com/Oblarg/cosmosdb-storedprocs-ts, which uses webpack to bundle all of Node.js into the stored procedure, allowing node modules to be used in stored procedures. While this does allow some node modules to be used, whenever I try to use the "https", "fetch", or "axios" modules to make an HTTP GET request I get errors (the same code works fine in a normal node environment, but I'm not a JS expert and can't seem to work past them). After a day of attempts it seems the stored procedure approach is not possible.
Is this the case, or is there some way of making HTTP GET requests from a JS stored procedure? If it is not possible with stored procedures, are there any other techniques to achieve the requirement of reading data from a remote API when querying Cosmos DB?
Thanks
There is no way to achieve this from Cosmos DB directly. For queries you also cannot use the change feed, as the documents don't change, so your only option is a function or some preprocessor app to handle it; as you say it's not ideal, but there is no other solution here. If it were an insert or an update, the change feed would let you do this, but for plain queries it's not possible.
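As a sketch of the preprocessor idea, something like the following, using the Microsoft.Azure.Cosmos v3 SDK: query the container, then resolve the IDs against the external service before returning results. The database/container names and the external endpoint are hypothetical placeholders:

    // "Preprocessor" sketch: query Cosmos DB, then enrich via an external REST API.
    // The account, database/container names, and external endpoint are placeholders.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    class EnrichingQueryDemo
    {
        static async Task Main()
        {
            var cosmos = new CosmosClient("https://myaccount.documents.azure.com:443/", "account-key");
            Container container = cosmos.GetContainer("mydb", "mycontainer");
            using var http = new HttpClient();

            var query = new QueryDefinition("SELECT c.id, c.externalId FROM c");
            FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);

            while (iterator.HasMoreResults)
            {
                foreach (var doc in await iterator.ReadNextAsync())
                {
                    // Resolve the ID against the external service (hypothetical endpoint).
                    string externalId = doc.externalId.ToString();
                    string detail = await http.GetStringAsync(
                        $"https://external.example/api/items/{externalId}");
                    Console.WriteLine($"{doc.id}: {detail}");
                }
            }
        }
    }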
As the title says, I am creating a dashboard.
The dashboard should include an option to view data inserted in a database, live, or at least "live" with minimal delay.
I was thinking about two approaches:
1. When the option is used, the back end creates a trigger in the database (it's only certain data, so I would have to change the trigger according to the data). The trigger then sends the new data via HTTP to the back end. What I see as a problem is that the delay of sending the data, and possible errors, could block the whole database.
1.1. Same as 1, but the trigger puts the new data in a separate table which I can then query and delete from.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of these is the best way to do it? Am I missing something? How is this usually done?
The database is PostgreSQL; the back end and front end are in Node.js.
The Microsoft documentation here suggests using await client.OpenAsync(); to avoid startup latency to Cosmos DB. This seems to be applicable only to the SQL API. I am trying to use the Table API and could not manage to do the same. My first request executes in 1500 ms and subsequent ones take only 40 ms, so that would be a very nice improvement.
I have tried both Microsoft.Azure.Cosmos.Table and Microsoft.WindowsAzure.Storage to connect, but did not find any way of doing this. The only thing I can think of is instead issuing a "dummy" request that is certain to return nothing, to achieve the same goal.
Is there any better way to initialize the connection?
A simple solution would be to query anything that you know exists.
Any call using the client will initialise the connection and do the (approximately) 8 requests that CosmosDB needs.
Reading the database account would be the simplest way to achieve this.
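For example, with the Microsoft.Azure.Cosmos.Table package a cheap ExistsAsync call works as the warm-up; there is no OpenAsync equivalent there as far as I know, so any real request will do, and the table name below is a placeholder:

    // Warm-up sketch for the Table API (Microsoft.Azure.Cosmos.Table package).
    // Any real request forces the connection handshake so later calls are fast;
    // ExistsAsync on a known table is a cheap choice. "MyTable" is a placeholder.
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos.Table;

    class TableWarmupDemo
    {
        static async Task Main()
        {
            CloudStorageAccount account =
                CloudStorageAccount.Parse("<cosmos-table-connection-string>");
            CloudTableClient client =
                account.CreateCloudTableClient(new TableClientConfiguration());
            CloudTable table = client.GetTableReference("MyTable");

            // This first call pays the ~1500 ms startup cost once, up front.
            bool exists = await table.ExistsAsync();
        }
    }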
I've been working on a Xamarin.Forms application in Visual Studio using Azure for the backend for a while now, and I've come across a really strange issue.
Please note that I am following the methods mentioned in this blog.
For some strange reason, the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that same solution. What I mean is that if I create another solution that accesses the exact same backend, it can create and sync its own data, but it will not bring over the data generated by the other solution, even though both appear to have exactly the same access. This looks like some kind of security feature/issue, but I can't quite make sense of it.
Has anyone else encountered this? Is there a workaround? This could potentially cause problems down the road if I ever want to create another solution that accesses the same system/data for whatever reason.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution.
According to the tutorial you provided, I found that the related PullAsync is using incremental sync.
await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
Incremental Sync:
The first parameter to the pull operation is a query name that is used only on the client. If you use a non-null query name, the Azure Mobile SDK performs an incremental sync. Each time a pull operation returns a set of results, the latest updatedAt timestamp from that result set is stored in the SDK's local system tables. Subsequent pull operations retrieve only records after that timestamp.
Here is my test, you could refer to it for a better understanding of Incremental Sync:
Client : await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());
The client SDK checks whether there is a record whose id equals deltaToken|{table-name}|{query-id} in the __config table of your SQLite local store.
If there is no such record, the SDK sends a request like the following to pull your records:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Note: the $filter would be set as (updatedAt ge datetimeoffset'1970-01-01T00:00:00.0000000+00:00')
If there is such a record, the SDK picks up its value as the latest updatedAt timestamp and sends a request like the following:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'2017-06-26T02%3A44%3A25.3940000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Per my understanding, if you run the same logical query with the same (non-null) query ID in different mobile clients, you need to make sure the local database is newly created by each client. Also, if you want to opt out of incremental sync, pass null as the query ID. In this case, all records are retrieved on every call to PullAsync, which is potentially inefficient. For more details, you could refer to How offline synchronization works.
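In code the difference is only the first argument (a sketch against the sync-table API shown above):

    // Incremental sync: only records newer than the stored updatedAt are pulled.
    await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());

    // Opt out of incremental sync: a null query ID pulls all records every time.
    await todoTable.PullAsync(null, todoTable.CreateQuery());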
Additionally, you could leverage Fiddler to capture the network traces when you invoke PullAsync, in order to troubleshoot your issue.
Sybase ASEBulkCopy is not working.
I have set the EnableBulkLoad attribute to 1 in the connection string.
It is uploading one record at a time, even after setting the batch size to 500.
What other settings am I missing?
Can someone please help me with this?
Thanks in advance.
Whether a bulk load actually happens depends on other things as well, such as the presence of indexes on the target table. By enabling bulk load you're basically telling the ASE server that it should try to do bulk uploading if it can -- but maybe it cannot, in which case it falls back to non-bulk inserts.
I'm not sure I understand the details of your question though. What do you mean by "upload"? Does your client app send only 1 record to the ASE server at a time?
Or does it mean that ASE performs regular inserts instead of bulk inserts? If the latter, how did you diagnose that?
I recommend trying it first with the 'bcp' client utility to figure out whether bulk loading is possible in the first place.
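For reference, a minimal AseBulkCopy sketch, assuming the Sybase.Data.AseClient ADO.NET provider; the connection string, table, and column names are placeholders. If ASE still falls back to row-by-row inserts with code shaped like this, the bcp test above is the right next diagnostic:

    // Minimal AseBulkCopy sketch (Sybase.Data.AseClient ADO.NET provider).
    // Assumes EnableBulkLoad=1 in the connection string and a target table
    // whose columns match the DataTable; all names here are placeholders.
    using System.Data;
    using Sybase.Data.AseClient;

    class AseBulkCopyDemo
    {
        static void Main()
        {
            var source = new DataTable();
            source.Columns.Add("Id", typeof(int));
            source.Columns.Add("Name", typeof(string));
            source.Rows.Add(1, "example");

            using (var conn = new AseConnection(
                "Data Source=myase;Port=5000;Database=mydb;Uid=user;Pwd=pwd;EnableBulkLoad=1"))
            {
                conn.Open();

                var bulk = new AseBulkCopy(conn);
                bulk.DestinationTableName = "TargetTable";
                bulk.BatchSize = 500;
                bulk.WriteToServer(source);
            }
        }
    }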