Handsontable and NodeJS - node.js

I'm totally new to Handsontable and I want to use it in my Node.js & React app to save data in a PostgreSQL database.
Basically, I've tried to render the table component and its custom headers, but I can't see how it can be used with Node.js, despite all the Googling I did.
(Screenshot: my current table on a page)
I'd appreciate your help.

Step 1: Use Handsontable to let the user fill in the data and, when they press the save button, use getSourceData or getData to obtain the table data.
Step 2: Submit the data acquired in step 1 to the Node.js server and save it to the PostgreSQL database.
I think this will make it possible; a sketch of both steps follows below.
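On the client, a minimal sketch, assuming a React component that holds the Handsontable instance in a ref; the /api/rows route is a hypothetical endpoint:

```javascript
// Client side (React): read the table data and POST it to the server.
import React, { useRef } from 'react';
import { HotTable } from '@handsontable/react';

export default function Grid() {
  const hotRef = useRef(null);

  const handleSave = async () => {
    const hot = hotRef.current.hotInstance;
    const rows = hot.getSourceData(); // or hot.getData() for rendered values

    // /api/rows is a hypothetical route on the Node.js server.
    await fetch('/api/rows', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(rows),
    });
  };

  return (
    <div>
      <HotTable
        ref={hotRef}
        data={[['', '']]}
        colHeaders={['Col A', 'Col B']}
        licenseKey="non-commercial-and-evaluation"
      />
      <button onClick={handleSave}>Save</button>
    </div>
  );
}
```

On the server, a minimal Express + node-postgres sketch; the table name my_table and its columns are hypothetical placeholders:

```javascript
// Server side: receive the rows and insert them into PostgreSQL.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // connection details come from PG* env variables

app.use(express.json());

app.post('/api/rows', async (req, res) => {
  const rows = req.body; // the array produced by getSourceData()
  try {
    for (const row of rows) {
      await pool.query(
        'INSERT INTO my_table (col_a, col_b) VALUES ($1, $2)',
        [row[0], row[1]]
      );
    }
    res.sendStatus(204);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);
```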

Related

Storing data from a form using nodejs and express

I need to get some data from a form as a POST request and save that data so I can use it on another page.
I am successful at retrieving the data from the form as JSON, but since I am not using any database, I am not able to see the retrieved data when I redirect to the next page.
I am new to using Jade, Node.js and Express. I'd like an opinion on how to retrieve the data that I've sent via the POST method.
If I use render, it loads the page with all the correct info, but if I reload or change to another page, the data simply disappears.
You can use Redux Persist (a state-management tool) or local storage to save the data. Both will persist your data so it can be retrieved when you navigate to the next page or even reload it.
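For example, a minimal sketch of the local-storage option; the storage key submittedForm is a hypothetical name:

```javascript
// Persist the submitted form data in the browser so it survives
// navigation and reloads.
const FORM_KEY = 'submittedForm';

// Call after the POST succeeds, keeping a client-side copy of the data.
function saveFormData(data) {
  localStorage.setItem(FORM_KEY, JSON.stringify(data));
}

// Call on the next page (or after a reload) to read the data back.
function loadFormData() {
  const raw = localStorage.getItem(FORM_KEY);
  return raw ? JSON.parse(raw) : null;
}
```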

Creating a Dashboard with a Livestream option

As the title says, I am creating a dashboard.
The dashboard should include an option to view data inserted in a database, live or at least "live" with minimal delay.
I was thinking about two approaches:
1. When the option is used, the back-end creates a trigger in the database (it's only certain data, so I would have to change the trigger according to the data). Said trigger should then send the new data via HTTP to the back-end.
What I see as a problem is that the delay of sending the data, and possible errors, could block the whole database.
1.1. Same as 1., but the trigger puts the new data in a separate table, which I can then query and delete the data from.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of those is the best way to do this? Am I missing something? How is this usually done?
The database is a pgsql database; back- and front-end are in Node.js.
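For reference, a common variant of approach 1 avoids HTTP entirely by using Postgres's built-in LISTEN/NOTIFY: the trigger calls pg_notify() and the Node.js back-end holds a listening connection. A hedged sketch, assuming a trigger that publishes the new row as JSON on a hypothetical dashboard_changes channel:

```javascript
// Listen for pg_notify() events from Node.js with node-postgres.
// The channel name and JSON payload shape are assumptions; the trigger
// side would call pg_notify('dashboard_changes', row_to_json(NEW)::text).
const { Client } = require('pg');

const client = new Client(); // connection details from PG* env variables

async function listenForChanges(onRow) {
  await client.connect();
  await client.query('LISTEN dashboard_changes');
  client.on('notification', (msg) => {
    onRow(JSON.parse(msg.payload));
  });
}

listenForChanges((row) => {
  console.log('new data:', row); // push to the front-end, e.g. over a WebSocket
});
```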

Refresh data from oracle database on jsf page refresh in adf 12c

I am using JDeveloper 12c to build a Fusion web app with an Oracle 12c database.
My project runs fine in the browser and can also be accessed from other systems' browsers via a static IP.
When someone accesses the application in a browser and enters some data in a table, it is saved in the database, but it doesn't show up in my browser even after refreshing the page unless I rebuild my project.
How can I get the data without rebuilding my project? (I want to fetch all updated data on page refresh.)
I don't want to create a method for this.
A browser page refresh doesn't trigger data querying. You need to call Execute on the needed iterators. For instance, you can start a task flow with an Execute method call before showing the user interface.
I am not sure, but try calling Execute on that particular VO after committing the data. Before using Execute in Java code you have to add Execute to the page definition, or go to that particular table in the page, use the Binding property under the Property Inspector, create the binding as #{backingBeanScope.popupBean.refresh}, and call it in Java like this to refresh the data while saving to the DB: 'AdfFacesContext.getCurrentInstance().addPartialTarget(refresh);'
You only need to add the EXECUTE operation of that specific table (whose data you want to refresh) from the Data Controls tab as a button, and you can label it REFRESH.

Import data from Clio to Azure database using API v4

Let me start out by saying I am a SQL Server database expert, not a coder, so making API calls is certainly not an everyday task for me.
Having said that, I am trying to use Azure Data Factory's data copy tool to import data from Clio into an Azure SQL Server database. I have had some limited success: data is copied over using the API and inserted into the target table, but paging really seems to be an issue. I am testing this with the billable_clients call, and the first 25 records with the fields I specify are inserted along with the paging record. As I understand it, the billable_clients call is eligible for bulk actions, which may be the solution, although I've not been able to figure out how it works. The URL I am calling is below:
https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name
Using Postman I've tried to make the same call while adding X-BULK: true to the header, but that returns no results. If there is anyone who can shed some light on how the X-BULK header flag is used when making a call, or if anyone has any experience loading Clio data into a SQL Server database, I'd love some feedback on your methods.
If any additional information regarding my attempts or setup would help, please let me know.
Thanks!
You need to download the JSON files with the Bulk API and then load them into the DB.
It isn't possible to insert the data directly.
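In the meantime, ordinary paging can be handled from code. A hedged sketch of walking the billable_clients endpoint from Node.js, assuming Clio's v4 list responses carry a meta.paging.next URL (cursor-style paging; verify the exact response shape against the API docs). The bearer token is a placeholder:

```javascript
// Follow meta.paging.next until the endpoint runs out of pages, collecting
// all records. The response shape is an assumption to check against the docs.
const BASE_URL =
  'https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name';

async function fetchAllBillableClients(token) {
  const records = [];
  let url = BASE_URL;

  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${token}` }, // placeholder OAuth token
    });
    if (!res.ok) throw new Error(`Clio API returned ${res.status}`);

    const body = await res.json();
    records.push(...body.data);
    url = body.meta && body.meta.paging ? body.meta.paging.next : null;
  }

  return records; // bulk-insert these into the Azure SQL table in a second step
}
```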

How do you know data has been new added in your MongoDB

I have a Node.js server running which shows data on a web interface. The data is fetched from MongoDB using Mongoose. The data is added via a Node-RED application which is isolated from the rest.
Currently my Node.js server fetches the data every 5 seconds. Is there a way to know when the data in my MongoDB has changed?
Thanks, I hope my question is clear.
I was also looking for something similar to what you are asking for a few months back. A few ways I know to do it are:
1) You can use middleware when inserting your documents into the DB. You can then send the new data either after saving it in the DB or at the time of insertion.
2) Refer to this answer, which talks about solving your problem using built-in functions provided by MongoDB; you can study them in depth in the MongoDB docs (a change-stream sketch follows below).
3) There is also another way to do this, which involves listening to changes in the log files. As you know, everything done in Mongo is recorded in its logs, so whenever there is a change in the data you can learn of it from there too. You will have to do the digging yourself with this method.
Hope it helps!
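A hedged sketch of the built-in option using MongoDB change streams (available since 3.6, and they require a replica set); the model, schema and connection string are hypothetical:

```javascript
// Watch a collection for changes via Mongoose's Model.watch(), which wraps
// MongoDB change streams, so the server can push updates instead of polling.
const mongoose = require('mongoose');

const Reading = mongoose.model(
  'Reading',
  new mongoose.Schema({ value: Number }, { timestamps: true })
);

async function main() {
  await mongoose.connect('mongodb://localhost:27017/mydb');

  // Emits an event for every insert/update/delete on the collection.
  const changeStream = Reading.watch();
  changeStream.on('change', (change) => {
    if (change.operationType === 'insert') {
      console.log('new document:', change.fullDocument);
    }
  });
}

main();
```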
