How can I write KSQL queries in Node.js?

Could you provide me with a code sample in Node.js that runs KSQL queries on Kafka? For example, I would like to create and then retrieve a simple table.

As far as I know, there's no library that would do that for you. Did you look at using the REST API for this?
https://docs.confluent.io/current/ksql/docs/developer-guide/api.html
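To make that concrete, here is a minimal sketch that POSTs statements to the KSQL server's /ksql REST endpoint, assuming the server runs on the default port 8088 and Node 18+ (for the built-in fetch). The stream name users_stream and the table name users_by_id are hypothetical; note also that persistent statements (CREATE, SHOW, etc.) go to /ksql, while transient SELECT queries go to the streaming /query endpoint instead.

const KSQL_URL = 'http://localhost:8088/ksql';

// Send a KSQL statement to the server's REST endpoint and return the JSON reply.
async function runKsql(ksql) {
  const res = await fetch(KSQL_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/vnd.ksql.v1+json; charset=utf-8' },
    body: JSON.stringify({ ksql, streamsProperties: {} }),
  });
  return res.json();
}

(async () => {
  // Create a table from a pre-existing (hypothetical) stream,
  // then list tables to confirm it was created.
  console.log(await runKsql(
    'CREATE TABLE users_by_id AS SELECT userid, COUNT(*) AS cnt ' +
    'FROM users_stream GROUP BY userid;'
  ));
  console.log(await runKsql('SHOW TABLES;'));
})();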

Related

How to connect to Flink SQL Client from NodeJS?

I'm trying to use Apache Flink's Table concept in one of my projects to combine data from multiple sources in real time. Unfortunately, all of my team members are Node.js developers, so I'm looking for possible ways to connect to Flink from Node.js and query it. Flink's documentation for the SQL Client mentions that
The SQL Client aims to provide an easy way of writing, debugging, and submitting table programs to a Flink cluster without a single line of Java or Scala code. The SQL Client CLI allows for retrieving and visualizing real-time results from the running distributed application on the command line.
Based on this, is there any way to connect to Flink's SQL Client from Node.js? Is there a driver already available for this, like the Node.js drivers for MySQL or MSSQL? Otherwise, what are the possible ways of achieving this?
Any ideas or clarity on achieving this would be greatly appreciated.
There's currently not much that you can do. The SQL Client runs on your local machine and connects to the cluster from there. I think what will help you is the introduction of the Flink SQL Gateway, which is expected to be released with Flink 1.16. You can read more about it at https://cwiki.apache.org/confluence/display/FLINK/FLIP-91%3A+Support+SQL+Gateway
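Once the gateway ships, querying it from Node.js could look roughly like the sketch below. To be clear, this is speculative: the gateway address, the endpoint paths, and the token-based result paging are assumptions taken from the FLIP-91 proposal (sessions, statements, paged results) and may differ in the released version.

const GATEWAY = 'http://localhost:8083/v1'; // hypothetical gateway address

async function post(path, body) {
  const res = await fetch(GATEWAY + path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body || {}),
  });
  return res.json();
}

(async () => {
  // Open a session, submit a statement, then fetch the first page of results.
  const { sessionHandle } = await post('/sessions');
  const { operationHandle } = await post(
    `/sessions/${sessionHandle}/statements`,
    { statement: 'SELECT * FROM my_table' } // my_table is a placeholder
  );
  const firstPage = await fetch(
    `${GATEWAY}/sessions/${sessionHandle}/operations/${operationHandle}/result/0`
  ).then((r) => r.json());
  console.log(firstPage);
})();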
Another alternative is to check out some of the products that offer a FlinkSQL editor on the market, maybe that is useful path for your colleagues.
For example:
https://www.ververica.com/apache-flink-sql-on-ververica-platform
https://docs.cloudera.com/csa/1.7.0/ssb-overview/topics/csa-ssb-intro.html
Note that this is not exactly what you asked for, but it could be an option to enable your team.

nodejs bigtable copy rows using filters like prefix

Is it possible in Bigtable/nodejs-bigtable to do something similar to createReadStream, but instead of first retrieving the rows just to write them back again, have the copy happen on the server, like an INSERT INTO ... SELECT FROM in SQL?
Cloud Bigtable does not offer any direct way to run application code on its servers.
Cloud Bigtable's general recommendation for high-volume jobs is to run them on Dataflow with the Cloud Bigtable HBase connector (although that requires Java code).
That said, the specific implementation very much depends on your objectives. Any additional information about your use case would help.
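In the meantime, the copy has to happen client-side. Here is a minimal sketch with @google-cloud/bigtable; the instance/table names and the 'src#' key prefix are placeholders. One caveat: row.data as returned by a read is shaped slightly differently from the mutation format insert() expects, so you may need to reshape the cells before writing.

const { Bigtable } = require('@google-cloud/bigtable');

const bigtable = new Bigtable();
const table = bigtable.instance('my-instance').table('my-table'); // placeholders

table
  .createReadStream({ prefix: 'src#' })
  .on('error', console.error)
  .on('data', (row) => {
    // Re-key the row and write it back. row.data may need reshaping into
    // the { family: { qualifier: value } } form that insert() expects.
    const newKey = row.id.replace(/^src#/, 'dst#');
    table.insert({ key: newKey, data: row.data }).catch(console.error);
  })
  .on('end', () => console.log('copy complete'));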

Use Salesforce API to extract data into Alteryx

I have an Alteryx workflow and want to hook it up to import data from Salesforce, specifically Veeva (which sits on top of Salesforce). I want to use the Salesforce API, but I'm not sure how I can do this simply with Alteryx.
Is it possible to use Alteryx with some other software/framework to import data and run it through my ETL process?
I've heard I can possibly use Apache Spark, but I'm not familiar with it. I've also heard I can possibly use Alteryx with Apache Camel, but I'm not sure about this either. Thanks!
You can find out how to connect to an API in Alteryx at this link:
https://community.alteryx.com/t5/Engine-Works-Blog/REST-API-In-5-Minutes-No-Coding/ba-p/8137
With the Salesforce API, it can sometimes be easiest to use the SOAP API for authentication and the REST API for the download. I'm not entirely sure why, but both Alteryx & Tableau do that behind the scenes for their connections.
Basically, you call out to the SOAP API for authentication, get the auth token, and use that on subsequent calls to the REST API. The link above should tell you how to implement that in Alteryx.
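To illustrate that two-step pattern outside of Alteryx (for instance, as a script run from the Run Command Tool mentioned below), here is a rough Node.js sketch. The credentials, the API version, and the regex-based XML extraction are placeholders for illustration only; in practice use a real XML parser.

async function salesforceQuery() {
  // 1. Authenticate via the SOAP login call.
  const envelope = `<?xml version="1.0" encoding="utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:urn="urn:partner.soap.sforce.com">
  <soapenv:Body>
    <urn:login>
      <urn:username>USER@EXAMPLE.COM</urn:username>
      <urn:password>PASSWORD_PLUS_SECURITY_TOKEN</urn:password>
    </urn:login>
  </soapenv:Body>
</soapenv:Envelope>`;

  const loginRes = await fetch('https://login.salesforce.com/services/Soap/u/50.0', {
    method: 'POST',
    headers: { 'Content-Type': 'text/xml; charset=UTF-8', SOAPAction: 'login' },
    body: envelope,
  });
  const xml = await loginRes.text();
  // Crude extraction for illustration only.
  const sessionId = xml.match(/<sessionId>(.*?)<\/sessionId>/)[1];
  const instance = new URL(xml.match(/<serverUrl>(.*?)<\/serverUrl>/)[1]).origin;

  // 2. Use the session id as a bearer token on subsequent REST calls.
  const soql = encodeURIComponent('SELECT Id, Name FROM Account LIMIT 10');
  const data = await fetch(`${instance}/services/data/v50.0/query?q=${soql}`, {
    headers: { Authorization: `Bearer ${sessionId}` },
  }).then((r) => r.json());
  console.log(data.records);
}

salesforceQuery().catch(console.error);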
As for other software/frameworks for import, the simple answer is yes. The tools to look at for this are the R Tool and the Run Command Tool. They will let you import data either using an R script or from the command line (allowing Python, JS, batch, etc.).
Spark is supported in Alteryx both natively and using the In-DB scripts. Theoretically you could use Alteryx with Apache Camel, but I don't know enough about the specifics of the Camel endpoints to say that with certainty.

Where should I write the queries for Neo4j and where is the data stored?

I am going to build an application using Neo4j as the graph database and Express.js as a platform for Node.js. However, this is the first time I am using Neo4j, and I do not know where I should write my queries, how to view the data, or where the data is stored. Any help is appreciated :)
Just check out the documentation of the Neo4j Node.js driver.
You'll see that you can write your queries either as part of service/helper methods or as part of your domain model.
The data is stored in a directory on disk, which is configured in conf/neo4j-server.properties.
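A minimal sketch with recent versions of the official neo4j-driver package, assuming a local server on the default Bolt port and placeholder credentials:

const neo4j = require('neo4j-driver');

const driver = neo4j.driver(
  'bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password') // placeholder credentials
);

async function main() {
  const session = driver.session();
  try {
    // Cypher queries are just strings passed to session.run().
    const result = await session.run(
      'MATCH (p:Person) RETURN p.name AS name LIMIT 10'
    );
    result.records.forEach((r) => console.log(r.get('name')));
  } finally {
    await session.close();
    await driver.close();
  }
}

main().catch(console.error);

For viewing the data interactively, the Neo4j Browser (served at http://localhost:7474 on a default local install) lets you run Cypher queries and visualize the results.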

Kdb+ drivers for NodeJS

I'd like to use NodeJS to implement a simple REST interface to pull data from Kdb+.
Does anyone know of any existing driver or a project to implement a Kdb+ driver for NodeJS?
Not that I know of... but it shouldn't be too hard, really. You could simply use the HTTP interface to make calls to kdb+ and parse the result into JSON with Arthur's JSON implementation.
There is https://github.com/michaelwittig/node-q
I've used it in my project and it works great.
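A minimal sketch with node-q, assuming a kdb+ process listening on port 5001; the trade table and the query are placeholders:

const nodeq = require('node-q');

nodeq.connect({ host: 'localhost', port: 5001 }, (err, con) => {
  if (err) throw err;
  // con.k sends a q expression and returns the deserialized result.
  con.k('select from trade where sym=`AAPL', (err, res) => {
    if (err) throw err;
    console.log(res); // rows arrive as plain JS objects
    con.close();
  });
});

A handler like this could then sit behind a small Express route to provide the simple REST interface you described.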
