Our production environment doesn't provide a shell, only a JavaScript engine and a REST interface. Our ArangoDB server will be installed at a remote location. Since all of our users are comfortable with JavaScript, we are looking for a way to give them an interface where they write their ArangoDB queries in JavaScript (the way we do in arangosh) and we execute them remotely and return the result.
Is this somehow possible?
I am new to ArangoDB, and so far I have found that there is only a REST interface available for interacting with it remotely.
arangosh is not available and cannot be used.
You can use arangosh to connect to the remote server, since arangosh itself talks to the server over the REST interface. All information on connecting to your server is available via arangosh --help. The default behaviour of arangosh is to connect to a local ArangoDB instance, but it can connect to remote ones as well.
You probably want to do something like the following, where 1.2.3.4 is the IP of your remote server:
arangosh --server.endpoint tcp://1.2.3.4:8529
If you want to execute arbitrary JavaScript code in ArangoDB from an application, you can use the endpoint /_admin/execute described here, which takes JavaScript code as its request body and executes it in ArangoDB. Be aware that this is a potential security risk.
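A rough sketch of calling that endpoint from Node.js (18+, for the built-in fetch), assuming an ArangoDB 3.x server started with --javascript.allow-admin-execute true so the endpoint is enabled; the credentials and the users collection are placeholders:

const script = `
  // module name for ArangoDB 3.x; 2.x servers use "org/arangodb" instead
  var db = require("@arangodb").db;
  return db._query("FOR d IN users LIMIT 5 RETURN d").toArray();
`;

async function run() {
  const res = await fetch("http://1.2.3.4:8529/_db/_system/_admin/execute", {
    method: "POST",
    // placeholder credentials; ArangoDB accepts HTTP Basic auth by default
    headers: { Authorization: "Basic " + Buffer.from("root:password").toString("base64") },
    body: script, // the raw JavaScript source, not JSON
  });
  console.log(await res.text()); // the serialized return value of the script
}

run().catch(console.error);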
I've successfully installed the Pervasive 13 64-bit client onto Ubuntu Server 18.04.
How can I now establish a connection to the Pervasive 13 server (which is installed on a Windows 2008 R2 server) and perform a SQL query?
I'm extremely confused by the documentation, which directs me to the bcfg tool after client installation. I'm not clear whether that tool is for configuring the server or for setting up the client connection. Either way, the documentation is too abstract for my comprehension; I need concrete examples of someone successfully establishing a connection (at least to a hypothetical Pervasive server at some hypothetical IP address), and NOT JUST abstract syntax that never shows an example of an SQL statement being submitted from the Linux command line.
Seriously, the documentation covers so much detail about things I don't immediately care about that I can never seem to work out my practical needs, which are simply to establish a connection to the database, perform a SQL query, and get a result set.
The installation of the client should have sensible defaults, and the documentation, after installation, should focus on getting you connected and running SQL statements as quickly as possible, instead of going on and on about details that are only of interest if the defaults aren't sensible. Let me connect first! Then, if I have a problem, only then do I care to learn further detail about other aspects of configuring the connection.
Pervasive is such an obscure database server that I'm left with only this documentation to figure this out. Any other database would probably have YouTube videos that show you how to install the client, start making some SQL queries, and get result sets.
Someone at Actian ought to be kind enough to make a quick-start video for the client on Ubuntu Server that quickly covers installation and finishes with you submitting SQL queries and getting result sets. After all, that's the purpose of a database client.
Can someone please provide some concrete examples of how I can turn this successful installation into a relationship with the database server where I can submit SQL queries and receive result sets?
I'm not sure why the documentation points to bcfg.
If the client is installed and didn't display any errors, you need to add an ODBC DSN using dsnadd (https://docs.actian.com/psql/PSQLv13/index.html#page/uguide%2Fuguide.dsnadd.htm%23ww68699). An example of creating a client-side DSN pointing to a remote database is:
dsnadd -dsn=clientDemodata -db=Demodata -host=WindowsServerName
(where clientDemodata is the DSN created on the Linux box, Demodata is the PSQL database, and WindowsServerName is the remote server).
Once the DSN has been added, you should be able to use isql or isql64 (https://docs.actian.com/psql/PSQLv13/index.html#page/uguide%2Fuguide.isql.htm%23ww138933) to execute a query.
Running isql / isql64 with just the DSN will let you execute SQL queries interactively:
isql64 clientDemodata
An example of running isql using a file as input for the SQL statement(s) is:
cat two-queries.sql | isql clientDemodata -b
If you've done all that, what errors or behavior are you seeing?
I want to know if it is possible to run an isql query via HTTP in OpenLink Virtuoso.
I understand that the isql server runs on port 1111, but I cannot find any example (e.g. curl) to run an SQL query (not SPARQL) via HTTP.
I don't want to use ODBC because that would require configuration on different environments (UNIX or Windows) and I don't have time to change our Vagrant scripts for that.
JDBC is also not an option because we run on Node.js, and that would require a wrapper that would add overhead to the query times.
Running OpenLink Virtuoso 7.
The data service at 1111 is not an HTTP service, so curl cannot be used against it.
You may be able to script something to run against the HTTP-accessible iSQL implementation at <http://{{virtuoso-host:port}}/conductor/isql.vspx>. Note that this is digest-auth protected and was intended for human interaction, so the client tool may need to parse the HTML of the response.
If that won't serve your needs, I suggest you ask on the Virtuoso Users mailing list. There are likely other options.
I have a Java class which manages jobs and executes them via Spark (using 1.6).
I am using the API sparkLauncher.startApplication(SparkAppHandle.Listener... listeners) in order to monitor the state of the job.
The problem is that I moved to a real cluster environment, and this approach can't work when the master and workers are not on the same machine, as the internal implementation uses localhost only (loopback) to open a port for the workers to bind to.
The API sparkLauncher.launch() works but doesn't let me monitor the status.
What is the best practice for a cluster environment using Java code?
I also saw the option of the hidden REST API. Is it mature enough? Should I enable it in Spark somehow (I am getting access denied, even though the port is open from outside)?
REST API
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available both for running applications and in the history server. The endpoints are mounted at /api/v1. E.g., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
You can find more details here.
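As a quick sketch of polling those endpoints (shown here in Node.js with its built-in fetch; the REST API is language-agnostic, so the same two requests can be made from Java or any HTTP client), assuming a driver running locally with its UI on the default port 4040:

async function sparkStatus() {
  // list the applications this endpoint knows about
  const apps = await (await fetch("http://localhost:4040/api/v1/applications")).json();

  // print the status of every job of every application
  for (const app of apps) {
    const jobs = await (await fetch(`http://localhost:4040/api/v1/applications/${app.id}/jobs`)).json();
    jobs.forEach(job => console.log(app.id, job.jobId, job.status));
  }
}

sparkStatus().catch(console.error);

Each job entry carries a status field, which is usually enough to track whether a submitted application has finished.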
Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:
A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information.
Information about the running executors
You can access this interface by simply opening http://driver-node:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc).
You can find more details here.
I know that Ruby on Rails has this feature (using one database in development and another in production), and the railstutorial specifically encourages it. However, I have not found such a thing for Node.js. If I want to run SQLite3 on my machine so I can have easy-to-use database access, but Postgres in production on Heroku, how would I do this in Node.js? I can't seem to find any tutorials on it.
Thank you!
EDIT: I meant to include Node.JS + Express.
It's possible of course, but be aware that this is probably a bad idea: http://12factor.net/dev-prod-parity
If you don't want to go through the hassle of setting up postgres locally, you could instead use a free postgres plan on Heroku and connect to it from your local machine:
DATABASE_URL=url node server.js
A .env file can make this easier:
https://devcenter.heroku.com/articles/heroku-local#copy-heroku-config-vars-to-your-local-env-file
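A minimal sketch of reading that variable in the app, assuming the dotenv and pg packages are installed (the query is just a connectivity check):

// load DATABASE_URL from the .env file locally; on Heroku it is already set in the environment
require("dotenv").config();

const { Pool } = require("pg");
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

pool.query("SELECT NOW()")
  .then(res => console.log(res.rows[0]))
  .catch(console.error)
  .finally(() => pool.end());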
To switch between the production and development DB, you can use the fact that your application runs on different ports locally and on Heroku.
As Heroku by default serves the application on port 80, you will be using some other port while running your app locally.
This lets you figure out at run time whether your application is running locally or in production, and you can switch databases accordingly.
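A rough sketch of that check, assuming the app reads its port from the PORT environment variable that Heroku sets (the database config shapes are only illustrative):

// decide at run time whether we are on Heroku (production) or running locally (development)
const port = Number(process.env.PORT) || 3000; // Heroku injects PORT; locally we fall back to 3000
const inProduction = process.env.PORT !== undefined;

// pick a database configuration accordingly
const dbConfig = inProduction
  ? { client: "pg", connection: process.env.DATABASE_URL }
  : { client: "sqlite3", connection: { filename: "./dev.sqlite3" } };

console.log(`App on port ${port}, using ${dbConfig.client}`);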
You could use something like jugglingdb to do this:
JugglingDB(3) is a cross-db ORM for Node.js, providing a common interface to access most popular database formats. Currently supported are: mysql, sqlite3, postgres, couchdb, mongodb, redis, neo4j and js-memory-storage (yep, a self-written engine for test usage only). You can add your favorite database adapter; check out one of the existing adapters to learn how, it's super easy, I guarantee.
JugglingDB also works on the client side (using WebService and Memory adapters), which allows you to write rich client-side apps that talk to the server using a JSON API.
I personally haven't used it, but having a common API to access all your database instances would make it super simple to use one locally and one in production. You could wire up some environment detection without too much trouble as well and have it automatically select the target DB depending on where it's running.
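A minimal sketch of that idea with jugglingdb, assuming the sqlite3 and postgres adapters are installed; the model and the exact adapter settings are illustrative and may vary by adapter version:

const Schema = require("jugglingdb").Schema;

// pick an adapter based on where we are running: Heroku sets DATABASE_URL, local dev usually doesn't
const schema = process.env.DATABASE_URL
  ? new Schema("postgres", { url: process.env.DATABASE_URL })
  : new Schema("sqlite3", { database: "./dev.sqlite3" });

// the same model definition works against either backend
const User = schema.define("User", {
  name: String,
  email: String,
});

// create the table for the model, then insert and read back a row (callback-style API)
schema.automigrate(function () {
  User.create({ name: "Ada", email: "ada@example.com" }, function (err, user) {
    if (err) throw err;
    console.log(user.id, user.name);
  });
});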
Here is the problem: we have a client that uses a Progress OpenEdge database, and we need to execute queries on this database from our servers.
Currently the drivers are installed on our Windows server, and the PHP code uses ODBC to run the queries.
Now we would like to move the code to a Linux server. We previously tried to work with their Linux drivers, but that attempt failed.
The question is: is it somehow possible to run PHP code on a Linux server that communicates with the Windows server, runs the query on the Windows server, and returns the results to Linux?
How would you approach this problem?
Thanks!
Yes, it's possible. Your question boils down to "how can my Linux server ask my Windows server to do something" (where the "something" happens to be "talk to a database"), and there are a variety of ways to accomplish that. You could run a web service (RESTful or SOAP) on the Windows server, for example.
Make sure you think about security: if you deploy a service on your Windows server that lets remote clients modify a database, you have to be mindful of which remote clients are allowed to use that service. The last thing you want to do is accidentally allow random strangers to run arbitrary queries against your database.
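As a rough illustration of that pattern, the Linux side would make an authenticated HTTP call to a small query service you host on the Windows server. The endpoint, parameter names, and token below are all made up, and the request is shown in JavaScript only to keep it short (a PHP client would send the same request with curl or Guzzle):

// ask the Windows relay service to run a query and return JSON rows
async function runRemoteQuery(sql) {
  const res = await fetch("https://windows-server.example.com/api/query", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + process.env.RELAY_TOKEN, // shared secret so strangers can't use the service
    },
    body: JSON.stringify({ sql }),
  });
  if (!res.ok) throw new Error(`relay returned ${res.status}`);
  return res.json(); // e.g. an array of row objects
}

runRemoteQuery("SELECT ...").then(console.log).catch(console.error);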
We have a Knowledgebase Article detailing some setup procedures for Linux installations; it also has a video explaining some aspects of the setup. If the other answers haven't provided a complete solution for you, hopefully our article can at least get you started in the right direction.
Also keep in mind that depending on your version of OE, the driver libraries may be different.