Run scripts in a Node server

I have a server application using Node, and sometimes I need to run a script against it. Some example scenarios where this is necessary:
During development, I need to create many entries in the database to simulate a use case.
In production, a bug happened and some information was not stored correctly in the DB, so I may need to backfill it.
The way I know how to do this in Node is to deploy an instance of the server with an endpoint that contains the code to be run.
It is convenient to use the Node server because it already has a lot of code I can reuse, for example DAOs and safe create/delete functions.
Django has an interactive Python interpreter that does this job, but I do not know of any similar approach in Node.
Other strategies for handling these use cases are very welcome.

During development, you can just go with debugging, although that requires triggering a breakpoint. Alternatively, if it's just about your database, there are better external programs for interacting with it.
To actually answer your question: Node does have a VM module to run code and even a REPL module to help you write custom REPLs. This does require some work to wire up your APIs, but it is doable.
As for how you actually interact with that REPL, there are several options: a raw socket (and Telnet), a terminal on your site communicating over a WebSocket, a simple HTTP endpoint, ...
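A minimal sketch of the REPL-over-a-socket approach, using only Node's built-in net and repl modules; the ./db module and the port are assumptions standing in for whatever DAO code you want to reuse:

const net = require('net');
const repl = require('repl');
const db = require('./db'); // hypothetical: your existing DAO layer

net.createServer((socket) => {
  // One REPL session per connection, reading from and writing to the socket.
  const session = repl.start({ prompt: 'admin> ', input: socket, output: socket, terminal: true });
  session.context.db = db; // expose your server's APIs to the session
  session.on('exit', () => socket.end());
}).listen(5001, '127.0.0.1'); // bind to localhost only

You could then connect with telnet 127.0.0.1 5001 and call your DAO functions interactively.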

You can add scripts to your package.json that point to the script file, for example "seed:db": "node ./src/seeder.js -i" and "drop:db": "node ./src/seeder.js -d", where the -i and -d flags determine whether I am inserting or deleting and can be read from process.argv[2].
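A minimal sketch of such a seeder.js, assuming hypothetical insertData/deleteData helpers that reuse your existing data-access code:

// seeder.js
const { insertData, deleteData } = require('./dao'); // hypothetical helpers

const flag = process.argv[2]; // "-i" or "-d" from the npm script

if (flag === '-i') {
  insertData().then(() => process.exit(0));
} else if (flag === '-d') {
  deleteData().then(() => process.exit(0));
} else {
  console.error('Usage: node ./src/seeder.js [-i | -d]');
  process.exit(1);
}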

Related

Shipping Postgres inside a node server [duplicate]

If it's possible, I'm interested in being able to embed a PostgreSQL database, similar to SQLite. I've read that it's not possible, but I'm no database expert, so I want to hear it from you.
Essentially I want PostgreSQL without all the configuration and installation. If it's possible, tell me how.
Run PostgreSQL as a background process.
Start a separate thread in your application that starts a PostgreSQL server in local mode, either by binding it to localhost on some random free port or by using sockets (does Windows support Unix sockets?). That should be fairly easy, something like:
system("\"C:\\Program Files\\MyApplication\\pgsql\\postgres.exe\" -D \"C:\\Documents and Settings\\User\\Local Settings\\MyApplication\\database\" -h 127.0.0.1 -p 12345");
and then just connect to 127.0.0.1:12345.
When your application quits, you can always send a SIGTERM to your thread and then wait a few seconds for PostgreSQL to quit (i.e. join the thread).
PS: You can also use pg_ctl to control your "embedded" database, even without threads: just run "pg_ctl start" (with appropriate options) when starting the application and "pg_ctl stop" when quitting it.
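Since the question is about a Node server, here is a rough sketch of the same idea using Node's child_process module; the data directory path is an assumption, it must already have been initialized with initdb, and the postgres binary is assumed to be on PATH:

const { spawn } = require('child_process');

// Launch a private PostgreSQL server bound to localhost on a fixed port.
const pg = spawn('postgres', ['-D', '/path/to/pgdata', '-h', '127.0.0.1', '-p', '12345'], {
  stdio: 'inherit',
});

// Ask for a clean shutdown when the application exits.
process.on('exit', () => pg.kill('SIGTERM'));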
You cannot embed it, nor should you try.
For embedding you should use SQLite, as you mentioned, or the Firebird RDBMS.
Unless you do a major rewrite of the code, it is not possible to run Postgres "embedded". Either run it as a separate process or use something else. SQLite is an excellent choice, but there are others. MySQL has an embedded version; see it at http://mysql.com/oem/. There are also several Java choices, and the Mac has Core Data you can write to. Hell, you can even use FoxPro. What OS are you on, and what services do you need from the database?
You can't embed it as an in-process database like SQLite, but you can easily embed it into your application's setup using Inno Setup (http://www.innosetup.org). Search their mailing list archive and you will find that someone has done most of the work for you; all you have to do is grab the zipped distro, and you can easily have PostgreSQL installed when the user installs your app. You can then use the pg_hba.conf file to restrict the server to localhost only. Not a true embedded DB, but it would work.
PostgreSQL is intended to run as a stand-alone server; it's probably possible to embed it if you hack at it hard and long enough, but it would be much easier to just run it as intended in a separate process.
HSQLDB (http://hsqldb.org/) is another db which is easily embedded. Requires Java, but is an excellent and often-used choice for Java applications.
Anyone tried on Mac OS X:
http://pagesperso-orange.fr/bruno.gaufier/xhtml/prod_postgresql.xhtml
http://www.macosxguru.net/article.php?story=20041119135924825
(Of course sqlite would be my embedded db of choice as well)
Well, I know this is a very old post, but if anyone has this question nowadays, I would refer to the following:
You can use containers running Postgres. Here's a post that could be helpful, doing something along these lines using R:
https://rsangole.netlify.app/post/2021/08/07/docker-based-rstudio-postgres/?utm_source=pocket_mylist
Take a look at DuckDB (https://duckdb.org/docs/installation/). It is relatively new and still needs to mature, but it works pretty much like an embedded database ("in-process, serverless"), with bindings for several languages (Python, R, Java, ...).
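For a Node server, a rough sketch using the duckdb npm package; the snippet uses its callback-style interface, but treat the details as an assumption to verify against the current documentation:

const duckdb = require('duckdb'); // npm install duckdb

const db = new duckdb.Database(':memory:'); // or a file path for a persistent database

db.all('SELECT 42 AS answer', (err, rows) => {
  if (err) throw err;
  console.log(rows); // [ { answer: 42 } ]
});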

Running an oracle (written in node.js) after compile/migrate contracts

I'm trying to write some tests for Truffle, but I've realized that when running truffle test, Truffle compiles and migrates the contracts before the test file is executed. For this reason, I need my oracle to be launched just after the migration, because when launched it waits for events coming from a specific contract address.
Is there any way to launch the oracle programmatically and keep it alive during the tests? The command to launch it is simple, just something like node oracle.js --network=test
I guess my code is not needed for the question, but if you need to know anything about the approach I'm following in the project, feel free to ask.
Thanks in advance.
You should be able to launch the oracle from any .js test file that requires the oracle to be running. You could likely accomplish this using an npm package such as forever.
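A rough sketch using nothing but Node's built-in child_process module and Mocha hooks (which truffle test supports); the file name and flag are taken from the question:

const { spawn } = require('child_process');

let oracle;

before(() => {
  // By the time this file runs, truffle test has already compiled and
  // migrated the contracts, so the oracle can start listening for events.
  oracle = spawn('node', ['oracle.js', '--network=test'], { stdio: 'inherit' });
});

after(() => {
  if (oracle) oracle.kill(); // stop the oracle when the test run finishes
});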

Foxx apps debugging workflow?

What is the recommended workflow to debug Foxx applications?
I am currently working on a pretty big application, and it seems to me I am doing something wrong, because the way I am proceeding does not seem maintainable at all:
1. Make your changes in the Foxx app (e.g. new endpoints).
2. Upload your Foxx app to ArangoDB.
3. Test your changes (e.g. trigger API calls).
4. Check the logs to see if something went wrong.
5. Go to 1.
I experienced great time savings by shifting more of the development workflow to the terminal client arangosh. Especially when debugging more complex endpoints, you can isolate queries and functions and debug each individually in the terminal. When you are done debugging, you merge your code into the Foxx app and mount it. Require modules as you would do in Foxx, and just enter variables as arguments to your functions or queries.
You can use arangosh either directly from the terminal or via the embedded terminal in the ArangoDB frontend.
You may also save some time by switching to dev mode, which allows changes in your code to be reflected directly in the mounted app without fetching, mounting and unmounting each time.
That additional flexibility costs some performance, so make sure to switch back to production mode once your Foxx app is ready for deployment.
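For example, a query can be tried out in isolation in arangosh before it is merged back into the Foxx app; the users collection and the filter below are just placeholder assumptions:

// In arangosh: run the AQL query on its own and inspect the result.
var result = db._query(
  'FOR u IN users FILTER u.active == @active RETURN u',
  { active: true }
).toArray();
print(result);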
When developing a Foxx app, I would suggest using the development mode. This also helps a lot with debugging, as you get faster feedback. It works as follows:
Start arangod with the dev-app-path option, like this: arangod --javascript.dev-app-path /PATH/TO/FOXX_APPS /PATH/TO/DB, where the path to the Foxx apps is the folder that contains a database folder that contains your Foxx apps sorted by database. More information can be found here.
Make your changes; there is no need to deploy the app or anything. The app now automatically reloads on every request. Change, try out, change, try out...
There are currently no debugging capabilities. We are planning to add more support for unit testing of Foxx apps in the near future, so you can have a more TDD-like workflow.

Running IIS server with Coypu and SpecFlow

I have already spent a lot of time googling for a solution, but I'm helpless!
I have an MVC application and I'm trying to do "integration testing" for my views using Coypu and SpecFlow, but I don't know how I should manage the IIS server for this. Is there a way to actually run the server (on the first start of the tests) and make the server use a special "test" DB (for example an in-memory RavenDB) that is emptied after each scenario (and filled during the background)?
Is there a better or simpler way to do this?
I'm fairly new to this too, so take the answers with a pinch of salt, but as no one else has answered...
Is there a way to actually run the server (first start of tests) ...
You could use IIS Express, which can be called via the command line. You can spin up your website before any tests run (which I believe you can do with the [BeforeTestRun] attribute in SpecFlow) with a call via System.Diagnostics.Process.
The actual command line would be something like:
iisexpress.exe /path:c:\iisexpress\<your-site-published-to-filepath> /port:<anyport> /clr:v2.0
... and making the server use a special "test" DB (for example an in-memory RavenDB) emptied after each scenario (and filled during the background).
In order to use a special test DB, I guess it depends on how your data access is working. If you can swap in an in-memory DB fairly easily, then I guess you could do that. Although my understanding is that integration tests should be as close to the production environment as possible, so if possible use the same DBMS you're using in production.
What I'm doing is just doing a data restore to my test DB from a known backup of the prod DB, each time before the tests run. I can again call this via command-line/Process before my tests run. For my DB it's a fairly small dataset, and I can restore just the tables relevant to my tests, so this overhead isn't too prohibitive for integration tests. (It wouldn't be acceptable for unit tests however, which is where you would probably have mock repositories or in-memory data.)
Since you're already using SpecFlow, take a look at SpecRun (http://www.specrun.com/).
It's a test runner designed for SpecFlow tests, and it adds all sorts of capabilities, from small conveniences like better formatting of the test names in the Test Explorer to support for running the same SpecFlow test against multiple targets and config file transformations.
With SpecRun you define a "Profile" which will be used to run your tests, not dissimilar to the VS .runsettings file. In there you can specify:
<DeploymentTransformation>
  <Steps>
    <IISExpress webAppFolder="..\..\MyProject.Web" port="5555"/>
  </Steps>
</DeploymentTransformation>
SpecRun will then start up an IIS Express instance running that website before running your tests. In the same place you can also set up custom deployment transformations (using the standard App.config transformations) to override the connection strings in your app's Web.config so that it points to the in-memory DB.
The only problem I've had with SpecRun is that the documentation isn't great; there are lots of video demonstrations, but I'd much rather have a few written tutorials. I guess that's what Stack Overflow is here for.

How to set up a node.js development environment/server (Ubuntu 11.04)

I am trying to set up a development environment for node.js. I assumed at first that it requires something similar to the traditional "localhost" server approach, but I found myself at a loss. I managed to start a node.js hello world app from the terminal, which doesn't look like a big deal: having to start an app from the console isn't that hard. But after some tweaking, I found out that changes aren't shown in the browser immediately; you need to run "node [appName here]" again.
So, my question is:
Is there a piece of software, or a tutorial, on how to create a more "traditional" development server on your local machine? Along with port listening setup, various configurations, root directories, etc. (things that are standard in stacks like XAMPP, BitNami or even the prepackaged Ubuntu LAMP). Since I'm new to node.js, I can't really be sure I'm even searching for the right things on Google.
Thanks.
Take a look at:
https://github.com/remy/nodemon
It will let you run nodemon app.js
and the server will restart automatically whenever your files change (and, after a crash, as soon as you save the next change).
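For example, you might wire it into your package.json scripts (assuming nodemon is installed as a dev dependency):

"scripts": {
  "start": "node app.js",
  "dev": "nodemon app.js"
}

npm run dev then starts the server and restarts it on every save.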
To do this I built a relatively small tool in Node.js that allows me to start/stop/restart a Node.js child process (which contains the actual server) and to see/change configuration options and builds/versions of the application, with admin options available on a different TCP port. It also monitors said child process and automatically respawns it if there was an error (and after x failed attempts it stops trying and contacts me).
While I'm prohibited from sharing the source code, this relies on the (built-in) child_process module, whose spawn method returns a child process with a pid (process id) that you can use with the kill method to kill said child process. Instead of killing it, you could also send a SIGINT and catch it within your child application to first clean up some stuff and then exit. It's relatively easy to do.
Some nice reading material regarding this.
http://nodejs.org/docs/v0.5.5/api/child_processes.html
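A rough sketch of the respawn part of such a tool, assuming the actual server lives in a hypothetical server.js:

const { spawn } = require('child_process');

const MAX_RETRIES = 5;
let failures = 0;

function start() {
  const child = spawn('node', ['server.js'], { stdio: 'inherit' }); // hypothetical entry point

  child.on('exit', (code) => {
    if (code === 0) return; // clean shutdown, do not respawn
    failures += 1;
    if (failures > MAX_RETRIES) {
      console.error('Giving up after repeated failures'); // here the real tool would notify the admin
      return;
    }
    console.error('Server exited with code ' + code + ', respawning...');
    start();
  });
}

start();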
