Foxx apps debugging workflow? - arangodb

What is the recommended workflow to debug Foxx applications?
I am currently working on a pretty big application and it seems to me I am doing something wrong, because the way I am proceeding does not seem maintainable at all:
1. Make your changes in the Foxx app (e.g. new endpoints).
2. Upload the Foxx app to ArangoDB.
3. Test your changes (e.g. trigger API calls).
4. Check the logs to see if something went wrong.
5. Go to 1.

I experienced great time savings by shifting more of the development workflow to the terminal client arangosh. Especially when debugging more complex endpoints, you can isolate queries and functions and debug each one individually in the terminal. When you are done debugging, you merge the code back into your Foxx app and mount it. Require modules as you would in Foxx and simply pass variables as arguments to your functions or queries.
You can use arangosh either directly from the terminal or via the embedded terminal in the ArangoDB frontend.
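For example, a minimal sketch of such an isolated debugging session in arangosh could look like this (the module path and function name are hypothetical, and the exact require path for the db object depends on your ArangoDB version):
  // Load the db object and the module you want to test in isolation.
  var db = require("org/arangodb").db;
  var queries = require("./my-app/lib/queries"); // hypothetical module from your Foxx app
  // Call the function directly with test arguments instead of going through an HTTP endpoint.
  queries.findActiveUsers(db, { since: "2016-01-01" });
  // Or isolate a single AQL query and inspect its result.
  db._query("FOR u IN users FILTER u.active == true RETURN u").toArray();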
You may also save some time by switching to development mode, which lets changes in your code be reflected directly in the mounted app without fetching, mounting and unmounting it each time.
That additional flexibility costs some performance, so make sure to switch back to production mode once your Foxx app is ready for deployment.

When developing a Foxx App, I would suggest using the development mode. This also helps a lot with debugging, as you have faster feedback. This works as follows:
Start arangod with the dev-app-path option like this: arangod --javascript.dev-app-path /PATH/TO/FOXX_APPS /PATH/TO/DB, where the path to the Foxx apps is the folder that contains a database folder that contains your Foxx apps sorted by database. More information can be found here.
Make your changes; there is no need to deploy the app or anything. The app now automatically reloads on every request. Change, try out, change, try out...
There are currently no debugging capabilities. We are planning to add more support for unit testing of Foxx apps in the near future, so you can have a more TDD-like workflow.

Related

Run scripts in a Node server

I have a server application using Node, and sometimes I need to run some script in it. Some examples of scenarios when this would be necessary:
During development, I need to create many entries in the database to simulate a use case.
In production, a bug happened and some information was not correctly stored in the DB, so I may need to backfill it.
The way I know how to do it in Node is to deploy some instance of the server with an endpoint that contains the code to be run.
It is interesting to use the Node server because it already has a lot of code that I can reuse, for example DAOs and safe create/delete functions.
Django has an interactive Python interpreter that does this job, but I do not know of any similar way to do it in Node.
Other strategies for handling these use cases are very welcome.
During development, you can just go with debugging, although that requires triggering a breakpoint. Alternatively, if it's just about your database, there are better external programs to interact with your database.
To actually answer your question: Node does have a VM module to run code and even a REPL module to help write custom REPLs. This does require some work to link up your APIs, but it is doable.
As for how you actually interact with that REPL, there are several options. Using a raw socket (and Telnet), a terminal on your site communicating over a WebSocket, a simple HTTP endpoint, ...
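As a rough sketch of the REPL approach (the ./dao/users module is a hypothetical example of existing server code you want to reuse):
  // repl.js - start an interactive session with your app's own modules preloaded.
  var repl = require("repl");
  var users = require("./dao/users"); // hypothetical DAO from your server code
  var r = repl.start({ prompt: "app> " });
  // Anything attached to the context becomes a global inside the REPL session.
  r.context.users = users;
Starting it with node repl.js then lets you call users.create(...) and friends interactively, similar to Django's shell.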
You can add scripts to your package.json to handle this, each with a path to the script. For example "seed:db": "node ./src/seeder.js -i" and "drop:db": "node ./src/seeder.js -d", where the -i and -d flags determine whether I am inserting or deleting and can be read from process.argv[2].
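A minimal sketch of what such a seeder script could look like (the actual insert/delete logic is just a placeholder):
  // src/seeder.js - invoked via the package.json scripts above.
  var flag = process.argv[2];
  if (flag === "-i") {
    console.log("Seeding database...");
    // insert your seed data here
  } else if (flag === "-d") {
    console.log("Dropping data...");
    // delete the seeded data here
  }
Running npm run seed:db then expands to node ./src/seeder.js -i.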

How to run a migration on Google App Engine

I have a Node.js app running on Google App Engine.
I want to run sequelize migrations.
Is it possible to run a command from within the Instance of my node.js app?
Essentially something like Heroku's run command, which will run a one-off process inside a Heroku dyno.
If this isn't possible what's the best practice in running migrations?
I could always just add it to the gcp-build script, but this will run on every deploy.
It's not possible to run standalone scripts/apps in GAE, see How do I run custom python script in Google App engine (asked in a Python context, but the general idea applies to all runtimes).
The way I ran my (datastore) migrations was to port the functionality of the migration script itself into the body of an admin-protected handler in my GAE app, which I triggered with an HTTP request for a particular URL. I reworked it a bit to split the potentially long-running migration operation into a sequence of smaller operations (using push task queues), which is much more GAE-friendly. This allowed me to live-test the migration one datastore entity set at a time and only go for multiple sets when I was completely confident in its operation. I also didn't have to worry about eventual consistency (I was using queries to determine the entities to be migrated) - I just repeatedly invoked the migration until there was nothing left to do.
Once the migration was completed I removed the respective code (but kept the handler itself for future migrations). As a positive side effect I pretty much had the migration history captured in my repository's history itself.
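If you go the handler route in a Node.js app, a minimal sketch could look like the following (the route path, the token check and running the sequelize CLI from inside the handler are assumptions, not an App Engine API):
  // A hypothetical admin-only endpoint that runs pending sequelize migrations in the instance.
  var express = require("express");
  var exec = require("child_process").exec;
  var app = express();
  app.post("/admin/migrate", function (req, res) {
    // Protect this however your app handles admin auth; the header check is only a placeholder.
    if (req.get("X-Admin-Token") !== process.env.ADMIN_TOKEN) {
      return res.status(403).send("Forbidden");
    }
    exec("npx sequelize-cli db:migrate", function (err, stdout, stderr) {
      if (err) return res.status(500).send(stderr);
      res.send(stdout);
    });
  });
  app.listen(process.env.PORT || 8080);
You would then trigger it once with an authenticated request and remove or disable the route afterwards, mirroring the approach above.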
Potentially of interest: Handling Schema Migrations in App Engine

Use Google App Engine or Google Cloud Compute VM to Test Run My App?

I'm moving my Three.js app and its customized Node.js environment, which I've been running on my local machine, to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go: upload the repo I've been running locally as-is onto a VM, which users would then access via the VM's external IP until I get a good name for this app, or merge my local Node.js environment with what's available via Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach: I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming with this method of deployment that I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content as well as upload images and, eventually, 3D models and JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line and have to install all the Node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side Node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like is access to a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.

Running IIS server with Coypu and SpecFlow

I have already spent a lot of time googling for a solution, but I'm helpless!
I have an MVC application and I'm trying to do "integration testing" of my Views using Coypu and SpecFlow, but I don't know how I should manage the IIS server for this. Is there a way to actually run the server (at the first start of the tests) and make the server use a special "test" DB (for example an in-memory RavenDB) that is emptied after each scenario (and filled during the Background)?
Is there a better or simpler way to do this?
I'm fairly new to this too, so take the answers with a pinch of salt, but as no one else has answered...
Is there a way to actually run the server (first start of tests) ...
You could use IIS Express, which can be called via the command line. You can spin up your website before any tests run (which I believe you can do with the [BeforeTestRun] attribute in SpecFlow) with a call via System.Diagnostics.Process.
The actual command line would be something like:
iisexpress.exe /path:c:\iisexpress\<your-site-published-to-filepath> /port:<anyport> /clr:v2.0
... and making the server use a special "test" DB (for example an in-memory RavenDB) emptied after each scenario (and filled during the background).
In order to use a special test DB, I guess it depends how your data access is working. If you can swap in an in-memory DB fairly easily then I guess you could do that. Although my understanding is that integration tests should be as close to production env as possible, so if possible use the same DBMS you're using in production.
What I'm doing is just doing a data restore to my test DB from a known backup of the prod DB, each time before the tests run. I can again call this via command-line/Process before my tests run. For my DB it's a fairly small dataset, and I can restore just the tables relevant to my tests, so this overhead isn't too prohibitive for integration tests. (It wouldn't be acceptable for unit tests however, which is where you would probably have mock repositories or in-memory data.)
Since you're already using SpecFlow take a look at SpecRun (http://www.specrun.com/).
It's a test runner which is designed for SpecFlow tests and adds all sorts of capabilities, from small conveniences like better formatting of the Test names in the Test Explorer to support for running the same SpecFlow test against multiple targets and config file transformations.
With SpecRun you define a "Profile" which will be used to run your tests, not dissimilar to the VS .runsettings file. In there you can specify:
<DeploymentTransformation>
  <Steps>
    <IISExpress webAppFolder="..\..\MyProject.Web" port="5555"/>
  </Steps>
</DeploymentTransformation>
SpecRun will then start up an IISExpress instance running that Website before running your tests. In the same place you can also set up custom Deployment Transformations (using the standard App.Config transformations) to override the connection strings in your app's Web.config so that it points to the in-memory DB.
The only problem I've had with SpecRun is that the documentation isn't great, there are lots of video demonstrations but I'd much rather have a few written tutorials. I guess that's what StackOverflow is here for.

Deploying updates to production node.js code

This may be a basic question, but how do I go about efficiently deploying updates to currently running node.js code?
I'm coming from a PHP and JavaScript (client-side) background, where I can just overwrite files when they need updating and the changes are instantly available on the production site.
But in node.js I have to overwrite the existing files, then shut down and re-launch the application. Should I be worried by potential downtime in this? To me it seems like a riskier approach than the PHP (scripting) way, unless I have a server cluster where I can take down one server at a time for updates.
What kind of strategies are available for this?
In my case it's pretty much:
svn up; monit restart node
This Node server is acting as a comet server with long polling clients, so clients just reconnect like they normally would. The first thing the Node server does is grab the current state info from the database, so everything is running smoothly in no time.
I don't think this is really any riskier than doing an svn up to update a bunch of PHP files. If anything it's a little bit safer. When you're updating a big php project, there's a chance (if it's a high traffic site it's basically a 100% chance) that you could be getting requests over the web server while you're still updating. This means that you would be running updated and out-of-date code in the same request. At least with the Node approach, you can update everything and restart the Node server and know that all your code is up to date.
I wouldn't worry too much about downtime, you should be able to keep this so short that chances are no one will notice (kill the process and re-launch it in a bash script or something if you want to keep it to a fraction of a second).
Of more concern however is that many Node applications keep a lot of state information in memory which you're going to lose when you restart it. For example if you were running a chat application it might not remember who users were talking to or what channels/rooms they were in. Dealing with this is more of a design issue though, and very application specific.
If your node.js application 'can't skip a beat', meaning it is under continuous bombardment of incoming requests, you simply can't afford the downtime of a quick restart (even with nodemon). I think in some cases you simply want a seamless restart of your node.js apps.
To do this I use naught: https://github.com/superjoe30/naught
Zero downtime deployment for your Node.js server using builtin cluster API
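The core idea naught builds on can be sketched with the builtin cluster module alone (the worker count and the SIGHUP trigger are just illustrative choices):
  // master.js - fork workers and replace them one at a time on SIGHUP,
  // so there is always at least one worker accepting connections.
  var cluster = require("cluster");
  var http = require("http");
  if (cluster.isMaster) {
    for (var i = 0; i < 2; i++) cluster.fork();
    process.on("SIGHUP", function () {
      Object.keys(cluster.workers).forEach(function (id) {
        var oldWorker = cluster.workers[id];
        var replacement = cluster.fork();
        // Only retire the old worker once its replacement is accepting connections.
        replacement.on("listening", function () { oldWorker.disconnect(); });
      });
    });
  } else {
    http.createServer(function (req, res) { res.end("ok"); }).listen(8080);
  }
Sending SIGHUP to the master (kill -HUP <pid>) after updating the code on disk then rolls the workers without dropping the listening socket, since new workers load the fresh code from disk.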
Some of the cloud hosting providers for Node.js (like NodeJitsu or Windows Azure) keep both versions of your site on disk in separate directories and just redirect the traffic from the old version to the new version once the new version has been fully deployed.
This is usually a built-in feature of Platform as a Service (PaaS) providers. However, if you are managing your servers you'll need to build something to allow for traffic to go from one version to the next once the new one has been fully deployed.
An advantage of this approach is that then rollbacks are easy since the previous version remains on the site intact.
