I need to solve an offline sync problem and PouchDB seems to be a good fit.
My client is a PC and my server needs to run on AWS. I am happy to work in node.js on the client.
From reading the documentation, it seems that the recommended configuration would be to use the WebSQL adapter on the client side and CouchDB (or Cloudant) on the server side, since it is the most "battle-proven" adapter.
Q1) Since I am on AWS, would I be better off using an adapter to DynamoDB, even if it is less "battle-tested"?
Q2) Why isn't running the CouchDB adapter on the client side a better option than using the WebSQL adapter? Is that even an option?
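To make the setup concrete, here is roughly what I expect the client-side sync code to look like (URL and database names are placeholders; I assume a CouchDB-compatible remote, and the local adapter choice is a separate concern from the sync call itself):

```js
// Minimal sketch of the client-side sync against a CouchDB-compatible remote.
// URL and database names are placeholders.
var PouchDB = require('pouchdb');

var local = new PouchDB('events');                              // local store; adapter is configurable
var remote = new PouchDB('https://my-couch.example.com/events'); // hypothetical remote endpoint

local.sync(remote, { live: true, retry: true })
  .on('change', function (info) { console.log('replicated', info.direction); })
  .on('error', function (err) { console.error('sync error', err); });
```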
I've been experimenting with trying to secure an Elasticsearch cluster with basic auth and TLS.
I've successfully been able to do that using Search-Guard. The problem occurs with the Couchbase XDCR to Elasticsearch.
I'm using a plugin called elasticsearch-transport-couchbase, which works perfectly fine without TLS and basic auth enabled on the Elasticsearch cluster. But when I enable them with Search-Guard, I am not able to make it work.
As far as I can tell the issue lies with the elasticsearch-transport-couchbase plugin. This has also been discussed previously in some issues on their GitHub repo.
It is also the only plugin I can find that can be used for XDCR from Couchbase.
I'm curious about other people's experience with this. Has anyone been in the same situation and been able to set up XDCR from Couchbase to Elasticsearch with TLS?
Or perhaps there are other, more suitable tools that I have missed?
The Couchbase transport plugin doesn't support XDCR over TLS yet; it's on the roadmap, but isn't going to happen soon. Search-Guard adds SSL to the HTTP/REST endpoint in ES, but the plugin opens its own endpoint (on port 9091 by default), which Search-Guard doesn't touch. I'll take a look at whether it's possible to extend Search-Guard to apply to the transport plugin; the main problem is on the Couchbase XDCR side, which doesn't expect SSL on the target endpoint.
Version 4.0 of the Couchbase Elasticsearch connector supports secure connections to Couchbase Server and/or Elasticsearch.
Reference: https://docs.couchbase.com/elasticsearch-connector/4.0/secure-connections.html
A small update. We worked around the issue by setting up a stunnel with xinetd, so all communication with Elasticsearch has to go through the stunnel, where the TLS terminates.
We blocked access to port 9200, and restricted 9091 to the Couchbase cluster host and 9300 to the other Elasticsearch nodes only.
Seems to work well.
I feel like I must be really thick here, but I'm struggling with Couchbase configuration.
I am looking to replace memcached with Couchbase and want to lock things down more to my liking. On the server there are a number of applications that are set up to use memcached, so it needs to be as drop-in as possible without changing the applications' configuration.
What I have done is install Couchbase on each of the web servers, like I did with memcached, and so far with my config everything is working.
The problem I have is that port 11211 is open to the world at large, and this terrifies me. Either I'm thick or I'm not looking in the right place, but I want port 11211 to listen only on localhost (127.0.0.1).
Couchbase seems to have reams and reams of documentation, but I cannot find how to do this and am starting to feel like you need to be a Couchbase expert to make simple changes.
I'm aware that the way to secure Couchbase is to use password-protected buckets with SASL auth, but for me this isn't an option.
While I'm on the subject, and if I can change the listening interface, are there any other Couchbase ports that don't need to be open to the world and can be restricted to localhost?
Many many thanks in advance for any help, I'm at my wits end.
Let's back up a bit. Unlike memcached, Couchbase is really meant to be installed as a separate tier in your infrastructure, and as a cluster, even if you are just using it as a high-availability cache. A better architecture would be to have Couchbase installed on its own properly sized nodes (in a cluster using Couchbase buckets) and then install Moxi (Couchbase's memcached proxy) on each web server to talk to the Couchbase cluster on your app's behalf. This architecture will give you the functionality you need, plus the high availability, replication, failover, etc. that Couchbase is known for.
In the long run, I would recommend that you transition your code to using Couchbase's client SDKs to access the cluster as that will give you the most benefits, performance, resilience, etc. for your caching needs. Moxi is meant more as an interim step, not a final destination.
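To make that concrete, here is a rough sketch of direct SDK access from Node.js (2.x-era API; the host and bucket name are placeholders, and the exact calls vary by SDK version and language):

```js
// Hypothetical example: talking to a Couchbase bucket directly via the Node.js SDK
// instead of going through Moxi. Host and bucket name are placeholders.
var couchbase = require('couchbase');                      // npm install couchbase

var cluster = new couchbase.Cluster('couchbase://cb-node-1.example.com');
var bucket = cluster.openBucket('cache');                  // a Couchbase (not memcached) bucket

bucket.upsert('session:abc123', { user: 'jdoe' }, function (err) {
  if (err) throw err;
  bucket.get('session:abc123', function (err, result) {
    if (err) throw err;
    console.log(result.value);                             // -> { user: 'jdoe' }
  });
});
```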
I'm trying to figure out how to insert data into a Meteor mongodb database from an external native mobile application that I'm writing (specifically for iOS using Cinder, right now). I'd like events that happen on the mobile device application to be written to my Meteor app's database, so that it can be immediately displayed on a browser elsewhere.
Importantly, I need to stay in my native application on the mobile device - I can't launch into a browser. I'm a bit new to Meteor, so apologies if I'm overlooking something obvious.
Any ideas on how to do this?
Thanks!
Your best bet is to use an iOS DDP client like this one. You can use this client natively in your existing iOS app and subscribe and write back to data in your Meteor ecosystem.
DDP stands for Distributed Data Protocol and is authored by the Meteor group as an external standard for real-time app frameworks to adopt. It's a much preferred method than communicating directly with the database because you can leverage the publish and subscribe methods within the Meteor ecosystem.
The protocol is under rapid development.
I believe there will be a release shortly that will expound on the current state of DDP and the evolution of its official specification. An official spec is slated for the 1.0 release.
Note: Here is a great video overview of DDP in its present form.
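For context, here is roughly what the Meteor side of such a setup might look like (the collection, publication, and method names are hypothetical); the iOS DDP client would send a sub for 'events' and method calls to 'insertEvent':

```js
// Meteor server code (hypothetical collection/method names).
Events = new Meteor.Collection('events');

// What an external DDP client subscribes to.
Meteor.publish('events', function () {
  return Events.find({}, { sort: { createdAt: -1 }, limit: 50 });
});

// What an external DDP client calls to record an event from the device.
Meteor.methods({
  insertEvent: function (doc) {
    return Events.insert({ type: doc.type, payload: doc.payload, createdAt: new Date() });
  }
});
```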
Another option is to have your iOS app write directly to the MongoDB instance used by your Meteor app. You can use any MongoDB driver, such as NuMongoDB. Meteor polls the MongoDB database every ten seconds, so web-based users will automatically see updates, albeit with a short lag.
DDP is evolving quickly, as Tim mentioned, so this option might be a little more stable.
In terms of hosting, for scalability I recommend separating your MongoDB instance from the free meteor.com site, by using a Mongo host such as MongoHQ.
And what about other cases? I mean browser applications?
For a notification project, I would like to push event notifications out. These are things like logins, changes to a profile, etc., to be displayed to the appropriate client. I would like to discuss some ideas on putting it together and get some advice on the best approach.
I noticed here that changes made to a CouchDB database can be detected with a _changes stream, picked up by Node, and used to kick off a process. I would like to implement something like this (I'm using SQL Server, but an entry point at this level may not be the best solution).
Instead of following the CouchDB example (detecting database-level events just complicates things, I think, since we're interested in client events), I was thinking that when an event occurs, such as a user login, a message is sent to the Node server with some event details (a RESTful request?). This message is then processed and broadcast to all connected clients; the appropriate client displays the notification.
Proposed ecosystem:
.Net 4.0
IIS
IISNode
Socket.IO
Node.JS
SQL Server 2008
This will be built on top of an existing project using the .Net framework (IIS, etc.). Many of the clients' browsers do not support WebSockets, so using Socket.IO is a good option (fallback support). However, from what I can see, Socket.IO still only supports long polling through IISNode (which isn't really a problem).
An option would be to expose the Socket.IO/Node endpoint to all clients, so that client-based notifications can be sent through JS to the Node server, which broadcasts the message (this follows the basic chat-server client/server examples).
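Something like this is what I have in mind for that option (ports, routes, and event names are placeholders; this uses the current Socket.IO API, while 0.9-era versions spell it io.listen(server) and io.sockets.emit):

```js
// Rough sketch: the app POSTs an event to /notify, and the Node server
// broadcasts it to every connected browser via Socket.IO.
var http = require('http');
var socketio = require('socket.io');          // npm install socket.io

var server = http.createServer(handleNotify);
var io = socketio(server);                    // falls back to long polling where needed

function handleNotify(req, res) {
  if (req.method === 'POST' && req.url === '/notify') {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      // e.g. body = {"user":"jdoe","event":"login"}
      io.emit('notification', JSON.parse(body));   // broadcast to all connected clients
      res.writeHead(202);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}

server.listen(3000);
```

On the browser side, each client would receive the broadcast and decide whether it applies to the current user:

```js
// Browser side (placeholder host); filter/display per user, or use per-user rooms instead.
var socket = io('http://node-host:3000');
socket.on('notification', function (data) { /* show it if it targets this user */ });
```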
Alternatively, an IIS endpoint could be used, but it could only support long polling (through Socket.IO). This would offer some additional .Net back-end processing, but may over-complicate the architecture.
Is there SQL Server-based event notification available for Node?
What would be the best approach?
If I didn't get the terminology ecosystem configuration right, please clarify.
Thanks.
I would recommend you check out SignalR first before considering adding iisnode/node.js to the mix of technologies of your pre-existing ASP.NET application.
Regarding WebSockets: regardless of whether you use ASP.NET or node.js (socket.io), you can only use HTTP long polling for low-latency notifications, as WebSockets are not supported by HTTP.SYS/IIS until Windows 8. iisnode does not currently support WebSockets (even on Windows 8), but such support could be added later.
I did some research lately regarding MSSQL access from node.js. There are a few OSS projects out there; some of them use native, platform-specific extensions, while others attempt to implement the TDS protocol purely in JavaScript. I am not aware of any that would enable you to access the SQL Notifications functionality. However, the MSSQL team itself is investing in a first-class MSSQL driver for node.js, so this is something to keep an eye on going forward (https://github.com/tjanczuk/iisnode/issues/139).
If you plan to use SQL Notifications to support low-latency notifications, I would strongly recommend starting with performance benchmarks that simulate the desired level of traffic at the SQL Server level. SQL Notifications were meant primarily as a mechanism to help keep an in-memory cache consistent with the content of the database, so it may or may not meet the requirements of your notification scenario. At the very minimum these measurements would help you start with a better design.
I would highly recommend using Pusher. That is what we use, and it is easy to implement because it is a hosted solution, so plugging it in and making it work is really easy. It doesn't cost much unless you are going to push a crazy amount of messages through it on a massive scale.
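For illustration, the Node side is roughly this (app credentials, channel, and event names are placeholders):

```js
// Hypothetical server-side trigger using the 'pusher' npm package.
var Pusher = require('pusher');                     // npm install pusher

var pusher = new Pusher({
  appId: 'APP_ID',
  key: 'APP_KEY',
  secret: 'APP_SECRET',
  cluster: 'APP_CLUSTER'
});

// Fire this from the login handler; 'user-42' is a per-user channel.
pusher.trigger('user-42', 'login', { user: 'jdoe', at: new Date().toISOString() });
```

On the browser side, the client just subscribes to its own channel with pusher.subscribe('user-42') and binds the 'login' event to a display handler.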
I have to connect to an RDBMS (locally and remotely) via iOS. I have seen the solution at http://odbcrouter.com/ipad, but for that we have to buy a server-side program to run, and it's very expensive and not a manageable solution, as we would have to buy it for more than 1000 stores and making updates etc. for each store would be very difficult.
I have just checked out the following iOS app (http://www.impathic.com/impathic/index.html); it does the same thing without any server-side code, which makes me think there must be a way to do this without any server-side code... So I was wondering whether these RDBMSs expose themselves the way I am expecting.
Thanks.
That impathic program is very interesting, but they're likely using a third-party library.
The RDBMS wire protocols aren't shared in common across databases, and typically only ODBC and JDBC drivers (and the actual native drivers, of course) have implementations of them.
If your database is Postgres or MySQL (or some other OSS DB), you may well be able to simply port the native driver's source code to iOS and build it for ARM. I've not done it, but it might work. I don't know whether any of the other DBs have ARM-native drivers for iOS or not.
Otherwise, you would likely be better off with a middleware solution, depending on what kind of work you plan on doing. Web services are pretty straightforward for basic cases.