Terracotta bigmemory-max client - distributed-caching

In a multi-stripe distributed cache cluster:
1. How are client requests balanced between the different stripes?
2. Does the client fetch/put data from/to only one stripe (which might internally forward the request to another stripe's active server, according to where the partitioned data lives)?

Striping is based on sharding the data, so each request goes to the stripe that holds the data for the given key.
Clients know the cluster topology and direct requests to the correct stripe based on the data key.
See the product documentation for details.

Related

Add rate limit to public api returning customer status (exists, not-exist) against email

I have an API that returns a user's status, i.e. whether or not they exist in the shop, given an email address. The challenge is to rate limit any specific user/bot that sends multiple requests. One solution is Cloudflare's advanced rate limiting, but the core subscription we already have only supports counting by IP, and IP counting is not a good solution at all, since the requests can come from a corporate LAN with many users behind one IP, or through proxy servers.
If I go for a normal server-side solution, there are Node modules like express-rate-limit, but then the requests still reach our server before we block them.
I'm not sure whether there is a better solution at the CDN level. Also, how can I track a user's requests uniquely beyond the IP address, and which attributes can I use?
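For reference, a minimal sketch of the server-side approach mentioned above, assuming express-rate-limit's keyGenerator option and that the queried email arrives in the request body; the endpoint name and lookup logic are placeholders, and this still only blocks requests once they reach your server:

```javascript
// Sketch: per-email rate limiting with express-rate-limit. The /api/customer-status
// route and the req.body.email shape are assumptions for illustration.
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();
app.use(express.json());

const statusLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 10,                  // at most 10 status checks per key per window
  // Count by the email being queried (falling back to the client IP) instead of IP alone.
  keyGenerator: (req) => (req.body && req.body.email) || req.ip,
  standardHeaders: true,
  legacyHeaders: false,
});

app.post('/api/customer-status', statusLimiter, (req, res) => {
  // Real existence lookup elided; this only demonstrates the limiter wiring.
  res.json({ email: req.body.email, exists: false });
});

app.listen(3000);
```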

Tracking multiple "Binance" orders for multiple users from a single connection

The task is as follows:
There is a list of users of the Binance exchange, and each user can create orders on Binance. I need to implement a mechanism for tracking the users' orders on Binance through a single connection.
There are a lot of users, a lot of tokens and secret keys, and one connection.
I use the Node.js library "binance-api-node".
But I am ready to hear any solutions to the problem.
Orders sent through the POST /api/v3/order endpoint (client.order() in the library) return a unique clientOrderId (or you can specify your own via the newClientOrderId field in the request payload). You can store it in your app in relation to the order and the user.
Orders that have been created using a different way (e.g. the Binance UI) are a little more complicated. You can receive the list of orders per API key, for example with the GET /api/v3/allOrders endpoint. Each order again contains a unique clientOrderId, and since you know the API key that you used to query these orders, you can make a relation between the clientOrderId and your user.
Note that each Binance account can have multiple API keys and there is no easy way to determine whether two API keys belong to the same Binance account or not. See this answer for more info.
Because each authenticated REST endpoint requires exactly one API key (and some endpoints also require exactly one corresponding secret key to sign the payload), it is not possible to communicate with the API on behalf of multiple API keys in a single connection.
You'll need to make a separate request for each of the API keys.
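A minimal sketch of that per-key bookkeeping, assuming the binance-api-node calls referenced above (client.order() and client.allOrders()); the user records, keys and in-memory map are placeholders for your own storage:

```javascript
// Sketch: one binance-api-node client per (apiKey, apiSecret) pair, with a
// clientOrderId -> user mapping kept by the application. Keys are placeholders.
const Binance = require('binance-api-node').default;

const users = [
  { id: 'alice', apiKey: 'ALICE_KEY', apiSecret: 'ALICE_SECRET' },
  { id: 'bob',   apiKey: 'BOB_KEY',   apiSecret: 'BOB_SECRET' },
];

const orderOwners = new Map(); // clientOrderId -> user id

// Place an order on behalf of one user and remember who owns it.
async function placeOrder(user, params) {
  const client = Binance({ apiKey: user.apiKey, apiSecret: user.apiSecret });
  const order = await client.order(params); // POST /api/v3/order
  orderOwners.set(order.clientOrderId, user.id);
  return order;
}

// Pick up orders created elsewhere (e.g. the Binance UI) for one user's key.
async function syncExternalOrders(user, symbol) {
  const client = Binance({ apiKey: user.apiKey, apiSecret: user.apiSecret });
  const orders = await client.allOrders({ symbol }); // GET /api/v3/allOrders
  for (const o of orders) {
    // The query was signed with this user's key, so the order belongs to them.
    orderOwners.set(o.clientOrderId, user.id);
  }
}
```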

Client billing/client usage for microsoft cognitive services speech to text?

I'm working on a website that is supposed to let users make use of Azure's Cognitive Services API. They can play audio or use their microphone to transform speech into text.
I'm currently using Azure's JS SDK, and technically it's working fine. However, I noticed a big shortcoming with this approach: the SDK connects to the Azure server through a WebSocket, which exposes the subscription key to the client. So any member could theoretically read it out and sell it, or the like.
Furthermore, if the client connects directly to Azure, I have no secure way of preventing clients from abusing the service. I need a way to measure roughly how much time each customer uses the service, so I can take individual billing into account.
I could not find anything about that in the official documentation. So what are my options?
Should I redirect the clients' audio input to my own server, do some quantitative analysis, and then forward the input from a server side connection to azure? I fear with many concurrent customers, it might get laggy or connections might get dropped...
Is there any way to attach at least a client ID or the like to the Azure WebSocket connection that I can read out somehow later?
Do you have any advice for me?
Given your additional comment, I would suggest that you switch your implementation from using the subscription key to using authentication tokens.
That would:
- generate a unique token for each client, based on one global subscription key
- not expose your subscription key to your clients
- restrict the use of the API, as each token is only valid for 10 minutes
Each access token is valid for 10 minutes. You can get a new token at any time, however, to minimize network traffic and latency, we recommend using the same token for nine minutes.
See documentation here for global implementation. In a nutshell, you need to implement this token generation in your backend, and serve the page to your client with this token instead of the key.
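A minimal sketch of such a backend token endpoint, assuming the regional sts/v1.0/issueToken endpoint, Node 18+ (for the global fetch), and a hypothetical Express route name; token caching (re-issuing roughly every nine minutes, per the quote above) and error handling are left out:

```javascript
// Sketch: exchange the server-side subscription key for a short-lived token,
// so the key itself never reaches the browser. Region, env var names and the
// route are assumptions; adapt them to your setup.
const express = require('express');

const app = express();
const REGION = process.env.SPEECH_REGION;        // e.g. "westeurope"
const SUBSCRIPTION_KEY = process.env.SPEECH_KEY; // stays server-side only

app.get('/api/speech-token', async (req, res) => {
  const response = await fetch(
    `https://${REGION}.api.cognitive.microsoft.com/sts/v1.0/issueToken`,
    {
      method: 'POST',
      headers: { 'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY },
    }
  );
  const token = await response.text(); // the token is returned as plain text
  // This is also a natural place to log which authenticated user requested a
  // token, if you want at least a rough per-client usage measure.
  res.json({ token, region: REGION });
});

app.listen(3000);
```

On the client, the Speech SDK can then be initialized from the token (e.g. SpeechConfig.fromAuthorizationToken(token, region)) instead of from the subscription key.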
Note 1: be careful about the maximum number of concurrent requests (100 - see here).
Note 2: this will not by itself help you bill clients for their usage, as you still have just one key and there is no way to tell individual usages apart within it.

CouchDB read restriction

I have an app where I want to implement a chat/messaging service, and I have to use CouchDB with PouchDB. My problem is that every user should be able to send a message to anyone, but only the receiver of the message should be able to read it, and there is no way in CouchDB to restrict individual users from reading the conversation doc. A database per user is also not a solution on its own, since there is no way for everyone to write to the corresponding database.
CouchDB & PouchDB do not have per-document access control, only per-database. One solution to this is to have:
- a single database for sent messages, residing on the server. The PouchDB clients write to (but don't read from) this database by doing client->server one-way replication.
- a database per user on the server side, with server->client one-way replication. This is how the PouchDB clients get received messages.
- on the server side, a custom script that moves documents from the central database to the per-user databases depending on the recipient.
This is a similar approach to the one outlined in my blog post about bus station displays which uses a serverless changes feed listener to route the messages. It's not ideal, but is one solution.
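A minimal sketch of that routing script, assuming PouchDB is also used on the server side, a central messages database, per-user databases named user-<name>, and a recipient field on each message doc; names, credentials and error handling are placeholders:

```javascript
// Sketch: follow the central "messages" database's changes feed and copy each
// new message into the recipient's own database. The database naming scheme
// and the `recipient` field are assumptions for illustration.
const PouchDB = require('pouchdb');

const COUCH = 'http://admin:password@localhost:5984';
const messages = new PouchDB(`${COUCH}/messages`);

messages
  .changes({ live: true, since: 'now', include_docs: true })
  .on('change', async (change) => {
    const doc = change.doc;
    if (!doc || !doc.recipient) return;

    const userDb = new PouchDB(`${COUCH}/user-${doc.recipient}`);
    const copy = { ...doc };
    delete copy._rev; // let the target database assign its own revision
    try {
      await userDb.put(copy);
    } catch (err) {
      if (err.status !== 409) throw err; // 409 = document was already copied
    }
  })
  .on('error', (err) => console.error('changes feed error', err));
```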

Bitcoin exchange / e-wallet service - keeping member balances securely

Say I want to create a Bitcoin exchange or an e-wallet service and make it as secure as possible. Assuming the nature of the service results in more Bitcoin coming into the system as deposits than leaving it, yet with the need to allow instant withdrawals of Bitcoin out of the service, I thought of the following scheme.
On a separate computer, create a list of 1000 Bitcoin addresses using MultiBit. Transfer those 1000 public addresses to the DB on the web server using a USB stick, into a table holding a pool of free/unused addresses. When a member creates an account, I assign them a free Bitcoin deposit address so that funding the member account is possible. Since the private keys for these 1000 deposit addresses are not on the web server or in the DB (they were generated on another computer and only the public addresses were imported via USB), I can be fairly confident that all funds coming into the system as deposits are safe.
When a member wishes to trade with another member, I simply maintain my own balance accounting system, by creating tables and logging transfers from one member account to another.
When a member wishes to withdraw his Bitcoin, I will use a hot wallet that only accepts requests from the web server's IP address, check my internal accounting system to make sure the member has enough balance left, and make the payment from the hot wallet to whatever external Bitcoin address the withdrawal was requested to. By making sure I keep no more than, say, 5% of the overall balance in the hot wallet, a security breach will not result in a 100% loss of site funds.
How secure is this scheme? Would you suggest I do things otherwise?
Yes, you can use such a scheme, but make sure you keep the private keys for those 1000 addresses in a secure place. I would recommend encrypting all of the initial 1000 private keys with a master password which you'll never forget. Also think about keeping those keys on offline storage or an offline computer - you can use that offline machine to sign transactions in the rare cases when you need to access those wallets.
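For illustration, a minimal sketch of what encrypting a private key with a master password could look like, using only Node's built-in crypto module (scrypt key derivation plus AES-256-GCM); treat it as a sketch of the idea, not a vetted cold-storage procedure:

```javascript
// Sketch: encrypt/decrypt a private key with a key derived from a master
// password. All parameters (salt/IV sizes, scrypt defaults) are illustrative.
const crypto = require('crypto');

function encryptPrivateKey(privateKeyWif, masterPassword) {
  const salt = crypto.randomBytes(16);
  const iv = crypto.randomBytes(12);
  const key = crypto.scryptSync(masterPassword, salt, 32); // 256-bit key
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(privateKeyWif, 'utf8'), cipher.final()]);
  // Salt, IV and auth tag are not secret, but all are needed for decryption.
  return Buffer.concat([salt, iv, cipher.getAuthTag(), ciphertext]).toString('base64');
}

function decryptPrivateKey(blobBase64, masterPassword) {
  const blob = Buffer.from(blobBase64, 'base64');
  const salt = blob.subarray(0, 16);
  const iv = blob.subarray(16, 28);
  const tag = blob.subarray(28, 44);
  const ciphertext = blob.subarray(44);
  const key = crypto.scryptSync(masterPassword, salt, 32);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```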
