When people say that Hyperledger Fabric is for enterprise solutions, does it mean that the nodes could be users who are granted access, or does it have to be different enterprises cooperating?
Would granting users access through a unique phone number be feasible?
Sorry for the basic questions; this is for a university project and I can't find clear information.
Fabric is a private/permissioned blockchain, and only verified nodes can participate.
To verify nodes on the network, that is, to verify identity, a PKI system is commonly used (see Wikipedia: PKI).
Each node has its own private key and a certificate (public key) carrying its authority; in Fabric, this is managed under the name MSP (Membership Service Provider). See: Fabric MSP.
The MSP is easy to get confused about because of its name, but it can be seen as a set of files (crypto-config artifacts) rather than an object that takes action.
"Would granting access to the users through a unique phone number be feasible?" - Lorenzo Bonelli
In fact, this is difficult to answer as asked. First, I would need to know whether you are a developer, what your background is, and what the target system is.
I will assume that you know the basic structure of Hyperledger Fabric.
Looking at the question, I think you want the identity of a specific subject to be digitally mapped to a phone number. Here, I assume that "users" means clients and peers in Fabric.
1. Client
For a client, this is easy to apply: clients must be given a unique name during the registration process. Of course, since the phone number may change, it is better to perform a separate mapping at the application level.
userID: 82+10-4036-xxxx
2. Peer
Peers can map the phone number to their own name. For example, when building a peer in a Fabric network, the peer address is written in the form <peer_name>.<org_name>.<domain_name>, e.g.:
peer0.org0.example.com
In this scheme, peer_name could be mapped to a phone number (note that characters such as + are not valid in DNS hostnames, so the number would need to be normalized):
82+10-4036-xxxx.org0.example.com
Of course, the problem with the above approach is that it cannot respond flexibly when the phone number changes. To solve this, it is desirable to implement a separate mapping table.
When you create a mapping table, you have to think about who will manage it, and how. In the end, you need to implement middleware that serves as the identity resolver/registry of the blockchain network, and consider how to operate and manage it from the blockchain's perspective.
To put it simply: middleware is required in front of the blockchain, and pay attention to the philosophy of the blockchain while designing it.
(ex)
| number | uuid                                   |
|--------|----------------------------------------|
| 82+10  | '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed' |

client: '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed'
peer: 1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed.org0.example.com
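To make the mapping-table idea concrete, here is a minimal Python sketch of such an identity resolver; the class and method names are my own assumptions, not Fabric APIs:

import uuid

# Maps mutable phone numbers to stable UUIDs, so the Fabric identities
# (client names, peer hostnames) never change when a number does.
class IdentityRegistry:
    def __init__(self):
        self._by_number = {}  # phone number -> uuid

    def register(self, phone_number: str) -> str:
        # Assign a stable UUID to a phone number on first registration.
        return self._by_number.setdefault(phone_number, str(uuid.uuid4()))

    def update_number(self, old: str, new: str) -> None:
        # Phone number changed: the UUID (and Fabric identity) stays stable.
        self._by_number[new] = self._by_number.pop(old)

    def client_name(self, phone_number: str) -> str:
        return self._by_number[phone_number]

    def peer_hostname(self, phone_number: str, org: str, domain: str) -> str:
        return f"{self._by_number[phone_number]}.{org}.{domain}"

registry = IdentityRegistry()
registry.register("82+10-4036-xxxx")
print(registry.peer_hostname("82+10-4036-xxxx", "org0", "example.com"))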
If you can tell me a more specific situation or goal, I'll consider a more appropriate design.
I would like to ask a more philosophical question. The topic is DDD and microservices. DDD recognizes bounded contexts. If I understand correctly, each bounded context is a small part of the whole system. For example, there could be an ordering context and an invoicing context. Each context works with customers, but the ordering context cannot know about invoicing settings and invoicing cannot know about ordering settings. Does that mean there will be two customer microservices, one for each context?
Second question: if I have an order microservice, can I load customer data to evaluate some conditions (to check whether the customer can create a new order) directly from the database, or do I need to access it through the customer microservice?
Thanks for your opinion.
First of all, you have to know that the same concept can mean different things in different contexts. For example, in the ordering context the customer entity probably means a person you can deliver things to, and because of that the order-customer will have attributes such as address, preferred delivery time, etc.
However, if we look at a customer in the invoicing context, it will mean a person you can get paid by, and because of that it will have attributes like credit card number, PayPal account, preferred payment type, etc.
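To illustrate (field names are my own assumptions, not from any particular framework), the same real-world customer might be modeled per context like this:

from dataclasses import dataclass

@dataclass
class OrderCustomer:           # ordering context: someone we deliver to
    customer_id: str
    delivery_address: str
    preferred_delivery_time: str

@dataclass
class InvoiceCustomer:         # invoicing context: someone who pays us
    customer_id: str
    payment_method: str        # e.g. "card" or "paypal"
    billing_address: str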
That said, to answer your first question: I think it is not necessary to have two different customer services. You should have one customer service, preferably in its own bounded context, that is called when a customer wants to update or query his own settings, plus different views or projection entities for customers in both the order and invoice contexts, holding the information you need to perform the operations in those contexts.
In an event-driven design, these entities are updated according to the service context by subscribing to customer-update events, so whenever delivery or payment options are modified, the projections update; a sketch follows.
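A minimal sketch of such a subscriber for the ordering context; the event bus and event schema are hypothetical:

# Keeps the ordering context's customer projection in sync with
# customer-update events; only the fields this context needs are copied.
order_customer_projection = {}  # customer_id -> projected fields

def on_customer_updated(event: dict) -> None:
    proj = order_customer_projection.setdefault(event["customer_id"], {})
    for field in ("delivery_address", "preferred_delivery_time"):
        if field in event:
            proj[field] = event[field]

# Wiring is bus-specific, e.g.: bus.subscribe("customer.updated", on_customer_updated)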
Answering your second question: accessing the database of one service directly from another service is never an option. It would couple the two services to the same database, so the customer service would no longer be able to manage its database according to its own needs, because another service knows and depends on the database structure (tables, columns, relations, etc.). The solution here: if the data you need is not directly related to the process, or there are no strong performance requirements, you can query the service every time you need the information.
However, if the information is part of the other service's process, or high performance is needed, the best solution is to keep a local copy of that info, as I said before when talking about order and invoice customers, and update it when any changes are made. This can even be a cache if there isn't an event-driven approach.
I know the concept of building a simple P2P network without any server. My problem is with securing the network. The network should have some administrative nodes, so there are two kinds of nodes:
Nodes with privileges
Nodes without privileges
The first question is: can I assign some nodes more rights than others, like the privilege to send a broadcast message?
How can I secure the network against modified nodes that are trying to gain privileges?
I'm really interested in answers and resources that can help me. It is important to me to understand this, and I'm happy to add further information if anything is unclear.
You seem lost, and I used to do research in this area, so I'll take a shot. I feel this question is borderline off-topic, but I tend to err toward leaving things open.
See the P2P networks Chord, CAN, Tapestry, and Pastry for examples of P2P networks as well as pseudo-code. These works are all based on distributed hash tables (DHTs) and have been around for over 10 years now. Many of them have open-source implementations you can use.
As for "privileged nodes", your question contradicts itself. You want a P2P network, but you also want nodes with more rights than others. By definition, your network is no longer P2P because peers are no longer equally privileged.
Your question points to trust within P2P networks, a problem that academics have focused on since the introduction of DHTs. I feel that no satisfactory answer has been found yet that solves all problems in all cases. Here are a few approaches that will help you:
(1) Bitcoin addresses malicious users by forcing all users within the network to perform computationally intensive work (proof of work). For any member to forge bitcoins, they would need more computational power than everyone else combined, in order to prove they had done more work than everyone else.
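A toy illustration of proof of work in Python (greatly simplified; real Bitcoin hashes block headers with double SHA-256 and adjusts difficulty dynamically):

import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    # Find a nonce so sha256(data + nonce) starts with `difficulty` zero hex digits.
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(data + str(nonce).encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

nonce = proof_of_work(b"block-payload", difficulty=4)  # expensive to find...
print(nonce)  # ...but any peer can verify it with a single hash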
(2) Give privileges based on reputation. You can calculate reputation in any number of ways. One simple example: for each transaction in your system (file sent, database lookup, piece of work done), the requester sends a signed acknowledgement (using private/public keys) to the sender. Each peer can then present the accumulation of their signed acknowledgements to any other peer. Any peer who has accumulated N acknowledgements (you determine N) gets more privileges; a sketch follows.
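A minimal sketch of such a signed acknowledgement using the Python cryptography library; the message format and the threshold N are assumptions:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The requester signs an acknowledgement for a completed transaction;
# the sender accumulates these and presents them as reputation.
requester_key = Ed25519PrivateKey.generate()
requester_pub = requester_key.public_key()

ack = b"peer:alice|tx:file-42|status:done"
signature = requester_key.sign(ack)   # requester -> sender

# Any peer holding the requester's public key can verify the ack;
# verify() raises InvalidSignature if the ack was forged.
requester_pub.verify(signature, ack)

N = 100  # example threshold: peers with N valid acks get elevated privileges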
(3) Own a central server that hands out privileges. This one is the simplest and you get to determine what trust means for you. You're handing it out.
That's the skinny version - good luck.
I'm guessing that the administrative nodes differ from normal nodes by being able to tell other nodes what to do (and the regular nodes should obey).
You have to give the admin nodes some way to prove themselves that can be verified by other nodes but not forged (like a policeman's ID). The most standard way I can think of is using TLS certificates.
In (very) short, you create pairs of files called a key and a certificate. The key is secret and belongs to one identity; the certificate is public.
You create a CA certificate, and distribute it to all of your nodes.
Using that CA, you create "administrative node" certificates, one for each administrative node.
When issuing a command, an administrative node presents its certificate to the "regular" node. The regular node, using the CA certificate you provided beforehand, can make sure the administrative node is genuine (because the certificate was actually signed by the CA), and it's OK to do as it asks.
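A minimal sketch of that check with the Python cryptography library, assuming an RSA CA key and illustrative file names (a real deployment would rely on a full TLS handshake rather than a manual check like this):

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# The regular node already holds ca_cert.pem; the admin node presents admin_cert.pem.
with open("ca_cert.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("admin_cert.pem", "rb") as f:
    admin_cert = x509.load_pem_x509_certificate(f.read())

# Verify the admin certificate's signature with the CA's public key;
# raises InvalidSignature if the certificate was not signed by our CA.
ca_cert.public_key().verify(
    admin_cert.signature,
    admin_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    admin_cert.signature_hash_algorithm,
)
print("admin certificate is genuine; OK to obey")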
Pros:
TLS/SSL is used by many other products to create a secure tunnel, preventing "man-in-the-middle" attacks and impersonations
There are ready-to-use libraries and sample projects for TLS/SSL in practically every language, from .net to C.
There are revocation lists, to "cancel" certificates that have been stolen (although you'll have to find a way to distribute these)
Certificate verification is offline - a node needs no external resources (except for the CA certificate) for verification
Cons:
Since SSL/TLS is a widely-used system, there are many tools to exploit misconfigured / old clients / servers
There are some exploits found in such libraries (e.g. "heartbleed"), so you might need to patch your software a lot.
This solution still requires some serious coding, but it's usually better to rely on an existing and proven system than to go around inventing your own.
I have an Android application that transmits some user account information as JSON over SSL to a central server. Third parties can send messages to the users if they have the users' usernames.
The username can never be queried from our server; in fact, no user data can be queried. The entire system relies on the fact that the user willingly shared their information with the third parties, and the third parties can use that information to deliver messages to the users.
The transmitted data is not encrypted by me, since it is already sent over SSL. I feel that I need to encrypt the data stored on my server to keep it safe. What would be the best way to encrypt this data? When I query the user data, must I encrypt the supplied value and compare it to what is stored in the database, or should I rather decrypt the database records?
Or is it overkill, since only my server application will ever have access to this data?
It's not overkill to encrypt the private data of your users, customers, etc. on your filesystems. For one thing, that hard drive will eventually end up out of your control, and it's extremely unlikely that you're going to properly destroy it once it seems to be non-functional, even though there's a bunch of private data on it, potentially accessible to someone with a modicum of data-recovery expertise and initiative.
I'd suggest PyCrypto (note that PyCrypto is no longer maintained; its drop-in fork PyCryptodome is the current choice).
The real challenge is how you'll manage your keys. One advantage of PK (public-key) cryptography is that you can configure your software with the public (encrypting) key in the code, exposed; that's sufficient for the portions of your application which are storing the data. Then you need to arrange a set of procedures and policies to keep the private key private. That means it can't be in your source code or your version control system; some part of your software has to prompt for it and get it (possibly typed in, possibly pushed in from a USB "keyboard emulator" or other password/key-vault device).
This has to be done on every restart of your server software (the part that needs to read back this customer data), but that can be a long-running daemon and thus only need it a few times per year, or less. You might use something like a modified copy of ssh-agent to decouple the password-management functionality from the rest of your application/server.
(If you're wondering what value there is in protecting a private key that's always in memory while the machine is running: consider what happens if someone breaks in and steals your computer. In the process they'll almost certainly power it off, thus protecting your data from the eventual restart. One weaker option is to use a small USB drive for the private key (password/passphrase) storage. This is still vulnerable to theft, but less of a problem when it comes to eventual drive disposal: hard drives are relatively hard and expensive to properly destroy, but physically destroying a small, cheap USB drive isn't difficult at all.)
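A minimal sketch of that store-with-public-key pattern, using PyCryptodome; the file name and key size are assumptions:

from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

# Hybrid encryption: the server holds only the PUBLIC key at write time;
# the private key stays offline until the data must be read back.
public_key = RSA.import_key(open("server_public.pem", "rb").read())

def encrypt_record(plaintext: bytes):
    # Encrypt the data with a fresh AES session key, then wrap that
    # session key with the RSA public key.
    session_key = get_random_bytes(16)
    wrapped_key = PKCS1_OAEP.new(public_key).encrypt(session_key)
    cipher = AES.new(session_key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return wrapped_key, cipher.nonce, tag, ciphertext  # store all four parts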
I like how Facebook releases features incrementally and not all at once to their entire user base. I get that this could be replicated with a bunch of if-statements smattered all throughout your code, but there has to be a better way to do it. Perhaps that really is all they are doing, but that seems rather inelegant. Does anyone know if there is an industry-standard architecture that can incrementally release features to portions of a user base?
On that same note, I have a feeling that all of their employees see an entirely different, completely beta view of the site. So it seems that they are able to deem certain portions of their website beta and others production, and have some sort of access control list to guide what people see? That seems like it would be slow.
Thanks!
Facebook has a lot of servers, so they can apply new features to only some of them. They also have servers where they test new features before committing them to production.
A more elegant solution is if-statements driven by feature flags, using systems like gargoyle (in Python).
Using a system like this you could do something like:
from gargoyle import gargoyle

if gargoyle.is_active('my_feature_name', request, user):
    ...  # do some stuff for users who have the feature switched on
In a web interface you would be able to describe users, requests, or any other key object your system has, and deliver your feature to them. In fact, via requests you could do things like direct X% of traffic to the new feature, and thus run A/B tests and gather analytics; a sketch of that percentage rollout follows.
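As a hedged illustration of the X% idea (independent of gargoyle; the function name and bucketing scheme are my own):

import hashlib

# Deterministic percentage rollout: hash the (feature, user) pair so the
# same user always gets the same decision. Names here are illustrative.
def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in [0, 100)
    return bucket < percent

if in_rollout("user-123", "new-checkout", percent=10):
    pass  # roughly 10% of users see the new feature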
An approach to this is to have a tiered architecture where the authentication tier hands off to the product tier.
A user enters the product URL and that is resolved to direct them to a cluster of authentication servers. These servers handle authentication and then hand off the session to a cluster of product servers.
Using this approach you can:
Separate out your product servers into 'zones' that run different versions of your application
Run logic on your authentication servers that decides which zone to route the session to
As an example, you could have Zone A running the latest production code and Zone B running beta code. At the point of login the authentication server sends every user with a user name starting with a-m to Zone A and n-z to Zone B. That way roughly half the users are running on the beta product.
Depending on the information you have available at the point of login you could even do something more sophisticated than this. For example you could target a particular user demographic (e.g. age ranges, gender, location, etc).
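A minimal sketch of that login-time routing rule; the zone URLs are made up for the example:

# Route by first letter of the username, as in the Zone A / Zone B example.
ZONES = {"A": "https://zone-a.example.com", "B": "https://zone-b.example.com"}

def zone_for(username: str) -> str:
    return "A" if "a" <= username[0].lower() <= "m" else "B"

def handoff_url(username: str) -> str:
    return ZONES[zone_for(username)]

print(handoff_url("alice"))  # Zone A: production code
print(handoff_url("nina"))   # Zone B: beta code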
Background: I am working on a proposal for a PHP/web-based P2P replication layer for PDO databases. My vision is that someone with a need to crowd-source data sets up this software on a web server, hooks it up to their preferred db platform, and then writes a web app around it to add/edit/delete data locally. Other parties, if they wish, may set up a similar thing - with their own web apps written around it - and set up data-sharing agreements with one or more peers. In the general case, changes made to one database are written to another on a versioned basis, such that they eventually flow around the whole network.
Someone has asked me why I'm not using CouchDB, since it has bi-directional replication and record versioning offered as standard. I wasn't aware of these capabilities, so this turns out to be an excellent question! It occurs to me, if this facility is already available, are there any existing examples of server-to-server replication between separate groups? I've done a great deal of hunting and not found anything.
(I suppose what I am looking for is examples of "group-sourcing": give groups a means to access a shared dataset locally, plus the benefits of critical mass they would be unable to build individually, whilst avoiding the political ownership/control problems associated with the traditional centralised model.)
You might want to check out http://refuge.io/
It is built around CouchDB, but more specifically to form peer groups.
Also, here is a Couchbase-sponsored case study of replication between various groups:
http://site.couchio.couchone.com/case-study-assay-depot
This can be achieved on standard CouchDB installs.
Hope that gives you a start.