Currently learning a bit about chaincode development using Go (I recently worked with Ethereum). I have the following code:
import (
    "encoding/json"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    "github.com/hyperledger/fabric/protos/peer"
)

type Person struct {
    Name     string `json:"name"`     // fields must be exported (and tagged) for encoding/json
    Lastname string `json:"lastname"` // ...
    SSN      string `json:"ssn"`      // ...
}

func (p *Person) Init(stub shim.ChaincodeStubInterface) peer.Response {
    args := stub.GetArgs()
    var person Person
    // just assume that args[0] is a JSON payload
    if err := json.Unmarshal(args[0], &person); err != nil { // Unmarshal needs a pointer
        return shim.Error(err.Error())
    }
    p.Name = person.Name
    p.Lastname = person.Lastname
    p.SSN = person.SSN
    return shim.Success(nil)
}
......
During the Init function I pass a person on the instantiation of the chaincode. My question then is: upon another call against the chaincode, will the p instance still be persisted from the Init function, such that I can read the p.Name given during Init? How does chaincode manage this?
You may find it appears to be persisted if you try it, but it will only be persisted in memory. If the chaincode restarts, or if the peer restarts, then the data will be lost. So this is not a recommended approach.
Chaincode has access to various stores - the world state (similar to the world state in Ethereum) and private data collections. Data stored in the world state is shared across all members of the channel.
You can put data into the world state using stub.PutState(key, value), and get data back from the world state using stub.GetState(key). Your Init function should store the data in the world state, and then your Invoke function can get the data out of the world state when any transactions are processed.
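For illustration, here is a minimal sketch of that pattern, using the same shim imports as the snippet above. The PersonChaincode type and the "person" key are assumptions for the example, not anything mandated by Fabric:

type PersonChaincode struct{}

func (t *PersonChaincode) Init(stub shim.ChaincodeStubInterface) peer.Response {
    args := stub.GetArgs()
    // assume args[0] is the person JSON, as in the question
    if err := stub.PutState("person", args[0]); err != nil {
        return shim.Error(err.Error())
    }
    return shim.Success(nil)
}

func (t *PersonChaincode) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    // The value survives chaincode and peer restarts because it lives in
    // the channel's world state rather than in the Go process's memory.
    personBytes, err := stub.GetState("person")
    if err != nil {
        return shim.Error(err.Error())
    }
    return shim.Success(personBytes)
}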
I recommend you check out the FabCar sample if you haven't already: https://github.com/hyperledger/fabric-samples/blob/release-1.4/chaincode/fabcar/go/fabcar.go
The initLedger transaction adds 10 cars to the world state. The queryCar transaction reads a car from the world state. There are other transactions for querying all of the cars and for updating a car's owner.
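To give a flavor of the sample, the "query all cars" transaction boils down to a range query over the world state. A simplified sketch (not the sample's exact code; it assumes the release-1.4 shim imports shown earlier, plus "bytes"):

func (s *SmartContract) queryAllCars(stub shim.ChaincodeStubInterface) peer.Response {
    // CAR0..CAR999 is the key range the sample uses for its car records
    resultsIterator, err := stub.GetStateByRange("CAR0", "CAR999")
    if err != nil {
        return shim.Error(err.Error())
    }
    defer resultsIterator.Close()

    var buffer bytes.Buffer
    buffer.WriteString("[")
    for resultsIterator.HasNext() {
        kv, err := resultsIterator.Next()
        if err != nil {
            return shim.Error(err.Error())
        }
        if buffer.Len() > 1 {
            buffer.WriteString(",")
        }
        buffer.Write(kv.Value) // each value is a car's JSON document
    }
    buffer.WriteString("]")
    return shim.Success(buffer.Bytes())
}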
Suppose I have two organizations on the same channel, A and B, and a chaincode including these methods:
queryA;
queryB (that returns a different set of data as output than queryA);
create;
update;
submitNewData.
How can I restrict access to the individual methods so that, for example, a member of A can only access create, update and queryA, while a member of B can only access submitNewData and queryB? That is, a member of A can create the asset and modify a subset of its fields (with "update"), while a member of B can only modify another subset of fields (via "submitNewData") and cannot create the asset.
If a peer of B executes a "peer chaincode invoke" against create or queryA, access should be denied.
Should I use ACLs? But how can I refer to the specific smart contract inside the chaincode?
You begin talking about a "member", but later you talk about a "peer".
You cannot restrict operations per peer. Every peer joined to the channel with the chaincode installed must proceed in the same way, so that the chaincode works in a deterministic way.
But you can, of course, restrict operations for the requestor by evaluating its user ID, MSP ID, certificate attributes or whatever you want. For instance, in Go chaincodes this is usually evaluated in a BeforeTransaction function:
import (
    "strings"

    "github.com/hyperledger/fabric-chaincode-go/pkg/cid"
    "github.com/hyperledger/fabric-contract-api-go/contractapi"
)

type SmartContract struct {
    contractapi.Contract
}

func checkACL(ctx contractapi.TransactionContextInterface) error {
    // Read incoming data from the stub
    stub := ctx.GetStub()
    // Extract the operation name (contractapi prefixes it with the contract name)
    operation, _ := stub.GetFunctionAndParameters()
    operationSplit := strings.Split(operation, ":")
    operationName := operationSplit[len(operationSplit)-1]
    // Get requestor info from the stub
    mspID, err := cid.GetMSPID(stub)
    if err != nil {
        return err
    }
    userID, err := cid.GetID(stub)
    if err != nil {
        return err
    }
    role, found, err := cid.GetAttributeValue(stub, "role")
    if err != nil {
        return err
    }
    // Evaluate your ACLs by contrasting the operation and the requestor
    // (operationName, mspID, userID, and role when found is true) your own way
    _, _, _, _, _ = operationName, mspID, userID, role, found
    // Return an error when disallowed
    // Operation allowed
    return nil
}

func BeforeTransaction(ctx contractapi.TransactionContextInterface) error {
    return checkACL(ctx)
}

func NewSmartContract() *SmartContract {
    sc := new(SmartContract)
    sc.BeforeTransaction = BeforeTransaction
    // ...
    return sc
}
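A minimal way to encode the mapping from the question could be a lookup table consulted from checkACL. The MSP IDs Org1MSP and Org2MSP are placeholders; substitute whatever A and B are actually called in your network:

// Hypothetical ACL table: operation name -> MSP IDs allowed to invoke it.
var allowedMSPs = map[string][]string{
    "create":        {"Org1MSP"},
    "update":        {"Org1MSP"},
    "queryA":        {"Org1MSP"},
    "submitNewData": {"Org2MSP"},
    "queryB":        {"Org2MSP"},
}

func isAllowed(operationName, mspID string) bool {
    for _, allowed := range allowedMSPs[operationName] {
        if allowed == mspID {
            return true
        }
    }
    return false
}

checkACL would then return an error whenever isAllowed(operationName, mspID) is false, so the transaction is rejected before any business logic runs.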
In Service Fabric I am trying to call an ActorService and get a list of all actors. I'm not getting any errors, but no actors are returned. It's always zero.
This is how I add actors:
ActorProxy.Create<IUserActor>(
new ActorId(uniqueName),
"fabric:/ECommerce/UserActorService");
And this is how I try to get a list of all actors:
var proxy = ActorServiceProxy.Create(new Uri("fabric:/ECommerce/UserActorService"), 0);
ContinuationToken continuationToken = null;
CancellationToken cancellationToken = new CancellationTokenSource().Token;
List<ActorInformation> activeActors = new List<ActorInformation>();
do
{
    PagedResult<ActorInformation> page = await proxy.GetActorsAsync(continuationToken, cancellationToken);
    activeActors.AddRange(page.Items.Where(x => x.IsActive));
    continuationToken = page.ContinuationToken;
}
while (continuationToken != null);
But no matter how many users I've added, the page object will always have zero items. What am I missing?
The second argument in ActorServiceProxy.Create(Uri, long, string) is the partition key (you can find out more about actor partitioning in the Service Fabric documentation).
The issue here is that your code checks only one partition (partitionKey = 0).
So the solution is quite simple - you have to iterate over all partitions of your service. There is an existing answer with a code sample showing how to get the partitions and iterate over them.
UPDATE 2019.07.01
I didn't spot this the first time, but the reason why you aren't getting any actors returned is that you aren't creating any actors - you are creating proxies!
The reason for such confusion is that Service Fabric actors are virtual, i.e. from the user's point of view an actor always exists, but in real life Service Fabric manages the actor object's lifetime automatically, persisting and restoring its state as needed.
Here is a quote from the documentation:
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again, causes a new actor object to be constructed. An actor's state outlives the object's lifetime when stored in the state manager.
In your example you never send any messages to the actors!
Here is a code example I wrote in the Program.cs of a newly created Actor project:
// Please don't forget to replace "fabric:/Application16/Actor1ActorService" with your actor service name.
ActorRuntime.RegisterActorAsync<Actor1> (
(context, actorType) =>
new ActorService(context, actorType)).GetAwaiter().GetResult();
var actor = ActorProxy.Create<IActor1>(
ActorId.CreateRandom(),
new Uri("fabric:/Application16/Actor1ActorService"));
_ = actor.GetCountAsync(default).GetAwaiter().GetResult();
ContinuationToken continuationToken = null;
var activeActors = new List<ActorInformation>();
var serviceName = new Uri("fabric:/Application16/Actor1ActorService");
using (var client = new FabricClient())
{
var partitions = client.QueryManager.GetPartitionListAsync(serviceName).GetAwaiter().GetResult();
foreach (var partition in partitions)
{
var pi = (Int64RangePartitionInformation) partition.PartitionInformation;
var proxy = ActorServiceProxy.Create(new Uri("fabric:/Application16/Actor1ActorService"), pi.LowKey);
var page = proxy.GetActorsAsync(continuationToken, default).GetAwaiter().GetResult();
activeActors.AddRange(page.Items);
continuationToken = page.ContinuationToken;
}
}
Thread.Sleep(Timeout.Infinite);
Pay special attention to the line:
_ = actor.GetCountAsync(default).GetAwaiter().GetResult();
Here is where the first message to actor is sent.
Hope this helps.
In Hyperledger Composer, is logic.js related to transactions?
How do I access a list from logic.js?
Are there any good tutorials for understanding what I can do in the logic.js file?
Though there is already an answer, I'm explaining how logic.js relates to a transaction, so that readers can understand.
A transaction is an asset transfer onto or off the ledger. In Hyperledger Composer, transactions are defined in the model file (which has a .cto extension). The logic.js file (it can be named anything.js) contains the transaction processor functions that execute the transactions defined in the model file.
Here is sample model file:
/**
* My commodity trading network
*/
namespace org.acme.mynetwork
asset Commodity identified by tradingSymbol {
o String tradingSymbol
--> Trader owner
}
participant Trader identified by tradeId {
o String tradeId
o String firstName
o String lastName
}
transaction Trade {
--> Commodity commodity
--> Trader newOwner
}
In the model file, a Trade transaction is defined, specifying a relationship to an asset and a participant. The Trade transaction is intended to simply accept the identifier of the Commodity asset that is being traded and the identifier of the Trader participant to set as the new owner.
Here is a logic file, which contains the JavaScript logic to execute the transactions defined in the model file.
/**
* Track the trade of a commodity from one trader to another
* @param {org.acme.mynetwork.Trade} trade - the trade to be processed
* @transaction
*/
async function tradeCommodity(trade) {
trade.commodity.owner = trade.newOwner;
let assetRegistry = await getAssetRegistry('org.acme.mynetwork.Commodity');
await assetRegistry.update(trade.commodity);
}
Tutorial
Composer Official Documentation
IBM Blockchain Developer Tutorial
Yes, but it doesn't have to have the filename 'logic.js' exclusively. See more here: https://hyperledger.github.io/composer/latest/reference/js_scripts
You model your array fields first (https://hyperledger.github.io/composer/latest/reference/cto_language.html), then code it. Arrays are handled like any JavaScript array. See an example here -> https://github.com/hyperledger/composer-sample-networks/blob/master/packages/trade-network/lib/logic.js#L45 of query results being handled in an array. Also, temperatureReadings is an array being handled in the sample networks here: https://github.com/hyperledger/composer-sample-networks/blob/v0.16.x/packages/perishable-network/lib/logic.js#L37
I would encourage you to try out the tutorials to validate your understanding. https://hyperledger.github.io/composer/latest/tutorials/tutorials
Let's say I have two types of Hazelcast nodes running in a cluster:
"Leader" nodes – these are able to load and populate Hazelcast map M. Leaders will also update values in M from time to time (based on external resource).
"Follower" nodes – these will need to read from M
My intent is for Follower nodes to trigger loading missing elements into M (loading thus needs to be done on the Leader side).
Roughly, the steps made to get an element from map could look like this:
IMap m = hazelcastInstance.getMap("M");
if (!m.containsKey(k)) {
if (iAmLeader()) {
Object fresh = loadByKey(k); // loading from external resource
return m.put(k, fresh);
} else {
makeSomeLeaderPopulateValueForKey(k);
}
}
return m.get(k);
What approach could you suggest?
Notes
I want Followers to act as nodes, not just clients, because there are going to be far more Follower instances than Leaders and I would like them to participate in load distribution.
I could just build another level of service that would run only on Leader nodes and provide an interface to populate the map with requested keys. But that would mean adding an extra layer of communication and configuration, and I was hoping that the kind of requirements stated above could be solved within a single Hazelcast cluster.
I think I may have found an answer in the form of MapLoader (EDIT: since originally posting, I have confirmed this is indeed the way to do this).
final Config config = new Config();
config.getMapConfig("MY_MAP_NAME").setMapStoreConfig(
new MapStoreConfig().setImplementation(new MapLoader<KeyType, ValueType>(){
@Override
public ValueType load(final KeyType key) {
//when a client asks for data for corresponding key of type
//KeyType that isn't already loaded
//this function will be invoked and give you a chance
//to load it and return it
ValueType rv = ...;
return rv;
}
@Override
public Map<KeyType, ValueType> loadAll(
final Collection<KeyType> keys) {
//Similar to MapLoader#load(KeyType), except this is
//a batched version of it for performance gains.
//this gets called on first access to the cache,
//where MapLoader#loadAllKeys() is called to get
//the keys parameter for this function
Map<KeyType, ValueType> rv = new HashMap<>();
keys.forEach((key) -> {
rv.put(key, /*figure out what key means*/);
});
return rv;
}
@Override
public Set<KeyType> loadAllKeys() {
//Prepopulate all the keys. My understanding is that
//this is an initialization step, to give you a chance
//to load data on startup so an initial set of data
//will be available to anyone using the cache. Any keys
//returned here are sent to MapLoader#loadAll(Collection)
Set<KeyType> rv = new HashSet<>();
//figure out what keys need to be in the return value
//to load a key into cache at first access to this map,
//named "MY_MAP_NAME" in this example
return rv;
}
}));
config.getGroupConfig().setName("MY_INSTANCE_NAME").setPassword("my_password");
final HazelcastInstance hazelcast = Hazelcast
.getOrCreateHazelcastInstance(config);
I have the following code:
PhotoFactory factory = PhotoFactory.getFactory (PhotoResource.PICASA);
PhotoSession session = factory.openSession (login, password);
PhotoAlbum album = factory.createAlbum ();
Photo photo = factory.createPhoto ();
album.addPhoto (photo);
if (session.canUpload ()) {
session.uploadAlbum (album);
}
session.close ();
I'm not sure that I've chosen the correct name. It's not so important, but I'm just curious what you would have chosen in my case. Another version is manager:
PhotoManager manager = PhotoManager.getManager (PhotoResource.PICASA);
PhotoSession session = manager.openSession (login, password);
PhotoAlbum album = manager.createAlbum ();
Photo photo = manager.createPhoto ();
album.addPhoto (photo);
if (session.canUpload ()) {
session.uploadAlbum (album);
}
session.close ();
UPD: I've just found the following example in the Hibernate javadocs:
Session sess = factory.openSession();
Transaction tx;
try {
tx = sess.beginTransaction();
//do some work
...
tx.commit();
}
Is that a naming mistake?
At a very high level, I'd call it a Factory if it's only responsible for creating instances of classes; and a Manager if it needs to oversee the ongoing existence of objects, and how they relate to other objects, etc.
In the code snippets you've posted you're only creating objects and thus, in my opinion, Factory is an appropriate name. Though you should bear in mind what the conceptual responsibilities of the class are and whether they might expand in future.
That said, I would classically expect a factory not to have to worry about creating sessions itself, but rather to have sessions passed into its createFoo calls as required, so there's definitely some fudge factor as things are set up. Personally, I think I would have some other abstract entity responsible for creating sessions, and then pass those into the PhotoFactory.
I would choose the second solution (manager), since objects ending with 'Factory' are generally considered implementations of the factory pattern, which essentially creates new instances of objects. This is not the case in your example, since it manages a session for a particular service.
What's nice in this example is that your first static method call, getManager, is in fact a factory, since it creates instances of Manager objects.