Is it possible to have multiple mappings map to the same struct in Solidity?

What I'm trying to achieve is to have two mappings with a struct value type pointing to the same struct instance, so I can look up and edit a specific struct in two ways. However, updating the struct through one mapping does not seem to update it in the other. Here is my simplified contract to illustrate the idea:
contract Example {
    struct Pool {
        uint id;
        uint amount;
    }

    mapping(uint => Pool) public poolsByDay;
    mapping(uint => Pool) public poolsById;

    constructor(uint day) public {
        for (uint i = 1; i <= day; i++) {
            Pool memory pool = Pool({
                id: i,
                amount: 0
            });
            poolsByDay[i] = pool;
            poolsById[i] = pool;
        }
    }

    function deposit(uint day, uint amount) external {
        Pool storage pool = poolsByDay[day];
        pool.amount += amount;
    }
}
Notice that the keys for poolsByDay may change every day. And I want to be able to lookup a pool either by day or by ID.
Here is my test:
const example = await Example.new(7)
const day = 1
const amount = 100e18
await example.deposit(day, amount.toString())
const pool = await example.poolsByDay(day)
const anotherPool = await example.poolsById(pool.id)
assert.equal(pool.amount, amount) // succeeded
assert.equal(anotherPool.amount, amount) // failed
From what I understand, a Solidity struct is a reference type, so I expected that modifying one pool would be reflected in both mappings poolsByDay and poolsById, but it isn't. Did I fail to initialize the two mappings correctly?

No. Assigning a memory struct to a mapping entry copies it into storage, so the two mappings end up holding distinct structs. You'll need to handle the indirection yourself, e.g. by using a mapping from day to ID:
contract Example {
    struct Pool {
        uint id;
        uint amount;
    }

    mapping(uint => uint) public poolsByDay;
    mapping(uint => Pool) public poolsById;

    constructor(uint day) public {
        for (uint i = 1; i <= day; i++) {
            poolsById[i] = Pool({ id: i, amount: 0 });
            poolsByDay[i] = i;
        }
    }

    function deposit(uint day, uint amount) external {
        Pool storage pool = poolsById[poolsByDay[day]];
        pool.amount += amount;
    }
}
(In this contrived example, they seem to both use the same keys, but I assume in your real code there's a reason for having two mappings.)
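Since the question notes that the day keys can change, here is a minimal sketch of how the indirection can be used day to day; the function names below are my own, not part of the answer:
    // hypothetical helpers, to be added inside the Example contract above
    function setPoolForDay(uint day, uint poolId) external {
        // remap a day to a different pool; no struct data is copied
        poolsByDay[day] = poolId;
    }
    function getPoolByDay(uint day) external view returns (uint id, uint amount) {
        // both lookups resolve to the same storage struct, so deposits show up here too
        Pool storage pool = poolsById[poolsByDay[day]];
        return (pool.id, pool.amount);
    }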

Related

Make Structs, Enums, Constructors and Mappings work together

So I want to implement a simple CarShop contract in Solidity.
The contract should be initiated with a constructor where I input the current stock amount for the cars I already have in my Shop. I call these in the constructor (ToyotaCount, AudiCount, BmwCount)...
Then I think I need to create a struct that would store the CarCount and the CarType.
So I created an enum with (Toyota, Audi, Bmw)...
Finally, I would like to create this struct with the CarCount values from the constructor (as the initial state) together with the carType of the cars from the enum... However, I am confused about how exactly I should implement it and where I am going wrong.
Also, as a next step I want to implement a function called "AddCar" to update the values in the struct when I add some cars... for example, I want to add 3 Audi cars...
Can you perhaps show me how I would need to correct my code, so that the constructor, struct, and enum work together? I would also really appreciate it if you could point me to some similar projects or implementations.
This is my current code. I think I set up the constructor correctly, but then something goes wrong with the interplay of the struct, the enum, and the constructor.
'''
contract CarShop {
address owner;
uint256 toyotaCount;
uint256 audiCount;
uint256 bmwCount;
constructor(uint256 _toyotaCount, uint256 _audiCount, uint256 _bmwCount) {
owner = msg.sender;
toyotaCount = _toyotaCount;
audiCount = _audiCount;
bmwCount = _bmwCount;
}
enum CarType {None, Toyota, Audi, Bmw}
struct Cars {
CarType carType;
uint count;
}
Cars public item;
Cars memory toyota = Cars(carType, toyotaCount)
}
'''
I made some changes to your contract and added some comments. Note that you may still want to change how the cars are stored, because this version uses three separate Cars structs for the three different car types (see the mapping-based sketch at the end of this answer).
contract CarShop {
address owner;
uint256 toyotaCount;
uint256 audiCount;
uint256 bmwCount;
Cars public toyota;
Cars public audi;
Cars public bmw;
enum CarType {Toyota, Audi, Bmw}
struct Cars {
CarType carType;
uint count;
}
constructor(uint256 _toyotaCount, uint256 _audiCount, uint256 _bmwCount) {
owner = msg.sender;
toyotaCount = _toyotaCount;
audiCount = _audiCount;
bmwCount = _bmwCount;
// initialize the three cars with their count
toyota = Cars(CarType.Toyota, _toyotaCount);
audi = Cars(CarType.Audi, _audiCount);
bmw = Cars(CarType.Bmw, _bmwCount);
}
/**
* @dev Add cars of the given type to the stock (only the owner can call this)
* @param _carType type of the car: 0 for Toyota, 1 for Audi, 2 for Bmw
* @param _count number of cars to add
*/
function addCarCount(CarType _carType, uint256 _count) public {
require(msg.sender == owner, "Only owner can add car count");
if(_carType == CarType.Toyota) {
toyota.count += _count;
} else if(_carType == CarType.Audi) {
audi.count += _count;
} else if(_carType == CarType.Bmw) {
bmw.count += _count;
}
}
}
I deployed the contract with 10 of each car in stock and created a script that adds 3 cars to the audi struct.
import { ethers } from "hardhat";
async function main() {
const [owner] = await ethers.getSigners();
const contractAddress = process.env.CAR_CONTRACT_ADDRESS;
const contract = await ethers.getContractFactory("CarShop");
const contractInstance = await contract.attach(`${contractAddress}`);
const audi = await contractInstance.audi();
console.log(audi);
await contractInstance.connect(owner).addCarCount(1, 3);
const audiAfter = await contractInstance.audi();
console.log(audiAfter);
}
main().catch((error) => {
console.error(error);
process.exitCode = 1;
});
Results:
[
1,
BigNumber { value: "10" },
carType: 1,
count: BigNumber { value: "10" }
]
[
1,
BigNumber { value: "13" },
carType: 1,
count: BigNumber { value: "13" }
]
You cannot declare a memory variable at the contract level, only inside a function:
Cars memory toyota = Cars(carType, toyotaCount)
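As mentioned above, a cleaner way to store the cars is a single mapping keyed by the CarType enum, so addCarCount doesn't need the if/else chain. This is my own sketch, not part of the original answer:
contract CarShopMapped {
    enum CarType { Toyota, Audi, Bmw }
    struct Cars {
        CarType carType;
        uint count;
    }
    address owner;
    // one Cars struct per car type, looked up by the enum value
    mapping(CarType => Cars) public cars;
    constructor(uint256 _toyotaCount, uint256 _audiCount, uint256 _bmwCount) {
        owner = msg.sender;
        cars[CarType.Toyota] = Cars(CarType.Toyota, _toyotaCount);
        cars[CarType.Audi] = Cars(CarType.Audi, _audiCount);
        cars[CarType.Bmw] = Cars(CarType.Bmw, _bmwCount);
    }
    function addCarCount(CarType _carType, uint256 _count) public {
        require(msg.sender == owner, "Only owner can add car count");
        cars[_carType].count += _count;
    }
}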

Solidity struct mapping not stored in contract

I read many articles on how to use mappings and mappings inside structs, and came up with something that should be correct, based on a few threads.
I know that since Solidity 0.7.0 things have changed for structs containing nested mappings, and I did the following:
contract Test {
constructor() {
}
struct Bid {
uint auction_id;
address addr;
uint amount;
}
struct Auction {
uint id;
string dtype;
uint start_date;
uint end_date;
string label;
uint price;
uint amount;
bool closed;
mapping(uint => Bid) bids;
uint bidCount;
}
uint public auctionCount = 0;
mapping(uint => Auction) public auctions;
function createAuction( string memory plabel, string memory ptype, uint nbhours, uint pprice) external {
Auction storage nd = auctions[auctionCount];
nd.id = auctionCount;
nd.dtype = ptype;
nd.start_date = block.timestamp;
nd.end_date = block.timestamp+nbhours*60*60;
nd.label = plabel;
nd.price = pprice;
nd.amount = 0;
nd.closed = false;
nd.bidCount = 0;
auctionCount++;
}
}
Everything compiles fine, and the createAuction transaction is successful.
When checking the contract in Ganache, auctionCount is incremented, but I see no items added to the auctions mapping.
I also debugged the transaction with Truffle: it goes through the function, assigning values during the execution of createAuction, but the changes are not persistent.
I even tried removing one string attribute, since I read that having 3 of them could have been a problem (OK, I only have 2 at most ;)).
I must have missed something, but I'm out of options right now.
Thanks in advance for your help!
If you are talking about the auctions mapping, make sure you use the correct index when accessing mapping items: in your case, the first Auction item you add to the mapping will have index 0, not 1. I tried your contract in Remix, and everything worked well.
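For completeness, here is a hedged sketch of how you might read an auction back (the getter name is mine, not from the original contract). Note that the auto-generated auctions(uint) getter returns every field except the nested bids mapping, and the first auction created lives at index 0:
// hypothetical view function, added inside the Test contract above
function getAuction(uint index) external view returns (uint id, string memory label, uint price, uint bidCount, bool closed) {
    Auction storage a = auctions[index];
    return (a.id, a.label, a.price, a.bidCount, a.closed);
}
// After one successful createAuction(...) call, getAuction(0) returns the new
// auction; getAuction(1) is still empty (all zero values).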

For await x of y using an AsyncIterator causes memory leak

When using an AsyncIterator in a for await...of loop, I have a substantial memory leak.
I need this when scraping an HTML page which includes the information about the next HTML page to be scraped:
Scrape Data
Evaluate Data
Scrape Next Data
The async part is needed since axios is used to obtain the HTML.
Here is a repro, which lets you watch the memory rise from ~4MB to ~25MB by the end of the script. The memory is not freed until the program terminates.
const scraper = async ():Promise<void> => {
let browser = new BrowserTest();
let parser = new ParserTest();
for await (const data of browser){
console.log(await parser.parse(data))
}
}
class BrowserTest {
private i: number = 0;
public async next(): Promise<IteratorResult<string>> {
this.i += 1;
return {
done: this.i > 1000,
value: 'peter '.repeat(this.i)
}
}
[Symbol.asyncIterator](): AsyncIterator<string> {
return this;
}
}
class ParserTest {
public async parse(data: string): Promise<string[]> {
return data.split(' ');
}
}
scraper()
It looks like the data from the for await...of loop is dangling in memory, and the call stack gets huge as well.
In the repro the problem is still manageable, but in my actual code a whole HTML page (~250 kB per call) stays in memory.
A screenshot would show the heap memory on the first iteration compared to the heap memory after the last iteration (I cannot post inline screenshots yet).
The expected workflow would be the following:
Obtain Data
Process Data
Extract Info for the next "Obtain Data"
Free all Memory from the last "Obtain Data"
Use extracted information to restart the loop with new Data obtained.
I am unsure whether an AsyncIterator is the right choice here to achieve what is needed.
Any help/hint would be appreciated!
In Short
When using an AsyncIterator, the memory rises drastically. It drops once the iteration is done.
The x in for await (x of y) is not freed until the iteration is done, and neither is any Promise awaited inside the loop.
I came to the conclusion that the garbage collector cannot collect the contents of the iteration, since the Promises generated by the AsyncIterator only fully resolve once the iteration is done.
I think this might be a bug.
Workaround Repro
As a workaround to free the contents of the parser, we encapsulate the result in a lightweight container and then free its contents, so only the container itself remains in memory.
The data object cannot be freed even if you use the same technique to encapsulate it - at least that is how it looks when debugging.
const scraper = async ():Promise<void> => {
let browser = new BrowserTest();
for await (const data of browser){
let parser = new ParserTest();
let result = await parser.parse(data);
console.log(result);
/**
* This avoids memory leaks, due to a garbage collector bug
* of async iterators in js
*/
result.free();
}
}
class BrowserTest {
private i: number = 0;
private value: string = "";
public async next(): Promise<IteratorResult<string>> {
this.i += 1;
this.value = 'peter '.repeat(this.i);
return {
done: this.i > 1000,
value: this.value
}
}
public [Symbol.asyncIterator](): AsyncIterator<string> {
return this;
}
}
/**
* Result class for wrapping the result of the parser.
*/
class Result {
private result: string[] = [];
constructor(result: string[]){
this.setResult(result);
}
public setResult(result: string[]) {
this.result = result;
}
public getResult(): string[] {
return this.result;
}
public free(): void {
delete this.result;
}
}
class ParserTest {
public async parse(data: string): Promise<Result>{
let result = data.split(' ');
return new Result(result);
}
}
scraper();
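As a side note (my own sketch, not part of the original workaround): an async generator keeps each page in a local scope that ends with the iteration step, which may make it easier for the garbage collector to reclaim it between iterations.
// Hypothetical alternative: an async generator instead of a hand-rolled AsyncIterator.
// Nothing accumulates on the iterator object between iterations.
async function* pages(total: number): AsyncGenerator<string> {
    for (let i = 1; i <= total; i++) {
        // in the real code this would be an axios request for the next page
        yield 'peter '.repeat(i);
    }
}
const scrapeWithGenerator = async (): Promise<void> => {
    const parser = new ParserTest();
    for await (const data of pages(1000)) {
        const result = await parser.parse(data);
        console.log(result.getResult().length);
        result.free();
    }
};
scrapeWithGenerator();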
Workaround in actual context
What is not shown in the repro solution is that we also try to free the result of the iteration itself. This does not seem to have any effect, though.
public static async scrape<D,M>(scraper: IScraper<D,M>, callback: (data: DataPackage<Object,Object> | null) => Promise<void>) {
let browser = scraper.getBrowser();
let parser = scraper.getParser();
for await (const parserFragment of browser) {
const fragment = await parserFragment;
const json = await parser.parse(fragment);
await callback(json);
json.free();
fragment.free();
}
}
See: https://github.com/demokratie-live/scapacra/blob/master/src/Scraper.ts
To test with an actual Application: https://github.com/demokratie-live/scapacra-bt (yarn dev ConferenceWeekDetail)
References
Github NodeJs: https://github.com/nodejs/node/issues/30298
Github DEMOCRACY: https://github.com/demokratie-live/democracy-client/issues/926
Conclusion
We found a feasible solution for us, so I am closing this issue. The follow-up is directed at the Node.js repo in order to get this potential bug fixed:
https://github.com/nodejs/node/issues/30298

Cosmos inserts won't parallelize effectively

Meta-Question:
We're pulling data from EventHub, running some logic, and saving it off to cosmos. Currently Cosmos inserts are our bottleneck. How do we maximize our throughput?
Details
We're trying to optimize our Cosmos throughput and there seems to be some contention in the SDK that makes parallel inserts only marginally faster than serial inserts.
We're logically doing:
for (int i = 0; i < insertCount; i++)
{
taskList.Add(InsertCosmos(sdkContainerClient));
}
var parallelTimes = await Task.WhenAll(taskList);
Here's the results comparing serial inserts, parallel inserts, and "faking" an insert (with Task.Delay):
Serial took: 461ms for 20
- Individual times 28,8,117,19,14,11,10,12,5,8,9,11,18,15,79,23,14,16,14,13
Cosmos Parallel
Parallel took: 231ms for 20
- Individual times 17,15,23,39,45,52,72,74,80,91,96,98,108,117,123,128,139,146,147,145
Just Parallel (no cosmos)
Parallel took: 27ms for 20
- Individual times 27,26,26,26,26,26,26,25,25,25,25,25,25,24,24,24,23,23,23,23
Serial is obvious (just add each value).
No-Cosmos (the last timing) is also obvious (just take the min time).
But parallel Cosmos doesn't parallelize nearly as well, indicating there's some contention.
We're running this on a VM in Azure (same datacenter as Cosmos), have enough RUs so we aren't getting 429s, and are using Microsoft.Azure.Cosmos 3.2.0.
Full Code Sample
class Program
{
public static void Main(string[] args)
{
CosmosWriteTest().Wait();
}
public static async Task CosmosWriteTest()
{
var cosmosClient = new CosmosClient("todo", new CosmosClientOptions { ConnectionMode = ConnectionMode.Direct });
var database = cosmosClient.GetDatabase("<ourcontainer>");
var sdkContainerClient = database.GetContainer("<ourcontainer>");
int insertCount = 25;
//Warmup
await sdkContainerClient.CreateItemAsync(new TestObject());
//---Serially inserts into Cosmos---
List<long> serialTimes = new List<long>();
var serialTimer = Stopwatch.StartNew();
Console.WriteLine("Cosmos Serial");
for (int i = 0; i < insertCount; i++)
{
serialTimes.Add(await InsertCosmos(sdkContainerClient));
}
serialTimer.Stop();
Console.WriteLine($"Serial took: {serialTimer.ElapsedMilliseconds}ms for {insertCount}");
Console.WriteLine($" - Individual times {string.Join(",", serialTimes)}");
//---Parallel inserts into Cosmos---
Console.WriteLine(Environment.NewLine + "Cosmos Parallel");
var parallelTimer = Stopwatch.StartNew();
var taskList = new List<Task<long>>();
for (int i = 0; i < insertCount; i++)
{
taskList.Add(InsertCosmos(sdkContainerClient));
}
var parallelTimes = await Task.WhenAll(taskList);
parallelTimer.Stop();
Console.WriteLine($"Parallel took: {parallelTimer.ElapsedMilliseconds}ms for {insertCount}");
Console.WriteLine($" - Individual times {string.Join(",", parallelTimes)}");
//---Testing parallelism minus cosmos---
Console.WriteLine(Environment.NewLine + "Just Parallel (no cosmos)");
var justParallelTimer = Stopwatch.StartNew();
var noCosmosTaskList = new List<Task<long>>();
for (int i = 0; i < insertCount; i++)
{
noCosmosTaskList.Add(InsertCosmos(sdkContainerClient, true));
}
var justParallelTimes = await Task.WhenAll(noCosmosTaskList);
justParallelTimer.Stop();
Console.WriteLine($"Parallel took: {justParallelTimer.ElapsedMilliseconds}ms for {insertCount}");
Console.WriteLine($" - Individual times {string.Join(",", justParallelTimes)}");
}
//inserts
private static async Task<long> InsertCosmos(Container sdkContainerClient, bool justDelay = false)
{
var timer = Stopwatch.StartNew();
if (!justDelay)
await sdkContainerClient.CreateItemAsync(new TestObject());
else
await Task.Delay(20);
timer.Stop();
return timer.ElapsedMilliseconds;
}
//Test object to save to Cosmos
public class TestObject
{
public string id { get; set; } = Guid.NewGuid().ToString();
public string pKey { get; set; } = Guid.NewGuid().ToString();
public string Field1 { get; set; } = "Testing this field";
public double Number { get; set; } = 12345;
}
}
This is the scenario for which Bulk is being introduced. Bulk mode is in preview at this moment and available in the 3.2.0-preview2 package.
What you need to do to take advantage of Bulk is turn the AllowBulkExecution flag on:
new CosmosClient(endpoint, authKey, new CosmosClientOptions() { AllowBulkExecution = true } );
This mode was made to benefit this scenario you describe, a list of concurrent operations that need throughput.
We have a sample project here: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/BulkSupport
And we are still working on the official documentation, but the idea is that when concurrent operations are issued, instead of executing them as individual requests like you are seeing right now, the SDK will group them based on partition affinity and execute them as grouped (batch) operations, reducing the backend service calls and potentially increasing throughput by 50%-100% depending on the volume of operations. This mode will consume more RU/s, as it is pushing a higher volume of operations per second than issuing the operations individually (so if you hit 429s, it means the bottleneck is now the provisioned RU/s).
var cosmosClient = new CosmosClient("todo", new CosmosClientOptions { AllowBulkExecution = true });
var database = cosmosClient.GetDatabase("<ourcontainer>");
var sdkContainerClient = database.GetContainer("<ourcontainer>");
//The more operations the better, just 25 might not yield a great difference vs non bulk
int insertCount = 10000;
//Don't do any warmup
List<Task> operations = new List<Task>();
var timer = Stopwatch.StartNew();
for (int i = 0; i < insertCount; i++)
{
operations.Add(sdkContainerClient.CreateItemAsync(new TestObject()));
}
await Task.WhenAll(operations);
timer.Stop();
Important: This is a feature that is still in preview. Since this mode is optimized for throughput (not latency), no single individual operation will have great operational latency.
If you want to optimize even further, and your data source lets you access Streams (avoid serialization), you can use the CreateItemStream SDK methods for even better throughput.
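For illustration, here is a rough sketch of what a stream-based insert could look like (this is my own example, assuming the container's partition key path is /pKey, matching the TestObject above; it is not official sample code):
private static async Task<long> InsertCosmosStream(Container container)
{
    var timer = Stopwatch.StartNew();
    var item = new TestObject();
    // Serialize the item ourselves and hand the SDK a stream, bypassing its serializer.
    using (var payload = new MemoryStream(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(item))))
    using (ResponseMessage response = await container.CreateItemStreamAsync(payload, new PartitionKey(item.pKey)))
    {
        if (!response.IsSuccessStatusCode)
            Console.WriteLine($"Create failed with {response.StatusCode}");
    }
    timer.Stop();
    return timer.ElapsedMilliseconds;
}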

Tight loop - disk at 100%, quad-core CPU at 25% usage, only 15 MB/sec disk write speed

I have a tight loop which runs through a load of carts, each containing around 10 event objects, and writes them to disk as JSON via an intermediate repository (jOliver CommonDomain rewired with GetEventStore.com):
// create ~200,000 carts, each with ~5 events
List<Cart> testData = TestData.GenerateFrom(products);
foreach (var cart in testData)
{
count = count + (cart as IAggregate).GetUncommittedEvents().Count;
repository.Save(cart);
}
I see the disk is at 100%, but the throughput is 'low' (15 MB/sec, ~5,000 events per second). Why is this? Things I can think of are:
Since this is single-threaded, does the 25% CPU usage actually mean 100% of the one core I am on (is there any way to show the specific core my app is running on in Visual Studio)?
Am I constrained by I/O or by CPU? Can I expect better performance if I create my own thread pool, one thread per CPU?
How come I can copy a file at ~120 MB/sec, but I can only get a throughput of 15 MB/sec in my app? Is this due to the small size of the many individual writes?
Anything else I have missed?
The code I am using is from the geteventstore docs/blog:
public class GetEventStoreRepository : IRepository
{
private const string EventClrTypeHeader = "EventClrTypeName";
private const string AggregateClrTypeHeader = "AggregateClrTypeName";
private const string CommitIdHeader = "CommitId";
private const int WritePageSize = 500;
private const int ReadPageSize = 500;
IStreamNamingConvention streamNamingConvention;
private readonly IEventStoreConnection connection;
private static readonly JsonSerializerSettings serializerSettings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.None };
public GetEventStoreRepository(IEventStoreConnection eventStoreConnection, IStreamNamingConvention namingConvention)
{
this.connection = eventStoreConnection;
this.streamNamingConvention = namingConvention;
}
public void Save(IAggregate aggregate)
{
this.Save(aggregate, Guid.NewGuid(), d => { });
}
public void Save(IAggregate aggregate, Guid commitId, Action<IDictionary<string, object>> updateHeaders)
{
var commitHeaders = new Dictionary<string, object>
{
{CommitIdHeader, commitId},
{AggregateClrTypeHeader, aggregate.GetType().AssemblyQualifiedName}
};
updateHeaders(commitHeaders);
var streamName = this.streamNamingConvention.GetStreamName(aggregate.GetType(), aggregate.Identity);
var newEvents = aggregate.GetUncommittedEvents().Cast<object>().ToList();
var originalVersion = aggregate.Version - newEvents.Count;
var expectedVersion = originalVersion == 0 ? ExpectedVersion.NoStream : originalVersion - 1;
var eventsToSave = newEvents.Select(e => ToEventData(Guid.NewGuid(), e, commitHeaders)).ToList();
if (eventsToSave.Count < WritePageSize)
{
this.connection.AppendToStreamAsync(streamName, expectedVersion, eventsToSave).Wait();
}
else
{
var startTransactionTask = this.connection.StartTransactionAsync(streamName, expectedVersion);
startTransactionTask.Wait();
var transaction = startTransactionTask.Result;
var position = 0;
while (position < eventsToSave.Count)
{
var pageEvents = eventsToSave.Skip(position).Take(WritePageSize);
var writeTask = transaction.WriteAsync(pageEvents);
writeTask.Wait();
position += WritePageSize;
}
var commitTask = transaction.CommitAsync();
commitTask.Wait();
}
aggregate.ClearUncommittedEvents();
}
private static EventData ToEventData(Guid eventId, object evnt, IDictionary<string, object> headers)
{
var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(evnt, serializerSettings));
var eventHeaders = new Dictionary<string, object>(headers)
{
{
EventClrTypeHeader, evnt.GetType().AssemblyQualifiedName
}
};
var metadata = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(eventHeaders, serializerSettings));
var typeName = evnt.GetType().Name;
return new EventData(eventId, typeName, true, data, metadata);
}
}
It was partially mentioned in the comments, but to expand on that: the code shown is effectively single-threaded - although you use async methods, you just Wait() on them, so you are working synchronously - which means you pay latency and context-switching overhead for every EventStore round trip. Either really go the async route, but avoid blocking on the async calls and parallelize instead (EventStore likes parallelization because it can batch multiple writes), or do the batching yourself and send, for example, 20 events at a time.
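To make the "really go async" option concrete, here is a minimal hedged sketch with bounded concurrency; SaveAsync is hypothetical and would be the Save method shown above with the .Wait() calls replaced by await:
// Save up to 16 carts concurrently so EventStore can batch the writes,
// instead of blocking on one Save per cart.
var throttle = new SemaphoreSlim(16);
var saveTasks = testData.Select(async cart =>
{
    await throttle.WaitAsync();
    try
    {
        await repository.SaveAsync(cart); // assumed async variant of Save above
    }
    finally
    {
        throttle.Release();
    }
});
await Task.WhenAll(saveTasks);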

Resources