I have used the SDK to implement a Safe contract and it actually works fine. My code follows this website: https://blog.logrocket.com/build-treasury-wallet-multisignature-gnosis-safe/. But my aim is to build the transaction myself, and I ran into trouble when creating and deploying the proxy.
function deployProxyWithNonce(
    address _singleton,
    bytes memory initializer,
    uint256 saltNonce
) internal returns (GnosisSafeProxy proxy) {
    console.log("deployProxyWithNonce");
    // If the initializer changes, the proxy address should change too. Hashing the initializer data is cheaper than just concatenating it
    bytes32 salt = keccak256(abi.encodePacked(keccak256(initializer), saltNonce));
    bytes memory deploymentData = abi.encodePacked(type(GnosisSafeProxy).creationCode, uint256(uint160(_singleton)));
    // solhint-disable-next-line no-inline-assembly
    assembly {
        proxy := create2(0x0, add(0x20, deploymentData), mload(deploymentData), salt)
    }
    console.log(address(proxy));
    require(address(proxy) != address(0), "Create2 call failed");
}
This is the proxy deployment function in the Gnosis contract. My problem is the initializer input. How do I work out what it should be?
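From reading the factory and Safe contracts, my current understanding (which may be wrong) is that the initializer is just the ABI-encoded call to the singleton's setup() function, which the factory then executes on the freshly deployed proxy. With ethers v5 I would build it roughly like this (the owner addresses below are placeholders):

const { ethers } = require("ethers");

// setup() as declared in the GnosisSafe singleton (v1.3.0)
const safeInterface = new ethers.utils.Interface([
    "function setup(address[] _owners, uint256 _threshold, address to, bytes data, address fallbackHandler, address paymentToken, uint256 payment, address paymentReceiver)"
]);

const initializer = safeInterface.encodeFunctionData("setup", [
    [
        "0x0000000000000000000000000000000000000001", // placeholder owners
        "0x0000000000000000000000000000000000000002"
    ],
    2,                            // _threshold: required signatures
    ethers.constants.AddressZero, // to: no delegate call during setup
    "0x",                         // data
    ethers.constants.AddressZero, // fallbackHandler
    ethers.constants.AddressZero, // paymentToken
    0,                            // payment
    ethers.constants.AddressZero  // paymentReceiver
]);

Is that the right way to produce it, or does the SDK add something more?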
I’m building a program intended to manage multiple payments with one call. The program needs to complete the following steps:
Accept a certain amount of lamports
Pay a portion of the received lamports to specified wallets, such that the amount received is exhausted
Emit an event containing the receipt
I've built this logic with an Ethereum smart contract and it works perfectly fine; however, when attempting to write a Solana program with Solang and @solana/solidity, I'm running into a number of issues.
The first issue I encountered was simply that @solana/solidity doesn't seem to be built for front-end use (transactions required a private key as an argument, rather than being signed by a wallet like Phantom), so I built a fork of the repository that exposes the transaction object to be signed. I also found that the signer's key needed to be manually added to the array of keys in the transaction instruction; see this Stack Overflow post for more information, including the front-end code used to sign and send the transaction.
However, after that post I ran into more errors; take the following, for example:
Transaction simulation failed: Attempt to debit an account but found no record of a prior credit.
Transaction simulation failed: Error processing Instruction 0: instruction changed the balance of a read-only account
Program jdN1wZjg5P4xi718DG2HraGuxVx1mM7ebjXpxbJ5R3N invoke [1]
Program data: PO+eZwYByRZpDC4BOjWoKPj20gquFc/JtyxU9NsuG/Y= DEjYtM7vwjNW3HPewJU3dvG4aiov5tUUlrD6Zz5ylBgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADppnQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATEtAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAVNC02S0gyMV9Sa3RZZVJIb3FKOFpFAAAAAAAAAAAAAAA=
Program jdN1wZjg5P4xi718DG2HraGuxVx1mM7ebjXpxbJ5R3N consumed 3850 of 200000 compute units
Program jdN1wZjg5P4xi718DG2HraGuxVx1mM7ebjXpxbJ5R3N success
failed to verify account 11111111111111111111111111111111: instruction changed the balance of a read-only account
The error messages seemed inconsistent: some attempts threw different errors even though nothing in the code had changed beyond a server restart or a library reinstall.
Although solutions to the previous errors would be greatly appreciated, at this point I'm more inclined to ask, more broadly, whether what I'm trying to do is possible at all, and, given the source code below, for help understanding what I need to do to make it work.
Below is the working source code for my Ethereum contract:
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.4;

contract MyContract {
    event Receipt(
        address From,
        address Token,
        address[] Receivers,
        uint256[] Amounts,
        string Payment
    );

    function send(
        address[] calldata _receivers,
        uint256[] calldata _amounts,
        string calldata _payment
    ) external payable {
        require(
            _receivers.length == _amounts.length,
            "Receiver count does not match amount count."
        );
        uint256 total;
        for (uint8 i; i < _receivers.length; i++) {
            total += _amounts[i];
        }
        require(
            total == msg.value,
            "Total payment value does not match ether sent"
        );
        for (uint8 i; i < _receivers.length; i++) {
            (bool sent, ) = _receivers[i].call{value: _amounts[i]}("");
            require(sent, "Transfer failed.");
        }
        emit Receipt(
            msg.sender,
            0x0000000000000000000000000000000000000000,
            _receivers,
            _amounts,
            _payment
        );
    }
}
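To make the payable behavior concrete: a caller must send exactly the sum of the amounts, or the second require reverts. With ethers v5, a call would look roughly like this (a sketch; the addresses and payment reference are placeholders, and contractAddress/contractAbi/signer are assumed to come from the deployment setup):

const { ethers } = require("ethers");

// contractAddress, contractAbi and signer are assumed to exist already
async function payReceivers(contractAddress, contractAbi, signer) {
    const myContract = new ethers.Contract(contractAddress, contractAbi, signer);
    const tx = await myContract.send(
        [
            "0x0000000000000000000000000000000000000001", // placeholder receivers
            "0x0000000000000000000000000000000000000002"
        ],
        [ethers.utils.parseEther("0.1"), ethers.utils.parseEther("0.2")],
        "INV-001", // placeholder payment reference
        { value: ethers.utils.parseEther("0.3") } // must equal 0.1 + 0.2
    );
    await tx.wait();
}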
The only differences between this code and my Solana program code are the types and the method used to transfer lamports. All references to uint256 are replaced by uint64, the placeholder token address is changed from the null address to the system public key (address"11111111111111111111111111111111"), and the payment loop is changed to the following:
for (uint8 i = 0; i < _receivers.length; i++) {
    payable(_receivers[i]).transfer(_amounts[i]); // Using .send() throws the same error
}
The code used to deploy the program to the Solana test validator is as follows, only slightly modified from the example provided by @solana/solidity:
const { Connection, LAMPORTS_PER_SOL, Keypair, PublicKey } = require('@solana/web3.js');
const { Contract } = require('@solana/solidity');
const { readFileSync } = require('fs');

const PROGRAM_ABI = JSON.parse(readFileSync('./build/sol/MyProgram.abi', 'utf8'));
const BUNDLE_SO = readFileSync('./build/sol/bundle.so');

(async function () {
    console.log('Connecting to your local Solana node');
    const connection = new Connection('http://localhost:8899', 'confirmed');

    const payer = Keypair.generate();

    async function airdrop(pubkey, amnt) {
        const sig = await connection.requestAirdrop(pubkey, amnt * LAMPORTS_PER_SOL);
        return connection.confirmTransaction(sig);
    }

    console.log('Airdropping SOL to a new wallet');
    await airdrop(payer.publicKey, 100);

    const program = new Keypair({
        publicKey: new Uint8Array([...]),
        secretKey: new Uint8Array([...])
    });
    const storage = new Keypair({
        publicKey: new Uint8Array([...]),
        secretKey: new Uint8Array([...])
    });

    const contract = new Contract(connection, program.publicKey, storage.publicKey, PROGRAM_ABI, payer);

    console.log('Loading the program');
    await contract.load(program, BUNDLE_SO);

    console.log('Deploying the program');
    await contract.deploy('MyProgram', [], program, storage, 4096 * 8);

    console.log('Program deployed!');
    process.exit(0);
})();
Is there something I'm misunderstanding or misusing here? I find it hard to believe that such simple behavior on the Ethereum blockchain couldn't be replicated on Solana, especially given the great lengths the community has gone to in order to make Solana programming accessible through Solidity. If there's something I'm doing wrong with this code, I'd love to learn. Thank you so much in advance.
Edit: After upgrading my solang version, the first error was fixed. However, I'm now getting another error:
Error: failed to send transaction: Transaction simulation failed: Error processing Instruction 0: instruction changed the balance of a read-only account
I'm not sure which account is supposedly read-only, as it isn't listed in the error response, but I'm pretty sure the only read-only account involved is the program as it's executable. How can I avoid this error?
The error Attempt to debit an account but found no record of a prior credit happens when you attempt to airdrop more than 1 SOL. If you want more than 1 SOL, airdrop 1 SOL in a loop until you have enough.
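Applied to the deployment script in the question, the airdrop helper could be reworked along these lines (a sketch using @solana/web3.js):

const { LAMPORTS_PER_SOL } = require('@solana/web3.js');

// Request `amnt` SOL one at a time, confirming each airdrop before the next
async function airdrop(connection, pubkey, amnt) {
    for (let i = 0; i < amnt; i++) {
        const sig = await connection.requestAirdrop(pubkey, LAMPORTS_PER_SOL);
        await connection.confirmTransaction(sig);
    }
}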
I am writing an AWS Lambda authorizer in Node.js. We are required to call an Azure AD API to fetch the public keys/security policies to validate the incoming access token.
However, to optimize performance, I decided to store the public keys/security policies in Node.js as a constant (this stays active for as long as the Lambda is running, or until the TTL of the keys expires).
Question: Is this safe from a security perspective? I want to avoid "caching" it in DynamoDB, as calls to DynamoDB would also incur additional milliseconds. Ours is a very high-traffic application, and we would like to save every millisecond possible for optimal performance. Also, any best practices are highly appreciated.
Typically, you should not hard-code things like that in your code. Even though it is not a security problem, it makes maintenance harder.
For example: when the key is "rotated" or the policy changes and you have it hard-coded in your Lambda, you need to update your code and do another deployment. This often causes issues because the developer forgets about it, and then your authorizer suddenly does not work anymore. If the Lambda loads the information from an external service like S3, SSM, or Azure AD directly, you don't need another deployment; in theory, it should sort itself out, depending on which service you use and how you manage your keys.
I think the best way is to load the key from an external service during the initialisation phase of the Lambda, that is, when it is "booted" for the first time, and then cache that value for the duration of the Lambda's lifetime (a few minutes to a few hours).
You could, for example, load the public keys and policies directly from Azure, from S3, or from SSM Parameter Store.
The following code uses v3 of the AWS SDK for JavaScript (Node.js), which is not bundled with the Lambda runtime. You can use v2 of the SDK as well.
const { SSMClient, GetParameterCommand } = require("@aws-sdk/client-ssm");

// This only happens once, when the Lambda is started for the first time:
const init = async () => {
    const config = {};
    // use whatever 'paramName' you defined when you created the SSM parameter
    const paramName = "/azure/publickey";
    try {
        const command = new GetParameterCommand({ Name: paramName });
        const ssm = new SSMClient();
        const data = await ssm.send(command);
        config["publickey"] = data.Parameter.Value;
    } catch (error) {
        return Promise.reject(new Error("unable to read SSM parameter '" + paramName + "'."));
    }
    return config;
};

const initPromise = init();

exports.handler = async (event) => {
    const config = await initPromise;
    console.log("My public key '%s'", config.publickey);
    return "Hello World";
};
The most important point of this code is the init "function", which is only run once, creating a "config" that should contain your AWS SDK clients and all the configuration you need in your code. This way, you don't have to fetch the policy for every request that the Lambda processes.
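If you also want to honour the keys' TTL that the question mentions, one option (my own sketch, not part of the code above; the one-hour TTL is arbitrary) is to re-run init once the cached value gets too old, so rotated keys are picked up without a redeployment:

let cached = null;
let fetchedAt = 0;
const TTL_MS = 60 * 60 * 1000; // hypothetical 1-hour TTL

// Returns the cached config, re-fetching it from SSM once the TTL has expired
const getConfig = async () => {
    if (!cached || Date.now() - fetchedAt > TTL_MS) {
        cached = await init(); // init() as defined above
        fetchedAt = Date.now();
    }
    return cached;
};

exports.handler = async (event) => {
    const config = await getConfig();
    console.log("My public key '%s'", config.publickey);
    return "Hello World";
};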
We need to implement checks of client certificate validity in our ASP.NET Core 2.x application, which is dockerized and runs under Linux. In particular, we are interested in the revocation status of certificates. Such validation was implemented using X509Chain, and it works as expected.
var chain = new X509Chain();
var chainPolicy = new X509ChainPolicy
{
    RevocationMode = X509RevocationMode.Online,
    RevocationFlag = X509RevocationFlag.EntireChain
};
chain.ChainPolicy = chainPolicy;
...
Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
....
However, we have requirements regarding the expiration time of the CRL cache for our application. It looks like Linux (I assume it is Debian for the mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim image) caches CRLs by default: the first request lasts ~150 ms and the following requests are handled almost in no time (unfortunately, I cannot find any available information to confirm this observation).
What is the default time for the CRL cache on Linux (Debian)? Is it possible to change it? Is there a way to check the list of cached CRLs?
Is it possible to clear the CRL cache, as on Windows?
certutil -urlcache * delete
The Linux certificate utility dirmngr does not seem to be part of the mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim base image for ASP.NET Core 2.2 applications.
As it is .NET Core, which is open source, have you looked at the sources on GitHub? There you'll find a call to the CrlCache which shows where the data is stored:
namespace Internal.Cryptography.Pal
{
    internal static class CrlCache
    {
        private static readonly string s_crlDir =
            PersistedFiles.GetUserFeatureDirectory(
                X509Persistence.CryptographyFeatureName,
                X509Persistence.CrlsSubFeatureName);

internal static class X509Persistence
{
    internal const string CryptographyFeatureName = "cryptography";
    internal const string X509StoresSubFeatureName = "x509stores";
    internal const string CrlsSubFeatureName = "crls";
    internal const string OcspSubFeatureName = "ocsp";
}
...
internal const string TopLevelDirectory = "dotnet";
internal const string TopLevelHiddenDirectory = "." + TopLevelDirectory;
internal const string SecondLevelDirectory = "corefx";
...
internal static string GetUserFeatureDirectory(params string[] featurePathParts)
{
    Debug.Assert(featurePathParts != null);
    Debug.Assert(featurePathParts.Length > 0);

    if (s_userProductDirectory == null)
    {
        EnsureUserDirectories();
    }

    return Path.Combine(s_userProductDirectory, Path.Combine(featurePathParts));
}

private static void EnsureUserDirectories()
{
    string userHomeDirectory = GetHomeDirectory();

    if (string.IsNullOrEmpty(userHomeDirectory))
    {
        throw new InvalidOperationException(SR.PersistedFiles_NoHomeDirectory);
    }

    s_userProductDirectory = Path.Combine(
        userHomeDirectory,
        TopLevelHiddenDirectory,
        SecondLevelDirectory);
}

internal static string GetHomeDirectory()
{
    // First try to get the user's home directory from the HOME environment variable.
    // This should work in most cases.
    string userHomeDirectory = Environment.GetEnvironmentVariable("HOME");
So the path should be $HOME/.dotnet/corefx/cryptography/crls
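If you need to clear the cache manually, deleting the files under that directory (for example rm -rf $HOME/.dotnet/corefx/cryptography/crls inside the container) should be the rough equivalent of the certutil purge on Windows, though I haven't verified this myself.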
We are evaluating how to send messages to connected clients via SignalR. Our application is published in Azure, and has multiple instances. We are able to successfully pass messages to clients connected to the same instance, but not other instances.
We initially looked at Service Bus, but we (perhaps mistakenly) came to understand that Azure SignalR is essentially a service bus that handles all of the backend stuff for us.
We set up SignalR in Startup.cs as follows:
public void ConfigureServices(IServiceCollection services)
{
    var signalRConnString = Configuration.GetConnectionString("AxiomSignalRPrimaryEndPoint");
    services.AddSignalR()
        .AddAzureSignalR(signalRConnString)
        .AddJsonProtocol(options =>
        {
            options.PayloadSerializerSettings.ContractResolver = new DefaultContractResolver();
        });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAzureSignalR(routes =>
    {
        routes.MapHub<CallRegistrationHub>("/callRegistrationHub");
        routes.MapHub<CaseHeaderHub>("/caseHeaderHub");
        routes.MapHub<EmployeesHub>("/employeesHub");
    });
}
Issue
We have to store some objects that should probably live on the service bus rather than in an individual instance; however, I am unsure how to tell the hub that the objects should be on the bus and not internal to that specific instance of the hub, as below:
public class EmployeesHub : Hub
{
    private static volatile List<Tuple<string, string, string, string, int>> UpdateList = new List<Tuple<string, string, string, string, int>>();
    private static volatile List<Tuple<string, int>> ConnectedClients = new List<Tuple<string, int>>();
}
We have functions that need to send messages to all connected clients that are looking at the current record, regardless of which instance they reside in:
public async void LockField(string fieldName, string value, string userName, int IdRec)
{
    var clients = ConnectedClients.Where(x => x.Item1 != Context.ConnectionId && x.Item2 == IdRec).Select(x => x.Item1).Distinct().ToList();
    clients.ForEach(async x =>
    {
        await Clients.Client(x).SendAsync("LockField", fieldName, value, userName, true);
    });
    if (!UpdateList.Any(x => x.Item1 == Context.ConnectionId && x.Item3 == fieldName && x.Item5 == IdRec))
    {
        UpdateList.Add(new Tuple<string, string, string, string, int>(Context.ConnectionId, userName, fieldName, value, IdRec));
    }
}
This does not work across different instances (which makes sense, because each instance has its own objects). However, we were hoping that by using AzureSignalR instead of SignalR (the AzureSignalR connection string has an endpoint to the Azure service), it would handle the service-bus functionality for us. We are not sure what steps to take to get this functioning correctly.
Thanks.
The reason for this issue is that I was preemptively attempting to limit message traffic. I was attempting to only send messages to clients that were looking at the same record. However, because my objects were instance-specific, it would only grab the connection IDs from the current instance's object.
Further testing (using ARR affinity) proves that on a Clients.All() call, all clients, including those in different instances, receive the message.
So, our AzureSignalR setup appears to be correct.
Current POC solution (currently testing):
- When a client registers, we broadcast to all connected clients: "What field do you have locked for this Id?"
- If a client is on a different Id, it ignores the message.
- If a client does not have any fields locked, it ignores the message.
- If a client has a field locked, it responds to the message with the required info.
- AzureSignalR then rebroadcasts the data required to perform a lock.
This increases the message count, but not significantly, and it resolves the issue of multiple instances holding different connected client IDs.
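To illustrate the handshake, the client side could look roughly like this (my own sketch: WhoHasLocks and ReportLocks are hypothetical method names, and currentIdRec/lockedFields are hypothetical client state; assuming the @aspnet/signalr JavaScript client):

const signalR = require("@aspnet/signalr");

const connection = new signalR.HubConnectionBuilder()
    .withUrl("/employeesHub")
    .build();

// Answer the "what do you have locked for this Id?" broadcast
connection.on("WhoHasLocks", (idRec) => {
    // Ignore the message if we're on a different Id or hold no locks
    if (currentIdRec !== idRec || lockedFields.length === 0) return;
    // Otherwise report our locks so the hub can rebroadcast them
    connection.invoke("ReportLocks", idRec, lockedFields);
});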
Just a thought, but have you tried using SignalR Groups? https://learn.microsoft.com/en-us/aspnet/core/signalr/groups?view=aspnetcore-2.2#groups-in-signalr
You could try creating a group for each combination of IdRec and fieldName and then just broadcast messages to the group. This is the gist of how I think your LockField function might look:
public async Task LockField(string fieldName, string value, string userName, int IdRec)
{
    string groupName = GetGroupName(IdRec, fieldName);
    await Clients.Group(groupName).SendAsync("LockField", fieldName, value, userName, true);
    await this.Groups.AddToGroupAsync(Context.ConnectionId, groupName);
}
You could implement the GetGroupName method however you please, so long as it produces unique strings. A simple solution might be something like:
public string GetGroupName(int IdRec, string fieldName)
{
    return $"{IdRec} - {fieldName}";
}
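On the client, each connection that has been added to the group then receives the broadcast through its LockField handler. With the JavaScript client it might look like this (a sketch; the connection is built against /employeesHub as in the earlier snippet):

// `connection` is a HubConnection already built against /employeesHub
connection.on("LockField", (fieldName, value, userName, locked) => {
    // e.g. disable the matching input in the UI while `locked` is true
    console.log(userName + " locked " + fieldName + " with value " + value);
});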
I'm implementing the Azure SignalR service in my ASP.NET Core 2.2 app with a React front end. When I send a message, I'm NOT getting any errors, but my messages are not reaching the Azure SignalR service.
To be specific, this is a private chat application, so when a message reaches the hub, I only need to send it to the participants in that particular chat and NOT to all connections.
When I send a message, it hits my hub but I see no indication that the message is making it to the Azure Service.
For security, I use Auth0 JWT token authentication. In my hub, I correctly see the authorized user's claims, so I don't think there are any issues with security. As I mentioned, the fact that I'm able to hit the hub tells me that the front end and security are working fine.
In the Azure portal, however, I see no indication of any messages, but if I'm reading the data correctly, I do see two client connections, which is correct in my tests, i.e. the two open browsers I'm using for testing.
Here's my Startup.cs code:
public void ConfigureServices(IServiceCollection services)
{
    // Omitted for brevity
    services.AddAuthentication(options => {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(jwtOptions => {
        jwtOptions.Authority = authority;
        jwtOptions.Audience = audience;
        jwtOptions.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                var accessToken = context.Request.Query["access_token"];
                // Check to see if the message is coming into chat
                var path = context.HttpContext.Request.Path;
                if (!string.IsNullOrEmpty(accessToken) &&
                    (path.StartsWithSegments("/im")))
                {
                    context.Token = accessToken;
                }
                return System.Threading.Tasks.Task.CompletedTask;
            }
        };
    });

    // Add SignalR
    services.AddSignalR(hubOptions => {
        hubOptions.KeepAliveInterval = TimeSpan.FromSeconds(10);
    }).AddAzureSignalR(Configuration["AzureSignalR:ConnectionString"]);
}
And here's the Configure() method:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Omitted for brevity
    app.UseSignalRQueryStringAuth();
    app.UseAzureSignalR(routes =>
    {
        routes.MapHub<Hubs.IngridMessaging>("/im");
    });
}
Here's the method I use to map a user's connectionId to the userName:
public override async Task OnConnectedAsync()
{
    // Get connectionId
    var connectionId = Context.ConnectionId;
    // Get current userId
    var userId = Utils.GetUserId(Context.User);
    // Add connection
    var connections = await _myServices.AddHubConnection(userId, connectionId);
    await Groups.AddToGroupAsync(connectionId, "Online Users");
    await base.OnConnectedAsync();
}
Here's one of my hub methods. Please note that I'm aware a user may have multiple connections simultaneously. I just simplified the code here to make it easier to digest. My actual code accounts for users having multiple connections:
[Authorize]
public async Task CreateConversation(Conversation conversation)
{
    // Get sender
    var user = Context.User;
    var connectionId = Context.ConnectionId;

    // Send message to all participants of this chat
    foreach (var person in conversation.Participants)
    {
        var userConnectionId = Utils.GetUserConnectionId(user.Id);
        await Clients.User(userConnectionId.ToString()).SendAsync("new_conversation", conversation.Message);
    }
}
Any idea what I'm doing wrong that prevents messages from reaching the Azure SignalR service?
It might be caused by a misspelled method, an incorrect method signature, an incorrect hub name, a duplicate method name on the client, or a missing JSON parser on the client, since such calls can fail silently on the server.
Taken from Calling methods between the client and server silently fails:
Misspelled method, incorrect method signature, or incorrect hub name
If the name or signature of a called method does not exactly match an appropriate method on the client, the call will fail. Verify that the method name called by the server matches the name of the method on the client. Also, SignalR creates the hub proxy using camel-cased methods, as is appropriate in JavaScript, so a method called SendMessage on the server would be called sendMessage in the client proxy. If you use the HubName attribute in your server-side code, verify that the name used matches the name used to create the hub on the client. If you do not use the HubName attribute, verify that the name of the hub in a JavaScript client is camel-cased, such as chatHub instead of ChatHub.
Duplicate method name on client
Verify that you do not have a duplicate method on the client that differs only by case. If your client application has a method called sendMessage, verify that there isn't also a method called SendMessage as well.
Missing JSON parser on the client
SignalR requires a JSON parser to be present to serialize calls between the server and the client. If your client doesn't have a built-in JSON parser (such as Internet Explorer 7), you'll need to include one in your application.
Update
In response to your comments, I would suggest you try one of the Azure SignalR samples, such as Get Started with SignalR: a Chat Room Example, to see if you get the same behavior.
Hope it helps!