Visibility of variables from different consoles in Ethereum - Azure

I'm new to Ethereum, and I'm trying to develop contracts using an Azure cluster (I have a trial account).
I connected to my network with geth from a machine in Azure:
gethadmin@XXXXXX-tx0:~$ geth attach http://ether2ore.eastus.cloudapp.azure.com:8545
Then I initialized a variable:
>var test_var = 555
undefined
>test_var
555
That works as expected.
But when I connected to the same endpoint from my laptop:
C:\Users\boris>geth attach http://ether2ore.eastus.cloudapp.azure.com:8545
I tried to check this variable:
> test_var
ReferenceError: 'test_var' is not defined
at <anonymous>:1:1
I see that it's not defined.
Both consoles show the same accounts:
From - C:\Users\boris>
eth.accounts ["0xab14c61930343149c2f54044054cd46b90c0dee6", "0x7cc276b28bfdbb57151ed3b5552aafb2f2592964", "0xc9b8b9d57219b2c0935d8c28d4d2247fe70232f3", "0x2aed463fd54aa41fed898a9629bee6f0935b74fb"]
From - gethadmin@XXXXXX-tx0
eth.accounts
["0xab14c61930343149c2f54044054cd46b90c0dee6", "0x7cc276b28bfdbb57151ed3b5552aafb2f2592964", "0xc9b8b9d57219b2c0935d8c28d4d2247fe70232f3", "0x2aed463fd54aa41fed898a9629bee6f0935b74fb"]
Running admin.peers on both consoles gives the same results.
So, it's the same network.
Maybe I don't understand how this should work, but I expected that a variable defined on the same network would be visible from all consoles. Isn't that so?
The same happens with contracts: I compiled a contract in the first console and can work with it there, but it isn't reachable from the other console.
Can you explain why this happens, or point me to links that answer this question?
Many thanks

The variables you are instantiating are local variables within the console.
Each time you connect, a new JavaScript console with the web3 API exposed is created locally; it wraps the Ethereum function calls so that you can use them from JavaScript rather than having to write the raw RPC calls.
To persist data on the network you would need to deploy a contract with storage. You could then fetch the data from contract storage without executing a transaction, using getStorageAt or a call against the getter of your public variable. However, to update the data stored in the contract you will need to execute a transaction that calls a contract function (similar to the example in the Solidity documentation).
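For example, a minimal sketch from the geth console, assuming a contract with a public myValue variable is already deployed (contractAddress, abi, and the function names are placeholders, not from the question):
var instance = eth.contract(abi).at(contractAddress)  // abi and address are hypothetical
instance.myValue.call()                    // read via the generated getter: no transaction needed
eth.getStorageAt(contractAddress, 0)       // or read raw storage slot 0 directly
instance.setMyValue.sendTransaction(777, {from: eth.accounts[0]})  // updating requires a transaction
State written this way lives on the chain itself, so every attached console (Azure or laptop) sees the same value, unlike a console-local JS variable.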

Related

Problem with Wallet Creation, signature error

I want to create a wallet for a new REST API server, but whenever I call the code to generate a new wallet I get an error like:
"Decoding SignatureHeader failed: Error illegal buffer ..."
Here is a screenshot of my code, taken from the virtual machine.
I'm using Hyperledger Fabric 2.2, running under fabric-samples/test-network.
I cloned this Hyperledger Fabric repo.
Here is also a screenshot of the error:
I would appreciate it if someone could show me how to successfully create a wallet.
Your error looks to be occurring within a chaincode transaction function. You should not be using the client SDK to do a transaction invocation from within chaincode. Instead, look to use the invokeChaincode function on the stub:
https://hyperledger.github.io/fabric-chaincode-node/release-1.4/api/fabric-shim.ChaincodeStub.html#invokeChaincode
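As a rough sketch, a Node.js transaction function could delegate to another chaincode like this (the chaincode name 'walletcc', channel 'mychannel', and function names are placeholders, not from the question):
async function createWallet(stub, owner) {
    // invokeChaincode runs the target chaincode inside the same transaction,
    // so no client SDK (and no Wallet) is needed from within chaincode
    const response = await stub.invokeChaincode('walletcc', ['CreateWallet', owner], 'mychannel');
    if (response.status !== 200) {
        throw new Error('invokeChaincode failed: ' + response.message);
    }
    return response.payload.toString();
}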

[AWS][Amplify] Invoke function locally crashes with no error

I have just joined a development team, and the project is meant to run in the cloud using Amplify. I have a function called usershandler that I want to run locally. For that, I used:
amplify invoke function usershandler
This is the output I get:
Starting execution...
EVENT: {"httpMethod":"GET","body":"{\"name\": \"Amplify\"}","path":"/users","resource":"/{proxy+}","queryStringParameters":{}}
App started
get All VSM called
Connection to database was a success
null
Result:
{"statusCode":200,"body":"{\"success\":true,\"results\":[]}","headers":{"x-powered-by":"Express","access-control-allow-origin":"*","access-control-allow-headers":"Origin, X-Requested-With, Content-Type, Accept","content-type":"application/json; charset=utf-8","content-length":"29","etag":"W/\"1d-4wD7ChrrlHssGyekznKfKxR7ImE\"","date":"Tue, 21 Jul 2020 12:32:36 GMT","connection":"close"},"isBase64Encoded":false}
Finished execution.
EDIT: Also, when running the invoke command, Amplify asks me for a src/event.json, while I've seen it look for index.js for some reason??
EDIT 2 [SOLVED]: downgrading @aws-amplify/cli to 4.14.1 seems to solve this :)
Expected behavior: the server should continue running so I can use it.
Actual behavior: it always stops after the "Finished execution" message.
The connection to the DB works fine, and config.json contains correct values. I don't know why it is acting like this. Has anybody had the same problem?
Have a nice day.
Short answer: you are running the invoke command, which is doing just what it is supposed to do: invoking the Lambda function.
If you are looking to get a local API up, then run the following command:
sam local start-api
This will read your template and, based on the endpoints you have set up, run them locally, essentially mocking API Gateway. Read more about it in the official docs here.
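For instance, assuming a template.yaml that defines the /users route from the question, the local workflow looks roughly like this (port 3000 is the SAM default):
sam local start-api
# in another terminal; the endpoint stays up between requests:
curl http://localhost:3000/users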
Explanation:
This command is one of the offerings of the AWS Serverless Application Model (AWS SAM), a tool for developing serverless applications. It is essentially an abstraction over AWS CloudFormation. Similarly, Amplify is an abstraction that makes it simple not only to develop and manage the backend but also to bring that power to the frontend.
As both of them essentially use CloudFormation templates underneath, you can leverage the capabilities of one tool with the other.
SAM provides a robust set of tools for local development, including a local Lambda mocking server, in case you are not using API Gateway.
I use this combination to develop and test my frontend along with a backend written in Go, a language whose Amplify backend support is, as of now, not as mature as JavaScript's.

Chrome run fails in Azure Functions: An attempt was made to access a socket in a way forbidden by its access permissions

I wrote a web bot that uses the Selenium framework to crawl. I installed ChromeDriver 72.0.3626.69 and also downloaded Chromium 72.0.3626.121. The app initializes ChromeDriver with this bundled Chromium binary (and NOT a locally installed Chrome binary). All of this works perfectly on my local machine.
I've been attempting to port the app to Azure Functions. I wrote a function, tested it, and it works fine locally. But once I publish it to Azure Functions it fails with about 182 errors of this type:
An attempt was made to access a socket in a way forbidden by its
access permissions
I know this happens when the TCP connection limits of the Azure sandbox are exceeded, but the only thing attempted here was creating an instance of ChromeDriver (it hasn't even navigated anywhere yet!).
Here is a screenshot of the Azure Function call log.
That error appears about 182 times in a row, and that is just the attempt to create a browser instance (or a ChromeDriver instance, to be precise; I can't tell whether Chromium or ChromeDriver is causing the issue).
The question: has anyone experienced issues with ChromeDriver/Chromium creating so many (obviously excessive) connections when launching? And what might help to avoid this?
If it's of any help, this is the piece of code that crashes on the last line:
ChromeOptions options = new ChromeOptions();
options.BinaryLocation = this.chromePath;
options.AddArgument("no-sandbox");
options.AddArgument("disable-infobars");
options.AddArgument("--disable-extensions");
if (this.headlessMode)
{
    options.AddArgument("headless");
}
options.AddUserProfilePreference("profile.default_content_setting_values.images", 2);
Log.LogInformation("Chrome options compiled. Creating ChromeDriverService...");
var driverService = ChromeDriverService.CreateDefaultService(this.driverPath);
// this is the line that fails once deployed to Azure Functions:
driver = new ChromeDriver(driverService, options, timeout);
I believe you are running this function in a Windows Function App, which is subject to quite a few limitations, as described in this wiki.
When running on Linux, however, functions basically run in a Docker container, which removes most of the restrictions Windows has. I believe what you are trying to do should be possible there.
You could either deploy your function to a Linux Function App, or build a container and use that directly.
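As a rough sketch, creating a Linux Function App with the Azure CLI looks something like this (all the names are placeholders; verify the flags for your CLI version with az functionapp create --help):
az functionapp create --name my-crawler-func --resource-group my-rg --storage-account mystorage --consumption-plan-location eastus --runtime dotnet --os-type Linux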

SqlDataProvider connection string in Suave on Azure

I can't get SqlDataProvider to work when executed in an .fsx script running in an Azure web site.
I started from the samples that Tomas Petricek has here: https://github.com/tpetricek/Dojo-Suave-FsHome.
In short, it is an FSX script that is executed using the IIS httpPlatformHandler, so that all HTTP requests to my Azure web site are forwarded to my F# script.
The F# script uses Suave to handle the requests.
When I tried adding some database access to my HTTP handlers, I ran into problems.
The problematic code looks like this:
[<Literal>]
let connStr = "Server=(localdb)\\v11.0;Initial Catalog=My_Database;Integrated Security=true;"
[<Literal>]
let resolutionFolder = __SOURCE_DIRECTORY__
FSharp.Data.Sql.Common.QueryEvents.SqlQueryEvent |> Event.add (printfn "Executing SQL: %s")
// the following line fails when executing in azure
type db = SqlDataProvider<connStr, Common.DatabaseProviderTypes.MSSQLSERVER, ResolutionPath = resolutionFolder>
let saveData someDataToSave =
    let ctx = db.GetDataContext(Environment.GetEnvironmentVariable("SQLAZURECONNSTR_QUERIES"))
    .....
    /// code using the context here
This works just fine when I run it locally, but when I deploy it to the Azure site it fails at the line where the type db is created.
The error message is (line 70 is the line with type db = ...):
D:\home\site\wwwroot\app.fsx(70,11): error FS3033: The type provider 'FSharp.Data.Sql.SqlTypeProvider' reported an error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 52 - Unable to locate a Local Database Runtime installation. Verify that SQL Server Express is properly installed and that the Local Database Runtime feature is enabled.)
The design-time database in connStr is not available on the Azure site, but I thought that is why we have the GetDataContext overload that takes a connection string to be used at run time?
Is it because it is running as a script, and not as compiled code, that it tries to access the database when the type provider is created?
If yes, does that mean my only option is to compile the database code into an assembly that I load and use from my Suave FSX script?
Reading the connection string from a config file does not work very well since this is an Azure site. I really need to get the connection string from an environment variable (which is set in the Azure management interface).
Hmm, this is a bit unfortunate: as @Fyodor mentioned in the comments, the problem is that script-based deployment to Azure actually compiles the script on the Azure machine, so you need a statically resolved connection string that works on Azure.
There are three options:
1. Use a compiled project instead. If you compile your F# code locally and deploy the compiled code to Azure, it will work. Sadly, there are no good samples for that.
2. Do some clever trick to make the connection string accessible to the script at compile time.
3. Send a PR to the SQL provider so that you can give it the name of an environment variable and it reads the connection string from there.
I think (3) would actually be quite a nice and useful feature.
I'm not sure what the best way to do (2) would be, but I think you might be able to modify app.azure.fsx so that it creates a file (say, connection.fsx) that contains something like:
module Connection
let [<Literal>] ConnString = "<Contents of SQLAZURECONNSTR_QUERIES>"
Then app.fsx could load this script and use Connection.ConnString as the argument of the SQL type provider.
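A minimal sketch of what app.azure.fsx could do at deployment time to generate that file (the environment variable name comes from the question; quoting is naive and error handling is omitted):
open System
open System.IO

// read the real connection string from the Azure app setting
let connString = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_QUERIES")
// bake it into connection.fsx as a compile-time literal
let contents = sprintf "module Connection\nlet [<Literal>] ConnString = \"%s\"" connString
File.WriteAllText(Path.Combine(__SOURCE_DIRECTORY__, "connection.fsx"), contents)
app.fsx would then begin with #load "connection.fsx" and pass Connection.ConnString as the static parameter, so the type provider sees the real connection string at compile time.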

Azure Document Db Worker Role

I am having problems getting the Microsoft.Azure.Documents library to initialize the client in an Azure worker role. I'm using NuGet package 0.9.1-preview.
I have mimicked what was done in the Azure DocumentDB example.
When running locally through the emulator, I can connect to DocumentDB fine and it runs as expected. When running in the worker role, I get a series of NullReferenceExceptions followed by an ArgumentNullException.
The bottom System.NullReferenceException that is highlighted above has this call stack,
so the NullReferenceExceptions start in this call, at the new DocumentClient:
var endpoint = "myendpoint";
var authKey = "myauthkey";
var endpointUri = new Uri(endpoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
Nothing changes between running it locally and on the worker role other than the environment (obviously).
Has anyone gotten DocumentDB to work in a worker role, or does anyone have an idea why it would throw null reference exceptions? The parameters passed into DocumentClient() are filled in.
UPDATE:
I tried rewriting it more generically, which helped at least let the worker role run and let me attach a debugger. It is throwing the error on new DocumentClient. It seems like something security-related is null, yet both required initialization parameters are non-null. Is there a security setting I need to change for my worker role to be able to connect to my DocumentDB? (It still works fine locally.)
UPDATE 2:
I can get the instance to run in Release mode, but not in Debug mode. So it must be something to do with a security or storage setting that is misconfigured, I guess?
It seems I'm getting System.Security.SecurityExceptions, but only when using DocumentDB; queues do not give me that error. All call stacks for that error seem to involve System.Diagnostics.EventLog. The very first exception I see in the IntelliTrace summary is System.Threading.WaitHandleCannotBeOpenedException.
More info: in the IntelliTrace summary exception data, the top is the earliest and the bottom is the latest (so the System.Security.SecurityException happens first, then the NullReferenceException).
The solution for me to get rid of the SecurityException and NullReferenceException was to disable IntelliTrace. Once I did that, I was able to deploy, attach the debugger, and see everything working.
I'm not sure what sits between the null in IntelliTrace and the DocumentClient, but hopefully it's just related to the NuGet package and will be fixed in the next iteration.
Unable to repro.
I created a new worker role (single instance), added the authkey & endpoint config to the .cscfg, and then:
1. Created a private static DocumentClient at the WorkerRole class level
2. Initialized the DocumentClient in OnStart
3. Disposed of the DocumentClient in OnStop
4. In RunAsync, inside the loop, executed a query
It works as expected:
- Tested in the emulator: works.
- Deployed as Release to the Production slot: works.
- Deployed as Debug to Staging with remote debugging: works. Attached VS to the cloud service; breakpoint hit inside the loop.
A sketch of this arrangement is below. Working solution: http://ryancrawcour.blob.core.windows.net/samples/AzureCloudService1.zip
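A hedged C# sketch of that pattern, assuming the setting names EndpointUrl and AuthKey (placeholders, not from the original post):
using System;
using Microsoft.Azure.Documents.Client;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // one shared client per role instance
    private static DocumentClient client;

    public override bool OnStart()
    {
        var endpoint = new Uri(RoleEnvironment.GetConfigurationSettingValue("EndpointUrl"));
        var authKey = RoleEnvironment.GetConfigurationSettingValue("AuthKey");
        client = new DocumentClient(endpoint, authKey);   // create once, reuse in RunAsync
        return base.OnStart();
    }

    public override void OnStop()
    {
        if (client != null) client.Dispose();             // clean up on shutdown
        base.OnStop();
    }
}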
