How to run Cardano Wallet? - node.js

I have installed cardano-wallet using this documentation. Everything is OK; I just don't know how to run it so that I can interact with it via Node.js:
const { WalletServer } = require('cardano-wallet-js');
let walletServer = WalletServer.init('http://127.0.0.1:1337/v2');

async function test() {
    let information = await walletServer.getNetworkInformation();
    console.log(information);
}
test();
Does anyone have an idea?

According to the IOHK documentation, prior to running the wallet server you have to run a node:
cardano-node run \
--topology ~/cardano/config/mainnet-topology.json \
--database-path ~/cardano/db/ \
--socket-path ~/cardano/db/node.socket \
--host-addr 127.0.0.1 \
--port 1337 \
--config ~/cardano/config/mainnet-config.json
And after that, call the serve command with the appropriate flags:
cardano-wallet serve \
--port 8090 \
--mainnet \
--database ~/cardano/wallets/db \
--node-socket $CARDANO_NODE_SOCKET_PATH
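Note that with these flags the node listens on 1337 while the wallet API serves on 8090, so the cardano-wallet-js client from your question should target the wallet's port, not the node's. A minimal sketch, assuming the wallet is running locally with the serve command above:
const { WalletServer } = require('cardano-wallet-js');
// Point the client at the wallet API's port (8090 above), not the node's port (1337).
let walletServer = WalletServer.init('http://127.0.0.1:8090/v2');

async function test() {
    let information = await walletServer.getNetworkInformation();
    console.log(information);
}
test();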
If you need more details, read my Medium post.

You have to run a Cardano node in order to query the blockchain.
Follow this article:
https://developers.cardano.org/docs/get-started/cardano-wallet-js
First, download the docker-compose.yml file:
wget https://raw.githubusercontent.com/input-output-hk/cardano-wallet/master/docker-compose.yml
Then run your node on either testnet or mainnet with this command:
NETWORK=testnet docker-compose up
Then you will be able to connect to the blockchain.
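For example, assuming the compose file publishes the wallet API on its default port 8090 and that getNetworkInformation exposes the API's sync_progress field, you can poll from Node.js until the node has synced:
const { WalletServer } = require('cardano-wallet-js');
let walletServer = WalletServer.init('http://localhost:8090/v2');

// Poll the wallet until the underlying node reports it has fully synced.
async function waitForSync() {
    let information = await walletServer.getNetworkInformation();
    console.log(information.sync_progress); // e.g. { status: 'syncing', progress: {...} }
    if (information.sync_progress.status !== 'ready') {
        setTimeout(waitForSync, 10000); // try again in 10 seconds
    }
}
waitForSync();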
Ref: https://github.com/tango-crypto/cardano-wallet-js

Related

How to set up an RPC node of Solana?

I want to set up a full Solana node, not a validator or voter node, just to get the blockchain data on a local machine. How could I do it?
If you want a local RPC node, know that the required specs are very high: currently 12 cores, 256 GB RAM, and 1 TB of NVMe SSD space. More info at https://docs.solana.com/running-validator/validator-reqs
If you want to run an RPC node, the only additional command-line argument that you must provide is --no-voting (and you can omit the voting-related arguments), so for example you'd run:
solana-keygen new -o identity.json
solana-validator \
--rpc-port 8899 \
--entrypoint entrypoint.devnet.solana.com:8001 \
--limit-ledger-size \
--log ~/solana-validator.log \
--no-voting \
--identity identity.json
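Once it's running, you can sanity-check the local RPC endpoint with a plain JSON-RPC call; a minimal sketch in Node.js (assuming Node 18+ for the built-in fetch):
// Hit the local RPC endpoint started with --rpc-port 8899.
async function checkRpc() {
    const response = await fetch('http://127.0.0.1:8899', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'getHealth' }),
    });
    console.log(await response.json()); // { jsonrpc: '2.0', result: 'ok', id: 1 } once caught up
}
checkRpc();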
Otherwise, you can follow all of the instructions at https://docs.solana.com/running-validator/validator-start

Docker Chrome Memory Leak When Using --disable-dev-shm-usage

This is a follow-up to my previous question:
I am creating a Node.js-based image on which I install the latest Chrome and ChromeDriver, then run a Node.js-based cron job that uses Selenium WebDriver for testing on a one-minute interval.
This runs in an Azure Container Instance, which is the simplest way to run containers in Azure.
My challenge is that Docker containers in ACI run with 64 MB of /dev/shm by default, which causes Chrome failures due to the relatively low amount of memory. Chrome provides a --disable-dev-shm-usage flag, but running with it creates a memory leak that I can't seem to figure out how to prevent. How can I best address this for my container in ACI, please?
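(Outside of ACI, I could presumably avoid the flag entirely by enlarging shared memory at run time, e.g. docker run --shm-size=1g, but ACI does not seem to expose an equivalent setting, which is why I reached for --disable-dev-shm-usage in the first place.)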
(Screenshot: Azure Container Instance container memory consumption)
Dockerfile
# 1) Build from this Dockerfile's directory:
# docker build -t "<some tag>" -f Dockerfile .
# 2) Start the image (e.g. in Docker)
# 3) Observe that the button's value is printed.
# ---------------------------------------------------------------------------------------------
# 1) Use the official NodeJS base image (Debian-based, so Chrome can be installed via apt below)
FROM node:latest
# 2) Install latest stable Chrome
# https://gerg.dev/2021/06/making-chromedriver-and-chrome-versions-match-in-a-docker-image/
RUN echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" | \
tee -a /etc/apt/sources.list.d/google.list && \
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | \
apt-key add - && \
apt-get update && \
apt-get install -y google-chrome-stable libxss1
# 3) Install the Chromedriver version that corresponds to the installed major Chrome version
# https://blogs.sap.com/2020/12/01/ui5-testing-how-to-handle-chromedriver-update-in-docker-image/
RUN google-chrome --version | grep -oE "[0-9]{1,10}.[0-9]{1,10}.[0-9]{1,10}" > /tmp/chromebrowser-main-version.txt
RUN wget --no-verbose -O /tmp/latest_chromedriver_version.txt https://chromedriver.storage.googleapis.com/LATEST_RELEASE_$(cat /tmp/chromebrowser-main-version.txt)
RUN wget --no-verbose -O /tmp/chromedriver_linux64.zip https://chromedriver.storage.googleapis.com/$(cat /tmp/latest_chromedriver_version.txt)/chromedriver_linux64.zip && \
    rm -rf /opt/selenium/chromedriver && \
    unzip /tmp/chromedriver_linux64.zip -d /opt/selenium && \
    rm /tmp/chromedriver_linux64.zip && \
    mv /opt/selenium/chromedriver /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) && \
    chmod 755 /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) && \
    ln -fs /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) /usr/bin/chromedriver
# 4) Set the variable for the container working directory, create and set the working directory
ARG WORK_DIRECTORY=/program
RUN mkdir -p $WORK_DIRECTORY
WORKDIR $WORK_DIRECTORY
# 5) Install npm packages (do this AFTER setting the working directory)
COPY package.json .
RUN npm config set unsafe-perm true
RUN npm i
ENV NODE_ENV=development NODE_PATH=$WORK_DIRECTORY
# 6) Copy script to execute to working directory
COPY runtest.js .
EXPOSE 8080
# 7) Execute the script in NodeJS
CMD ["node", "runtest.js"]
runtest.js
const { Builder, By } = require('selenium-webdriver');
const { Options } = require('selenium-webdriver/chrome');
const cron = require('node-cron');

cron.schedule('*/1 * * * *', async () => await main());

async function main() {
    let driver;
    try {
        // Browser setup
        let options = new Options()
            .headless() // run headless Chrome
            .excludeSwitches(['enable-logging']) // disable 'DevTools listening on...'
            .addArguments([
                // no-sandbox is not an advised flag due to security but eliminates the "DevToolsActivePort file doesn't exist" error
                'no-sandbox',
                // Docker containers run with 64 MB of /dev/shm by default, which causes Chrome failures.
                // Disabling /dev/shm uses /tmp instead, which solves the problem but appears to result in memory leaks.
                'disable-dev-shm-usage'
            ]);
        driver = await new Builder().forBrowser('chrome').setChromeOptions(options).build();
        // Navigate to Google and get the "Google Search" button text.
        await driver.get('https://www.google.com');
        let btnText = await driver.findElement(By.name('btnK')).getAttribute('value');
        log(`Google button text: ${btnText}`);
    } catch (e) {
        log(e);
    } finally {
        if (driver) {
            await driver.close(); // helps chromedriver shut down cleanly and delete the "scoped_dir" temp directories that eventually fill up the hard drive
            await driver.quit();
            driver = null;
            log(' Closed and quit the driver, then set to null.');
        } else {
            log(' *** No driver to close and quit ***');
        }
    }
}

function log(msg) {
    console.log(`${new Date()}: ${msg}`);
}
UPDATE
Interestingly, the memory consumption seems to stabilize once it reaches a certain level. The container is allocated 2 GB of memory. I don't see crashes in my app logs, so this seems functional overall.

Docker run can't find Google authentication: "oauth2google.DefaultTokenSource: google: could not find default credentials"

Hey there, I am trying to figure out why I keep getting this error when running the docker run command. Here is what I am running:
docker run -p 127.0.0.1:2575:2575 -v ~/.config:/home/.config gcr.io/cloud-healthcare-containers/mllp-adapter /usr/mllp_adapter/mllp_adapter --hl7_v2_project_id=****** --hl7_v2_location_id=us-east1 --hl7_v2_dataset_id=***** --hl7_v2_store_id=***** --export_stats=false --receiver_ip=0.0.0.0
I have tried both Ubuntu and Windows, getting an error that it failed to connect and to see Google's service authentication documentation. I have confirmed the account is active and the keys are exported to the config below:
brandon@ubuntu-VM:~/Downloads$ gcloud auth configure-docker
WARNING: Your config file at [/home/brandon/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud"
  }
}
I am thinking it's something to do with the -v option and how it uses the Google authentication. Any help or guidance on a fix would be appreciated. Thank you.
-v ~/.config:/root/.config is used to give the container access to the gcloud credentials; note that your command mounts it at /home/.config instead.
I was facing the same issue for hours, and I decided to check the source code even though I am not a Go developer.
There I figured out that we have a --credentials option to set the credentials file. It's not documented for now.
The docker command should look like this:
docker run \
--network=host \
-v ~/.config:/root/.config \
gcr.io/cloud-healthcare-containers/mllp-adapter \
/usr/mllp_adapter/mllp_adapter \
--hl7_v2_project_id=$PROJECT_ID \
--hl7_v2_location_id=$LOCATION \
--hl7_v2_dataset_id=$DATASET_ID \
--hl7_v2_store_id=$HL7V2_STORE_ID \
--credentials=/root/.config/$GOOGLE_APPLICATION_CREDENTIALS \
--export_stats=false \
--receiver_ip=0.0.0.0 \
--port=2575 \
--api_addr_prefix=https://healthcare.googleapis.com:443/v1 \
--logtostderr
Don't forget to put your credentials file inside your ~/.config folder.
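For example, if the key file were saved as ~/.config/gcloud-key.json (a hypothetical name), you would set GOOGLE_APPLICATION_CREDENTIALS=gcloud-key.json so that the flag above expands to --credentials=/root/.config/gcloud-key.json inside the container.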
It worked fine here. I hope this helped you.
Cheers

Fabric chaincode container (nodejs) cannot access npm

I'd appreciate help with this matter.
I have the latest images (2.2.0, CA 1.4.8), but I'm getting this error when installing chaincode on the first peer:
failed to invoke chaincode lifecycle, error: timeout expired while executing transaction
I'm working behind a proxy, using a VPN.
I tried to increase the timeouts at the docker config, for all peers:
CORE_CHAINCODE_DEPLOYTIMEOUT=300s
CORE_CHAINCODE_STARTUPTIMEOUT=300s
The process works perfectly up to that point (channel created, peers joined the channel). The chaincode can be installed manually with npm install.
I couldn't find an answer to this anywhere. Can someone provide guidance?
UPDATE: It seems that the chaincode container gets bootstrapped (and is even given a random name), but gets stuck at:
+ INPUT_DIR=/chaincode/input
+ OUTPUT_DIR=/chaincode/output
+ cp -R /chaincode/input/src/. /chaincode/output
+ cd /chaincode/output
+ '[' -f package-lock.json -o -f npm-shrinkwrap.json ]
+ npm install --production
I believe it is the proxy blocking npm.
I tried to solve this with:
npm config set proxy proxy
npm config set https-proxy proxy
npm set maxsockets 3
After days of struggling, I've found a solution:
- I had to build a custom fabric-nodeenv image that contained the environment variables to set up the npm proxy vars, as in "node chaincode instantiate behind proxy". After that, I set up the following env vars in docker.yaml (a note on what the custom image adds follows below):
- CORE_CHAINCODE_NODE_RUNTIME=my_custom_image
- CORE_CHAINCODE_PULL=true
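For reference, the custom image essentially just layers those proxy settings onto the stock fabric-nodeenv Dockerfile, e.g. as ENV npm_config_proxy=http://yourproxy:port and ENV npm_config_https_proxy=http://yourproxy:port instructions (placeholder address), since npm honors these npm_config_* environment variables the same way it honors the npm config set proxy commands shown earlier.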
Useful! For Chinese users, npmjs may sometimes be unreachable, and you can build your own image.
For example, if I use version 2.3.1, I download the https://github.com/hyperledger/fabric-chaincode-node/tree/v2.3.1 release. Then enter docker/fabric-nodeenv, modify the Dockerfile, and add a line to change the npm registry source:
RUN npm config set registry http://registry.npm.taobao.org
The entire file is as follows:
ARG NODE_VER=12.16.1
FROM node:${NODE_VER}-alpine
RUN npm config set registry http://registry.npm.taobao.org
RUN apk add --no-cache \
make \
python \
g++;
RUN mkdir -p /chaincode/input \
&& mkdir -p /chaincode/output \
&& mkdir -p /usr/local/src;
ADD build.sh start.sh /chaincode/
Then build a Docker image using the command:
docker image build -t whatever/fabric-nodeenv:2.3 .
Wait a few minutes and it will build the image; then, with docker images, you'll see the image it created.
Finally, in the peer's configuration file, add to the peer environment:
- CORE_CHAINCODE_NODE_RUNTIME=whatever/fabric-nodeenv:2.3
Hope it can help others!

Microsoft !GERMAN! Azure Cloud - getting oauth2-proxy to work

I am trying to set up oauth2-proxy to authenticate against Microsoft's German Azure cloud. It's quite a ride, but I got as far as being able to do the OAuth handshake. However, I am getting an error when trying to retrieve the user's mail and name via the Graph API.
I run the proxy within Docker like this:
docker run -it -p 8081:8081 \
--name oauth2-proxy --rm \
bitnami/oauth2-proxy:latest \
--upstream=http://localhost:8080 \
--provider=azure \
--email-domain=homefully.de \
--cookie-secret=super-secret-cookie \
--client-id=$CLIENT_ID \
--client-secret="$CLIENT_SECRET" \
--http-address="0.0.0.0:8081" \
--redirect-url="http://localhost:8081/oauth2/callback" \
--login-url="https://login.microsoftonline.de/common/oauth2/authorize" \
--redeem-url="https://login.microsoftonline.de/common/oauth2/token" \
--resource="https://graph.microsoft.de" \
--profile-url="https://graph.microsoft.de/me"
Right now it's stumbling over the profile URL (which is used to retrieve the identity of the user logging in).
The log output is this:
2019/01/28 09:24:51 api.go:21: 400 GET https://graph.microsoft.de/me {
  "error": {
    "code": "BadRequest",
    "message": "Invalid request.",
    "innerError": {
      "request-id": "1e55a321-87c2-4b85-96db-e80b2a5af1a3",
      "date": "2019-01-28T09:24:51"
    }
  }
}
I would REALLY appreciate suggestions about what I am doing wrong here. So far the documentation has not been very helpful to me. It seems that things are slightly different in the German Azure cloud, but documentation is pretty thin on that. The fact that the Azure docs only describe the US cloud, where all URLs are different (and not in a very logical way, unfortunately), makes things a lot harder...
Best,
Matthias
The issue was that the profile URL https://graph.microsoft.de/me was incorrect.
While https://graph.microsoft.com/me is valid for the US cloud, the German cloud requires the API version embedded in the URL, like this:
https://graph.microsoft.de/v1.0/me
This worked for me:
docker run -it -p 8081:8081 \
--name oauth2-proxy --rm \
bitnami/oauth2-proxy:latest \
--upstream=http://localhost:8080 \
--provider=azure \
--email-domain=homefully.de \
--cookie-secret=super-secret-cookie \
--client-id=$CLIENT_ID \
--client-secret="$CLIENT_SECRET" \
--http-address="0.0.0.0:8081" \
--redirect-url="http://localhost:8081/oauth2/callback" \
--login-url="https://login.microsoftonline.de/common/oauth2/authorize" \
--redeem-url="https://login.microsoftonline.de/common/oauth2/token" \
--resource="https://graph.microsoft.de" \
--profile-url="https://graph.microsoft.de/v1.0/me"
