Issue on Connection with Cordapp to Springboot - frontend

I am new to Spring Boot. I am currently working on the Corda blockchain, and I am facing some issues connecting my CorDapp to the frontend through Spring Boot.
If anyone has a reference or a solution, kindly share it to help me solve this problem.

You don't set the values inside NodeRPCConnection yourself; see the example here:
https://github.com/corda/samples/blob/release-V4/cordapp-example/clients/src/main/kotlin/com/example/server/NodeRPCConnection.kt
They are injected when you run one of the runPartyXServer Gradle tasks defined here:
https://github.com/corda/samples/blob/release-V4/cordapp-example/clients/build.gradle
For example, to start PartyA's webserver, all you have to do is run: ./gradlew runPartyAServer.
Obviously, you must first start your node, then run the webserver.
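For reference, here is a minimal sketch of the injection pattern used in the linked NodeRPCConnection.kt (reconstructed from the sample, so treat the exact property names and signatures as approximate; the linked file is authoritative):

    import javax.annotation.PostConstruct
    import javax.annotation.PreDestroy
    import net.corda.client.rpc.CordaRPCClient
    import net.corda.client.rpc.CordaRPCConnection
    import net.corda.core.messaging.CordaRPCOps
    import net.corda.core.utilities.NetworkHostAndPort
    import org.springframework.beans.factory.annotation.Value
    import org.springframework.stereotype.Component

    @Component
    class NodeRPCConnection(
        // Each value is supplied on the command line by the runPartyXServer
        // Gradle task, e.g. --config.rpc.host=localhost --config.rpc.port=10006
        @Value("\${config.rpc.host}") private val host: String,
        @Value("\${config.rpc.username}") private val username: String,
        @Value("\${config.rpc.password}") private val password: String,
        @Value("\${config.rpc.port}") private val rpcPort: Int
    ) : AutoCloseable {

        lateinit var proxy: CordaRPCOps
            private set
        private lateinit var rpcConnection: CordaRPCConnection

        @PostConstruct
        fun initialiseNodeRPCConnection() {
            // Open the RPC connection against the running node
            val rpcAddress = NetworkHostAndPort(host, rpcPort)
            rpcConnection = CordaRPCClient(rpcAddress).start(username, password)
            proxy = rpcConnection.proxy
        }

        @PreDestroy
        override fun close() = rpcConnection.notifyServerAndClose()
    }

So if those fields come up empty, it is because the webserver was started without the --config.rpc.* arguments, which is exactly what the Gradle tasks provide.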

Related

InvalidResourceName error on learn/modules/connect-iot-edge-device-to-iot-central

I'm just running through this learning module and seem to be stuck on unit 5/8 (Exercise - Deploy an IoT Edge device and manage it from IoT Central)
https://learn.microsoft.com/en-us/learn/modules/connect-iot-edge-device-to-iot-central/
See the screenshot below for the error I'm getting (InvalidResourceName). I've followed the steps up to this point, which have been fairly straightforward. Wondering if this is a bug? Can anyone shed any light on this? Thanks!
Most likely, the value of $APP_NAME is empty. That would result in a value of ip- for the dnsLabelPrefix parameter in the script. To verify, try reading the value with echo $APP_NAME.
This variable was set in unit 3, Exercise - Create an IoT Central application. If you took a break and started a new sandbox, this value is likely no longer set. Try setting the value explicitly, to your IoT Central app name, before re-running the step that produces the error:
APP_NAME="store-manager-$RANDOM"
echo "Your application name is: $APP_NAME"

"Error: Key not loaded" in h2o deployed through a K3s cluster, using python3 client

I can confirm the 3-replica h2o cluster inside K3s is correctly deployed, as running h2o.init(ip="x.x.x.x") in the Python3 interpreter works as expected. I followed the instructions here: https://www.h2o.ai/blog/running-h2o-cluster-on-a-kubernetes-cluster/
Nevertheless, I had to modify service.yaml and comment out the clusterIP: None line, as K3s complained about being unable to set the clusterIP to None. Even so, I can confirm the deployment works, and I am able to use an external IP to connect to the cluster.
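For reference, the Service from the blog post looks roughly like this (reproduced from memory, so treat names and ports as approximate):

    apiVersion: v1
    kind: Service
    metadata:
      name: h2o-service
    spec:
      clusterIP: None        # headless: gives each pod its own DNS record, used for node discovery
      selector:
        app: h2o-k8s
      ports:
        - protocol: TCP
          port: 54321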
If I try to load a dataset with the h2o cluster inside the K3s cluster, following the exact same steps as described here: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html, this is the output I get:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Key not loaded: Key<Frame> https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv
Request: POST /3/ParseSetup
data: {'check_header': '0', 'source_frames': '["https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv"]'}
The same error occurs if I use the h2o.upload_file("x.csv") method.
There is a clue about what may be happening here: Key not loaded: Key<Frame> while POSTing source frame through ParseSetup in H2O API call, but I am not using curl, and I cannot find any parameter that could help me overcome this issue: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/h2o.html?highlight=import_file#h2o.import_file
I need to use the Python client inside the same K3s cluster for various technical reasons, so I am not able to launch either Flow or Firebug to see what may be happening.
I can confirm everything works correctly when I simply issue h2o.init() against the local Java instance.
UPDATE 1:
I have tried different K3s clusters without success. I changed the service.yaml to a NodePort, and this is the error traceback now:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Job is missing
Request: GET /3/Jobs/$03010a2a016132d4ffffffff$_a2366be93ec99a78d7bc161de8c54d67
UPDATE 2:
I have tried using different service types (NodePort, LoadBalancer, ClusterIP) and none of them work. I have also tried Minikube with the official image, and with a custom image made by me, without success. I suspect this is related to either h2o itself or the clustering between pods. I will keep digging.
UPDATE 3:
I also found out that the post about running H2O in Docker (https://www.h2o.ai/blog/h2o-docker/) is really outdated, and the Dockerfile present on GitHub does not work either (I changed it to uncomment the ENTRYPOINT section, without success): https://github.com/h2oai/h2o-3/blob/master/Dockerfile
Even so, the custom image I built for h2o-k8s works seamlessly in pure Docker. I am wondering why it still does not work in K8s...
UPDATE 4:
I have tried modifying the environment variable H2O_KUBERNETES_SERVICE_DNS, without success.
In the meantime, the cluster became unavailable, that is, the readinessProbes would not complete successfully. No matter what I change now, it does not work.
I spun up a local k3d cluster to see what happened, and surprisingly the readinessProbes were not failing there, using v3.30.0.6. I then started testing with R instead of Python. I am glad I tried, because it pinpointed what was wrong: there is a version mismatch between the client and the server. So I updated the image accordingly, to v3.30.0.1.
But then the readinessProbe stopped working in my k3d cluster, so I was unable to test it.
It is working now. R client 3.30.0.1 against server 3.30.0.1 works, and Python client 3.30.0.7 against server 3.30.0.7 works as well. Marvelous. The problem was a version mismatch between the client and the server: the Python client had been updated to 3.30.0.7 while the latest server Docker image was 3.30.0.6.
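For anyone landing here, a quick way to confirm that the client and the server agree on the version (a minimal sketch; the IP and port are whatever your K3s service exposes):

    import h2o

    # h2o.init() refuses to connect by default when client and server versions differ
    h2o.init(ip="x.x.x.x", port=54321)

    print("client:", h2o.__version__)        # e.g. 3.30.0.7
    print("server:", h2o.cluster().version)  # should be identical to the client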

How to set ssm param locally for serverless offline

I recently started working on serverless architecture. Here is an example from my serverless.yml:
test:
  name: test
  handler: handler.lambda_handler
  timeout: 6
  environment:
    APP_ID: ${ssm:/path/to/ssm/test~true}
Now when I try to run the serverless offline command, it complains about the ssm variable; the following error comes up on the console.
I want to run everything on my local machine for development. Can someone help me with how to solve this problem?
ServerlessError: Trying to populate non string value into a string for variable ${ssm:/path/to/ssm/test~true}. Please make sure the value of the property is a string.
at Variables.populateVariable (C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:464:13)
at Variables.renderMatches (C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:386:21)
at C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:406:29
From previous event:
You can solve this by adding the serverless-offline-ssm plugin:
https://github.com/janders223/serverless-offline-ssm
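A minimal sketch of the plugin setup in serverless.yml (the stub value and the exact shape of the custom block are assumptions based on the plugin's README, which varies by plugin version; check it for yours):

    plugins:
      - serverless-offline-ssm
      - serverless-offline

    custom:
      serverless-offline-ssm:
        stages:
          - offline
        ssm:
          # resolved locally instead of hitting AWS SSM
          '/path/to/ssm/test': 'some-local-value'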
If you're feeling more adventurous, you can also use LocalStack: https://github.com/localstack/localstack
Note that the free version does not support everything.

AWS ECS error in prisma container - environment variable PRISMA_CONFIG

I'm new to AWS, and I'm trying to deploy my local web app on AWS using ECR and ECS, but I got stuck when running a cluster: it throws an error about the PRISMA_CONFIG environment variable in the prisma container.
In my local environment, I'm using Docker to build the app with Node.js, Prisma, and MongoDB, and it works fine.
Now on ECS, I created a task definition, and for the prisma container I tried to copy the YAML config over from my local docker-compose.yml to make it work.
There is a field called "Environment variables"; I've inputted the value there, but it's just not working: the error is thrown while the cluster is running, and then the task stops.
The YAML spans multiple lines, but the input box supports a single-line string only.
The variable key is PRISMA_CONFIG, and the following are the values I've already tried:
| port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
| \nport: 4466 \ndatabases: \ndefault: \nconnector: mongo \nuri: mongodb://prisma:prisma#mongo
|\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
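For reference, in the local docker-compose.yml this value is a YAML block scalar, which is what the \n attempts above try to emulate on a single line (a sketch based on the values above, with @ assumed where # appears in the URI):

    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: mongodb://prisma:prisma@mongo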
And these are the errors:
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.lang.RuntimeException: No valid Prisma config could be loaded.
expected a comment or a line break, but found p(112)
expected chomping or indentation indicators, but found \(92)
I expected all containers to run without errors, but the actual result is that the container stops after running for a minute.
Please help with this, or suggest another way to deploy to AWS.
Thank you very much.
I've been looking for a similar solution to load the Prisma config without the multiline string.
There are repositories that load the Prisma environment variables separately, without a Prisma config. Check out this repo for example:
https://github.com/akoenig/prisma-docker-compose/blob/master/.prisma.env
Here akoenig sets the following env variables using an env_file, so I'm assuming you can pass these environment variables in separately to achieve what Prisma is looking for.
# CONTENTS OF env_file
PORT=4466
SQL_CLIENT_HOST_CLIENT1=database
SQL_CLIENT_HOST_READONLY_CLIENT1=database
SQL_CLIENT_HOST=database
SQL_CLIENT_PORT=3306
SQL_CLIENT_USER=root
SQL_CLIENT_PASSWORD=prisma
SQL_CLIENT_CONNECTION_LIMIT=10
SQL_INTERNAL_HOST=database
SQL_INTERNAL_PORT=3306
SQL_INTERNAL_USER=root
SQL_INTERNAL_PASSWORD=prisma
SQL_INTERNAL_DATABASE=graphcool
CLUSTER_ADDRESS=http://prisma:4466
SQL_INTERNAL_CONNECTION_LIMIT=10
SCHEMA_MANAGER_SECRET=graphcool
SCHEMA_MANAGER_ENDPOINT=http://prisma:4466/cluster/schema
#CLUSTER_PUBLIC_KEY=
BUGSNAG_API_KEY=""
ENABLE_METRICS=0
JAVA_OPTS=-Xmx1G
This is for a MySQL database, so you would need to tailor the values to suit your setup. But in theory you should be able to pass these variables one by one into single variables in AWS's GUI.
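For example, in the raw task definition JSON each one becomes an entry in the container's environment array (a sketch with values taken from the env_file above, not a complete task definition):

    "containerDefinitions": [
      {
        "name": "prisma",
        "environment": [
          { "name": "PORT", "value": "4466" },
          { "name": "SQL_CLIENT_HOST", "value": "database" },
          { "name": "SQL_CLIENT_USER", "value": "root" },
          { "name": "SQL_CLIENT_PASSWORD", "value": "prisma" }
        ]
      }
    ]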
I've also asked this question on the Prisma Slack channel and am waiting to see if they have other suggestions: https://prisma.slack.com/archives/CA491RJH0/p1569689413383000
Let me know how it goes.
Not an expert here, but have you set up the environment variable PRISMA_API_MANAGEMENT_SECRET? You would have defined the secret when you configured your Fargate instance.
Have a look at the following article:
https://www.prisma.io/tutorials/deploy-prisma-to-aws-fargate-ct14

mesos-slave can not connect No credentials provided error

I am new to Mesos.
After starting mesos-master, I tried to connect mesos-slave with the following command:
/usr/sbin/mesos-slave --ip=192.192.7.180 --master=192.192.7.19:5050 --work_dir=/tmp/mesos/work/int --no-systemd_enable_support
It is not connecting to the master; it throws the following error:
No credentials provided. Attempting to register without authentication
Thank you in advance.
"No credentials provided" means you are trying to load a slave that is on a different network and is not configured in the Mesos config files. Once you register it, you can add it.
For example's sake, try the following.
For the master:
./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos
For the slave:
./bin/mesos-slave.sh --master=127.0.0.1:5050 --work_dir=/tmp/mesos --no-systemd_enable_support
Then open a browser at localhost:5050.
If all the steps are followed properly, you should find the Mesos dashboard.
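If your master actually enforces authentication (--authenticate_agents on newer Mesos, --authenticate_slaves on older releases), the slave needs an explicit credential instead of registering anonymously. A sketch, assuming a JSON-formatted credential file (the principal and secret values here are placeholders):

    # create the credential file the agent will present to the master
    echo '{ "principal": "agent-principal", "secret": "agent-secret" }' > /tmp/agent.cred

    /usr/sbin/mesos-slave --ip=192.192.7.180 --master=192.192.7.19:5050 \
      --work_dir=/tmp/mesos/work/int \
      --credential=file:///tmp/agent.cred \
      --no-systemd_enable_support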
