I'm using the nestjs-keycloak-connect module in multi-tenant mode.
The log shows everything is correct, but the request is denied with "Resource denied due to mismatched role(s)".
The example controller:
@Controller(':company')
@UseGuards(AuthGuard, RoleGuard)
export class CompanyController {
  @Get('/')
  @Roles({
    roles: ['admin'],
  })
  view(@Param('company') company: string) {
    return `your company is : ${company}`;
  }
}
[Nest] 23435 - 09/08/2022, 11:13:53 PM VERBOSE [Keycloak] Using token validation method: ONLINE
[Nest] 23435 - 09/08/2022, 11:13:53 PM VERBOSE [Keycloak] Authenticated User: {"exp":1662662924,"iat":1662662624,"jti":"13f4b99a-d5bb-4b5f-8fbd-2bffbbcc16ed","iss":"http://localhost:8080/realms/testrealm","aud":"account","sub":"ac10f640-535a-4658-8bcf-daac003e076c","typ":"Bearer","azp":"k","session_state":"66edf11e-e69b-42a9-a1cf-52988d5c9d51","acr":"1","realm_access":{"roles":["default-roles-testrealm","offline_access","admin","uma_authorization"]},"resource_access":{"account":{"roles":["manage-account","manage-account-links","view-profile"]}},"scope":"profile email","sid":"66edf11e-e69b-42a9-a1cf-52988d5c9d51","email_verified":true,"preferred_username":"x@y.z","given_name":"","family_name":"","email":"x@y.z"}
[Nest] 23435 - 09/08/2022, 11:13:53 PM VERBOSE [Keycloak] Controller has no @Resource defined, request allowed due to policy enforcement
[Nest] 23435 - 09/08/2022, 11:13:53 PM VERBOSE [Keycloak] Using matching mode: any
[Nest] 23435 - 09/08/2022, 11:13:53 PM VERBOSE [Keycloak] Roles: ["admin"]
[Nest] 23435 - 09/08/2022, 11:13:53 PM VERBOSE [Keycloak] Resource denied due to mismatched role(s)
I can't understand where the problem is!
It turned out I needed to add the role to the client; I had only added the role to the realm.
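For context: in the token above, admin only appears under realm_access.roles, while an unprefixed entry in @Roles is matched against the client's roles. If keycloak-connect's documented role syntax applies here (an assumption worth verifying for your version), a realm role can also be referenced explicitly with the realm: prefix instead of duplicating the role on the client:

import { Controller, Get, Param, UseGuards } from '@nestjs/common';
// import path as named in the question; verify against the package in use
import { AuthGuard, RoleGuard, Roles } from 'nestjs-keycloak-connect';

@Controller(':company')
@UseGuards(AuthGuard, RoleGuard)
export class CompanyController {
  @Get('/')
  // 'realm:admin' matches realm_access.roles; a bare 'admin' is checked
  // against the client's roles (resource_access.<client>.roles)
  @Roles({ roles: ['realm:admin'] })
  view(@Param('company') company: string) {
    return `your company is : ${company}`;
  }
}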
This is a tracing network with one channel composed of 3 orgs, 1 anchor peer per organization, 1 MSP per org, and 1 CA for org3. I'm not using TLS, because I couldn't find a dependable sample with TLS on.
I'm trying to use fabric-sdk-node to build a web front end for it, based on the fabcar sample. When I run invoke.js (almost the same as the example), I get no response.
Store path:/root/hyperledger-fabric/test/webapp/hfc-key-store
Successfully loaded user1 from persistence
Assigning transaction_id: 8387c087b4b7b9210cdc68ff0ff7fda99c706bad052b9b5138c86df5463244be
Transaction proposal was bad
proposalResponses[0].response is bad
undefined
Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
Failed to invoke successfully :: Error: Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
The relevant code is:
if (proposalResponses && proposalResponses[0].response && proposalResponses[0].response.status === 200) {
    isProposalGood = true;
    console.log('Transaction proposal was good');
} else {
    console.error('Transaction proposal was bad');
    if (!proposalResponses) {
        console.log('proposalResponses is bad');
    }
    if (!proposalResponses[0].response) {
        console.log('proposalResponses[0].response is bad');
        //console.log(proposalResponses[0].response.status);
    }
}
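Since the log above only prints undefined, it may help to dump the offending element itself; in the 1.x SDK a failed endorsement is typically delivered as an Error object inside proposalResponses. A hedged diagnostic sketch (the instanceof check is an assumption about the SDK version in use):

// Diagnostics for the else-branch above (fabric-sdk-node 1.x):
// a rejected endorsement usually shows up as an Error in the array.
if (proposalResponses && proposalResponses[0] instanceof Error) {
    console.error('Endorsement failed:', proposalResponses[0].message);
} else if (proposalResponses && proposalResponses[0].response) {
    console.error('Endorsement status:', proposalResponses[0].response.status);
}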
When I check the Docker logs (in ca, peer0, orderer), the only change I find is in orderer.trace.com:
2021-05-04 07:41:41.136 UTC [comm.grpc.server] 1 -> INFO 007 streaming call completed {"grpc.start_time": "2021-05-04T07:40:41.77Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.21.0.8:39422", "error": "context finished before block retrieved: context canceled", "grpc.code": "Unknown", "grpc.call_duration": "59.36575566s"}
After several attempts, this error occurred once in peer0.sell.trace.com:
2021-05-04 01:44:31.199 UTC [ConnProducer] NewConnection -> ERRO 034 Failed connecting to orderer.trace.com:7050 , error: context deadline exceeded
2021-05-04 01:44:31.200 UTC [deliveryClient] connect -> ERRO 035 Failed obtaining connection: Could not connect to any of the endpoints: [orderer.trace.com:7050]
2021-05-04 01:44:31.200 UTC [deliveryClient] try -> WARN 036 Got error: Could not connect to any of the endpoints: [orderer.trace.com:7050] , at 1 attempt. Retrying in 1s
I'm very new to both Fabric and Node.js, so any kind of help would be great. Thanks in advance.
EDIT
My peer YAML:
peer0.sell.trace.com:
  container_name: peer0.sell.trace.com
  image: hyperledger/fabric-peer:latest
  environment:
    - CORE_PEER_ID=peer0.sell.trace.com
    - CORE_PEER_ADDRESS=peer0.sell.trace.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.sell.trace.com:7051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.sell.trace.com:7051
    - CORE_PEER_LOCALMSPID=OrgSellMSP
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=test_default
    - FABRIC_LOGGING_SPEC=INFO
    #- FABRIC_LOGGING_SPEC=DEBUG
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=true
    ##TLS
    #- CORE_PEER_TLS_ENABLED=true
    #- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    #- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    #- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - GODEBUG=netdns=go
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/sell.trace.com/peers/peer0.sell.trace.com/msp:/etc/hyperledger/fabric/msp
    - ./crypto-config/peerOrganizations/sell.trace.com/peers/peer0.sell.trace.com/tls:/etc/hyperledger/fabric/tls
    #- ./crypto-config/peerOrganizations/sell.trace.com/users/Admin@sell.trace.com/tls:/etc/hyperledger/client/tls
  ports:
    - 1151:7051
    - 1153:7053
  networks:
    default:
      aliases:
        - test
My connection.json:
{
  "name": "first-network-org_sell",
  "version": "1.0.0",
  "client": {
    "organization": "org_sell",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "3000"
        }
      }
    }
  },
  "organizations": {
    "org_sell": {
      "mspid": "OrgSellMSP",
      "peers": [
        "peer0.sell.trace.com",
        "peer1.sell.trace.com"
      ]
    }
  },
  "peers": {
    "peer0.sell.trace.com": {
      "url": "grpc://localhost:7051"
    },
    "peer1.sell.trace.com": {
      "url": "grpc://localhost:7051"
    }
  }
}
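One observation (not a confirmed fix): the compose file above publishes peer0's port 7051 as host port 1151, yet both peers in the connection profile point at grpc://localhost:7051. If the SDK client runs on the host rather than inside the Docker network, each URL would have to match that peer's published port, along the lines of:

"peers": {
  "peer0.sell.trace.com": {
    "url": "grpc://localhost:1151"
  }
}

peer1 would likewise need its own published port rather than sharing peer0's.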
A network is created based on Hyperledger Fabric with: 1 orderer, 1 CA, 1 CouchDB, 1 CLI, and 1 peer.
Afterwards, a new org is added with: 1 peer, 1 CouchDB and 1 CLI.
Up to this stage there is no error and all the containers are running. Then the CA admin is enrolled; still no problem, and the admin connects fine. Now I want to create an admin for the new organization.
enrollandregisterNewAdmin.js
// (imports added for completeness; fabric-network 1.4.x)
const { Gateway, X509WalletMixin } = require('fabric-network');

const gateway = new Gateway();
await gateway.connect(ccpPath, { wallet, identity: 'admin', discovery: { enabled: true, asLocalhost: true } });

const ca = gateway.getClient().getCertificateAuthority();
const adminIdentity = gateway.getCurrentIdentity();

// register adminOrg3 with the CA, using the connected admin as registrar
const secret = await ca.register({
    affiliation: 'org1.department1',
    enrollmentID: 'adminOrg3',
    role: 'client',
    attrs: [
        { "name": "hf.Registrar.Roles", "value": "client" },
        { "name": "hf.Registrar.DelegateRoles", "value": "client" },
        { "name": "hf.Revoker", "value": "true" },
        { "name": "hf.IntermediateCA", "value": "true" },
        { "name": "hf.GenCRL", "value": "true" },
        { "name": "hf.AffiliationMgr", "value": "true" },
        { "name": "hf.Registrar.Attributes", "value": "hf.Registrar.Roles,hf.Registrar.DelegateRoles,hf.Revoker,hf.IntermediateCA,hf.GenCRL,hf.Registrar.Attributes,hf.AffiliationMgr" }
    ]
}, adminIdentity);

// enroll the new identity and import it into the wallet under Org3MSP
const enrollment = await ca.enroll({ enrollmentID: 'adminOrg3', enrollmentSecret: secret });
const userIdentity = X509WalletMixin.createIdentity('Org3MSP', enrollment.certificate, enrollment.key.toBytes());
await wallet.import('adminOrg3', userIdentity);
Finally, the certificates of 'adminOrg3' are imported into the wallet with no error. But when I try to invoke/query as 'adminOrg3', I receive this error:
[Channel.js]: Channel:byfn received discovery error:access denied
[Channel.js]: Error: Channel:byfn Discovery error:access denied
error: [Network]: _initializeInternalChannel: Unable to initialize channel. Attempted to contact 1 Peers. Last error was Error: Channel:byfn Discovery error:access denied
This is a common error when the wallet persists from a previous deployment, but in this case the wallet is deleted each time the network is restarted.
docker logs peer0.org3.example.com
2021-02-22 10:21:09.588 UTC [cauthdsl] deduplicate -> ERRO 082 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority) for identity 0
My config file for the new org, docker-compose-org3.yaml:
version: '2'

volumes:
  peer0.org3.example.com:

networks:
  byfn:

services:
  peer0.org3.example.com:
    container_name: peer0.org3.example.com
    extends:
      file: base/peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org3.example.com
      - CORE_PEER_ADDRESS=peer0.org3.example.com:11051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:11051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org3.example.com:11052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.example.com:11051
      - CORE_PEER_LOCALMSPID=Org3MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./org3-artifacts/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/msp:/etc/hyperledger/fabric/msp
      - ./org3-artifacts/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.org3.example.com:/var/hyperledger/production
    ports:
      - 11051:11051
    networks:
      - byfn

  Org3cli:
    container_name: Org3cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=INFO
      #- FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_ID=Org3cli
      - CORE_PEER_ADDRESS=peer0.org3.example.com:11051
      - CORE_PEER_LOCALMSPID=Org3MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org3.example.com/users/Admin@org3.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./org3-artifacts/crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./crypto-config/peerOrganizations/org1.example.com:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
    depends_on:
      - peer0.org3.example.com
    networks:
      - byfn
Is it possible for different MSPs to exist under the same affiliation?
Is any change needed to the configuration files?
Just to clarify a few things:
Did you add the new org to the channel before trying to connect with the new org's user?
Are you running the peers in Docker containers and using volumes for the peer file system mapping? It may happen that the peers still load the content of the old channels.
-Tsvetan
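One thing that also stands out in the question (an observation, not a verified fix): adminOrg3 is registered and enrolled through the CA returned by the gateway connection, under the org1.department1 affiliation, yet the resulting certificate is imported with the Org3MSP MSP ID. A certificate signed by Org1's CA will not validate against Org3's MSP definition, which is consistent with the "certificate signed by unknown authority" error above. A hedged sketch of enrolling against Org3's own CA instead (fabric-ca-client 1.4.x, inside an async function; the CA URL and bootstrap credentials are placeholders):

// The URL and credentials below are placeholders for Org3's actual CA.
const FabricCAServices = require('fabric-ca-client');

const caOrg3 = new FabricCAServices('http://localhost:11054');
const org3Enrollment = await caOrg3.enroll({
    enrollmentID: 'admin',      // Org3 CA bootstrap identity
    enrollmentSecret: 'adminpw'
});
// ...then register/enroll adminOrg3 against caOrg3 and import it with Org3MSP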
I have an error while running a pipeline in Jenkins using a Kubernetes cloud server.
Everything works fine until the npm install step, where I get: Cannot contact nodejs-rn5f3: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@3b1e0041:nodejs-rn5f3": Remote call on nodejs-rn5f3 failed. The channel is closing down or has closed down
How can I fix this error?
Here are my logs:
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
‘nodejs-rn5f3’ is offline
Agent nodejs-rn5f3 is provisioned from template nodejs
---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
    jenkins/label-digest: "XXXXXXXXXXXXXXXXXXXXXXXXXX"
    jenkins/label: "nodejs"
  name: "nodejs-rn5f3"
spec:
  containers:
  - args:
    - "cat"
    command:
    - "/bin/sh"
    - "-c"
    image: "node:15.5.1-alpine3.10"
    imagePullPolicy: "IfNotPresent"
    name: "node"
    resources:
      limits: {}
      requests: {}
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
    workingDir: "/home/jenkins/agent"
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_AGENT_NAME"
      value: "nodejs-rn5f3"
    - name: "JENKINS_WEB_SOCKET"
      value: "true"
    - name: "JENKINS_NAME"
      value: "nodejs-rn5f3"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://XX.XX.XX.XX/"
    image: "jenkins/inbound-agent:4.3-4"
    name: "jnlp"
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  hostNetwork: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"
Running on nodejs-rn5f3 in /home/jenkins/agent/workspace/something
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
[... cloning repository]
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ ls -la
total 1240
drwxr-xr-x 5 node node 4096 Feb 26 07:33 .
drwxr-xr-x 4 node node 4096 Feb 26 07:33 ..
-rw-r--r-- 1 node node 1689 Feb 26 07:33 package.json
and some other files and folders
[Pipeline] sh
+ cat package.json
{
[...]
"dependencies": {
[blabla....]
},
"devDependencies": {
[blabla...]
}
}
[Pipeline] sh
+ npm install
Cannot contact nodejs-rn5f3: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@3b1e0041:nodejs-rn5f3": Remote call on nodejs-rn5f3 failed. The channel is closing down or has closed down
At this stage, here are the logs of the jnlp container in my pod nodejs-rnf5f3:
INFO: Connected
Feb 26, 2021 8:05:53 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Read side closed
Feb 26, 2021 8:05:53 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Terminated
Feb 26, 2021 8:05:53 AM jenkins.slaves.restarter.JnlpSlaveRestarterInstaller$FindEffectiveRestarters$1 onReconnect
INFO: Restarting agent via jenkins.slaves.restarter.UnixSlaveRestarter@1a39588e
Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: nodejs-rnf5f3
Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Feb 26, 2021 8:05:55 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Feb 26, 2021 8:05:55 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Feb 26, 2021 8:05:55 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: WebSocket connection open
Feb 26, 2021 8:05:58 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connected
.... same as above
I don't know where this error comes from. Is it related to resource usage?
Here is the usage of my containers:
POD NAME CPU(cores) MEMORY(bytes)
jenkins-1-jenkins-0 jenkins-master 61m 674Mi
nodejs-rnf5f3 jnlp 468m 104Mi
nodejs-rnf5f3 node 1243m 1284Mi
My cluster is an e2-medium in GKE with 2 nodes.
If I had to bet (but it's just a wild guess), I'd say that the pod was killed due to running out of memory (OOMKilled).
The ChannelClosedException is a symptom, not the problem.
It's kind of hard to debug because the agent pod is being deleted; you can try kubectl get events in the relevant namespace, but events only last for 1 hour by default.
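If memory pressure is indeed the cause, one mitigation is to give the node container explicit requests and limits in the pod template, instead of the empty limits: {} shown above, so the scheduler reserves enough memory and a kill shows up as an OOMKilled status. A sketch; the values are assumptions to be tuned to the build:

resources:
  requests:
    cpu: "500m"
    memory: "1Gi"    # npm install was observed using ~1.3Gi above
  limits:
    memory: "2Gi"    # assumption: adjust to what the build actually needs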
I'm getting the error below while compiling the code.
I am new to NestJS, and the sample application code is attached.
I tried replacing UsersService in module.ts with UsersModule as well, but it didn't work. What am I doing wrong?
[Nest] 19960 - 10/19/2020, 11:42:09 PM [NestFactory] Starting Nest application...
[Nest] 19960 - 10/19/2020, 11:42:09 PM [InstanceLoader] MongooseModule dependencies initialized +25ms
[Nest] 19960 - 10/19/2020, 11:42:09 PM [InstanceLoader] PassportModule dependencies initialized +1ms
[Nest] 19960 - 10/19/2020, 11:42:09 PM [ExceptionHandler] Nest can't resolve dependencies of the AuthService (?, JwtService). Please make sure that the argument UsersService at index [0] is available in the AuthService context.
Potential solutions:
If UsersService is a provider, is it part of the current AuthService?
If UsersService is exported from a separate @Module, is that module imported within AuthService?
@Module({
  imports: [ /* the Module containing UsersService */ ]
})
Repository: https://github.com/richakhetan/task-manager-nest
You have a circular dependency between the AuthModule and UserModule, and between the UserService and AuthService. To resolve this, you need to use a forwardRef on both the modules and the services. Generally, this would look like:
import { Inject, Injectable, Module, forwardRef } from '@nestjs/common';

@Module({
  imports: [forwardRef(() => UserModule)],
  providers: [AuthService],
  exports: [AuthService],
})
export class AuthModule {}

@Module({
  imports: [forwardRef(() => AuthModule)],
  providers: [UserService],
  exports: [UserService],
})
export class UserModule {}

@Injectable()
export class AuthService {
  constructor(@Inject(forwardRef(() => UserService)) private readonly userService: UserService) {}
}

@Injectable()
export class UserService {
  constructor(@Inject(forwardRef(() => AuthService)) private readonly authService: AuthService) {}
}
Edit
Forgot to add the exports property
The modules were not properly structured. I removed the code causing the circular dependency from the modules and created a new module, giving a clear structure.
Detailed code can be found in the repository.
Repository: https://github.com/richakhetan/task-manager-nest
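For reference, the restructuring described above usually amounts to making the dependency one-directional, so that forwardRef is no longer needed. A hypothetical sketch (single file only for brevity; the class names follow the answer above):

import { Injectable, Module } from '@nestjs/common';

@Injectable()
export class UserService {}

@Module({
  providers: [UserService],
  exports: [UserService], // UserModule no longer knows about AuthModule
})
export class UserModule {}

@Injectable()
export class AuthService {
  constructor(private readonly userService: UserService) {}
}

@Module({
  imports: [UserModule], // the dependency points one way only, no forwardRef
  providers: [AuthService],
  exports: [AuthService],
})
export class AuthModule {}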
I'm getting an error running my JHipster application with Prometheus configuration for metrics.
I use the configuration from the official website:
https://www.jhipster.tech/monitoring/
In my application-dev.yml I have:
metrics:
  prometheus:
    enabled: true
And my class for auth is:
@Configuration
@Order(1)
@ConditionalOnProperty(prefix = "jhipster", name = "metrics.prometheus.enabled")
public class BasicAuthConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .antMatcher("/management/prometheus/**")
            .authorizeRequests()
            .anyRequest().hasAuthority(AuthoritiesConstants.ADMIN)
            .and()
            .httpBasic().realmName("jhipster")
            .and()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and().csrf().disable();
    }
}
2019-06-25 12:22:52.693 INFO 13260 --- [ restartedMain] com.ex.App : The following profiles are active: dev,swagger
2019-06-25 12:22:55.170 WARN 13260 --- [ restartedMain] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'undertowServletWebServerFactory' defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/ServletWebServerFactoryConfiguration$EmbeddedUndertow.class]: Initialization of bean failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webConfigurer' defined in file [/home/eclipse-workspace/back_docker/target/classes/com/ex/config/WebConfigurer.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'io.github.jhipster.config.JHipsterProperties': Could not bind properties to 'JHipsterProperties' : prefix=jhipster, ignoreInvalidFields=false, ignoreUnknownFields=false; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'jhipster' to io.github.jhipster.config.JHipsterProperties
2019-06-25 12:22:55.188 ERROR 13260 --- [ restartedMain] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target [Bindable@7585af55 type = io.github.jhipster.config.JHipsterProperties, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(ignoreInvalidFields=false, ignoreUnknownFields=false, value=jhipster, prefix=jhipster)]] failed:
Property: jhipster.metrics.prometheus.enabled
Value: true
Origin: class path resource [config/application-dev.yml]:128:22
Reason: The elements [jhipster.metrics.prometheus.enabled] were left unbound.
Action:
Update your application's configuration
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.679 s
[INFO] Finished at: 2019-06-25T12:22:55+02:00
[INFO] ------------------------------------------------------------------------
I changed my JHipster project from a microservice application to a microservice gateway, and that solved the issue.
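For anyone who hits the same binder error but doesn't want to change the application type: in more recent JHipster versions the Prometheus toggle moved out of JHipsterProperties into Spring Boot's management namespace. If that applies to your version (an assumption; check the JHipsterProperties class on your classpath), the dev configuration would instead look like:

management:
  metrics:
    export:
      prometheus:
        enabled: true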