I am installing fabric8 on my Linux VM. I followed the steps mentioned in the guide: I installed the KVM driver and downloaded the fabric8 binaries. But when I then ran the gofabric8 start command, I got the following error. Any suggestions?
Unable to download driver Driver install for linux not yet supported
minikube is already available on your PATH
kubectl is already available on your PATH
funktion is already available on your PATH
There are no extra logs that I can see, so I ran minikube directly. Details are given below.
Command I used:
/root/.fabric8/bin/minikube start --vm-driver=kvm --v=7
Log output:
Starting local Kubernetes cluster...
Found binary path at /usr/local/bin/docker-machine-driver-kvm
Launching plugin server for driver kvm
Plugin server listening at address 127.0.0.1:41218
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetState
(minikube) DBG | Getting current state...
(minikube) DBG | Fetching VM...
(minikube) Failed to fetch machine
(minikube) DBG | panic: runtime error: invalid memory address or nil pointer dereference
(minikube) DBG | [signal 0xb code=0x1 addr=0x0 pc=0x4d7167]
(minikube) DBG |
(minikube) DBG | goroutine 22 [running]:
(minikube) DBG | panic(0x8c54a0, 0xc82000a100)
(minikube) DBG | /usr/local/go/src/runtime/panic.go:464 +0x3e6
(minikube) DBG | github.com/alexzorin/libvirt-go.(*VirDomain).GetState(0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
(minikube) DBG | /home/daniel/go/src/github.com/alexzorin/libvirt-go/domain.go:192 +0x87
(minikube) DBG | github.com/dhiltgen/docker-machine-kvm.(*Driver).GetState(0xc8201320e0, 0x0, 0x0, 0x0)
(minikube) DBG | /home/daniel/go/src/github.com/dhiltgen/docker-machine-kvm/kvm.go:450 +0x70
(minikube) DBG | github.com/docker/machine/libmachine/drivers/rpc.(*RPCServerDriver).GetState(0xc8200e3b60, 0xe0d430, 0xc820154910, 0x0, 0x0)
(minikube) DBG | /home/daniel/go/src/github.com/docker/machine/libmachine/drivers/rpc/server_driver.go:191 +0x45
(minikube) DBG | reflect.Value.call(0x8858a0, 0x93fe88, 0x13, 0x959cd0, 0x4, 0xc82002def8, 0x3, 0x3, 0x0, 0x0, ...)
(minikube) DBG | /usr/local/go/src/reflect/value.go:435 +0x120d
(minikube) DBG | reflect.Value.Call(0x8858a0, 0x93fe88, 0x13, 0xc82002def8, 0x3, 0x3, 0x0, 0x0, 0x0)
(minikube) DBG | /usr/local/go/src/reflect/value.go:303 +0xb1
(minikube) DBG | net/rpc.(*service).call(0xc82004dac0, 0xc82004d880, 0xc820154080, 0xc8200a9680, 0xc8201506a0, 0x7e14a0, 0xe0d430, 0x16, 0x89fca0, 0xc820154910, ...)
(minikube) DBG | /usr/local/go/src/net/rpc/server.go:383 +0x1c2
(minikube) DBG | created by net/rpc.(*Server).ServeCodec
(minikube) DBG | /usr/local/go/src/net/rpc/server.go:477 +0x49d
E0119 09:42:23.506636 7110 start.go:96] Error starting host: Error getting state for host: unexpected EOF.
Retrying.
E0119 09:42:23.507014 7110 start.go:102] Error starting host: Error getting state for host: unexpected EOF
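The nil pointer dereference happens inside the KVM driver's GetState while it fetches the libvirt domain, which usually means minikube still has machine state for a VM that libvirt no longer knows about. A recovery sketch, assuming virsh is installed and libvirtd is running (this recreates the minikube VM from scratch):

# Check whether libvirt still has a domain named 'minikube'
virsh --connect qemu:///system list --all

# If the machine state is stale, delete it and start over
/root/.fabric8/bin/minikube delete
/root/.fabric8/bin/minikube start --vm-driver=kvm --v=7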
I've been trying to connect to a k8s cluster running in Azure from my Mac laptop, but unfortunately I can't retrieve any information.
user@MyMac ~ % k get nodes
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1
When I raise the log verbosity, I get this:
user@MyMac ~ % kubectl get deployments --all-namespaces=true -v 8
I0924 10:32:14.451255 28517 loader.go:372] Config loaded from file: /Users/user/.kube/config
I0924 10:32:14.461468 28517 round_trippers.go:432] GET https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s
I0924 10:32:14.461484 28517 round_trippers.go:438] Request Headers:
I0924 10:32:14.461490 28517 round_trippers.go:442] Accept: application/json, */*
I0924 10:32:14.461495 28517 round_trippers.go:442] User-Agent: kubectl/v1.22.5 (darwin/amd64) kubernetes/5c99e2a
error: unknown flag: --environment
I0924 10:32:14.555302 28517 round_trippers.go:457] Response Status: in 93 milliseconds
I0924 10:32:14.555318 28517 round_trippers.go:460] Response Headers:
I0924 10:32:14.555828 28517 cached_discovery.go:121] skipped caching discovery info due to Get "https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s": getting credentials: exec:
I0924 10:32:14.569821 28517 shortcut.go:89] Error loading discovery information: Get "https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s": getting credentials: exec: executable kubelogin failed with exit code 1
I0924 10:32:14.570037 28517 round_trippers.go:432] GET https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s
I0924 10:32:14.570050 28517 round_trippers.go:438] Request Headers:
I0924 10:32:14.570068 28517 round_trippers.go:442] Accept: application/json, */*
I0924 10:32:14.570088 28517 round_trippers.go:442] User-Agent: kubectl/v1.22.5 (darwin/amd64) kubernetes/5c99e2a
I0924 10:32:14.618944 28517 round_trippers.go:457] Response Status: in 17 milliseconds
I0924 10:32:14.618976 28517 round_trippers.go:460] Response Headers:
I0924 10:32:14.619147 28517 cached_discovery.go:121] skipped caching discovery info due to Get "https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s": getting credentials: exec: executable kubelogin failed with exit code 1
I0924 10:32:14.619790 28517 helpers.go:235] Connection error: Get https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s: getting credentials: exec: executable kubelogin failed with exit code 1
F0924 10:32:14.620768 28517 helpers.go:116] Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000cc001, 0xc000258000, 0x97, 0x23d)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3cd80e0, 0xc000000003, 0x0, 0x0, 0xc0004d8150, 0x2, 0x33f6d63, 0xa, 0x74, 0x100e100)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3cd80e0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0004e0db0, 0x1, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00081c3f0, 0x68, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2e6b0e0, 0xc0004e7410, 0x2cebdc8)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0001ef680, 0xc000820cc0, 0x1, 0x4)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:180 +0x159
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0001ef680, 0xc000820c80, 0x4, 0x4, 0xc0001ef680, 0xc000820c80)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000401180, 0xc0000ce180, 0xc0000ce120, 0x6)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d
goroutine 18 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x3cd80e0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xdf
goroutine 23 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2cebcd0, 0x2e695e0, 0xc0004e6000, 0x1, 0xc00009eb40)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x2cebcd0, 0x12a05f200, 0x0, 0x1, 0xc00009eb40)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x2cebcd0, 0x12a05f200, 0xc00009eb40)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96
I updated the az CLI, but nothing changed.
I also removed the .kube/config file, which didn't help either.
I don't know what went wrong after the macOS update.
This happens because the .kube/config file was rewritten during the upgrade, so you need to add the credentials again. Run this command to refresh them:
az aks get-credentials --resource-group group --name cluster-name --admin --overwrite-existing
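If the unknown flag: --environment error persists after refreshing the credentials, the kubeconfig's exec section may still be invoking kubelogin with arguments your installed version does not understand. Regenerating that section is worth a try; a sketch assuming the Azure/kubelogin plugin is installed and on PATH:

# Rewrite the kubeconfig exec section to authenticate via the Azure CLI
kubelogin convert-kubeconfig -l azurecli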
Details of the Kubernetes Service Connection:
Authentication method: Azure Subscription
Azure Subscription:
Cluster:
Namespace:
Use cluster admin credentials
After adding some new secrets to Terraform using the 1Password provider, we saw an error without much helpful output.
$ terraform plan
...
Error: rpc error: code = Unavailable desc = transport is closing
Error: rpc error: code = Canceled desc = context canceled
...
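The [DEBUG] output quoted further down comes from re-running the plan with Terraform's standard debug logging enabled (TF_LOG=DEBUG, or TRACE for even more detail):

# Re-run the plan with debug logging to surface provider panics
TF_LOG=DEBUG terraform plan 2>&1 | tee plan-debug.log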
Terraform provider:
terraform {
  required_providers {
    onepassword = {
      source  = "anasinnyk/onepassword"
      version = "~> 1.2.1"
    }
  }

  required_version = "~> 0.13"
}
Terraform configuration:
data "onepassword_item_password" "search_cloud_id" {
name = "Azure Elastic Cloud ID"
vault = data.onepassword_vault.vault_name.id
}
data "onepassword_item_password" "search_api_key" {
name = "Azure Elastic Cloud API key"
vault = data.onepassword_vault.vault_name.id
}
resource "kubernetes_secret" "search" {
metadata {
name = "search"
namespace = kubernetes_namespace.production.id
}
data = {
"ELASTICSEARCH_CLOUD_ID" = data.onepassword_item_password.api_search_cloud_id.password
"ELASTICSEARCH_API_KEY" = data.onepassword_item_password.api_search_api_key.password
}
type = "Opaque"
}
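One thing worth checking with configs like this: the references in the secret's data block must match the declared data source names exactly (search_cloud_id / search_api_key). A quick terraform validate catches that class of typo before a plan:

terraform validate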
We managed to get some useful output by removing one data reference at a time, which led to the errors printing:
panic: runtime error: invalid memory address or nil pointer dereference
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x147d1bd]
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1:
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: goroutine 194 [running]:
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/anasinnyk/terraform-provider-1password/onepassword.resourceItemPasswordRead(0x19418a0, 0xc0004ac540, 0xc000096f80, 0x173d040, 0xc0007ac740, 0xc0003bce40, 0xc000119910, 0x100c9b8)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/anasinnyk/terraform-provider-1password/onepassword/resource_item_password.go:75 +0x18d
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc0004613f0, 0x1941820, 0xc000384300, 0xc000096f80, 0x173d040, 0xc0007ac740, 0x0, 0x0, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2@v2.0.0/helper/schema/resource.go:288 +0x1ec
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc0004613f0, 0x1941820, 0xc000384300, 0xc000304b80, 0x173d040, 0xc0007ac740, 0xc0007ac740, 0xc000304b80, 0x0, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2@v2.0.0/helper/schema/resource.go:489 +0xff
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/plugin.(*GRPCProviderServer).ReadDataSource(0xc00026e6a0, 0x1941820, 0xc000384300, 0xc0003842c0, 0xc00026e6a0, 0xc00026e6b0, 0x185a058)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2@v2.0.0/internal/helper/plugin/grpc_provider.go:1102 +0x4c5
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfplugin5._Provider_ReadDataSource_Handler.func1(0x1941820, 0xc000384300, 0x17dcd60, 0xc0003842c0, 0xc000384300, 0x1773c80, 0xc0004ac401, 0xc000304640)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2@v2.0.0/internal/tfplugin5/tfplugin5.pb.go:3348 +0x86
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/plugin.Serve.func3.1(0x19418e0, 0xc0003d4480, 0x17dcd60, 0xc0003842c0, 0xc000304620, 0xc000304640, 0xc0007c8ba0, 0x11b81c8, 0x17c7a20, 0xc0003d4480)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2@v2.0.0/plugin/serve.go:76 +0x87
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfplugin5._Provider_ReadDataSource_Handler(0x17fdb60, 0xc00026e6a0, 0x19418e0, 0xc0003d4480, 0xc0004ac4e0, 0xc00000d080, 0x19418e0, 0xc0003d4480, 0xc000010090, 0x90)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: github.com/hashicorp/terraform-plugin-sdk/v2@v2.0.0/internal/tfplugin5/tfplugin5.pb.go:3350 +0x14b
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc.(*Server).processUnaryRPC(0xc00027ae00, 0x1949c60, 0xc000103380, 0xc00018e000, 0xc00020acf0, 0x1e49910, 0x0, 0x0, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc@v1.30.0/server.go:1171 +0x50a
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc.(*Server).handleStream(0xc00027ae00, 0x1949c60, 0xc000103380, 0xc00018e000, 0x0)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc@v1.30.0/server.go:1494 +0xccd
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0000382e0, 0xc00027ae00, 0x1949c60, 0xc000103380, 0xc00018e000)
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc@v1.30.0/server.go:834 +0xa1
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: created by google.golang.org/grpc.(*Server).serveStreams.func1
2021-08-27T15:34:29.367+0930 [DEBUG] plugin.terraform-provider-onepassword_v1.2.1: google.golang.org/grpc@v1.30.0/server.go:832 +0x204
2021-08-27T15:34:29.368+0930 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021/08/27 15:34:29 [ERROR] eval: *terraform.evalReadDataRefresh, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [ERROR] eval: *terraform.evalReadDataRefresh, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [ERROR] eval: *terraform.evalReadDataRefresh, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-08-27T15:34:29.369+0930 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/anasinnyk/onepassword/1.2.1/darwin_amd64/terraform-provider-onepassword_v1.2.1 pid=17549 error="exit status 2"
2021/08/27 15:34:29 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [TRACE] [walkRefresh] Exiting eval tree: data.onepassword_item_password.search_api_key
2021/08/27 15:34:29 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key": visit complete
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key": dynamic subgraph encountered errors
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key": visit complete
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key (expand)": dynamic subgraph encountered errors
2021/08/27 15:34:29 [TRACE] vertex "data.onepassword_item_password.search_api_key (expand)": visit complete
2021/08/27 15:34:29 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/kubernetes\"] (close)" errored, so skipping
2021/08/27 15:34:29 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/anasinnyk/onepassword\"] (close)" errored, so skipping
2021/08/27 15:34:29 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-08-27T15:34:29.501+0930 [DEBUG] plugin: plugin exited
2021-08-27T15:34:29.502+0930 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-08-27T15:34:29.507+0930 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/kubernetes/1.13.3/darwin_amd64/terraform-provider-kubernetes_v1.13.3_x4 pid=17673
2021-08-27T15:34:29.507+0930 [DEBUG] plugin: plugin exited
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.
[1]: https://github.com/hashicorp/terraform/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
This led us to find that one of our team members managed to create two 1Password entries with the same name in the same vault.
After deleting the duplicate entry in 1Password, terraform plan ran without error again.
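If you suspect the same root cause, duplicate titles can be listed without clicking through the 1Password UI. A sketch assuming the 1Password CLI v2 and jq are installed; "vault-name" is a placeholder:

# Print item titles that appear more than once in the given vault
op item list --vault "vault-name" --format json | jq -r '.[].title' | sort | uniq -d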
I'm trying to figure out how fabric-sdk-go works.
I created a connection to Hyperledger Fabric and installed my chaincode, but when I try to execute a request I get an error for some reason.
My function:
response, err := setup.client.Query(channel.Request{
	ChaincodeID: setup.ChainCodeID,
	Fcn:         "invoke",
	Args:        [][]byte{[]byte("query"), []byte("hello")},
})
And the output log:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x9384f0]
goroutine 1 [running]:
github.com/hyperledger/fabric-sdk-go/pkg/client/channel.(*Client).Query(0x0, 0xce8db5, 0x5, 0xce9c7d, 0x6, 0xc0001b1bd0, 0x3, 0x3, 0x0, 0x0, ...)
/home/batazor/.gvm/pkgsets/go1.12/global/pkg/mod/github.com/hyperledger/fabric-sdk-go@v1.0.0-alpha5/pkg/client/channel/chclient.go:97 +0xc0
main.(*FabricSetup).QueryHello(0xc000171eb0, 0x0, 0x0, 0x28, 0xc0001b0460)
/home/batazor/.gvm/pkgsets/go1.12/global/src/github.com/batazor/hyperledger-fabric/cmd/hyperledger-fabric/example.go:10 +0x217
main.main()
/home/batazor/.gvm/pkgsets/go1.12/global/src/github.com/batazor/hyperledger-fabric/cmd/hyperledger-fabric/main.go:43 +0x143
P.S. My chaincode is here: https://github.com/batazor/hyperledger-fabric/blob/master/chaincode/hello/go/hello.go
I skipped showing the channel connection setup step; see:
https://github.com/chainHero/heroes-service/blob/master/blockchain/setup.go#L93-L136
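For what it's worth, the stack trace shows Query being called on a nil receiver ((*Client).Query(0x0, ...)), which usually means the error returned when creating the channel client was ignored, leaving setup.client nil. A minimal sketch of the client setup with the error checked; the config path, channel ID, and user name are placeholders, assuming fabric-sdk-go v1.0.0-alpha5:

package main

import (
	"log"

	"github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
	"github.com/hyperledger/fabric-sdk-go/pkg/core/config"
	"github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
)

func main() {
	// Placeholders: adjust the config file, channel ID, and user to your network.
	sdk, err := fabsdk.New(config.FromFile("config.yaml"))
	if err != nil {
		log.Fatalf("failed to create SDK: %v", err)
	}
	defer sdk.Close()

	ctx := sdk.ChannelContext("mychannel", fabsdk.WithUser("User1"))
	client, err := channel.New(ctx)
	if err != nil {
		// If this error is dropped, client stays nil and client.Query panics later.
		log.Fatalf("failed to create channel client: %v", err)
	}
	log.Printf("channel client ready: %v", client != nil)
}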
I have deployed Hyperledger Fabric on AWS EC2 instances.
peer0.sales.billerxchange.com (ec2 instance 1)
peer1.sales.billerxchange.com (ec2 instance 2)
peer0.employee.billerxchange.com (ec2 instance 3)
peer1.employee.billerxchange.com (ec2 instance 4)
ca.sales.billerxchange.com (ec2 instance 5)
ca.employee.billerxchange.com (ec2 instance 6)
orderer.billerxchange.com (ec2 instance 7)
All peers, CAs, and the orderer are deployed on separate instances, managed by Docker Swarm on a Docker overlay network.
I'm able to join the channel, but when I try updating the anchor peer in the channel with
peer channel update -c settlement -o orderer.billerxchange.com:7050 -f billerxchangepeer.tx
peer0 gets updated as the anchor peer, but all the other peers in the network crash: only peer0 remains up, while the rest of the peer containers are stopped or exited. I've attached the container logs below; please check.
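(For reference, an anchor peer update transaction like billerxchangepeer.tx is typically generated with configtxgen; the profile and org names below are placeholders for whatever your configtx.yaml defines:)

configtxgen -profile SettlementChannel -outputAnchorPeersUpdate billerxchangepeer.tx -channelID settlement -asOrg SalesMSP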
2018-09-28 20:14:58.041 UTC [gossip/gossip] learnAnchorPeers -> INFO 032 Learning about the configured anchor peers of BillerXchange for channel settlement : [{peer0.sales.billerxchange.com 7051}]
2018-09-28 20:14:58.042 UTC [committer/txvalidator] Validate -> INFO 033 [settlement] Validated block [1] in 5ms
2018-09-28 20:14:58.046 UTC [gossip/state] commitBlock -> ERRO 034 Got error while committing(unexpected Previous block hash. Expected PreviousHash = [20c677285adc8bb6ecaa08d07f4b56038f1eb646a1e2c46c4915716772ac622b], PreviousHash referred in the latest block= [4abc11f19eed89a96d1fd6280512cea1504c1ff202e4f0d0cc5ad1945648343a]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/blockfile_mgr.go:254
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*fsBlockStore).AddBlock
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/fs_blockstore.go:43
github.com/hyperledger/fabric/core/ledger/ledgerstorage.(*Store).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgerstorage/store.go:124
github.com/hyperledger/fabric/core/ledger/kvledger.(*kvLedger).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger.go:271
github.com/hyperledger/fabric/core/ledger/ledgermgmt.(*closableLedger).CommitWithPvtData
<autogenerated>:1
github.com/hyperledger/fabric/core/committer.(*LedgerCommitter).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/committer/committer_impl.go:93
github.com/hyperledger/fabric/gossip/privdata.(*coordinator).StoreBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/privdata/coordinator.go:229
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:771
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
commit failed
github.com/hyperledger/fabric/gossip/privdata.(*coordinator).StoreBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/privdata/coordinator.go:231
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:771
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:772
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361)
2018-09-28 20:14:58.046 UTC [gossip/state] deliverPayloads -> PANI 035 Cannot commit block to the ledger due to unexpected Previous block hash. Expected PreviousHash = [20c677285adc8bb6ecaa08d07f4b56038f1eb646a1e2c46c4915716772ac622b], PreviousHash referred in the latest block= [4abc11f19eed89a96d1fd6280512cea1504c1ff202e4f0d0cc5ad1945648343a]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/blockfile_mgr.go:254
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*fsBlockStore).AddBlock
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/fs_blockstore.go:43
github.com/hyperledger/fabric/core/ledger/ledgerstorage.(*Store).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgerstorage/store.go:124
github.com/hyperledger/fabric/core/ledger/kvledger.(*kvLedger).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger.go:271
github.com/hyperledger/fabric/core/ledger/ledgermgmt.(*closableLedger).CommitWithPvtData
<autogenerated>:1
github.com/hyperledger/fabric/core/committer.(*LedgerCommitter).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/committer/committer_impl.go:93
github.com/hyperledger/fabric/gossip/privdata.(*coordinator).StoreBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/privdata/coordinator.go:229
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:771
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
commit failed
github.com/hyperledger/fabric/gossip/privdata.(*coordinator).StoreBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/privdata/coordinator.go:231
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:771
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:563
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
panic: Cannot commit block to the ledger due to unexpected Previous block hash. Expected PreviousHash = [20c677285adc8bb6ecaa08d07f4b56038f1eb646a1e2c46c4915716772ac622b], PreviousHash referred in the latest block= [4abc11f19eed89a96d1fd6280512cea1504c1ff202e4f0d0cc5ad1945648343a]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/blockfile_mgr.go:254
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*fsBlockStore).AddBlock
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/fs_blockstore.go:43
github.com/hyperledger/fabric/core/ledger/ledgerstorage.(*Store).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgerstorage/store.go:124
github.com/hyperledger/fabric/core/ledger/kvledger.(*kvLedger).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger.go:271
github.com/hyperledger/fabric/core/ledger/ledgermgmt.(*closableLedger).CommitWithPvtData
<autogenerated>:1
github.com/hyperledger/fabric/core/committer.(*LedgerCommitter).CommitWithPvtData
/opt/gopath/src/github.com/hyperledger/fabric/core/committer/committer_impl.go:93
github.com/hyperledger/fabric/gossip/privdata.(*coordinator).StoreBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/privdata/coordinator.go:229
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:771
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
commit failed
github.com/hyperledger/fabric/gossip/privdata.(*coordinator).StoreBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/privdata/coordinator.go:231
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).commitBlock
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:771
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:558
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:563
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
goroutine 696 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc4202d00b0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x4f4
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc42000e4d8, 0x4, 0x1113a7d, 0x2c, 0xc422400dd0, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc42000e4d8, 0x1113a7d, 0x2c, 0xc422400dd0, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc42000e4e0, 0x1113a7d, 0x2c, 0xc422400dd0, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/gossip/state.(*GossipStateProviderImpl).deliverPayloads(0xc420173980)
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:563 +0x4af
created by github.com/hyperledger/fabric/gossip/state.NewGossipStateProvider
/opt/gopath/src/github.com/hyperledger/fabric/gossip/state/state.go:239 +0x699
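The logs show a forked ledger: the block delivered for height 1 does not chain onto the block the crashed peers already committed, so the gossip state provider panics deliberately. On a disposable test network, one way out is to wipe each crashed peer's ledger data and rejoin the channel from the orderer's genesis block. A destructive sketch, assuming the peer CLI environment variables point at the affected peer and its data can be discarded:

# Stop the crashed peer, clear the volume backing /var/hyperledger/production,
# restart the container, then rejoin the channel from block 0:
peer channel fetch 0 settlement.block -c settlement -o orderer.billerxchange.com:7050
peer channel join -b settlement.block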
I am encountering a very weird situation with VirtualBox, minikube, and Node.
Here is the situation: I am running minikube with the VirtualBox driver, and I am calling minikube ip from inside Node using child_process.exec.
const { exec } = require('child_process');

exec('minikube ip', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
});

When I do, I get this output:
stderr: exec error: Error: Command failed: minikube ip
Found binary path at /usr/local/bin/minikube
Launching plugin server for driver virtualbox
Plugin server listening at address 127.0.0.1:59431
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetIP
(minikube) DBG | COMMAND: /usr/local/bin/VBoxManage showvminfo minikube --machinereadable
(minikube) DBG | STDOUT:
(minikube) DBG | {
(minikube) DBG | }
(minikube) DBG | STDERR:
(minikube) DBG | {
(minikube) DBG | VBoxManage: error: Could not find a registered machine named 'minikube'
(minikube) DBG | VBoxManage: error: Details: code VBOX_E_OBJECT_NOT_FOUND (0x80bb0001), component VirtualBoxWrap, interface IVirtualBox, callee nsISupports
(minikube) DBG | VBoxManage: error: Context: "FindMachine(Bstr(VMNameOrUuid).raw(), machine.asOutParam())" at line 2780 of file VBoxManageInfo.cpp
(minikube) DBG | }
E0301 08:44:20.777956 59415 ip.go:44] Error getting IP: machine does not exist
However, if I run minikube ip from the terminal, it works fine:
Found binary path at /usr/local/bin/minikube
Launching plugin server for driver virtualbox
Plugin server listening at address 127.0.0.1:59573
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetIP
(minikube) DBG | COMMAND: /usr/local/bin/VBoxManage showvminfo minikube --machinereadable
(minikube) DBG | STDOUT:
(minikube) DBG | {
(minikube) DBG | name="minikube"
(minikube) DBG | groups="/"
(minikube) DBG | ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
(minikube) DBG | UUID="1b33cdcc-277b-41bb-95d6-6265b049e201"
(minikube) DBG | CfgFile="/Users/namedev/.minikube/machines/minikube/minikube/minikube.vbox"
(minikube) DBG | SnapFldr="/Users/namedev/.minikube/machines/minikube/minikube/Snapshots"
(minikube) DBG | LogFldr="/Users/namedev/.minikube/machines/minikube/minikube/Logs"
(minikube) DBG | hardwareuuid="1b33cdcc-277b-41bb-95d6-6265b049e201"
(minikube) DBG | memory=8192
(minikube) DBG | pagefusion="off"
(minikube) DBG | vram=8
(minikube) DBG | cpuexecutioncap=100
(minikube) DBG | hpet="on"
(minikube) DBG | chipset="piix3"
(minikube) DBG | firmware="BIOS"
(minikube) DBG | cpus=2
(minikube) DBG | pae="on"
(minikube) DBG | longmode="on"
(minikube) DBG | triplefaultreset="off"
(minikube) DBG | apic="on"
(minikube) DBG | x2apic="off"
(minikube) DBG | cpuid-portability-level=0
(minikube) DBG | bootmenu="disabled"
(minikube) DBG | boot1="dvd"
(minikube) DBG | boot2="dvd"
(minikube) DBG | boot3="disk"
(minikube) DBG | boot4="none"
(minikube) DBG | acpi="on"
(minikube) DBG | ioapic="on"
(minikube) DBG | biosapic="apic"
(minikube) DBG | biossystemtimeoffset=0
(minikube) DBG | rtcuseutc="on"
(minikube) DBG | hwvirtex="on"
(minikube) DBG | nestedpaging="on"
(minikube) DBG | largepages="on"
(minikube) DBG | vtxvpid="on"
(minikube) DBG | vtxux="on"
(minikube) DBG | paravirtprovider="default"
(minikube) DBG | effparavirtprovider="kvm"
(minikube) DBG | VMState="running"
(minikube) DBG | VMStateChangeTime="2017-03-01T00:18:25.490000000"
(minikube) DBG | monitorcount=1
(minikube) DBG | accelerate3d="off"
(minikube) DBG | accelerate2dvideo="off"
(minikube) DBG | teleporterenabled="off"
(minikube) DBG | teleporterport=0
(minikube) DBG | teleporteraddress=""
(minikube) DBG | teleporterpassword=""
(minikube) DBG | tracing-enabled="off"
(minikube) DBG | tracing-allow-vm-access="off"
(minikube) DBG | tracing-config=""
(minikube) DBG | autostart-enabled="off"
(minikube) DBG | autostart-delay=0
(minikube) DBG | defaultfrontend=""
(minikube) DBG | storagecontrollername0="SATA"
(minikube) DBG | storagecontrollertype0="IntelAhci"
(minikube) DBG | storagecontrollerinstance0="0"
(minikube) DBG | storagecontrollermaxportcount0="30"
(minikube) DBG | storagecontrollerportcount0="30"
(minikube) DBG | storagecontrollerbootable0="on"
(minikube) DBG | "SATA-0-0"="/Users/namedev/.minikube/machines/minikube/boot2docker.iso"
(minikube) DBG | "SATA-ImageUUID-0-0"="977e41e5-d157-4fe3-8215-7d475ad7b32f"
(minikube) DBG | "SATA-tempeject"="off"
(minikube) DBG | "SATA-IsEjected"="off"
(minikube) DBG | "SATA-1-0"="/Users/namedev/.minikube/machines/minikube/disk.vmdk"
(minikube) DBG | "SATA-ImageUUID-1-0"="828a95e8-f149-46ca-bee5-f298ec6a444c"
(minikube) DBG | "SATA-2-0"="none"
(minikube) DBG | "SATA-3-0"="none"
(minikube) DBG | "SATA-4-0"="none"
(minikube) DBG | "SATA-5-0"="none"
(minikube) DBG | "SATA-6-0"="none"
(minikube) DBG | "SATA-7-0"="none"
(minikube) DBG | "SATA-8-0"="none"
(minikube) DBG | "SATA-9-0"="none"
(minikube) DBG | "SATA-10-0"="none"
(minikube) DBG | "SATA-11-0"="none"
(minikube) DBG | "SATA-12-0"="none"
(minikube) DBG | "SATA-13-0"="none"
(minikube) DBG | "SATA-14-0"="none"
(minikube) DBG | "SATA-15-0"="none"
(minikube) DBG | "SATA-16-0"="none"
(minikube) DBG | "SATA-17-0"="none"
(minikube) DBG | "SATA-18-0"="none"
(minikube) DBG | "SATA-19-0"="none"
(minikube) DBG | "SATA-20-0"="none"
(minikube) DBG | "SATA-21-0"="none"
(minikube) DBG | "SATA-22-0"="none"
(minikube) DBG | "SATA-23-0"="none"
(minikube) DBG | "SATA-24-0"="none"
(minikube) DBG | "SATA-25-0"="none"
(minikube) DBG | "SATA-26-0"="none"
(minikube) DBG | "SATA-27-0"="none"
(minikube) DBG | "SATA-28-0"="none"
(minikube) DBG | "SATA-29-0"="none"
(minikube) DBG | natnet1="nat"
(minikube) DBG | macaddress1="0800276B97F7"
(minikube) DBG | cableconnected1="on"
(minikube) DBG | nic1="nat"
(minikube) DBG | nictype1="82540EM"
(minikube) DBG | nicspeed1="0"
(minikube) DBG | mtu="0"
(minikube) DBG | sockSnd="64"
(minikube) DBG | sockRcv="64"
(minikube) DBG | tcpWndSnd="64"
(minikube) DBG | tcpWndRcv="64"
(minikube) DBG | Forwarding(0)="ssh,tcp,127.0.0.1,53964,,22"
(minikube) DBG | hostonlyadapter2="vboxnet1"
(minikube) DBG | macaddress2="08002721BF68"
(minikube) DBG | cableconnected2="on"
(minikube) DBG | nic2="hostonly"
(minikube) DBG | nictype2="82540EM"
(minikube) DBG | nicspeed2="0"
(minikube) DBG | nic3="none"
(minikube) DBG | nic4="none"
(minikube) DBG | nic5="none"
(minikube) DBG | nic6="none"
(minikube) DBG | nic7="none"
(minikube) DBG | nic8="none"
(minikube) DBG | hidpointing="ps2mouse"
(minikube) DBG | hidkeyboard="ps2kbd"
(minikube) DBG | uart1="off"
(minikube) DBG | uart2="off"
(minikube) DBG | uart3="off"
(minikube) DBG | uart4="off"
(minikube) DBG | lpt1="off"
(minikube) DBG | lpt2="off"
(minikube) DBG | audio="coreaudio"
(minikube) DBG | clipboard="disabled"
(minikube) DBG | draganddrop="disabled"
(minikube) DBG | SessionName="headless"
(minikube) DBG | VideoMode="720,400,0"@0,0 1
(minikube) DBG | vrde="off"
(minikube) DBG | usb="off"
(minikube) DBG | ehci="off"
(minikube) DBG | xhci="off"
(minikube) DBG | SharedFolderNameMachineMapping1="Users"
(minikube) DBG | SharedFolderPathMachineMapping1="/Users"
(minikube) DBG | VRDEActiveConnection="off"
(minikube) DBG | VRDEClients=0
(minikube) DBG | vcpenabled="off"
(minikube) DBG | vcpscreens=0
(minikube) DBG | vcpfile="/Users/namedev/.minikube/machines/minikube/minikube/minikube.webm"
(minikube) DBG | vcpwidth=1024
(minikube) DBG | vcpheight=768
(minikube) DBG | vcprate=512
(minikube) DBG | vcpfps=25
(minikube) DBG | GuestMemoryBalloon=0
(minikube) DBG | GuestOSType="Linux26_64"
(minikube) DBG | GuestAdditionsRunLevel=2
(minikube) DBG | GuestAdditionsVersion="5.1.6 r110634"
(minikube) DBG | GuestAdditionsFacility_VirtualBox Base Driver=50,1488327525999
(minikube) DBG | GuestAdditionsFacility_VirtualBox System Service=50,1488327526490
(minikube) DBG | GuestAdditionsFacility_Seamless Mode=0,1488327525997
(minikube) DBG | GuestAdditionsFacility_Graphics Mode=0,1488327525997
(minikube) DBG | }
(minikube) DBG | STDERR:
(minikube) DBG | {
(minikube) DBG | }
(minikube) DBG | COMMAND: /usr/local/bin/VBoxManage showvminfo minikube --machinereadable
(minikube) DBG | STDOUT:
(minikube) DBG | {
(minikube) DBG | name="minikube"
(minikube) DBG | groups="/"
(minikube) DBG | ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
(minikube) DBG | UUID="1b33cdcc-277b-41bb-95d6-6265b049e201"
(minikube) DBG | CfgFile="/Users/namedev/.minikube/machines/minikube/minikube/minikube.vbox"
(minikube) DBG | SnapFldr="/Users/namedev/.minikube/machines/minikube/minikube/Snapshots"
(minikube) DBG | LogFldr="/Users/namedev/.minikube/machines/minikube/minikube/Logs"
(minikube) DBG | hardwareuuid="1b33cdcc-277b-41bb-95d6-6265b049e201"
(minikube) DBG | memory=8192
(minikube) DBG | pagefusion="off"
(minikube) DBG | vram=8
(minikube) DBG | cpuexecutioncap=100
(minikube) DBG | hpet="on"
(minikube) DBG | chipset="piix3"
(minikube) DBG | firmware="BIOS"
(minikube) DBG | cpus=2
(minikube) DBG | pae="on"
(minikube) DBG | longmode="on"
(minikube) DBG | triplefaultreset="off"
(minikube) DBG | apic="on"
(minikube) DBG | x2apic="off"
(minikube) DBG | cpuid-portability-level=0
(minikube) DBG | bootmenu="disabled"
(minikube) DBG | boot1="dvd"
(minikube) DBG | boot2="dvd"
(minikube) DBG | boot3="disk"
(minikube) DBG | boot4="none"
(minikube) DBG | acpi="on"
(minikube) DBG | ioapic="on"
(minikube) DBG | biosapic="apic"
(minikube) DBG | biossystemtimeoffset=0
(minikube) DBG | rtcuseutc="on"
(minikube) DBG | hwvirtex="on"
(minikube) DBG | nestedpaging="on"
(minikube) DBG | largepages="on"
(minikube) DBG | vtxvpid="on"
(minikube) DBG | vtxux="on"
(minikube) DBG | paravirtprovider="default"
(minikube) DBG | effparavirtprovider="kvm"
(minikube) DBG | VMState="running"
(minikube) DBG | VMStateChangeTime="2017-03-01T00:18:25.490000000"
(minikube) DBG | monitorcount=1
(minikube) DBG | accelerate3d="off"
(minikube) DBG | accelerate2dvideo="off"
(minikube) DBG | teleporterenabled="off"
(minikube) DBG | teleporterport=0
(minikube) DBG | teleporteraddress=""
(minikube) DBG | teleporterpassword=""
(minikube) DBG | tracing-enabled="off"
(minikube) DBG | tracing-allow-vm-access="off"
(minikube) DBG | tracing-config=""
(minikube) DBG | autostart-enabled="off"
(minikube) DBG | autostart-delay=0
(minikube) DBG | defaultfrontend=""
(minikube) DBG | storagecontrollername0="SATA"
(minikube) DBG | storagecontrollertype0="IntelAhci"
(minikube) DBG | storagecontrollerinstance0="0"
(minikube) DBG | storagecontrollermaxportcount0="30"
(minikube) DBG | storagecontrollerportcount0="30"
(minikube) DBG | storagecontrollerbootable0="on"
(minikube) DBG | "SATA-0-0"="/Users/namedev/.minikube/machines/minikube/boot2docker.iso"
(minikube) DBG | "SATA-ImageUUID-0-0"="977e41e5-d157-4fe3-8215-7d475ad7b32f"
(minikube) DBG | "SATA-tempeject"="off"
(minikube) DBG | "SATA-IsEjected"="off"
(minikube) DBG | "SATA-1-0"="/Users/namedev/.minikube/machines/minikube/disk.vmdk"
(minikube) DBG | "SATA-ImageUUID-1-0"="828a95e8-f149-46ca-bee5-f298ec6a444c"
(minikube) DBG | "SATA-2-0"="none"
(minikube) DBG | "SATA-3-0"="none"
(minikube) DBG | "SATA-4-0"="none"
(minikube) DBG | "SATA-5-0"="none"
(minikube) DBG | "SATA-6-0"="none"
(minikube) DBG | "SATA-7-0"="none"
(minikube) DBG | "SATA-8-0"="none"
(minikube) DBG | "SATA-9-0"="none"
(minikube) DBG | "SATA-10-0"="none"
(minikube) DBG | "SATA-11-0"="none"
(minikube) DBG | "SATA-12-0"="none"
(minikube) DBG | "SATA-13-0"="none"
(minikube) DBG | "SATA-14-0"="none"
(minikube) DBG | "SATA-15-0"="none"
(minikube) DBG | "SATA-16-0"="none"
(minikube) DBG | "SATA-17-0"="none"
(minikube) DBG | "SATA-18-0"="none"
(minikube) DBG | "SATA-19-0"="none"
(minikube) DBG | "SATA-20-0"="none"
(minikube) DBG | "SATA-21-0"="none"
(minikube) DBG | "SATA-22-0"="none"
(minikube) DBG | "SATA-23-0"="none"
(minikube) DBG | "SATA-24-0"="none"
(minikube) DBG | "SATA-25-0"="none"
(minikube) DBG | "SATA-26-0"="none"
(minikube) DBG | "SATA-27-0"="none"
(minikube) DBG | "SATA-28-0"="none"
(minikube) DBG | "SATA-29-0"="none"
(minikube) DBG | natnet1="nat"
(minikube) DBG | macaddress1="0800276B97F7"
(minikube) DBG | cableconnected1="on"
(minikube) DBG | nic1="nat"
(minikube) DBG | nictype1="82540EM"
(minikube) DBG | nicspeed1="0"
(minikube) DBG | mtu="0"
(minikube) DBG | sockSnd="64"
(minikube) DBG | sockRcv="64"
(minikube) DBG | tcpWndSnd="64"
(minikube) DBG | tcpWndRcv="64"
(minikube) DBG | Forwarding(0)="ssh,tcp,127.0.0.1,53964,,22"
(minikube) DBG | hostonlyadapter2="vboxnet1"
(minikube) DBG | macaddress2="08002721BF68"
(minikube) DBG | cableconnected2="on"
(minikube) DBG | nic2="hostonly"
(minikube) DBG | nictype2="82540EM"
(minikube) DBG | nicspeed2="0"
(minikube) DBG | nic3="none"
(minikube) DBG | nic4="none"
(minikube) DBG | nic5="none"
(minikube) DBG | nic6="none"
(minikube) DBG | nic7="none"
(minikube) DBG | nic8="none"
(minikube) DBG | hidpointing="ps2mouse"
(minikube) DBG | hidkeyboard="ps2kbd"
(minikube) DBG | uart1="off"
(minikube) DBG | uart2="off"
(minikube) DBG | uart3="off"
(minikube) DBG | uart4="off"
(minikube) DBG | lpt1="off"
(minikube) DBG | lpt2="off"
(minikube) DBG | audio="coreaudio"
(minikube) DBG | clipboard="disabled"
(minikube) DBG | draganddrop="disabled"
(minikube) DBG | SessionName="headless"
(minikube) DBG | VideoMode="720,400,0"@0,0 1
(minikube) DBG | vrde="off"
(minikube) DBG | usb="off"
(minikube) DBG | ehci="off"
(minikube) DBG | xhci="off"
(minikube) DBG | SharedFolderNameMachineMapping1="Users"
(minikube) DBG | SharedFolderPathMachineMapping1="/Users"
(minikube) DBG | VRDEActiveConnection="off"
(minikube) DBG | VRDEClients=0
(minikube) DBG | vcpenabled="off"
(minikube) DBG | vcpscreens=0
(minikube) DBG | vcpfile="/Users/namedev/.minikube/machines/minikube/minikube/minikube.webm"
(minikube) DBG | vcpwidth=1024
(minikube) DBG | vcpheight=768
(minikube) DBG | vcprate=512
(minikube) DBG | vcpfps=25
(minikube) DBG | GuestMemoryBalloon=0
(minikube) DBG | GuestOSType="Linux26_64"
(minikube) DBG | GuestAdditionsRunLevel=2
(minikube) DBG | GuestAdditionsVersion="5.1.6 r110634"
(minikube) DBG | GuestAdditionsFacility_VirtualBox Base Driver=50,1488327525999
(minikube) DBG | GuestAdditionsFacility_VirtualBox System Service=50,1488327526490
(minikube) DBG | GuestAdditionsFacility_Seamless Mode=0,1488327525997
(minikube) DBG | GuestAdditionsFacility_Graphics Mode=0,1488327525997
(minikube) DBG | }
(minikube) DBG | STDERR:
(minikube) DBG | {
(minikube) DBG | }
(minikube) DBG | Host-only MAC: 08002721bf68
(minikube) DBG |
(minikube) DBG | Using SSH client type: external
(minikube) DBG | Using SSH private key: /Users/namedev/.minikube/machines/minikube/id_rsa (-rw-------)
(minikube) DBG | &{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@127.0.0.1 -o IdentitiesOnly=yes -i /Users/namedev/.minikube/machines/minikube/id_rsa -p 53964] /usr/bin/ssh <nil>}
(minikube) DBG | About to run SSH command:
(minikube) DBG | ip addr show
(minikube) DBG | SSH cmd err, output: <nil>: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
(minikube) DBG | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
(minikube) DBG | inet 127.0.0.1/8 scope host lo
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | inet6 ::1/128 scope host
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
(minikube) DBG | link/ether 08:00:27:6b:97:f7 brd ff:ff:ff:ff:ff:ff
(minikube) DBG | inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
(minikube) DBG | valid_lft 82347sec preferred_lft 82347sec
(minikube) DBG | inet6 fe80::a00:27ff:fe6b:97f7/64 scope link
(minikube) DBG | valid_lft forever preferred_lft forever
192.168.99.102
Making call to close driver server
(minikube) Calling .Close
(minikube) DBG | 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
(minikube) DBG | link/ether 08:00:27:21:bf:68 brd ff:ff:ff:ff:ff:ff
(minikube) DBG | inet 192.168.99.102/24 brd 192.168.99.255 scope global dynamic eth1
(minikube) DBG | valid_lft 1194sec preferred_lft 1194sec
(minikube) DBG | inet6 fe80::a00:27ff:fe21:bf68/64 scope link
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | 4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
(minikube) DBG | link/sit 0.0.0.0 brd 0.0.0.0
(minikube) DBG | 6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
(minikube) DBG | link/ether 02:42:f6:89:ce:77 brd ff:ff:ff:ff:ff:ff
(minikube) DBG | inet 172.17.0.1/16 scope global docker0
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | inet6 fe80::42:f6ff:fe89:ce77/64 scope link
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | 8: vetha3e3212@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
(minikube) DBG | link/ether a2:f2:25:6e:36:4d brd ff:ff:ff:ff:ff:ff link-netnsid 0
(minikube) DBG | inet6 fe80::a0f2:25ff:fe6e:364d/64 scope link
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | 10: veth5a5eea5@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
(minikube) DBG | link/ether 26:6e:b4:68:46:80 brd ff:ff:ff:ff:ff:ff link-netnsid 1
(minikube) DBG | inet6 fe80::246e:b4ff:fe68:4680/64 scope link
(minikube) DBG | valid_lft forever preferred_lft forever
(minikube) DBG | 12: veth784d436@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
(minikube) DBG | link/ether ca:46:10:26:92:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 2
(minikube) DBG | inet6 fe80::c846:10ff:fe26:92a7/64 scope link
(minikube) DBG | valid_lft forever preferred_lft forever
Successfully made call to close driver server
Making call to close connection to plugin binary
(minikube) DBG | 14: veth48a4207@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
I have confirmed that the port number on this line:
Plugin server listening at address 127.0.0.1:59573
changes every time minikube ip is called.
If I switch to the xhyve driver it all works great.
I am running:
macOS 10.12.3
virtualbox 5.1.4 r110228
node 7.5.0
I found the answer. Apparently VBoxManage is picky about which user invokes it. I was able to get it to work with sudo -u USERNAME minikube ip:
exec('sudo -u USERNAME minikube ip', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
});
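This fits with VirtualBox keeping its machine registry per user (under ~/Library/VirtualBox on macOS), so a child process running under a different user cannot see the minikube VM. A quick way to confirm which user's registry contains the machine (USERNAME as above):

# List the VMs registered for that user
sudo -u USERNAME VBoxManage list vms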