I'm running Chef 12 on Ubuntu 14.1, using self-signed certs to set up the server. When I try to run knife commands from my client, they fail with the error below. Every operation fails the same way, and the Chef server logs show no errors or other information during the request.
knife config
[root@ip-10-233-2-40 ~]# cat ~/.chef/knife.rb
log_level :debug
log_location STDOUT
node_name 'admin'
client_key '/root/.chef/admin.pem'
validation_client_name 'dev'
validation_key '/root/.chef/dev-validator.pem'
chef_server_url 'https://chef.example.com/organizations/dev'
syntax_check_cache_path '/root/.chef/syntax_check_cache'
root@ip-10-233-2-177:~/ssl-certs# chef-server-ctl status
run: bookshelf: (pid 1092) 1998s; run: log: (pid 1064) 1998s
run: nginx: (pid 6140) 723s; run: log: (pid 1063) 1998s
run: oc_bifrost: (pid 1077) 1998s; run: log: (pid 1058) 1998s
run: oc_id: (pid 1091) 1998s; run: log: (pid 1061) 1998s
run: opscode-erchef: (pid 1090) 1998s; run: log: (pid 1066) 1998s
run: opscode-expander: (pid 1076) 1998s; run: log: (pid 1060) 1998s
run: opscode-expander-reindexer: (pid 1096) 1998s; run: log: (pid 1059) 1998s
run: opscode-solr4: (pid 1075) 1998s; run: log: (pid 1057) 1998s
run: postgresql: (pid 1085) 1998s; run: log: (pid 1056) 1998s
run: rabbitmq: (pid 1062) 1998s; run: log: (pid 1046) 1998s
run: redis_lb: (pid 6124) 723s; run: log: (pid 1065) 1998s
[root@ip-10-233-2-40 ~]# knife environment create staging
ERROR: The object you are looking for could not be found
/opt/chef/embedded/lib/ruby/2.1.0/net/http/response.rb:325:in `stream_check': undefined method `closed?' for nil:NilClass (NoMethodError)
from /opt/chef/embedded/lib/ruby/2.1.0/net/http/response.rb:199:in `read_body'
from /opt/chef/embedded/lib/ruby/2.1.0/net/http/response.rb:226:in `body'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:499:in `rescue in format_rest_error'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:497:in `format_rest_error'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:459:in `humanize_http_exception'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:418:in `humanize_exception'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:409:in `rescue in run_with_pretty_exceptions'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:400:in `run_with_pretty_exceptions'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:203:in `run'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/application/knife.rb:142:in `run'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/bin/knife:25:in `<top (required)>'
from /usr/bin/knife:54:in `load'
from /usr/bin/knife:54:in `<main>'
Update
[root@ip-10-233-2-40 ~]# knife client list -VV
INFO: Using configuration from /root/.chef/knife.rb
DEBUG: Chef::HTTP calling Chef::HTTP::JSONInput#handle_request
DEBUG: Chef::HTTP calling Chef::HTTP::JSONOutput#handle_request
DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_request
DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_request
DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_request
DEBUG: Signing the request as admin
DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_request
DEBUG: Using 10.233.0.182:3128 for proxy
DEBUG: Initiating GET to https://chef.example.com/organizations/dev/clients
DEBUG: ---- HTTP Request Header Data: ----
DEBUG: Accept: application/json
DEBUG: Accept-Encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3
DEBUG: X-OPS-SIGN: algorithm=sha1;version=1.0;
DEBUG: X-OPS-USERID: admin
DEBUG: X-OPS-TIMESTAMP: 2015-10-21T17:40:17Z
DEBUG: X-OPS-CONTENT-HASH: 2jmj7l5rSw0yVb/vlWAYkK/YBwk=
DEBUG: X-OPS-AUTHORIZATION-1: m/vlWcZBPE7XUN7qhX6t/T9hXTT+2x/JehpOYq6My1ffEID6n+U+Xc+lHWto
DEBUG: X-OPS-AUTHORIZATION-2: Lq4ZEfNT1ltZkkYZ9Ii8EoF3eajUQmb2buwKMWae3yvxrZ5rgllJPf5q4gy3
DEBUG: X-OPS-AUTHORIZATION-3: IEqUUst+KzmoRHCiC1LeYxKXy+oeo45F4Vw4xHlOWgS0piqXfrmXnkrxs8Um
DEBUG: X-OPS-AUTHORIZATION-4: ZDqdLvcQ10WjoW9Wz4F2+fRh/BdRHjwMF80LVPwrtylf+GbdIhmCU3xxVvOq
DEBUG: X-OPS-AUTHORIZATION-5: w1Z2p03UcpRfMZy1pQV59A0Y3yv57Db5n3PJdjD9TlitNK++/HXcqO3IfO2U
DEBUG: X-OPS-AUTHORIZATION-6: 0QbZYZaeGSkJw0ArQDeffnjbpzAhSXhUfbs+in9tRg==
DEBUG: HOST: chef.example.com:443
DEBUG: X-Ops-Server-API-Version: 1
DEBUG: X-REMOTE-REQUEST-ID: 6a00a52a-7eeb-43d6-920d-fffc685c1b2a
DEBUG: ---- End HTTP Request Header Data ----
/opt/chef/embedded/lib/ruby/2.1.0/net/http/response.rb:119:in `error!': 404 "Not Found" (Net::HTTPServerException)
from /opt/chef/embedded/lib/ruby/2.1.0/net/http/response.rb:128:in `value'
from /opt/chef/embedded/lib/ruby/2.1.0/net/http.rb:915:in `connect'
from /opt/chef/embedded/lib/ruby/2.1.0/net/http.rb:863:in `do_start'
from /opt/chef/embedded/lib/ruby/2.1.0/net/http.rb:852:in `start'
from /opt/chef/embedded/lib/ruby/2.1.0/net/http.rb:1375:in `request'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http/basic_client.rb:65:in `request'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:266:in `block in send_http_request'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:298:in `block in retrying_http_errors'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:296:in `loop'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:296:in `retrying_http_errors'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:260:in `send_http_request'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:143:in `request'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/http.rb:110:in `get'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/api_client_v1.rb:198:in `list'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife/client_list.rb:38:in `run'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:405:in `block in run_with_pretty_exceptions'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/local_mode.rb:44:in `with_server_connectivity'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:404:in `run_with_pretty_exceptions'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/knife.rb:203:in `run'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/lib/chef/application/knife.rb:142:in `run'
from /opt/chef/embedded/lib/ruby/gems/2.1.0/gems/chef-12.5.1/bin/knife:25:in `<top (required)>'
from /usr/bin/knife:54:in `load'
from /usr/bin/knife:54:in `<main>'
[root@ip-10-233-2-40 ~]# telnet chef.example.com 443
Trying 10.233.2.177...
Connected to chef.example.com.
Escape character is '^]'.
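Given that TCP connectivity to chef.example.com:443 works but the API returns 404 for /organizations/dev, a hedged first check is whether the 'dev' organization actually exists; these are standard chef-server-ctl subcommands, run on the Chef server itself:
chef-server-ctl org-list          # list all organizations known to this server
chef-server-ctl org-show dev      # show the 'dev' org; errors if it does not exist
chef-server-ctl user-list         # confirm the 'admin' user exists as well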
Related
I've been trying to connect to a k8s cluster running in Azure from my Mac laptop, but unfortunately I can't retrieve any information.
user@MyMac ~ % k get nodes
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1
When I increase the log verbosity, I get this:
user@MyMac ~ % kubectl get deployments --all-namespaces=true -v 8
I0924 10:32:14.451255 28517 loader.go:372] Config loaded from file: /Users/user/.kube/config
I0924 10:32:14.461468 28517 round_trippers.go:432] GET https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s
I0924 10:32:14.461484 28517 round_trippers.go:438] Request Headers:
I0924 10:32:14.461490 28517 round_trippers.go:442] Accept: application/json, */*
I0924 10:32:14.461495 28517 round_trippers.go:442] User-Agent: kubectl/v1.22.5 (darwin/amd64) kubernetes/5c99e2a
error: unknown flag: --environment
I0924 10:32:14.555302 28517 round_trippers.go:457] Response Status: in 93 milliseconds
I0924 10:32:14.555318 28517 round_trippers.go:460] Response Headers:
I0924 10:32:14.555828 28517 cached_discovery.go:121] skipped caching discovery info due to Get "https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s": getting credentials: exec:
I0924 10:32:14.569821 28517 shortcut.go:89] Error loading discovery information: Get "https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s": getting credentials: exec: executable kubelogin failed with exit code 1
I0924 10:32:14.570037 28517 round_trippers.go:432] GET https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s
I0924 10:32:14.570050 28517 round_trippers.go:438] Request Headers:
I0924 10:32:14.570068 28517 round_trippers.go:442] Accept: application/json, */*
I0924 10:32:14.570088 28517 round_trippers.go:442] User-Agent: kubectl/v1.22.5 (darwin/amd64) kubernetes/5c99e2a
I0924 10:32:14.618944 28517 round_trippers.go:457] Response Status: in 17 milliseconds
I0924 10:32:14.618976 28517 round_trippers.go:460] Response Headers:
I0924 10:32:14.619147 28517 cached_discovery.go:121] skipped caching discovery info due to Get "https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s": getting credentials: exec: executable kubelogin failed with exit code 1
I0924 10:32:14.619790 28517 helpers.go:235] Connection error: Get https://dev-cluster.privatelink.westeurope.azmk8s.io:443/api?timeout=32s: getting credentials: exec: executable kubelogin failed with exit code 1
F0924 10:32:14.620768 28517 helpers.go:116] Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000cc001, 0xc000258000, 0x97, 0x23d)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3cd80e0, 0xc000000003, 0x0, 0x0, 0xc0004d8150, 0x2, 0x33f6d63, 0xa, 0x74, 0x100e100)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3cd80e0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0004e0db0, 0x1, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00081c3f0, 0x68, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2e6b0e0, 0xc0004e7410, 0x2cebdc8)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:189 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0001ef680, 0xc000820cc0, 0x1, 0x4)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:180 +0x159
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0001ef680, 0xc000820c80, 0x4, 0x4, 0xc0001ef680, 0xc000820c80)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000401180, 0xc0000ce180, 0xc0000ce120, 0x6)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d
goroutine 18 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x3cd80e0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xdf
goroutine 23 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2cebcd0, 0x2e695e0, 0xc0004e6000, 0x1, 0xc00009eb40)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x2cebcd0, 0x12a05f200, 0x0, 0x1, 0xc00009eb40)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x2cebcd0, 0x12a05f200, 0xc00009eb40)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96
I updated the az CLI, but nothing changed.
I also removed the .kube/config file, but that didn't work either.
I don't know what went wrong after the macOS update.
This happens because the .kube/config file was rewritten during the upgrade process, so you need to add the credentials again. Run this command to refresh them:
az aks get-credentials --resource-group group --name cluster-name --admin --overwrite-existing
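If kubelogin still exits with code 1 after the credentials are refreshed, a hedged follow-up is to make sure the Azure CLI session is current and convert the kubeconfig to use the Azure CLI login (convert-kubeconfig and its -l flag are standard kubelogin options):
az login                                   # refresh the Azure CLI session
kubelogin convert-kubeconfig -l azurecli   # have kubelogin reuse the az CLI token
kubectl get nodes                          # retry the original command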
Details of the Kubernetes Service Connection:
Authentication method: Azure Subscription
Azure Subscription:
Cluster:
Namespace:
Use cluster admin credentials
I get the following error when running 'pm2 start project.json' in my project.
port: 3000 }
0|serv | Tue, 08 Sep 2020 03:14:18 GMT app LoadSettingFromRedis: loaded
0|serv | { Error: listen EADDRINUSE 127.0.0.1:3000
0|serv | at Server.setupListenHandle [as _listen2] (net.js:1360:14)
0|serv | at listenInCluster (net.js:1401:12)
0|serv | at doListen (net.js:1510:7)
0|serv | at _combinedTickCallback (internal/process/next_tick.js:142:11)
0|serv | at process._tickCallback (internal/process/next_tick.js:181:9)
0|serv | errno: 'EADDRINUSE',
0|serv | code: 'EADDRINUSE',
0|serv | syscall: 'listen',
0|serv | address: '127.0.0.1',
0|serv | port: 3000 }
0|serv | Tue, 08 Sep 2020 03:15:08 GMT app LoadSettingFromRedis: loaded
0|serv | Tue, 08 Sep 2020 03:20:43 GMT app LoadSettingFromRedis: loaded
When I check what process is listening on port 3000, I get node. I kill this process, but that still doesn't solve the issue. Does anyone know what the problem is here?
It means your port is already in use. Try killing whatever is on that port with the following command:
sudo kill -9 $(sudo lsof -t -i:3000)
If that doesn't work, try the following:
sudo lsof -i tcp:3000   # this will return some PIDs
sudo kill -9 [your pid to remove]
Then run the pm2 start command again.
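If the node process holding port 3000 is itself managed by pm2, killing it with kill -9 won't help because pm2 restarts it immediately. A hedged sketch (the app name 'serv' is assumed from the log prefix above):
pm2 list                   # find the pm2-managed app bound to port 3000
pm2 delete serv            # stop it and remove it from pm2 ('serv' assumed from the logs)
sudo lsof -i tcp:3000      # confirm nothing is listening any more
pm2 start project.json     # start the app again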
Hello, I am trying to create a test bucket on a local Ceph Raspberry Pi cluster, and I get the following error message:
OS: Debian Jessie
Ceph: v12.2.12 Luminous
s3cmd: 2.0.2
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host admin and default port 7480
./s3cmd --debug mb s3://testbucket
Debug Message:
DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://testbucket' using UTF-8
DEBUG: Command: mb
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'PUT\n\n\n\nx-amz-date:Wed, 15 Jan 2020 02:28:25 +0000\n/testbucket/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(testbucket): 192.168.178.50:7480
DEBUG: ConnMan.get(): creating new connection: http://192.168.178.50:7480
DEBUG: non-proxied HTTPConnection(192.168.178.50, 7480)
DEBUG: Response:
DEBUG: Unicodising './s3cmd' using UTF-8
DEBUG: Unicodising '--debug' using UTF-8
DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://testbucket' using UTF-8
Invoked as: ./s3cmd --debug mb s3://testbucket
Problem: error: [Errno 111] Connection refused
S3cmd: 2.0.2
python: 2.7.17 (default, Oct 19 2019, 23:36:22)
[GCC 9.2.1 20190909]
environment LANG=en_GB.UTF-8
Traceback (most recent call last):
File "./s3cmd", line 3092, in <module>
rc = main()
File "./s3cmd", line 3001, in main
rc = cmd_func(args)
File "./s3cmd", line 237, in cmd_bucket_create
response = s3.bucket_create(uri.bucket(), cfg.bucket_location)
File "/home/cephuser/s3cmd-2.0.2/S3/S3.py", line 398, in bucket_create
response = self.send_request(request)
File "/home/cephuser/s3cmd-2.0.2/S3/S3.py", line 1258, in send_request
conn = ConnMan.get(self.get_hostname(resource['bucket']))
File "/home/cephuser/s3cmd-2.0.2/S3/ConnMan.py", line 253, in get
conn.c.connect()
File "/usr/lib/python2.7/httplib.py", line 831, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 575, in create_connection
raise err
error: [Errno 111] Connection refused
Does anyone know what is causing this error?
Solution:
The ceph-rgw service was stopping automatically after installation.
Once my gateway was permanently available, I could create a test bucket with s3cmd.
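As a hedged sketch of making the gateway permanently available: enable and start the radosgw systemd unit so it survives reboots. The instance name rgw.admin is an assumption based on the gateway running on host 'admin' above; adjust it to your actual instance:
sudo systemctl enable ceph-radosgw@rgw.admin    # start the gateway on boot
sudo systemctl start ceph-radosgw@rgw.admin     # start it now
sudo systemctl status ceph-radosgw@rgw.admin    # verify it is up and listening on port 7480
./s3cmd mb s3://testbucket                      # retry the bucket creation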
I have this script for measuring how long a query takes to run:
for (var i = 0; i < 10000; i++) {
(function(start) {
Models.User.findOneAsync({
userId: 'ABCD'
}, 'age name location')
.then(function(user) {
logger.debug(Date.now() - start);
})
})(Date.now());
}
As a result, I'm getting an incremented list of results:
2015-05-19T11:09:16.204Z - debug: 4369
2015-05-19T11:09:16.205Z - debug: 4367
2015-05-19T11:09:16.205Z - debug: 4367
2015-05-19T11:09:16.206Z - debug: 4368
2015-05-19T11:09:16.206Z - debug: 4367
2015-05-19T11:09:16.206Z - debug: 4368
2015-05-19T11:09:16.206Z - debug: 4367
2015-05-19T11:09:16.206Z - debug: 4369
2015-05-19T11:09:16.206Z - debug: 4368
2015-05-19T11:09:16.206Z - debug: 4368
2015-05-19T11:09:16.206Z - debug: 4368
2015-05-19T11:09:16.206Z - debug: 4367
2015-05-19T11:09:16.212Z - debug: 4373
2015-05-19T11:09:16.212Z - debug: 4373
2015-05-19T11:09:16.248Z - debug: 4408
2015-05-19T11:09:16.376Z - debug: 4536
2015-05-19T11:09:16.459Z - debug: 4619
2015-05-19T11:09:16.475Z - debug: 4635
2015-05-19T11:09:16.493Z - debug: 4654
2015-05-19T11:09:16.552Z - debug: 4713
2015-05-19T11:09:16.636Z - debug: 4796
2015-05-19T11:09:16.794Z - debug: 4954
2015-05-19T11:09:16.830Z - debug: 4990
2015-05-19T11:09:16.841Z - debug: 5001
2015-05-19T11:09:16.845Z - debug: 5005
2015-05-19T11:09:17.133Z - debug: 5293
2015-05-19T11:09:17.176Z - debug: 5336
2015-05-19T11:09:17.182Z - debug: 5341
2015-05-19T11:09:17.230Z - debug: 5390
2015-05-19T11:09:17.421Z - debug: 5580
2015-05-19T11:09:17.437Z - debug: 5596
2015-05-19T11:09:17.441Z - debug: 5600
2015-05-19T11:09:17.513Z - debug: 5672
2015-05-19T11:09:17.569Z - debug: 5728
2015-05-19T11:09:17.658Z - debug: 5817
2015-05-19T11:09:17.697Z - debug: 5855
2015-05-19T11:09:17.708Z - debug: 5867
2015-05-19T11:09:17.712Z - debug: 5870
2015-05-19T11:09:17.732Z - debug: 5891
2015-05-19T11:09:17.805Z - debug: 5963
2015-05-19T11:09:17.852Z - debug: 6010
2015-05-19T11:09:17.890Z - debug: 6048
2015-05-19T11:09:17.985Z - debug: 6143
2015-05-19T11:09:17.989Z - debug: 6147
2015-05-19T11:09:18.013Z - debug: 6171
2015-05-19T11:09:18.016Z - debug: 6175
2015-05-19T11:09:18.031Z - debug: 6189
2015-05-19T11:09:18.170Z - debug: 6327
2015-05-19T11:09:18.196Z - debug: 6353
2015-05-19T11:09:18.205Z - debug: 6362
2015-05-19T11:09:18.209Z - debug: 6367
2015-05-19T11:09:18.224Z - debug: 6382
2015-05-19T11:09:18.317Z - debug: 6474
2015-05-19T11:09:18.360Z - debug: 6516
2015-05-19T11:09:18.369Z - debug: 6526
2015-05-19T11:09:18.433Z - debug: 6590
2015-05-19T11:09:18.460Z - debug: 6616
2015-05-19T11:09:18.513Z - debug: 6668
2015-05-19T11:09:18.541Z - debug: 6697
2015-05-19T11:09:18.553Z - debug: 6711
2015-05-19T11:09:18.586Z - debug: 6741
2015-05-19T11:09:18.672Z - debug: 6827
2015-05-19T11:09:18.688Z - debug: 6844
2015-05-19T11:09:18.693Z - debug: 6849
2015-05-19T11:09:18.729Z - debug: 6884
2015-05-19T11:09:18.817Z - debug: 6972
2015-05-19T11:09:18.823Z - debug: 6980
2015-05-19T11:09:18.828Z - debug: 6983
2015-05-19T11:09:18.882Z - debug: 7036
2015-05-19T11:09:18.919Z - debug: 7075
2015-05-19T11:09:19.016Z - debug: 7170
2015-05-19T11:09:19.020Z - debug: 7174
2015-05-19T11:09:19.043Z - debug: 7197
2015-05-19T11:09:19.066Z - debug: 7222
2015-05-19T11:09:19.177Z - debug: 7331
2015-05-19T11:09:19.182Z - debug: 7335
2015-05-19T11:09:19.189Z - debug: 7343
2015-05-19T11:09:19.189Z - debug: 7344
2015-05-19T11:09:19.191Z - debug: 7344
2015-05-19T11:09:19.280Z - debug: 7433
2015-05-19T11:09:19.340Z - debug: 7494
2015-05-19T11:09:19.344Z - debug: 7497
2015-05-19T11:09:19.358Z - debug: 7512
2015-05-19T11:09:19.362Z - debug: 7518
2015-05-19T11:09:19.455Z - debug: 7608
2015-05-19T11:09:19.499Z - debug: 7651
2015-05-19T11:09:19.504Z - debug: 7656
2015-05-19T11:09:19.515Z - debug: 7669
2015-05-19T11:09:19.569Z - debug: 7722
2015-05-19T11:09:19.574Z - debug: 7726
2015-05-19T11:09:19.574Z - debug: 7726
2015-05-19T11:09:19.667Z - debug: 7818
2015-05-19T11:09:19.672Z - debug: 7823
2015-05-19T11:09:19.678Z - debug: 7830
2015-05-19T11:09:19.689Z - debug: 7844
2015-05-19T11:09:19.716Z - debug: 7868
2015-05-19T11:09:19.835Z - debug: 7986
2015-05-19T11:09:19.839Z - debug: 7989
2015-05-19T11:09:19.845Z - debug: 7997
2015-05-19T11:09:19.978Z - debug: 8128
2015-05-19T11:09:19.989Z - debug: 8136
2015-05-19T11:09:19.995Z - debug: 8146
2015-05-19T11:09:19.999Z - debug: 8153
2015-05-19T11:09:20.012Z - debug: 8166
2015-05-19T11:09:20.023Z - debug: 8174
2015-05-19T11:09:20.026Z - debug: 8177
2015-05-19T11:09:20.116Z - debug: 8262
2015-05-19T11:09:20.127Z - debug: 8272
2015-05-19T11:09:20.136Z - debug: 8287
2015-05-19T11:09:20.154Z - debug: 8307
2015-05-19T11:09:20.179Z - debug: 8324
2015-05-19T11:09:20.262Z - debug: 8407
2015-05-19T11:09:20.275Z - debug: 8425
2015-05-19T11:09:20.279Z - debug: 8423
2015-05-19T11:09:20.306Z - debug: 8456
2015-05-19T11:09:20.309Z - debug: 8463
2015-05-19T11:09:20.396Z - debug: 8540
2015-05-19T11:09:20.422Z - debug: 8565
2015-05-19T11:09:20.424Z - debug: 8574
2015-05-19T11:09:20.441Z - debug: 8594
2015-05-19T11:09:20.452Z - debug: 8601
2015-05-19T11:09:20.455Z - debug: 8605
2015-05-19T11:09:20.461Z - debug: 8604
2015-05-19T11:09:20.549Z - debug: 8691
2015-05-19T11:09:20.555Z - debug: 8697
2015-05-19T11:09:20.565Z - debug: 8711
2015-05-19T11:09:20.598Z - debug: 8752
2015-05-19T11:09:20.602Z - debug: 8750
2015-05-19T11:09:20.659Z - debug: 8801
2015-05-19T11:09:20.689Z - debug: 8830
2015-05-19T11:09:20.701Z - debug: 8842
2015-05-19T11:09:20.707Z - debug: 8853
2015-05-19T11:09:20.712Z - debug: 8864
2015-05-19T11:09:20.752Z - debug: 8898
2015-05-19T11:09:20.798Z - debug: 8938
2015-05-19T11:09:20.829Z - debug: 8969
2015-05-19T11:09:20.844Z - debug: 8989
2015-05-19T11:09:20.850Z - debug: 8990
2015-05-19T11:09:20.869Z - debug: 9021
2015-05-19T11:09:20.880Z - debug: 9033
2015-05-19T11:09:20.893Z - debug: 9038
2015-05-19T11:09:20.939Z - debug: 9078
2015-05-19T11:09:20.965Z - debug: 9104
2015-05-19T11:09:20.979Z - debug: 9124
2015-05-19T11:09:20.984Z - debug: 9122
2015-05-19T11:09:21.051Z - debug: 9189
2015-05-19T11:09:21.057Z - debug: 9202
2015-05-19T11:09:21.057Z - debug: 9210
2015-05-19T11:09:21.096Z - debug: 9234
2015-05-19T11:09:21.112Z - debug: 9249
2015-05-19T11:09:21.121Z - debug: 9265
2015-05-19T11:09:21.130Z - debug: 9267
2015-05-19T11:09:21.133Z - debug: 9284
2015-05-19T11:09:21.195Z - debug: 9339
2015-05-19T11:09:21.200Z - debug: 9345
2015-05-19T11:09:21.239Z - debug: 9375
2015-05-19T11:09:21.247Z - debug: 9383
2015-05-19T11:09:21.270Z - debug: 9404
2015-05-19T11:09:21.283Z - debug: 9434
2015-05-19T11:09:21.334Z - debug: 9468
2015-05-19T11:09:21.337Z - debug: 9481
2015-05-19T11:09:21.348Z - debug: 9500
2015-05-19T11:09:21.352Z - debug: 9496
2015-05-19T11:09:21.378Z - debug: 9512
2015-05-19T11:09:21.385Z - debug: 9518
2015-05-19T11:09:21.416Z - debug: 9559
2015-05-19T11:09:21.419Z - debug: 9552
2015-05-19T11:09:21.470Z - debug: 9603
2015-05-19T11:09:21.475Z - debug: 9625
2015-05-19T11:09:21.490Z - debug: 9634
2015-05-19T11:09:21.517Z - debug: 9649
2015-05-19T11:09:21.522Z - debug: 9654
2015-05-19T11:09:21.554Z - debug: 9697
2015-05-19T11:09:21.562Z - debug: 9694
2015-05-19T11:09:21.575Z - debug: 9706
2015-05-19T11:09:21.622Z - debug: 9773
2015-05-19T11:09:21.635Z - debug: 9779
2015-05-19T11:09:21.656Z - debug: 9787
2015-05-19T11:09:21.683Z - debug: 9814
2015-05-19T11:09:21.688Z - debug: 9818
2015-05-19T11:09:21.691Z - debug: 9833
2015-05-19T11:09:21.716Z - debug: 9846
2015-05-19T11:09:21.720Z - debug: 9869
2015-05-19T11:09:21.751Z - debug: 9893
2015-05-19T11:09:21.778Z - debug: 9921
2015-05-19T11:09:21.786Z - debug: 9936
2015-05-19T11:09:21.790Z - debug: 9919
2015-05-19T11:09:21.820Z - debug: 9949
2015-05-19T11:09:21.830Z - debug: 9959
2015-05-19T11:09:21.864Z - debug: 10005
2015-05-19T11:09:21.870Z - debug: 9998
2015-05-19T11:09:21.884Z - debug: 10033
2015-05-19T11:09:21.928Z - debug: 10070
2015-05-19T11:09:21.931Z - debug: 10059
2015-05-19T11:09:21.949Z - debug: 10076
2015-05-19T11:09:21.984Z - debug: 10111
2015-05-19T11:09:21.987Z - debug: 10128
2015-05-19T11:09:22.015Z - debug: 10142
2015-05-19T11:09:22.019Z - debug: 10160
2015-05-19T11:09:22.046Z - debug: 10172
2015-05-19T11:09:22.060Z - debug: 10202
2015-05-19T11:09:22.064Z - debug: 10215
2015-05-19T11:09:22.089Z - debug: 10215
2015-05-19T11:09:22.123Z - debug: 10248
2015-05-19T11:09:22.130Z - debug: 10270
2015-05-19T11:09:22.152Z - debug: 10277
2015-05-19T11:09:22.156Z - debug: 10302
2015-05-19T11:09:22.180Z - debug: 10320
2015-05-19T11:09:22.183Z - debug: 10325
2015-05-19T11:09:22.186Z - debug: 10310
2015-05-19T11:09:22.228Z - debug: 10349
2015-05-19T11:09:22.260Z - debug: 10380
OK, so it's not strictly incrementing, but it keeps going up forever...
I would expect a list of roughly the same numbers (the queries should take almost the same time).
Does anyone know why the times keep going up?
It's because you're not measuring just the query times.
The basic flow of this code is:
The for loop of 10000 iterations runs to completion, setting the start time for each iteration to the time each iteration occurred.
For the first 5 of those iterations (or whatever poolsize you're using with your MongoDB connection), their queries start as soon as their findOneAsync call is made.
As queries complete, their findOneAsync callbacks are put on the event queue and their connections are returned to the pool, allowing subsequent iterations' queries to start.
So all iterations' times include the time to complete the rest of the for loop after it, the time spent waiting for a connection in the pool to become available, and the time their findOneAsync callback spent waiting in the event queue.
If you want to get an accurate picture of how long the queries are taking, use MongoDB's profiling support.
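As a hedged sketch of that profiling approach, run from the shell with the legacy mongo client (the database name 'mydb' and the script name 'measure-query.js' are assumptions; substitute your own): enable the profiler, run the Node script, then inspect the slowest recorded operations. The millis field is the server-side execution time, without the pool and event-queue waits described above.
mongo mydb --eval 'db.setProfilingLevel(2)'     # record every operation in system.profile
node measure-query.js                           # run the timing script
mongo mydb --eval 'db.system.profile.find().sort({ millis: -1 }).limit(5).forEach(printjson)'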
I was using the following instructions to install and configure StatsD on a Graphite server:
https://www.digitalocean.com/community/tutorials/how-to-configure-statsd-to-collect-arbitrary-stats-for-graphite-on-ubuntu-14-04
Now that I have a server with StatsD running, I do not see the metrics being logged under /var/log/statsd/statsd.log when I test sending them from the command line. Here is what I see:
29 Oct 02:30:39 - server is up
29 Oct 02:47:49 - reading config file: /etc/statsd/localConfig.js
29 Oct 02:47:49 - server is up
29 Oct 14:16:45 - reading config file: /etc/statsd/localConfig.js
29 Oct 14:16:45 - server is up
29 Oct 15:36:47 - reading config file: /etc/statsd/localConfig.js
29 Oct 15:36:47 - DEBUG: Loading server: ./servers/udp
29 Oct 15:36:47 - server is up
29 Oct 15:36:47 - DEBUG: Loading backend: ./backends/graphite
29 Oct 15:36:47 - DEBUG: numStats: 3
The log stays at the last entry of 'numStats: 3', even though I keep entering different metrics at the command line.
Here is a sample of the metrics I entered:
echo "sample.gauge:14|g" | nc -u -w0 127.0.0.1 8125
echo "sample.gauge:10|g" | nc -u -w0 127.0.0.1 8125
echo "sample.count:1|c" | nc -u -w0 127.0.0.1 8125
echo "sample.set:50|s" | nc -u -w0 127.0.0.1 8125
Of interest, I see this under /var/log/statsd/stderr.log:
events.js:72
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE
at errnoException (net.js:901:11)
at Server._listen2 (net.js:1039:14)
at listen (net.js:1061:10)
at Server.listen (net.js:1135:5)
at /usr/share/statsd/stats.js:383:16
at null.<anonymous> (/usr/share/statsd/lib/config.js:40:5)
at EventEmitter.emit (events.js:95:17)
at /usr/share/statsd/lib/config.js:20:12
at fs.js:268:14
at Object.oncomplete (fs.js:107:15)
Here is what my localConfig.js file looks like:
{
graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, graphite: {
legacyNamespace: false
},
debug: true,
dumpMessages: true
}
Would anybody be able to shed some light on where the problem lies?
Thanks!
There is a management interface available by default on port 8126: https://github.com/etsy/statsd/blob/master/docs/admin_interface.md
You likely have another service listening on that port on the same system.
Try this:
# localConfig.js
{
graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, mgmt_port: 8127
, graphite: {
legacyNamespace: false
},
debug: true,
dumpMessages: true
}
See https://github.com/etsy/statsd/blob/master/exampleConfig.js#L28
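To confirm the EADDRINUSE really comes from the management port rather than the UDP data port, a hedged check of what is already bound to 8126 and 8125:
sudo lsof -i :8126     # TCP admin/management interface
sudo lsof -i :8125     # UDP data port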