telegraf file output failed to write message - fileoutputstream

This is my /etc/telegraf/telegraf.conf file, section outputs.file:
[[outputs.file]]
files = ["stdout", "/home/zeinab/metrics.out"]
data_format = "influx"
But the telegraf logs (which are written to /var/log/syslog) show this error continuously:
Jan 15 12:14:33 ZiZi telegraf[19916]: kernel,host=ZiZi context_switches=9275452836i,boot_time=1515496651i,processes_forked=1203986i,interrupts=1381624861i 1516005867000000000
Jan 15 12:14:33 ZiZi telegraf[19916]: 2018-01-15T08:44:33Z E! Error writing to output [file]: failed to write message: kernel,host=ZiZi context_switches=9275452836i,boot_time=1515496651i,processes_forked=1203986i,interrupts=1381624861i 1516005867000000000
Jan 15 12:14:33 ZiZi telegraf[19916]: , invalid argument
EDIT 1:
The complete uncommented config is:
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
debug = false
quiet = false
logfile = ""
hostname = ""
omit_hostname = false
[[outputs.influxdb]]
urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
database = "telegraf" # required
[[outputs.elasticsearch]]
urls = [ "http://127.0.0.1:9200" ] # required.
timeout = "5s"
index_name = "telegraf-%Y.%m.%d" # required.
manage_template = true
template_name = "telegraf"
overwrite_template = false
[[outputs.file]]
files = ["stdout", "/home/zeinab/metrics.out"]
data_format = "influx"
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
[[inputs.jolokia2_agent]]
urls = ["http://192.168.100.179:8778/jolokia"]
[[inputs.jolokia2_agent.metric]]
name = "heap_memory_usage"
mbean = "java.lang:type=Memory"
paths = ["HeapMemoryUsage"]
[[inputs.jolokia2_agent.metrics]]
name = "send_success"
mbean = "wr-core:type=monitor,name=execution"
paths = ["MessageSendSuccessCount"]
I start telegraf as a service:
service telegraf start
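The metrics.out path is under a user's home directory while telegraf runs as a system service; whether the service user can write there isn't covered above, but it can be checked quickly. The telegraf user name and paths below are assumptions based on the default package setup, not taken from the post:
# can the service user create/write the output file?
sudo -u telegraf touch /home/zeinab/metrics.out
# run telegraf once in the foreground with verbose logging to watch the file write attempt directly
telegraf --debug --config /etc/telegraf/telegraf.conf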

Related

How to prompt for user info for a key in a map type variable in Terraform

I have the following variable in terraform:
rds_config_list = [
{
rds_name = "shiftleft"
rds_identifier = "shiftleft-postgres"
rds_password = <USERINPUT>
rds_snapshot_identifier = "shiftleft"
rds_postgres_instance_class = "db.m6g.large"
rds_postgres_engine_version = "13.3"
rds_postgres_family = "postgres13"
rds_postgres_allocated_storage = 100
rds_postgres_max_allocated_storage = 1000
rds_backup_retention_period = 7
rds_postgres_multi_az = false
rds_postgres_deletion_protection = false
},
{
rds_name = "shiftleft2"
rds_identifier = "shiftleft2-postgres"
rds_password = <USERINPUT>
rds_snapshot_identifier = "shiftleft2"
rds_postgres_instance_class = "db.m6g.large"
rds_postgres_engine_version = "13.3"
rds_postgres_family = "postgres13"
rds_postgres_allocated_storage = 100
rds_postgres_max_allocated_storage = 1000
rds_backup_retention_period = 7
rds_postgres_multi_az = false
rds_postgres_deletion_protection = false
}
]
I want to prompt for user input for each database's password, so they don't all share the same password; this is inside a .tfvars file. Is there any way to do it?
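This isn't from the original question, but one way to get a separate prompt per database is to drop the passwords from the .tfvars file and declare them as sensitive string variables with no default, then merge them back into the list. A rough sketch, assuming rds_config_list is also declared as a variable and Terraform 0.14+ for sensitive; the variable and local names are made up:

variable "rds_password_shiftleft" {
  type      = string
  sensitive = true   # no default, so terraform plan/apply prompts for a value
}

variable "rds_password_shiftleft2" {
  type      = string
  sensitive = true
}

locals {
  rds_passwords = {
    shiftleft  = var.rds_password_shiftleft
    shiftleft2 = var.rds_password_shiftleft2
  }

  # inject the prompted passwords into the entries that came from .tfvars
  rds_config_list = [
    for cfg in var.rds_config_list : merge(cfg, { rds_password = local.rds_passwords[cfg.rds_name] })
  ]
}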

InfluxDB write failure node-influx library

I am writing data to influxdb using the node-influx library.
https://github.com/node-influx/node-influx
It writes about 500,000 records and then I start seeing this error, after which there are no more writes. It looks like a DNS issue, but I am running it inside a Docker container on an Ubuntu 18.04 host.
Error: getaddrinfo EAGAIN influxdb influxdb:8086
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
I have the logging level set to debug but I am not seeing any other errors. Any idea what might be causing this?
Update
tried with a different InfluxDB version
increased the ulimit of the host
used the IP address of the InfluxDB container instead of the service name; no error is thrown, but writes stop silently after some time
tried to call the write API with curl:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'snmp,hostipv4=172.16.102.82,oidname=cpu_idle,site=gotham value=1000 1574751020815819489'
This works and a record is inserted in the DB.
Update2
It seems to be a DNS issue on the Docker network. I am not able to ping the influxdb container from the worker container. The writes are not reaching InfluxDB.
Update3
As a workaround for now, I am forcing a process.exit(1) on catching the error in my worker and using docker-compose's restart: on-failure to restart the service (sketched below). This resumes the writes.
The retention policy is set to 2 days on the DB.
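For reference, the catch-and-exit part of that workaround looks roughly like this; a minimal sketch where the connection options and the sample point are assumptions, not taken from the actual worker code:

const Influx = require('influx');

// 'influxdb' is the docker-compose service name that intermittently fails to resolve
const influx = new Influx.InfluxDB({ host: 'influxdb', database: 'mydb' });

const points = [
  { measurement: 'snmp', tags: { hostipv4: '172.16.102.82' }, fields: { value: 1000 } },
];

influx.writePoints(points).catch((err) => {
  console.error('influx write failed:', err.message);
  // exit so docker-compose (restart: on-failure) restarts the worker and the writes resume
  process.exit(1);
});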
influxdb.conf
reporting-disabled = true
[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = false
logging-enabled = true
[logging]
format = "auto"
level = "debug"
[data]
engine = "tsm1"
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
wal-fsync-delay = "200ms"
index-version = "inmem"
wal-logging-enabled = true
query-log-enabled = true
cache-max-memory-size = "2g"
cache-snapshot-memory-size = "256m"
cache-snapshot-write-cold-duration = "20m"
compact-full-write-cold-duration = "24h"
max-concurrent-compactions = 0
compact-throughput = "48m"
max-points-per-block = 0
max-series-per-database = 1000000
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = true
max-concurrent-write-limit = 0
max-enqueued-write-limit = 0
enqueued-write-timeout = 0
[continuous_queries]
enabled = false
log-enabled = true
run-interval = "10s"

SSD vs HDD as a performance factor in npm configuration: where to put .npm cache and node_modules directories to achieve best performance?

Considering a dual-drive laptop with a 256GB SSD and 1TB HDD, where should one put their .npm cache and node_modules directories to achieve best performance?
Any other advice warmly welcome.
For your information:
My whole /home/username partition is on the HDD.
I use nvm
And here's the whole npm config (as extracted with the npm config ls -l command):
; cli configs
long = true
metrics-registry = "https://registry.npmjs.org/"
scope = ""
user-agent = "npm/6.9.0 node/v10.15.3 linux x64"
; default values
access = null
allow-same-version = false
also = null
always-auth = false
audit = true
audit-level = "low"
auth-type = "legacy"
before = null
bin-links = true
browser = null
ca = null
cache = "/home/username/.npm"
cache-lock-retries = 10
cache-lock-stale = 60000
cache-lock-wait = 10000
cache-max = null
cache-min = 10
cafile = undefined
cert = null
cidr = null
color = true
commit-hooks = true
depth = null
description = true
dev = false
dry-run = false
editor = "vi"
engine-strict = false
fetch-retries = 2
fetch-retry-factor = 10
fetch-retry-maxtimeout = 60000
fetch-retry-mintimeout = 10000
force = false
git = "git"
git-tag-version = true
global = false
global-style = false
globalconfig = "/home/username/.nvm/versions/node/v10.15.3/etc/npmrc"
globalignorefile = "/home/username/.nvm/versions/node/v10.15.3/etc/npmignore"
group = 1001
ham-it-up = false
heading = "npm"
https-proxy = null
if-present = false
ignore-prepublish = false
ignore-scripts = false
init-author-email = ""
init-author-name = ""
init-author-url = ""
init-license = "ISC"
init-module = "/home/username/.npm-init.js"
init-version = "1.0.0"
json = false
key = null
legacy-bundling = false
link = false
local-address = undefined
loglevel = "notice"
logs-max = 10
; long = false (overridden)
maxsockets = 50
message = "%s"
; metrics-registry = null (overridden)
node-options = null
node-version = "10.15.3"
noproxy = null
offline = false
onload-script = null
only = null
optional = true
otp = null
package-lock = true
package-lock-only = false
parseable = false
prefer-offline = false
prefer-online = false
prefix = "/home/username/.nvm/versions/node/v10.15.3"
preid = ""
production = false
progress = true
proxy = null
read-only = false
rebuild-bundle = true
registry = "https://registry.npmjs.org/"
rollback = true
save = true
save-bundle = false
save-dev = false
save-exact = false
save-optional = false
save-prefix = "^"
save-prod = false
scope = ""
script-shell = null
scripts-prepend-node-path = "warn-only"
searchexclude = null
searchlimit = 20
searchopts = ""
searchstaleness = 900
send-metrics = false
shell = "/bin/bash"
shrinkwrap = true
sign-git-commit = false
sign-git-tag = false
sso-poll-frequency = 500
sso-type = "oauth"
strict-ssl = true
tag = "latest"
tag-version-prefix = "v"
timing = false
tmp = "/tmp"
umask = 18
unicode = true
unsafe-perm = true
update-notifier = true
usage = false
user = 1001
; user-agent = "npm/{npm-version} node/{node-version} {platform} {arch}" (overridden)
userconfig = "/home/username/.npmrc"
version = false
versions = false
viewer = "man"
Well, SSD drives have better performance, so the easy answer would be: put everything on the SSD.
If you are constrained by disk space, then I would move the .npm cache folder to the SSD, since it is shared by every yarn/npm install.
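A minimal example of relocating the cache; the SSD mount point here is an assumption:
# point npm's cache at a directory on the SSD
npm config set cache /mnt/ssd/.npm
# confirm the new location
npm config get cache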

S3cmd configuration not working properly ERROR: S3 error: None

I am trying to set up s3cmd on CentOS with the configuration below. But when I try to list all buckets with s3cmd ls, it gives this error:
ERROR: S3 error: None
I have checked: the Python version is 2.6.6 and the s3cmd version is 1.5.1.2.
http://s3tools.org/kb/item14.htm
http://s3tools.org/kb/item1.htm
[default]
access_key = ACCESS_KEY
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = secret
guess_mime_type = True
host_base = vault.ecloud.co.uk
host_bucket = %(bucket)s.vault.ecloud.co.uk
human_readable_sizes = False
ignore_failed_copy = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
list_md5 = False
log_target_prefix =
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 4096
reduced_redundancy = False
restore_days = 1
secret_key = SECRET_KEY
send_chunk = 4096
server_side_encryption = False
signature_v2 = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.vault.ecloud.co.uk/
website_error =
website_index = index.html
After some searching I found the solution; it was due to RequestTimeTooSkewed.
I was able to debug this with the command s3cmd --configure --debug, which showed:
<Error><Code>RequestTimeTooSkewed</Code></Error>
You can fix RequestTimeTooSkewed with these commands:
apt-get install ntp
or
yum install ntp
Configure NTP to use the Amazon servers, like so:
vim /etc/ntp.conf
service ntpd restart
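The actual server lines were not shown above; a typical set of entries for /etc/ntp.conf pointing at the Amazon NTP pool looks like this (added before restarting ntpd):
# keep the clock close enough to AWS time to avoid RequestTimeTooSkewed
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst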
For details you can follow this link: http://www.emind.co/how-to/how-to-fix-amazon-s3-requesttimetooskewed

DEBUG: Routing failed, re-queued. Kannel

Hi, I am trying to set up Kannel 1.4.3 for sending and receiving SMS, but I'm getting a "Routing failed" error:
ERROR: AT2[Huawei-E220-00]: Couldn't connect (retrying in 10 seconds).
Message in the browser:
3: Queued for later delivery
Details:
Modem - Huawei e220
Sim - AT&T
OS - Ubuntu 14.04 LTS
Please let me know if there is anything wrong in the following smskannel.conf:
#---------------------------------------------
# CORE
#
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = bar
#status-password = foo
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"
#unified-prefix = "+358,00358,0;+,00"
#---------------------------------------------
# SMSC CONNECTIONS
#
group = smsc
smsc = at
smsc-id = Huawei-E220-00
port = 10000
modemtype = huawei_e220_00
device = /dev/ttyUSB0
sms-center = +13123149810
my-number = +1xxxxxxxxxx
connect-allow-ip = 127.0.0.1
sim-buffering = true
keepalive = 5
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
global-sender = 13013
#---------------------------------------------
# SEND-SMS USERS
#
group = sendsms-user
username = tester
password = foobar
#---------------------------------------------
# SERVICES
group = sms-service
keyword = nop
text = "You asked nothing and I did it!"
group = sms-service
keyword = default
text = "No service specified"
group = sms-service
keyword = complex
catch-all = yes
accept-x-kannel-headers = true
max-messages = 3
concatenation = true
get-url = "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=+16782304782&text=Hello World"
#---------------------------------------------
# MODEMS
#
group = modems
id = huawei_e220_00
name = "Huawei E220"
detect-string = "huawei"
init-string = "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
message-storage = "SM"
need-sleep = true
speed = 460800
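Not from the original post, but before changing smskannel.conf it is worth confirming that bearerbox can actually open the modem device, since the AT2 "Couldn't connect" error often comes down to the device path or its permissions. The kannel user and service names below are assumptions based on the Ubuntu package:
# confirm the modem registered as /dev/ttyUSB0 and see which group owns it
ls -l /dev/ttyUSB0
# on Ubuntu serial devices usually belong to the dialout group;
# add the user running bearerbox to it, then restart kannel
sudo usermod -a -G dialout kannel
sudo service kannel restart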
