Out of the blue, I am getting this error on all playbooks that try to use the ping module.
A simple playbook like this:
- hosts: "tag_Role_App:&tag_Cluster_{{cluster_id}}"
  gather_facts: false
  max_fail_percentage: 0
  any_errors_fatal: true
  tasks:
    - action: ping
Results in:
...
obj = getattr(self._module_cache[path], self.class_name)
AttributeError: module 'ansible.plugins.action.ping' has no attribute 'ActionModule'
Python 3.9, ansible==6.0.0, ansible-core==2.13.1; I've tried upgrading, no change.
ansible.cfg (although I don't think this matters):
[defaults]
action_plugins =
host_key_checking = False
ask_pass = False
ask_sudo_pass = False
command_warnings = True
log_path = /var/log/ansible/ansible.log
gathering = smart
nocows = 1
pattern = a^
#command_warnings = False
interpreter_python = auto_silent
force_color = 0
nocolor = 1
timeout = 60
[ssh_connection]
pipelining = True
ssh_args = -o UserKnownHostsFile=/dev/null -o ConnectionAttempts=20 -o ControlPersist=15m -F /etc/ansible/ssh.config -q
retries = 20
nocolor = true
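In case something local is shadowing the built-in ping (only a guess), these commands show which config settings are actually in effect and whether the ping module still resolves from ansible-core:
ansible-config dump --only-changed
ansible-doc -t module ping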
I am writing data to influxdb using the node-influx library.
https://github.com/node-influx/node-influx
It writes about 500,000 records and then I start seeing the error below, after which there are no more writes. It looks like a DNS issue, but I am running it inside a Docker container on an Ubuntu 18.04 host.
Error: getaddrinfo EAGAIN influxdb influxdb:8086
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
I have the logging level set to debug but I am not seeing any other errors. Any idea what might be causing this?
Update
Tried with a different InfluxDB version.
Increased the ulimit of the host.
Used the IP address of the InfluxDB container instead of the service name; no error is thrown, but the writes still stop silently after some time.
Tried calling the write API with curl:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'snmp,hostipv4=172.16.102.82,oidname=cpu_idle,site=gotham value=1000 1574751020815819489'
This works and a record is inserted in the DB.
Update2
It seems to be a DNS issue on the Docker network. I am not able to ping the influxdb container from the worker container, and the writes are not reaching InfluxDB.
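To double-check the DNS part, something along these lines should show whether the service name resolves inside the worker container (the container and network names below are placeholders for my compose setup, and getent may be missing from slim images):
docker exec -it worker getent hosts influxdb
docker network inspect my_network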
Update3
As a workaround for now, I am forcing a process.exit(1) when my worker catches the error and using docker-compose's restart: on-failure policy to restart the service. This resumes the writes.
The retention policy on the DB is set to 2 days.
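A trimmed-down sketch of that workaround (using node-influx's promise-based writePoints; the host, database name and batching details are illustrative):
const Influx = require('influx');

const influx = new Influx.InfluxDB({ host: 'influxdb', database: 'mydb' });

function writeBatch(points) {
  // On any write error (e.g. getaddrinfo EAGAIN), crash the worker and let
  // docker-compose's restart: on-failure policy bring it back up.
  return influx.writePoints(points).catch((err) => {
    console.error('influx write failed, exiting:', err.message);
    process.exit(1);
  });
}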
influxdb.conf
reporting-disabled = true
[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = false
logging-enabled = true
[logging]
format = "auto"
level = "debug"
[data]
engine = "tsm1"
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
wal-fsync-delay = "200ms"
index-version = "inmem"
wal-logging-enabled = true
query-log-enabled = true
cache-max-memory-size = "2g"
cache-snapshot-memory-size = "256m"
cache-snapshot-write-cold-duration = "20m"
compact-full-write-cold-duration = "24h"
max-concurrent-compactions = 0
compact-throughput = "48m"
max-points-per-block = 0
max-series-per-database = 1000000
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = true
max-concurrent-write-limit = 0
max-enqueued-write-limit = 0
enqueued-write-timeout = 0
[continuous_queries]
enabled = false
log-enabled = true
run-interval = "10s"
I'm trying to install an npm package and I'm getting this error:
npm ERR! code EHOSTUNREACH
npm ERR! errno EHOSTUNREACH
npm ERR! request to https://registry.npmjs.org/express-session failed, reason: connect EHOSTUNREACH 104.16.23.35:443 - Local (192.0.108.1:52659)
I tried resetting the npm configuration to the default values. I also tried installing with and without a VPN, but it still didn't work.
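Quick checks to see whether the registry is reachable at all from this machine (npm ping talks to the configured registry):
npm ping
curl -I https://registry.npmjs.org/express-session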
Here is the configuration:
; userconfig /Users/mac/.npmrc
access = null
allow-same-version = false
also = null
always-auth = false
audit = true
audit-level = "low"
auth-type = "legacy"
before = null
bin-links = true
browser = null
ca = null
cache = "/Users/mac/.npm"
cache-lock-retries = 10
cache-lock-stale = 60000
cache-lock-wait = 10000
cache-max = 0
cache-min = 10
cafile = "/Users/mac/Projects/NodeProjects/Bloggy/undefined"
cert = null
cidr = null
color = true
commit-hooks = true
depth = 0
description = true
dev = false
dry-run = false
editor = "vi"
engine-strict = false
fetch-retries = 2
fetch-retry-factor = 10
fetch-retry-maxtimeout = 60000
fetch-retry-mintimeout = 10000
force = false
git = "git"
git-tag-version = true
global = false
global-style = false
globalconfig = "/usr/local/etc/npmrc"
globalignorefile = "/usr/local/etc/npmignore"
group = 20
ham-it-up = false
heading = "npm"
https-proxy = null
if-present = false
ignore-prepublish = false
ignore-scripts = false
init-author-email = ""
init-author-name = ""
init-author-url = ""
init-license = "ISC"
init-module = "/Users/mac/.npm-init.js"
init-version = "1.0.0"
json = false
key = null
legacy-bundling = false
link = false
local-address = undefined
loglevel = "notice"
logs-max = 10
long = false
maxsockets = 50
message = "%s"
node-options = null
node-version = "10.15.3"
noproxy = null
offline = false
onload-script = null
only = null
optional = true
otp = null
package-lock = true
package-lock-only = false
parseable = false
prefer-offline = false
prefer-online = false
prefix = "/usr/local"
preid = ""
production = false
progress = true
proxy = null
read-only = false
rebuild-bundle = true
registry = "https://registry.npmjs.org/"
rollback = true
save = true
save-bundle = false
save-dev = false
save-exact = false
save-optional = false
save-prefix = "^"
save-prod = false
scope = ""
script-shell = null
scripts-prepend-node-path = "warn-only"
searchexclude = null
searchlimit = 20
searchopts = ""
searchstaleness = 900
send-metrics = false
shell = "/bin/bash"
shrinkwrap = true
sign-git-commit = false
sign-git-tag = false
sso-poll-frequency = 500
sso-type = "oauth"
strict-ssl = true
tag = "latest"
tag-version-prefix = "v"
timing = false
tmp = "/var/folders/qc/f1s84bcj5y10v57pvz0s4st40000gn/T"
umask = 18
unicode = true
unsafe-perm = true
update-notifier = true
usage = false
user = 0
userconfig = "/Users/mac/.npmrc"
version = false
versions = false
viewer = "man"
I set these values while trying to fix this problem, but it still doesn't work. I should mention that npm was working perfectly before.
To answer your question: you have a problem connecting to registry.npmjs.org.
Try running the commands below (if you are on Windows, use tracert instead of traceroute):
ping registry.npmjs.org
traceroute -n registry.npmjs.org
If the first command returns "Destination Host Unreachable", you are behind a firewall that is blocking you from reaching the server.
You can also check your .npmrc entries with npm config ls.
I had the same problem on Ubuntu (WSL).
Try these:
$ ping registry.npmjs.org
$ npm view npm version
If the ping is not successful, there are multiple options:
check your connection
check your firewall
check your proxy
For me it was the proxy, and I had to disable the "Automatically detect settings" option in the Windows proxy settings.
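If the proxy turns out to be the culprit on your side too, a rough sketch of checking and clearing npm's proxy settings (only clear them if you don't actually need a proxy):
npm config get proxy
npm config get https-proxy
npm config delete proxy
npm config delete https-proxy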
This is my /etc/telegraf/telegraf.conf file, section outputs.file:
[[outputs.file]]
files = ["stdout", "/home/zeinab/metrics.out"]
data_format = "influx"
But the telegraf log (written to /var/log/syslog) shows this error continuously:
Jan 15 12:14:33 ZiZi telegraf[19916]: kernel,host=ZiZi context_switches=9275452836i,boot_time=1515496651i,processes_forked=1203986i,interrupts=1381624861i 1516005867000000000
Jan 15 12:14:33 ZiZi telegraf[19916]: 2018-01-15T08:44:33Z E! Error writing to output [file]: failed to write message: kernel,host=ZiZi context_switches=9275452836i,boot_time=1515496651i,processes_forked=1203986i,interrupts=1381624861i 1516005867000000000
Jan 15 12:14:33 ZiZi telegraf[19916]: , invalid argument
EDIT 1:
The whole uncommented config is:
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
debug = false
quiet = false
logfile = ""
hostname = ""
omit_hostname = false
[[outputs.influxdb]]
urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
database = "telegraf" # required
[[outputs.elasticsearch]]
urls = [ "http://127.0.0.1:9200" ] # required.
timeout = "5s"
index_name = "telegraf-%Y.%m.%d" # required.
manage_template = true
template_name = "telegraf"
overwrite_template = false
[[outputs.file]]
files = ["stdout", "/home/zeinab/metrics.out"]
data_format = "influx"
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
[[inputs.jolokia2_agent]]
urls = ["http://192.168.100.179:8778/jolokia"]
[[inputs.jolokia2_agent.metric]]
name = "heap_memory_usage"
mbean = "java.lang:type=Memory"
paths = ["HeapMemoryUsage"]
[[inputs.jolokia2_agent.metric]]
name = "send_success"
mbean = "wr-core:type=monitor,name=execution"
paths = ["MessageSendSuccessCount"]
I start telegraf as a service:
service telegraf start
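To narrow this down, one option might be to run telegraf in the foreground and confirm the output path is writable by the service account (assuming the package runs it as the telegraf user):
telegraf --config /etc/telegraf/telegraf.conf   # run in the foreground to see write errors directly
sudo -u telegraf touch /home/zeinab/metrics.out # check that the service user can create/write the file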
I am trying to set up s3cmd on CentOS with the configuration below. But when I try to list all buckets with s3cmd ls, it gives this error:
ERROR: S3 error: None
I have checked the versions: Python is 2.6.6 and s3cmd is 1.5.1.2.
http://s3tools.org/kb/item14.htm
http://s3tools.org/kb/item1.htm
[default]
access_key = ACCESS_KEY
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = secret
guess_mime_type = True
host_base = vault.ecloud.co.uk
host_bucket = %(bucket)s.vault.ecloud.co.uk
human_readable_sizes = False
ignore_failed_copy = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
list_md5 = False
log_target_prefix =
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 4096
reduced_redundancy = False
restore_days = 1
secret_key = SECRET_KEY
send_chunk = 4096
server_side_encryption = False
signature_v2 = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.vault.ecloud.co.uk/
website_error =
website_index = index.html
After some searching I found the solution: it was due to RequestTimeTooSkewed.
I was able to debug this by running s3cmd with the --debug flag, which showed:
<Error><Code>RequestTimeTooSkewed</Code></Error>
You can fix RequestTimeTooSkewed with these commands:
apt-get install ntp
or
yum install ntp
Configure NTP to use Amazon servers, like so:
vim /etc/ntp.conf
service ntpd restart
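The server entries in /etc/ntp.conf might look something like this (the Amazon pool zone is a common choice; any reachable NTP servers will do):
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst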
For details, you can follow this link: http://www.emind.co/how-to/how-to-fix-amazon-s3-requesttimetooskewed
Hi, I am trying to set up Kannel 1.4.3 for sending and receiving SMS, but I'm getting a routing failed error:
ERROR: AT2[Huawei-E220-00]: Couldn't connect (retrying in 10 seconds).
Message in the browser:
3: Queued for later delivery
Details:
Modem - Huawei e220
Sim - AT&T
OS - Ubuntu 14.04 LTS
Please let me know if there is anything wrong with the following smskannel.conf:
#---------------------------------------------
# CORE
#
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = bar
#status-password = foo
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"
#unified-prefix = "+358,00358,0;+,00"
#---------------------------------------------
# SMSC CONNECTIONS
#
group = smsc
smsc = at
smsc-id = Huawei-E220-00
port = 10000
modemtype = huawei_e220_00
device = /dev/ttyUSB0
sms-center = +13123149810
my-number = +1xxxxxxxxxx
connect-allow-ip = 127.0.0.1
sim-buffering = true
keepalive = 5
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
global-sender = 13013
#---------------------------------------------
# SEND-SMS USERS
#
group = sendsms-user
username = tester
password = foobar
#---------------------------------------------
# SERVICES
group = sms-service
keyword = nop
text = "You asked nothing and I did it!"
group = sms-service
keyword = default
text = "No service specified"
group = sms-service
keyword = complex
catch-all = yes
accept-x-kannel-headers = true
max-messages = 3
concatenation = true
get-url = "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=+16782304782&text=Hello World"
#---------------------------------------------
# MODEMS
#
group = modems
id = huawei_e220_00
name = "Huawei E220"
detect-string = "huawei"
init-string = "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
message-storage = "SM"
need-sleep = true
speed = 460800
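Once bearerbox and smsbox are running, the HTTP interfaces can be tested directly with curl, reusing the admin-password and sendsms-user from the config above (the + in the destination number has to be URL-encoded as %2B):
curl "http://127.0.0.1:13000/status?password=bar"
curl "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=%2B16782304782&text=test"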