Hi, I am trying to set up Kannel 1.4.3 for sending and receiving SMS, but I'm getting a "Routing Failed" error:
ERROR: AT2[Huawei-E220-00]: Couldn't connect (retrying in 10 seconds).
Message in the browser:
3: Queued for later delivery
Details:
Modem - Huawei E220
SIM - AT&T
OS - Ubuntu 14.04 LTS
Please let me know if there is anything wrong in the following smskannel.conf:
#---------------------------------------------
# CORE
#
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = bar
#status-password = foo
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"
#unified-prefix = "+358,00358,0;+,00"
#---------------------------------------------
# SMSC CONNECTIONS
#
group = smsc
smsc = at
smsc-id = Huawei-E220-00
port = 10000
modemtype = huawei_e220_00
device = /dev/ttyUSB0
sms-center = +13123149810
my-number = +1xxxxxxxxxx
connect-allow-ip = 127.0.0.1
sim-buffering = true
keepalive = 5
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
global-sender = 13013
#---------------------------------------------
# SEND-SMS USERS
#
group = sendsms-user
username = tester
password = foobar
#---------------------------------------------
# SERVICES
group = sms-service
keyword = nop
text = "You asked nothing and I did it!"
group = sms-service
keyword = default
text = "No service specified"
group = sms-service
keyword = complex
catch-all = yes
accept-x-kannel-headers = true
max-messages = 3
concatenation = true
get-url = "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=+16782304782&text=Hello World"
#---------------------------------------------
# MODEMS
#
group = modems
id = huawei_e220_00
name = "Huawei E220"
detect-string = "huawei"
init-string = "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
message-storage = "SM"
need-sleep = true
speed = 460800
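Side note: the AT2 "Couldn't connect" error generally means Kannel cannot open or talk to the serial device at all, rather than anything in the routing setup, so it is worth confirming by hand that the modem answers on /dev/ttyUSB0. A minimal check, assuming picocom is installed (screen or cu work the same way):
ls -l /dev/ttyUSB*                 # the E220 usually exposes two serial ports; the modem is normally the first
picocom -b 115200 /dev/ttyUSB0     # Ctrl-A then Ctrl-X exits picocom
# Then type:
#   AT          -> expect OK
#   AT+CPIN?    -> expect +CPIN: READY  (SIM present and unlocked)
#   AT+CREG?    -> expect +CREG: 0,1 or 0,5  (registered on the network)
Stop bearerbox before testing so that it does not hold the port open.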
I have migrated one of our jump servers from an OpenLDAP/nscd client setup to SSSD.
The server is running Ubuntu 14.04 x64.
Everything works fine except one very important feature: the password-reset dialog does not pop up when a user authenticates with an expired password. We have a 90-day password expiry policy set on our LDAP server (OpenLDAP 2.4). Playing with different flags in /etc/sssd/sssd.conf did not bring the desired result.
Here is sssd.conf:
# LDAP sssd config
[sssd]
debug_level = 8
domains = mydomain.local
config_file_version = 2
reconnection_retries = 3
services = nss, pam, ssh, sudo
[domain/mydomain.local]
debug_level = 8
cache_credentials = true
entry_cache_timeout = 600
ldap_search_base = dc=mydomain,dc=local
ldap_sudo_search_base = ou=SUDOers,dc=mydomain,dc=local
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
sudo_provider = ldap
subdomain_homedir = /home/%d/%u
ldap_uri = ldaps://10.10.10.10
ldap_tls_reqcert = allow
account_cache_expiration = 7
ldap_schema = rfc2307
ldap_pwd_policy = shadow
ldap_chpass_update_last_change = true
pwd_expiration_warning = 0
reconnection_retries = 3
access_provider = simple
simple_allow_groups = Access_Jumpserver
[nss]
debug_level = 8
filter_groups = root
filter_users = backup,bin,daemon,Debian-exim,games,gnats,irc,list,lp,mail,man,messagebus,news,root,smmsp,smmta,sshd,sync,sys,syslog,uucp,uuidd
reconnection_retries = 3
enum_cache_timeout = 300
entry_cache_nowait_percentage = 75
[pam]
debug_level = 8
pam_verbosity = 8
reconnection_retries = 3
offline_credentials_expiration = 7
offline_failed_login_attempts = 5
offline_failed_login_delay = 15
[sudo]
debug_level = 8
I would be happy for any direction here.
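One direction worth ruling out (an assumption on my part, since it is not visible in the posted config) is the PAM stack itself: the expired-password change prompt only appears if pam_sss.so is in the account and password stacks, and over SSH the multi-prompt PAM conversation generally needs keyboard-interactive authentication enabled. On Ubuntu that can be checked like this:
sudo pam-auth-update --force       # make sure the "SSS authentication" profile is ticked
grep pam_sss /etc/pam.d/common-account /etc/pam.d/common-password
grep -Ei 'ChallengeResponseAuthentication|UsePAM' /etc/ssh/sshd_config   # UsePAM yes; ChallengeResponseAuthentication yes for the multi-prompt dialog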
As a temporary solution, I am using the following function in a login script that is invoked from /etc/bash.bashrc:
calculate_pwd_age() {
    local MAX_AGE=90
    # shadowLastChange holds the day number (days since the Unix epoch) of the
    # last password change; awk grabs the value from line 2 of the ldapsearch
    # output ("shadowLastChange: NNNNN").
    let "shdw_epoch = $(ldapsearch -x -LLL -H ldaps://10.0.0.1 "uid=${USER}" shadowLastChange | awk 'NR==2{print $2}')"
    # Today as a day number, then the password age in days.
    let "today = $(date +'%s') / 86400"
    let "shdw_diff = ${today} - ${shdw_epoch}"
    if [[ ${shdw_diff} -ge ${MAX_AGE} ]]; then
        echo -e "\nYour password has expired. Please change it right now:\n"
        sleep 2
        passwd
    fi
}
I am writing data to InfluxDB using the node-influx library:
https://github.com/node-influx/node-influx
It writes about 500,000 records, and then I start seeing the error below, after which there are no more writes. It looks like a DNS issue, but I am running it inside a Docker container on an Ubuntu 18.04 host.
Error: getaddrinfo EAGAIN influxdb influxdb:8086
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
I have the logging level set to debug but I am not seeing any other errors. Any idea what might be causing this?
Update
tried with a different InfluxDB version
increased the ulimit on the host
used the IP address of the InfluxDB container instead of the service name; no error is thrown, but writes stop silently after some time
tried to call the write API with curl:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'snmp,hostipv4=172.16.102.82,oidname=cpu_idle,site=gotham value=1000 1574751020815819489'
This works and a record is inserted in the DB.
Update 2
It seems to be a DNS issue on the Docker network. I am not able to ping the influxdb container from the worker container, so the writes are not reaching InfluxDB.
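A quick way to test that theory from the host (the names worker and influxdb are taken from the description above; adjust them to match the actual compose file):
docker exec -it worker getent hosts influxdb   # does Docker's embedded DNS resolve the name? (nslookup or ping work too if getent is missing)
docker exec -it worker cat /etc/resolv.conf    # on a user-defined network this should point at 127.0.0.11
docker network inspect <network-name>          # <network-name> is a placeholder: check both containers are attached to the same network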
Update 3
As a workaround for now, I am forcing a process.exit(1) when my worker catches the error and using docker-compose's restart: on-failure policy to restart the service. This resumes the writes.
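For reference, the restart workaround amounts to something like this in the compose file (a sketch; the service name worker is assumed):
worker:
  restart: on-failure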
The retention policy on the DB is set to 2 days.
influxdb.conf
reporting-disabled = true
[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = false
logging-enabled = true
[logging]
format = "auto"
level = "debug"
[data]
engine = "tsm1"
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
wal-fsync-delay = "200ms"
index-version = "inmem"
wal-logging-enabled = true
query-log-enabled = true
cache-max-memory-size = "2g"
cache-snapshot-memory-size = "256m"
cache-snapshot-write-cold-duration = "20m"
compact-full-write-cold-duration = "24h"
max-concurrent-compactions = 0
compact-throughput = "48m"
max-points-per-block = 0
max-series-per-database = 1000000
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = true
max-concurrent-write-limit = 0
max-enqueued-write-limit = 0
enqueued-write-timeout = 0
[continuous_queries]
enabled = false
log-enabled = true
run-interval = "10s"
This is the outputs.file section of my /etc/telegraf/telegraf.conf file:
[[outputs.file]]
files = ["stdout", "/home/zeinab/metrics.out"]
data_format = "influx"
But the Telegraf log (written to /var/log/syslog) continuously shows this error:
Jan 15 12:14:33 ZiZi telegraf[19916]: kernel,host=ZiZi context_switches=9275452836i,boot_time=1515496651i,processes_forked=1203986i,interrupts=1381624861i 1516005867000000000
Jan 15 12:14:33 ZiZi telegraf[19916]: 2018-01-15T08:44:33Z E! Error writing to output [file]: failed to write message: kernel,host=ZiZi context_switches=9275452836i,boot_time=1515496651i,processes_forked=1203986i,interrupts=1381624861i 1516005867000000000
Jan 15 12:14:33 ZiZi telegraf[19916]: , invalid argument
EDIT 1:
The full set of uncommented config options is:
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
debug = false
quiet = false
logfile = ""
hostname = ""
omit_hostname = false
[[outputs.influxdb]]
urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
database = "telegraf" # required
[[outputs.elasticsearch]]
urls = [ "http://127.0.0.1:9200" ] # required.
timeout = "5s"
index_name = "telegraf-%Y.%m.%d" # required.
manage_template = true
template_name = "telegraf"
overwrite_template = false
[[outputs.file]]
files = ["stdout", "/home/zeinab/metrics.out"]
data_format = "influx"
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
[[inputs.jolokia2_agent]]
urls = ["http://192.168.100.179:8778/jolokia"]
[[inputs.jolokia2_agent.metric]]
name = "heap_memory_usage"
mbean = "java.lang:type=Memory"
paths = ["HeapMemoryUsage"]
[[inputs.jolokia2_agent.metrics]]
name = "send_success"
mbean = "wr-core:type=monitor,name=execution"
paths = ["MessageSendSuccessCount"]
I start telegraf as a service:
service telegraf start
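Two quick checks that often explain an "invalid argument" from outputs.file when Telegraf runs as a service (assuming the packaged service runs as the telegraf user, which is the Debian/Ubuntu default):
sudo -u telegraf touch /home/zeinab/metrics.out                          # can the service user write to that path?
sudo -u telegraf telegraf --config /etc/telegraf/telegraf.conf --test    # gather one round of metrics in the foreground and print them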
All nodes are registering as down after a new TORQUE install, and I'm not sure why.
[root@rbx-1 6.0.1]# pbsnodes -a
rbx-1
state = down
power_state = Running
np = 1
ntype = cluster
mom_service_port = 15002
mom_manager_port = 15003
rbx-2
state = down
power_state = Running
np = 1
ntype = cluster
mom_service_port = 15002
mom_manager_port = 15003
Here is what qmgr says:
[root@rbx-1 6.0.1]# qmgr -c 'p s'
create queue batch
set queue batch queue_type = Execution
set queue batch resources_default.nodes = 1
set queue batch resources_default.walltime = 01:00:00
set queue batch enabled = True
set queue batch started = True
#
# Set server attributes.
#
set server scheduling = True
set server acl_hosts = rbx-1
set server managers = root@rbx-1
set server operators = root@rbx-1
set server default_queue = batch
set server log_events = 2047
set server mail_from = adm
set server node_check_rate = 150
set server tcp_timeout = 300
set server job_stat_rate = 300
set server poll_jobs = True
set server down_on_error = True
set server mom_job_sync = True
set server keep_completed = 300
set server next_job_number = 0
set server moab_array_compatible = True
set server nppcu = 1
set server timeout_for_job_delete = 120
set server timeout_for_job_requeue = 120
Please help: I don't know what's causing this or what to try next. Any pointers to tutorials or other resources would be helpful.
Try running momctl -d0 -h rbx-1 to see if the MOMs are communicating with the server. Make sure the host names in the server_name file match up with /etc/hosts on the server and the compute nodes. I'd guess you don't have the short names in /etc/hosts on the nodes.
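Concretely, those checks look something like this (paths assume a default TORQUE install with TORQUE_HOME under /var/spool/torque):
cat /var/spool/torque/server_name                  # on each node: which server the MOM reports to
grep pbsserver /var/spool/torque/mom_priv/config   # on each node, if $pbsserver is set there
getent hosts rbx-1 rbx-2                           # on the server and the nodes: do the short names resolve consistently?
momctl -d0 -h rbx-1                                # per-MOM diagnostics from the server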
I am using the mongoose module for my Express.js app, and I keep getting this error every time I start up the app:
========================================================================================
= Please ensure that you set the default write concern for the database by setting =
= one of the options =
= =
= w: (value of > -1 or the string 'majority'), where < 1 means =
= no write acknowlegement =
= journal: true/false, wait for flush to journal before acknowlegement =
= fsync: true/false, wait for flush to file system before acknowlegement =
= =
= For backward compatibility safe is still supported and =
= allows values of [true | false | {j:true} | {w:n, wtimeout:n} | {fsync:true}] =
= the default value is false which means the driver receives does not =
= return the information of the success/error of the insert/update/remove =
= =
= ex: new Db(new Server('localhost', 27017), {safe:false}) =
= =
= http://www.mongodb.org/display/DOCS/getLastError+Command =
= =
= The default of no acknowlegement will change in the very near future =
= =
= This message will disappear when the default safe is set on the driver Db =
========================================================================================
I cannot figure out how to set the write concern. I am connecting to my database like this:
mongoose.connect('mongodb://localhost/reader')
What you want to do is:
mongoose.connect('mongodb://localhost/reader', {db:{safe:false}})
That will give you the default behavior that existed before this whole explicit write concern thing happened in the mongo driver.
More information here: http://mongoosejs.com/docs/api.html#index_Mongoose-createConnection
It was because of the connect-mongodb package. I changed it to connect-mongo and this fixed the problem!