I have migrated one of our jump servers from openldap/nscd to the sssd client.
The server is running Ubuntu 14.04 x64.
Everything works fine except one very important feature: the password-reset dialog does not pop up when a user authenticates with an expired password. We have a 90-day password-expiration policy set on our LDAP server (OpenLDAP 2.4). Playing with different flags in /etc/sssd/sssd.conf did not bring the desired result.
Here is sssd.conf
# LDAP sssd config
[sssd]
debug_level = 8
domains = mydomain.local
config_file_version = 2
reconnection_retries = 3
services = nss, pam, ssh, sudo
[domain/mydomain.local]
debug_level = 8
cache_credentials = true
entry_cache_timeout = 600
ldap_search_base = dc=mydomain,dc=local
ldap_sudo_search_base = ou=SUDOers,dc=mydomain,dc=local
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
sudo_provider = ldap
subdomain_homedir = /home/%d/%u
ldap_uri = ldaps://10.10.10.10
ldap_tls_reqcert = allow
account_cache_expiration = 7
ldap_schema = rfc2307
ldap_pwd_policy = shadow
ldap_chpass_update_last_change = true
pwd_expiration_warning = 0
reconnection_retries = 3
access_provider = simple
simple_allow_groups = Access_Jumpserver
[nss]
debug_level = 8
filter_groups = root
filter_users = backup,bin,daemon,Debian-exim,games,gnats,irc,list,lp,mail,man,messagebus,news,root,smmsp,smmta,sshd,sync,sys,syslog,uucp,uuidd
reconnection_retries = 3
enum_cache_timeout = 300
entry_cache_nowait_percentage = 75
[pam]
debug_level = 8
pam_verbosity = 8
reconnection_retries = 3
offline_credentials_expiration = 7
offline_failed_login_attempts = 5
offline_failed_login_delay = 15
[sudo]
debug_level = 8
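With ldap_pwd_policy = shadow, sssd evaluates the shadow attributes (shadowLastChange, shadowMax) on the LDAP entry itself, so it is worth confirming those are actually returned for your users. The check it applies boils down to the arithmetic below (the attribute values here are made-up examples, not taken from our directory):

```shell
# shadowLastChange counts days since the Unix epoch; the password is expired
# once today's day number reaches shadowLastChange + shadowMax.
shadow_last_change=18200   # example: day number when the password was last set
shadow_max=90              # our 90-day policy
today=$(( $(date +%s) / 86400 ))
if [ "$today" -ge $(( shadow_last_change + shadow_max )) ]; then
  echo expired
else
  echo valid
fi
```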
I would be happy for any direction here.
As a temporary workaround I am using the following function in a login script that is invoked from /etc/bash.bashrc:
calculate_pwd_age() {
    local MAX_AGE=90
    # shadowLastChange is stored as days since the Unix epoch
    local shdw_epoch
    shdw_epoch=$(ldapsearch -x -LLL -H ldaps://10.0.0.1 "uid=${USER}" shadowLastChange | awk 'NR==2{print $2}')
    [[ -z ${shdw_epoch} ]] && return   # attribute not returned; nothing to check
    local today=$(( $(date +'%s') / 86400 ))
    local shdw_diff=$(( today - shdw_epoch ))
    if (( shdw_diff >= MAX_AGE )); then
        echo -e "\nYour password has expired. Please change it right now:\n"
        sleep 2
        passwd
    fi
}
We have a big-data Hadoop cluster based on Hortonworks HDP 2.6.4 and Ambari 2.6.1.
All machines run RHEL 7.2.
The cluster has more than 540 machines, and every machine runs an ambari-agent that communicates with the Ambari server (the Ambari server is installed on only one machine).
Before we used Ansible, everything was fine when we did an ambari-agent upgrade and restart.
Recently we started using Ansible (ansible-playbook) to automate the installation, and Ansible runs on all machines.
When a task performs the ambari-agent restart, we immediately notice that the Ansible execution stops and is killed.
After some investigation we saw that the Ambari agent uses the following ports:
url_port = 8440
secured_url_port = 8441
ping_port = 8670
We do not see any Ansible process using the ports above, so we do not think that is related.
The basic issue is clear: when an Ansible task runs ambari-agent restart on a remote machine, Ansible is interrupted and killed.
The ambari-agent configuration looks like this:
[server]
hostname = datanode02.gtfactory.com
url_port = 8440
secured_url_port = 8441
connect_retry_delay = 10
max_reconnect_retry_delay = 30
[agent]
logdir = /var/log/ambari-agent
piddir = /var/run/ambari-agent
prefix = /var/lib/ambari-agent/data
loglevel = INFO
data_cleanup_interval = 86400
data_cleanup_max_age = 2592000
data_cleanup_max_size_mb = 100
ping_port = 8670
cache_dir = /var/lib/ambari-agent/cache
tolerate_download_failures = true
run_as_user = root
parallel_execution = 0
alert_grace_period = 5
status_command_timeout = 5
alert_kinit_timeout = 14400000
system_resource_overrides = /etc/resource_overrides
[security]
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
[network]
use_system_proxy_settings = true
[services]
pidlookuppath = /var/run/
[heartbeat]
state_interval_seconds = 60
dirs = /etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,
/etc/sqoop,
/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,
/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
log_lines_count = 300
idle_interval_min = 1
idle_interval_max = 10
[logging]
syslog_enabled = 0
For now we are thinking about the following:
Maybe Ansible crashes because TLSv1 (Transport Layer Security) is restricted; by default, ambari-agent connects with TLSv1.
So we are considering setting force_https_protocol=PROTOCOL_TLSv1_2 in the Ambari agent configuration, but this is only an assumption.
Is this suggestion right, and could the new configuration below help?
[security]
force_https_protocol=PROTOCOL_TLSv1_2    <-- the proposed addition
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
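Independent of the TLS question, one plausible cause for this pattern (an assumption, not verified against this cluster) is that the restart shares a session and process group with the SSH session Ansible owns, so signals sent during the restart take the Ansible task down with it. A sketch of detaching the restart, with `sleep` as a stand-in for `ambari-agent restart`:

```shell
# restart_detached runs a command in its own session and process group
# (setsid), with stdio detached (nohup + redirections), so signals sent to
# the caller's process group during the restart cannot reach it and vice
# versa. Illustrated with 'sleep' standing in for 'ambari-agent restart'.
restart_detached() {
  nohup setsid "$@" </dev/null >/dev/null 2>&1 &
}

restart_detached sleep 1
```

In an Ansible task this would correspond to wrapping the restart command the same way, or using Ansible's own async/fire-and-forget execution for the restart step.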
I am writing data to influxdb using the node-influx library.
https://github.com/node-influx/node-influx
It writes about 500,000 records and then I start seeing the error below, after which there are no more writes. It looks like a DNS issue, but I am running it inside a Docker container on an Ubuntu 18.04 host.
Error: getaddrinfo EAGAIN influxdb influxdb:8086
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
I have the logging level set to debug but I am not seeing any other errors. Any idea what might be causing this?
Update
- tried with a different InfluxDB version
- increased the ulimit on the host
- used the IP address of the Docker container instead of the service name: no error is thrown, but the writes still stop silently after some time
- tried to call the write API with curl:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'snmp,hostipv4=172.16.102.82,oidname=cpu_idle,site=gotham value=1000 1574751020815819489'
This works and a record is inserted in the DB.
Update2
It seems to be a dns issue on the docker network. I am not able to ping the influxdb container from the worker container. The writes are not reaching influx.
Update3
As a workaround for now, I am forcing a process.exit(1) on catching the error on my worker and using docker-compose restart: on-failure to restart the service. This resumes the writes.
The retention policy on the DB is set to 2 days.
influxdb.conf
reporting-disabled = true
[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = false
logging-enabled = true
[logging]
format = "auto"
level = "debug"
[data]
engine = "tsm1"
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
wal-fsync-delay = "200ms"
index-version = "inmem"
wal-logging-enabled = true
query-log-enabled = true
cache-max-memory-size = "2g"
cache-snapshot-memory-size = "256m"
cache-snapshot-write-cold-duration = "20m"
compact-full-write-cold-duration = "24h"
max-concurrent-compactions = 0
compact-throughput = "48m"
max-points-per-block = 0
max-series-per-database = 1000000
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = true
max-concurrent-write-limit = 0
max-enqueued-write-limit = 0
enqueued-write-timeout = 0
[continuous_queries]
enabled = false
log-enabled = true
run-interval = "10s"
Discovery is successful:
[root@ncdqd0110 iqn.11351.com.xxx:AAA]# iscsiadm -m discoverydb -t sendtargets -p 127.0.0.1:54541 --discover
127.0.0.1:54541,-1 iqn.2495.com.xxx:AAA
Login is failing:
[root@ncdqd0110 ~]# iscsiadm --mode node --target iqn.2495.com.xxx:AAA --portal 127.0.0.1:54541 --login
Logging in to [iface: default, target: iqn.2495.com.xxx:AAA, portal: 127.0.0.1,54541] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2495.com.xxx:AAA, portal: 127.0.0.1,54541].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not log into all portals
This is happening on Red Hat v7.0; on SUSE it works fine.
A few command results are given below:
[root@ncdqd0110 iqn.2495.com.xxx:AAA]# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.31.224.1 0.0.0.0 UG 0 0 0 eno16780032
10.31.224.0 0.0.0.0 255.255.252.0 U 0 0 0 eno16780032
[root@ncdqd0110 ~]# ip route show
default via 10.31.224.1 dev eno16780032 proto static metric 1024
10.31.224.0/22 dev eno16780032 proto kernel scope link src 10.31.227.110
BEGIN RECORD 6.2.0.873-21
node.name = iqn.2495.com.xxx:AAA
node.tpgt = -1
node.startup = automatic
node.leading_login = No
iface.net_ifacename = eno16780032
iface.iscsi_ifacename = default
iface.transport_name = tcp
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
iface.tos = 0
iface.ttl = 0
iface.tcp_wsf = 0
iface.tcp_timer_scale = 0
iface.def_task_mgmt_timeout = 0
iface.erl = 0
iface.max_receive_data_len = 0
iface.first_burst_len = 0
iface.max_outstanding_r2t = 0
iface.max_burst_len = 0
node.discovery_address = 127.0.0.1
node.discovery_port = 54541
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.nr_sessions = 1
node.session.auth.authmethod = None
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 127.0.0.1
node.conn[0].port = 54541
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
END RECORD
If anyone knows about this issue, please let me know how to fix it.
This appears to be an SELinux policy choice. The discovery session is running within iscsiadm, but iscsid is restricted in which ports it can connect to.
One option is to use the audit2why/audit2allow utilities from policycoreutils to create a local policy module, extending the default system SELinux policy to allow this.
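For example (a sketch, assuming the denial shows up in the audit log; the module name is arbitrary):

```shell
# Inspect recent AVC denials for iscsid to confirm it is the port check:
ausearch -m avc -ts recent -c iscsid

# Option 1: label the non-standard portal port so the stock policy allows it
semanage port -a -t iscsi_port_t -p tcp 54541

# Option 2: generate and load a local policy module from the logged denials
ausearch -m avc -ts recent | audit2allow -M local_iscsid
semodule -i local_iscsid.pp
```

Option 1 is the narrower change if the denial really is about the non-standard port; option 2 allows exactly whatever the audit log recorded, so review the generated .te file before loading it.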
Hi, I am trying to set up Kannel 1.4.3 for sending and receiving SMS, but I'm getting a "Routing Failed" error:
ERROR: AT2[Huawei-E220-00]: Couldn't connect (retrying in 10 seconds).
Message in the browser:
3: Queued for later delivery
Details:
Modem - Huawei e220
Sim - AT&T
OS - Ubuntu 14.04 LTS
Please let me know if there is anything wrong in the following smskannel.conf:
#---------------------------------------------
# CORE
#
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = bar
#status-password = foo
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"
#unified-prefix = "+358,00358,0;+,00"
#---------------------------------------------
# SMSC CONNECTIONS
#
group = smsc
smsc = at
smsc-id = Huawei-E220-00
port = 10000
modemtype = huawei_e220_00
device = /dev/ttyUSB0
sms-center = +13123149810
my-number = +1xxxxxxxxxx
connect-allow-ip = 127.0.0.1
sim-buffering = true
keepalive = 5
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
global-sender = 13013
#---------------------------------------------
# SEND-SMS USERS
#
group = sendsms-user
username = tester
password = foobar
#---------------------------------------------
# SERVICES
group = sms-service
keyword = nop
text = "You asked nothing and I did it!"
group = sms-service
keyword = default
text = "No service specified"
group = sms-service
keyword = complex
catch-all = yes
accept-x-kannel-headers = true
max-messages = 3
concatenation = true
get-url = "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=+16782304782&text=Hello World"
#---------------------------------------------
# MODEMS
#
group = modems
id = huawei_e220_00
name = "Huawei E220"
detect-string = "huawei"
init-string = "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
message-storage = "SM"
need-sleep = true
speed = 460800
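One thing worth noting about the config above: the get-url in the "complex" service embeds a literal space (text=Hello World) and a raw + in the number; both need to be percent-encoded or the resulting HTTP request line is malformed. When testing the sendsms interface by hand, curl can do the encoding itself (host and port taken from the smsbox defaults above; this assumes bearerbox and smsbox are running):

```shell
# -G turns the --data-urlencode pairs into a GET query string, with the space
# in the message text and the '+' in the number properly percent-encoded.
curl -G 'http://127.0.0.1:13013/cgi-bin/sendsms' \
  --data-urlencode 'username=tester' \
  --data-urlencode 'password=foobar' \
  --data-urlencode 'to=+16782304782' \
  --data-urlencode 'text=Hello World'
```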
When answering the question Check if scheduled local agents can run in Notes client, I found a forum post by Javed Khan indicating that this can be determined by checking whether a bit in the Preferences environment value is set.
Const LOCAL_AGENTS = &H8000000
Call Session.SetEnvironmentVar("Preferences", Cstr( Clng( Session.GetEnvironmentValue( "Preferences", True )) Or LOCAL_AGENTS ), True )
The "Scheduled local agents" settings is apparently the 28:th bit.
My question is: Is there any online documentation for the meaning of the other bits?
Here is the list, taken from http://www-10.lotus.com/ldd/46dom.nsf/55c38d716d632d9b8525689b005ba1c0/e870840587eed796852568f6006facde?OpenDocument
0 <0> = Keep workspace in back when maximized (Enabled=1)
1 <2> = Scan for unread
2 <4> =
3 <8> = Large fonts
4 <16> =
5 <32> = Make Internet URLs (http://) into hotspots
6 <64> =
7 <128> = Typewriter fonts only
8 <256> = Monochrome display
9 <512> = Scandinavian collation
10 <1024> =
11 <2048> =
12 <4096> = Sign sent mail (Enabled=1)
13 <8192> = Encrypt sent mail
14 <16384> = Metric(1)/Imperial(0) measurements
15 <32768> = Numbers last collation
16 <65536> = French casing
17 <131072> = empty trash folder (prompt during db close=0/always during db close=1/manual=1)
18 <262144> = Check for new mail every x minutes (Enabled=0)
19 <524288> = Enable local background indexing
20 <1048576> = Encrypt saved mail
21 <2097152> =
22 <4194304> =
23 <8388608> = Right double-click closes window
24 <16777216> = Prompt for location
25 <33554432> =
26 <67108864> = Mark documents read when opened in the preview pane
27 <134217728> = Enable local scheduled agents
28 <268435456> = Save sent mail (Always prompt=10/Don't keep a copy=00/Always keep a copy=01)
29 <536870912> =
30 <1073741824> = New mail notification (None=10/Audible=00/Visible=01)
31 <2147483648> =
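The flags can be checked the same way Javed's snippet sets them, by masking the value. A quick sanity check of the arithmetic in shell (the Preferences value here is a made-up example, not read from a real notes.ini):

```shell
LOCAL_AGENTS=$((1 << 27))   # = 134217728, i.e. &H8000000 in the list above
prefs=134217760             # example Preferences value: bits 5 and 27 set
if [ $(( prefs & LOCAL_AGENTS )) -ne 0 ]; then
  echo "Enable local scheduled agents: set"
else
  echo "Enable local scheduled agents: clear"
fi
```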