I want to set the configuration below in Phusion Passenger.
max = 6
count = 2
active = 2
inactive = 0
Waiting on global queue = 0
Can you please tell me where to set this?
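For context, these lines look like output from passenger-status; of the values shown, only the pool maximum is a configurable setting (count, active, inactive and the global-queue counter are runtime statistics). A minimal sketch of where that maximum is typically set, assuming the Apache or Nginx integration modes:

# Apache: in httpd.conf or the virtual host that serves the app
PassengerMaxPoolSize 6

# Nginx: in the http block of nginx.conf
passenger_max_pool_size 6;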
We have a big-data Hadoop cluster based on Hortonworks HDP 2.6.4 and Ambari 2.6.1.
All machines run RHEL 7.2.
Our cluster has more than 540 machines. Every machine runs an ambari-agent that communicates with the Ambari server (the Ambari server is installed on only one machine).
Until we started using Ansible, everything was fine when we upgraded and restarted ambari-agent.
Recently we began using Ansible (ansible-playbook) to automate the installation, and Ansible runs on all machines.
When a task performs the ambari-agent restart, we immediately see that the Ansible execution is stopped and killed.
After some investigation we saw that ambari-agent uses the following ports:
url_port = 8440
secured_url_port = 8441
ping_port = 8670
But I do not see any Ansible process using these ports, so we do not think they are related.
The basic issue, however, is clear:
when an Ansible task performs an ambari-agent restart on a remote machine, Ansible is interrupted and killed (a sketch of such a task is shown below).
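As an illustration only, a simplified ad-hoc equivalent of the restart task that triggers the problem (the host pattern and privilege escalation are assumptions; our real playbooks differ):

ansible all -m command -a "ambari-agent restart" -b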
The ambari-agent configuration looks like this:
[server]
hostname = datanode02.gtfactory.com
url_port = 8440
secured_url_port = 8441
connect_retry_delay = 10
max_reconnect_retry_delay = 30
[agent]
logdir = /var/log/ambari-agent
piddir = /var/run/ambari-agent
prefix = /var/lib/ambari-agent/data
loglevel = INFO
data_cleanup_interval = 86400
data_cleanup_max_age = 2592000
data_cleanup_max_size_mb = 100
ping_port = 8670
cache_dir = /var/lib/ambari-agent/cache
tolerate_download_failures = true
run_as_user = root
parallel_execution = 0
alert_grace_period = 5
status_command_timeout = 5
alert_kinit_timeout = 14400000
system_resource_overrides = /etc/resource_overrides
[security]
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
[network]
use_system_proxy_settings = true
[services]
pidlookuppath = /var/run/
[heartbeat]
state_interval_seconds = 60
dirs = /etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,
  /etc/sqoop,
  /var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,
  /var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
log_lines_count = 300
idle_interval_min = 1
idle_interval_max = 10
[logging]
syslog_enabled = 0
For now we are thinking about the following:
maybe Ansible crashes because TLSv1 (Transport Layer Security) is restricted; by default ambari-agent connects with TLSv1.
So we are thinking of setting force_https_protocol=PROTOCOL_TLSv1_2 in the ambari-agent configuration, but this is only an assumption.
Is our suggestion below, with the new configuration, likely to help?
[security]
force_https_protocol=PROTOCOL_TLSv1_2 <------ the new update
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
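One way to sanity-check the TLS assumption before touching the agents (a sketch; replace the placeholder with the Ambari server host, the port is the url_port from the config above):

# Does the Ambari server endpoint accept TLSv1.2 / TLSv1?
openssl s_client -connect <ambari-server-host>:8440 -tls1_2 < /dev/null
openssl s_client -connect <ambari-server-host>:8440 -tls1 < /dev/null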
I have two Windows services, "abc" and "xyz". I want to find the parent process ID of each service, but the parent process ID is the same for both services every time.
What I have done so far:
import psutil

# Look up each service by name and get the PID of its process
service1 = psutil.win_service_get('abc')
service2 = psutil.win_service_get('xyz')
s_pid1 = service1.pid()
s_pid2 = service2.pid()

# Wrap the PIDs in Process objects and query their parents
p1 = psutil.Process(s_pid1)
p2 = psutil.Process(s_pid2)
p_pid1 = p1.ppid()
p_pid2 = p2.ppid()
parent_proc1 = p1.parent()
parent_proc2 = p2.parent()

print(p_pid1, parent_proc1, p_pid2, parent_proc2)
Every time it prints something like:
636 psutil.Process(pid=636, name='services.exe', started='09:28:27') 636 psutil.Process(pid=636, name='services.exe', started='09:28:27')
I can't figure out what's wrong. Any help would be appreciated.
All nodes are registering as down after a new Torque install, and I'm not sure why.
[root@rbx-1 6.0.1]# pbsnodes -a
rbx-1
state = down
power_state = Running
np = 1
ntype = cluster
mom_service_port = 15002
mom_manager_port = 15003
rbx-2
state = down
power_state = Running
np = 1
ntype = cluster
mom_service_port = 15002
mom_manager_port = 15003
Here is what qmgr says:
[root@rbx-1 6.0.1]# qmgr -c 'p s'
create queue batch
set queue batch queue_type = Execution
set queue batch resources_default.nodes = 1
set queue batch resources_default.walltime = 01:00:00
set queue batch enabled = True
set queue batch started = True
#
# Set server attributes.
#
set server scheduling = True
set server acl_hosts = rbx-1
set server managers = root@rbx-1
set server operators = root@rbx-1
set server default_queue = batch
set server log_events = 2047
set server mail_from = adm
set server node_check_rate = 150
set server tcp_timeout = 300
set server job_stat_rate = 300
set server poll_jobs = True
set server down_on_error = True
set server mom_job_sync = True
set server keep_completed = 300
set server next_job_number = 0
set server moab_array_compatible = True
set server nppcu = 1
set server timeout_for_job_delete = 120
set server timeout_for_job_requeue = 120
Please help. I don't know what's causing this or what to try next. Any pointers to tutorials or other resources would be helpful.
Try running momctl -d0 -h rbx-1 to see if the MOMs are communicating with the server. Make sure the host names in the server_name file match up with /etc/hosts on the server and the compute nodes. I'd guess you don't have the short names in /etc/hosts on the nodes.
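A sketch of those checks (the server_name path assumes a default TORQUE layout; adjust for your install prefix):

# On a compute node: ask the MOM for its status and which server it talks to
momctl -d0 -h rbx-1

# On the server and the nodes: server_name must resolve consistently via /etc/hosts
cat /var/spool/torque/server_name
grep rbx /etc/hosts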
When configuring watchers, what would be the purpose of including both of these settings under a watcher:
singleton = True
numprocesses = 1
The documentation states that setting singleton has the following effect:
singleton:
If set to True, this watcher will have at the most one process. Defaults to False.
I read that as negating the need to specify numprocesses; however, in the GitHub repository they provide an example:
https://github.com/circus-tent/circus/blob/master/examples/example6.ini
Included here as well, where they specify both:
[circus]
check_delay = 5
endpoint = tcp://127.0.0.1:5555
pubsub_endpoint = tcp://127.0.0.1:5556
stats_endpoint = tcp://127.0.0.1:5557
httpd = True
debug = True
httpd_port = 8080
[watcher:swiss]
cmd = ../bin/python
args = -u flask_app.py
warmup_delay = 0
numprocesses = 1
singleton = True
stdout_stream.class = StdoutStream
stderr_stream.class = StdoutStream
So I would assume they do something different and in some way work together?
numprocesses is the initial number of processes for a given watcher. In the example you provided it is set to 1, but a user can typically add more processes as needed.
singleton allows at most one running process for a given watcher, so it forbids you from incrementing the number of processes dynamically.
The code below, from the circus test suite, describes it well:
@tornado.testing.gen_test
def test_singleton(self):
    # yield self._stop_runners()
    yield self.start_arbiter(singleton=True, loop=get_ioloop())
    cli = AsyncCircusClient(endpoint=self.arbiter.endpoint)
    # adding more than one process should fail
    yield cli.send_message('incr', name='test')
    res = yield cli.send_message('list', name='test')
    self.assertEqual(len(res.get('pids')), 1)
    yield self.stop_arbiter()
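As a usage illustration (assuming the [watcher:swiss] example above is running), asking circus to add a process to a singleton watcher should fail and leave the watcher at a single process, exactly as the test asserts:

circusctl incr swiss
circusctl list swiss   # still reports a single pid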
Discovery is successful:
[root@ncdqd0110 iqn.11351.com.xxx:AAA]# iscsiadm -m discoverydb -t sendtargets -p 127.0.0.1:54541 --discover
127.0.0.1:54541,-1 iqn.2495.com.xxx:AAA
Login is failing:
[root@ncdqd0110 ~]# iscsiadm --mode node --target iqn.2495.com.xxx:AAA --portal 127.0.0.1:54541 --login
Logging in to [iface: default, target: iqn.2495.com.xxx:AAA, portal: 127.0.0.1,54541] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2495.com.xxx:AAA, portal: 127.0.0.1,54541].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not log into all portals
This is happening on Red Hat (RHEL) 7.0. On SUSE it works fine.
The results of a few commands are given below:
[root@ncdqd0110 iqn.2495.com.xxx:AAA]# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.31.224.1 0.0.0.0 UG 0 0 0 eno16780032
10.31.224.0 0.0.0.0 255.255.252.0 U 0 0 0 eno16780032
[root@ncdqd0110 ~]# ip route show
default via 10.31.224.1 dev eno16780032 proto static metric 1024
10.31.224.0/22 dev eno16780032 proto kernel scope link src 10.31.227.110
BEGIN RECORD 6.2.0.873-21
node.name = iqn.2495.com.xxx:AAA
node.tpgt = -1
node.startup = automatic
node.leading_login = No
iface.net_ifacename = eno16780032
iface.iscsi_ifacename = default
iface.transport_name = tcp
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
iface.tos = 0
iface.ttl = 0
iface.tcp_wsf = 0
iface.tcp_timer_scale = 0
iface.def_task_mgmt_timeout = 0
iface.erl = 0
iface.max_receive_data_len = 0
iface.first_burst_len = 0
iface.max_outstanding_r2t = 0
iface.max_burst_len = 0
node.discovery_address = 127.0.0.1
node.discovery_port = 54541
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.nr_sessions = 1
node.session.auth.authmethod = None
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 127.0.0.1
node.conn[0].port = 54541
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
END RECORD
If anyone knows about this issue, please let me know how to fix it.
This appears to be an SELinux policy choice. The discovery session is running within iscsiadm, but iscsid is restricted in which ports it can connect to.
One option is to use the audit2why/audit2allow utils from policycoreutils to create a local policy module, extending the default system SELinux policy to allow this.
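A sketch of that workflow (the port type iscsi_port_t and the module name local_iscsid are assumptions to adapt to your system):

# See why the connection was denied
ausearch -m avc -c iscsid | audit2why

# Option 1: label the non-standard port so iscsid may connect to it
semanage port -a -t iscsi_port_t -p tcp 54541

# Option 2: generate and load a local policy module from the recorded denials
ausearch -m avc -c iscsid | audit2allow -M local_iscsid
semodule -i local_iscsid.pp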