'Failed to open TCP connection to 127.0.0.1:7058' when running selenium/capybara tests in parallel mode (using 8 threads) - multithreading

We ran cukes in parallel mode using 8 threads on Jenkins, but most of the cukes failed with the following error:
Error: (Connection refused - connect(2) for "127.0.0.1" port 7058) (Errno::ECONNREFUSED)
./features/support/hooks.rb:3:in `Before'
Parameters:
OS: Linux
Selenium: gem 'selenium-webdriver', '~> 2.53.4'
Capybara: gem 'capybara', '>= 2.10.0'
Browser: Firefox version 45.5.0
hooks.rb
1. Before do |scenario|
2.   Capybara.reset_sessions!
3.   page.driver.browser.manage.window.maximize
4.   page.driver.browser.manage.delete_all_cookies
5. end
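For illustration, a variant of the hook that rescues the refused connection and retries the same calls once would look like the sketch below. This is only a sketch, not the code we currently run, and it would not address the root cause of a dead driver, but it shows where the Errno::ECONNREFUSED is raised:
Before do |scenario|
  attempts = 0
  begin
    Capybara.reset_sessions!
    page.driver.browser.manage.window.maximize
    page.driver.browser.manage.delete_all_cookies
  rescue Errno::ECONNREFUSED => e
    # A transient refusal passes on the second attempt; a dead browser/driver
    # process fails the same way again and the error is re-raised.
    attempts += 1
    retry if attempts < 2
    raise e
  end
end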
env.rb
browser = ENV['BROWSER'] || 'ff'
case browser
when 'ff', 'firefox'
  Capybara.register_driver :selenium do |app|
    Selenium::WebDriver::Firefox::Binary.path = "/usr/bin/firefox" if REGISTRY[:local_path_for_selenium]
    profile = Selenium::WebDriver::Firefox::Profile.new
    profile.assume_untrusted_certificate_issuer = false
    profile.secure_ssl = false
    profile['browser.manage.timeouts.implicit_wait'] = 100
    profile['browser.manage.timeouts.script_timeout'] = 100
    profile['browser.manage.timeouts.read_timeout'] = 500
    profile['browser.manage.timeouts.page_load'] = 120
    profile['browser.download.folderList'] = 2
    profile['browser.download.dir'] = "#{Rails.root}/downloads"
    profile['browser.helperApps.neverAsk.saveToDisk'] = "application/xlsx"
    profile['browser.helperApps.neverAsk.openFile'] = "application/xlsx"
    http_client = Selenium::WebDriver::Remote::Http::Default.new
    http_client.timeout = 410
    Capybara::Selenium::Driver.new(app, :profile => profile, :http_client => http_client)
  end
end

Capybara.default_driver = :selenium
Any suggestion will be welcome. Thanks in advance for your time.
Regards,
Ajay

Related

Sparklyr gateway did not respond while retrieving ports

I am using sparklyr in a batch setup where multiple concurrent jobs, each with different parameters, arrive and are processed by the same sparklyr codebase. In certain seemingly random situations, I think under high load, the code gives the errors below.
I am seeking guidance on the best way to troubleshoot this (including understanding the architecture of the different components in the call chain). So besides pointing out any correction to the code used to establish the connection, any pointers to further reading will be appreciated.
Thanks.
Stack versions:
Spark version 2.3.2.3.1.5.6091-7
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_322)
SparklyR version: sparklyr-2.1-2.11.jar
Yarn Cluster: hdp-3.1.5
Error:
2022-03-03 23:00:09 | Connecting to SPARK ...
2022-03-03 23:02:19 | Couldn't connect to SPARK (Error). Error in force(code): Failed while connecting to sparklyr to port (10980) for sessionid (38361): Sparklyr gateway did not respond while retrieving ports information after 120 seconds
Path: /usr/hdp/3.1.5.6091-7/spark2/bin/spark-submit
Parameters: --driver-memory, 3G, --executor-memory, 3G, --keytab, /etc/security/keytabs/appuser.headless.keytab, --principal, appuser#myorg.com, --class, sparklyr.Shell, '/usr/lib64/R/library/sparklyr/java/sparklyr-2.1-2.11.jar', 10980, 38361
Log: /tmp/RtmpyhpKkv/file187b82097bb55_spark.log
---- Output Log ----
22/03/03 23:00:17 INFO sparklyr: Session (38361) is starting under 127.0.0.1 port 10980
22/03/03 23:00:17 INFO sparklyr: Session (38361) found port 10980 is available
22/03/03 23:00:17 INFO sparklyr: Gateway (38361) is waiting for sparklyr client to connect to port 10980
22/03/03 23:01:17 INFO sparklyr: Gateway (38361) is terminating backend since no client has connected after 60 seconds to 192.168.1.55/10980.
22/03/03 23:01:17 INFO ShutdownHookManager: Shutdown hook called
22/03/03 23:01:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-4fec5364-e440-41a8-87c4-b5e94472bb2f
---- Error Log ----
Connection code:
conf <- spark_config()
conf$spark.executor.memory <- "10G"
conf$spark.executor.cores <- 6
conf$spark.executor.instances <- 6
conf$spark.driver.memory <- "10g"
conf$spark.driver.memoryOverhead <-"3g"
conf$spark.shuffle.service.enabled <- "true"
conf$spark.port.maxRetries <- 125
conf$spark.sql.hive.convertMetastoreOrc <- "true"
conf$spark.local.dir = '/var/log/myapp/sparkjobs'
conf$'sparklyr.shell.driver-memory' <- "3G"
conf$'sparklyr.shell.executor-memory' <- "3G"
conf$spark.serializer <- "org.apache.spark.serializer.KryoSerializer"
conf$hive.metastore.uris = configs$DEFAULT$HIVE_METASTORE_URL
conf$spark.sql.session.timeZone <- "UTC"
# fix as per cloudera suggestion for future timeout issue
conf$spark.sql.broadcastTimeout <- 1200
conf$sparklyr.shell.keytab = "/etc/security/keytabs/appuser.headless.keytab"
conf$sparklyr.shell.principal = "appuser#myorg.com"
conf$spark.yarn.keytab= "/etc/security/keytabs/appuser.headless.keytab"
conf$spark.yarn.principal= "appuser#myorg.com"
conf$spark.sql.catalogImplementation <- "hive"
conf$sparklyr.gateway.config.retries <- 10
conf$sparklyr.connect.timeout <- 120
conf$sparklyr.gateway.port.query.attempts <- 10
conf$sparklyr.gateway.port.query.retry.interval.seconds <- 60
conf$sparklyr.gateway.port <- 10090 + round(runif(1, 1, 1000))
tryCatch(
  {
    logging(paste0("Connecting to SPARK ... "))
    # withTimeout() and the TimeoutException class presumably come from the R.utils package
    withTimeout({
      sc <- spark_connect(master = "yarn-client",
                          spark_home = eval(SPARK_HOME_PATH),
                          version = "2.1.0",
                          app_name = "myjobname",
                          config = conf)
    }, timeout = 540)
    if (!is.null(sc)) {
      return(sc)
    }
  },
  TimeoutException = function(ex) {
    logging(paste0("Couldn't connect to SPARK (Timed out). ", ex))
    stop("Timeout occurred")
  },
  error = function(err) {
    logging(paste0("Couldn't connect to SPARK (Error). ", err))
    stop("Exception occurred")
  }
)

Access rejected by localhost in freeradius

I am not able to get the radtest command to authenticate and I can't figure out what the issue is; I keep getting this error:
rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=82, length=20
Here is the execution:
rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=75, length=20
root#localhost:/etc/freeradius# radtest testing password 127.0.0.1 0 testing123
Sending Access-Request of id 82 to 127.0.0.1 port 1812
User-Name = "testing"
User-Password = "password"
NAS-IP-Address = 127.0.0.1
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=82, length=20
Here is the debug output:
+group authorize {
++[preprocess] = ok
++policy rewrite_calling_station_id {
+++? if (Calling-Station-Id =~ /([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})/i)
(Attribute Calling-Station-Id was not found)
? Evaluating (Calling-Station-Id =~ /([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})/i) -> FALSE
+++? if (Calling-Station-Id =~ /([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})[-:.]?([0-9a-f]{2})[-:]?([0-9a-f]{2})/i) -> FALSE
+++else else {
++++[noop] = noop
+++} # else else = noop
++} # policy rewrite_calling_station_id = noop
[authorized_macs] expand: %{Calling-Station-ID} ->
++[authorized_macs] = noop
++? if (!ok)
? Evaluating !(ok) -> TRUE
++? if (!ok) -> TRUE
++if (!ok) {
+++[reject] = reject
++} # if (!ok) = reject
+} # group authorize = reject
Using Post-Auth-Type Reject
# Executing group from file /etc/freeradius/sites-enabled/default
+group REJECT {
[eap] Request didn't contain an EAP-Message, not inserting EAP-Failure
++[eap] = noop
[attr_filter.access_reject] expand: %{User-Name} -> testing
attr_filter: Matched entry DEFAULT at line 11
++[attr_filter.access_reject] = updated
+} # group REJECT = updated
Delaying reject of request 2 for 1 seconds
Going to the next request
Waking up in 0.9 seconds.
Sending delayed reject for request 2
Sending Access-Reject of id 82 to 127.0.0.1 port 55664
Waking up in 4.9 seconds.
Cleaning up request 2 ID 82 with timestamp +269
Ready to process requests.
This is my users file:
testing Cleartext-Password := "password"

Problem with puppet tagmail puppetlabs module

I'm using Puppet 6.14.0 and the tagmail module 3.2.0 on CentOS 7.
Below is my config on the master:
[master]
dns_alt_names=*******
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puippetlabs/code
confdir = /etc/puppetlabs/puppet
reports = puppetdb,console,tagmail
tagmap = $confdir/tagmail.conf
tagmail.conf (using a local SMTP server; I'm able to telnet to it):
[transport]
reportfrom = **********
smtpserver = localhost
smtpport = 25
smtphelo = localhost
[tagmap]
all: my_email_address
And below is my config on one managed node:
[main]
certname = *********
server = *********
environment =uat
runinterval = 120
[agent]
report = true
pluginsync = true
But I'm not receiving any report from tagmail.
Is someone having the same problem, or am I missing something in my config?

puppet ruby gem nullpo segfault

Why does the pg gem crash when called inside the Puppet environment, but not when called in the plain Ruby environment? What have I missed?
Server: Ubuntu 18.04 latest
Puppet 5.5 masterless
postgres 10
$ /opt/puppetlabs/puppet/bin/gem install pg
Fetching: pg-1.1.4.gem (100%)
Building native extensions. This could take a while...
Successfully installed pg-1.1.4
Parsing documentation for pg-1.1.4
Installing ri documentation for pg-1.1.4
Done installing documentation for pg after 0 seconds
1 gem installed
$ run-puppet
/opt/puppetlabs/puppet/lib/ruby/gems/2.4.0/gems/pg-1.1.4/lib/pg.rb:56: [BUG] Segmentation fault at > 0x0000000000000000
ruby 2.4.9p362 (2019-10-02 revision 67824) [x86_64-linux]
Similar code snippet that actually tests the connection between the Ruby env and host services:
require ('pg')
conn = PG.connect( dbname: "puppetdb", password: 'password2' , host: 'localhost', user: 'user2' )
conn.close()
#
Puppet::Functions.create_function(:lookup_ssl) do
  begin
    require 'pg'
  rescue LoadError
    raise Puppet::DataBinding::LookupError, "Error loading pg gem library."
  end

  dispatch :up do
    param 'Hash', :options
    param 'Hash', :search
  end

  def up(options, search)
    data = []
    conn = PG.connect(dbname: options['database'], password: options['password'], host: options['host'], user: options['user'])
    conn.close()
    return data
  end
end
My friend had this issue and resolved it by using Postgres 9.6 and reinstalling pg with this command:
gem install pg -- --with-pg-config=POSTGRES_PATH/9.6/bin/pg_config
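To check whether this is the same kind of libpq mismatch, a small diagnostic like the sketch below could be run with both interpreters. This is only an illustrative sketch; the connection parameters are copied from the test snippet above, and PG::VERSION, PG.library_version and PG::Connection#server_version are standard pg gem APIs:
# check_pg.rb -- diagnostic sketch; credentials reuse the test snippet above
require 'pg'

puts "pg gem version: #{PG::VERSION}"
puts "libpq version:  #{PG.library_version}"   # integer, e.g. a 100000-series value for PostgreSQL 10
conn = PG.connect(dbname: 'puppetdb', password: 'password2', host: 'localhost', user: 'user2')
puts "server version: #{conn.server_version}"
conn.close
Running it once as /opt/puppetlabs/puppet/bin/ruby check_pg.rb and once with the system ruby shows whether Puppet's bundled Ruby links against a different libpq, which would be consistent with the Postgres 9.6 pg_config workaround above.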

No value from Hiera in Puppet manifests when Foreman is installed

If I try to get data for a module using calling_class, the data doesn't reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"           # node settings
  - "%{::environment}/profile/%{calling_class}" # profile settings
  - "%{::environment}/%{::environment}"         # environment settings
  - "%{::environment}/%{::osfamily}"            # osfamily settings
  - common                                      # common settings
:yaml:
  :datadir: '/etc/puppet/hiera'
/etc/puppet/hiera/production/profile/common.yaml
profile::common::directory_hierarchy:
  - "C:\\SiteName"
  - "C:\\SiteName\\Config"
profile::common::system: "common"
And the profile module manifest /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
  $directory_hierarchy = undef,
  $system              = undef
) {
  notify { "Dir is- $directory_hierarchy my fqdn is $fqdn, system = $system": }
}
Puppet config /etc/puppet/puppet.conf:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
privatekeydir = $ssldir/private_keys { group = service }
hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
autosign = $confdir/autosign.conf { mode = 664 }
show_diff = false
hiera_config = $confdir/hiera.yaml
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
default_schedules = false
report = true
pluginsync = true
masterport = 8140
environment = production
certname = puppet024.novalocal
server = puppet024.novalocal
listen = false
splay = false
splaylimit = 1800
runinterval = 1800
noop = false
configtimeout = 120
usecacheonfailure = true
[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = puppet024.novalocal
strict_variables = false
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugin?
You need to have an environment (production in your sample) folder structure as below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production/*.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the environment path you pasted is also wrong:
/etc/puppet/hiera/production/profile/common.yaml
Side notes
At first view, you shouldn't mix hieradata with the module path, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is one of these three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile
