I am migrating from Jetty 9.0 to 9.4.
Apparently, the following method from SslContextFactory was removed in 9.3.0:
public void checkKeyStore()
I checked the API documentation but found no replacement for it; it was never deprecated in 9.2.x and was removed outright in 9.3.0.
I am not a security expert, and I am not sure how to work around this in 9.4.x. Here is my code:
private void configureHttps(Server server, int httpsPort, JettyAppConfig config, HttpConfiguration httpConfig)
        throws Exception {
    boolean shouldStartHttpsPort = false;
    SslContextFactory sslContextFactory = createJettySslContextFactory();
    if (sslContextFactory != null) {
        shouldStartHttpsPort = true;
        try {
            sslContextFactory.checkKeyStore(); // NEED REPLACEMENT in 9.4
        } catch (IllegalStateException e) {
            logger.debug("keystore check failed", e);
            shouldStartHttpsPort = false;
        }
    }
    if (shouldStartHttpsPort) {
        HttpConfiguration httpsConfig = new HttpConfiguration(httpConfig);
        httpsConfig.addCustomizer(new SecureRequestCustomizer());
        ServerConnector httpsConnector = new ServerConnector(server,
                new SslConnectionFactory(sslContextFactory, HttpVersion.HTTP_1_1.asString()),
                new HttpConnectionFactory(httpsConfig));
        httpsConnector.setPort(httpsPort);
        httpsConnector.setIdleTimeout(config.getConnectionIdleTimeout());
        server.addConnector(httpsConnector);
    } else {
        logger.info("Keystore not configured, not starting HTTPS");
    }
}
The role of checkKeyStore() in Jetty 9.0.0 through 9.2.x was only to attempt to load the keystore from disk, nothing more. Any other impact on SslContextFactory would be considered a bug.
Here's an alternate implementation for you.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.eclipse.jetty.util.IO;
import org.eclipse.jetty.util.StringUtil;
import org.eclipse.jetty.util.ssl.SslContextFactory;
private static void check(SslContextFactory ssl) throws IOException
{
    if (ssl.getKeyStore() == null) // if already loaded, it's too late to check
    {
        if (StringUtil.isNotBlank(ssl.getKeyStorePath())) // no path configured means nothing to check
        {
            Path keystorePath = Paths.get(ssl.getKeyStorePath());
            try (InputStream inputStream = Files.newInputStream(keystorePath);
                 OutputStream outputStream = new ByteArrayOutputStream())
            {
                IO.copy(inputStream, outputStream);
            }
        }
    }
}
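A drop-in usage sketch at the call site from your question (same logger and flag names assumed):

try {
    check(sslContextFactory); // replaces the removed checkKeyStore() call
} catch (IOException e) {
    logger.debug("keystore check failed", e);
    shouldStartHttpsPort = false;
}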
The method checkKeyStore() was removed in April 2015 when support for SNI was introduced in Jetty 9.3.0, which brought the ability to have hierarchies of SslContextFactory implementations.
The behavior that checkKeyStore() provided was moved into the loadKeyStore() method, which is always called during doStart() of the SslContextFactory.
In short, in Jetty 9.0.0 checkKeyStore() always ran, and in Jetty 9.4.18 its behavior still always runs, just in a different place. The new code also checks the truststore, your SNI setup, your cipher suite selections, your protocol selections, etc. The new checks do much more than the old one did.
Since you are using embedded Jetty, consider not checking the SslContextFactory at all and instead letting the lifecycle fail on the server.start() call, as sketched below.
Note: for WebAppContext specifically, make sure you call setThrowUnavailableOnStartupException(true) to allow it to report failures up the lifecycle.
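A minimal sketch of that approach (the fallback step is an assumption, not part of the original advice):

try {
    server.start(); // a broken keystore, or any other bad TLS setting, fails here
} catch (Exception e) {
    logger.warn("server failed to start, likely a TLS misconfiguration", e);
    server.stop();
    // e.g. remove the HTTPS connector and start again with HTTP only
}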
I just configured authentication in IgniteDB (a specific server, not localhost), following
https://apacheignite.readme.io/docs/advanced-security
However, I encountered an issue while trying to connect. Where should I provide the credentials?
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
String ipList = appConfig.getIgniteIPAddressList();
List<String> addressList= Arrays.asList(ipList.split(";"));
ipFinder.setAddresses(addressList);
spi.setIpFinder(ipFinder);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("IgnitePod");
cfg.setClientMode(true);
cfg.setDiscoverySpi(spi);
Ignite ignite = Ignition.start(cfg);
Does anybody have an idea how to implement this?
https://apacheignite.readme.io/docs/advanced-security describes how to configure authentication via username and password for THIN connections only (JDBC, ODBC).
You can create users with SQL commands, as described here:
https://apacheignite-sql.readme.io/docs/create-user
You can provide credentials to the thin client connection string using its properties:
https://apacheignite-sql.readme.io/docs/connection-string-and-dsn#section-supported-arguments
https://apacheignite-sql.readme.io/docs/jdbc-driver#section-additional-connection-string-examples
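For example, here is a minimal JDBC thin client sketch (the host is a placeholder; ignite/ignite is the default superuser that exists once authentication and persistence are enabled; quoting the username preserves its case):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ThinClientAuthExample {
    public static void main(String[] args) throws Exception {
        // The thin JDBC driver ships in ignite-core.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        // Connect as the default superuser and create an application user.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:ignite:thin://my-ignite-host/?user=ignite&password=ignite");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE USER \"appuser\" WITH PASSWORD 'secret'");
        }

        // Later connections authenticate as the new user.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:ignite:thin://my-ignite-host/?user=appuser&password=secret")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}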
Please also check that you have Ignite persistence configured.
As Andrei notes, Ignite only authenticates thin clients by default, and even then only when persistence is enabled. If you need to have thick-clients authenticate also, you can do this using a plugin. Third-party, commercial solutions also exist.
Apache Ignite does not provide these kinds of security capabilities in its open-source version. One can either implement them on one's own or use the commercial GridGain distribution.
Here are the steps to implement a custom security plugin.
You need to implement GridSecurityProcessor, which is used to authenticate joining nodes.
In GridSecurityProcessor, implement the authenticateNode() API as follows:
public SecurityContext authenticateNode(ClusterNode node, SecurityCredentials cred) throws IgniteCheckedException {
    SecurityCredentials userSecurityCredentials;
    if (securityPluginConfiguration != null) {
        // Credentials are configured: the joining node must present a matching pair.
        if ((userSecurityCredentials = securityPluginConfiguration.getSecurityCredentials()) != null) {
            return userSecurityCredentials.equals(cred) ? new SecurityContextImpl() : null;
        }
        // No credentials configured and none presented: allow the node in.
        if (cred == null && userSecurityCredentials == null) {
            return new SecurityContextImpl();
        }
    }
    if (cred == null)
        return new SecurityContextImpl();
    return null; // returning null rejects the joining node
}
Also, you need to extend TcpDiscoverySpi to pass the user credentials during initLocalNode(), as follows:
@Override
protected void initLocalNode(int srvPort, boolean addExtAddrAttr) {
    try {
        super.initLocalNode(srvPort, addExtAddrAttr);
        this.setSecurityCredentials();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

private void setSecurityCredentials() {
    if (securityCredentials != null) {
        Map<String, Object> attributes = new HashMap<>(locNode.getAttributes());
        attributes.put(IgniteNodeAttributes.ATTR_SECURITY_CREDENTIALS, securityCredentials);
        this.locNode.setAttributes(attributes);
    }
}
The link below gives detailed steps for writing a custom security plugin and using it:
https://www.bugdbug.com/post/how-to-secure-apache-ignite-cluster
I was able to solve my own problem by creating my own CustomTcpDiscoverySpi.
First, create this class:
import org.apache.ignite.IgniteException;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.internal.IgniteNodeAttributes;
import org.apache.ignite.internal.processors.security.SecurityContext;
import org.apache.ignite.lang.IgniteProductVersion;
import org.apache.ignite.plugin.security.SecurityCredentials;
import org.apache.ignite.spi.discovery.DiscoverySpiNodeAuthenticator;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import java.util.Map;
public class CustomTcpDiscoverySpi extends TcpDiscoverySpi implements DiscoverySpiNodeAuthenticator {

    SecurityCredentials securityCredentials;

    public CustomTcpDiscoverySpi(final SecurityCredentials securityCredentials) {
        this.securityCredentials = securityCredentials;
        this.setAuthenticator(this);
    }

    @Override
    public SecurityContext authenticateNode(ClusterNode clusterNode, SecurityCredentials securityCredentials) throws IgniteException {
        return null;
    }

    @Override
    public boolean isGlobalNodeAuthentication() {
        return true;
    }

    @Override
    public void setNodeAttributes(final Map<String, Object> attrs, final IgniteProductVersion ver) {
        attrs.put(IgniteNodeAttributes.ATTR_SECURITY_CREDENTIALS, this.securityCredentials);
        super.setNodeAttributes(attrs, ver);
    }
}
And then use it like this:
SecurityCredentials cred = new SecurityCredentials();
cred.setLogin(appConfig.getIgniteUser());
cred.setPassword(appConfig.getIgnitePassword());
CustomTcpDiscoverySpi spi = new CustomTcpDiscoverySpi(cred);
// TcpDiscoverySpi spi = new TcpDiscoverySpi(); -> replaced by CustomTcpDiscoverySpi
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
String ipList = appConfig.getIgniteIPAddressList();
List<String> addressList = Arrays.asList(ipList.split(";"));
ipFinder.setAddresses(addressList);
spi.setIpFinder(ipFinder);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("IgnitePod");
cfg.setClientMode(true);
cfg.setAuthenticationEnabled(true);
// Ignite persistence configuration.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
// Enabling the persistence.
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
// Applying settings.
cfg.setDataStorageConfiguration(storageCfg);
cfg.setDiscoverySpi(spi);
Ignite ignite = Ignition.start(cfg);
Hope this helps other people who are stuck with the same problem.
The only option for peer-authenticating server nodes that is available in vanilla Apache Ignite is SSL + certificates.
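For reference, a minimal sketch of that SSL approach (keystore paths and passwords are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class SslNodeAuthExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        SslContextFactory sslFactory = new SslContextFactory();
        // Each node presents its own certificate and trusts the cluster CA,
        // so only nodes with a trusted certificate can join.
        sslFactory.setKeyStoreFilePath("keystore/node.jks");
        sslFactory.setKeyStorePassword("changeit".toCharArray());
        sslFactory.setTrustStoreFilePath("keystore/trust.jks");
        sslFactory.setTrustStorePassword("changeit".toCharArray());
        cfg.setSslContextFactory(sslFactory);
        Ignite ignite = Ignition.start(cfg);
    }
}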
I am new to Spring Integration and new to Stack Overflow. I am looking for some help in understanding Spring Integration as it relates to a request-reply pattern. From reading on the web, I am thinking that I should be using a Service Activator to enable this type of use case.
I am using JMS to facilitate the sending and receiving of XML-based messages. Our underlying implementation is IBM WebSphere MQ.
I am also using Spring Boot (version 1.3.6.RELEASE) and attempting to use a pure annotation-based configuration approach (if that is possible). I have searched the web and seen some examples, but nothing so far that helps me understand how it all fits together. The Spring Integration documentation is excellent, but I am still struggling with how all the pieces fit. I apologize in advance if there is something out there that I missed; I treat posting here as a last resort.
Here is what I have for my configuration:
package com.daluga.spring.integration.configuration;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueue;
import com.ibm.msg.client.wmq.WMQConstants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.JMSException;
//import com.ibm.msg.client.services.Trace;
@Configuration
public class MQConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(MQConfiguration.class);

    @Value("${host-name}")
    private String hostName;

    @Value("${port}")
    private int port;

    @Value("${channel}")
    private String channel;

    @Value("${time-to-live}")
    private int timeToLive;

    @Autowired
    @Qualifier("MQConnectionFactory")
    ConnectionFactory connectionFactory;

    @Bean(name = "jmsTemplate")
    public JmsTemplate provideJmsTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.setExplicitQosEnabled(true);
        jmsTemplate.setTimeToLive(timeToLive);
        jmsTemplate.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        return jmsTemplate;
    }

    @Bean(name = "MQConnectionFactory")
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory ccf = new CachingConnectionFactory();
        //Trace.setOn();
        try {
            MQConnectionFactory mqcf = new MQConnectionFactory();
            mqcf.setHostName(hostName);
            mqcf.setPort(port);
            mqcf.setChannel(channel);
            mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            ccf.setTargetConnectionFactory(mqcf);
            ccf.setSessionCacheSize(2);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
        return ccf;
    }

    @Bean(name = "requestQueue")
    public Destination createRequestQueue() {
        Destination queue = null;
        try {
            queue = new MQQueue("REQUEST.QUEUE");
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
        return queue;
    }

    @Bean(name = "replyQueue")
    public Destination createReplyQueue() {
        Destination queue = null;
        try {
            queue = new MQQueue("REPLY.QUEUE");
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
        return queue;
    }

    @Bean(name = "requestChannel")
    public QueueChannel createRequestChannel() {
        QueueChannel channel = new QueueChannel();
        return channel;
    }

    @Bean(name = "replyChannel")
    public QueueChannel createReplyChannel() {
        QueueChannel channel = new QueueChannel();
        return channel;
    }
}
And here is my Service class:
package com.daluga.spring.integration.service;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.stereotype.Service;
@Service
public class MyRequestReplyService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyRequestReplyService.class);

    @ServiceActivator(inputChannel = "replyChannel")
    public void sendAndReceive(String requestPayload) {
        // How to get replyPayload?
    }
}
So, at this point, I am not quite sure how to glue all this together to make it work. I don't understand how to wire my request and reply queues to the service activator.
The service I am calling (JMS/WebSphere MQ based) uses the typical message and correlation IDs so that I can properly tie each request to its corresponding response.
Can anyone provide me any guidance on how to get this to work? Please let me know what additional information I can provide to make this clear.
Thanks in advance for your help!
Dan
Gateways provide request/reply semantics.
Instead of using a JmsTemplate directly, you should be using Spring Integration's built-in JMS Support.
@Bean
@ServiceActivator(inputChannel = "requestChannel")
public MessageHandler jmsOutGateway() {
    JmsOutboundGateway outGateway = new JmsOutboundGateway();
    // At minimum, point the gateway at the connection factory and destinations
    // (bean methods from the question's MQConfiguration are assumed here);
    // the gateway manages the correlation ID and reply consumer for you.
    outGateway.setConnectionFactory(connectionFactory);
    outGateway.setRequestDestination(createRequestQueue());
    outGateway.setReplyDestination(createReplyQueue());
    outGateway.setOutputChannel(createReplyChannel());
    return outGateway;
}
If you want to roll your own, change the service activator method to return a reply type and use one of the template's sendAndReceive() or convertSendAndReceive() methods, as sketched below.
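A minimal sketch of that roll-your-own variant (assuming the jmsTemplate and requestQueue beans from your configuration are injected; JmsTemplate.sendAndReceive() manages the temporary reply queue and correlation ID under the covers):

@ServiceActivator(inputChannel = "requestChannel")
public String sendAndReceive(String requestPayload) throws JMSException {
    // Sends to the request queue and blocks for the correlated reply.
    Message reply = jmsTemplate.sendAndReceive(requestQueue,
            session -> session.createTextMessage(requestPayload));
    if (reply == null) {
        throw new IllegalStateException("no reply within the receive timeout");
    }
    // The returned value becomes the payload sent to the activator's output channel.
    return ((TextMessage) reply).getText();
}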
The sample app uses XML configuration but should provide some additional guidance.
I am trying to use Apache Commons Configuration 2 in my codebase:
import java.io.File;
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ConfigurationBuilderEvent;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.convert.DefaultListDelimiterHandler;
import org.apache.commons.configuration2.event.EventListener;
import org.apache.commons.configuration2.ex.ConfigurationException;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;
import org.apache.commons.configuration2.CompositeConfiguration;
public class Test {

    private static final long DELAY_MILLIS = 10 * 60 * 5;

    public static void main(String[] args) {
        CompositeConfiguration compositeConfiguration = new CompositeConfiguration();
        PropertiesConfiguration props = null;
        try {
            props = initPropertiesConfiguration(new File("/tmp/DEV.properties"));
        } catch (ConfigurationException e) {
            e.printStackTrace();
        }
        compositeConfiguration.addConfiguration(props);
        compositeConfiguration.addEventListener(ConfigurationBuilderEvent.ANY,
                new EventListener<ConfigurationBuilderEvent>() {
                    @Override
                    public void onEvent(ConfigurationBuilderEvent event) {
                        System.out.println("Event:" + event);
                    }
                });
        System.out.println(compositeConfiguration.getString("property1"));
        try {
            Thread.sleep(14 * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // Have a script which changes the value of property1 in DEV.properties
        System.out.println(compositeConfiguration.getString("property1"));
    }

    protected static PropertiesConfiguration initPropertiesConfiguration(File propsFile) throws ConfigurationException {
        if (propsFile.exists()) {
            final ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder =
                    new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
                            .configure(new Parameters().fileBased()
                                    .setFile(propsFile)
                                    .setReloadingRefreshDelay(DELAY_MILLIS)
                                    .setThrowExceptionOnMissing(false)
                                    .setListDelimiterHandler(new DefaultListDelimiterHandler(';')));
            final PropertiesConfiguration propsConfiguration = builder.getConfiguration();
            PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(builder.getReloadingController(),
                    null, 1, TimeUnit.SECONDS);
            trigger.start();
            return propsConfiguration;
        } else {
            return new PropertiesConfiguration();
        }
    }
}
Here is sample code I am using to check whether automatic reloading works. However, when the underlying property file is updated, the configuration does not reflect the change.
As per the documentation:
One important point to keep in mind when using this approach to reloading is that reloads are only functional if the builder is used as central component for accessing configuration data. The configuration instance obtained from the builder will not change automagically! So if an application fetches a configuration object from the builder at startup and then uses it throughout its life time, changes on the external configuration file become never visible. The correct approach is to keep a reference to the builder centrally and obtain the configuration from there every time configuration data is needed.
https://commons.apache.org/proper/commons-configuration/userguide/howto_reloading.html#Reloading_File-based_Configurations
This is different from how the old implementation behaved.
I was able to successfully execute your sample code by making two changes:
1. Make the builder available globally and access the configuration from the builder:
System.out.println(builder.getConfiguration().getString("property1"));
2. Add the listener to the builder:
builder.addEventListener(ConfigurationBuilderEvent.ANY, new EventListener<ConfigurationBuilderEvent>() {
    @Override
    public void onEvent(ConfigurationBuilderEvent event) {
        System.out.println("Event:" + event);
    }
});
Here is my sample program, which demonstrates it successfully:
import java.io.File;
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ConfigurationBuilderEvent;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.event.EventListener;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;
public class TestDynamicProps {

    public static void main(String[] args) throws Exception {
        Parameters params = new Parameters();
        ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder =
                new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
                        .configure(params.fileBased()
                                .setFile(new File("src/main/resources/override.properties")));
        PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(builder.getReloadingController(),
                null, 1, TimeUnit.SECONDS);
        trigger.start();
        builder.addEventListener(ConfigurationBuilderEvent.ANY, new EventListener<ConfigurationBuilderEvent>() {
            @Override
            public void onEvent(ConfigurationBuilderEvent event) {
                System.out.println("Event:" + event);
            }
        });
        while (true) {
            Thread.sleep(1000);
            System.out.println(builder.getConfiguration().getString("property1"));
        }
    }
}
The problem with your implementation is that the reloading is done on the ReloadingFileBasedConfigurationBuilder object and is never propagated to the PropertiesConfiguration object you stored.
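A minimal sketch of keeping the builder central (the class and method names here are illustrative): callers always read through the holder, so every lookup sees the freshest data after a reload.

import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.ex.ConfigurationException;

public class ReloadingConfigHolder {

    private final ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder;

    public ReloadingConfigHolder(ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder) {
        this.builder = builder;
    }

    // Always go back to the builder: after a reload it hands out a fresh instance.
    public String getString(String key) {
        try {
            return builder.getConfiguration().getString(key);
        } catch (ConfigurationException e) {
            throw new IllegalStateException("could not load configuration", e);
        }
    }
}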
I want to close the browsers after completion of all tests. The problem is that I am not able to close them: after a test completes, the ThreadLocal does not return the driver that was created; the value returned is null.
Below is my current code:
package demo;
import java.lang.reflect.Method;
import org.openqa.selenium.By;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
public class ParallelMethodTest {

    private static ThreadLocal<dummy> driver;
    private int input;
    private int length;

    @BeforeMethod
    public void beforeMethod() {
        System.err.println("Before ID" + Thread.currentThread().getId());
        System.setProperty("webdriver.chrome.driver", "chromedriver.exe");
        if (driver == null) {
            driver = new ThreadLocal<dummy>();
        }
        if (driver.get() == null) {
            driver.set(new dummy());
        }
    }

    @DataProvider(name = "sessionDataProvider", parallel = true)
    public static Object[][] sessionDataProvider(Method method) {
        int len = 12;
        Object[][] parameters = new Object[len][2];
        for (int i = 0; i < len; i++) {
            parameters[i][0] = i;
            parameters[i][1] = len;
        }
        return parameters;
    }

    @Test(dataProvider = "sessionDataProvider")
    public void executSessionOne(int input, int length) {
        System.err.println("Test ID---" + Thread.currentThread().getId());
        this.input = input;
        this.length = length;
        // First session of WebDriver
        // find user name text box and fill it
        System.out.println("Parameter size is:" + length);
        driver.get().getDriver().findElement(By.name("q")).sendKeys(input + "");
        System.out.println("Input is:" + input);
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @AfterMethod
    public void afterMethod() {
        System.err.println("After ID" + Thread.currentThread().getId());
        driver.get().close();
    }
}
package demo;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
public class dummy {

    private WebDriver newDriver;

    public dummy() {
        newDriver = new ChromeDriver();
        newDriver.get("https://www.google.co.in/");
    }

    public WebDriver getDriver() {
        return newDriver;
    }

    public void setNewDriver(WebDriver newDriver) {
        this.newDriver = newDriver;
    }

    @AfterClass
    public void close() {
        if (newDriver != null) {
            System.out.println("In After Class");
            newDriver.quit();
        }
    }
}
Thanks in Advance.
private static ThreadLocal<dummy> driver is declared at the class level. What is happening is that you have already declared the variable at class level, i.e. memory is already allocated to it, and multiple threads are just setting and resetting the values of the same variable.
What you need to do is create a factory that returns an instance of the driver based on a parameter you pass to it. The logic can be anything, but taking a general use case as an example, the factory creates and returns a new object only if an existing object doesn't exist. Declare and initialize the driver (from the factory) in your @Test methods.
Sample code for the factory would look something like this:
static RemoteWebDriver firefoxDriver;
static RemoteWebDriver someOtherDriver;

static synchronized RemoteWebDriver getDriver(String browser, String browserVersion, String platform, String platformVersion)
        throws MalformedURLException {
    if (browser.equals("firefox")) {
        if (firefoxDriver == null) {
            DesiredCapabilities cloudCaps = new DesiredCapabilities();
            cloudCaps.setCapability("browser", browser);
            cloudCaps.setCapability("browser_version", browserVersion);
            cloudCaps.setCapability("os", platform);
            cloudCaps.setCapability("os_version", platformVersion);
            cloudCaps.setCapability("browserstack.debug", "true");
            cloudCaps.setCapability("browserstack.local", "true");
            firefoxDriver = new RemoteWebDriver(new URL(URL), cloudCaps);
        }
        return firefoxDriver;
    } else {
        if (someOtherDriver == null) {
            DesiredCapabilities cloudCaps = new DesiredCapabilities();
            cloudCaps.setCapability("browser", browser);
            cloudCaps.setCapability("browser_version", browserVersion);
            cloudCaps.setCapability("os", platform);
            cloudCaps.setCapability("os_version", platformVersion);
            cloudCaps.setCapability("browserstack.debug", "true");
            cloudCaps.setCapability("browserstack.local", "true");
            someOtherDriver = new RemoteWebDriver(new URL(URL), cloudCaps);
        }
        return someOtherDriver;
    }
}
You have a concurrency issue: multiple threads can each create a new ThreadLocal instance, because driver == null can evaluate to true on more than one thread when run in parallel. As a result, some threads execute driver.set(new dummy()), but then another thread replaces driver with a brand-new ThreadLocal instance, and those values are lost.
In my experience it is simpler and less error prone to always declare the ThreadLocal as static final, which ensures that multiple objects can access it (static) and that it is only defined once (final); see the sketch after the links below.
You can see my answers to the following Stack Overflow questions for related details and code samples:
How to avoid empty extra browser opens when running parallel tests with TestNG
Session not found exception with Selenium Web driver parallel execution of Data Provider test case
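A minimal sketch of that pattern, reusing the question's dummy wrapper (the initializer runs lazily, once per thread):

import org.openqa.selenium.By;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Test;

public class ParallelMethodTest {

    // Created exactly once for the class; withInitial(...) hands each
    // thread its own dummy instance on first access, with no race.
    private static final ThreadLocal<dummy> DRIVER = ThreadLocal.withInitial(dummy::new);

    @Test(dataProvider = "sessionDataProvider")
    public void executeSession(int input, int length) {
        DRIVER.get().getDriver().findElement(By.name("q")).sendKeys(input + "");
    }

    @AfterMethod
    public void afterMethod() {
        DRIVER.get().getDriver().quit(); // quit this thread's browser
        DRIVER.remove();                 // clear the mapping so the next test starts fresh
    }
}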
This is happening because you are creating the driver instance in the beforeMethod function, so its scope ends after the function ends.
So when your afterMethod starts, it gets null because the WebDriver instance was already destroyed once the beforeMethod function completed.
Refer to the links below:
http://www.java-made-easy.com/variable-scope.html
What is the default scope of a method in Java?
I've encountered a problem while applying a small library that sends email using a WildFly mail resource.
The idea behind the library is to provide a singleton with an asynchronous method to send emails.
In short, the service looks like this:
@Singleton
public class MailService {

    private static final String MIME_TYPE = "text/html; charset=utf-8";
    private static final Logger LOG = Logger.getLogger(MailService.class.getName());

    @Inject
    private Session session;

    @Asynchronous
    public void sendEmail(final EmailModel email) {
        try {
            MimeMessage message = new MimeMessage(session);
            if (email.normalRecipientsListIsEmpty()) {
                throw new RuntimeException("need destination address.");
            }
            message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(email.getNormalRecipients()));
            message.setRecipients(Message.RecipientType.CC, InternetAddress.parse(email.getCCRecipients()));
            message.setRecipients(Message.RecipientType.BCC, InternetAddress.parse(email.getBCCRecipients()));
            message.setSubject(email.getSubject());
            message.setContent(email.getContent(), MIME_TYPE);
            Transport.send(message);
        } catch (MessagingException e) {
            throw new RuntimeException("Failed to send email.", e);
        }
    }
}
The injected Session is produced in the project via a @Produces annotation on a field of a @Stateless service.
On Windows everything works fine. However, when deployed on WildFly running on Linux, there is a timeout exception with a message like "could not obtain a lock on method within 5000ms".
When I moved the whole code into the project, with no changes, everything started to work perfectly.
My question is: why is this happening? Is there a difference in implementation or in configuration somewhere? How can I fix it and move the code back to the library, where it can be reused in other projects?