sphinx-4 NullPointerException at startRecognition - cmusphinx

I'm trying to follow this tutorial, and it crashes on startup after lots of problems with the dictionary and models, such as:
The dictionary is missing a phonetic transcription for the word 'humphrey'
and
Dec 18, 2014 1:14:50 PM edu.cmu.sphinx.linguist.lextree.HMMTree addPronunciation
SEVERE: Missing HMM for unit T with lc=N rc=EH1
13:14:50.601 SEVERE lexTreeLinguist Bad HMM Unit: EH1
I loaded this dictionary, and got the language and acoustic models from their SourceForge page.
It then crashes with this:
Exception in thread "main" java.lang.NullPointerException
at edu.cmu.sphinx.linguist.lextree.HMMNode.getBaseUnit(HMMTree.java:506)
at edu.cmu.sphinx.linguist.lextree.HMMNode.<init>(HMMTree.java:484)
at edu.cmu.sphinx.linguist.lextree.Node.addSuccessor(HMMTree.java:165)
at edu.cmu.sphinx.linguist.lextree.HMMTree$EntryPoint.createEntryPointMap(HMMTree.java:1163)
at edu.cmu.sphinx.linguist.lextree.HMMTree$EntryPointTable.createEntryPointMaps(HMMTree.java:1021)
at edu.cmu.sphinx.linguist.lextree.HMMTree.compile(HMMTree.java:795)
at edu.cmu.sphinx.linguist.lextree.HMMTree.<init>(HMMTree.java:716)
at edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.generateHmmTree(LexTreeLinguist.java:433)
at edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.compileGrammar(LexTreeLinguist.java:420)
at edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.allocate(LexTreeLinguist.java:337)
at edu.cmu.sphinx.decoder.search.WordPruningBreadthFirstSearchManager.allocate(WordPruningBreadthFirstSearchManager.java:232)
at edu.cmu.sphinx.decoder.AbstractDecoder.allocate(AbstractDecoder.java:92)
at edu.cmu.sphinx.recognizer.Recognizer.allocate(Recognizer.java:167)
at edu.cmu.sphinx.api.LiveSpeechRecognizer.startRecognition(LiveSpeechRecognizer.java:46)
at com.test.sphinxtest.App.main(App.java:25)
Here's my code.
package com.test.sphinxtest;
import java.io.IOException;
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.LiveSpeechRecognizer;
import edu.cmu.sphinx.api.SpeechResult;
/**
 * Hello world!
 */
public class App
{
    public static void main(String[] args)
    {
        Configuration configuration = new Configuration();
        configuration.setAcousticModelPath("models/acousticmodel/en-us");
        configuration.setDictionaryPath("dictionary/cmudict-0.6d");
        configuration.setLanguageModelPath("models/languagemodel/en-us.lm");

        try {
            LiveSpeechRecognizer recognizer = new LiveSpeechRecognizer(configuration);
            recognizer.startRecognition(true);
            SpeechResult result = recognizer.getResult();
            recognizer.stopRecognition();

            while ((result = recognizer.getResult()) != null) {
                System.out.println(result.getHypothesis());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

The correct dictionary should not have stress marks; you can download it from here:
https://raw.githubusercontent.com/cmusphinx/pocketsphinx/master/model/en-us/cmudict-en-us.dict
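For reference, a minimal sketch of the resulting configuration; the dictionary path is an assumption, so point it at wherever you saved the downloaded file:

Configuration configuration = new Configuration();
configuration.setAcousticModelPath("models/acousticmodel/en-us");
configuration.setLanguageModelPath("models/languagemodel/en-us.lm");
// assumed location -- adjust to wherever cmudict-en-us.dict was saved
configuration.setDictionaryPath("dictionary/cmudict-en-us.dict");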

Related

Cassandra cluster is not scaling. 3 nodes are even a little faster than 6 nodes (code and data provided)

I am using DataStax Enterprise 4.8 for testing purposes in my bachelor thesis. I am loading weather data into the cluster (about 33 million rows).
The data looks something like the following:
//id;unix timestamp; validity; station info; temp in °C; humidity in %
3;1950040101;5;24; 5.7000;83.0000
3;1950040102;5;24; 5.6000;83.0000
3;1950040103;5;24; 5.5000;83.0000
I know my data model is not very clean (I use decimal for the timestamp but I just wanted to try it this way).
CREATE TABLE temp (
    id int,
    timestamp decimal,
    validity decimal,
    structure decimal,
    temperature float,
    humidity float,
    PRIMARY KEY ((id), timestamp)
);
I roughly based it on an article on the datastax website:
https://academy.datastax.com/resources/getting-started-time-series-data-modeling
The insertion is based on the often-mentioned article on lostechies:
https://lostechies.com/ryansvihla/2016/04/29/cassandra-batch-loading-without-the-batch%E2%80%8A-%E2%80%8Athe-nuanced-edition/
This is my insertion code:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.math.BigDecimal;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.datastax.driver.extras.codecs.jdk8.InstantCodec;
import com.google.common.base.Stopwatch;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;
public class BulkLoader {
    private final int threads;
    private final String[] contactHosts;
    private final Cluster cluster;
    private final Session session;
    private final ExecutorService executor;

    public BulkLoader(int threads, String... contactHosts) {
        this.threads = threads;
        this.contactHosts = contactHosts;
        this.cluster = Cluster.builder().addContactPoints(contactHosts).build();
        cluster.getConfiguration().getCodecRegistry()
                .register(InstantCodec.instance);
        session = cluster.newSession();
        // fixed thread pool that closes on app exit
        executor = MoreExecutors.getExitingExecutorService(
                (ThreadPoolExecutor) Executors.newFixedThreadPool(threads));
    }

    public static class IngestCallback implements FutureCallback<ResultSet> {
        public void onSuccess(ResultSet result) {
        }

        public void onFailure(Throwable t) {
            throw new RuntimeException(t);
        }
    }

    // binds each row to the prepared statement and fires it off asynchronously
    public void ingest(Iterator<Object[]> boundItemsIterator, String insertCQL)
            throws InterruptedException {
        final PreparedStatement statement = session.prepare(insertCQL);
        while (boundItemsIterator.hasNext()) {
            BoundStatement boundStatement = statement.bind(boundItemsIterator.next());
            boundStatement.setConsistencyLevel(ConsistencyLevel.QUORUM);
            ResultSetFuture future = session.executeAsync(boundStatement);
            Futures.addCallback(future, new IngestCallback(), executor);
        }
    }

    public void stop() {
        session.close();
        cluster.close();
    }

    // parses one semicolon-separated CSV file into rows of bind values
    public static List<Object[]> readCSV(File csv) {
        BufferedReader fileReader = null;
        List<Object[]> result = new LinkedList<Object[]>();
        try {
            fileReader = new BufferedReader(new FileReader(csv));
            String line = "";
            while ((line = fileReader.readLine()) != null) {
                String[] tokens = line.split(";");
                if (tokens.length < 6) {
                    System.out.println("Incomplete");
                    continue;
                }
                Object[] tmp = new Object[6];
                tmp[0] = Integer.parseInt(tokens[0]);
                tmp[1] = new BigDecimal(Integer.parseInt(tokens[1]));
                tmp[2] = new BigDecimal(Integer.parseInt(tokens[2]));
                tmp[3] = new BigDecimal(Integer.parseInt(tokens[3]));
                tmp[4] = Float.parseFloat(tokens[4]);
                tmp[5] = Float.parseFloat(tokens[5]);
                result.add(tmp);
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (fileReader != null) {
                    fileReader.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Stopwatch watch = Stopwatch.createStarted();
        File folder = new File(
                "C:/VirtualMachines/Kiosk/BachelorarbeitStraubinger/workspace/bulk/src/main/resources");
        List<Object[]> data = new LinkedList<Object[]>();
        BulkLoader loader = new BulkLoader(16, "10.2.57.38", "10.2.57.37",
                "10.2.57.36", "10.2.57.35", "10.2.57.34", "10.2.57.33");
        int cnt = 0;

        File[] listOfFiles = folder.listFiles();
        for (File file : listOfFiles) {
            if (file.isFile() && file.getName().contains(".th")) {
                data = readCSV(file);
                cnt += data.size();
                try {
                    loader.ingest(
                            data.iterator(),
                            "INSERT INTO wheather.temp (id, timestamp, validity, structure, temperature, humidity) VALUES(?,?,?,?,?,?)");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    System.out.println(file.getName() + " -> Datasets imported: " + cnt);
                }
            }
        }
        System.out.println("total time seconds = " + watch.elapsed(TimeUnit.SECONDS));
        watch.stop();
        loader.stop();
    }
}
The replication factor is 3, and I run the test on 6 or 3 nodes, with vnodes enabled and num_tokens = 256.
I get roughly the same insert times when running it on either cluster. Any ideas why that is?
It is likely that you're maxing out the client application / client server. If you're reading from a static file, you may benefit from breaking it up into a few pieces and running them in parallel, or even looking at Brian Hess' loader ( https://github.com/brianmhess/cassandra-loader ) or the real cassandra bulk loader ( http://www.datastax.com/dev/blog/using-the-cassandra-bulk-loader-updated ) , which turns the data into a series of sstables and streams those in directly. Both are likely faster than your existing code.
Physics.
You're probably maxing out the throughput your app is capable of. Normally the answer would be to have multiple clients/app servers, but it looks like you are reading from a CSV. I suggest either cutting the CSV into pieces and running multiple instances of your app, or generating fake data and running multiple instances against that.
Edit: I also think it's worth noting that with a data model like that, a payload size that small, and proper hardware, I'd imagine each node could be capable of 15-20K inserts/second (not accounting for node density/compaction).
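If the client does turn out to be the bottleneck, one thing worth checking first is whether the ingest loop is overwhelming itself: the code above issues executeAsync calls with no bound on in-flight requests. The lostechies article linked in the question bounds them with a Semaphore; below is a minimal sketch of that variant of ingest(). The cap of 256 permits is an assumption to tune, session/executor are the existing BulkLoader fields, and java.util.concurrent.Semaphore must be imported.

public void ingest(Iterator<Object[]> boundItemsIterator, String insertCQL)
        throws InterruptedException {
    final PreparedStatement statement = session.prepare(insertCQL);
    final Semaphore inFlight = new Semaphore(256); // assumed cap on outstanding writes; tune per client
    while (boundItemsIterator.hasNext()) {
        BoundStatement boundStatement = statement.bind(boundItemsIterator.next());
        boundStatement.setConsistencyLevel(ConsistencyLevel.QUORUM);
        inFlight.acquire(); // blocks the loop once 256 writes are outstanding
        ResultSetFuture future = session.executeAsync(boundStatement);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            public void onSuccess(ResultSet result) {
                inFlight.release();
            }
            public void onFailure(Throwable t) {
                inFlight.release();
                throw new RuntimeException(t);
            }
        }, executor);
    }
    inFlight.acquire(256); // drain: wait until every outstanding write has completed
    inFlight.release(256);
}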

Commons Configuration2 ReloadingFileBasedConfiguration

I am trying to implement Apache Commons Configuration 2 in my codebase:
import java.io.File;
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ConfigurationBuilderEvent;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.convert.DefaultListDelimiterHandler;
import org.apache.commons.configuration2.event.EventListener;
import org.apache.commons.configuration2.ex.ConfigurationException;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;
import org.apache.commons.configuration2.CompositeConfiguration;
public class Test {
    private static final long DELAY_MILLIS = 10 * 60 * 5;

    public static void main(String[] args) {
        CompositeConfiguration compositeConfiguration = new CompositeConfiguration();
        PropertiesConfiguration props = null;
        try {
            props = initPropertiesConfiguration(new File("/tmp/DEV.properties"));
        } catch (ConfigurationException e) {
            e.printStackTrace();
        }
        compositeConfiguration.addConfiguration(props);
        compositeConfiguration.addEventListener(ConfigurationBuilderEvent.ANY,
                new EventListener<ConfigurationBuilderEvent>()
        {
            @Override
            public void onEvent(ConfigurationBuilderEvent event)
            {
                System.out.println("Event:" + event);
            }
        });

        System.out.println(compositeConfiguration.getString("property1"));
        try {
            Thread.sleep(14 * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // Have a script which changes the value of property1 in DEV.properties
        System.out.println(compositeConfiguration.getString("property1"));
    }

    protected static PropertiesConfiguration initPropertiesConfiguration(File propsFile) throws ConfigurationException {
        if (propsFile.exists()) {
            final ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder =
                    new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
                            .configure(new Parameters().fileBased()
                                    .setFile(propsFile)
                                    .setReloadingRefreshDelay(DELAY_MILLIS)
                                    .setThrowExceptionOnMissing(false)
                                    .setListDelimiterHandler(new DefaultListDelimiterHandler(';')));
            final PropertiesConfiguration propsConfiguration = builder.getConfiguration();
            PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(builder.getReloadingController(),
                    null, 1, TimeUnit.SECONDS);
            trigger.start();
            return propsConfiguration;
        } else {
            return new PropertiesConfiguration();
        }
    }
}
Here is sample code that I am using to check whether automatic reloading works. However, when the underlying property file is updated, the configuration doesn't reflect it.
As per the documentation:
One important point to keep in mind when using this approach to reloading is that reloads are only functional if the builder is used as central component for accessing configuration data. The configuration instance obtained from the builder will not change automagically! So if an application fetches a configuration object from the builder at startup and then uses it throughout its life time, changes on the external configuration file become never visible. The correct approach is to keep a reference to the builder centrally and obtain the configuration from there every time configuration data is needed.
https://commons.apache.org/proper/commons-configuration/userguide/howto_reloading.html#Reloading_File-based_Configurations
This is different from how the old implementation behaved.
I was able to successfully execute your sample code by making two changes:
make the builder available globally and access the configuration from the builder:
System.out.println(builder.getConfiguration().getString("property1"));
add the listener to the builder:
builder.addEventListener(ConfigurationBuilderEvent.ANY,
        new EventListener<ConfigurationBuilderEvent>() {
    @Override
    public void onEvent(ConfigurationBuilderEvent event) {
        System.out.println("Event:" + event);
    }
});
Posting my sample program, where I was able to demonstrate it successfully:
import java.io.File;
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ConfigurationBuilderEvent;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.event.EventListener;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;
public class TestDynamicProps {
    public static void main(String[] args) throws Exception {
        Parameters params = new Parameters();
        ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder =
                new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
                        .configure(params.fileBased()
                                .setFile(new File("src/main/resources/override.properties")));
        PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(builder.getReloadingController(),
                null, 1, TimeUnit.SECONDS);
        trigger.start();
        builder.addEventListener(ConfigurationBuilderEvent.ANY, new EventListener<ConfigurationBuilderEvent>() {
            public void onEvent(ConfigurationBuilderEvent event) {
                System.out.println("Event:" + event);
            }
        });
        while (true) {
            Thread.sleep(1000);
            System.out.println(builder.getConfiguration().getString("property1"));
        }
    }
}
The problem with your implementation is that the reloading is done on the ReloadingFileBasedConfigurationBuilder object and is never propagated to the PropertiesConfiguration object you fetched from it earlier.
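In practice that means keeping a reference to the builder centrally, as the documentation quoted above says, and fetching the configuration through it on every access. A minimal sketch of such a holder (the class and method names are made up for illustration):

// Hypothetical central holder: callers always go through the builder,
// so a reloaded configuration becomes visible on the next access.
public final class AppConfig {
    private static ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder;

    public static void init(ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> b) {
        builder = b;
    }

    public static String getString(String key) throws ConfigurationException {
        return builder.getConfiguration().getString(key); // fresh configuration each time
    }
}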

TestNG Close Browsers after Parallel Test Execution

I want to close the browsers after completion of all tests. The problem is that I am not able to close the browser: after a test completes, the ThreadLocal does not return the driver object that was created, and the value returned is null.
Below is my working code
package demo;
import java.lang.reflect.Method;
import org.openqa.selenium.By;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
public class ParallelMethodTest {
    private static ThreadLocal<dummy> driver;
    private int input;
    private int length;

    @BeforeMethod
    public void beforeMethod() {
        System.err.println("Before ID" + Thread.currentThread().getId());
        System.setProperty("webdriver.chrome.driver", "chromedriver.exe");
        if (driver == null) {
            driver = new ThreadLocal<dummy>();
        }
        if (driver.get() == null) {
            driver.set(new dummy());
        }
    }

    @DataProvider(name = "sessionDataProvider", parallel = true)
    public static Object[][] sessionDataProvider(Method method) {
        int len = 12;
        Object[][] parameters = new Object[len][2];
        for (int i = 0; i < len; i++) {
            parameters[i][0] = i;
            parameters[i][1] = len;
        }
        return parameters;
    }

    @Test(dataProvider = "sessionDataProvider")
    public void executSessionOne(int input, int length) {
        System.err.println("Test ID---" + Thread.currentThread().getId());
        this.input = input;
        this.length = length;
        // First session of WebDriver
        // find user name text box and fill it
        System.out.println("Parameter size is:" + length);
        driver.get().getDriver().findElement(By.name("q")).sendKeys(input + "");
        System.out.println("Input is:" + input);
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @AfterMethod
    public void afterMethod() {
        System.err.println("After ID" + Thread.currentThread().getId());
        driver.get().close();
    }
}
package demo;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
public class dummy {
    private WebDriver newDriver;

    public dummy() {
        newDriver = new ChromeDriver();
        newDriver.get("https://www.google.co.in/");
    }

    public WebDriver getDriver() {
        return newDriver;
    }

    public void setNewDriver(WebDriver newDriver) {
        this.newDriver = newDriver;
    }

    @AfterClass
    public void close() {
        if (newDriver != null) {
            System.out.println("In After Class");
            newDriver.quit();
        }
    }
}
Thanks in Advance.
private static ThreadLocal<dummy> driver is added at the class level. What is happening is that you have already declared the variable at class level, i.e. memory is already allocated to it; multiple threads are just setting and resetting the values of the same variable.
What you need to do is create a factory that returns a Driver instance based on a parameter you pass to it. The logic can be anything, but as a general use case the factory creates a new object only if an existing one doesn't exist. Declare and initialise the driver (from the factory) in your @Test methods.
Sample code for the factory would be something like:
static RemoteWebDriver firefoxDriver;
static RemoteWebDriver someOtherDriver;

static synchronized RemoteWebDriver getDriver(String browser, String browserVersion,
        String platform, String platformVersion) throws MalformedURLException {
    if (browser.equals("firefox")) {
        if (firefoxDriver == null) {
            DesiredCapabilities cloudCaps = new DesiredCapabilities();
            cloudCaps.setCapability("browser", browser);
            cloudCaps.setCapability("browser_version", browserVersion);
            cloudCaps.setCapability("os", platform);
            cloudCaps.setCapability("os_version", platformVersion);
            cloudCaps.setCapability("browserstack.debug", "true");
            cloudCaps.setCapability("browserstack.local", "true");
            firefoxDriver = new RemoteWebDriver(new URL(URL), cloudCaps);
        }
        return firefoxDriver;
    } else {
        if (someOtherDriver == null) {
            DesiredCapabilities cloudCaps = new DesiredCapabilities();
            cloudCaps.setCapability("browser", browser);
            cloudCaps.setCapability("browser_version", browserVersion);
            cloudCaps.setCapability("os", platform);
            cloudCaps.setCapability("os_version", platformVersion);
            cloudCaps.setCapability("browserstack.debug", "true");
            cloudCaps.setCapability("browserstack.local", "true");
            someOtherDriver = new RemoteWebDriver(new URL(URL), cloudCaps);
        }
        return someOtherDriver;
    }
}
You have a concurrency issue: multiple threads can create a ThreadLocal instance, because driver == null can evaluate to true on more than one thread when run in parallel. As such, some threads can execute driver.set(new dummy()); but then another thread replaces driver with a new ThreadLocal instance.
In my experience it is simpler and less error prone to always make the ThreadLocal a static final field, ensuring that multiple objects can access it (static) and that it is only defined once (final); a minimal sketch follows the links below.
You can see my answers to the following Stack Overflow questions for related details and code samples:
How to avoid empty extra browser opens when running parallel tests with TestNG
Session not found exception with Selenium Web driver parallel execution of Data Provider test case
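As an illustration, a minimal sketch of the static final pattern applied to the question's dummy class (assuming Java 8 for ThreadLocal.withInitial; on older versions, override initialValue() instead):

public class ParallelMethodTest {
    // one ThreadLocal for the whole class, created exactly once;
    // each TestNG thread lazily gets its own dummy instance
    private static final ThreadLocal<dummy> DRIVER =
            ThreadLocal.withInitial(dummy::new);

    @BeforeMethod
    public void beforeMethod() {
        DRIVER.get(); // initialises this thread's browser on first use
    }

    @AfterMethod
    public void afterMethod() {
        DRIVER.get().close();
        DRIVER.remove(); // next test on this thread starts with a fresh dummy
    }
}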
This is happening because you are creating the driver instance in the beforeMethod function, so its scope ends when the function ends.
So when your afterMethod starts, it gets null, because the WebDriver instance was already destroyed once the beforeMethod function completed.
Refer to the links below:
http://www.java-made-easy.com/variable-scope.html
What is the default scope of a method in Java?

groovy.lang.MissingPropertyException while Downloading Files from FTP Server

I want to create a job in Grails which downloads files from an FTP server after a certain interval of time, say 2-3 days, and stores them on a specified local path. The same code, with minor changes, is written in Java and was working fine, but when I write similar code in Grails I get the error below and am not able to resolve it. Can anybody tell me where I'm making a mistake?
Following is the error that I'm facing when the job starts.
JOB STARTED::************************************************************************************
2015-08-24 18:20:35,285 INFO org.quartz.core.JobRunShell:207 Job GRAILS_JOBS.com.hoteligence.connector.job.DownloadIpgGzipFilesJob threw a JobExecutionException:
org.quartz.JobExecutionException: groovy.lang.MissingPropertyException: No such property: ftpClient for class: com.hoteligence.connector.job.DownloadIpgGzipFilesJob [See nested exception: groovy.lang.MissingPropertyException: No such property: ftpClient for class: com.hoteligence.connector.job.DownloadIpgGzipFilesJob]
at grails.plugins.quartz.GrailsJobFactory$GrailsJob.execute(GrailsJobFactory.java:111)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: groovy.lang.MissingPropertyException: No such property: ftpClient for class: com.hoteligence.connector.job.DownloadIpgGzipFilesJob
at com.hoteligence.connector.job.DownloadIpgGzipFilesJob.execute(DownloadIpgGzipFilesJob.groovy:93)
at grails.plugins.quartz.GrailsJobFactory$GrailsJob.execute(GrailsJobFactory.java:104)
... 2 more
/* I've added all the related dependencies in the Grails BuildConfig. */
package com.hoteligence.connector.job
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import org.codehaus.groovy.grails.commons.ConfigurationHolder as ConfigHolder;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.apache.commons.net.ftp.FTPReply;
/**
 * @author Gajanan
 * This is a back-end job which downloads files from the FTP server and stores them locally.
 */
class DownloadIpgGzipFilesJob {

    static triggers = {
        simple repeatInterval: Long.parseLong(ConfigHolder.config.DEVICE_PING_ALERT_JOB_REPEAT_INTERVAL),
               startDelay: 60000
    }

    def execute() {
        try {
            println "JOB STARTED::************************************************************************************";

            /* Following are the details required for server connectivity. */
            String server = ConfigHolder.config.IPG_SERVER_NAME;
            int port = ConfigHolder.config.IPG_SERVER_PORT_NO;
            String user = ConfigHolder.config.IPG_SERVER_USER_NAME;
            String pass = ConfigHolder.config.IPG_SERVER_USER_PASSWORD;
            String[] fileNames = ConfigHolder.config.IPG_DOWNLOADABLE_GZIP_FILE_LIST.split(",");
            String downloadFilePath = ConfigHolder.config.IPG_GZIP_DOWNLOAD_LOCATION;
            String fileDate = (todaysDate.getYear() + 1900) + "" + ((todaysDate.getMonth() + 1) <= 9 ? ("0" + (todaysDate.getMonth() + 1)) : (todaysDate.getMonth() + 1)) + "" + todaysDate.getDate();
            FTPClient ftpClient = new FTPClient();

            /* Here we connect to the server and print its reply on the console. */
            ftpClient.connect(server, port);
            showServerReply(ftpClient);
            int replyCode = ftpClient.getReplyCode();
            if (!FTPReply.isPositiveCompletion(replyCode)) {
                System.out.println("Connect failed");
                return;
            }
            boolean success = ftpClient.login(user, pass);
            showServerReply(ftpClient);
            if (!success) {
                System.out.println("Could not login to the server");
                return;
            }

            /* Here we iterate over the file list and download each file to the specified directory. */
            for (int i = 0; i < fileNames.length; i++) {
                String fileName = "on_" + ConfigHolder.config.IPG_DATA_COUNTRY_CODE + fileNames[i] + fileDate + ".xml.gz";
                System.out.println(fileName);
                downloadFtpFileByName(ftpClient, fileName, downloadFilePath + fileName);
            }
        }
        catch (IOException ex) {
            System.out.println("Oops! Something wrong happened");
            ex.printStackTrace();
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        finally {
            /* In the finally block we forcefully log out and close the connection. */
            try {
                if (ftpClient.isConnected()) {
                    ftpClient.logout();
                    ftpClient.disconnect();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }

    /* Prints the FTP server's reply after connecting. */
    private static void showServerReply(FTPClient ftpClient) {
        String[] replies = ftpClient.getReplyStrings();
        if (replies != null && replies.length > 0) {
            for (String aReply : replies) {
                System.out.println("SERVER: " + aReply);
            }
        }
    }

    /*
     * The actual copy logic: downloads a file from the FTP server and stores it in a local
     * directory. It accepts three parameters: the FTPClient object, the name of the file to
     * download from the server, and the path where the downloaded file is to be stored.
     */
    private static void downloadFtpFileByName(FTPClient ftpClient, String fileName, String downloadfileName) {
        System.out.println("Start Time::" + System.currentTimeMillis());
        try {
            String remoteFile1 = "/" + fileName; // file on the server
            File downloadFile1 = new File(downloadfileName); // new file to be written to the local directory
            OutputStream outputStream1 = new BufferedOutputStream(new FileOutputStream(downloadFile1));
            Boolean success = ftpClient.retrieveFile(remoteFile1, outputStream1);
            if (success) {
                System.out.println("File " + fileName + " has been downloaded successfully.");
            } else {
                System.out.println("File " + fileName + " DOWNLOAD FAILURE....");
            }
            outputStream1.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("END Time::" + System.currentTimeMillis());
    }
}
Move this line:
FTPClient ftpClient = new FTPClient();
outside of the try { ... } catch () block (i.e., move it up to before the try).
You are declaring the local variable inside the try, then trying to use it in the finally block.
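A minimal sketch of the corrected shape of execute(), with everything else unchanged:

FTPClient ftpClient = new FTPClient(); // declared before the try, so the finally block can see it
try {
    // ... connect, login and download exactly as in the original execute() ...
} finally {
    try {
        if (ftpClient.isConnected()) {
            ftpClient.logout();
            ftpClient.disconnect();
        }
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}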

JSF Custom EL function - encapsulating an Exception

Following the example (based on a similar question):
/**
 *
 */
package za.co.sindi.jsf.functions;

import java.io.IOException;

import org.markdown4j.Markdown4jProcessor;

/**
 * @author Buhake Sindi
 * @since 22 January 2013
 */
public final class SomeFunctions {

    /**
     * Private constructor
     */
    private SomeFunctions() {
        //TODO: Nothing...
    }

    public static String process(String input) {
        SomeProcessor processor = new SomeProcessor();
        try {
            return processor.process(input);
        } catch (IOException e) {
            e.printStackTrace(); //I don't believe this is correct.
        }
    }
}
What do I do inside the catch block? Do I just log it with a Java Logger, or is there a preferred JSF way of encapsulating an exception?
Depends on the concrete functional requirements.
If an empty output is acceptable, log it and return null.
public static String process(String input) {
    SomeProcessor processor = new SomeProcessor();
    try {
        return processor.process(input);
    } catch (IOException e) {
        someLogger.warn("Processing markdown failed", e);
        return null;
    }
}
If it's not acceptable, throw it.
public static String process(String input) throws IOException {
    SomeProcessor processor = new SomeProcessor();
    return processor.process(input);
}
Unrelated to the concrete problem: you'd better process the markdown and store it as a property (and ultimately also in the DB) only once, during create/update, instead of processing the very same output again and again. So, effectively, you end up with two properties / DB columns: one with the raw markdown and another with the parsed/processed markdown.
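For illustration, a minimal sketch of that two-property idea as a hypothetical entity (class and field names are made up); the raw markdown stays the source of truth for edits, and the processed output is what views render:

public class Post {
    private String rawMarkdown; // exactly what the user typed; used when editing
    private String html;        // processed once here, read many times by views

    public void setRawMarkdown(String rawMarkdown) throws IOException {
        this.rawMarkdown = rawMarkdown;
        // process on create/update only, not on every render
        this.html = new Markdown4jProcessor().process(rawMarkdown);
    }

    public String getHtml() {
        return html;
    }
}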
