Apache Batik: No WriteAdapter is available?

I'm writing code to convert SVGs to PNGs:
package com.example;
import java.io.*;
import java.nio.file.Paths;
import org.apache.batik.transcoder.image.PNGTranscoder;
import org.apache.batik.transcoder.SVGAbstractTranscoder;
import org.apache.batik.transcoder.TranscoderInput;
import org.apache.batik.transcoder.TranscoderOutput;
public class Main {
    public static void main(String[] args) throws Exception {
        // read the input SVG document into a TranscoderInput
        String svgURI = Paths.get(args[0]).toUri().toURL().toString();
        TranscoderInput input = new TranscoderInput(svgURI);
        // define an OutputStream for the PNG image and attach it to a TranscoderOutput
        OutputStream ostream = new FileOutputStream("out.png");
        TranscoderOutput output = new TranscoderOutput(ostream);
        // create a PNG transcoder
        PNGTranscoder t = new PNGTranscoder();
        // set the transcoding hints
        t.addTranscodingHint(SVGAbstractTranscoder.KEY_HEIGHT, new Float(600));
        t.addTranscodingHint(SVGAbstractTranscoder.KEY_WIDTH, new Float(600));
        // convert and write the output
        t.transcode(input, output);
        // flush and close the stream, then exit
        ostream.flush();
        ostream.close();
    }
}
I get the following exception when executing it with a variety of SVGs:
Exception in thread "main" org.apache.batik.transcoder.TranscoderException: null
Enclosed Exception:
Could not write PNG file because no WriteAdapter is availble
at org.apache.batik.transcoder.image.ImageTranscoder.transcode(ImageTranscoder.java:132)
at org.apache.batik.transcoder.XMLAbstractTranscoder.transcode(XMLAbstractTranscoder.java:142)
at org.apache.batik.transcoder.SVGAbstractTranscoder.transcode(SVGAbstractTranscoder.java:156)
at com.example.Main.main(Main.java:26)
Batik version (reported by Maven):
version=1.9
groupId=org.apache.xmlgraphics
artifactId=batik-transcoder
I get the same error with Batik 1.7.
Suggestions?

This was solved by Peter Coppens on the xmlgraphics-batik-users mailing list. The Batik 1.9 artifact in the Maven repository is missing a dependency, which can be addressed by adding the following to pom.xml:
<dependency>
    <groupId>org.apache.xmlgraphics</groupId>
    <artifactId>batik-codec</artifactId>
    <version>1.9</version>
</dependency>
With this addition, the cryptic exception disappears and the code functions as expected. This was reported as a bug for Batik 1.7 (https://bz.apache.org/bugzilla/show_bug.cgi?id=44682).

Related

I am trying to convert a PPT into PDF using Apache POI, but I am getting the following error. Please help me out with this.

The following code is used:
public static void main(String[] args) throws IOException {
    FileInputStream is = new FileInputStream("C:/Users/hp/Downloads/sampPPT.ppt");
    HSLFSlideShow ppt = new HSLFSlideShow(is);
    is.close();
    Dimension pgsize = ppt.getPageSize();
    int idx = 1;
    for (HSLFSlide slide : ppt.getSlides()) {
        BufferedImage img = new BufferedImage(pgsize.width, pgsize.height, BufferedImage.TYPE_INT_RGB);
        Graphics2D graphics = img.createGraphics();
        // clear the drawing area
        graphics.setPaint(Color.white);
        graphics.fill(new Rectangle2D.Float(0, 0, pgsize.width, pgsize.height));
        // render
        slide.draw(graphics);
        // save the output
        FileOutputStream out = new FileOutputStream("C:/Users/hp/Downloads/slide-" + idx + ".png");
        javax.imageio.ImageIO.write(img, "png", out);
        out.close();
        idx++;
    }
}
This throws the following exception:
Exception in thread "main" java.lang.IllegalAccessError: class org.apache.poi.hslf.usermodel.HSLFSlideShowImpl tried to access private field org.apache.poi.POIDocument.directory (org.apache.poi.hslf.usermodel.HSLFSlideShowImpl and org.apache.poi.POIDocument are in unnamed module of loader 'app')
at org.apache.poi.hslf.usermodel.HSLFSlideShowImpl.readCurrentUserStream(HSLFSlideShowImpl.java:340)
at org.apache.poi.hslf.usermodel.HSLFSlideShowImpl.<init>(HSLFSlideShowImpl.java:154)
at org.apache.poi.hslf.usermodel.HSLFSlideShowImpl.<init>(HSLFSlideShowImpl.java:127)
at org.apache.poi.hslf.usermodel.HSLFSlideShowImpl.<init>(HSLFSlideShowImpl.java:116)
at org.apache.poi.hslf.usermodel.HSLFSlideShow.<init>(HSLFSlideShow.java:138)
at PPTConv.PPTConv.main(PPTConv.java:27)
To give an answer explaining why such exceptions occur (maybe it is helpful for others too):
This kind of exception occurs if you mix Apache POI jars from different versions. This is not supported. See the FAQ.
In this particular case there are probably poi-*.jar and poi-scratchpad-*.jar from different versions on the classpath. The class org.apache.poi.hslf.usermodel.HSLFSlideShowImpl, which extends org.apache.poi.POIDocument, is contained in poi-scratchpad-*.jar, while org.apache.poi.POIDocument is contained in poi-*.jar. If those jars are from different versions, the following can occur:
The org.apache.poi.hslf.usermodel.HSLFSlideShowImpl of poi-scratchpad-3.15.jar calls currentUser = new CurrentUserAtom(directory); in code line 340. This is possible because it extends org.apache.poi.POIDocument, which in version 3.15 (poi-3.15.jar) has the field protected DirectoryNode directory;.
But the same class org.apache.poi.POIDocument of version 3.16 (poi-3.16.jar) has the field private DirectoryNode directory;. So if org.apache.poi.hslf.usermodel.HSLFSlideShowImpl of version 3.15 calls currentUser = new CurrentUserAtom(directory); in code line 340, but org.apache.poi.POIDocument comes from version 3.16, then java.lang.IllegalAccessError: class org.apache.poi.hslf.usermodel.HSLFSlideShowImpl tried to access private field org.apache.poi.POIDocument.directory is thrown, because it really does try to access a private field now.
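As a side note (not part of the original answer), a quick way to see which jar each of the two classes was actually loaded from is to print their code sources; if the two paths show different POI versions, the classpath is mixed:
public class PoiJarCheck {
    public static void main(String[] args) {
        // Diagnostic sketch: print the jar each class was loaded from. If the two
        // paths show different POI versions, poi-*.jar and poi-scratchpad-*.jar
        // versions are mixed and the IllegalAccessError above follows.
        System.out.println(org.apache.poi.POIDocument.class
                .getProtectionDomain().getCodeSource().getLocation());
        System.out.println(org.apache.poi.hslf.usermodel.HSLFSlideShowImpl.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}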

I am unable to fetch Excel data into Selenium code on Ubuntu OS

public class ReadAndWrite {
    public static void main(String[] args) throws InterruptedException, BiffException, IOException {
        System.out.println("hello");
        ReadAndWrite.login();
    }

    public static void login() throws BiffException, IOException, InterruptedException {
        WebDriver driver = new FirefoxDriver();
        driver.get("URL");
        System.out.println("hello");
        FileInputStream fi = new FileInputStream("/home/sagarpatra/Desktop/Xpath.ods");
        System.out.println("hiiiiiii");
        Workbook w = Workbook.getWorkbook(fi);
        Sheet sh = w.getSheet(1);
        // or w.getSheet(Sheetnumber)
        // String variable1 = s.getCell(column, row).getContents();
        for (int row = 1; row <= sh.getRows(); row++) {
            String username = sh.getCell(0, row).getContents();
            System.out.println("Username " + username);
            driver.get("URL");
            driver.findElement(By.name("Email")).sendKeys(username);
            String password = sh.getCell(1, row).getContents();
            System.out.println("Password " + password);
            driver.findElement(By.name("Passwd")).sendKeys(password);
            Thread.sleep(10000);
            driver.findElement(By.name("Login")).click();
            System.out.println("Waiting for page to load fully...");
            Thread.sleep(30000);
        }
        driver.quit();
    }
}
I don't know what is wrong with my code, or how to fix it. It outputs the following error:
Exception in thread "main" jxl.read.biff.BiffException: Unable to recognize OLE stream
at jxl.read.biff.CompoundFile.<init>(CompoundFile.java:116)
at jxl.read.biff.File.<init>(File.java:127)
at jxl.Workbook.getWorkbook(Workbook.java:221)
at jxl.Workbook.getWorkbook(Workbook.java:198)
at test.ReadTest.main(ReadTest.java:19)
I would try using Apache MetaModel instead. I have had better luck with that than with JXL. Here is an example project I wrote that reads from a .XLSX file. I use this library to run tests on a Linux Jenkins server from .XLS files generated on MS Windows.
Also, it should be noted that this library is also well suited for making a parameterized DataProvider that queries a database with JDBC.
Using JXL, you limit yourself to a single file format, either .XLS or .CSV. I believe MetaModel is actually using JXL under the hood and wrapping it to make it easier to use, so it would also support OpenOffice documents in the same fashion and suffer the same file compatibility issues.
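For reference, here is a minimal sketch of reading rows with Apache MetaModel (not from the original answer; the file path, the sheet name "Sheet1", and the column indexes are assumptions, and the exact artifact you need depends on the MetaModel version):
import java.io.File;
import org.apache.metamodel.DataContext;
import org.apache.metamodel.DataContextFactory;
import org.apache.metamodel.data.DataSet;
import org.apache.metamodel.data.Row;

public class MetaModelReadSketch {
    public static void main(String[] args) {
        // Each sheet of the workbook is exposed as a table; the header row supplies column names.
        DataContext dc = DataContextFactory.createExcelDataContext(new File("/home/sagarpatra/Desktop/Xpath.xlsx"));
        try (DataSet rows = dc.query().from("Sheet1").selectAll().execute()) {
            while (rows.next()) {
                Row row = rows.getRow();
                System.out.println(row.getValue(0) + " / " + row.getValue(1));
            }
        }
    }
}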

Read a String-object out of a .txt file from the res folder of a Blackberry app

I just started to develop a simple Blackberry app which shows a text sequence in a RichTextField on a MainScreen. When I define the String directly in the source code, I have no problem displaying it. But if I try to read it in from a .txt file located in the res folder, I get a NullPointerException.
The code below is what I did so far.
package mypackage;

import java.io.IOException;
import java.io.InputStream;
import net.rim.device.api.io.IOUtilities;
import net.rim.device.api.ui.component.RichTextField;
import net.rim.device.api.ui.container.MainScreen;

public final class MyScreen extends MainScreen {
    String str = readFile("Testfile.txt");

    public MyScreen() {
        setTitle("Read Files");
        add(new RichTextField(str));
    }

    public String readFile(String filename) {
        InputStream is = this.getClass().getResourceAsStream("/" + filename);
        try {
            byte[] filebytes = IOUtilities.streamToBytes(is);
            is.close();
            return new String(filebytes);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
        return "";
    }
}
Parts of this code I found in this forum, but my problem is that I don't understand when I have to open a connection and when to close it.
And when do I need a buffer?
And why do I have to convert an InputStream to a byte[] and then the byte[] to a String?
All I need is one method where I can pass in the file name and get back a String object with the text that is in my .txt file.
And of course the method should conserve resources...
package mypackage;

import java.io.IOException;
import java.io.InputStream;
import net.rim.device.api.io.IOUtilities;
import net.rim.device.api.ui.component.RichTextField;
import net.rim.device.api.ui.container.MainScreen;

public final class MyScreen extends MainScreen {
    public MyScreen() throws IOException {
        setTitle("Read Files");
        add(new RichTextField(readFileToString("Testfile.txt")));
    }

    public String readFileToString(String path) throws IOException {
        InputStream is = getClass().getResourceAsStream("/" + path);
        byte[] content = IOUtilities.streamToBytes(is);
        is.close();
        return new String(content);
    }
}
Yes!!! I found a way to solve my problem.
I don't know why my previous code didn't work, but this one works...
The only thing I've changed is that I've added throws IOException instead of surrounding the code with a try-catch block...
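One hedged guess, not confirmed in the thread: getClass().getResourceAsStream() returns null when no resource matches the given name, and passing that null stream to IOUtilities.streamToBytes() would then throw. A defensive variant of the helper, using only the same calls as above, might look like this:
public String readFileToString(String path) throws IOException {
    InputStream is = getClass().getResourceAsStream("/" + path);
    if (is == null) {
        // the resource was not found under that name, so fail with a clear message
        throw new IOException("Resource not found: " + path);
    }
    try {
        return new String(IOUtilities.streamToBytes(is));
    } finally {
        // always release the stream, even if reading fails
        is.close();
    }
}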

Smooks GroovyContentHandlerFactory Exception when upgrading from 1.4 to 1.5.1?

I have recently upgraded my Smooks application from 1.4 to 1.5.1, but I keep getting the exception below:
Error when processing EDI file org.milyn.cdr.SmooksConfigurationException: Error
invoking #Initialize method 'initialize' on class 'org.milyn.smooks.scripting.groovy.GroovyContentHandlerFactory'.
I am pretty new to Smooks and Groovy, but this is an extract of my code, which was working in version 1.4.
I also have all the 1.5.1 classes in my classpath, including the 1.5 EDI definitions I am trying to load.
Smooks smooks = null;
try {
    smooks = new Smooks();
} catch (Exception exception) {
    System.out.println("Error " + exception);
}
try {
    smooks.setReaderConfig(new UNEdifactReaderConfigurator("urn:org.milyn.edi.unedifact:d01b-mapping:*"));
    // Create an exec context - no profiles....
    ExecutionContext executionContext = smooks.createExecutionContext();
    DOMResult domResult = new DOMResult();
    // Configure the execution context to generate a report...
    executionContext.setEventListener(new HtmlReportGenerator("EDI/reports/report.html"));
    smooks.filterSource(new StreamSource((InputStream) bufferedinputstream), domResult);
Extract from GroovyContentHandlerFactory:
@Initialize
public void initialize() throws IOException {
    String templateText = StreamUtils.readStreamAsString(getClass().getResourceAsStream("ScriptedGroovy.ftl"));
    classTemplate = new FreeMarkerTemplate(templateText);
Any help or ideas would be much appreciated, as I have spent hours trying to figure this one out.
Cheers, Matt

How to get the HTML content from Nutch

Is there any way to get the HTML content of each web page in Nutch while crawling?
Yes, you can actually export the content of the crawled segments. It is not straightforward, but it works well for me. First, create a Java project with the following code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.nutch.protocol.Content;
import org.apache.nutch.util.NutchConfiguration;

import java.io.File;
import java.io.FileOutputStream;

public class NutchSegmentOutputParser {

    public static void main(String[] args) {
        if (args.length != 2) {
            System.out.println("usage: segmentdir (-local | -dfs <namenode:port>) outputdir");
            return;
        }
        try {
            Configuration conf = NutchConfiguration.create();
            FileSystem fs = FileSystem.get(conf);
            String segment = args[0];
            File outDir = new File(args[1]);
            if (!outDir.exists()) {
                if (outDir.mkdir()) {
                    System.out.println("Creating output dir " + outDir.getAbsolutePath());
                }
            }
            Path file = new Path(segment, Content.DIR_NAME + "/part-00000/data");
            SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
            Text key = new Text();
            Content content = new Content();
            while (reader.next(key, content)) {
                String filename = key.toString().replaceFirst("http://", "").replaceAll("/", "___").trim();
                File f = new File(outDir.getCanonicalPath() + "/" + filename);
                FileOutputStream fos = new FileOutputStream(f);
                fos.write(content.getContent());
                fos.close();
                System.out.println(f.getAbsolutePath());
            }
            reader.close();
            fs.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I recommend using Maven; add the following dependencies:
<dependency>
    <groupId>org.apache.nutch</groupId>
    <artifactId>nutch</artifactId>
    <version>1.5.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>0.23.1</version>
</dependency>
and create a jar package (e.g. NutchSegmentOutputParser.jar).
You need Hadoop to be installed on your machine. Then run:
$/hadoop-dir/bin/hadoop --config \
NutchSegmentOutputParser.jar:~/.m2/repository/org/apache/nutch/nutch/1.5.1/nutch-1.5.1.jar \
NutchSegmentOutputParser nutch-crawled-dir/2012xxxxxxxxx/ outdir
where nutch-crawled-dir/2012xxxxxxxxx/ is the crawled directory you want to extract content from (it contains the 'segment' subdirectory) and outdir is the output directory. The output file names are generated from the URI; however, slashes are replaced by "___".
Hope it helps.
Try this:
public ParseResult filter(Content content, ParseResult parseResult, HTMLMetaTags metaTags, DocumentFragment doc) {
    Parse parse = parseResult.get(content.getUrl());
    LOG.info("parse.getText: " + parse.getText());
    return parseResult;
}
Then check the content in hadoop.log.
It's super basic.
public ParseResult getParse(Content content) {
    LOG.info("getContent: " + new String(content.getContent()));
The Content object has a method getContent(), which returns a byte array. Just have Java create a new String() from the byte array, and you've got the raw HTML of whatever Nutch had fetched.
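For example, a tiny illustrative fragment (not from the original answer; the UTF-8 charset is an assumption, since real pages may declare a different encoding):
// decode the fetched bytes into a String, assuming UTF-8
byte[] raw = content.getContent();
String html = new String(raw, java.nio.charset.StandardCharsets.UTF_8);
System.out.println(html);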
I'm using Nutch 1.9
Here's the JavaDoc on org.apache.nutch.protocol.Content
https://nutch.apache.org/apidocs/apidocs-1.2/org/apache/nutch/protocol/Content.html#getContent()
Yes there is a way. Have a look at cache.jsp to see how it displays the cached data.
