Extend DefaultCodec to support Zip compression for Hadoop file - apache-spark

I've got some Spark code that reads two files from HDFS (a header file and a body file), reduces the RDD[String] to a single partition, then writes the result as a compressed file using the GZip codec:
spark.sparkContext.textFile("path_to_header.txt,path_to_body.txt")
    .coalesce(1)
    .saveAsTextFile("output_path", classOf[GzipCodec])
This works exactly as expected. We're now being asked to support ZIP compression for Windows users, who are unable to natively decompress *.gz files. ZIP format isn't natively supported by Hadoop, so I'm attempting to roll my own compression codec.
However, I'm running into a "ZipException: no current ZIP entry" exception when running the code:
Exception occurred while exporting org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 2 times, most recent failure: Lost task 0.1 in stage 16.0 (TID 675, xxxxxxx.xxxxx.xxx, executor 16): java.util.zip.ZipException: no current ZIP entry
at java.util.zip.ZipOutputStream.write(Unknown Source)
at io.ZipCompressorStream.write(ZipCompressorStream.java:23)
at java.io.DataOutputStream.write(Unknown Source)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:81)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:102)
at org.apache.spark.SparkHadoopWriter.write(SparkHadoopWriter.scala:95)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1211)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1190)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
I've created a ZipCodec class that extends DefaultCodec:
public class ZipCodec extends DefaultCodec {
    @Override
    public CompressionOutputStream createOutputStream(final OutputStream out, final Compressor compressor) throws IOException {
        return new ZipCompressorStream(new ZipOutputStream(out));
    }
}
As well as a ZipCompressorStream which extends CompressorStream:
public class ZipCompressorStream extends CompressorStream {
    public ZipCompressorStream(final ZipOutputStream out) {
        super(out);
    }

    @Override
    public void write(final int b) throws IOException {
        out.write(b);
    }

    @Override
    public void write(final byte[] data, final int offset, final int length) throws IOException {
        out.write(data, offset, length);
    }
}
We're currently using Spark 1.6.0 and Hadoop 2.6.0-cdh5.8.2.
Any thoughts at all?
Thanks in advance!

ZIP is a container format, while GZip is a plain stream format (used to store a single file). That's why, when creating a new ZIP file, you need to start an entry first (giving it a name) and, after writing, close that entry before closing the container. See an example here: https://www.programcreek.com/java-api-examples/?class=java.util.zip.ZipOutputStream&method=putNextEntry
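Applied to your code, that means ZipCompressorStream has to open an entry before the first write and close it when the stream is finished. Below is a minimal sketch of that idea, with the caveats that the entry name ("data") is a placeholder (Hadoop doesn't pass the target file name into the codec) and that the finish()/resetState() overrides are my assumption about how to hook entry closing into Hadoop's stream lifecycle:

import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import org.apache.hadoop.io.compress.CompressorStream;

public class ZipCompressorStream extends CompressorStream {
    private final ZipOutputStream zipOut;

    public ZipCompressorStream(final ZipOutputStream out) throws IOException {
        super(out);
        this.zipOut = out;
        // A ZipOutputStream needs an open entry before any write();
        // writing without one raises "ZipException: no current ZIP entry".
        zipOut.putNextEntry(new ZipEntry("data")); // placeholder entry name
    }

    @Override
    public void write(final int b) throws IOException {
        out.write(b);
    }

    @Override
    public void write(final byte[] data, final int offset, final int length) throws IOException {
        out.write(data, offset, length);
    }

    @Override
    public void finish() throws IOException {
        // Close the entry and write the ZIP central directory, but leave
        // the underlying stream open for the framework to close.
        zipOut.closeEntry();
        zipOut.finish();
    }

    @Override
    public void resetState() throws IOException {
        // No compressor state to reset: compression is delegated to ZipOutputStream.
    }
}

You'd likely also want ZipCodec to override getDefaultExtension() to return ".zip", and then pass classOf[ZipCodec] to saveAsTextFile just as you do with GzipCodec.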

Related

com.jscape.inet.sftp.SftpException: cause: java.util.NoSuchElementException: no common elements found - JSCAPE Java library

I tried to connect to and download a file from a Linux server, but faced the following exception when connecting to the server using the JSCAPE Java library.
Code
package com.example.util;

import com.jscape.inet.sftp.Sftp;
import com.jscape.inet.ssh.util.SshParameters;

public class TestFTPManager
{
    private static final String hostname = "mycompany.example.com";
    private static final String username = "exampleuser";
    private static final String password = "examplepassword";
    private static final int port = 22;

    private Sftp sftpClient;

    public TestFTPManager()
    {
        this.sftpClient = new Sftp(new SshParameters(hostname, port, username, password));
    }

    public void connect() throws Exception
    {
        this.sftpClient.connect();
    }

    public void setAscii() throws Exception
    {
        this.sftpClient.setAscii();
    }

    public void setBinary() throws Exception
    {
        this.sftpClient.setBinary();
    }

    public Sftp getSftpClient()
    {
        return sftpClient;
    }

    public void setSftpClient(Sftp sftpClient)
    {
        this.sftpClient = sftpClient;
    }

    public static void main(String[] args)
    {
        try
        {
            TestFTPManager sftpManager = new TestFTPManager();
            sftpManager.getSftpClient().connect(); // Error
            System.out.println("Connection successful!");
            // download operation is done here.
            sftpManager.getSftpClient().disconnect();
            System.out.println("Disconnection successful!");
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
Error
com.jscape.inet.sftp.SftpException: cause: java.util.NoSuchElementException: no common elements found
at com.jscape.inet.sftp.SftpConfiguration.createClient(Unknown Source)
at com.jscape.inet.sftp.Sftp.connect(Unknown Source)
at com.jscape.inet.sftp.Sftp.connect(Unknown Source)
at com.example.util.TestFTPManager.main(TestFTPManager.java:54)
Caused by: com.jscape.inet.ssh.transport.TransportException: cause: java.util.NoSuchElementException: no common elements found
at com.jscape.inet.ssh.transport.AlgorithmSuite.<init>(Unknown Source)
at com.jscape.inet.ssh.transport.TransportClient.getSuite(Unknown Source)
at com.jscape.inet.ssh.transport.Transport.exchangeKeys(Unknown Source)
at com.jscape.inet.ssh.transport.Transport.exchangeKeys(Unknown Source)
at com.jscape.inet.ssh.transport.TransportClient.<init>(Unknown Source)
at com.jscape.inet.ssh.transport.TransportClient.<init>(Unknown Source)
at com.jscape.inet.ssh.SshConfiguration.createConnectionClient(Unknown Source)
at com.jscape.inet.ssh.SshStandaloneConnector.openConnection(Unknown Source)
... 4 more
Caused by: java.util.NoSuchElementException: no common elements found
at com.jscape.inet.ssh.types.SshNameList.getFirstCommonNameFrom(Unknown Source)
at com.jscape.inet.ssh.transport.AlgorithmSuite.a(Unknown Source)
at com.jscape.inet.ssh.transport.AlgorithmSuite.h(Unknown Source)
... 12 more
However, when I commented out the following lines (lines 23, 23, 25) in the /etc/ssh/sshd_config file on the server, I could successfully connect and download the file without any exceptions.
Question: How can I get rid of this exception without commenting out those lines in the server's /etc/ssh/sshd_config file? I would also appreciate an explanation of why I get this exception.
If you are facing this issue, please check the following findings.
I used JSCAPE (Java) library version 8.8.0. To my understanding, this version does not support some of the Ciphers and KexAlgorithms specified in the sshd_config file.
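For illustration, the sshd_config directives in question typically look like the following (the directive names are standard OpenSSH options; the algorithm lists are examples, not your server's actual values). When the server offers only algorithms the old client library doesn't implement, the key exchange fails with "no common elements found":

KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers aes256-gcm@openssh.com,aes256-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256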
According to the JSCAPE documentation, the com.jscape.inet.sftp package contains what you need to set key exchanges, ciphers, MACs, and compressions if needed; the official documentation shows how to put these things into code.
However, the JSCAPE (Java) library version I use (8.8.0) does not contain those classes and methods.
One thing you can try is using the latest version of the JSCAPE library, but I doubt it is available for free.

Numerous Kafka Producer Network Threads generated during data publishing, NullPointerException in Spring Kafka

I am writing a Kafka producer using Spring Kafka 2.3.9 that is supposed to publish around 200000 messages to a topic. For example, I have a list of 200000 objects fetched from a database, and I want to publish JSON messages for those objects to a topic.
The producer works fine for publishing, let's say, 1000 messages. Then it starts throwing a NullPointerException (included below).
During debugging, I found that the number of Kafka producer network threads is very high. I could not count them, but they are definitely more than 500.
I have read the thread Kafka Producer Thread, huge amound of threads even when no message is send and did a similar configuration by setting the producerPerConsumerPartition property to false on DefaultKafkaProducerFactory. But it still does not decrease the producer network thread count.
Below are my code snippets and the error. I can't post all of the code segments since they are from a real project.
Code segments
public DefaultKafkaProducerFactory<String, String> getProducerFactory() throws IOException, IllegalStateException {
    Map<String, Object> configProps = getProducerConfigMap();
    DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory = new DefaultKafkaProducerFactory<>(configProps);
    //defaultKafkaProducerFactory.transactionCapable();
    defaultKafkaProducerFactory.setProducerPerConsumerPartition(false);
    defaultKafkaProducerFactory.setProducerPerThread(false);
    return defaultKafkaProducerFactory;
}

public Map<String, Object> getProducerConfigMap() throws IOException, IllegalStateException {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapAddress());
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.RETRIES_CONFIG, kafkaProperties.getKafkaRetryConfig());
    configProps.put(ProducerConfig.ACKS_CONFIG, kafkaProperties.getKafkaAcknowledgementConfig());
    configProps.put(ProducerConfig.CLIENT_ID_CONFIG, kafkaProperties.getKafkaClientId());
    configProps.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 512 * 1024 * 1024);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10 * 1000);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    //updateSSLConfig(configProps);
    return configProps;
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() throws IOException, IllegalStateException {
    ProducerFactory<String, String> producerFactory = getProducerFactory();
    KafkaTemplate<String, String> kt = new KafkaTemplate<String, String>(producerFactory, true);
    kt.setCloseTimeout(java.time.Duration.ofSeconds(5));
    return kt;
}
Error
2020-12-07 18:14:19.249 INFO 26651 --- [onPool-worker-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=kafka-client-09f48ec8-7a69-4b76-a4f4-a418e96ff68e-1] Closing the Kafka producer with timeoutMillis = 0 ms.
2020-12-07 18:14:19.254 ERROR 26651 --- [onPool-worker-1] c.w.p.r.g.xxxxxxxx.xxx.KafkaPublisher : Exception happened publishing to topic. Failed to construct kafka producer
2020-12-07 18:14:19.273 INFO 26651 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-12-07 18:14:19.281 ERROR 26651 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:787) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:768) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) [spring-boot-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at xxx.xxx.xxx.Application.main(Application.java:46) [classes/:na]
Caused by: java.util.concurrent.CompletionException: java.lang.NullPointerException
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592) ~[na:1.8.0_144]
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[na:1.8.0_144]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) ~[na:1.8.0_144]
Caused by: java.lang.NullPointerException: null
at com.xxx.xxx.xxx.xxx.KafkaPublisher.publishData(KafkaPublisher.java:124) ~[classes/:na]
at com.xxx.xxx.xxx.xxx.lambda$0(Publisher.java:39) ~[classes/:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_144]
at com.xxx.xxx.xxx.xxx.publishData(Publisher.java:38) ~[classes/:na]
at com.xxx.xxx.xxx.xxx.Application.lambda$0(Application.java:75) [classes/:na]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[na:1.8.0_144]
... 5 common frames omitted
Following is the code for publishing the message. Line 124 is where we actually call KafkaTemplate:
public void publishData(Object object) {
    ListenableFuture<SendResult<String, String>> future = null;
    // Convert the Object to JSON
    String json = convertObjectToJson(object);
    // Generate unique key for the message
    String key = UUID.randomUUID().toString();
    // Post the JSON to Kafka
    try {
        future = kafkaConfig.kafkaTemplate().send(kafkaProperties.getTopicName(), key, json);
    } catch (Exception e) {
        logger.error("Exception happened publishing to topic. {}", e.getMessage());
    }
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            logger.info("Sent message with key=[" + key + "]");
        }

        @Override
        public void onFailure(Throwable ex) {
            logger.error("Unable to send message=[ {} due to {}", json, ex.getMessage());
        }
    });
    kafkaConfig.kafkaTemplate().flush();
}
Update:
I am not sure if this error is caused by those many network threads.
After posting the data, I called the KafkaTemplate flush method. It did not work.
I also called the ProducerFactory closeThreadBoundProducer, reset, and destroy methods. None of them seems to work.
Am I missing any configuration?
The null pointer issue was not actually related to Spring Kafka. We were reading the topic name from a different location, connected over a network. That network connection was failing in a few cases, which produced a null topic name and ultimately caused the above error.
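For anyone hitting the same symptom, here is a sketch of the kind of guard that avoids this path. It assumes the fields from the snippets above (kafkaConfig, kafkaProperties, logger, convertObjectToJson) and that a null topic name is the failure mode; registering the callback inside the try also keeps addCallback from dereferencing a null future when send() itself throws:

import java.util.UUID;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public void publishData(Object object) {
    String json = convertObjectToJson(object);
    String key = UUID.randomUUID().toString();
    String topic = kafkaProperties.getTopicName();
    if (topic == null || topic.isEmpty()) {
        // The remote lookup failed; fail fast instead of passing null to send()
        logger.error("Topic name could not be resolved; skipping publish for key=[{}]", key);
        return;
    }
    try {
        ListenableFuture<SendResult<String, String>> future =
                kafkaConfig.kafkaTemplate().send(topic, key, json);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Sent message with key=[" + key + "]");
            }

            @Override
            public void onFailure(Throwable ex) {
                logger.error("Unable to send message=[{}] due to {}", json, ex.getMessage());
            }
        });
    } catch (Exception e) {
        logger.error("Exception happened publishing to topic. {}", e.getMessage());
    }
}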

Kafka-Streams throwing NullPointerException when consuming

I have this problem: when I consume from a topic using the Processor API and call context().forward(K, V) inside the processor, Kafka Streams throws a NullPointerException.
This is the stack trace:
Exception in thread "StreamThread-1" java.lang.NullPointerException
at org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:336)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:187)
at org.apache.kafka.streams.processor.ProcessorContext$forward.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at com.bnsf.ltf.processor.ConversionProcessor.process(ConversionProcessor.groovy:23)
at com.bnsf.ltf.processor.ConversionProcessor.process(ConversionProcessor.groovy)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:68)
at org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:338)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:187)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:64)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:174)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:320)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
My Gradle dependencies look like this:
compile('org.codehaus.groovy:groovy-all')
compile('org.apache.kafka:kafka-streams:0.10.0.0')
Update: I tried with version 0.10.0.1 and it still throws the same error.
This is the code of the Topology I'm building...
topologyBuilder.addSource('inboundTopic', stringDeserializer, stringDeserializer, conversionConfiguration.inTopic)
        .addProcessor('conversionProcess', new ProcessorSupplier() {
            @Override
            Processor get() {
                return conversionProcessor
            }
        }, 'inboundTopic')
        .addSink('outputTopic', conversionConfiguration.outTopic, stringSerializer, stringSerializer, 'conversionProcess')
stream = new KafkaStreams(topologyBuilder, streamConfig)
stream.start()
My processor looks like this:
@Override
void process(String key, String message) {
    // Call to a service; the return of the service is set on the
    // local variable named converted
    context().forward(key, converted)
    context().commit()
}
Provide your Processor directly, so that the supplier returns a fresh instance on every get() call:
.addProcessor('conversionProcess', () -> new MyProcessor(), 'inboundTopic')
MyProcessor should, in turn, inherit from AbstractProcessor. Returning the same shared conversionProcessor instance from the supplier leaves the processor's context in an inconsistent state, which is what triggers the NullPointerException in forward().
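A minimal sketch of such a processor, assuming the 0.10.x Processor API (the convert call stands in for the real conversion service):

import org.apache.kafka.streams.processor.AbstractProcessor;

public class MyProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(final String key, final String message) {
        final String converted = convert(message); // placeholder for the conversion service
        context().forward(key, converted);
        context().commit();
    }

    private String convert(final String message) {
        return message; // stand-in for the real conversion logic
    }
}

Because the supplier creates a new instance per task, each processor's context() is properly initialized before process() runs.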

Increment column value in Spark

I have a Spark Streaming job that fetches data from RabbitMQ and saves it into HBase. The save is an Increment operation. I'm using saveAsNewAPIHadoopDataset, but I keep getting the exception below.
Code:
pairDStream.foreachRDD(new VoidFunction<JavaPairRDD<String, Integer>>() {
    @Override
    public void call(JavaPairRDD<String, Integer> arg0) throws Exception {
        Configuration dbConf = HBaseConfiguration.create();
        dbConf.set("hbase.table.namespace.mappings", "tablename:/mapr/tablename");
        Job jobConf = Job.getInstance(dbConf);
        jobConf.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "tablename");
        jobConf.setOutputFormatClass(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);
        JavaPairRDD<ImmutableBytesWritable, Increment> hbasePuts = arg0.mapToPair(
                new PairFunction<Tuple2<String, Integer>, ImmutableBytesWritable, Increment>() {
                    @Override
                    public Tuple2<ImmutableBytesWritable, Increment> call(Tuple2<String, Integer> arg0) throws Exception {
                        String[] keys = arg0._1.split("_");
                        Increment inc = new Increment(Bytes.toBytes(keys[0]));
                        inc.addColumn(Bytes.toBytes("data"),
                                Bytes.toBytes(keys[1]),
                                arg0._2);
                        return new Tuple2<ImmutableBytesWritable, Increment>(new ImmutableBytesWritable(), inc);
                    }
                });
        // save to HBase - Spark built-in API method
        hbasePuts.saveAsNewAPIHadoopDataset(jobConf.getConfiguration());
    }
});
Exception:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 6.0 failed 4 times, most recent failure: Lost task 1.3 in stage 6.0 (TID 100, dev-arc-app036.vega.cloud.ironport.com): java.io.IOException: Pass a Delete or a Put
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:128)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:87)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1113)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1091)
Is it possible to use the saveAsNewAPIHadoopDataset method with Increment rather than Put?
Any help is greatly appreciated.
Thanks
Akhila.

SMB connection with Java

Hi, I have created a test program to connect over the SMB protocol. My goal is to create a test.txt file at a shared location like String path = "smb://192.168.143.134/rtf2xml/" + sharedFolder + "/test.txt";
But when I try to run my program (below is the code sample):
import java.io.IOException;

import jcifs.smb.NtlmPasswordAuthentication;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileOutputStream;

public class Test {
    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        String user = "abc";
        String pass = "123456";
        String sharedFolder = "INPUT";
        String path = "smb://192.168.143.134/rtf2xml/" + sharedFolder + "/test.txt";
        NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication("192.168.143.134", user, pass);
        SmbFile smbFile = new SmbFile(path, auth);
        smbFile.createNewFile();
        SmbFileOutputStream smbfos = new SmbFileOutputStream(smbFile);
        smbfos.write("testing....and writing to a file".getBytes());
        System.out.println("completed ...nice !");
    }
}
It throws this exception:
Exception in thread "main" jcifs.smb.SmbException: Failed to negotiate
jcifs.smb.SmbException: Timeout trying to open socket
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at java.net.Socket.<init>(Unknown Source)
at jcifs.netbios.NbtSocket.<init>(NbtSocket.java:59)
at jcifs.smb.SmbTransport.run(SmbTransport.java:342)
at java.lang.Thread.run(Unknown Source)
at jcifs.smb.SmbTransport.start(SmbTransport.java:315)
at jcifs.smb.SmbTransport.negotiate0(SmbTransport.java:865)
at jcifs.smb.SmbTransport.negotiate(SmbTransport.java:941)
at jcifs.smb.SmbTree.treeConnect(SmbTree.java:119)
at jcifs.smb.SmbFile.connect(SmbFile.java:827)
at jcifs.smb.SmbFile.connect0(SmbFile.java:797)
at jcifs.smb.SmbFile.open0(SmbFile.java:852)
at jcifs.smb.SmbFile.createNewFile(SmbFile.java:2265)
at Test.main(Test.java:22)
at jcifs.smb.SmbTransport.negotiate(SmbTransport.java:947)
at jcifs.smb.SmbTree.treeConnect(SmbTree.java:119)
at jcifs.smb.SmbFile.connect(SmbFile.java:827)
at jcifs.smb.SmbFile.connect0(SmbFile.java:797)
at jcifs.smb.SmbFile.open0(SmbFile.java:852)
at jcifs.smb.SmbFile.createNewFile(SmbFile.java:2265)
at Test.main(Test.java:22)
How to get rid of this?
But I think you have to make sure that the destination server (192.168.143.134) is up and reachable.
You can also write it the following way, since the IP is already included in the SMB link:
public static void main(String[] args) throws IOException {
    String user = "abc";
    String pass = "123456";
    String sharedFolder = "INPUT";
    String path = "smb://192.168.143.134/rtf2xml/" + sharedFolder + "/test.txt";
    NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication("", user, pass); // note here
    SmbFile smbFile = new SmbFile(path, auth);
    smbFile.createNewFile();
    SmbFileOutputStream smbfos = new SmbFileOutputStream(smbFile);
    smbfos.write("testing....and writing to a file".getBytes());
    System.out.println("completed ...nice !");
}
....
