Exception during bulk loads to Cassandra 1.1 - cassandra

I'm getting an exception during bulk loads with sstableloader. I'm using JDK 1.6.0_25 64-bit on Ubuntu 12.04 Server. IPv6 is turned off. Network communication between the hosts works correctly. I'm going crazy ;-(
Exception in thread "Streaming to /192.168.219.36:1" java.lang.RuntimeException: java.net.SocketException: Invalid argument or cannot assign requested address
at org.apache.cassandra.utils.FBUtilities.unchecked(FBUtilities.java:628)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Invalid argument or cannot assign requested address
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:276)
at org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:96)
at org.apache.cassandra.streaming.FileStreamTask.connectAttempt(FileStreamTask.java:245)
at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
All hosts run Cassandra 1.1 (DataStax edition). Ports 7000, 7199, and 9160 are open. Any ideas?

Why are you streaming to port 1? It should be the RPC_PORT and the default for that is 9160.
/192.168.219.36: 1
What params are you passing to the bulkloader?
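For reference, a bulk load with the standalone loader is usually invoked roughly like this (a sketch only; the host and path are placeholders, and the -d/--nodes option assumes the 1.1-era standalone sstableloader):
sstableloader -d 192.168.219.36 /path/to/MyKeyspace/MyColumnFamily
The last argument is the directory holding the SSTables, named after the keyspace and column family.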

Related

How to handle dynamic port in Apache Spark / Hadoop Yarn

I have set up a Hadoop cluster with 1 name node and 2 data nodes. I've also installed YARN and Spark on top of that on the name node.
I notice that whenever I try to run the example jar here:
spark-submit --deploy-mode cluster --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_*.jar 10
I always get a "no route to host" exception:
Uncaught exception: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:558)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:277)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:926)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:925)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:925)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:957)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Caused by: java.io.IOException: Failed to connect to lnx-pen205/xx.xx.xx.xx:9222
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:288)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: lnx-pen205/xx.xx.xx.xx:9222
Caused by: java.net.NoRouteToHostException: No route to host
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
I noticed that the port being used is randomly assigned at runtime. The example jar works if, for example, I set spark.driver.port to 9222 and then open that port in the firewall. But then if any other session is started (for example, a pyspark shell), it won't start because the port is already in use.
My question is: how do I allow connections to the ports dynamically chosen by Spark/YARN? I read somewhere that I should disable the firewall, but that does not sound like a good idea. Thanks in advance.
There's spark.driver.port as well as spark.driver.blockManager.port. Each is the start of a range that extends by spark.port.maxRetries (default 16).
So you'll need to open at least 32 ports for these.
I did some testing with dynamic Spark ports in Mesos + Docker a few years ago - https://stackoverflow.com/a/56486271/2308683
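A minimal sketch of pinning those ranges instead of fighting random ports, assuming firewalld on the cluster nodes and arbitrarily chosen base ports (40000 for the driver, 40017 for the block manager):
spark-submit --deploy-mode cluster \
  --conf spark.driver.port=40000 \
  --conf spark.driver.blockManager.port=40017 \
  --conf spark.port.maxRetries=16 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_*.jar 10
# open the two 17-port ranges (40000-40016 and 40017-40033) on every node
sudo firewall-cmd --permanent --add-port=40000-40033/tcp
sudo firewall-cmd --reload
With fixed base ports, concurrent sessions retry upward within the opened range instead of colliding on a single port.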

JanusGraph, Spark cluster failing to connect to Cassandra

I am trying to run a Spark job on a cluster that creates a JanusGraph.
I have an instance of JanusGraph Server, Cassandra, and ES running on a single machine; only the Spark computation happens on the cluster. (Basically, I did a janusgraph.sh start on that machine.)
My configuration is as follows (x is the IP of the machine I am running the above instances on):
def getGraph(): JanusGraph = {
  val config = JanusGraphFactory.build()
  config.set("storage.backend", "cassandrathrift")
  config.set("storage.cassandrathrift.keyspace", "jgex")
  config.set("storage.hostname", "x")
  config.set("index.jgex.backend", "elasticsearch")
  config.set("index.jgex.index-name", "jgex")
  config.set("jgex.hostname", "x")
  config.open()
}
But when I do a spark-submit of the fat jar on the cluster, I get this:
java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:409)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1376)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:164)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:133)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:123)
at org.janusgraph.core.JanusGraphFactory$Builder.open(JanusGraphFactory.java:264)
at janus_create$.getGraph(janus_create.scala:66)
at janus_create$.makePropertiesandIndexes(janus_create.scala:830)
at janus_create$.main(janus_create.scala:921)
at janus_create.main(janus_create.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
... 16 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager.getCassandraPartitioner(CassandraThriftStoreManager.java:219)
at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager.<init>(CassandraThriftStoreManager.java:198)
... 21 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at org.janusgraph.diskstorage.cassandra.thrift.thriftpool.CTConnectionFactory.makeRawConnection(CTConnectionFactory.java:110)
at org.janusgraph.diskstorage.cassandra.thrift.thriftpool.CTConnectionFactory.makeObject(CTConnectionFactory.java:74)
at org.janusgraph.diskstorage.cassandra.thrift.thriftpool.CTConnectionFactory.makeObject(CTConnectionFactory.java:43)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1179)
at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager.getCassandraPartitioner(CassandraThriftStoreManager.java:216)
... 22 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
... 28 more
I tried switching between cassandra and cassandrathrift, but neither worked. Also, where do I specify where my Gremlin Server is running? Is that relevant?
The prepackaged distribution assumes a single, localhost node of each Cassandra, Elasticsearch, and Gremlin Server. Notice the java.net.ConnectException: Connection refused at the bottom of your stack trace. If you have Spark running on a remote cluster, you need to ensure that the servers are available on a non-localhost address.
Stop the distribution first with bin/janusgraph.sh stop
Update the listen_address and rpc_address in $JANUSGRAPH_HOME/conf/cassandra/cassandra.yaml using the IP address of the machine (Cassandra docs)
Add network.host to $JANUSGRAPH_HOME/elasticsearch/config/elasticsearch.yml using the IP address of the machine (Elasticsearch docs)
Update the host in $JANUSGRAPH_HOME/conf/gremlin-server/gremlin-server.yaml using the IP address of the machine (JanusGraph docs)
Assuming you are using the default Gremlin Server configuration gremlin-server.yaml, you also need to update the properties file $JANUSGRAPH_HOME/conf/gremlin-server/janusgraph-cassandra-es-server.properties: set storage.hostname and index.search.hostname to the IP address of the machine, matching the server settings above. The edits are summarized below.
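Summarizing the edits (x is the machine's non-localhost IP; paths are relative to $JANUSGRAPH_HOME, and the exact layout of each file may vary slightly between versions):
# conf/cassandra/cassandra.yaml
listen_address: x
rpc_address: x
# elasticsearch/config/elasticsearch.yml
network.host: x
# conf/gremlin-server/gremlin-server.yaml
host: x
# conf/gremlin-server/janusgraph-cassandra-es-server.properties
storage.hostname=x
index.search.hostname=x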
Update: It seems that there are errors in your graph connection properties that might be contributing to the problem:
config.set("storage.cassandrathrift.keyspace", "jgex") -- should be "storage.cassandra.keyspace"
config.set("jgex.hostname", "x") -- should be "index.jgex.hostname"

Connection Error in Ignite web agent

For analysis of a node in Apache Ignite, I am using the monitoring tool at [1]: https://console.gridgain.com/. I tried running ignite-web-agent.sh to make a connection with the web server at the link above, but I am getting the following error message. I also tried replacing localhost with the IP address of the machine, but it is still not working. While running an Ignite node, do I have to start the Ignite REST server for the connection? If yes, how can I enable this REST server mode through Java code?
ignite-web-agent-1.7.2]$ . ignite-web-agent.sh
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[12:58:13,881][INFO ][main][AgentLauncher] Starting Apache Ignite Web Console Agent...
Agent configuration:
User's security tokens : ****************Fq3q
URI to Ignite node REST server: http://localhost:8080
URI to Ignite Console server : https://console.gridgain.com:3002
Path to agent property file : default.properties
Path to JDBC drivers folder : /home/rishikesh/ignite-web-agent-1.7.2/jdbc-drivers
[12:58:14,318][INFO ][EventThread][AgentLauncher] Connecting to: https://console.gridgain.com:3002
[12:58:34,314][ERROR][EventThread][AgentLauncher] Connection error.
io.socket.client.SocketIOException: timeout
at io.socket.client.Manager$1$4$1.run(Manager.java:312)
at io.socket.thread.EventThread$2.run(EventThread.java:75)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[12:58:37,304][INFO ][EventThread][AgentLauncher] Connecting to: https://console.gridgain.com:3002
[12:58:53,744][ERROR][EventThread][AgentLauncher] Connection error.
io.socket.engineio.client.EngineIOException: xhr poll error
at io.socket.engineio.client.Transport.onError(Transport.java:64)
at io.socket.engineio.client.transports.PollingXHR.access$100(PollingXHR.java:18)
at io.socket.engineio.client.transports.PollingXHR$6$1.run(PollingXHR.java:126)
at io.socket.thread.EventThread$2.run(EventThread.java:75)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: console.gridgain.com
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1890)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1885)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1884)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1457)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at io.socket.engineio.client.transports.PollingXHR$Request$1.run(PollingXHR.java:214)
... 1 more
The web agent can't resolve console.gridgain.com.
Try changing 'URI to Ignite Console server' in /home/rishikesh/ignite-web-agent-1.7.2/default.properties; replace:
server-uri=https://console.gridgain.com:3002
to:
server-uri=https://104.197.2.239:3002
To enable the REST module, move the ignite-rest-http folder from lib/optional/ to lib/ in the binary distribution, or add the following dependency to your Maven project:
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-rest-http</artifactId>
    <version>${ignite.version}</version>
</dependency>
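If you run from the binary distribution rather than building with Maven, the move is a single copy; the optional-modules directory is typically libs/optional/ (shown here under an assumed $IGNITE_HOME), so adjust the path to match your install:
# copy the REST HTTP module into the active libs folder and restart the node
cp -r $IGNITE_HOME/libs/optional/ignite-rest-http $IGNITE_HOME/libs/
After a restart, the REST server listens on port 8080 by default, which matches the 'URI to Ignite node REST server' line in the agent output above.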

Connection error: Request did not complete within rpc_timeout when doing cqlsh

I have installed Cassandra v2.0.16 on CentOS release 6.6 (Final).
But when I try to run cqlsh on it, it gives the following error:
Connection error: Request did not complete within rpc_timeout
On my local machine, running Ubuntu 12.04, it works fine, but on my production machine (a user-managed VPS running CentOS) it's not working properly.
I looked in /var/log/cassandra/system.log:
ERROR [main] 2015-07-02 04:42:12,973 CassandraDaemon.java (line 571) Exception encountered during startup
org.jboss.netty.channel.ChannelException: Failed to bind to: localhost/127.0.0.1:9042
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.apache.cassandra.transport.Server.run(Server.java:159)
at org.apache.cassandra.transport.Server.start(Server.java:110)
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:489)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:643)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I am just stuck here.
Any help would be appreciated; I don't see how to solve this problem.
Note: I have reinstalled Cassandra two or three times, but I'm stuck with the same problem.
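The BindException above means something is already listening on 127.0.0.1:9042, most likely a Cassandra process left over from an earlier start. A quick way to check, assuming netstat or lsof is installed:
# show which process holds port 9042
sudo netstat -tlnp | grep 9042
sudo lsof -i :9042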

Scriptella: ResourceException with Jaybird

I'm very new to Linux/Java/Scriptella, and I'm trying a JDBC connection with Scriptella to a local Firebird database, but I'm receiving the following errors:
2-dic-2013 1.03.34 <INFO> Execution Progress.Initializing properties: 1%
2-dic-2013 1.03.34 <GRAVE> Script /home/maurizio/Scrivania/JATROPHA/applicazioni/prova_per_scriptella.etl execution failed.
javax/resource/ResourceException
2-dic-2013 1.03.34 <GRAVE> Scriptella bug report. Submit to issue tracker.
Scriptella version: 1.1
Exception:
scriptella.execution.EtlExecutorException: javax/resource/ResourceException
at scriptella.execution.EtlExecutor.execute(EtlExecutor.java:190)
at scriptella.tools.launcher.EtlLauncher.execute(EtlLauncher.java:276)
at scriptella.tools.launcher.EtlLauncher.launch(EtlLauncher.java:193)
at scriptella.tools.launcher.EtlLauncher.main(EtlLauncher.java:321)
Caused by: java.lang.NoClassDefFoundError: javax/resource/ResourceException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at scriptella.core.DriverFactory.getDriver(DriverFactory.java:53)
at scriptella.core.ConnectionManager.<init>(ConnectionManager.java:70)
at scriptella.core.Session.<init>(Session.java:51)
at scriptella.execution.EtlExecutor.prepare(EtlExecutor.java:248)
at scriptella.execution.EtlExecutor.execute(EtlExecutor.java:178)
... 3 more
Caused by: java.lang.ClassNotFoundException: javax.resource.ResourceException
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
... 10 more
I'm using Ubuntu 10.04 Lucid Lynx.
I start Scriptella from the console in the directory /home/maurizio/Scrivania/JATROPHA/applicazioni/ with the command scriptella/scriptella-1.1/bin/scriptella.sh -debug "prova_per_scriptella.etl".
My ETL file prova_per_scriptella.etl contains the following lines:
<!DOCTYPE etl SYSTEM "http://scriptella.javaforge.com/dtd/etl.dtd">
<etl>
  <description>Prova connessione Firebird</description>
  <connection
    id="fb_destination"
    driver="org.firebirdsql.jdbc.FBDriver"
    url="jdbc:firebirdsql:localhost/3050:/home/maurizio/Scrivania/JATROPHA/db/jatrofa.fdb"
    user="user"
    password="password"
    classpath="/home/maurizio/Scrivania/JATROPHA/applicazioni/jaybird/Jaybird-2.2.3JDK_1.6/jaybird-2.2.3.jar"
  />
</etl>
The env var $_SCRIPTELLA_CP of the launcher script scriptella/scriptella-1.1/bin/scriptella.sh resolves to:
:/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/commons-jexl.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/commons-logging.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/jaybird-2.2.3.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/jsqlparser-0.8.0.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/scriptella-core.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/scriptella-drivers.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/scriptella-tools.jar:
/home/maurizio/Scrivania/JATROPHA/applicazioni/scriptella/scriptella-1.1/lib/sqlsheet-6.5.jar
Any help would be much appreciated.
Thanks in advance.
You are missing the required dependency connector-api-1.5.jar, or you need to use jaybird-full-2.2.3.jar (which includes both the normal Jaybird and connector-api). See the release notes of Jaybird 2.2.3.
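For example, the connection element's classpath attribute could point at the full jar instead (same directory as in the question, assuming jaybird-full-2.2.3.jar has been downloaded there):
classpath="/home/maurizio/Scrivania/JATROPHA/applicazioni/jaybird/Jaybird-2.2.3JDK_1.6/jaybird-full-2.2.3.jar"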
The error means that you are probably missing additional J2EE classes on the classpath. Try downloading mini-j2ee.jar from http://www.firebirdsql.org/en/jdbc-driver/ and adding it to the classpath attribute in the ETL file:
classpath="/path/to/mini-j2ee.jar:/home/maurizio/Scrivania/JATROPHA/applicazioni/jaybird/Jaybird-2.2.3JDK_1.6/jaybird-2.2.3.jar"
