I am using presto-server-0.149 on macOS 10.11. Just for testing purposes, I run a single node and everything is okay. When I add etc/catalog/localfile.properties with:
connector.name=localfile
presto-logs.http-request-log-location=/var/log/apache2/access_log
I get the following error:
2016-07-04T12:02:45.435-0700 INFO main io.airlift.bootstrap.LifeCycleManager Life cycle starting...
2016-07-04T12:02:45.435-0700 INFO main io.airlift.bootstrap.LifeCycleManager Life cycle startup complete. System ready.
2016-07-04T12:02:45.436-0700 INFO main com.facebook.presto.metadata.CatalogManager -- Added catalog jmx using connector jmx --
2016-07-04T12:02:45.436-0700 INFO main com.facebook.presto.metadata.CatalogManager -- Loading catalog etc/catalog/localfile.properties --
2016-07-04T12:02:45.797-0700 INFO main Bootstrap PROPERTY DEFAULT RUNTIME DESCRIPTION
2016-07-04T12:02:45.797-0700 INFO main Bootstrap presto-logs.http-request-log.pattern null null If log location is a directory this glob is used to match the file names in the directory
2016-07-04T12:02:45.797-0700 INFO main Bootstrap presto-logs.http-request-log.location var/log/http-request.log var/log/http-request.log Directory or file where http request logs are written
2016-07-04T12:02:45.797-0700 INFO main Bootstrap
2016-07-04T12:02:45.797-0700 WARN main Bootstrap UNUSED PROPERTIES
2016-07-04T12:02:45.797-0700 WARN main Bootstrap presto-logs.http-request-log-location=/var/log/apache2/access_log
2016-07-04T12:02:45.797-0700 WARN main Bootstrap
2016-07-04T12:02:45.989-0700 ERROR main com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
1) Configuration property 'presto-logs.http-request-log-location=/var/log/apache2/access_log' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:235)
1 error
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) Configuration property 'presto-logs.http-request-log-location=/var/log/apache2/access_log' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:235)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:466)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
at com.google.inject.Guice.createInjector(Guice.java:96)
at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:242)
at com.facebook.presto.localfile.LocalFileConnectorFactory.create(LocalFileConnectorFactory.java:64)
at com.facebook.presto.connector.ConnectorManager.createConnector(ConnectorManager.java:315)
at com.facebook.presto.connector.ConnectorManager.addCatalogConnector(ConnectorManager.java:169)
at com.facebook.presto.connector.ConnectorManager.createConnection(ConnectorManager.java:162)
at com.facebook.presto.connector.ConnectorManager.createConnection(ConnectorManager.java:148)
at com.facebook.presto.metadata.CatalogManager.loadCatalog(CatalogManager.java:99)
at com.facebook.presto.metadata.CatalogManager.loadCatalogs(CatalogManager.java:77)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:115)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:63)
UPDATE
Based on Dain Sundstrom's answer below, I was able to fix my problem. It turned out that the Facebook documentation for the Local File Connector is incorrect. Since I needed something to feed Presto in order to test the localfile connector, I pointed the config at Presto's own request log:
presto-logs.http-request-log.location=/var/presto/data/var/log/http-request.log
You have a typo in the configuration property. It should be:
presto-logs.http-request-log.location=/var/log/apache2/access_log
Also, this connector can only process the HTTP log format created by Presto itself, so you would need to reconfigure your Apache2 server to output the same format.
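For reference, a complete etc/catalog/localfile.properties pointing the connector at Presto's own request log would look roughly like this (a sketch; the location is the one from the update above, so adjust it to your installation):

connector.name=localfile
presto-logs.http-request-log.location=/var/presto/data/var/log/http-request.log

You should then be able to query the log through the connector, for example (assuming the connector's logs schema and http_request_log table):

SELECT * FROM localfile.logs.http_request_log LIMIT 10;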
Related
I am using these steps to run Apache Pulsar on Docker: https://github.com/streamnative/tgip/blob/master/episodes/001/demo.md
I was able to use these steps before to install and use Pulsar, but for some reason now, when I create the data directory, it ends up write-protected, and the Pulsar ZooKeeper container exits with the following logs as soon as it is created:
ERROR org.apache.zookeeper.server.ZooKeeperServerMain - Unable to access datadir, exiting abnormally
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory data/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:136) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:137) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:112) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:67) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:140) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:90) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
Unable to access datadir, exiting abnormally
23:36:15.223 [main] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled.
23:36:15.226 [main] ERROR org.apache.zookeeper.util.ServiceUtils - Exiting JVM with code 3
23:36:15.196 [PurgeTask] ERROR org.apache.zookeeper.server.DatadirCleanupManager - Error occurred while purging.
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory data/zookeeper/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:136) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.PurgeTxnLog.purge(PurgeTxnLog.java:80) ~[org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.server.DatadirCleanupManager$PurgeTask.run(DatadirCleanupManager.java:141) [org.apache.zookeeper-zookeeper-3.6.3.jar:3.6.3]
at java.util.TimerThread.mainLoop(Timer.java:556) [?:?]
at java.util.TimerThread.run(Timer.java:506) [?:?]
23:36:15.229 [PurgeTask] INFO org.apache.zookeeper.server.DatadirCleanupManager - Purge task completed
I have made sure that SELinux is disabled and tried changing permissions using chmod 777 data/ (the ownership checks I tried are sketched below), along with every other step I could find to resolve this, but still without success. Please help me with a possible resolution.
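The ownership checks, as a sketch (assuming the apachepulsar/pulsar image from the linked demo and a bind-mounted data/ directory; the uid below is only an illustration, use whatever the image actually reports):

# which user does the container run as?
docker run --rm apachepulsar/pulsar:latest id
# who owns the host directory that gets mounted?
ls -ldn data data/zookeeper
# make the host directory writable by that user, e.g. if id reports uid=10000
sudo chown -R 10000:0 data/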
Running an application in client mode, the driver logs are printed with the INFO messages below. Any idea how to resolve this? Are there any Spark configs to be updated, or missing?
[INFO ][dispatcher-event-loop-29][SparkRackResolver:54] Got an error when resolving hostNames. Falling back to /default-rack for all
The job runs fine, and this message does not appear in the executor logs.
Check this bug:
https://issues.apache.org/jira/browse/SPARK-28005
If you want to suppress this in the logs, you can try adding this to your log4j.properties:
log4j.logger.org.apache.spark.deploy.yarn.SparkRackResolver=ERROR
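If your Spark version ships log4j2 instead (Spark 3.3 and later), the equivalent entries in log4j2.properties would be something along these lines (a sketch):

logger.rackresolver.name = org.apache.spark.deploy.yarn.SparkRackResolver
logger.rackresolver.level = error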
This can happen when using spark-submit with master yarn in a local deploy mode (i.e. not using --deploy-mode cluster) and the path to the topology.py script in your core-site.xml is not correct.
The path to core-site.xml can be set via the HADOOP_CONF_DIR (or YARN_CONF_DIR) environment variable.
Check the path in the net.topology.script.file.name parameter of core-site.xml.
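For example, the relevant entry looks like this (a sketch; the script path is a placeholder):

<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/topology.py</value>
</property>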
If the path is incorrect, deploying the driver in local mode will lead to an execution error with the following warning:
23/01/15 18:39:43 WARN ScriptBasedMapping: Exception running /home/alexander/xxx/.conf/topology.py 10.15.21.199
java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py" (in directory "/home/john"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
...
23/01/15 18:39:43 INFO SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
I am attempting to run a job on a Spark cluster set up on Mesos. I can run a job if I copy the jar to the server and then use a file: URL, but I cannot get Spark to download a jar using https:. Every time, I get the error below in the stderr file.
I0226 00:11:05.618361 22652 logging.cpp:172] INFO level logging started!
I0226 00:11:05.618552 22652 fetcher.cpp:409] Fetcher Info: ...
I0226 00:11:05.619721 22652 fetcher.cpp:364] Fetching URI 'https://jenkins.company.com/nexus/...
I0226 00:11:05.619738 22652 fetcher.cpp:238] Fetching directly into the sandbox directory
I0226 00:11:05.619751 22652 fetcher.cpp:176] Fetching URI 'https://jenkins.company.com/nexus/...
I0226 00:11:05.619762 22652 fetcher.cpp:126] Downloading resource from 'https://jenkins.company.com/nexus/...
Failed to fetch 'https://jenkins.company.com/nexus/... ': Error downloading resource: SSL connect error
Failed to synchronize with slave (it's probably exited)
I am able to use wget to download the jar from the specified URL. I have also verified that the JDK on the server has the correct certificate for the Nexus server from which I am attempting to download the jar.
I am new to Spark and Mesos and any help resolving this issue would be greatly appreciated.
Did you specify your private Nexus repository with the --repositories flag on application start?
I personally have never used encryption together with Spark, but from the docs it seems to be possible, and perhaps necessary, to configure it within Spark. I guess configuring just the JDK is not enough.
See
http://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management
http://spark.apache.org/docs/latest/configuration.html#encryption
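For illustration, a sketch of passing a private repository to spark-submit (the master URL, repository URL, and coordinates below are placeholders, not values from the question; note that --repositories is only consulted for the Maven coordinates given with --packages, not for a plain application-jar URL):

spark-submit \
  --master mesos://mesos-master.example.com:5050 \
  --repositories https://nexus.example.com/repository/releases \
  --packages com.example:my-lib:1.0 \
  --class com.example.Main \
  my-app.jar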
I have a small Rexster/Titan cluster using Cassandra. A Rexster extension is used to query the graph. I did some benchmarking and started and stopped Rexster/Titan many times. But now I run into a strange issue: Rexster refuses to start but does not display any error message.
I tried to figure out what is causing this and reduced the cluster to a single node 192.168.0.4.
If I remove my extension Rexster manages to start up.
# console output
Forking Cassandra...
Running `nodetool statusthrift`..... OK
(returned exit status 0 and printed string "running").
Forking Titan + Rexster...
Connecting to Titan + Rexster (127.0.0.1:8184)...... OK
(connected to 127.0.0.1:8184).
Run rexster-console.sh to connect.
But when I place my extension uber-JAR in the ext folder, Rexster refuses to start.
# console output
Forking Cassandra...
Running `nodetool statusthrift`..... OK
(returned exit status 0 and printed string "running").
Forking Titan + Rexster...
Connecting to Titan + Rexster (127.0.0.1:8184)............................
timeout exceeded (60 seconds): could not connect to 127.0.0.1:8184
See /var/lib/titan/bin/../log/rexstitan.log for Rexster log output.
If I now check rexstitan.log, as suggested by the console output, I cannot find any error message.
# rexstitan.log
0 [main] INFO com.tinkerpop.rexster.Application - .:Welcome to Rexster:.
73 [main] INFO com.tinkerpop.rexster.server.RexsterProperties -
Using [/var/lib/titan/rexhome/../conf/rexster-cassandra-cluster.xml]
as configuration source.
78 [main] INFO com.tinkerpop.rexster.Application - Rexster is watching
[/var/lib/titan/rexhome/../conf/rexster-cassandra-cluster.xml] for change.
244 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager -
Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,
name=ClusterTitanConnectionPool,ServiceType=connectionpool
252 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor -
AddHost: 192.168.0.4
537 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager -
Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,
name=KeyspaceTitanConnectionPool,ServiceType=connectionpool
538 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor -
AddHost: 192.168.0.4
1951 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration -
Set cluster.partition=false from store features
1971 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration -
Set default timestamp provider MICRO
2019 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration -
Generated unique-instance-id=7f0000012902-node1
2045 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager -
Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,
name=ClusterTitanConnectionPool,ServiceType=connectionpool
2046 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor -
AddHost: 192.168.0.4
2053 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager -
Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,
name=KeyspaceTitanConnectionPool,ServiceType=connectionpool
2054 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor -
AddHost: 192.168.0.4
2228 [main] INFO com.thinkaurelius.titan.diskstorage.Backend -
Initiated backend operations thread pool of size 4
6619 [main] INFO com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog -
Loaded unidentified ReadMarker start time Timepoint[1423479705116000 μs]
into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller@212f3ff1
6625 [main] INFO com.tinkerpop.rexster.RexsterApplicationGraph -
Graph [graph] - configured with allowable namespace [*:*]
The only entry that looks strange to me is the one concerning the log:
6619 [main] INFO com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog -
Loaded unidentified ReadMarker start time Timepoint[1423479705116000 μs]
into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller@212f3ff1
My extension uses the logger for debugging. You can see the instantiation and usage on GitHub: https://github.com/sebschlicht/titan-graphity-kribble/blob/master/src/main/java/de/uniko/sebschlicht/titan/extensions/GraphityExtension.java#L22
Though Rexster failed to start, there is a process with the PID displayed in the console, but curl fails to connect to Rexster:
$ curl 192.168.0.4:8182
curl: (7) Failed to connect to 192.168.0.4 port 8182: Connection refused
Why doesn't Rexster throw an exception? How can I debug this situation?
edit:
I removed any log messages in my code. I removed all exceptions that may be thrown during startup. Still, Rexster refuses to start with my extension, and the only hint in the log files is the unidentified read marker. I have no clue what prevents Rexster from starting.
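(A thread dump of the seemingly hung process might at least show where startup blocks; a sketch, using the PID shown in the console: jstack <pid>)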
The log message is nothing to worry about.
After rebuilding the application step by step in another project, Rexster is now able to start with the extension. During this rebuild I noticed two situations that can cause the behaviour described:
Missing dependency
If your project depends on a second project, you might use Maven to inject it as a dependency. However, if you use
mvn clean package
to build the extension's JAR file, it does not contain this dependency by default. You need to use a Maven plugin (e.g. the maven-shade-plugin) to create a shaded JAR that contains all the dependencies your extension needs, as sketched below. Set the dependency scope to provided for all Titan/Rexster/Blueprints-related dependencies. Use the shaded uber-JAR to deploy the extension to Rexster.
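A minimal sketch of that setup in the pom.xml (the versions are illustrative):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>

and, for the Rexster/Titan/Blueprints artifacts, something like:

<dependency>
  <groupId>com.tinkerpop.rexster</groupId>
  <artifactId>rexster-core</artifactId>
  <version>2.4.0</version>
  <scope>provided</scope>
</dependency>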
However, this was not new to me and should not have caused the problem in my case. There might be more situations that cause this behaviour, or maybe there was a problem with Maven that messed up the shaded JAR. Feel free to browse the commit on GitHub to catch this voodoo.
Missing extension
Another cause of this behaviour is a missing extension.
If you specify an extension in the com.tinkerpop.rexster.extension.RexsterExtension resource file that is not present on startup, Rexster neither logs nor throws an exception, but refuses to start.
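For completeness: this resource file is a standard Java ServiceLoader descriptor, typically located at src/main/resources/META-INF/services/com.tinkerpop.rexster.extension.RexsterExtension, and it contains one fully-qualified extension class name per line, e.g. in my case:

de.uniko.sebschlicht.titan.extensions.GraphityExtension

Every class listed there must actually be contained in the deployed JAR, otherwise Rexster hangs at startup as described above.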
I'm trying to run Nutch and load the crawled data into Cassandra.
I've got my sbt file
"org.apache.gora" % "gora-cassandra" % "0.3",
"org.apache.nutch" % "nutch" % "2.2.1",
"com.datastax.cassandra" % "cassandra-driver-core" % "2.1.2"
and am kicking off the job
ToolRunner.run(NutchConfiguration.create(), new Crawler(), Array("urls"));
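(For completeness, the imports this call assumes, with the class locations matching the stack trace below; a sketch:)

import org.apache.hadoop.util.ToolRunner
import org.apache.nutch.crawl.Crawler
import org.apache.nutch.util.NutchConfiguration

ToolRunner.run(NutchConfiguration.create(), new Crawler(), Array("urls"))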
but am hitting the slightly vague error below.
EDIT - updated to be full logs from start of request
[Ljava.lang.String;@526950c7
****file:/home/abdev/Working/Qordaoba/gl/web-crawling-services/crawling-services/urls
[error] play - Cannot invoke the action, eventually got an error: java.lang.RuntimeException: job failed: name=generate: null, jobid=job_local_0002
[error] application -
! #6kemm159h - Internal server error, for (POST) [/nutch/job] ->
play.api.Application$$anon$1: Execution exception[[RuntimeException: job failed: name=generate: null, jobid=job_local_0002]]
at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.6.jar:2.3.6]
at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.6.jar:2.3.6]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.6.jar:2.3.6]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.6.jar:2.3.6]
at scala.Option.map(Option.scala:145) [scala-library-2.11.1.jar:na]
Caused by: java.lang.RuntimeException: job failed: name=generate: null, jobid=job_local_0002
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54) ~[nutch-2.2.1.jar:na]
at org.apache.nutch.crawl.GeneratorJob.run(GeneratorJob.java:199) ~[nutch-2.2.1.jar:na]
at org.apache.nutch.crawl.Crawler.runTool(Crawler.java:68) ~[nutch-2.2.1.jar:na]
at org.apache.nutch.crawl.Crawler.run(Crawler.java:152) ~[nutch-2.2.1.jar:na]
at org.apache.nutch.crawl.Crawler.run(Crawler.java:250) ~[nutch-2.2.1.jar:na]
In Cassandra, the keyspace webpage and the tables sc, p, and f are created before the error is thrown.
EDIT --- If I put all the jars below (sorry, I know it's a long list) in my lib folder, then the job runs, and the first few logs are about connecting to Cassandra. I don't see those logs when I try to use just the SBT dependencies. (A way to compare the two classpaths is sketched after the jar list.)
Logs when running with below jar files:
SLF4J: The following set of substitute loggers may have been accessed
SLF4J: during the initialization phase. Logging calls during this
SLF4J: phase were not honored. However, subsequent logging calls to these
SLF4J: loggers will work as normally expected.
SLF4J: See also http://www.slf4j.org/codes.html#substituteLogger
SLF4J: org.webjars.WebJarExtractor
[info] Compiling 5 Scala sources and 1 Java source to /home/abdev/Working/Qordaoba/gl/web-crawling-services/crawling-services/target/scala-2.11/classes...
14/12/10 07:31:03 INFO play: Application started (Dev)
14/12/10 07:31:03 INFO slf4j.Slf4jLogger: Slf4jLogger started
[Ljava.lang.String;@3a6f1296
14/12/10 07:31:05 INFO connection.CassandraHostRetryService: Downed Host Retry service started with queue size -1 and retry delay 10s
14/12/10 07:31:05 INFO service.JmxMonitor: Registering JMX me.prettyprint.cassandra.service_Test Cluster:ServiceType=hector,MonitorType=hector
14/12/10 07:31:06 INFO crawl.InjectorJob: InjectorJob: Using class org.apache.gora.cassandra.store.CassandraStore as the Gora storage class.
14/12/10 07:31:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/10 07:31:06 INFO input.FileInputFormat: Total input paths to process : 1
Full list of Jar files
activation-1.1.jar
antlr-3.2.jar
aopalliance-1.0.jar
apache-cassandra-1.2.19.jar
apache-cassandra-clientutil-1.2.19.jar
apache-cassandra-thrift-1.2.19.jar
apache-nutch-2.2.1.jar
asm-3.2.jar
avro-1.3.3.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.8.0.jar
commons-cli-1.1.jar
commons-cli-1.2.jar
commons-codec-1.2.jar
commons-codec-1.4.jar
commons-collections-3.2.1.jar
commons-configuration-1.6.jar
commons-digester-1.8.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-lang-2.6.jar
commons-logging-1.1.1.jar
commons-math-2.1.jar
commons-net-1.4.1.jar
compress-lzf-0.8.4.jar
concurrentlinkedhashmap-lru-1.3.jar
cql-internal-only-1.4.1.zip
crawler-commons-0.2.jar
cxf-api-2.5.2.jar
cxf-common-utilities-2.5.2.jar
cxf-rt-bindings-xml-2.5.2.jar
cxf-rt-core-2.5.2.jar
cxf-rt-frontend-jaxrs-2.5.2.jar
cxf-rt-transports-common-2.5.2.jar
cxf-rt-transports-http-2.5.2.jar
elasticsearch-0.19.4.jar
geronimo-javamail_1.4_spec-1.7.1.jar
geronimo-stax-api_1.0_spec-1.0.1.jar
gora-cassandra-0.3.jar
gora-core-0.3.jar
guava-11.0.2.jar
guava-13.0.1.jar
hadoop-core-1.2.0.jar
hamcrest-core-1.3.jar
hector-core-1.1-4.jar
high-scale-lib-1.1.2.jar
hsqldb-2.2.8.jar
httpclient-4.1.1.jar
httpcore-4.1.jar
icu4j-4.0.1.jar
jackson-core-asl-1.8.8.jar
jackson-core-asl-1.9.2.jar
jackson-jaxrs-1.7.1.jar
jackson-mapper-asl-1.8.8.jar
jackson-mapper-asl-1.9.2.jar
jackson-xc-1.7.1.jar
jamm-0.2.5.jar
jaxb-api-2.2.2.jar
jaxb-impl-2.2.3-1.jar
jbcrypt-0.3m.jar
jdom-1.1.jar
jersey-core-1.8.jar
jersey-json-1.8.jar
jersey-server-1.8.jar
jettison-1.3.1.jar
jetty-6.1.26.jar
jetty-client-6.1.26.jar
jetty-sslengine-6.1.26.jar
jetty-util5-6.1.26.jar
jetty-util-6.1.26.jar
jline-0.9.1.jar
jline-1.0.jar
json-simple-1.1.jar
jsr305-1.3.9.jar
jsr311-api-1.1.1.jar
junit-4.11.jar
juniversalchardet-1.0.3.jar
libthrift-0.7.0.jar
log4j-1.2.16.jar
lucene-analyzers-3.6.0.jar
lucene-core-3.6.0.jar
lucene-highlighter-3.6.0.jar
lucene-memory-3.6.0.jar
lucene-queries-3.6.0.jar
lz4-1.1.0.jar
metrics-core-2.2.0.jar
neethi-3.0.1.jar
org.osgi.core-4.0.0.jar
org.restlet.ext.jackson-2.0.5.jar
org.restlet-2.0.5.jar
oro-2.0.8.jar
paranamer-2.2.jar
paranamer-ant-2.2.jar
paranamer-generator-2.2.jar
qdox-1.10.1.jar
serializer-2.7.1.jar
servlet-api-2.5-6.1.14.jar
servlet-api-2.5-20081211.jar
slf4j-api-1.6.6.jar
slf4j-api-1.7.2.jar
slf4j-log4j12-1.6.1.jar
slf4j-log4j12-1.7.2.jar
snakeyaml-1.6.jar
snappy-java-1.0.5.jar
snaptree-0.1.jar
solr-solrj-3.4.0.jar
spring-aop-3.0.6.RELEASE.jar
spring-asm-3.0.6.RELEASE.jar
spring-beans-3.0.6.RELEASE.jar
spring-context-3.0.6.RELEASE.jar
spring-core-3.0.6.RELEASE.jar
spring-expression-3.0.6.RELEASE.jar
spring-web-3.0.6.RELEASE.jar
stax2-api-3.1.1.jar
stax-api-1.0.1.jar
stax-api-1.0-2.jar
thrift-python-internal-only-0.7.0.zip
tika-core-1.3.jar
woodstox-core-asl-4.1.1.jar
wsdl4j-1.6.2.jar
wstx-asl-3.2.7.jar
xercesImpl-2.9.1.jar
xml-apis-1.3.04.jar
xmlenc-0.52.jar
xmlParserAPIs-2.6.2.jar
xmlschema-core-2.0.1.jar
zookeeper-3.3.1.jar
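One way to compare this list with what SBT actually resolves is sbt's built-in classpath task (a sketch, using the sbt 0.13 syntax this project uses):

sbt "show compile:dependencyClasspath"

Jars such as gora-core or hector-core that appear in the list above but are missing from that output would be the transitive dependencies SBT is not pulling in.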
Thanks,
Brent