TaskMemoryManager is disabled - Cygwin

I am trying to run the TaskTracker on Cygwin, but the following error occurs:
mapred.TaskTracker: Process Tree implementation is missing on this system. TaskMemoryManager is disabled.
Everything else (i.e. the NameNode, SecondaryNameNode, JobTracker, and DataNode) works properly through Cygwin; the issue is only with the TaskTracker. My Hadoop version is hadoop-19.0.1.
So, how do I get rid of this? If anybody knows, please help!
Your help will be appreciated!

I haven't encountered this specific problem, but...
Make sure that you are using the same Hadoop version that is in use on the cluster.
Update Hadoop to a more recent version if possible.
The following patches may (or may not) address your problem:
https://issues.apache.org/jira/browse/HADOOP-6230
https://issues.apache.org/jira/browse/MAPREDUCE-834
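A quick sanity check for the version mismatch point (assuming the hadoop script is on your PATH, in Cygwin and on the cluster nodes alike):

# print the client-side Hadoop version; run the same command on a
# cluster node and make sure the two match
hadoop version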

Related

Rattle(useGtkBuilder = TRUE) starts Rattle, but Rattle itself is frozen

I have R version 3.4.0 on Windows 7. I installed the rattle package. There were errors, but I found out from Stack Overflow that using rattle(useGtkBuilder = TRUE) solves the problem.
It did, but only partially. Now, when I load a CSV file and click the Execute button in the Rattle GUI, nothing happens!
All the menu options apparently work, but no file gets loaded.
Any ideas?
I was able to solve this problem in the following way:
>install.packages("rattle", repos="http://rattle.togaware.com", type="source")
>install.packages("https://cran.r-project.org/bin/windows/contrib/3.3/RGtk2_2.20.31.zip", repos=NULL)
>library(RGtk2)
>library(rattle)
>rattle()
There were several messages asking me to install packages related to Cairo and XML, which I allowed to proceed.
I don't know exactly why it worked, but everything is working fine: I ran a logistic regression model, and the results and the log code look great.
Thanks, and I hope this helps other users toward a simple, lucid way out.
Regards,
Raghavendra B
As per the Rattle troubleshooting page, the current version of Rattle on CRAN has an issue with the most recent release of RGtk2 on CRAN. A fix is being readied for CRAN at present, but in the meantime the following should fix it for RGtk2 2.20.33:
> install.packages("https://togaware.com/access/rattle_5.0.14.tar.gz",
                   repos = NULL,
                   type = "source")
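After installing from source, restart R so the old copy of rattle is not still loaded, then reload and launch as before:

> library(RGtk2)
> library(rattle)
> rattle()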

Spark 1.4 image for Google Cloud?

With bdutil, the latest tarball version I can find is for Spark 1.3.1:
gs://spark-dist/spark-1.3.1-bin-hadoop2.6.tgz
There are a few new DataFrame features in Spark 1.4 that I want to use. Any chance a Spark 1.4 image will be available for bdutil, or is there a workaround?
UPDATE:
Following the suggestion from Angus Davis, I downloaded and pointed to spark-1.4.1-bin-hadoop2.6.tgz, and the deployment went well; however, I ran into an error when calling SqlContext.parquetFile(). I cannot explain why this exception is possible, since GoogleHadoopFileSystem should be a subclass of org.apache.hadoop.fs.FileSystem. I will continue to investigate.
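For reference, the failing call was of this shape (a minimal sketch; the bucket and path are placeholders, not the actual job):

// Scala, Spark 1.4: reading Parquet from GCS via SQLContext
val df = sqlContext.parquetFile("gs://some-bucket/some-path")

The stack trace: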
Caused by: java.lang.ClassCastException: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem cannot be cast to org.apache.hadoop.fs.FileSystem
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2595)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:112)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:144)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:504)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
I asked a separate question about the exception here.
UPDATE:
The error turned out to be a Spark defect; a resolution/workaround is provided in the question linked above.
Thanks!
Haiying
If a local workaround is acceptable, you can copy spark-1.4.1-bin-hadoop2.6.tgz from an Apache mirror into a bucket that you control. You can then edit extensions/spark/spark-env.sh and change SPARK_HADOOP2_TARBALL_URI='<your copy of spark 1.4.1>' (make certain that the service account running your VMs has permission to read the tarball).
Note that I haven't done any testing to see if Spark 1.4.1 works out of the box right now, but I'd be interested in hearing your experience if you decide to give it a go.
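A sketch of those steps (the bucket name is a placeholder, and the mirror URL is my assumption of where the 1.4.1 tarball lives):

# fetch the tarball from an Apache mirror and copy it into a bucket you control
wget https://archive.apache.org/dist/spark/spark-1.4.1/spark-1.4.1-bin-hadoop2.6.tgz
gsutil cp spark-1.4.1-bin-hadoop2.6.tgz gs://your-bucket/

# then point extensions/spark/spark-env.sh at your copy:
SPARK_HADOOP2_TARBALL_URI='gs://your-bucket/spark-1.4.1-bin-hadoop2.6.tgz'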

Perl: libapt-pkg-perl AptPkg::Cache->new strange behaviour under precise

I have a very strange problem with the constructor of the AptPkg::Cache object in the Precise package of libapt-pkg-perl (v. 0.1.25).
The Perl script is designed to download a Debian package for three different architectures (i386, armel, armhf). For each architecture I do the following (a rough sketch in code follows the list):
1. Configure AptPkg::Config '$_config' with the right parameters and package lists for the desired architecture.
2. Create the cache object with AptPkg::Cache->new.
3. Call the method AptPkg::Cache->policy to create the AptPkg::Policy object.
4. Call the method AptPkg::Policy->candidate("program-name").
5. Download the package for the selected architecture.
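Roughly, in code (simplified; the config keys and package name are placeholders, not my actual script):

use strict;
use warnings;
use AptPkg::Config '$_config';
use AptPkg::Cache;

for my $arch (qw(i386 armel armhf)) {
    # step 1: point apt at this architecture and its package lists
    $_config->init;
    $_config->set('APT::Architecture', $arch);

    my $cache  = AptPkg::Cache->new;                  # step 2
    my $policy = $cache->policy;                      # step 3
    my $cand   = $policy->candidate('program-name');  # step 4
    # step 5: download the package for the selected architecture
}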
This works very well on Ubuntu Lucid, but on Ubuntu Precise I can only download the package for the first architecture defined. For the other two architectures there is no installation candidate (the method AptPkg::Policy->candidate("Package-Name") doesn't return an object).
I tried to build a workaround, and I found one solution that makes the script work for all three architectures in Precise without problems:
If I create the cache object (with AptPkg::Cache->new) twice in a row, it works, and the script downloads the Debian package for all three architectures:
my $cache = AptPkg::Cache->new;
$cache = AptPkg::Cache->new;
I'm sure that the problem has something to do with the method AptPkg::Cache->new, because I have double-checked everything else that could cause the problem. All config variables are set correctly, and I even get a different hash from AptPkg::Cache->new for each architecture, but it seems that I am overlooking something important.
I'm not very familiar with Perl, so I am asking whether someone can explain why the script works with the workaround but not without it. Furthermore, it looks quite strange to have the same line of code twice in a script.
Maybe you hit this bug: https://bugs.launchpad.net/ubuntu/+source/libapt-pkg-perl/+bug/994509
There is a script there to test whether you're affected. If it's something else, consider submitting a bug report.
edit: Just saw this is 11 months old :/

Locating typesizes.h

Where is typesizes.h located?
I've installed the latest gcc build, and I need to set __FD_SETSIZE to a value higher than 1024.
I tried looking in /usr/lib/bits/, where it should be.
I'm trying to compile UnrealIRCd and can't proceed without changing __FD_SETSIZE.
Does anyone know why I'm missing typesizes.h?
Thanks for any help received.
Did you try
locate typesizes.h
It should give you the path where the file is located
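If locate's database is stale, refresh it or search the filesystem directly. Note also that typesizes.h is shipped by the C library (glibc), not by gcc, and it normally lives under /usr/include rather than /usr/lib:

# refresh the locate database, then search
sudo updatedb
locate typesizes.h

# or search the include tree directly (bits/typesizes.h on glibc systems)
find /usr/include -name typesizes.h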

Error running cassandra Word count example

I am trying to run the Cassandra word count example in Eclipse. I have loaded all the requisite JAR files, but I am still getting some errors in the file CassandraDemonThread.java:
TNonblockingServer.Args serverArgs = new TNonblockingServer.Args(serverTransport)
        .inputTransportFactory(inTransportFactory)
        .outputTransportFactory(outTransportFactory)
        .inputProtocolFactory(tProtocolFactory)
        .outputProtocolFactory(tProtocolFactory)
        .processor(processor);
It throws the compilation error: TNonblockingServer.Args cannot be resolved to a type.
Can somebody tell me if I am missing a file that needs to be linked?
Thanks for the help.
Sounds like you don't have lib/*.jar on your runtime classpath, or (less likely) you have an old Thrift jar somewhere else that's getting used instead of the right one.
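For example, when running from the command line, everything under lib/ has to be on the classpath (the class and directory names here are illustrative):

# the wildcard pulls in every jar under lib/, including libthrift
java -cp "lib/*:build/classes" WordCount

In Eclipse, the equivalent is adding all the JARs in lib/ under Project > Properties > Java Build Path > Libraries.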
