Lsyncd did not create an lsyncd.pid file - Linux

I just installed lsyncd-2.1.5 on a CentOS 6.4 server. I ran make and make install on the distribution to compile the daemon, set up a configuration file at /etc/lsyncd.lua, set up the init script at /etc/init.d/lsyncd, and set up the logs correctly. However, when I run the start command on lsyncd, it throws this error:
/bin/bash: line 1: Illegal instruction /usr/local/bin/lsyncd -pidfile /var/run/lsyncd.pid /etc/lsyncd.lua
I checked /var/run for the lsyncd.pid file, and it was not created by lsyncd.
Any thoughts on what I should do here? Can I get this file created? Do I have to reinstall?
Let me know if I can provide any further information.
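In case it helps, /etc/lsyncd.lua is plain Lua with the usual settings/sync layout; mine follows roughly this sketch (the source and target paths below are placeholders, not my real ones):

-- minimal lsyncd 2.1.x config (illustrative paths)
settings {
    logfile = "/var/log/lsyncd.log",
    pidfile = "/var/run/lsyncd.pid", -- redundant when -pidfile is passed on the command line
}
sync {
    default.rsync,
    source = "/srv/data/",
    target = "backuphost:/srv/data/",
}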

Here is what I did to solve this issue. I removed all traces of my lsyncd distribution. I had previously downloaded and compiled the package under /var/tmp, so this time I downloaded it into the root folder, un-tarred it, compiled it there, and set up all of my configuration files again. After I started the service, the lsyncd.pid file was now in the /var/run folder. Very strange. Can anyone tell me what the difference is between compiling in the root folder and in /var/tmp?
Or is this a situation where something simply went wrong the first time around? Does anyone have any insight on this?
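For anyone repeating the fix, the rebuild amounted to roughly these steps (I am treating "the root folder" as /root here, and the exact download command is omitted):

# remove the old build tree, then rebuild from a fresh download
rm -rf /var/tmp/lsyncd-2.1.5
cd /root
tar xzf lsyncd-2.1.5.tar.gz
cd lsyncd-2.1.5
./configure && make && make install
service lsyncd start
ls -l /var/run/lsyncd.pid   # the pid file should now exist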

Related

debian packaging and package.rules files

I am working on moving machines from the RHEL world over to the Debian/Ubuntu world, and I am struggling a bit with a packaging problem. I am trying to build a package for Ubuntu 16.04.
I've got a very old pre-compiled application that can only listen through xinetd. I am creating a binary-only package similar to what this person was doing: I need my Debian rules file to simply copy files to its target. I simply need to copy pre-compiled files into directories.
I have no problem getting files into /opt and /var/log; however, I have been trying to get dpkg to copy the needed setup file into /etc/xinetd.d/
So I have a debian/package.install file something like this:
opt/oldapplication-3.10/* opt/oldapplication-3.10/
var/log/* var/log/
etc/xinetd.d/oldapplication /etc/xinetd.d
The xinetd setup file never makes it to /etc/xinetd.d, and looking at the dpkg install with debug doesn't give me any hints. The file is definitely in the tarball; it just never gets moved.
Looking through the different dh helper applications, I can't see anything that fits, and Google does nothing to illuminate the problem.
Do I simply have to move the file over in a postinst script? Is that the only way to solve this, or is there a more "debian" way to do it by creating a file in the package's debian directory? Is there a more generic setup I should be doing to put files into /etc?
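For clarity, the postinst fallback I have in mind would look roughly like this (a sketch only; the source path under /opt is hypothetical):

#!/bin/sh
# debian/oldapplication.postinst (sketch): copy the xinetd config into place
set -e
if [ "$1" = "configure" ]; then
    cp /opt/oldapplication-3.10/oldapplication.xinetd /etc/xinetd.d/oldapplication
    # reload xinetd so it picks up the new service definition
    invoke-rc.d xinetd reload || true
fi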
Thanks.

Spark Installation Problems

I am following instructions from here:
https://www.datacamp.com/community/tutorials/apache-spark-python#gs.WEktovg
I downloaded a prebuilt version of Spark, untarred it, and moved it to /usr/local/spark.
According to the tutorial, this is all I should have to do.
Unfortunately, I can't run the interactive shell, as it can't find the file.
When I run:
./bin/pyspark
I get
-bash: ./bin/pyspark: No such file or directory.
I also notice that installing it this way does not add pyspark to a bin directory on my PATH.
Is this tutorial wrong or am I missing a trick?
You need to change your working directory to /usr/local/spark; then the command will work.
Also, untarring the archive does not put anything on your PATH. If you want to run pyspark from anywhere, you need to add its location to your environment variables yourself.
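A sketch of both steps (the install path is the one from the question; the export lines are optional and would go in e.g. ~/.bashrc):

cd /usr/local/spark
./bin/pyspark

# optional: make pyspark runnable from any directory
export SPARK_HOME=/usr/local/spark
export PATH="$SPARK_HOME/bin:$PATH"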

Need to create dfs.domain.socket.path manually in Hadoop-2.0.0 to use Impala?

I am following the instructions to configure a hadoop-2.0.0 cluster for installing Impala. In hdfs-site.xml, I add two properties, "dfs.client.read.shortcircuit" and "dfs.domain.socket.path" (/var/lib/hadoop-hdfs/dn_socket).
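Concretely, the two additions to hdfs-site.xml look like this (true is the usual value for enabling short-circuit reads; adjust if your setup differs):

<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>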
But when I start the Hadoop cluster with start-dfs.sh, it fails to start the datanodes. The datanode log says "failed to stat a path component: '/var/lib/hadoop-hdfs'". So I create /var/lib/hadoop-hdfs manually and start the Hadoop cluster again. It fails again, and the log says it's a permission problem with that directory. OK, fine. I change the owner of hadoop-hdfs from root to ubuntu (ubuntu is the username on the machine). Now it finally works normally.
I am just confused. Am I doing this the right way? Do we really need to create /var/lib/hadoop-hdfs ourselves and change the permissions or the owner of that directory? Or did I miss some configuration setting?
I was running into similar problems using Cloudera Manager. It was an issue of trying to run in 'single-user mode' instead of using root. I think you are doing something similar with user ubuntu. Is this a clean install or are you upgrading / did you have a failed install last time?
I'm guessing you sudo-ed somewhere you should have run something as 'ubuntu'.
If you can make it work by manually setting permissions, go for it. I have a feeling there are lots of other files owned by root that should be owned by ubuntu lurking about in your system.
Anecdotally, if there is no critical data on the server, I have found it is easier to very thoroughly remove any and all files from the old install and then reinstall fresh.
I was facing a similar issue with starting the datanodes. Then I came across this link https://github.com/cloudera/Impala/wiki/Build-prerequisites, where it states that we need to create /var/lib/hadoop-hdfs manually and set the appropriate permissions. This also fixed my problem.
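For reference, the manual fix described above amounts to something like this (make the owner whichever user runs the datanode; the asker used ubuntu, while packaged installs often use hdfs):

# create the directory that holds the domain socket, then hand it to the datanode user
sudo mkdir -p /var/lib/hadoop-hdfs
sudo chown ubuntu:ubuntu /var/lib/hadoop-hdfs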
Make certain the directory /var/lib/hadoop-hdfs is present and OK.

Freeswitch mod_java installation problem

I am trying to install mod_java on Ubuntu.
I have installed the latest Java (1.6).
I have configured FreeSWITCH with the mod_java module enabled in module.conf.xml.
Then when I run make, it says:
freeswitch_java.h:5:17: error: jni.h: No such file or directory
I have searched through the Java installation folders but did not find any include folder or jni.h.
Can anyone help me figure out what the problem is here?
Thanks for reading this question.
I had the same problem. The solution was to run configure with the option --with-java:
./configure --with-java=/usr/lib/jvm/java-1.6.0-openjdk/include/
I don't know if it makes any difference, but I added mod_java after building FreeSWITCH without it. It was disabled in my initial build in module.conf.xml, but afterwards I ran the above command plus:
make mod_java-install
It worked for me on Ubuntu with OpenJDK. Are you using the Sun JDK? Maybe the version you have doesn't ship the include folder that holds the header files. Try installing the other JDK, or see if there are some other related packages in apt that will get you the include folder.
Type this Linux command to locate the jni.h file on your filesystem:
locate jni.h
You should find it somewhere under the /usr/lib/java directory, or in some other directory depending upon your Java home.
Copy jni.h into the src/include folder of your FreeSWITCH source. The build will then throw some more errors for other missing .h files; just copy all of them into src/include as well.
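Put together, the steps look roughly like this (the OpenJDK path below is illustrative; jni_md.h is the platform companion header that jni.h includes on Linux):

# find the JNI headers (path depends on your JDK)
locate jni.h
# copy them into the FreeSWITCH source tree
cp /usr/lib/jvm/java-1.6.0-openjdk/include/jni.h src/include/
cp /usr/lib/jvm/java-1.6.0-openjdk/include/linux/jni_md.h src/include/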
In the latest FreeSWITCH, when installing through the Makefile, it's not possible to pass configure options, because the Makefile downloads and installs everything itself. It is possible by modifying the Makefile.in file to add the include path to mod_java_la_CPPFLAGS:
mod_java_la_CPPFLAGS = \
-I/usr/lib/jvm/default-java/include \

InnoSetup: "The volume for a file has been externally altered"

InnoSetup appears to be corrupting my executable when compiling the setup project.
Executing the source file works fine, but executing the file after installation produces Win32 error 1006 "The volume for a file has been externally altered".
I've tried disabling compression and setting various flags, to no avail.
Has anyone experienced this?
UPDATE
Okay, there have been some twists to the situation:
At the moment I can even manually copy a working file to the location it is installed to and still get "The volume for a file...". To be clear: I uninstall the application, recreate the same folder, paste the files there, and run.
UPDATE 2
Some more details for those who want them:
The InnoSetup script is compiled by FinalBuilder, using output from msbuild (also executed by FinalBuilder), running on my machine with XP SP3. The executable is a C# .NET assembly compiled in the Release|AnyCPU configuration. The file works when executed in the folder the install script takes it from. It produces the same behaviour on an XP virtual machine. The MD5 hashes of the source file and the installed file are the same.
OK, I just received this same error. I have a config file which my executable uses. I looked in my folder a million times, but finally noticed the config file was zero length. I corrected the config and the error stopped occurring.
Check the simplest things first... good luck!
ERROR_FILE_INVALID, 1006 (0x3EE): The volume for a file has been externally altered so that the opened file is no longer valid.
I suspect you're having this issue after moving the files to a network share. It seems to me that what's happening is you have an open file handle, possibly to a temporary file you are creating, and then some other process (perhaps running on a different host) is coming along and renaming or deleting that file or its parent directory tree.
So my advice is:
- Try installing to a local directory.
- Run after an anti-virus scan, in safe mode, or on a different machine, to see if there isn't some background nasty changing volume/directory properties while your program is running.
- Make sure the program itself isn't doing anything weird with the volume or directory tree you're working with.
Never seen that before. I've got a few questions and suggestions:
- Are you signing the EXE during the compile of the setup? If so, try leaving that part out.
- What OS are you installing on, or does it happen on all machines you've tried?
- Run the install with the /LOG="c:\install.log" option and post the log. It might show something happening during install.
- Run a byte compare or MD5 check on the source EXE and the installed EXE. Are they the same? Do they have the same version resource?
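For the byte compare, something like this works from a Windows command prompt (paths are illustrative):

fc /b C:\build\MyApp.exe "C:\Program Files\MyApp\MyApp.exe"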
