Spring Tool Suite 3.8.2 - Installation on Ubuntu

I managed to install STS 3.8.2 on Ubuntu 16.04, after a lot of experimenting. I have it working, but I am not happy with my solution.
Here is what I had to do:
Extracted the tar file into /opt/sts-bundle.
If you put it anywhere else, like /opt/sts, tc Server fails to start from STS.
With files in /opt/sts-bundle, tc Server still fails to start from STS with permission errors. To get it to work you need to futz around with the permissions of the pivotal-tc-server subdirectories; essentially you need to open them up to your group (the same one running STS) (a security hole?).
A local install in your own ~/sts-bundle fails with "files not found" errors while attempting to back up all the conf files. It still looks in /opt/sts-bundle for all these config files (just to copy them to /backup). You can change the top directory of the server in the STS server properties, but it still looks in /opt/sts-bundle. It seems hard-coded; I don't know where. So you have to create all the config files in the conf directory of the tree rooted at /opt/sts-bundle ("touch" works, creating empty files). tc Server then still fails to start, with a "failed to clean" error that gives no clue as to which files are being "cleaned".
I tried creating a non-privileged user "tcserver", per the suggestion in the Pivotal tc Server docs. I installed to /opt/sts-bundle while logged in as tcserver (with sudo privileges). That fails when I am using STS as a regular developer who is not "tcserver". I could not figure out how to tell tc Server to run under a different user than the one that started STS.
The solution I have working, and am not happy with, starts by extracting the tar.gz file into /opt/sts-bundle, as it wants, and then changing the owner and group of sts-bundle to my own id and group (the same ones used in the STS UI). I am not happy with that: it seems wrong to put things in /opt that are owned by a single developer.
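In command form, the workaround boils down to something like this ("dev" and "devgroup" stand in for my actual user and group, the archive name is abbreviated, and the tarball's top-level directory is sts-bundle, as it was for me):

sudo tar -xzf spring-tool-suite-3.8.2-*.tar.gz -C /opt   # unpacks to /opt/sts-bundle
sudo chown -R dev:devgroup /opt/sts-bundle               # hand the whole tree to my own account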
I am new to Linux, and I still have some Windows habits that need to be unlearned.
The question is: how do I get the clean solution (installing as a "tcserver" user into the global /opt directory) to work for developers who are not "tcserver"? How should the tcserver user be related to the developers (the same group)?
Am I making this problem harder than it should be? What am I missing?

I'm not sure this is what you want, but I don't install the STS bundles in some kind of shared directory as a special user at all. I just install it in my user.home dir, as myself, and launch it from there.
It is very unsophisticated. I just download the tar.gz file, unpack it in my home dir and then launch it from a trivial bash script which looks something like this:
#!/bin/bash
# Launch the STS executable from the unpacked bundle; the glob matches the versioned directory.
/home/kdvolder/Applications/sts-bundle/sts-*/STS
That script is on my PATH. So I can just type 'STS' in a terminal and STS will start.
I don't have to do anything else and it works.
If you are trying to somehow install this so that several different users can run a shared installation, then this isn't a good setup. But I think for your own personal laptop or desktop which only you are using, this simple setup is perfectly fine.
For a shared-user env, unfortunately, I don't know how to help you. It could be complicated to sort out all the permissions issues etc., because Eclipse is a complicated beast w.r.t. installation of plugins etc.
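That said, the usual direction for a shared install on Linux is a dedicated group, so here is a rough, untested sketch (the group name "devs" and the user "alice" are placeholders):

sudo groupadd devs
sudo usermod -aG devs alice                              # repeat for each developer
sudo chown -R tcserver:devs /opt/sts-bundle
sudo chmod -R g+rwX /opt/sts-bundle                      # group can read, write, and traverse
sudo find /opt/sts-bundle -type d -exec chmod g+s {} +   # new files inherit the group

Whether tc Server's "failed to clean" behaviour survives this is exactly the open question above.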

Related

Cannot access files within launched crouton but can from within chroot?

I recently updated a chroot on an old Chromebook from Ubuntu bionic to focal. The chroot has encryption enabled.
I usually work with Git repositories and other files within Chrome's Downloads folder and haven't had any issues with this previously.
Since the update, though, I found I was unable to run things like git clone: I get an error saying cannot create worktree dir: no such file found. I looked around and found people had similar problems, but there's been no clear solution.
Then I decided to look inside one of the existing folders within Downloads and noticed a problem there...
I can open a repo within my Downloads folder on ChromeOS and see all files as I used to.
I can enter-chroot and run ls on the same folder and see all files as I used to there too.
But when I launch the chroot/crouton (I used xfce4) and try to ls the folder from within the terminal, or even look at the folder contents from a UI window, the contents of the repo look encrypted - as in, all the filenames have changed to equal-length strings of apparently random characters.
It's almost as if encryption is working in reverse -- so my files are unencrypted outside the crouton, but as soon as I go into the xfce UI, they're encrypted and there's no decryption happening. But that's speculation on my part...
Any ideas as to what is going on here? And how I can continue to work within crouton?
It seems this has to do with the fact that Chrome OS encrypts files, and that something changed when I updated crouton (rather than when I updated Ubuntu from bionic to focal).
I realised this was a bigger issue when even command line tools like tar and git (which I'd installed) weren't working.
When I tried to unpack a download of Firefox with tar xjf I got an error saying "Required key not available". Some searching around that led me to issue #3261 on the crouton GitHub repo.
The solution for me was:
Ensure /etc/pam.d/su-l was writable. (I did ls -l /etc/pam.d/su-l to check but ultimately used sudo...)
Edit the file /etc/pam.d/su-l. (I used sudo vi /etc/pam.d/su-l to ensure the file wasn't read-only in that instance, and because I had no text editor options but vi available.)
Comment out the line session optional pam_keyinit.so force revoke. (So it should read # session optional pam_keyinit.so force revoke.) A sed one-liner for this edit is shown after these steps.
Save the file.
Restart the chroot.
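If you prefer a one-liner, the edit in the steps above can be done with sed; this assumes the line appears in the file exactly as quoted:

# Prefix the pam_keyinit line in /etc/pam.d/su-l with "# " to comment it out
sudo sed -i 's/^session optional pam_keyinit.so force revoke/# &/' /etc/pam.d/su-l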

How to get write permission to a /var/lib folder for an app installed as a distributable package

I'm creating a Mono app and I've built a *.deb installer.
On Windows I write quite a bit of configuration information into the program data directory. The Linux equivalent seems to be /var/lib/[appname]. I've figured out how to create the directories as part of the install package, but when the app goes to run I get an exception because the app doesn't have write permission.
How do I get my app write permission to the /var/lib/[appname] folder? Is that the correct place to put things like a local db for an app?
It seems the only way to do this is via the postinst script file.
You can use that hook to execute a script to chmod the directories to anything you want. You can find the complete documentation for the postinst file here: https://www.debian.org/doc/manuals/maint-guide/dother.en.html#maintscripts
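A minimal sketch of such a postinst (the "myapp" name and group are placeholders; if you build with debhelper you will also want the #DEBHELPER# token in the script):

#!/bin/sh
# postinst sketch: make the app's state directory writable by its group.
set -e
if [ "$1" = "configure" ]; then
    # Create a system group for the app if it doesn't exist yet.
    getent group myapp >/dev/null || addgroup --system myapp
    mkdir -p /var/lib/myapp
    chgrp myapp /var/lib/myapp
    chmod 775 /var/lib/myapp
fi
exit 0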

Need to create dfs.domain.socket.path manually in Hadoop-2.0.0 to use Impala?

I am following the instructions to configure a hadoop-2.0.0 cluster for installing Impala. In hdfs-site.xml, I add two properties, "dfs.client.read.shortcircuit" and "dfs.domain.socket.path" (/var/lib/hadoop-hdfs/dn_socket).
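For reference, the two entries in hdfs-site.xml look like this (assuming the short-circuit property is set to true, with the socket path as above):

<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>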
But when I start the Hadoop cluster with start-dfs.sh, it fails to start the datanodes. The datanode log says "failed to stat a path component: '/var/lib/hadoop-hdfs'". Then I create /var/lib/hadoop-hdfs manually and start the Hadoop cluster again. It fails again, and the log says it's a permission problem with that directory. OK, fine. I change the owner of hadoop-hdfs from root to ubuntu (ubuntu is the machine username). Now it finally works normally.
I am just confused. Am I doing this the right way? Do we really need to create /var/lib/hadoop-hdfs ourselves and change the permissions or owner of that directory? Or did I miss some configuration setting?
I was running into similar problems using Cloudera Manager. It was an issue of trying to run in 'single-user mode' instead of using root. I think you are doing something similar with user ubuntu. Is this a clean install or are you upgrading / did you have a failed install last time?
I'm guessing you sudo-ed somewhere you should have run something as 'ubuntu'.
If you can make it work by manually setting permissions, go for it. I have a feeling there are lots of other files owned by root that should be owned by ubuntu lurking about in your system.
Anecdotally, if there is no critical data in the server, I have found it is easier to very thoroughly remove any and all files from the old install and then reinstall fresh.
I was facing a similar issue with starting the datanodes. Then I came across this link https://github.com/cloudera/Impala/wiki/Build-prerequisites, where it states that we need to create /var/lib/hadoop-hdfs manually and set the appropriate permissions. This fixed my problem as well.
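Concretely, that amounts to something like the following; the owner should be whichever user runs the datanode ("ubuntu" here, matching the setup described above):

sudo mkdir -p /var/lib/hadoop-hdfs
sudo chown ubuntu:ubuntu /var/lib/hadoop-hdfs
sudo chmod 755 /var/lib/hadoop-hdfs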
Make certain the directory /var/lib/hadoop-hdfs is present and its permissions are OK.

InnoSetup: "The volume for a file has been externally altered"

InnoSetup appears to be corrupting my executable when compiling the setup project.
Executing the source file works fine, but executing the file after installation produces Win32 error 1006 "The volume for a file has been externally altered".
I've tried disabling compression and setting various flags, to no avail.
Has anyone experienced this?
UPDATE
Okay, there have been some twists to the situation:
At the moment, I can even manually copy a working file to the location it is installed to and still get "The volume for a file...". To be clear: I uninstall the application, create the same folder, paste the files there, and run.
UPDATE 2
Some more details for those that want it:
The InnoSetup script is compiled by FinalBuilder using output from msbuild, also executed by FinalBuilder, running on my machine with XP SP3. The executable is a C# .Net assembly compiled in configuration Release|AnyCPU. The file works when executed in the folder the Install Script takes it from. It produces the same behaviour on an XP Virtual Machine. The MD5 hashes of the source file and the installed file are the same.
Ok, I just received this same error. I have a config file which my executable uses. I looked in my folder a million times, but finally noticed the config file was zero length. I corrected the config and the error stopped occurring.
Check the simplest things first... good luck!
ERROR_FILE_INVALID
1006 (0x3EE): The volume for a file has been externally altered so that the opened file is no longer valid.
I suspect you're having this issue after moving the files to a network share. It seems to me that what's happening is you have an open file handle - possibly to a temporary file you are creating - and then some other process (perhaps running on a different host) is coming along and renaming or deleting that file or its parent directory tree.
So my advice is:
- Try installing to a local directory.
- Run after an anti-virus scan, in safe mode, or on a different machine, to see if there isn't some background nasty changing volume/directory properties while your program is running.
- Make sure the program itself isn't doing anything weird with the volume or directory tree you're working with.
Never seen that before. I've got a few questions and suggestions:
- Are you signing the EXE during the compile of the setup? If so, try leaving that part out.
- What OS are you installing on, or does it happen on all machines you've tried?
- Run the install with the /LOG="c:\install.log" option and post the log. It might show something happening during install.
- Run a byte compare or MD5 check on the source EXE and the installed EXE. Are they the same? Do they have the same version resource?
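For the hash check, the built-in certutil can do it on reasonably recent Windows (the paths below are placeholders; on XP you may need a separate md5 tool):

certutil -hashfile "C:\build\MyApp.exe" MD5
certutil -hashfile "C:\Program Files\MyApp\MyApp.exe" MD5

If the two hashes match, as the second update above reports, file corruption during install is effectively ruled out.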

OS X permission denied for /usr/local/lib

I'm looking for any advice/intuition/clues/answers on a permission issue that has been plaguing me ever since I switched over to a new MacBook Pro. Here's the dilemma. Certain programs copy libraries under /usr/local/lib during install, and upon running these programs I get a crash which I believe is related to permission restrictions on files in this folder. I've had errors (can't access files from this path) trying to install plugins for Audacity, and then tried doing an "ls" under this folder. I immediately get permission denied unless I prefix the cmd with sudo. I've tried owning the /usr/local/lib/audacity folder with my user account, and even still I get permission errors on these files.
It's important to note that the problem is not exclusive to Audacity. I've seen the same problem with Polycom video conference software, and I've also been unable to run Parallels on this machine. (I haven't traced Parallels to the same issue, but I'm betting it's related.)
I vaguely recall some weird Linux cmd magic I used to use back in the day that would not only grant permission to a user but tweak some low-level bits allowing/disabling certain things like execution, and I seem to recall the permission thing ran deeper than execution, but it's been years. I can't recall the details, and I'm wondering if there's something similar on OS X that I'm possibly overlooking.
Is there something special about that location and the files therein? Could I have somehow altered my file system in a way that the files appear different? For what it's worth, I seem to be able to use at least one of the programs if I log in as root. I haven't tried with the other programs, as I've just discovered the ability. Please help.
It sounds like the folder isn't world executable. Try:
sudo chmod 755 /usr/local/lib
and then you should be able to use ls or anything else in the folder (it still won't allow you to write, but your user account shouldn't be able to do that anyway)
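To confirm this is the problem before changing anything, you can inspect the directory's mode; if the group/other execute bits are missing (e.g. drwx------), that would explain the permission-denied errors:

ls -ld /usr/local/lib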
Found the answer from a coworker buddy. The folder needed to be marked executable.
sudo chmod 755 /usr/local/lib
fixes everything!
