Is it possible for a team to use Eclipse installed on a shared network drive? - linux

Our lead programmer likes to install tools on a shared network drive to minimize effort when updating. He recently installed Eclipse to the network drive, but when I run it, I get a window that says Workspace in use or cannot be created, choose a different one. After clicking OK, I get a window that gives me a drop down menu with only one item, the workspace on his machine. I can then browse to the workspace on my machine, click OK, and Eclipse continues to start up and run just fine. There's a check box in that second window that says Use this workspace as the default that I've checked after browsing and selecting my workspace, but the next time I start up Eclipse, it reverts back to the lead's workspace.
Are we violating some assumption that Eclipse makes about the install? We're on a Linux network, if it makes a difference.

Set up the shared Eclipse so that it cannot be modified by the users accessing it. This should (if I recall correctly) force Eclipse into a "Shared User, Hands Off" mode and default to storing settings per user account.
Do not share Workspaces (or Projects) -- this will only break things horribly -- use a different strategy such as a proper revision control system.
Perhaps this documentation will be helpful.
"""The set up for this [shared] scenario requires making the install area read-only for regular users. When users start Eclipse, this causes the configuration area to automatically default to a directory under the user home dir. If this measure is not taken, all users will end up using the same location for their configuration area, which is not supported."""
I would also consider running Eclipse locally rather than over the network. Using a shared network drive may make Eclipse even more painful than it sometimes is. A development environment should work for the developer, even at the expense of a slightly more complicated setup.

Eclipse stores a lot of settings, including the workspace list, in its installation directory (especially the "configuration" directory). It's hard to say how well sharing the installation will work, but I wouldn't be surprised if there were a number of issues caused by "fighting" between Eclipse instances running on different developers' workstations.
To fix the particular issue you're having, you could set up a separate startup script that passes your workspace as a command-line argument to Eclipse, bypassing the workspace selection dialog you're seeing.
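A minimal sketch of such a launcher, assuming the shared install is at /opt/eclipse (both paths are placeholders):
#!/bin/sh
# Start the shared Eclipse against this user's own workspace,
# skipping the workspace chooser dialog.
exec /opt/eclipse/eclipse -data "$HOME/workspace" "$@"
Putting a script like this on each user's PATH also gives you one place to add common options later.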

Related

Setting up development environment in Windows 10 without admin rights

Let me give a quick background of the work I do and then I'll explain the problem I am facing.
I am a software developer with more than 15 years of work experience. My work involves a lot of varied tasks:
data analysis using R, Python
development of web applications using Ruby on Rails, JS, etc.
building models using open source libraries
So far, I have been doing all this on my personal laptop (Ubuntu 18.04) and have faced no issues.
But I will soon need to start using a laptop provided by the organisation I am working for. This org is not an IT company; it's a public body. They only use Windows 10 and don't provide admin access to anyone. It's very hard to get permission to install any kind of "approved" software. Just to give an example, they refused to install Chrome on my laptop as they wouldn't be able to control the updates.
So here's my problem - what do I do to work peacefully using their laptop? The primary reason I have to use the work laptop is that there are a lot of important documents kept in shared drives that are accessible only in their machines.
I have been looking at options like WSL or Hyper-V. But, before I put in a request to the IT team to get them to agree, I wanted to know a few things:
1) Which of WSL or Hyper-V would be the better approach for setting up the dev environment that I want?
2) If I get the IT team to install WSL/Hyper-V, would I be able to set up everything else without having to go back to them for each piece of software? Is there some form of secure local admin access these options provide that would ease their concerns?
3) Is there some other way of setting up what I want?
If it's still relevant, I can share my solution:
If you have to work on a Windows machine where you don't have administrative privileges, you can quite easily make a portable R/RStudio installation.
Download a recent version of R from the CRAN site and the recent version of RStudio. After downloading, extract the RStudio installer executable with 7-Zip and copy the files from $_OUTDIR to the desired location (if you are making an update, simply overwrite all files that already exist). Your RStudio executable will be in
your-chosen-directory/bin/rstudio.exe
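The extraction step as a command-line sketch, if you have the 7-Zip CLI available (installer file name and target folder are placeholders):
rem Unpack the RStudio installer without running it
7z x RStudio-installer.exe -oC:\temp\rstudio-extract
rem Then copy the contents of C:\temp\rstudio-extract\$_OUTDIR to your chosen directory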
Then run the CRAN R installer, ignore the warning that you don't have administrative privileges, and proceed until the installation completes. Run RStudio and, via the menu
Tools->Global Options
point it to where your R installation is located.
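To double-check which installation RStudio picked up, you can run the standard R command in the console:
R.home()  # prints the path of the R installation currently in use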
If you are performing an update (to a more recent version of R), copy all files from the library subfolder of the old R installation into the new one, but this time DON'T OVERWRITE! This preserves the packages you installed in the previous version of R. After copying, update all your packages from the RStudio window (Packages->Update). When the package update process ends, check which packages failed to update (you will see warning messages near them in the RStudio console). Remove these packages: write down the names of the failed packages and delete the corresponding folders from the library subfolder. For this you will need to exit RStudio. After the deletion, launch RStudio again and execute the package install command in the RStudio console:
install.packages(c("package1", "package2", "package3"))
Congratulations, you are ready to go!

Error loading extension with localization

My nw.js app suddenly stopped working on Windows 10 with the following error:
Failed to load extension from {path}. Default locale is defined but default data couldn't be loaded.
Structure & manifest
_locales/
  en/
js/i18n.js
Manifest:
"default_language": "en"
I don't know what Windows has changed recently, but it had been working solidly on previous versions of Windows for years. I've updated the country tag as per the available language packs for Windows here and the Chromium tags, but still no luck.
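For reference, Chrome-style i18n (which nw.js inherits) keys off default_locale in the manifest (note: locale, not language) and requires a messages.json under the matching folder. A minimal sketch of the expected layout:
manifest.json (relevant key only):
"default_locale": "en"
_locales/en/messages.json (must exist and be valid JSON):
{
  "appName": {
    "message": "My App"
  }
}
A missing, empty or syntactically invalid messages.json for the default locale is the usual trigger for the "default data couldn't be loaded" part of this error.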
According to this thread:
"I use Chrome and stopped updating it once they made tabbed-options mandatory. I also keep my User Data folder in a non-default location. When this bug started, I used the --single-process trick for a while but as mwalsher said, it stopped working when they messed with the Web Store. I used but hate the manual method outlined above, so what I did was to simply move my User Data folder to a FAT32 partition. Problem solved; now I can successfully install packed extensions from an older version of Chromium, running in normal mode, to a non-default User Data folder. Even better, thanks to a system I set up (http://superuser.com/questions/196886/how-to-relocate-chrome-profile-but-also-make-new-links-open-with-the-relocated-p/257706#257706), it was /extremely/ easy to change it (I had only to change a single byte and reboot)."
...
"Changing the security permissions of the temp directory might fix this problem. On my computer, the temp directory only had 3 full-control users (My Account, System, Administrators) to begin with. I manually gave Everyone full control of this folder (maybe adding list permission only also works). However, it didn't work immediately; it only took effect the next day when I restarted the computer, to my great surprise."
...
As a workaround, --no-sandbox might work. Note that this is just as unsafe as --single-process, so be careful when using it.
...
Perform a "chrome://restart".
Try this first:
"I restarted Chrome and tried to install it again, and now it installed cleanly."

Liferay With Multiple Server Instances

I'm working with multiple Liferay projects (different portal, plugins, users and user groups, etc.) at the same time, and often have to switch between them. This switch requires lots of steps like
Editing portal-ext.properties (to change the Liferay database and some custom project-specific properties) and editing 'portal-setup-wizard.properties'
Adding/removing portlets, themes and hooks from the Eclipse server instance, and sometimes cleaning Tomcat's 'data', 'webapps' and 'work' folders
Going to Liferay's Control Panel/Server/Plugins Installation and re-indexing portlets like 'Users and Organizations' or 'Documents and Media'
So I thought that creating a new server instance for each project, with a new Tomcat and JRE, would be a nice idea. When I had to switch projects, I could just stop the old server and start another. At first, I thought (actually, I was advised) that using the same Liferay Plugins SDK (6.1.0) should be OK, as long as the server instances are the same version.
Practically, this doesn't work 100% perfectly. While most of the work gets done, there are problems here and there, like a theme not getting deployed properly, hooks not being applied, etc. As I understand it, there is some [Liferay SDK] - [Liferay Server] binding, which means that only one server (the first one I created) will fully work.
For example, by investigating [Liferay SDK folder]/build.[user name].properties, I can see some properties that refer to a specific server/JRE location:
app.server.portal.dir
app.server.lib.global.dir
app.server.deploy.dir
app.server.type
app.server.dir
So, my question is: what should I do to work with multiple Liferay projects?
Is the multi-server practice a good approach for working with multiple projects?
If yes, should I create a different SDK for each server? Maybe a different Eclipse workspace too? Or is there some way to use the same SDK?
What about working with servers of different Liferay versions?
Personally, I set up every project with its own source, Tomcat, database, etc., even if it means duplication. These days storage is cheap, which makes this possible. Of course your mileage may vary, but I thought I'd share my setup with you.
I have a project directory with all my projects which looks like so:
/projects
/foo-project
/bar-project
/my-project
Inside a project I have
/my-project
/tomcat
/bin
/conf
...
/src
/portal
... my portal source ...
/plugins
... my plugin source ...
/portal-ext.properties
I then set up Tomcat to use different ports (8080, 8081, 8082, etc.) so that I can just leave them all running if I have to or want to.
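The ports live in each instance's conf/server.xml; a sketch of the two values I change per project (the numbers are only examples):
<Server port="8006" shutdown="SHUTDOWN">  <!-- shutdown port, must be unique per instance -->
<Connector port="8081" protocol="HTTP/1.1" redirectPort="8444" />  <!-- HTTP port -->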
I set up Liferay to use a different database for each Liferay instance.
I place portal-ext.properties as a sibling of the tomcat directory, and Liferay will read this file (assuming the default behavior). This makes for quick and easy edits, and also makes it easy to see how each project is set up.
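A sketch of what a per-project portal-ext.properties might contain (database name and credentials are placeholders; the jdbc.default.* keys are standard Liferay properties):
# Point this instance at its own database
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/myproject_lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=liferay
jdbc.default.password=changeme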
The advantages should be clear. You can just "walk away" from a project and into another without tearing down and setting up, and when you return everything will still be as you left it. Context switching is also quicker, which is helpful if you need to answer a question about a project you're not currently working on.
Depending on the complexity of each of your projects, multi-instance might not work for you. Hooks and EXTs may conflict with each other and it appears as if this is already the case with your projects.
If you can afford the space (which is not much) this has been the fastest way I have found as a Liferay developer.
If we start working on a new Liferay project in our company, we set up:
a new database schema,
a new, clean Liferay server connected with that schema and
a fresh Eclipse workspace, with
a clean SDK project
Only this way can you be sure your projects are cleanly separated. To switch to another project, just shut down the current Liferay server, start up the new one and switch to the right workspace in Eclipse. This all takes no more than 2 minutes, a lot less than all the cleanup actions you would have to do if you shared the workspace and server.
In my opinion, this is the approach of most development teams.
Why mess with all these complications on a single computer? I use Oracle VirtualBox and set up a separate VM for each project. Even though I work on a laptop, it has 8 cores, and I've bumped my memory up to 16GB and set each machine up with 4GB of RAM.
I can have multiple VMs running at once and have set all active projects as home pages in Chrome. Using bridged networking each VM has its own IP address, and they all listen on 8080.
Another benefit is that, although my primary project is being developed using Eclipse Indigo and LR 6.1 CE GA1, I have another using Eclipse Juno, its specific IDE plugin and LR 6.1.1 CE GA2. So it also works as a new version tester.
VirtualBox is free. Memory is cheap. And remember that you can put a VM to sleep without shutting it down. That takes about 10-20 seconds and waking it up again takes 30-60 seconds.
The simplest solution would be:
Create 3 different users; the Liferay SDK's build.[user name].properties file is separate for each user. So, let's say you want to run 3 servers with the same SDK. Create 3 files like
build.user1.properties
build.user2.properties
build.user3.properties
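Each file then points the SDK at that user's server, using the same app.server.* properties listed in the question; for example (paths are hypothetical):
# build.user1.properties -- SDK settings for server 1
app.server.type=tomcat
app.server.dir=/opt/liferay/server1/liferay-portal-6.1.0/tomcat-7.0.23
# the remaining app.server.* properties default to locations under app.server.dir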
Now, when you want to deploy something to server 1, log in as user1 and deploy the portlet; this will read build.user1.properties and deploy the portlet/hook to the location specified there.
Hope this will resolve your deployment issue.
Also, when you have 3 users, you can run the 3 servers together under different user accounts. This way they stay isolated, and nobody but the admin can shut them down.
Hope this helps!

Does the process have permission to view files on Azure?

I have recently deployed a FubuMVC application to Windows Azure. Everything works except when the pipeline tries to find the view to render. This all works correctly on my local machine.
So I am wondering: does the process on the Azure box have rights to read/scan files on disk?
Any suggestions to fix it are welcome though.
EDIT:
As part of the deployment, Azure goes through a stage called "Preparing files for deployment". I checked the log and my view was not in there.
So I changed the view's 'Copy to Output' setting to true and it worked.
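For anyone hitting the same issue: in the project file, that setting looks something like this (the view path is hypothetical; fubu apps commonly use .spark or Razor views):
<Content Include="Views\Home\Home.spark">
  <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>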
It depends a bit on where you are trying to read and how you have configured your roles. By default, the code will run as a very low privilege user that only has R/W to the code directory (and any LocalResource(s) defined by the user). However, you can run your code as SYSTEM, in which case you can R/W anywhere (you might still have to take ownership, but you are all powerful as SYSTEM).
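For reference, running the role elevated is a one-line change under the role's element in ServiceDefinition.csdef:
<Runtime executionContext="elevated" />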
If your views are defined as part of your package and uploaded, the code should have permission to view them. I am curious as to why you think this is a permission issue right now. Do you see an error that indicates that, or are you guessing? If I had to guess, my first thought would be that your views didn't get packaged correctly and are not on the VM. You can confirm they are there either by RDP or by cracking open the package and snooping around.

IIS executable not executing

I have been looking at an issue for a week straight and have been unable to figure it out and I am desperate for the fix.
On a client site, we have two environments: UAT and PROD. UAT works perfectly (please keep this in mind). We are now trying to deploy the solution to PROD, but certain parts of the solution are not working.
We have developed an ASP.NET application that we provide to clients to allow them to invoke SSIS packages (there are a couple of drop-downs from which they first select, then click a button named "Invoke").
When the user clicks the Invoke button, a batch file named InvokeSSIS.bat is called that assembles a command line call to dtexec with the appropriate parameters.
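A sketch of the kind of command line such a batch file assembles (the package path and variable name are made up):
rem InvokeSSIS.bat -- run the selected package with a user-supplied parameter
dtexec /F "D:\packages\ImportSpreadsheet.dtsx" /Set "\Package.Variables[User::Region].Value";%1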
I'm having a problem with a particular package that is responsible for calling an executable which generates a spreadsheet that I will be importing into my system.
The executable is on a mapped H:\ drive.
I have modified the InvokeSSIS.bat batch file to capture the command the batch file is generating. If I execute this command from the command line, it works perfectly. From the web app invoker, it executes the package, but the task responsible for calling the executable doesn't run, as the entire package takes only 1 second to complete (whereas it should take about a minute).
The executable DOES have a GUI, but it is NOT interactive. This is because when you call the GUI with specific parameters, it automatically runs in batch mode and executes a macro used to generate the desired spreadsheet.
I know this is ok because it works on the UAT server AND it works from the command line!
I have checked the permissions on the executable (by right-clicking the executable and clicking Properties). I have granted Full Control on the executable to the same user specified on the Identity tab of the application pool I am using.
Can someone please help me? As I said I am dying over here!
Please let me know if you have any ideas or what other info you need.
Environment (both UAT and PROD)
OS: Windows Server 2003
IIS 6
asp.net 2.0
SQL Server 2008
Thanks!
Steve
You can't use a mapped drive with IIS.
You must use the \\servername syntax to reach files on other systems.
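For example, inside InvokeSSIS.bat (server and share names here are made up):
rem Reference the executable by UNC path instead of the H:\ mapping
"\\fileserver\tools\GenerateSpreadsheet.exe"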
I agree with user544284 that this is at least in part a mapping issue. I'll ignore for a minute the complete insanity of having a web application call a batch file to start an executable that's on a remote network drive through a drive letter mapping.
Most likely the UAT box has something set up that maps that drive letter for you which Prod is missing.
The only other possibility is that a security violation is occurring. Running .exe files from a network drive is generally frowned upon. Do the two environments have exactly the same version of Windows? Are they configured the same with regard to UAC? Any differences here are going to be important.
Which brings up an interesting thought. I wonder if someone logged in to the UAT server using the same account credentials the app pool is using and added the IP address of the machine where the exe lives to the list of "Local Intranet" sites... Or if they installed SSIS on the UAT server itself.
Just because YOU can log in to the server and run it on the command line means nothing. You have to find out whether the drive letter is mapped at all for the user the web app runs under, whether that user has the required security bits, and whether the local OS will allow it regardless.
Okay, I can't ignore it: harebrained is the nicest adjective I can come up with for this "architecture". Do yourself a favor and go back to the drawing board on this one. It has the word "brittle" written all over it, as you have already found. Instead of building a batch file to call dtexec, just do it directly, either by something like this or this.
