Is there a way to prevent the default behavior of loading the last VM, and instead always present the Virtual Machine Library when I click VMware Fusion in the Dock?
Found a solution:
- Go into the Virtual Machine Library.
- Right-click the VM and select Get Info.
- Ensure "Start automatically when VMware Fusion launches" is unchecked.
Also answered here:
https://communities.vmware.com/message/2747861#2747861
I would like to install Lotus Notes 7 (designer/client/admin) on the same computer, probably on another drive, where Lotus Notes 9 is already installed. Would this create any conflicts or other issues? Can someone please explain how to do that, or offer any guidance?
You can avoid the pitfalls by installing a virtual machine on your computer (VMware or VirtualBox) and running Notes from there. If you use a shared folder from the host system, all your data will be directly accessible on the host. You can have several virtual machines, one for each version of Notes, or one for each client needing a special setup or different user IDs.
As Notes 7 (and all versions before it) does not even need an installation to work, the answer is simple: install the client on another machine (even a virtual machine) and copy the Program and Data directories over to your PC. Then you only need to adjust notes.ini (if you changed the path when copying) and create a shortcut to launch Notes 7. Point the shortcut at Notes7ProgramDirectory\nlnotes.exe =Notes7ProgramDirectory\notes.ini
Now you can launch Notes 7 after Notes 9 is already running. The other way around will not work: if Notes 7 is already started, Notes 9 will not start in parallel.
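For illustration, assuming you copied the program directory to C:\Notes7 (an example path, not one from your setup), the shortcut's Target line would look like:

```
C:\Notes7\nlnotes.exe =C:\Notes7\notes.ini
```

The `=path\to\notes.ini` argument tells nlnotes.exe which notes.ini file to use, which is what keeps the two versions' settings from colliding.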
I found the answer to how to create a persistent mapped drive here:
Map a network drive to be used by a service
I used the Sysinternals tools to set it up, and it works perfectly. The server was cloned, and now the mapped drive shows up on the clone as well. Using the Sysinternals tools I am able to delete it, but it shows back up on reboot. Nothing I try seems to work.
In all my searching, I finally found the solution.
This works for Windows Server 2012.
Go to Search, look for the Group Policy Editor, and open it.
In the Group Policy Editor
- Computer Configuration
-- Windows Settings
--- Scripts (Startup/Shutdown)
Under Startup there should be a script called MapX (where X is the drive letter you are trying to delete).
Remove this script and reboot; the mapped drive should be gone.
I imported this github c++ project: https://github.com/RedhawkSDR/USRP_UHD into redhawk, hoping to run it with a USRP N210.
Redhawk only allows me to run the project as a component or C++ Application, so I tried running it as a component.
Here is the exact error I get when I try to run as a component:
An internal error occurred during: "Launching USRP_UHD".
Could not initialize class gov.redhawk.ide.debug.internal.ScaDebugInstance
How can I fix this?
The USRP_UHD device is a Redhawk Device that interfaces with the N210. In Redhawk, Devices are deployed and managed by an instance of the Device Manager, which is referred to as a Node.
To run the USRP_UHD Redhawk Device in a Domain:
Install the USRP_UHD Device to the Target SDR. This can be accomplished by clicking and dragging the top level folder of the USRP_UHD project from the Project Explorer view to the Target SDR in the SCA Explorer view.
Create a new SCA Node Project using the Redhawk IDE that contains a USRP_UHD Device instance. The first wizard page will prompt you for a Node name (the project name) and a Domain name. You can override the Domain name later at runtime if the name you choose now ends up being different from your running Domain. After clicking Next, the second and final wizard page allows you to choose from a list of Devices that are installed in your Target SDR. Select the USRP_UHD and click Finish; the Overview tab of the SCA Node Editor will then appear.
Configure the Node. Within the SCA Node Editor, you can edit the properties of the USRP_UHD Device using either the Devices tab or the Diagram tab. Generally, you will want to at least configure the IP address of the N210 using the USRP_ip_address property of the USRP_UHD Device so that the USRP_UHD Device will connect to the USRP hardware upon deployment.
Install the Node to the Target SDR. Again, this can be accomplished by clicking and dragging the top level folder of the Node from the Project Explorer view to the Target SDR in the SCA Explorer view.
Launch a Domain and the Node (Device Manager) that you created containing the USRP_UHD Device. This can be done by right-clicking on the Target SDR in the SCA Explorer view and selecting Launch…. In the dialog box that pops up, you can choose a Domain Name (this doesn’t have to be the same as the Domain Name specified in the Node) and a debug level for the Domain Manager. To also launch a Node, choose the Node that you created from the list of Device Managers and set the debug level appropriately for the Device Manager. Select OK to launch both.
Inspect the Domain that you launched by expanding the Domain within the SCA Explorer view. You should see the Node under the Device Managers folder, and after expanding the Node you should see the USRP_UHD Device instance (probably named USRP_UHD_1).
If this doesn't fix the issue, please provide some more information about your environment (specifically, what version of the Redhawk framework and IDE, what version of Java is reported by "java -version", what OS and version, what branch/release of USRP_UHD, what version of UHD software) and the steps you are taking to run the USRP_UHD as a component. In Redhawk version 1.9, I was able to choose Run as…->Local Component Program and it successfully launched the USRP_UHD Device in the Sandbox without the error you have experienced. You may also wish to try with both Redhawk version 1.8 and 1.9 (be sure to use the latest release of each) to see if the issue appears in both versions.
Is the version of UHD that you are using version 3.7.1? It should be reported by any of the uhd_* commands, such as uhd_find_devices.
uhd_find_devices should find your N210 device if you can ping it. I have seen the X310 not respond to uhd_find_devices, but it does respond once the IP address is specified. Try specifying the IP address of the N210 as shown below:
uhd_find_devices --args="addr=192.168.10.2"
Replace 192.168.10.2 (the default IP address for the N210) with the IP address of your N210, of course. If your N210 is still not found, try unplugging the power source to the N210 and then plugging it back in to force a reboot. Again, I’ve seen this help with the X310 when it wouldn’t respond to the uhd_find_devices command even with the IP address specified.
Then also try to probe the N210 using the following command:
uhd_usrp_probe --args="addr=192.168.10.2"
I believe that if the N210 has a firmware version that is incompatible with the version of UHD you have, the N210 will still be found and the probe command will inform you that the firmware must be updated.
If neither command can communicate with the N210, I have to think the issue is between the UHD software and the N210 instead of being a Redhawk-related issue. To load the firmware, see the link below. Also, there are instructions for setting up networking and troubleshooting communication problems at the same link. If you haven’t done so already, take a quick look and see if anything there helps. Let me know what you find.
Load the Images onto the On-board Flash (USRP-N Series only)
Did you follow the networking setup steps listed here: USRP N210 Networking Setup?
As described in the link, you need to make sure your host PC has an IP address on the same subnet as the USRP. You can use ifconfig to set a static IP address for a specific interface, e.g. "eth0".
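As a sketch (the interface name and addresses below are examples, assuming the N210's default address 192.168.10.2):

```shell
# Example: give the host a static IP on the same subnet as the N210, e.g.:
#   sudo ifconfig eth0 192.168.10.1 netmask 255.255.255.0
# A quick sanity check that two addresses share a /24 subnet
# (compares the first three octets):
same_subnet_24() {
  [ "${1%.*}" = "${2%.*}" ]
}
same_subnet_24 192.168.10.1 192.168.10.2 && echo "same /24 subnet"
```

On newer distributions, `ip addr add 192.168.10.1/24 dev eth0` does the same job as the ifconfig line.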
Java 7 update 55 (version 1.7.0_55) introduced an issue/bug with Eclipse (including the Redhawk IDE, since it’s Eclipse-based) that is described here.
The change made in Java 7 update 55 that seems to have caused various issues with Eclipse/JacORB has been reverted here and will be available in Java 8 update 22. There is a beta release available here, but being a beta release, it may have other issues and as such may not be worth trying. Instead, you can do what I did and patch the Redhawk IDE as a workaround to the bug in Java 7 update 55.
Set an environment variable, IDE_HOME, that refers to the directory containing the eclipse executable (not the executable itself).
export IDE_HOME=/usr/local/redhawk/ide/R.1.9 # replace with your path
Append the following line to the $IDE_HOME/eclipse.ini file, writing out the literal path in place of $IDE_HOME (eclipse.ini does not expand environment variables). If a line already specifies the endorsed dirs, replace it with this line.
-Djava.endorsed.dirs=$IDE_HOME/jacorb/lib
Create the JacORB lib directory at the path specified in the previous step.
mkdir -p $IDE_HOME/jacorb/lib
Find the exact name of the JacORB directory located within $IDE_HOME/plugins, which will begin with “org.jacorb.system”, and assign it to an environment variable named JACORB_DIR:
export JACORB_DIR=`find $IDE_HOME/plugins/ -maxdepth 1 -name "org.jacorb.system*"`
Copy the contents of the JacORB jars directory into the $IDE_HOME/jacorb/lib directory:
cp -R $JACORB_DIR/jars/* $IDE_HOME/jacorb/lib/.
This should resolve any potential issues resulting from the Eclipse/JacORB bug. Does this also fix the remaining issues you’re having with the USRP?
I am trying to attach a cloud drive as described here (http://msdn.microsoft.com/en-us/library/gg466226.aspx#bk_Storage), but I get the error ERROR_AZURE_DRIVE_DEV_PATH_NOT_SET.
What does this mean? I've triple-checked my config and it seems OK.
I am trying to connect the cloud drive in a Windows Service on a VM Role.
I discovered that the FixMSI.js script from http://msdn.microsoft.com/en-us/library/gg466226.aspx#bk_Install was failing. For some reason $(BuiltOutputPath) was empty. I did it relative to the $(ProjectDir) instead.
It then failed with a different error (and much earlier). CloudDriveException 0x80070103.
Searching for this gave me this article which basically told me to manually edit the driver inf file for the wa miniport. http://msdn.microsoft.com/en-us/library/windowsazure/hh708321.aspx.
Now it attaches OK. The strange thing is that the device has a warning when the VM starts (but only when hosted in Azure); I have to manually go into the VM on Azure and update the driver.
Try changing BuiltOutputPath to BuildOutputPath. According to Richard, this is an error in the document. Refer to the Community Content section of the document for more information.
Our lead programmer likes to install tools on a shared network drive to minimize effort when updating. He recently installed Eclipse to the network drive, but when I run it, I get a window that says Workspace in use or cannot be created, choose a different one. After clicking OK, I get a window that gives me a drop down menu with only one item, the workspace on his machine. I can then browse to the workspace on my machine, click OK, and Eclipse continues to start up and run just fine. There's a check box in that second window that says Use this workspace as the default that I've checked after browsing and selecting my workspace, but the next time I start up Eclipse, it reverts back to the lead's workspace.
Are we violating some assumption that Eclipse makes about the install? We're on a Linux network, if it makes a difference.
Set up the shared Eclipse so that it cannot be modified by the users accessing it. This should (if I recall correctly) force Eclipse into a "Shared User, Hands Off" mode and default to storing settings per user account.
Do not share Workspaces (or Projects); this will only break things horribly. Use a different strategy, such as a proper revision control system.
Perhaps this documentation will be helpful.
"""The set up for this [shared] scenario requires making the install area read-only for regular users. When users start Eclipse, this causes the configuration area to automatically default to a directory under the user home dir. If this measure is not taken, all users will end up using the same location for their configuration area, which is not supported."""
I would try to run Eclipse locally as well as over the network. Using a shared network drive may make Eclipse more painful than it sometimes is. A development environment should work for the developer, even at the expense of a slightly more complicated setup.
Eclipse stores a lot of settings, including the workspace list, in its installation directory (especially the "configuration" directory). It's hard to say how well sharing the installation will work, but I wouldn't be surprised if there were a number of issues caused by "fighting" between Eclipse instances running on different developers' workstations.
To fix the particular issue you're having, you could set up a separate startup script that passes your workspace as a command-line argument to Eclipse, bypassing the workspace selection dialog you're seeing.
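For example, a wrapper script along these lines (both paths are placeholders for your setup) passes the workspace via Eclipse's -data option:

```
#!/bin/sh
# Launch the shared Eclipse with this user's own workspace, skipping the
# workspace chooser. Adjust both paths to your environment.
WORKSPACE="$HOME/workspace"
mkdir -p "$WORKSPACE"
exec /shared/tools/eclipse/eclipse -data "$WORKSPACE" "$@"
```

Each developer keeps a copy of this script (or you parameterize it by $USER), so everyone shares the install but never the workspace.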