Is anyone aware of an easy way of duplicating and renaming a virtual PC (MS Virtual PC, VMware or VirtualBox) which is running SharePoint, K2 and acting as a domain controller? I’m looking for a method of creating an image which can be quickly and easily copied and run by multiple parties on the same network simultaneously without name conflicts. It’s either that or go through a ground-up build on each and every machine, as far as I can see.
I'd advise against it: renaming an installed SharePoint machine is sure to cause you pain, indefinitely and unexpectedly. The way to go is with scripted installs:
create a copy of a VM with just the OS
rename machine + run sysprep
script install SQL
script install MOSS
script configure MOSS (replaces config wizard + a lot of manual settings)
It can all be done unattended.
As a shortcut for installing short-lived development machines I have used the following. Just make sure the SharePoint configuration wizard runs after the rename and there should be no problem.
create a copy of a VM having: OS+SQL+MOSS(no config wiz)
rename machine
script configure MOSS
It has the advantage that your development machines are identically installed, and it takes about 10 minutes to create a fresh one. It skips sysprep, but the machines are renamed so you can run them all on your network. Not running sysprep has never caused me grief, but I wouldn't do it for production environments. Scripting the MOSS configuration makes sure it works on the renamed environment (and all MOSS farms end up configured exactly the same: same ports, SSP setup, etc., yay!)
For MOSS configuration scripting see http://stsadm.blogspot.com/2008/03/sample-install-script.html
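To give an idea, the scripted configuration boils down to a chain of psconfig commands run from the 12\BIN folder. A rough sketch (not the linked script verbatim; the SQL server, database names, account, password and port below are placeholders):
psconfig -cmd configdb -create -server SQLSRV01 -database SharePoint_Config -user DOMAIN\spfarm -password pass@word1 -admincontentdatabase SharePoint_AdminContent
psconfig -cmd helpcollections -installall
psconfig -cmd secureresources
psconfig -cmd services -install
psconfig -cmd installfeatures
psconfig -cmd adminvs -provision -port 5555 -windowsauthprovider onlyusentlm
psconfig -cmd applicationcontent -install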
Plenty of samples for SQL out there too.
SharePoint doesn't like having the server renamed from under its feet (so to speak). Neither does SQL Server (which I assume you'd have installed on the VM for the installation). Not sure about a DC being renamed; there are probably problems there as well...
Having said that, there are some instructions I've read for renaming both SharePoint machines and SQL Server machines, so you might get somewhere.
On the third hand, I've tried it a few times and always ended up rebuilding the server from the ground up for SharePoint as it can get subtly mangled in ways which aren't always apparent straight away (the admin interface and shared services seem to be especially easy to confuse). I've found that I can build a vanilla MOSS install pretty quickly these days...
SharePoint writes the name of the server into configuration tables in SQL Server, so if you change the name of the server, things stop working.
What you can do is install just the OS, then take a copy each time you need a new machine. Run sysprep to give the machine a new name, then install SQL Server and MOSS.
This is not exactly what you are after but it should save you some time.
This is not exactly what you are after but it should save you some time.
I've done this, and it wasn't too bad.
Rename the SharePoint server first, then rename the Windows server.
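In other words, before touching the Windows computer name, run something along these lines (old and new names are placeholders):
stsadm -o renameserver -oldservername OLDNAME -newservername NEWNAME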
This posting has a nice checklist.
Don't forget to remove the NIC node from the settings file of the virtual machine, otherwise you get name collision due to duplicate MAC addresses. Here's a how-to.
I believe the solutions above are really good. But I would suggest an alternative ...
If this is a development virtual PC I would suggest that you do the following
Do not rename the server
Change the IP address to be on different network
Change the MAC address so that there are no packet collisions
Since you are using it as a development VPC, edit the computer's lmhosts file so that the entry points to the new IP address
You might want to skip step 2 and stay on the same network, but editing the hosts file will still point the name back at you. For example, if your server name was "myserver" and it pointed to 192.168.1.100, which was the local IP (via a hosts file entry), then when you copy the server, give it the IP 192.168.1.150 and edit the hosts file so that myserver points to 192.168.1.150; the system will still work flawlessly. There will be some name collisions in the machine's event log, but it won't affect your development.
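Concretely, for the example above, the entry in %SystemRoot%\System32\drivers\etc\hosts on the copied machine would be changed to:
192.168.1.150    myserver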
So, I have a collection of Windows Server 2016 virtual machines that are used to run some tests in pairs. To perform these tests, I copy a selection of scripts and files from the network on to the machine, before performing the tests.
I'm basically using a selection of scripts that have existed around here since before my time, and whilst I would like to use other methods, so much of our infrastructure relies on these scripts that overhauling the system would be a colossal task.
First up, I sort out the mapped drives with
net use X: \\network\location1 /user:domain\user password
net use Y: \\network\location2 /user:domain\user password
and so on
Soon after, I use rsync to copy files from a location in /cygdrive/y/somewhere to /cygdrive/c/somewhere_else
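The rsync step is essentially this (the real paths differ, but this is the shape of it):
rsync -av /cygdrive/y/somewhere/ /cygdrive/c/somewhere_else/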
During the rsync, I will get errors that "files have vanished" (I'm currently unable to post the exact error; I will edit this later to include it). When I check what's currently in the /cygdrive directory, all I see is /cygdrive/c and everything else has disappeared.
I've tried making a symbolic link to /cygdrive/y in a different location, I've tried including /persistent:yes on the net use command, and I've changed the power settings on the network card so it doesn't sleep. None of these work.
I'm currently looking into the settings for the virtual machines themselves at this point, but I have some doubts, as we have other virtual Windows machines that do not seem to have this issue.
Has anyone heard of anything similar and/or know of a decent method to troubleshoot this?
Right, so I've been working on this all day and finally noticed a positive change, but since my systems are in VMware's vCloud, this may not work for some people. It was simply a matter of having the VM turned off and upgrading the Virtual Hardware Version to the latest version. I have noticed with this, though, that upon a restart one of the first messages that comes up mentions that the computer is "disabling group policies".
I did a bit of research into this and found out that Windows 8 and 10 (no mention of any Windows Server machines) both automatically update Group Policies in the background, disconnecting and reconnecting mapped drives to recreate them.
It's possible that changing the Group Policy drive mapping action from "recreate" to "update" would fix this issue, and that the Virtual Hardware upgrade happened to resolve it in a similar manner.
I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://<server>:7474/webadmin, Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this: it was due to the size of the compute instance / VM I was creating. It looks like the problem occurs when running on Extra Small instances. I created a new installation using a Small instance and everything now works :).
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
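In the 1.8/1.9 series that means editing conf/neo4j-server.properties and restarting Neo4j; the setting to uncomment/change (per the linked security page) is:
org.neo4j.server.webserver.address=0.0.0.0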
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server will auto-start. The only thing you need to take care of, when constructing your VM, is to open an Input Endpoint, with any public port you want (preferably 7474 to stay true to Neo4j) and internal port 7474.
Note that the UI changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And... once the VM is up and running (it'll take about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo' then you do not want http://neo:7474 or http://neo.cloudapp.net:7474. You need to use your cloud service name (you had to create a name for the service when you deployed the VM).
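Once the endpoint is in place you can sanity-check it from your own machine, e.g. with curl (replace the service name with yours):
curl http://yourservicename.cloudapp.net:7474/
If it's wired up correctly you'll get an HTTP response back rather than the 10061 connection-refused error.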
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.
The project I am currently working on has a mix of legacy software and new development. The new dev work is being done on Linux and we have created a large domain on the Linux side. However, all of the legacy software must remain on Windows...
I haven't found any documentation indicating a mixed domain is possible although I can't see why the node managers or servers would have a problem communicating.
Can I add a Windows managed server to my Linux domain? Has anyone ever tried this? I can leave the domains separate if need be (although management won't be happy) but I was tasked with consolidating everything into a single domain.
If you don't have an exact answer, any links to documentation would be appreciated.
I do not have practical experience with running such a mixed-OS domain, but I do not see why it should not conceptually work.
WebLogic runs on Java, so that should work on both platforms.
The only problem that you may experience is that if the domain was created for a particular OS, its startup scripts will either be .sh for Linux or e.g. .cmd for Windows. In this case, you will probably need to get startup scripts for the particular OS and slightly modify them to match your target domain.
WebLogic is supported on both platforms, and startup scripts are provided for both Windows and Linux.
The protocol they communicate over is not, as far as I know, platform-specific, so there's no reason for this not to work.
There doesn't seem to be any documentation on this however, so you need to just go for it.
We've got this up and running... it wasn't all that bad. Here's what we did:
Create a domain on Linux (NFS)
Add Weblogic .cmd start/stop scripts into <domain home>/bin folder
On Windows side:
Create a symlink under C: to the NFS domain location
mklink /D folder_name \\OUR-NFS01\path\to\domain
Update nodemanager.properties and nodemanager.domains to use the symlink path
Update nodemanager.properties to use our startManagedWebLogic.cmd for the start script
Update all of the .cmd files to reference the symlink path to the domain (e.g. DOMAIN_HOME)
Make sure in nodemanager.properties and .cmd files we reference the correct Windows JAVA_HOME location
Make sure any paths in the admin console (e.g. log file location) for the Windows managed server also reference the symlink path
That was it. Once we had the Windows nodemanager up and running we were able to start a managed server on the Windows host.
Side note: we had issues running the Node Manager as a Windows service when using mapped network drives. The service would not always see the mapped drive. That is why we chose to use a symlink instead (and it seems cleaner to me anyway).
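For reference, the Node Manager wiring on the Windows side ends up looking roughly like this (a sketch; the domain name, DomainsFile path and JDK location are placeholders, C:/folder_name is the symlink created above, and forward slashes sidestep the backslash-escaping rules of Java properties files):
nodemanager.domains:
    mydomain=C:/folder_name
nodemanager.properties:
    DomainsFile=C:/Oracle/Middleware/wlserver/common/nodemanager/nodemanager.domains
    StartScriptEnabled=true
    StartScriptName=startManagedWebLogic.cmd
    JavaHome=C:/Java/jdk1.7.0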
The most recent WebLogic documentation is quite clear on this. A domain can mix hardware, operating system and JVM as long as all of them are supported:
Hardware, Operating System, and JVM Platform Compatibility
Oracle does recommend using homogeneous clusters, as managed servers are expected to be equivalent to each other; if this is not the case, it may negatively impact load balancing and performance (see the above link).
I've got 6 web sites, 2 databases and 1 cloud environment set up on my account.
I used the cloud to run some tasks via the Windows Task Manager; everything was installed on my D drive, but between last week and today, the 8th of March, my folder containing the "exe" to run has been removed.
Also, I had installed TortoiseSVN to get the files deployed, and it is not installed anymore.
I wonder if somebody has a clue about my problem.
Best Regards
Franck merlin
If you're using Cloud Services (web/worker roles), these are stateless virtual machines. That is: Windows Azure provides the operating system, then brings your deployment package into the environment after bootup. Every single virtual machine instance booted this way starts from a clean OS image, along with the exact same set of code bits from you.
Should you RDP into the box and manually install anything, whatever you install is going to be temporary at best. Your stuff will likely survive reboots; however, if the OS needs updating (especially the underlying host OS), your changes will be lost as a fresh OS is brought up.
This is why, with Cloud Services, all customizations should be done via startup tasks or the OnStart() event. You should never manually install anything via RDP since:
Your changes will be temporary
Your changes won't propagate to additional instances; you'll be required to RDP into every single box to perform the same changes.
You may want to download the Azure Training Kit and look through some of the Cloud Service labs to get a better feel for startup tasks.
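For reference, a startup task is declared in the role's ServiceDefinition.csdef and points at a script you ship inside the package; a minimal sketch (install.cmd is a placeholder for whatever silent installs you need):
<Startup>
  <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
</Startup>
The script then runs elevated every time the instance is (re)provisioned, so your customizations come back automatically.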
In addition to what David said, check out http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for the scenarios where the different drives will be destroyed.
Also take a look at http://blogs.msdn.com/b/kwill/archive/2012/09/19/role-instance-restarts-due-to-os-upgrades.aspx which points you to the RSS feed and MSDN article where you can see that a new OS is currently being deployed.
I was recently working with a product from Symantec called Norton EndPoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products.
The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or an NT Domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose the workstations to deploy the endpoint software (Symantec's virus & spyware protection suite) to.
Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software.
I really like how their product works but not sure what they are doing to accomplish all the steps. I've not done any deep investigations into this such as sniffing the network, etc... and wanted to check here to see if anyone is familiar with what I'm talking about and if you know how it's accomplished or have ideas how it could be accomplished.
My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install.
What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, where a reboot is necessary, at which point the user gets a pop-up asking to reboot now or later, etc... My hunch is that the setup.exe program is popping up this message.
To the point: I'm looking to find out the mechanism by which one Windows based machine can tell another to do some action or run some program.
My programming language is C/C++
Any thoughts/suggestions appreciated.
I was also looking into this, since I too want to remotely deploy software. I chose to packet-sniff PsTools since it has proven itself quite reliable in such remote admin tasks.
I must admit I was definitely over-thinking this challenge. You have probably done your packet sniff by now and discovered the same things I have. I hope by leaving this post behind we can assist other developers.
This is how pstools accomplishes execution of arbitrary code:
It copies a system service executable to \\server\admin$ (you either have to already have local admin on the remote machine, or supply credentials). Once the file is copied, it uses the Service Control Manager API to make the copied file a system service and start it.
Obviously, this system service can now do whatever it wants, including binding to an RPC named pipe. In our case, the system service would install an MSI. To get confirmation of successful installation you could either remotely poll a registry key or call an RPC function. Either way, you should remove the system service when you are done and delete the file (psexec does not do this; I guess they don't want it to be used surreptitiously, and in that case leaving the service behind would at least give an admin a fighting chance of realizing someone had compromised their box). This method does not require any preconfiguration of the remote machine, simply that you have admin creds and that file sharing and RPC are open in the firewall.
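Translated into plain commands, the dance looks roughly like this (target, file and service names are placeholders, and deploysvc.exe has to be written as a real Windows service; psexec's own service is of course more sophisticated):
copy deploysvc.exe \\TARGET\admin$\deploysvc.exe
sc \\TARGET create DeploySvc binPath= "C:\Windows\deploysvc.exe" start= demand
sc \\TARGET start DeploySvc
sc \\TARGET stop DeploySvc
sc \\TARGET delete DeploySvc
del \\TARGET\admin$\deploysvc.exe
(admin$ normally maps to C:\Windows on the target, which is why that is the binPath; you would poll for completion between the start and the cleanup steps.)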
I've seen demos in C# using WMI, but I don't like those solutions. File sharing and RPC are most likely to be open in firewalls. If they aren't, file sharing and remote MMC management of the remote server wouldn't work. WMI can be blocked and still leave these functional.
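For what it's worth, the WMI route those demos use boils down to something like this (node, credentials and path are placeholders), and it needs WMI/DCOM reachable rather than the admin share and the SCM:
wmic /node:TARGET /user:DOMAIN\admin /password:secret process call create "C:\temp\setup.exe"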
I've worked with a lot of software that does remote installations, and a lot of them are not as reliable as pstools. My guess is that this is because those developers are using other methods that are not as likely to be open at the firewall level.
The simple solution is often the most elusive. As always, my hat is off to the SysInternals folks. They are true hackers in the positive, old school meaning of the word!
This sort of functionality is also available with products like LANDesk and Altiris. You need a daemonized listener on the client side that will listen for instructions/connections from the server. Once a connection is made, any number of things can happen: you can transfer files, kick off installation scripts, etc., usually transparently to any users on that box.
I've used the Twisted Framework (http://twistedmatrix.com) to do this with a small handful of Linux machines. It's Python and Linux, not Windows, but the premise is the same: a listening client accepts instructions from a server and executes them. Very simple.
This functionality can also be accomplished with VB/Powershell scripts in a Windows-based domain.
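For example, with PowerShell remoting (WinRM) enabled on the targets, a one-liner along these lines runs a local install script on a remote box (names and paths are placeholders):
powershell -Command "Invoke-Command -ComputerName TARGET -FilePath C:\scripts\install.ps1"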