WebLogic: Mixed Windows and Linux Domain

The project I am currently working on has a mix of legacy software and new development. The new dev work is being done on Linux and we have created a large domain on the Linux side. However, all of the legacy software must remain on Windows...
I haven't found any documentation indicating a mixed domain is possible although I can't see why the node managers or servers would have a problem communicating.
Can I add a Windows managed server to my Linux domain? Has anyone ever tried this? I can leave the domains separate if need be (although management won't be happy) but I was tasked with consolidating everything into a single domain.
If you don't have an exact answer, any links to documentation would be appreciated.

I do not have practical experience with running such a mixed-OS domain, but I do not see why it should not work conceptually.
WebLogic runs on Java, so it should work on both platforms.
The only problem you may experience is that if the domain was created for a particular OS, its startup scripts will be either .sh scripts for Linux or .cmd scripts for Windows. In that case, you will probably need to get startup scripts for the other OS and slightly modify them to match your target domain.

WebLogic is supported on both platforms, and startup scripts are provided for both Windows and Linux.
The protocol the servers use to communicate is not, as far as I know, platform specific, so there's no reason for this not to work.
There doesn't seem to be any documentation on this, however, so you'll just have to go for it.

We've got this up and running... it wasn't all that bad. Here's what we did:
Create a domain on Linux (NFS)
Add WebLogic .cmd start/stop scripts into the <domain home>/bin folder
On Windows side:
Create a symlink under C: to the NFS domain location
mklink /D folder_name \\OUR-NFS01\path\to\domain
Update nodemanager.properties and nodemanager.domains to use the symlink path
Update nodemanager.properties to use our startManagedWebLogic.cmd for the start script
Update all of the .cmd files to reference the symlink path to the domain (e.g. DOMAIN_HOME)
Make sure in nodemanager.properties and .cmd files we reference the correct Windows JAVA_HOME location
Make sure any paths in the admin console (e.g. log file location) for the Windows managed server also reference the symlink path
That was it. Once we had the Windows nodemanager up and running we were able to start a managed server on the Windows host.
Side Note: We had issues running the node manager as a Windows service when using mapped network drives. The service would not always see the mapped drive. That is why we chose to use a symlink instead (and it seems cleaner to me anyway).
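For illustration, the two node manager files on the Windows side ended up looking roughly like the sketch below. The domain name, symlink path and JDK location are placeholders, not our actual values; the property names are the standard node manager ones.
# nodemanager.domains – map the domain name to the symlinked domain home
mydomain=C:\wldomains\mydomain
# nodemanager.properties – relevant entries only
DomainsFile=C:\wldomains\nodemanager.domains
JavaHome=C:\Java\jdk1.7.0
StartScriptEnabled=true
StartScriptName=startManagedWebLogic.cmd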

The most recent WebLogic documentation is quite clear on this. A domain can mix hardware, operating system and JVM as long as all of them are supported:
Hardware, Operating System, and JVM Platform Compatibility
Oracle does recommend using homogeneous clusters, as managed servers are expected to be equivalent to each other; if this is not the case, it may negatively impact load balancing and performance (see the above link).

Related

How to Get Fully Functional PowerShell running on Linux?

Getting PowerShell running on Linux is straightforward.
Unfortunately it is based on .NET Core, which excludes a lot of important functionality and modules, e.g. the DNSServer module.
Is there a workaround to obtain a fully functional PowerShell installation on Linux, including modules that aren't available on .NET Core (specifically DNSServer)?
Modules like DNSServer are owned and maintained by the DNS team within Microsoft and aren't part of the PowerShell project itself. This also means they aren't open source.
On top of that, for DNSServer specifically, that module uses WMI under the hood (I'd go so far as to say it's a thin wrapper around the WMI calls), and since WMI is also not open source and not available on Linux, I'd say there's little chance of this module making it there any time soon.
As a general case, your best bet is probably to use PSRemoting from Linux to a Windows machine that has the modules you want, then either use Implicit Remoting (Import-PSSession) or just straight up make remote calls with Invoke-Command.
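A rough sketch of that approach from pwsh on Linux (the host name and credential prompt are placeholders, and the exact authentication options depend on how WinRM is set up on the Windows side):
# Open a remoting session to a Windows server that has the DnsServer module installed
$session = New-PSSession -ComputerName dns01.example.com -Credential (Get-Credential) -Authentication Negotiate
# Implicit remoting: import local proxies for the remote cmdlets
Import-PSSession -Session $session -Module DnsServer -Prefix Remote
Get-RemoteDnsServerZone            # actually runs Get-DnsServerZone on the remote host
# Or call the cmdlets explicitly
Invoke-Command -Session $session -ScriptBlock { Get-DnsServerZone }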

Changing the order in which directories are searched for programs in Linux

I recently was in a situation where the software center on my Ubuntu installation was not starting. When I tried to launch it from the console, I found that Python was unable to find Gtk, although I hadn't removed it.
from gi.repository import Gtk,Gobject
ImportError: cannot import name Gtk
I came across a closely related question on Stack Overflow (I am unable to provide a link to the question as of now). The accepted solution (which also worked for me) was to remove the duplicate installation of Gtk from /usr/local, as GObject was present in this directory but Gtk was not.
So I removed it, launched software-center again, and it worked.
While I am happy that the problem is solved, I would like to know if removing files from /usr/local can cause severe problems.
Also, echo $PATH on my console gives:
/home/rahul/.local/bin:/home/rahul/.local/bin:/home/rahul/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/rahul/.local/bin:/home/rahul/.local/bin
which shows that /usr/local/bin is searched before /usr/bin. Should $PATH be modified so that the order of lookup is reversed? If so, how?
You actually don't want to change that.
/usr/local is a path that, according to the Linux Filesystem Hierarchy Standard, is dedicated to data specific to this host. Let's go back in time a bit: when computers were expensive, to use one you had to go to some lab where many identical UNIX(-like) workstations were found. Since disk space was also expensive, and the machines were all identical and had more or less the same purpose (think of a university), they had /usr mounted from a remote file server (most likely over the NFS protocol), which was the only machine with disks big enough to hold all the applications that could be used from a workstation. This also allowed for ease of administration: adding a new application or upgrading one to a newer version could be done just once on the file server, and all machines would "see" the change instantly. This is why the scheme persisted even as bigger disks became inexpensive.
Now imagine that, for some reason, a single workstation needed a different version of an application, or maybe a new application was bought with only a few licenses and could thus be run only on selected machines: how do you handle this situation? This is why /usr/local was born, so that single machines could override network-wide data with local data. For this to work, /usr/local must of course point to a local partition, and things in that path must come before things in /usr in all search paths.
Nowadays, Linux machines are very often stand-alone machines, so you might think this scheme no longer makes sense, but you would be wrong: modern Linux distributions have package management systems which, more or less, play the role of the above-mentioned central file server. What if you need a different version of an application than what is available in the Ubuntu repository? You could install it manually, but if you put it in /usr, it could be overwritten by an update performed by the package management system. So you put it in /usr/local instead, as this path is usually guaranteed not to be altered in any way by the package management system. Again, it should be clear that in this case you want anything in /usr/local to be found before anything in /usr.
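If you want to see that lookup order in action, the shell can show you every match on $PATH (the command name here is just an example):
# type -a lists every match in $PATH order; the first one is what actually runs
type -a python
# To prefer /usr/bin for a single session anyway (not recommended, per the above):
export PATH=/usr/bin:$PATH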
Hope you get the idea :).

Hard-coding paths in Linux

Coming from a Windows background here.
Is it an acceptable practice for GUI Linux applications to store their data files (not user-specific) at hard-coded locations (e.g. /etc/myapp/stuff)? I couldn't find any syscalls that would return the preferred directory for app data. Is there a convention out there as to what goes where?
/opt/appname/stuff according to the Linux Filesystem Hierarchy Standard
Your distribution's packaging system likely provides ways to handle common installation paths. What distribution are you using?
Generally speaking, yes there is a convention. On most Linux systems, application configuration files are typically located at /etc/appname/. You'll want to consult the LSB (Linux Standard Base) and the Linux FHS (Filesystem Hierarchy Standard) for their respective recommendations.
Also, if you are targeting your application towards a specific Linux distro, then that distro vendor probably has their own specific recommendations as far as packaging and related-conventions are concerned. You'll want to look at your distro vendor's developer pages for more information.
Configuration files for processes with elevated privileges are generally stored in /etc. Data files for processes with elevated privileges (Web Server, Mail Server, Chat Server, etc.) are generally stored in /var. And that's where consistency ends. Some folks say you start with the location to store them (/etc|/var) then have an appname sub-folder for your app, then continue from there as necessary.
If you're not a system daemon with elevated privileges, your only consistent choice is a dot directory in the launching user's home directory. I think the Free Desktop Standards (XDG) specify ~/.config for per-user configuration, and ~/.cache for replaceable static and/or generated data you need to save.
Looking at my Home Directory, a few key dot directories I have are:
~/.cache
~/.config
~/.irssi
~/.maildir
~/.mozilla
~/.kde
~/.ssh
~/.vnc
[edit]
While not a syscall, the XDG specifications I reference are at http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
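As a small illustration of those XDG locations (the application name is a placeholder; the variables and fallbacks are the ones the basedir spec defines):
# Per-user config and cache directories, resolved the XDG way
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"
mkdir -p "$config_dir" "$cache_dir"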
There are certain conventions.
System-wide, readable/editable (text-based) configuration files go in /etc/appname/.
System-wide, per-machine binary data files that change (e.g. binary databases) go in /var/*/appname/ - /var/cache/appname/, /var/spool/appname/ and /var/lib/appname/ are the most common.
System-wide binary data files that could notionally be shared between machines (e.g. graphics and sound files) go in /usr/share/appname/.
The full paths that Unix/Linux/GNU applications use to store config files and other data are usually set when an application is configured prior to compilation. These paths then get hard-coded into the compiled binary (you can see examples of this by running strings(1) over some existing executables).
That is, these types of paths are build-time configurable, not run-time configurable by default. Many apps will support command line options to specify where a configuration file is, and that configuration file will usually contain paths for other application resources. This allows an application to run with minimal configuration (built-in paths) but also allows a site to customise the paths completely.
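For example, with a typical autotools-style build the paths are chosen at configure time and baked into the binary (the package and binary names here are illustrative):
# Pick the installation paths at build time
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make
sudo make install
# The hard-coded paths are then visible in the compiled binary
strings /usr/bin/someapp | grep '^/etc/'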
Under Linux, only the basic services (opening a file, doing networking and interprocess communication etc) are provided as system calls. The rest is done using libraries.
If you are coding a GUI application, you should look into your toolkit's documentation to see if it provides a mechanism for managing defaults. Both KDE and Gnome have one for instance.

Automated deployment of files to multiple Macs

We have a set of Mac machines (mostly PPC) that are used for running Java applications for experiments. The applications consist of folders with a bunch of jar files, some documentation, and some shell scripts.
I'd like to be able to push out new versions of our experiments to a directory on one Linux server, and then instruct the Macs to update their versions, or retrieve an entire new experiment if they don't yet have it.
../deployment/
../deployment/experiment1/
../deployment/experiment2/
and so on
I'd like to come up with a way to automate the update process. The Macs are not always on, and they have their IP addresses assigned by DHCP, so the server (which has a domain name) can't contact them directly. I imagine that I would need some sort of daemon running full-time on the Macs, pinging the server every minute or so, to find out whether some "experiments have been updated" announcement has been set.
Can anyone think of an efficient way to manage this? Solutions can involve either existing Mac applications, or shell scripts that I can write.
You might have some success with a simple Subversion setup; if you have the dev tools on your farm of Macs, then they'll already have Subversion installed.
Your script is as simple as running svn up on the deployment directory as often as you want and checking your changes in to the Subversion server from your machine. You can do this without any special setup on the server.
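On each Mac, that can be as little as a cron entry (the path and interval below are just an example):
# crontab entry: pull the latest deployment every 5 minutes
*/5 * * * * /usr/bin/svn up /Users/Shared/deployment >> /tmp/deploy-update.log 2>&1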
If you don't care about history and a version control system seems too "heavy", the traditional Unix tool for this is called rsync, and there's lots of information on its website.
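A minimal rsync pull from each Mac might look like this (the host, user and paths are placeholders):
# Mirror the server's deployment directory; --delete drops files removed upstream
rsync -avz --delete user@deployserver.example.com:/srv/deployment/ /Users/Shared/deployment/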
Perhaps you're looking for a solution that doesn't involve any polling; in that case, maybe you could have a process that runs on each Mac and registers a local network Bonjour service; DNS-SD libraries are probably available for your language of choice, and it's a pretty simple matter to get a list of active machines in this case. I wrote this script in Ruby to find local machines running SSH:
#!/usr/bin/env ruby
require 'rubygems'
require 'dnssd'

# Browse for hosts advertising SSH over Bonjour and print each one found
handle = DNSSD.browse('_ssh._tcp') do |reply|
  puts "#{reply.name}.#{reply.domain}"
end
sleep 1        # give replies a moment to arrive
handle.stop
You can use AppleScript remotely if you turn on Remote Events on the client machines. As an example, you can control programs like iTunes remotely.
I'd suggest that you put an update script on your remote machines (AppleScript or otherwise) and then use remote AppleScript to trigger running your update script as needed.
If you update often, then Jim Puls's idea is a great one. If you'd rather have direct control over when the machines start looking for an update, then remote AppleScript is the simplest solution I can think of.

How to duplicate a virtual PC with SharePoint, K2 and domain controller

Is anyone aware of an easy way of duplicating and renaming a virtual PC (can be MS VPC, VMWare or Virtual Box), which is running SharePoint, K2 and acting as a domain controller? I’m looking for a method of creating an image which can be quickly and easily copied and run by multiple parties on the same network simultaneously without name conflicts. It’s either that or go through a ground-up build on each and every machine as far as I can see.
I'd advise against it... renaming an installed SharePoint machine is sure to cause you pain indefinitely and unexpectedly. The way to go is with scripted installs:
create copy of a VM with OS
rename machine + run sysprep
script install SQL
script install MOSS
script configure MOSS (replaces config wizard + a lot of manual settings)
It can all be done unattended.
As a shortcut to install short-lived development machines I have used the following. Just make sure the SharePoint configuration wizard runs after the rename and there should be no problem.
create a copy of a VM having: OS+SQL+MOSS(no config wiz)
rename machine
script configure MOSS
It has the advantage of your development machines being identically installed. Takes about 10 minutes to create a fresh one. It doesn't have sysprep but they are renamed so you can run them all on your network. Not running sysprep has never caused me grief but I wouldn't do it for production environments. Running the configuration of MOSS scripted makes sure it will work on the renamed environment (and all MOSS farms are configured exactly the same, same ports, SSP setup, etc, yay!)
For MOSS configuration scripting see http://stsadm.blogspot.com/2008/03/sample-install-script.html
Plenty of samples for SQL out there too.
SharePoint doesn't like having the server renamed out from under its feet (so to speak). Neither does SQL Server (which I assume you'd have installed on the VM for the installation). Not sure about a DC being renamed; there are probably problems there as well...
Having said that, there are some instructions I've read for renaming both SharePoint machines and SQL Server machines, so you might get somewhere.
On the third hand, I've tried it a few times and always ended up rebuilding the server from the ground up for SharePoint as it can get subtly mangled in ways which aren't always apparent straight away (the admin interface and shared services seem to be especially easy to confuse). I've found that I can build a vanilla MOSS install pretty quickly these days...
Sharepoint writes the name of the server into configuration tables in SQL Server. So if you change the name of the server, things stop working.
What you can do is install just the OS, then take a copy each time you need a new machine. Run sysprep to give the machine a new name, then install SQL Server and MOSS.
This is not exactly what you are after, but it should save you some time.
I've done this, and it wasn't too bad.
Rename the SharePoint server first, then rename the Windows server.
This posting has a nice checklist.
Don't forget to remove the NIC node from the settings file of the virtual machine, otherwise you get a name collision due to duplicate MAC addresses. Here's a how-to.
I believe the solutions above are really good. But I would suggest an alternative ...
If this is a development virtual PC I would suggest that you do the following
Do not rename the server
Change the IP address to be on a different network
Change the MAC address so that there are no packet collisions
Since you are using it as a development VPC, edit the computer's lmhosts file so the entry points to the new IP address
You might want to skip step 2 and stay on the same network, but changing the hosts file will still point back to you. For example, if your server name was "myserver" and it pointed to 192.168.1.100, the local IP (via a hosts file entry), then if you copy the server, give it the IP 192.168.1.150, and edit the hosts file to point myserver to 192.168.1.150, the system will still work flawlessly. There will be some domain name collisions in the event log of the machine, but it won't affect your development.
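For example, the hosts entry on the copied machine would end up looking something like this (using the name and address from the example above):
# %SystemRoot%\System32\drivers\etc\hosts on the copied VPC
192.168.1.150    myserver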
