I'm new to the RTRT tool. I want to configure a TDP for new hardware, and creating the TDP requires a .xdp file. My question is whether the .xdp file is hardware-dependent, and if so, where do I get one?
The XDP, i.e. the target deployment port, is hardware-specific. You will have to edit it in RTRT to suit your needs; it holds essentially all the settings required to run your project on the target. In the Target Deployment Port editor you can create or edit the .xdp file as needed.
I'm trying to set up a situation where I drop files into a folder on one Azure VM, and they're automatically copied to another Azure VM. I was thinking about mapping a drive from the receiver to the sender and using a file watch/copy program to send the files over the mapped drive.
What's a good recommendation for a file watch/copy program that's simple and efficient, and what security setups do I need to get the two Azure boxes to "talk" to each other? They're in the same account/resource group/etc, so I'm not going outside of a virtual network or anything like that.
By default, VMs in the same virtual network can talk to each other (this is true even with the default network security groups (NSGs) applied). So you wouldn't have to do anything special to get that type of communication working.
To answer the second part, you might want to consider using the built-in File Classification Infrastructure (FCI) rules in Windows Server to execute a short script that performs the copy.
Alternatively, you could use a service such as Azure Files to share files between those servers over CIFS/SMB. It really depends on why you are trying to have a copy of the file on two servers.
Hope that helps!
Are the settings or configuration specifics of a printer on a *nix system using CUPS stored in a file? My assumption is yes, as *nix systems seem to use files for everything as opposed to using a registry system as does Windows. If so, where are such files located? Are they capable of having their file permissions modified, and if so, what could cause such a thing to occur in a non-manual way?
This question relates to one of my other questions in helping to explore a single, individual theory toward an answer there, but is decidedly separate.
Check /etc/cups; for printers, the file is printers.conf.
Their permissions can be modified, since the files usually belong to the lp group rather than a single user. To see what might change them non-manually, check cron jobs, system updates, and any other CUPS interfaces your distribution provides.
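To quickly inspect the current ownership and permissions of that file, something like this works (a minimal sketch using standard GNU tools):
# Show owner, group, and mode of the CUPS printer configuration
ls -l /etc/cups/printers.conf
stat -c '%U:%G %a %n' /etc/cups/printers.conf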
The project I am currently working on has a mix of legacy software and new development. The new dev work is being done on Linux and we have created a large domain on the Linux side. However, all of the legacy software must remain on Windows...
I haven't found any documentation indicating a mixed domain is possible although I can't see why the node managers or servers would have a problem communicating.
Can I add a Windows managed server to my Linux domain? Has anyone ever tried this? I can leave the domains separate if need be (although management won't be happy) but I was tasked with consolidating everything into a single domain.
If you don't have an exact answer, any links to documentation would be appreciated.
I do not have practical experience running such a mixed-OS domain, but I do not see why it should not conceptually work.
WebLogic runs on Java, so it should work on both platforms.
The only problem you may experience is that if the domain was created for a particular OS, its startup scripts will be .sh for Linux or .cmd for Windows. In that case, you will probably need to get the startup scripts for the other OS and slightly modify them to match your target domain.
WebLogic is supported on both platforms, and startup scripts ship for both Windows and Linux.
The protocol the servers use to communicate is not, as far as I know, platform-specific, so there's no reason this should not work.
There doesn't seem to be any documentation on this, however, so you'll just have to go for it.
We've got this up and running... it wasn't all that bad. Here's what we did:
Create the domain on Linux (on an NFS share)
Add the WebLogic .cmd start/stop scripts to the <domain home>/bin folder
On Windows side:
Create a symlink under C: to the NFS domain location
mklink /D folder_name \\OUR-NFS01\path\to\domain
Update nodemanager.properties and nodemanager.domains to use the symlink path
Update nodemanager.properties to use our startManagedWebLogic.cmd for the start script (see the sketch after this list)
Update all of the .cmd files to reference the symlink path to the domain (e.g. DOMAIN_HOME)
Make sure nodemanager.properties and the .cmd files reference the correct Windows JAVA_HOME location
Make sure any paths in the admin console (e.g. log file location) for the Windows managed server also reference the symlink path
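For reference, here's a rough sketch of the relevant node manager entries; all paths and names below are illustrative, not our real values (note that backslashes must be escaped in Java properties files):
# nodemanager.properties (excerpt; illustrative paths)
DomainsFile=C:\\wls\\nodemanager.domains
JavaHome=C:\\Java\\jdk1.7.0
StartScriptEnabled=true
StartScriptName=startManagedWebLogic.cmd

# nodemanager.domains (maps each domain name to the symlinked domain path)
mydomain=C:\\wls_domain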
That was it. Once we had the Windows nodemanager up and running we were able to start a managed server on the Windows host.
Side note: We had issues running the node manager as a Windows service when using mapped network drives. The service would not always see the mapped drive. That is why we chose a symlink instead (it seems cleaner to me anyway).
The most recent WebLogic documentation is quite clear on this. A domain can mix hardware, operating system and JVM as long as all of them are supported:
Hardware, Operating System, and JVM Platform Compatibility
Oracle does recommend homogeneous clusters, since managed servers are expected to be equivalent to each other; if they are not, load balancing and performance may suffer (see the above link).
Coming from Windows background here.
Is it an acceptable practice for GUI Linux applications to store their data files (not user-specific) at hard-coded locations (e.g. /etc/myapp/stuff)? I couldn't find any syscalls that would return the preferred directory for app data. Is there a convention out there as to what goes where?
/opt/appname/stuff, according to the Linux Filesystem Hierarchy Standard (FHS)
Your distribution's packaging system likely provides ways to handle common installation paths. What distribution are you using?
Generally speaking, yes there is a convention. On most Linux systems, application configuration files are typically located at /etc/appname/. You'll want to consult the LSB (Linux Standard Base) and the Linux FHS (Filesystem Hierarchy Standard) for their respective recommendations.
Also, if you are targeting your application towards a specific Linux distro, then that distro vendor probably has their own specific recommendations as far as packaging and related-conventions are concerned. You'll want to look at your distro vendor's developer pages for more information.
Configuration files for processes with elevated privileges are generally stored in /etc. Data files for processes with elevated privileges (web server, mail server, chat server, etc.) are generally stored in /var. And that's where consistency ends. Some folks say you start with the base location (/etc or /var), add an appname sub-folder for your app, and continue from there as necessary.
If you're not a system daemon with elevated privileges, your only consistent choice is a dot directory in the launching user's home directory. I think the Free Desktop Standards (XDG) specify ~/.config for per-user configuration, and ~/.cache for replaceable static and/or generated data you need to save.
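A minimal shell sketch of honoring those variables, for a hypothetical app named myapp (the fallback paths follow the XDG spec's defaults):
#!/bin/bash
# Use the XDG base directories, falling back to the spec's defaults
CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"
mkdir -p "$CONFIG_DIR" "$CACHE_DIR"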
Looking at my Home Directory, a few key dot directories I have are:
~/.cache
~/.config
~/.irssi
~/.maildir
~/.mozilla
~/.kde
~/.ssh
~/.vnc
[edit]
While not a syscall, the XDG specifications I reference are at http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
There are certain conventions.
System-wide, readable/editable (text-based) configuration files go in /etc/appname/.
System-wide, per-machine binary data files that change (e.g. binary databases) go in /var/*/appname/: /var/cache/appname/, /var/spool/appname/ and /var/lib/appname/ are the most common.
System-wide binary data files that could notionally be shared between machines (e.g. graphics and sound files) go in /usr/share/appname/.
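Put together, for a hypothetical app called myapp, those conventions map to a layout like this:
/etc/myapp/myapp.conf        # editable text configuration
/var/lib/myapp/state.db      # mutable per-machine data
/var/cache/myapp/            # regenerable cached data
/usr/share/myapp/sounds/     # static data shareable between machines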
The full paths that Unix/Linux/GNU applications use to store config files and other data are usually set when an application is configured prior to compilation. These paths then get hard-coded into the compiled binary (you can see examples of this by running strings(1) over some existing executables).
That is, these types of paths are build-time configurable, not run-time configurable by default. Many apps will support command line options to specify where a configuration file is, and that configuration file will usually contain paths for other application resources. This allows an application to run with minimal configuration (built-in paths) but also allows a site to customise the paths completely.
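For example, with an autoconf-style build the standard directory flags bake these paths in at configure time (a sketch; myapp is a hypothetical application):
# Set the standard paths at build time
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make && make install

# Later, inspect which paths were compiled into the binary
strings /usr/bin/myapp | grep '^/etc'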
Under Linux, only the basic services (opening a file, doing networking and interprocess communication etc) are provided as system calls. The rest is done using libraries.
If you are coding a GUI application, you should look into your toolkit's documentation to see if it provides a mechanism for managing defaults. Both KDE and GNOME have one, for instance.
I need a tool/script to fetch network card configurations from multiple Linux machines, mostly Red Hat Enterprise Linux 5. I only know some basic bash, and I need something that can be run remotely, pulling server names from a CSV. It also needs to be quick and easy to run for non-technical types from a Windows machine. I've found WBEM/CIM/SBLIM, but I'd rather not write a whole C++ application. Can anyone point me to a tool or script that could accomplish this?
For Red Hat Enterprise Linux servers, you likely just need to take a copy of the interface configuration files (the ifcfg-* files under /etc/sysconfig/network-scripts/) from each server. You can use an sftp client to accomplish that over ssh.
(The files are just easy-to-read text config files containing the network device configuration)
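For example, scripted with scp (the user and host name below are placeholders):
# Pull every interface definition from a remote RHEL box
mkdir -p configs/server1
scp 'admin@server1:/etc/sysconfig/network-scripts/ifcfg-*' configs/server1/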
Can you give more details as to what information you need to pull? The various parameters to ifconfig give quite a lot of information about a Linux machine's network card configuration, so if you can do it that way it will be very easy. Simply write a script that converts the CSV into something white-space delimited, and then you can do something like:
#!/bin/bash
# Pull host names from the first column of the CSV (the file name is a placeholder)
HOSTS=$(cut -d, -f1 servers.csv)

for host in $HOSTS ; do
    CARDINFO=$(ssh "$host" ifconfig)
    # Do whatever processing you need on $CARDINFO here, e.g.:
    echo "$CARDINFO" > "$host-ifconfig.txt"
done
That's a very rough sketch of the pseudocode. You'll also need to set up passwordless SSH on the hosts you want to access, but that's easy to do on Red Hat.
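A sketch of the passwordless-SSH setup (user and host are placeholders):
# Generate a key pair once, with an empty passphrase
ssh-keygen -t rsa

# Copy the public key to each host you need to reach
ssh-copy-id admin@server1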
If you want to use WBEM/CIM for that (as mentioned in your original question), and you prefer a scripting environment over a programming language such as C/C++/Java, then PyWBEM and PowerCIM are two ways to do that in Python. If it needs to be bash or similar, there are command-line clients (such as cimcli from the OpenPegasus project or wbemcli from the SBLIM project) whose output you could parse. Personally, I would prefer a Python-based approach using PyWBEM. It is very easy to use: connecting to a CIM server is one line, and enumerating the CIM instances of a class is one more.
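For the command-line route, a rough sketch with wbemcli; the host, credentials, and class name here are assumptions, so check wbemcli(1) and your provider documentation for the exact form:
# Enumerate the Ethernet port instances exposed by a remote CIM server
wbemcli ei 'http://user:password@server1:5988/root/cimv2:Linux_EthernetPort'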
On the Linux system you want to query, a CIM server (tog-pegasus or sfcb) would need to be running, along with the right CIM provider packages (sblim). This approach has the advantage that your interface will be the same regardless of which Linux distribution you are using. Parsing config files is often dependent on the type of Linux distribution, and I have seen them change across versions.
One main purpose of CIM is to provide reliable interfaces that are consistent across different types of environments and that change only compatibly over time.
Last but not least, using CIM allows you to get away without having to install any agent software on the system you want to inspect (as long as you can ensure that the CIM server is running).
Andy