FreeIPA: migrate from v3 to v4

My goal is to migrate from FreeIPA v3 to v4. Both deployments are two-node clusters;
the v3 cluster runs CentOS 6 and the v4 cluster runs CentOS 7.
I want to migrate the DNS entries from the old cluster to the new one. Both have the same DNS zone(s), and once all DNS entries exist on both clusters I will migrate all hosts to the new cluster.
The users will be created manually; the goal is to have a fresh FreeIPA environment.
Which commands do I need to know or use to achieve that?
An export and import function would also do the trick.

This is all covered in the official documentation, in the "MIGRATING IDENTITY MANAGEMENT FROM RED HAT ENTERPRISE LINUX 6 TO VERSION 7" section:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/migrate-6-to-7
The key is not to start from scratch, but rather to add CentOS 7 replicas to the CentOS 6 deployment, move services over to the CentOS 7 replicas, and then decommission the CentOS 6 machines. This is, in general, much easier and more reliable than starting from scratch by importing the older data.
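A minimal sketch of that replica-based flow, assuming hypothetical hostnames centos6-master.example.com and centos7-replica.example.com; check the linked guide for the exact options supported by your versions:

    # On the CentOS 6 (v3) master: prepare a replica file for the new node
    ipa-replica-prepare centos7-replica.example.com --ip-address 192.0.2.10

    # Copy the resulting file to the CentOS 7 machine
    scp /var/lib/ipa/replica-info-centos7-replica.example.com.gpg \
        root@centos7-replica.example.com:/var/lib/ipa/

    # On the CentOS 7 (v4) node: install it as a replica, including CA and DNS
    ipa-replica-install --setup-ca --setup-dns --forwarder=192.0.2.1 \
        /var/lib/ipa/replica-info-centos7-replica.example.com.gpg

    # After services (CA renewal master, CRL generation, DNS) are moved over,
    # remove the old node from the topology
    ipa-replica-manage del centos6-master.example.com

Note that FreeIPA stores DNS zones and records in the replicated LDAP tree, so setting up the CentOS 7 replicas with --setup-dns brings all existing DNS entries over automatically; no separate export/import step is needed.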

Related

dotnet client's traffic not evenly distributed while using hazelcast

I am using version 3.11.2 with a 4-node cluster setup. One of my applications is written in Java, the other in .NET. When I check the Java client's cluster, all nodes have almost the same network traffic. On the .NET client's cluster, on the other hand, one node uses, for example, 200 MB of traffic while the other 3 use only 3-5 MB. The Java cluster is configured for cache; the .NET cluster uses a map.
How can I fix this?
PS: I know 3.11.2 is an old version, but I don't like to upgrade it unless I hit a bug which forces me to do it.
Mehmet

Adding workstation nodes in HPC Pack 2016

I am using Microsoft HPC Pack 2016 Update 2 on a local network with an on-premises cluster. We have employed topology 5 (all nodes on the enterprise network). The head node is successfully set up and running. The problem is that after manually installing HPC Pack 2016 Update 2 on different Windows 10 workstations, all on the same local network, some cannot be found and added to the cluster using the HPC Cluster Manager. I can't see them in the HPC Cluster Manager running on the head node, neither through "Resource Management > Nodes" nor using the Add Node wizard. While the same installation and add-node steps work for some of the workstations, they do not work for others. Is there any way to track down the cause?
In my case the problem was due to a broken trust relationship with the domain. This can be verified using the nltest /trusted_domains command, as sketched below. Resetting the trust relationship fixed the problem.
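For reference, a sketch of the check and one common way to reset a broken secure channel, run on the affected workstation; YOURDOMAIN and the administrator account are placeholders:

    # List the domains the workstation trusts (the check from this answer)
    nltest /trusted_domains

    # Query the machine's secure channel to its domain
    nltest /sc_query:YOURDOMAIN

    # One common repair, from an elevated PowerShell prompt
    Test-ComputerSecureChannel -Repair -Credential YOURDOMAIN\Administrator

Alternatively, removing the workstation from the domain and re-joining it achieves the same reset.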

Looking for ways to add new nodes to Service Fabric development cluster

I remember I had 5 nodes in my local development SF cluster from the moment I installed the Service Fabric SDK on my machine. Then at some point I noticed it had only one.
Now I can't find a way to add more nodes back to my cluster. All the articles I can find are about standalone or Azure clusters, and they usually say the approach does not work for a dev cluster, like here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes
Any idea how 5 nodes ended up being one? How can I add more nodes back to the dev cluster?
To switch between a one-node cluster and a 5-node cluster, select Switch Cluster Mode in the Service Fabric Local Cluster Manager, which you can find in the Windows tray.
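The same switch can also be done from PowerShell using the cluster-setup script that ships with the SDK (the path below is the default SDK install location; run from an elevated prompt):

    # Recreate the default five-node dev cluster
    & "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1"

    # Or explicitly create a one-node dev cluster
    & "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -CreateOneNodeCluster

Keep in mind that switching the cluster mode resets the local cluster, so any applications deployed to it are removed.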

Hadoop nodes with Linux + Windows

I have 4 Windows machines, and I have installed Hadoop on 3 of the 4.
The remaining machine (say the 4th machine) has the Hortonworks Sandbox. Now I need to make the 4th machine my server (NameNode)
and the rest of the machines slaves.
Will it work if I update the configuration files on the other 3 machines,
or is there another way to do this?
Any other suggestions?
Thanks
Finally I found this, but it did not help much:
Hadoop cluster configuration with Ubuntu Master and Windows slave
A non-secure cluster will work (non-secure in the sense that you do not enable Hadoop's Kerberos-based auth and security, i.e. hadoop.security.authentication is left as simple). You need to update the config on all nodes to point to the new 4th node as the master for the various services you plan to host on it. You mention the namenode, but I assume you really mean to make the 4th node the 'head' node, meaning it will probably also run the resourcemanager and historyserver (or the jobtracker for old-style Hadoop). And that is only core Hadoop, without considering higher-level components like Hive, Pig, Oozie, etc., and not even mentioning Ambari or Hue.
Doing a post-install configuration of existing Windows (or Linux, it makes no difference) nodes is possible by editing the various xx-site.xml files, as sketched below. You'll have to know what to change, and it is not trivial. It would probably be much easier to just redeploy the Windows machines with an appropriate clusterproperties.txt config file. See Option III - Manual Install One Node At A Time.
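As an illustration of the kind of xx-site.xml edits involved, assuming the 4th machine is reachable as master-node and you are on old-style (MRv1) Hadoop with a jobtracker; property names and ports vary by version and distribution:

    <!-- core-site.xml on every node: point HDFS at the new namenode -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://master-node:8020</value>
    </property>

    <!-- mapred-site.xml on every node: point tasktrackers at the new jobtracker -->
    <property>
      <name>mapred.job.tracker</name>
      <value>master-node:8021</value>
    </property>

After changing these, the services on all nodes have to be restarted for the new master to be picked up.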

Restart TaskTracker and JobTracker of Hadoop CDH4 using Cloudera services

I have made a few entries in mapred-site.xml. To pick up these changes I need to restart the TaskTracker (TT) and JobTracker (JT) running on my cluster nodes.
Is there any way I can restart them using the Cloudera Manager web services from the command line?
That way I can automate those steps: any time a change is made to the Hadoop configuration files, it will restart the TT and JT.
Since version 4.0, Cloudera Manager exposes its functionality through an HTTP API which allows you to perform these operations with curl from the shell. The API is available in both the Free Edition and the Enterprise Edition.
Their repository also hosts a set of client-side utilities for communicating with the Cloudera Manager API. You can find more on the documentation page.
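A sketch of what such a curl call can look like; the host, port, credentials, cluster name (Cluster1), and service name (mapreduce1) are placeholders you can look up with a GET on /api/v1/clusters:

    # Restart the whole MapReduce service (TT and JT) through the CM API
    curl -u admin:admin -X POST \
      "http://cm-host:7180/api/v1/clusters/Cluster1/services/mapreduce1/commands/restart"

    # The POST returns a command id; poll it until the restart completes
    curl -u admin:admin "http://cm-host:7180/api/v1/commands/1234"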
