Adding workstation nodes in HPC Pack 2016 - Azure

I am using Microsoft HPC Pack 2016 Update 2 on a local network with an on-premises cluster. We have employed topology 5 (all nodes on the enterprise network). The head node is successfully set up and running. The problem is that after manually installing HPC Pack 2016 Update 2 on several Windows 10 workstations, all on the same local network, some of them cannot be found and added to the cluster using HPC Cluster Manager. I can't see them in HPC Cluster Manager on the head node, neither under "Resource Management > Nodes" nor via the Add Node wizard. The same installation steps work for some workstations but not for others. Is there any way to track down the cause?

In my case the problem was a broken trust relationship between the workstation and the domain. This can be verified with the `nltest /trusted_domains` command. Resetting the trust relationship fixed the problem.
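A minimal sketch of how to check and repair the trust relationship, run from an elevated PowerShell prompt on the affected workstation (cmdlet names are standard Windows tooling, but verify against your environment):

```shell
# List the domains the machine currently trusts; a failure here
# usually indicates a broken secure channel with the domain.
nltest /trusted_domains

# Verify the machine's secure channel to the domain controller
# (returns True/False).
Test-ComputerSecureChannel -Verbose

# If the check returns False, repair the trust relationship;
# prompts for a domain account allowed to join computers.
Test-ComputerSecureChannel -Repair -Credential (Get-Credential)
```

After the repair succeeds, re-run the Add Node wizard on the head node; the workstation should now be discoverable.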


How to patch GKE Managed Instance Groups (Node Pools) for package security updates?

I have a GKE cluster running multiple nodes across two zones. My goal is to schedule a job to run once a week that runs sudo apt-get upgrade to update the system packages. While researching this, I found that GCP provides a tool called "OS patch management" that does exactly that. I tried to use it, but the patch job failed with this error:
Failure reason: Instance is part of a Managed Instance Group.
I also noticed that when creating a GKE node pool there is an option to enable "Auto upgrade", but according to its description it only upgrades the Kubernetes version.
According to the Blog Exploring container security: the shared responsibility model in GKE:
For GKE, at a high level, we are responsible for protecting:
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
Conversely, you are responsible for protecting:
The nodes that run your workloads. You are responsible for any extra software installed on the nodes, or configuration changes made to the default. You are also responsible for keeping your nodes updated. We provide hardened VM images and configurations by default, manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading. If you use node auto-upgrade, it moves the responsibility of upgrading these nodes back to us.
The node auto-upgrade feature DOES patch the OS of your nodes; it does not just upgrade the Kubernetes version.
OS Patch Management only works for GCE VMs, not for GKE nodes.
You should refrain from doing OS-level upgrades in GKE yourself; that could cause unexpected behavior (for example, a package gets upgraded and changes something that breaks the GKE configuration).
Instead, let GKE auto-upgrade both the OS and Kubernetes. Auto-upgrade covers the OS as well, since GKE releases are intertwined with OS releases.
One easy way to do this is to enroll your clusters in release channels; that way they get upgraded as often as you want (depending on the channel) and the OS is patched regularly.
You can also follow the GKE hardening guide, which walks you through the steps to make your GKE clusters as secure as possible.
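A short sketch of enrolling an existing cluster in a release channel with `gcloud` (cluster name and zone are placeholders; check the flags against your gcloud version):

```shell
# Enroll the cluster in the "regular" release channel so GKE keeps
# both Kubernetes and the node OS image patched automatically.
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --release-channel regular

# Confirm that node auto-upgrade is enabled on the node pool.
gcloud container node-pools describe default-pool \
    --cluster my-cluster --zone us-central1-a \
    --format="value(management.autoUpgrade)"
```

Channels trade upgrade cadence for stability: `rapid` gets patches first, `stable` last.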

freeipa migrate from v3 to v4

My goal is to migrate from FreeIPA v3 to v4. Both versions run as a cluster of two nodes; v3 is on CentOS 6 and v4 on CentOS 7.
I want to migrate the DNS entries from the old cluster to the new one. Both have the same DNS zone(s), and once all DNS entries exist on both clusters I will migrate all hosts to the new cluster.
The users will be created manually; the goal is a fresh FreeIPA environment.
Which commands do I need to achieve that? An export and import function would also do the trick.
This is all documented in the official documentation in the "MIGRATING IDENTITY MANAGEMENT FROM RED HAT ENTERPRISE LINUX 6 TO VERSION 7" section:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/migrate-6-to-7
The key is not to start from scratch but rather to add CentOS 7 replicas to the CentOS 6 deployment, move services over to the CentOS 7 replicas, and then decommission the CentOS 6 machines. This is, in general, much easier and more reliable than starting from scratch and importing the old data.
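The replica-based path above can be sketched roughly as follows (hostnames and the IP are examples; the exact steps, including DNS setup, are in the linked Red Hat guide):

```shell
# On the CentOS 6 (IPA v3) master: prepare a replica file for the
# new CentOS 7 host and copy it over.
ipa-replica-prepare replica7.example.com --ip-address 192.0.2.10
scp /var/lib/ipa/replica-info-replica7.example.com.gpg \
    root@replica7.example.com:/var/lib/ipa/

# On the CentOS 7 host: install IPA v4 as a replica, including a CA
# and DNS so the existing zones and entries replicate over.
ipa-replica-install --setup-ca --setup-dns --no-forwarders \
    /var/lib/ipa/replica-info-replica7.example.com.gpg

# After services (CA renewal master, DNS, etc.) have been moved to
# the new replicas, remove the old masters from the topology before
# decommissioning them.
ipa-replica-manage del ipa-v3-master.example.com
```

Because DNS is replicated as part of the shared topology, there is no separate export/import step for the DNS entries.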

Looking for ways to add new nodes to Service Fabric development cluster

I remember I had 5 nodes in my local development SF cluster from the moment I installed the Service Fabric SDK on my machine. Then at some point I noticed it had only one.
Now I can't find a way to add 2 more nodes back to my cluster. All the articles I can find are about standalone or Azure cluster and usually they say the approach does not work for dev cluster like here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes
Any idea how the 5 nodes became just one? And how can I add more nodes to a dev cluster?
To switch between a one-node and a five-node cluster, select Switch Cluster Mode in the Service Fabric Local Cluster Manager. You can find it in the Windows system tray.
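The same thing can be done from the command line with the SDK's cluster setup script (the path below is the default SDK install location; run from an elevated PowerShell prompt, and note that resetting the cluster removes any deployed applications):

```shell
# Recreate the local dev cluster; by default this sets up the
# five-node configuration.
& "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1"

# To go back to a single-node cluster instead, pass the switch:
& "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -CreateOneNodeCluster
```

This is also what the Local Cluster Manager invokes behind the scenes when you use Switch Cluster Mode.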

Hadoop nodes with Linux + Windows

I have 4 Windows machines and have installed Hadoop on 3 of the 4.
The remaining machine (the 4th) runs the Hortonworks Sandbox. Now I need to make that 4th machine my master (name node) and the rest of the machines slaves.
Will it work if I update the configuration files on the other 3 machines, or is there another way to do this?
Any other suggestions?
Thanks
I finally found this, but it did not help much:
Hadoop cluster configuration with Ubuntu Master and Windows slave
A non-secure cluster will work (non-secure in the sense that you do not enable Hadoop's Kerberos-based authentication and security, i.e. hadoop.security.authentication is left as simple). You need to update every node's config to point to the new 4th node as the master for the various services you plan to host on it. You mention the namenode, but I assume you really mean to make the 4th node the 'head' node, meaning it will probably also run the resourcemanager and historyserver (or the jobtracker for old-style Hadoop). And that is only the core, without considering higher-level components like Hive, Pig, and Oozie, and not even mentioning Ambari or Hue.
Doing a post-install configuration of existing Windows (or Linux, it makes no difference) nodes is possible by editing the various xx-site.xml files, but you will have to know what to change, and it is not trivial. It would probably be much easier to just redeploy the Windows machines with an appropriate clusterproperties.txt config file. See Option III - Manual Install One Node At A Time.
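As a minimal sketch of the xx-site.xml edits involved, the two key properties on every node point HDFS and YARN at the new head node ("master4" is a placeholder hostname; the conf directory defaults to `./conf` here for illustration, where on a real node it would be Hadoop's config directory):

```shell
# Placeholder conf dir; on a real node this is e.g. $HADOOP_HOME/etc/hadoop
HADOOP_CONF=${HADOOP_CONF:-./conf}
mkdir -p "$HADOOP_CONF"

# core-site.xml: the default filesystem must name the new namenode host.
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master4:8020</value>
  </property>
</configuration>
EOF

# yarn-site.xml: the resourcemanager must likewise point at the head node.
cat > "$HADOOP_CONF/yarn-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master4</value>
  </property>
</configuration>
EOF

grep -l master4 "$HADOOP_CONF"/*.xml
```

Similar edits apply to hdfs-site.xml, mapred-site.xml, and the configs of any higher-level components, which is why redeploying with clusterproperties.txt is usually less error-prone.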

For a single CDH (Hadoop) cluster installation, which host should I use?

I started with a Windows 7 computer and set up an Ubuntu Linux virtual machine that I run using VirtualBox. I am running Cloudera Manager Free Edition version 4 and have been following the prompts on localhost:7180.
I am now stuck where the prompt asks me to "Specify hosts for your CDH cluster installation." Can I install all of the Hadoop components, and run them, in the Linux virtual machine alone?
Please point me in the right direction as to which host I should specify.
Yes, you can run CDH in a Linux virtual machine alone, using either "standalone" or "pseudo-distributed" mode. IMHO, the most effective option is "pseudo-distributed" mode.
In that mode there are multiple Java virtual machines (JVMs) running, so together they simulate a cluster with multiple nodes (each JVM acts as one cluster daemon).
Cloudera has documented how to deploy in "pseudo-distributed" mode:
https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_qs_cdh5_pseudo.html
Note: there are 3 ways to deploy CDH:
standalone: a single machine with a single JVM
pseudo-distributed: a single machine, but several JVMs, so it simulates a cluster
distributed: an actual cluster, i.e. several nodes with different purposes (workers, namenode, etc.)
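A rough sketch of the pseudo-distributed setup described in the linked Cloudera doc (package and service names follow CDH 5 conventions; verify against the doc for your version):

```shell
# Install the ready-made pseudo-distributed configuration package.
sudo yum install -y hadoop-conf-pseudo

# Format HDFS and start the HDFS daemons -- each one runs
# in its own JVM on this single host.
sudo -u hdfs hdfs namenode -format
for svc in /etc/init.d/hadoop-hdfs-*; do sudo "$svc" start; done

# `jps` should now list NameNode, DataNode and SecondaryNameNode
# as separate JVM processes, i.e. the simulated "cluster".
jps
```

So in the Cloudera Manager prompt, specifying just the VM's own hostname is enough; all roles land on that one host.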
You can specify the hostname of your machine; it will install everything on that machine only.
