Using affinity in devices - redhawksdr

The use of affinity to control CPU and other resources of a component appears to be a new feature in REDHAWK 2.1. The manual only describes its use in Resource components, but I would like to use it in a Device. I tried adding the CPU affinity block to the DCD for the device, but it appeared to have no effect. Is there a way to control affinity for a Device in REDHAWK?

Section 10.4 describes how to add the affinity section to the DCD file. The same affinity directives available for Components are also available for Devices and Services. Consult section 7.3.5, Resource Affinity, for more detailed information. You can provide a CPU set using the following:
<affinity>
  <simpleref id="affinity_class" value="cpu" />
  <simpleref id="affinity_value" value="1-7" />
</affinity>
The value is any valid string that numa_parse_cpustring accepts.
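As an illustration (not from the manual), here is a minimal C sketch that checks whether a candidate string is accepted by numa_parse_cpustring; it assumes libnuma and its headers are installed and is compiled with gcc check_cpustring.c -lnuma:
/* check_cpustring.c: validate a cpu-set string with libnuma (illustrative sketch). */
#include <stdio.h>
#include <numa.h>

int main(int argc, char **argv)
{
    const char *spec = (argc > 1) ? argv[1] : "1-7";

    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* numa_parse_cpustring returns NULL if the string is not a valid cpu list */
    struct bitmask *cpus = numa_parse_cpustring(spec);
    if (cpus == NULL) {
        fprintf(stderr, "'%s' is not a valid cpu string\n", spec);
        return 1;
    }

    printf("'%s' parsed successfully (%u cpus in mask)\n", spec, numa_bitmask_weight(cpus));
    numa_bitmask_free(cpus);
    return 0;
}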
The caveat is that the stock RPMs for REDHAWK are not compiled with --enable-affinity=yes, so you will need to recompile the framework to take advantage of these options.

The Device Manager is responsible for deploying Devices and Services. Check whether section 10.4 of the documentation answers your question. To enable affinity processing by the Device Manager, build the REDHAWK software with the affinity option enabled.
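A minimal sketch of such a rebuild, assuming an autotools-style checkout of the core framework (the directory layout and install prefix may differ for your release):
cd redhawk/src                      # location of the core framework sources; adjust to your checkout
./reconf                            # regenerate the autotools build files
./configure --enable-affinity=yes   # turn on affinity processing
make
sudo make install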

Related

Are Xen vTPM's integrated to Openstack cloud?

Xen has the ability to attach virtual Trusted Platform Modules (vTPMs) to guest VMs: http://wiki.xenproject.org/wiki/Virtual_Trusted_Platform_Module_(vTPM). I would like to know if there is any OpenStack integration for this feature - can managed VMs, for instance, be provisioned with vTPMs?
I saw something similar for Hyper-V here:
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/hyper-v-vtpm-devices.html
OpenStack provides the following as part of Cloud tenant threat mitigation:
Use separated clouds for tenants, if necessary.
Use storage encryption per VM or per tenant.
OpenStack Nova has a Trusted Filter for the Filter Scheduler that schedules workloads to trusted resources only (trusted computing pools), so workloads that do not require trusted execution can be scheduled on any node, depending on utilization, while workloads with a trusted-execution requirement will be scheduled only to trusted nodes (see the nova.conf sketch further below).
With the following process:
Before you can run OpenStack with XenServer, you must install the hypervisor on an appropriate server .
Xen is a type 1 hypervisor: When your server starts, Xen is the first software that runs. Consequently, you must install XenServer before you install the operating system where you want to run OpenStack code. You then install nova-compute into a dedicated virtual machine on the host.
While XAPI is the preferred mechanism for supporting XenServer (and its deprecated sibling XCP), most existing Xen Project integration with OpenStack is done through libvirt, as configured below.
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen
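To actually steer workloads onto the trusted computing pools mentioned above, the Trusted Filter is added to the scheduler filters and pointed at an attestation service in nova.conf. A hedged sketch for a kilo-era release; the attestation host, CA file, and auth blob below are placeholders for your own Open Attestation (OAT) deployment:
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = AvailabilityZoneFilter, RamFilter, ComputeFilter, TrustedFilter

[trusted_computing]
# placeholder values for an Open Attestation server
attestation_server = oat.example.com
attestation_port = 8443
attestation_server_ca_file = /etc/nova/ssl.crt/ca.pem
attestation_api_url = /OpenAttestationWebServices/V1.0
attestation_auth_blob = i-am-openstack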
Hardware TPM is also supported:
Our solution essentially mimics how one may download software, compute its SHA-256 hash, and compare it against the advertised SHA-256 hash to determine its legitimacy. It involves using Intel TXT, which is composed of hardware, software, and firmware. The hardware, attached to the platform, called the Trusted Platform Module (TPM)[3], provides the hardware root of trust. Firmware on the TPM is used to compute secure hashes and save them to a set of registers called Platform Configuration Registers (PCRs), with different registers containing different measurements. Other components are Intel virtualization technology, signed code modules, and a trusted boot loader called TBOOT[1]. Essentially the BIOS, option ROM, and kernel/ramdisk are all measured into the various PCRs. From a bare-metal trust standpoint, we are interested in PCRs 0-7 (BIOS, option ROM). The kernel/ramdisk measurements would depend on the image the tenant seeks to launch on their bare-metal instance. PCR value testing is provided by an Open Attestation service, OAT[2]. Additional details are in the references.
with these security considerations:
At the time of this writing, very few clouds are using secure boot technologies in a production environment. As a result, these technologies are still somewhat immature. We recommend planning carefully in terms of hardware selection. For example, ensure that you have a TPM and Intel TXT support, then verify how the node hardware vendor populates the PCR values, i.e. which values will be available for validation. Typically the PCR values listed under the software context in the table above are the ones that a cloud architect has direct control over, but even these may change as the software in the cloud is upgraded. Configuration management should be linked into the PCR policy engine to ensure that the validation is always up to date.
References
Tighten the security of your OpenStack Clouds - OpenStack Superuser
Xen, XAPI, XenServer - OpenStack Configuration Reference  - kilo
XenServer - OpenStack
XenServer/XenAndXenServer - OpenStack
XenAPI Specific Bugs : OpenStack Compute (nova)
OpenStack - Xen
Xen via Libvirt - OpenStack Configuration Reference  - liberty
Hypervisors - OpenStack Configuration Reference  - kilo
OpenStack Docs: Overview of nova.conf
OpenStack Docs: nova.conf - configuration options
OpenStack Docs: Telemetry configuration options
Configure APIs - OpenStack Configuration Reference  - kilo
OpenStack Docs: Glossary
Bare-metal-trust - OpenStack
Baremetal driver - OpenStack Configuration Reference  - juno
OpenStack Docs: Integrity life-cycle
Current Series Release Notes — Nova Release Notes 16.0.0.0b3.dev171 documentation
Enhanced-Platform-Awareness-OVF-Meta-Data-Import - OpenStack
Example nova.conf configuration files - OpenStack Configuration Reference  - kilo
Chapter 7. Configuring a Basic Overcloud using Pre-Provisioned Nodes - Red Hat Customer Portal
Feature Support Matrix — nova 16.0.0.0b3.dev171 documentation
Trusted Computing for Infrastructure (pdf)
What is Hyper.sh | Hyper.sh User Guide
Xen TPM Manager
Supporting Open Source Software Development in SSOs/SDOs
Xen Cloud Platform Virtual Machine Installation Guide (pdf)
OpenStack Docs: Security hardening
policy.json - OpenStack Configuration Reference  - kilo
Appendix B. Firewalls and default ports - OpenStack Configuration Reference  - kilo
New, updated and deprecated options in Kilo for Orchestration - OpenStack Configuration Reference  - kilo

Can packer.io template specify processor type in azure builder?

Constraints:
My application requires SSE4.2 instruction set.
I am using packer.io to provision my Windows Azure VM (OpenLogic 6.5 OS).
Windows Azure returns an AMD-processor-backed VM about 15% of the time; the rest of the time they are Intel-processor-based. AMD processors do not support SSE4.2 (they support SSE4a instead), so my application is terminated with SIGILL on AMD processors.
Questions:
Can I request a specific architecture (Intel CPU) when Packer provisions a VM? I know that instance types >= A8 come only with Intel processors, but they are more expensive, and I would not want to use them for development.
If Packer cannot do it, what are the other options (PowerShell, etc.) that would give me this functionality?
Thank you.
Answering my own question: Azure does not provide a way to request a processor type. The only way to ensure an Intel processor is to not use A-series machines (as confirmed by an MSFT representative). Thus, no tool can do it.
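Since the processor type cannot be requested, one mitigation (my own hedged sketch, not part of the original answer) is to detect SSE4.2 at startup and exit cleanly instead of crashing with SIGILL; with GCC or Clang this can be done with __builtin_cpu_supports:
/* sse42_guard.c: exit cleanly if the host CPU lacks SSE4.2 (GCC/Clang builtins).
   Illustrative only; the check must run before any SSE4.2 code path executes. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    __builtin_cpu_init();                  /* required before __builtin_cpu_supports on GCC */
    if (!__builtin_cpu_supports("sse4.2")) {
        fprintf(stderr, "This host CPU does not support SSE4.2; aborting cleanly.\n");
        return EXIT_FAILURE;
    }
    printf("SSE4.2 available, continuing.\n");
    /* ... rest of the application ... */
    return EXIT_SUCCESS;
}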

Cuda GPUDirect to NIC/Harddrive?

I am currently writing a CUDA application and am running into a few IO issues "feeding the beast."
I am wondering if there is any way that I can directly read data from a RAID controller or NIC and have that data sent directly to the GPU. What I'm trying to accomplish is shown directly on slide #3 of the following presentation: http://developer.download.nvidia.com/devzone/devcenter/cuda/docs/GPUDirect_Technology_Overview.pdf.
That being said, apparently this has been answered already here: Is it possible to access hard disk directly from gpu?, however the presentation that I've attached leads me to believe all I need is to set an environment variable in Linux (but it doesn't offer any useful code snippets/examples).
Therefore, I'm wondering if it is possible to read data directly from a NIC/RAID controller into the GPU and what would be required to do so? Would I need to write my own driver for the hardware? Are there any examples where certain copies are avoided?
Thanks in advance for the help.
GPUDirect is an umbrella term: a brand referring to a family of technologies that enable direct data transfer to and/or from a GPU, bypassing unnecessary trips through host memory.
GPUDirect v1 is a technology that works with specific InfiniBand adapters and enables the sharing of a data buffer between the GPU driver and the IB driver. This technology has mostly been superseded by GPUDirect RDMA (v3). This v1 technology does not enable general usage with any NIC. The environment variable reference:
however the presentation that I've attached leads me to believe all I need is to set an environment variable in Linux
refers to enabling GPUDirect v1. It is not a general purpose NIC enabler.
GPUDirect v2 is also called GPUDirect Peer-to-Peer, and it is for transfer of data between two CUDA GPUs on the same PCIE fabric only. It does not enable interoperability with any other kind of device.
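For completeness, here is a minimal C sketch of GPUDirect Peer-to-Peer using standard CUDA runtime calls; it assumes two GPUs (numbered 0 and 1) on the same PCIE fabric and omits most error checking for brevity (compile with nvcc):
/* p2p_copy.cu: copy a buffer directly from GPU 0 to GPU 1 if peer access is possible. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int can01 = 0;
    size_t bytes = 1 << 20;
    float *buf0 = NULL, *buf1 = NULL;

    /* Ask the driver whether device 0 can access device 1 over PCIe */
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    if (!can01) {
        fprintf(stderr, "Peer access between GPU 0 and GPU 1 is not available\n");
        return 1;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   /* second argument (flags) must be 0 */
    cudaMalloc((void **)&buf0, bytes);

    cudaSetDevice(1);
    cudaMalloc((void **)&buf1, bytes);

    /* Direct device-to-device copy; with peer access enabled there is no staging through host memory */
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    printf("Copied %zu bytes GPU0 -> GPU1 via peer-to-peer\n", bytes);
    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}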
GPUDirect v3 is also called GPUDirect RDMA.
Therefore, I'm wondering if it is possible to read data directly from a NIC/RAID controller into the GPU and what would be required to do so?
Today, the canonical use case for GPUDirect RDMA is with a Mellanox Infiniband (IB) adapter. (It can also be made to work, perhaps with assistance from Mellanox, using a Mellanox Ethernet Adapter and RoCE). If this fits your definition of "NIC", then it's possible by loading a proper software stack, assuming you have appropriate hardware. The GPU and the IB device need to be on the same PCIE fabric, which means they need to be attached to the same PCIE root complex (effectively, connected to the same CPU socket). When used with a Mellanox IB adapter, typical usage would involve a GPUDirect RDMA-aware MPI.
If you have your own unspecified NIC or RAID controller, and you don't already have a GPUDirect RDMA linux device driver for it, then it's not possible to use GPUDirect. (If there is a GPUDirect RDMA driver for it, contact the manufacturer or driver provider for assistance.) If you have access to the driver source code, and are familiar with writing your own linux device drivers, you could try crafting your own GPUDirect driver. The steps involved are beyond the scope of my answer, but the starting point is documented here.
Would I need to write my own driver for the hardware?
Yes, if you don't already have a GPUDirect RDMA driver for it, one would need to be written.
Are there any examples where certain copies are avoided?
The GPUDirect RDMA MPI link gives examples and explains how GPUDirect RDMA can avoid unnecessary device<->host data copies during the transfer of data from GPU to IB adapter. In general, data can be transferred directly (over PCIE) from memory on the GPU device to memory on the IB device (or vice-versa) with no trip through host memory (GPUDirect v1 did not achieve this.)
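For illustration, with a GPUDirect RDMA-aware (CUDA-aware) MPI such as a suitably built Open MPI or MVAPICH2-GDR, a device pointer can be passed straight to MPI and the data moves GPU-to-IB without a host bounce buffer. A hedged sketch, assuming exactly two ranks and a CUDA-aware MPI build:
/* gdr_mpi.c: send a GPU buffer directly through a CUDA-aware MPI (illustrative sketch). */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    size_t n = 1 << 20;
    float *d_buf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        /* The device pointer goes straight into MPI_Send; a CUDA-aware MPI detects it
           and, with GPUDirect RDMA, DMAs from GPU memory to the IB adapter. */
        MPI_Send(d_buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}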
UPDATE: NVIDIA has more recently announced a new GPUDirect technology called GPUDirect Storage.

What is the difference between CKRM and cgroups?

Are they the same?
Could someone please explain in detail? I have gone through the web links and found that both do resource management.
Is one newer than the other?
How can a multimedia application utilize this in its code?
CKRM (Class-based Kernel Resource Management) was a project started to provide better resource management in the Linux kernel; it was later stopped and never merged into the mainline kernel.
Later a similar project, cgroups, was started for the same purpose and was merged into the kernel from the 2.6 series onwards.
We can also get more details from the cgroups directory of the kernel source Documentation.
On my system that is /usr/src/linux-3.14.1/Documentation/cgroups
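To answer the last part of the question: a multimedia application usually needs no code changes; it (or a launcher script) is simply placed into a cgroup. A minimal cgroups-v1 sketch, assuming the cpu controller is mounted at /sys/fs/cgroup/cpu as on most 3.x-era distributions; the group name and share value are arbitrary examples:
# create a cgroup for media workloads and give it a larger CPU share
mkdir /sys/fs/cgroup/cpu/media
echo 2048 > /sys/fs/cgroup/cpu/media/cpu.shares   # the default share is 1024
# move an already running player into the group by PID (mplayer is just an example)
echo $(pidof mplayer) > /sys/fs/cgroup/cpu/media/tasks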

How to enable HPET on a Hyper-V VM

I have been searching for documentation on how to properly enable the HPET on a Hyper-V VM. I haven't been able to find anything specifying whether it works or not, and if it does work, how to properly enable it. From our initial tests, it doesn't seem to be consistent with either the machine's timer or the HPET.
We are deploying Lync and UCMA based applications and have noticed a significant performance difference between machines with HPET enabled and HPET disabled in terms of their ability to handle capacity. We would like to be able to virtualize these machines, but the HPET is currently our limiting factor.
Can anyone point me in the right direction to find an answer?
I am not sure, but I don't think we can enable HPET in a VM.
Generally, for a physical machine we can enable it as follows:
1. From the BIOS, enable HPET.
2. From the OS, run bcdedit /set useplatformclock true and then reboot.
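If useful, the same tool can verify or revert the change (standard bcdedit usage, run from an elevated prompt):
:: verify: useplatformclock should appear as Yes in the output
bcdedit /enum {current}
:: revert to the default timer selection, then reboot
bcdedit /deletevalue useplatformclock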
Looking at Microsoft's Hypervisor Top-Level Functional Specification, the only references to the HPET I can see relate to the hypervisor's own use of the HPET. It doesn't appear to provide a virtual HPET device.
