CUDA versions confusion - graphics

I'd like to begin learning CUDA but I'm confused about the versions. The latest release of CUDA is 3 and I have the CUDA 3.0.1 driver on my system, but in theory my graphics card only supports 1.0. Can I use the features of the later versions, or do I need to stick to 1.0?

The latest public release is 3.2, but 4.0 is out in beta for registered developers. Compatibility is based on the features that the hardware supports. You can use the latest version of the SDK, but you will need to compile for the feature set supported by your card and not attempt to use SDK features that are not supported. You do this by setting the -arch flag when compiling with nvcc; a sketch follows the link below.
There's some explanation here:
Fermi Compatibility Guide - NVIDIA
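For example, here is a minimal sketch of compiling for an older card's feature set (the file and kernel names are illustrative; sm_10 matches a compute capability 1.0 card and was still accepted by the toolkits of this era):

// minimal.cu: build for compute capability 1.0 hardware with:
//   nvcc -arch=sm_10 minimal.cu -o minimal
__global__ void addOne(float* data)
{
    // Uses only baseline features, so it compiles for sm_10
    // even when built with a newer toolkit.
    data[threadIdx.x] += 1.0f;
}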

CUDA toolkit versions (3.1, 3.2, 4.0) are different from the graphics card's compute capability (1.0/1.1 for older GeForce cards, 1.2 for many mobile cards, 1.3 for slightly older GeForce cards, 2.0+ for the latest Fermi architecture). All the toolkits work with all CUDA-capable graphics cards; although the complete feature set may not be available, you can still write functional CUDA code.
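If you want to check what your own card reports, here is a minimal sketch using the CUDA runtime API (device index 0 is assumed):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // The compute capability describes the hardware's feature set,
    // independent of which CUDA toolkit version is installed.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    return 0;
}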

No Interface : ID3D12Device2

if (FAILED(hr = D3D12CreateDevice(NULL, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(p_ppD3D12Device2))))
{
// hr comes back as E_NOINTERFACE
}
After checking the MS documentation, I learned that the problem can probably be solved by updating Windows.
GPU: Nvidia Quadro P4000
OS: Windows 2016 Server
But I have a few questions.
Does that mean some games using ID3D12Device2 cannot be played on some Windows 10 PCs?
Which Windows update do I need to apply? (I don't want to update irrelevant stuff.)
What is the advantage of using ID3D12Device2, 3, or 4 instead of ID3D12Device?
"Windows 2016 Server" is the equivalent to Windows 10 Anniversary Update (14393). That version of Windows will therefore support ID3D12Device and ID3D12Device1 interfaces, but nothing newer.
If you upgraded to "Windows 2016 Server, Version 1709" then it would be the equivalent to Windows 10 Fall Creators Update (16299) which supports ID3D12Device, ID3D12Device1,ID3D12Device2, and ID3D12Device3.
PC games use whatever version of the device is supported by the version of the Windows 10 OS they deem required for their title. If they want to support older versions of Windows, then they use the oldest interface they need (typically ID3D12Device is fine for most basic graphics), and then use QueryInterface to conditionally test for newer support if they make use of it. They will then need fallback codepaths if they want to run without that feature.
ComPtr<ID3D12Device> device;
if (FAILED(hr = D3D12CreateDevice(NULL, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
{
// Not on Windows 10, or there is no default video device that supports DirectX 12
}
...
ComPtr<ID3D12Device2> device2;
hr = device.As(&device2);
if (SUCCEEDED(hr))
{
// This system supports the ID3D12Device2 interface
}
Most of the newer versions of the device expose new functionality that you must check for first via ID3D12Device::CheckFeatureSupport to confirm that the driver actually supports the new feature on the current hardware, so just checking for the interface version isn't sufficient.
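For example, a minimal sketch of that pattern, reusing the device from the snippet above and querying the raytracing tier (the variable names are illustrative):

// Interface version alone does not prove driver/hardware support,
// so query the feature explicitly before using it.
D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
if (SUCCEEDED(device->CheckFeatureSupport(
        D3D12_FEATURE_D3D12_OPTIONS5, &options5, sizeof(options5)))
    && options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
{
    // DirectX Raytracing is actually supported by the driver and hardware.
}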
See SystemInfo and DxCapsViewer for more about the various 'optional' features.
Windows 10 does not ship 'new features' via Windows Update piecemeal. They are shipped as part of newer versions of the OS. If you were using Windows Server 2019, which is the equivalent of Windows 10 October 2018 Update (17763), you would have support up through ID3D12Device4, which supports DirectX Raytracing with proper hardware. There's not yet a release of Windows Server that provides ID3D12Device8 (Amplification & Mesh Shaders, DirectX Raytracing 1.1, Variable Rate Shading) support, which shipped in Windows 10 May 2020 Update (19041). Remember DirectX 12 is considered a 'consumer feature'.
Having said all that, you don't need ID3D12Device2 support to learn DirectX 12. The method exposed by that interface, CreatePipelineState, creates a Pipeline State Object (PSO) from a D3D12_PIPELINE_STATE_STREAM_DESC. This is used for advanced shader scenarios like Amplification & Mesh Shaders, but you don't have to use it. For example, all of DirectX Tool Kit for DX12 works with the base ID3D12Device interface.
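For context, a rough sketch of what the stream-based path looks like, using the CD3DX12 helper types from newer versions of d3dx12.h (the root signature and shader bytecode setup are omitted, so this is a shape, not a working PSO):

// A subobject stream describes only the state you list; everything else
// takes its default. This is how mesh-shader PSOs are described.
struct PipelineStream
{
    CD3DX12_PIPELINE_STATE_STREAM_ROOT_SIGNATURE RootSignature;
    CD3DX12_PIPELINE_STATE_STREAM_MS MS; // mesh shader bytecode
    CD3DX12_PIPELINE_STATE_STREAM_PS PS;
} stream = {};

D3D12_PIPELINE_STATE_STREAM_DESC streamDesc = { sizeof(stream), &stream };
ComPtr<ID3D12PipelineState> pso;
hr = device2->CreatePipelineState(&streamDesc, IID_PPV_ARGS(&pso));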
A larger concern is that you may not have a video card & driver that supports DirectX 12. See Anatomy of Direct3D 12 Create Device for the recommended way to create the DirectX 12 device.
BTW, D3D12CreateDevice supports being able to directly create newer versions of the interface for cases where you already know the target platform supports it. For example, using DirectX 12 on Xbox or if you are a UWP/Desktop Bridge app that has a particular version of Windows 10 set as the minimum supported OS.
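A minimal sketch of that direct path, assuming the platform guarantee holds:

// Safe only when the minimum supported OS is known to expose ID3D12Device2.
ComPtr<ID3D12Device2> device2;
HRESULT hr = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device2));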

How to use AMD GPU for fastai/pytorch?

I'm using a laptop which has Intel HD Graphics 5500 (rev 09) and an AMD Radeon R5 M255 graphics card.
Does anyone know how to set it up for deep learning, specifically fastai/PyTorch?
Update 3:
Since late 2020, the torch-mlir project has come a long way and now supports all major operating systems. Using torch-mlir, you can now use your AMD, NVIDIA, or Intel GPU with the latest version of PyTorch.
You can download the binaries for your OS from here.
Update 2:
Since October 21, 2021, you can use the DirectML version of PyTorch.
DirectML is a high-performance, hardware-accelerated DirectX 12 based library that provides GPU acceleration for ML-based tasks. It supports all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
Update:
For the latest version of PyTorch with DirectML, see torch-directml.
You can install the latest version using pip:
pip install torch-directml
For detailed explanation on how to setup everything see Enable PyTorch with DirectML on Windows.
A side note concerning pytorch-directml:
Microsoft has changed the way it releases pytorch-directml. It deprecated the old 1.8 version and now offers the new torch-directml (as opposed to the previously named pytorch-directml).
It is now installed as a plugin for the actual version of PyTorch and works alongside it.
Old version:
The initial release of pytorch-directml (Oct 21, 2021):
Microsoft released Pytorch_DML a few hours ago.
You can now install it (on Windows or WSL) using the PyPI package:
pytorch-directml 1.8.0a0.dev211021
pip install pytorch-directml
So if you are on Windows or using WSL, you can hop in and give this a try!
Update :
As of PyTorch 1.8 (March 04, 2021), AMD ROCm builds are made available from PyTorch's official website. You can now easily install them on Linux, the same way you used to install the CUDA/CPU versions.
Currently, only pip packages are provided. Also, the Mac and Windows platforms are still not supported (I haven't tested with WSL2, though!).
Old answer:
You need to install the ROCm version. The official AMD instructions on building PyTorch are here.
There was previously a wheel package for ROCm, but it seems AMD doesn't distribute it anymore; instead, you need to build PyTorch from source, as the guide linked above explains.
However, you may consult this page to build the latest PyTorch version: the unofficial ROCm/PyTorch page.
Update: In March 2021, PyTorch added support for AMD GPUs; you can just install it and configure it like any other CUDA-based GPU. Here is the link.
Don't know about PyTorch, but even though Keras is now integrated with TF, you can use Keras on an AMD GPU via the PlaidML library, made by Intel. It's pretty cool and easy to set up, plus it's pretty handy to be able to switch Keras backends for different projects.

PyTorch C++ - how to know the recommended version of cuDNN?

I've previously run inference on TensorFlow graphs from C++. Now I'm embarking on working out how to run inference on PyTorch graphs via C++.
My first question is, how can I know the recommended version of cuDNN to use with LibTorch, or if I'm doing my own PyTorch compile?
Determining the recommended CUDA version is easy. Upon going to https://pytorch.org/ and choosing the options under Quick Start Locally (PyTorch Build, Your OS, etc.), the site makes it pretty clear that CUDA 10.1 is recommended, but there is no mention of the cuDNN version, and upon Googling I'm unable to find a definitive answer.
From what I understand about PyTorch on Ubuntu, if you use the Python version you have to install the graphics driver (e.g. so nvidia-smi works; version 440 currently), but the CUDA and cuDNN installs are not actually required beyond the driver because they are included in the pip3 package. Is this correct? If so, is there a command I can run in a Python script that shows the version of CUDA (expected to be 10.1) and cuDNN that the pre-compiled pip .whl uses? I suspect there is such a command but I'm not familiar enough with PyTorch yet to know what it may be or how to look it up.
I've run into compile and inferencing errors using C++ with TensorFlow when I was not using the specific recommended version of cuDNN for a certain version of TensorFlow and CUDA, so I'm aware these versions can be sensitive and I have to make the right choices from the get-go. If anybody can assist in determining the recommended version of cuDNN for a certain version of PyTorch, that would be great.
CUDA is supported via the graphics card driver; AFAIK there's no separate "CUDA driver". The system graphics card driver pretty much just needs to be new enough to support the CUDA/cuDNN versions for the selected PyTorch version. To the best of my knowledge, backwards compatibility is included in most drivers. For example, a driver that supports CUDA 10.1 (reported via nvidia-smi) will also likely support CUDA 8, 9, and 10.0.
If you installed with pip or conda, then a version of CUDA and cuDNN is included with the install. You can query the actual versions being used in Python with torch.version.cuda and torch.backends.cudnn.version().
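On the C++ side, a small hedged sketch of the equivalent availability checks with LibTorch (the version numbers themselves are easiest to read via the Python calls above):

#include <torch/torch.h>
#include <iostream>

int main()
{
    // Reports whether the CUDA and cuDNN backends bundled with this
    // LibTorch build are usable on the current machine.
    std::cout << "CUDA available:  " << torch::cuda::is_available() << "\n";
    std::cout << "cuDNN available: " << torch::cuda::cudnn_is_available() << "\n";
    return 0;
}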

Why doesn't Oracle provide any Development Kit for Java Card 3.0.1, 3.0.2, and 3.0.4?

I have a JCOP v2.4.2 r3 smart card. It is written in its Security Target that my card supports Java Card 3.0.1 Classic edition. I searched the Oracle website to find a development kit for developing applets for my card, but I couldn't find one. There is only a development kit for Java Card 3.0.3, while there are specifications for all the versions.
Q1: How can I convert my ".java" files to ".cap" files for versions other than v3.0.3?
A1: Please read this: http://www.oracle.com/technetwork/java/javame/javacard/download/releasenotes-jsp-1440109.html
The Java Card 3 platform consists of versions 3.0, 3.0.1 and 3.0.4 of the specifications and versions 3.0.1, 3.0.2, 3.0.3 and 3.0.4 of the development kit.
SDK 3.0.3 targets specification 3.0.1.
A2: (also applies to A1) You can use ant-javacard: an open-source, cross-platform Ant task for working with the SDKs of all JavaCard versions: https://github.com/martinpaljak/ant-javacard

Working with DirectX and VS2012

I have both Visual Studio 2012 Express for Desktop and for Windows 8, and I want to create DirectX applications and games. I know that there is a Windows SDK now, and in VS 2012 Express for Win8 the IDE comes with the SDK pre-installed (I know that from the new Direct3D project). My question is: if I want to develop applications for Windows Desktop (using VS2012 Express), does it come with the Windows SDK or do I need to install the DirectX SDK? And how do I know which version of DirectX my graphics card supports? Will any DirectX SDK version work with any DirectX version? As you can see I am a newbie at this stuff and any comment would be helpful. Thanks for your time.
If I wanted to develop applications for Windows Desktop (using VS2012exp), does it come with the Windows SDK or do I need to install the DirectX SDK?
Yes, with the Windows 8 SDK and Visual Studio 2012 (or the Windows 8.1 SDK and the Visual Studio 2013 preview) you can develop anything:
DirectX applications (both Windows Desktop and Windows Store)
for any supported target platform (x86, x64, ARM)
for any reasonably modern Windows operating system (starting from Windows 2000/XP)
using any of the API versions: DirectX 9.3, 10.0, 10.1, 11.0, or 11.1
Note:
The DirectX 9 API is completely different from 10 and 11, and it is obsolete. Use it only if you are targeting Windows versions below Vista.
DirectX 11 is more like an improved version of DirectX 10.
So in most cases, you will want to program for DirectX 11.1.
And no, you don't need to install the DirectX SDK. It was deprecated (latest version: June 2010). Do not use it in new code. Use it only if you need to compile some old code which uses D3DX stuff (such as ID3DXEffect, ID3DXFont, ID3DXLine, ID3DXMesh, ID3DXSprite), e.g. samples from books or other SDK samples.
And how do I know which version of DirectX my graphics card supports?
Well, if we're talking about your video card, you can look at your card vendor's or GPU vendor's site, or use any informational utility, such as GPU-Z.
If we're talking about end-user hardware, since DirectX 10-11 there are feature levels. So even if you are using the latest API (DirectX 11.1 at this moment), you can target old hardware (for example, if you use D3D_FEATURE_LEVEL_9_3, newer features from D3D_FEATURE_LEVEL_10_0 and higher will be disabled).
Note that to develop for the latest feature level you don't need a GPU that supports it. You can run and debug your application on the WARP device (it is very slow and meant for debugging purposes only, not for end-user release). For example, you can have an old DirectX 10 card (Shader Model 4.0) but target DirectX 11 (Shader Model 5.0).
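As a hedged illustration of feature levels, device creation can request a list and receive back the highest level the hardware supports (variable names are illustrative):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Ask for the highest available level from a list; the runtime returns
// the first entry the hardware supports in 'obtained'. On systems without
// the 11.1 runtime this can fail with E_INVALIDARG; retry without 11_1.
// Use D3D_DRIVER_TYPE_WARP instead to debug without capable hardware.
const D3D_FEATURE_LEVEL levels[] = {
    D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
};
ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
D3D_FEATURE_LEVEL obtained;
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    levels, _countof(levels), D3D11_SDK_VERSION,
    &device, &obtained, &context);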
Will any Direct X SDK version work with any Direct X version?
The latest DirectX SDK (June 2010) supports DirectX up to 11.0. It has no DirectX 11.1 support.
I'm a developer on Visual Studio who works on the DirectX tooling (the DX Diagnostic Tool and the new project templates). You're asking a few different questions here, but I'll try my best to answer the ones that I can.
1 - What SDKs are needed for DX application development? The link below has the best information on this. Basically, as of the June 2010 DirectX SDK, the DX SDK was combined with the Windows development SDK, so if you install the most recent Windows SDK you'll have the right stuff for developing the newest DX applications.
http://blogs.msdn.com/b/chuckw/archive/2013/07/01/where-is-the-directx-sdk-2013-edition.aspx
This link also has more in-depth info specific to the issue of DX Desktop apps on Windows 8.
http://blogs.msdn.com/b/chuckw/archive/2012/03/23/desktop-games-on-windows-8-consumer-preview.aspx
Note here that you can also install the June 2010 DirectX SDK on your machine; that won't hurt anything. We often install it ourselves, as it has some useful sample applications to look at, even if they are a bit outdated.
http://www.microsoft.com/en-pk/download/details.aspx?id=6812
2 - How do I know what my graphics card supports? I'm not sure if you mean "how do I detect this in my DX application at runtime?" or "how do I just look it up quickly for my specific system?". To figure out your own GPU it's usually a pretty quick lookup: just find your device name and punch it in online; most stuff released in the last several years supports DX11, so you should be fine here. If you installed the June 2010 SDK that I mentioned above, you can use the capability tool mentioned here:
http://www.danielmoth.com/Blog/What-DX-Level-Does-My-Graphics-Card-Support-Does-It-Go-To-11.aspx
At runtime, DX provides APIs to check whether the running graphics card can use advanced DX 11 features (see the sketch after these links):
http://msdn.microsoft.com/en-us/library/windows/desktop/hh404562(v=vs.85).aspx#check_support_of_new_direct3d_11.1_features_and_formats
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876(v=vs.85).aspx
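A minimal sketch of that runtime check, assuming device is a valid ID3D11Device* (the feature queried is just one example):

// Query optional DX 11.x features from the driver at runtime rather
// than inferring them from the card's marketing name.
D3D11_FEATURE_DATA_D3D11_OPTIONS options = {};
if (SUCCEEDED(device->CheckFeatureSupport(
        D3D11_FEATURE_D3D11_OPTIONS, &options, sizeof(options))))
{
    // e.g. options.OutputMergerLogicOp reports logic-op blending support.
}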
3 - Will any DirectX SDK work with any DX version? Here you basically always want to be using the latest DX SDK; as the link on feature levels above shows, you can target lower levels of DX while still coding against the most recent SDK. Just use the most recent SDK and target feature level 9 if you want to create apps that run on DX 9 cards.
