A PyTorch project is supposed to run on a GPU. I want to run it on my laptop with only a CPU. There are a lot of places calling .cuda() on models, tensors, etc., which fail to execute when CUDA is not available. Is it possible to do this without changing the code everywhere?
Here's the simplest fix I can think of:

1. Put the following line near the top of your code:

   device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

2. Do a global find-and-replace: change .cuda() to .to(device), where device is the variable set in step 1.
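Because .to(device) is a no-op when the tensor or module is already on that device, the same code then runs unchanged on both GPU and CPU machines. A minimal sketch of the pattern (assuming PyTorch is installed; the model and tensor here are just placeholders):

```python
import torch

# Choose the device once, near the top of the program.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Everywhere the code used to call .cuda(), call .to(device) instead:
model = torch.nn.Linear(4, 2).to(device)  # was: model.cuda()
x = torch.randn(3, 4).to(device)          # was: x.cuda()

y = model(x)
print(y.device, y.shape)
```

On a CPU-only laptop this prints a cpu device; on a CUDA machine the identical script runs on the GPU.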
That depends wholly on the structure and design of the code in the project in question. Usually there is an argument to specify which device to run on; you should check your code. It is also not advisable to run it on a laptop CPU, especially if the project involves a moderately large model. Good luck!
I am currently trying to make EV-FlowNet work on my computer. EV-FlowNet is an open-source neural network for event-based optical flow estimation (full code here: https://github.com/daniilidis-group/EV-FlowNet). It is based on TensorFlow, but
unfortunately, I have no experience with this library, so I have a hard time figuring out why things are not working. I have downloaded the trained network, the input data, and the ground truth, and have placed them in the folders listed in the README file. I am trying to run 'test.py', and it runs without errors. However, it never enters the main loop in which the results are visualized.
The condition for the main loop is this:
while not coord.should_stop():
coord is defined like this:
coord = tf.train.Coordinator()
and the threads are defined like this:
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
I have tried googling it, but all I could find was that the threads stop if any of them calls coord.request_stop(). Since I can't find anything in the code that would make them stop, I don't understand why coord.should_stop() is true from the very beginning. I know this question is quite vague, but since I have no experience with TensorFlow I am not sure what other information might be required, which is why I have included the link to the entire code. Thanks in advance!
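For reference, a Coordinator is essentially a shared stop flag: should_stop() becomes true as soon as any thread calls request_stop(), and the queue-runner machinery calls request_stop() automatically when one of its threads raises an exception (for instance, an input pipeline that finds no files). The mechanism can be mimicked with only the standard library; the names below are illustrative stand-ins, not TensorFlow API:

```python
import threading

# A rough stand-in for tf.train.Coordinator using only the standard
# library; `stop_event` plays the role of coord.should_stop().
stop_event = threading.Event()

def queue_runner():
    try:
        raise IOError("no input files found")  # simulated pipeline failure
    except IOError:
        stop_event.set()  # what coord.request_stop() does on an exception

t = threading.Thread(target=queue_runner)
t.start()
t.join()

# By the time the main loop starts, the flag is already set,
# so the loop body never runs -- the symptom described above.
iterations = 0
while not stop_event.is_set():  # analogous to `while not coord.should_stop():`
    iterations += 1

print(iterations)  # 0
```

So a loop that never executes usually means a queue-runner thread failed immediately, which is worth checking (e.g. whether the data paths actually resolve) before suspecting the loop itself.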
Is the VkImageUsageFlagBits::VK_IMAGE_USAGE_TRANSFER_SRC_BIT supposed to automatically be set for swapchain images?
I've been looking it up, and I don't see any indication that this flag is supposed to be automatically added to swapchain images.
However, I have tested that in SDK 1.2.148.1 (I haven't tried others), on an NVIDIA GTX 1080 Ti with the latest drivers, that flag is automatically added even when not requested. I'm not sure whether it's the SDK or the driver that is setting it, but one of them is.
I first verified it in my own code, then tried it using Sascha Willems' Triangle example, by commenting out the following line:
https://github.com/SaschaWillems/Vulkan/blob/master/base/VulkanSwapChain.cpp#L347
This is an image of the Triangle app running under NVIDIA Nsight; it shouldn't have the flag:

(screenshot: Nsight showing the usage flags of the swapchain image)
Vulkan has no mechanism to tell you what the usage flags are on an image, whether swapchain or not. As such, you are relying on layers and debugging tools to feed that information to you.
However, these tools themselves are perfectly capable of setting these flags. If a debugging tool wants to be able to show you the image data in a swapchain image, then that image must be usable as the source for a transfer operation. So such a tool must set those flags. The same goes for any layers that might be involved in debugging.
Indeed, the fact that the debugging tool knows that the flag has been set is evidence that the Vulkan implementation didn't set it. If it had set the flag, nobody outside of the implementation would know about it.
So in all likelihood, this is the act of a debugging layer. Unfortunately, it is setting this flag before the validation layer sees it. And to validate uses of the imageless framebuffer feature, the validation layer needs to look at those flags.
To fix this, you may be able to reorder the layers you have activated. ppEnabledLayerNames is an ordered list of layers; if you aren't doing so already, put the validation layers first in this list. If you're enabling layers more globally, see if you can play around with the order of those layers.
You can set the imageUsage bits explicitly in VkSwapchainCreateInfoKHR when creating the swapchain.
I need to make a neural network to fit some data, a very large set of data, and my laptop is not powerful enough to use the training tools in the Neural Network Toolbox. I have access to a nice Linux cluster with Matlab, but the compute nodes don't do X11, so I can only use command line or script.
After reading the MathWorks page on Using Neural Network Toolbox, it looks like it should be possible to harness the functionality of tools such as nftool from the command line, but even after looking at the Getting Started Guide and the User's Guide, I found no understandable instructions for how to start issuing commands to the NNT without bringing up a GUI.
The closest was on page 20 of the User's Guide, in the third paragraph, where it says that reading the code of the GUI tools provides a good introduction to using the toolbox by command line. However, the code (found in C:\Program Files\MATLAB\R2016a\toolbox\nnet) contains a lot of references to commands I'm not familiar with, particularly running Java from within Matlab.
Despite having used the nftool several times, I do not recognize any of the lines as "Ah, this is what does that". So, I guess what I'm looking for is a set of instructions that connect the commands to what they do.
If you use the NN UI first, it can generate the MATLAB code for you as an example. There is no simple answer to your question, as the NN toolbox has quite a large array of functionality, so essentially the answer would be a complete tutorial.
http://www.mathworks.com/help/nnet/gs/fit-data-with-a-neural-network.html#f9-33554
In fact if you click that link, the advice is precisely what I was saying above.
After stepping through the nftool (or similar) screens, don't stop at the Deploy screen; keep clicking Next until the final 'Save' screen, and choose the simple or advanced script.
I need to build a command line tool, that will take a 3D model as an argument, and will output photos of it, that may or may not be processed by this application. The tool will be deployed on Linux, but I want to make it as cross-platform as possible.
The program is not supposed to present a window of any kind, or accept any other input apart from the command line arguments.
I was wondering how someone would approach this. I am currently able to display the 3D model on-screen with the help of GLFW, which actually drives my event handlers for peripheral input, and also my main loop. However, I don't know whether GLFW will help me if I want to make a command-line program with files as input and output.
Does anyone have any indications as to how to approach this?
1. Create an invisible/hidden window,
2. use its GL context to render to an FBO, and
3. use glReadPixels to save the result to a file.
For OpenGL to work you need an OpenGL context, which used to require an active windowing system that could produce a drawable for which the context could be created.
Some OpenGL implementations, like Mesa, actually allow you to create an OpenGL context for drawables that are created without a windowing system; Mesa calls this "off-screen Mesa". With Gallium3D drivers on Linux this may even give you GPU acceleration, but usually you end up in the "softpipe" software rasterizer.
Does anyone have any indications as to how to approach this?
Don't use OpenGL for it if you can avoid it: OpenGL is mostly meant for creating interactive graphics. But of course, if your goal is visualization of complex data, then a GPU would be better suited.
With NVidia hardware you'll need to use an X server for that; the X server must be running and active on the console for this to work. AMD hardware with the open source drivers and Mesa may give you off-screen capabilities without X (but I never tried that).
On Windows Server you don't have proper OpenGL support anyway (just v1.4 and very slow), so don't bother with it.
Recently, I began developing a driver for an embedded device running Linux.
Until now I have only read about Linux internals.
Having no prior experience in driver development, I am finding it a tad difficult to take my first step.
I have downloaded the kernel source code (v2.6.32).
I have read (skimmed) Linux Device Drivers (3e).
I have read a few related posts here on Stack Overflow.
I understand that Linux has a "monolithic" approach.
I have built the kernel (including an existing driver in menuconfig, etc.).
I know the basics of Kconfig and Makefile files, so that should not be a problem.
Can someone describe the structure (i.e. the inter-links) of the various directories in the kernel source code? In other words, given a source file, which other files would it refer to for related code? (The "#include"s provide only a partial idea.)
Could someone please help me get a better idea? Any help will be greatly appreciated. Thank you.
Given a C file, you have to look at the functions it calls and data structures it uses, rather than worrying about particular files.
There are two basic routes to developing your own device driver:
Take a driver that is similar to yours; strip out the code that isn't applicable to your device, and fill in new code for your device.
Start with the very basic pieces of a device driver, and add pieces a little at a time until your device begins to function.
The files that compose your driver will make more sense as you complete this process. Do consider what belongs in each file, but to some extent, dividing a driver among files is more an art than a science. Smaller drivers often fit into just one or two files.
A bit of design may also be good. Consider what you device does, and what your driver will need to do. Based on that, you should be able to map out what functions a device driver will need to have.
I also believe Linux Device Drivers, Third Edition may help you get on your way to driver development.
Linux source files include other files based on what they do, what layer they are in, and what layer of the call stack they access. The big picture truly informs how each file is related to the next.
I had to fix a kernel driver once. My biggest tip (if you use vim) is to set it up with ctags so you can jump around the kernel source with ctrl-] every time you see a function you don't understand.