"pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found" in bitbucket-pipelines - bitbucket-pipelines

I am trying to execute the following bitbucket-pipelines config:
clone:
  depth: full

options:
  max-time: 4  # maximum minutes to run the tests

pipelines:
  pull-requests:  # run when pull request is created (or updated)
    "**":  # this runs as default for any branch not elsewhere defined
      - step:
          name: Tests  # name to show on the pipelines web page
          image: python:3.7  # docker to use (from Docker Hub)
          script:  # shell commands to run
            - pip install -r requirements.txt
            - python -m pytest --junitxml=./test-reports/junit.xml .
However, when the unit tests are executed, I get the following error in the Bitbucket pipeline:
pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found
    def _LoadNvmlLibrary():
        '''
        Load the library if it isn't loaded already
        '''
        global nvmlLib

        if (nvmlLib == None):
            # lock to ensure only one caller loads the library
            libLoadLock.acquire()

            try:
                # ensure the library still isn't loaded
                if (nvmlLib == None):
                    try:
                        if (sys.platform[:3] == "win"):
                            # cdecl calling convention
                            # load nvml.dll from %ProgramFiles%/NVIDIA Corporation/NVSMI/nvml.dll
                            nvmlLib = CDLL(os.path.join(os.getenv("ProgramFiles", "C:/Program Files"), "NVIDIA Corporation/NVSMI/nvml.dll"))
                        else:
                            # assume linux
>                           nvmlLib = CDLL("libnvidia-ml.so.1")

/usr/local/lib/python3.7/site-packages/pynvml.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <CDLL 'libnvidia-ml.so.1', handle 0 at 0x7fcc75c8a910>
name = 'libnvidia-ml.so.1', mode = 0, handle = None, use_errno = False
use_last_error = False

    def __init__(self, name, mode=DEFAULT_MODE, handle=None,
                 use_errno=False,
                 use_last_error=False):
        self._name = name
        flags = self._func_flags_
        if use_errno:
            flags |= _FUNCFLAG_USE_ERRNO
        if use_last_error:
            flags |= _FUNCFLAG_USE_LASTERROR
        if _sys.platform.startswith("aix"):
            """When the name contains ".a(" and ends with ")",
               e.g., "libFOO.a(libFOO.so)" - this is taken to be an
               archive(member) syntax for dlopen(), and the mode is adjusted.
               Otherwise, name is presented to dlopen() as a file argument.
            """
            if name and name.endswith(")") and ".a(" in name:
                mode |= ( _os.RTLD_MEMBER | _os.RTLD_NOW )

        class _FuncPt
After reading related issues, it seems that this is related to the NVIDIA drivers. However, I'm not sure if that is the case, and if it is, how can I fix these errors?
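One common workaround on CPU-only CI runners is to guard the pynvml initialisation and skip GPU-dependent tests when NVML is unavailable, since the standard Bitbucket Pipelines containers do not expose an NVIDIA driver, so libnvidia-ml.so.1 can never be loaded there. A minimal sketch, assuming the tests only need the GPU optionally and that a conftest.py plus a gpu marker (both illustrative, not from the original post) are acceptable:

# conftest.py -- sketch: skip tests marked "gpu" when NVML cannot be loaded
import pytest

try:
    import pynvml
    pynvml.nvmlInit()
    GPU_AVAILABLE = True
except Exception:  # covers NVMLError_LibraryNotFound on driver-less runners
    GPU_AVAILABLE = False

def pytest_collection_modifyitems(config, items):
    if GPU_AVAILABLE:
        return
    skip_gpu = pytest.mark.skip(reason="no NVIDIA driver / NVML library on this runner")
    for item in items:
        if "gpu" in item.keywords:
            item.add_marker(skip_gpu)

Tests that genuinely need a GPU would then carry @pytest.mark.gpu and be skipped in the pipeline, while the rest of the suite still runs under python:3.7. If the code under test calls pynvml unconditionally, the realistic options are to mock it in the tests or to run that step on a self-hosted runner that actually has the NVIDIA driver installed.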

Related

How to resize display resolution on windows with nim

I'd like to use Nim to resize the default display resolution on a machine (Windows 10 only). I basically want to do it via a command-line call like setDisplay 1280 1024.
I've seen and used the Python example Resize display resolution using python with cross platform support, which I can follow, but I just can't translate it. I just don't get how to fill in EnumDisplaySettings.
import winim/lean
import strformat

var
  cxScreen = GetSystemMetrics(SM_CXSCREEN)
  cyScreen = GetSystemMetrics(SM_CYSCREEN)
  msg = fmt"The screen is {cxScreen} pixels wide by {cyScreen} pixels high."

EnumDisplaySettings(Null, 0, 0) # total type mismatch
MessageBox(0, msg, "Winim Example Screen Size", 0)
Tried checking stuff like https://cpp.hotexamples.com/fr/examples/-/-/EnumDisplaySettings/cpp-enumdisplaysettings-function-examples.html but wasn't much help, same for https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-changedisplaysettingsa
I wrote about 2% of this answer myself, and the rest came from pointystick on discord - thanks to them!
The solution is a bit lazy, but it's so fast that for most people that won't matter.
With no command-line arguments it will just set the display to the default recommendation; with two command-line arguments it will switch your display to that resolution if it finds a match.
import winim/lean
import os
import strutils

var modeToFind = (width: 1920, height: 1080, bitsPerPixel: 32,
                  refreshRate: 60)
var reset = 0

type ModeNotFoundError = object of CatchableError

proc getDisplayMode(): DEVMODEW =
  ## Finds the wanted screen resolution or raises a ModeNotFoundError.
  var
    nextMode: DWORD = 0
    mode: DEVMODEW
  while EnumDisplaySettings(nil, nextMode, mode) != 0:
    echo $mode.dmPelsWidth & " x " & $mode.dmPelsHeight &
      " x " & $mode.dmBitsPerPel &
      " - " & $mode.dmDisplayFrequency
    inc nextMode
    if (mode.dmPelsWidth == modeToFind.width) and
       (mode.dmPelsHeight == modeToFind.height):
      echo "Found it!"
      return mode
  if (reset == 1):
    return mode
  raise newException(ModeNotFoundError, "Cannot find wanted screen mode")

proc changeResolution(): bool =
  ## Actually changes the resolution. The return value indicates if it worked.
  result = false
  try:
    let wantedMode = getDisplayMode()
    result = ChangeDisplaySettings(wantedMode.unsafeAddr, 0.DWORD) == DISP_CHANGE_SUCCESSFUL
  except ModeNotFoundError: discard

when isMainModule:
  var
    cxScreen: int32 = 0  # = GetSystemMetrics(SM_CXSCREEN)
    cyScreen: int32 = 0  # = GetSystemMetrics(SM_CYSCREEN)
  try:
    cxScreen = (int32) parseInt(paramStr(1))
    cyScreen = (int32) parseInt(paramStr(2))
    modeToFind.width = cxScreen
    modeToFind.height = cyScreen
  except:
    reset = 1
  if not changeResolution():
    echo "Change Resolution Failed"

ImportError: libopencv_hdf.so.3.1: cannot open shared object file: No such file or directory

I am trying to run my test cases in a Bitbucket pipeline, but it is showing an error message.
[Screenshot of bitbucket-pipelines.yml omitted]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> import cv2
E ImportError: libopencv_hdf.so.3.1: cannot open shared object file: No such file or directory
**ImportError**
You either didn't install OpenCV 3.1 or didn't install it correctly; that's why you can't import it.
Thanks, I found the answer. I was installing OpenCV a second, redundant time, so the installs were overlapping and deleting important modules from each other.
This is my script from bitbucket-pipelines.yml:
image: python:3.6.2

pipelines:
  default:
    - step:
        caches:
          - condacache
        script:
          - wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
          - chmod +x Miniconda3-latest-Linux-x86_64.sh
          - ./Miniconda3-latest-Linux-x86_64.sh -u -b -p /opt/python
          - cd marvin_oms
          - /opt/python/bin/conda update -y conda
          - /opt/python/bin/pip install --upgrade pip
          - /opt/python/bin/conda install -y numpy pandas SQLAlchemy requests lxml virtualenv psycopg2
          - apt-get update && apt-get install -y libzbar0 libzbar-dev libgtk2.0-0
          - /opt/python/bin/pip install pyzbar
          - /opt/python/bin/conda install seaborn opencv=3.1.0 scipy libgcc boost=1.61.0 libpng=1.6.27 cython
          - /opt/python/bin/pip install libraries/imgforensics-0.1-cp36-cp36m-linux_x86_64.whl
          - /opt/python/bin/pip install -r requirements.txt
          - /opt/python/bin/pytest

definitions:
  caches:
    condacache: /opt/python/bin
That's because you might have another cv2.so file pasted there, which is shadowing the correct one. Open a terminal and type
cd /lib/python3/dist-packages
and then
ls
You will see cv2.so. Copy it somewhere safe as a backup, so in case I am wrong you won't lose it:
cp cv2.so /home/ubuntu/cv2.so
and then type this to delete it:
rm cv2.so
Now type
python3
import cv2
and you're done.
A proof of concept: [screenshots omitted - before deleting cv2.so, after deleting it, and after importing again].
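Whichever of the two situations applies, a quick way to see which OpenCV build Python is actually picking up is to print its version and origin (a small diagnostic sketch; run it with whatever interpreter the pipeline uses, e.g. /opt/python/bin/python in the script above):

# Diagnostic sketch: show which cv2 module gets imported and from where.
import cv2

print("cv2 version:", cv2.__version__)  # e.g. 3.1.0
print("loaded from:", cv2.__file__)     # a stray cv2.so shows up here

If the printed path is not the install you expect (for example a leftover cv2.so in /lib/python3/dist-packages instead of the conda environment), that duplicate is the one to remove.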

F# unit test projects in linux with mono (FAKE, NUnit 3)

I am trying to set up a very simple test F# project in linux with mono, using Forge to set up the project and install nuget packages. Forge creates a build.fsx file which uses FAKE. I've tried to adjust this build file (in order to add tests) with inspiration from this tutorial http://fsharp.github.io/FAKE/gettingstarted.html. The tutorial, however, is using C# for testing and assumes Windows with .Net as environment. I want to use F# for testing and linux with mono as environment.
I think I almost got it working, but I am getting some cryptic error messages from NUnit. When running the build.fsx file I get the following errors at the end:
...
Invalid argument: -nologo
The value '/home/michel/Documents/FSHARP/UnitTests/test/NUnit.Test.MyTests.dll' is not valid for option '--labels'.
Invalid argument: -xml:./test/TestResults.xml
Running build failed.
Error:
NUnit test failed (255).
---------------------------------------------------------------------
Build Time Report
---------------------------------------------------------------------
Target Duration
------ --------
Clean 00:00:00.0036366
Build 00:00:00.0402828
BuildTest 00:00:00.4911710
Total: 00:00:00.7494956
Status: Failure
---------------------------------------------------------------------
1) Fake.UnitTestCommon+FailedTestsException: NUnit test failed (255).
at Fake.NUnitSequential.NUnit (Microsoft.FSharp.Core.FSharpFunc`2 setParams, IEnumerable`1 assemblies) <0x41d27e50 + 0x0039f> in <filename unknown>:0
at FSI_0001+clo#32-4.Invoke (Microsoft.FSharp.Core.Unit _arg4) <0x41d27dc0 + 0x0006f> in <filename unknown>:0
at Fake.TargetHelper+targetFromTemplate#195[a].Invoke (Microsoft.FSharp.Core.Unit unitVar0) <0x41cd59b0 + 0x00023> in <filename unknown>:0
at Fake.TargetHelper.runSingleTarget (Fake.TargetTemplate`1 target) <0x41ccb490 + 0x000ca> in <filename unknown>:0
My build.fsx file looks like this
// include Fake libs
#r "./packages/FAKE/tools/FakeLib.dll"
open Fake

// Directories
let buildDir = "./build/"
let testDir = "./test/"

// version info
let version = "0.1" // or retrieve from CI server

// Targets
Target "Clean" (fun _ ->
    CleanDirs [buildDir; testDir]
)

Target "Build" (fun _ ->
    //MSBuildDebug buildDir "Build" appReferences
    !! "/UnitTesting/*.fsproj"
    |> MSBuildRelease buildDir "Build"
    |> Log "AppBuild-Output: "
)

Target "BuildTest" (fun _ ->
    !! "src/NUnit.Test.MyTests/*.fsproj"
    |> MSBuildDebug testDir "Build"
    |> Log "TestBuild-Output: "
)

Target "Test" (fun _ ->
    !! (testDir + "/NUnit.Test.MyTests.dll")
    |> NUnit (fun p ->
        { p with
            ToolPath = "packages/NUnit.ConsoleRunner/tools"
            //DisableShadowCopy = true;
            OutputFile = testDir + "TestResults.xml" })
)

Target "Default" (fun _ -> trace "HEEEELLOOOOOO world from FAKE!!!")

"Clean" ==> "Build" ==> "BuildTest" ==> "Test" ==> "Default"

RunTargetOrDefault "Default"
FAKE seems to be looking for a file nunit-console.exe under the packages/NUnit.ConsoleRunner/tools directory, but there is no such file. However, there is a nunit3-console.exe file, so I just made a copy of this file with the name nunit-console.exe.
My simple test file NUnit.Test.MyTests.fs looks like the following:
namespace NUnit.Test.MyTests

module testmodule =

    open NUnit.Framework

    let SayHello name = "Hello"

    [<TestFixture>]
    type myFixture() =

        [<Test>]
        member self.myTest() =
            Assert.AreEqual("Hello World!", SayHello "World")
and the file test/NUnit.Test.MyTests.dll seems to be generated just fine.
What does the cryptic error message mean, and how can I fix it so I can run my tests?
As mentioned by rmunn in the comments, I need to use the NUnit3 function because I am using NUnit version 3.4.1. The function resides in the Fake.Testing module http://fsharp.github.io/FAKE/apidocs/fake-testing-nunit3.html. I modified my build.fsx file so it now looks like the following:
// include Fake libs
#r "./packages/FAKE/tools/FakeLib.dll"
open Fake
open Fake.Testing // NUnit3 is in here

// Directories
let buildDir = "./build/"
let testDir = "./test/"

// version info
let version = "0.1" // or retrieve from CI server

// Targets
Target "Clean" (fun _ ->
    CleanDirs [buildDir; testDir]
)

Target "Build" (fun _ ->
    !! "/UnitTesting/*.fsproj"
    |> MSBuildRelease buildDir "Build"
    |> Log "AppBuild-Output: "
)

Target "BuildTest" (fun _ ->
    !! "src/NUnit.Test.MyTests/*.fsproj"
    |> MSBuildDebug testDir "Build"
    |> Log "TestBuild-Output: "
)

Target "Test" (fun _ ->
    !! (testDir + "/NUnit.Test.*.dll")
    |> NUnit3 (fun p ->
        { p with
            ToolPath = "packages/NUnit.ConsoleRunner/tools/nunit3-console.exe" })
)

Target "Default" (fun _ -> trace "HEEEELLOOOOOO world from FAKE!!!")

"Clean" ==> "Build" ==> "BuildTest" ==> "Test" ==> "Default"

RunTargetOrDefault "Default"
Note that you must specify ToolPath all the way to the nunit3-console.exe file, and not just the directory where it resides.
Now everything seems to work, and I get a fine and simple 'test-summary' in the console output when I run build.fsx. :)

(WinApi) ChangeDisplaySettingsEx does not work

I'm trying to write a python script to switch the primary monitor.
I have 3 monitors (one is plugged into my i5's graphics chip, and 2 are plugged into an ATI HD7870).
I wrote the following script:
import win32api as w
import win32con as c

i = 0
workingDevices = []

def setPrimary(id):
    global workingDevices
    return w.ChangeDisplaySettingsEx(
        workingDevices[id].DeviceName,
        w.EnumDisplaySettings(
            workingDevices[id].DeviceName,
            c.ENUM_CURRENT_SETTINGS
        ),
        c.CDS_SET_PRIMARY | c.CDS_UPDATEREGISTRY | c.CDS_RESET) \
        == c.DISP_CHANGE_SUCCESSFUL

while True:
    try:
        Device = w.EnumDisplayDevices(None, i, 1)
        if Device.StateFlags & c.DISPLAY_DEVICE_ATTACHED_TO_DESKTOP:  # attached to desktop
            workingDevices.append(Device)
        i += 1
    except:
        break

print("Num Devices: ", len(workingDevices))
for dev in workingDevices:
    print("Name: ", dev.DeviceName)
Invoking it leads to:
In [192]: %run test.py
Num Devices: 3
Name: \\.\DISPLAY1
Name: \\.\DISPLAY2
Name: \\.\DISPLAY7
In [193]: setPrimary(0)
Out[193]: True
In [194]: setPrimary(1)
Out[194]: True
In [195]: setPrimary(2)
Out[195]: True
So far it looks great, but the problem is: nothing changes. My monitors flicker briefly because of the CDS_RESET, but the primary screen does not change, although ChangeDisplaySettingsEx returns DISP_CHANGE_SUCCESSFUL.
Does anyone have an Idea why?
(I use Python 3.5.1 and PyWin32 build 220)
PS: I use 1 as the third arg for EnumDisplayDevices because MSDN states it should be set to one, although the PyWin32 help says it should be set to 0.
But the behaviour of the script does not change regardless of whether this value is one or zero.
OK, I found the solution.
Apparently the primary monitor must always be at position (0, 0).
So when I tried to set another monitor as primary, its position was set to (0, 0), which caused it to intersect with the old primary one.
It seems the way to go is to update the positions of all monitors and write those changes to the registry, and then, once this is done, apply the changes by calling ChangeDisplaySettingsEx() with default parameters.
This is my new (now working) code:
import win32api as w
import win32con as c

def load_device_list():
    """Loads all monitors which are plugged into the PC.
    The list is needed to use setPrimary.
    """
    workingDevices = []
    i = 0
    while True:
        try:
            Device = w.EnumDisplayDevices(None, i, 0)
            if Device.StateFlags & c.DISPLAY_DEVICE_ATTACHED_TO_DESKTOP:  # attached to desktop
                workingDevices.append(Device)
            i += 1
        except:
            return workingDevices

def setPrimary(id, workingDevices, MonitorPositions):
    """
    param id: index in the workingDevices list.
        Designates which display should be the new primary one.
    param workingDevices: list of monitors returned by load_device_list()
    param MonitorPositions: dictionary of form {id: (x_position, y_position)},
        specifies the monitor positions
    """
    FlagForPrimary = c.CDS_SET_PRIMARY | c.CDS_UPDATEREGISTRY | c.CDS_NORESET
    FlagForSec = c.CDS_UPDATEREGISTRY | c.CDS_NORESET
    offset_X = -MonitorPositions[id][0]
    offset_Y = -MonitorPositions[id][1]
    numDevs = len(workingDevices)

    # get devmodes, correct positions, and update registry
    for i in range(numDevs):
        devmode = w.EnumDisplaySettings(workingDevices[i].DeviceName, c.ENUM_CURRENT_SETTINGS)
        devmode.Position_x = MonitorPositions[i][0] + offset_X
        devmode.Position_y = MonitorPositions[i][1] + offset_Y
        if (w.ChangeDisplaySettingsEx(workingDevices[i].DeviceName, devmode,
                                      FlagForSec if i != id else FlagForPrimary)
                != c.DISP_CHANGE_SUCCESSFUL):
            return False

    # apply registry updates once all settings are complete
    return w.ChangeDisplaySettingsEx() == c.DISP_CHANGE_SUCCESSFUL

if __name__ == "__main__":
    devices = load_device_list()
    for dev in devices:
        print("Name: ", dev.DeviceName)

    MonitorPositions = {
        0: (0, -1080),
        1: (0, 0),
        2: (1920, 0)
    }
    setPrimary(0, devices, MonitorPositions)

SCONS: How do I carry on an action on a target in place

Let's say I want to strip all the debug symbols from the shared libraries that I build, while keeping the original file names.
I tried to add a command in the method:
def mySharedLibrary(self, *args, **kwargs):
    # do some common work for every shared library, like adding a soname
    # or appending some lib files to the LIBS parameter
    target = SharedLibrary(*args, **kwargs)
    target = env.Command(target, target, "objcopy --strip-debug ${SOURCE}")
    return target
I get this error: two different methods were given for the same target. I guess it's because the targets returned by env.Command and SharedLibrary have exactly the same name.
Any ideas how to do this?
Thanks in advance!
I had the same problem and got the same error. What I had to do was create an intermediate target/library. The intermediate and final targets each had their own library name, so SCons doesn't get confused.
You could probably do something like the following:
env.SharedLibrary(target = 'namePreStrip', source = 'yourSource')
env.Command(target = 'name', source = 'namePreStrip', action = 'objcopy ...')
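Applied to the wrapper method from the question, that could look roughly like this (a sketch under my own assumptions: the _prestrip suffix, the explicit name/source parameters, and env being in scope are illustrative, not from the original answer):

# Sketch: build the library under an intermediate name, then strip into the final name.
def mySharedLibrary(self, name, source, **kwargs):
    # common work for every shared library (soname, extra LIBS, ...) goes here
    prestrip = env.SharedLibrary(target = name + '_prestrip', source = source, **kwargs)
    # the stripped copy gets the real name, so it no longer collides with
    # the target produced by the SharedLibrary builder itself
    return env.Command(name, prestrip, "objcopy --strip-debug $SOURCE $TARGET")

If the final file needs the usual lib prefix and .so suffix, build the final name from env['LIBPREFIX'] and env['LIBSUFFIX'], as the longer example below does.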
I used objcopy to build a library out of several libraries. Here's the actual source code I implemented:
#
# Build an object file out of several other source files, objects, and libraries
# Optionally execute objcopy on the resulting library, depending if objcopyFlags
# has been populated
#
# env - SCons Environment used to build, Mandatory arg
# target - resulting library name, without LIBPREFIX and LIBSUFFIX, e.g. 'nsp2p',
# Mandatory arg
# sourceFiles - list of '.cc' files that will be compiled and included in the
# resulting lib, Optional arg
# objects - list of already compiled object files to be included in resulting lib,
# Optional arg
# libraries - list of libraries to be included in resulting lib, Optional arg
# objcopyFlags - list of flags to pass to objcopy command. objcopy will only
# be executed if this list is populated, Optional arg
#
# One of [sourceFiles, objects, or libraries] must be specified, else nothing
# will be performed
#
# Not using a custom builder because I don't like the way SCons prints the
# entire command each time it's called, even if it's not actually going to
# build anything, AND I need more method args than custom builders provide
#
def buildWholeArchive(self, env, target, sourceFiles, objects, libraries, objcopyFlags):
    if len(sourceFiles) == 0 and len(objects) == 0 and len(libraries) == 0:
        print "Incorrect use of buildWholeArchive, at least one of [sourceFiles | objects | libraries] must be specified, no build action will be performed"
        return None

    # Compile each source file
    objNodes = []
    if len(sourceFiles) > 0:
        objNodes = env.Object(source = sourceFiles)

    cmdList = []
    cmdList.append(env['CXX'])
    cmdList.append('-nostdlib -r -o $TARGET -Wl,--whole-archive')
    for obj in objNodes:
        cmdList.append(env.File(obj).abspath)
    for obj in objects:
        cmdList.append(env.File(obj).abspath)
    for lib in libraries:
        cmdList.append(lib)
    cmdList.append('-Wl,--no-whole-archive')
    cmd = ' '.join(cmdList)

    libTarget = '%s%s%s' % (env['LIBPREFIX'], target, env['LIBSUFFIX'])

    if len(objcopyFlags) > 0:
        # First create the library, then run objcopy on it
        objTarget = '%s%s_preObjcopy%s' % (env['LIBPREFIX'], target, env['LIBSUFFIX'])
        preObjcopyTarget = env.Command(target = objTarget, source = [], action = cmd)
        env.Depends(preObjcopyTarget, [objNodes, sourceFiles, objects, libraries])

        objCmdList = [env['OBJCOPY']]
        objCmdList.extend(objcopyFlags)
        objCmdList.append('$SOURCE $TARGET')
        objcopyCmd = ' '.join(objCmdList)
        archiveTarget = env.Command(target = libTarget, source = preObjcopyTarget, action = objcopyCmd)
    else:
        # Just create the library
        archiveTarget = env.Command(target = libTarget, source = [], action = cmd)
        env.Depends(archiveTarget, [objNodes, sourceFiles, objects, libraries])

    return archiveTarget
And here is how I called it:
sourceFiles = ['file1.cc', 'file2.cc']

libSource = []
if 'OcteonArchitecture' in env:
    libSource.append(lib1)
    libSource.append(lib2)
    libSource.append(lib3)

objcopy = []
if 'OcteonArchitecture' in env:
    objcopy.extend([
        '--redefine-sym calloc=ns_calloc',
        '--redefine-sym free=ns_free',
        '--redefine-sym malloc=ns_malloc',
        '--redefine-sym realloc=ns_realloc'])

archiveTarget = clonedEnv.buildWholeArchive(target = libName,
                                            sourceFiles = sourceFiles,
                                            objects = [],
                                            libraries = libSource,
                                            objcopyFlags = objcopy)

env.Alias('libMyLib', archiveTarget)
