I have retrieved the following Docker image from NVIDIA NGC, using Singularity: https://ngc.nvidia.com/catalog/containers/nvidia:cuda.
I have pulled the tag '11.1-cudnn8-devel-ubuntu18.04' as follows:
singularity pull docker://nvcr.io/nvidia/cuda:11.1-cudnn8-devel-ubuntu18.04
This image is then available locally as 'cuda_11.1-cudnn8-devel-ubuntu18.04.sif'.
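As a sanity check, the image can be run directly; for example, invoking nvcc from the devel image (a quick test, assuming the pull succeeded):
singularity exec cuda_11.1-cudnn8-devel-ubuntu18.04.sif nvcc --version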
I then attempt to augment this image with some build tools and Python packages, using the following definition file:
Bootstrap: localimage
From: cuda_11.1-cudnn8-devel-ubuntu18.04.sif
%post
apt-get update && apt-get install -y libopenmpi-dev make gcc pkg-config g++ python3 python3-sklearn python3-requests python3-bs4 python3-urllib3
As follows:
singularity build --force --fakeroot cuda_11.1-cudnn8-python-devel-ubuntu18.04.sif cuda-python.def
During installation of tzdata in the image, it asks for user input to determine my time zone:
Setting up tzdata (2020a-0ubuntu0.18.04) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Configuring tzdata
------------------
Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.
1. Africa 4. Australia 7. Atlantic 10. Pacific 13. Etc
2. America 5. Arctic 8. Europe 11. SystemV
3. Antarctica 6. Asia 9. Indian 12. US
Geographic area: 8
This halts the build process, and even though I type 8 and press ENTER, nothing happens.
It seems as if Singularity does not pass ordinary command-line input through to the build. How can I fix this?
Apparently, this is a well-known problem among more experienced Docker/Singularity users. It turns out I can work around it by setting, at the beginning of my %post script:
export DEBIAN_FRONTEND=noninteractive
and ending with:
unset DEBIAN_FRONTEND
I am not sure how tzdata ends up being configured then, but that is not really so important to me in this case.
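For reference, the complete definition file with the workaround applied looks like this (with the noninteractive frontend, debconf falls back to the package defaults, which for tzdata is typically UTC):
Bootstrap: localimage
From: cuda_11.1-cudnn8-devel-ubuntu18.04.sif

%post
    export DEBIAN_FRONTEND=noninteractive
    apt-get update && apt-get install -y libopenmpi-dev make gcc pkg-config g++ python3 python3-sklearn python3-requests python3-bs4 python3-urllib3
    unset DEBIAN_FRONTEND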
Related
Recently, I have started working on snap packaging.
I learned snapping on Ubuntu 18.04 with the simple hello-gnu example.
After that I moved to Ubuntu 20.04 and faced many issues, so I decided to simply build the hello-gnu snap again on Ubuntu 20.04. But it fails with the error below:
$ snapcraft
Launching a VM.
snap "snapd" has no updates available
core18 20201210 from Canonical✓ installed
"core18" switched to the "latest/stable" channel
snapd is not logged in, snap install commands will use sudo
snap "core20" has no updates available
Skipping pull hello-world (already ran)
Skipping build hello-world (already ran)
Skipping stage hello-world (already ran)
Skipping prime hello-world (already ran)
Failed to generate snap metadata: The specified command 'bin/hello' defined in the app 'hello' does not exist.
Ensure that 'bin/hello' is installed with the correct path.
Run the same command again with --debug to shell into the environment if you wish to introspect this failure.
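That is, the failure can be inspected from a shell inside the build environment with:
snapcraft --debug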
My snapcraft.yaml looks like this:
name: hello-gnu # you probably want to 'snapcraft register <name>'
base: core20 # the base snap is the execution environment for this snap
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Hello simple snap
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap
  store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

apps:
  hello:
    command: bin/hello

parts:
  hello-world:
    # See 'snapcraft plugins'
    plugin: autotools
    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
For core20, the autotools plugin installs to usr/local by default, so if you set the command path to "usr/local/bin/hello" it will work:
command: usr/local/bin/hello
Alternatively, add
autotools-configure-parameters:
  - --prefix=/
to your snapcraft.yaml file below the
plugin: autotools
line, so that the binary lands in bin/hello as the app definition expects.
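With the second fix applied, the parts section would look like this (a sketch based on the yaml above):
parts:
  hello-world:
    # See 'snapcraft plugins'
    plugin: autotools
    autotools-configure-parameters:
      - --prefix=/
    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz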
I had the same issue, and it seems to get worse when you get to the bash step. It works as instructed if you change your base to core18.
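That is, a one-line change in snapcraft.yaml:
base: core18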
I've tried to set up windows-curses as a first step, and it completes fine.
python -m pip install windows-curses
Also the following
python -m pip install bpython
does not show any problems.
Unfortunately running bpython results in a
ModuleNotFoundError: No module named 'fcntl'
Does this mean that bpython does not run on Windows 10, or is there another option for the installation here?
Found the solution on their GitHub.
According to #509 Blessings doesn't work on Windows even with the custom curses library. We ought to update the Windows install instructions in the readme and on the site to say that bpython-curses needs to be run instead of bpython. We should also consider making bpython-curses the default on Windows
So, I'm running bpython-curses and it looks good to me (a few commands are not available, though).
Unfortunately, there was a bug: typing an underscore or a capital P deleted the current line and jumped back to the start of the history. It has since been fixed by Sebastian Ramacher.
Notice also that their homepage suggests installing an unofficial Windows binary for pdcurses, but either way it confirms that you have to launch it by typing bpython-curses at your prompt.
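In short, a minimal working sequence on Windows (assuming the bpython-curses entry point mentioned in the issue):
python -m pip install windows-curses bpython
bpython-curses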
Precisely the same as here, which hasn't been resolved.
Followed the sequential directions here; all channels added.
Tried:
Adding export PKG_CONFIG_PATH=$PKG_CONFIG_PATH://anaconda/pkgs to .bash_profile, pointing at the directory where cairomm is installed.
./configure -with--CAIROMM_CFLAGS -with--CAIROMM_LIBS
Can someone kindly check whether I have at least implemented these alternative solutions correctly?
And of course I've tried the simplest conda install graph-tool after adding channels from ostrokach-forge and the like.
Instead of success, I get the following:
PackagesNotFoundError: The following packages are not available from current channels:
Whoa, what an unpopular post!
For my own and other novices' future reference (after all, conda-installed Python is easier):
Slightly different, but the same in that a certain library cannot be found; according to this query:
conda needs to be able to find all the dependencies at once.
The -c flag only adds that channel for that one command:
conda install -c vgauthier -c rwest graph-tool
But an easier way is to add those channels to your configuration:
conda config --add channels vgauthier --add channels rwest
And then perform:
conda install graph-tool
But when I used the conda install -c http://conda.anaconda.org/vgauthier graph-tool command, it worked instantly.
Before that, nothing worked (when I only used the user name vgauthier, or ostrokach-forge, or the like).
If I type vi .condarc, I see:
channels:
- ostrokach-forge
- conda-forge
- defaults
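If you want future installs to resolve that channel without the -c flag, you can presumably add the full URL to the configuration as well (an untested sketch):
conda config --add channels https://conda.anaconda.org/vgauthier
conda install graph-tool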
And because I remain quite ignorant about using brew-installed packages with conda-installed Python, I started out by following the directions to install all the necessary dependencies using brew (including pixman).
I wonder how the command found everything, though; plus, Python was upgraded from 3.5 to 3.6.
This is how I am left solving OS issues: I'm not 100% clear how I made the computer connect the dots.
And still, nonetheless, I am left with figuring out the install with ./configure. I want to understand the error message that was returned and how to address it.
Currently I'm experimenting with the Cell/BE CPU under Linux. What I'm trying to do is run simulations in the near future, e.g. of the weather or black holes.
The problem is that Linux only discovers the main CPU of the Cell (the PPE); all the other SPEs (7 should be available to Linux) are "sleeping". They just don't work out of the box.
What works is the PPE, which the OS recognizes as a single-core, two-threaded CPU. The SPEs are also shown at every boot (as small penguins with a red "PPE" in them), but afterwards they appear nowhere.
Is it possible to "free" these specialised cores for use by the Linux OS? If so, how?
As no one seems to be interested or able to answer this question, I'll provide the details myself.
In fact, there exists a workaround:
First, create a mount point for the SPUFS:
sudo mkdir /spu
Then add this line to /etc/fstab, so the filesystem is mounted automatically after a reboot:
spufs /spu spufs defaults 0 0
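Alternatively, you can mount it immediately without rebooting (assuming the standard mount syntax for the spufs type):
sudo mount -t spufs spufs /spu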
Now reboot and test to make sure the SPUFS is mounted (in a terminal):
spu-top
You should see the 7 SPEs running with 0% load average.
Now Google for the following package to get the runtime library and headers you need for SPE development:
libspe2-2.3.0.135.tar.gz
You should find it on the first hit. Just unpack, build, and install it:
./configure
make
sudo make install
You can ignore the build warnings (or fix them if you have obsessive compulsive disorder).
You can use pkg-config to find the location of the runtime and headers, though they are in /usr/local if I recall correctly.
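For example (assuming libspe2 installs a libspe2.pc pkg-config file):
pkg-config --cflags --libs libspe2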
You of course need the gcc-spe compiler and the rest of the PPU and SPU toolchains, but those you can install with apt-get as they are in the repos.
Source: comment by Exillis via redribbongnulinux.000webhostapp.com
After repeated attempts and much Googling, I'm stuck and am looking for help from my fellow stackers.
Following the TCAdmin wiki, I have to run the following commands:
wget http://www.tcadmin.com/installer/mono-2.11.4-i386.rpm
yum -y install mono-2.11.4-i386.rpm --nogpgcheck
/opt/mono-2.11.4/bin/mozroots --import --sync --quiet
/opt/mono-2.11.4/bin/mono --aot -O=all /opt/mono-2.11.4/lib/mono/2.0/mscorlib.dll
for i in /opt/mono-2.11.4/lib/mono/gac/*/*/*.dll; do /opt/mono-2.11.4/bin/mono --aot -O=all "$i"; done
When I get to the yum step, it fails and outputs this error:
file / from install of mono-2.11.4-bi.x86_64 conflicts with file from package filesystem-3.2-18.el7.x86_64
Most sites and places suggest using an override or force command, but this sounds like a bad idea and will probably cause issues down the road for me and the system.
I have raised a ticket about this issue with the company that supplies the wiki, but I have yet to receive a reply.
Another suggestion was to extract the rpm and move the files one by one, but this is quite time-consuming.
The ticket was responded to with the following:
It is safe to force install because all files are placed in /opt/mono-2.11.4, but there is a bug with mono on CentOS 7 that prevents TCAdmin from working correctly.
For anyone else who happens upon this thread: while I didn't encounter this error installing mono (that was a whole other process), I did encounter it while trying to install TCAdmin itself, and I'm pleased to report that I was able to complete the installation of TCAdmin on CentOS 7 after using rpmrebuild to modify the spec.
Simply install rpmrebuild, run rpmrebuild -pe {packagename}.rpm, scroll down to the %files section and remove the lines for any offending directories (in my case, the '/' and '/home' directories), save and quit, press y, and note the result location. In my case, it was /root/rpmbuild/RPMS/noarch/{packagename}.rpm.
Go to that directory and run yum -y install ./{packagename}.rpm, and it will install without a hitch.
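In command form (the package name is a placeholder; rpmrebuild itself may need the EPEL repository):
yum -y install rpmrebuild
rpmrebuild -pe {packagename}.rpm
# in the editor: delete the offending lines from %files, save, and confirm with y
cd /root/rpmbuild/RPMS/noarch
yum -y install ./{packagename}.rpm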
The same should also apply to any other packages that hit the 'conflicts with file from package filesystem' error. Just adjust the package names in the above examples accordingly.
*Thanks to the venerable Ahmad Samir for pointing me in the right direction with his post in this thread.
I had the same issue trying to install Fluentd agent on CentOS 7:
(venv)[user@machine01 tmp]$ sudo rpm -ivh td-agent-2.1.1-0.x86_64.rpm
warning: td-agent-2.1.1-0.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID a12e206f: NOKEY
Preparing... ################################# [100%]
file /opt from install of td-agent-2.1.1-0.x86_64 conflicts with file from package filesystem-3.2-18.el7.x86_64
I wouldn't say that downgrading your whole OS is the solution. A more elegant workaround is to rebuild the .rpm file so that it avoids the filesystem entries that cause the conflicts. You can do this by modifying the spec file with the rpmrebuild command.
However, if you trust the software you are about to install, or you want to see if it works no matter what, then an easier (and faster) workaround is to force the rpm installation. That's what I did ...
(venv)[user@machine01 tmp]$ sudo rpm -ivh --force td-agent-2.1.1-0.x86_64.rpm
warning: td-agent-2.1.1-0.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID a12e206f: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:td-agent-2.1.1-0 ################################# [100%]
adding 'td-agent' group...
adding 'td-agent' user...
Installing default conffile...
prelink detected. Installing /etc/prelink.conf.d/td-agent-ruby.conf ...
Configure td-agent to start, when booting up the OS...
...and it worked for me
(venv)[user@machine01 tmp]$ sudo systemctl start td-agent.service
(venv)[user@machine01 tmp]$ sudo systemctl status td-agent.service
td-agent.service - LSB: td-agent's init script
Loaded: loaded (/etc/rc.d/init.d/td-agent)
Active: active (running) since vie 2014-12-12 09:34:09 CET; 4s ago
Process: 17178 ExecStart=/etc/rc.d/init.d/td-agent start (code=exited, status=0/SUCCESS)
...
Hope it helps
This is an inherent issue with CentOS 7.
Going back to CentOS 6 fixed it.