Google Cloud Functions (GCF) - installing packages with apt-get - python-3.x

I need 2 packages for my Google Cloud Functions (GCF) functions to work properly. I'd typically install them with the following command on Ubuntu:
apt-get -y install firefox xvfb
However, I don't know of any way to package my functions so that GCF installs these packages before running my code.
I tried installing them from within my Python function, using subprocess.check_output().
Here is some code:
import subprocess

try:
    print(subprocess.check_output("apt-get -y install firefox", shell=True, stderr=subprocess.STDOUT))
except subprocess.CalledProcessError as e:
    print("apt-get output:\n", e.output)
try:
    print(subprocess.check_output("apt-get -y install xvfb", shell=True, stderr=subprocess.STDOUT))
except subprocess.CalledProcessError as e:
    print("apt-get output:\n", e.output)
Unfortunately, this does not work. I get the following errors:
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
libcanberra0 libdbusmenu-gtk3-4 libstartup-notification0 libtdb1
libxcb-util1 sound-theme-freedesktop xul-ext-ubufox
Suggested packages:
 fonts-lyx libcanberra-gtk0 libcanberra-pulse
The following NEW packages will be installed: firefox
libcanberra0 libdbusmenu-gtk3-4 libstartup-notification0 libtdb1
libxcb-util1 sound-theme-freedesktop xul-ext-ubufox
0 upgraded, 8 newly installed, 0 to remove and 5 not upgraded.
Need to get 44.5 MB of archives.
After this operation, 170 MB of additional disk space will be used.
W: Not using locking for read only lock file /var/lib/dpkg/lock
W: chown to _apt:root of directory /var/cache/apt/archives/partial failed - SetupAPTPartialDirectory (30: Read-only file system)
W: chmod 0700 of directory /var/cache/apt/archives/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
W: chown to _apt:root of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (30: Read-only file system)
W: chmod 0700 of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (1: Operation not permitted)
W: Not using locking for read only lock file /var/cache/apt/archives/lock
E: Couldn't determine free space in /var/cache/apt/archives/ - statvfs (38: Function not implemented)
How can I fix this error to download the packages from within my python code? Is there any other (easier/cleaner) way to achieve what I'm trying to do?
Thanks!

You can't arrange for packages to be installed on Cloud Functions instances, because your code doesn't run with root privileges. If you need binaries to be available to your code deployed to Cloud Functions, you will have to build them yourself for Debian and include them in your function's directory so they get deployed along with the rest of your code.
Even if you're able to do that, there's no guarantee it will work, because the Cloud Functions images may not include all the shared libraries required for the executables to work.
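As a rough illustration of the "bundle the binary yourself" approach, here is a minimal sketch. The tool name mytool, the bin/ subdirectory, and the copy-to-/tmp step are assumptions (the deployed source tree is read-only on Cloud Functions, and /tmp is the usual writable location):
import os
import shutil
import stat
import subprocess

# Hypothetical layout: a statically linked Debian binary shipped at ./bin/mytool
# next to main.py, deployed together with the function source.
BUNDLED = os.path.join(os.path.dirname(__file__), "bin", "mytool")
RUNTIME_COPY = "/tmp/mytool"  # /tmp is writable on Cloud Functions

def run_bundled_tool(request):
    # Copy once per instance and restore the execute bit, which the
    # deployment process may not preserve.
    if not os.path.exists(RUNTIME_COPY):
        shutil.copy(BUNDLED, RUNTIME_COPY)
        os.chmod(RUNTIME_COPY, os.stat(RUNTIME_COPY).st_mode | stat.S_IEXEC)
    out = subprocess.check_output([RUNTIME_COPY, "--version"], stderr=subprocess.STDOUT)
    return out.decode()
As the answer notes, even a bundled binary can still fail at runtime if the image lacks the shared libraries it needs.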

You can request that new packages be added to the runtime using the public issue tracker. Note that the issue tracker says "App Engine"; just mention that your request is for Cloud Functions.

You can't install packages, but you can use Puppeteer (headless Chrome) instead of Firefox with xvfb: https://cloud.google.com/blog/products/gcp/introducing-headless-chrome-support-in-cloud-functions-and-app-engine

Not a direct answer to your question, but it seems like you're trying to use a headless browser inside GCF. It's probably worth mentioning that headless Chrome is now supported in Cloud Functions, which might be easier to use if you're able to transition from Firefox to Chrome:
https://cloud.google.com/blog/products/gcp/introducing-headless-chrome-support-in-cloud-functions-and-app-engine
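The linked post centers on Puppeteer, which is a Node.js library. If you need to stay in Python, one possible equivalent (not covered by the post, so treat it as an assumption) is the third-party pyppeteer package, which drives headless Chrome with a similar API. A minimal sketch:
import asyncio
from pyppeteer import launch  # third-party Python port of Puppeteer (pip install pyppeteer)

async def fetch_title(url):
    # --no-sandbox is commonly required in containerized environments.
    # Note: pyppeteer downloads its own Chromium build on first use, which
    # would need a writable location when running on Cloud Functions.
    browser = await launch(headless=True, args=["--no-sandbox"])
    page = await browser.newPage()
    await page.goto(url)
    title = await page.title()
    await browser.close()
    return title

def scrape(request):
    # Hypothetical HTTP entry point; the function name is illustrative.
    return asyncio.get_event_loop().run_until_complete(fetch_title("https://example.com"))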

Related

AWS EC2 Linux: Cannot open shared object file

I am trying to get (specific versions of) Chrome and Chromedriver (and ultimately Selenium) running on an AWS EC2 Linux instance that was handed over to me for testing. Following the first four steps of this guide, I
navigated into my /tmp directory,
ran sudo wget https://chromedriver.storage.googleapis.com/102.0.5005.61/chromedriver_linux64.zip,
unzipped chromedriver,
and moved chromedriver into my /usr/bin.
However, when I run chromedriver --version for a sanity check, I get the following error:
chromedriver: error while loading shared libraries: libxcb.so.1: cannot open shared object file: No such file or directory
Several posts on this and other sites documenting similar quandaries (error while loading shared libraries: <...>) suggest that maybe this error is due to the absence of libxcb from my instance, but I am such a novice that I'm just not sure (I don't even know what libxcb is for). Hoping to get some feedback on how to proceed. If any other info about my instance would be helpful, feel free to let me know. Thanks!
If you type yum whatprovides libxcb.so.1 you will see the name and version of the package that provides the file you are missing.
Generally, I found out that libX11 provides most libraries needed for chrome and chromedriver. So try:
sudo yum install -y libX11
and then repeat the above steps.

Why does pip3 want to create a kdewallet after installing/updating packages on Ubuntu 20.04?

I recently updated my system from Ubuntu 18.04 to 20.04. Afterwards I tried to update all of my pip3 packages using the shell command mentioned here (I changed "pip" to "pip3").
After it updated a couple of packages, this prompt came up:
KDE Wallet Service
Since I had never seen it while updating/installing a package, I cancelled it, which resulted in an instant reopening of the same prompt, and the terminal showed the following warning:
WARNING: Keyring is skipped due to an exception: Failed to unlock the keyring!
I pressed CTRL+C after the same happened with the next package it tried to install. Ever since, the same thing happens whenever I try to install a package.
Example:
pip3 install numpy
WARNING: Keyring is skipped due to an exception: Failed to unlock the keyring!
WARNING: Keyring is skipped due to an exception: Failed to unlock the keyring!
Collecting numpy
WARNING: Keyring is skipped due to an exception: Failed to unlock the keyring!
Using cached numpy-1.19.2-cp38-cp38-manylinux2010_x86_64.whl (14.5 MB)
Installing collected packages: numpy
WARNING: The scripts f2py, f2py3 and f2py3.8 are installed in '/home/gesuchter/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed numpy-1.19.2
I already tried to fix my issue by reinstalling python3-pip.
Here is some hopefully helpful information:
pip3 --version
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
ls ~/.local/share/keyrings
default login.keyring user.keystore
find ~/.config/kwalletrc
find: ‘/home/gesuchter/.config/kwalletrc’: No such file or directory
This solved it for me:
python3 -m keyring --disable
Running it adds:
[backend]
default-keyring=keyring.backends.null.Keyring
to the file ~/.config/python_keyring/keyringrc.cfg.
Presumably PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring mentioned at https://stackoverflow.com/a/68504137/895245 offers an environment variable way to achieve the same result.
Without this, it would show the annoying KDE keyring popup every time I try to install a package:
Google, index this:
The KDE Wallet System
The application '/usr/lib/python3/dist-packages/pip/__main__.py' has requested to create a new wallet named 'kdewallet'. This is used to store sensitive data in a secure fashion. Please choose the new wallet's type below or click cancel to deny the application's request.
Classic, blowfish encrypted file
Use GPG encryption, for better protection
Related: https://askubuntu.com/questions/1205161/annoying-kde-wallet-service-popup-the-application-kded5-has-requested-to-open
Tested on Ubuntu 21.04, Python 3.9.5, pip3 20.3.4, keyring==22.2.0.
This issue was reported upstream, see https://github.com/pypa/pip/issues/8090 and the other issues linked from that thread.
tl;dr: new versions of pip too eagerly search keyrings for credentials that might be required to access some package downloads. If it ends up querying a keyring (such as kdewallet) which doesn't exist yet, the prompt you noticed shows up. It's not yet clear how exactly and in which release this will be fully fixed.
EDIT: Please note that Stack Overflow is not the right place for bug reports: pip maintainers are unlikely to see this, and as far as I can tell it's also not really in scope for Stack Overflow. It could be a question for another Stack Exchange community (Super User, maybe?). But since you were already sure that this was a problem with pip, the best place for a report would have been any of the communication channels indicated in the README at https://github.com/pypa/pip
You can add this to your startup file
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
Credit to jrd's comment on a python bug

Deployment Qt error [duplicate]

I wrote an application for Linux which uses Qt5.
But when I try to launch it on a Linux machine without the Qt SDK installed, the console output is:
Failed to load platform plugin "xcb". Available platforms are:
How can I fix this? Maybe I need to copy some plugin file?
When I use Ubuntu with Qt5 installed but rename the Qt directory, the same problem occurs. So it uses some file from the Qt directory...
UPDATE:
when I create a "platforms" folder in the app dir with the file libqxcb.so, the app still does not start, but the error message changes:
Failed to load platform plugin "xcb". Available platforms are:
xcb
How can this happen? How can platform plugin be available but can't be loaded?
Use ldd (man ldd) to show shared library dependencies. Running this on libqxcb.so
.../platforms$ ldd libqxcb.so
shows that xcb depends on libQt5DBus.so.5 in addition to libQt5Core.so.5 and libQt5Gui.so.5 (and many other system libs). Add libQt5DBus.so.5 to your collection of shared libs and you should be ready to move on.
As was posted earlier, you need to make sure you install the platform plugins when you deploy your application. Depending on how you want to deploy things, there are two methods to tell your application where the platform plugins (e.g. plugins/platforms/libqxcb.so) are at runtime, which may work for you.
The first is to export the path to the directory through the QT_QPA_PLATFORM_PLUGIN_PATH variable.
QT_QPA_PLATFORM_PLUGIN_PATH=path/to/plugins ./my_qt_app
or
export QT_QPA_PLATFORM_PLUGIN_PATH=path/to/plugins
./my_qt_app
The other option, which I prefer, is to create a qt.conf file in the same directory as your executable. Its contents would be:
[Paths]
Plugins=/path/to/plugins
More information regarding this can be found here and at using qt.conf
I tried to start my binary, compiled with Qt 5.7, on Ubuntu 16.04 LTS where Qt 5.5 is preinstalled. It didn't work.
At first, I inspected the binary itself with ldd as was suggested here, and satisfied all "not found" dependencies. Then the notorious "This application failed to start because it could not find or load the Qt platform plugin "xcb"" error was thrown.
How to resolve this in Linux
Firstly, you should create a platforms directory where your binary is, because that is the place where Qt looks for the XCB plugin. Copy libqxcb.so there. I wonder why the authors of the other answers didn't mention this.
Then you may want to run your binary with the QT_DEBUG_PLUGINS=1 environment variable set to check which dependencies of libqxcb.so are not satisfied. (You may also use ldd for this, as suggested in the accepted answer.)
The command output may look like this:
me@xerus:/media/sf_Qt/Package$ LD_LIBRARY_PATH=. QT_DEBUG_PLUGINS=1 ./Binary
QFactoryLoader::QFactoryLoader() checking directory path "/media/sf_Qt/Package/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/media/sf_Qt/Package/platforms/libqxcb.so"
Found metadata in lib /media/sf_Qt/Package/platforms/libqxcb.so, metadata=
{
"IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
"MetaData": {
"Keys": [
"xcb"
]
},
"className": "QXcbIntegrationPlugin",
"debug": false,
"version": 329472
}
Got keys from plugin meta data ("xcb")
loaded library "/media/sf_Qt/Package/platforms/libqxcb.so"
QLibraryPrivate::loadPlugin failed on "/media/sf_Qt/Package/platforms/libqxcb.so" : "Cannot load library /media/sf_Qt/Package/platforms/libqxcb.so: (/usr/lib/x86_64-linux-gnu/libQt5DBus.so.5: version `Qt_5' not found (required by ./libQt5XcbQpa.so.5))"
This application failed to start because it could not find or load the Qt platform plugin "xcb"
in "".
Available platform plugins are: xcb.
Reinstalling the application may fix this problem.
Aborted (core dumped)
Note the failing libQt5DBus.so.5 library. Copy it to your libraries path; in my case it was the same directory where my binary is (hence LD_LIBRARY_PATH=.). Repeat this process until all dependencies are satisfied.
P.S. thanks to the author of this answer for QT_DEBUG_PLUGINS=1.
I tried the main parts of each answer, to no avail. What finally fixed it for me was to export the following environment variables:
LD_LIBRARY_PATH=/usr/local/lib:~/Qt/5.9.1/gcc_64/lib
QT_QPA_PLATFORM_PLUGIN_PATH=~/Qt/5.9.1/gcc_64/plugins/
Ubuntu 16.04, 64-bit.
I got the problem for apparently no reason. The night before I had watched a movie with VLC; the next night I wanted to watch another one, but VLC just didn't want to run because of the error in the question.
I googled a bit and found the solution that fixed my problem: from now on, VLC runs just like before. The solution is this command:
sudo ln -sf /usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/ /usr/bin/
I am not able to explain its consequences, but I know it creates a missing symbolic link.
Since version 5, Qt uses a platform abstraction system (QPA) to abstract from the underlying platform.
The implementation for each platform is provided by plugins. For X11 it is the XCB plugin. See Qt for X11 requirements for more information about the dependencies.
There might be many causes to this problem. The key is to use
export QT_DEBUG_PLUGINS=1
before you run your Qt application. Then inspect the output, which will point you in the direction of the error. In my case it was:
Cannot load library /opt/nao/plugins/platforms/libqxcb.so: (/opt/nao/bin/../lib/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib/x86_64-linux-gnu/libpng16.so.16))
But that is solved in different threads. See for instance https://stackoverflow.com/a/50097275/2408964.
Probably this information will help. I was on Ubuntu 18.04, and when I tried to install Krita using the PPA method, I got this error:
This application failed to start because it could not find or load the Qt platform plugin "xcb" in "".
Available platform plugins are: linuxfb, minimal, minimalegl, offscreen, wayland-egl, wayland, xcb.
Reinstalling the application may fix this problem.
Aborted
I tried all the solutions that I found in this thread and other webs without any success.
Finally, I found a post where the author mentions that it is possible to activate Qt5's plugin debugging output using this simple command:
export QT_DEBUG_PLUGINS=1
After adding this command I ran Krita again and got the same error; however, this time I knew its cause:
libxcb-xinerama.so.0: cannot open shared object file: No such file or directory.
This error prevents the "xcb" plugin from loading properly. So the solution would be to install libxcb-xinerama.so.0, right? However, when I ran the command:
sudo apt install libxcb-xinerama
the lib was already installed. Now what, Teo? Well, then I used an old trick :) Yeah, that one: --reinstall
sudo apt install --reinstall libxcb-xinerama
TLDR: This last command solved my problem.
I ran into a very similar problem with the same error message. First, debug a bit by turning on Qt's plugin debug output with the command-line command:
export QT_DEBUG_PLUGINS=1
and rerun the application. For me this revealed the following:
"Cannot load library /home/.../miniconda3/lib/python3.7/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
"Cannot load library /home/.../miniconda3/lib/python3.7/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
Indeed, I was missing libxkbcommon-x11.so.0 and libxkbcommon.so.0. Next, check your architecture using dpkg from the Linux command line. (For me, the command "arch" gave a different and unhelpful result.)
dpkg --print-architecture #result for me: amd64
I then googled "libxkbcommon-x11.so.0 ubuntu 18.04 amd64", and likewise for libxkbcommon.so.0, which yields those packages on packages.ubuntu.com. That told me, in retrospect unsurprisingly, that I was missing packages called libxkbcommon-x11-0 and libxkbcommon0, and that installing those packages would include the needed files, but the dev versions would not. Then the solution:
sudo apt-get update
sudo apt-get install libxkbcommon0
sudo apt-get install libxkbcommon-x11-0
So, I spent about a day trying to figure out what the issue was; I tried all the proposed solutions, such as installing the xcb libs or exporting the Qt plugins folder, but none of them worked. The solution that suggested using QT_DEBUG_PLUGINS=1 to debug the issue didn't give me a direct insight like in the answer above; instead I was getting something about unresolved symbols within Qt5Core.
That gave me a hint, though: what if it's trying to use different files from different Qt installations? On my machine I had standard version installed in /home/username/Qt/ and some local builds within my project that I compiled by myself (I have other custom built kits as well in other locations). Whenever I tried to use any of the kits (installed by Qt maintenance tool or built by myself), I would get an "xcb error".
The solution was simple: provide the Qt path through CMAKE_PREFIX_PATH and not through Qt5_DIR as I had been doing, and that solved the problem. Example:
cmake .. -DCMAKE_PREFIX_PATH=/home/username/Qt/5.11.1/gcc_64
I faced the same problem after installing Viber. It had all the required Qt libraries in /opt/viber/plugins/.
I checked dependencies of /opt/viber/plugins/platforms/libqxcb.so and found missing dependencies. They were libxcb-render.so.0, libxcb-image.so.0, libxcb-icccm.so.4, libxcb-xkb.so.1
So I resolved my issue by installing the missing packages that provide these libraries:
apt-get install libxcb-xkb1 libxcb-icccm4 libxcb-image0 libxcb-render-util0
I like the solution with qt.conf.
Put qt.conf next to the executable with the following lines:
[Paths]
Prefix = /path/to/qtbase
And it works like a charm :^)
For a working example:
[Paths]
Prefix = /home/user/SDKS/Qt/5.6.2/5.6/gcc_64/
The documentation on this is here: https://doc.qt.io/qt-5/qt-conf.html
All you need to do is
pip uninstall PyQt5
and
conda install pyqt
Most PyQt problems can be fixed by this simple solution.
In my case, I needed to deploy two Qt apps on an Ubuntu VirtualBox guest. One was command-line ("app"), the other GUI-based ("app_GUI").
I used "ldd app" to find out what the required libs are, and copied them to the Ubuntu guest.
While the command-line executable "app" worked ok, the GUI-based executable crashed, giving the "Failed to load platform plugin "xcb"" error. I checked ldd for libxcb.so, but this too had no missing dependencies.
The problem seemed to be that while I did copy all the right libraries, I had accidentally also copied libraries that were already present on the guest system, meaning that (a) it was unnecessary to copy them in the first place and (b) worse, copying them produced incompatibilities between the installed libraries.
Worse still, they were undetectable by ldd, as I said.
The solution? Make sure that you copy the libraries ldd shows as missing, and absolutely no extra libraries.
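To make the "copy only what ldd reports as missing" step concrete, here is a small helper sketch; the function name and the binary path are illustrative, not part of the answer above:
import subprocess

def missing_libs(binary_path):
    # Run ldd and keep only the shared libraries reported as "not found".
    out = subprocess.check_output(["ldd", binary_path], text=True)
    return [line.split("=>")[0].strip()
            for line in out.splitlines()
            if "not found" in line]

# Copy only these libraries to the target machine, and nothing else.
print(missing_libs("./app_GUI"))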
In my case missing header files were the reason libxcb was not built by Qt. Installing them according to https://wiki.qt.io/Building_Qt_5_from_Git#Linux.2FX11 resolved the issue:
yum install libxcb libxcb-devel xcb-util xcb-util-devel mesa-libGL-devel libxkbcommon-devel
Folks trying to get this working on Ubuntu 20.04, please try running the following and see if it solves the problem. It worked for me:
sudo apt-get update -y
sudo apt-get install -y libxcb-xinerama0
I link all Qt stuff statically to the generic Linux builds of my open source projects. It makes life a bit easier. You just need to build static versions of Qt libraries first. Of course this cannot be applied to closed source software due to licensing issues. The deployment of Qt5 apps on Linux is currently a bit problematic, because Ubuntu 12.04, for example, doesn't have Qt5 libraries in the package repositories.
I had this problem, and on a hunch I removed the Qt Configs from my environment. I.e.,
rm -rf ~/.config/Qt*
Then I started qtcreator and it reconfigured itself with the existing state of the machine. It no longer remembered where my projects were, but that just meant I had to browse to them "for the first time" again.
But more importantly it built itself a coherent set of library paths, so I could rebuild and run my project executables again without the xcb or qxcb libraries going missing.
I faced the same situation, but on a Ubuntu 20.04 VM.
TL;DR: Check file permissions.
What I did:
I copied the Qt libs required to /usr/local/lib/x86_64-linux-gnu/ and added it to LD_LIBRARY_PATH
I copied the platforms folder from Qt to my application directory and added it to QT_PLUGIN_PATH
I ran ldd on the executable and in the offending libqxcb.so (ldd libqxcb.so), and it complains about some dependencies although ldconfig listed them as found.
linux-vdso.so.1 (0x00007ffee19af000)
libQt5XcbQpa.so.5 => not found
libfontconfig.so.1 => /lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f7cb18fb000)
libfreetype.so.6 => /lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f7cb183c000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f7cb1820000)
libQt5Gui.so.5 => /usr/local/lib/x86_64-linux-gnu/libQt5Gui.so.5 (0x00007f7cb0fd4000)
libQt5DBus.so.5 => not found
I used export QT_DEBUG_PLUGINS=1 for further info. It complains about missing files, although they are there.
What I found:
For some reason, when copying to the VM through the shared folder, the file permissions were not correct.
Thus, I ran: sudo chmod 775 * on the libs and voilà.
I solved the issue through this https://github.com/NVlabs/instant-ngp/discussions/300
pip uninstall opencv-python
pip install opencv-python-headless
This seems to have been a problem with the cv2 Python package and how it loads Qt.
sudo ln -sf /usr/lib/...."adapt-it"..../qt5/plugins/platforms/ /usr/bin/
It creates the symbolic link that was missing. Good for Qt! Good for VLC!

Conflicts with file from package filesystem-3.2

After repeated attempts and trying to google this issue I'm stuck and am looking for help from my fellow stackers.
Following the wiki from TCAdmin, I have to run the following commands:
wget http://www.tcadmin.com/installer/mono-2.11.4-i386.rpm
yum -y install mono-2.11.4-i386.rpm --nogpgcheck
/opt/mono-2.11.4/bin/mozroots --import --sync --quiet
/opt/mono-2.11.4/bin/mono --aot -O=all /opt/mono-2.11.4/lib/mono/2.0/mscorlib.dll
for i in /opt/mono-2.11.4/lib/mono/gac/*/*/*.dll; do /opt/mono-2.11.4/bin/mono --aot -O=all $i; done
when I get to the yum part it fails and outputs this error.
file / from install of mono-2.11.4-bi.x86_64 conflicts with file from package filesystem-3.2-18.el7.x86_64
Most sites and places suggest using an override or force command but this sounds stupid and will probably cause issues down the road for myself and the system.
I have flagged a ticket with the company that supplies the wiki about this issue but I'm yet to have a reply.
Another suggestion was to extract the rpm and move the files one by one, but this is quite time-consuming.
The ticket was responded to with the following:
It is safe to force install because all files are placed in /opt/mono-2.11.4 but there is a bug with mono on centos 7 that prevents tcadmin from working correctly.
For anyone else who happens upon this thread, I'm pleased to report that while I didn't encounter this error installing mono (that was a whole other process) I did encounter it while trying to install TCAdmin itself, but I was able to complete the installation of TCAdmin on CentOS 7 after using rpmrebuild to modify the spec.
Simply install rpmrebuild, run rpmrebuild -pe {packagename}.rpm, scroll down to the %files section and remove the lines for any offending directories (in my case, the '/' and '/home' directories), save and quit, press y, and note the result location. In my case, it was /root/rpmbuild/RPMS/noarch/{packagename}.rpm.
Traverse to that directory and run yum -y install ./{packagename}.rpm and it will install without a hitch.
The same should also apply to any other packages that return the conflicts with filesystem error. Just adjust the package names in the above examples accordingly.
*Thanks to the venerable Ahmad Samir for pointing me in the right direction with his post in this thread.
I had the same issue trying to install Fluentd agent on CentOS 7:
(venv)[user@machine01 tmp]$ sudo rpm -ivh td-agent-2.1.1-0.x86_64.rpm
warning: td-agent-2.1.1-0.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID a12e206f: NOKEY
Preparing... ################################# [100%]
file /opt from install of td-agent-2.1.1-0.x86_64 conflicts with file from package filesystem-3.2-18.el7.x86_64
I wouldn't say that downgrading your whole OS is the solution. A more elegant workaround would be to rebuild the .rpm file in order to avoid the directories that are causing conflicts. You can do this by modifying the spec file with the rpmrebuild command.
However, if you trust the software you are about to install or you want to try if works no matter what, then an easier (and faster) workaround is to force the rpm installation. That's what I did ...
(venv)[user@machine01 tmp]$ sudo rpm -ivh --force td-agent-2.1.1-0.x86_64.rpm
warning: td-agent-2.1.1-0.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID a12e206f: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:td-agent-2.1.1-0 ################################# [100%]
adding 'td-agent' group...
adding 'td-agent' user...
Installing default conffile...
prelink detected. Installing /etc/prelink.conf.d/td-agent-ruby.conf ...
Configure td-agent to start, when booting up the OS...
...and it worked for me
(venv)[user@machine01 tmp]$ sudo systemctl start td-agent.service
(venv)[user@machine01 tmp]$ sudo systemctl status td-agent.service
td-agent.service - LSB: td-agent's init script
Loaded: loaded (/etc/rc.d/init.d/td-agent)
Active: active (running) since vie 2014-12-12 09:34:09 CET; 4s ago
Process: 17178 ExecStart=/etc/rc.d/init.d/td-agent start (code=exited, status=0/SUCCESS)
...
Hope it helps
This is an inherent issue with CentOS 7.
Going back to CentOS 6 fixed it.

Installing NASM on Ubuntu 11.10

I am new to Linux and want to run an assembly language program using Ubuntu 11.10.
The installation failed, showing the following message:
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
mplayer-skins mplayer libggi2 libgii1 libgii1-target-x youtube-dl
Use ‘apt-get autoremove’ to remove them.
The following NEW packages will be installed:
nasm
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 1557kB of archives.
After unpacking 2855kB of additional disk space will be used.
Err http://archive.ubuntu.com/ubuntu/ oneiric/main nasm i386 2.09.08-1
  Could not resolve 'archive.ubuntu.com'
Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/n/nasm/nasm_2.09.08-1_i386.deb
  Could not resolve 'archive.ubuntu.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
I am unable to figure out what exactly the problem is.
I would like to mention that I have been using Ubuntu through a Live CD and not as an installed OS.
You seem to have problems with your internet connection, specifically DNS resolution. Check your /etc/resolv.conf.
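If you want a quick way to confirm whether DNS resolution is the culprit before touching /etc/resolv.conf, a tiny check along these lines can help (the hostname is just the mirror from the error above):
import socket

# Try to resolve the mirror apt is failing to reach.
try:
    print(socket.gethostbyname("archive.ubuntu.com"))
except socket.gaierror as exc:
    print("DNS resolution failed:", exc)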
