Getting an error when running pg_dump on cloud foundry - linux

I have a manifest that looks like the below:
applications:
- name: sfcbackups-staging
  memory: 1G
  disk_quota: 4GB
  instances: 1
  buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - nodejs_buildpack
  services:
  - sfcstaging-db
And an apt.yml that looks like this, to install the Postgres client:
---
cleancache: true
keys:
- https://www.postgresql.org/media/keys/ACCC4CF8.asc
repos:
- deb https://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main
packages:
- postgresql-client-11
But whenever I try running the command pg_dump (either via node or ssh) I get the following error:
Can't locate PgCommon.pm in @INC (you may need to install the PgCommon module) (@INC contains:
/etc/perl
/usr/local/lib/x86_64-linux-gnu/perl/5.26.1
/usr/local/share/perl/5.26.1
/usr/lib/x86_64-linux-gnu/perl5/5.26
/usr/share/perl5
/usr/lib/x86_64-linux-gnu/perl/5.26
/usr/share/perl/5.26
/usr/local/lib/site_perl
/usr/lib/x86_64-linux-gnu/perl-base)
at ./pg_dump line 22.
BEGIN failed--compilation aborted at ./pg_dump line 22.
There are a couple of things I have tried so far, but to no avail:
Adding the following to my apt.yml packages:
- libdbd-pg-perl
- libpq-dev
Running cpan install DBD::Pg (via SSH) to see if this would fix the issue; however, the same error occurs.
Has anyone had any similar issues?
Thanks in advance
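(For context on the error itself: on Debian-based stacks, /usr/bin/pg_dump is a Perl wrapper script from postgresql-client-common that needs PgCommon.pm, and the apt-buildpack unpacks packages under its own deps directory rather than /usr, so the wrapper cannot find the module. A workaround sketch, assuming the usual apt-buildpack layout and the deps index 0 below, is to put the real versioned binary ahead of the wrapper on PATH:)

```shell
# The Debian pg_dump in /usr/bin is a Perl wrapper; the real binary ships in
# usr/lib/postgresql/11/bin. With the apt-buildpack that tree lands under
# /home/vcap/deps/<index>/apt (index 0 is an assumption here), so prepend it:
export PATH="/home/vcap/deps/0/apt/usr/lib/postgresql/11/bin:$PATH"
echo "$PATH" | cut -d: -f1   # first PATH entry is now the real pg_dump's dir
```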

Related

W: Webpacker requires Node.js ">=10.17.0" and you are using v6.17.1

I am trying to deploy a Rails 6 app on platform.sh, and I get this error when deploying my project with Rails 6 and Webpacker 5.2.1. I have done days of research on Google, without success.
I have NVM, Node, and npm installed locally, and everything is fine locally: Webpacker compiles correctly, but not on the remote machine, and this error makes the deployment fail.
W: Webpacker requires Node.js ">=10.17.0" and you are using v6.17.1
W: Please upgrade Node.js https://nodejs.org/en/download/
W: Exiting!
E: Error building project: Step failed with status code 1.
E: Error: Unable to build application, aborting.
This happens when I compile the assets with this line in the build hook:
RAILS_ENV=production bundle exec rails webpacker:install
Can you help me? I'll share my config.
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'
mounts:
  log:
    source: local
    source_path: log
  tmp:
    source: local
    source_path: tmp
relationships:
  postgresdatabase: 'dbpostgres:postgresql'
# The size of the persistent disk of the application (in MB).
disk: 1024
hooks:
  build: |
    bundle install --without development test
    RAILS_ENV=production bundle exec rails webpacker:install
  deploy: |
    RAILS_ENV=production bundle exec rake db:migrate
web:
  upstream:
    socket_family: "unix"
  commands:
    start: "unicorn -l $SOCKET -E production config.ru"
  locations:
    '/':
      root: "public"
      passthru: true
      expires: 1h
      allow: true
services.yaml
dbpostgres:
  # The type of your service (postgresql), which uses the format
  # 'type:version'. Be sure to consult the PostgreSQL documentation
  # (https://docs.platform.sh/configuration/services/postgresql.html#supported-versions)
  # when choosing a version. If you specify a version number which is not available,
  # the CLI will return an error.
  type: postgresql:13
  # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
  disk: 9216
  configuration:
    extensions:
      - plpgsql
      - pgcrypto
      - uuid-ossp
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
  type: upstream
  upstream: "app:http"
"https://{default}/":
  type: redirect
  to: "https://www.{default}/"
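(A common fix for this error: a Ruby container on Platform.sh ships an old Node, so a newer one has to be installed at build time. The dependencies block with the npm n package and an N_PREFIX variable is the recipe Platform.sh documents for this; the version numbers below are illustrative, so adapt them to Webpacker's ">=10.17.0" requirement.)

```yaml
# Sketch for .platform.app.yaml -- version numbers are illustrative.
dependencies:
  nodejs:
    n: "*"                  # 'n' is a Node version manager installable via npm
variables:
  env:
    N_PREFIX: /app/.global  # where 'n' installs the requested Node
hooks:
  build: |
    # Install a Node satisfying Webpacker's ">=10.17.0" before compiling assets
    n 12.22.12
    hash -r
    bundle install --without development test
    RAILS_ENV=production bundle exec rails webpacker:install
```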

Issues with Running Ansible Playbook on Linux T2 Instance Localhost

I am trying to figure out why my Ansible playbook is not working. I have tried 20 different ways of indenting the playbook but it is not working.
I am currently launching an Amazon Linux t2 instance and then installing ansible using following commands:
sudo yum update -y
sudo amazon-linux-extras install ansible2 -y
Then I create a playbook first.yml using "vim first.yml", and the playbook looks like this:
---
- name: update web servers
  hosts: localhost
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
I run playbook using "ansible-playbook first.yml" and get the following error:
ERROR! We were unable to read either as JSON nor YAML, these are the
errors we got from each: JSON: No JSON object could be decoded
Syntax Error while loading YAML. mapping values are not allowed in
this context
The error appears to be in '/home/ec2-user/first.yml': line 7, column
8, but may be elsewhere in the file depending on the exact syntax
problem.
The offending line appears to be:
tasks:
^ here
I would appreciate any help, thank you !
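(The error message is the YAML parser's way of saying the indentation is inconsistent: if tasks: is indented deeper than its sibling keys, the parser tries to read it as a continuation of the previous value and then rejects the colon. A minimal reproduction with PyYAML, assuming it is installed; the exact wording differs slightly between the C and pure-Python loaders:)

```python
import yaml

# "tasks:" indented deeper than its siblings (hosts, remote_user) makes the
# parser treat it as part of the previous value, then choke on the colon.
bad = """\
- name: update web servers
  hosts: localhost
  remote_user: root
    tasks:
"""
try:
    yaml.safe_load(bad)
    print("parsed")
except yaml.YAMLError as exc:
    print("YAMLError:", str(exc).splitlines()[0])
```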

Docker error process_linux.go:319: getting the final child's pid from pipe caused "EOF": unknown

My Docker container will not start. The error message in the log is:
{"message":"OCI runtime create failed: container_linux.go:349: starting container process caused \
"process_linux.go:319: getting the final child's pid from pipe caused \\\"EOF\\\"\": unknown"}
What is the next step?
I solved it by setting:
sysctl -w user.max_user_namespaces=15000
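(sysctl -w only changes the value until the next reboot; to persist it, the same key can go in a sysctl drop-in file -- the file name below is just a convention:)

```
# /etc/sysctl.d/99-userns.conf
user.max_user_namespaces = 15000
```

Apply it without rebooting via sysctl --system.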
When using Kubernetes, this error can also be caused by a wrong memory notation in the deployment settings.
Wrong:
...
spec:
  template:
    spec:
      containers:
      - resources:
          limits:
            memory: "300m"
          requests:
            memory: "300m"
Fixed:
...
spec:
  template:
    spec:
      containers:
      - resources:
          limits:
            memory: "300Mi"
          requests:
            memory: "300Mi"
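(The difference matters because Kubernetes quantity suffixes are case-sensitive: a lowercase m means milli, so "300m" requests 0.3 bytes of memory, while "300Mi" is 300 × 2^20 bytes. A toy sketch of that arithmetic -- not the real Kubernetes parser:)

```python
# Toy converter for two Kubernetes quantity suffixes: "m" = milli (1/1000),
# "Mi" = mebibytes (2**20 bytes). Not the real parser -- just the arithmetic.
def to_bytes(quantity: str) -> float:
    if quantity.endswith("Mi"):
        return float(quantity[:-2]) * 2**20
    if quantity.endswith("m"):
        return float(quantity[:-1]) / 1000
    return float(quantity)

print(to_bytes("300m"))   # 0.3 -- a fraction of a byte, effectively no memory
print(to_bytes("300Mi"))  # 314572800.0
```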
Update 2021:
New answers in the below-linked thread assert that they have found solutions. See this answer, for example.
Original Answer:
This is a known issue.{1}
The solution that worked for me was to restart the server instance.
In Plesk:
Tools & Settings (from side menu)
Server Management (2nd grouping on left side)
click on Restart Server
Reference:
https://github.com/moby/moby/issues/40835
For CentOS users, setting max_user_namespaces can be done as follows.
echo 15000 > /proc/sys/user/max_user_namespaces
https://superuser.com/a/1294246/1640737
I solved it by upgrading the Linux kernel from 4.15.0-202-generic to 4.15.0-206-generic #217-Ubuntu.
My docker --version is 20.10.17, build 100c701.
1. Install the newest kernel: sudo apt-get install -y linux-image-generic
2. Reboot the server.
Tip: check that your environment will work with the newest kernel before installing it!

Running platform local:build in Ubuntu, receive error

When running platform local:build in Ubuntu, I receive the error below.
I've installed Ubuntu on Windows 10 Pro 64-bit via the Windows Subsystem for Linux. Then, I installed LAMP, Composer, and the Platform.sh CLI.
Now, I've performed a get on my project to check it out locally, and am performing a build on it locally. But this is what happens:
$ platform local:build
Building application m2 (runtime type: php:7.0)
Found a composer.json file; installing dependencies
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Your requirements could not be resolved to an installable set of packages.
Problem 1
- magento/framework 101.0.1 requires ext-gd * -> the requested PHP extension gd is missing from your system.
- magento/framework 101.0.1 requires ext-gd * -> the requested PHP extension gd is missing from your system.
- Installation request for magento/framework 101.0.1 -> satisfiable by magento/framework[101.0.1].
To enable extensions, verify that they are enabled in your .ini files:
-
- /etc/php/7.0/cli/conf.d/10-opcache.ini
- /etc/php/7.0/cli/conf.d/10-pdo.ini
- /etc/php/7.0/cli/conf.d/15-xml.ini
- /etc/php/7.0/cli/conf.d/20-calendar.ini
- /etc/php/7.0/cli/conf.d/20-ctype.ini
- /etc/php/7.0/cli/conf.d/20-curl.ini
- /etc/php/7.0/cli/conf.d/20-dom.ini
- /etc/php/7.0/cli/conf.d/20-exif.ini
- /etc/php/7.0/cli/conf.d/20-fileinfo.ini
- /etc/php/7.0/cli/conf.d/20-ftp.ini
- /etc/php/7.0/cli/conf.d/20-gettext.ini
- /etc/php/7.0/cli/conf.d/20-iconv.ini
- /etc/php/7.0/cli/conf.d/20-json.ini
- /etc/php/7.0/cli/conf.d/20-phar.ini
- /etc/php/7.0/cli/conf.d/20-posix.ini
- /etc/php/7.0/cli/conf.d/20-readline.ini
- /etc/php/7.0/cli/conf.d/20-shmop.ini
- /etc/php/7.0/cli/conf.d/20-simplexml.ini
- /etc/php/7.0/cli/conf.d/20-sockets.ini
- /etc/php/7.0/cli/conf.d/20-sysvmsg.ini
- /etc/php/7.0/cli/conf.d/20-sysvsem.ini
- /etc/php/7.0/cli/conf.d/20-sysvshm.ini
- /etc/php/7.0/cli/conf.d/20-tokenizer.ini
- /etc/php/7.0/cli/conf.d/20-wddx.ini
- /etc/php/7.0/cli/conf.d/20-xmlreader.ini
- /etc/php/7.0/cli/conf.d/20-xmlwriter.ini
- /etc/php/7.0/cli/conf.d/20-xsl.ini
You can also run `php --ini` inside terminal to see which files are used by PHP in CLI mode.
[ProcessFailedException]
The command failed with the exit code: 2
Full command: '/usr/local/bin/composer' 'install' '--no-progress' '--prefer-dist' '--optimize-autoloader' '--no-interaction' '--no-ansi'
Looks like the PHP GD extension is not installed or enabled in your local Ubuntu. Try installing it with:
sudo apt-get install php7.0-gd
Afterwards, php -m | grep -i gd should list gd, and the composer install can be re-run.

Haskell installation in docker container using stack failing: too many open files

I have a simple Dockerfile
FROM haskell:8
WORKDIR "/root"
CMD ["/bin/bash"]
which I run, mounting the current directory to "/root". In my current folder I have a Haskell project that uses stack (funblog). I configured stack.yaml to use the "lts-7.20" resolver, which installs ghc-8.0.1.
Inside the container, after running "stack update", I ran "stack setup", but I get "Too many open files in system" during the GHC compilation.
This is my stack.yaml
flags: {}
packages:
- '.'
- location:
    git: https://github.com/agrafix/Spock.git
    commit: 2c60a48b2c0be0768071cc1b3c7f14590ffcc7d6
  subdirs:
    - Spock
    - Spock-core
    - reroute
- location:
    git: https://github.com/agrafix/Spock-digestive.git
    commit: 4c85647427e21bbaefbf04c4bc315d4bdfabba0e
extra-deps:
- digestive-bootstrap-0.1.0.1
- blaze-bootstrap-0.1.0.1
- digestive-functors-blaze-0.6.0.6
resolver: lts-7.20
One important note: I don't want to use Docker to deploy the app, just to compile it, i.e. as part of my dev process.
Any ideas?
Should I use another image without ghc pre-installed to use with docker? Which one?
update
Yes, I could use the built-in GHC in the container and it is a good idea, but wondered if there is any issue building GHC within Docker.
update 2
For anyone wishing to reproduce (on macOS, by the way), you can clone the repo https://github.com/carlosayam/funblog and check out commit 9446bc0e52574cc574a9eb5f2733f69e07b874ef
(I will probably move on using container's GHC)
By default, Docker for macOS limits the number of file descriptors to avoid hitting macOS system-wide limits (the default limit is 900). To increase the limit, run the following commands:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at 9410b78 last-start-time changed at 1480947038
$ cat com.docker.driver.amd64-linux/slirp/max-connections
900
$ echo 1200 > com.docker.driver.amd64-linux/slirp/max-connections
$ git add com.docker.driver.amd64-linux/slirp/max-connections
$ git commit -s -m 'Update the maximum number of connections'
[master 227a248] Update the maximum number of connections
1 file changed, 1 insertion(+), 1 deletion(-)
Then check the notice messages by:
$ syslog -k Sender Docker
<Notice>: updating connection limit to 1200
To check how many files you got open, run: sysctl kern.num_files.
To check what's your current limit, run: sysctl kern.maxfiles.
To increase it system-wide, run: sysctl -w kern.maxfiles=20480.
Source: Containers become unresponsive due to "too many connections".
See also: Docker: How to increase number of open files limit.
On Linux, you can also try to run Docker with --ulimit, e.g.
docker run --ulimit nofile=5000:5000 <image-tag>
Source: Docker error: too many open files
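(To confirm which limit a process actually sees -- for example inside the container, before and after passing --ulimit -- the per-process open-file limit can be read with Python's standard library:)

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors;
# this is the limit that "docker run --ulimit nofile=..." adjusts.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```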
