Multicontainer Docker application failing on deploy - node.js

I have a problem deploying my application to AWS Elastic Beanstalk. It is a multi-container Docker application that includes a Node.js server and MongoDB. The application crashes on every deploy, and I get this bizarre error from MongoDB:
2018-05-28T12:53:02.510+0000 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 3867 processes, 32864 files. Number of processes should be at least 16432 : 0.5 times number of files.
2018-05-28T12:53:02.540+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2018-05-28T12:53:02.541+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2018-05-28T12:53:03.045+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-05-28T12:53:03.045+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2018-05-28T12:53:03.045+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2018-05-28T12:53:03.045+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-05-28T12:53:03.047+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-05-28T12:53:03.161+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2018-05-28T12:53:03.161+0000 I CONTROL [signalProcessingThread] now exiting
2018-05-28T12:53:03.161+0000 I CONTROL [signalProcessingThread] shutting down with code:0
This is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "mongo-app",
      "host": {
        "sourcePath": "/var/app/mongo-app"
      }
    },
    {
      "name": "some-api",
      "host": {
        "sourcePath": "/var/app/some-api"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "mongo-app",
      "image": "mongo:latest",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 27017,
          "containerPort": 27017
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "mongo-app",
          "containerPath": "/data/db"
        }
      ]
    },
    {
      "name": "server",
      "image": "node:8.11",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8001
        }
      ],
      "links": [
        "mongo-app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "some-api",
          "containerPath": "/some-data"
        }
      ]
    }
  ]
}
And this is my Dockerfile:
FROM node:8.11
RUN mkdir -p /api
WORKDIR /api
COPY package.json /api
RUN cd /api && npm install
COPY . /api
EXPOSE 8001
CMD ["node", "api/app.js"]
Any ideas why the application is crashing and failing to deploy? It seems to me that MongoDB is causing the problem, but I can't find the root of it.
Thank you in advance!

I spent a while trying to figure this out as well.
The solution: add a mount point with "containerPath": "/data/configdb". Mongo expects to be able to write to both /data/db and /data/configdb.
Also, you might want to bump Mongo's "memory": 128 up to something higher.
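For illustration, here is a sketch of how the Mongo pieces of the Dockerrun.aws.json above might look with the second mount point and more memory. The mongo-config volume name, its /var/app/mongo-config host path, and the 256 MB figure are assumptions for the example, not values from the original post; the some-api volume and the server container are unchanged and omitted here:
"volumes": [
  {
    "name": "mongo-app",
    "host": {
      "sourcePath": "/var/app/mongo-app"
    }
  },
  {
    "name": "mongo-config",
    "host": {
      "sourcePath": "/var/app/mongo-config"
    }
  }
],
"containerDefinitions": [
  {
    "name": "mongo-app",
    "image": "mongo:latest",
    "memory": 256,
    "portMappings": [
      {
        "hostPort": 27017,
        "containerPort": 27017
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "mongo-app",
        "containerPath": "/data/db"
      },
      {
        "sourceVolume": "mongo-config",
        "containerPath": "/data/configdb"
      }
    ]
  }
]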

Related

Packer vsphere-iso stuck on waiting for IP

I am attempting to build a Packer template using the following Packer JSON config. However, Packer gets stuck on "Waiting for IP..." and doesn't make any progress. Thanks for your help in advance.
{
  "builders": [
    {
      "CPUs": 1,
      "RAM": 1024,
      "RAM_reserve_all": false,
      "boot_command": [
        "<esc><wait>",
        "linux ks=hd:fd0:packer/http/ks.cfg<enter>"
      ],
      "cluster": "{{user `vsphere-cluster`}}",
      "convert_to_template": true,
      "datacenter": "{{user `vsphere-datacenter`}}",
      "datastore": "{{user `vsphere-datastore`}}",
      "disk_controller_type": "pvscsi",
      "floppy_files": [
        "packer/http/ks.cfg"
      ],
      "folder": "{{user `folder`}}",
      "guest_os_type": "rhel7_64Guest",
      "insecure_connection": "true",
      "iso_paths": [
        "{{user `iso_url`}}"
      ],
      "network_adapters": [
        {
          "network": "{{user `vsphere-network`}}",
          "network_card": "vmxnet3"
        }
      ],
      "notes": "Build via Packer",
      "password": "{{user `vsphere-password`}}",
      "ssh_password": "{{user `ssh_pass`}}",
      "ssh_username": "{{user `ssh_user`}}",
      "storage": [
        {
          "disk_size": 25000,
          "disk_thin_provisioned": true
        }
      ],
      "type": "vsphere-iso",
      "username": "{{user `vsphere-user`}}",
      "vcenter_server": "{{user `vsphere-server`}}",
      "vm_name": "{{user `vm_name`}}"
    }
  ],
  "provisioners": [
    {
      "execute_command": "sudo {{.Vars}} sh {{.Path}}",
      "scripts": [
        "packer/scripts/cleanup.sh"
      ],
      "type": "shell"
    }
  ],
  "variables": {
    "folder": "Templates",
    "iso_url": "[datastore001] path/toiso/rhel-server-7.9-x86_64-dvd.iso",
    "ssh_pass": "password",
    "ssh_user": "root",
    "vm-cpu-num": "1",
    "vm-disk-size": "25600",
    "vm-mem-size": "1024",
    "vm_name": "packer-template-test",
    "vsphere-cluster": "cluster",
    "vsphere-datacenter": "datacenter",
    "vsphere-datastore": "datastore",
    "vsphere-network": "vsphere-network",
    "vsphere-password": "password2",
    "vsphere-server": "x.x.x.x",
    "vsphere-user": "admin"
  }
}
My Kickstart file looks like the following:
lang en_US
keyboard us
timezone America/New_York --isUtc
rootpw --iscrypted (*&YTHGRFDER^&VGGUYIUYIUY*&^&^&^UIYIYI
#Network
network --bootproto=static --ip=10.0.023 --netmask=255.255.255.0 --gateway=10.0.0.1 --nameserver=10.0.0.25 --device=ens192
#platform x86_64
reboot
text
nfs --server=x.x.x.x --dir=/path/to/iso/rhel-server-7.9-x86_64-dvd.iso
bootloader --location=mbr --append="rhgb quiet crashkernel=auto"
zerombr
clearpart --all --initlabel
volgroup System --pesize=4000 pv.0
part pv.0 --fstype=lvmpv --ondisk=sda --size=120000
part /boot --fstype=xfs --ondisk=sda2 --size=500
part /boot/efi --fstype=vfat --ondisk=sda1 --size=500
logvol swap --vgname=System --name=swap --fstype=swap --size=4000
logvol /home --vgname=System --name=home --fstype=xfs --size=500
logvol / --vgname=System --name=root --fstype=xfs --size=44000
auth --passalgo=sha512 --useshadow
selinux --disabled
firewall --enabled
firstboot --disable
%packages
#^graphical-server-environment
%end
This is my Packer output; it times out after 30 minutes 23 seconds:
vsphere-iso: output will be in this color.
==> vsphere-iso: Creating VM...
==> vsphere-iso: Customizing hardware...
==> vsphere-iso: Mounting ISO images...
==> vsphere-iso: Adding configuration parameters...
==> vsphere-iso: Creating floppy disk...
vsphere-iso: Copying files flatly from floppy_files
vsphere-iso: Copying file: packer/http/ks.cfg
vsphere-iso: Done copying files from floppy_files
vsphere-iso: Collecting paths from floppy_dirs
vsphere-iso: Resulting paths from floppy_dirs : []
vsphere-iso: Done copying paths from floppy_dirs
vsphere-iso: Copying files from floppy_content
vsphere-iso: Done copying files from floppy_content
==> vsphere-iso: Uploading created floppy image
==> vsphere-iso: Adding generated Floppy...
==> vsphere-iso: Set boot order temporary...
==> vsphere-iso: Power on VM...
==> vsphere-iso: Waiting 10s for boot...
==> vsphere-iso: Typing boot command...
==> vsphere-iso: Waiting for IP...
==> vsphere-iso: Timeout waiting for IP.
==> vsphere-iso: Clear boot order...
==> vsphere-iso: Power off VM...
==> vsphere-iso: Deleting Floppy image ...
==> vsphere-iso: Destroying VM...
Build 'vsphere-iso' errored after 30 minutes 23 seconds: Timeout waiting for IP.
==> Wait completed after 30 minutes 23 seconds
==> Some builds didn't complete successfully and had errors:
--> vsphere-iso: Timeout waiting for IP.
==> Builds finished but no artifacts were created.

Debugging python in docker container using debugpy and vs code results in timeout/connection refused

I'm trying to set up native debugging in Visual Studio Code for a Python script running in Docker, using debugpy. Ideally I'd like to just hit F5 and be on my way (including a build phase if needed). Currently I'm bouncing between a timeout caused by debugpy.listen(5678) inlined within the VS Code editor itself (Exception has occurred: RuntimeError timed out waiting for adapter to connect) and a connection refused.
I created a launch.json from the documentation provided by Microsoft:
launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Integration (test)",
      "type": "python",
      "request": "attach",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/test",
          "remoteRoot": "/test"
        }
      ],
      "port": 5678,
      "host": "127.0.0.1"
    }
  ]
}
Building the image looks like this so far:
Dockerfile
FROM python:3.7-slim-buster as base
RUN apt-get -y update; apt-get install -y vim git cmake
WORKDIR /
RUN mkdir .cache src in out config log
COPY requirements.txt .
RUN pip install -r requirements.txt; rm requirements.txt
#! TODO: config folder needs to be a mapped volume so they can change creds without rebuild
WORKDIR /src
COPY test ../test
COPY config ../config
COPY src/ .
#? D E B U G I M A G E
FROM base as debug
RUN pip install debugpy
CMD python -m debugpy --listen 0.0.0.0:5678 ../test/edu.employer._test.py
#! P R O D U C T I O N I M A G E
# FROM base as prod
# CMD [ "python", "/test/edu.employer._test.py" ]
Some examples I found try to simplify things with a docker-compose.yaml, but I'm unsure if I need one at this point.
docker-compose.yaml
services:
  tester:
    container_name: tester
    image: employer/test:1.0.0
    build:
      context: .
      target: debug
      dockerfile: test/edu.employer._test.Dockerfile
    volumes:
      - ./out:/out
      - ./.cache:/.cache
      - ./log:/log
    ports:
      - 5678:5678
which I based on the CLI command: docker run -it -v $(pwd)/out:/out -v $(pwd)/.cache:/.cache -v $(pwd)/log:/log employer/test:1.0.0;
The "critical" parts of my script just listen and wait for the debugger:
from __future__ import absolute_import
# Standard
import os
import sys
# 3rd Party
import debugpy
debugpy.listen(5678)
debugpy.wait_for_client()
# 1st Party. NOTE: All source files are in /src, so we can add that path here for testing
# and batch import all integrations files. Not very clean however
sys.path.insert(0, os.path.join('/', 'src'))
import integrations as ints
You have to configure the debugger with debugpy.listen(("0.0.0.0", 5678)).
This happens because, by default, debugpy listens on localhost only. Since your Docker container is effectively another host, you have to bind to 0.0.0.0.
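A minimal sketch of the corrected listener, assuming the same port 5678 that the launch.json above attaches to:
import debugpy

# Bind to all interfaces: listening on plain 5678 binds to 127.0.0.1 inside
# the container, which the host (where VS Code runs) cannot reach through
# the published port.
debugpy.listen(("0.0.0.0", 5678))
debugpy.wait_for_client()  # block until VS Code attaches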
It turns out I needed to create a tasks.json file and provide the details on running the image:
tasks.json
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "type": "docker-run",
      "label": "docker-run: debug",
      "dependsOn": ["docker-build"],
      "dockerRun": {
        "image": "employer/test:1.0.0"
        // "env": {
        //   "FLASK_APP": "path_to/flask_entry_point.py"
        // }
      },
      "python": {
        "args": [],
        "file": "/test/edu.employer._test.py"
      }
    }
  ]
}
and define a preLaunchTask:
{
  "name": "Docker: Python",
  "type": "docker",
  "request": "launch",
  "preLaunchTask": "docker-run: debug",
  "python": {
    "pathMappings": [
      {
        "localRoot": "${workspaceFolder}/test",
        "remoteRoot": "/test"
      }
    ],
    //"projectType": "django"
  }
}
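Note that the docker-run task depends on a docker-build task that isn't shown above. A hypothetical sketch of that task, reusing the tag, target, and Dockerfile path from the compose file (the exact values are assumptions):
{
  "type": "docker-build",
  "label": "docker-build",
  "dockerBuild": {
    "context": "${workspaceFolder}",
    "dockerfile": "${workspaceFolder}/test/edu.employer._test.Dockerfile",
    "tag": "employer/test:1.0.0",
    "target": "debug"
  }
}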

Unknown error while debugging Rust application in VS Code

I am trying to debug a fairly large Rust project in VS Code.
The launch.json has this:
{
  "type": "lldb",
  "request": "launch",
  "name": "Debug executable 'rpfm_ui'",
  "cargo": {
    "args": [
      "build",
      "--bin=rpfm_ui",
      "--package=rpfm_ui"
    ],
    "filter": {
      "name": "rpfm_ui",
      "kind": "bin"
    }
  },
  "args": [],
  "cwd": "${workspaceFolder}"
},
But when I try to run the application I get the following:
Finished dev [unoptimized + debuginfo] target(s) in 9.53s
Raw artifacts:
{
fileName: 'c:\\Users\\ole_k\\Desktop\\rpfm-master\\target\\debug\\rpfm_ui.exe',
name: 'rpfm_ui',
kind: 'bin'
}
Filtered artifacts:
{
fileName: 'c:\\Users\\ole_k\\Desktop\\rpfm-master\\target\\debug\\rpfm_ui.exe',
name: 'rpfm_ui',
kind: 'bin'
}
configuration: {
type: 'lldb',
request: 'launch',
name: "Debug executable 'rpfm_ui'",
args: [],
cwd: '${workspaceFolder}',
relativePathBase: 'c:\\Users\\ole_k\\Desktop\\rpfm-master',
program: 'c:\\Users\\ole_k\\Desktop\\rpfm-master\\target\\debug\\rpfm_ui.exe',
sourceLanguages: [ 'rust' ]
}
Listening on port 49771
[adapter\src\terminal.rs:99] FreeConsole() = 1
[adapter\src\terminal.rs:100] AttachConsole(pid) = 1
[adapter\src\terminal.rs:104] FreeConsole() = 1
[2020-06-27T20:43:04Z ERROR codelldb::debug_session] process launch failed: unknown error
Debug adapter exit code=0, signal=null.
I have also seen this:
PS C:\Users\ole_k\Desktop\rpfm-master> & 'c:\Users\ole_k.vscode\extensions\vadimcn.vscode-lldb-1.5.3\adapter\codelldb.exe' 'terminal-agent' '--port=49628'
Error: Os { code: 10061, kind: ConnectionRefused, message: "No connection could be made because the target machine actively refused it." }
[2020-06-27T20:29:08Z ERROR codelldb::debug_session] process launch failed: unknown error
If I run the application from the terminal inside VS Code (cargo run --bin rpfm_ui), it works.
There are some external dependencies in folders outside of the root folder.
I can debug other projects in the solution that share a lot of the code, but not the external dependencies.
I am running as administrator.
Any ideas on how to resolve the issue?

mongod ERROR: child process failed, exited with error number 14

I got the following error when I tried to restart the db after the server (a Linux VM) rebooted without shutting down the db first. Someone posted the same error over a year and a half ago, but the solution proposed there didn't apply to my situation, because it's not a YAML config issue (the db had been running for quite a while). I've also included the log at the end. Thanks for any help.
sudo mongod --fork --logpath /nas/is1/bin/mongodb/data/db/mongodb.log --dbpath /nas/is1/bin/mongodb/data/db
about to fork child process, waiting until server is ready for connections.
forked process: 20085
ERROR: child process failed, exited with error number 14
Output in the log file:
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] MongoDB starting : pid=20085 port=27017 dbpath=/data/mongodb/data/db 64-bit host=raboso
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] db version v3.2.1
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] allocator: tcmalloc
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] modules: none
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] build environment:
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] distarch: x86_64
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] target_arch: x86_64
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] options: { processManagement: { fork: true }, storage: { dbPath: "/data/mongodb/data/db" }, systemLog: { destination: "file", path: "/data/mongodb/data/db/mongodb.log" } }
2017-01-19T15:33:45.329-0500 I - [initandlisten] Detected data files in /data/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-01-19T15:33:45.346-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=112G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-01-19T15:33:54.009-0500 E STORAGE [initandlisten] WiredTiger (-31802) [1484858034:9041][20085:0x7f0fcf72bcc0], file:sizeStorer.wt, WT_SESSION.open_cursor: sizeStorer.wt read error: failed to read 4096 bytes at offset 49152: WT_ERROR: non-specific WiredTiger error
2017-01-19T15:33:54.011-0500 I - [initandlisten] Invariant failure: ret resulted in status UnknownError -31802: WT_ERROR: non-specific WiredTiger error at src/mongo/db/storage/wiredtiger/wiredtiger_size_storer.cpp 67
2017-01-19T15:33:54.022-0500 I CONTROL [initandlisten]
0x12cf722 0x127ac14 0x1266dad 0x1058db2 0x10425ea 0x103f540 0xf679a8 0x93bc91 0x9403b9 0x7f0fce33bb35 0x939829
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"ECF722"},{"b":"400000","o":"E7AC14"},{"b":"400000",
"o":"E66DAD"},{"b":"400000","o":"C58DB2"},{"b":"400000","o":"C425EA"},{"b":"400000",
"o":"C3F540"},{"b":"400000","o":"B679A8"},{"b":"400000","o":"53BC91"},{"b":"400000",
"o":"5403B9"},{"b":"7F0FCE31A000","o":"21B35"},{"b":"400000","o":"539829"}],
"processInfo":{ "mongodbVersion" : "3.2.1", "gitVersion" : "a14d55980c2cdc565d4704a7e3ad37e4e535c1b2",
"compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-514.2.2.el7.x86_64",
"version" : "#1 SMP Wed Nov 16 13:15:13 EST 2016", "machine" : "x86_64" },
"somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFEF9CD5000", "elfType" : 3 },
{ "b" : "7F0FCF31B000", "path" : "/lib64/librt.so.1", "elfType" : 3 }, { "b" : "7F0FCF117000",
"path" : "/lib64/libdl.so.2", "elfType" : 3 }, { "b" : "7F0FCEE0F000", "path" : "/lib64/libstdc++.so.6",
"elfType" : 3 }, { "b" : "7F0FCEB0D000", "path" : "/lib64/libm.so.6", "elfType" : 3 },
{ "b" : "7F0FCE8F7000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3 }, { "b" : "7F0FCE6DB000",
"path" : "/lib64/libpthread.so.0", "elfType" : 3 }, { "b" : "7F0FCE31A000", "path" : "/lib64/libc.so.6",
"elfType" : 3 }, { "b" : "7F0FCF523000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12cf722]
mongod(_ZN5mongo10logContextEPKc+0x134) [0x127ac14]
mongod(_ZN5mongo17invariantOKFailedEPKcRKNS_6StatusES1_j+0xAD) [0x1266dad]
mongod(_ZN5mongo20WiredTigerSizeStorerC1EP15__wt_connectionRKSs+0x222) [0x1058db2]
mongod(_ZN5mongo18WiredTigerKVEngineC2ERKSsS2_S2_mbbb+0x6DA) [0x10425ea]
mongod(+0xC3F540) [0x103f540]
mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x588) [0xf679a8]
mongod(_ZN5mongo13initAndListenEi+0x321) [0x93bc91]
mongod(main+0x149) [0x9403b9]
libc.so.6(__libc_start_main+0xF5) [0x7f0fce33bb35]
mongod(+0x539829) [0x939829]
----- END BACKTRACE -----
2017-01-19T15:33:54.022-0500 I - [initandlisten]
***aborting after invariant() failure
If a system running MongoDB with the WiredTiger storage engine crashes or experiences an unclean shutdown, MongoDB may not be able to recover the data files on restart if the crash or shutdown interrupted a WiredTiger checkpoint; it cannot recover them automatically.
Sadly there is no workaround. You can either restore data from backups or resync from another replica set member.
WiredTiger (-31802) [1484858034:9041][20085:0x7f0fcf72bcc0], file:sizeStorer.wt, WT_SESSION.open_cursor: sizeStorer.wt read error: failed to read 4096 bytes at offset 49152: WT_ERROR: non-specific WiredTiger error
The above error suggests that your database has been corrupted. Repair it with:
mongod --repair --dbpath /path/to/data/db
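If you attempt the repair, it is prudent to copy the data files first, since --repair discards any data it cannot salvage. A cautious sketch, using the dbpath from the mongod command in the question (the .bak path is just an example):
# Back up the data directory before repairing; unrecoverable data is dropped.
sudo cp -a /nas/is1/bin/mongodb/data/db /nas/is1/bin/mongodb/data/db.bak

# Repair in the foreground, then start mongod normally once it exits cleanly.
sudo mongod --repair --dbpath /nas/is1/bin/mongodb/data/db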

How to install a nodejs cms like pencilblue on uberspace

I would like to have the PencilBlue Node.js CMS with MongoDB installed on my Uberspace account. Which steps do I have to take?
As I found it hard to figure out how to do this, here is how I finally succeeded. Most of it is relevant for Node.js installations other than PencilBlue as well.
First you need to create an account on uberspace.de.
Open your terminal and log in to your Uberspace console with ssh:
ssh {account}@{server}.uberspace.de
Enter the password you chose when you created the account.
Create the service directory:
uberspace-setup-svscan
Create the mongo database:
uberspace-setup-mongodb
Create a folder for the database data:
mkdir data
cd data
mkdir db
Start db:
mongod --dbpath data/db/
You will get some login data. I suggest you write it down somewhere:
Hostname: localhost
Portnum#: {dbPort}
Username: {account}_mongoadmin
Password: {dbPassword}
To connect to the db via shell you may use:
mongo admin --port {dbPort} -u {account}_mongoadmin -p
Configure npm:
cat > ~/.npmrc <<__EOF__
prefix = $HOME
umask = 077
__EOF__
Install pencilblue-cli:
npm install pencilblue-cli
Change to the html folder and create a .htaccess file (you could do this with your FTP client as well):
RewriteEngine On
RewriteRule ^(.*) http://localhost:8080/$1 [P]
Now if you want to use GitHub:
Create a new repository on GitHub.
Open a new terminal window and clone the PencilBlue CMS into a local folder on your machine:
git clone git@github.com:pencilblue/pencilblue.git pencilblue
cd pencilblue
git remote set-url origin git@github.com:{yourGitName}/{yourRepoName}.git
git add .
git commit -m "Initial commit."
Set up SSH on Uberspace:
Go back to your Uberspace console.
ssh-keygen -t rsa -b 4096 -C "{yourEmailAddress}"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub
Copy the whole key that is printed out and paste it into GitHub under Settings/SSH keys.
Clone the new repo in the Uberspace console:
git clone git@github.com:{yourGitName}/{yourRepoName}.git cms
cd cms
Create a config.js, either with vim config.js or by uploading it via FTP:
module.exports = {
  "siteName": "{yourSiteName}",
  "siteRoot": "http://{account}.{server}.uberspace.de/",
  "sitePort": 8080,
  "logging": {
    "level": "info"
  },
  "db": {
    "type": "mongo",
    "servers": [
      "mongodb://{account}_mongoadmin:{dbPassword}@127.0.0.1:{dbPort}/"
    ],
    "name": "admin",
    "writeConcern": 1
  },
  "cache": {
    "fake": false,
    "host": "localhost",
    "port": 6379
  },
  "settings": {
    "use_memory": false,
    "use_cache": false
  },
  "templates": {
    "use_memory": true,
    "use_cache": false
  },
  "plugins": {
    "caching": {
      "use_memory": false,
      "use_cache": false
    }
  },
  "registry": {
    "type": "mongo"
  },
  "session": {
    "storage": "mongo"
  },
  "media": {
    "provider": "mongo",
    "max_upload_size": 6291456
  },
  "cluster": {
    "workers": 1,
    "self_managed": true
  },
  "siteIP": "0.0.0.0"
};
Install node_modules:
npm install
Create a service that starts the server:
uberspace-setup-service pbservice node ~/cms/pencilblue.js
Start the service:
svc -u ~/service/pbservice
Now you can go to the page at http://{account}.{server}.uberspace.de
(To start the service (hint: u = up):
svc -u ~/service/pbservice
To stop the service (hint: d = down):
svc -d ~/service/pbservice
To reload the service (hint: h = HUP):
svc -h ~/service/pbservice
To restart the service (hint: du = down, up):
svc -du ~/service/pbservice
To remove the service:
cd ~/service/pbservice
rm ~/service/pbservice
svc -dx . log
rm -rf ~/etc/run-pbservice)
