IIS is not installing on the server - iis

I am installing IIS using WiX Burn, which works fine on my local machine. When I try to install on the server, the install fails with the error: "An attempt was made to load a program with an incorrect format."
<!-- Check for IIS existence -->
<util:RegistrySearch Id="IISWebServerRole64"
                     Root="HKLM"
                     Key="SOFTWARE\Microsoft\InetStp\Components"
                     Value="W3SVC"
                     Variable="WebServerRole"
                     Win64="yes"/>
<!-- Installs IIS if not already installed -->
<ExePackage Id="IIS_WebServerRole"
            SourceFile="setup.bat"
            InstallCondition="NOT WebServerRole"
            DisplayName="Installing IIS: IIS-WebServerRole"
            InstallCommand="dism.exe /Online /Enable-Feature /FeatureName:IIS-WebServerRole">
</ExePackage>
The setup.bat file is a one-line batch file containing just %*.
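One likely culprit worth testing (an assumption, not a confirmed diagnosis): 0x8007000B is ERROR_BAD_EXE_FORMAT, and Burn bundles run as 32-bit processes, so the cmd.exe that runs setup.bat sees the WOW64 view of System32 and picks up the 32-bit dism.exe. From a 32-bit process, the `%WINDIR%\Sysnative` alias maps to the real 64-bit System32, so setup.bat could force the 64-bit DISM itself (with InstallCommand trimmed to just the `/Online /Enable-Feature /FeatureName:IIS-WebServerRole` switches):

```bat
@echo off
REM Sketch of setup.bat: from a 32-bit process, %WINDIR%\Sysnative resolves
REM to the 64-bit System32; fall back to plain dism.exe otherwise.
if exist "%WINDIR%\Sysnative\dism.exe" (
  "%WINDIR%\Sysnative\dism.exe" %*
) else (
  dism.exe %*
)
```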
Log file:
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_WebServerRole, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_WebServer, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_CommonHttpFeatures, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_StaticContent, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_DefaultDocument, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_DirectoryBrowsing, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_HttpErrors, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_HttpRedirect, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ApplicationDevelopment, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_NetFxExtensibility, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ISAPIExtensions, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ASP, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ISAPIFilter, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ASPNET, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_HealthAndDiagnostics, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_HttpLogging, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_LoggingLibraries, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_RequestMonitor, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_HttpTracing, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_CustomLogging, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_Security, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_WindowsAuthentication, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_RequestFiltering, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_IPSecurity, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_Performance, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_HttpCompressionStatic, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_WebServerManagementTools, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ManagementConsole, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ManagementScriptingTools, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ManagementService, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_WindowsActivationService, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ProcessModel, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_NetFxEnvironment, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_ConfigurationAPI, state: Absent, cached: None
[1AD8:1A88][2019-08-06T19:16:59]i101: Detected package: IIS_NetFx3, state: Absent, cached: None
[2398:0EE4][2019-08-06T19:17:04]e000: Error 0x8007000b: Process returned error: 0xb
[2398:0EE4][2019-08-06T19:17:04]e000: Error 0x8007000b: Failed to execute EXE package.
[1AD8:1A88][2019-08-06T19:17:04]e000: Error 0x8007000b: Failed to configure per-machine EXE package.
[1AD8:1A88][2019-08-06T19:17:04]i319: Applied execute package: IIS_WebServerRole, result: 0x8007000b, restart: None
[1AD8:1A88][2019-08-06T19:17:04]e000: Error 0x8007000b: Failed to execute EXE package.

Related

Airflow Kubernetes executor: tasks stuck in queued state, pod reports invalid image and uses local executor

I have installed Airflow in Docker and am using the Kubernetes executor, but I am unable to trigger DAGs. DAG runs stay in the queued state. The KubernetesExecutor creates a pod, but the pod reports an invalid image; describing the pod also shows it uses the local executor instead of the Kubernetes executor. Please help.
Attaching the log for reference:
kubectl describe pod tablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc
Name: swedschematablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc
Namespace: default
Priority: 0
Node: 10.73.96.181
Start Time: Mon, 11 May 2020 18:22:15 +0530
Labels: airflow-worker=5888feda-6aee-49c8-a94b-39cbe5758062
airflow_version=1.10.10
dag_id=Swed-schema-tables-creation
execution_date=2020-05-11T12_52_09.829627_plus_00_00
kubernetes_executor=True
task_id=Schema_Tables_Creation
try_number=1
Annotations: <none>
Status: Pending
IP: 172.17.0.46
IPs:
IP: 172.17.0.46
Containers:
base:
Container ID:
Image: :
Image ID:
Port: <none>
Host Port: <none>
Command:
airflow
run
Swed-schema-tables-creation
Schema_Tables_Creation
2020-05-11T12:52:09.829627+00:00
--local
--pool
default_pool
-sd
/root/airflow/dags/User_Creation_dag.py
State: Waiting
Reason: InvalidImageName
Ready: False
Restart Count: 0
Environment:
AIRFLOW__CORE__EXECUTOR: LocalExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql://airflowkube:airflowkube@10.73.96.181:5434/airflowkube
Mounts:
/root/airflow/dags from airflow-dags (ro)
/root/airflow/logs from airflow-logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-64cxg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
airflow-dags:
Type: HostPath (bare host directory volume)
Path: /data/Naveen/Airflow/dags
HostPathType:
airflow-logs:
Type: HostPath (bare host directory volume)
Path: /data/Naveen/Airflow/Logs
HostPathType:
default-token-64cxg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-64cxg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/swedschematablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc to evblfnclnullnull1538
Warning Failed 2m15s (x12 over 4m28s) kubelet, evblfnclnullnull1538 Error: InvalidImageName
Warning InspectFailed 2m (x13 over 4m28s) kubelet, evblfnclnullnull1538 Failed to apply default image tag ":": couldn't parse image reference ":": invalid reference format
It looks like there is no image defined for the workers. You can set this in airflow.cfg: be sure to set worker_container_repository and worker_container_tag. These are null by default, which results in the behaviour you're experiencing.
A working example:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: "apache/airflow"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: "1.10.10-python3.6"
AIRFLOW__KUBERNETES__RUN_AS_USER: "50000"
The Airflow docs have more detailed info on the available properties: https://airflow.apache.org/docs/stable/configurations-ref.html#kubernetes
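If you configure via airflow.cfg instead of environment variables, the equivalent section would look roughly like this (a sketch; the repository/tag values are illustrative and should match your own image):

```ini
[kubernetes]
worker_container_repository = apache/airflow
worker_container_tag = 1.10.10-python3.6
run_as_user = 50000
```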

Heroku is giving me a 503 and I don't know why

So Heroku displays my front end fine, but when I make a call to my back end it only returns a 503, and I've had no luck finding an answer with Google.
Here's my server
const restify = require('restify');
const mongoose = require('mongoose');
const db = mongoose.connection;
const router = require('./routes');
let PORT = process.env.PORT || process.env.VUE_APP_HOST;
require('dotenv').config();

const server = restify.createServer({
  name: 'myapp',
  version: '1.0.0'
});

server.use(restify.plugins.acceptParser(server.acceptable));
server.use(restify.plugins.queryParser());
server.use(restify.plugins.bodyParser());
router.applyRoutes(server);

const uri = process.env.SERVER;
mongoose.connect(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useCreateIndex: true
},
() => console.log('Database connected'));

db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function () {
  // we're connected!
  console.log('mongoose is connected');
});

server.get('/*', restify.plugins.serveStatic({
  directory: './dist',
  default: 'index.html',
}));

server.listen(PORT, function () {
  console.log('%s listening at %s', server.name, server.url);
});
Here is my log file
-----> Node.js app detected
-----> Creating runtime environment
NPM_CONFIG_LOGLEVEL=error
NODE_ENV=production
NODE_MODULES_CACHE=true
NODE_VERBOSE=false
-----> Installing binaries
engines.node (package.json): 13.8.0
engines.npm (package.json): unspecified (use default)
Resolving node version 13.8.0...
Downloading and installing node 13.8.0...
Using default npm version: 6.13.6
-----> Restoring cache
- node_modules
-----> Installing dependencies
Installing node modules (package.json + package-lock)
audited 19449 packages in 12.185s
46 packages are looking for funding
run `npm fund` for details
found 13 vulnerabilities (11 low, 2 moderate)
run `npm audit fix` to fix them, or `npm audit` for details
-----> Build
Running build
> quiz#0.1.0 build /tmp/build_1b1e8d06246614eaf4f8c73b7396ab26
> vue-cli-service build
- Building for production...
WARNING Compiled with 2 warnings 9:31:34 PM
warning
asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
css/chunk-vendors.f2de1e82.css (291 KiB)
js/chunk-vendors.25e54ca6.js (249 KiB)
warning
entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244 KiB). This can impact web performance.
Entrypoints:
app (550 KiB)
css/chunk-vendors.f2de1e82.css
js/chunk-vendors.25e54ca6.js
css/app.83b7036f.css
js/app.d2d7e1c2.js
File Size Gzipped
dist/js/chunk-vendors.25e54ca6.js 249.22 KiB 83.71 KiB
dist/js/chunk-322ddd76.30a6d833.js 84.76 KiB 22.35 KiB
dist/js/chunk-7a6727f2.c0631d11.js 21.09 KiB 6.60 KiB
dist/js/chunk-fd105068.4e2b7450.js 20.63 KiB 5.84 KiB
dist/js/chunk-5e9478d9.aa473c10.js 11.85 KiB 3.72 KiB
dist/js/app.d2d7e1c2.js 9.00 KiB 3.59 KiB
dist/js/chunk-ef9ba634.75ba4138.js 4.23 KiB 1.78 KiB
dist/js/chunk-20b8df38.167c1cfd.js 2.49 KiB 1.09 KiB
dist/js/chunk-2d0ac3bd.a6df4124.js 2.18 KiB 1.05 KiB
dist/js/chunk-2d20ec06.bc0797f6.js 1.77 KiB 0.92 KiB
dist/js/chunk-beee9c80.2f37298d.js 1.37 KiB 0.64 KiB
dist/js/chunk-2d230542.1693dee0.js 1.23 KiB 0.73 KiB
dist/css/chunk-vendors.f2de1e82.css 291.44 KiB 32.34 KiB
dist/css/chunk-fd105068.ee4c284f.css 35.29 KiB 4.49 KiB
dist/css/chunk-322ddd76.fa9ee5dc.css 24.36 KiB 3.88 KiB
dist/css/chunk-beee9c80.0670aa22.css 9.98 KiB 1.31 KiB
dist/css/chunk-5e9478d9.6c52e948.css 8.44 KiB 1.71 KiB
dist/css/chunk-7a6727f2.e044490b.css 3.71 KiB 1.00 KiB
dist/css/chunk-20b8df38.c7315fda.css 0.89 KiB 0.35 KiB
dist/css/app.83b7036f.css 0.03 KiB 0.05 KiB
Images and other types of assets omitted.
DONE Build complete. The dist directory is ready to be deployed.
INFO Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html
-----> Caching build
- node_modules
-----> Pruning devDependencies
removed 1037 packages and audited 444 packages in 13.024s
13 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
-----> Build succeeded!
-----> Discovering process types
Default types for buildpack -> web
-----> Compressing...
Done: 50.5M
-----> Launching...
Released v46
https://quizzor.herokuapp.com/ deployed to Heroku
I cannot find any info on how to resolve this. When I run my server locally everything is fine, but Heroku just seems to hate me.
I went to my MongoDB Atlas page and whitelisted every IP address. I then set my DB connection string as a config var within Heroku.
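One thing worth double-checking (an assumption, not a confirmed diagnosis): Heroku injects the port via the `PORT` environment variable at dyno start, while `VUE_APP_HOST` is a build-time Vue variable that is usually absent at runtime, so the fallback in `let PORT = process.env.PORT || process.env.VUE_APP_HOST` can end up undefined locally and mask binding problems. A small hypothetical helper (`resolvePort` is not part of the posted code) makes the resolution explicit:

```javascript
// Hypothetical helper: resolve the listen port the way Heroku expects.
// Heroku sets PORT at runtime; a fixed numeric fallback is only for local runs.
function resolvePort(env) {
  const port = parseInt(env.PORT, 10);
  return Number.isInteger(port) ? port : 8080; // local-dev fallback
}

console.log(resolvePort({ PORT: '5000' })); // → 5000 (Heroku-style runtime value)
console.log(resolvePort({}));               // → 8080 (no PORT set locally)
```

Checking `heroku logs --tail` for H10/H13 error codes right after a failed request would also narrow down whether the dyno crashed or never bound the port.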

Vagrant - No guest IP was given to the Vagrant core NFS helper

Recently our Vagrant DEV VMs will no longer boot on VirtualBox (Windows 10).
Is this related to the current kernel bug that is causing issues with a lot of the distributions?
BUG: https://bugs.launchpad.net/ubuntu/+source/linux-meta-lts-xenial/+bug/1820526
BUG: https://bugs.launchpad.net/vagrant/+bug/1821083
Here is the launch output:
vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Setting the name of the VM: LOCAL-DEV_20190322_113839
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: You are trying to forward to privileged ports (ports <= 1024). Most
==> default: operating systems restrict this to only privileged process (typically
==> default: processes running as an administrative user). This is a warning in case
==> default: the port forwarding doesn't work. If any problems occur, please try a
==> default: port higher than 1024.
==> default: Forwarding ports...
default: 22 (guest) => 22 (host) (adapter 1)
default: 80 (guest) => 80 (host) (adapter 1)
default: 443 (guest) => 443 (host) (adapter 1)
default: 3306 (guest) => 3306 (host) (adapter 1)
default: 6379 (guest) => 6379 (host) (adapter 1)
default: 4369 (guest) => 4369 (host) (adapter 1)
default: 9090 (guest) => 9090 (host) (adapter 1)
default: 9100 (guest) => 9100 (host) (adapter 1)
default: 9104 (guest) => 9104 (host) (adapter 1)
default: 9150 (guest) => 9150 (host) (adapter 1)
default: 5672 (guest) => 5672 (host) (adapter 1)
default: 8883 (guest) => 8883 (host) (adapter 1)
default: 15672 (guest) => 15672 (host) (adapter 1)
default: 15674 (guest) => 15674 (host) (adapter 1)
default: 15675 (guest) => 15675 (host) (adapter 1)
default: 25672 (guest) => 25672 (host) (adapter 1)
default: 35197 (guest) => 35197 (host) (adapter 1)
default: 1883 (guest) => 1883 (host) (adapter 1)
default: 5673 (guest) => 5673 (host) (adapter 1)
default: 8161 (guest) => 8161 (host) (adapter 1)
default: 61613 (guest) => 61613 (host) (adapter 1)
default: 61614 (guest) => 61614 (host) (adapter 1)
default: 61616 (guest) => 61616 (host) (adapter 1)
default: 9900 (guest) => 9900 (host) (adapter 1)
default: 9910 (guest) => 9910 (host) (adapter 1)
default: 9200 (guest) => 9200 (host) (adapter 1)
default: 9300 (guest) => 9300 (host) (adapter 1)
default: 5601 (guest) => 5601 (host) (adapter 1)
default: 27017 (guest) => 27017 (host) (adapter 1)
default: 27018 (guest) => 27018 (host) (adapter 1)
default: 27019 (guest) => 27019 (host) (adapter 1)
default: 27080 (guest) => 27080 (host) (adapter 1)
default: 28017 (guest) => 28017 (host) (adapter 1)
default: 5432 (guest) => 5432 (host) (adapter 1)
default: 5480 (guest) => 5480 (host) (adapter 1)
default: 10000 (guest) => 10000 (host) (adapter 1)
default: 20000 (guest) => 20000 (host) (adapter 1)
default: 4444 (guest) => 4444 (host) (adapter 1)
default: 3128 (guest) => 3128 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: password
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Connection aborted. Retrying...
default: Warning: Connection reset. Retrying...
==> default: Machine booted and ready!
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 5.2.16
VBoxService inside the vm claims: 5.2.26
Going on, assuming VBoxService is correct...
[default] GuestAdditions seems to be installed (5.2.26) correctly, but not running.
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 5.2.16
VBoxService inside the vm claims: 5.2.26
Going on, assuming VBoxService is correct...
Job for vboxadd-service.service failed because the control process exited with error code. See "systemctl status vboxadd-service.service" and "journalctl -xe" for details.
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 5.2.16
VBoxService inside the vm claims: 5.2.26
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 5.2.16
VBoxService inside the vm claims: 5.2.26
Going on, assuming VBoxService is correct...
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
No guest IP was given to the Vagrant core NFS helper. This is an
internal error that should be reported as a bug.
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
uname -r
4.4.0-143-generic
I have attempted to rebuild the kernel headers. They seemed to install correctly.
sudo dpkg-reconfigure virtualbox-dkms
-------- Uninstall Beginning --------
Module: virtualbox
Version: 5.1.38
Kernel: 4.4.0-143-generic (x86_64)
-------------------------------------
Status: Before uninstall, this module version was ACTIVE on this kernel.
vboxdrv.ko:
- Uninstallation
- Deleting from: /lib/modules/4.4.0-143-generic/updates/dkms/
- Original module
- No original module was found for this module on this kernel.
- Use the dkms install command to reinstall any previous module version.
vboxnetadp.ko:
- Uninstallation
- Deleting from: /lib/modules/4.4.0-143-generic/updates/dkms/
- Original module
- No original module was found for this module on this kernel.
- Use the dkms install command to reinstall any previous module version.
vboxnetflt.ko:
- Uninstallation
- Deleting from: /lib/modules/4.4.0-143-generic/updates/dkms/
- Original module
- No original module was found for this module on this kernel.
- Use the dkms install command to reinstall any previous module version.
vboxpci.ko:
- Uninstallation
- Deleting from: /lib/modules/4.4.0-143-generic/updates/dkms/
- Original module
- No original module was found for this module on this kernel.
- Use the dkms install command to reinstall any previous module version.
depmod....
DKMS: uninstall completed.
------------------------------
Deleting module version: 5.1.38
completely from the DKMS tree.
------------------------------
Done.
Loading new virtualbox-5.1.38 DKMS files...
Building only for 4.4.0-143-generic
Building initial module for 4.4.0-143-generic
Done.
vboxdrv:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.4.0-143-generic/updates/dkms/
vboxnetadp.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.4.0-143-generic/updates/dkms/
vboxnetflt.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.4.0-143-generic/updates/dkms/
vboxpci.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.4.0-143-generic/updates/dkms/
depmod.....
DKMS: install completed.
I also updated VirtualBox on Ubuntu:
sudo apt install virtualbox-5.2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libsdl-ttf2.0-0
The following packages will be REMOVED:
virtualbox virtualbox-guest-additions-iso virtualbox-qt
The following NEW packages will be installed:
libsdl-ttf2.0-0 virtualbox-5.2
0 to upgrade, 2 to newly install, 3 to remove and 4 not to upgrade.
Need to get 0 B/73.9 MB of archives.
After this operation, 28.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Preconfiguring packages ...
(Reading database ... 375081 files and directories currently installed.)
Removing virtualbox-qt (5.1.38-dfsg-0ubuntu1.16.04.3) ...
Removing virtualbox (5.1.38-dfsg-0ubuntu1.16.04.3) ...
Removing virtualbox-guest-additions-iso (5.1.38-0ubuntu1.16.04.1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Processing triggers for hicolor-icon-theme (0.15-0ubuntu1.1) ...
Processing triggers for shared-mime-info (1.5-2ubuntu0.2) ...
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Selecting previously unselected package libsdl-ttf2.0-0:amd64.
(Reading database ... 374771 files and directories currently installed.)
Preparing to unpack .../libsdl-ttf2.0-0_2.0.11-3_amd64.deb ...
Unpacking libsdl-ttf2.0-0:amd64 (2.0.11-3) ...
Selecting previously unselected package virtualbox-5.2.
Preparing to unpack .../virtualbox-5.2_5.2.26-128414~Ubuntu~xenial_amd64.deb ...
Unpacking virtualbox-5.2 (5.2.26-128414~Ubuntu~xenial) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Processing triggers for systemd (229-4ubuntu21.17) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for hicolor-icon-theme (0.15-0ubuntu1.1) ...
Processing triggers for shared-mime-info (1.5-2ubuntu0.2) ...
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up libsdl-ttf2.0-0:amd64 (2.0.11-3) ...
Setting up virtualbox-5.2 (5.2.26-128414~Ubuntu~xenial) ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Command exists and doesn't throw any errors.
sudo /sbin/vboxconfig
vboxdrv.sh: Stopping VirtualBox services.
vboxdrv.sh: Starting VirtualBox services.
I'm stumped, maybe I will have to wait for a new kernel patch?
There isn't a quick and simple fix for this; your best bet is to downgrade the kernel version.
Get a list of kernel headers and images:
dpkg --list | grep linux-header
dpkg --list | grep linux-image
Remove the offending kernel
sudo apt purge linux-image-4.4.0-143-generic linux-headers-4.4.0-143-generic linux-image-unsigned-4.4.0-143-generic linux-modules-4.4.0-143-generic linux-modules-extra-4.4.0-143-generic
apt-mark hold the offending kernel so future updates skip it:
sudo apt-mark hold linux-image-4.4.0-143-generic linux-headers-4.4.0-143-generic linux-image-unsigned-4.4.0-143-generic linux-modules-4.4.0-143-generic linux-modules-extra-4.4.0-143-generic
sudo apt-mark showhold
Remove other kernels and dkms
sudo apt-get remove dkms build-essential linux-headers-*
Reinstall the older kernel and configure dkms using virtualbox-guest-dkms
sudo apt-get install dkms build-essential linux-headers-4.4.0-142-generic virtualbox-guest-dkms
Reboot to enable the old kernel version
sudo reboot
Update and upgrade the remaining packages:
sudo apt update
sudo apt upgrade -y
And you should be able to launch your virtual-box / vagrant boxes again.
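After the reboot, a quick sanity check (a sketch; the version strings here are illustrative, substitute the output of `uname -r` on your machine) confirms the running kernel sorts below the buggy 4.4.0-143 release:

```shell
# Compare the running kernel against the known-bad one using version sort.
running="4.4.0-142-generic"   # e.g. from: uname -r
buggy="4.4.0-143-generic"
oldest="$(printf '%s\n' "$running" "$buggy" | sort -V | head -n1)"
if [ "$oldest" = "$running" ] && [ "$running" != "$buggy" ]; then
  echo "running kernel predates the buggy release"
fi
```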
I pushed a fix some days ago, and today it migrated into the release.
https://launchpad.net/ubuntu/+source/virtualbox/4.3.40-dfsg-0ubuntu14.04.1
https://launchpad.net/ubuntu/+source/virtualbox-lts-xenial/4.3.40-dfsg-0ubuntu1.14.04.1~14.04.1
https://launchpad.net/ubuntu/+source/virtualbox-guest-additions-iso/4.3.40-0ubuntu1.14.04.1
Please use the fixed packages.

JHipster gateway hangs during startup in Microsoft Azure Linux

I'm currently trying to deploy a JHipster registry/MySQL/gateway/microservice stack to OpenShift for testing purposes. Since I'm new to both OpenShift and JHipster, I have not been able to figure out how to make the gateway work. Both the JHipster registry and the DB come up fine and wait to receive a handshake from the gateway, but the gateway does not come up: it gets stuck right after the JHipster logo without any useful log output. Could anyone please help me figure out what's wrong?
JHipster has been deployed using: kompose --file src/main/docker/app.yml --provider openshift up
Gateway params:
EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:${jhipster.registry.password}@jhipster-registry/eureka
JHIPSTER_SLEEP=30
SPRING_CLOUD_CONFIG_URI=http://admin:${jhipster.registry.password}@jhipster-registry/config
SPRING_DATASOURCE_URL=jdbc:mysql://htgateway-mysql:3306/htgateway?useUnicode=true&characterEncoding=utf8&useSSL=false
SPRING_PROFILES_ACTIVE=prod, swagger
Registry:
JHIPSTER_REGISTRY_PASSWORD=***
SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/localhost-config/
SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/docker-config/
SPRING_PROFILES_ACTIVE=dev,native,swagger
SPRING_SECURITY_USER_PASSWORD=***
I'm pretty sure the gateway and microservice were working 3 days ago, but they're not working at all lately, even after I rebuilt everything on a brand-new CentOS Linux machine in Azure (additionally, docker-compose up is also not working properly in the Linux VM).
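When a gateway stalls right after the banner, it is often still waiting on the config server. A hypothetical debugging step (the pod name and port are placeholders; Spring Cloud Config on the registry conventionally answers on 8761 under /config) is to check from inside the cluster that the registry's config endpoint responds at all:

```shell
# From inside the gateway pod, probe the registry's config endpoint.
oc rsh <gateway-pod> \
  curl -sf -u admin:"$JHIPSTER_REGISTRY_PASSWORD" \
  http://jhipster-registry:8761/config/gateway/prod | head
```

If this hangs or returns an auth error, the gateway's startup stall is a connectivity or credentials problem rather than an application bug.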
JHipster registry:
:: JHipster Registry :: Running Spring Boot 2.0.3.RELEASE ::
:: http://www.jhipster.tech ::
2018-07-11 01:45:53.070 INFO 7 --- [ main] i.g.j.registry.JHipsterRegistryApp : The following profiles are active: dev,native,swagger
2018-07-11 01:45:57.231 WARN 7 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2018-07-11 01:45:59.437 INFO 7 --- [ main] i.g.j.registry.config.WebConfigurer : Web application configuration, using profiles: dev
2018-07-11 01:45:59.448 INFO 7 --- [ main] i.g.j.registry.config.WebConfigurer : Web application fully configured
2018-07-11 01:45:59.638 ERROR 7 --- [ main] i.g.j.r.security.jwt.TokenProvider : WARNING! You are using the default JWT secret token, this **must** be changed in production!
2018-07-11 01:46:02.023 WARN 7 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2018-07-11 01:46:02.873 DEBUG 7 --- [ main] i.g.j.c.apidoc.SwaggerAutoConfiguration : Starting Swagger
2018-07-11 01:46:02.893 DEBUG 7 --- [ main] i.g.j.c.apidoc.SwaggerAutoConfiguration : Started Swagger in 20 ms
2018-07-11 01:46:04.556 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Initializing Eureka in region us-east-1
2018-07-11 01:46:04.556 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Client configured to neither register nor query for data.
2018-07-11 01:46:04.566 INFO 7 --- [ main] com.netflix.discovery.DiscoveryClient : Discovery Client initialized at timestamp 1531273564565 with initial instances count: 0
2018-07-11 01:46:04.710 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using JSON encoding codec LegacyJacksonJson
2018-07-11 01:46:04.710 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using JSON decoding codec LegacyJacksonJson
2018-07-11 01:46:04.710 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using XML encoding codec XStreamXml
2018-07-11 01:46:04.710 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using XML decoding codec XStreamXml
2018-07-11 01:46:06.080 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using JSON encoding codec LegacyJacksonJson
2018-07-11 01:46:06.081 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using JSON decoding codec LegacyJacksonJson
2018-07-11 01:46:06.081 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using XML encoding codec XStreamXml
2018-07-11 01:46:06.081 INFO 7 --- [ main] c.n.d.provider.DiscoveryJerseyProvider : Using XML decoding codec XStreamXml
2018-07-11 01:46:06.456 INFO 7 --- [ main] i.g.j.registry.JHipsterRegistryApp : Started JHipsterRegistryApp in 16.619 seconds (JVM running for 17.477)
2018-07-11 01:46:06.464 INFO 7 --- [ main] i.g.j.registry.JHipsterRegistryApp :
----------------------------------------------------------
Application 'jhipster-registry' is running! Access URLs:
Local: http://localhost:8761
External: http://172.17.0.7:8761
Profile(s): [dev, native, swagger]
----------------------------------------------------------
2018-07-11 01:46:06.465 ERROR 7 --- [ main] i.g.j.registry.JHipsterRegistryApp :
----------------------------------------------------------
Your JWT secret key is not configured using Spring Cloud Config, you will not be able to
use the JHipster Registry dashboards to monitor external applications.
Please read the documentation at http://www.jhipster.tech/jhipster-registry/
----------------------------------------------------------
DB:
Initializing database
2018-07-11T01:18:46.046940Z 0 [Warning] InnoDB: New log files created, LSN=45790
2018-07-11T01:18:46.292450Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2018-07-11T01:18:46.393396Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 5bc6e58a-84a8-11e8-84b7-0242ac110008.
2018-07-11T01:18:46.401694Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-07-11T01:18:46.408709Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2018-07-11T01:18:49.829982Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830017Z 1 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830027Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830047Z 1 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830054Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830068Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830100Z 1 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:49.830111Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
Database initialized
Initializing certificates
Generating a 2048 bit RSA private key
.............................+++
................+++
unable to write 'random state'
writing new private key to 'ca-key.pem'
-----
Generating a 2048 bit RSA private key
...........................................+++
....+++
unable to write 'random state'
writing new private key to 'server-key.pem'
-----
Generating a 2048 bit RSA private key
.....................................................+++
................................+++
unable to write 'random state'
writing new private key to 'client-key.pem'
-----
Certificates initialized
MySQL init process in progress...
2018-07-11T01:18:53.373337Z 0 [Note] mysqld (mysqld 5.7.20) starting as process 90 ...
2018-07-11T01:18:53.377206Z 0 [Note] InnoDB: PUNCH HOLE support available
2018-07-11T01:18:53.377227Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-07-11T01:18:53.377231Z 0 [Note] InnoDB: Uses event mutexes
2018-07-11T01:18:53.377235Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2018-07-11T01:18:53.377239Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-07-11T01:18:53.377243Z 0 [Note] InnoDB: Using Linux native AIO
2018-07-11T01:18:53.377553Z 0 [Note] InnoDB: Number of pools: 1
2018-07-11T01:18:53.377689Z 0 [Note] InnoDB: Using CPU crc32 instructions
2018-07-11T01:18:53.379951Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2018-07-11T01:18:53.390423Z 0 [Note] InnoDB: Completed initialization of buffer pool
2018-07-11T01:18:53.399872Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2018-07-11T01:18:53.405088Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2018-07-11T01:18:53.425642Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2018-07-11T01:18:53.425743Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2018-07-11T01:18:53.528298Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2018-07-11T01:18:53.529297Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2018-07-11T01:18:53.529308Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2018-07-11T01:18:53.529551Z 0 [Note] InnoDB: Waiting for purge to start
2018-07-11T01:18:53.579717Z 0 [Note] InnoDB: 5.7.20 started; log sequence number 2565377
2018-07-11T01:18:53.580033Z 0 [Note] Plugin 'FEDERATED' is disabled.
2018-07-11T01:18:53.580711Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2018-07-11T01:18:53.582675Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180711 1:18:53
2018-07-11T01:18:53.605379Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.605423Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.605432Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.605457Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.605463Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.605478Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.607231Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.607244Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:53.617199Z 0 [Note] Event Scheduler: Loaded 0 events
2018-07-11T01:18:53.622678Z 0 [Note] mysqld: ready for connections.
Version: '5.7.20' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL)
2018-07-11T01:18:53.622700Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2018-07-11T01:18:53.622703Z 0 [Note] Beginning of list of non-natively partitioned tables
2018-07-11T01:18:53.635430Z 0 [Note] End of list of non-natively partitioned tables
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
2018-07-11T01:18:58.105504Z 5 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105532Z 5 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105541Z 5 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105573Z 5 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105580Z 5 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105593Z 5 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105623Z 5 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.105633Z 5 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:18:58.121444Z 0 [Note] Giving 1 client threads a chance to die gracefully
2018-07-11T01:18:58.121462Z 0 [Note] Shutting down slave threads
2018-07-11T01:19:00.121609Z 0 [Note] Forcefully disconnecting 0 remaining clients
2018-07-11T01:19:00.121639Z 0 [Note] Event Scheduler: Purging the queue. 0 events
2018-07-11T01:19:00.122581Z 0 [Note] Binlog end
2018-07-11T01:19:00.123328Z 0 [Note] Shutting down plugin 'ngram'
2018-07-11T01:19:00.123345Z 0 [Note] Shutting down plugin 'BLACKHOLE'
2018-07-11T01:19:00.123350Z 0 [Note] Shutting down plugin 'partition'
2018-07-11T01:19:00.123353Z 0 [Note] Shutting down plugin 'ARCHIVE'
2018-07-11T01:19:00.123356Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL'
2018-07-11T01:19:00.123360Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2018-07-11T01:19:00.123363Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2018-07-11T01:19:00.123365Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2018-07-11T01:19:00.123368Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2018-07-11T01:19:00.123371Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2018-07-11T01:19:00.123374Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2018-07-11T01:19:00.123377Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2018-07-11T01:19:00.123380Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2018-07-11T01:19:00.123383Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2018-07-11T01:19:00.123385Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2018-07-11T01:19:00.123388Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2018-07-11T01:19:00.123391Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2018-07-11T01:19:00.123394Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2018-07-11T01:19:00.123397Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2018-07-11T01:19:00.123399Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2018-07-11T01:19:00.123402Z 0 [Note] Shutting down plugin 'INNODB_METRICS'
2018-07-11T01:19:00.123405Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO'
2018-07-11T01:19:00.123408Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2018-07-11T01:19:00.123410Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2018-07-11T01:19:00.123413Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2018-07-11T01:19:00.123416Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2018-07-11T01:19:00.123419Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2018-07-11T01:19:00.123421Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2018-07-11T01:19:00.123424Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM'
2018-07-11T01:19:00.123427Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-07-11T01:19:00.123430Z 0 [Note] Shutting down plugin 'INNODB_CMP'
2018-07-11T01:19:00.123432Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-07-11T01:19:00.123435Z 0 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-07-11T01:19:00.123438Z 0 [Note] Shutting down plugin 'INNODB_TRX'
2018-07-11T01:19:00.123441Z 0 [Note] Shutting down plugin 'InnoDB'
2018-07-11T01:19:00.123486Z 0 [Note] InnoDB: FTS optimize thread exiting.
2018-07-11T01:19:00.133642Z 0 [Note] InnoDB: Starting shutdown...
2018-07-11T01:19:00.233877Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2018-07-11T01:19:00.234237Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 180711 1:19:00
2018-07-11T01:19:01.681745Z 0 [Note] InnoDB: Shutdown completed; log sequence number 12156809
2018-07-11T01:19:01.683697Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2018-07-11T01:19:01.683710Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
2018-07-11T01:19:01.683718Z 0 [Note] Shutting down plugin 'MyISAM'
2018-07-11T01:19:01.683727Z 0 [Note] Shutting down plugin 'CSV'
2018-07-11T01:19:01.683731Z 0 [Note] Shutting down plugin 'MEMORY'
2018-07-11T01:19:01.683735Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2018-07-11T01:19:01.683773Z 0 [Note] Shutting down plugin 'sha256_password'
2018-07-11T01:19:01.683795Z 0 [Note] Shutting down plugin 'mysql_native_password'
2018-07-11T01:19:01.683930Z 0 [Note] Shutting down plugin 'binlog'
2018-07-11T01:19:01.686021Z 0 [Note] mysqld: Shutdown complete
MySQL init process done. Ready for start up.
2018-07-11T01:19:01.968822Z 0 [Note] mysqld (mysqld 5.7.20) starting as process 1 ...
2018-07-11T01:19:01.972476Z 0 [Note] InnoDB: PUNCH HOLE support available
2018-07-11T01:19:01.972494Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-07-11T01:19:01.972499Z 0 [Note] InnoDB: Uses event mutexes
2018-07-11T01:19:01.972503Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2018-07-11T01:19:01.972507Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-07-11T01:19:01.972510Z 0 [Note] InnoDB: Using Linux native AIO
2018-07-11T01:19:01.972879Z 0 [Note] InnoDB: Number of pools: 1
2018-07-11T01:19:01.973031Z 0 [Note] InnoDB: Using CPU crc32 instructions
2018-07-11T01:19:01.975195Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2018-07-11T01:19:01.985489Z 0 [Note] InnoDB: Completed initialization of buffer pool
2018-07-11T01:19:01.988059Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2018-07-11T01:19:02.000190Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2018-07-11T01:19:02.017281Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2018-07-11T01:19:02.017374Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2018-07-11T01:19:02.126848Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2018-07-11T01:19:02.127715Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2018-07-11T01:19:02.127726Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2018-07-11T01:19:02.128093Z 0 [Note] InnoDB: Waiting for purge to start
2018-07-11T01:19:02.178251Z 0 [Note] InnoDB: 5.7.20 started; log sequence number 12156809
2018-07-11T01:19:02.178487Z 0 [Note] Plugin 'FEDERATED' is disabled.
2018-07-11T01:19:02.179702Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2018-07-11T01:19:02.195570Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180711 1:19:02
2018-07-11T01:19:02.197609Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2018-07-11T01:19:02.198096Z 0 [Note] IPv6 is available.
2018-07-11T01:19:02.198133Z 0 [Note] - '::' resolves to '::';
2018-07-11T01:19:02.198157Z 0 [Note] Server socket created on IP: '::'.
2018-07-11T01:19:02.233539Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.233601Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.233611Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.233636Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.233642Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.233656Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.236013Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.236031Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2018-07-11T01:19:02.243545Z 0 [Note] Event Scheduler: Loaded 0 events
2018-07-11T01:19:02.244677Z 0 [Note] mysqld: ready for connections.
Version: '5.7.20' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
2018-07-11T01:19:02.244693Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2018-07-11T01:19:02.244697Z 0 [Note] Beginning of list of non-natively partitioned tables
2018-07-11T01:19:02.261578Z 0 [Note] End of list of non-natively partitioned tables
Gateway:
The application will start in 30s...
[JHipster ASCII-art banner]
:: JHipster :: Running Spring Boot 2.0.3.RELEASE ::
:: https://www.jhipster.tech ::

Docker and libseccomp

I'm running into a problem with Docker. I'm on openSUSE 13.2 with a self-built libseccomp library, a fresh version 2.3.1 from a couple of weeks ago. If I run any Docker container, I get the following error:
hostname:/usr/lib/docker # docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest
container_linux.go:247: starting container process caused "conditional filtering requires libseccomp version >= 2.2.1"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "conditional filtering requires libseccomp version >= 2.2.1".
ERRO[0002] error getting events from daemon: net/http: request canceled
Of course I can pass --security-opt seccomp:unconfined when starting a container, but that is not what I'm after.
# rpm -qa libseccomp
libseccomp-2.3.1-1.x86_64
docker info:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 1
Server Version: 1.13.0
Storage Driver: devicemapper
Pool Name: docker-254:2-655361-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: ext4
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 307.2 MB
Data Space Total: 107.4 GB
Data Space Available: 20.64 GB
Metadata Space Used: 806.9 kB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.147 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.03.01 (2011-10-15)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: oci runc
Default Runtime: runc
Init Binary: docker-init
containerd version: (expected: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e)
runc version: N/A (expected: 2f7393a47307a16f8cee44a37b262e8b81021e3e)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 3.16.7-53-desktop
Operating System: openSUSE 13.2 (Harlequin) (x86_64)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.868 GiB
Name: hostname
ID: DCOH:JZMG:ZUTM:5MSB:DVAG:SQXS:Z36N:5OXU:GQII:YTMO:RWDA:HYBJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
WARNING: No kernel memory limit support
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
It seems the problem may be with runc. I'm currently running into the same error on Leap 42.1 with docker and runc from the OBS Virtualization:containers repo. My setup was running fine until a recent set of package updates.
i | runc | package | 0.1.1+gitr2942_2f7393a-33.2 | x86_64 | Virtualization:containers (openSUSE_Leap_42.1)
i | docker | package | 1.13.0-182.1 | x86_64 | Virtualization:containers (openSUSE_Leap_42.1)
Running strings on /usr/sbin/runc shows:
strings /usr/sbin/runc | grep 2.2.1
[..]
conditional filtering requires libseccomp version >= 2.2.1
[..]
Digging further, the changelog shows:
* Fri Feb 24 2017
- update to docker-1.13.0 requirement
* Mon Dec 19 2016
- update runc to the version used in docker 1.12.5 (bsc#1016307).
And the source for that package has Godeps/_workspace/src/github.com/seccomp/libseccomp-golang/seccomp_internal.go with this on line 299:
return fmt.Errorf("conditional filtering requires libseccomp version >= 2.2.1")
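The guard behind that error string is just a semantic-version comparison against the libseccomp library that runc was linked with. A minimal sketch of that check (the function name checkVersion and the hard-coded call are hypothetical simplifications, not the actual libseccomp-golang API):

```go
package main

import "fmt"

// checkVersion sketches the guard in libseccomp-golang: conditional
// filtering is only allowed when the linked libseccomp is >= 2.2.1.
// In the real library the version triple comes from a cgo call into
// libseccomp; here it is passed in directly for illustration.
func checkVersion(major, minor, micro uint) error {
	if major > 2 ||
		(major == 2 && minor > 2) ||
		(major == 2 && minor == 2 && micro >= 1) {
		return nil
	}
	return fmt.Errorf("conditional filtering requires libseccomp version >= 2.2.1")
}

func main() {
	fmt.Println(checkVersion(2, 3, 1)) // self-built 2.3.1 would pass
	fmt.Println(checkVersion(2, 1, 0)) // an older library triggers the error
}
```

This is why the self-built 2.3.1 package alone doesn't help: runc statically embeds the check and queries the libseccomp version it was built and linked against, so a runc binary built against an older libseccomp keeps failing regardless of what `rpm -qa` reports.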
It looks like there is now an official bug report; this issue impacts a few different SUSE releases that use that repo:
https://bugzilla.opensuse.org/show_bug.cgi?id=1028639

Resources