ASP.NET Core Angular app fails to run on Ubuntu 16.04 with Nginx and Systemd

I have an ASP.NET Core Angular application targeting dotnet 1.1.0.
I installed Nginx on my Ubuntu 16.04 server and configured the Nginx config file as follows:
server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
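After editing the Nginx config it is worth validating and reloading it; a minimal sketch, assuming the server block above lives in the usual sites-available/sites-enabled layout:
sudo nginx -t                  # validate the configuration for syntax errors
sudo systemctl reload nginx    # reload so the reverse-proxy block takes effect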
My myapp.service file is as follows:
[Unit]
Description=Sample application.
[Service]
Type=simple
WorkingDirectory=/var/myappfolder
ExecStart=/usr/bin/dotnet /var/myappfolder/myapp.dll
#User=web
[Install]
WantedBy=multi-user.target
I tested this setup with a simple sample app and it worked fine. However, as soon as I deploy my actual app to /var/myappfolder and run
systemctl start mywebsite
systemctl daemon-reload
and then check
systemctl status mywebsite
I get this error:
jtrade.service - Sample application.
   Loaded: loaded (/lib/systemd/system/jtrade.service; disabled; vendor preset: enabled)
   Active: failed (Result: signal) since Wed 2017-08-30 18:08:08 UTC; 9s ago
  Process: 4640 ExecStart=/usr/bin/dotnet /var/jtrade/jtradep.dll (code=killed, signal=ABRT)
 Main PID: 4640 (code=killed, signal=ABRT)
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Configurer.NuGetCacheSentinel.get_NuGetCachePath()
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Configurer.NuGetCacheSentinel.Exists()
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Configurer.DotnetFirstTimeUseConfigurer.ShouldPrimeNugetCache()
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Configurer.DotnetFirstTimeUseConfigurer.Configure()
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Cli.Program.ConfigureDotNetForFirstTimeUse(INuGetCacheSentinel nugetCacheSentinel)
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Cli.Program.ProcessArgs(String[] args, ITelemetry telemetryClient)
Aug 30 18:08:08 localhost dotnet[4640]: at Microsoft.DotNet.Cli.Program.Main(String[] args)
Aug 30 18:08:08 localhost systemd[1]: jtrade.service: Main process exited, code=killed, status=6/ABRT
Aug 30 18:08:08 localhost systemd[1]: jtrade.service: Unit entered failed state.
Aug 30 18:08:08 localhost systemd[1]: jtrade.service: Failed with result 'signal'.
So I dug deeper into this error with journalctl -u myappname and got some more useful info:
Started Sample application..
Aug 31 05:13:34 localhost dotnet[10290]: Unhandled Exception: System.InvalidOperationException: Required environment variable 'HOME' is not set. Try setting 'HOME' and running the operation again.
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Common.NuGetEnvironment.GetValueOrThrowMissingEnvVar(Func`1 getValue, String name)
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Common.NuGetEnvironment.GetHome()
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Common.NuGetEnvironment.<>c.<.cctor>b__12_0()
Aug 31 05:13:34 localhost dotnet[10290]: at System.Lazy`1.CreateValue()
Aug 31 05:13:34 localhost dotnet[10290]: --- End of stack trace from previous location where exception was thrown ---
Aug 31 05:13:34 localhost dotnet[10290]: at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
Aug 31 05:13:34 localhost dotnet[10290]: at System.Lazy`1.get_Value()
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Common.NuGetEnvironment.GetFolderPath(SpecialFolder folder)
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Common.NuGetEnvironment.GetFolderPath(NuGetFolderPath folder)
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Configuration.SettingsUtility.GetGlobalPackagesFolder(ISettings settings)
Aug 31 05:13:34 localhost dotnet[10290]: at NuGet.Configuration.NuGetPathContext.Create(ISettings settings)
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Configurer.NuGetCacheSentinel.get_NuGetCachePath()
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Configurer.NuGetCacheSentinel.Exists()
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Configurer.DotnetFirstTimeUseConfigurer.ShouldPrimeNugetCache()
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Configurer.DotnetFirstTimeUseConfigurer.Configure()
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Cli.Program.ConfigureDotNetForFirstTimeUse(INuGetCacheSentinel nugetCacheSentinel)
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Cli.Program.ProcessArgs(String[] args, ITelemetry telemetryClient)
Aug 31 05:13:34 localhost dotnet[10290]: at Microsoft.DotNet.Cli.Program.Main(String[] args)
Aug 31 05:13:34 localhost systemd[1]: jtrade.service: Main process exited, code=killed, status=6/ABRT
Aug 31 05:13:34 localhost systemd[1]: jtrade.service: Unit entered failed state.
Aug 31 05:13:34 localhost systemd[1]: jtrade.service: Failed with result 'signal'.
From here, if I run printenv to see my environment variables, I find that HOME=/root.
Maybe it should be set to something else?

It turned out I just had to add
Environment=HOME=/root
to the [Service] section of the .service file, and everything started working.
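For reference, this is a minimal sketch of the updated unit with that fix applied, using the same paths as above:
[Unit]
Description=Sample application.
[Service]
Type=simple
WorkingDirectory=/var/myappfolder
ExecStart=/usr/bin/dotnet /var/myappfolder/myapp.dll
# The .NET CLI / NuGet first-time configuration needs HOME to be set for the service user
Environment=HOME=/root
#User=web
[Install]
WantedBy=multi-user.target
If you later uncomment User=web, point HOME at that user's home directory (for example /home/web) instead of /root, then run systemctl daemon-reload and systemctl restart myapp so systemd picks up the change.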

Related

systemctl application not showing up on frontend or api?

I am using a droplet for an application and I am trying to set up my BACKEND server, however I am getting these errors. Everything seems to be running, but my frontend can't seem to pick it up. This server was working at one point but stopped working after the branch was changed to master.
Any help would be good.
This log looks correct as well:
Mar 30 14:41:31 ids-bots node[2822]: at Server.emit (node:events:527:28)
Mar 30 14:41:31 ids-bots node[2822]: at parserOnIncoming (node:_http_server:951:12)
Mar 30 14:41:31 ids-bots node[2822]: at HTTPParser.parserOnHeadersComplete (node:_http_common:128:17)
Mar 30 22:41:20 ids-bots systemd[1]: Stopping Jem...
Mar 30 22:41:21 ids-bots systemd[1]: jem.service: Main process exited, code=dumped, status=3/QUIT
Mar 30 22:41:21 ids-bots systemd[1]: jem.service: Failed with result 'core-dump'.
Mar 30 22:41:21 ids-bots systemd[1]: Stopped Jem.
Mar 30 22:41:21 ids-bots systemd[1]: Started Jem.
Mar 30 22:41:22 ids-bots node[6418]: Jem API listening on port 3001
Mar 30 22:41:22 ids-bots node[6418]: Connected database to mongodb://127.0.0.1:27017/jem
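A quick way to confirm whether the backend is actually reachable from the droplet itself is to check the listening socket and hit the API locally; a minimal sketch, assuming the jem service and port 3001 shown in the log:
sudo ss -tlnp | grep 3001        # confirm node is listening on port 3001
curl -v http://127.0.0.1:3001/   # call the API locally, bypassing any proxy or firewall
sudo journalctl -u jem -f        # follow the service log while the frontend retries
If the local curl works but the frontend still cannot reach it, the problem is more likely the reverse proxy, firewall, or the URL the frontend is configured with than the systemd unit itself.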

How to set systemd to be sure that it stops my application on server shutdown/reboot

I would like to set up systemd to start my application when my server is starting up and terminate it cleanly when the server is shutting down.
This is my config file:
[Unit]
Description=My Application
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/my_app start
ExecReload=/usr/local/bin/my_app reload
ExecStop=/usr/local/bin/my_app stop
TimeoutStopSec=5
KillMode=mixed
[Install]
WantedBy=multi-user.target
I think I missed something, because when I reboot my server there's no log about shutting down the application.
log after reboot:
# journalctl -u my_app.service
...
-- Reboot --
nov. 24 10:58:51 localhost systemd[1]: Starting My App...
nov. 24 10:58:51 localhost my_app[3173]: ############################################
nov. 24 10:58:51 localhost my_app[3173]: My App start at 2020-11-24 10:58:51
nov. 24 10:58:51 localhost my_app[3173]: ############################################
nov. 24 10:58:51 localhost my_app[3173]: Starting My App
nov. 24 10:58:51 localhost my_app[3173]: debut=Tue Nov 24 10:58:51 CET 2020
nov. 24 10:58:51 localhost my_app[3173]: Tue Nov 24 10:58:51 CET 2020
nov. 24 10:58:51 localhost my_app[3173]: Starting my_app... Saving PID 3185 to /var/run/my_app.pid
nov. 24 10:58:51 localhost my_app[3173]: Ok
nov. 24 10:58:51 localhost my_app[3173]: lock file created
nov. 24 10:58:51 localhost systemd[1]: Started My App.
But with a shutdown everything looks good; I can see nov. 24 13:47:27 localhost systemd[1]: Stopping My App...
log after shutdown and server start:
# journalctl -u my_app.service
...
-- Reboot --
...skipping...
nov. 24 13:43:45 localhost my_app[3138]: Tue Nov 24 13:43:45 CET 2020
nov. 24 13:43:45 localhost my_app[3138]: Starting my_app... Saving PID 3151 to /var/run/my_app.pid
nov. 24 13:43:45 localhost my_app[3138]: Ok
nov. 24 13:43:45 localhost systemd[1]: Started My App.
nov. 24 13:43:45 localhost my_app[3138]: lock file created
nov. 24 13:47:27 localhost systemd[1]: Stopping My App...
nov. 24 13:47:27 localhost my_app[3622]: ############################################
nov. 24 13:47:27 localhost my_app[3622]: My App stop at 2020-11-24 13:47:27
nov. 24 13:47:27 localhost my_app[3622]: ############################################
nov. 24 13:47:27 localhost my_app[3622]: Shutdown My App
nov. 24 13:47:27 localhost my_app[3622]: debut=Tue Nov 24 13:47:27 CET 2020
nov. 24 13:47:27 localhost my_app[3622]: Tue Nov 24 13:47:27 CET 2020
nov. 24 13:47:27 localhost my_app[3622]: root 3151 1 0 13:43 ? 00:00:00 ./ucybsmgr -iucybsmgr.ini AE_PROD
nov. 24 13:47:27 localhost my_app[3622]: root 3524 3151 0 13:44 ? 00:00:00 /usr/local/my_app/agents/linux/bin/ucxjlx6
nov. 24 13:47:27 localhost my_app[3622]: nfsnobo+ 3526 3524 0 13:44 ? 00:00:00 ucxjlx6-listener
nov. 24 13:47:27 localhost my_app[3622]: root 3649 3644 0 13:47 ? 00:00:00 grep uc
nov. 24 13:47:27 localhost my_app[3622]: Stopping my_app 3151
nov. 24 13:47:27 localhost my_app[3622]: Ok
nov. 24 13:47:27 localhost my_app[3622]: lock file removed
nov. 24 13:47:27 localhost systemd[1]: Stopped My App.
That's why I have some doubts about my config, so could someone tell me whether I'm wrong or not?
Thanks @kamilcuk for the link. I made some changes and my final configuration file looks like this:
[Unit]
Description=My Application
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/my_app start
ExecReload=/usr/local/bin/my_app reload
ExecStop=/usr/local/bin/my_app stop
TimeoutStopSec=10
# KillMode=mixed
[Install]
WantedBy=multi-user.target
I disabled the service and removed all the links in /etc/systemd/ before re-enabling it, and now everything looks good on shutdown and reboot.
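A minimal sketch of that disable/re-enable cycle, assuming the unit is named my_app.service:
sudo systemctl disable my_app.service   # removes the WantedBy symlinks under /etc/systemd/system
sudo systemctl daemon-reload            # pick up the edited unit file
sudo systemctl enable my_app.service    # recreate the multi-user.target.wants symlink
sudo systemctl start my_app.service
With Type=oneshot and RemainAfterExit=yes, systemd treats the unit as active after ExecStart completes, so ExecStop is run at shutdown as long as the unit was started through systemd and the enable symlinks are in place.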

Apache2 is not starting, I am using Ubuntu 16.04

I am unable to start the Apache2 server after I modified the dir.conf file, even after changing it back to normal.
I modified /etc/apache2/mods-enabled/dir.conf and changed the order, putting index.php in
first place before index.html, so the file contents were changed from
"DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm" to
"DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm". But even after
changing that back to the original order, Apache2 still does not start and gives the same
error. I tried restarting as well as stop and start, but nothing seems to work.
**Here are the details of systemctl status apache2.service**
● apache2.service - LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Tue 2020-11-24 12:28:31 IST; 5min ago
Docs: man:systemd-sysv-generator(8)
Process: 20300 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
Process: 25755 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)
Nov 24 12:28:31 localhost apache2[25755]: *
Nov 24 12:28:31 localhost apache2[25755]: * The apache2 configtest failed.
Nov 24 12:28:31 localhost apache2[25755]: Output of config test was:
Nov 24 12:28:31 localhost apache2[25755]: AH00534: apache2: Configuration error: More than one MPM loaded.
Nov 24 12:28:31 localhost apache2[25755]: Action 'configtest' failed.
Nov 24 12:28:31 localhost apache2[25755]: The Apache error log may have more information.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Control process exited, code=exited status=1
Nov 24 12:28:31 localhost systemd[1]: Failed to start LSB: Apache2 web server.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Unit entered failed state.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Failed with result 'exit-code'.
**Here are the details of journalctl -xe**
Nov 24 12:44:42 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:42 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:43 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:43 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:44 localhost systemd[1]: Failed to start MySQL Community Server.
-- Subject: Unit mysql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has failed.
--
-- The result is failed.
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Unit entered failed state.
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Failed with result 'exit-code'.
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Service hold-off time over, scheduling restart.
Nov 24 12:44:44 localhost systemd[1]: Stopped MySQL Community Server.
-- Subject: Unit mysql.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has finished shutting down.
Nov 24 12:44:44 localhost systemd[1]: Starting MySQL Community Server...
-- Subject: Unit mysql.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has begun starting up.
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.545855Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.545908Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.705792Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --expl
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.707018Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.32-0ubuntu0.16.04.1) starting as process 538
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709650Z 0 [ERROR] Could not open file '/var/log/mysql/error.log' for error logging: No suc
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709678Z 0 [ERROR] Aborting
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709703Z 0 [Note] Binlog end
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709770Z 0 [Note] /usr/sbin/mysqld: Shutdown complete
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
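The journalctl -xe output above is dominated by an unrelated MySQL failure; the Apache2 problem itself is the "More than one MPM loaded" configtest error. No answer is recorded here, but a common way to investigate it is to check which MPM modules are enabled and keep exactly one; a minimal sketch, assuming the stock Ubuntu a2enmod/a2dismod helpers (which MPM to keep is a hypothetical choice here, noting that mod_php requires mpm_prefork):
apache2ctl configtest                        # reproduces the "More than one MPM loaded" error
ls /etc/apache2/mods-enabled/ | grep mpm     # list the MPM modules currently enabled
sudo a2dismod mpm_event                      # example: disable one of them
sudo a2enmod mpm_prefork                     # keep exactly one MPM enabled
sudo systemctl restart apache2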

Bug: Varnish 4 install on CentOS 7 (systemctl)

I installed Varnish from yum, but I get an immediate error when starting it via systemctl.
Jul 28 14:11:54 localhost.localdomain varnishd[6546]: .init_func = VGC_function_vcl_init,
Jul 28 14:11:54 localhost.localdomain varnishd[6546]: .fini_func = VGC_function_vcl_fini,
Jul 28 14:11:54 localhost.localdomain varnishd[6546]: };
Jul 28 14:11:54 localhost.localdomain varnishd[6557]: Assert error in main(), mgt/mgt_main.c line 686:
Jul 28 14:11:54 localhost.localdomain varnishd[6557]: Condition((daemon(1,0)) == 0) not true.
Jul 28 14:11:54 localhost.localdomain varnishd[6557]: errno = 19 (No such device)
Jul 28 14:11:54 localhost.localdomain systemd[1]: Failed to read PID from file /var/run/varnish.pid: Invalid argument
Jul 28 14:11:54 localhost.localdomain systemd[1]: varnish.service never wrote its PID file. Failing.
Jul 28 14:11:54 localhost.localdomain systemd[1]: Failed to start Varnish a high-perfomance HTTP accelerator.
Jul 28 14:11:54 localhost.localdomain systemd[1]: Unit varnish.service entered failed state.
SELinux is disabled; package was installed via root. This is a fresh install.
Looks like you need to reboot. ;)
The message:
Failed to read PID from file /var/run/varnish.pid: Invalid argument
is non-critical. It is just systemd trying to read the pidfile too early. You can poll status with:
systemctl status varnish
If its "Main PID" entry is matching the contents of /var/run/varnish.pid(and if varnishd is started via systemd, it always does), you can ignore that message. This is fixed in later versions of systemd.

Can not start keystone service

I installed packstack on a fresh installation of Fedora 21 with all updates. When I ran
packstack --allinone I received this error:
ERROR : Error appeared during Puppet run: 192.168.1.*_keystone.pp Error:
Could not start Service[keystone]: Execution of '/sbin/service openstack-keystone
start' returned 1: Redirecting to /bin/systemctl start openstack-keystone.service
You will find full trace in log /var/tmp/packstack/20141223-022613-whLvTs/manifests
/192.168.1.*_keystone.pp.log
And this is the log:
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder#services]:
Dependency Service[keystone] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder#services]:
Skipping because of failed dependencies
Notice: Finished catalog run in 13.02 seconds
With systemctl status openstack-keystone.service I get this:
openstack-keystone.service - OpenStack Identity Service (code-named Keystone)
Loaded: loaded (/usr/lib/systemd/system/openstack-keystone.service; disabled)
Active: failed (Result: start-limit) since Tue 2014-12-23 19:47:36 EET; 1min 59s ago
Process: 22526 ExecStart=/usr/bin/keystone-all (code=exited, status=1/FAILURE)
Main PID: 22526 (code=exited, status=1/FAILURE)
Dec 23 19:47:35 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:35 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:35 localhost.localdomain systemd[1]: openstack-keystone.servic...
Dec 23 19:47:36 localhost.localdomain systemd[1]: start request repeated to...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:36 localhost.localdomain systemd[1]: openstack-keystone.servic...
This can happen due to an SELinux AVC denial because of a missing policy.
You can try putting SELinux into permissive mode:
# setenforce 0
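To confirm whether SELinux is actually the cause, you can check the current mode and look for recent AVC denials; a minimal sketch, assuming auditd is running:
getenforce                                     # current mode: Enforcing, Permissive or Disabled
ausearch -m avc -ts recent                     # recent AVC denials from the audit log
grep denied /var/log/audit/audit.log | tail    # fallback if ausearch is not available
If keystone starts fine after setenforce 0, the failure was an SELinux denial and the proper fix is an updated policy rather than leaving SELinux in permissive mode.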
A similar bug
