How to add MACs and KEX algorithms in /etc/ssh/sshd_config on Ubuntu 18.04 on GCP - linux

I added the following MACs to /etc/ssh/sshd_config on an Ubuntu 18.04 compute instance on GCP. But after updating the file the ssh service will not restart, and journalctl -xe shows /etc/ssh/sshd_config line 130: Bad SSH2 mac spec.
MACs hmac-sha1-512-etm@openssh.com,hmac-sha1-512-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
I see the following error when I try to restart ssh:
$ sudo systemctl restart ssh
Job for ssh.service failed because the control process exited with error code.
See "systemctl status ssh.service" and "journalctl -xe" for details.
$ journalctl -xe
--
-- Unit ssh.service has begun starting up.
Aug 02 11:37:17 ubuntu1804 sshd[23779]: /etc/ssh/sshd_config line 130: Bad SSH2 mac spec 'hmac-sha1-512-etm@openssh.com,hmac-sha1-512-etm@openssh.com,umac-128-etm@open
Aug 02 11:37:17 ubuntu1804 systemd[1]: ssh.service: Control process exited, code=exited status=255
Aug 02 11:37:17 ubuntu1804 systemd[1]: ssh.service: Failed with result 'exit-code'.
Aug 02 11:37:17 ubuntu1804 systemd[1]: Failed to start OpenBSD Secure Shell server.
-- Subject: Unit ssh.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit ssh.service has failed.
--
-- The result is RESULT.
Aug 02 11:37:17 ubuntu1804 systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Aug 02 11:37:17 ubuntu1804 systemd[1]: ssh.service: Scheduled restart job, restart counter is at 5.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit ssh.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Aug 02 11:37:17 ubuntu1804 systemd[1]: Stopped OpenBSD Secure Shell server.
-- Subject: Unit ssh.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit ssh.service has finished shutting down.
Aug 02 11:37:17 ubuntu1804 systemd[1]: ssh.service: Start request repeated too quickly.
Aug 02 11:37:17 ubuntu1804 systemd[1]: ssh.service: Failed with result 'exit-code'.
Aug 02 11:37:17 ubuntu1804 systemd[1]: Failed to start OpenBSD Secure Shell server.
-- Subject: Unit ssh.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit ssh.service has failed.
--
-- The result is RESULT.
The following is the error I receive when I try to connect after logging off from the existing SSH session.
ubuntu1804> gcloud compute ssh ubuntu1804 --zone us-east1-b
ssh: connect to host 35.237.57.183 port 22: Connection refused
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I did not find a single clue about this in the Google Cloud documentation. I can fix the server, but I would like to know the right way to add such configuration to sshd_config on an Ubuntu Linux instance on GCP.

Verify the acceptable values for MACs with ssh -Q mac. I'd assume hmac-sha1-512-etm@openssh.com won't be there.
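A minimal check-and-restart sequence might look like this (the MAC list below is only illustrative; use whichever names your own ssh -Q output reports):
# list the MAC and KEX algorithm names this OpenSSH build supports
ssh -Q mac
ssh -Q kex
# in /etc/ssh/sshd_config, use only names from those lists, for example:
# MACs hmac-sha2-512-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
# validate the file before restarting, so a typo can't take sshd down
sudo sshd -t
# restart only once sshd -t reports no errors
sudo systemctl restart ssh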

Related

Error installing docker on Arch Linux: error initializing graphdriver: loopback attach failed

So I stupidly got a laptop with the latest and greatest hardware, so I had to install Arch Linux (kernel 5.19.13-arch1-1) instead of Debian. I'm only vaguely familiar with Linux; my fiancé has been helping, but this has him stumped too, so here we are.
I followed the wiki instructions for installing Docker, but consistently get the following error when attempting to start the service:
sudo journalctl --since "5 minutes ago" -xeu docker.service
Oct 26 14:37:34 werk systemd[1]: docker.service: Start request repeated too quickly.
Oct 26 14:37:34 werk systemd[1]: docker.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit docker.service has entered the 'failed' state with result 'exit-code'.
Oct 26 14:37:34 werk systemd[1]: Failed to start Docker Application Container Engine.
░░ Subject: A start job for unit docker.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit docker.service has finished with a failure.
░░
░░ The job identifier is 10988 and the job result is failed.
sudo journalctl --since "5 minutes ago" -ru docker.service
Oct 26 14:37:34 werk systemd[1]: Failed to start Docker Application Container Engine.
Oct 26 14:37:34 werk systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 26 14:37:34 werk systemd[1]: docker.service: Start request repeated too quickly.
Oct 26 14:37:34 werk systemd[1]: Stopped Docker Application Container Engine.
Oct 26 14:37:34 werk systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 26 14:37:34 werk systemd[1]: Failed to start Docker Application Container Engine.
Oct 26 14:37:34 werk systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 26 14:37:34 werk systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Oct 26 14:37:34 werk dockerd[71427]: failed to start daemon: error initializing graphdriver: loopback attach failed
I don't really understand loopback devices, but the result of losetup -f seemed sus:
losetup -f
losetup: cannot find an unused loop device: Permission denied
[me@werk ~]$ sudo losetup -f
losetup: cannot find an unused loop device: No such device
We also theorized that the docker user didn't have sufficient permissions to do what it needed to do, but could not find any user specified in the docker.service file (located at /etc/systemd/system/multi-user.target.wants/docker.service). We tried to run the ExecStart command line specified in the docker.service file as root, but got the following error:
[me@werk ~]$ sudo /usr/bin/dockerd -H fd://
[sudo] password for me:
INFO[2022-10-26T14:56:00.483059782-06:00] Starting up
failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
So that was a bit of a bust.
To be extra clear, installation was as follows:
pacman -Syu docker
systemctl enable docker.service
systemctl start docker.service
And in case it matters, I'm on a Framework laptop with 12th gen Intel core processors.

Not able to start Red Hat httpd service

When I try to start the httpd service, it fails with the error:
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
output of journalctl -xe
The result is failed.
Dec 08 04:09:49 uls-**** systemd[1]: Unit httpd.service entered failed state.
Dec 08 04:09:49 uls-******** systemd[1]: httpd.service failed.
Dec 08 04:09:49 uls-******** sudo[67525]: pam_unix(sudo:session): session closed for user root
Dec 08 04:09:49 uls-******** polkitd[854]: Unregistered Authentication Agent for unix-process:67526:3062933569 (system bus name :1.159161, object path /org/
Dec 08 04:10:01 uls-******** systemd[1]: Started Session 78106 of user root.
-- Subject: Unit session-78106.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-78106.scope has finished starting up.
--
-- The start-up result is done.
Dec 08 04:10:01 uls-******** systemd[1]: Started Session 78107 of user root.
-- Subject: Unit session-78107.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-78107.scope has finished starting up.
--
-- The start-up result is done.
Dec 08 04:10:01 uls-******** CROND[67561]: (root) CMD (/usr/share/spamassassin/sa-update.cron 2>&1 | tee -a /var/log/sa-update.log)
Dec 08 04:10:01 uls-******** CROND[67562]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Dec 08 04:20:01 uls-******** systemd[1]: Started Session 78109 of user root.
-- Subject: Unit session-78109.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-78109.scope has finished starting up.
--
-- The start-up result is done.
Dec 08 04:20:01 uls-******** systemd[1]: Started Session 78108 of user root.
-- Subject: Unit session-78108.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-78108.scope has finished starting up.
--
-- The start-up result is done.
output of systemctl status httpd.service
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2021-12-08 04:26:51 PST; 53s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 68719 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
Main PID: 68719 (code=exited, status=1/FAILURE)
I tried to change the port number in the httpd.conf file, but I got the same error. Can anyone please help?
Try checking your config; validate it with the command apachectl configtest.
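For example (standard Apache tooling; on this Red Hat system the unit is httpd.service, so its journal can also be read per unit):
sudo apachectl configtest
# prints "Syntax OK" when the configuration is valid, otherwise the offending file and line
sudo journalctl -u httpd.service --since "10 minutes ago"
# the unit's own journal usually shows the real startup error if configtest passes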

(Job for apache2.service failed because the control process exited with error code) occurred after trying to activate the webdav module

I tried to start my Apache web server but I can't. Every time I type in:
service apache2 start
I get the error:
Job for apache2.service failed because the control process exited with error code.
I got the error the first time after I tried to activate the WebDAV module for apache2, but I have already deactivated it. I rebooted the server too, but it had no effect.
I'm running Apache on my second PC and access it via SSH.
Here's my log file:
--
-- A start job for unit phpsessionclean.service has begun execution.
--
-- The job identifier is 1448.
Jul 28 18:39:31 Server-MS-7B28 systemd[1]: phpsessionclean.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit phpsessionclean.service has successfully entered the 'dead' state.
Jul 28 18:39:31 Server-MS-7B28 systemd[1]: Finished Clean php session files.
-- Subject: A start job for unit phpsessionclean.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit phpsessionclean.service has finished successfully.
--
-- The job identifier is 1448.
Jul 28 18:40:25 Server-MS-7B28 sshd[2785]: Received disconnect from 222.186.31.166 port 58094:11: [preauth]
Jul 28 18:40:25 Server-MS-7B28 sshd[2785]: Disconnected from 222.186.31.166 port 58094 [preauth]
Jul 28 18:40:37 Server-MS-7B28 sshd[2787]: Received disconnect from 112.85.42.104 port 12119:11: [preauth]
Jul 28 18:40:37 Server-MS-7B28 sshd[2787]: Disconnected from 112.85.42.104 port 12119 [preauth]
Jul 28 18:41:43 Server-MS-7B28 sudo[2793]: pam_unix(sudo:auth): Couldn't open /etc/securetty: No such file or directory
Jul 28 18:41:46 Server-MS-7B28 sudo[2793]: pam_unix(sudo:auth): Couldn't open /etc/securetty: No such file or directory
Jul 28 18:41:46 Server-MS-7B28 sudo[2793]: elias-server : TTY=pts/0 ; PWD=/home/elias-server ; USER=root ; COMMAND=/bin/bash
Jul 28 18:41:46 Server-MS-7B28 sudo[2793]: pam_unix(sudo:session): session opened for user root by elias-server(uid=0)
Jul 28 18:41:53 Server-MS-7B28 audit[2808]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=2808 comm="apparmor_parser"
Jul 28 18:41:53 Server-MS-7B28 audit[2808]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=28>
Jul 28 18:41:53 Server-MS-7B28 kernel: audit: type=1400 audit(1595954513.320:3245): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/snapd/snap-confine" pi>
Jul 28 18:41:53 Server-MS-7B28 kernel: audit: type=1400 audit(1595954513.320:3246): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/snapd/snap-confine//mo>
Jul 28 18:42:09 Server-MS-7B28 systemd[1]: Starting The Apache HTTP Server...
-- Subject: A start job for unit apache2.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit apache2.service has begun execution.
--
-- The job identifier is 1515.
Jul 28 18:42:09 Server-MS-7B28 apachectl[2837]: AH00526: Syntax error on line 32 of /etc/apache2/sites-enabled/000-default.conf:
Jul 28 18:42:09 Server-MS-7B28 apachectl[2837]: Invalid command 'DAV', perhaps misspelled or defined by a module not included in the server configuration
Jul 28 18:42:09 Server-MS-7B28 apachectl[2817]: Action 'start' failed.
Jul 28 18:42:09 Server-MS-7B28 apachectl[2817]: The Apache error log may have more information.
Jul 28 18:42:09 Server-MS-7B28 systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit apache2.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Jul 28 18:42:09 Server-MS-7B28 systemd[1]: apache2.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit apache2.service has entered the 'failed' state with result 'exit-code'.
Jul 28 18:42:09 Server-MS-7B28 systemd[1]: Failed to start The Apache HTTP Server.
-- Subject: A start job for unit apache2.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit apache2.service has finished with a failure.
--
-- The job identifier is 1515 and the job result is failed.
Thank you for your help,
Elias
The problem is that some configuration files have been deleted; you have to reinstall apache2.
REINSTALL APACHE2:
To replace configuration files that have been deleted, without purging the package, you can do:
sudo apt-get -o DPkg::Options::="--force-confmiss" --reinstall install apache2
To fully remove the apache2 config files, you should:
sudo apt-get purge apache2
which will then let you reinstall it in the usual way with:
sudo apt-get install apache2
This can happen if port 80 is already in use; refer to the link for more details.
You can use this to check whether something is using the port:
netstat -plant | grep 80
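If netstat isn't available, roughly the same check can be done with ss from iproute2:
sudo ss -ltnp | grep ':80'
# lists listening TCP sockets on port 80 together with the owning process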

ArangoDB stops and won't restart after dev-xvdb times out

I have ArangoDB 3.1.16 installed on an AWS C4 instance. I have a Foxx service trying to run in production. It is getting an average of 10 packets of 200 octets per second, and returning a flow of 20 packets of 200 octets per second.
Each time I start running my process, the Foxx service runs with consistent performance for an hour and then suddenly stops. I do not have access to my Foxx API anymore: all requests get connection timeout errors, and nothing is printed in the Foxx logs. I do not have access to the web interface anymore: the page just doesn't load.
After a minute or so, the Foxx logs show me an error message: 'ArangoError 18: lock timeout'
After another minute the logs show me requests that are usually fast but took a very long time (WARNING {queries} slow query: took: 1770.862498)
Using "journalctl -xe", I learned that after a foreign IP tried to connect, I got: "Job dev-xvdb.device/start timed out"
I managed to restart arango using:
ps -eaf |grep arangod
sudo kill #
sudo apt-get --reinstall install arangodb3=3.1.16
How can I solve this recurring issue?
"journalctl -xe" gives me:
Apr 04 15:03:10 my-ip systemd[1]: arangodb3.service: Failed with result 'exit-code'.
-- Subject: Unit arangodb3.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit arangodb3.service has begun starting up.
Apr 04 15:03:10 my-ip arangodb3[11481]: * Starting arango database server arangod
Apr 04 15:03:10 my-ip arangodb3[11481]: * database version check failed, maybe you need to run 'upgrade'?
Apr 04 15:03:10 my-ip systemd[1]: arangodb3.service: Control process exited, code=exited status=1
Apr 04 15:03:10 my-ip systemd[1]: Failed to start LSB: arangodb.
-- Subject: Unit arangodb3.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit arangodb3.service has failed.
--
-- The result is failed.
Apr 04 15:03:10 my-ip systemd[1]: arangodb3.service: Unit entered failed state.
Apr 04 15:03:10 my-ip systemd[1]: arangodb3.service: Failed with result 'exit-code'.
Apr 04 15:03:10 my-ip sudo[11346]: pam_unix(sudo:session): session closed for user root
Apr 04 15:03:17 my-ip sshd[11502]: Did not receive identification string from UNKNOWN IP 1
Apr 04 15:03:21 my-ip sshd[11503]: Connection closed by UNKNOWN IP 2 port 54736 [preauth]
Apr 04 15:03:21 my-ip sshd[11507]: Did not receive identification string from UNKNOWN IP 2
Apr 04 15:03:21 my-ip sshd[11506]: fatal: Unable to negotiate with UNKNOWN IP 2 port 54730: no matching host key type found. Their offer: ssh-dss [preauth]
Apr 04 15:03:21 my-ip sshd[11504]: Connection closed by UNKNOWN IP 2 port 54732 [preauth]
Apr 04 15:03:22 my-ip sshd[11505]: Connection closed by UNKNOWN IP 2 port 54734 [preauth]
Apr 04 15:03:40 my-ip systemd[1]: dev-xvdb.device: Job dev-xvdb.device/start timed out.
Apr 04 15:03:40 my-ip systemd[1]: Timed out waiting for device dev-xvdb.device.
-- Subject: Unit dev-xvdb.device has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dev-xvdb.device has failed.
--
-- The result is timeout.
Apr 04 15:03:40 my-ip systemd[1]: Dependency failed for File System Check on /dev/xvdb.
-- Subject: Unit systemd-fsck@dev-xvdb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck@dev-xvdb.service has failed.
--
-- The result is dependency.
Apr 04 15:03:40 my-ip systemd[1]: Dependency failed for /mnt.
-- Subject: Unit mnt.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mnt.mount has failed.
--
-- The result is dependency.
Apr 04 15:03:40 my-ip systemd[1]: mnt.mount: Job mnt.mount/start failed with result 'dependency'.
Apr 04 15:03:40 my-ip systemd[1]: systemd-fsck@dev-xvdb.service: Job systemd-fsck@dev-xvdb.service/start failed with result 'dependency'.
Apr 04 15:03:40 my-ip systemd[1]: dev-xvdb.device: Job dev-xvdb.device/start failed with result 'timeout'.
I tried:
sudo curl --dump - -X GET http://127.0.0.1:8529/_api/version && echo
It gives me:
HTTP/1.1 401 Unauthorized
Www-Authenticate: Bearer token_type="JWT", realm="ArangoDB"
Server: ArangoDB
Connection: Keep-Alive
Content-Type: text/plain; charset=utf-8
Content-Length: 0
I tried:
ps auxw | fgrep arangod
It gives me:
root 10439 0.0 0.1 82772 8664 ? Ss 10:09 0:00 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb/arangod.pid --temp.path /var/tmp/arangod --log.foreground-tty false --supervisor
arangodb 10440 5.7 94.5 12901776 7242340 ? Sl 10:09 16:36 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb/arangod.pid --temp.path /var/tmp/arangod --log.foreground-tty false --supervisor
ubuntu 11339 0.0 0.0 12916 1000 pts/0 R+ 14:59 0:00 grep -F --color=auto arangod
arangod restart gives me:
2017-04-04T15:01:16Z [11344] INFO ArangoDB 3.1.16 [linux] 64bit, using VPack 0.1.30, ICU 54.1, V8 5.0.71.39, OpenSSL 1.0.2g 1 Mar 2016
2017-04-04T15:01:16Z [11344] INFO using SSL options: SSL_OP_CIPHER_SERVER_PREFERENCE, SSL_OP_TLS_ROLLBACK_BUG
2017-04-04T15:01:16Z [11344] FATAL could not open shutdown file '/var/log/arangodb3/restart/SHUTDOWN': internal error
'service arangodb3 restart' gives me (after a short wait time):
Job for arangodb3.service failed because the control process exited with error code. See "systemctl status arangodb3.service" and "journalctl -xe" for details.
'systemctl status arangodb3.service' gives me:
arangodb3.service - LSB: arangodb
Loaded: loaded (/etc/init.d/arangodb3; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2017-04-04 15:03:10 UTC; 34s ago
Docs: man:systemd-sysv-generator(8)
Process: 11352 ExecStop=/etc/init.d/arangodb3 stop (code=exited, status=0/SUCCESS)
Process: 11481 ExecStart=/etc/init.d/arangodb3 start (code=exited, status=1/FAILURE)
Tasks: 83
Memory: 6.5G
CPU: 73ms
CGroup: /system.slice/arangodb3.service
├─10439 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb/arangod.pid --temp.path /var/tmp/arangod --log.foreground-tty false --supervisor
└─10440 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb/arangod.pid --temp.path /var/tmp/arangod --log.foreground-tty false --supervisor
Apr 04 15:03:10 my-ip systemd[1]: Starting LSB: arangodb...
Apr 04 15:03:10 my-ip arangodb3[11481]: * Starting arango database server arangod
Apr 04 15:03:10 my-ip arangodb3[11481]: * database version check failed, maybe you need to run 'upgrade'?
Apr 04 15:03:10 my-ip systemd[1]: arangodb3.service: Control process exited, code=exited status=1
Apr 04 15:03:10 my-ip systemd[1]: Failed to start LSB: arangodb.
Apr 04 15:03:10 my-ip systemd[1]: arangodb3.service: Unit entered failed state.
From your log output it seems that the mounted disk volume goes away.
If the storage goes away under any kind of database, there is no reasonable way to continue working.
Thus the effect you see is that ArangoDB isn't able to work with its data anymore - from its perspective it simply isn't there anymore.
One effect observed by others is that I/O credits on AWS dry up, which could also be the reason for what you see above.
https://aws.amazon.com/blogs/aws/new-burst-balance-metric-for-ec2s-general-purpose-ssd-gp2-volumes/
If I got that correctly, you can get more credits if you choose a bigger volume size. If that doesn't help, you either need to scale down your test scenario or choose a different hosting approach that doesn't have limitations on I/O operations.
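If the data directory sits on a gp2 EBS volume, one way to check whether burst credits are exhausted is the volume's BurstBalance CloudWatch metric; a rough sketch with the AWS CLI (the volume ID and time range below are placeholders):
aws cloudwatch get-metric-statistics \
  --namespace AWS/EBS --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time 2017-04-04T00:00:00Z --end-time 2017-04-04T23:59:59Z \
  --period 300 --statistics Average
# a BurstBalance near 0 means the volume has run out of I/O credits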

How to port an RPM package consisting of a SysV init script to systemd?

I have created an RPM package for my daemon which, on installation, creates a service init script under /etc/init.d/.
Now I want to port this RPM package to CentOS 7.1 and use the systemd framework so the service starts at boot as well as when an admin runs start/stop commands.
I couldn't find any tutorial. Please help.
EDIT:
I tried the suggestion given by msuchy; here is my observation.
On a SysV-based framework (CentOS 6.5):
[root@adil work]# /etc/init.d/daemon_script status
Service daemon: Stopped
[root@adil work]# /etc/init.d/daemon_script start
Starting daemon: Initializing daemon... [ OK ]
[root@adil work]#
[root@adil work]# /etc/init.d/daemon_script status
Service daemon: Running
[root@adil work]#
[root@adil work]# /etc/init.d/daemon_script stop
Shutting down parent daemon: [ OK ]
[root@adil work]# /etc/init.d/daemon_script status
Service daemon: Stopped
[root@adil work]#
===========
On a systemd-based framework, I installed the same RPM on CentOS 7.1:
[root@localhost x86_64]# /etc/init.d/daemon_script
Usage: /etc/init.d/daemon_script {start|stop|restart|status}
[root@localhost x86_64]# /etc/init.d/daemon_script start
Starting daemon_script (via systemctl): Warning: Unit file of daemon_script.service changed on disk, 'systemctl daemon-reload' recommended.
Job for daemon_script.service failed. See 'systemctl status daemon_script.service' and 'journalctl -xn' for details.
[FAILED]
[root@localhost x86_64]# systemctl daemon-reload
[root@localhost x86_64]# systemctl status daemon_script.service
daemon_script.service - SYSV: start and stop Test daemon service.
Loaded: loaded (/etc/rc.d/init.d/daemon_script)
Active: failed (Result: exit-code) since Fri 2015-09-11 15:30:44 IST; 32s ago
Sep 11 15:30:44 localhost.localdomain systemd[1]: Starting SYSV: start and st...
Sep 11 15:30:44 localhost.localdomain systemd[1]: daemon_script.service: cont...
Sep 11 15:30:44 localhost.localdomain systemd[1]: Failed to start SYSV: start...
Sep 11 15:30:44 localhost.localdomain systemd[1]: Unit daemon_script.service ...
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost x86_64]# systemctl status daemon_script.service -l
daemon_script.service - SYSV: start and stop Test daemon service.
Loaded: loaded (/etc/rc.d/init.d/daemon_script)
Active: failed (Result: exit-code) since Fri 2015-09-11 15:30:44 IST; 46s ago
Sep 11 15:30:44 localhost.localdomain systemd[1]: Starting SYSV: start and stop Test daemon service....
Sep 11 15:30:44 localhost.localdomain systemd[1]: daemon_script.service: control process exited, code=exited status=203
Sep 11 15:30:44 localhost.localdomain systemd[1]: Failed to start SYSV: start and stop Test daemon service..
Sep 11 15:30:44 localhost.localdomain systemd[1]: Unit daemon_script.service entered failed state.
[root@localhost x86_64]#
output of journalctl -xn
-- Logs begin at Fri 2015-09-11 14:50:35 IST, end at Fri 2015-09-11 15:40:01 IST. --
Sep 11 15:31:03 localhost.localdomain systemd[1]: [/usr/lib/systemd/system/dm-event.socket:10] Unknown lvalue 'RemoveOnStop' in section 'Socket'
Sep 11 15:39:33 localhost.localdomain systemd[1]: Starting SYSV: start and stop Test daemon service....
-- Subject: Unit daemon_script.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit daemon_script.service has begun starting up.
Sep 11 15:39:33 localhost.localdomain systemd[8509]: Failed at step EXEC spawning /etc/rc.d/init.d/daemon_script: Exec format error
-- Subject: Process /etc/rc.d/init.d/daemon_script could not be executed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- The process /etc/rc.d/init.d/daemon_script could not be executed and failed.
--
-- The error number returned while executing this process is 8.
Sep 11 15:39:33 localhost.localdomain systemd[1]: daemon_script.service: control process exited, code=exited status=203
Sep 11 15:39:33 localhost.localdomain systemd[1]: Failed to start SYSV: start and stop Test daemon service..
-- Subject: Unit daemon_script.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit daemon_script.service has failed.
--
-- The result is failed.
Sep 11 15:39:33 localhost.localdomain systemd[1]: Unit daemon_script.service entered failed state.
Sep 11 15:40:01 localhost.localdomain systemd[1]: Created slice user-0.slice.
-- Subject: Unit user-0.slice has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-0.slice has finished starting up.
--
-- The start-up result is done.
Sep 11 15:40:01 localhost.localdomain systemd[1]: Starting Session 7 of user root.
-- Subject: Unit session-7.scope has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-7.scope has begun starting up.
Sep 11 15:40:01 localhost.localdomain systemd[1]: Started Session 7 of user root.
-- Subject: Unit session-7.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-7.scope has finished starting up.
--
-- The start-up result is done.
Sep 11 15:40:01 localhost.localdomain CROND[8528]: (root) CMD (/usr/lib64/sa/sa1 1 1)
[root@localhost x86_64]#
You do not need to migrate your scripts.
It is probably better if you do, since you can utilize some interesting features of systemd and your files will be smaller, but you do not need to migrate. systemd works correctly with SysV init files too.
You cannot locate any SysV init files on a CentOS 7 installation because the Red Hat packagers (who created CentOS) put effort into packaging and migrated all SysV files to unit files. But you do not need to.
There is only one task you need to do: once you place a new SysV file, you must reload the systemd manager configuration using
# systemctl daemon-reload
That is all.
I'll give you a small example:
# cat /etc/init.d/foo
#!/usr/bin/sh
echo ahoy
# chmod a+x /etc/init.d/foo
# systemctl start foo
Failed to start foo.service: Unit foo.service failed to load: No such file or directory.
# systemctl daemon-reload
# systemctl start foo
# journalctl -xn
-- Subject: Unit foo.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit foo.service has finished starting up.
--
-- The start-up result is done.
# service foo start
ahoy
So you can use all the commands (chkconfig, service) you are used to from CentOS 6. And when you have time, you can study man systemd.unit(5) and a bunch of other man pages (see the "SEE ALSO" section of that man page).
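For reference, if you later decide to ship a native unit instead of the SysV script, a minimal sketch could look like the following (the description, daemon path and PID file are placeholders, not taken from the question):
# /usr/lib/systemd/system/daemon_script.service
[Unit]
Description=Test daemon service
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/mydaemon --pid-file /var/run/mydaemon.pid
PIDFile=/var/run/mydaemon.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target
In the RPM spec you would install this file into %{_unitdir} and use the %systemd_post, %systemd_preun and %systemd_postun_with_restart scriptlets (provided by systemd-rpm-macros) instead of the chkconfig-based ones, then enable it with systemctl enable daemon_script.service.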
