We're dealing with an issue related to the BlueZ supervision_timeout value of 42 on a BLE connection. After following Excessive Bluetooth LE timeouts on Linux? and increasing the supervision_timeout to 200, we've seen a significant decrease in BLE connection timeouts.
Here's the issue: we're building our own Archiso ISO for these computers, but we cannot write to /sys/kernel/debug/bluetooth/hci0/supervision_timeout during the chroot, as the /sys/kernel/debug directory doesn't exist at that point.
And even if the file is updated after boot (by manually writing to it with nvim as root), it reverts to 42 on the next restart.
So I see a couple of possibilities, but I'm unsure how to carry them out.
During the Archiso installation, make the supervision_timeout file contain 200 instead of 42 (though we can't just copy a file in during the chroot step since, again, the /sys/kernel/.../ directory isn't there at that point). Is this file created by the BlueZ stack itself? I've been looking for documentation but can't find anything other than the BlueZ source files that define this number for supervision_timeout.
Write to the file every time the computer starts. However, I can't do this from .xinitrc, as only the root user has access to the /sys/kernel/debug/ directory.
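For reference, this is a quick way to inspect and hand-write the value (hci0 is the usual name of the first adapter; sudo tee is used because a plain shell redirection would run as the unprivileged user):
$ sudo cat /sys/kernel/debug/bluetooth/hci0/supervision_timeout
42
$ echo 200 | sudo tee /sys/kernel/debug/bluetooth/hci0/supervision_timeout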
Posting this in the hope it helps someone else (and, admittedly, myself, in case I forget, as I can no longer find the forum topic below via a Google search).
See https://bbs.archlinux.org/viewtopic.php?id=279872
So, following V1del's advice, I can successfully update the BLE connection parameter (supervision_timeout) when the computer starts. I needed to change the systemd unit a little, as follows:
[Unit]
Description=Switching supervision timeout
Requires=bluetooth.service
After=bluetooth.service sys-kernel-debug.mount
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c "sleep 5; echo 200 > /sys/kernel/debug/bluetooth/hci0/supervision_timeout"
[Install]
WantedBy=multi-user.target
I found that I needed to wait for the /sys/kernel/debug/ filesystem to be mounted, so I had to add sys-kernel-debug.mount to the After declaration.
Note that the sleep in ExecStart is necessary because Bluetooth has not finished starting at the point the unit runs.
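For completeness, assuming the unit above is saved as /etc/systemd/system/supervision-timeout.service (the filename is just an example), it can be enabled and checked like this:
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now supervision-timeout.service
$ sudo cat /sys/kernel/debug/bluetooth/hci0/supervision_timeout
200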
I've created a systemd service file (specifically for svnserve; I'm actually using the example from here https://stackoverflow.com/a/40584047/464087), and when I enable it, typing
sudo systemctl enable svnserve
I get the response
Failed to execute operation: Invalid argument
Running
sudo systemctl status svnserve
yields
● svnserve.service - Subversion protocol daemon
Loaded: loaded (/etc/systemd/system/svnserve.service; enabled; vendor preset: enabled)
Active: inactive (dead)
not giving me any clue about what might be wrong. I can then start the service without any error, and it seems to run as expected, and after starting it, systemctl status still gives me no clue about anything being wrong:
● svnserve.service - Subversion protocol daemon
Loaded: loaded (/etc/systemd/system/svnserve.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-01-09 22:10:14 UTC; 6s ago
Process: 9677 ExecStart=/usr/bin/svnserve $DAEMON_ARGS (code=exited, status=0/SUCCESS)
Main PID: 9678 (svnserve)
Tasks: 1
Memory: 964.0K
CPU: 2ms
CGroup: /system.slice/svnserve.service
└─9678 /usr/bin/svnserve --daemon --pid-file /run/svnserve/svnserve.pid --root /srv/svn/repos --log-file /var/log/svnserve/svnserve.log
So what does this error message mean? And at what level is "invalid argument" supposed to apply? An argument to the svnserve command? Some property in the service file? A command-line argument to the systemctl command itself?
FWIW this is on an Ubuntu 16.04 LTS server.
If you copy/paste the file from a system with one encoding (e.g. Windows) to another (e.g. Linux), there may be issues with the file encoding, or with characters being interpreted differently. You can convert the file and re-analyze it to see whether it is now interpreted correctly.
Run the analyzer
$ sudo systemd-analyze verify yourname.service
/etc/systemd/system/yourname.service:1: Assignment outside of section. Ignoring.
Fix the encoding of the service file, e.g. using vim (answer from here)
$ vim +"set nobomb | set fenc=utf8 | x" yourname.service
Edit the file and remove any strange characters that are now exposed, e.g. at the start of the file; it might contain sequences like ^[[200~.
Save the file and re-enable the service
$ sudo systemctl enable yourname.service
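If you want to double-check whether encoding really was the culprit, file and hexdump (both standard tools) will reveal a BOM or CRLF line endings without modifying anything; yourname.service is just the placeholder from above:
$ file /etc/systemd/system/yourname.service
$ hexdump -C /etc/systemd/system/yourname.service | head -n 3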
I had a similar case; the problem went away after removing the Alias line from the [Install] section. Thanks to Anton in another thread: https://stackoverflow.com/a/34978908/2711456 - the alias name may not be the same as the service name.
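For illustration only (using the asker's svnserve.service name as the hypothetical unit), this is the kind of [Install] section that can trigger the error; removing the Alias line lets systemctl enable succeed:
[Install]
WantedBy=multi-user.target
# the alias below has the same name as the unit itself; deleting it fixes the error
Alias=svnserve.service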
I also found a bug with comments (at least in systemd 219): if you have a comment at the end of a line in the service file, enabling it will fail.
So move the comment onto its own line, or remove it.
I tested it, and this works for me:
WantedBy=multi-user.target
# runs in init 3 (multi-user mode for linux)
this one will not work:
WantedBy=multi-user.target # runs in init 3 (multi-user mode for linux)
some discussion is here: https://github.com/rabbitmq/rabbitmq-server/issues/1422
I experienced exactly the same thing. Deleting "Alias" works, but an alias can in fact have the same name as the service file.
The reason it doesn't work is the directory the service file is placed in.
What systemctl enable does is create an alias (a symlink) in /etc/systemd/system and in the directory of the target that wants this service. If the original service file is already located in /etc/systemd/system, the alias can't be created when systemd tries to enable the service.
The solution is to put the service file in /lib/systemd/system/ instead, and it will work.
So I guess we already have a similar answer; I just wanted to point out the reason.
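To see what enabling actually creates (the unit name is just an example), enable the unit and look inside the target's .wants directory; when the unit file lives in /lib/systemd/system, the entry there should be a symlink pointing back to it:
$ sudo systemctl enable svnserve.service
$ ls -l /etc/systemd/system/multi-user.target.wants/svnserve.service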
Answer:
cd /etc/systemd/system/multi-user.target.wants/ # it can be other WantedBy item
ls -lA # notice that <your>.service is not a link
rm <your>.service # remove it
And now try:
sudo systemctl enable <your>.service
It should create the right link and enable your service.
Try this; it resolved the issue for me:
cd /etc/systemd/system/multi-user.target.wants
ls
# find the service that produces "Failed to execute operation: Invalid argument"
rm -rf yourname.service
cd /etc/systemd/system/
nano yourname.service
edit the contents of your service file (there may be a mistake in it; check symbols like [ and ], etc.)
==> save it
systemctl daemon-reload
systemctl enable yourname.service
good luck!!!
The last line of your /etc/systemd/system/youunit.service file must end with a newline.
Check that, then remove /etc/systemd/system/multi-user.target.wants/youunit.service.
Then try systemctl enable youunit again.
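A minimal way to check for (and, if needed, append) that trailing newline, assuming the unit path from above:
tail -c 1 /etc/systemd/system/youunit.service | od -c   # last byte should be \n
printf '\n' | sudo tee -a /etc/systemd/system/youunit.service   # append one if it is missing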
In my case the problem was that the service file was a symlink to another file. systemd-analyze did not find any issue, but systemctl enable failed. When I removed the symlink and copied the file instead, it started to work.
In my case, my /etc/systemd/system/my-service.service was a symlink :S
I was editing the sshd_config file when my machine crashed. When I tried to edit it again, it started showing the message below:
Found a swap file by the name "/etc/ssh/.sshd_config.swp"
dated: Mon Oct 23 07:17:17 2017 [cannot be read]
While opening file "/etc/ssh/sshd_config"
dated: Mon Oct 23 22:19:04 2017
NEWER than swap file!
(1) Another program may be editing the same file. If this is the case,
be careful not to end up with two different instances of the same
file when making changes. Quit, or continue with caution.
(2) An edit session for this file crashed.
If this is the case, use ":recover" or "vim -r /etc/ssh/sshd_config"
to recover the changes (see ":help recovery").
If you did this already, delete the swap file "/etc/ssh/.sshd_config.swp"
to avoid this message.
I deleted the .swp file, but it looks like the original file got deleted. After that I ran "sudo service sshd restart".
Now I am not able to connect to the AWS server from a Linux terminal. Can anyone please help me with this?
The original file shouldn't have been deleted ... the .swp file is the in-process edit.
Have you tried rebooting the instance?
If that doesn't help, you may need to recover from a snapshot. You did take a snapshot before editing the ssh config, right?
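For future reference, a safer sequence when vim reports a crashed edit session is to recover the swap file and validate the config before restarting sshd (this assumes you still have a working shell on the instance; sshd -t checks the configuration for syntax errors and prints nothing if it is fine):
sudo vim -r /etc/ssh/sshd_config      # recover the crashed edit, review it, then :wq
sudo rm /etc/ssh/.sshd_config.swp     # remove the swap file once recovered
sudo sshd -t                          # validate the configuration before restarting
sudo service sshd restart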
My computer's clock was reset. After that, I turned the computer on and waited for CentOS to boot, but I was faced with a black screen containing:
**** An error occurred during the file system check.
**** Dropping you to a shell; the system will reboot
**** when you leave the shell.
**** Warning -- SELinux is active
**** Disabling security enforcement for system recovery.
**** Run 'setenforce 1' to reenable.
Give root password for maintenance (or type Control-D to continue):
I typed my password and got a root # prompt on the same black screen.
I really need my CentOS to come back up in the GUI. Please help me.
If you have access to a root shell, try the command fsck -a. It will try to automatically fix errors on your filesystem.
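A minimal sketch of that recovery from the maintenance shell, assuming the affected filesystem is /dev/sda1 (check the error message or /etc/fstab for the actual device):
fsck -a /dev/sda1   # attempt automatic repair of the reported filesystem
reboot              # leave the shell and let the system boot normally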
I am attempting to install REDHAWK v1.8.2 on a fresh install of CentOS 6.4 32 bit, but I am unable to get omniNames and omniEvents to start.
sudo /sbin/service omniEvents stop
Stopping CORBA event service: omniEvents
sudo /sbin/service omniNames stop
Stopping omniNames [ OK ]
sudo /sbin/service omniNames start
Starting omniNames [ OK ]
sudo /sbin/service omniEvents start
Starting CORBA event service on port 11169: omniEvents: [25848]: Warning - failed to resolve initial reference 'NameService'. Exception: TRANSIENT
omniEvents.
I tried to verify if omniNames was really running by calling the naming client, but got an error (see below), so it seems omniNames is not successfully starting.
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
As part of the debugging process, I tried to kill the omniNames process and start it a different way (see below).
sudo killall omniNames
omniNames -start
Wed Nov 13 21:08:08 2013:
Starting omniNames for the first time.
Error: cannot create initial log file '/var/omninames/omninames-orion.log':
No such file or directory
You can set the environment variable OMNINAMES_LOGDIR to specify the
directory where the log files are kept.
I'm not sure why omniNames can't create the log file, because I verified that the /var/omninames folder actually exists, and even starting omniNames as root yields the same error. Regardless, I set the log directory to my desktop to work around the error (see below).
export OMNINAMES_LOGDIR=/home/$USER/Desktop/logs
mkdir -p /home/$USER/Desktop/logs
omniNames -start
Wed Nov 13 21:09:17 2013:
Starting omniNames for the first time.
Wrote initial log file.
Read log file successfully.
Root context is IOR:010000002b00000049444c3a6f6d672e6f72672f436f734e616d696e672f4e616d696e67436f6e746578744578743a312e30000001000000000000005c000000010102000a00000031302e322e382e333500f90a0b0000004e616d6553657276696365000200000000000000080000000100000000545441010000001c00000001000000010001000100000001000105090101000100000009010100
Checkpointing Phase 1: Prepare.
Checkpointing Phase 2: Commit.
Checkpointing completed.
Even though it looks like omniNames successfully started, when I open another terminal window and call the naming client, I get the same error as before (see below).
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
The only modification I made in the /etc/omniORB.cfg file is to add the lines for InitRef (see below).
InitRef = NameService=corbaname::localhost
InitRef = EventService=corbaloc::localhost:1169/omniEvents
Also, I am not connected to the internet so my version of CentOS has not been updated from the base version, except for the boost libraries as recommended in Appendix J of the manual (http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.9.0/REDHAWK_Manual_v1.9.0.pdf/download).
It looks like the issue is in your configuration: you've got the wrong port in your configuration file. It should be port 11169, but you've listed port 1169.
See: http://redhawksdr.github.io/Documentation/mainch2.html#x4-120002.6 for details.
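In other words, the InitRef lines in /etc/omniORB.cfg should look something like this (127.0.0.1 instead of localhost is optional here, but see the note below):
InitRef = NameService=corbaname::127.0.0.1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents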
A few other observations and tricks regarding omniORB, in case this was not the issue.
Sometimes omniNames/omniEvents can get into a bad state. The fix is to delete the log files created by omniNames and omniEvents and restart the services. They are located at:
/var/lib/omniEvents/*
/var/omniNames/*
You'll need to be root to delete those files. I always forget where they are located and often do a "locate omni | grep -i log" to remind myself, but you must do this as root since they are not visible to standard users.
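A sketch of that reset, using the same service scripts as in your transcript and the log locations above:
sudo /sbin/service omniEvents stop
sudo /sbin/service omniNames stop
sudo rm -f /var/lib/omniEvents/* /var/omniNames/*
sudo /sbin/service omniNames start
sudo /sbin/service omniEvents start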
While it should not matter, I've personally found that using 127.0.0.1 is more reliable than localhost. For some reason, using localhost within a VM in the configuration file has caused me problems in the past. Consider using 127.0.0.1 instead of localhost. This is what the current version of the Redhawk Manual recommends as well.
You mentioned you are using Redhawk v1.8.2. As an FYI, the latest REDHAWK version in the 1.8 series is currently v1.8.5 and 1.9.0 was also recently released.
Hopefully this gets you up and running!