Error when "Connecting Doman" - redhawksdr

In the REDHAWK IDE, when I try to connect I get the following error.
Connecting Domain has encountered a problem.
Details are:
Failed to connect
org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0
It does not appear that a DomainManager is getting started at all. I have omniEvents and omniORB running and the /etc/omniORB.cfg file set up as described in the documentation.
I have tried to delete and redo the DomainManager connection settings with REDHAWK_DEV and corbaname::localhost:2809, but nothing helps.

It appears that omniNames is in a bad state, which should be resolvable with a hard reset of the service: stop omniNames, delete the files in /var/log/omniORB, and restart omniNames:
# /sbin/service omniNames stop
# rm -f /var/log/omniORB/*
# /sbin/service omniNames start
Note: do not delete the omniORB directory: just delete its contents.
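After the restart, you can check that the naming service is answering with the naming client used elsewhere in this thread:
# nameclt list
If the listing comes back without a TRANSIENT exception, omniNames is healthy again and you can retry the domain connection from the IDE.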
More details on this are in the 1.9 user manual (which may not have been available at the time of your post) in "Appendix H: Resolving omniNames/omniEvents Failures".

Related

systemctl enable fails with cryptic error message [duplicate]

I've created a systemd service file (specifically for svnserve; I'm actually using the example from here https://stackoverflow.com/a/40584047/464087), and when I enable it, typing
sudo systemctl enable svnserve
I get the response
Failed to execute operation: Invalid argument
Running
sudo systemctl status svnserve
yields
● svnserve.service - Subversion protocol daemon
Loaded: loaded (/etc/systemd/system/svnserve.service; enabled; vendor preset: enabled)
Active: inactive (dead)
not giving me any clue about anything being wrong. I can then start the service without any error, and it seems to be running as expected; even after starting it, systemctl status gives no clue about anything being wrong:
● svnserve.service - Subversion protocol daemon
Loaded: loaded (/etc/systemd/system/svnserve.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-01-09 22:10:14 UTC; 6s ago
Process: 9677 ExecStart=/usr/bin/svnserve $DAEMON_ARGS (code=exited, status=0/SUCCESS)
Main PID: 9678 (svnserve)
Tasks: 1
Memory: 964.0K
CPU: 2ms
CGroup: /system.slice/svnserve.service
└─9678 /usr/bin/svnserve --daemon --pid-file /run/svnserve/svnserve.pid --root /srv/svn/repos --log-file /var/log/svnserve/svnserve.log
So what does this error message mean? And at which level is "invalid argument" supposed to apply? An argument to the svnserve command? Some property in the service file? A command line argument to the systemctl command itself?
FWIW this is on an Ubuntu 16.04 LTS server.
If you copy/paste the file from a system with one encoding (e.g. Windows) to another (e.g. Linux), there may be issues with the file encoding, or characters may be interpreted differently. You can convert the file and re-analyze it to see whether it is now interpreted correctly.
Run the analyzer
$ sudo systemd-analyze verify yourname.service
/etc/systemd/system/yourname.service:1: Assignment outside of section. Ignoring.
Fix the encoding of the service file, e.g. using vim (answer from here)
$ vim +"set nobomb | set fenc=utf8 | x" yourname.service
Edit the file and remove any strange characters that are now exposed at e.g. the start of the file. e.g. it might have characters like ^[[200~
Save the file and re-enable the service
$ sudo systemctl enable yourname.service
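For reference, here is the shape of a clean unit file for this case; the paths and options are copied from the status output in the question, so treat it as a sketch rather than the asker's exact file:
[Unit]
Description=Subversion protocol daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/svnserve --daemon --pid-file /run/svnserve/svnserve.pid --root /srv/svn/repos --log-file /var/log/svnserve/svnserve.log

[Install]
WantedBy=multi-user.target
Save it as UTF-8 without a BOM and with ordinary Unix line endings.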
I had a similar case; the problem went away after removing the Alias line from the [Install] section. Thanks to Anton in another thread: https://stackoverflow.com/a/34978908/2711456 - the alias's name may not be the same as the service name.
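For illustration, this is the kind of [Install] section that triggers the problem (the Alias value here is hypothetical):
[Install]
WantedBy=multi-user.target
Alias=svnserve.service
On the affected systemd versions, enabling fails when the alias repeats the unit's own name; dropping the Alias= line avoids it.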
I also found a bug with comments (at least in systemd 219): if you have a comment after any directive in the service file, it will fail to enable.
So move the comment to its own line, or remove it.
I tested it, and this works for me:
WantedBy=multi-user.target
# runs in init 3 (multi-user mode for linux)
this one will not work:
WantedBy=multi-user.target # runs in init 3 (multi-user mode for linux)
some discussion is here: https://github.com/rabbitmq/rabbitmq-server/issues/1422
I experienced exactly the same thing. Deleting "Alias" works, but actually an alias can have the same name as the service file.
The reason it doesn't work is the directory where the service file is put.
What systemctl enable does is create a symlink under /etc/systemd/system, in the .wants directory of the target that wants this service. If the original service file is already located in /etc/systemd/system, the symlink can't be created when systemd tries to enable the service.
The solution is to put the service file in /lib/systemd/system/ instead, and it will work.
So I guess we already have a similar answer; I just wanted to point out the reason.
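You can see the mechanism with a unit that lives in /lib/systemd/system (the unit name here is hypothetical): after enabling, the entry under the target's .wants directory is just a symlink back to it:
$ ls -l /etc/systemd/system/multi-user.target.wants/svnserve.service
lrwxrwxrwx 1 root root 37 ... svnserve.service -> /lib/systemd/system/svnserve.service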
Answer:
cd /etc/systemd/system/multi-user.target.wants/ # or another WantedBy target
ls -lA # notice that <your>.service is not a link
rm <your>.service # remove it
And now try:
sudo systemctl enable <your>.service
It should create the right symlink and enable your service.
Try this; it resolved the problem for me:
cd /etc/systemd/system/multi-user.target.wants
ls
# find the service named in the "Failed to execute operation: Invalid argument" error
rm -rf yourname.service
cd /etc/systemd/system/
nano yourname.service
# fix the content of your service file (check for mistakes such as stray [ or ] symbols)
# then save it
systemctl daemon-reload
systemctl enable yourname.service
Good luck!
A newline is required after the last line of your /etc/systemd/system/yourunit.service file.
Check that, remove /etc/systemd/system/multi-user.target.wants/yourunit.service, and then try systemctl enable yourunit again.
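A quick way to check for the missing trailing newline, assuming xxd is available:
$ tail -c 1 /etc/systemd/system/yourunit.service | xxd
00000000: 0a                                       .
If the last byte is not 0a, append a newline and retry.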
In my case the problem was that the service file was a symlink to another file. systemd-analyze did not find any issue, but systemctl enable failed. When I removed the symlink and copied the file instead, it started to work.
In my case, my /etc/systemd/system/my-service.service was a symlink :S

Database 'neo4j' is unavailable. Cannot reset neo4j database

I have the Neo4j 4.1.1 Community service installed on the Ubuntu command line running on my Windows machine. I have been using Neo4j steadily for a month or two now; just recently it has started preventing me from accessing the neo4j database. Neo4j Browser says:
Database 'neo4j' is unavailable. Run :sysinfo for more info.
I have tried uninstalling and reinstalling Neo4j, but that has not worked either. I tried playing around with the default listen address previously, but with the reinstall all config data is back to normal. Running ./neo4j-community-4.1.1/bin/cypher-shell does not work. It says:
Unable to establish connection in 3000ms
If I run ./neo4j-community-4.1.1/bin/cypher-shell -a 192.168.0.19 it says:
Database 'neo4j' is unavailable
When I run ./neo4j-community-4.1.1/bin/neo4j-admin check-consistency --database=neo4j it also states:
.2020-08-18 22:12:16.868+0000 WARN [o.n.c.ConsistencyCheckService] Index was dirty on startup which means it was not shutdown correctly and need to be cleaned up with a successful recovery. Index file: /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/neostore.relationshipgroupstore.db.id.
I would love to reset everything from scratch, but I am unsure how.
At this point I cannot even access the browser at localhost:7474. It hangs indefinitely trying to load.
I am truly stumped. Anyone have any advice on how I navigate this issue?
It's not easy to guess the issue without seeing your system, but could you try deleting your default database, i.e. removing neo4j physically from the disk (e.g. rm -rf /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/), and then creating another database with a different name instead (open neo4j.conf, search for dbms.default_database, which points at the default database - it was dbms.active_database in the 3.x series - and change it to some other name)?
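If you do want the full wipe, here is a sketch using the tarball layout from the question; note that in 4.x the transaction logs live outside the database directory, so remove those too, with the server stopped:
$ ./neo4j-community-4.1.1/bin/neo4j stop
$ rm -rf ./neo4j-community-4.1.1/data/databases/neo4j
$ rm -rf ./neo4j-community-4.1.1/data/transactions/neo4j
$ ./neo4j-community-4.1.1/bin/neo4j start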
I had this problem running on a Linux server. The server was up, but any query returned this error: Database 'neo4j' is unavailable. To troubleshoot I ran sudo neo4j console and the problem went away. When I ran the console as user neo4j the problem came back.
$ /usr/share/neo4j/bin/neo4j console
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
So I tried sudo chown -R neo4j:neo4j /var/lib/neo4j/data, and the problem went away. Apparently, when I'd done a restore of the database I'd run the neo4j server as root, and when the system runs neo4j it does so as the neo4j user, which therefore couldn't read any of its data. An error like this would warrant an easy-to-parse error message, but verbosity is not the neo4j way.
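A quick way to spot the same problem, assuming the package layout shown above:
$ ls -l /var/lib/neo4j/data/databases
Anything owned by root instead of neo4j is a leftover from running the server (or a restore) as root.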

Installing Apache on Windows Subsystem for Linux

Having just updated to the newest Windows 10 release (build 14316), I immediately started playing with WSL, the Windows Subsystem for Linux, which is supposed to run an Ubuntu installation on Windows.
Maybe I'm trying the impossible by trying to install Apache on it, but then someone please explain to me why this won't be possible.
At any rate, during installation (sudo apt-get install apache2), I received the following error messages after the dependencies were downloaded and installed correctly:
initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: No such file or directory
runlevel:/var/run/utmp: No such file or directory
* Starting web server apache2 *
* The apache2 configtest failed.
Output of config test was:
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
invoke-rc.d: initscript apache2, action "start" failed.
Setting up ssl-cert (1.0.33) ...
Processing triggers for libc-bin (2.19-0ubuntu6.7) ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
WARN: / is group writable!
Now, I understand that there seem to be some folders and files missing for Apache2 to work. Before I start changing anything that might mess with my Windows installation, I want to ask: is there a different way? Also, should I worry about / being group writable, or is this just standard Windows behaviour?
To eliminate this warning:
Invalid argument: AH00076: Failed to enable APR_TCP_DEFER_ACCEPT
Add this to the end of /etc/apache2/apache2.conf
AcceptFilter http none
Note the following in your output
failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file
I tried listing /var/lock. It points to /run/lock, which doesn't exist.
Create the directory with
mkdir -p /run/lock
The install should now work (you may need to clean the installation first)
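If apt left apache2 half-configured after the earlier failure, re-running the pending configure step should pick it up:
$ sudo dpkg --configure -a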
You have to start bash.exe in administrator mode to avoid a lot of network-related problems.
I installed LAMP (Apache/MySQL/PHP) without any problem:
Start bash.exe in administrator mode
Type: sudo apt-get install lamp-server^
Add these 2 lines to /etc/apache2/apache2.conf:
ServerName localhost
AcceptFilter http none
Then you can start Apache:
/etc/init.d/apache2 start
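To confirm Apache is actually answering, something like:
$ curl -I http://localhost
should return an HTTP/1.1 200 OK header block (the exact headers will vary).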
Following the great advice here, after receiving all the various errors above I edited apache2.conf and inserted the following at the end of the file; apache2 then worked great on the Debian WSL package:
ServerName localhost
AcceptFilter http none
AcceptFilter https none

Where should I put the "down" file to prevent Chef from starting

I am running Open Source Chef 11 Server and a dozen or so Linux and SmartOS servers running chef-client. At one point I created a file named "down" in a specific directory on one of my Linux servers, and that prevented chef-client from running, even after a reboot. I have since deleted the file, and I cannot remember which directory I had put it in. I can no longer find any documentation that this exists or works. Did I imagine this?
I realize the point of Chef is to have chef-client running at all times but sometimes it is useful to disable the chef-client while experimenting with the server configuration.
I believe this "down" file might be related to runit.
I think I found it.
If I create the file in /etc/sv/chef-client
# touch /etc/sv/chef-client/down
then run
# sv status chef-client
I get back
down: chef-client: 85480s; run: log: (pid 8000) 93131s
If I remove the down file I get back
down: chef-client: 85539s, normally up; run: log: (pid 8000) 93190s
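Note that the down file only controls whether runit starts the service at boot or when the supervisor restarts; to take a running instance down immediately you can also drive runit directly:
# sv down chef-client
and bring it back later with sv up chef-client.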

Error When Starting OmniEvents

I am attempting to install REDHAWK v1.8.2 on a fresh install of CentOS 6.4 32 bit, but I am unable to get omniNames and omniEvents to start.
sudo /sbin/service omniEvents stop
Stopping CORBA event service: omniEvents
sudo /sbin/service omniNames stop
Stopping omniNames [ OK ]
sudo /sbin/service omniNames start
Starting omniNames [ OK ]
sudo /sbin/service omniEvents start
Starting CORBA event service on port 11169: omniEvents: [25848]: Warning - failed to resolve initial reference 'NameService'. Exception: TRANSIENT
omniEvents.
I tried to verify if omniNames was really running by calling the naming client, but got an error (see below), so it seems omniNames is not successfully starting.
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
As part of the debugging process, I tried to kill the omniNames process and start it a different way (see below).
sudo killall omniNames
omniNames -start
Wed Nov 13 21:08:08 2013:
Starting omniNames for the first time.
Error: cannot create initial log file '/var/omninames/omninames-orion.log':
No such file or directory
You can set the environment variable OMNINAMES_LOGDIR to specify the
directory where the log files are kept.
I'm not sure why omniNames can't create the log file, because I verified that the /var/omninames folder actually exists, and even starting omniNames as root yields the same error. Regardless, I set the log directory to my desktop to circumvent the error (see below).
export OMNINAMES_LOGDIR=/home/$USER/Desktop/logs
mkdir -p /home/$USER/Desktop/logs
omniNames -start
Wed Nov 13 21:09:17 2013:
Starting omniNames for the first time.
Wrote initial log file.
Read log file successfully.
Root context is IOR:010000002b00000049444c3a6f6d672e6f72672f436f734e616d696e672f4e616d696e67436f6e746578744578743a312e30000001000000000000005c000000010102000a00000031302e322e382e333500f90a0b0000004e616d6553657276696365000200000000000000080000000100000000545441010000001c00000001000000010001000100000001000105090101000100000009010100
Checkpointing Phase 1: Prepare.
Checkpointing Phase 2: Commit.
Checkpointing completed.
Even though it looks like omniNames successfully started, when I open another terminal window and call the naming client, I get the same error as before (see below).
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
The only modification I made in the /etc/omniORB.cfg file is to add the lines for InitRef (see below).
InitRef = NameService=corbaname::localhost
InitRef = EventService=corbaloc::localhost:1169/omniEvents
Also, I am not connected to the internet so my version of CentOS has not been updated from the base version, except for the boost libraries as recommended in Appendix J of the manual (http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.9.0/REDHAWK_Manual_v1.9.0.pdf/download).
Looks like the issue is in your configuration: you've got the wrong port in your configuration file. It should be port 11169, but you've listed port 1169.
See: http://redhawksdr.github.io/Documentation/mainch2.html#x4-120002.6 for details.
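Combining the port fix with the 127.0.0.1 advice below, the InitRef lines would look like this (a sketch; adjust the host to your setup):
InitRef = NameService=corbaname::127.0.0.1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents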
A few other observations and tricks regarding omniORB, in case this was not the issue.
Sometimes omniNames/omniEvents can get into a bad state. The fix is to delete the log files created by omniNames and omniEvents and restart the services. They are located at:
/var/lib/omniEvents/*
/var/omniNames/*
You'll need to be root to delete those files. I always forget where they are located and often run "locate omni | grep -i log" to remind myself; you must do this as root, since the files are not visible to standard users.
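So a full reset, using the stock service scripts from this thread, looks like:
# /sbin/service omniEvents stop
# /sbin/service omniNames stop
# rm -f /var/lib/omniEvents/*
# rm -f /var/omniNames/*
# /sbin/service omniNames start
# /sbin/service omniEvents start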
While it should not matter, I've personally found that using 127.0.0.1 is more reliable than localhost. For some reason, using localhost in the configuration file within a VM has caused me problems in the past. Consider using 127.0.0.1 instead of localhost; this is what the current version of the REDHAWK Manual recommends as well.
You mentioned you are using Redhawk v1.8.2. As an FYI, the latest REDHAWK version in the 1.8 series is currently v1.8.5 and 1.9.0 was also recently released.
Hopefully this gets you up and running!
