Redhawk SDRROOT, don't see components - redhawksdr

Our admin installed stuff to SDRROOT=/var/redhawk/sdr but I wanted to have my own SDRROOT, which I set to ~/redhawk/sdr.
The problem is that I don't see components like SigGen or HardLimit in the components palette.
I followed the instructions below, given by Axios, but they did not solve my problem:
$ mkdir -p $SDRROOT/{dom/components,dom/waveforms,dom/domain,dev/devices,dev/nodes}
$ ln -s /var/redhawk/sdr/dom/mgr $SDRROOT/dom/mgr
$ ln -s /var/redhawk/sdr/dev/mgr $SDRROOT/dev/mgr
$ cp /var/redhawk/sdr/dom/domain/DomainManager.dmd.xml.template \
$SDRROOT/dom/domain/DomainManager.dmd.xml
$ gedit $SDRROOT/dom/domain/DomainManager.dmd.xml

An SDRROOT stands on its own. If you use your own, you need to ensure it contains any software you want to use. You haven't mentioned copying/linking the components you wanted to use into your SDRROOT (SigGen, etc). They should be inside /var/redhawk/sdr/dom/components. You could do this, for example, with:
cd ~/redhawk/sdr/dom/components
for component in /var/redhawk/sdr/dom/components/*; do
    ln -s "$component"
done
Also, don't forget to update environment variables to point at your SDRROOT. Check the variables that get set in /etc/profile.d/redhawk.sh for reference.
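For example, a minimal sketch of what you might add to ~/.bashrc (the exact variable set comes from /etc/profile.d/redhawk.sh on your system; the OSSIEHOME path below is the common default and is an assumption about your install):
export SDRROOT=$HOME/redhawk/sdr
# OSSIEHOME normally stays pointed at the system-wide core install
export OSSIEHOME=/usr/local/redhawk/core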


Dockerfile set ENV based on npm package version [duplicate]

Is it possible to set a docker ENV variable to the result of a command?
Like:
ENV MY_VAR `whoami`
I want MY_VAR to get the value "root" or whatever whoami returns.
As an addition to DarkSideF's answer: be aware that each line/command in a Dockerfile is run in its own intermediate container, so shell state (like exported variables) does not persist from one RUN to the next.
You can do something like this:
RUN export bleah=$(hostname -f);echo $bleah;
This is run in a single container.
At this time, a command result can be used with RUN export, but cannot be assigned to an ENV variable.
Known issue: https://github.com/docker/docker/issues/29110
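To make the separate-container behavior concrete, a minimal sketch of what does not work (bleah is just the arbitrary name from the answer above):
RUN export bleah=$(hostname -f)
RUN echo $bleah   # prints nothing: the export above ran in a different container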
I had the same issue and found a way to set an environment variable as the result of a command by using RUN in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing on every run, as it would if I ran:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after a bash login.
Alternatively:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
Then the variable is available in CMD and ENTRYPOINT commands.
Docker caches this as a layer and only rebuilds it if you change lines before it.
You can also try other ways to set environment variables.
This answer is a response to @DarkSideF.
The method he proposes is the following, in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
(adding an export to /etc/bash.bashrc)
This works, but the environment variable will only be available to processes started through /bin/bash. If you run your Docker application as, for example, a Node.js app, /etc/bash.bashrc is completely ignored and your application won't have a clue what SECRET_KEY_BASE is when it tries to access process.env.SECRET_KEY_BASE.
That is why everyone tries to use the ENV keyword with a dynamic command: every time you run your container or use an exec command, Docker injects every ENV value into the process being run (similar to -e).
One solution is to use a wrapper (credit to #duglin in this github issue).
Have a wrapper file (e.g. envwrapper) in your project root containing :
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
exec "$@"   # run the wrapped command with the exports in place
and then in your Dockerfile :
...
COPY . .
RUN mv envwrapper /bin/.
RUN chmod 755 /bin/envwrapper
CMD envwrapper myapp
If you run commands using sh, as seems to be the default in Docker, you can do something like this:
RUN echo "export VAR=`command`" >> /envfile
RUN . /envfile; echo $VAR
This way, you build an env file by redirecting output to the env file of your choice. It's more explicit than having to define profiles and so on.
Since the file is available to later layers, it is possible to source it and use the variables it exports. The way you create the env file isn't important.
When you're done, you can remove the file to make it unavailable to the running container.
The . is how the env file is loaded (it is the portable equivalent of source).
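Putting the pieces together, a minimal end-to-end sketch of this pattern (the base image, variable name, and /envfile path are arbitrary choices):
FROM alpine:3.19
RUN echo "export BUILD_HOST=$(hostname)" >> /envfile
# later layers source the file to see the variable
RUN . /envfile; echo "built on $BUILD_HOST"
# optionally drop the file so it never reaches the final image
RUN rm /envfile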
As an addition to @DarkSideF's answer, if you want to reuse the result of a previous command in your Dockerfile during the build process, you can use the following workaround:
run a command and store the result in a file
use command substitution to get the previous result from that file into another command
For example:
RUN echo "bla" > ./result
RUN echo $(cat ./result)
For something cleaner, you can also use the following gist, which provides a small CLI called envstore.py:
RUN envstore.py set MY_VAR bla
RUN echo $(envstore.py get MY_VAR)
Or you can use the python-dotenv library, which has a similar CLI.
Not sure if this is what you were looking for, but this pattern works for injecting ENV vars or ARGs into your .Dockerfile build.
In your my_build.sh:
echo getting version of osbase image to build from
OSBASE=$(grep "osbase_version" .version | sed 's/^.*: //')
echo building docker
docker build -f PATH_TO_MY.Dockerfile \
  --build-arg ARTIFACT_TAG=$OSBASE \
  -t my_artifact_home_url/bucketname:$TAG .
(note that -f must be immediately followed by the Dockerfile path, and the build context . comes last)
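For reference, a hypothetical .version line that the grep above would match (the key name comes from the grep; the value is made up):
osbase_version: 1.2.3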
For getting an ARG into your .Dockerfile, the snippet might look like this:
# an ARG used in a FROM line must be declared before the first FROM
ARG ARTIFACT_TAG
FROM my_artifact_home_url/bucketname:${ARTIFACT_TAG}
Alternatively, for getting an ENV into your .Dockerfile, the snippet might look like this:
FROM someimage:latest
ARG ARTIFACT_TAG
ENV ARTIFACT_TAG=${ARTIFACT_TAG}
The idea is that you run the shell script, and it invokes the build against the .Dockerfile with the args passed in as options.
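One usage note: TAG is referenced in my_build.sh but never set there, so it is presumably expected in the caller's environment, e.g.:
$ TAG=1.0.0 ./my_build.sh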

bash auto complete binary file which is not in PATH

I'm trying to write a custom bash auto-completion script for my CLI package.
When I install my package as below, my command is installed into $PATH (/usr/local/bin),
$ ./configure
$ make
$ sudo make install
so complete -o filenames -F _mycommand mycommand in my bash-autocomplete.sh works properly
(because the command mycommand is in $PATH, /usr/local/bin).
However, when I install my package locally, and then try to execute binary file from installed location like below,
$ ./configure --prefix=$HOME/usr
$ make
$ make install
complete -o filenames -F _mycommand mycommand doesn't work, because the OS doesn't know the location of mycommand:
~/uftrace$ $HOME/usr/bin/mycommand [TAB]
Command 'command' not found,
My question is this:
How can I make bash completion feature with my local binary file? (which is not in PATH)
Can I do this by fixing Makefile or configure or bash-autocomplete.sh?
Additionally: installing the package locally and then adding it to PATH is not an option, because I want this bash auto-completion feature to work regardless of the installation location. I want the feature to work at the point of installation.
From the documentation, that's not possible unless using an "intelligent" completion loader:
First, the command name is identified. [...]
If the command word is a full pathname, a compspec for the full pathname is
searched for first. If no compspec is found for the full pathname, an
attempt is made to find a compspec for the portion following the final
slash.
The key part of that quote is the full-pathname rule: bash will be able to complete the full path (e.g. /usr/mycommand or even ./mycommand), but it won't resolve a bare mycommand unless it is found in the PATH, where the normal completion does the trick.
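In practice, that means you can register the compspec against the full installed path yourself (a minimal sketch, reusing the install prefix from the question):
complete -o filenames -F _mycommand "$HOME/usr/bin/mycommand"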
As a last resort, you could register a completion loader which may, for example, look at the command (using ${1##*/} to get the basename):
_completion_loader() {
    # $1 is the command word being completed; ${1##*/} strips any leading path
    if [[ "${1}" == mycommand || "${1##*/}" == mycommand ]]; then
        complete -o filenames -F _mycommand "$1"
        return 124   # tells bash to retry the completion with the new compspec
    fi
    return 1
}
complete -D -F _completion_loader -o bashdefault -o default
I would not do that on Linux, because bash-completion may already be there and providing a default loader itself: check bash-completion, as they may have a way to handle your completion, for example by installing it as /etc/bash_completion.d/<yourcommand>.bash.

Human friendly bash auto-completion with sudo

My system is Manjaro Linux (based on Arch Linux); I use bash and bash-completion.
It works perfectly when I type something as a regular user (no sudo):
$ rfkill <TAB><TAB>
block event help list unblock
but when I type it with sudo
$ sudo rfkill <TAB><TAB>
Display all 3811 possibilities? (y or n)
Obviously, it is completing for the sudo command itself, but I want it to complete rfkill's arguments.
I know I can change this behavior by editing the /usr/share/bash-completion/completions/sudo file, but I have no idea how to express "if the second word is not a flag for sudo, then use the completion for that word's command".
Do you have any ideas?
UPD: I'm testing Ubuntu 16.04 in a virtual machine, and there it works as expected. I'll check the difference between Ubuntu's /usr/share/bash-completion/completions/sudo file and mine, if any.
UPD2: There are some minor (meaningless) differences between these files; anyway, that didn't help. I have more ideas to test...
I had exactly the same problem (running Manjaro) and found a solution in the Manjaro Forum (Source):
Make sure bash-completion is actually installed by checking whether /usr/share/bash-completion/bash_completion exists. If not, install it with pacman -S bash-completion.
In your ~/.bashrc, make sure that complete -cf sudo is commented out. Otherwise, it makes sudo auto-complete only filenames and command names instead of using bash-completion.
I hope this helps you solve the problem.
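Concretely, the relevant part of ~/.bashrc might look like this (a sketch; the sourcing lines are only needed if your distribution doesn't already load bash-completion):
# complete -cf sudo        <- keep this commented out or remove it
if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
fi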
Use double tab:
sudo rfkill <TAB><TAB>
UPD
If that line is not there, add this to your .bashrc:
complete -cf sudo

Setting a path with whitespace in Cygwin

I set two environment variables to test which one works for me, as follows:
.bash_profile
NODE_BIN1="/cygdrive/c/Program Files/nodejs"
NODE_BIN2=/cygdrive/c/Program\ Files/nodejs
export NODE_BIN1 NODE_BIN2
then tested them in a Cygwin terminal:
$ cd $NODE_BIN1
kevin@kevin-HP /cygdrive/c/Program (wrong!)
$ cd $NODE_BIN2
kevin@kevin-HP /cygdrive/c/Program (wrong!)
$ cd C:/Program Files/nodejs
kevin@kevin-HP /cygdrive/c/Program (wrong!)
$ cd "C:/Program Files/nodejs"
kevin@kevin-HP /cygdrive/c/Program Files/nodejs
The last result is what I want, but it's actually the same string as $NODE_BIN1.
Any idea how to fix this?
Thanks a lot!
Try using cygpath?
export NODE_BIN1=`cygpath -w -s "/cygdrive/c/Program Files/nodejs"`
This also provides the same output
export NODE_BIN1=`cygpath -d "/cygdrive/c/Program Files/nodejs"`
Both approaches will set the environment variable correctly. The problem you're experiencing comes when you use it: bash performs word splitting on unquoted variable expansions, so you end up calling cd with two arguments, /cygdrive/c/Program and Files/nodejs.
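In other words, the fix at the point of use is simply to quote the expansion:
cd "$NODE_BIN1"   # the quotes keep the value as a single argument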
The solution, of course, is to switch to zsh. ;)
Okay, okay. If your intention is to be able to switch to this directory with ease, consider writing an alias instead.
alias cdnode='cd "/cygdrive/c/Program Files/nodejs"'
If you only want to set this for node's benefit, then don't worry; you're already good to go. You can be absolutely sure using echo instead.
$ echo "[$NODE_BIN1]"
[/cygdrive/c/Program Files/nodejs]

How to use $ORIGIN with a suid application?

I'm using Python with the CAP_NET_RAW capability enabled via setcap. My Python script imports a shared library which has $ORIGIN in its RPATH. Since my Python binary now counts as a privileged (suid-like) app, $ORIGIN is not expanded and the library does not load correctly (this is due to a security vulnerability that was found in glibc).
Is there a way to tell the linker that my library path is secure and load the library anyway?
A few more notes:
I only need this feature in the development stage. I'm not looking for a production solution.
When working as root, everything works.
I do not want to work as root.
Thanks,
Dave
You can try one of these. Consider that <path-to-mylib> is the absolute pathname after resolving the $ORIGIN rpath reference.
Re-run ldconfig after telling it where to find your library
$ echo "<path-to-mylib>" > /etc/ld.so.conf.d/my-new-library.conf
$ ldconfig -v
If running things as root is not an option, export LD_LIBRARY_PATH with the correct directory for every execution of the process
$ echo "export LD_LIBRARY_PATH=<path-to-mylib>" >> ~/.bashrc
$ export LD_LIBRARY_PATH=<path-to-mylib>
$ # then run your stuff...
Did you try sudo?
Instead of $ORIGIN, use fixed paths during development, because they do work with setuid programs. Don't change your main build process; just use patchelf to set the rpath to what you need. You could make a shell script which does something like:
ln=$(readelf -d myprogram | grep RPATH)   # e.g. " 0x...(RPATH)  Library rpath: [$ORIGIN/../lib]"
IFS=:
set -- $ln                                # split on ':' so $2 holds the rpath value
newrpath=$(echo "$2" | tr -d ' []' | sed 's/\$ORIGIN/\/devel\/myprog\/lib/')   # strip readelf's brackets, rewrite $ORIGIN
patchelf --set-rpath "$newrpath" myprogram
Then your binary will no longer search $ORIGIN/../lib but /devel/myprog/lib/../lib
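To confirm the change took, you can re-dump the dynamic section (assuming the binary is named myprogram as above):
readelf -d myprogram | grep -E 'RPATH|RUNPATH'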
