ncurses test programs failing with message "Error opening terminal: xterm" - ncurses

(Note: this is similar to this question, but the answer there does not apply.)
Running under FreeBSD, I have ncurses installed via the usual pkg method for FreeBSD, but because I've seen some odd behaviour with a previously working curses program, I decided to download an ncurses source tarball from the official site and compile it under my home directory.
The compile went fine, but attempting to run any of the included test programs results in:
Error opening terminal: xterm.
The documentation does say:
NOTE: You must have installed the terminfo database, or set the
environment variable $TERMINFO to point to a SVr4-compatible terminfo
database before running the test programs. Not all vendors' terminfo
databases are SVr4-compatible, but most seem to be.
So it looks like the answer is to set TERMINFO, but to what? I don't see any terminfo database under the build directory itself, but I do have a file /usr/local/share/misc/terminfo.db installed as a result of the regular FreeBSD installation.
Nevertheless, setting (csh syntax) setenv TERMINFO /usr/local/share/misc/terminfo.db (or the same omitting the .db extension) doesn't make any difference.
(Note: this shouldn't matter because I haven't so far attempted to install the local build, but when I ran "configure", I used ./configure --prefix=$HOME so that it would install under my home directory.)

By default, ncurses uses (reads and writes) a directory tree of terminal descriptions. Optionally (as seen in the makefile for the FreeBSD ncurses port), it can be built to read and write a hashed database file, in addition to reading the directory tree.
The INSTALL file in the ncurses sources goes into some detail about the --with-hashed-db configure option, which you apparently overlooked. The term(5) manual page gives a better overview.
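A minimal sketch of that route, assuming a Berkeley-DB-style library is available as the INSTALL file describes, and using the FreeBSD pkg layout mentioned in the question:
$ ./configure --prefix=$HOME --with-hashed-db
$ make
# csh syntax: point TERMINFO at the directory holding terminfo.db
# (some versions also accept the full path to the .db file)
$ setenv TERMINFO /usr/local/share/misc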

Related

Run "./" bash/batch file with cygwin

Well, the idea goes as follows:
I have a bash script for Linux, which I normally run there with ./my_run.
The problem is that I'm on Windows, so I downloaded and installed Cygwin.
I added Cygwin's bin directory to the environment variables (PATH) and checked that at least "ls" works, so I assumed I had set it up correctly.
When I try to run it with the cmd it displays:
'.' is not recognized as an internal or external command,
operable program or batch file.
As if the Cygwin variables were not set up correctly (as I said, I tried ls and it works).
Then I tried it directly in Cygwin, and there ./my_run works fine.
So how is it that I can use some commands like ls, but ./ doesn't work in cmd? How can I fix this?
Well, Cygwin is essentially a shared library plus a lot of programs that use it (read the Cygwin documentation). cygwin1.dll changes path resolution internally: it lets you say ./my_script and converts it to .\my_script before making the actual Windows call, and it also adds the proper extension so Windows executables can be run. This magic persists only as long as you use it.
cmd.exe is a Microsoft Windows command shell that is completely unaware of Cygwin's shared library, so it never calls it for path translation, no matter how much you put into the environment. When you run inside a Cygwin terminal, you are running the bash shell, which is a Cygwin executable linked against cygwin1.dll. It uses the Cygwin library for all the Unix system call emulation, so when you pass it something like exec("./my_script", ...), it internally tries ./my_script, then .\my_script, then ./my_script.exe, and so on for the .com and .bat extensions.
This often leads people to say that Cygwin is not a good, efficient environment. But the purpose was never efficiency (and it is reasonably efficient anyway, as it caches entries and does its best to be fast) but compatibility.
In your example, ls is a Cygwin executable that mimics the /bin/ls executable from Unix systems. It uses the Cygwin library, so all path resolution is handled properly (well, under some constraints, as you'll see after some testing) and everything works fine. But you cannot expect all your Windows applications to suddenly transform themselves and begin working as if they were in a different environment. This requires some trial and error that you will have to do yourself. And read the Cygwin documentation; it is very good and covers everything I've said here.
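If you want to see that path translation explicitly, Cygwin ships a cygpath utility you can run from bash (my_run is just the script name from the question):
$ cygpath -w ./my_run    # prints the equivalent Windows path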
If you open up Cygwin and run the command there you should be fine.
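If you really need to launch the script from plain cmd.exe, invoke Cygwin's bash explicitly. A minimal sketch, assuming Cygwin is installed under C:\cygwin and the script lives in C:\work (adjust both paths to your setup):
C:\cygwin\bin\bash.exe -l -c "cd /cygdrive/c/work && ./my_run"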

How to include additional directories when configuring makefiles

I'm trying to compile geany-plugins-1.28. The debugger plugin (the only one I need) gives the error:
debug.c:53:21: fatal error: vte/vte.h: No such file or directory
#include <vte/vte.h>
Clearly it needs to know where this file is located to compile. I found the vte.h file in the src directory of the main program geany-1.28. When running
sudo ./configure cflags=-I/home/pi/Desktop/geany-1.28/src
I still get the same error about the missing header when it later tries to compile the debugger plugin.
I ran
./configure --help
to get all the flag options. The output is here
How do I get this to configure correctly so that it compiles? I need to build debugger plugin version 1.28 myself because apt only installs 1.24, which I think has a bug: it crashes when I run my code with the error:
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
CFLAGS is a case-sensitive environment variable, so you should set it (in upper case) before running configure rather than passing a lower-case cflags=... argument on the command line. This variant:
$ export CFLAGS=-I/home/pi/Desktop/geany-1.28/src
$ ./configure
leaves CFLAGS set in the current shell until you exit it, while this:
$ CFLAGS=-I/home/pi/Desktop/geany-1.28/src ./configure
sets the variable only for that single command, i.e. configure.
Some other issues:
You do not need sudo to run configure and make. It is also unnecessary for make install if you set the install prefix to a path you have permission to write to (see the sketch after this list).
Does the plugin's build system also build all of its dependencies? If not, you may face linker errors a bit later.
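A minimal sketch of an unprivileged build, using the asker's header path and an assumed install prefix of $HOME/geany-plugins:
$ CFLAGS=-I/home/pi/Desktop/geany-1.28/src ./configure --prefix=$HOME/geany-plugins
$ make
$ make install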
Update:
I have tried to build the debugger plugin and got rid of your error. It seems that the vte.h shipped with Geany is its own internal header, while the plugin requires the full-featured file from the VTE library. So I just installed vte and vte-devel from the repos. Nevertheless, I then got some other, unrelated errors coming from GLib, so I will not continue my attempts to build all this right now. I hope this effort is helpful at least a little.
As stated in this answer, Geany's vte.h is not the file you are looking for. Install the libvte(-dev) package on your system and rerun configure.
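A short sketch of that route; the package name libvte-dev matches Debian/Raspbian and may differ on other distributions:
$ sudo apt-get install libvte-dev
$ ./configure
$ make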
Just for the record: the vte.h in Geany is a dummy that allows Geany to dynamically enable or disable VTE support depending on whether VTE is installed on the system.

tput: unknown terminal "xterm-256color"

I'm running OS X 10.10.5. I'm getting an error trying to open a terminal:
tput: unknown terminal "xterm-256color"
This is obviously a missing termcap entry.
$ port list ncurses
ncurses #6.0 devel/ncurses
Any ideas how to install 'ncurses-term' on OS X?
$ sudo port install ncurses-term
Password:
Error: Port ncurses-term not found
The problem was with an Anaconda package:
https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/XKMFYqM12Vg
It appears there is some problem with an earlier version of the ncurses package that interferes with terminfo. The fix was:
conda install -c r ncurses
Notwithstanding the existence of bloated/monolithic packages on Linux, the maintainers of ncurses packages often split the roughly 7MB of terminfo data into "base" and "term" chunks (and separate it from the library). The MacPorts maintainer for ncurses has not done this: the terminal database is part of the "ncurses" port itself. For instance, I see this from
port contents ncurses @6.0_0+universal
under /opt:
/opt/local/share/terminfo/73/screen.xterm-256color
Also there is a system (non-port) copy here:
/usr/share/terminfo/78/xterm-256color
Applications linked with ncurses will generally use one or the other, depending on whether they are linked with the port- or system-library. However, ncurses can be told to look some other place by setting the TERMINFO variable. If you happen to have copied some customization from another machine into your .bashrc, that could have set TERMINFO.
By itself, tput gives no clue where it is looking for a terminal entry. You can check the output from env to see if TERMINFO is set. The infocmp utility can show where it looks (since late 2011), using the -D option, e.g.,
$ infocmp -D
/usr/local/ncurses/share/terminfo
/usr/share/terminfo
/opt/local/share/terminfo
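For example, to check whether TERMINFO is set in your environment and whether the system copy mentioned above is present:
$ env | grep TERMINFO
$ echo $TERM
$ ls /usr/share/terminfo/78/xterm-256color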
By the way, OSX does not (barring some specialized ports) use termcap as such. It uses terminfo, as part of some given release of ncurses (see for example the manual page for tgetent).

Linux binary can't find shared library, but works while running in strace

(Note: the names of the binary and the library below are obfuscated to protect the innocent. ;-) The app is proprietary and under NDA, but the behavior probably does not depend on it.)
I have a Linux binary which prints the following error when run:
binary: error while loading shared libraries: libshared.so: cannot open shared object file: No such file or directory
This is confusing on its own, since libshared.so is on the LD_LIBRARY_PATH. However:
The library is found correctly when running ldd binary (i.e., the ldd output points to the file location)
The library is found correctly when running strace binary, so that the program manages to print its usage information!
I have never seen an application which behaves differently when run on its own vs in strace, but I figure maybe someone else has seen this happen before? Any ideas how to resolve this?
I don't have the source so I can't rebuild. Running the app in production under strace is probably a non-starter. The OS is RHEL 6.2.
(Old question, but hopefully this will help somebody else)
On newer Linux installations, LD_LIBRARY_PATH is ignored by the standard runtime linker for programs with the SUID bit set. It appears that strace, gdb, and friends behave differently: they effectively run the program without its elevated privileges, so LD_LIBRARY_PATH is honored there.
For suid programs, all libraries must be found in the system library cache. Check (as root) whether your "missing" library is present using
ldconfig -p | grep <my_library_name>
and, if anything is missing, add its directory to /etc/ld.so.conf or to a new file under /etc/ld.so.conf.d/ as appropriate, and then rebuild the cache using
ldconfig -v
Or remove the SUID bit if it's not required, of course.
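A minimal sketch of those checks, using the obfuscated names from the question and an assumed library directory of /opt/app/lib (adjust to your actual layout):
$ ls -l ./binary                        # an 's' in the permission bits means SUID/SGID is set
$ ldconfig -p | grep libshared          # is the library already in the loader cache?
$ echo '/opt/app/lib' | sudo tee /etc/ld.so.conf.d/app.conf
$ sudo ldconfig -v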
This really helped me a lot!
I was having a similar problem where libraries were not being picked up from LD_LIBRARY_PATH, even though ldd showed all of them as satisfied; under strace, however, everything worked. In my case the culprit was SGID (the set-group-ID bit) being on: when the files were installed, the sysadmin did a recursive chmod that set it on everything, including the executables. Once the SGID bit was removed from the executables, the libraries were found without strace and everything worked as it should using LD_LIBRARY_PATH.
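A sketch of how to spot and clear that bit; the /opt/app directory is an assumption, and "binary" is the obfuscated name from the question:
$ find /opt/app -type f -perm -2000     # list files with the set-group-ID bit set
$ chmod g-s /opt/app/bin/binary         # clear it on the affected executable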

How do I resolve the procedure entry point _impure_ptr error in Cygwin/OpenCOBOL?

Whenever I try to run my compiled COBOL .exe file, I get this error:
fileName.exe Entry Point Not Found
The procedure entry point _impure_ptr could not be located in the dynamic link library cygwin1.dll
I am using OpenCOBOL and Cygwin version 1.7.15. Thanks.
You'll need to specify the proper path for the command below, but Cygwin seems pretty persnickety about entry point addresses after updates. The system includes a rebaseall command to help fix this problem. Most times I've witnessed it, it was after a setup.exe pass while the Cygwin system was still active (perhaps only in the background and not visible).
C:\Users\btiffin\cygwin\bin\dash -c '/usr/bin/rebaseall'
Run that from a Windows CMD shell while Cygwin isn't active, say after a clean boot and before starting the Cygwin shell (basically, cygwin1.dll must not be in use). You'll need to use the proper Windows path to dash for your particular install. Google "Cygwin rebase" for detailed articles.
I had a similar error message after upgrading from cygwin version 1.5 to 1.7. I solved it by completely removing and reinstalling 1.7 from scratch. I was told there might have been a problem with multiple versions of dlls.
