I am running Vivado in TCL mode under cygwin and noticed that I do not get any output in return to some commands I enter.
The commands which do not return anything seem to be non-builtins or commands which require OS interaction, as far as I can tell.
Consider the following example:
$ vivado -mode tcl
puts HelloTcl
****** Vivado v2015.4.2 (64-bit)
**** SW Build 1494164 on Fri Feb 26 04:18:56 MST 2016
**** IP Build 1491208 on Wed Feb 24 03:25:39 MST 2016
** Copyright 1986-2015 Xilinx, Inc. All Rights Reserved.
HelloTcl
puts 2
2
expr 1 + 2
puts 5
5
help synth_design
read_vhdl
ERROR: [Common 17-163] Missing value for option 'files', please type 'read_vhdl -help' for usage info.
package require Tcl
pwd
exit
exit
INFO: [Common 17-206] Exiting Vivado at Fri Jul 14 13:44:28 2016...
The commands which did not return the expected output are expr 1 + 2, help synth_design and pwd (and possibly package require Tcl).
The situation is the same with the "normal" tclsh.
Can anyone help me understand the reason for this behavior?
My OS is Win7 Pro 64bit. Everything works fine with cmd or Powershell. The behavior is also as expected when running Vivado in a terminal under Linux.
It would seem that Vivado only writes values out when you explicitly ask for them, unlike a standard interactive tclsh, which also echoes the result of each command (provided it isn't the empty string). You need to write an explicit puts […].
puts [expr 1 + 2]
puts [pwd]
As long as you know about it, I guess it's not too big a deal. Just a bit annoying.
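For comparison (this is an analogy, not anything Vivado-specific), Python shows the same split between interactive and non-interactive modes: the REPL echoes the value of each expression, while the non-interactive interpreter stays silent unless you print explicitly. A minimal sketch:

```python
import subprocess, sys

# Run two statements non-interactively: a bare expression and an explicit print.
script = "1 + 2\nprint(1 + 2)\n"
out = subprocess.run([sys.executable, "-c", script],
                     capture_output=True, text=True).stdout

# Only the explicit print produces output; the bare expression's value is
# discarded, just as Vivado's Tcl shell discards results unless you puts them.
print(out.strip())   # 3
```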
Related
I am facing a rather strange issue with the Oracle Pro*C precompiler on Linux: as part of our build process, we invoke the proc utility to generate .cxx files that later get compiled as C++ source files. This proc utility is called through a Python script, which captures both stdout and stderr and prints them both in case of a non-zero return code. Whenever the precompiler encounters a compilation error, it reports the errors on standard output (which get printed correctly) and returns a non-zero code.
However, in our CI environment the precompiler systematically crashes with a negative return code, and nothing gets printed on either standard output or standard error.
My ultimate goal is to understand and fix this crash, but I am unable to reproduce it outside our CI environment. I did, however, manage to trigger a different crash of the proc utility on a Linux VM by passing bogus include folders as arguments, and I noticed strange behaviour in my bash terminal which explains why I get no output at all from my Python script. When calling proc directly, the error message is printed correctly in my terminal:
$> /path/to/proc option_1=foo option2=bar
Pro*C/C++: Release 12.1.0.2.0 - Production on Tue Dec 4 08:13:31 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
System default option values taken from: /usr/lib/oracle/12.1/client64/lib/precomp/admin/pcscfg.cfg
Error at line 3, column 10 in file /usr/include/c++/8/x86_64-redhat-linux/bits/c++config.h
#include <bits/wordsize.h>
.........1
PCC-S-02015, unable to open include file
Error at line 39, column 10 in file /usr/include/c++/8/x86_64-redhat-linux/bits/os_defines.h
#include <features.h>
.........1
PCC-S-02015, unable to open include file
Syntax error at line 44, column 21, file /usr/include/c++/8/x86_64-redhat-linux/bits/os_defines.h:
Error at line 44, column 21 in file /usr/include/c++/8/x86_64-redhat-linux/bits/os_defines.h
#if __GLIBC_PREREQ(2,15) && defined(_GNU_SOURCE)
....................1
PCC-S-02201, Encountered the symbol "," when expecting one of the following:
)
Syntax error at line -1741187720, column 0, file p�:
INTERNAL ERROR: Failed assertion [PGE Code=90105]
Segmentation fault (core dumped)
$>
When redirecting standard output to a file, no error message gets printed except the last line about a core having been dumped. However, the file containing the redirected output is empty:
$> /path/to/proc option_1=foo option2=bar > test.txt
Segmentation fault (core dumped)
$> more test.txt
$> ls -al test.txt
-rw-r--r-- 1 me staff 0 3 déc 20:27 test.txt
$>
Also, piping the output to cat results in nothing being printed at all:
$> /path/to/proc option_1=foo option2=bar | cat
$>
Based on that I have 2 questions:
how is it possible that the output does not make it to a file when redirected?
how else could I attempt to capture it?
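A plausible explanation for the first question (an assumption about proc's internals, not something verified against its source) is stdio buffering: stdout is line-buffered when it is a terminal, but fully buffered when redirected to a file or pipe, and a SIGSEGV kills the process before the buffer is ever flushed. A minimal Python sketch of the effect:

```python
import subprocess, sys

# Child process: writes a line, then dies from a segfault-style signal
# before the (block-buffered) stdout buffer is flushed.
child = (
    "import os, signal\n"
    "print('error message')\n"                   # buffered: stdout is a pipe here
    "os.kill(os.getpid(), signal.SIGSEGV)\n"
)

res = subprocess.run([sys.executable, "-c", child], capture_output=True)
print(res.stdout)   # b'' -- the message never reached the pipe
```

A workaround consistent with this theory is to force line buffering or a pseudo-terminal, e.g. `stdbuf -oL /path/to/proc …` (works only if the program uses default C stdio buffering) or `script -qc '/path/to/proc …' /dev/null`, which makes stdout look like a tty.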
Tying a script to a specific interpreter via a so-called shebang line is a well-known practice on POSIX operating systems. For example, if the following script is executed (given sufficient file-system permissions), the operating system will launch the /bin/sh interpreter with the file name of the script as its first argument. Subsequently, the shell will execute the commands in the script skipping over the shebang line which it will treat as a comment.
#! /bin/sh
date -R
echo hello world
Possible output:
Sat, 01 Apr 2017 12:34:56 +0100
hello world
I used to believe that the interpreter (/bin/sh in this example) must be a native executable and cannot be a script itself that, in turn, would require yet another interpreter to be launched.
However, I went ahead and tried the following experiment nonetheless.
Using the following dumb shell saved as /tmp/interpreter.py, …
#! /usr/bin/python3
import sys
import subprocess
for script in sys.argv[1:]:
    with open(script) as istr:
        status = any(
            map(
                subprocess.call,
                map(
                    str.split,
                    filter(
                        lambda s: s and not s.startswith('#'),
                        map(str.strip, istr)
                    )
                )
            )
        )
    if status:
        sys.exit(status)
… and the following script saved as /tmp/script.xyz,
#! /tmp/interpreter.py
date -R
echo hello world
… I was able (after making both files executable) to execute script.xyz.
5gon12eder:/tmp> ls -l
total 8
-rwxr-x--- 1 5gon12eder 5gon12eder 493 Jun 19 01:01 interpreter.py
-rwxr-x--- 1 5gon12eder 5gon12eder 70 Jun 19 01:02 script.xyz
5gon12eder:/tmp> ./script.xyz
Mon, 19 Jun 2017 01:07:19 +0200
hello world
This surprised me. I was even able to launch script.xyz via another script.
So, what I am asking is this:
Is the behavior observed by my experiment portable?
Was the experiment even conducted correctly or are there situations where this doesn't work? How about different (Unix-like) operating systems?
If this is supposed to work, is it true that there is no observable difference between a native executable and an interpreted script as far as invocation is concerned?
New executables in Unix-like operating systems are started by the system call execve(2). The man page for execve includes:
Interpreter scripts
An interpreter script is a text file that has execute
permission enabled and whose first line is of the form:
#! interpreter [optional-arg]
The interpreter must be a valid pathname for an executable which
is not itself a script. If the filename argument of execve()
specifies an interpreter script, then interpreter will be invoked
with the following arguments:
interpreter [optional-arg] filename arg...
where arg... is the series of words pointed to by the argv
argument of execve().
For portable use, optional-arg should either be absent, or be
specified as a single word (i.e., it should not contain white
space); see NOTES below.
So within those constraints (Unix-like, optional-arg at most one word), yes, shebang scripts are portable. Read the man page for more details, including other differences in invocation between binary executables and scripts.
See boldfaced text below:
This mechanism allows scripts to be used in virtually any context
normal compiled programs can be, including as full system programs,
and even as interpreters of other scripts. As a caveat, though, some
early versions of kernel support limited the length of the interpreter
directive to roughly 32 characters (just 16 in its first
implementation), would fail to split the interpreter name from any
parameters in the directive, or had other quirks. Additionally, some
modern systems allow the entire mechanism to be constrained or
disabled for security purposes (for example, set-user-id support has
been disabled for scripts on many systems). -- WP
And this output from COLUMNS=75 man execve | grep -nA 23 "Interpreter scripts" | head -39 on an Ubuntu 17.04 box, particularly lines #186-#189, tells us what works on Linux (i.e. scripts can be interpreters, up to four levels deep):
166: Interpreter scripts
167- An interpreter script is a text file that has execute permission
168- enabled and whose first line is of the form:
169-
170- #! interpreter [optional-arg]
171-
172- The interpreter must be a valid pathname for an executable file.
173- If the filename argument of execve() specifies an interpreter
174- script, then interpreter will be invoked with the following argu‐
175- ments:
176-
177- interpreter [optional-arg] filename arg...
178-
179- where arg... is the series of words pointed to by the argv argu‐
180- ment of execve(), starting at argv[1].
181-
182- For portable use, optional-arg should either be absent, or be
183- specified as a single word (i.e., it should not contain white
184- space); see NOTES below.
185-
186- Since Linux 2.6.28, the kernel permits the interpreter of a script
187- to itself be a script. This permission is recursive, up to a
188- limit of four recursions, so that the interpreter may be a script
189- which is interpreted by a script, and so on.
--
343: Interpreter scripts
344- A maximum line length of 127 characters is allowed for the first
345- line in an interpreter scripts.
346-
347- The semantics of the optional-arg argument of an interpreter
348- script vary across implementations. On Linux, the entire string
349- following the interpreter name is passed as a single argument to
350- the interpreter, and this string can include white space. How‐
351- ever, behavior differs on some other systems. Some systems use
352- the first white space to terminate optional-arg. On some systems,
353- an interpreter script can have multiple arguments, and white spa‐
354- ces in optional-arg are used to delimit the arguments.
355-
356- Linux ignores the set-user-ID and set-group-ID bits on scripts.
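The Linux "entire string as a single argument" behaviour quoted above is easy to observe with a throwaway interpreter. This sketch is Linux-specific (the file names are made up, and because the "interpreter" is itself a Python script, it also relies on the nested-shebang support from Linux 2.6.28 onwards):

```python
import os, subprocess, sys, tempfile

with tempfile.TemporaryDirectory() as d:
    # A tiny "interpreter" that just prints the argv it was invoked with.
    printer = os.path.join(d, "printargs.py")
    with open(printer, "w") as f:
        f.write("#!%s\nimport sys\nprint(sys.argv[1:])\n" % sys.executable)

    # A script whose shebang passes the optional-arg "one two".
    demo = os.path.join(d, "demo")
    with open(demo, "w") as f:
        f.write("#!%s one two\n" % printer)

    for p in (printer, demo):
        os.chmod(p, 0o755)

    out = subprocess.run([demo], capture_output=True, text=True).stdout
    # On Linux, "one two" arrives as ONE argument, followed by the script path.
    print(out.strip())
```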
From the Solaris 11 exec(2) man page:
An interpreter file begins with a line of the form
#! pathname [arg]
where pathname is the path of the interpreter, and arg is an
optional argument. When an interpreter file is executed, the
system invokes the specified interpreter. The pathname
specified in the interpreter file is passed as arg0 to the
interpreter. If arg was specified in the interpreter file,
it is passed as arg1 to the interpreter. The remaining
arguments to the interpreter are arg0 through argn of the
originally exec'd file. The interpreter named by pathname
must not be an interpreter file.
As the last sentence states, chaining interpreters is not supported at all on Solaris; trying to do so results in the last non-interpreted interpreter (such as /usr/bin/python3) interpreting the first script (such as /tmp/script.xyz, so the final command line becomes /usr/bin/python3 /tmp/script.xyz), without chaining.
So chaining script interpreters is not portable at all.
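The Linux side of the experiment can be reproduced with a self-contained sketch (all paths here are temporary and made up; per the quoted man pages, this should print the payload on Linux ≥ 2.6.28 and misbehave on Solaris):

```python
import os, subprocess, sys, tempfile

with tempfile.TemporaryDirectory() as d:
    # "interp": a script interpreter that echoes non-comment lines of its script.
    interp = os.path.join(d, "interp.py")
    with open(interp, "w") as f:
        f.write("#!%s\n"
                "import sys\n"
                "for line in open(sys.argv[1]):\n"
                "    if not line.startswith('#'):\n"
                "        print(line.strip())\n" % sys.executable)

    # "script.xyz": its interpreter is itself a script.
    script = os.path.join(d, "script.xyz")
    with open(script, "w") as f:
        f.write("#!%s\nhello world\n" % interp)

    for p in (interp, script):
        os.chmod(p, 0o755)

    # On Linux the kernel resolves the shebang chain and the payload is echoed.
    print(subprocess.run([script], capture_output=True, text=True).stdout.strip())
```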
I have a strange problem: I cannot type or copy the percent sign in my bash...
I checked ~/.bashrc, /etc/profile (and the files in /etc/profile.d). I also tried "sudo bash", but it is still not possible to type "%". The percent sign works in "sh"...
Any suggestions?
uname -a
Linux 3.2.0-65-generic #99-Ubuntu SMP Fri Jul 4 21:03:29 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
BTW: Question moved to: https://superuser.com/questions/890645/percent-sign-in-bash-is-not-typeable
A workaround is to use the ASCII value 37: press and hold the Alt key, enter 37 on your numeric keypad, and release the Alt key.
A solution is to check the keyboard mapping. Hold Shift and try all the numbers. On my keyboard I get
!@#$%^&*()
Do you have an old keyboard somewhere that you could try?
I had a similar problem with the mapping of the backspace key. Instead of deleting the previous character while editing files, "^?" would appear. I ran "stty sane" on the command line and it was reset. Maybe that helps.
I am using Cygwin in Console2 with the following PS1
export PS1='\[\e]2;\w\a\e[1;32m\e[40m\n\w\n\d - \# > \[\e[0;00m\]'
The prompt has the correct text content, but all the colors are ignored.
~/wd
Tue Mar 18 - 01:14 PM >
Screenshot showing Console2:
When I use mintty, the colours are perfect.
TERM is set the same in both Console2 and mintty:
Tue Mar 18 - 06:29 PM > env | grep TERM
TERM=cygwin
TERMCAP=SC|screen|VT 100/ANSI X3.64 virtual terminal:\
Your screenshots have not shown up, so I'm not sure exactly what you mean.
But I believe it is a Cygwin feature (bug): Cygwin assumes that ANSI escape sequences are not available in a Windows terminal (which is true for Console2, but of course not if you are using ANSICON or ConEmu). That means Cygwin processes all ANSI sequences internally and does not send them to the terminal. So if any problems occur, they are problems in Cygwin's implementation.
I started sicstus from my Cygwin prompt on my Windows7 64bit installation, and created a prolog program. Then I saved it using the following command that created the file "test.sav" in my current folder.
save_program(test).
When I try to run this file, I get a cryptic error message:
$ ./test.sav
! Existence error in argument 1 of restore/1
! file '%0.bat' does not exist
! goal: restore('%0.bat')
SICStus 4.2.0 (x86-win32-nt-4): Mon Mar 7 20:21:12 WEST 2011
Licensed to SP4idi.ntnu.no
| ?- halt.
./test.sav: line 2: $'\032\r': command not found
./test.sav: line 8: x??xU?u/:?HBa?m[F?????ld?l???l?????./test.sav: line 9: syntax error near unexpected token `)'
./test.sav: line 9: `}?????????8?h????)}???C?qa? ??.?????????/F??7W???yE?lL}>}L???????"???o%"?aac|S[G?????"W????'??K?1Q???????H??M?4??=???bE?
???t[<??????I??\)T?*????????N+?4??#h? ?'?{?1J?*????F?Q??q?<B?5#????l?(s?x?`r?????b?5??%:#I?Eb?#????1-???|a????? ?D??G?)??O?
When I look at the head of the file, this is what I get:
$ head ./test.sav
sicstus-4.2.0 -r %0.bat -a %1 %2 %3 %4 %5 %6 %7 %8 %9
# META_INFO 1
# FILE: "c:/eclipse/workspace_prolog/busstuc/test.sav"
# FR: "timeout"
# META_INFO END
version=4 archmask=0x2c81a
x??xU?u/:?HBa?m[F?????ld?l???l?????head: write error: Permission denied
head: write error
I also tried loading the file in a different manner:
$ sicstus -l ./test.sav
% loading c:/eclipse/workspace_prolog/busstuc/test.sav...
% c:/eclipse/workspace_prolog/busstuc/test.sav loaded, 0 msec 104 bytes
! Consistency error: memory and saved_state are inconsistent
! type 32-bit,BDD,GAUGE,ALL_BUT_PROLOG, saved state, type 32-bit,BDD,GAUGE, emulator
! goal: ensure_loaded(user:'./test.sav')
SICStus 4.2.0 (x86-win32-nt-4): Mon Mar 7 20:21:12 WEST 2011
Licensed to SP4idi.ntnu.no
| ?- halt.
Can someone please explain to me why this is not working?
Am I doing something wrong here?
Thanks!
EDIT: I changed the filename from test.sav to test.bat following Per's suggestion. This happens:
C:\eclipse\workspace_prolog\BussTUC>sicstus-4.2.0 -r C:\eclipse\workspace_prolog\BussTUC\test.bat.bat -a
! Existence error in argument 1 of restore/1
! file 'C:\\eclipse\\workspace_prolog\\BussTUC\\test.bat.bat' does not exist
! goal: restore('C:\\eclipse\\workspace_prolog\\BussTUC\\test.bat.bat')
SICStus 4.2.0 (x86-win32-nt-4): Mon Mar 7 20:21:12 WEST 2011
Licensed to SP4idi.ntnu.no
| ?- halt.
C:\eclipse\workspace_prolog\BussTUC># META_INFO 1
'#' is not recognized as an internal or external command,
operable program or batch file.
C:\eclipse\workspace_prolog\BussTUC>The system cannot write to the specified device.
The system cannot write to the specified device.
| The system cannot write to the specified device.
This undocumented and unsupported feature apparently never worked on Windows.
Instead you can use one of the pre-built runtime systems that loads a main.sav from the folder containing the executable. E.g. save your test.sav as main.sav instead and place it alongside sprti.exe in a folder that contains a proper folder structure for SICStus, as described in the manual, in the section Runtime Systems on Windows Target Machines.
The most common solution is to use the spld.exe tool and build a self-contained executable, but that requires the corresponding C compiler from Microsoft.
(I am one of the SICStus Prolog developers)