I'm talking about warnings like this one:
warning: ds segment base generated, but will be ignored in 64-bit mode
I know that the -w option can be used to suppress warnings in NASM, but nothing in the list of warning classes shown by the help output fits this type of warning. And -w-all gets rid of everything except this one.
Any way of doing this?
Since that particular warning doesn't seem to be one of the suppressible ones (as you've stated), I'd just use sed as a post-processing step, piping the output through something like this (note that NASM writes warnings to stderr, so redirect it into the pipe, e.g. with 2>&1):
sed '/^warning: .. segment base generated, but will be ignored in 64-bit mode$/d'
Even if you're using nasm on Windows, you can still get the GNUWin32 port of sed to do the job.
And before you complain about this being a kludge, you should know that some of my greatest achievements were kludges, and many of them have outlived my more well-designed code.
:-)
I'm not really sure why "cout" and "endl" are not being recognized. Any help would be great!
The error is:
and the code is:
The fact that iostream has a red squiggle underneath it makes it a near certainty that something is wrong with your environment (such as compiling with a C compiler rather than a C++ one).
You need to fix that, since cout and endl are defined in that header. I'd start by hovering the mouse over the iostream text and see what the tooltip shows you.
If it cannot find the file iostream then you're either not using a C++ compiler, or your environment is severely damaged.
Either way, it's not a correct C++ environment.
Things to look in to are (to start with):
Examine the file extension. Using *.c instead of *.cpp may cause a C compiler to be used rather than a C++ one, for example.
Examine the output of your compilation, if available. You will hopefully be able to tell which compiler is being used.
If you are sure you're using a C++ compiler:
You might have a funny character in your iostream line. Delete that line entirely and retype it (don't just edit it, as that may not get rid of the funny character).
Try a different header (like cstdlib) to see if it has the same problem.
A last-resort solution would be reinstalling your development environment, in case things are so damaged as to be otherwise unrecoverable.
I am trying to debug an application that is cross-compiled on a Windows host for a Linux target.
The problem:
Because the initial compilation happens on Windows, the source file paths stored in the binary are of the form C:\Users\foo\project\.... On the Linux target I have put the source files under /home/foo/project/.... By default gdb does not find the source files because of the different paths.
What I have tried so far:
Use the "directory" command in gdb to give the exact path of each .c source file on the target Linux system where the app is being debugged. This works, but unfortunately there are literally hundreds of files, so this solution is unrealistic.
Use the set substitute-path C:\\Users\\foo\\project /home/foo/project command to have gdb substitute all prefixes. Note that the \\ seems necessary so that show substitute-path registers the right string. This unfortunately does not work. My guess is that the substitute-path command does not handle MS-DOS-style paths.
Tried separating the debug info out into a separate .debug file (see How to generate gcc debug symbol outside the build target?) and then using debugedit to change the paths with the command debugedit --base-dir=C:\Users\foo --dest-dir=/home/foo project.debug. Unfortunately this does not work either. debugedit seems to work fine if the existing path is Unix/Linux-like, but it doesn't seem to work with MS-DOS-style paths.
I have looked around Stack Overflow, and while there are similar topics, I can't find anything that helps me. I would really appreciate any suggestions. I realize that cross-compiling from Windows is a rather roundabout way of doing this, but I can't avoid it for the moment.
Thanks
Although it's a rather old question, I encountered the same problem. I managed to resolve it by using sed on the binary executable... (yeah, a bit hackish, but I did not find another way). With sed I replaced the symbol paths right inside the executable; the trick is that the new path's length must be the same as the old one.
sed -i "s#C:/srcpath#/srcpath/.#g" ./executable
Be sure to make the new path the same length, otherwise the executable will break.
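If you have to do this for several prefixes, here is a tiny helper (a sketch of mine, not part of the original answer; the paths are just the ones from the example) that pads the replacement so the lengths match before you feed it to sed:
import os

# Pad the new prefix with "/" and dots so it is exactly as long as the
# old one; replacing with a string of a different length would corrupt
# the ELF sections.
old_prefix = "C:/srcpath"
new_prefix = "/srcpath"
padded = new_prefix + "/" + "." * (len(old_prefix) - len(new_prefix) - 1)
assert len(padded) == len(old_prefix)
print(padded)  # -> /srcpath/.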
I also have this same problem. Your option 1 isn't as bad as you think, because you can script the creation of all the 'directory' commands with something like this Python code:
import os
import re

def get_directory_paths():
    return_array = list()
    unix_path = os.path.join('my', 'unix', 'path')
    # Walk the source tree and emit one gdb "directory" command per directory.
    for root, dirs, files in os.walk(unix_path):
        for dir in dirs:
            full_unix_path = os.path.join(root, dir)
            # Escape whitespace so gdb treats each path as a single argument.
            escaped_unix_path = re.sub(r"\s", r"\\ ", full_unix_path)
            return_array.insert(0, "directory " + escaped_unix_path)
    return '\n'.join(return_array)
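A minimal way to use it (a sketch; the output file name is just a placeholder) is to dump the commands to a file and then load that file inside gdb with the source command:
# Hypothetical driver: write the generated "directory" commands to a
# text file, then run "source gdb_dirs.txt" from the gdb prompt.
if __name__ == "__main__":
    with open("gdb_dirs.txt", "w") as f:
        f.write(get_directory_paths() + "\n")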
The downside is that if you have two source files with the same name in different directories, I don't think gdb can pick the right one. That worries me, but in my particular situation, I think I'm safe.
For option 2 (which I suspect would also fix the aliasing issue from #1), I think the problem is that the Windows-style from path never ends at what gdb on Linux considers a directory separator, so the substitutions aren't applied:
To avoid unexpected substitution results, a rule is applied only if the from part of the directory name ends at a directory separator. For instance, a rule substituting /usr/source into /mnt/cross will be applied to /usr/source/foo-1.0 but not to /usr/sourceware/foo-2.0. And because the substitution is applied only at the beginning of the directory name, this rule will not be applied to /root/usr/source/baz.c either. (from https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html#index-set-substitute_002dpath)
I haven't tried anything like your #3, and I also considered something like @dragn's suggestion, but in my situation the paths are not even close to the same length, so that would be an issue.
I think I'm stuck with #1 and a script, but if anyone has other suggestions, I'm interested in other options :-)
Not loading VDSO.so is one of the famous bugs you encounter while using gdb and glibc >2.2.
I found that it was planned to be fixed in gdb 7.5.1, but it wasn't.
Okay, I found a workaround here, but I didn't understand how to apply it.
OS: Arch Linux
IDE: Qt Creator 3.0.82
Compiler: GCC 4.8.2
NB: I am not sure if I am breaking the rules by including the link above.
Not loading VDSO.so is one of the famous bugs you encounter while using gdb and glibc >2.2.
No, it's not. The problem here is simply a useless warning, which you can safely ignore.
I found a workaround here, but I didn't understand how to apply it.
You didn't find a "workaround". You found a patch to GDB, which disables the warning.
To apply it, use the patch command, and then build your own GDB. But it is much simpler to just ignore the warning in the first place.
For anyone who (like me) just wants gdb to shut up about missing symbols, try adding this to your ~/.gdbinit (but see caveats below):
set logging redirect on
set logging file /dev/null
python
def on_new_objfile(e):
    gdb.execute("set logging off")
    # print "new objfile:", e.new_objfile.filename
    if e.new_objfile.filename[:19] == "system-supplied DSO":
        gdb.execute("set logging on")  # hide the inevitable error message
gdb.events.new_objfile.connect(on_new_objfile)
end
Caveats:
Monopolizes the set logging interface; if you want to use logging you'll need to change it to save the previous logging settings.
Hard-codes "system-supplied DSO"; might be brittle wrt new kernel or gdb versions.
It assumes at least one objfile will be loaded after the vdso, to re-enable output. I'd be very interested if anyone with better knowledge of gdb internals could point out the actual after-symbol-load-has-failed hook; for now this risks leaving output disabled when the program starts, if the vdso is the last objfile loaded.
How can you set a string to be used instead of standard input? For example, when running the latex command in Unix it will always find some trivial errors; to skip through all the errors you have to enter "r" at the prompt. (I now know that with latex specifically you can use -interaction nonstopmode, but is there a more general way to do this?)
Is there any way to specify that this should be done automatically? I tried redirecting standard input to read from a file containing "r\n", but this didn't work.
How can I achieve this?
Not all applications that need input can be satisfied with their stdin redirected.
This is because the app can call the isatty C function (if written in C, or some equivalent call in other languages) to determine whether its input comes from a tty or not.
In such a situation there is a valuable tool to use: expect.
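As a small illustration of the isatty point (this sketch is mine, not part of the original answer; Python is used here only for brevity), a program can tell whether its stdin has been redirected:
import os
import sys

# Behaves differently depending on whether stdin is a real terminal or
# a pipe/file redirect; programs such as latex can make a similar check.
if os.isatty(sys.stdin.fileno()):
    print("stdin is a terminal")
else:
    print("stdin is redirected")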
latex --interaction=MODE
where MODE is one of:
errorstopmode: stop at every error and ask for input
scrollmode: scroll over non-fatal errors, but stop at fatal errors (such as "file not found")
nonstopmode: scroll over non-fatal errors, abort at fatal errors
batchmode: like nonstopmode, but doesn't show messages at the terminal
For interactive use, errorstopmode (the default) is fine, for non-interactive use, nonstopmode and batchmode are better.
But beware, there are no trivial errors: all errors must be fixed, and all warnings should be fixed if possible.
Redirecting stdin works without problems here:
/tmp $ tex '\undefined\end' <<< r
This is TeX, Version 3.1415926 (TeX Live 2010)
! Undefined control sequence.
<*> \undefined
\end
? OK, entering \nonstopmode...
(see the transcript file for additional information)
No pages of output.
Transcript written on texput.log.
You've got two plausible answers detailing the way to handle Latex specifically. One comment indicates that you need a more general answer.
Most usually, the tool recommended for the general solution is 'expect'. It arranges for the command to have a pseudo-tty connected for input and output, and the command interacts with the pseudo-tty just as it would your real terminal. You tell 'expect' to send certain strings and expect certain other strings, with conditional code and regular expressions to help you do so.
Expect is built using Tcl/Tk. There are alternative implementations for other languages; Perl has an Expect module, for example.
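For instance, here is a rough sketch using Python's pexpect module (one of those alternative implementations); the file name and the prompt pattern are assumptions, so adjust them for your own run:
import pexpect

# Drive latex the way you would by hand: every time it stops at its
# "? " error prompt, answer "r" (switch to \nonstopmode), until it exits.
child = pexpect.spawn("latex paper.tex", encoding="utf-8")
while True:
    index = child.expect([r"\? ", pexpect.EOF])
    if index == 1:          # latex finished or aborted
        break
    child.sendline("r")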
From the man page:
-interaction mode
Sets the interaction mode. The mode can be either batchmode, nonstopmode, scrollmode, and errorstopmode. The meaning of these modes is the same as that of the corresponding \commands.
Looks like -interaction nonstopmode might help you.
It is a simple question to which I am not able to find the answer:
Given a LaTeX command, how do I find out what package(s) it belongs to or comes from?
For example, given the \qquad horizontal spacing command, what package does it come from? Especially troublesome since it works without including any package!
Given a LaTeX command, how do I find out what package(s) it belongs to or comes from?
Consult your references:
If it's in the index to the TeXbook, it's inherited from TeX, the engine that drives LaTeX.
Otherwise, if it's in the index to the LaTeX manual, it's probably defined in latex.ltx or in one of the standard class files, not in a package.
Otherwise, if it's in the index to The LaTeX Companion, the page number probably tells you what package it's from.
Otherwise, you could do some fancy grepping on the results of find /usr/share/texmf -name '*.sty', but be prepared for a painful exercise (a rough sketch of this follows below the list).
Or, you could ask on http://stackoverflow.com. But then some idiot will respond by asking why you want to know...
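Here is the grepping sketch promised above, written as a small Python script rather than a shell one-liner (the texmf path comes from the answer above, but the regular expression is only a rough guess, since packages define macros in many different ways, so treat hits and misses with suspicion):
import os
import re

def find_defining_packages(macro, texmf="/usr/share/texmf"):
    # Look for \def\macro, \newcommand{\macro} or \DeclareRobustCommand\macro
    # inside every .sty file under the texmf tree.
    pattern = re.compile(r"\\(?:def|newcommand\*?|DeclareRobustCommand\*?)\s*\{?\\"
                         + re.escape(macro) + r"\b")
    hits = []
    for root, dirs, files in os.walk(texmf):
        for name in files:
            if name.endswith(".sty"):
                path = os.path.join(root, name)
                try:
                    with open(path, errors="ignore") as f:
                        if pattern.search(f.read()):
                            hits.append(path)
                except OSError:
                    pass
    return hits

print(find_defining_packages("qquad"))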
You can search http://www.ctan.org/tex-archive/info/symbols/comprehensive/ for that information and more.
Remember that LaTeX is a macro language on top of TeX, and its macros are ultimately built out of TeX, which doesn't need to be imported. \qquad is in that category.
As far as I know, there is no really good general answer to this. But there are a number of techniques you might try for any given command. In the case of \qquad, it's part of basic TeX. Remember that you can always use TeX in interactive mode:
$ tex '\show\qquad'
This is TeX, Version 3.141592 (Web2C 7.5.6)
> \qquad=macro:
->\hskip 2em\relax .
\show\qquad
? x
No pages of output.
Some macros are added by LaTeX on top of TeX, such as \begin:
$ tex '\show\begin'
This is TeX, Version 3.141592 (Web2C 7.5.6)
> \begin=undefined.
\show\begin
? x
No pages of output.
whereas
$ latex '\show\begin'
This is pdfTeXk, Version 3.141592-1.40.3 (Web2C 7.5.6)
%&-line parsing enabled.
entering extended mode
LaTeX2e
Babel and hyphenation patterns for english, usenglishmax, dumylang, noh
yphenation, greek, monogreek, ancientgreek, ibycus, pinyin, loaded.
> \begin=macro:
#1->\@ifundefined {#1}{\def \reserved@a {\@latex@error {Environment #1 undefine
d}\@eha }}{\def \reserved@a {\def \@currenvir {#1}\edef \@currenvline {\on@line
}\csname #1\endcsname }}\@ignorefalse \begingroup \@endpefalse \reserved@a .
\show\begin
? x
No pages of output.
Everything else comes from packages. If you really want to know which package a macro comes from (other than by googling or grepping your texmf tree), you can check after each package you load whether it's defined. Try putting this before any \usepackage commands:
\let\oldusepackage\usepackage
\renewcommand\usepackage[1]{
  \oldusepackage{#1}
  \ifcsname includegraphics\endcsname
    \message{^^Jincludegraphics is defined in #1^^J}
    \let\usepackage\oldusepackage
  \fi}
Then when you run latex on your .tex file, look for a line in the output that says includegraphics is defined in graphicx. It's not likely, but some devious packages might do bad things with \usepackage so there's a chance this might not work. Another alternative would be to simply define the command you're interested in before loading any packages:
\newcommand\includegraphics{}
Then you might get an error message when the package that defines the command is loading. This is actually less reliable than the former approach, since many packages use \def and \let to define their macros rather than \newcommand, bypassing the "already-defined" check. You could also just insert a check by hand in between each load: \ifcsname includegraphics\endcsname\message{^^Jdefined after graphicx^^J}\fi
Due to lack of reputation I cannot comment on Steve's answer, which was very helpful to me, but I would like to extend it a bit.
First, in his second approach (fiddling with usepackage) the case where usepackage has optional arguments is not dealt with. Secondly, packages are often loaded by other packages via RequirePackage which makes it hard to find the actual place of definition of a command. So my refinement of Steve's answer is:
\usepackage{xargs}
\let\oldusepackage\usepackage
\let\oldRequirePackage\RequirePackage
\renewcommandx{\usepackage}[3][1,3]{
  \oldusepackage[#1]{#2}[#3]
  \ifcsname includegraphics\endcsname
    \message{^^Jincludegraphics is defined in #2^^J}
    \let\usepackage\oldusepackage
    \let\RequirePackage\oldRequirePackage
  \fi}
\renewcommandx{\RequirePackage}[3][1,3]{
  \oldRequirePackage[#1]{#2}[#3]
  \ifcsname includegraphics\endcsname
    \message{^^Jincludegraphics is defined in #2^^J}
    \let\usepackage\oldusepackage
    \let\RequirePackage\oldRequirePackage
  \fi}
The xargs package is used here to get the unusual options of \usepackage right (the first and third parameters are optional).
Putting this directly after \documentclass should tell you where includegraphics is defined.