I have compiled some examples from svgalib, and the console shows:
Using EGA driver
svglib 1.4.3
Nothing more; it's like it's drawing somewhere, but I cannot see it.
This could be a very noob question about svgalib, but it could also be a configuration problem.
I also checked the virtual console it says it is drawing to (if I run from X); running from the console, it just stays there. I also put a sleep in the code.
example code :
#include <stdlib.h>
#include <unistd.h>
#include <vga.h>

int main(void)
{
    vga_init();
    vga_setmode(G320x200x256);
    vga_setcolor(4);
    vga_drawpixel(10, 10);
    sleep(5);
    vga_setmode(TEXT);
    return EXIT_SUCCESS;
}
compile with
gcc -o tut tut.c -lvga
So, do you have other SVGAlib applications working on your system? Such as svgatest, which may be in a separate distribution package (svgalib-bin or similar).
Have you configured svgalib for your system? A common location for the config file is /etc/vga/libvga.config, and reading man svgalib should give you more details.
I suspect that once you have SVGAlib working in general, the tutorial example program will work.
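If svgalib has never been configured on the machine, even a minimal config file may be enough to get the examples on screen. A sketch (the chipset VESA line is an assumption; pick the driver matching your card, per the svgalib man pages):

```
# /etc/vga/libvga.config (sketch)
# Generic fallback driver; replace with your card's chipset if known.
chipset VESA
```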
Install all the svgalib packages with your software manager.
Set the resolution to one your graphics screen supports,
e.g.: G1024x768x256
Set the pixel color to white (15).
My Linux Mint (MATE) 17.1, installed on hard disk, works fine.
Good luck!
I'm working on a VC++ app, and my job is to correct some bugs in it; the code was mainly written by a student last year.
It is a graphical application that uses both SFML and TGUI.
It targets 32-bit architectures, but actually it only works well on x64 computers.
On x64 absolutely everything works perfectly, but on 32-bit systems the text displayed using sf::Text and sf::Font just shows black blocks after the first call to the function that sets the sf::Text's string.
I know that code targeting 32-bit systems should work fine on x64, and that's why I am lost. I checked which system architecture I am targeting.
I am using compatible, 32-bit, up-to-date SFML and TGUI libraries.
I checked a billion times that I have the right DLLs, include and lib paths. The linker input properties list all the .lib files that both SFML and TGUI rely on.
I know that the Font is loaded, but it doesn't stay set on the Text, even though the variable containing the font isn't destroyed and I set the font on the text earlier.
This is how the font is loaded and associated with the text :
MainWindow.cpp :
Font& MainWindow::m_fontDisplayer = m_data.getFont_1();
//init displayer
m_data.getDisplayer().setFont(m_fontDisplayer);
m_data.setDisplayer("");
m_data.getDisplayer().setCharacterSize(54);
m_data.getDisplayer().setFillColor(Color(0, 0, 0, 255));
DataManager(m_data):
Font DataManager::m_font_1 = Font();
if (!m_font_1.loadFromFile("Ressources/font_1.ttf"))
{
printf("\nCould not find font_1.ttf font.");
}
void DataManager::setDisplayer(std::string s) {
    if (m_displayer.getFont() == NULL) {
        //m_displayer.setFont(m_font_1);
    }
    m_displayer.setString(s);
    m_displayer.setPosition(sf::Vector2f(320 - m_displayer.getGlobalBounds().width, 82));
}
This function is the one that made me say the font isn't set on the Text after some time (m_displayer is sf::Text), because I set a breakpoint at the commented line and it is hit even though setting the font is the first thing I do. Actually, that "if" statement was only there to hold the breakpoint; it doesn't fix the problem (it makes my app crash on 32-bit), and it would be too slow anyway because sf::Font operations are really heavy.
What could possibly cause that compatibility issue between the two architectures?
To add to what I said, I'm sure that the only difference between the systems I'm testing on is the architecture, not the Windows version. I tried multiple computers, and again, the problem only appears on 32-bit ones.
Thanks to anyone who can explain and help me understand the problem.
EDIT: I removed the text about TGUI issues, as I solved them.
My main requirement is to profile the anti-debugging check program below twice (once in the presence of a debugger and once without) to collect some information for run-time analysis, assuming only the binary is available.
#include <stdio.h>
#include <sys/ptrace.h>
int i_am_debugged()
{
if (ptrace(PTRACE_TRACEME, 0, 1, 0) < 0)
{
printf("Avoid debugging please\n");
return 1;
}
return 0;
}
int main()
{
if(i_am_debugged())
{
return 1;
}
printf("It's going well ! No debugging !\n");
return 0;
}
Currently I have written an Intel PIN tool for this, but because of the way PIN works I am unable to profile the debugger-attached run: the program always executes 'It's going well ! No debugging !'.
So, my question:
Is there anything I can do (attach a debugger and run the PIN tool, or something like that) to profile both types of runs using my PIN tool, or will some other type of profiling (e.g. binary translation) help me in this case?
I want to collect specific information about instructions, not just a call graph etc., and hence would like functionality similar to PIN's C++ programming interface.
A detailed answer would be great. Thanks.
Pin uses ptrace to inject itself into the application. This means that using gdb won't be possible when attempting to launch an application with Pin, and also that Pin won't successfully attach to an application that is being debugged.
My suggestion is to start Pin with the -pause_tool knob, and then attach gdb to the process. This will make the application's ptrace call return what you want.
I hope I understood what you want.
Intel PIN is NOT a debugger. It's more like a VM which instruments binary code (for x86/x64) on the fly and then executes the freshly instrumented code.
Because PIN is not open source, its internals are rather "secret" ;).
But it is clearly not a debugger.
If I understand you correctly, you want some sort of test suite which
runs your application twice, one time with a debugger attached and one time without?
Then you should probably use gdb.
Just start:
./a.out for the normal run
and e.g. (this one is a little hacky ;-) ) for the debugger run:
Create a file mygdbscript (passed to gdb via: gdb -x ./mygdbscript)
The content of this file is just:
# you probably don't want gdb to stop under ANY circumstances
# as this is an automatic check. So pass all signals to the application.
handle all nostop print pass
run
quit
Then run with gdb -x ./mygdbscript --args ./a.out
Hope this helps :)
I have compiled and installed the tiny_serial driver example from the book Linux Device Drivers by Greg Kroah-Hartman. I use the sources from https://github.com/duxing2007/ldd3-examples-3.x.git
The device node /dev/ttytiny0 is successfully created but I am having trouble reading anything from the device. Looking at the driver, it seems that I should be able to read a 't' character.
Running setserial and stty gets me the errors below:
root@brijesh-M11BB:~/ldd3-examples-3.x# setserial /dev/ttytiny0
/dev/ttytiny0, UART: unknown, Port: 0x0000, IRQ: 0
root@brijesh-M11BB:~/ldd3-examples-3.x# stty -F /dev/ttytiny0
stty: /dev/ttytiny0: Input/output error
Doing cat /dev/ttytiny0 reports a similar error. I also tried minicom -D /dev/ttytiny0, but the device is always shown as offline.
It seems I am missing something, can anyone please point out what I am missing?
This is happening with me on both Ubuntu 15.10 (3.19.x kernel) and older 2.6.28 kernel.
I have debugged it and found the root cause.
strace shows -EIO on the ttytiny0 device. The -EIO was generated in the driver's tty layer (-ENO_TTY_DEVICE). The reason for ENO_TTY_DEVICE was that port->type was set to 0 (i.e. unknown). Setting port->type = PORT_16550A before adding the uart driver resolved the issue.
While the previous answer was correct, it did not explain where to fix the code.
Note: I am using linux kernel 3.10
I implemented the one-line fix in the declaration of the uart_port struct, here is how:
static struct uart_port tiny_port = {
	.ops  = &tiny_ops,
	.type = PORT_16550A, /* THIS IS THE FIX */
};
I believe this may be a bug in the kernel code, because when the uart_port type is not set it defaults to PORT_UNKNOWN, and the port is then rejected.
While I understand this may be a protection, it seems more correct to be able to leave it at PORT_UNKNOWN rather than claim the type of another, unrelated device.
BONUS POINTS:
After fixing the PORT_UNKNOWN issue, if you get a kernel panic for dividing by 0, comment out the following lines from duxing2007's code:
baud = uart_get_baud_rate(port, new, old, 0, port->uartclk/16);
quot = uart_get_divisor(port, baud);
BONUS POINTS 2:
This link shows how a developer with a similar issue (Ira Snyder) implemented the tiny_exit() function, so that you can remove the kernel module without causing a hang.
I wrote a basic character driver for the BeagleBone which prints two messages at a 1-second interval, via a workqueue and a tasklet, using printk.
At first I built it as a module, generated the .ko file, loaded it using insmod, and the prints appear when viewed via dmesg.
Then I built it as a built-in driver, loaded the uImage, and after bootup checked dmesg. But there are no prints.
In the .config file:
CONFIG_MY_DRIVER=y
So I think it is taken as a built-in driver.
How can I confirm whether it is actually built into the final image? No error was reported while building.
Are there any additional steps to be done for loading a built-in driver?
Please pardon me if I went wrong on any basics. I am really new to Linux.
This means that you have the config symbol defined (in a Kconfig file) and enabled:
CONFIG_MY_DRIVER=y
But have you added it to a Makefile? It works like this: when the kernel builds an image, it takes all of these "CONFIG_*" directives and uses them to decide which source files from the Makefiles get built.
Example:
cat fs/ext2/Makefile
ext2-$(CONFIG_EXT2_FS_SECURITY) += xattr_security.o
cat fs/ext2/Kconfig
config EXT2_FS_SECURITY
bool "Ext2 Security Labels"
depends on EXT2_FS_XATTR
So in the example above, if your source file is xattr_security.c, you should get an xattr_security.o file in the fs/ext2 directory when this is built. You should also see your file being compiled during the build process.
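Concretely for the driver in the question, the wiring might look like this (a sketch; the drivers/char location and the my_driver.c file name are assumptions):

```
# drivers/char/Kconfig (sketch)
config MY_DRIVER
	bool "My BeagleBone workqueue/tasklet test driver"
	default n

# drivers/char/Makefile (sketch)
obj-$(CONFIG_MY_DRIVER) += my_driver.o
```

With CONFIG_MY_DRIVER=y, obj-$(CONFIG_MY_DRIVER) expands to obj-y, so my_driver.o is linked into the kernel image. Grepping the build log for my_driver.o (or checking that CONFIG_MY_DRIVER=y survives in the final .config) confirms it was built in.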
I use openjdk-1.6 on Linux platform (that's a requirement)
I need to play streaming audio. Based on this example http://www.java2s.com/Code/Java/Development-Class/PlayingStreamingSampledAudio.htm
I wrote something like this:
SourceDataLine.Info info = new DataLine.Info(....);
SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
line.open(stream.getFormat());
line.start();
But the problem is in the line
line.open(stream.getFormat());
The format used is supported by the system.
I get a LineUnavailableException.
But I have this exception only under Eclipse. When I create an executable jar and run it, everything is OK, no exceptions.
As far as I understand from googling and some experiments, the problem is in security restrictions. The executable jar runs under the current user and has access to the sound device; Eclipse somehow does not. I tried adding all system users to the audio group and tried running Eclipse as root... Nothing helped.
I'm not a guru in Linux or Eclipse. Does anybody know how to solve this problem, or at least how to change the security restrictions for Eclipse?
Any ideas would be highly appreciated!
Here is the exact example I have in code.
int sampleRate = 40000;
int sampleBits = 16;
SourceDataLine _line;
AudioFormat format = new AudioFormat(sampleRate,sampleBits,1,true, true);
SourceDataLine.Info info = new SourceDataLine.Info(SourceDataLine.class, format);
_line = (SourceDataLine)AudioSystem.getLine(info);
_line.open(format);
Because my printout of the properties was flagged as improperly formatted code, here is a link with the JRE properties list (under Eclipse):
https://docs.google.com/document/d/1ef4f8GvEprhWWrrA57B0p2w6h5qOio4Wjz0ycY9-_0Q/edit?usp=sharing
And here using the executable jar:
https://docs.google.com/document/d/1lxbPIa4YUwURCdUZsCNL1Iehq--OT2eu35Q0QHKonEg/edit?usp=sharing
To be honest, I didn't find any suspicious differences...