I have an LCD monitor that supports hsync from 30 to 82 kHz and vsync from 50 to 85 Hz. The lowest supported resolution is 640x350. I want to run it at 400x240. I think I need to edit the modedb structure in modedb.c, and I have just figured out what the fields require.
I ran cvt with the maximum vsync (85 Hz) and got a modeline like this:
Modeline "400x240_85.00" 10.50 400 416 448 496 240 243 253 256 -hsync +vsync
I used a calculator to work out the resulting hsync and vsync, and hsync comes to 21.17 kHz, way too low for this monitor.
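(From that modeline: hsync = pixel clock / horizontal total = 10.50 MHz / 496 ≈ 21.17 kHz, and vsync = hsync / vertical total = 21169 Hz / 256 ≈ 82.7 Hz, so the vertical rate is fine but the horizontal rate is far below the 30 kHz minimum.)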
Is there a way to get around this? I want to test how certain things work at that resolution, so even cheating the monitor by running it at, say, 800x480 (which would produce acceptable hsync and vsync rates) would be fine, as long as X and the applications on top of it treat it as 400x240.
"Sharp-VGA",
56, 800, 480,
33805,
84, 40,
35, 1,
80, 3,
0 | FB_SYNC_OE_ACT_HIGH,
FB_VMODE_NONINTERLACED,
0,
Try that one; it is from 2.6.19.2 with the Freescale patches.
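As a quick sanity check of those numbers: the pixclock field is in picoseconds per pixel, so the pixel clock is 1e12 / 33805 ≈ 29.58 MHz; the horizontal total is 800 + 84 + 40 + 80 = 1004 pixels, giving hsync ≈ 29.5 kHz, and the vertical total is 480 + 35 + 1 + 3 = 519 lines, giving a refresh of about 56.8 Hz, consistent with the 56 in the refresh field.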
Edit:
Actually, if you use fbset and the mode is present in /etc/fb.modes, you should be able to use this too:
mode "800x480"
geometry 800 480 800 480 16
timings 33805 90 50 35 1 80 3
accel false
rgba 5/11,6/5,5/0,0/0
endmode
e.g.:
fbset -n 800x480
# mode
# geometry <xres> <yres> <vxres> <vyres> <depth>
# timings <pixclock> <left> <right> <upper> <lower> <hslen> <vslen>
# options <value>
# rgba <red,green,blue,alpha>
# endmode
I'm using the Tabulate package to print data in table format. The output is sent to a webpage. With the default font everything works fine; however, upon changing the font family (e.g. Outfit from Google Fonts, or cursive), the columns stop being aligned. Are there any possible solutions?
Output with default font:
Strength:  16    Dmg:      50      Armor:    3.8     ShadowRes: 3.5%
Agility:   34    Spell:    183     FireRes:  5.1%    NatureRes: 6.1%
Intellect: 61    Critical: 3.4%    FrostRes: 6.3%    ArcaneRes: 3.8%
Output with the Google font (I can't really show the misalignment here because SO renders it in the default font):
Strength:  25    Dmg:      45      Armor:    3.1     ShadowRes: 3.2%
Agility:   20    Spell:    132     FireRes:  3.3%    NatureRes: 3.6%
Intellect: 44    Critical: 2.0%    FrostRes: 3.6%    ArcaneRes: 3.8%
Thanks in advance!
You need a monospace font to keep the spacing consistent; plain-text tables only line up when every character has the same width.
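A minimal sketch of the fix, assuming the table is dropped into an HTML page (the tablefmt value and inline style here are just one way to do it):
from tabulate import tabulate

rows = [["Strength:", 16, "Dmg:", 50],
        ["Agility:", 34, "Spell:", 183]]

# tabulate aligns columns by padding with spaces, so the page must
# render the text in a fixed-width font or the columns drift apart
table = tabulate(rows, tablefmt="plain")
html = "<pre style=\"font-family: monospace\">" + table + "</pre>"
print(html)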
I recently found a great short piece of code (in "Why the irrelevant code made a difference?") for obtaining console screen buffer info (which I include below) that replaces the huge code accompanying the standard CONSOLE_SCREEN_BUFFER_INFO() method (which I won't include here!).
import ctypes
import struct
print("xxx",end="") # I added this to show what the problem is
hstd = ctypes.windll.kernel32.GetStdHandle(-11) # STD_OUTPUT_HANDLE = -11
csbi = ctypes.create_string_buffer(22)
res = ctypes.windll.kernel32.GetConsoleScreenBufferInfo(hstd, csbi)
width, height, curx, cury, wattr, left, top, right, bottom, maxx, maxy = struct.unpack("hhhhHhhhhhh", csbi.raw)
# The following two lines are also added
print() # To bring the cursor to the next line before displaying the info
print(width, height, curx, cury, wattr, left, top, right, bottom, maxx, maxy) # Display what we got
Output:
80 250 0 7 7 0 0 79 24 80 43
This output is from the Windows 10 console, with the screen cleared before running the code. However, curx = 0 although it should be 3 (after printing "xxx"). The same phenomenon happens with the CONSOLE_SCREEN_BUFFER_INFO() method. Any idea what the problem is?
Also, any suggestion for a method of obtaining the current cursor position -- besides the curses library -- will be welcome!
You need to flush the print buffer if you don't output a linefeed:
print("xxx",end="",flush=True)
Then I get the correct curx=3 with your code:
xxx
130 9999 3 0 14 0 0 129 75 130 76
BTW, the "great" code is the original answer in the posted question. The "bitness" of HANDLE can break your code, and skipping the definition of .argtypes as a "shortcut" is the cause of most ctypes problems.
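For example, a minimal sketch of declaring the prototypes for the kernel32 calls used above (this is the standard ctypes pattern, not something specific to this answer):
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32

# Declaring argtypes/restype keeps HANDLE pointer-sized on 64-bit Python
# instead of being truncated to a C int (ctypes' default return type)
kernel32.GetStdHandle.argtypes = [wintypes.DWORD]
kernel32.GetStdHandle.restype = wintypes.HANDLE
kernel32.GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, ctypes.c_char_p]
kernel32.GetConsoleScreenBufferInfo.restype = wintypes.BOOL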
I am trying to get the RGB value of pixels from the TIFF image. So, what I did is:
import tifffile as tiff
a = tiff.imread("a.tif")
print (a.shape) #returns (1295, 1364, 4)
print(a) #returns [[[205 269 172 264]...[230 357 304 515]][[206 270 174 270] ... [140 208 183 286]]]
But we know pixel colour values range from 0 to 255 for RGB, so I don't understand what these arrays are returning: some values are bigger than 255, and why are there 4 values per pixel?
By the way, the array size is 1295x1364, i.e. the size of the image.
The normal reasons for a TIFF (or any other image) to be 4-bands are that it is:
RGBA, i.e. it contains Red, Green and Blue channels plus an alpha/transparency channel, or
CMYK, i.e. it contains Cyan, Magenta, Yellow and Black channels - this is most common in the print industry where "separations" are used in 4-colour printing, see here, or
that it is multi-band imagery, such as satellite images with Red, Green, Blue and Near Infra-red bands, e.g. Landsat MSS (Multi Spectral Scanner) or somesuch.
Note that some folks use TIFF files for topographic information, bathymetric information, microscopy and other purposes.
The likely reason for the values being greater than 255 is that it is 16-bit data, though it could be 10-bit, 12-bit, 32-bit, floats, doubles or something else.
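A quick way to confirm the bit depth from Python, assuming the same a.tif as in the question:
import tifffile as tiff
a = tiff.imread("a.tif")
print(a.dtype)  # e.g. uint16 for 16-bit samples, uint8 for 8-bit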
Without access to your image, it is not possible to say much more. With access to your image, you could use ImageMagick at the command-line to find out more:
magick identify -verbose YourImage.TIF
Sample Output
Image: YourImage.TIF
Format: TIFF (Tagged Image File Format)
Mime type: image/tiff
Class: DirectClass
Geometry: 1024x768+0+0
Units: PixelsPerInch
Colorspace: CMYK <--- check this field
Type: ColorSeparation <--- ... and this one
Endianess: LSB
Depth: 16-bit
Channel depth:
Cyan: 16-bit <--- ... and this
Magenta: 1-bit <--- ... this
Yellow: 16-bit <--- ... and this
Black: 16-bit
Channel statistics:
...
...
You can scale the values like this:
from tifffile import imread
import numpy as np
# Open image (tifffile already returns a numpy array)
img = imread('image.tif')
# Promote to float so the divisions below don't truncate
npimg = np.array(img, dtype=np.float64)
# Scale the 16-bit R, G and B bands down to the 0-255 range
npimg[:,:,0] /= 256
npimg[:,:,1] /= 256
npimg[:,:,2] /= 256
# Scale the fourth band to 0-1
npimg[:,:,3] /= 65535
print(np.mean(npimg[:,:,0]))
print(np.mean(npimg[:,:,1]))
print(np.mean(npimg[:,:,2]))
print(np.mean(npimg[:,:,3]))
I keep getting this error
OpenCV Error: Assertion failed (_img.rows * _img.cols == vecSize) in get, file /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/apps/traincascade/imagestorage.cpp, line 157
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-SviWsf/opencv-2.4.9.1+dfsg/apps/traincascade/imagestorage.cpp:157: error: (-215) _img.rows * _img.cols == vecSize in function get
Aborted (core dumped)
when running opencv_traincascade. I run with these arguments: opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1600 -numNeg 800 -numStages 10 -w 20 -h 20.
My project build is as follows:
workspace
|__bg.txt
|__data/ # where I plan to put cascade
|__info/
|__ # all samples
|__info.lst
|__jersey5050.jpg
|__neg/
|__ # neg images
|__opencv/
|__positives.vec
Before that, I ran opencv_createsamples -img jersey5050.jpg -bg bg.txt -info info/info.lst -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -num 1800
Not quite sure why I'm getting this error. The images are all converted to greyscale as well. The negs are sized at 100x100 and jersey5050.jpg is sized at 50x50. I saw someone with the same error on the OpenCV forums, and someone suggested deleting the backup .xml files that are created by OpenCV in case the training is "interrupted". I deleted those and nothing changed. Please help! I'm using Python 3 on Mac. I'm also running these commands on an Ubuntu server from DigitalOcean with 2 GB of RAM, but I don't think that's part of the problem.
EDIT
Forgot to mention: after the opencv_createsamples command, I then ran opencv_createsamples -info info/info.lst -num 1800 -w 20 -h20 -vec positives.vec
I solved it, haha. Even though I specified the width and height as 20x20 in the command, it changed them to 20x24, so the opencv_traincascade command was throwing an error. Once I changed the width and height arguments in the opencv_traincascade command to match, it worked.
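For what it's worth, the likely culprit is the -h20 (missing space) in the createsamples command above: the flag goes unparsed, opencv_createsamples falls back to its default height of 24, and the vec ends up 20x24. Assuming the vec really is 20x24, the matching training command would be:
opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1600 -numNeg 800 -numStages 10 -w 20 -h 24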
This error is observed when the parameters passed do not match the vec file that was generated, as the assertion itself says:
Assertion failed (_img.rows * _img.cols == vecSize)
opencv_createsamples displays the parameters passed to it for training. Please verify that the parameters used for creating the samples are the same ones you passed to opencv_traincascade. I have attached the terminal log for reference.
mayank@mayank-Aspire-A515-51G:~/programs/opencv/CSS/homework/HAAR_classifier/dataset$ opencv_createsamples -info pos.txt -num 235 -w 40 -h 40 -vec positives_test.vec
Info file name: pos.txt
Img file name: (NULL)
Vec file name: positives_test.vec
BG file name: (NULL)
Num: 235
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 40 <--- confirm
Height: 40 <--- confirm
Max Scale: -1
RNG Seed: 12345
Create training samples from images collection...
Done. Created 235 samples
I have 16 jpg files which are around 920x1200 pixels (the widths slightly differ but heights are all 1200). I'm trying to join them into a pdf with:
convert *.jpg foo.pdf
But the resulting paper size is 1.53x2 inches. If I pass the arguments -page Letter, the page size ends up being a bewildering 1.02x1.32 inches. What is going wrong here? All of the information I can find suggests that this should work. I just want a document that consists of 16 letter-size pages.
This question is pretty old, but I had a similar problem and I think I found the solution.
The documentation for the -page option says "This option is used in concert with -density", but the relationship between the options seems a little unclear, possibly because the documentation is geared towards raster images.
From experimenting with the settings, I found that the PDF page size can be controlled by combining -page, -density and -units. The documentation for -page shows that letter is the same as entering 612 x 792. Combining -density 72 with -units pixelsperinch will give you (612px / 72px) * 1in = 8.5in.
convert *.jpg -units pixelsperinch -density 72 -page letter foo.pdf should do what the original poster wanted.
I just succeeded with
convert file.mng -page letter file.pdf
For Letter, you need to specify the size as 792x612 PostScript points (landscape). Therefore, try this command:
convert \
in1.jpg \
in2.jpg \
in3.jpg \
in4.jpg \
in5.jpg \
-gravity center \
-resize 792x612\! \
letter.pdf
Works for me with ImageMagick version 6.7.8-3 2012-07-19 Q16 on Mac OS X:
identify -format "%f[%s] : %W x %H\n" letter.pdf
letter.pdf[0] : 792 x 612
letter.pdf[1] : 792 x 612
letter.pdf[2] : 792 x 612
letter.pdf[3] : 792 x 612
letter.pdf[4] : 792 x 612
Or
pdfinfo -f 1 -l 5 letter.pdf
Title: _
Producer: ImageMagick 6.7.8-3 2012-07-19 Q16 http://www.imagemagick.org
CreationDate: Fri Jul 27 22:28:00 2012
ModDate: Fri Jul 27 22:28:00 2012
Tagged: no
Form: none
Pages: 5
Encrypted: no
Page 1 size: 792 x 612 pts (letter)
Page 1 rot: 0
Page 2 size: 792 x 612 pts (letter)
Page 2 rot: 0
Page 3 size: 792 x 612 pts (letter)
Page 3 rot: 0
Page 4 size: 792 x 612 pts (letter)
Page 4 rot: 0
Page 5 size: 792 x 612 pts (letter)
Page 5 rot: 0
File size: 178642 bytes
Optimized: no
PDF version: 1.3
According to this, 72 dpi is the default density => one dot per pixel (for a computer screen).
So you just need to specify -units pixelsperinch.
You can type the following command :
$ convert *.jpg -units pixelsperinch -page letter foo.pdf
BTW: if you want to use a non-standard page size, such as A4R for example, you must first determine the page size in points (or pixels at 72 dpi):
$ paperconf -s -p A4
595.276 841.89
Then the -page argument for A4R will be 842x595.
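So the full A4R command, following the same pattern as the letter example above, would be:
$ convert *.jpg -units pixelsperinch -density 72 -page 842x595 foo.pdf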