Zebra CUPS driver paper size doesn't change - Linux

I'm experiencing an issue with the printed area size when I change from 203 DPI to 300 DPI on a Zebra ZT230 printer.
I used the embedded driver to install this printer over an Ethernet connection.
My tag is 10 x 12 cm (width x height) and carries two barcodes, one Code 39 ("Code 3 of 9") and one Code 128. Configured at 203 DPI, the Code 39 barcode printed without enough resolution to be read by any data collector, so I changed the configuration to 300 DPI to fix that. To my surprise, after this change the tag printed larger than the paper: only about half of the tag fit onto the media, as the following images show.
It doesn't matter whether I change the paper size or set margins; the result is always the same.
When configured with 203 DPI (lacking resolution):
When configured with 300 DPI, with any paper size defined:
I hope you can tell me what I have done wrong to get this result. I've always used CUPS with lots of printers from Intermec (Honeywell) and Argox, both of which deliver their own Linux drivers; this is the first time I'm installing a Zebra printer on Linux. Zebra has produced a tutorial (zebra official tutorial link) using CUPS which says that the embedded driver should be enough to print Zebra tags on Linux.

Label printers generally have a fixed DPI (it's a property of the printhead hardware), so changing the DPI setting in your printing code will either have no effect or will cause the coordinate system to be scaled, which would explain why you're seeing your output enlarged.
If you want 300 dpi, you need to buy a 300 dpi printer.
If you need to use the 203 dpi printer you have, you need to think up an alternative solution, such as enlarging the barcode or using a different symbology that doesn't require as high a resolution.
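To see why the output grows rather than shrinks, it helps to work through the dot arithmetic. Here is a minimal sketch in plain C++; the label width comes from the question, everything else is illustrative. It computes how many dots a driver rendering at 300 dpi emits for a 10 cm wide label, and the enlargement you get when a 203 dpi printhead lays those dots down:

    #include <iostream>

    int main() {
        const double widthCm  = 10.0;            // label width from the question
        const double widthIn  = widthCm / 2.54;  // ~3.94 in

        const double renderDpi  = 300.0;  // resolution the driver renders at
        const double printerDpi = 203.0;  // fixed resolution of the printhead

        // Dots emitted for the label width when rendering at 300 dpi.
        const double dots = widthIn * renderDpi;          // ~1181 dots

        // The printhead prints those dots at 203 per inch, so the
        // printed width grows by renderDpi / printerDpi.
        const double printedIn = dots / printerDpi;       // ~5.82 in (~14.8 cm)
        const double scale     = renderDpi / printerDpi;  // ~1.48x enlargement

        std::cout << "dots emitted:  " << dots << "\n"
                  << "printed width: " << printedIn << " in\n"
                  << "enlargement:   " << scale << "x\n";
    }

A ~1.48x enlargement means only about 68% of each dimension, i.e. under half of the tag's area, lands on the media, which matches the truncated output in the images.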

Related

Gnuplot HPGL with HP7470a plotter change label size

I am trying to change the label size for a plot on an HP7470a pen plotter. This is an economical version of the HP7475a with only 2 pens. There does not seem to be an option in the terminal to change the label size. How do I get my labels to be bigger than just a few millimetres?
I do not see any provision for changing the font size in any of the gnuplot HP plotter drivers. They and the devices they support date back to the 1980s, when pen plotters were state of the art and gnuplot had only reached version 1 or 2. The drivers have been carried forward for decades but were largely ignored when gnuplot gained additional capabilities for managing font selection, text markup, etc. in version 4.
Some time during gnuplot version 4.4 (~2010), the terminal option fontscale was added for new terminals and backported to some of the then-current terminals. The pen plotter drivers were not included in that set of updates, probably because no users were asking for it and the developers may not have had the relevant hardware to test on.
I think it would be easy enough to add support for this option to the hpgl terminal, but I don't have a plotter to test it on. If you are willing to test and report back, then please open a Feature Request on the gnuplot tracker site and discussion can continue there.
Update
With the assistance and testing of Quad, support for font scaling on HPGL printers has been added to the gnuplot development version and will appear in the next release (5.4.5).

What is the point of having metric mapping modes like MM_LOMETRIC, and MM_LOENGLISH?

Page 47 of the book Programming Windows with MFC (Second Edition) by Jeff Prosise (Chapter 2: Drawing in a Window) has the following statement.
One thing to keep in mind when you use the metric mapping modes is that on display screens, 1 logical inch usually doesn't equal 1 physical inch. In other words, if you draw a line that's 100 units long in the MM_LOENGLISH mapping mode, the line probably won't be exactly 1 inch long.
My question is: if Windows cannot give any guarantee about the physical dimensions of things we draw using the metric mapping modes, then what is the point of having such mapping modes? Are metric mapping modes relevant only for printers, and completely irrelevant for monitors?
With modern monitors and digital connections like HDMI/DisplayPort, can't Windows get the physical dimensions of the screen, making it possible to draw things in metric dimensions (inches rather than pixels; note that the current resolution of the monitor is already known to the OS)?
One of the ideas behind the logical inch is that the viewing distance to a monitor was typically larger than the distance to a printed page, so it made sense for the default logical inch on a typical monitor to be a bit larger than a physical inch, especially in an era when WYSIWYG was taking off. Rather than put all of the burden of adjusting for device resolution on the application, the logical inch lets a WYSIWYG application developer think in terms of distances and sizes on the printed page and not have to work in pixels or dots, which varied widely from device to device (and especially from monitor to printer).
Another issue was that, with the relatively limited resolutions of early monitors, it just wasn't practical to show legible text as small as typically printed text. For example, text was commonly printed at 6 lines per inch. At typical monitor resolutions, this might mean 12 pixels per line, which really limits font design and legibility (especially before anti-aliased and sub-pixel-rendered text was practical). Making the logical inch default to 120-130% of an actual inch (on a typical monitor of the era) means lines of text would be 16 pixels high, making typographic niceties like serifs and italics more tenable (though still not pretty).
Also keep in mind that the user controls the logical inch and could very well set the logical inch so that it matches the physical inch if that suited their needs.
Logical units are still useful today, even as monitors have resolutions approaching those of older laser printers. Consider designing slides for a presentation that will be projected and also printed as handouts. The projection size is a function of the projector's optics and its distance from the screen. There's no way, even with two-way communication between the OS and the display device, for the OS to determine the actual physical size (nor would that be useful for most applications).
I'm not a CSS expert, but it's my understanding that even when working in CSS's px units, you're working in a logical unit that may not be exactly the size of a physical pixel. It's supposed to take into account the actual resolution of the device and the typical viewing distance, allowing web designers to make the same 96-per-inch assumption that native application developers had long been using.
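To make the mapping-mode behavior concrete, here is a minimal MFC sketch (CMyView is a hypothetical CView-derived class) that draws a line 100 MM_LOENGLISH units long and queries how many pixels the device maps to a logical inch:

    // Hypothetical CView-derived class illustrating MM_LOENGLISH.
    void CMyView::OnDraw(CDC* pDC)
    {
        // MM_LOENGLISH: 1 logical unit = 0.01 inch, positive y points up.
        pDC->SetMapMode(MM_LOENGLISH);

        // On a display this reports the *logical* inch (commonly 96-120
        // pixels), not a measured physical inch - exactly Prosise's caveat.
        int pxPerLogicalInch = pDC->GetDeviceCaps(LOGPIXELSX);
        TRACE("logical inch = %d pixels\n", pxPerLogicalInch);

        pDC->MoveTo(0, 0);
        pDC->LineTo(100, 0);  // 100 units = 1 logical inch, not 1 physical inch
    }

On a printer DC the same GetDeviceCaps call reports the printer's true resolution (e.g. 600 for a 600 dpi laser), so the identical drawing code produces a physically accurate inch on paper while staying comfortably readable on screen.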

How to set the default resolution in a Windows 8.1 WinJS app

I'm working on my first WinJS app for Windows 8.1, so I'm a newbie on this topic; please forgive me if I ask something obvious.
My problem is that the resolution of the target machine (a Dell Latitude E7440 with a touch screen) is 1920x1080, but when I run the app, it runs at 1370x771, which is a bit confusing for me.
So my question is: how can I tell the app to run at the same resolution as the OS?
A strange thing I discovered: if I change the "Change the size of apps, text, and other items on the screen (...)" setting from larger to the default in the display settings, then suddenly my app runs at the desired full HD resolution. But I don't want to depend on this setting.
I know I could use the ViewBox control, but all of the graphics and everything else are designed for full HD resolution, so ViewBox would simply scale them down. That solution sounds a bit ridiculous considering I have a full HD laptop and a full HD design as well...
This app is only for this particular machine, so I don't have to deal with different resolutions.
Any tip/suggestion is highly appreciated.
Windows abstracts the physical device resolution specifically so you don't have to think about scaling issues. Just do your work against the resolution that's reported by the API. This is done because a high pixel density display can result in UI that's too small to be usable, e.g. touch targets that get too small for fingers. Most of the time, then, a 1920x1080 display on a smaller physical device (e.g. a 13" or smaller screen) gets a 140% scaling factor applied, hence it reports something closer to 1366x768 (1920 / 1.4 ≈ 1371 and 1080 / 1.4 ≈ 771, which is exactly the 1370x771 you're seeing).
Generally speaking, then, if you write responsive CSS for layout, you really don't need to worry about scaling at all, with the exception of providing raster graphics that work at the 100%, 140%, and 180% scaling plateaus (and 240% on Windows Phone 8.1).
For more details refer to my free ebook from Microsoft Press, Programming Windows Store Apps with HTML, CSS, and JavaScript, 2nd Edition, which you can also read (free) within the Microsoft Press Guided Tour app in the Store. Page 42 (Chapter 1) has a short overview on Views and Resolution Scaling; the Branding Your App section on page 113 talks about sizes of logo/splashscreen images for different resolutions, and then Chapter 8, "Layout and Views" (starting on page 421) goes into all the details, especially "Screen Resolution, Pixel Density, and Scaling" on page 437.

Samsung and Micromax tablets both picking assets from mdpi

I have two Android tablets, one Samsung and one Micromax. The Samsung has 800 x 1280 pixels at 10.1 inches (~149 ppi pixel density) and the Micromax has 600 x 1024 pixels at 10.1 inches (~118 ppi pixel density).
I want their assets to be separate, but both are picking from mdpi. How can I do that?
Is it possible to use something like drawable-w600-mdpi and drawable-w800-mdpi?
No. Have a look at the Providing Resources page over at Android Developers; in Table 1 you can find all admissible qualifiers. Personally, I would suggest you let the OS handle the layout and the choice of the right resource. However, with the newer available-width/available-height qualifiers ('w<N>dp' and 'h<N>dp' respectively, e.g. drawable-w800dp), you might be able to get what you want, as the tablets have different sizes.
However, unless you have a very good reason to do so, I wouldn't. Often, when you think you need to do something that is discouraged, there is a flaw in your design, or the requirements can be met in an easier, more standard way.
Update:
As per the comments, your problem is probably related to the devices not correctly reporting their DPI and, therefore, Android not choosing the correct resource. An Android Developers blog post I recently read details the problem and shows a few examples of how to use the new numeric selectors tied to screen width and height to resolve some of these problems.

Capturing high-quality (300 dpi) screenshots of a Qt-based app on Linux

I need to make a screenshot of my form created in Qt Designer. There are numerous ways to take screenshots (GIMP, import, etc.), but all of them capture at the same DPI as my monitor (about 100 dpi). This is quite enough for publishing on a web site, but 300 dpi images are required for paper publications. Is there any way to create 300 dpi screenshots?
I don't think the 300 dpi requirement for publication applies to things like screenshots, where the data is inherently pixelated; it's meant for things like graphs, which can and should be generated in a vector format.
Just get the best results you can, and only use screenshots for things that are absolutely necessary, and not, for example, command-line I/O or results graphs.
If the final images come out smoothed and blurry, either find settings in your PDF creator to prevent this, or manually blow the image up to a multiple of its original size to preserve the original sharp pixelation.
Painting can be done on any QPaintDevice, which includes QPrinter. If you wanted to, you could set up painting redirection to a given device and then have the widget repaint itself. This might give you the higher precision you desire. For more information, look at the Paint System overview on Qt's website, and perhaps also at the QPixmap::grabWidget functions.
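As a concrete illustration of that idea, here is a minimal Qt sketch (the widget and the scale factor are arbitrary stand-ins for your form) that repaints a widget through a scaled QPainter into an offscreen QImage and tags the result as 300 dpi. Style-drawn vectors and text are repainted at the higher resolution, though any raster assets the style uses will simply be scaled up:

    #include <QApplication>
    #include <QImage>
    #include <QPainter>
    #include <QPushButton>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QPushButton widget("Hello, 300 dpi");  // stand-in for your form
        widget.resize(300, 100);

        const int scale = 3;  // ~100 dpi screen * 3 ≈ 300 dpi output
        QImage image(widget.size() * scale, QImage::Format_ARGB32);
        image.fill(Qt::white);

        QPainter painter(&image);
        painter.scale(scale, scale);
        widget.render(&painter);  // the widget repaints itself into the image
        painter.end();

        // Record the print resolution in the image metadata:
        // 300 dpi = 300 / 0.0254 ≈ 11811 dots per meter.
        image.setDotsPerMeterX(11811);
        image.setDotsPerMeterY(11811);
        image.save("form_300dpi.png");
        return 0;
    }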
You cannot grab a screenshot at a better resolution than that of your monitor; DPI has no real meaning for a computer display. Some software converts between screen pixels and dots per inch (dpi) for paper publication.
Once you have made your screenshots, you can convert them to 300 dpi using software like Photoshop or an equivalent.
You can't have more pixels in your screenshot than your widget displays.
For a given widget size (say 900x900 px), you can have your image printed at 300 dpi, but it will only make a 3-inch square on your paper.
You can force your screen to behave as a 4K display with the command:
xrandr --output eDP1 --rate 40.01 --mode 1366x768 --fb 4096x3072 --panning 4096x3072
Remember to adjust the --rate and --mode fields to match your default xrandr configuration; you can check those values by running xrandr on its own.
Then acquire the screenshot with:
import -window root imagefile.png
