Photoshop smart object placed larger than original image

Calling all Photoshop users! Please help me work out this issue. I have decided to break up all of my player components / game assets into smart objects to take advantage of all the benefits that come with this functionality. However, I have noticed something strange, which can be seen in the example below.
When I create a smart object, I can see that it is created at its original size. However, when I add that object to another scene, its size increases for no apparent reason, and I don't know why.

In Photoshop's General preferences, uncheck "Resize Image During Place". With this option enabled, placing smart objects from the Library resizes them to fit the document.
Also, Graphic Design Stack Exchange is probably a better place to ask questions like this than Stack Overflow, because it has nothing to do with coding.


How can I access the monitor image data?

1. The problem I've encountered
Hi, I'm currently making a desktop application with Electron.js. I need a feature that takes a screenshot (including the mouse cursor), but this is a problem for me because I do not know how to do it.
I think the reason I cannot solve this problem is that I have no knowledge about operating systems. I take "taking a screenshot" to mean "getting the image data displayed on the computer monitor", but how can I access that?
2. What I've tried or considered
At first I tried Electron.BrowserWindow.capturePage(), but its result did not meet my needs, for two reasons: 1) my application has a transparent background, and any transparent area becomes black in the screenshot; 2) the mouse cursor is not captured.
I am also aware of some browser APIs, such as the Screen Capture API and the Media Capture and Streams API, and perhaps I could give them a try, since Electron.js is built on the Chromium browser and browsers implement those APIs.
However, the problem remains that those APIs deal with media streams (i.e. video), which does not suit my case. It is probably possible to take a single frame out of a media stream somehow, but that feels like overkill given that all I want is a single screenshot.
Also, because Electron.js uses Node.js, it should be possible to call the Windows API (perhaps via a foreign function interface) or to invoke child_process.exec() in order to take a screenshot.
3. The question I would like to ask
How can I access the monitor image data, so that I can implement a screenshot feature that meets my requirements (see-through and including the mouse cursor), using as few third-party libraries as possible?
What computes the final image that is displayed on my computer monitor? It seems to be the work of my graphics card, since my monitor and graphics card are connected by a cable.
4. Miscellaneous curiosities (not much related to the question)
...Another curiosity is how, why, and where the transparent area ends up being rendered as the color #000000.
It is also interesting that some programs do not allow their contents to be captured in a screenshot; the area where those programs are located appears black. How do the developers of such programs implement this?
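(For what it's worth, one common mechanism behind this "black area" behaviour on Windows is the Win32 SetWindowDisplayAffinity() call; the sketch below only illustrates the general idea and is not taken from any particular program.)

```cpp
#include <windows.h>

// A window can opt out of screen capture; captured regions then come out black.
void ProtectWindowFromCapture(HWND hwnd)
{
    // WDA_MONITOR: the window's contents are shown only on the monitor itself,
    // so screenshots and screen recordings of that area appear black.
    SetWindowDisplayAffinity(hwnd, WDA_MONITOR);
}
```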
Thank you for reading my question.
After some internet searches, I found it difficult to directly access the display data (specifically, the video RAM of my graphics card). So I decided to use a workaround; as the well-known saying goes, "all roads lead to Rome".
Which means:
See-through screenshots can be achieved either by using the native screenshot feature (the PrintScreen key) or by using a script that takes a picture of the entire screen.
Screenshots with the mouse cursor can be achieved by overlaying a cursor image at the coordinates where the mouse cursor is located.
However, in my case I do not actually need to save screenshots as files, so I think it is enough to draw a custom cursor image, hide the original cursor, make the custom image follow the mouse, and take the screenshot with a manual key press. (It would also be feasible to take a screenshot with the PrintScreen key, read the screenshot data from the clipboard, and do some image processing, such as adding a cursor overlay.)
※ I saw code that simulates a key press (SendKey()) in order to take a screenshot, and I think this is a good approach because no manual key press is needed.
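To make the "full-screen capture plus cursor overlay" idea concrete, here is a minimal Win32/GDI sketch (primary monitor only, no error handling). It uses the GetDC()/BitBlt()/CAPTUREBLT calls that come up in the links below, plus DrawIconEx() for the cursor overlay; treat it as an illustration, not production code.

```cpp
#include <windows.h>

// Capture the primary monitor and stamp the current mouse cursor on top.
// The caller owns the returned HBITMAP and must DeleteObject() it.
HBITMAP CaptureScreenWithCursor()
{
    const int width  = GetSystemMetrics(SM_CXSCREEN);
    const int height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(nullptr);                      // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // CAPTUREBLT also includes layered (transparent/alpha-blended) windows.
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

    // Overlay the cursor at its current position, honouring its hot spot.
    CURSORINFO ci = { sizeof(ci) };
    if (GetCursorInfo(&ci) && (ci.flags & CURSOR_SHOWING))
    {
        ICONINFO ii = {};
        if (GetIconInfo(ci.hCursor, &ii))
        {
            DrawIconEx(memDC,
                       ci.ptScreenPos.x - static_cast<int>(ii.xHotspot),
                       ci.ptScreenPos.y - static_cast<int>(ii.yHotspot),
                       ci.hCursor, 0, 0, 0, nullptr, DI_NORMAL);
            if (ii.hbmMask)  DeleteObject(ii.hbmMask);
            if (ii.hbmColor) DeleteObject(ii.hbmColor);
        }
    }

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return bmp;
}
```

In the Electron case, a helper like this could live in a small native addon or a tiny helper executable invoked via child_process, which then writes the bitmap to a file or pipes it back to JavaScript.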
Anyone interested in this topic may find the following links helpful (the order does not indicate importance):
Keywords mentioned: GetDC(), BitBlt(), CAPTUREBLT flag, GDI
What is the best way to take screenshots of a Window with C++ in Windows?
How can I take a screenshot in a windows application?
Keywords mentioned: DirectX, buffer
Fastest method of screen capturing on Windows
How to save backbuffer to file in DirectX 10?
Keywords mentioned: mouse cursor, cursor image, hot spot
Capture screen shot with mouse cursor
C# - Capturing the Mouse cursor image
Python - Take screenshot including mouse cursor
Keywords mentioned: PowerShell, CopyFromScreen()
How can I do a screen capture in Windows PowerShell?
Capture screenshot of active window?
Q&A about accessing video memory
Keywords mentioned: DRM, raw video memory, video driver, graphics RAM
Access the whole video memory
Access the whole video memory through OpenGL programming
API to get the graphics or video memory
Direct data write to video memory
Direct video buffer access
How to write data directly into video memory?
Is direct video card access possible? (No API)

Producing PDF files in NodeJS - simpler than puppeteer/chromium but a bit less basic than low level libraries

I'd like to be able to produce PDF files in NodeJS.
Currently, we use puppeteer. We need to produce highly designed documents, and puppeteer/chromium gives me the ability to create a complex layout in HTML, with the added benefit of also having an HTML version of the file.
It's great for relatively small documents where design is key.
The problem is when I try to produce long report documents. These documents do not require elaborate design. They are pretty much just a header with some information, followed by a simple table with lots of records stretching as far as the eye can see, so they tend to be large. Like, really, really large.
When I try using puppeteer for that, it pretty much just crashes and burns, because loading such huge layouts into the underlying browser is just too much.
Currently I do "stitching": I have puppeteer create the document in parts, and then I connect all those "doclets" into one using PDFKit.
But then I have problems: where one "doclet" ends and a new one begins, there are blank lines (partially empty pages, for no good reason from the perspective of a customer viewing the document).
What I'm looking for is a library that has basic layout functionality but that doesn't use a browser (or perhaps uses something lightweight).
The problem is that libraries like PDFKit and pdf-lib seem to be too low level.
I'm literally going to have to "draw" the documents by telling the library exactly where the text should go.
If I want tables, I'm going to straight up have to draw rectangles and stuff.
Having to create all of this manually would be a nightmare.
All I want is the ability to create simple layouts (tables, titles, text wrapping, background color) without having to use a library that just launches chromium.
Please, let me know if you know of any such option.
Thanks in advance!
What I tried:
PDFKit/pdf-lib - too low level. Unless I'm getting something wrong, there doesn't seem to be a way to create word-wrapped layouts with basic tables.
jsPDF - doesn't seem to be able to use its HTML functionality on the server (I think to get it to work I'd have to let it use a browser...? If so, that doesn't really help).
Puppeteer/other libraries that pilot a browser - well, it uses a browser so a no-go for large docs.
Praying to Odin - No luck so far.

Extent of support for using Vulkan Swapchain Images as Transfer Destination

In my Vulkan backend implementation I currently check the supported usage flags for the swapchain and then either use copy commands or a fall-back render pass to draw to the back buffer from an intermediate render target. I wanted to know whether this check is required, or whether it is safe to assume that swapchain images allow usage as a transfer destination on typical desktop hardware.
Also, if anyone knows of Vulkan implementations that do not allow copying to swapchain images, I'd appreciate it if you could share. This is mostly out of curiosity rather than to solve a problem.
You can look at the Vulkan Hardware Database.
I couldn't find anywhere that summarizes the data, but if you click on a device in the list, then on the Surface tab, then on the Surface properties tab, you can see supportedUsageFlags in the table and look for TRANSFER_DST_BIT.
I only looked at a few and they all had TRANSFER_DST_BIT present. I believe the database and code for the viewer are open source, so perhaps you can find a better way to mine the particular information you're after.
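For reference, the runtime check described in the question boils down to querying the surface capabilities before creating the swapchain; a minimal sketch (function names are illustrative):

```cpp
#include <vulkan/vulkan.h>

// True if presentable images for this surface may be used as a transfer (copy) destination.
bool SurfaceSupportsTransferDst(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface)
{
    VkSurfaceCapabilitiesKHR caps{};
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);
    return (caps.supportedUsageFlags & VK_IMAGE_USAGE_TRANSFER_DST_BIT) != 0;
}

// Request the extra usage bit only when it is supported; otherwise keep the
// fall-back render-pass path mentioned in the question.
VkImageUsageFlags ChooseSwapchainImageUsage(bool supportsTransferDst)
{
    VkImageUsageFlags usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT; // guaranteed by the spec
    if (supportsTransferDst)
        usage |= VK_IMAGE_USAGE_TRANSFER_DST_BIT;
    return usage;
}
```

The specification only guarantees VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT for presentable images, so keeping the check (or the render-pass fall-back) remains the portable choice, even if every desktop entry in the database reports TRANSFER_DST_BIT.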

xml layout in android that supports different screen sizes

How should we set up an XML layout in Android so that it supports different screen sizes?
I tried using wrap_content and match_parent but it's not working properly. Please guide me on this.
Thanks in advance.
The comment about Supporting Multiple Screens is definitely a good starting place! By default your XML does support different screen sizes.
Although the system performs scaling and resizing to make your application work on different screens, you should make the effort to optimize your application for different screen sizes and densities. In doing so, you maximize the user experience for all devices and your users believe that your application was actually designed for their devices—rather than simply stretched to fit the screen on their devices.
However, as it says, you need to optimize it. This means providing different images, or a completely different XML layout, per screen size/orientation. Does this help any?
If you need something a little more specific to your situation you'll need to provide more information.

How to zoom in and out on a device context (CDC) image in an SDI or MDI application

I want to know how to use a pointer to a CDC.
Broadly, there are two ways:
- use the CDC::SetViewportOrg/SetViewportExt APIs to have GDI do the scaling for you
- manually keep track of the scroll position and zoom level, and in your OnPaint, do your BitBlt or its siblings to and from the correct coordinates in the source and destination DCs.
Unfortunately, most of this answer won't mean much to you without some background in MFC, which I presume you don't have, given the generic nature of your question. It's a bit of a chicken-and-egg problem. I suggest you first read the documentation on the mentioned members of CDC (including studying the code of the example linked from the MSDN docs), and then come back to ask more specific questions if you can't figure it out.
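As a rough sketch of the first approach, letting GDI do the scaling through the window/viewport extents (the zoom and scroll values are hypothetical parameters you would keep in your view class):

```cpp
#include <afxwin.h>   // MFC: CDC, CSize, CPoint

// Called from your view's OnDraw/OnPaint. 'zoom' is e.g. 0.5, 1.0, 2.0;
// 'scrollPos' is the current scroll offset in device units.
void DrawZoomed(CDC* pDC, double zoom, CPoint scrollPos)
{
    const CSize docSize(1000, 800);            // logical size of the image/document

    pDC->SetMapMode(MM_ANISOTROPIC);
    pDC->SetWindowExt(docSize);                // logical extent
    pDC->SetViewportExt(static_cast<int>(docSize.cx * zoom),    // device extent
                        static_cast<int>(docSize.cy * zoom));   //   = logical * zoom
    pDC->SetViewportOrg(-scrollPos.x, -scrollPos.y);            // account for scrolling

    // Everything drawn from here on is scaled and offset by GDI, e.g.:
    pDC->Rectangle(0, 0, docSize.cx, docSize.cy);
}
```

For the second approach you would leave the mapping mode alone and instead StretchBlt from an off-screen DC, computing the source and destination rectangles yourself from the zoom level and scroll position.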

Resources