Hello,
I'm a UX/UI designer. I designed an application in Photoshop at xxxhdpi (2960x1440) for the Samsung Galaxy S8+. Once I finished designing, I created a 2960x1440 image of my app so I could preview the design on the S8+ itself. Then I cut out all the icons and made a style guide for the programmers, etc.
Anyway, the problem I have is that the developer put all my cut assets into the app, and everything I cut looks smaller than it should (off by a ratio of about 1.142)?! For example, a 56dp circular action button in the built app is not the 56dp circle I made; it looks smaller than it should?! Please help me figure out why, and what I should do.
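To make the numbers concrete, using the standard Android conversion px = dp * dpi / 160: at xxxhdpi (640 dpi) the 56 dp button should be 56 * 640 / 160 = 224 px in my mockup, but in the app it comes out around 224 / 1.142 ≈ 196 px, which is what 56 dp works out to at 560 dpi instead of 640 dpi.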
I'm using the Agora SDK for the video calling feature in my app. I've tried the advanced iOS example and am currently able to see the video call between two users.
However, my UI needs to show it in a grid layout of same-sized items (maximum 8 video call views), like a vertical UICollectionView of equal-sized cells (screenshot below).
I've tried the Advanced video example from here - https://github.com/AgoraIO/Advanced-Video but couldn't figure out how to make the grids.
Kindly guide me how to do this. Thanks.
I have a small demo that creates an NxN grid of all the video feeds in a chat, in this example:
https://github.com/maxxfrazer/Agora-iOS-Swift-Example/blob/248a1d2291060891f2fda92a657c2067d841d964/Agora-iOS-Example/ChannelViewController%2BVideoControl.swift#L108
It just leaves black gaps where the other feeds would be if there isn't a square number of people in the chat, but hopefully it is enough to get you started with repositioning views based on the number of connected users.
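For reference, here is a minimal sketch of the frame math in Swift, assuming each remote feed is rendered into a plain UIView that you keep in an array; the function and variable names are illustrative, not part of the Agora SDK or the linked repo:

import UIKit

// Lay out the given views in the smallest roughly-square grid that fits them,
// filling the container left to right, top to bottom, with equal-sized cells.
func layoutGrid(_ videoViews: [UIView], in container: UIView) {
    guard !videoViews.isEmpty else { return }
    let columns = Int(Double(videoViews.count).squareRoot().rounded(.up))
    let rows = Int((Double(videoViews.count) / Double(columns)).rounded(.up))
    let cellWidth = container.bounds.width / CGFloat(columns)
    let cellHeight = container.bounds.height / CGFloat(rows)
    for (index, view) in videoViews.enumerated() {
        let row = index / columns
        let column = index % columns
        view.frame = CGRect(x: CGFloat(column) * cellWidth,
                            y: CGFloat(row) * cellHeight,
                            width: cellWidth,
                            height: cellHeight)
        container.addSubview(view)
    }
}

Calling this again whenever a user joins or leaves keeps every cell the same size, which matches the vertical-grid layout described in the question.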
I have developed an LWUIT app. I have two types of images displayed in the app: one kind comes from the server side and needs to be displayed (like a photo posted and saved to the server side), and the other is packaged in my JAR and displayed mainly as icons (like a music icon, a loading animation GIF, etc.). I need to display all images according to the screen size and resolution.
The first kind is displayed by taking the screen's display height and width, then using the scale method and showing a scaled version of the image. However, I have no idea how to handle the second kind, i.e. icons. For example, my loading image looks good on most phones, but on some phones, like Samsung, it looks blurred and oversized. My basic idea is to keep 3 versions of each icon, like icon_width_lowXheight_low.png, icon_width_mediumXheight_medium.png and image_width_highXheight_high.png, and show one based on the screen size. Please let me know the best way to achieve this.
Thanks,
Parvathy
You should use MultiImages, which were added in LWUIT 1.5. I don't have a link for this in LWUIT, but our work in Codename One is pretty close to this, so check out the "How Do I?" on multi images (and I suggest migrating to Codename One regardless).
I think you will need to use this:
Image i = Image.createImage("your image path here");
i = i.scaled(widthValue, heightValue);
And set those values in relation to Display.getInstance().getDisplayHeight() and Display.getInstance().getDisplayWidth().
Right?
We are developing this web app: http://projects.igre.emich.edu/iccarsp/
Now we are trying to do the following:
upload an image on the viewer
adjust the image (move, rotate etc) to its accurate location
measure the area covered by the image (or maybe digitize it to measure the frame of the image)
output the result (the background and the uploaded image) as KML
I did some research, and it seems that we can do all of this in the Google Earth desktop version, so we are trying to do the same thing with the web plugin, but Google did not publish the code for these functions (correct?).
So I am wondering if there is any other way to do these functions on the web with the Google Earth Plugin. Any advice will help, thanks!
For the area calculation, you can have a look at GEarthExtensions:
http://code.google.com/p/earth-api-utility-library/
The area of a geo.Path object is accessible via geo.Path.signedArea_.
You just need to define a geo.Path around your image.
But there are some limitations:
"The method is inaccurate for large regions because the Earth's curvature is not accounted for." ( from extensions-0.2.1.js)
Hope this can start to help :-)
I am trying to get DirectX (DX9) to grab a screenshot of the desktop and immediately draw it back out (in smaller dimensions) to my form.
I have DirectX working to the point that the device is created along with a few surfaces, and I can render them to screen. I am using one surface, F3D3Surf9_SS, to hold the desktop screenshot.
Here is my declaration and initialization of variables:
F3D3Surf9_SS : IDirect3DSurface9; //Surface SS
F3D3Surf9_A : IDirect3DSurface9; //Surface A
F3D3Surf9_B : IDirect3DSurface9; //Surface B
...
FDirect3D9.CreateDevice(D3DADAPTER_DEFAULT,D3DDEVTYPE_HAL,Form1.Handle,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,#D3DPresentParams,
FDirect3DDevice9);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_A,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_A,nil,nil,'D:\Images\Pillar.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_B,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_B,nil,nil,'D:\Images\Niagra.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_SCRATCH,F3D3Surf9_SS,nil);
Here is the code I use to grab and then render the screenshot:
FDirect3DDevice9.BeginScene;
FDirect3DDevice9.Clear(0,0,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,0,255),0,0);
FDirect3DDevice9.GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO, BackBuffer);
FDirect3DDevice9.GetFrontBufferData(0,F3D3Surf9_SS); //Get the screen shot
FDirect3DDevice9.StretchRect(F3D3Surf9_SS,nil,BackBuffer,nil,D3DTEXF_NONE); //Draw it
FDirect3DDevice9.EndScene;
FDirect3DDevice9.Present(nil,nil,0,nil);
However this does not work.
The image does not get drawn to the screen. If I draw Surface A or B to the screen, that works, but it doesn't work for Surface SS. However, I know Surface SS has the screenshot in it, since if I call D3DXSaveSurfaceToFile, the resulting bitmap written to the hard disk is a valid screenshot.
Any thoughts on the proper way to do this?
The reason this did not work is that F3D3Surf9_SS was created in system memory via D3DPOOL_SCRATCH and cannot be drawn directly to the back buffer the way I was trying to.
So my solution was to use the F3D3Surf9_A surface and call UpdateSurface to copy the screenshot from system memory into Surface A in video memory.
The only other change I had to make to get this working was to create Surface A in the same format as the screenshot surface, D3DFMT_A8R8G8B8. I also had to make sure that the destination surface passed to UpdateSurface was larger than the source surface.
NOTE:
This is slow, since we are reading from video memory to system memory and then right back to video memory.
I needed this for my application since I want to capture everything the OS and other applications put on screen, but if you are only worried about your own application there are better alternatives.
If you know of a way to use GetFrontBufferData without going through system memory (which is the only way I could see it working), please let me know.
I wanted to know how I should use high-res images in the iOS 4 SDK with UIImageView.
blackBox = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"alert_bg.png"]];
blackBox.frame = CGRectMake(98.0f, 310.0f, 573.0f, 177.0f);
When I use this code I get strange results: the image does not come out at the correct size. It looks very big on the iPhone 4 screen.
Should I use 326 ppi images?
I have read http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/SupportingResolutionIndependence/SupportingResolutionIndependence.html but I am still confused.
Thanks
Saurabh
The key thing to understand about supporting the Retina display is that, in your code, the screen is always 320x480. You don't need to double the resolution of anything but the image resources themselves. In this case, you just need to put two resources in your app bundle: an alert_bg.png that fits on a 320x480 screen (in this case, I'd guess that'd be 286x88) and an alert_bg@2x.png, exactly double the size of the other, that fits on a 640x960 screen. If you ask UIKit for [UIImage imageNamed:@"alert_bg"], it will automatically pick the correct-resolution resource for the current screen.
You should provide a 480x320 pixel image named "alert_bg.png" for the original iPhone, 3G and 3GS, and another 960x640 px one named "alert_bg@2x.png" for the iPhone 4.
iOS automatically looks for the "@2x" suffix in the name and loads that image, if it finds it, instead of the standard-resolution one.