I wanted to know how I should use high-res images in the iOS 4 SDK with UIImageView.
blackBox = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"alert_bg.png"]];
blackBox.frame = CGRectMake(98.0f, 310.0f, 573.0f, 177.0f);
When I use this code I get strange results: the image does not come out at the correct size and looks far too big on the iPhone 4 screen.
Should I use 326 ppi images?
I have read http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/SupportingResolutionIndependence/SupportingResolutionIndependence.html but I am still confused.
Thanks
Saurabh
The key thing to understand about supporting the Retina Display is that, in your code, the screen is always 320x480. You don't need to double the resolution of anything but your image resources themselves. In this case, you just need to put two resources in your app bundle: an alert_bg.png that fits on a 320x480 screen (in this case, I'd guess that'd be 286x88) and an alert_bg@2x.png, exactly double the size of the other, that fits on a 640x960 one. If you ask UIKit for [UIImage imageNamed:@"alert_bg"], it'll automatically pick the correct-resolution resource for the current screen.
You should provide a 480x320 pixel image for the 3G, 3GS and original iPhone, named "alert_bg.png", and another 960x640 px one, named "alert_bg@2x.png", for the iPhone 4.
The "#2x" in the name is automatically added by iOS and loads the image automatically if it finds it, instead of the standard resolution one.
I'm using multer, Node and Express to upload an image in my app, but some images show up rotated 90 degrees on the client.
Why is this happening, and how can I fix it?
By the way, I'm using Vue on the client, and for the upload process I of course use FormData.
UPDATE
After research and the comments from the guys above, it's an EXIF problem. Any code ideas to solve this?
The behaviour you are experiencing is probably caused by the Exif Orientation metadata.
There is another question here on Stackoverflow about this problem: JS Client-Side Exif Orientation: Rotate and Mirror JPEG Images
The selected answer points to a project called Javascript-Load-Image as a possible solution; basically, you will have to take the orientation into consideration when rendering the images to get consistent behaviour.
Another possible alternative would be to edit/remove the orientation metadata in your backend.
Check the following resource for more information:
JPEG Image Orientation and Exif
This is most likely caused by Exif metadata (just like @Romulo suggested).
Browsers ignore Exif metadata when displaying images and that's why you're getting this behaviour.
To check that this is related to Exif, take 4 pictures with different phone orientations (landscape left, landscape right, portrait, upside down). One of them will be shown properly, while the other 3 will be rotated. (Also note that if you're using the front camera, the image will also get mirrored.)
Not all camera phones do this, but iOS does it consistently. The reason for this is performance. When you rotate the phone, the sensor rotates with it, and the picture taken doesn't take that rotation into consideration.
To show the photo properly, the image pixels need to be rotated, but if you just change the Exif metadata instead, you don't need to do that. Of course, any client that shows the image needs to be aware of this information (and iOS Photos and the like are aware).
This has nothing to do with multer, but with how the images are stored.
The bottom line is that you need to rotate the image to compensate for this.
Take a look at this npm package to adjust your image on the server side.
I have developed an LWUIT app. There are two types of images displayed in the app: one kind comes from the server side and needs to be displayed (like a photo posted and saved to the server), and the other kind is packaged in my jar and displayed mainly as icons (like a music icon, a loading animation GIF, etc.). I need to display all images according to the screen size and resolution. The first kind is displayed by taking the display height and width and then using the scale method to show a scaled version of the image. However, I have no idea how to handle the second kind, i.e. the icons. For example, my loading image looks good on most phones, but on some phones, like Samsung, it looks blurred and over-sized. How do I handle this? My basic idea is to keep 3 versions of each icon, like icon_width_lowXheight_low.png, icon_width_mediumXheight_medium.png and image_width_highXheight_high.png, and show one based on the screen size. Please let me know the best way to achieve this?
Thanks,
Parvathy
You should use MultiImages, which were added in LWUIT 1.5. I don't have a link for this in LWUIT, but our work in Codename One is pretty close to it, so check out the "How Do I?" on multi-images (and I suggest migrating to Codename One regardless).
I think that you will need to use this:
Image i = Image.createImage("your image path here");
i = i.scaled(widthValue, heightValue);
And set these values in relation to Display.getInstance().getDisplayHeight() and Display.getInstance().getDisplayWidth().
Right?
I've got a problem with my MKAnnotationViews when MKUserTrackingModeFollowWithHeading is enabled on the MKMapView.
I positioned my images using the centerOffset property of the MKAnnotationView. Specifying the coordinates of the pin's tip relative to the coordinate system at the center of the image is somewhat counter-intuitive, but I came up with the following formula:
annotationView.centerOffset = CGPointMake(imageWidth/2.0 - tipXCoordinate, imageHeight/2.0 - tipYCoordinate);
This works fine for zooming the map in and out. The tips of the pins keep their relative position on the map.
However, when I enable MKUserTrackingModeFollowWithHeading, it won't work any more. The pins rotate around the center of the image instead of the tip, so when the map rotates, the tips no longer point to the locations they are supposed to annotate.
I've played around a bit with the frame and center properties of the MKAnnotationView, but I feel they have no effect on the alignment of the pins whatsoever.
Interestingly, the MKPinAnnotationView does not seem to use centerOffset at all, but a shifted frame instead. However, I was unable to reproduce this. Changing the frame of my custom view did not move it at all.
Thanks for any insights you can provide :-)
Solution:
Don't use centerOffset! Use annotationView.layer.anchorPoint instead. The coordinate system of the anchor point is much nicer, too. Coordinates range from 0.0 (top/left) to 1.0 (bottom/right) of the image rectangle:
annotationView.layer.anchorPoint = CGPointMake(tipXCoordinate/imageWidth, tipYCoordinate/imageHeight);
A friend asks me to let you know that you should "try this for instance":
self.layer.anchorPoint = CGPointMake (0.5f, 1.0f);
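To put that in context, here is a minimal, hypothetical viewForAnnotation: sketch (the image name and the assumption that the tip is at the bottom centre are made up for illustration, not taken from the question):

- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation {
    static NSString *reuseId = @"pin";
    MKAnnotationView *view = [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
    if (view == nil) {
        view = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:reuseId];
        view.image = [UIImage imageNamed:@"custom_pin"]; // hypothetical pin image
        // Anchor the layer at the pin's tip so the tip, not the image centre,
        // stays fixed when the map rotates in heading-tracking mode.
        view.layer.anchorPoint = CGPointMake(0.5f, 1.0f);
    } else {
        view.annotation = annotation;
    }
    return view;
}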
I have images titled like so in my app:
image~iPhone.png
image@2x~iPhone.png
In Interface Builder I am loading image.png into my UIImageView. I also programmatically load some images into a different view using imageWithContentsOfFile. The images all load fine when I run in the simulator, but I get no images when I run on the device. If I use the full name of the image in Interface Builder it works, but I want iOS to distinguish between the high-res and lower-res versions. I have tried a lot of different things but can't figure this out. I see this error in the debugger as well:
Could not load the "image.png" image referenced from a nib in the bundle with identifier "com.mycompany.myproject"
Xcode 4
Deployment Target 4.1
Base SDK 4.3
Thanks for any help.
Ok...so after much experimenting I got it working.
I had two images named:
image@2x~iPhone.png
image~iPhone.png
and I was trying to load them using IB or imageWithContentsOfFile using
image.png
This worked fine in the simulator but not on my device. I just got a blank white screen where the image should be.
I finally renamed the high resolution image to:
image~iPhone@2x.png
Moving the '@2x' modifier after the device modifier (~iPhone) when referencing my images allowed it to work the way I understood it should from reading Apple's docs. I was under the impression that you didn't need to include the device modifier when referencing images, but I had to.
To sum up, I am now using
- image~iPhone.png
to reference my images in IB and programmatically for some images. I now get iOS recognizing that I am on a Retina screen and loading the @2x images accordingly. So the @2x modifier had to go at the end, and the ~iPhone modifier had to be included in the name of the '.png'.
That is what worked for me. Hope it helps someone else. Note that I am only building my app for iOS 4.1 and above, so there might be some issues with this if you are supporting previous versions.
iOS does not automatically pick the right image for the device like that. You are going to have to write code to check which device it is, and set the image by full name.
e.g. if ([[UIScreen mainScreen] scale] == 2) // set hi res image
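Spelled out as a hedged sketch of that manual approach (reusing the file names from the question, with imageView standing in for whatever view you are configuring):

// [[UIScreen mainScreen] scale] is 2.0 on Retina devices and 1.0 otherwise.
NSString *imageName;
if ([[UIScreen mainScreen] scale] >= 2.0) {
    imageName = @"image@2x~iPhone.png"; // high-res file, requested by its full name
} else {
    imageName = @"image~iPhone.png";
}
imageView.image = [UIImage imageNamed:imageName];
// Note: when you request a @2x file by its full name, you may still need to
// account for the image's point size / scale yourself.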
Or, you can just use the same image in both, and set the content mode to scale to fill. It will look the same.
EDIT: Try writing either ~iphone (lowercase), or just don't write ~iPhone at all in the file name. If your app is not universal, then writing the ~iphone suffix is completely pointless.
The iOS file system is case-sensitive and device modifiers should be lowercase; it should be
image~iphone.png
image@2x~iphone.png
The @2x comes before the device modifier.
See the resource programming guide
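For comparison, with the files named as above in the bundle, the lookup collapses to a single call (assuming an iPhone-only target):

// Bundle contains image~iphone.png and image@2x~iphone.png.
// Request the base name only; iOS resolves the @2x and ~iphone suffixes itself.
imageView.image = [UIImage imageNamed:@"image"];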
I am trying to get DirectX (DX9) to grab a screenshot of the desktop and immediately draw it back out (in smaller dimensions) to my form.
I have DirectX working to the extent that the device is created along with a few surfaces, and I can render them to screen. I am using one surface, F3D3Surf9_SS, to get the desktop screenshot.
Here is my declaration and initialization of variables:
F3D3Surf9_SS : IDirect3DSurface9; //Surface SS
F3D3Surf9_A : IDirect3DSurface9; //Surface A
F3D3Surf9_B : IDirect3DSurface9; //Surface B
...
FDirect3D9.CreateDevice(D3DADAPTER_DEFAULT,D3DDEVTYPE_HAL,Form1.Handle,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,@D3DPresentParams,
FDirect3DDevice9);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_A,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_A,nil,nil,'D:\Images\Pillar.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_B,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_B,nil,nil,'D:\Images\Niagra.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_SCRATCH,F3D3Surf9_SS,nil);
Here is the code I use to grab and then render the screenshot
FDIrect3DDevice9.BeginScene;
FDirect3DDevice9.Clear(0,0,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,0,255),0,0);
FDirect3DDevice9.GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO, BackBuffer);
FDirect3DDevice9.GetFrontBufferData(0,F3D3Surf9_SS); //Get the screen shot
FDirect3DDevice9.StretchRect(F3D3Surf9_SS,nil,BackBuffer,nil,D3DTEXF_NONE); //Draw it
FDIrect3DDevice9.EndScene;
FDirect3DDevice9.Present(nil,nil,0,nil);
However this does not work.
The image does not get drawn to the screen. If I draw Surface A or B to the screen, that works, but it doesn't work for Surface SS. However, I know Surface SS has the screenshot in it, since if I call D3DXSaveSurfaceToFile the resulting bitmap written to the hard disk is a valid screenshot.
Any thoughts on the proper way to do this?
The reason this did not work is that F3D3Surf9_SS was created in system memory (D3DPOOL_SCRATCH) and cannot be drawn directly to the back buffer as I was trying to do.
So my solution was to use the F3D3Surf9_A surface and call UpdateSurface to copy the screenshot from the system-memory surface into Surface A in video memory.
The only other change I had to make to get this to work was to create Surface A in the same format as the screenshot surface, D3DFMT_A8R8G8B8. I also had to make sure that the destination surface in UpdateSurface was larger than the source surface.
NOTE:
This is slow since we are reading from video memory to system memory and then right back to video memory.
I needed this for my application since I want to capture everything the OS and other applications put on screen, but if you are only worried about your own application then there are better alternatives.
If you know of a way to use GetFrontBufferData without going through system memory (which is the only way I could see it working), please let me know.