I am trying to load some SVGs into my game, but I am having trouble. I have tried reading the Android cookbook, the docs, and answers from the forum, but I can't find a solution. I'm now at the point where the images load and the game builds, but I cannot display them. The code is as follows:
Loading in the .svg image:
// Setting up the texture atlas
BuildableBitmapTextureAtlas bitmapTextureAtlas = new BuildableBitmapTextureAtlas(
        engine.getTextureManager(), 1024, 1024, TextureOptions.BILINEAR);

try {
    this.mBuildableBitmapTextureAtlas.build(
            new BlackPawnTextureAtlasBuilder<IBitmapTextureAtlasSource, BitmapTextureAtlas>(0, 0, 1));
} catch (final TextureAtlasBuilderException e) {
    Debug.e(e);
}

SVGBitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/game/");

// Create a low-res (32x32) texture region of spinningarrow.svg
mLowResTextureRegion = SVGBitmapTextureAtlasTextureRegionFactory
        .createFromAsset(bitmapTextureAtlas, activity, "spinningarrow.svg", 32, 32);

// Create a medium-res (128x128) texture region of spinningarrow.svg
mMedResTextureRegion = SVGBitmapTextureAtlasTextureRegionFactory
        .createFromAsset(bitmapTextureAtlas, activity, "spinningarrow.svg", 128, 128);

// Create a high-res (256x256) texture region of spinningarrow.svg
mHiResTextureRegion = SVGBitmapTextureAtlasTextureRegionFactory
        .createFromAsset(bitmapTextureAtlas, activity, "spinningarrow.svg", 256, 256);

bitmapTextureAtlas.load();
Actually attaching it to the scene:
float currentSpritePosition = (WIDTH / ((SPRITE_SIZE) * SPRITE_COUNT)) + SPRITE_SIZE * 0.5f;

// Create & attach the low-res sprite to the scene (left side)
Sprite lowResSprite = new Sprite(currentSpritePosition, HEIGHT / 2, SPRITE_SIZE, SPRITE_SIZE,
        rM.mLowResTextureRegion, rM.engine.getVertexBufferObjectManager());
attachChild(lowResSprite);
currentSpritePosition += SPRITE_SIZE;

// Create & attach the med-res sprite to the scene (middle)
Sprite medResSprite = new Sprite(currentSpritePosition, HEIGHT / 2, SPRITE_SIZE, SPRITE_SIZE,
        rM.mMedResTextureRegion, rM.engine.getVertexBufferObjectManager());
attachChild(medResSprite);
currentSpritePosition += SPRITE_SIZE;

// Create & attach the high-res sprite to the scene (right side)
Sprite hiResSprite = new Sprite(currentSpritePosition, HEIGHT / 2, SPRITE_SIZE, SPRITE_SIZE,
        rM.mHiResTextureRegion, rM.engine.getVertexBufferObjectManager());
attachChild(hiResSprite);
This builds, but I cannot see the image, even if I hard-code its coordinates.
The SVG in question is at http://www.sendspace.com/file/nnjsfe
Related
As we have upgraded to .NET 6, we are rewriting some of our code base. We have a tag helper in ASP.NET Core that generates a barcode. This currently uses System.Drawing and ZXing.
TagHelper, old version using System.Drawing - working (top barcode):
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    var margin = 0;
    var qrCodeWriter = new ZXing.BarcodeWriterPixelData
    {
        Format = ZXing.BarcodeFormat.PDF_417,
        Options = new ZXing.Common.EncodingOptions
        {
            Height = this.Height > 80 ? this.Height : 80,
            Width = this.Width > 400 ? this.Width : 400,
            Margin = margin
        }
    };
    var pixelData = qrCodeWriter.Write(QRCodeContent);

    // Creating a bitmap from the raw pixel data; if only black and white colors are used it makes no difference
    // that the pixel data is BGRA oriented and the bitmap is initialized with RGB
    using (var bitmap = new Bitmap(pixelData.Width, pixelData.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb))
    using (var ms = new MemoryStream())
    {
        var bitmapData = bitmap.LockBits(new Rectangle(0, 0, pixelData.Width, pixelData.Height),
            System.Drawing.Imaging.ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
        try
        {
            // We assume that the row stride of the bitmap is aligned to 4 bytes multiplied by the width of the image
            System.Runtime.InteropServices.Marshal.Copy(pixelData.Pixels, 0, bitmapData.Scan0,
                pixelData.Pixels.Length);
        }
        finally
        {
            bitmap.UnlockBits(bitmapData);
        }

        // Save to stream as PNG
        bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Png);

        output.TagName = "img";
        output.Attributes.Clear();
        output.Attributes.Add("width", Width);
        output.Attributes.Add("height", Height);
        output.Attributes.Add("alt", Alt);
        output.Attributes.Add("src",
            $"data:image/png;base64,{Convert.ToBase64String(ms.ToArray())}");
    }
}
TagHelper, new version using ImageSharp - almost working, but not exactly (bottom barcode):
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    var margin = 0;
    var barcodeWriter = new ZXing.ImageSharp.BarcodeWriter<SixLabors.ImageSharp.PixelFormats.La32>
    {
        Format = ZXing.BarcodeFormat.PDF_417,
        Options = new ZXing.Common.EncodingOptions
        {
            Height = this.Height > 80 ? this.Height : 80,
            Width = this.Width > 400 ? this.Width : 400,
            Margin = margin
        }
    };
    var image = barcodeWriter.Write(QRCodeContent);

    output.TagName = "img";
    output.Attributes.Clear();
    output.Attributes.Add("width", Width);
    output.Attributes.Add("height", Height);
    output.Attributes.Add("alt", Alt);
    output.Attributes.Add("src", $"{image.ToBase64String(PngFormat.Instance)}");
}
The issue, as mentioned, is that the second barcode is very slightly different: at the end it seems to extend the last bar.
What am I missing?
That is a bug in the renderer implementation of the ZXing.Net binding to ImageSharp.
https://github.com/micjahn/ZXing.Net/issues/422
It is fixed in the newest NuGet package of the bindings:
https://www.nuget.org/packages/ZXing.Net.Bindings.ImageSharp/
https://www.nuget.org/packages/ZXing.Net.Bindings.ImageSharp.V2/
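For illustration only, picking up the fix usually just means bumping the package reference to the latest release; the snippet below is a hypothetical .csproj entry (the version is a placeholder, and the package name depends on which ImageSharp major version the project targets):
<!-- Placeholder version; use ZXing.Net.Bindings.ImageSharp or ZXing.Net.Bindings.ImageSharp.V2 to match your ImageSharp version -->
<PackageReference Include="ZXing.Net.Bindings.ImageSharp.V2" Version="x.y.z" />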
I have implemented a simple 3D model viewer with three.js. The project contains a database of 3D models divided into different categories. In one category the models are tall (relatively narrow, and much taller than the models in the other categories); in another the models are small (both their height and width are smaller than the rest); in another the models are large (taller and wider than many models from other categories).
The viewer has a fixed canvas width and height. Because of this, many models initially load at a very small scale, which then requires repeated zooming. There are also models whose upper part does not fit into the canvas at load time while the lower part does; these also require subsequent scaling.
The viewer should first estimate the dimensions of the model, then automatically choose a scale for that model individually, and then center the model vertically and horizontally.
It is also necessary that the plane in which the model lies (the models have noticeable width and height but very little thickness; they are all close to flat) coincides with the plane of the screen. At load time, these planes are misaligned for many models. How do I implement this so that the viewer automatically fits the model?
Below I am attaching screenshots of models from different categories where the problem is clearly visible:
Below is the code of the viewer that is responsible for initializing the scene:
var objectUrl = $('#modelViewerModal').data('object-url'); // '/storage/3d/qqq.obj';
var mesh, renderer, scene, camera, controls;

init();
animate();

function init() {
    const screenshotPageWidth = $(document).width();
    const screenshotPageHeight = $(document).height();
    let modalBody = document.querySelector('#scene-container');

    // renderer
    renderer = new THREE.WebGLRenderer({modalBody});

    var height = screenshotPageHeight / 2;
    var width = screenshotPageWidth / 2;
    if (screenshotPageHeight < screenshotPageWidth) { // landscape orientation
        if (width > 3 * height / 2) {
            width = 3 * height / 2;
        } else if (width < 3 * height / 2) {
            height = 2 * width / 3;
        }
    } else if (screenshotPageHeight > screenshotPageWidth) { // portrait orientation
        if (height > 2 * width / 3) {
            height = 2 * width / 3;
        } else if (height < 2 * width / 3) {
            width = 3 * height / 2;
        }
    }
    // let limitHeight = screen.height - 137;
    renderer.setSize(width, height);
    modalBody.appendChild( renderer.domElement );

    // scene
    scene = new THREE.Scene();
    scene.background = new THREE.Color( 0x000000 );

    // camera
    camera = new THREE.PerspectiveCamera( 40, window.innerWidth / window.innerHeight, 1, 10000 );
    camera.position.set( 20, 20, 20 );

    // controls
    controls = new OrbitControls( camera, renderer.domElement );

    // ambient
    scene.add( new THREE.AmbientLight( 0x222222 ) );

    // light
    var light = new THREE.DirectionalLight( 0xffffff, 1 );
    light.position.set( 2000, 2000, 2000 );
    scene.add( light );

    var spotLight_01 = getSpotlight('rgb(145, 200, 255)', 1);
    spotLight_01.name = 'spotLight_01';
    var spotLight_02 = getSpotlight('rgb(255, 220, 180)', 1);
    spotLight_02.name = 'spotLight_02';
    scene.add(spotLight_01);
    scene.add(spotLight_02);

    // geometry
    var geometry = new THREE.SphereGeometry( 5, 12, 8 );

    // material
    const material = new THREE.MeshPhongMaterial({
        color: 0xeae8dc,
        side: THREE.DoubleSide,
        transparent: false,
        flatShading: false,
        opacity: 0
    });

    // mesh
    var objLoader = new THREE.OBJLoader();
    objLoader.load(objectUrl,
        function ( obj ) {
            mesh = obj;
            mesh.scale.setScalar( 0.01 );
            obj.traverse( function( child ) {
                if ( child.isMesh ) child.material = material;
            } );

            // center the object
            var aabb = new THREE.Box3().setFromObject( mesh );
            var center = aabb.getCenter( new THREE.Vector3() );
            mesh.position.x += (mesh.position.x - center.x);
            mesh.position.y += (mesh.position.y - center.y);
            mesh.position.z += (mesh.position.z - center.z);

            scene.add( mesh );
            animate();
        } );
}

function animate() {
    requestAnimationFrame( animate );
    controls.update();
    renderer.render( scene, camera );
}

function getSpotlight(color, intensity) {
    var light = new THREE.SpotLight(color, intensity);
    light.castShadow = true;
    light.shadow.mapSize.x = 4096;
    light.shadow.mapSize.y = 4096;
    return light;
}
This can be done with just two things: the camera object and the mesh object. What you need to do is use the camera frustum (the representation of the viewable area of the camera, https://en.wikipedia.org/wiki/Viewing_frustum) to calculate the distance at which it is the same height (or width) as your model. The height of the model can easily be acquired from the geometry.
// Get the size of the model
mesh.geometry.computeBoundingBox()
const modelSize = mesh.geometry.boundingBox.getSize(new THREE.Vector3())
const width = modelSize.x // the exact axis will depend on which angle you are viewing it from, using x for demonstration here

// Compute the necessary camera parameters
const fov = camera.fov
const aspect = camera.aspect // camera aspect ratio (width / height)
// three.js stores the camera fov as vertical fov. We need to calculate the horizontal fov
const hfov = (2 * Math.atan(Math.tan(THREE.MathUtils.degToRad(fov) / 2) * aspect) * 180) / Math.PI;

// Calculate the distance from the camera at which the frustum is the same width as your model
const dist = (width * 0.5) / Math.tan(THREE.MathUtils.degToRad(hfov * 0.5));

// Position the camera based on the model. There are more elegant ways to do this, but this is a quick and dirty solution.
// The camera looks down its own negative Z axis, so translating along +Z effectively "zooms out"
camera.position.copy(mesh.position)
camera.translateZ(dist)
These fov and frustum calculations may appear daunting, but these are well-solved problems in 3D engines, and the exact algorithms can be looked up pretty quickly with a few searches.
This solution obviously only works with width, and uses a rigid method for moving the camera, but hopefully it provides you with the tools to apply the solution in a way that works for your application.
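As a rough sketch only (not part of the original answer), the same idea can be extended to the vertical direction and the larger of the two distances used, so that both the model's width and its height fit in view; it assumes the same mesh, camera and THREE globals used above:
// Sketch: fit both width and height (assumes the mesh/camera variables from above)
mesh.geometry.computeBoundingBox();
const size = mesh.geometry.boundingBox.getSize(new THREE.Vector3());

const vfov = THREE.MathUtils.degToRad(camera.fov); // vertical fov in radians
const hfov = 2 * Math.atan(Math.tan(vfov / 2) * camera.aspect); // horizontal fov in radians

const distForHeight = (size.y * 0.5) / Math.tan(vfov / 2);
const distForWidth = (size.x * 0.5) / Math.tan(hfov / 2);
const dist = Math.max(distForWidth, distForHeight); // whichever needs the camera farther away

camera.position.copy(mesh.position);
camera.translateZ(dist);
camera.lookAt(mesh.position);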
I need to change the background color of the currently selected tab in my UITabBarController. I've searched through every Stack Overflow post I could find, but nothing worked for me. I thought there would be something like UITabBar.Appearance.SelectedImageTintColor, just for the background color, but there doesn't seem to be.
For example, I want to change the color of that part when I am on the right tab:
Does someone know how to do that?
You could invoke the following code in your UITabBarController:
public xxxTabBarController()
{
    //...set ViewControllers
    this.TabBar.BarTintColor = UIColor.Red;
}
Update
// 3.0 here assumes three child pages in the tab bar; set it to the actual count in your project
var size = new CGSize(TabBar.Frame.Width / 3.0, IsFullScreen());
this.TabBar.SelectionIndicatorImage = ImageWithColor(size, UIColor.Green);
double IsFullScreen()
{
    double height = 64;
    if (UIDevice.CurrentDevice.CheckSystemVersion(11, 0))
    {
        if (UIApplication.SharedApplication.Delegate.GetWindow().SafeAreaInsets.Bottom > 0.0)
        {
            height = 84;
        }
    }
    return height;
}

UIImage ImageWithColor(CGSize size, UIColor color)
{
    var rect = new CGRect(0, 0, size.Width, size.Height);
    UIGraphics.BeginImageContextWithOptions(size, false, 0);
    CGContext context = UIGraphics.GetCurrentContext();
    context.SetFillColor(color.CGColor);
    context.FillRect(rect);
    UIImage image = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return image;
}
The trick is to use the SelectionIndicatorImage property of the UITabBar and generate a completely filled image with your desired color using the following method:
private UIImage ImageWithColor(CGSize size)
{
    CGRect rect = new CGRect(0, 0, size.Width, size.Height);
    UIGraphics.BeginImageContext(size);
    using (CGContext context = UIGraphics.GetCurrentContext())
    {
        context.SetFillColor(UIColor.Green.CGColor); // change color if necessary
        context.FillRect(rect);
    }
    UIImage image = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return image;
}
To initialize everything we override ViewWillLayoutSubviews() like this:
public override void ViewWillLayoutSubviews()
{
    base.ViewWillLayoutSubviews();

    // The tab bar height will always be 49 unless we force it to reevaluate its size at runtime ...
    myTabBar.InvalidateIntrinsicContentSize();

    double height = myTabBar.Frame.Height;
    CGSize size = new CGSize(myTabBar.Frame.Width / myTabBar.Items.Length, height);

    // Now get our all-green image...
    UIImage image = ImageWithColor(size);

    // And set it as the selection indicator
    myTabBar.SelectionIndicatorImage = image;
}
As mentioned in this article (Google-translating it step by step where necessary), calling InvalidateIntrinsicContentSize() will force the UITabBar to reevaluate its size and will get you the actual runtime height of the tab bar (instead of the constant height value of 49 from Xcode).
I tried to load an SVG image that contains an opacity=0 area into a QTextureMaterial on a QPlaneMesh, but the plane's background always shows up gray.
I want the plane to contain my image and to be see-through to other objects in the opacity=0 area.
The SVG file is like this one: https://image.flaticon.com/icons/svg/3154/3154348.svg
code:
// Background
Qt3DCore::QEntity *planeEntity = new Qt3DCore::QEntity(rootEntity);
Qt3DExtras::QPlaneMesh *planeMesh = new Qt3DExtras::QPlaneMesh(planeEntity);
planeMesh->setHeight(2);
planeMesh->setWidth(2);
Qt3DExtras::QTextureMaterial *planeMaterial = new Qt3DExtras::QTextureMaterial(planeEntity);
Qt3DRender::QTexture2D *planeTexture = new Qt3DRender::QTexture2D(planeMaterial);
FlippedTextureImage *planeTextureImage = new FlippedTextureImage(planeTexture);
planeTextureImage->setSize(QSize(3507, 3000));
planeTexture->addTextureImage(planeTextureImage);
planeMaterial->setTexture(planeTexture);
Qt3DCore::QTransform *planeTransform = new Qt3DCore::QTransform(planeEntity);
planeTransform->setRotationX(90);
planeTransform->setTranslation(QVector3D(0, 0, 0));
Qt3DExtras::QPhongAlphaMaterial *pam2 = new Qt3DExtras::QPhongAlphaMaterial(planeEntity);
pam2->setAlpha(0);
planeEntity->addComponent(planeMesh);
planeEntity->addComponent(pam2);
planeEntity->addComponent(planeMaterial);
planeEntity->addComponent(planeTransform);
class FlippedTextureImage : public Qt3DRender::QPaintedTextureImage
{
public:
    FlippedTextureImage(Qt3DCore::QNode *parent = Q_NULLPTR) : Qt3DRender::QPaintedTextureImage(parent) {}

    void paint(QPainter *painter) override {
        QSvgRenderer renderer(QString(qApp->applicationDirPath() + "/gift.svg"));
        QImage image(3000, 3000, QImage::Format_RGBA64); // 3000x3000 RGBA
        image.fill(0x00ffffff); // white background
        QPainter painter2(&image);
        renderer.render(&painter2);
        painter->drawImage(0, 0, image);
    }
};
And when I run the code it looks like this:
QTextureMaterial doesn't support transparency in the textures.
Check out my answer to this question to see how to implement transparency in textures yourself. There is no out-of-the-box solution in Qt3D.
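For rough illustration only (this is not the linked answer's code), a custom material that does support transparency would typically attach alpha-blending render states to its render pass, along these lines; renderPass here is a hypothetical Qt3DRender::QRenderPass belonging to such a material:
// Hypothetical sketch: blend states so the texture's transparent pixels let the background through
Qt3DRender::QBlendEquationArguments *blendArgs = new Qt3DRender::QBlendEquationArguments();
blendArgs->setSourceRgb(Qt3DRender::QBlendEquationArguments::SourceAlpha);
blendArgs->setDestinationRgb(Qt3DRender::QBlendEquationArguments::OneMinusSourceAlpha);

Qt3DRender::QBlendEquation *blendEquation = new Qt3DRender::QBlendEquation();
blendEquation->setBlendFunction(Qt3DRender::QBlendEquation::Add);

renderPass->addRenderState(blendArgs);       // renderPass is hypothetical, see above
renderPass->addRenderState(blendEquation);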
I'm using Managed DirectX 2.0 with C#, and I'm attempting to apply a fragment shader to a texture built by rendering the screen to a texture using the RenderToSurface helper class.
The code I'm using to do this is:
RtsHelper.BeginScene(RenderSurface);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0);
//pre-render shader setup
preProc.Begin(FX.None);
preProc.BeginPass(0);
//mesh drawing
mesh.DrawSubset(j);
preProc.CommitChanges();
preProc.EndPass();
preProc.End();
RtsHelper.EndScene(Filter.None);
This renders to my surface, RenderSurface, which is attached to a Texture object called RenderTexture.
Then I call the following code to render the surface to the screen, applying a second shader, "PostProc", to the rendered texture. This shader combines color values on a per-pixel basis and transforms the scene to grayscale. I'm following the tutorial here: http://rbwhitaker.wikidot.com/post-processing-effects
device.BeginScene();
{
    using (Sprite sprite = new Sprite(device))
    {
        sprite.Begin(SpriteFlags.DoNotSaveState);
        postProc.Begin(FX.None);
        postProc.BeginPass(0);

        sprite.Draw(RenderTexture, new Rectangle(0, 0, WINDOWWIDTH, WINDOWHEIGHT), new Vector3(0, 0, 0), new Vector3(0, 0, 0), Color.White);

        postProc.CommitChanges();
        postProc.EndPass();
        postProc.End();
        sprite.End();
    }
}
device.EndScene();
device.Present();
this.Invalidate();
However, all I see is the original rendered scene, as rendered to the texture, but unmodified by the second shader.
The FX file is below in case it's important.
//------------------------------ TEXTURE PROPERTIES ----------------------------
// This is the texture that Sprite will try to set before drawing
texture ScreenTexture;

// Our sampler for the texture, which is just going to be pretty simple
sampler TextureSampler = sampler_state
{
    Texture = <ScreenTexture>;
};

//------------------------ PIXEL SHADER ----------------------------------------
// This pixel shader will simply look up the color of the texture at the
// requested point, and turns it into a shade of gray
float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(TextureSampler, TextureCoordinate);
    float value = (color.r + color.g + color.b) / 3;
    color.r = value;
    color.g = value;
    color.b = value;
    return color;
}

//-------------------------- TECHNIQUES ----------------------------------------
// This technique is pretty simple - only one pass, and only a pixel shader
technique BlackAndWhite
{
    pass Pass1
    {
        PixelShader = compile ps_1_1 PixelShaderFunction();
    }
}
Fixed it. I was using the wrong flags for the post-processor shader initialization.
was:
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.None);
should be:
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.DoNotSaveState);