PIXI.js how to stop PIXI.Graphics being blurry? - pixi.js

I have a PIXI.Graphics object where I tile a 16x16 texture nine times and then upscale it, but the result is very blurry.
Code:
const graphics: PIXI.Graphics = new PIXI.Graphics();
const slotTexture: PIXI.Texture = game.textureManager.getTexture('hotBarSlot');
graphics.beginTextureFill({
    texture: slotTexture,
});
graphics.drawRect(0, 0, width * 16, height * 16);
graphics.endFill();
graphics.scale.set(6, 6);
Here is the result I get:
This is my texture (it's really small):
How can I fix this problem?

Okay, I figured it out!
I had to add this line:
slotTexture.baseTexture.scaleMode = PIXI.SCALE_MODES.NEAREST;
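For context, a minimal sketch of where that line goes, assuming PixiJS v5/v6, where PIXI.settings.SCALE_MODE is also available as a global default:
// per-texture: turn off smoothing for this texture's base texture
const slotTexture: PIXI.Texture = game.textureManager.getTexture('hotBarSlot');
slotTexture.baseTexture.scaleMode = PIXI.SCALE_MODES.NEAREST;
// or globally, before any textures are created, so every new base texture uses nearest-neighbour scaling
PIXI.settings.SCALE_MODE = PIXI.SCALE_MODES.NEAREST;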

Related

NodeJS image processing: how to merge two images with a polygon mask?

I am trying to do the same thing as this Merge images by mask, but with the Node.js Sharp library instead of Python.
I have two images and a polygon. I want the result to be the merged image where all the pixels inside the polygon come from image1 and the rest from image2; more specifically, how do I implement the 'mergeImagesWithPolygonMask' function below:
const sharp = require('sharp')
let image1 = sharp('image1.jpg')
let image2 = sharp('image2.jpg')
let polygon = [[0,0], [0, 100], [100, 100], [100, 0]]
let newImage = mergeImagesWithPolygonMask(image1, image2, polygon) // how to do this?
newImage.toFile('out.jpg')
After playing with the Sharp library for a while, I came up with a solution, which I think will help other people as well since this is quite a common use case.
async function mergeImagesWithPolygonMask(image1, image2, polygonMask, config) {
    // Merge two images with a polygon mask: inside the polygon we keep pixels from
    // the front image (image2), outside it pixels from the background image (image1).
    // Swap the arguments if you need it the other way around.
    const { height, width } = config;
    // Build an SVG whose filled path is the polygon; it acts as the alpha mask.
    const mask = Buffer.from(
        `<svg height="${height}" width="${width}"><path d="M${polygonMask.join("L")}Z" fill-opacity="1"/></svg>`
    );
    const maskImage = await sharp(mask, { density: 72 }).png().toBuffer();
    const frontImage = await image2.toBuffer();
    // "dest-in" keeps the front image only where the mask is opaque (inside the polygon).
    const upperLayer = await sharp(frontImage)
        .composite([{ input: maskImage, blend: "dest-in", gravity: "northwest" }])
        // Set the output format and render to a buffer
        .png()
        .toBuffer();
    // Lay the masked front image over the background image.
    return image1.composite([{ input: upperLayer, gravity: "northwest" }]);
}
This function first creates an SVG image and uses sharp's composite method to apply the polygon mask to the front image, then composites the masked front image onto the background image. The size passed in config specifies the size of the SVG mask.
To use this function:
const sharp = require('sharp')
const image1 = sharp('image1')
const image2 = sharp('image2')
const polygon = [[682, 457], [743, 437], [748, 471], [689, 477]]
mergeImagesWithPolygonMask(image1, image2, polygon, {height: 720, width:1280})
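Since mergeImagesWithPolygonMask is async and returns a sharp instance, the result still needs to be awaited and written out; inside an async function that could look like this (the output file name is just a placeholder):
const newImage = await mergeImagesWithPolygonMask(image1, image2, polygon, { height: 720, width: 1280 });
await newImage.toFile('out.jpg');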

How can I load an exported Tileset (image collection) from Tiled in Phaser 3?

I want to load an image collection tileset into my phaser game. I know that with tilesets that are just one image you can just load that image into phaser, but what about an image collection? In Tiled I saw the options to export that tileset as either .tsx or .json, but I don't know if that helps in my case. The reason I need this is because I have some objects that are too big to be used as tiles. I can load them into Tiled and place them like I want to, but obviously they don't show up in my game, unless I can import that tileset into phaser. Does anyone know how to do that, or maybe you know a better option than using an image collection?
Well, after some tests and updating my Tiled version to 1.9.2, it seems there is a pretty simple way.
As long as the tileset collection is marked as "Embed in map"
(I could have sworn this checkbox was hidden/deactivated when selecting "Collection of Images" in my earlier Tiled version)
Export the map as JSON
Load the map and tile images:
preload() {
    this.load.tilemapTiledJSON('map', 'export.tmj');
    this.load.image('tile01', 'tile01.png');
    this.load.image('tile02', 'tile02.png');
    ...
}
Create the Phaser tilesets; just use the filename from the JSON as the tilesetName (this is the "tricky" part, at least for me):
create() {
    var map = this.make.tilemap({ key: 'map' });
    var img1 = map.addTilesetImage('tile01.png', 'tile01');
    var img2 = map.addTilesetImage('tile02.png', 'tile02');
    ...
    // create the layer with all tilesets
    map.createLayer('Tile Layer 1', [img1, img2, ...]);
    ...
}
This should work, at least with a "Collection of Images" whose images are 8x8 pixels (since I don't know the needed/intended image size, I didn't want to waste time testing various image sizes needlessly).
Here is a small demo:
(due to CORS issues the map data is inserted as a JSON object and the textures are generated rather than loaded)
const mapJsonExport = {"compressionlevel":-1,"height":10,"infinite":false,"layers":[{"compression":"","data":"AQAAAAEAAAACAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAACAAAAAgAAAAEAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAA==","encoding":"base64","height":10,"id":1,"name":"Tile Layer 1","opacity":1,"type":"tilelayer","visible":true,"width":10,"x":0,"y":0}],"nextlayerid":2,"nextobjectid":1,"orientation":"orthogonal","renderorder":"right-down","tiledversion":"1.9.2","tileheight":8,"tilesets":[{"columns":0,"firstgid":1,"grid":{"height":1,"orientation":"orthogonal","width":1},"margin":0,"name":"tiles","spacing":0,"tilecount":2,"tileheight":8,"tiles":[{"id":0,"image":"tile01.png","imageheight":8,"imagewidth":8},{"id":1,"image":"tile02.png","imageheight":8,"imagewidth":8}],"tilewidth":8}],"tilewidth":8,"type":"map","version":"1.9","width":10};
var config = {
    width: 8 * 10,
    height: 8 * 10,
    zoom: 2.2,
    scene: { preload, create }
};

function preload() {
    // loading inline JSON due to CORS issues with the code snippets
    this.load.tilemapTiledJSON('map', mapJsonExport);
    // generating textures instead of loading them due to CORS issues with the code snippets
    let graphics = this.make.graphics({ add: false });
    graphics.fillStyle(0xff0000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile01', 8, 8);
    graphics.fillStyle(0x000000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile02', 8, 8);
}

function create() {
    let map = this.make.tilemap({ key: 'map' });
    let img1 = map.addTilesetImage('tile01.png', 'tile01');
    let img2 = map.addTilesetImage('tile02.png', 'tile02');
    map.createLayer('Tile Layer 1', [img1, img2], 0, 0);
}

new Phaser.Game(config);
<script src="https://cdn.jsdelivr.net/npm/phaser@3.55.2/dist/phaser.js"></script>

How can I get PDF dimensions in pixels in Node.js?

I tried pdf2json:
const PDFParser = require("pdf2json");
let pdfParser = new PDFParser();
pdfParser.loadPDF("./30x40.pdf"); // ex: ./abc.pdf
pdfParser.on("pdfParser_dataReady", pdfData => {
width = pdfData.formImage.Width; // pdf width
height = pdfData.formImage.Pages[0].Height; // page height
console.log(`Height : ${height}`) // logs 70.866
console.log(`Width : ${width}`) // logs 53.15
});
But it gave the dimensions in unknown units!
I need the dimensions in pixels so I can pass them to the pdf-poppler module, which converts a PDF file to an image and requires the page height in pixels.
Try calipers.
Code example:
const Calipers = require('calipers')('png', 'pdf');
Calipers.measure('./30x40.pdf')
    .then(data => {
        const { width, height } = data.pages[0];
    });
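If the reported values are PDF points (1/72 inch), which is what the PDF MediaBox uses, converting to pixels is just a DPI scale; a small helper sketch (the pointsToPixels name and the 96 DPI default are my own, not part of calipers):
const pointsToPixels = (points, dpi = 96) => Math.round(points * dpi / 72);
Calipers.measure('./30x40.pdf').then(data => {
    const { width, height } = data.pages[0];
    console.log(pointsToPixels(width), pointsToPixels(height)); // page size in pixels at 96 DPI
});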
Alternatively, try a module which converts the PDF without needing the width/height: pdf2pic, pdf-image, or node-imagemagick.
If you're set on using pdf2json, please read this bit of documentation describing the units of the output.
A bit late to the party, but as discussed here: stackoverflow pdf2json unit,
you can multiply your width and height by 24,
just like:
width = pdfData.formImage.Width * 24; // pdf width
height = pdfData.formImage.Pages[0].Height * 24; // page height
and you get pixels.

Create a grid on top of an image

I hope you are all well and safe.
In Node.js, I want to create a grid on top of an image, like this:
Image without grid
Image with grid
Can someone tell me, please, how can I achieve this (some library or something)?
After creating the grid, I would like to go square by square and check the information for each square. Does anyone have any ideas?
Thank you very much for your time!
The first answer has a native Cairo dependency... Below I used pureimage instead, which has pure-JS implementations of JPEG and PNG encoding.
static drawParallel = (canvas, step, isYAxis) => {
    const c2d = canvas.getContext('2d')
    // the number of lines depends on which axis we are stepping along
    const numberOfSteps = ((isYAxis ? canvas.width : canvas.height) / step) | 0
    const end = isYAxis ? canvas.height : canvas.width
    console.log(`Steps: ${numberOfSteps}\n`)
    c2d.lineWidth = 1.1 // PureImage hides thin lines
    c2d.strokeStyle = 'rgba(255,192,203,0.69)'
    for (let i = 1; i < numberOfSteps; i++) {
        const pos = i * step
        const mx = isYAxis ? [pos, 0, pos, end] : [0, pos, end, pos]
        console.log(`Stroking ${mx[0]}, ${mx[1]} to ${mx[2]}, ${mx[3]}`)
        c2d.beginPath()
        c2d.moveTo(mx[0], mx[1])
        c2d.lineTo(mx[2], mx[3])
        c2d.stroke()
    }
}
source: https://stackblitz.com/edit/feathersjs-7kqlyt
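As a rough usage sketch, assuming drawParallel above is made available as a plain function and using pureimage's decodeJPEGFromStream/encodePNGToStream helpers (the file names and step size are just placeholders):
const fs = require('fs')
const PImage = require('pureimage')

async function addGrid(inputPath, outputPath, step) {
    // decode the photo into a pureimage bitmap, which behaves like a canvas
    const img = await PImage.decodeJPEGFromStream(fs.createReadStream(inputPath))
    drawParallel(img, step, true)   // vertical lines
    drawParallel(img, step, false)  // horizontal lines
    await PImage.encodePNGToStream(img, fs.createWriteStream(outputPath))
}

addGrid('./photo.jpg', './photo-grid.png', 50)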
I would use Canvas. You can use it for all sorts of image jobs and editing. For example, you could get a transparent PNG image of the grid and lay it on:
const Canvas = require("canvas");
const canvas = Canvas.createCanvas(619, 319);
const ctx = canvas.getContext("2d");
let img = await Canvas.loadImage("./path/to/image.png");
ctx.drawImage(img, 0, 0, img.width, img.height);
let grid = await Canvas.loadImage("./path/to/grid.png");
ctx.drawImage(grid, 0, 0, canvas.width, canvas.height);
console.log(canvas.toDataURL());
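For the follow-up about checking each square, node-canvas supports getImageData(), so you can walk the grid cell by cell and read the raw RGBA values. A minimal sketch reusing canvas and ctx from above (the 50px cell size and the brightness metric are just examples):
const cellSize = 50; // assumed grid spacing
for (let y = 0; y < canvas.height; y += cellSize) {
    for (let x = 0; x < canvas.width; x += cellSize) {
        const cell = ctx.getImageData(x, y, cellSize, cellSize).data; // RGBA bytes for this cell
        let sum = 0;
        for (let i = 0; i < cell.length; i += 4) {
            sum += (cell[i] + cell[i + 1] + cell[i + 2]) / 3;
        }
        console.log(`cell (${x}, ${y}): average brightness ${(sum / (cell.length / 4)).toFixed(1)}`);
    }
}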
I myself managed to get something like this.

Three.js doesn't fill shape with custom texture/color spectrum

I have a requirement to draw a polygon and fill it with a custom color spectrum.
For the color spectrum: after googling, I found this post, which shows how to generate a color spectrum the way I need.
For the polygon: I chose THREE.Shape() so I can define where to draw it.
Putting things together: it works fine with a mono color like material = new THREE.MeshBasicMaterial({ color: 0xff0000, side: THREE.DoubleSide }); but it shows nothing when I use the color spectrum. You can find my code here (which I forked from here).
Please point out where I am going wrong.
To save you time viewing the code, here is the snippet:
var curveShape = new THREE.Shape();
curveShape.moveTo(0, 0);
curveShape.lineTo(5, 7);
curveShape.lineTo(2, 9);
curveShape.lineTo(8, 11);
curveShape.lineTo(10, 15);
curveShape.lineTo(9, 16);
curveShape.lineTo(7, 20);
curveShape.lineTo(0, 20);
// geometry
var geometry = new THREE.ShapeGeometry(curveShape);
// material texture
var texture = new THREE.Texture(generateTexture());
texture.needsUpdate = true; // important!
// material
var material = new THREE.MeshBasicMaterial({
    map: texture,
    overdraw: 0.5,
    side: THREE.DoubleSide
});
// mesh
mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
The problem is that you are just using moveTo() in order to draw your shape. But this method only sets the internal current point of the shape to the given coordinates. To create a real contour, use methods like lineTo().
var curveShape = new THREE.Shape();
curveShape.moveTo(0, 0); // define start point
curveShape.lineTo(5, 7); // draw a line from the current point to the given coordinates
Full example based on your code snippet: https://jsfiddle.net/f2Lommf5/12600/
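For completeness, generateTexture() is not shown in the snippet; here is a minimal sketch of such a function (the name comes from the question's code, the gradient stops are placeholders) that returns a canvas with a horizontal color spectrum, which new THREE.Texture() accepts directly:
function generateTexture() {
    // draw a horizontal color spectrum onto a small canvas
    var canvas = document.createElement('canvas');
    canvas.width = 256;
    canvas.height = 256;
    var context = canvas.getContext('2d');
    var gradient = context.createLinearGradient(0, 0, canvas.width, 0);
    gradient.addColorStop(0.0, '#ff0000');
    gradient.addColorStop(0.5, '#00ff00');
    gradient.addColorStop(1.0, '#0000ff');
    context.fillStyle = gradient;
    context.fillRect(0, 0, canvas.width, canvas.height);
    return canvas; // pass the canvas to new THREE.Texture(canvas)
}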
