I have an image, which I have uploaded to Google Earth Engine under the asset ID "users/chandrakant/Max_RZSC_WGS1984_25km". I want to visualize the data, but I am not able to do so using the code below. My TIFF image is available at the link below:
https://www.dropbox.com/sh/hpqj2tfei8o82xo/AADNPWEA4PN9ybNN6zx8771Ca?dl=0
var image = ee.Image("users/chandrakant/Max_RZSC_WGS1984_25km");
var vizParams = {
bands: ['b1'],
min: 0.0,
max: 1000.0,
palette: ['bbe029', '0a9501', '074b03'],
};
Map.setCenter(6.746, 46.529, 2);
Map.addLayer(image, vizParams, 'Rootzone Storage Capacity');
Map.centerObject(image);
Can someone suggest why nothing is showing up on the map when I run it? There is no error shown in the console.
Added information:
**Point** (-58.9, -4.83) at 10Km/px
Longitude: -58.895520981116704
Latitude: -4.833137881525199
Zoom Level: 4
Scale (approx. m/px): 9783.93962050256
**Pixels**
Rootzone Storage Capacity: Image (1 band)
b1: masked
Objects
Rootzone Storage Capacity: Image users/chandrakant/RZSC_2000_2015 (1 band)
type: Image
id: users/chandrakant/RZSC_2000_2015
version: 1561473739510841
bands: List (1 element)
0: "b1", float, EPSG:32745, 1440x400 px
properties: Object (4 properties)
system:asset_size: 786204
system:footprint: LinearRing, 5 vertices
system:index: 0
system:time_start: 1451520000000
After reprojecting
var image = ee.Image("users/chandrakant/RZSC_2000_2015_Trial_2");
var vizParams = {
bands: ['b1'],
min: 0.0,
max: 1000.0,
palette: ['bbe029', '0a9501', '074b03'],
};
Map.setCenter(6.746, 46.529, 2);
//Map.addLayer(image, vizParams, 'Rootzone Storage Capacity');
Map.centerObject(image);
print('Projection, crs, and crs_transform:', image.projection());
var reprojected = image.reproject('EPSG:4326');
print('Projection, crs, and crs_transform:', reprojected.projection());
Map.addLayer(reprojected, vizParams, 'Reprojected');
Still not able to view the image.
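One minimal debugging sketch worth trying (an assumption, not a confirmed fix): check whether b1 actually has unmasked values in the 0-1000 range used by vizParams, since the Inspector reports "b1: masked" at the sampled point:
var stats = image.reduceRegion({
  reducer: ee.Reducer.minMax(),
  geometry: image.geometry(),
  scale: 25000, // nominal 25 km resolution implied by the asset name
  maxPixels: 1e9
});
print('b1 min/max:', stats);
If the printed min/max fall outside 0-1000, adjusting min and max in vizParams may be enough to make the layer visible.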
I am trying to do the same thing as this question, Merge images by mask, but with the NodeJS Sharp library instead of Python.
I have two images and a polygon. I want the result to be a merged image where all the pixels inside the polygon come from image1 and all the pixels outside it come from image2; more specifically, how do I implement the 'mergeImagesWithPolygonMask' function below?
const sharp = require('sharp')
let image1 = sharp('image1.jpg')
let image2 = sharp('image2.jpg')
let polygon = [[0,0], [0, 100], [100, 100], [100, 0]]
let newImage = mergeImagesWithPolygonMask(image1, image2, polygon) // how to do this?
newImage.toFile('out.jpg')
After playing with the Sharp library for a while, I came up with a solution which I think will help other people too, as this is quite a common use case.
async function mergeImagesWithPolygonMask(image1, image2, polygonMask, config) {
// merge two images with the polygon mask, where within the polygon we have pixel from image1, outside the polygon with pixels from image2
const { height, width } = config;
// console.log("height", height, "width", width);
const mask = Buffer.from(
`<svg height="${height}" width="${width}"><path d="M${polygonMask.join(
"L"
)}Z" fill-opacity="1"/></svg>`
);
const maskImage = await sharp(mask, { density: 72 }).png().toBuffer();
const frontImage = await image2.toBuffer();
const upperLayer = await sharp(frontImage)
.composite([{ input: maskImage, blend: "dest-in", gravity: "northwest" }])
// Set the output format and write to a file
.png()
.toBuffer();
return image1.composite([{ input: upperLayer, gravity: "northwest" }]);
}
This function first creates an SVG image and uses Sharp's composite method to apply the polygon mask to the front image, then composites this masked front image onto the background image. The blend: "dest-in" option keeps the front image's pixels only where the mask is opaque, which is what turns the SVG polygon into a cutout. The size passed in via config specifies the size of the SVG image.
To use this function:
const sharp = require('sharp')
const image1 = sharp('image1')
const image2 = sharp('image2')
const polygon = [[682, 457], [743, 437], [748, 471], [689, 477]]
mergeImagesWithPolygonMask(image1, image2, polygon, {height: 720, width:1280})
I have been working on a Discord bot to generate image attachments and send them into channels, and it all works a treat. However, I have now encountered a problem: when I use images, in this case 32 images for an attachment, the RSS memory as per process.memoryUsage.rss() hits the roof; if I run this process 2-3 times, the server dies due to lack of memory.
So this is all in a Node Process, on Heroku.
The code in principle:
let stage = new Konva.Stage({
width: width,
height: height,
});
let canvas1 = createCanvas(stage.width(), stage.height());
const ctx1 = canvas1.getContext('2d'); // I draw to this
let canvas2 = createCanvas(stage.width(), stage.height());
const ctx2 = canvas2.getContext('2d'); // I draw to this
let layer = new Konva.Layer();
...
// It's these images that seem to cause the most Memory Bloat
Konva.Image.fromURL(
imagePath,
imageNode => {
imageNode.setAttrs(attrs); // attrs = { image, x, y, width, height }
layer.add(imageNode);
// imageNode.destroy(); doesn't save memory
}
);
...
// Add the canvas to the Konva, as an image
layer.add(new Konva.Image({
image: canvas1,
x: 0,
y: 0,
width: stage.width(),
height: stage.height(),
opacity: 0.5,
}));
// Add the canvas to the Konva, as an image
layer.add(new Konva.Image({
image: canvas2,
x: 0,
y: 0,
width: stage.width(),
height: stage.height(),
opacity: 0.5,
}));
...
stage.add(layer);
layer.draw();
let asCanvas = stage.toCanvas(); // I use this for the attachment
stage.destroy();
stage = null;
layer = null;
return asCanvas;
...
let attachment = new Discord.MessageAttachment(
fromAsCanvas.toBuffer(), // fromAsCanvas is from the asCanvas above.
'imageName.png'
);
message.channel
.send({
files: [attachment],
content: 'Message with the Attachment',
});
My belief at this time is that the images loaded from the system and added to the layer, then to the canvas, are not freed from memory and just sit there for a significant amount of time with no consistency.
I have tried:
Running the Garbage Collector afterwards to confirm if this would help (it does not)
Destroying the image post-add to layer (this removed the image itself)
Destroying the Stage
Destroying the Layer
Nulling the Stage, Layer, and all 4 Canvas related variables
I have lots of logging, e.g.:
Start: 74MB
Post ctx1: 75MB
Post ctx2: 75MB
Post Reduce: 77MB
Post forEach: 237MB // adds all 32 images, +160MB
Post Mark Destroyed Guns (spread arrays): 237MB
Post Added Some Things 1: 247MB +10MB
Post Added Some Things 2: 249MB
Post Added Some Things 3: 259MB
Post Added Some Things 4: 260MB
Post Add canvas1 Canvas to Layer: 260MB
Post Add canvas2 to Layer: 260MB
Post Add Layer to Stage: 293MB +33MB
Post Layer.draw: 294MB
Post toCanvas: 321MB +27MB
Post Destroy Stage/etc: 308MB -13MB
Sends message
5 Seconds later RSS is at: 312MB +4MB
As you can see, once I have this solved, I might still have 50MB of extra memory usage to debug too.
I believe I have now solved this, based on this issue: https://github.com/Automattic/node-canvas/issues/785. Basically, I added the statement img.onload = null; inside 'onload', as I had refactored to a promise/onload system during my attempts to solve this.
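A minimal sketch of that pattern, assuming node-canvas's Image class (the loader name and structure are illustrative, not the exact code from the bot):
const { Image } = require('canvas');

function loadImageWithCleanup(src) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      img.onload = null;  // break the handler reference so the Image can be collected
      img.onerror = null;
      resolve(img);
    };
    img.onerror = (err) => {
      img.onload = null;
      img.onerror = null;
      reject(err);
    };
    img.src = src;  // starts the load
  });
}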
Followup 1:
In addition to setting img.src = null, I then followed up with the following:
process.nextTick(() => {
img.onload = null;
img.onerror = null;
img = null;
})
The setting of img = null brought back a lot more memory, but still left plenty taken up.
Followup 2:
I discovered that destroying the layer and stage (and their children) also helped free up even more memory, to the point that it seems I actually end up recovering memory between processes:
layer.destroyChildren();
layer.destroy();
stage.destroyChildren();
stage.destroy();
stage = null;
layer = null;
I am hoping this has finally solved it.
I want to load an image collection tileset into my Phaser game. I know that with tilesets that are just one image you can simply load that image into Phaser, but what about an image collection? In Tiled I saw the options to export that tileset as either .tsx or .json, but I don't know if that helps in my case. The reason I need this is that I have some objects that are too big to be used as tiles. I can load them into Tiled and place them as I want to, but obviously they don't show up in my game unless I can import that tileset into Phaser. Does anyone know how to do that, or maybe you know a better option than using an image collection?
Well, after some tests and updating my Tiled version to 1.9.2, it seems there is a pretty simple way.
As long as the tileset collection is marked as "Embedded in map"
(I could have sworn this checkbox was hidden/deactivated when selecting "Collection of Images" in my earlier Tiled version)
Export the map as JSON
Load the map and the tile images
preload() {
this.load.tilemapTiledJSON('map', 'export.tmj');
this.load.image('tile01', 'tile01.png');
this.load.image('tile02', 'tile02.png');
...
}
create Phaser TileSets; just use the filename from the JSON as the tilesetName (this is the "tricky" part, at least for me)
create() {
var map = this.make.tilemap({ key: 'map' });
var img1 = map.addTilesetImage( 'tile01.png', 'tile01');
var img2 = map.addTilesetImage( 'tile02.png', 'tile02');
...
// create the layer with all tilesets
map.createLayer('Tile Layer 1', [img1, img2, ...]);
...
}
This should work, at least with a "Collection of Images" whose images have a size of 8x8 pixels (since I don't know the needed/intended image size, I didn't want to waste time testing various image sizes needlessly).
Here is a small demo:
(due to CORS issues, the map data is inserted as a JSON object and the textures are generated rather than loaded)
const mapJsonExport = {"compressionlevel":-1,"height":10,"infinite":false,"layers":[{"compression":"","data":"AQAAAAEAAAACAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAACAAAAAgAAAAEAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAA==","encoding":"base64","height":10,"id":1,"name":"Tile Layer 1","opacity":1,"type":"tilelayer","visible":true,"width":10,"x":0,"y":0}],"nextlayerid":2,"nextobjectid":1,"orientation":"orthogonal","renderorder":"right-down","tiledversion":"1.9.2","tileheight":8,"tilesets":[{"columns":0,"firstgid":1,"grid":{"height":1,"orientation":"orthogonal","width":1},"margin":0,"name":"tiles","spacing":0,"tilecount":2,"tileheight":8,"tiles":[{"id":0,"image":"tile01.png","imageheight":8,"imagewidth":8},{"id":1,"image":"tile02.png","imageheight":8,"imagewidth":8}],"tilewidth":8}],"tilewidth":8,"type":"map","version":"1.9","width":10};
var config = {
width: 8 * 10,
height: 8 * 10,
zoom: 2.2,
scene: { preload, create }
};
function preload() {
// loading inline JSON due to CORS-issues with the code Snippets
this.load.tilemapTiledJSON('map', mapJsonExport);
// generating texture instead of loading them due to CORS-issues with the code Snippets
let graphics = this.make.graphics({add: false});
graphics.fillStyle(0xff0000);
graphics.fillRect(0, 0, 8, 8);
graphics.generateTexture('tile01', 8, 8);
graphics.fillStyle(0x000000);
graphics.fillRect(0, 0, 8, 8);
graphics.generateTexture('tile02', 8, 8);
}
function create () {
let map = this.make.tilemap({ key: 'map' });
let img1 = map.addTilesetImage( 'tile01.png', 'tile01');
let img2 = map.addTilesetImage( 'tile02.png', 'tile02');
map.createLayer('Tile Layer 1', [img1, img2], 0, 0);
}
new Phaser.Game(config);
<script src="https://cdn.jsdelivr.net/npm/phaser@3.55.2/dist/phaser.js"></script>
How can I crop an image after upload and send the edited response URL to the frontend?
I will be grateful for an answer.
MY CODE:
const stream = cloudinary.uploader.upload_stream(
{
folder,
},
(error: UploadApiErrorResponse | undefined, result: UploadApiResponse | undefined): void => {
console.log(error, result)
if (result) {
resolve({
url: result.url,
size: Math.round(result.bytes / 1024),
height: result.height,
width: result.width,
})
} else {
reject(error)
}
}
)
streamifier.createReadStream(buffer).pipe(stream)
The most common method of integrating Cloudinary is to upload the original file to your Cloudinary account and store the Upload API response, which contains the details of the image, in your database: https://cloudinary.com/documentation/image_upload_api_reference#sample_response
If you don't want to store the entire response, you should store at least the fields needed to create a URL for that image later: https://cloudinary.com/documentation/image_transformations#transformation_url_structure
(These are public_id, resource_type, type, format, and timestamp.) Strictly speaking, most of those are optional if your assets are images of type 'upload', but you certainly need the public_id.
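For example, a minimal record to persist might look like this (the field names mirror the Upload API sample response; savedAsset is just an illustrative variable):
const savedAsset = {
  public_id: result.public_id,
  resource_type: result.resource_type, // 'image'
  type: result.type,                   // 'upload'
  format: result.format,               // e.g. 'jpg'
  version: result.version,             // timestamp-based version used in versioned URLs
};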
Then, in your frontend code, when adding the image to your page or application, you add transformation parameters while building the URL, asking that the image be returned with transformations applied to match where and how you're using it.
A common option is to set the width and height to exactly match the image tag or image view, then apply automatic cropping if the aspect ratio of the original doesn't match, with the crop selection being automatic: https://cloudinary.com/documentation/resizing_and_cropping
A JavaScript example to add those parameters, if the image should be 500x500, is:
cloudinary.url( public_id,
{
resource_type: 'image', // these are the defaults and can be omitted
type: 'upload', // these are the defaults and can be omitted
height: 500,
width: 500,
crop: 'lfill', // try to fill the requested width and height without scaling up, crop if needed
gravity: 'auto', // select which area to keep automatically
fetch_format: 'auto',
quality: 'auto',
format: 'jpg', // sets the file extension on the URL and converts to that format if needed, unless fetch_format was set to override it
});
The resulting URL will be something like: http://res.cloudinary.com/demo/image/upload/c_lfill,f_auto,g_auto,h_500,q_auto,w_500/sample.jpg
I am using Chart.js to map an aircraft's vertical approach path:
The code to get the actual approach path is relatively straightforward:
var position_reports = [
  {
    altitude: 4606,
    heading: 42.94507920742035,
    latitude: 35.16972222222222,
    longitude: -101.75638888888889,
    speed: 133,
    vertical_speed: -714
  },
  // additional position reports redacted for clarity
];
var data = [];
for(var x = 0; x < position_reports.length; x++) {
data.push(position_reports[x].altitude);
}
// at this point, data looks like this: [4606, 4605, 4604, 4603, 4603, 4602, etc....]
new Chart(ctx, {
type: 'line',
data: {
labels: labels,
datasets: [
{
label: 'Actual Altitude',
borderColor: '#ff6600',
data: data,
borderWidth: 5,
fill: false,
}
]
},
// additional options redacted for clarity
});
The above aircraft's glidepath (mostly) follows a standard 3-degree descent path. I would like to draw fixed 4-degree and 2-degree lines so I can make sure the aircraft's actual vertical path falls between 4 and 2 degrees. Here is a hand-drawn (and thus not to scale) example of what I'm trying to achieve:
How do I go about adding these red lines in Chart.js, relative to the vertical path? Is it possible with this library, or should I be using something else? Any feedback appreciated!
You can always write a custom plugin to do it, but I think this plugin from the Chart.js project itself can help you out; looking at the example, if you specify the right start and end points, it should draw a straight line.
https://github.com/chartjs/chartjs-plugin-annotation#line-annotations
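A minimal sketch of that approach, assuming Chart.js 3 with chartjs-plugin-annotation v1+ registered (the altitude endpoints below are placeholders; you would compute them from the distance covered and tan(4°) / tan(2°)):
new Chart(ctx, {
  type: 'line',
  data: {
    labels: labels,
    datasets: [ /* actual altitude dataset as above */ ]
  },
  options: {
    plugins: {
      annotation: {
        annotations: {
          fourDegreeLine: {
            type: 'line',
            xMin: 0,
            xMax: labels.length - 1,
            yMin: 4800, // placeholder start altitude for the 4-degree path
            yMax: 1000, // placeholder end altitude at the threshold
            borderColor: 'red',
            borderWidth: 2,
          },
          twoDegreeLine: {
            type: 'line',
            xMin: 0,
            xMax: labels.length - 1,
            yMin: 2900, // placeholder start altitude for the 2-degree path
            yMax: 1000,
            borderColor: 'red',
            borderWidth: 2,
          },
        },
      },
    },
  },
});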