Flutter/Mediapipe: how to save a Flutter Canvas/CustomPainter with mediapipe pose outline to an image file? - flutter-layout

We are using the code shown below, built on the package package:google_mlkit_pose_detection/google_mlkit_pose_detection.dart (https://pub.dev/packages/google_mlkit_pose_detection), to save the phone camera image together with the MediaPipe pose outline to an image file on the phone. The library marks key points of the human pose, such as shoulders and arms, with circles and lines using Canvas and CustomPainter. We can see these on the phone screen, but we want to save the image and the human pose points to a file on the phone. In other words, we want to save the painting as an image on the device.
A similar question was asked here: "Flutter: How would one save a Canvas/CustomPainter to an image file?", but without the MediaPipe component. We tried that solution, but it didn't work for us.
The main problem in the code example shown below is that picture is always null after the line picture = recorder.endRecording();
import 'dart:math';
import 'dart:typed_data';
import 'dart:ui';
import 'dart:ui' as ui;
import 'package:ace_example/painters/keypoints.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_pose_detection/google_mlkit_pose_detection.dart';
import 'coordinates_translator.dart';
class PosePainter extends CustomPainter {
  PosePainter(this.poses, this.absoluteImageSize, this.rotation);

  final List<Pose> poses;
  final Size absoluteImageSize;
  final InputImageRotation rotation;

  Picture? picture;

  final paintMid = Paint()
    ..style = PaintingStyle.stroke
    ..strokeWidth = 2.0
    ..color = const Color.fromRGBO(224, 224, 224, 1);
  final paintLeft = Paint()
    ..style = PaintingStyle.fill
    ..strokeWidth = 2.0
    ..color = const Color.fromRGBO(255, 138, 0, 1);
  final paintRight = Paint()
    ..style = PaintingStyle.fill
    ..strokeWidth = 2.0
    ..color = const Color.fromRGBO(0, 217, 231, 1);

  final cycleRadius = 2.0;
  final cycleBorderRadius = max(2.0 + 1, 2.0 * 1.2);
  final lineWidth = 6.0;
  final posePoints = DownwardDogPoints();

  @override
  void paint(Canvas canvas, Size size) {
    final recorder = ui.PictureRecorder();
    final canvas = Canvas(
        recorder,
        Rect.fromPoints(
            const Offset(0.0, 0.0), const Offset(0.0, 0.0)));
    for (final pose in poses) {
      pose.landmarks.forEach((poseType, landmark) {
        // draw circle
        // white
        canvas.drawCircle(
            Offset(
              translateX(landmark.x, rotation, size, absoluteImageSize),
              translateY(landmark.y, rotation, size, absoluteImageSize),
            ),
            cycleBorderRadius,
            paintMid);
      });
    }
    picture = recorder.endRecording();
  }

  Future<Image> generateImage() async {
    // ui.Image img = (await picture.toImage(384, 683)) as Image;
    // return img;
    if (picture != null) {
      ui.Image img = await picture!.toImage(384, 683);
      // final pngBytes = await img.toByteData(format: ui.EncodingFormat.png());
      ByteData pngBytes = (await img.toByteData(format: ImageByteFormat.png))!;
      Uint8List imgList = Uint8List.view(pngBytes.buffer);
      return Image.memory(imgList);
    } else {
      print("Picture is empty");
    }
    Uint8List imgList = Uint8List.fromList([0, 0, 0, 0]);
    return Image.memory(imgList);
  }

  @override
  bool shouldRepaint(covariant PosePainter oldDelegate) {
    return oldDelegate.absoluteImageSize != absoluteImageSize ||
        oldDelegate.poses != poses;
  }
}
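
For reference, here is a minimal sketch of one way to turn a recorded drawing into a PNG file. It is only a sketch under assumptions that go beyond the code above: it assumes the painter draws onto the canvas it is handed (rather than onto a second, internal recorder), savePoseOverlay and outputPath are placeholder names, and a writable path (for example one obtained from path_provider) is available. It also captures only the pose overlay; compositing the camera frame underneath would need an extra canvas.drawImage call that is not shown here.

import 'dart:io';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';

// Sketch only: drive a CustomPainter by hand and write its output as a PNG.
// Assumes painter.paint() draws on the canvas passed in; outputPath is a placeholder.
Future<void> savePoseOverlay(CustomPainter painter, Size size, String outputPath) async {
  final recorder = ui.PictureRecorder();
  // Give the recording canvas a non-empty cull rect matching the target size.
  final canvas = Canvas(recorder, Rect.fromLTWH(0, 0, size.width, size.height));
  painter.paint(canvas, size);
  final picture = recorder.endRecording(); // non-null: recording was started above
  final ui.Image img =
      await picture.toImage(size.width.toInt(), size.height.toInt());
  final byteData = await img.toByteData(format: ui.ImageByteFormat.png);
  await File(outputPath).writeAsBytes(byteData!.buffer.asUint8List());
  img.dispose();
}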

Related

Face detection not working in Android Studio

I'm new to Android Studio and want to do face detection. The face detector code uses OpenCV and is written in Python. I'm using Chaquopy as the Python SDK. When I run the app, the face is not detected, and no error is shown either. Can someone help me out? Below is my MainActivity.java code:
public class MainActivity extends AppCompatActivity {
Button btn;
ImageView iv;
// now take bitmap and bitmap drawable to get image from image view
BitmapDrawable drawable;
Bitmap bitmap;
String imageString="";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
btn = (Button)findViewById(R.id.submit);
iv = (ImageView)findViewById(R.id.image_view);
if(!Python.isStarted())
Python.start(new AndroidPlatform(this));
final Python py = Python.getInstance();
btn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// on click over button, get image from image view
drawable = (BitmapDrawable)iv.getDrawable();
bitmap = drawable.getBitmap();
imageString = getStringImage(bitmap);
// now in imagestring, we get encoded image string
// now pass this input string to python script
PyObject pyo = py.getModule("myscript");
// calling the main function in python code and passing image string as parameter
PyObject obj = pyo.callAttr("main", imageString);
// obj will return value ie. our image string
String str = obj.toString();
// convert it to byte array
byte data[] = android.util.Base64.decode(str, android.util.Base64.DEFAULT);
// now convert it to bitmap
Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
//now set this bitmap to imageview
iv.setImageBitmap(bmp);
}
});
}
// function to convert this image into byte array and finally into base 64 string
private String getStringImage(Bitmap bitmap) {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
// store in byte array
byte[] imageBytes = baos.toByteArray();
// finally encode to string
String encodedImage = android.util.Base64.encodeToString(imageBytes, android.util.Base64.DEFAULT); // Base64.DEFAULT
return encodedImage;
}
}
And below is my Python script, "myscript.py":
import numpy as np
import cv2
import io
from PIL import Image
import base64
import face_recognition
def main(data):
    decoded_data = base64.b64decode(data)
    np_data = np.fromString(decoded_data, np.uint8)
    img = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    face_locations = face_recognition.face_locations(img_gray)
    for (top, right, bottom, left) in face_locations:
        cv2.rectangle(img_rgb, (left, top), (right, bottom), (0, 0, 255), 8)
    # convert this image to PIL
    pil_im = Image.fromarray(img_rgb)
    # convert this image to byte
    buff = io.BytesIO()
    pil_im.save(buff, format="PNG")
    # converting to base64
    img_str = base64.b64encode(buff.getvalue())
    return "" + str(img_str, 'utf-8')
My minSdkVersion is 16 and targetSdkVersion is 30.
I don't understand what the problem with this code is. Can someone help me?
Thanks in advance.
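
As a hedged aside on the decode step only (not part of the original post): NumPy is case-sensitive and has no fromString attribute; the documented call is np.frombuffer (np.fromstring also exists but is deprecated). A minimal sketch of just that step, with decode_image as a placeholder name:

# Sketch of the base64 -> OpenCV image decode step.
# NumPy has no np.fromString; np.frombuffer is the documented call.
import base64

import cv2
import numpy as np

def decode_image(data):
    decoded = base64.b64decode(data)
    np_data = np.frombuffer(decoded, np.uint8)   # 1-D uint8 buffer
    return cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)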

Apache POI, converting PowerPoint slides to images, images are low quality

Here is my code
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.poi.hslf.usermodel.HSLFSlide;
import org.apache.poi.hslf.usermodel.HSLFSlideShow;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.awt.*;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
@Service
@Slf4j
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class FileConverterService {
public void convertPptToImages() throws Exception {
ClassLoader classLoader = getClass().getClassLoader();
File file = new File(classLoader.getResource("Sylon_GuidedPath_Sprint22Deck.ppt").getFile());
Document pdfDocument = new Document();
// PdfWriter pdfWriter = PdfWriter.getInstance(pdfDocument, new FileOutputStream(""));
FileInputStream is = new FileInputStream(file);
HSLFSlideShow ppt = new HSLFSlideShow(is);
is.close();
Dimension pgsize = ppt.getPageSize();
pdfDocument.setPageSize(new Rectangle((float) pgsize.getWidth(), (float) pgsize.getHeight()));
// convert to images
int idx = 1;
for (HSLFSlide slide : ppt.getSlides()) {
BufferedImage img =
new BufferedImage(pgsize.width, pgsize.height, BufferedImage.TYPE_INT_RGB);
Graphics2D graphics = img.createGraphics();
graphics.setRenderingHint(
RenderingHints.KEY_ALPHA_INTERPOLATION, RenderingHints.VALUE_ALPHA_INTERPOLATION_QUALITY);
graphics.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
graphics.setRenderingHint(
RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
graphics.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
graphics.setRenderingHint(
RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
graphics.setRenderingHint(
RenderingHints.KEY_FRACTIONALMETRICS, RenderingHints.VALUE_FRACTIONALMETRICS_ON);
// clear the drawing area
graphics.setPaint(Color.white);
graphics.fill(new Rectangle2D.Float(0, 0, pgsize.width, pgsize.height));
// render
slide.draw(graphics);
// save the output
ImageWriter jpgWriter = ImageIO.getImageWritersByFormatName("jpg").next();
ImageWriteParam jpgWriteParam = jpgWriter.getDefaultWriteParam();
jpgWriteParam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
jpgWriteParam.setCompressionQuality(1f);
jpgWriter.setOutput(new FileImageOutputStream(
new File("slide-" + idx + ".jpg")));
IIOImage outputImage = new IIOImage(img, null, null);
jpgWriter.write(null, outputImage, jpgWriteParam);
jpgWriter.dispose();
idx++;
}
}
}
I based my code on this documentation: http://poi.apache.org/components/slideshow/how-to-shapes.html#Render
I have tried both JPEG and PNG, and the image is fairly low resolution; the text is difficult to read compared to the original .ppt. Is there any way to increase the resolution/quality of the images?
What you can try:
Apply RenderingHints to the graphics. Below is a sample I created comparing the image with and without rendering hints; you can see that the characters look better with the hints applied.
Increase the compression quality for the JPEG image.
The following program demonstrates how to generate the image with and without rendering hints and how to write it with 100% compression quality.
import java.awt.Dimension;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.plugins.jpeg.JPEGImageWriteParam;
import javax.imageio.stream.FileImageOutputStream;
import org.apache.poi.hslf.usermodel.HSLFSlide;
import org.apache.poi.hslf.usermodel.HSLFSlideShow;
public class ImproveSlideConvertToImageQuality {
public static void main(String[] args) throws Exception {
convertPptToImages(true);
convertPptToImages(false);
}
public static void convertPptToImages(boolean withRenderHint) throws Exception {
File file = new File("test.ppt");
String suffix = withRenderHint ? "-with-hint" : "-without-hint";
try (FileInputStream is = new FileInputStream(file); HSLFSlideShow ppt = new HSLFSlideShow(is)) {
Dimension pgsize = ppt.getPageSize();
int idx = 1;
for (HSLFSlide slide : ppt.getSlides()) {
BufferedImage img = new BufferedImage(pgsize.width, pgsize.height, BufferedImage.TYPE_INT_RGB);
Graphics2D graphics = img.createGraphics();
if (withRenderHint) {
graphics.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
RenderingHints.VALUE_INTERPOLATION_BILINEAR);
graphics.setRenderingHint(
RenderingHints.KEY_ALPHA_INTERPOLATION, RenderingHints.VALUE_ALPHA_INTERPOLATION_QUALITY);
graphics.setRenderingHint(
RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
graphics.setRenderingHint(
RenderingHints.KEY_FRACTIONALMETRICS, RenderingHints.VALUE_FRACTIONALMETRICS_ON);
graphics.setRenderingHint(
RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
graphics.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
RenderingHints.VALUE_TEXT_ANTIALIAS_GASP);
}
// render
slide.draw(graphics);
final ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
writer.setOutput(new FileImageOutputStream(
new File("slide-" + idx + suffix + ".jpeg")));
JPEGImageWriteParam jpegParams = new JPEGImageWriteParam(null);
jpegParams.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
jpegParams.setCompressionQuality(1f);
// writes the file with given compression level
// from your JPEGImageWriteParam instance
IIOImage image = new IIOImage(img, null, null);
writer.write(null, image, jpegParams);
writer.dispose();
idx++;
}
}
}
}
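
A further, hedged idea that the answer above does not cover: HSLF renders through Java2D, so drawing each slide into a larger BufferedImage after scaling the Graphics2D raises the effective resolution. The sketch below reuses the HSLFSlide and Dimension types from the programs above; renderSlideScaled and the scale factor are placeholder choices, not part of the original answer.

// Sketch: render a slide at `scale` times the page size for a higher-resolution bitmap.
static BufferedImage renderSlideScaled(HSLFSlide slide, Dimension pgsize, double scale) {
    BufferedImage img = new BufferedImage(
            (int) (pgsize.width * scale), (int) (pgsize.height * scale),
            BufferedImage.TYPE_INT_RGB);
    Graphics2D graphics = img.createGraphics();
    graphics.scale(scale, scale);          // keep slide coordinates, multiply the pixels
    graphics.setPaint(java.awt.Color.white);
    graphics.fill(new java.awt.geom.Rectangle2D.Float(0, 0, pgsize.width, pgsize.height));
    slide.draw(graphics);
    graphics.dispose();
    return img;
}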
References:
Controlling Rendering Quality
Setting jpg compression level with ImageIO in Java

Xamarin.Forms Action Bar - Center Aligned Image

Using Xamarin.Forms, how do I get the same effect as the application pictured below, specifically to show a centred image on the Action Bar / page tool bar (the section in a blue box)?
I would like to have a wide image in that section, and the solution must work for Android, iOS, Windows Phone, and Universal Windows (even if it means writing custom renderers or platform-specific Xamarin code).
I suggest you create your own Xamarin.Forms view and handle the navigation yourself, with something similar to this:
public class CustomBackNavigationBar : StackLayout
{
public Image BackIcon;
public Image Icon;
public Label IconTitle;
public StackLayout IconContainer;
public CustomBackNavigationBar(string title, string icon)
{
Padding = new Thickness(15,5);
HeightRequest = 40;
Orientation = StackOrientation.Horizontal;
VerticalOptions = LayoutOptions.Start;
BackgroundColor = StaticData.BlueColor;
Spacing = 15;
BackIcon = new Image
{
Source = StaticData.BackIcon,
HorizontalOptions = LayoutOptions.Start
};
Label Title = new Label
{
Text = title,
TextColor = Color.White,
FontSize = Device.GetNamedSize(NamedSize.Default, typeof(Label)),
FontAttributes = FontAttributes.Bold,
VerticalTextAlignment = TextAlignment.Center
};
Icon = new Image
{
Source = icon
};
IconTitle = new Label
{
Text = StaticData.CallAgent,
TextColor = Color.White,
FontSize = Device.GetNamedSize(NamedSize.Micro, typeof(Label)),
};
IconContainer = new StackLayout
{
HorizontalOptions = LayoutOptions.EndAndExpand,
Spacing = 2,
Children = { Icon, IconTitle }
};
Children.Add(BackIcon);
Children.Add(Title);
Children.Add(IconContainer);
#region Events
BackIcon.GestureRecognizers.Clear();
BackIcon.GestureRecognizers.Add(new TapGestureRecognizer
{
Command = new Command(PopAsync)
});
#endregion
}
async void PopAsync()
{
await App.AppNavigation.PopAsync();
}
}
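
A hedged usage sketch: once the built-in navigation bar is hidden, the custom bar can be placed as the first child of the page. DetailPage, "Details", and "agent.png" are placeholder names, not part of the answer above.

// Hypothetical usage of CustomBackNavigationBar; page name, title, and icon file are placeholders.
public class DetailPage : ContentPage
{
    public DetailPage()
    {
        // Hide the platform navigation bar so the custom bar replaces it.
        NavigationPage.SetHasNavigationBar(this, false);

        Content = new StackLayout
        {
            Spacing = 0,
            Children =
            {
                new CustomBackNavigationBar("Details", "agent.png"),
                new Label { Text = "Page content goes here" }
            }
        };
    }
}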

How do I create an editable Label in javafx 2.2

I am looking to create an editable label at an arbitrary position on the pane on which I am writing. I am under the impression that TextField or TextArea objects are what I could use to implement that capability. There is obviously more to it as I don't know how to position the object when I create it. I have found an example on the "Chaotic Java" website but I need to do a bit more work to understand what's going on there. http://chaoticjava.com/posts/another-javafx-example-the-editable-label/
I am looking for more input from this group.
(There are no errors because I have not written any code.)
I was kind of curious about how to achieve this, so I gave it a try. This is what I came up with.
The approach used is pretty much the same as that suggested by James in his comment:
I would start with a Pane, . . ., TextFields to represent text while being edited. Register mouse listeners with the Pane and Text objects, and use the layoutX and layoutY properties to position things . . . just to use text fields, and to use CSS to make them look like labels when not focused and text fields when focused.
The only significantly tricky part was working out how to correctly size the text fields, as the Text inside the text field is not exposed via a public API that would let you listen to its layout bounds. You could perhaps use a CSS lookup function to get at the enclosed Text, but I chose to use a private sun FontMetrics API (which may be deprecated in the future) to get the size of the text. In the future, with Java 9, you should be able to perform the task without using the private API.
The solution doesn't try to do anything tricky like deal with multi-format or multi-line text, it is just for short, single line comments of a few words that can be placed over a scene.
TextCreator.java
// ## CAUTION: beware the com.sun imports...
import com.sun.javafx.tk.FontMetrics;
import com.sun.javafx.tk.Toolkit;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.scene.Cursor;
import javafx.scene.Scene;
import javafx.scene.control.TextField;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.Pane;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;
/**
* Displays a map of the lonely mountain upon which draggable, editable labels can be overlaid.
*/
public class TextCreator extends Application {
private static final String MAP_IMAGE_LOC =
"http://images.wikia.com/lotr/images/archive/f/f6/20130209175313!F27c_thorins_map_from_the_hobbit.jpg";
public static void main(String[] args) throws Exception {
launch(args);
}
@Override
public void start(final Stage stage) throws Exception {
Pane pane = new Pane();
pane.setOnMouseClicked(event -> {
if (event.getTarget() == pane) {
pane.getChildren().add(
new EditableDraggableText(event.getX(), event.getY())
);
}
});
EditableDraggableText cssStyled =
new EditableDraggableText(439, 253, "Style them with CSS");
cssStyled.getStyleClass().add("highlighted");
pane.getChildren().addAll(
new EditableDraggableText(330, 101, "Click to add a label"),
new EditableDraggableText(318, 225, "You can edit your labels"),
cssStyled,
new EditableDraggableText(336, 307, "And drag them"),
new EditableDraggableText(309, 346, "Around The Lonely Mountain")
);
StackPane layout = new StackPane(
new ImageView(
new Image(
MAP_IMAGE_LOC
)
),
pane
);
Scene scene = new Scene(layout);
scene.getStylesheets().add(getClass().getResource(
"editable-text.css"
).toExternalForm());
stage.setScene(scene);
stage.setResizable(false);
stage.show();
}
/**
* A text field which has no special decorations like background, border or focus ring.
* i.e. the EditableText just looks like a vanilla Text node or a Label node.
*/
class EditableText extends TextField {
// The right margin allows a little bit of space
// to the right of the text for the editor caret.
private final double RIGHT_MARGIN = 5;
EditableText(double x, double y) {
relocate(x, y);
getStyleClass().add("editable-text");
//** CAUTION: this uses a non-public API (FontMetrics) to calculate the field size
// the non-public API may be removed in a future JavaFX version.
// see: https://bugs.openjdk.java.net/browse/JDK-8090775
// Need font/text measurement API
FontMetrics metrics = Toolkit.getToolkit().getFontLoader().getFontMetrics(getFont());
setPrefWidth(RIGHT_MARGIN);
textProperty().addListener((observable, oldTextString, newTextString) ->
setPrefWidth(metrics.computeStringWidth(newTextString) + RIGHT_MARGIN)
);
Platform.runLater(this::requestFocus);
}
}
/**
* An EditableText (a text field which looks like a label), which can be dragged around
* the screen to reposition it.
*/
class EditableDraggableText extends StackPane {
private final double PADDING = 5;
private EditableText text = new EditableText(PADDING, PADDING);
EditableDraggableText(double x, double y) {
relocate(x - PADDING, y - PADDING);
getChildren().add(text);
getStyleClass().add("editable-draggable-text");
// if the text is empty when we lose focus,
// the node has no purpose anymore
// just remove it from the scene.
text.focusedProperty().addListener((observable, hadFocus, hasFocus) -> {
if (!hasFocus && getParent() != null && getParent() instanceof Pane &&
(text.getText() == null || text.getText().trim().isEmpty())) {
((Pane) getParent()).getChildren().remove(this);
}
});
enableDrag();
}
public EditableDraggableText(int x, int y, String text) {
this(x, y);
this.text.setText(text);
}
// make a node movable by dragging it around with the mouse.
private void enableDrag() {
final Delta dragDelta = new Delta();
setOnMousePressed(mouseEvent -> {
this.toFront();
// record a delta distance for the drag and drop operation.
dragDelta.x = mouseEvent.getX();
dragDelta.y = mouseEvent.getY();
getScene().setCursor(Cursor.MOVE);
});
setOnMouseReleased(mouseEvent -> getScene().setCursor(Cursor.HAND));
setOnMouseDragged(mouseEvent -> {
double newX = getLayoutX() + mouseEvent.getX() - dragDelta.x;
if (newX > 0 && newX < getScene().getWidth()) {
setLayoutX(newX);
}
double newY = getLayoutY() + mouseEvent.getY() - dragDelta.y;
if (newY > 0 && newY < getScene().getHeight()) {
setLayoutY(newY);
}
});
setOnMouseEntered(mouseEvent -> {
if (!mouseEvent.isPrimaryButtonDown()) {
getScene().setCursor(Cursor.HAND);
}
});
setOnMouseExited(mouseEvent -> {
if (!mouseEvent.isPrimaryButtonDown()) {
getScene().setCursor(Cursor.DEFAULT);
}
});
}
// records relative x and y co-ordinates.
private class Delta {
double x, y;
}
}
}
editable-text.css
.editable-text {
-fx-background-color: transparent;
-fx-background-insets: 0;
-fx-background-radius: 0;
-fx-padding: 0;
}
.editable-draggable-text:hover .editable-text {
-fx-background-color: yellow;
}
.editable-draggable-text {
-fx-padding: 5;
-fx-background-color: rgba(152, 251, 152, 0.2); /* translucent palegreen */
}
.editable-draggable-text:hover {
-fx-background-color: orange;
}
.highlighted {
-fx-background-color: rgba(255, 182, 93, 0.3); /* translucent mistyrose */
-fx-border-style: dashed;
-fx-border-color: firebrick;
}
If you have time, you could clean the sample implementation up and donate it to the ControlsFX project.
You can use a method of Label: setGraphic(). Here is my code:
Here is my code:
public void editableLabelTest(Stage stage){
Scene scene = new Scene(new VBox(new EditableLabel("I am a label"),
new EditableLabel("I am a label too")));
stage.setScene(scene);
stage.show();
}
class EditableLabel extends Label{
TextField tf = new TextField();
/***
* backup is used to cancel when press ESC...
*/
String backup = "";
public EditableLabel(){
this("");
}
public EditableLabel(String str){
super(str);
this.setOnMouseClicked(e -> {
if(e.getClickCount() == 2){
tf.setText(backup = this.getText());
this.setGraphic(tf);
this.setText("");
tf.requestFocus();
}
});
tf.focusedProperty().addListener((prop, o, n) -> {
if(!n){
toLabel();
}
});
tf.setOnKeyReleased(e -> {
if(e.getCode().equals(KeyCode.ENTER)){
toLabel();
}else if(e.getCode().equals(KeyCode.ESCAPE)){
tf.setText(backup);
toLabel();
}
});
}
void toLabel(){
this.setGraphic(null);
this.setText(tf.getText());
}
}

How do I use the Emgu CV _SmoothGaussian method

I'm trying to get the OCR sample app to recognise some small text, and my approach is to resize the image. Once I have resized the image, it is all pixelated.
I want to use the _SmoothGaussian method to clean it up, but I get an error each time I execute the method.
Here is the code:
Image<Bgr, Byte> image = new Image<Bgr, byte>(openImageFileDialog.FileName);
using (Image<Gray, byte> gray = image.Convert<Gray, Byte>().Resize(800, 600, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, true))
{
gray.Convert<Gray, Byte>()._SmoothGaussian(4);
_ocr.Recognize(gray);
Tesseract.Charactor[] charactors = _ocr.GetCharactors();
foreach (Tesseract.Charactor c in charactors)
{
image.Draw(c.Region, drawColor, 1);
}
imageBox1.Image = image;
//String text = String.Concat( Array.ConvertAll(charactors, delegate(Tesseract.Charactor t) { return t.Text; }) );
String text = _ocr.GetText();
ocrTextBox.Text = text;
}
Here is the image:
_SmoothGaussian can only handle odd numbers as the kernel size, so try 3 or 5 as the argument instead.
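
A minimal sketch of the corrected call, reusing the image, gray and _ocr objects from the question; only the kernel size changes:

// Sketch: _SmoothGaussian expects an odd kernel size (3, 5, ...); 4 triggers the error.
using (Image<Gray, Byte> gray = image.Convert<Gray, Byte>()
        .Resize(800, 600, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, true))
{
    gray._SmoothGaussian(5);   // odd kernel size
    _ocr.Recognize(gray);
}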
