Resize paths generated by PaintCode - svg

I'm using PaintCode to convert a set of SVGs I have into Swift code. It looks like it just converts the SVG paths into UIBezierPaths, which is great.
To display the generated code I'm doing the following:
class FirstImageView: UIView {
    init(name: String) { // Irrelevant custom init
        super.init(frame: CGRectMake(15, 15, 40, 40)) // 40x40 view
        self.opaque = false // Transparent background
    }

    override func drawRect(rect: CGRect) {
        ImagesCollection.firstImage() // Fill the view with this image
    }

    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }
}
Where ImagesCollection.firstImage() is referencing:
public class ImagesCollection: NSObject {
    public class func firstImage() {
        let color4 = UIColor(red: 0.595, green: 0.080, blue: 0.125, alpha: 1.000)
        var fill295Path = UIBezierPath()
        fill295Path.moveToPoint(CGPointMake(27.5, 2.73))
        // Rest of generated graphics code
    }
}
Which works great - I generated the graphics at 40x40, set the frame to 40x40, and that works fine. What I'm wondering now is how I can display that same graphic at a smaller (or larger) size - since they're Bezier paths, they should scale fine, right? Setting my view's frame to CGRectMake(15, 15, 20, 20) (for a desired 20x20 image) just seems to clip the graphic.
How can I ensure that whatever graphic is drawn into my view is sized to the view's frame?
Thanks

You can use a CGAffineTransform to scale either your custom view or the bezier path itself.
Scale the view:
myFirstImageView.transform = CGAffineTransformMakeScale(0.5, 0.5)
Scale the bezier path:
// ...
// lots of generated graphics code
fill295Path.applyTransform(CGAffineTransformMakeScale(0.5, 0.5))
fill295Path.stroke() // or fill, etc.
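If you want the drawing to follow whatever size the view currently has, you can derive the scale inside drawRect. A minimal sketch (using the pre-Swift-2 syntax from the question, and assuming the artwork was designed at 40x40 as described there):
override func drawRect(rect: CGRect) {
    // 40 is the size the artwork was generated at; scale the path to the current rect
    let scale = CGAffineTransformMakeScale(rect.width / 40, rect.height / 40)

    let color4 = UIColor(red: 0.595, green: 0.080, blue: 0.125, alpha: 1.000)
    var fill295Path = UIBezierPath()
    fill295Path.moveToPoint(CGPointMake(27.5, 2.73))
    // ... rest of the generated path commands ...

    fill295Path.applyTransform(scale)
    color4.setFill()
    fill295Path.fill()
}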

Since you generated the code with PaintCode, you could draw their Frame object around your SVG/Bezier. It would then generate the Bezier code with a dynamic size. In your example, you would get ImagesCollection.firstImage(#frame: CGRect) instead of ImagesCollection.firstImage(). You just pass the rectangle from drawRect to it and you are done.
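A rough sketch of what the call site looks like once the frame-taking method has been generated (the body of firstImage is whatever PaintCode produces; only the call is shown here):
override func drawRect(rect: CGRect) {
    // The regenerated method scales its paths to whatever rectangle it is given
    ImagesCollection.firstImage(frame: rect)
}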

Slightly off-topic, but per Nate's request here's how I did it with SVGKit:
First, you'll want to incorporate SVGKit into your project by following these instructions.
With the markup in your SVG file, parse the SVG data using the SVGKParser:
let svgData: String = "<svg>...</svg>"
let svgParse: SVGKParser = SVGKParser(source: SVGKSource(inputSteam: NSInputStream(data: svgData.dataUsingEncoding(NSUTF8StringEncoding, allowLossyConversion: false)!)))
svgParse.addDefaultSVGParserExtensions()
let svgResult = svgParse.parseSynchronously() // produces the SVGKParseResult used below
From there you'll load it into an image, using the SVGKImage type:
var svgImage: SVGKImage = SVGKImage(parsedSVG: svgResult, fromSource: nil)
And lastly, you can use it as the image in an SVGKFastImageView:
SVGKFastImageView(SVGKImage: svgImage)
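A possible usage sketch from there (SVGKFastImageView scales the SVG to whatever frame you give it; someParentView is a placeholder for whichever view should host the image):
let imageView = SVGKFastImageView(SVGKImage: svgImage)
imageView.frame = CGRectMake(15, 15, 20, 20) // drawn at 20x20 instead of 40x40
someParentView.addSubview(imageView)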

Related

How to create Mapbox Marker Onclick Eventlistener Android Studio?

Hello Stackoverflow community,
I am trying to develop an Android application with Mapbox.
I followed this guide to create markers on the map.
https://docs.mapbox.com/android/maps/examples/default-point-annotation/
Thus my code is the following:
public fun createMarker(id: String, lon: Double, lat: Double) {
    // Create an instance of the Annotation API and get the PointAnnotationManager.
    var marker: PointAnnotation? = bitmapFromDrawableRes(
        drawercontext,
        R.drawable.red_marker
    )?.let {
        val annotationApi = binding.mapBoxView.mapView?.annotations
        val pointAnnotationManager =
            annotationApi?.createPointAnnotationManager(binding.mapBoxView.mapView!!)
        // Set options for the resulting symbol layer.
        val pointAnnotationOptions: PointAnnotationOptions = PointAnnotationOptions()
            // Define a geographic coordinate.
            .withPoint(Point.fromLngLat(lon, lat))
            // Specify the bitmap you assigned to the point annotation.
            // The bitmap will be added to the map style automatically.
            .withIconImage(it)
        // Add the resulting pointAnnotation to the map.
        pointAnnotationManager?.create(pointAnnotationOptions)
    }
}
Unfortunately, I cannot find any solution for adding a click listener to markers (to show extra information outside of the map). In my opinion, this should be an important event, so I don't get why there is so little support. I want to replicate something like this:
https://bl.ocks.org/chriswhong/8977c0d4e869e9eaf06b4e9fda80f3ab
But in Android Studio with Kotlin.
One workaround I have seen is to add a click listener to the map and from there determine the marker with the closest coordinates, but I think that would not be as nice of a solution. Do you know any solutions or workarounds to my problem?
Thanks for the help in advance!
try this:
pointAnnotationManager.apply {
    addClickListener(
        OnPointAnnotationClickListener {
            Toast.makeText(this@MainActivity, "id: ${it.id}", Toast.LENGTH_LONG).show()
            false
        }
    )
}
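If you need to show marker-specific information outside of the map, one simple approach (a sketch built only on the calls already used in this thread; markerInfo and showMarkerDetails are hypothetical names from your own code) is to remember which annotation id belongs to which marker when you create it, and resolve the clicked annotation in the listener:
// Keep a lookup from annotation id to your own data when creating markers
private val markerInfo = mutableMapOf<Long, String>()

val annotation = pointAnnotationManager?.create(pointAnnotationOptions)
annotation?.let { created -> markerInfo[created.id] = id }

// In the click listener, look up the clicked annotation by its id
pointAnnotationManager?.addClickListener(
    OnPointAnnotationClickListener { clicked ->
        val info = markerInfo[clicked.id]
        showMarkerDetails(info) // hypothetical: update views outside the map
        true // consume the click
    }
)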

How to make a PDF using bookdown including SVG images

I have some R markdown that includes the following code:
```{r huff51, fig.show='hold', fig.cap='Design decisions connecting research purpose and outcomes [@huff_2009_designingresearchpublication p. 86].', echo=FALSE}
knitr::include_graphics('images/Huff-2009-fig5.1.svg')
```
When using bookdown to produce HTML output everything works as expected.
When using bookdown to produce PDF output I get an error saying ! LaTeX Error: Unknown graphics extension: .svg.
This is understandable, as knitr uses LaTeX's \includegraphics{images/Huff-2009-fig5.1.svg} to include the image. So it's not a bug per se.
Is there a better way to include the SVG image so I don't need to pre-process it into, say, a PDF or PNG?
An update to Yihui Xie's answer, as of 2022: the package you want is now rsvg, and the code looks like:
show_fig <- function(f) {
  if (knitr::is_latex_output()) {
    output = xfun::with_ext(f, 'pdf')
    rsvg::rsvg_pdf(xfun::with_ext(f, 'svg'), file = output)
  } else {
    output = xfun::with_ext(f, 'svg')
  }
  knitr::include_graphics(output)
}
Then you can add inline code to your text with
`r show_fig("image_file_name_no_extension")`
(knitr v1.39, rsvg v2.3.1)
You can create a helper function to convert SVG to PDF. For example, if you have the system package rsvg-convert installed, you may use this function to include SVG graphics:
include_svg = function(path) {
  if (knitr::is_latex_output()) {
    output = xfun::with_ext(path, 'pdf')
    # you can compare the timestamp of pdf against svg to avoid conversion if necessary
    system2('rsvg-convert', c('-f', 'pdf', '-a', '-o', shQuote(c(output, path))))
  } else {
    output = path
  }
  knitr::include_graphics(output)
}
You may also consider R packages like magick (which is based on ImageMagick) to convert SVG to PDF.
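For the magick route, a minimal sketch (assuming the magick package is installed with SVG support; the file names are just examples):
library(magick)
img <- image_read_svg("images/figure.svg")
image_write(img, path = "images/figure.pdf", format = "pdf")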
For bookdown, I really don't like having PDF files on my websites. So I use this code:
if (knitr::is_html_output()) {
  structure("images/01-02.svg", class = c("knit_image_paths", "knit_asis"))
} else {
  # do something for PDF, e.g. an actual PDF file if you have one,
  # or even use Yihui's code in the other answer
  knitr::include_graphics("images/01-02.pdf")
}
It uses the SVG file for websites (i.e., HTML output).
It works perfectly for generating everything: website, gitbook, pdfbook and epub.
To avoid adding this code to every chunk in your bookdown project, add this helper to index.Rmd:
insert_graphic <- function(path, ...) {
  if (knitr::is_html_output() && grepl("[.]svg$", basename(path), ignore.case = TRUE)) {
    structure(path, class = c("knit_image_paths", "knit_asis"))
  } else {
    knitr::include_graphics(path, ...)
  }
}
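Then, in any chunk or inline R call (just like show_fig above), use `r insert_graphic("images/01-02.svg")` instead of calling knitr::include_graphics directly.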

I need something like this, but for Phaser

I need something for adding an object in Phaser; this is something similar, but in WADE:
wade.addSceneObject(new SceneObject(dotSprite, 0, dotPosition.x, dotPosition.y));
Adding objects (usually called sprites) in Phaser is super simple. Just load the image in the preload function:
function preload() {
    game.load.image('mushroom', 'assets/sprites/mushroom2.png');
}
And then add the sprite in the create function
function create() {
    // This simply creates a sprite using the mushroom image we loaded above and positions it at 200 x 200
    var test = game.add.sprite(200, 200, 'mushroom');
}
Phaser has a ton of documentation on how to do things like this. I got the code from this example.
If you are completely new to Phaser, I highly recommend going through their tutorial.

Include existing OpenCV Application into Qt GUI

I want to include an existing OpenCV application in a GUI created with Qt. I've found some similar questions on Stack Overflow:
QT How to embed an application into QT widget
Run another executable in my Qt app
The problem is that I don't want to simply launch the OpenCV application, as I could with QProcess.
The OpenCV application has a "MouseListener", so if I click on the window, it should still call the function of the OpenCV app. Furthermore, I would like to display the detected coordinates in labels of the Qt GUI, so there has to be some kind of interaction.
I've read about the createWindowContainer function (http://blog.qt.io/blog/2013/02/19/introducing-qwidgetcreatewindowcontainer/), but since I am not very familiar with Qt, I'm not sure if this is the right choice or how to use it.
I am using Linux Mint 17.2, opencv version 3.1.0 and Qt version 4.8.6
Thank you for your inputs.
I haven't actually solved the problem the way I wanted to at the beginning, but now it's working. If someone has the same problem, maybe my solution can provide some ideas. If you want to display a video in Qt, or if you have problems with the OpenCV libraries, maybe I can help.
Following are a few code snippets. They are not commented very much, but I hope the concept is clear:
First I have a MainWindow with a label that I promoted to the type of my CustomLabel. The CustomLabel is my container to display the video and react to my mouse inputs.
CustomLabel::CustomLabel(QWidget* parent) : QLabel(parent), currentImage(NULL),
    tickrate_ms(33), vid_fps(0), video_width(0), video_height(0), myTimer(NULL), cap(NULL)
{
    // init variables
    showPoints = true;
    calculatedCenter = cv::Point(0,0);
    oldCenter = cv::Point(0,0);
    currentState = STATE_NO_STREAM;
    NOF_corners = 30; // default init value
    termcrit = cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01);
    // enable mouse tracking
    this->setMouseTracking(true);
    // connect signals with slots
    QObject::connect(getMainWindow(), SIGNAL(sendFileOpen()), this, SLOT(onOpenClick()));
    QObject::connect(getMainWindow(), SIGNAL(sendWebcamOpen()), this, SLOT(onWebcamBtnOpen()));
    QObject::connect(getMainWindow(), SIGNAL(closeVideoStreamSignal()), this, SLOT(onCloseVideoStream()));
}
You have to override the paintEvent method:
void CustomLabel::paintEvent(QPaintEvent *e){
    QPainter painter(this);
    // When no image is loaded, paint the window black
    if (!currentImage){
        painter.fillRect(QRectF(QPoint(0, 0), QSize(width(), height())), Qt::black);
        QWidget::paintEvent(e);
        return;
    }
    // Draw a frame from the video
    drawVideoFrame(painter);
    QWidget::paintEvent(e);
}
The method that is called from paintEvent:
void CustomLabel::drawVideoFrame(QPainter &painter){
    painter.drawImage(QRectF(QPoint(0, 0), QSize(width(), height())), *currentImage,
                      QRectF(QPoint(0, 0), currentImage->size()));
}
And on every tick of my timer I call onTick()
void CustomLabel::onTick() {
    /* This method is called every couple of milliseconds.
     * It reads from the OpenCV capture interface and saves a frame as QImage.
     * The state machine is implemented here; every tick is handled.
     */
    if(cap->isOpened()){
        switch(currentState) {
        case STATE_IDLE:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_IDLE";
            }
            break;
        case STATE_DRAWING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_DRAWING";
            }
            currentFrame.copyTo(currentCopy);
            cv::circle(currentCopy, cv::Point(focusPt.x*xScale, focusPt.y*yScale),
                       sqrt((focusPt.x - currentMousePos.x())*(focusPt.x - currentMousePos.x())*xScale*xScale+(focusPt.y - currentMousePos.y())*
                       (focusPt.y - currentMousePos.y())*yScale*yScale), cv::Scalar(0, 0, 255), 2, 8, 0);
            //qDebug() << "focus pt x " << focusPt.x << "y " << focusPt.y;
            break;
        case STATE_TRACKING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_TRACKING";
            }
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2GRAY, 0);
            if(initGrayFrame){
                currentGrayFrame.copyTo(previousGrayFrame);
                initGrayFrame = false;
                return;
            }
            cv::calcOpticalFlowPyrLK(previousGrayFrame, currentFrame, previousPts, currentPts, featuresFound, err, cv::Size(21, 21),
                                     3, termcrit, 0, 1e-4);
            AcquireNewPoints();
            currentCopy = CalculateCenter(currentFrame, currentPts);
            if(showPoints){
                DrawPoints(currentCopy, currentPts);
            }
            break;
        case STATE_LOST_POLE:
            currentState = STATE_IDLE;
            initGrayFrame = true;
            cv::cvtColor(currentFrame, currentFrame, CV_GRAY2BGR);
            break;
        default:
            break;
        }
        // if not tracking, draw currentFrame
        // OpenCV uses BGR order, convert it to RGB
        if(currentState == STATE_IDLE) {
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentFrame.data, currentImage->width() * currentImage->height() * currentFrame.channels());
        } else {
            cv::cvtColor(currentCopy, currentCopy, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentCopy.data, currentImage->width() * currentImage->height() * currentCopy.channels());
            previousGrayFrame = currentFrame;
            previousPts = currentPts;
        }
    }
    // Trigger paint event to redraw the window
    update();
}
Don't mind the yScale and xScale factors; they are just for the OpenCV drawing functions, because the CustomLabel's size is not the same as the video resolution.
OpenCV is used just for image processing. If you know how to convert cv::Mat to any other required format, you can combine OpenCV with any GUI development kit. For Qt, you can convert cv::Mat to QImage and then use it anywhere in the Qt SDK. This example shows OpenCV and Qt integration, including threading and webcam access. The webcam is accessed using OpenCV, and the cv::Mat received is converted to a QImage and rendered onto a QLabel. https://github.com/nickdademo/qt-opencv-multithreaded
The code contains a MatToQImage() function which shows the conversion from cv::Mat to QImage. The integration is pretty simple, as everything is in C++.
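For reference, here is a minimal sketch of such a conversion for a 3-channel BGR cv::Mat (an illustrative version, not the exact MatToQImage() from the linked repository):
#include <opencv2/imgproc/imgproc.hpp>
#include <QImage>

// Convert a BGR cv::Mat to a QImage. The final .copy() detaches the QImage
// from the temporary Mat buffer so it stays valid after rgb goes out of scope.
QImage matToQImage(const cv::Mat &mat)
{
    cv::Mat rgb;
    cv::cvtColor(mat, rgb, cv::COLOR_BGR2RGB); // OpenCV stores images as BGR
    return QImage(rgb.data, rgb.cols, rgb.rows,
                  static_cast<int>(rgb.step),
                  QImage::Format_RGB888).copy();
}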

Can I use HaxeUI with HaxeFlixel?

I tried to use both HaxeUI and HaxeFlixel, but what I obtain is HaxeUI's interface over a white background, covering everything underneath. Moreover, even if it were possible to somehow make HaxeUI and HaxeFlixel work together, it's not clear how to change the HaxeUI interface when the state changes in HaxeFlixel. Here is the code I used:
private function setupGame():Void {
    Toolkit.theme = new GradientTheme();
    Toolkit.init();

    var stageWidth:Int = Lib.current.stage.stageWidth;
    var stageHeight:Int = Lib.current.stage.stageHeight;

    if (zoom == -1) {
        var ratioX:Float = stageWidth / gameWidth;
        var ratioY:Float = stageHeight / gameHeight;
        zoom = Math.min(ratioX, ratioY);
        gameWidth = Math.ceil(stageWidth / zoom);
        gameHeight = Math.ceil(stageHeight / zoom);
    }
    trace('stage: ${stageWidth}x${stageHeight}, game: ${gameWidth}x${gameHeight}, zoom=$zoom');

    addChild(new FlxGame(gameWidth, gameHeight, initialState, zoom, framerate, framerate, skipSplash, startFullscreen));

    Toolkit.openFullscreen(function(root:Root) {
        var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-resource.xml");
        root.addChild(view);
    });
}
I can guess that, probably, both HaxeUI and HaxeFlixel have their own main loop and that their event handling might not be compatible, but just in case, does someone have a more definitive answer?
Edit:
Actually, it's much better when using openPopup:
Toolkit.openPopup({ x:20, y:150, width:100, height:100 }, function(root:Root) {
    var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-naming.xml");
    root.addChild(view);
});
It's possible to interact with the rest of the screen (managed with HaxeFlixel), but the mouse pointer drawn in the HaxeFlixel-managed part of the screen remains under the HaxeUI user interface elements.
When using Flixel and HaxeUI together, it's almost like running two applications at once. However, they both rely on OpenFL as a back-end and each attach themselves to its display tree.
One technique I'm experimenting with right now is to open a Flixel sub state, and within the sub state, call Toolkit.openFullscreen(). From inside of this, you can set the alpha of the root's background to 0, which allows you to see through it onto the underlying bitmap that Flixel uses to render.
Here is a minimal example of how you might "embed" an editor interface inside a Flixel sub state:
import haxe.ui.toolkit.core.Toolkit;
import haxe.ui.toolkit.core.RootManager;
import haxe.ui.toolkit.themes.DefaultTheme;
import flixel.FlxG;
import flixel.FlxSubState;
// This would typically be a Haxe UI XMLController
import app.MainEditor;

class HaxeUIState extends FlxSubState
{
    override public function create()
    {
        super.create();

        // Flixel uses a sprite-based cursor by default,
        // so you need to enable the system cursor to be
        // able to see what you're clicking.
        FlxG.mouse.useSystemCursor = true;

        Toolkit.theme = new DefaultTheme();
        Toolkit.init();
        Toolkit.openFullscreen(function (root) {
            var editor = new MainEditor();
            // Allows you to see what's going on in the sub state
            root.style.backgroundAlpha = 0;
            root.addChild(editor.view);
        });
    }

    override public function destroy()
    {
        super.destroy();

        // Switch back to Flixel's sprite cursor
        FlxG.mouse.useSystemCursor = false;

        // Not sure if this is the "correct" way to close the UI,
        // but it works for my purposes. Alternatively you could
        // try opening the editor in advance, but hiding it
        // until the sub-state opens.
        RootManager.instance.destroyAllRoots();
    }

    // As far as I can tell, the update function continues to get
    // called even while Haxe UI is open.
    override public function update() {
        super.update();

        if (FlxG.keys.justPressed.ESCAPE) {
            // This will implicitly trigger destroy().
            close();
        }
    }
}
In this way, you can associate different Flixel states with different Haxe UI controllers. (NOTE: They don't strictly have to be sub-states, that's just what worked best in my case.)
When you open a fullscreen or popup with HaxeUI, the program flow will be blocked (your update() and draw() functions won't be called). You should probably have a look at flixel-ui instead.
From my experience, HaxeFlixel and HaxeUI work well together, but they are totally independent projects, and as such, any coordination between Flixel states and the displayed UI must be added by the coder.
I don't recall having the white-background problem you mention; it shouldn't happen unless the HaxeUI root sprite has a solid background, in which case it should be reported to the HaxeUI project maintainer.
