Snapshot bitmap testing with jest? - jestjs

Jest works great for testing component snapshots.
Is there a way to also use Jest snapshot concept to test JavaScript canvas rendering code by having it compare actual and expected bitmaps?

The regular method works:
expect(bitmap).toMatchSnapshot()
The image data is serialized to a string in the snapshot file:
// Jest Snapshot v1, ...
exports[`renders ok`] = `
ImageData {
  "data": Uint8ClampedArray [
    0,
    255,
    16,
    255,
    ...
However, the file is rather big, and you can't easily get a visual diff of expected vs. actual.

Related

TF Keras Model Serving REST API JSON Input Format

So I tried following this guide and deployed the model using the Docker TensorFlow Serving image. Let's say there are 4 features: feat1, feat2, feat3, and feat4. I tried to hit the prediction endpoint {url}/predict with this JSON body:
{
  "instances": [
    {
      "feat1": 26,
      "feat2": 16,
      "feat3": 20.2,
      "feat4": 48.8
    }
  ]
}
I got a 400 response code:
{
  "error": "Failed to process element: 0 key: feat1 of 'instances' list. Error: Invalid argument: JSON object: does not have named input: feat"
}
This is the signature passed to model.save():
signatures = {
    'serving_default':
        _get_serve_tf_examples_fn(model,
                                  tf_transform_output).get_concrete_function(
            tf.TensorSpec(
                shape=[None],
                dtype=tf.string,
                name='examples')),
}
I understand from this signature that in every instances element the only field accepted is "examples", but when I tried to pass only that field, with an empty string:
{
  "instances": [
    {
      "examples": ""
    }
  ]
}
I also got a bad request: {"error": "Name: <unknown>, Feature: feat1 (data type: int64) is required but could not be found.\n\t [[{{node ParseExample/ParseExampleV2}}]]"}
I couldn't find in the guide how to build the JSON request body the right way. It would be really helpful if anyone could point this out or give references regarding this matter.
In that example, the serving function expects a serialized tf.train.Example proto as input. This page explains how binary data can be passed to a deployed model as a string (explaining why the signature expects a tensor of strings). So what you need to do is build an Example proto containing your features and send that over. It could look something like this:
import base64
import tensorflow as tf

features = {'feat1': 26, 'feat2': 16, 'feat3': 20.2, 'feat4': 48.8}

# Create an Example proto from your feature dict.
feature_spec = {
    k: tf.train.Feature(float_list=tf.train.FloatList(value=[float(v)]))
    for k, v in features.items()
}
example = tf.train.Example(
    features=tf.train.Features(feature=feature_spec)).SerializeToString()

# Encode your serialized Example using base64 so it can be added into your
# JSON payload.
b64_example = base64.b64encode(example).decode()
result = [{'examples': {'b64': b64_example}}]
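To actually send it, the request body just wraps that list under "instances". Below is a minimal sketch, assuming the requests package; the URL, model name, and port are placeholders that depend on your deployment:
import requests

# Placeholder endpoint; TensorFlow Serving's REST predict URL normally has the form
# http://<host>:8501/v1/models/<model_name>:predict
url = 'http://localhost:8501/v1/models/my_model:predict'

# 'result' is the list of instances built above.
response = requests.post(url, json={'instances': result})
print(response.status_code)
print(response.json())  # should contain a 'predictions' field on success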
What is the output of saved_model_cli show --dir /path/to/model --all? You should follow the output to serialize your request.
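If you would rather inspect the signature from Python than via saved_model_cli, a sketch like the following works (assuming TensorFlow 2.x; the model path is a placeholder):
import tensorflow as tf

loaded = tf.saved_model.load('/path/to/model')  # placeholder path
serving_fn = loaded.signatures['serving_default']
print(serving_fn.structured_input_signature)  # expected inputs (here: a string tensor named 'examples')
print(serving_fn.structured_outputs)          # output tensor spec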
I tried to solve this problem by changing the serving input signature, but it raised another exception. This problem is already solved; check it out here.

How to make a PDF using bookdown including SVG images

I have some R markdown that includes the following code:
```{r huff51, fig.show='hold', fig.cap='Design decisions connecting research purpose and outcomes [#huff_2009_designingresearchpublication p. 86].', echo=FALSE}
knitr::include_graphics('images/Huff-2009-fig5.1.svg')
```
When using bookdown to produce HTML output everything works as expected.
When using bookdown to produce PDF output I get an error saying ! LaTeX Error: Unknown graphics extension: .svg.
This is understandable, as knitr uses LaTeX's \includegraphics{images/Huff-2009-fig5.1.svg} to include the image. So, it's not a bug per se.
Is there a better way to include the SVG image so I don't need to pre-process it into, say, a PDF or PNG?
An update to Yihui Xie's answer, as of 2022: the package you want is now rsvg, and the code looks like:
show_fig <- function(f) {
  if (knitr::is_latex_output()) {
    output = xfun::with_ext(f, 'pdf')
    rsvg::rsvg_pdf(xfun::with_ext(f, 'svg'), file = output)
  } else {
    output = xfun::with_ext(f, 'svg')
  }
  knitr::include_graphics(output)
}
Then you can add inline code to your text with
`r show_fig("image_file_name_no_extension")`
(knitr v1.39, rsvg v2.3.1)
You can create a helper function to convert SVG to PDF. For example, if you have the system package rsvg-convert installed, you may use this function to include SVG graphics:
include_svg = function(path) {
  if (knitr::is_latex_output()) {
    output = xfun::with_ext(path, 'pdf')
    # you can compare the timestamp of pdf against svg to avoid conversion if necessary
    system2('rsvg-convert', c('-f', 'pdf', '-a', '-o', shQuote(c(output, path))))
  } else {
    output = path
  }
  knitr::include_graphics(output)
}
You may also consider R packages like magick (which is based on ImageMagick) to convert SVG to PDF.
For bookdown, I really don't like having PDF files on my websites. So I use this code:
if (knitr::is_html_output()) {
  structure("images/01-02.svg", class = c("knit_image_paths", "knit_asis"))
} else {
  # do something for PDF, e.g. an actual PDF file if you have one,
  # or even use Yihui's code in the other answer
  knitr::include_graphics("images/01-02.pdf")
}
It uses the SVG file for websites (i.e., HTML output).
It works perfectly for generating everything: website, gitbook, pdfbook and epub.
To prevent adding this code to every chunk in your bookdown project, add this to index.Rmd:
insert_graphic <- function(path, ...) {
  if (knitr::is_html_output() && grepl("[.]svg$", basename(path), ignore.case = TRUE)) {
    structure(path, class = c("knit_image_paths", "knit_asis"))
  } else {
    knitr::include_graphics(path, ...)
  }
}

Unexpected node type error SequenceExpression with jest

I was adding a snapshot test to a piece of React code, and I ran into this error:
Unexpected node type: SequenceExpression (This is an error on an internal node. Probably an internal error. Location has been estimated.)
The code transpiles and works just fine, and the AST explorer doesn't warn me about anything.
Before this new test, no other test gave me any sort of similar error, and we have quite a few of them in our codebase.
I tried to reinstall jest, reinstall babel-jest, remove and reinstall all modules (using yarn --pure-lockfile), and upgrade both jest and babel-jest to the latest version (20.0.1 for both), rinsed and repeated.
Nothing worked.
This occurs only when I try to collect coverage (with --coverage), and the minimal snippet it occurs with is:
import { tint } from 'polished'
import styled from 'styled-components'
export default styled.label`
  background: ${({ x, y }) => (x ? tint(0.3, y.a) : y.b)};
`
Here's what I've found:
This is an issue with Jest code coverage not being able to understand styled-components and polished. I am using babel-plugin-polished with the following in my .babelrc:
"plugins": [ "polished" ]
But still, if you export a value and do not also use that value in an object or exported object, it will fail.
Fails:
export const charcoalBlue = rgb(104, 131, 145);
Doesn't fail:
export const charcoalBlue = rgb(104, 131, 145);
const colors = { charcoalBlue }
So my solution has been to ignore my style files, or simply ensure I'm using the values I create and not just exporting them.
One way to ignore the style files is to place this in your package.json:
"jest": {
"collectCoverageFrom": [
"src/**/*.{js,jsx}",
"!**/*.styles.js",
]
}
And name your style files {ComponentName}.styles.js
Hope this helps!
I came across the same issue!
I fixed it by working around it:
import styled, { css } from 'styled-components';
import { tint } from 'polished';

styled.label`
  ${({ x, y }) => (x
    ? css`background: ${tint(0.3, y.a)};`
    : css`background: ${y.b};`)}
`;

I need something like this, but for Phaser

I need something for adding an object in Phaser. This is something similar, but in wade:
wade.addSceneObject(new SceneObject(dotSprite, 0, dotPosition.x, dotPosition.y));
Adding objects (usually called sprites) in Phaser is super simple. Just load the image in the preload function:
function preload() {
  game.load.image('mushroom', 'assets/sprites/mushroom2.png');
}
And then add the sprite in the create function:
function create() {
  // This simply creates a sprite using the mushroom image we loaded above and positions it at 200 x 200
  var test = game.add.sprite(200, 200, 'mushroom');
}
Phaser has a ton of documentation on how to do things like this. I got the code from this example.
If you are completely new to Phaser, I highly recommend going through their tutorial.

Resize paths generated by PaintCode

I'm using PaintCode to convert a set of SVGs I have into Swift code. It looks like it just converts the SVG paths into UIBezierPaths, which is great.
To display the generated code I'm doing the following:
class FirstImageView: UIView {
    init(name: String) { // Irrelevant custom init
        super.init(frame: CGRectMake(15, 15, 40, 40)) // 40x40 View
        self.opaque = false // Transparent background
    }
    override func drawRect(rect: CGRect) {
        ImagesCollection.firstImage() // Fill the view with this image
    }
    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }
}
Where ImagesCollection.firstImage() is referencing:
public class ImagesCollection: NSObject {
    public class func firstImage() {
        let color4 = UIColor(red: 0.595, green: 0.080, blue: 0.125, alpha: 1.000)
        var fill295Path = UIBezierPath()
        fill295Path.moveToPoint(CGPointMake(27.5, 2.73))
        // Rest of generated graphics code
    }
}
Which works great: I generated the graphics at 40x40, set the frame to 40x40, and that works fine. What I'm wondering now is how I can display that same graphic at a smaller (or larger) size. Since they're Bezier paths, they should scale fine, right? Setting my view's frame to CGRectMake(15, 15, 20, 20) (for a desired 20x20 image) just seems to clip the graphic.
How can I ensure that whatever graphic is drawn in to my View is sized to the view's frame?
Thanks
You can use a CGAffineTransform to scale either your custom view or the bezier path itself.
Scale the view:
myFirstImageView.transform = CGAffineTransformMakeScale(0.5, 0.5)
Scale the bezier path:
// ...
// lots of generated graphics code
fill295Path.applyTransform(CGAffineTransformMakeScale(0.5, 0.5))
fill295Path.stroke() // or fill, etc.
Since you generated the code with PaintCode, you could draw their Frame object around your SVG/Bezier. It would generate the Bezier with dynamic size. In your example, you would get ImagesCollection.firstImage(#frame: CGRect) instead of ImagesCollection.firstImage(). You just pass the rectangle from drawRect to it and you are done.
Slightly off-topic, but per Nate's request here's how I did it with SVGKit:
First, you'll want to incorporate SVGKit into your project by following these instructions.
With the markup in your SVG file, parse the SVG data using the SVGKParser:
let svgData: String = "<svg>...</svg>"
let svgParse: SVGKParser = SVGKParser(source: SVGKSource(inputSteam: NSInputStream(data: svgData.dataUsingEncoding(NSUTF8StringEncoding, allowLossyConversion: false)!)))
svgParse.addDefaultSVGParserExtensions()
From there you'll load it into an image, using the SVGKImage type:
var svgImage: SVGKImage = SVGKImage(parsedSVG: svgResult, fromSource: nil)
And lastly, you can use it as the image in an SVGKFastImageView:
SVGKFastImageView(SVGKImage: svgImage)
