Why are my drawings aliased when I use GDI, even though I don't scale them? If I don't scale, I would expect no aliasing. A circle drawn with SVG, by contrast, is not aliased.
I guess by "sawtooth" you mean aliasing. GDI is about 30 years old. Since antialiasing requires quite a lot of computation power, support for it was never added. It is technically possible to draw smooth images using GDI and some additional code; however, it is better to use a newer API that supports antialiasing out of the box, such as Direct2D or at least GDI+.
Also, SVG is just an XML-based file format. You don't "draw" anything with SVG; you describe an image with SVG and it then gets rendered by some rendering engine, such as Cairo. If you render SVG using plain GDI you'll still get an aliased image.
I have to draw charts in the browser using a Python backend (which may not matter here). There are numerous libraries, such as JQPlot, D3 and Google Charts, for achieving this.
If you classify them, they are either HTML5 Canvas-based or SVG-based. Both are important technologies in their own space. But for charting specifically, should I go with SVG-based libraries or HTML5 Canvas-based libraries? What are the downsides and benefits of both approaches?
I don't have any prior experience with charting and don't want to hit a wall after I start the project.
Projects with a large amount of data may favor canvas. SVG approaches typically create a DOM node per point (unless you make paths), which can lead to:
An explosion in the size of your DOM tree
Performance problems
Using a path, you can get around this problem, but then you lose interactivity.
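To make that node-per-point trade-off concrete, here is a minimal sketch in plain DOM TypeScript (the <svg> element, sample data and handlers are hypothetical), showing one circle per point versus a single path:

```typescript
const SVG_NS = "http://www.w3.org/2000/svg";
const svg = document.querySelector("svg")!;
const points: Array<[number, number]> = [[0, 10], [10, 14], [20, 9], [30, 17]];

// One DOM node per point: easy to attach handlers to, but the node count
// grows linearly with the data.
points.forEach(([x, y]) => {
  const c = document.createElementNS(SVG_NS, "circle");
  c.setAttribute("cx", String(x));
  c.setAttribute("cy", String(y));
  c.setAttribute("r", "2");
  c.addEventListener("click", () => console.log(`point ${x},${y}`)); // per-point interactivity
  svg.appendChild(c);
});

// Single path: one node regardless of the number of points, but there is no
// per-point element left to attach events to.
const d = points.map(([x, y], i) => `${i === 0 ? "M" : "L"}${x},${y}`).join(" ");
const path = document.createElementNS(SVG_NS, "path");
path.setAttribute("d", d);
path.setAttribute("fill", "none");
path.setAttribute("stroke", "steelblue");
svg.appendChild(path);
```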
Say you're building a stock chart. If you are talking about a chart with, say, a maximum of 5 years of end-of-trading-day samples only, I think the answer is clearly SVG. If you're talking about looking at Walmart's historical data from day one of trading, or full trade information per minute, you are going to have to look very carefully at SVG. You will probably have to employ fancy memory management and a fetch-on-demand approach, as SVG will fall apart, particularly if you map one sample to one SVG node.
If interactivity is a requirement, SVG easily has the edge, given:
It is a retained mode API
You can use typical event handlers
You can add/remove nodes easily, etc.
Of course, you see that if you require full interactivity, it may go against mechanisms that allow SVG to scale, like path collapsing, so there is an inherent tension here.
There is going to be a trade-off at the extremes. If the data set is small, the answer is SVG, hands down. If the data set is large and there is no interactivity, the answer is SVG with path drawing only, or canvas. If the data set is large and interactivity is required, you have to go with canvas or tricky SVG, which is complex in either case.
Some libraries out there offer both canvas and SVG renderers, such as ZingChart and Dojo. Others tend to stick with just one of the two options.
Being vector-based, SVG gets you scalability for free, and a side effect of this is that it's sharp on high-resolution displays and sharp when printed. You can partly get around this with canvas by rendering at 2x resolution and scaling the canvas back down, but it's a half-solution.
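A minimal sketch of that canvas workaround, assuming a hypothetical canvas element with id "chart": render the backing store at devicePixelRatio times the CSS size and scale the drawing context back down.

```typescript
// Oversample the canvas backing store for high-DPI displays.
function setupHiDpiCanvas(canvas: HTMLCanvasElement, cssWidth: number, cssHeight: number): CanvasRenderingContext2D {
  const ratio = window.devicePixelRatio || 1;

  // Backing store is larger than the CSS size...
  canvas.width = cssWidth * ratio;
  canvas.height = cssHeight * ratio;

  // ...but the element is displayed at the intended size.
  canvas.style.width = `${cssWidth}px`;
  canvas.style.height = `${cssHeight}px`;

  const ctx = canvas.getContext("2d")!;
  // Scale the context so existing drawing code can keep using CSS pixels.
  ctx.scale(ratio, ratio);
  return ctx;
}

const chartCanvas = document.getElementById("chart") as HTMLCanvasElement;
const ctx = setupHiDpiCanvas(chartCanvas, 400, 300);
ctx.strokeRect(10, 10, 100, 50); // drawn crisply on a 2x display
```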
I think SVG is the modern way and the way to do this moving forward.
If you are concerned about rendering speed with many nodes, consider that with canvas you're basically using your own JavaScript-based rendering code, which has to render those same points. You do get the predictability of only having to render once, but if you only render once you also lose the ability to re-render when zooming or to do various interactive things. If performance is a problem, you can simplify the SVG by sub-sampling your data, taking moving averages and plotting only one point per x rows, etc., depending on what you're doing. Even so, we're talking thousands and thousands of nodes with almost no impact.
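As a rough illustration of the sub-sampling idea (the bucket size and data here are placeholders), you can average the raw series before creating any SVG nodes:

```typescript
// Collapse each bucket of `bucketSize` samples into its average value.
function downsampleByAverage(data: number[], bucketSize: number): number[] {
  const result: number[] = [];
  for (let i = 0; i < data.length; i += bucketSize) {
    const bucket = data.slice(i, i + bucketSize);
    const avg = bucket.reduce((sum, v) => sum + v, 0) / bucket.length;
    result.push(avg);
  }
  return result;
}

// e.g. 100,000 raw samples -> 1,000 plotted points
const rawSamples = Array.from({ length: 100_000 }, () => Math.random() * 100);
const plotted = downsampleByAverage(rawSamples, 100);
```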
Canvas still has a place if you are building a web-based raster graphics editor or something else that is inherently raster-based, but essentially, if we are looking at charts, we're talking about something that's inherently vector-based.
I draw the graphics for my program in Corel Draw (X6), then export them as SVG files, and my program uses these SVG files.
Let's say I draw an "arrow" in Corel Draw. It consists of a tip and a line. I need to show this arrow in my program, but the "tip" part must not be scalable, while the "line" should be scalable.
The simplest solution that works is to split the arrow into two parts and convert the "tip" part to a bitmap when the program starts, but that takes too much time for complex pictures.
So I wonder: is it possible in the SVG format to say that this part should not be scaled and that part should? And how can this be exported from Corel Draw? I found something suitable in Corel Draw for controlling the scale of different parts of the picture, but during export to SVG all my definitions were lost.
Unfortunately, there is no concept of a non-scaling element. At my last job, I worked with the SVG working group to try to get this feature introduced (non-scaling elements are really useful in engineering drawings), and it is on the roadmap for SVG 2.
The issue is SVG-ISSUE-2400.
The way to do this for now is to implement a zoom event handler that dynamically rescales the non-scaling elements whenever the zoom level changes.
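A minimal sketch of that workaround in plain DOM TypeScript, assuming the non-scaling elements are tagged with a hypothetical class "non-scaling" and carry their anchor point in data attributes; the handler applies an inverse scale so they keep a constant on-screen size:

```typescript
// Re-apply an inverse scale to every marked element whenever the zoom changes.
function rescaleNonScalingElements(svgRoot: SVGSVGElement, zoom: number): void {
  const elements = svgRoot.querySelectorAll<SVGGraphicsElement>(".non-scaling");
  elements.forEach(el => {
    // Counteract the current zoom around the element's own anchor point.
    const x = Number(el.getAttribute("data-x") ?? 0);
    const y = Number(el.getAttribute("data-y") ?? 0);
    el.setAttribute(
      "transform",
      `translate(${x} ${y}) scale(${1 / zoom}) translate(${-x} ${-y})`
    );
  });
}

// Call this from whatever updates the zoom level, for example a wheel handler:
// svg.addEventListener("wheel", e => { zoom *= e.deltaY < 0 ? 1.1 : 0.9; rescaleNonScalingElements(svg, zoom); });
```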
Are SVG graphics a viable option for an in-browser game, with a google-maps style interface? This would involve zooming in/out, and scrolling in two dimensions over a very large distance.
For example, the client might request some area to be drawn in from the server -- and rather than the server returning a generated image for that section, it would return a series of gzipped SVG images and their locations in the requested area. Then the user could zoom in and out without grabbing new "tiles" from the server, since SVGs are scalable.
Would this be better than generating pngs or jpegs and sending back tiles? Would it perform well if there were many clients requesting images all over the place? Would it perform well on the client? What are the downsides to this approach?
In my experience, the downside is that the achievable level of detail with SVG is lower than with raster formats like JPEG and PNG. I had difficulties getting all my vector graphics to play nicely with each other. If your artists are comfortable working in SVG, this may not be an issue. Another note is that SVG compatibility may vary between browsers. For instance, I'm not sure which browsers support SVG: WebKit does, and I think Firefox mostly does, but I'm fairly sure IE is out of the picture, so to speak.
Overall, SVG will put higher demands on client machines and lower demands on your servers. Rendering hundreds of SVG images is a lot more work than arranging PNGs.
It really depends on your game. If you are writing chess, it would probably work fine. If you want to do something more complex in real time (e.g. a 2D side-scrolling game), I have no clue.
Using this SVG clock in Raphaël as an example: I am running Chrome on Windows, and periodically different bars "twitch" and "reset" for a second.
Edit
I just saw this first-person SVG demo, so it can be done.
I read while Googling that SVG was "dead". Although I disagree, could anyone tell me about other or future vector-based formats for representing 2D/3D graphics? What about VML? What format should I use to represent 2D and 3D graphics on the web?
I am playing around with graphics on the web, and I would like to know if I'm working with an obsolete technology.
Microsoft is supporting SVG in IE9, and gave a detailed explanation of why they were doing it on the IE blog:
http://blogs.msdn.com/b/ie/archive/2010/03/18/svg-in-ie9-roadmap.aspx
SVG's main advantage is that it becomes part of the DOM, so you can use CSS to style it and JavaScript to modify it. Canvas, by contrast, must redraw every frame completely. This makes canvas suited to spectrum analyzers, video processing, fast-paced games, and other non-gradual animations. SVG is better suited to gradual animations.
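As a rough illustration of that retained-mode versus immediate-mode difference (the element ids here are hypothetical):

```typescript
// SVG: the circle is a retained DOM node, so an update is just an attribute change.
const circle = document.querySelector<SVGCircleElement>("#chart-svg circle")!;
circle.setAttribute("fill", "tomato");  // restyle
circle.setAttribute("cx", "120");       // move

// Canvas: there is no node to update; the whole frame is redrawn.
const canvas = document.getElementById("chart-canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
function drawFrame(x: number): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // erase everything
  ctx.beginPath();
  ctx.arc(x, 60, 20, 0, Math.PI * 2);
  ctx.fillStyle = "tomato";
  ctx.fill();
}
drawFrame(120);
```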
As far as 3D is concerned, the future is WebGL, a thin shim over OpenGL ES, but it's far off. Microsoft has not committed to supporting it, and that means it's not going to be in IE9. Maybe IE10, maybe not.
If you do use SVG, I recommend using svgweb to abstract away the browser differences (falls back to a flash applet on outdated browsers).
This post is rather late... but I think it is worth re-addressing, since your question has popped up again with all the html5 talk.
SVG is a vector drawing format that also supports animation, timing, and Javascript DOM support. In other words, it is a standalone format for static and dynamic vector graphics; you might say it is a web-focused (or screen-focused) alternative to EPS/PDF. The html5 canvas tag is not a format but a way to draw (static images) to the screen with Javascript — that is all; there is no competition between it and SVG, as they have entirely different purposes.
Most other vector "formats" involve plugins (Flash) or hardware support (webGL). Ironically, the VML format you mentioned is now deprecated in favor of SVG.
To answer your question: SVG is now the standard vector format for the web. Hopefully, in the near future, we will see it being used for video/animation as well.
You can try the Raphaël JavaScript Library.
It is easy to work with and provides the same UI features as SVG (and more!).
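For a feel of the syntax, here is a minimal sketch along the lines of Raphaël's basic usage (the container id and sizes are placeholders, and the global is declared manually since no type definitions are assumed):

```typescript
// Raphaël exposes a global factory function.
declare const Raphael: (container: string | HTMLElement, width: number, height: number) => any;

// Create a drawing surface inside an existing element with id "canvas-holder".
const paper = Raphael("canvas-holder", 320, 200);

// Draw a circle at (160, 100) with radius 50 and style it.
const circle = paper.circle(160, 100, 50);
circle.attr({ fill: "#f00", stroke: "#000", "stroke-width": 2 });
```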
If it is SVG you are after, the best way to go is svg.js. It has better SVG support and is a fraction of the size (4.5k gzipped) of Raphaël (31k gzipped). It also has a very intuitive syntax.
All major browsers, including IE9, Firefox, Safari and Chrome, are starting to support SVG as part of the upcoming HTML5 standard. I wouldn't call that "dead".
2D: SVG
3D: X3DOM or webGL directly
Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a text-over-video feature. The text itself is sent over the network and is rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw those items. Because our application must support many languages, we ought to use something standard. That's why I've been working with the ID3DXFont interface, but I've run into some unsatisfying limitations.
What I've faced is a lack of scalability. E.g. if the user resizes the video window, I have to RE-create the D3DXFont with a new D3DXFONT_DESC while they are doing so. I think that is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So, I'm not sure whether I'm heading in the right direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering the text onto a texture. The font will be filtered when you scale it up too much; some people will think it looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the glyph outlines from a font and build text geometry from them. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). The text will be easily scalable. With this approach you are very likely to get visible artifacts ("noise") for small text, so I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This creates a 3D text mesh for ONE string. It won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font rendering routines (so that their behavior looks nicer to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll still need ID3DXFont: you use ID3DXFont to render the text onto a texture, and then use the texture to blit the text onto the screen.
Because you said that performance is critical, you can delay creating the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing the video, you use the old font, but upscale it via the texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time. You can probably do that in a separate thread, but I'm not sure about it. Or you could simply always render the text at the same resolution as the video. That way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
A few more things about ID3DXFont. There is one problem with it: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). The last time I worked with it I optimized things by caching commonly used strings in textures, i.e. any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText creates meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.