I have many different versions of the same photo in JPEG files. I need a command-line tool or library running on Linux which selects the highest quality version from the available versions. Taking a look (and zooming in) at the files one-by-one is not an option, because I need to repeat this for a few thousand sets of photos. I already know which photos are versions of each other.
The simplest approximate solution is to select the image file with the largest width and height. This doesn't work in cases like the following: image B was created by scaling image A down by a factor of 2, then scaling the result up by a factor of 3. Image A has more information and is of higher quality, but image B has larger dimensions. I need a solution which selects image A in this and similar cases.
The "quality" of a picture is not an easy thing to measure, and may very well be subjective. That said, there are two approximate solutions that come to mind.
File size: as @qqilihq suggests, the file size of a JPEG will roughly reflect the quality of a photo. In the example you give, where a file is scaled down and then back up, you would expect the original (non-scaled) version to have a larger file size, because the details lost in scaling down no longer take up space in the file. Seems easy to test, at least; see the sketch after this list.
You may be able to extract useful information from the JPEG's metadata. This will likely vary greatly with the programs used to edit the JPEGs and the settings they wrote. Here are two resources: Exif and one from Oracle.
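If you want to try the file-size heuristic across your few thousand sets, here is a minimal sketch in Python (the file names and the pick_best helper are illustrative, not from either answer):

import os

def pick_best(paths):
    # File size as a rough quality proxy: a larger JPEG generally retains
    # more detail, all else being equal.
    return max(paths, key=os.path.getsize)

# Hypothetical set of versions of one photo:
versions = ["photo_a.jpg", "photo_b.jpg", "photo_c.jpg"]
print(pick_best(versions))

Note that this remains a heuristic: an upscaled version can still out-size the original if it was saved at a higher JPEG quality setting, so spot-check a few results.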
Is there a way to convert a PSD file into an SVG file, or third-party software that can do it, without sacrificing quality? I found this website, but it is for image-to-vector conversion.
I only need a converter, or a plugin for Photoshop or Illustrator that exports as SVG.
I suggest you use the free version of that website, Vector Magic: just save your PSD as an image, then process it through the Vector Magic website. But remember that roughly 20% of the quality is sacrificed because the result is vector; to get the best quality, use an image with solid colors, or vectorized images without shadows, blur, or glow.
If you're knowledgeable in Illustrator, here is the plug-in from Adobe, but note that Illustrator is different from Photoshop.
I think this is a 2-part question and requires a 2-part answer. Assuming you're going strictly from raster graphic to vector:
You have to get a good raster to vector conversion. Illustrator is good for this if you import your raster into AI and play around with tracing. You'll get decent results, but nothing beats doing the conversion by hand or starting from a vector to begin with. -- If you're starting with vectors already, you can just copy your shapes and paths to Adobe Illustrator (AI).
Once your vector is in AI, it's just a matter of using Save As : SVG. If you're using it for the web, the SVG 1.1 profile is the best option, since it's the most compatible with browsers.
2B. If you're doing multiple vectors at once, place each vector into its own artboard inside AI. Once you've done that, you can fit each artboard to the artwork so that the canvas hugs your art. Save As SVG and choose the option "use artboards." -- This will save all the vectors within that AI file as individual SVGs, using the artboard names as file names.
NOTE: while you can use plug-ins and automated converters to do this, doing it manually will result in the best quality output.
I'm looking for a tool that will let me draw out a map of dependencies for applications, servers, etc. We have a lot of servers and a lot of databases, and no way of tracking what depends on what. Note that I'm not talking about code or class dependencies within a project, but rather dependencies on servers, databases, and so on.
When I sat down to create a sample map, I ended up with something like this:
Now, the problem I have with doing this on paper or in MS Paint is that the layout is not adjustable. For example, if I took a node in the above example and moved it, I want the other lines and nodes to adjust to the new position of the one I just moved.
I checked out some "mind mapping" applications, like FreeMind, and found them too restrictive. For example, in that application you can't just freely draw stuff and connect it; you have to specify nodes and their relationships (child, parent, sibling). Also, there was no ability to comment on anything. In the above image, for example, I'd like to be able to include comments for each thing in the map, but have them be hidden until that object is clicked. That way, I can write a lot of text about a relationship and its history without muddling up the map.
At the most basic level, all I want is a super simple application that will just let me draw squares, insert text (and hidden comments) into them, and connect them with arrows, and then allow me to move the squares around and have the surrounding squares and arrows adjust automatically.
A good tool for making general diagrams of this sort is GraphViz. You specify the input as a .dot file, containing instructions similar to (for your example):
digraph G {
web_app -> server1
web_app -> sql_database
web_app -> repository
server1 -> esx_server1
...
}
(There are many formatting directives; for instance, you would put proper labels on the vertices rather than the short names I've used above.)
Then run a command line tool to lay out the graph as an image. There are many layout algorithms, so often you will use trial and error to find one which works best for your graph.
GraphViz can do a reasonably good layout job on many graphs. But the great thing about it is the text-based input; it's easy to autogenerate the input from a program, or keep it in version control, etc.
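As a concrete illustration of autogenerating the input, here is a minimal sketch in Python (the dependency data is made up, and it assumes the GraphViz dot binary is on your PATH):

import subprocess

# Hypothetical dependency map: each item points at the things it depends on.
deps = {
    "web_app": ["server1", "sql_database", "repository"],
    "server1": ["esx_server1"],
}

lines = ["digraph G {"]
for node, targets in deps.items():
    for target in targets:
        lines.append('    "%s" -> "%s"' % (node, target))
lines.append("}")

with open("deps.dot", "w") as f:
    f.write("\n".join(lines) + "\n")

# Lay out the graph as an image; if dot's hierarchical layout doesn't suit
# your graph, try the other GraphViz engines (neato, fdp, ...) the same way.
subprocess.run(["dot", "-Tpng", "deps.dot", "-o", "deps.png"], check=True)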
I am writing a music player and I want to normalize the audio volume across different songs.
I could think of some different ways to do this, e.g.:
Go through all PCM samples (assume floating point from -1 to 1) and select the m = max(abs(sample)). Then apply the factor 1/m to all the PCM samples. This would make the peak be at 1.
Go through the PCM stream and for each position, take the Hanning window of some width around it, calculate the average of absolute samples and from those data, pick the maximum and normalize everything.
The same as 2 but some other way to get some sort of averaged value.
2 and 3 have the disadvantage that I might need some clipping and thus lose some quality. By normalizing not to 1 but to 0.95 or so, I could maybe avoid this to some degree. But I think 2 and 3 have the advantage that they might be the more natural normalization for the user. Wikipedia also has some information about this and mentions RMS, ReplayGain and EBU R128 as ways to measure the loudness of a song.
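For reference, options 1 and 2 might look roughly like this (a minimal sketch, assuming the PCM is already decoded into a numpy array of floats in -1..1; the window width is an arbitrary choice):

import numpy as np

def peak_normalize(samples, target=0.95):
    # Option 1: scale so the largest absolute peak hits `target`
    # (headroom below 1.0 to reduce the risk of clipping).
    m = np.max(np.abs(samples))
    return samples if m == 0 else samples * (target / m)

def windowed_loudness(samples, width=4096):
    # Option 2: Hanning-weighted moving average of absolute sample values;
    # normalize by the maximum of this curve instead of the raw peak.
    window = np.hanning(width)
    return np.convolve(np.abs(samples), window / window.sum(), mode="same")

pcm = np.random.uniform(-1, 1, 44100)          # one second of fake audio
normalized = peak_normalize(pcm)
loudness = windowed_loudness(pcm)
normalized2 = pcm * (0.95 / np.max(loudness))  # option 2: peaks can now
                                               # exceed 1, hence the clipping
                                               # concern mentioned above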
How are other popular music players (like iTunes or so) doing this?
iTunes uses the Sound Check technology. "Sound Check is a proprietary Apple Inc. technology similar in function to ReplayGain. It is available in iTunes and on the iPod." (from Wikipedia) So, this is no solution for me.
It seems that ReplayGain is the most common technique. The algorithm is explained here. Sample implementations are mp3gain (GPL) and ffmpeg-replaygain (GPL, derived from mp3gain). I now have my own implementation in my MusicPlayer project (BSD license).
See also these projects with implementations:
http://sox.sourceforge.net/
http://r128gain.sourceforge.net/
official ReplayGain homepage
official ReplayGain 1.0 specification
Wikipedia: ReplayGain
I've been a web developer for quite some time, and I am used to converting all my designs into the PNG file format in order to build my web pages. PNG, unlike JPG, allows transparency in images, but is it the better solution?
The question concerns page loading time and web design best practices, as well as file size versus image quality.
What do you think is the best solution to use?
It depends.
PNG is better for crisp images with a low number of colours;
JPG is better for low-bandwidth images; however, it is not as crisp and therefore not very good for GUI elements.
Generally, JPG is for photos and pictures, whereas PNG (or GIF) is for layout.
You may find this page interesting, as it goes over the basics of PNG vs GIF.
Google has written about this very well. In Selecting the right image format you can find a flow chart for making the decision.
Given the ever-rising speed of the average net connection, I don't tend to think that page loading time is much of a concern any more [ducks!]. It's really far more useful to think about what you are trying to achieve with the resources you have at your end. For example: is bandwidth limited? Then tending towards heavier compression is a no-brainer. Is the graphic content of the site going to expand, so that the cost of server space will increase over time? Then tending towards heavier compression will delay that cost. Is it an art portfolio site? Then -- aha! -- compression artefacts in the sample work may actually be desirable! Are you trying to flog a game? Then the screenshots should probably be ultra-crisp.
Generally, then, I would repeat what has been said, although perhaps in slightly different language: for site furnishings, which tend to be computer-generated and will be cached for re-use between pages, tend towards PNG; for site content, which will often be page-specific and likely large and complex enough to mask lossy compression, tend towards JPG.
With specific reference to switching to PNG where you decide it is appropriate, run everything through PNGCrush as a matter of course -- otherwise the images may not be displayed with the colours you expect in every browser, and the overall quality of your design will be diminished.
"Given the ever-rising speed of the average net connection I don't tend to think that page loading time is much of a concern any more "
According to SEO-optimisation guidance, Google ranks your website worse if the page load time is above 2 seconds, so compression is necessary, especially on new website designs heavy on graphics.
jpg is usually preferred for photographic images that have a lot of subtly different colors. png works well with computer generated graphics.
That's my rule of thumb.
I am a beginner, and mostly what I do is: if the image is too big, I make it smaller, unless it's a background image. If it is for layout I choose PNG, and JPG for pictures.
What is a good free library for editing MP3s/FLACs?
By editing I mean:
Cutting an audio file into multiple parts
Joining multiple audio files together
Increasing the playback speed of a file without affecting the pitch (e.g. podcasts up to 1.3x)
Re-encoding an audio file from FLAC -> MP3 or vice versa
I don't mean software, I mean a library that I can use within another application. Programming language agnostic.
Just about every language has bindings to C, so you'll probably want to get the applicable C libraries for encoding/decoding MP3 and FLAC files. This list might include:
libFLAC http://flac.sourceforge.net/api/index.html FLAC encoding/decoding
LAME http://lame.sourceforge.net/index.php MP3 encoding
MAD http://www.underbit.com/products/mad/ MP3 decoding
The rest of your signal processing needs could be gathered around a single popular API such as LADSPA http://www.ladspa.org/.
Here's a stretching / pitch shifting library: http://www.breakfastquay.com/rubberband/
Most audio processing programs have a certain internal format they use. That keeps things simple. Everything coming in gets converted to the same format. Once you've standardized the internal format, cutting and splicing audio data is about as difficult as cutting and splicing strings. You don't really need a library for that.
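To illustrate that point: once everything is decoded to one internal PCM format, a cut-and-join is just array slicing (a minimal numpy sketch with made-up data; real code would decode via libFLAC/MAD and re-encode via LAME as listed above):

import numpy as np

RATE = 44100  # assumed common internal sample rate

def cut(samples, start_sec, end_sec):
    # "Cutting" is a slice of the sample array.
    return samples[int(start_sec * RATE):int(end_sec * RATE)]

def join(*clips):
    # "Joining" is concatenation.
    return np.concatenate(clips)

# Example: remove the middle second from a three-second clip.
audio = np.random.uniform(-1.0, 1.0, 3 * RATE)
edited = join(cut(audio, 0, 1), cut(audio, 2, 3))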
I use Audacity for all my editing needs
Audacity is a free, easy-to-use audio editor and recorder for Windows, Mac OS X, GNU/Linux and other operating systems. You can use Audacity to:
* Record live audio.
* Convert tapes and records into digital recordings or CDs.
* Edit Ogg Vorbis, MP3, WAV or AIFF sound files.
* Cut, copy, splice or mix sounds together.
* Change the speed or pitch of a recording.
Audacity uses the LAME library; however, not only is that not language-agnostic, it also has some questions over licensing. Nevertheless, it might be a start.