
Wikipedia:Reference desk/Archives/Computing/2017 October 6

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 6


Creating a vector image out of a white transparent png file


I've watched every "png to vector" tutorial video and read a million message board posts but still can't figure this out, and I need it done for work: I have a png file that is a white graphic on a transparent background, and I need to convert it to a vector file. In Illustrator, when I trace it, the result is just an entirely white square. When I invert the colors and trace it as a black image, it seems to work, but as soon as I reverse the colors back to the original, the white portion of the graphic disappears - so frustrating! Is this even possible? ReferenceDeskEnthusiast (talk) 14:52, 6 October 2017 (UTC)[reply]

I used Inkscape's Path > Trace Bitmap feature to do this. This archive has the input png and the output svg, and a screenshot of the settings of the trace-bitmap dialog I used. -- Finlay McWalter··–·Talk 15:12, 6 October 2017 (UTC)[reply]
Got this going now, but I'm at Step 1 of the "2. Multiple Scans" section, and when I hit the OK button, nothing happens. The Trace Bitmap panel is there and the OK button blinks as I click it, but nothing occurs: the image isn't traced and the panel doesn't go away. I had to call tech support to get admin access to install this program just to try this. Any idea how to do this in Illustrator? ReferenceDeskEnthusiast (talk) 16:01, 6 October 2017 (UTC)[reply]
http://www.instructables.com/id/Turning-a-pixel-image-into-a-vector-image-using-Ad/ (((The Quixotic Potato))) (talk) 16:05, 6 October 2017 (UTC)[reply]
http://www.graphics.com/article-old/converting-rasters-vectors-using-live-trace-illustrator Google "raster to vector illustrator" for more (((The Quixotic Potato))) (talk) 16:08, 6 October 2017 (UTC)[reply]
Yeah that's exactly what I've done, but using any of these steps inexplicably removes the white portion of the graphic and leaves in the colored sections. Can't find a single video or discussion that addresses this. This is what's so frustrating, I can easily get it as a vector in the inverted colors. ReferenceDeskEnthusiast (talk) 16:14, 6 October 2017 (UTC)[reply]

Solved! Finally got it. The trick was to invert it, trace it, save it as an .svg, close the window, open the .svg, and invert the svg file back to the original colors. ReferenceDeskEnthusiast (talk) 16:49, 6 October 2017 (UTC)[reply]
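For anyone who hits the same wall, the preprocessing part of that trick can also be scripted. A minimal sketch with Pillow (the file names are placeholders; the idea is just to turn the alpha channel into a black-on-white silhouette that any tracer can handle):

# Sketch: turn a white-on-transparent PNG into a black-on-white silhouette,
# so a bitmap tracer (Illustrator Image Trace, Inkscape Trace Bitmap, potrace)
# sees a dark shape instead of an all-white square. File names are placeholders.
from PIL import Image

src = Image.open("white_logo.png").convert("RGBA")
alpha = src.getchannel("A")                 # the alpha channel holds the only real shape information

traceable = Image.new("L", src.size, 255)   # start from a white canvas
traceable.paste(0, mask=alpha)              # paint black wherever the graphic is opaque
traceable.save("black_on_white_for_tracing.png")

# Trace black_on_white_for_tracing.png, export the SVG, then set the SVG fill
# back to white (the "invert it back" step described above).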

High Quality Image Library


Is there anywhere that I can freely download a large collection of uncompressed images - preferably of varied subjects: landscapes, people, etc.? Ultimately, the images will end up as bitmaps to be processed; I've been pulling images from Google image searches and the like, but it is hard to tell exactly how much noise my own processing introduces versus how much is due to compression artifacts that are not readily visible in the original. In the event there are no free options, I'd be open to any source with a reasonable price (less than $100, though that's more money than I'd like to spend). While on the subject, a related question: which image has more noise - a very large bitmap turned into a JPEG and then downsized, or a very large bitmap downsized and then compressed? Phoenixia1177 (talk) 17:06, 6 October 2017 (UTC)[reply]
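(One way to probe the second question empirically is a quick round-trip experiment. This is only a rough sketch with Pillow and NumPy; the file name and JPEG quality are arbitrary placeholders:)

# Rough experiment: starting from a lossless "large.png" (placeholder name),
# compare (a) JPEG-compress then downsize against (b) downsize then JPEG-compress,
# measuring error against a lossless downsized reference.
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img, quality=75):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def rms_error(a, b):
    a, b = (np.asarray(x, dtype=float) for x in (a, b))
    return float(np.sqrt(np.mean((a - b) ** 2)))

big = Image.open("large.png").convert("RGB")
small = (big.width // 4, big.height // 4)

reference = big.resize(small, Image.LANCZOS)                  # never lossy-compressed
a = jpeg_roundtrip(big).resize(small, Image.LANCZOS)          # compress, then downsize
b = jpeg_roundtrip(big.resize(small, Image.LANCZOS))          # downsize, then compress

print("compress-then-downsize RMS error:", rms_error(a, reference))
print("downsize-then-compress RMS error:", rms_error(b, reference))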

Try searching for "RAW format images", then process them as you like. Under Google + Images, I picked "More Tools + Sizes + Larger than ... 4 MP". Note that you'll find more images of landscapes and buildings under this search than close-ups of people, as people tend not to look good when you can see that level of detail (pores, etc.). StuRat (talk) 17:24, 6 October 2017 (UTC)[reply]
NASA is good about publishing uncompressed, high-resolution TIFF files of most of their images.
It's not all space photos either. A lot of it seems to be pictures of engineers building things, PR photos, launch-pad photos, and various other random photos of NASA personnel, facilities, and equipment.
https://photojournal.jpl.nasa.gov/gallery/snt
ApLundell (talk) 17:50, 6 October 2017 (UTC)[reply]
But I wouldn't assume these images have never been lossy-compressed. Are we certain that NASA is careful enough with their workflows for all those other images to ensure they were never lossy-compressed (or that, if they were, this is clearly marked)? AFAIK, even some cheaper DSLRs only provide a lossy compression option by default [1]. Of course, noise can come from a variety of sources besides compression artifacts, and it can get quite complicated to pin down what you actually mean by a non-lossy-compressed 'raw' image from the sensor, so I'm not sure there is a reason for the OP to concentrate on noise from lossy compression. (I suspect Nimur will have a lot to say about this, if they get to it.) Nil Einne (talk) 12:12, 8 October 2017 (UTC)[reply]
For the sake of brevity, let me summarize: users of any scientific image must be very careful to understand how the image was created.
Many people conflate two totally unrelated concepts: (1) data may be uncompressed (or it may use lossless compression); (2) data may be unprocessed - it may be in a form that directly corresponds to digitized sensor data. These concepts are entirely orthogonal (independent of each other).
NASA scientific imagery is usually published through one of the main databases - for example, Planetary Data System (PDS). PDS data is annotated with metadata and links to technical publications that explain how the imagery or raw data you see has been processed. For example, this summer I spent a few weeks training new scientists in the theory and practice of interpreting LRO camera imagery. You can get JPEG files, and you can get TIFF files; but if you want to do photogrammetry or radiometry, you need to do a lot of extra homework to learn about the camera instrument, and the software that runs before you get a data-file. "Raw data" is a term that should be avoided: instead, professional scientists refer to the exact data product that they are using, so that it is clear to others exactly what type of data they mean. For example, there are "engineering data records," "scientific data records," and so forth.
As one clear example: LROC camera data may be provided in a TIFF file, but it has already been companded - the dynamic range has been reduced to fit into an eight-bit value. Even if you get an "uncompressed" TIFF file, the spacecraft itself has compressed that pixel from the original 12 bits (usually). The only way to correctly interpret the image is to pull up the orbital record (the Calibrated Data Record, for example) and invert the companding equation. This is a complicated process, unique to this instrument.
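To make "invert the companding equation" concrete, here is a toy sketch in Python. The square-root curve below is invented purely for illustration; it is not the actual LROC scheme, which uses instrument-specific lookup tables documented with the data products.

# Toy illustration of companding and its approximate inverse.
# The square-root curve is made up; the real LROC scheme uses
# instrument-specific tables published alongside the data.
import numpy as np

def compand_12_to_8(dn12):
    """Squeeze 12-bit sensor values (0..4095) into 8 bits (0..255)."""
    return np.round(np.sqrt(dn12 / 4095.0) * 255).astype(np.uint8)

def decompand_8_to_12(dn8):
    """Approximate inverse: recover a 12-bit estimate from the 8-bit value."""
    return np.round((dn8 / 255.0) ** 2 * 4095).astype(np.uint16)

dn12 = np.array([0, 100, 1000, 4095])
dn8 = compand_12_to_8(dn12)
print(dn8, decompand_8_to_12(dn8))   # the round trip is only approximate: precision was lost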
So - beware of "raw" data - the only real raw data is an analog signal! The minute that signal gets preprocessed by hardware or software, you (the consumer) have to do a lot of homework to understand every single step in the data acquisition pipeline, from sensor electronics, to acquisition software, to storage-and-transmission signal conditioning, to scientific post-processing, to archival file-formatting, to data compression for long-term storage. The very same caveats apply to "conventional" photographic data - only, it's not so cleanly documented!
Nimur (talk) 20:15, 9 October 2017 (UTC)[reply]

How to find "effective resolution" of pics?


Is there a way to determine the original resolution of, say, a pic that has been upscaled but has no history stored with it? I can get a general idea from how blurry the pic is, but there must be a more scientific approach. StuRat (talk) 22:46, 9 October 2017 (UTC)[reply]

The "obvious method" is to observe the energy spectrum produced by the spatial frequency transform, and observe whether there is a distinct fall-off at a maximum spatial frequency. That spatial frequency value would correspond, conceptually, to the "original" pixel size.
This of course assumes that the upscaling algorithm is "dumb," in that it simply blurs during upscaling - but this is not representative of any modern method. Modern methods preserve gradient directions, enhance and sharpen edges, and mix in matched noise-fill to synthesize high-frequency content, and so on.
A Tour of Modern Image Processing... (IEEE Spectrum, 2013) provides, well, a tour of all the horrible new math you need to learn if you plan to make quantitative sense out of modern digital images.
StuRat, you might enjoy reading pixel-art scaling algorithms - after viewing a lot of those sample images, you'll get a good conceptual understanding of why a purely mathematical treatment is almost impossible - modern upsampling algorithms are nonlinear filters; they yield an "ill-posed inversion problem," which is math-ese for "you can't get a provably correct answer."
Nimur (talk) 23:07, 9 October 2017 (UTC)[reply]
I was thinking more of a solution like this:
1) Recognize an object in the pic. For example, recognize a rose.
2) Compare with various stored images of that object, at various resolutions, to determine which is most similar in effective resolution. StuRat (talk) 23:18, 9 October 2017 (UTC)[reply]
Another possible method is to downscale it until a loss of quality becomes apparent. The smallest size before that loss becomes apparent is the effective resolution. StuRat (talk) 23:20, 9 October 2017 (UTC)[reply]
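A rough automated version of that idea, as a sketch with Pillow and NumPy (the file name and the error threshold standing in for "loss becomes apparent" are arbitrary):

# Rough automation of the downscale-until-it-hurts idea: shrink, blow back up,
# and report the smallest scale that still reproduces the picture almost exactly.
# The threshold is an arbitrary stand-in for "loss becomes apparent".
import numpy as np
from PIL import Image

def roundtrip_error(img, scale):
    small = img.resize((max(1, int(img.width * scale)),
                        max(1, int(img.height * scale))), Image.LANCZOS)
    back = small.resize(img.size, Image.LANCZOS)
    a, b = (np.asarray(x, dtype=float) for x in (img, back))
    return float(np.sqrt(np.mean((a - b) ** 2)))   # RMS error in 0..255 units

img = Image.open("upscaled_suspect.png").convert("L")
for scale in (0.9, 0.75, 0.5, 0.33, 0.25, 0.125):
    err = roundtrip_error(img, scale)
    print("scale %.3f -> RMS error %.2f" % (scale, err))
    if err > 2.0:   # arbitrary threshold
        print("effective resolution is roughly", int(img.width * scale), "x",
              int(img.height * scale), "or a bit above")
        break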
That's sort of like a keypoint detection method or an image saliency approach - it's an active area of research, but it's more along the lines of a machine-vision method than a method of conventional image processing. Nimur (talk) 23:33, 9 October 2017 (UTC)[reply]