Kodewerx

Our culture has advanced beyond all that you could possibly comprehend with one hundred percent of your brain.

PostPosted: Mon Nov 09, 2009 5:26 pm 
Krew (Admin)

Joined: Sun Oct 01, 2006 9:26 pm
Posts: 3768
Title: All in a day's work.
Sometime in May 2005, I got the unusual idea that the pixel scaling algorithms in use (at the time as well as today) are ineffective at reproducing the original pixel art at a higher resolution with high quality. My personal belief is that for a pixel scaling algorithm to do justice to the original pixel art it intends to scale, the resulting image should retain the color depth of the original. This is a simple idea: scale and smooth the image without introducing any anti-aliasing. In practice, though, it is more difficult than it sounds.


Pyxes

"Pyxes" is my first [working] attempt at this idea. The name is a reversed abbreviation of "Sexy Pixel" which accurately describes the motivation behind this experiment.

The current implementation of the algorithm is designed for 1bpp images (a total color depth of 2 colors). The resulting images produced by this implementation look either "fat" or "thin" based on the output patterns used (more on this later in the implementation details). First, a demonstration, using the number font from Metroid. The first image shows the font simply scaled up using "nearest neighbor" (no interpolation), the second shows the result of the "thin" pattern, and the third is the result of the "fat" pattern:

Nearest Neighbor:
[image]

Thin:
[image]

Fat:
[image]

The problems with each of these are quite apparent; the thin pattern produces a "cyborg font" straight out of the year 21XX. The fat pattern produces an Airheads font.

Sadly, it's not easy to tweak the patterns so that the output font has a thickness somewhere between these two extremes while keeping a very smooth appearance. Here's an example of an attempt to create an "in-between" thickness:

[image]

Notice the numbers are quite blocky in this image. The result here is that the numbers appear less "fat" than the second image above, but also less smooth due to fewer interpolated pixels.

An interesting compromise (stress on "compromise") is the introduction of anti-aliasing to create a result with a perfect in-between thickness and additional smoothness:

[image]

Note that this is not "real" anti-aliasing, but is a combination of the thin and fat images; the thin image is rendered, and then the fat image is layered over it at about half opacity. This is [strangely] my favorite result, even though the original goal was to produce a nice result without introducing any anti-aliasing. For the full-blown no-anti-aliasing-at-all algorithm, I think the fat pattern will work best, since it produces more rounded edges; the thin pattern tends to produce very straight, angled edges.
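
For illustration, this blend is easy to reproduce; here is a minimal Python/Pillow sketch (not the original code), assuming the thin and fat results have been saved as same-sized images and with placeholder file names:

Code:
# Minimal sketch of the blend described above: the fat result layered
# over the thin result at roughly half opacity. File names are placeholders.
from PIL import Image

thin = Image.open("thin.png").convert("L")   # output of the thin patterns
fat  = Image.open("fat.png").convert("L")    # output of the fat patterns

# 50/50 blend: each output pixel is the average of the two inputs.
blended = Image.blend(thin, fat, 0.5)
blended.save("blended.png")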


Implementation Details

The Pyxes algorithm is quite simple. For each pixel of input, take it together with its three neighbors to the right, below, and diagonally below-right, forming a 2x2 group. Analyze this group of pixels to choose the closest matching pattern. At this time, the algorithm only works on 1bpp input images, so there are only 16 total patterns to choose from. The patterns are as follows:

Code:
    ..    ..    ..    ..
    ..    .#    #.    ##

    .#    .#    .#    .#
    ..    .#    #.    ##

    #.    #.    #.    #.
    ..    .#    #.    ##

    ##    ##    ##    ##
    ..    .#    #.    ##


Each dot represents the first pixel color (for example, black) and each # represents the second pixel color (like white). After one of these patterns is determined, the corresponding "upscaled version" of that pattern is chosen. The upscaled patterns were hand-drawn with some specific assumptions; one of the major ones is that an input pixel is never treated as a perfect square, but as slightly rounded. Here is the pattern set which creates the "thin" output:

Code:
    ....    ....    ....    ....
    ....    ....    ....    ....
    ....    ...#    #...    .##.
    ....    ..##    ##..    ####

    ..##    ...#    ..##    ...#
    ...#    ..##    .###    ..##
    ....    ..##    ###.    .###
    ....    ...#    ##..    ###.

    ##..    ##..    #...    #...
    #...    ###.    ##..    ##..
    ....    .###    ##..    ###.
    ....    ..##    #...    .###

    ####    ###.    .###    .##.
    .##.    .###    ###.    ####
    ....    ..##    ##..    ####
    ....    ...#    #...    .##.


If you compare these upscaled patterns to the original patterns, you can see that I've opted to make them a bit "thick and bubbly": the corners are rounded, and care is taken to exaggerate angles.
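
For reference, here is a rough Python sketch (not the original implementation) of the lookup step: the 2x2 group is packed into a 4-bit index that matches the pattern numbering used below (".#" over ".#" packs to 0101, pattern number 5), and the index selects one of the hand-drawn 4x4 tiles. The table simply transcribes the thin pattern set above.

Code:
# Rough sketch of the pattern lookup. Bit order assumption: top-left,
# top-right, bottom-left, bottom-right, which reproduces the pattern
# numbering used in this post.

def pattern_index(tl, tr, bl, br):
    """Each argument is 0 ('.') or 1 ('#')."""
    return (tl << 3) | (tr << 2) | (bl << 1) | br

# The 16 hand-drawn 4x4 tiles of the "thin" set, transcribed from above.
THIN_PATTERNS = [
    ["....", "....", "....", "...."],  #  0: ../..
    ["....", "....", "...#", "..##"],  #  1: ../.#
    ["....", "....", "#...", "##.."],  #  2: ../#.
    ["....", "....", ".##.", "####"],  #  3: ../##
    ["..##", "...#", "....", "...."],  #  4: .#/..
    ["...#", "..##", "..##", "...#"],  #  5: .#/.#
    ["..##", ".###", "###.", "##.."],  #  6: .#/#.
    ["...#", "..##", ".###", "###."],  #  7: .#/##
    ["##..", "#...", "....", "...."],  #  8: #./..
    ["##..", "###.", ".###", "..##"],  #  9: #./.#
    ["#...", "##..", "##..", "#..."],  # 10: #./#.
    ["#...", "##..", "###.", ".###"],  # 11: #./##
    ["####", ".##.", "....", "...."],  # 12: ##/..
    ["###.", ".###", "..##", "...#"],  # 13: ##/.#
    [".###", "###.", "##..", "#..."],  # 14: ##/#.
    [".##.", "####", "####", ".##."],  # 15: ##/##
]

def upscale_block(tl, tr, bl, br):
    """Return the 4x4 tile for one 2x2 input group."""
    return THIN_PATTERNS[pattern_index(tl, tr, bl, br)]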

The chosen upscaled pattern is then stuffed into the output bitmap with a bitwise-or operation. The next pixel chosen from the input image overlaps half of the input from the last cycle, so the output overlaps as well. For example, if the first input pattern was pattern number 5, then the second pattern would have to be number 10, 11, 14, or 15, since these are the only patterns whose pixels on the left match the pixels on the right side of pattern number 5.

If that's confusing, here's another ASCII picture:

Code:
First input, pattern number 5:
.#
.#

Second input will be one of these:
10:   11:   14:   15:
#.    #.    ##    ##
#.    ##    #.    ##


Let's assume the second input is pattern number 11. The full input so far is actually:

Code:
.#.
.##


Continuing ... select upscaled pattern number 5 and upscaled pattern number 11:
Code:
Upscaled 5:
...#
..##
..##
...#

Upscaled 11:
#...
##..
###.
.###


Now we bitwise-or these two together, overlapping at pixel number 2 on pattern number 5 (in other words, left shift the first pattern by 2 pixels, then perform the bitwise-or):

Code:
...#
..##
..##
...#

|

  #...
  ##..
  ###.
  .###

=

..##..
..##..
..###.
...###


You now have a group of 6x4 pixels, scaled up from a 3x2 group. Continue this process for the rest of the horizontal resolution of the input image, stopping at pixel w-1 (where w = image width) ... In this way, we collect all pixels on the first two rows of input, overlapping only the middle w-2 pixels. The algorithm then continues to the next input row, this time overlapping the bottom two pixels of each input group, and overlapping the bottom two rows of output.

And this produces scaled images similar to the thin screenshot above, with the "thin" numbers. To create the "fat" numbers, I just changed the upscaled patterns slightly, adding even more output pixels to the patterns with three # input pixels (there are four of them in all).
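
Putting the pieces together, a rough sketch of the whole 2x pass might look like the following (again Python, not the original code; it assumes the pattern_index() and THIN_PATTERNS table from the sketch above, and takes the input as a 2D list of 0/1 pixels):

Code:
# Rough sketch of the full 2x pass: slide a 2x2 window over the input,
# look up the 4x4 tile for each window, and bitwise-or it into the
# output. Assumes pattern_index() and THIN_PATTERNS from the earlier
# sketch; `img` is a list of rows of 0/1 pixels.

def pyxes_2x(img, patterns):
    h, w = len(img), len(img[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]

    # Stop at w-1 / h-1 so the 2x2 window never leaves the image.
    # Neighbouring windows share a column (or row) of input, so the
    # 4x4 tiles overlap by two output pixels, exactly as in the
    # worked example above.
    for y in range(h - 1):
        for x in range(w - 1):
            tile = patterns[pattern_index(img[y][x],     img[y][x + 1],
                                          img[y + 1][x], img[y + 1][x + 1])]
            for ty in range(4):
                for tx in range(4):
                    if tile[ty][tx] == '#':
                        out[2 * y + ty][2 * x + tx] |= 1   # bitwise-or
    return out

# Usage: scaled = pyxes_2x(input_pixels, THIN_PATTERNS)  (or a "fat" set)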

Keep in mind this algorithm currently only scales 1bpp images, but it should not be much trouble to add support for full-color input. I believe detecting a difference in hue and/or luminance between input pixels can be used for pattern matching in a full-color image. You will still only have a total of four colors to deal with per input pixel group.
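
The post doesn't specify exactly how that comparison would work; as one possible reading, here is a sketch that reduces a full-color 2x2 group to the same 16 patterns by comparing each pixel's luminance to the group average (the Rec. 601 luma weights are my assumption, not part of the post):

Code:
# One possible way to extend the matching to full-color input: classify
# each pixel of the 2x2 group as light or dark by comparing its luminance
# to the group average, then reuse the same 16-pattern lookup.

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights (assumed)

def color_pattern_index(tl, tr, bl, br):
    lum = [luminance(p) for p in (tl, tr, bl, br)]
    mean = sum(lum) / 4.0
    bits = [1 if v >= mean else 0 for v in lum]   # reduce the group to 1bpp
    return (bits[0] << 3) | (bits[1] << 2) | (bits[2] << 1) | bits[3]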


Comparison with the Hqx and Scale2x Algorithms

I haven't done any actual comparison between Pyxes and the Hqx and Scale2x series of scaling algorithms (hq2x, hq3x, hq4x, Scale2x, Scale3x, Scale4x). I believe the results are similar, at least as far as 1bpp input goes, given the current limitations of the Pyxes algorithm. The Hqx and Scale2x algorithms mainly differ by using a group of the 8 surrounding pixels to do their pattern matching. Four input pixels (the pixel itself plus 3 of its neighbors) should be plenty to produce nice output, as shown by the example images above. More experimentation is still necessary to find the perfect balance of smoothness in the output, as well as to accept full-color input.

I would be interested in doing a more formal comparison of these three algorithms, covering output quality as well as rendering efficiency, particularly as Pyxes matures into something usable. I'm only interested in the Hqx and Scale2x families for comparison because I believe they are currently the only algorithms in use which produce tasteful output.

[Hqx]
[Scale2x]

_________________
I have to return some video tapes.

Feed me a stray cat.


PostPosted: Wed Nov 11, 2009 6:48 pm 
Krew (Admin)

Joined: Sun Oct 01, 2006 9:26 pm
Posts: 3768
Title: All in a day's work.
After some further analysis, it's apparent that the algorithm can be made more efficient by taking into account the overlapping in the upscaled output and pre-calculating the proper upscaled pixels.

Every pixel on the inside of the input image is overlapped four times, each pixel on the edge of the image is overlapped twice, and the four pixels in the corners are only processed once. We can pre-calculate the possible upscaled pixels and store them in separate tables, depending on how many times they are overlapped (1, 2, or 4). In that way, we reduce the memory accesses to writing each output pixel only once. The basic idea of blending rounded corners to fill in the centers of shapes stays the same, as do the rounded corners themselves.
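
To make the counting concrete, here is a small sketch of the property being exploited (my own formulation, not code from the algorithm): the number of 2x2 windows that visit a given input pixel is 4 in the interior, 2 on the edges, and 1 in the corners, which is what allows the upscaled pixels to be split into per-overlap lookup tables.

Code:
# Sketch of the overlap property described above. An input pixel at
# (x, y) is visited by every 2x2 window whose top-left corner is at
# (x-1..x, y-1..y) and still fits inside the w x h image.

def overlap_count(x, y, w, h):
    windows_x = sum(1 for wx in (x - 1, x) if 0 <= wx <= w - 2)
    windows_y = sum(1 for wy in (y - 1, y) if 0 <= wy <= h - 2)
    return windows_x * windows_y   # 1 (corners), 2 (edges), or 4 (interior)

# e.g. for a 4x4 input:
#   [[overlap_count(x, y, 4, 4) for x in range(4)] for y in range(4)]
#   -> 1 in the corners, 2 along the edges, 4 in the interior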

For color images, the upscaled patterns will be more complex. Using the luminance of the pixels will provide a simple means of breaking large, full-color images into a series of simplistic four-shaded tiles for the pattern matching process. Using the hue could also work, but it might be more difficult to specify which colors to use as the threshold.
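
As a sketch of the "four-shaded tiles" idea, luminance could be quantized into four bands; the 0-255 range and equal-width bands below are placeholder choices, since the post does not specify the thresholds:

Code:
# Sketch of the four-shaded-tile idea: quantize luminance into four
# levels so a full-color tile can be pattern-matched like a low-depth
# image. Range and band widths are assumptions.

def four_shade(rgb):
    y = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]   # 0..255 luma
    return min(3, int(y) // 64)   # shade 0, 1, 2, or 3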

_________________
I have to return some video tapes.

Feed me a stray cat.

