Gain Superpowered Vision With Scalable Function Graphics

The intention for Scalable Function Graphics (SFG) is to provide a means of storing photographic images as scalable mathematical models. Unlike raster graphics, which store images as an array of discrete pixel values, an SFG effectively stores an image as a complex, bivariate, vector-valued, continuous function. The exact mathematical function is determined by a machine learning algorithm during a typically time-consuming conversion process.

(The above and below diagrams show the quality of image scaling that can be achieved using Scalable Function Graphics, in both ‘sharp edge mode’ and ‘detail recovery mode’. The above iris image was originally a Wikimedia Commons photograph by Laitr Keiows. Here it has been scaled down to a 40 by 40 thumbnail, and then back up again using different techniques)

The input values to this function are decimal x,y co-ordinates, and the three outputs correspond to the red, green, and blue values of a precise point at that location. Before conversion into an SFG, the original image consists of a fixed number of points (or pixels). Through the conversion process the learning algorithm steadily manipulates the constants of an extremely long mathematical function so that its outputs directly match the RGB values of each pixel in the original image (whenever the inputs to the function are the co-ordinate values of the respective pixel).
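The idea of fitting a function’s constants to pixel values can be sketched in miniature. The following Python/NumPy example is purely illustrative and not the author’s actual algorithm: it assumes a small, invented sinusoidal function skeleton and solves for its constants in one step by least squares, where the real SFG conversion uses a far longer function and an iterative learning process. All names here (`basis`, `fit_constants`, `evaluate`) are made up for the example.

```python
import numpy as np

def basis(x, y, k=4):
    """A fixed-form continuous 'function skeleton': a small 2-D sinusoidal
    basis. Only the multiplying constants are learnt, as in the article."""
    feats = [np.ones_like(x)]
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            feats.append(np.sin(i * np.pi * x) * np.sin(j * np.pi * y))
            feats.append(np.cos(i * np.pi * x) * np.cos(j * np.pi * y))
    return np.stack(feats, axis=-1)          # (..., n_constants)

def fit_constants(image):
    """Choose the constants so f(x, y) best reproduces each pixel's RGB."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs.ravel() / (w - 1)                 # normalise co-ordinates to [0, 1]
    y = ys.ravel() / (h - 1)
    A = basis(x, y)                          # (h*w, n_constants)
    rgb = image.reshape(-1, 3)               # (h*w, 3)
    constants, *_ = np.linalg.lstsq(A, rgb, rcond=None)
    return constants                         # (n_constants, 3)

def evaluate(constants, x, y):
    """The learnt bivariate, vector-valued function: co-ordinates in, RGB out."""
    return basis(np.asarray(x), np.asarray(y)) @ constants
```

Because the co-ordinates are continuous inputs, `evaluate` accepts any x,y in (or beyond) [0, 1], not just the pixel positions it was fitted on.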

Once this has been achieved the resulting function can be fed new co-ordinate values that weren’t defined in the original image (co-ordinates in-between the original pixels). The resulting outputs act as effective estimates, allowing the image to be scaled up dramatically (as long as it is stored as an SFG) with the learnt mathematical function filling in the gaps.
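The “filling in the gaps” step amounts to sampling a continuous function on an arbitrarily fine grid. In this sketch `example_sfg` is a hand-written stand-in for a learnt SFG function (the real learnt functions are vastly longer); the point is that one and the same function renders cleanly at any resolution.

```python
import numpy as np

def render(f, height, width):
    """Sample a continuous image function f(x, y) -> RGB on a pixel grid.
    Co-ordinates are normalised to [0, 1] (assumes height, width >= 2)."""
    y, x = np.mgrid[0:height, 0:width]
    return f(x / (width - 1), y / (height - 1))

# A hand-written stand-in for a learnt SFG function: smooth RGB gradients.
def example_sfg(x, y):
    r = 0.5 + 0.5 * np.sin(3 * np.pi * x)
    g = 0.5 + 0.5 * np.cos(2 * np.pi * y)
    b = x * y
    return np.stack([r, g, b], axis=-1)

thumb = render(example_sfg, 40, 40)      # a 40 by 40 "thumbnail"
large = render(example_sfg, 400, 400)    # the same function at 10x the resolution
```

Both renderings agree wherever their sample points coincide; the larger one simply queries the function at the in-between co-ordinates as well.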

(The Wikimedia Commons photograph ‘Sunset from Battery Park, New York City’ by Alex Proimos has been cropped and scaled down to a 40 by 40 thumbnail. It has then been scaled back up 1000% in order to compare different scaling algorithms and demonstrate the quality of Scalable Function Graphics)

A comparison can be drawn between Scalable Function Graphics and Scalable Vector Graphics (SVG). SVGs also store images as a scalable mathematical model, but with an important difference. SVGs store values that directly represent colours, heights, widths and other attributes of a fixed number of geometric shapes. As mathematically defined objects these shapes maintain the same characteristics at any scale. Unfortunately, however, this mechanism only remains useful for text, and relatively simple illustrations such as logos. Photographic images are far too complex to be stored using simple geometric objects. This is where Scalable Function Graphics are intended to be useful.


The Development of Scalable Function Graphics

The below sequence of images shows rendered bivariate, vector-valued function graphics that were produced without a learning process. Instead, the constituent functions, mathematical operators, variables and constants have been randomly generated and combined into a single function. Discontinuous and non-smooth functions such as tan, min, max, abs, random, and if-then-else were also included, facilitating infinite sharpness and granularity. Because these functions are discontinuous, they produce the same sharp characteristics no matter what resolution the image is displayed at (so long as it is stored as a function graphic).
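A random function graphic of this kind can be generated along the following lines. This is a hedged sketch only: the article does not specify its operator set or generator, so the operators, depth, and normalisation below are all assumptions made for illustration.

```python
import random
import numpy as np

OPS = [  # constituent operators, including non-smooth / discontinuous ones
    lambda a, b: a + b,
    lambda a, b: a * b,
    lambda a, b: np.minimum(a, b),
    lambda a, b: np.maximum(a, b),
    lambda a, b: np.sin(a) + 0 * b,
    lambda a, b: np.abs(a - b),
    lambda a, b: np.where(a > b, a, -b),     # an 'if-then-else'
]

def random_function(depth=5):
    """Randomly combine operators, variables and constants into one
    bivariate function, as in the pre-learning experiments."""
    if depth == 0:
        leaf = random.choice(['x', 'y', 'const'])
        if leaf == 'x':
            return lambda x, y: x
        if leaf == 'y':
            return lambda x, y: y
        c = random.uniform(-2, 2)
        return lambda x, y: np.full_like(x, c)
    op = random.choice(OPS)
    left = random_function(depth - 1)
    right = random_function(depth - 1)
    return lambda x, y: op(left(x, y), right(x, y))

def render_random(size=64, seed=0):
    """Render one random function per colour channel, normalised to [0, 1]."""
    random.seed(seed)
    channels = [random_function() for _ in range(3)]
    y, x = np.mgrid[0:size, 0:size] / (size - 1)
    img = np.stack([f(x, y) for f in channels], axis=-1)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)
```

Because the where/min/max branches are re-evaluated at every sample point, the same sharp boundaries reappear whatever value of `size` the function is rendered at.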


The initial intention was that this behaviour would serve as a means of preserving sharp edges in photographic images when they are scaled up; however, it quickly became apparent that discontinuous functions directly impeded the learning process.

For machine learning algorithms such as genetic algorithms to work, relatively small learning steps need to be taken which gradually build up. Using discontinuous functions, however, meant that even small alterations to the function could result in dramatic changes to the resulting image, obstructing subtle but beneficial changes from accumulating effectively. Additionally, altering almost any constituent function, operator or variable (as opposed to a constant) tended to have relatively drastic effects: for example, changing 100+100 to 100-100, or max(0,x) to min(0,x), or y to x. Changes to mathematical constants, on the other hand, can be controllably subtle, so long as they are not contained within discontinuous functions (for example, changing 50 to 50.00001).
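The contrast between nudging a constant inside a continuous function and inside a discontinuous one can be shown numerically. In this toy demo (invented for illustration), the same small change to a constant produces a proportionally small output change in a sine, but a full unit jump in an if-then-else style threshold.

```python
import numpy as np

x = np.linspace(0, 1, 1001)

def smooth(c):
    """Continuous in the constant c: small nudges give small changes."""
    return np.sin(c * x)

def stepped(c):
    """An if-then-else threshold: discontinuous in c wherever a sample
    point crosses it."""
    return np.where(x > c, 1.0, 0.0)

# Nudge the constant by one part in a thousand, straddling the sample x = 0.5.
c, eps = 0.4995, 0.001
smooth_change = np.abs(smooth(c + eps) - smooth(c)).max()   # bounded by eps
step_change = np.abs(stepped(c + eps) - stepped(c)).max()   # a full unit jump
```

The smooth case is exactly what gradual learning needs; the stepped case is why, as described above, small alterations could still cause dramatic changes to the image.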
Fortunately, limiting the learning process to the manipulation of constants also allows a more efficient learning algorithm to be used, called ‘backpropagation’. Significant modifications have, however, needed to be made to the standard backpropagation algorithm to make it fit for the purpose of Scalable Function Graphic conversion.

Additionally, to solve the problem within a tractable time frame the original image needs to be divided into a series of smaller tiles. A different mathematical function can then be trained to model each tile, with each one overlapping its neighbouring tiles slightly to ensure that the scaled-up versions fit together seamlessly once the learning process is complete. Tiling is necessary because the ability of even an extremely large continuous function to model the complexity of an entire photographic image appears to diminish exponentially the larger the original image becomes.
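The overlapping-tile scheme can be sketched as follows. This is a minimal NumPy illustration with assumed tile sizes and a simple uniform average over the overlaps; the actual blending used in SFG conversion is not described in the article.

```python
import numpy as np

def split_tiles(img, tile=20, overlap=4):
    """Cut an image into overlapping tiles, one per SFG sub-function."""
    h, w = img.shape[:2]
    step = tile - overlap
    tiles = []
    for ty in range(0, h - overlap, step):
        for tx in range(0, w - overlap, step):
            tiles.append(((ty, tx), img[ty:ty + tile, tx:tx + tile]))
    return tiles

def blend_tiles(tiles, shape):
    """Recombine (possibly re-rendered) tiles, averaging the overlap
    regions so neighbouring tiles fit together seamlessly."""
    out = np.zeros(shape)
    weight = np.zeros(shape[:2] + (1,))
    for (ty, tx), t in tiles:
        th, tw = t.shape[:2]
        out[ty:ty + th, tx:tx + tw] += t
        weight[ty:ty + th, tx:tx + tw] += 1
    return out / np.maximum(weight, 1)
```

In the real pipeline each tile would be replaced by the rendering of its own learnt function before blending; here, splitting and re-blending an unchanged image simply reproduces it, which is the seamlessness property the overlaps are for.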
For comparison here are the results of four different scaling techniques (including a Scalable Function Graphic) applied to a 20 by 20 tile within the Wikimedia Commons image ‘Feathered Dusk’ by Jessie Eastland. From left to right: ‘nearest neighbour algorithm’, ‘bicubic algorithm’, ‘fractal algorithm’, ‘SFG (sharp edge mode)’. If you would like to see the length and complexity of the vector-valued function required for the SFG to produce such a high quality rendering (for just this one tile) click here.

Here the Wikimedia Commons photograph ‘Feathered Dusk’, taken by Jessie Eastland, has been converted into a mathematical function using a machine learning algorithm. Through an iterative process the algorithm has developed a bivariate, vector-valued mathematical function, the outputs of which correspond to the RGB values of each pixel in the image (as long as the two inputs are the x,y co-ordinates of the respective pixel).

With the entire image modelled as a single mathematical function, feeding this function x,y co-ordinates that lie outside of the original image’s borders produces interesting results. The below image shows the rendered results of one such mathematical function. The white corner markers identify where the original borders of the image existed. The pixel co-ordinates that lie outside of these markers did not exist in the original image, yet the function is still capable of rendering relatively plausible results. During the learning process, heavy restrictions were intentionally placed on the detail that the function would be able to model (by limiting the size and complexity of the function).

The next image down shows a larger, more complex mathematical function, modelled on the same image. With fewer restrictions on size and complexity the algorithm has learnt to model finer details of the original image, but to the detriment of a plausible rendering beyond the image borders. The more capable it becomes at exactly replicating the image, the less capable it becomes at generalising outside of the image borders based on what it has learnt.
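One side of this trade-off can be demonstrated numerically. The sketch below (an invented cosine-basis model fitted by least squares, not the author’s function family) fits the same small image with progressively larger functions: the in-image replication error can only shrink as the function grows, which is exactly the pull towards over-fitting described above; the loss of plausible extrapolation is the qualitative flip side.

```python
import numpy as np

def design(x, y, k):
    """A 2-D cosine basis of order k; larger k means a bigger,
    more complex candidate function."""
    cols = []
    for i in range(k + 1):
        for j in range(k + 1):
            cols.append(np.cos(i * np.pi * x) * np.cos(j * np.pi * y))
    return np.stack(cols, axis=-1)

def fit_error(img, k):
    """Fit the constants by least squares; return in-image mean squared error."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = xs.ravel() / (w - 1), ys.ravel() / (h - 1)
    A = design(x, y, k)
    rgb = img.reshape(-1, 3)
    coef, *_ = np.linalg.lstsq(A, rgb, rcond=None)
    return float(np.mean((A @ coef - rgb) ** 2)), coef

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))
# Nested bases: each larger function contains the smaller one, so the
# in-image error is non-increasing as capacity grows.
errors = [fit_error(img, k)[0] for k in (1, 3, 6)]
```

Feeding `design` co-ordinates with x or y outside [0, 1] would extrapolate beyond the borders, where (as the images above show) the larger fits tend to behave less plausibly.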

The images below show the results from a string of attempts to solve this problem, mainly by using multiple mathematical functions with each one trained to pick up on different features within the same image. The results are interesting, but largely unsuccessful.

Explore Scalable Function Graphics for yourself and download PhotoFunction, the free prototype software that allows you to convert thumbnail images into Scalable Function Graphics.


Alexander O.D. Lorimer is a computer programmer and emergent systems researcher, focusing on the application of self-organising systems to solve complex design problems. An overarching aim is the development of an artificial and collective intelligence system to better optimise and decentralise the products and processes of architectural design.