The GImage class of the acm.graphics package provides some simple methods for examining and creating pictures at the level of individual pixels. For example, for a given GImage named "myImage", one can use
int[][] pixels = myImage.getPixelArray();
to load up a 2D array of integers, named "pixels", with values that combine (in a reversible way) the red, green, and blue intensities of the color associated with each pixel in the GImage (as well as a transparency level, called alpha, for each pixel).
You can reverse this process too. If you have a 2D array of integers named "pixels", you can construct a new GImage from that array with something similar to:
GImage newImage = new GImage(pixels);
Suppose we have an image and have retrieved its associated pixel array, named "pixels", using the getPixelArray() method mentioned above. Further suppose that we wanted to find the red, green, and blue intensities of the pixel in row 100 and column 100 of the image. To do so, we would simply look at the value given by pixels[100][100]. Perhaps this int value was -14610928. As said before, this single number represents a combination of the red, green, and blue intensity levels for the pixel in question. Each of these intensities is expressed as an integer from 0 to 255. We can "pull apart" this value into its separate integer components with the methods GImage.getRed(), GImage.getGreen(), and GImage.getBlue(). Doing this to our previous example, we could use:
int red = GImage.getRed(-14610928);     // now, red = 33
int green = GImage.getGreen(-14610928); // now, green = 14
int blue = GImage.getBlue(-14610928);   // now, blue = 16
You can go the other direction too -- if you have red, green, and blue intensities, you can make a single integer (i.e., a pixel value) that combines them by using the method
GImage.createRGBPixel(). As an example:
int pixel = GImage.createRGBPixel(33, 14, 16); //now, pixel = -14610928
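The specific numbers above follow from how the channels are packed into the int. Assuming the standard Java ARGB layout (alpha in the top 8 bits, then red, green, and blue below it -- a layout consistent with the example values here), the packing and unpacking can be sketched with shifts and masks, no acm library required:

```java
public class PixelBits {
    public static void main(String[] args) {
        // Pack alpha=255, red=33, green=14, blue=16 into one int
        // (layout 0xAARRGGBB; the high bit of alpha makes the int negative).
        int pixel = (0xFF << 24) | (33 << 16) | (14 << 8) | 16;
        System.out.println(pixel);                 // -14610928

        // Unpack each channel by shifting it down and masking off 8 bits.
        System.out.println((pixel >> 16) & 0xFF);  // 33 (red)
        System.out.println((pixel >> 8) & 0xFF);   // 14 (green)
        System.out.println(pixel & 0xFF);          // 16 (blue)
    }
}
```

This is the same arithmetic that GImage.createRGBPixel() and the getRed()/getGreen()/getBlue() methods perform for you.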
If we modify these red, green, and blue intensities (otherwise known as the "RGB values"), we change the color. By playing with the values in a pixel array for an image, we can create some interesting effects. For example:
If we averaged the red, green, and blue intensities for a given pixel, and then created a new pixel whose RGB values were all equal to this average -- and if we did this to all of the pixels of an image, and used these new pixels to create a new image -- we would have created a grayscale version of our original image.
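A sketch of this grayscale recipe follows. The helper methods below stand in for GImage's static methods (using the standard ARGB packing) so the example runs without the acm library; the name toGrayscale is our own:

```java
public class Grayscale {
    // Stand-ins for GImage.getRed/getGreen/getBlue/createRGBPixel.
    public static int getRed(int p)   { return (p >> 16) & 0xFF; }
    public static int getGreen(int p) { return (p >> 8) & 0xFF; }
    public static int getBlue(int p)  { return p & 0xFF; }
    public static int createRGBPixel(int r, int g, int b) {
        return (0xFF << 24) | (r << 16) | (g << 8) | b;
    }

    // Replace each pixel's RGB values with their average.
    public static int[][] toGrayscale(int[][] pixels) {
        int rows = pixels.length, cols = pixels[0].length;
        int[][] result = new int[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                int p = pixels[i][j];
                int avg = (getRed(p) + getGreen(p) + getBlue(p)) / 3;
                result[i][j] = createRGBPixel(avg, avg, avg);
            }
        }
        return result;
    }
}
```

With the acm library, you would pass the resulting array to new GImage(result) to display the grayscale picture.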
Alternatively, suppose we find the complement of a given set of RGB values -- such that the new red intensity and the old red intensity add up to pure red (255), the new and old green intensities add up to pure green (255), and the new and old blue intensities add up to pure blue (255). If we made a new pixel out of these inverted intensities -- and if we did this to all of the pixels of an image, and used these new pixels to create a new image -- we would have the "negative" of our original image.
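The negative effect can be sketched the same way -- again using stand-ins for GImage's static methods so the code is self-contained, with the hypothetical name invert for the transformation:

```java
public class Negative {
    // Stand-ins for GImage.getRed/getGreen/getBlue/createRGBPixel.
    public static int getRed(int p)   { return (p >> 16) & 0xFF; }
    public static int getGreen(int p) { return (p >> 8) & 0xFF; }
    public static int getBlue(int p)  { return p & 0xFF; }
    public static int createRGBPixel(int r, int g, int b) {
        return (0xFF << 24) | (r << 16) | (g << 8) | b;
    }

    // Replace each intensity x with 255 - x, so old and new sum to 255.
    public static int[][] invert(int[][] pixels) {
        int[][] result = new int[pixels.length][pixels[0].length];
        for (int i = 0; i < pixels.length; i++) {
            for (int j = 0; j < pixels[0].length; j++) {
                int p = pixels[i][j];
                result[i][j] = createRGBPixel(255 - getRed(p),
                                              255 - getGreen(p),
                                              255 - getBlue(p));
            }
        }
        return result;
    }
}
```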
As a slightly more complicated example, if we found the average red intensity seen in the pixels surrounding a given pixel (say, within 3 pixels from the given one), and similarly found average green and average blue intensities for these same pixels, and used these average red, average green, and average blue values as the new red, green, and blue values for our pixel -- and if we did this to all of the pixels of an image, and used these new pixels to create a new image -- the result would be a blurred version of the original image.
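The blur described above can be sketched as follows. As with the earlier examples, the helper methods stand in for GImage's statics, and the name blur is our own; the window is clipped at the image edges so border pixels average over fewer neighbors:

```java
public class Blur {
    // Stand-ins for GImage.getRed/getGreen/getBlue/createRGBPixel.
    public static int getRed(int p)   { return (p >> 16) & 0xFF; }
    public static int getGreen(int p) { return (p >> 8) & 0xFF; }
    public static int getBlue(int p)  { return p & 0xFF; }
    public static int createRGBPixel(int r, int g, int b) {
        return (0xFF << 24) | (r << 16) | (g << 8) | b;
    }

    // Average each channel over the square neighborhood within `radius`
    // pixels of (i, j), clipping the window at the image edges.
    public static int[][] blur(int[][] pixels, int radius) {
        int rows = pixels.length, cols = pixels[0].length;
        int[][] result = new int[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                int rSum = 0, gSum = 0, bSum = 0, count = 0;
                for (int di = -radius; di <= radius; di++) {
                    for (int dj = -radius; dj <= radius; dj++) {
                        int ni = i + di, nj = j + dj;
                        if (ni < 0 || ni >= rows || nj < 0 || nj >= cols) {
                            continue; // skip neighbors outside the image
                        }
                        int p = pixels[ni][nj];
                        rSum += getRed(p);
                        gSum += getGreen(p);
                        bSum += getBlue(p);
                        count++;
                    }
                }
                result[i][j] = createRGBPixel(rSum / count,
                                              gSum / count,
                                              bSum / count);
            }
        }
        return result;
    }
}
```

Calling blur(pixels, 3) averages over the neighbors within 3 pixels, as in the description above; a larger radius produces a stronger blur.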