In 2011, the Software Studies Initiative released a macro for ImageJ known as ImagePlot. Combined with a system for measuring the properties of images, the macro let users compose an image plot, in which the images under analysis themselves appear as the points of a scatterplot. This is an extremely useful way to analyze visual data: viewers see a meta-representation of the optical qualities of the images while also experiencing the images themselves. The technique was key to projects that inspired us, like Manovich’s Visualizing Vertov. The original documentation focused on patterns of change across an artist’s collection; for example, how did the works of Mondrian or Van Gogh change through the years?
Underlying much of the rise of cultural analytics is the tension between close and distant reading, which is often overplayed. The close reading of a particular image will always be an important project; what distant reading methods provide are ways of visualizing entire collections, tests for normative claims about clustering or trends, and generative documents that encourage new hypothesis formation. In this sense, image plots function much like marginalia in reading or notes taken during fieldwork.
This particular library grew from work in a lower-division undergraduate course at Oregon State University. Our endeavor here is not to provide an entirely new approach, but to bring multiple existing functions into a single commonly used framework that can easily be taught to undergraduates and employed in research by those who are not especially handy with computers. At the same time, in building this project we assume that the future of general education includes some level of data analysis education and that courses built on common platforms and languages (like the tidyverse in RStudio) will replace standalone GUIs in just a few years. Because our program has links to communication, art, design, and computer science, the tool set in this package is intended to be useful for everyone.
We designed this package to interface with the tidyverse more generally: all of our measurements are collected in dataframes with one case per row, and our tables can easily be joined to make a very wide dataframe. The end plotting logic of this package is geom_image for ggplot2. If you are comfortable with dplyr and ggplot2, this package should provide you with a comprehensive set of image plotting tools.
knitr::opts_chunk$set(echo = TRUE)
#our library
library(ImagePlotX)
For your convenience we have included a selection of images for analysis in this package; they can be called with the function “bernie.” These are twenty images of Bernie Sanders in his mittens from the inauguration, modified by a neural net.
Our basic importer function takes the name of your folder in quotes and yields a dataframe with each local and global path. This basic loader does not alter the images or even read them; it is a simplifier that helps you know where your files actually are.
#our mittens images can be found here:
#http://www.danielfaltesek.com/mittens.zip
#this is where the files ended up; notice that the directory differs from yours because this tutorial was produced with a beta version of the package
#your path to your files will likely include downloads/mittens/
A<-"/Users/faltesed/Documents/One/mittens"
load_images(A)
## [1] "/Users/faltesed/Documents/One/mittens/58d7005687456f7f1d8b46b6_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [2] "/Users/faltesed/Documents/One/mittens/58d8a39887456f4c718b4574_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [3] "/Users/faltesed/Documents/One/mittens/58d9558f87456f65058b45fd_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [4] "/Users/faltesed/Documents/One/mittens/58d9558f87456f65058b45fd_da39a3ee5e6b4b0d3255bfef95601890afd80709.png"
## [5] "/Users/faltesed/Documents/One/mittens/591c084587456fd56a8b45c5_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [6] "/Users/faltesed/Documents/One/mittens/591c08fe87456fdb068b45bb_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [7] "/Users/faltesed/Documents/One/mittens/591c093a87456fdb068b45bd_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [8] "/Users/faltesed/Documents/One/mittens/591ec9c087456fb3208b4568_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [9] "/Users/faltesed/Documents/One/mittens/5922b64c87456f02158b4592_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [10] "/Users/faltesed/Documents/One/mittens/5922b70987456f79178b456f_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [11] "/Users/faltesed/Documents/One/mittens/5922b77c87456f56158b458c_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [12] "/Users/faltesed/Documents/One/mittens/5922b7f187456f02158b45a2_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [13] "/Users/faltesed/Documents/One/mittens/592b0de887456fd7158b456b_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [14] "/Users/faltesed/Documents/One/mittens/592c4fcc87456fb1368b458c_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [15] "/Users/faltesed/Documents/One/mittens/593db3f487456fa7348b45e3_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [16] "/Users/faltesed/Documents/One/mittens/5956c9ba87456f2f2e8b457f_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [17] "/Users/faltesed/Documents/One/mittens/5aa0eca087456ff31d8b4569_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [18] "/Users/faltesed/Documents/One/mittens/5aa0ef1e87456f33228b4571_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
## [19] "/Users/faltesed/Documents/One/mittens/5c6247045cfdce37b6a8559e_da39a3ee5e6b4b0d3255bfef95601890afd80709.jpg"
This is always your starting point, and it is helpful because it yields an entry in your global environment. It is often useful to have your files in a single format, which in our case is PNG. Our function convert_and_import takes the result of load_images, converts all of your images to PNG, and puts them in a single folder named “converted” in the home directory of your R project. Each image is assigned a new filename, a random combination of numbers and letters, which is associated with the original name for that file. Our files can be found [here](http://www.danielfaltesek.com/mittens.zip).
WARNING: if you run convert_and_import multiple times, your converted directory will get larger and larger. In accordance with tidy principles, our functions are non-destructive. Furthermore, we can imagine use cases where you might choose to convert and land in a single directory for further analysis. If you want a clean convert_and_import, delete the directory between function runs.
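If you want to script that cleanup, base R can remove the directory for you (this sketch assumes the default “converted” folder sits at the top of your R project):

```r
# Remove the converted folder and everything in it so the next
# convert_and_import run starts from a clean slate
unlink("converted", recursive = TRUE)
```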
You can take a look at some of the individual converted Bernies by opening your “converted” directory, selecting an image with the radio button, and then using the “more” dropdown to open the image in another program. You can see that the images were successfully converted.
convert_and_import(images)
Once your images are imported you need to measure them in some way. Your first step in measurement is measure_images, which predictably produces a dataframe called measured_images. This function can tell you many fun things about your images, such as what kind of files they are (if you have not converted), their dimensions, color spaces, and filesizes, and it will OCR any text on the image. This can be very useful for the analysis of memes; our Bernie pictures have no text.
A second useful measurement method is fluency analysis, which employs the methods from the imagefluency package, fielding a dataframe with contrast, self_similarity, symmetry, and complexity. This is a slower function than many others, but it can produce really useful results for your analysis.
measure_images(converted_images)
## [1] "OCR results may be misleading if images include no text"
This is one of the fastest ways to arrive at an overview of your images. Notice that it left a new dataframe in your global environment rather than adding to what you already had; this is in keeping with the non-destructive principles of the tidyverse. To use this data, as in our first plot, you will need dplyr for the join and ggplot2 for plotting:
#for your join
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(ggplot2)
#combine the data side by side
mypictures<-bind_cols(converted_images, measured_images)
#the plotting code
imageplot_output("mypictures", "a", "info.filesize", .5)
dplyr sent a few messages when we loaded the library; ggplot2 did not. This is simply intended to show you how a scatterplot can be produced with this code.
Colors are great fun and are also very useful. Our color methods are in a single function called color_analysis, yielding a dataframe colors_results. Under the hood, we use a number of calculations from both RGB and HSV, depending on your needs.
Basic colors: mean_red, deviation_red, mean_green, deviation_green, mean_blue, deviation_blue
To understand what these measure we need a short detour into color theory. In many primary schools the primary colors are taught as red, blue, and yellow. For subtractive color and rudimentary paints this leads to a series of mixing choices and lovely paintings. More advanced versions, such as offset printing, use cyan, magenta, and yellow, often with a layer of black to save money and produce higher-contrast line effects, thus CMYK. Some colors are not reproduced as consistently and sharply as we might like, which leads to more specific inks, Pantones and the like. With light, by contrast, color is additive: we add colors to produce white.
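As a quick sketch of additive mixing, base R’s grDevices can confirm that full intensity in all three channels is white:

```r
# Additive color: maximum red, green, and blue together produce white
grDevices::rgb(255, 255, 255, maxColorValue = 255)
# returns "#FFFFFF"
```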
Measuring mean red, blue, and green can tell us how much of each color is present, but that doesn’t mean the image is that color; what matters are the ratios. An all-white image would have a lot of every color and no standard deviation. Higher mean values tell you more of a color is present, and the deviations show how much variation there is. Consider this image: the top is blue 255 and the bottom is red 255:
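The intuition can be checked with a toy version of that image’s red channel, zero in the blue half and 255 in the red half (a minimal sketch, not package code):

```r
# Red channel of a half-blue, half-red image: 0 on top, 255 on bottom
red_channel <- c(rep(0, 500), rep(255, 500))
mean(red_channel)  # 127.5, halfway between the two regions
sd(red_channel)    # about 127.6, nearly equal to the mean
```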
The mean values for red and blue fall roughly halfway between the two regions, so the standard deviations are close to the mean values for red and blue. You will also notice that the values for green are very low, as green appears only in some slivers of white on the edges of the image; the standard deviation for green is also very low, as nearly the entire image makes no use of green.
Hue, saturation, and value provide another useful set of color measurements. We can imagine use cases where you might prefer these factors to raw RGB values. For example, it might be helpful to look for trends in saturation, or to compare saturation with the use of a certain color, like red. While it would be possible to ask users to calculate their own transformations between RGB and HSV, we simply provide both in the dataframe for speed and convenience.
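If you ever want to check a conversion yourself, base R’s grDevices::rgb2hsv performs the same RGB-to-HSV transformation:

```r
# Pure red converts to hue 0, full saturation, full value
grDevices::rgb2hsv(r = 255, g = 0, b = 0)
# returns a matrix with h = 0, s = 1, v = 1
```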
color_analysis(converted_images)
head(colors_results)
## mean_red deviation_red mean_blue deviation_blue mean_green deviation_green
## 1 95.22748 80.30425 84.37147 64.31143 87.05828 67.79372
## 2 111.51838 108.84900 114.48620 88.03052 104.99191 90.66390
## 3 112.80675 85.58073 60.37667 54.04264 124.97695 65.97980
## 4 77.07664 72.84817 76.63221 74.23997 78.57176 66.75985
## 5 114.77870 102.26952 102.71738 83.83843 114.48135 91.87764
## 6 98.35654 68.23294 92.27417 64.64829 91.60640 68.78785
## mean_hue deviation_hue mean_saturation deviation_saturation mean_value
## 1 0.4361970 0.2961008 0.5429274 0.3131320 0.4527997
## 2 0.3169398 0.3411196 0.4319762 0.4063933 0.5089563
## 3 0.2393427 0.1750328 0.8134015 0.1729660 0.6534664
## 4 0.4435929 0.2698108 0.5810582 0.2449151 0.4333243
## 5 0.2977604 0.1926652 0.3759916 0.3075038 0.4742432
## 6 0.4993626 0.3134222 0.2694383 0.2008261 0.4214555
## deviation_saturation.1 luminance lum_contrast hue_region
## 1 0.3131320 88.88574 70.80313 Orange
## 2 0.4063933 110.33217 95.84781 Violet
## 3 0.1729660 99.38679 68.53439 Chartreuse
## 4 0.2449151 77.42687 71.28266 Chartreuse
## 5 0.3075038 110.65914 92.66186 Orange
## 6 0.2008261 94.07904 67.22303 Rose
For this tutorial we turned off the warnings; there will be a bunch related to transparent pixels, and they aren’t a problem.
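If you would rather silence those transparent-pixel warnings in your own session, base R’s suppressWarnings works on the call directly:

```r
# Run the color analysis without printing the transparent-pixel warnings
suppressWarnings(color_analysis(converted_images))
```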
Once again, as we complete our color analysis we will need to bind that information with our ongoing dataframe to use it in a plot.
mypictures<-dplyr::bind_cols(mypictures, colors_results)
imageplot_output("mypictures", "mean_red", "mean_blue", .5)
Mean red/blue plots are particularly popular and useful: thinking algebraically, an image’s position on such a plot reflects the balance between its red and blue channels.
## Proportion and Balance
While there was a symmetry method used in the basic section, we have written a few additional functions that work with proportion and balance. Our methods in this section are concerned with finding lines.
symmetry_analysis(converted_images)
## a horiz sd_top sd_bottom vert sd_left sd_right
## 1 1 -0.023156277 0.3119029 0.3401269 0 0.3092852 0.3092852
## 2 2 -0.006694469 0.2736789 0.2846854 0 0.2674252 0.2674252
## 3 3 0.013279192 0.2704635 0.2495093 0 0.2387569 0.2387569
## 4 4 0.033472344 0.2427677 0.1695823 0 0.2284608 0.2284608
## 5 5 -0.007901668 0.1868374 0.2061156 0 0.2206595 0.2206595
## 6 6 -0.004170325 0.1958462 0.2061156 0 0.1716659 0.1716659
## 7 7 -0.048946444 0.4257706 0.4539226 0 0.4215745 0.4215745
## 8 8 -0.005377524 0.2096982 0.2217724 0 0.1864876 0.1864876
## 9 9 -0.025570676 0.2784850 0.3144141 0 0.2971061 0.2971061
## 10 10 -0.018327480 0.1345336 0.1888793 0 0.1453526 0.1453526
## 11 11 0.021949078 0.3131366 0.2846854 0 0.2752399 0.2752399
## 12 12 0.034021071 0.3189229 0.2736623 0 0.3164348 0.3164348
## 13 13 0.033472344 0.2427677 0.1695823 0 0.2284608 0.2284608
## 14 14 0.076602283 0.3498663 0.2495093 0 0.3055756 0.3055756
## 15 15 0.008779631 0.3057539 0.2951127 0 0.2684293 0.2684293
## 16 16 -0.011852502 0.4064274 0.4161762 0 0.4024834 0.4024834
## 17 17 0.060140474 0.3480642 0.2736623 0 0.3114831 0.3114831
## 18 18 -0.016022827 0.2024186 0.2361640 0 0.1882462 0.1882462
## 19 19 0.009328358 0.2520202 0.2361640 0 0.2547780 0.2547780
## central_diagonal corners_diagonal diagonal_overall
## 1 -0.036715566 0.058693550 0.001455463
## 2 -0.031179157 0.100689041 0.026485183
## 3 0.052946418 -0.045928342 0.023366256
## 4 0.009197319 0.001400118 0.003985216
## 5 -0.018807412 -0.031308219 -0.038402962
## 6 0.023462582 0.013281707 0.028977309
## 7 0.013714898 0.018189626 0.029381713
## 8 -0.005366058 -0.034470798 -0.023305286
## 9 -0.053837485 -0.022804257 -0.063128127
## 10 0.008189955 -0.007791836 0.003590208
## 11 0.025978536 0.034413133 0.033683413
## 12 -0.018979401 -0.055897636 -0.055752473
## 13 0.009197319 0.001400118 0.003985216
## 14 0.106022237 0.097581481 0.144061992
## 15 0.038874438 -0.024809620 0.021490039
## 16 0.003344777 0.017425990 0.004764346
## 17 0.003130201 0.066974922 0.035034047
## 18 0.025202128 -0.025269187 0.007851881
## 19 -0.019395450 -0.008392177 -0.026266089
Positive numbers imply a balance toward the left side, negative numbers toward the right. Top and bottom are a similar relationship.
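Because the symmetry scores land in a dataframe, dplyr makes it easy to pull out, say, the most balanced images. (We assume here that the results were captured in a dataframe named symmetry_results; substitute whatever name the function left in your global environment.)

```r
library(dplyr)

# Keep images whose horizontal score is near zero, i.e., well balanced
# across the horizontal axis; symmetry_results is an assumed name
symmetry_results %>%
  filter(abs(horiz) < 0.01) %>%
  arrange(abs(horiz))
```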
Ten items are reported:

- an image index (a)
- horizontal symmetry (horiz), with the SD of the top region (sd_top) and the SD of the bottom region (sd_bottom)
- vertical symmetry (vert), with the SD of the left region (sd_left) and the SD of the right region (sd_right)
- diagonal symmetry along an integral x~y, for the central region (central_diagonal), the corner region (corners_diagonal), and overall (diagonal_overall)
Consider this image:
Vertically, the image has some symmetry, but it is crude. Horizontally, less. Diagonally, much less. The image should have excellent corner symmetry, as there are no lines in either corner.
thirds_images(converted_images)
## [1] -0.009317752
## [1] 0.03082091
## [1] 0.002167584
## [1] -0.0120371
## [1] -0.007858339
## [1] -0.009440681
## [1] -0.002358848
## [1] -0.02606116
## [1] 0.02408751
## [1] 0.05737095
## [1] 0.01006745
## [1] -0.01653075
## [1] 0.005858146
## [1] 0.02268822
## [1] -0.01175179
## [1] -0.01288436
## [1] -0.002135554
## [1] 0.04291959
## [1] 0.006558581
## [1] -0.02662099
## [1] 0.0116277
## [1] 0.0391607
## [1] -0.0120861
## [1] -0.009705882
## [1] 0.007937547
## [1] 0.0121316
## [1] -0.01454519
## [1] 0.0006366979
## [1] -0.0007035558
## [1] 0.03215545
## [1] 0.01071202
## [1] -0.01362299
## [1] -0.01978682
## [1] 0.01611885
## [1] 0.01310768
## [1] -0.007072193
## [1] 0.0008556807
## [1] 0.02377451
## [1] -0.01159285
## [1] 0.001535762
## [1] 0.01862532
## [1] 0.04763346
## [1] 0.0005898726
## [1] -0.02295622
## [1] -0.005034404
## [1] 0.008953256
## [1] 0.006282075
## [1] -0.02353108
## [1] 0.005858146
## [1] 0.02268822
## [1] -0.01175179
## [1] -0.01288436
## [1] 0.05861109
## [1] 0.02539905
## [1] 0.01589446
## [1] -0.005910762
## [1] 0.01518018
## [1] -0.01320669
## [1] 0.001359118
## [1] -0.02956049
## [1] 0.009024704
## [1] 0.01743355
## [1] 0.01153452
## [1] -0.01840909
## [1] 0.01633339
## [1] 0.03257401
## [1] 0.01411668
## [1] -0.01561832
## [1] 0.01124901
## [1] 0.04286034
## [1] 0.0004234428
## [1] -0.01104445
## [1] -0.007767378
## [1] 0.01770819
## [1] 0.0021372
## [1] -0.005051805
## a low_hor high_hor left_vert right_vert vert_focal
## 1 1 -0.0093177523 0.030820912 0.0021675840 -0.0120370989 High
## 2 2 -0.0078583394 -0.009440681 -0.0023588485 -0.0260611631 Low
## 3 3 0.0240875130 0.057370955 0.0100674495 -0.0165307487 High
## 4 4 0.0058581458 0.022688224 -0.0117517940 -0.0128843583 High
## 5 5 -0.0021355544 0.042919593 0.0065585805 -0.0266209893 High
## 6 6 0.0116277029 0.039160703 -0.0120861036 -0.0097058824 High
## 7 7 0.0079375473 0.012131605 -0.0145451928 0.0006366979 High
## 8 8 -0.0007035558 0.032155455 0.0107120216 -0.0136229947 High
## 9 9 -0.0197868158 0.016118854 0.0131076759 -0.0070721925 High
## 10 10 0.0008556807 0.023774510 -0.0115928509 0.0015357620 High
## 11 11 0.0186253180 0.047633463 0.0005898726 -0.0229562166 High
## 12 12 -0.0050344043 0.008953256 0.0062820748 -0.0235310829 High
## 13 13 0.0058581458 0.022688224 -0.0117517940 -0.0128843583 High
## 14 14 0.0586110909 0.025399046 0.0158944604 -0.0059107620 Low
## 15 15 0.0151801802 -0.013206694 0.0013591175 -0.0295604947 Low
## 16 16 0.0090247043 0.017433554 0.0115345160 -0.0184090909 High
## 17 17 0.0163333897 0.032574012 0.0141166849 -0.0156183155 High
## 18 18 0.0112490129 0.042860342 0.0004234428 -0.0110444519 High
## 19 19 -0.0077673779 0.017708195 0.0021371999 -0.0050518048 High
thirds_results
## a low_hor high_hor left_vert right_vert vert_focal
## 1 1 -0.0093177523 0.030820912 0.0021675840 -0.0120370989 High
## 2 2 -0.0078583394 -0.009440681 -0.0023588485 -0.0260611631 Low
## 3 3 0.0240875130 0.057370955 0.0100674495 -0.0165307487 High
## 4 4 0.0058581458 0.022688224 -0.0117517940 -0.0128843583 High
## 5 5 -0.0021355544 0.042919593 0.0065585805 -0.0266209893 High
## 6 6 0.0116277029 0.039160703 -0.0120861036 -0.0097058824 High
## 7 7 0.0079375473 0.012131605 -0.0145451928 0.0006366979 High
## 8 8 -0.0007035558 0.032155455 0.0107120216 -0.0136229947 High
## 9 9 -0.0197868158 0.016118854 0.0131076759 -0.0070721925 High
## 10 10 0.0008556807 0.023774510 -0.0115928509 0.0015357620 High
## 11 11 0.0186253180 0.047633463 0.0005898726 -0.0229562166 High
## 12 12 -0.0050344043 0.008953256 0.0062820748 -0.0235310829 High
## 13 13 0.0058581458 0.022688224 -0.0117517940 -0.0128843583 High
## 14 14 0.0586110909 0.025399046 0.0158944604 -0.0059107620 Low
## 15 15 0.0151801802 -0.013206694 0.0013591175 -0.0295604947 Low
## 16 16 0.0090247043 0.017433554 0.0115345160 -0.0184090909 High
## 17 17 0.0163333897 0.032574012 0.0141166849 -0.0156183155 High
## 18 18 0.0112490129 0.042860342 0.0004234428 -0.0110444519 High
## 19 19 -0.0077673779 0.017708195 0.0021371999 -0.0050518048 High
## hor_focal
## 1 Left
## 2 Left
## 3 Left
## 4 Left
## 5 Left
## 6 Right
## 7 Right
## 8 Left
## 9 Left
## 10 Right
## 11 Left
## 12 Left
## 13 Left
## 14 Left
## 15 Left
## 16 Left
## 17 Left
## 18 Left
## 19 Left
This is our approach to reading images for a standard set of composition trends known as the rule of thirds. Compositions generally position key figures on a tic-tac-toe-like grid. This function segments each image into four thirds regions, which are then compared for Canny edges. Negative scores mean there is more activity off the third; positive scores mean there is more activity on the third than on the opposing one. If all scores are very low, it is likely that the image does not conform to the rule of thirds.
Under standard aesthetic assumptions, the yellow figure is on multiple thirds and thus is primary, the blue figure is not. Typically, being in the center square is also less interesting.
Notice that we also have two discrete outputs that tell you which thirds were highest. These are simply there to save you a calculation. In a few cases, the neural net did shift the thirds, meaning that core composition elements changed.
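Since thirds_images leaves thirds_results in your global environment, a dplyr filter can flag images where every thirds score is near zero, i.e., images that likely ignore the rule of thirds (the 0.01 cutoff is only an illustrative threshold):

```r
library(dplyr)

# Images with uniformly tiny thirds scores probably do not follow the rule
thirds_results %>%
  filter(abs(low_hor) < 0.01, abs(high_hor) < 0.01,
         abs(left_vert) < 0.01, abs(right_vert) < 0.01)
```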
edge_analysis(converted_images)
## [1] "edge analysis complete"
This is by far our largest measurement. It includes a region-by-region breakdown of the Canny edges detected in each region and the relative standard deviation of the edges in that region. Each region also has a skewness measure and a kurtosis measure, which describe how much the distribution of edges in that region leans and how peaked it is.
The use cases for this particular model include looking at stacks for particular regions where there is a higher density of activity, or looking for defocused or empty areas. An approach looking for images that are blue with low edge values in regions 1-4 would be a “sky” detector. If faces were known to be in the images and thirds were established, the scores for R8 and R12 could be used to look for lead room. There are many ways you can imagine using this approach to compare particular areas across a set of images; we hope this result, while voluminous, is flexible.
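The “sky” detector above can be sketched with dplyr, assuming the edge measurements have been joined into mypictures and that the per-region edge columns are named R1 through R16 (both the join and the column names are assumptions; check your own dataframe):

```r
library(dplyr)

# Blue-dominant images with quiet top regions are sky candidates;
# the 0.05 edge-density cutoff is illustrative, not canonical
sky_candidates <- mypictures %>%
  filter(mean_blue > mean_red, mean_blue > mean_green,
         R1 < 0.05, R2 < 0.05, R3 < 0.05, R4 < 0.05)
```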
Each region represents one-sixteenth of the image, numbered in rows from the top left corner, which is R1. The extreme bottom right is R16.
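Because the sixteen regions are numbered row-wise, a region’s grid position can be recovered with a little arithmetic (a convenience sketch, not a package function):

```r
# Row and column of a region in the 4x4 grid; R1 is top left, R16 bottom right
region_position <- function(r) {
  c(row = ceiling(r / 4), col = ((r - 1) %% 4) + 1)
}
region_position(7)   # row 2, column 3
region_position(16)  # row 4, column 4
```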
It is straightforward to write new functions that could approximate symmetry or focus detection. Note: it is not uncommon for this function to throw NaN when there are regions of the graphic with no edges. This is a drawback of our method.
For your reference, this is a map of our region buckets.
As long as your analytics method produces a dataframe or dataframe-like object, you should be able to join it with the underlying data to produce a plot using this method. Google Cloud classifiers would be a relatively simple method to implement. This is the strength of the tidy integration.
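Any external per-image scores can be attached the same way; here we assume a hypothetical dataframe my_classifier_scores keyed on the same local_path column our loaders produce:

```r
library(dplyr)

# Join outside scores onto the running dataframe by shared file path;
# my_classifier_scores is a hypothetical example, not part of this package
mypictures <- left_join(mypictures, my_classifier_scores, by = "local_path")
```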
A future version of this package will likely include a segmented mapping function that would allow, under the conventions of purrr, a user-defined function to be applied to image segments.
## Outputs and Aesthetics
So far you are familiar with our basic plotting function. It is intended to help folks who are unfamiliar create meaningful plots.
Within the function there are really three distinct things happening:

A. a function is “passing alpha” to adjust the transparency of your images
B. a ggplot2 object is using the standard grammar of graphics approach to build a plot
C. the library ggimage is being used to append the images with modified alpha, which requires a specific assignment of file names
library(ggplot2)
#sub-routine A is passing alpha
transparent <- function(img){magick::image_fx(img, expression= ".1*a", channel = "alpha")}
#B is the basic GG plot
ggplot(mypictures, aes(a, info.filesize))+
#and C is the geom
ggimage::geom_image(image=mypictures$local_path,image_fun=transparent) + coord_polar()
Because we are using the gg paradigm for producing output graphics, all of your strategies for using ggplot2 can be used here as well. Because we are using a particular geom, we can’t play with many of the options on the front side of the cheatsheet. The back right-hand side is where the fun begins.
#old stuff
transparent <- function(img){magick::image_fx(img, expression= ".1*a", channel = "alpha")}
ggplot(mypictures, aes(a, info.filesize))+
ggimage::geom_image(image=mypictures$local_path,image_fun=transparent) +
#newstuff
#let's make this polar with a linedrawn background and a big title
coord_polar()+theme_linedraw()+labs(title = "Mittens Are Warm.")
While the aesthetics aren’t perfect here, you can start to see how a polar plot could be really powerful for things like a symmetry plot, or other times when we are looking for outliers.
#old stuff
transparent <- function(img){magick::image_fx(img, expression= ".1*a", channel = "alpha")}
ggplot(mypictures, aes(a, info.filesize))+
ggimage::geom_image(image=mypictures$local_path,image_fun=transparent) +
#newstuff
#no styling, but definitely faceting
facet_wrap(~hue_region)
In this case we faceted by hue_region, which yields the five tertiary color zones. Faceting an image plot is a great choice with any discrete variable, such as the years of an author’s work; facet_wrap is best as it will preserve the shape of your images.