Someone on Reddit asked about automatically detecting fuse colors in an aligned image. I thought that was a neat task. After a bit of fumbling, I wrote a simple script that generates a probability distribution over hues, where each pixel's contribution is scaled by how light or dark it is. (Darker pixels are noisier and can contribute oddly; lighter pixels can drag the average off-kilter or wash it out.)
Here’s the code:
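(The original script isn't reproduced here; what follows is a minimal sketch of the idea in plain Python using the stdlib `colorsys` module. The bin count and the exact shape of the confidence curve are my guesses, not the original's.)

```python
import colorsys

def hue_distribution(pixels, bins=36):
    """Build a probability distribution over hue bins from RGB pixels.

    Each pixel votes for its hue bin, but the vote is weighted by a
    confidence score that down-weights very dark pixels (noisy) and
    very light pixels (washed out). The triangular confidence curve
    here is illustrative: 0 at black and white, 1 at mid-brightness.
    """
    hist = [0.0] * bins
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        confidence = 1.0 - abs(2.0 * v - 1.0)
        hist[int(h * bins) % bins] += confidence
    total = sum(hist)
    # Normalize so the weights form a probability distribution.
    return [w / total for w in hist] if total else hist
```

Feeding in the pixels of an aligned fuse image and taking the argmax of the distribution (or comparing it against reference distributions for known fuse colors) would give the detected color.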
I finished enough of my machine learning library to start doing some more fun things. The MNIST dataset, while totally awesome, got a little boring, so I decided to see if my code could automatically generate the original 151 Pokémon.
Probably the most important thing I learned in this process: RBMs are great at modeling binary data, but for continuous data, you’re going to have a bad time. To circumvent this limitation, just break luminosity into multiple bits per step. Check out GreyMatrixToBitMatrix in my Aij code. (https://github.com/JosephCatrambone/Aij/blob/master/src/main/java/com/josephcatrambone/aij/utilities/ImageTools.java)
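The bit-expansion trick can be sketched like so. (This is a Python illustration of the idea; the actual GreyMatrixToBitMatrix lives in the Java code linked above, and its exact encoding may differ.)

```python
def grey_to_bits(grey_values, bits_per_pixel=8):
    """Expand each greyscale value (0-255) into its binary digits, so an
    RBM with binary visible units can model continuous luminosity.
    One pixel becomes bits_per_pixel binary units, MSB first.
    """
    out = []
    for v in grey_values:
        out.extend((v >> (bits_per_pixel - 1 - i)) & 1
                   for i in range(bits_per_pixel))
    return out

def bits_to_grey(bits, bits_per_pixel=8):
    """Inverse: collapse each group of bits back into a greyscale value,
    e.g. to view what the RBM reconstructed."""
    return [
        sum(b << (bits_per_pixel - 1 - i)
            for i, b in enumerate(bits[j:j + bits_per_pixel]))
        for j in range(0, len(bits), bits_per_pixel)
    ]
```

The cost is a visible layer that's 8x wider, but every unit is genuinely binary, which is what the standard RBM energy function assumes.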
The system still had trouble generating novel images, which perhaps is to be expected. Even with a mean filter, though, the results weren't great. Someone recommended a median filter instead, but I'm not grokking how that would interact with the bigger picture. I suspect the best improvement will come in the form of convolution, but I'm running into a few bugs with that right now. Soon.