I have an initial set of test images to work with now. Sadly, the information I wanted in the file names is not what got put there (the original code was placed after the relevant variable had been re-used for something less useful and I failed to notice that when adapting it). It's not a big deal because I can just replicate the code and have it load data from a file instead of from a camera, but it's more work to get started on the analysis than I hoped.
@goat@hellsite.site Billed as a walkthrough of World 1-1, see Shigeru Miyamoto repeatedly fail to complete the level before just giving up on it. https://www.youtube.com/watch?v=zRGRJRUWafY
From there, I can see if there's an easy way to automatically classify the images that perform well (I have some thoughts on this, but I need a data set to test those ideas against), and whether there's anything consistent about the worse-performing images that suggests a different approach to calculating the measurement might produce better results.
The idea here is that 10 frames from each of several batches should be representative of the different image characteristics the system encounters, and if I save those just before the end of the batch I can see how different types of images compare to the expected value as produced by a bench top analyzer. The calculated values for those 10 frames should be very close to each other and to the post-roast measurement.
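As a quick sanity check on that expectation, the spread of the saved frame readings and their offset from the bench top number could be computed with something like this (a minimal Python sketch; the function and its inputs are hypothetical, not from the roaster cam code):

```python
from statistics import mean, pstdev

def frame_consistency(frame_values, benchtop_value):
    """frame_values: the degree-of-roast numbers calculated for the
    saved frames; benchtop_value: the post-roast analyzer measurement.

    Returns (spread, offset): how tightly the frames agree with each
    other, and how far their average sits from the bench top number.
    """
    return pstdev(frame_values), mean(frame_values) - benchtop_value
```

A small spread with a large offset would point at a calibration problem; a large spread would point at frame-to-frame noise instead.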
Today's code modification to the roaster cam changes the behavior of the debug save button. Previously this created three images based on the next frame captured: the original image, the mask calculated for that image, and the masked image. The file name also includes the degree of roast the system calculated for that frame. I've changed this to do that for the next 10 frames, sticking each batch's images into their own directory.
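A rough sketch of that behavior in Python (all names, the file layout, and the use of pre-encoded image bytes are my assumptions, not the actual roaster cam code):

```python
from pathlib import Path

FRAMES_PER_BATCH = 10  # frames saved per debug button press

def save_debug_frames(batch_id, frames, out_root="debug"):
    """Save debug output for up to FRAMES_PER_BATCH frames.

    frames: iterable of (original, mask, masked, degree_of_roast)
    tuples, where each image is already-encoded bytes (e.g. PNG data).
    """
    # Each batch gets its own directory.
    batch_dir = Path(out_root) / f"batch-{batch_id}"
    batch_dir.mkdir(parents=True, exist_ok=True)
    for i, (original, mask, masked, degree) in enumerate(frames):
        if i >= FRAMES_PER_BATCH:
            break
        for kind, data in (("original", original),
                           ("mask", mask),
                           ("masked", masked)):
            # The calculated degree of roast goes into the file name so
            # each frame can later be compared to the bench top reading.
            name = f"frame{i:02d}-{kind}-{degree:.1f}.png"
            (batch_dir / name).write_bytes(data)
```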
A sample size of one roast isn't enough to say anything useful, but it looks like the new code is no worse than the old code, though not as much of an improvement as I was hoping for. There's a parameter I can tweak before tomorrow's roasting to see if that helps, but I might need to investigate a different approach.
An initial test of just holding stuff up in front of the camera shows that it takes my code a little longer to settle on a value (as expected), but it's settling on the same value as the old code, so I'm tempted to have the system output the new values and see how that looks in Typica with a real batch.
Patched in a first attempt at what I hope will fix the issue I've been working on with the new instrument. For now I've got the display showing the results of both the old and the new calculations so I can see how those compare, but I didn't put any rounding on my approach so it's showing far more digits after the decimal point than are significant.
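Rounding just the displayed value (while keeping full precision internally) is a small formatting change. In Python it might look like this; the device code is in a different language, so the name and default precision here are purely illustrative:

```python
def format_degree(value, places=1):
    """Format a calculated value for display, keeping the unrounded
    number available for logging and later analysis."""
    return f"{value:.{places}f}"
```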
The main challenge to improving the code on the device is that I don't actually know the programming language the existing code is written in (I know what language it is, but I've never written anything nontrivial in it). The existing code is already bringing in enough dependencies that everything I need is there, but there's a lot of digging through the documentation to find out what the parts I want are named and what would be considered an idiomatic expression of what I'm trying to write.
If everything works out as I'm hoping, the new instrument could be a big usability improvement over competing products that operate on different principles.
Roasted a few batches of coffee with the new debug labels, and that does seem to open up a new avenue of investigation while giving me a little more confidence that I can use math to fix the problem I'm looking into. There's enough extra compute capability that I can run the original algorithm and what I think might work better at the same time, and maybe I can just temporarily write a CSV with the output from both to verify that I'm getting the improvements I expect are possible.
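That temporary CSV logging could be as simple as the following sketch (Python with the standard library's csv module; the function name and column headers are my inventions):

```python
import csv

def write_comparison_csv(path, rows):
    """rows: iterable of (frame_index, old_value, new_value) tuples so
    both algorithms' outputs for the same frames land side by side."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "old_algorithm", "new_algorithm"])
        writer.writerows(rows)
```

One file per roast would make it easy to diff the two algorithms' behavior over the course of a batch.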
Author of Typica software for coffee roasters.