That might be long-term motivation to port the whole codebase over to something that can run on my laptop.
Part of the performance surprise is that I'm prototyping the algorithms on my laptop, which, granted, is a few years old but was pretty high end when I got it. The math library in use sees what I'm doing and says, okay, I'll just offload all that to the GPU, and I get my result instantly. Then I try it on the hardware it needs to run on and all of that gets pushed back to a much slower CPU.
First test batch on the new algorithm ran into performance issues that make it kind of painful to use, but it cut the error in half compared to the original code. With a sample size of 1 that's not exactly meaningful, but I've made some adjustments aimed at improving performance and I'll see how those do after staff lunches are finished.
Spent some time working on another new approach for generating measurements from the roaster cam. Results on some previously gathered test data look a lot better (25% error reduction, though I suspect I can push that further by cranking up a parameter I just didn't have enough data to raise). I'll take my best guess at an implementation based on last night's exploration and work on getting a larger set of testing data for future refinements.
In normal times, the fire extinguisher maintenance people just come on a schedule, do their thing, and let me know if something more expensive than usual is coming up or required. If they don't show up and we don't have a fire requiring an extinguisher, the only people double checking that are the fire inspectors.
I have an initial set of test images to work with now. Sadly, the information I wanted in the file names is not what got put there (the original code was placed after the relevant variable had been re-used for something less useful and I failed to notice that when adapting it). It's not a big deal because I can just replicate the code and have it load data from a file instead of from a camera, but it's more work to get started on the analysis than I hoped.
From there, I can see if there's an easy way to automatically classify the images that perform well (I have some thoughts on this, but I need a data set to test those ideas against), and whether the worse-performing images share anything reliable enough that a different approach to calculating the measurement might produce better results.
The idea here is that 10 frames from each of several batches should be representative of the different image characteristics the system encounters, and if I save those just before the end of the batch, I can see how different types of images compare to the expected value as produced by a bench-top analyzer. The expected values for those 10 frames should be very close to each other and to the post-roast measurement.
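That agreement check is easy to express in a few lines. A Python sketch (the function name and the tolerance value are illustrative assumptions, not anything from Typica):

```python
import statistics


def frame_agreement(frame_values, bench_value, tolerance=1.0):
    """Check that per-frame degree-of-roast readings agree with each
    other and with the post-roast bench-top measurement.

    Returns (mean, spread, within_tolerance); tolerance is a made-up default.
    """
    mean = statistics.mean(frame_values)
    spread = max(frame_values) - min(frame_values)
    return mean, spread, abs(mean - bench_value) <= tolerance
```

A large spread across the 10 frames would flag a batch where the images aren't as interchangeable as hoped, independent of how close the mean lands to the bench-top number.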
Today's code modification to the roaster cam changes the behavior of the debug save button. Previously, this created three images from the next frame captured: the original image, the mask calculated for that image, and the masked image. The file name also includes what the system calculated as the degree of roast for that frame. I've changed this to do that for the next 10 frames, sticking each batch into its own directory.
Author of Typica software for coffee roasters.