Half gallon bottles arrived today so that size chai concentrate is once again available, at least while our current bottle supply lasts. We were out of stock on that size for months and I don't know if the supply disruption is completely resolved or if we can expect another massive delay when we place the next order.
One thing I've noticed with the latest algorithm change is that the roaster cam now catches where the chaff is coming off the coffee. That's invisible in any single frame, but when you stack up 20 frames there's a clear signal where that area gets brown while the rest of the coffee is still green. (Unless you're roasting decaf, since decaf coffees no longer have any of the silver skin layer to come off as chaff during roasting.)
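A minimal sketch of that kind of frame stacking, assuming RGB frames from the cam and a crude color threshold (the actual detection algorithm is not described here, and the threshold values are made-up assumptions):

```python
import numpy as np

def chaff_signal(frames):
    """Average a stack of frames and flag pixels that read brown.

    frames: list of HxWx3 uint8 RGB arrays from the roaster cam.
    A single frame is too noisy to show chaff, but averaging ~20
    frames lets regions where brown chaff collects stand out
    against coffee that is still green.
    """
    mean = np.mean(np.stack(frames), axis=0)  # HxWx3 float
    r, g, b = mean[..., 0], mean[..., 1], mean[..., 2]
    # Crude color heuristic (an assumption for illustration only):
    # brown pixels have red clearly dominant over green and blue.
    brown = (r > g * 1.15) & (r > b * 1.3)
    return brown  # boolean mask of likely chaff regions
```

The averaging is what does the work: per-frame sensor noise mostly cancels out, while a persistent brown patch survives the mean and clears the threshold.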
2347. Dependency
title text: Someday ImageMagick will finally break for good and we'll have a long period of scrambling as we try to reassemble civilization from the rubble.
(https://xkcd.com/2347)
(https://www.explainxkcd.com/wiki/index.php/2347)
The stupid simple optimizations turned out to be effective. I also confirmed another bottleneck in the original code that caught my eye when I first saw it (I didn't change it then because I was less familiar with the code and performance wasn't yet a problem). Fixing that lets the latest version of the code run a lot faster than it used to, even with a heavier weight algorithm running alongside the original.
@neal Zeus is a dick, I'm for this
While running errands, the radio was doing that thing where it's mostly playing one station, but then it briefly cuts over to a different station for just a single word, like "confusion" or "radar". I think the other program may have been something like a very slow paced, word based game show.
That might be long-term motivation to port the whole codebase over to something that can run on my laptop.
Part of the performance surprise is that I'm prototyping the algorithms on my laptop, which granted is a few years old, but was pretty high end when I got it. The math library in use sees what I'm doing, says, "okay, I'll just offload all that to the GPU," and I get my result instantly. Then I try it on the hardware it needs to run on and all of that gets pushed back to a much slower CPU.
First test batch on the new algorithm ran into performance issues that make it kind of painful to use, but it slashed the error in half compared to the original code. With a sample size of 1 that's not yet meaningful, but I've made some adjustments aimed at improving performance and I'll see how those do after staff lunches are finished.
Author of Typica software for coffee roasters.