This one is geared toward experiment design and iterative profile development. It's not a feature people should be using all the time, but it should be a good quality-of-life improvement for people who need it while staying out of the way for people who don't.
Started work on implementing what I think is probably the biggest feature for what I hope will be the next update to CRUCS, though there are a few other things I'd like to try to get into that release. This involves one highly invasive change, so I'll need to audit the whole (thankfully small) existing code base to make sure everything is updated to do things slightly differently, and there's one aspect where I'll want to try a few different approaches to see what works best.
New tutorial video is up. Here I'm taking roasting data from two batches of coffee that initially look very different and showing another way of viewing the data that explains how these ended up matching on the sensory spec.
It's an analysis technique that I've been successfully using for over two decades that never really caught on, I think in large part because there wasn't any software to make it easy. The latest CRUCS update changes that.
Decided it would be less work to just re-record the tutorial, and instead of trying to figure out which graphical screen recorder was going to work for me, I just used ffmpeg. Should probably stick the command into a one-line shell script with a nice name so I don't have to look up the options I need next time.
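A wrapper along these lines might look like the sketch below. The script name, frame rate, and encoder settings here are my assumptions, not the exact command used for the tutorial; querying xdpyinfo for the display's true pixel dimensions is one way to avoid the cropped-capture problem on a HiDPI screen.

```shell
#!/bin/sh
# record-screen.sh — hypothetical wrapper; name and settings are guesses.
# Ask X for the real pixel dimensions so a HiDPI screen is captured
# whole instead of only the upper-left quadrant.
size="$(xdpyinfo 2>/dev/null | awk '/dimensions:/ {print $2}')"
out="${1:-tutorial.mkv}"
cmd="ffmpeg -f x11grab -framerate 30 -video_size ${size:-1920x1080} \
-i ${DISPLAY:-:0.0} -c:v libx264 -crf 18 -pix_fmt yuv420p $out"
echo "$cmd"   # print it first; swap the echo for exec once it looks right
```

Printing the command before running it makes it easy to sanity-check the resolution that was detected; replacing the `echo` with `exec` turns it into the one-line recorder.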
Recorded a new tutorial video, but the screen recorder only grabbed the upper-left quarter of the screen instead of the whole thing (HiDPI issue, I guess? What are people using for screen recording on Linux/X.org these days?). I can probably still make it work instead of re-recording the whole thing, but I'll want to eat lunch before I attempt that.
Author of Typica software for coffee roasters.