
Figured out my next avenue of attack for the problem I've been working on. Once I figure it out it's probably worth writing up a paper that like 3 people would read.

A sample size of one roast isn't enough to say anything useful, but it looks like the new code isn't worse than the old code; it's just not as much of an improvement as I was hoping for. There's a parameter I can tweak before tomorrow's roasting to see if that helps, but I might need to investigate a different approach.

An initial test of just holding stuff up in front of the camera shows that it takes my code a little longer to settle on a value (as expected), but it's settling on the same value as the old code, so I'm tempted to have the system output the new values and see how that looks in Typica with a real batch.


Patched in a first attempt at what I hope will fix the issue I've been working on with the new instrument. For now I've got the display showing the results of both the old and the new calculations so I can see how those compare, but I didn't put any rounding on my approach, so it's showing far more digits after the decimal point than are actually significant.
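Clamping the displayed value to a sensible digit count is easy enough to add later. A generic Python sketch of the idea, not the instrument's actual code, and the decimal count here is just a placeholder:

```python
def display_value(raw: float, decimals: int = 1) -> str:
    """Round a raw measurement to the digits that are actually
    significant before showing it, instead of dumping full precision."""
    return f"{raw:.{decimals}f}"
```

The measurement itself stays full precision; only the string shown on screen gets trimmed.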

There's some motivation to get this done quickly, though, as once I've got better quality data coming out of this I'll be able to start generating new figures to use for my book incorporating that data.

The main challenge to improving the code on the device is that I don't actually know the programming language the existing code is written in (I know what language it is, but I've never written anything nontrivial in it). The existing code is already bringing in enough dependencies that everything I need is there, but there's a lot of digging through the documentation to find out what the parts I want are named and what would be considered an idiomatic expression of what I'm trying to write.

If everything works out as I'm hoping, the new instrument could be a big usability improvement over competing products that operate on different principles.


Roasted a few batches of coffee with the new debug labels, and that does seem to be both opening up a new avenue of investigation and giving me a little more confidence that I can use math to fix the problem I'm looking into. There's enough extra compute capability that I can run the original algorithm and what I think might work better at the same time, and maybe I can just temporarily write a CSV with the output from both to verify that I'm getting the improvements I expect are possible.
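That side-by-side CSV is simple to sketch. This is Python rather than whatever the device actually runs, and every name here is invented for illustration:

```python
import csv

def log_comparison(samples, old_algo, new_algo, out):
    """Run the original and the candidate calculation on the same
    input samples and write both results side by side to `out`
    (any writable file-like object) for later review."""
    writer = csv.writer(out)
    writer.writerow(["input", "old", "new"])
    for s in samples:
        writer.writerow([s, old_algo(s), new_algo(s)])
```

Feeding both columns from the same sample stream means any disagreement is down to the algorithms, not the data.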

Star Trek Voyager is a magical girl show. See the doctor's transformation when he becomes the emergency command hologram.

Today I realized I still do not think of environment variables how many of you do, and certainly not how I was meant to.

I just read a manual on troff from 1976 and now I might finally get what you're all on about. (not related, but troff uses environments and the concept was made clear, not "it's an array of strings, go away").

kids, I'm starting to think a whole lot of problems for unix newbies could be solved at once by really digging in and annotating the heck out of the execve(2) page.

In last night's dream I got home to find a couple people working on setting up an eclectic lab of 80s/90s Macs from an alternate reality in which Apple stuck with and scaled up the boxy all-in-one design of the Classic. Several of the machines had been personalized with many stickers. One had a big mouse basket mounted to the side. There was also an all-black one called the Mac VR with two pairs of glasses.

I think I've figured out how to express what should be a more robust approach as well, so maybe I'll let the device run and display the results from both algorithms and see if mine really is an improvement. If it is, I can upstream that. If it isn't, well, I'll have learned something.


Added a stack of debug labels off to the side of the screen so I can see the data that's getting used for current value calculations, and while I still don't like the algorithm that's being used, I'm less confident that it's the source of the problem I've been noticing now that I understand the data flow a little more fully. Still, I'll take a look at that on my next roasting session and see if I notice anything.

Apparently the postal service has a loyalty program now?

Doing that cross-checking, with the latest hardware and software updates most of the batches are checking out well, but the instrument seems to be biased toward producing a darker measurement. There's a suspicious operation in the code on the instrument that I think could be responsible for this, but I'd like to add some additional debug capabilities to look specifically at how the values in one array change during real batches and see if my intuition on that problem pans out.
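Watching how one array changes during a batch comes down to keeping timestamped copies and diffing them. A hypothetical Python sketch of that debug capability (the instrument's firmware is in a different language, and these names are made up):

```python
class ArrayWatcher:
    """Keep timestamped copies of a watched array so its per-sample
    changes during a real batch can be reviewed afterward."""

    def __init__(self):
        self.history = []

    def snapshot(self, t, values):
        # Store a copy, not a reference, or every entry would alias
        # the live array and show only its final state.
        self.history.append((t, list(values)))

    def deltas(self):
        """Element-wise change between consecutive snapshots."""
        out = []
        for (t0, a), (t1, b) in zip(self.history, self.history[1:]):
            out.append((t1, [y - x for x, y in zip(a, b)]))
        return out
```

A systematic drift in one direction across the deltas would be consistent with the darker-bias suspicion.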


This doesn't check that the readings are in any way accurate (that's what cross-checking actual roasted coffees against something that's not a prototype is for), just that the sending and receiving systems agree on what the signal represents.


My software displays the measurements at a higher precision, while the instrument generating the signal rounds its display to the nearest integer (while still outputting the full-precision measurement), but everything was within rounding distance, so I'm going to just call that good.
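"Within rounding distance" boils down to one comparison: the full-precision reading can't be more than half a unit from the integer the other display shows. A minimal Python sketch of that check, with invented names:

```python
def agrees_after_rounding(full_precision: float, rounded_display: float) -> bool:
    """True if a full-precision reading could legitimately round to
    the integer value the other display is showing."""
    return abs(full_precision - rounded_display) <= 0.5
```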


Yesterday I was out buying more cleaning supplies so I also picked up a bunch of those cards that you can take to see how different paint colors might look under your lighting in assorted shades of brown. These were used to check that my software is producing the same readings as what the roaster cam is generating (hold color in front of camera, wait for reading to stabilize, compare the readings on the two displays).
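"Wait for the reading to stabilize" can be made precise: the value has settled once the last several readings sit within a small tolerance of each other. A generic Python sketch, assuming a hypothetical window size and tolerance rather than anything the real software uses:

```python
def is_stable(readings, window=5, tolerance=0.2):
    """True when the last `window` readings all fall within
    `tolerance` of each other, i.e. the value has settled."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return max(recent) - min(recent) <= tolerance
```

With something like this, the comparison against the paint cards could even be automated instead of eyeballed.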

Typica Social
