The original words of Phanes, tirelessly carved into a slab of "No".

The Return of MAG, and Also the Exit of MAG

Nothing on the job front, yet.

On the fitness front, cubital tunnel caught up to me. I'm letting that left arm heal up solid before I get back on the bar, to avoid making it worse. I'm still shaping up otherwise, so there's that.

Projections say 1-2 months before shit hits the fan without a revenue stream. If this stretches out much further I’ll have to start selling off belongings. So it’s apply, apply, apply during a massive market recession and hope for the best. No one’s coming to save me and there’s nowhere to go that’s going to be safe. The only way out is to get hired or land a contract. Truthfully, I’m getting somewhat worried.

While waiting for calls back on applications, I've prioritized converting Tetsuo SOL from its MK-VII architecture to MK-8 so it can be hosted on local resources. That's 50 bucks a month in hosting costs saved, so it counts. Local hosting will depend on my internet connection and electricity, so it doesn't fully solve the problem, but it buys some time.

Conversion is moving at a frenzied pace, but along the way I've found and fixed tons of issues, and it looks like the new MK-8 build is going to have much higher accuracy.

SIG was easy. During the conversion I found that, even though it was building a confidence score from a 10-day walking forecast during data feature configuration, it was dropping a forecast probability score that would be very useful for gauging confidence separately from the existing confidence score. Both values are needed; that second one just wasn't being used, and it should be.

MAG was, as has historically been the case with that project, a sole source of pain: bloated, full of errors, complex, inaccurate, and poorly aligned to its goal. While it's a critical piece of Tetsuo, it was almost not worth converting. After ripping out a bunch of legacy shit left over from my regressor model exploratories in January of 2025, I found that its forecast strategy was wrong. MAG is intended to forecast the magnitude of change for a symbol, which this system best represents as a percentage. What it was actually doing was forecasting the price for the next two days and calculating the magnitude of percent change from those, which is naive at best: it compounds forecast error by 2x instead of 1x, so the result carries half the confidence a single score would. In that model the output ends up driven more by symbol momentum at T[0] than by the T+1:T+2 percent change.
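To put a rough number on that compounding: if each price forecast carries independent error with standard deviation sigma, the error in the difference of two forecasts has variance 2·sigma² (std √2·sigma), versus sigma² for one direct forecast. A quick simulation sketch, with hypothetical numbers rather than MAG's actual error model:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0        # per-forecast error std (hypothetical)
n = 100_000        # number of simulated forecasts

# Two-step approach: magnitude derived from two noisy price forecasts,
# so its error is the difference of two independent forecast errors.
e1 = rng.normal(0, sigma, n)
e2 = rng.normal(0, sigma, n)
two_step_err = e2 - e1

# One-step approach: a single direct forecast of the magnitude itself.
one_step_err = rng.normal(0, sigma, n)

print(two_step_err.std())   # ~ sqrt(2) * sigma, i.e. double the variance
print(one_step_err.std())   # ~ sigma
```

The ratio of the two standard deviations comes out near √2, which is the sense in which error "compounds" when you subtract two forecasts instead of forecasting the quantity you actually care about.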

So I created a second MAG forecasting and training algorithm that, after adding the features, creates a pct_mag column in the dataframe. On each T[0] it represents the value 100 * (T+2@11 - T+1@11) / T+1@11. This still produces a double NaN at the tail, but the predict step only needs the last usable row, so it grabs that one. It's a bit cleaner since we're relying on one forecasted value as a direct pct_mag score.
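A minimal sketch of that target construction, assuming one row per trading day and a hypothetical column name for the 11:00 price:

```python
import pandas as pd

# Hypothetical frame: one row per trading day T, with the 11:00 price.
df = pd.DataFrame({"price_11": [100.0, 102.0, 101.0, 104.0, 103.0]})

t1 = df["price_11"].shift(-1)   # T+1 @ 11
t2 = df["price_11"].shift(-2)   # T+2 @ 11

# pct_mag on each T[0]: 100 * (T+2@11 - T+1@11) / T+1@11
df["pct_mag"] = 100.0 * (t2 - t1) / t1

# The last two rows are NaN (no T+1/T+2 exists for them yet);
# training drops them, and predict targets the last non-NaN row.
print(df["pct_mag"].tolist())
```

The double NaN falls out naturally from the two negative shifts, which matches the tail behavior described above.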

Then came the GIL. Because of the huge amount of parallel compute required, Python's Global Interpreter Lock (GIL) becomes a bottleneck if you don't implement the parallelism correctly, and I tend to be pretty poor at that. I don't know if I would have solved this in a timely fashion without Claude Code.
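For CPU-bound work like this, the usual way around the GIL is process-based parallelism, where each worker gets its own interpreter and the GIL stops mattering. A minimal sketch with the standard library; the worker function here is a stand-in, not Tetsuo's actual code:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def heavy(n: int) -> float:
    # Stand-in for a CPU-bound step (e.g. a per-symbol training pass).
    return sum(math.sqrt(i) for i in range(n))

def run_parallel(jobs):
    # Separate processes sidestep the GIL, so CPU-bound work actually
    # runs in parallel; a thread pool would serialize on it instead.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(heavy, jobs))

if __name__ == "__main__":
    print(run_parallel([200_000] * 4))
```

Threads remain the right tool for I/O-bound work, where the GIL is released while waiting; it's the compute-heavy paths that need processes.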

This whole solution has gotten smarter over the last 2 years or so at about the pace that LLMs have, because an LLM has been involved in every step along the way so that I can adopt this emergent technology and stay familiar with it — and it does solve some problems and speed some aspects up. It’s helped teach me which tasks to use an LLM for and which tasks shouldn’t use an LLM and has kind of forced me to “see through the hype” to make sure I’m not overrelying on something in areas it shouldn’t be used. That’s actually been pretty eye-opening.

Claude Code has stayed consistently pretty okay over the two years I've been using it. ChatGPT, on the other hand, has not, and appears to just be an emulation of something useful instead of actually being a useful thing. The OpenAI models have never performed well and lie constantly, their marketing misinforms their users about LLM capabilities, and I'm honestly surprised that Sam Altman and OpenAI haven't gone the way of Elizabeth Holmes and Theranos due to those misrepresentations to customers and stakeholders.

Make no mistake, LLMs are over-hyped and they are not going to replace any role any time soon, but that doesn't mean the entire enterprise tech world isn't going to try, and that's what we're seeing at all levels: from entry engineers to senior architects to middle management, HR, accounting, sales, and C-suite. Some of them know better and are just going with it, but a lot of them don't and think you can use it for everything, because folks don't understand the tech; they just know it does some cool things. That most LLMs will just hallucinate an answer when they don't know or can't know is feeding directly into that. I shudder hearing them talk about using LLMs to do things like "diagnose heart conditions" or "detect crime" or "screen job applicants".

Anyway, whole other topic. To get back to Tetsuo, and more specifically MAG, that conversion is about 90% done. It has to finish a configuration run before I can move forward, which takes about 30 hours, so that gives me some time in between to clean while staring at my email and phone, hoping someone not bound by my non-compete needs a contract.


© 2026 Phanes' Canon

The Personal Blog of Chris Punches