Representation, Intent, and the Problem of Understanding

First, TDM is complete and deployed on a newly rebuilt server, at tdm.silogroup.org.

There is no frontend, because it’s not a frontend component. It’s a standalone component meant to operate as part of a distributed system.

With the SOL variation of Tetsuo I’m taking a productized approach to that component and any others that I know I’ll need and don’t want to reimplement every time I want to change the pipeline.

Essentially, it pulls N days of data for every symbol on the NYSE, managing the list of active symbols it uses and accounting for market closures and blackouts. Each day it pulls is a minute-by-minute listing of the open price, high price, low price, close price, and volume of trades for those symbols.
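
To make the shape of that per-minute record concrete, here is a minimal sketch; the field names and types are my own assumptions for illustration, not TDM’s actual schema.

```python
# Minimal sketch of a 1-minute OHLCV record as described above.
# Field names/types are assumptions for illustration, not TDM's schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class MinuteBar:
    symbol: str          # an NYSE ticker from the managed active-symbol list
    timestamp: datetime  # start of the 1-minute window
    open: float          # open price for the minute
    high: float          # high price for the minute
    low: float           # low price for the minute
    close: float         # close price for the minute
    volume: int          # shares traded during the minute
```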

It checks for gaps in the data, and can even heal some of them. This has taken a 20-hour process down to about a 25-minute process, which may be enough to extend the scope to cover NASDAQ symbols as well, even some crypto. At this point the bottleneck is a 75-requests-per-minute rate limit with the raw data provider, which could be raised a few fold at additional cost that just isn’t necessary or appropriate yet for my uses.
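
As a rough illustration of that kind of gap check and healing (not TDM’s actual implementation), here is a minimal sketch assuming the 1-minute bars live in a pandas DataFrame indexed by timestamp, with a regular 09:30-15:59 session window and the OHLCV column names from the sketch above:

```python
# Sketch of gap detection and conservative healing over 1-minute bars.
# Assumes a pandas DataFrame indexed by timestamp with columns
# open/high/low/close/volume; the session bounds are an assumption.
import pandas as pd

def find_minute_gaps(bars: pd.DataFrame, session_start="09:30", session_end="15:59"):
    """Return the expected minute timestamps missing from `bars`."""
    missing = []
    for day, day_bars in bars.groupby(bars.index.normalize()):
        expected = pd.date_range(
            f"{day.date()} {session_start}",
            f"{day.date()} {session_end}",
            freq="1min",
            tz=bars.index.tz,
        )
        missing.extend(expected.difference(day_bars.index))
    return pd.DatetimeIndex(missing)

def heal_small_gaps(bars: pd.DataFrame, limit: int = 3) -> pd.DataFrame:
    """Forward-fill prices and zero-fill volume across short gaps only."""
    gaps = find_minute_gaps(bars)
    full_index = bars.index.union(gaps) if len(gaps) else bars.index
    healed = bars.reindex(full_index).sort_index()
    price_cols = ["open", "high", "low", "close"]
    healed[price_cols] = healed[price_cols].ffill(limit=limit)
    healed["volume"] = healed["volume"].fillna(0)
    return healed
```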

Then it does some interesting things. Most notably, it generates a secondary dataset from the 1-minute interval set it just created, on a 30-minute interval, using averages of the price values and sums of the volumes for each window in fall-forward operations, then dropping the edge window. This creates a kind of stream of pinches of the data that drops some of the noise while maintaining almost all of the signal. Interestingly enough, I got the idea thinking about a technique used in image processing / Laplace augmentation. I guess you would call it an “incremental smoothing,” but it doesn’t serve the function that smoothing usually serves, and in my opinion it is more reflective of the data than a traditional smoothing operation would be for this type of work. You would really only arrive at this type of transform with the intent of a direct fixed-horizon regression of the kind I’ll be doing: 1100 T+1 and T+2 targets, where %delta is the derivative to operate on, for a T+1@1100 purchase and a T+2@1100 sell intent. I will be comparing global models (datasets across all symbols in one model) to individual ranked forecasts (like I have been doing with my “recursive multi-step” forecasts in previous versions of Tetsuo, especially in the now-retired Argonaut build).
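
Here is a minimal sketch of how that 30-minute “pinch” could look, assuming pandas and the same OHLCV columns as above; the non-overlapping forward windows and the edge-window drop are my reading of the description, not the actual SOL/Tetsuo transform:

```python
# Sketch of the 30-minute "pinch": average the price fields and sum the volume
# over each 30-minute window, then drop the trailing edge window, which is
# usually partial. An interpretation of the prose, not the SOL/Tetsuo code.
import pandas as pd

def pinch_30m(bars_1m: pd.DataFrame) -> pd.DataFrame:
    agg = bars_1m.resample("30min", label="left", closed="left").agg(
        {
            "open": "mean",
            "high": "mean",
            "low": "mean",
            "close": "mean",
            "volume": "sum",
        }
    )
    # Drop the last (edge) window so only complete 30-minute pinches remain.
    return agg.iloc[:-1].dropna(how="all")
```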

Intent. That word keeps surfacing in a few different areas of my life recently, now that I think about it. I spend a lot of time observing movement within a system boundary and reckoning the intent of the actors in it. I should explain that more.

I have some maxims that I’m fleshing out when it comes to intent:

  • The way information is represented is a direct product of intent.
  • That intent can be analytical or executive in nature.
  • The type of intent seems to be either to understand something about that information, to help someone else understand that information, or to influence decisions made by those given exposure to that information.

Representation of information, however, doesn’t necessarily convey that intent or even the nature of the intent that it is a product of.

So how do you glean intent from the representation of information (since that representation is a product of intent)? That’s a big question for me these days. Where I’m currently at with this is that insights about intent can be gleaned if the context/scope of analysis is extended beyond the information being represented to include other representations of information by the same representer, or actions by the representer. That actor’s statements are actions in and of themselves, but the words in those statements are less important than the intent behind them and the impact of those words on the actor themselves and on their environment (including other actors), as part of a larger puzzle of intention to piece together. Actors sometimes play dumb, sometimes they pretend not to know things they do know or omit context to influence interpretation, and sometimes they outright lie, to each other and even to themselves, and those behaviours yield additional insights into intent that the words they say do not.

Environment for an actor is twofold, internal and external, in an interplay of intent, environmental stimuli, and response. Truthfully, you can never fully glean intent, even from yourself, as most actors’ intents are a product of those environmental stimuli, group identity, culture, upbringing, personal values and aspirations, political climate (often in a compartmentalized fashion), perceptions and misperceptions, the interests of threats and alliances, and even their errors in evaluating their environment. They can’t fully know their own intent, so how can you or they possibly fully understand that intent, even if they flat out told you? This goes whether it’s you (yes, there are things you don’t know about yourself), them, or anyone else. It’s just part of being human.

They can’t fully know their own intent, but they usually have greater insight into it than you do, and you can still glean insight into that intent by observing the impact of their actions on their environment, including others, including yourself, and by flipping the observation lens internally to map out likely motivators and intentions, treating actions as environmental interaction through a virtual lens of educated guesses about their needs, wants, desires, identity values, history, and your own internal archetypes. The alignment or timing of an action with events and with other actors’ actions is also a critical factor, but not necessarily something you can rely on. This treatment of actions as environmental interaction, in that context, includes their representation of information. I can assure you that, while this process is vague and convoluted, you do it already, almost every day, and don’t even realize you’re doing it.

Some people take things more at face value than this implies, some just pretend to, and some do neither, depending on their personality. When formalized into an analytical process, though, it removes many of the blind spots those personality traits create. You’ll find that adversarial behaviours often become less adversarial than they appeared once intent is understood, and you’ll also find the occasional knife ready for your back, hidden behind agreeable faces, in the abominable rat races we create for others and subject ourselves to in life.

Putting up this grid over your environment to do this kind of analysis really changes how you look at everything, not just “did this earnings call get fluffed up to try to stir a rally or are they just having a good year” — it applies to any area where interaction or interdependency or interpersonal relationship networks exist between people, even those that the analyzer is a part of.

Words are nice. Actions speak louder. Context, louder. Data, even louder.

The importance of information is, in my opinion, an important subject of study for humanity, and I think this process of understanding behaviour could be formalized by someone smarter.

It’s an old topic. Aristotle would have recognized this search for intent as an inquiry into telos—the purpose driving action. He framed human behavior in terms of ends, both conscious and unconscious, that organize activity within a system.

Heidegger pushed this further, insisting that intent is never separable from the environment in which it is disclosed. Meaning emerges only through “being-in-the-world,” a situated interplay of context, action, and interpretation.

Both views reinforce the idea that intent is not an object waiting to be extracted, but a dynamic field that must be traced through patterns of behavior, environment, and consequence. I don’t think it’s a discrete thing, really, after looking at it closer: if it’s not fully self-known, then it can’t really be completely known externally, because it’s too vague to be fully known and is in and of itself an internal reaction to environment through those subjective lenses I mentioned.

This is why the notion of intent keeps surfacing in different areas. Whether I am compressing market data into usable signals or watching how people maneuver inside the networks they build, the common thread is the search for intent behind how information is presented. The mechanics differ—code on one side, human behavior on the other—but the analytical frame is the same. Representation is only the surface; intent is what gives it meaning and forecasts future behaviour. And while it may never be fully knowable, the disciplined attempt to approximate it is where real understanding begins. You can know a person’s intentions better than they do themselves, and that can be a critical understanding to have for realization of your goals.

Hat man sein Warum des Lebens, so verträgt man sich fast mit jedem Wie (“If one has one’s why of life, then one gets along with almost every how”).

Nietzsche, Friedrich. Götzen-Dämmerung. Leipzig: C. G. Naumann, 1889.

In terms of what’s next, I’ll need to make some decisions and comparisons:

  • Is a global model really better? Research is telling me it’s more accurate, uses less compute, and is also less testable.
  • Fixed-horizon regressions for T+1 and T+2 or recursive incremental forecasts? Research is telling me the former is more accurate.
  • Expectations, sentiment, and intent: Research is showing I can expect a 10% boost in directional accuracy by weighting %delta ranks in forecasts with expectations data (a rough sketch of that weighting follows this list). This is “what the market already priced in before the event.” Examples include analyst consensus on EPS, revenue, and guidance forecasts (from FactSet, Refinitiv, Zacks), options-implied moves (derived from straddle pricing), guidance consensus (expected ranges for the next quarter or fiscal year), and recent stock drift (pre-earnings run-up or sell-off as an implicit expectation signal). The key is to map out or try to predict upsets and momentum-generating events: earnings calls, ex-dividend dates, etc.
  • Sentiment itself only represents about a 5% impact, which is enough to be required, but it has to be balanced against expectations data in the weighting. This can largely be extracted from a time series built from analyst publications about the stock, potentially integrated as features into the time series used for training and then, in the short term, used for the weighting. I haven’t decided how this will integrate yet because it’s new.
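
To illustrate the expectations weighting floated above, here is one hypothetical way per-symbol %delta forecast ranks could be blended with an expectations score; the 0.9/0.1 split, the column names, and the blend itself are placeholders, not researched values or TDM outputs:

```python
# Hypothetical blend of %delta forecast ranks with an expectations signal.
# Weights and column names are placeholders for illustration only.
import pandas as pd

def blend_ranks(forecasts: pd.DataFrame, expectations: pd.Series,
                w_forecast: float = 0.9, w_expect: float = 0.1) -> pd.Series:
    """Rank symbols by predicted %delta, nudged by an expectations score.

    `forecasts` is indexed by symbol with a 'pct_delta' column (predicted
    %delta at T+1@1100); `expectations` is a per-symbol score such as an
    implied move or a consensus-surprise measure, on any monotonic scale.
    """
    f_rank = forecasts["pct_delta"].rank(pct=True)
    e_rank = expectations.reindex(forecasts.index).rank(pct=True).fillna(0.5)
    return (w_forecast * f_rank + w_expect * e_rank).sort_values(ascending=False)
```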

There’s more to do here. This SOL build of Tetsuo is, overall, a positive explosion of capability: it lets me do many things at once and gives me all kinds of new data sources to use that only I have currently, until I arrive at something that does well as a distributed system.
