F# tests driving out architecture, shared libraries and persisted state

DevLog: EasyAM2, Episode 4

A hunka, hunka burning F#

The code tour. Lots of code changes this time around, with more to come.

We're four episodes in, and we're continuing to use our first stories as a way to set up and optimize all of our meta stuff -- tools, workflow, approval, deployment, and so on. By using user stories (future system behavior) as a way to validate architecture, we're assured that we're only setting up the minimal amount of architecture we need to get the job done.

There is a tension there between delivering value and "making things right". That tension is good! We need it to clarify what our values are as we continue along. That's how emergent design happens. It emerges from creative conflict, and the process of emergence gives us really good pieces in our analysis model to use in other places.

So what tooling did I play with?

Test Explorer in VS using Expecto and the plug-in from GitHub. Never got it to work. A complete waste of time. I tried setting up an old-school Visual Studio solution file. I'm not sure, but that might have just made things worse. Beats me.

Somewhere in all of this, my IDE environment, both VS and VS Code, has lost its way. I will speak no more of this.

As explained in the video, testing the methods I needed for the next chunk of work "forced" me into re-arranging my code. This is a double win. It makes both the testing and the solution architecture more to-the-point.

Got into an interesting discussion online about code comments. Is it good for me to stick so many comments in my code?

Usually, no. This time? Yes. Since I'm both the solo dev and the PO, as long as the code is well-structured there's no better place to keep notes and questions than the code. This would fall apart the minute a second person came onboard, though. Connecting "all the rest of the information" around the project aside from the code itself is the problem that EasyAM/Info-Ops is supposed to solve in the first place.

While doing the work, I realized we had a new Meta Supplemental! Woot! "Prioritize helping people make and understand what's going on with their models above all else." For now, this means making the base rules and rule errors/weirdo-stuff as easy to create and understand as possible -- since it will be the same tools that help users do the same work later on.

Stated differently, whatever I deploy should work as hard as it can to make sure that people using the grammar are coddled and guided towards the most enjoyable process possible. Of course, the generic platitude "easy-to-use" is true in just about every software project, but in this case I've started to narrow that down to interactions around command tokens and parsing. As I continue to work, I'll continue to narrow that down, refining my values from platitude to testable code, just like it's supposed to happen.

Speaking of behavior and values driving structure, I ran into a few cases this past week where the existing structure of my app made adjusting the code unwieldy. That's a red flag. I set up the projects the way I did because this is version 3. It very well may be that I overestimated the structure I needed for the behavior. We'll find out. I always do that, even when I try not to. Heck, I'm the bozo that keeps ranting about it online! I should know better!

Still got me.

"What's the minimum vector transform that takes me closer to a solution?" led to these three types:

    // Not the same as line types. Commands are what we search for. The line type depends on the context
    type EasyAMCommandType = // (cases elided)
    type RegexMatcherType =
        { PossibleLineTypes: EasyAMLineTypes list (* other fields elided *) }
    type LineMatcherType =
        { MatchTokens: RegexMatcherType [] (* other fields elided *) }

Basically, these types answer the questions: "What keyword is on this line?", "How did you find it?", "How many values should I expect?", and "What line types might be associated with this?"

Note that there's no way for these transforms to fail. As long as the regex is valid, it'll do something. It may not work the way I wanted, but it won't crash.
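To make that concrete, here's a minimal, self-contained sketch of the idea. Everything beyond the field names shown above (the line-type cases, the `Pattern` and `CommandName` fields, the `findMatches` function, the sample matcher table) is my invention for illustration, not the real EasyAM2 code:

```fsharp
open System.Text.RegularExpressions

// Hypothetical stand-ins for the real EasyAM2 line types
type EasyAMLineTypes = CompilerDirective | FreeText | CommandLine

type RegexMatcherType =
    { Pattern: string                            // regex used to find the command token
      PossibleLineTypes: EasyAMLineTypes list }  // line types this match might imply

type LineMatcherType =
    { CommandName: string
      MatchTokens: RegexMatcherType [] }

// Total transform: given valid regexes, this always returns an array.
// It may return an empty one, but it never throws.
let findMatches (matchers: LineMatcherType []) (line: string) =
    matchers
    |> Array.filter (fun m ->
        m.MatchTokens
        |> Array.exists (fun t -> Regex.IsMatch(line, t.Pattern)))

// A sample matcher table -- the thing "that guy" has to get right
let matchers =
    [| { CommandName = "NOTES"
         MatchTokens = [| { Pattern = "^\\s*NOTES\\b"; PossibleLineTypes = [CommandLine] } |] }
       { CommandName = "Q:"
         MatchTokens = [| { Pattern = "^\\s*Q:"; PossibleLineTypes = [CommandLine] } |] } |]

let hit  = findMatches matchers "NOTES this is a note"  // one matcher fires
let miss = findMatches matchers "just some free text"   // nothing fires, nothing crashes
```

The payoff is exactly the point above: a bad pattern gives you a wrong answer, not an exception.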

That's good FP. It puts the onus back on the guy who's making this array -- which, admittedly, in this case is also me -- to make a RegexMatcherType array that does something useful. That also means that any tests that run should have the goal of helping that guy write a better regex/line-matcher list.

It changes the discussion from "is the code doing the right thing" to "is the caller getting what they wanted given their input". From a TDD perspective, it means there's nothing to test. We test things when we change the desired behavior of a system. Here we're just using regex like any other .NET coder to do a simple transform. The transform may not do what we wanted, but that's a business test, not a system test. It's handled in the acceptance criteria for whatever story we're working. Whatever you do, don't use TDD to try to recreate simple vector transforms! Bad! Bad!

I also started talking about lenses in the video. I'll talk more about them as we stand up ear, the reporting engine.

As I mentioned in the video, this little guy drove me crazy. What to do with a persistent logging engine inside a shared DLL? Dang it, I hadn't thought of that being an issue.

    // Tag-list for the logger is namespace, project name, file name
    let moduleLogger = logary.getLogger (PointName [| "EA"; "Core"; "Compiler"; "EALib"; "Compiler" |])
    let mutable clientLogBack:(LogLevel->string->unit) option = None
    let logEventSig (lvl:LogLevel) (str:string) = logEvent lvl str moduleLogger
    // Decided on a mixed mode for now. Set one from client-side. Figure out what we have on this side
    let mutable logBack = logEventSig
    let setLogger (cl:LogLevel->string->unit):unit =
        clientLogBack <- Some cl
        logBack Debug "Remote logger set"

I may need to tear out my logging code and refactor. I will think some more on this.
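For what it's worth, the general shape of the workaround is: the shared DLL holds a mutable callback slot, and the host app fills it in at startup. Here's a stripped-down sketch with the Logary plumbing removed entirely -- the `LogLevel` type, module names, and functions below are stand-ins I made up, not the real project code:

```fsharp
// Stand-in log level (the real code uses Logary's LogLevel)
type LogLevel = Debug | Info | Warn | Error

module SharedLib =
    // Default sink inside the DLL: swallow messages until a client registers
    let mutable private logBack : LogLevel -> string -> unit =
        fun _ _ -> ()

    // Called by the host app to route the DLL's log output to its own logger
    let setLogger (cl: LogLevel -> string -> unit) : unit =
        logBack <- cl
        logBack Debug "Remote logger set"

    let doWork () =
        logBack Info "doing work inside the shared DLL"

// Client side: point the DLL's logging at a sink the client owns
let captured = System.Collections.Generic.List<string>()
SharedLib.setLogger (fun lvl msg -> captured.Add(sprintf "%A: %s" lvl msg))
SharedLib.doWork ()
```

The nice part of this shape is that the DLL never takes a hard dependency on the host's logging framework; the ugly part is the mutable slot, which is exactly the refactoring itch mentioned above.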

All-in-all, it was a very educational segment! I learned about my app values, structure conceits that might not map well to what I'm doing, and thought some more about my test-feedback loop. All while delivering the user stories.

Next up? Some reporting! Woot++! This will be my first time adding "real" stuff to the "ear" and "earTest" projects. That means starting to build out the "real" data model. Can't wait.


Interested in background on this series/project? Here are some links:

Change Your Paradigm

(Obligatory promotional video. We're building the compiler to support this book.)