F# Development: Words Trump Syntax

DevLog: EasyAM2, Episode 5

Watch your language!

Question: Whoa! Are we doing design or what? Answer: No.

First thing I did before any coding this time around was sketch out this little UML diagram. Isn't it nice?

But why? Because I needed to move from parsing to modeling. That is, I was coming to the end of the "parsing input" phase and realized I needed some kind of model to stick this stuff in. There were a bunch of different ways I could do it -- just like every problem. Wouldn't it be easier to talk and think about the problem visually? That's what models are for.

And it was! Took 10 minutes to draw the diagram using Enterprise Architect, and I kept the diagram on the wall for a couple of days while I did other work. (Then I think it fell behind my desk or something.)

Next I spent a week or so doing some other work for a client that involved web programming -- a nice change of pace. Forgot all about EasyAM, aside from complaining every now and then about not getting any work done on it. (If you don't know, complaining is a very important part of being a programmer. Not that anybody cares, mind you! grin)

When I finally got back to it, it took me a bit to get going again. Dang! Why? Because I had left the code in a spot where I needed to refactor a bunch. What you're supposed to do is leave the code in a state where it's easy to come back to.

But when developing a greenfield (new) app, I never do this. Once I get going, I run as fast as I can coding up tests and methods, until things get tough again: there's some kind of refactoring I need to do that I don't want to do. I resist.

Then I walk away.

What that means is that I'm always secretly dreading coming back to new code because it means, well, work. Thinking and stuff. Hence the diagram above. The model allowed me to conceptualize the problem and the trade-offs easily, the tool was fun to play with, plus it wasn't coding. Wooo.

When I started in again, did I use the diagram? Heck no! Not only did I forget about it, it would have been a terrible idea to use it. Why? Because a model is not a design. It's just a picture. Structure, even in picture format, has to pass various tests and meet supplementals like all other structure, but all of those tests and supplementals existed only in my mind. Not in executing tests. By the time I returned, that was all gone. Heck, they were probably mostly all gone by the time dinner rolled around that same day.

UML is good. Diagramming and modeling? Irreplaceable. Top-three tool of knowing-what-the-hell-you're-doing. Trying to take something that was in your head, grabbing a big hammer, and pounding it into code? That's whack. It's whack because your brain cheats. The only thing to use in real design is executing tests that validate behavior and supplementals that are somehow recorded and agreed-to. Putting them in tests ahead of coding is fine. Sketching them out as you sketch out a model is fine. Nothing else. Otherwise it's an art project, not a software engineering one.

This was an art project. There's nothing wrong with that. Art can let you see and understand all kinds of things you couldn't otherwise.

Speaking of tests, as you can see in the video code tour below, when I came back to the code, I realized that tests from previous sprints were now failing. Or rather, the tests were passing. It was the description of the tests that was bad. Dang you, human language!

The code tour. I had to do a bunch of wiring. Meh.

We're coming over the top of the "hump of complexity" that every non-trivial project goes through. I know this because the tests are driving out a new file, EA.fs, which is app-level (not program level) common types: what goes into compiling, what comes out. I also have a very strong -- and sad -- feeling that my cool logging code I stuck everywhere needs to just go in ealib in one spot.

Things are simplifying.
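
A hedged sketch of what those app-level types in EA.fs might look like -- every name here is hypothetical until the tests drive out the real ones:

    // EA.fs sketch. All names hypothetical; the tests will drive out the real ones.
    // What goes into compiling:
    type CompilationInput =
        { FileName: string
          Lines: string[] }

    // What comes out:
    type CompilationOutput =
        { Input: CompilationInput
          Messages: string[] }   // diagnostics -- a natural home once logging moves to ealib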

Aside from the "bucket" type, most of the model will consist of text stuck in a node at a certain generic location in our Structured Analysis model. Locations look like this:

    type TemporalIndicatorType =
        | Was
        | AsIs
        | ToBe

    type GenreType =
        | Meta
        | Business
        | System

    type AbstractionLevelType =
        | Abstract
        | Realized

    type LocationPointerType =
        { Genre: GenreType
          AbstractionLevel: AbstractionLevelType
          Bucket: BucketType
          TemporalIndicator: TemporalIndicatorType
          Tags: string[]
          Namespace: string[] }

For more explanation, RTB.
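
One gap in the snippet above: BucketType is referenced but never defined here. A minimal sketch, assuming the classic Structured Analysis buckets (the real definition may carry more cases):

    // Assumed for illustration only. Behavior shows up in "Business Abstract
    // Behavior To-Be" below; Structure and Supplemental are my best guess.
    type BucketType =
        | Behavior
        | Structure
        | Supplemental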

Section Directives, like the "USER STORIES" we talked about today, pin the compiler to a certain spot in the generic Structured Analysis model, which in the case of our User Stories would be "Business Abstract Behavior To-Be". New items in that location, which are identified using markdown bullet-format, go into nodes at that spot. Everything else is just the relationship between nodes or notes on either the nodes themselves or how they relate to one another.

It sounds more complicated than it is. Hence the book.
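
To make the User Stories case concrete, here's an illustrative sketch of how a section directive might pin to a location. The function name is made up, and it assumes the BucketType sketch above:

    // Illustrative only -- not the actual compiler code.
    let locationForDirective (directive: string) : LocationPointerType option =
        match directive with
        | "USER STORIES" ->
            // Business Abstract Behavior To-Be
            Some
                { Genre = Business
                  AbstractionLevel = Abstract
                  Bucket = Behavior
                  TemporalIndicator = ToBe
                  Tags = [||]
                  Namespace = [||] }
        | _ -> None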

Here's the sketch for the ModelItem type. I thought I had already typed this in the other day, but heck if I can find it. So I had to type it in again. (I spent 20 seconds typing it in one day, then an hour the following day trying to figure out where it went. I still don't know. Yard gnomes or something must have eaten it.)

    type JoinType =
        | ParentChild
        | NodeNotes

    type ModelItemType =
        | Root
        | Node of int
        | Join of JoinType * int * int
        | Text of string

    type ModelItem =
        { Id: int
          Type: ModelItemType
          Location: LocationPointerType
          Title: string
          Mentions: CompilationLine[] }

JoinType right now is only the most generic of types: Parent-child. Of course, once I start coding, it'll get a lot more specific, like the parent-child relationship between User Stories and actors, or the one between Supplementals and Features.
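
A quick illustration of how the pieces compose: a User Story node, an actor node, and the parent-child join between them. The ids and the userStoryLocation value are made up for the example:

    // Illustration only. userStoryLocation is assumed to be the
    // "Business Abstract Behavior To-Be" location from earlier.
    let userStory =
        { Id = 1
          Type = Node 1
          Location = userStoryLocation
          Title = "Customer places an order"
          Mentions = [||] }

    let actor =
        { Id = 2
          Type = Node 2
          Location = userStoryLocation
          Title = "Customer"
          Mentions = [||] }

    // The join relating story (1) to actor (2).
    let storyToActor =
        { Id = 3
          Type = Join (ParentChild, 1, 2)
          Location = userStoryLocation
          Title = ""
          Mentions = [||] }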

This is a ton of structure I'm dropping on you, seemingly without a reason for it being there. As best I can, I'm trying to harvest lessons learned from previous versions of this program. All the tests are there: tests in previous versions drove out the structures I'm copying over (admittedly from memory and using my judgment). Most importantly, the behavior remains the same; new tests in this version will do the same things and exercise/validate this structure.

If it seems like I'm focusing too much on tests -- how to test, when to test, how to test in the future without coding -- it's because I am. Get the testing right (or more specifically the testing architecture) and the rest will become trivial. That's why it was more important to get the tests more fully running against our existing test data this time around than it was to finish the user stories.

If it seems I am coding "dumb" -- purposefully not optimizing in places where F# could optimize the heck out of what I'm doing -- it's because I am. As a programmer, I am very much in a symbiotic relationship with my programming language of choice. People think in human language, which is broad, loose, fuzzy, and hugely context-dependent. Computers "think" in mathematical language, which is narrow, logical, and concrete. It's not that I couldn't take a thought and turn it into math. It's that the process of turning ideas into formal languages changes the meaning and understanding of the problem you're trying to solve. Just like we saw with "LineType" in the episode today. The term meant one thing two episodes ago, but that was then, this is now. You don't just need to change programs because the business changes. You also need to change them because your mental model changes. And it should change! That's how it's supposed to work!

That resistance I feel when hitting a point where the refactoring doesn't feel right? When I walk away, I turn all of that over to my subconscious. It's much better at this kind of thing than I will ever be.

We would never want a computer that could instantly translate your desires into a formal system. It's a process of discovery. Given the same problem, no two teams of people will come up with the same result -- nor should they. Some people will never understand that. They have my pity.

Next up, I'll finish out the Compile-Output-Input-ReCompile/Check loop where the compiler creates a model that can be persisted, and that persisted model, when read back, creates the same model. It's probably the last critical piece of functionality for the code. The rest will be slicing and dicing. Cleaning up.
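
That loop is easy to state as a check. A minimal sketch, assuming hypothetical compile, persist, and load functions with the obvious shapes:

    // Sketch of the Compile-Output-Input-ReCompile/Check loop.
    // compile, persist, and load don't exist yet -- assumed signatures only.
    let roundTripHolds input =
        let model = compile input       // source text in, model out
        let persisted = persist model   // write the model somewhere
        let reloaded = load persisted   // read it back
        reloaded = model                // F# records compare structurally, so = works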

Before I do that, though, I need to go ahead and move all of that logging boilerplate to one spot. As I add the Compile stuff in, I'll also need to change the existing tests because my return type will change. All of that, which will be the bulk of the work, I imagine, is just grunt work. Meh. Such is coding. It's mostly testing, there's a ton of cleaning up, then every now and then you get to add something cool, neat, or complex. That's the way it's supposed to be. At the end of the day, we program for other people, not the computer.
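
For the logging consolidation, the target shape is just one module in ealib that everything calls. A hypothetical sketch:

    // Hypothetical: one spot in ealib for the logging boilerplate.
    module Logging =
        let log (category: string) (message: string) =
            printfn "[%s] %s" category message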

First, however, I have other projects to go do.

It's been a hoot! I'll see you on the internet.

(Just to be clear, in the UML diagram, I was doing mental analysis. I was mentally asking myself how I thought of the problem, what the tool in general had to do. With the types I typed in later, I was looking at the test data from an earlier version, remembering how the code worked. My plan is to bring that test data over, and since the structure was already there to make these tests pass, I used that. The only thing I added was a bit of judgment in simplifying the types since I've also simplified the language spec. Apologies if that wasn't clear in the essay.)


Interested in background on this series/project? Here are some links:

Change Your Paradigm

(Obligatory promotional video. We're building the compiler to support this book.)