Tuesday, February 25, 2003

Navigation missing from MVC paradigm?

After playing with Struts and seeing its notion of Model-View-Controller (versus traditional "fat client" MVC), I think that MVC needs to add an explicit model for Navigation: a so-called NMVC design pattern.

In traditional MVC, Views are individual pages/dialog-boxes/panes/etc. and Controllers are geared towards translating (mouse/keyboard/clock) events into transaction requests against some data Model.

The large item left out is the universal need to define Navigation between views, using controllers that are geared towards "movement", i.e. transactions against a Navigation model. In other words, a data model specifically geared towards user-interface control is needed, separate from the data models managing "business/domain/application data".

Since UI navigation (especially in a web site context) usually maps well onto a finite state machine, a navigation model geared towards FSMs (with the definition of the state-transition graph specified in an external config-file maybe?) would take out some of the grunt work of UIs.
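A minimal sketch of what such a navigation model might look like, assuming Java (all class and method names here are hypothetical). The transition graph is held in a plain map standing in for the external config file:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal navigation model: a finite state machine whose transition
// graph maps (currentView, event) -> nextView.  In a real framework
// the graph would be loaded from an external config file.
class NavigationModel {
    private final Map<String, String> transitions = new HashMap<>();
    private String currentView;

    NavigationModel(String startView) {
        this.currentView = startView;
    }

    // Register one arc of the state-transition graph.
    void addTransition(String fromView, String event, String toView) {
        transitions.put(fromView + "/" + event, toView);
    }

    // A navigation "controller" translates an event into movement.
    String fire(String event) {
        String next = transitions.get(currentView + "/" + event);
        if (next == null) {
            throw new IllegalStateException(
                "No transition for event '" + event
                + "' in view '" + currentView + "'");
        }
        currentView = next;
        return currentView;
    }

    String getCurrentView() {
        return currentView;
    }
}
```

With a graph like login --submit--> home --viewAccount--> account registered up front, the per-page "where do I go next?" logic disappears into the model, which is the grunt work this is meant to remove.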

Can Psychology help Framework Designs?

There is a continuum of component sets (i.e. frameworks) ranging between "simple" sets containing fewer, more universal components and "complex" sets containing more specialized components.  For example, compare old-school Lego pieces, generic shapes that could build anything, with new-style Legos that build only one particular type of Jedi Starfighter.

These frameworks tend not to integrate very well with other frameworks. It is simpler to pick one as a standard, but how does that affect those who prefer the one not picked? What is the overall cost/benefit of picking one vs the other [vs not picking one at all, and doing the more complicated work of supporting both]?

It would be interesting to perform a series of psych experiments to determine whether people have clear preferences between simple vs complex component sets, and how well they can adapt to their non-preferred choice.  Also, do the choices change based on time pressure, organization of the components, documentation of the sets, or appropriateness of the components to the overall goal?

For example...
*) - Ask people to build a "plane" from the provided components.
Let people pick between a simple vs complex set of components (e.g. specialized Legos vs generic Tinkertoys). Measure time taken, "quality" of result (both self-evaluation and objective third-party evaluation), and "how well they liked it", i.e. "how fun was it", i.e. "would they like to do another one". Compare the overall success and quality rates for simple vs complex when used by those preferring each.
1) variation: mixed pile of components
2) variation: pile of mixed simple and separate pile of mixed complex
3) variation: organized piles
4) variation: various time limits
5) variation: make complex parts clearly "plane related"
6) variation: make complex parts clearly unrelated to overall goal

*) - Ask people to build another goal object, but require them to use the opposite of their preference from the first test. Compare measurements to those taken when they used their preferred components.  Use variations similar to those above (especially the various time limits). Compare the overall success/quality rates of simple vs complex when used by those not preferring them.

Overall goals: determine whether:
1) success rate is significantly different between simple vs complex
2) quality rate is significantly different between simple vs complex
3) time taken is significantly different between simple vs complex
4) do people have strong preferences vs weak (and what is the distribution of each)
5) how well can people use simple vs complex components when they are not the preferred choice

*) i.e. can one pick simple vs complex as a standard, or must both be available and integrated with each other??

Another variation to try is having the simple and complex component sets be compatible with each other (as opposed to the previous example of specialized Legos vs Tinkertoys, which cannot be used together), e.g. generic Legos vs specialized Legos. Goal: see whether the choices made between simple/complex, and the effects of those choices, are as strong when there is less of a consequence to starting with one set or the other.

Also, add variations to the previous experiments that vary the sizes of the "simple" set and the "complex" set, to see where the size boundaries of these categories lie. See what the performance curve looks like when graphed against set size for the various categories of people.

Sunday, February 23, 2003

Abstract Unit Tests

I WANT TO UNIT TEST Requirements, Specifications, Use Cases, etc, and not just fully coded Implementations!

I believe that there needs to be a language that allows sufficiently abstract logic to be specified/programmed such that tests can be written against use cases, high-level specifications, etc., that are very general. An example of a high-level use case might be "customer can get account balance from the ATM".

In other words, I'd like to write, compile, and lock away in a test suite a test that verifies that if joeBlow has $10 in his account, the result he gets from requesting his balance is $10, even though no user interface, extra details like logging in first, etc., have been decided.
Then later refinements of the system details can be "plugged in" such that the test can still run without being rewritten. Traditionally, tests must be written at a level of detail so specific that they can interact with the actual system being tested: user-interface screens, UI testing tools, etc.

On the other hand, "high-level tests" have traditionally been written as high-level test plans in (possibly structured) English, just as the use-case descriptions themselves were in English.  A language is needed that can capture this logic (in either spec or test-case form), is precise enough to be "compilable", and has a robust enough notion of "interfaces" that future detailed implementations can be passed in at test runtime as "implementors of those abstract interfaces".

E.g., test cases can take the system-being-tested itself as an explicit parameter (even though the system is normally the ultimate implicit parameter); that explicit parameter's type is an abstract interface, and the system-being-tested is defined as implementing that interface. [Ed. note: the rest of the world will later get this idea and call it dependency injection!]
The abstract interfaces must be able to be associated with the more specific interfaces that make up the various levels of abstraction that are captured as the system design is fleshed out.

So, at the very high level (described in the use case above), the interface provides a simple "get balance" request that returns a "number".  Eventually, the actual system requires a whole sequence of interactions: login, navigate to account, request balance, receive formatted display, logout. That sequence is encapsulated in a test case defined at that level of abstraction, which can nonetheless "implement" the getBalance() interface and return the dollar-formatted amount as a "number".  So, the specific test-case implementation "subclasses" the abstract test-case implementation.
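The getBalance() idea can be sketched in Java. Everything here is hypothetical naming: an abstract use-case interface, a test written against only that interface, and a later refinement that wraps a full (stubbed) ATM session yet still satisfies the abstract contract:

```java
// Hypothetical abstract use-case interface: "customer can get account
// balance from the ATM", with all interaction details deferred.
interface BalanceProvider {
    int getBalance(String customer);
}

// The test takes the system-being-tested as an explicit parameter typed
// by the abstract interface, so it compiles before any UI exists.
class BalanceUseCaseTest {
    static void checkJoeBlowBalance(BalanceProvider system) {
        if (system.getBalance("joeBlow") != 10) {
            throw new AssertionError("joeBlow's balance should be $10");
        }
    }
}

// Later refinement: a whole login/navigate/request/logout sequence that
// nonetheless still "implements" the abstract getBalance() interface.
class AtmSession implements BalanceProvider {
    public int getBalance(String customer) {
        login(customer);
        int dollars = requestBalanceScreen();
        logout();
        return dollars;
    }
    private void login(String customer) { /* details deferred */ }
    private int requestBalanceScreen() { return 10; } // stubbed interaction
    private void logout() { }
}
```

The point is that checkJoeBlowBalance() was locked away before AtmSession existed, and runs against it unchanged; only the object passed in grew more detailed.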

QUESTION: when (i.e. at what level of abstraction) does the "interface" analogy get replaced by the "subclass" analogy?
ANSWER: silly rabbit, each level of abstraction is really a framework that defines interfaces for the more detailed level(s) to implement, and defines "subclasses" to implement the interfaces defined by the framework(s) of the higher level(s) of abstraction.  [Ed. note: don't confuse frameworks with implementation-via-subclassing.  They can use other mechanisms than subclassing to override generic default behavior with specific behavior.]

Tuesday, February 4, 2003

Musing on Events

"Events" are a subclass of "Data" where there is one required attribute, namely, a time-stamp.  Arbitrary amounts of other data may be included in an Event but they all have a time-stamp.

The canonical use of Events is to act as triggers for some action to occur.  They may secondarily be logged for audit/replay/etc.  Many systems do not care what the actual time-stamps are, only that the events are sorted in chronological order.

Examples of events are: keystrokes, message receipts, mouse-clicks, etc.

Since the only requirement for an event is that it has a time-stamp, the clock itself can generate events that consist of nothing more than the time-stamp.

"Actors" are generators of events.  Having a "thread of control" means "being able to generate events" rather than only being able to react to events.

Since events only need time, people often overlook the fact that a "clock" can be an "actor" and therefore the source of events in a system.
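These definitions fit in a few lines of Java (the class names are illustrative, not from any real library): an Event with its one required attribute, a subclass carrying extra data, and a Clock acting as an event generator:

```java
// An Event is a subclass of Data with one required attribute: a
// time-stamp.  Subclasses may carry arbitrary extra data.
class Event {
    final long timeStamp;
    Event(long timeStamp) { this.timeStamp = timeStamp; }
}

// A keystroke is an event with extra data beyond its time-stamp.
class KeyEvent extends Event {
    final char key;
    KeyEvent(long timeStamp, char key) {
        super(timeStamp);
        this.key = key;
    }
}

// A clock is an "actor": it generates events that consist of nothing
// more than the time-stamp itself.
class Clock {
    private long now = 0;
    Event tick() { return new Event(++now); }
}
```

Note that Clock.tick() produces a bare Event, which is exactly the degenerate case the paragraph above describes.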

Objects are usually described as containing both behavior and state. State is data. Behavior is a sequence of events.  Control flow is the prescription of how to decide which events are to be generated. So, if objects can be modelled as state-change diagrams, the states represent the possible instances of the object's data, and the events represent the arcs that trigger a change from one state to another.
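An object-as-state-change-diagram can be sketched as follows (a made-up Account example, not from any framework): the enum values are the possible states of the data, and each event handler is an arc:

```java
// An object modelled as a state-change diagram: the states are the
// possible values of its data, and events trigger the arcs between them.
class Account {
    enum State { OPEN, FROZEN, CLOSED }
    private State state = State.OPEN;

    // Each event handler below is one arc in the diagram; guards keep
    // illegal transitions (e.g. thawing an account that isn't frozen)
    // from firing.
    void onFreeze() { if (state == State.OPEN) state = State.FROZEN; }
    void onThaw()   { if (state == State.FROZEN) state = State.OPEN; }
    void onClose()  { state = State.CLOSED; }

    State getState() { return state; }
}
```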

Programs represent a planned source of events, and "external actors" represent an unplanned (i.e. unpredictable) source of events.