[Cuis] Canonical test cases?

Phil (list) pbpublist at gmail.com
Thu Feb 26 14:32:39 CST 2015


On Thu, 2015-02-26 at 11:18 -0300, Juan Vuletich wrote:
> Hi Folks,
> 
> (below)
> On 2/25/2015 12:22 AM, Phil (list) wrote:
> > On Tue, 2015-02-24 at 17:51 -0800, Ken.Dickey wrote:
> >> On Mon, 23 Feb 2015 14:43:26 -0500
> >> "Phil (list)"<pbpublist at gmail.com>  wrote:
> >>
> >>> I started playing around with a couple of example test cases to see what
> >>> I ran into and came up with a distinct class to store all of these per
> >>> test category (i.e. under Test-Files a class ApiFile which could have a
> >>> method testWriteTextFile)  The rationale was that it might make sense to
> >>> keep these test cases separate from traditional test cases which are
> >>> free to make calls that users of the class (i.e. who are calling the
> >>> supported API) should not.
> >> You bring up a good point.
> >>
> >> I suspect there are three basic cases:
> >>    [1] A number of small test cases for a wide, shallow API.  E.g. Collection classes.  Many "small" usage tests.
> >>    [2] A few deep calls which demonstrate a usage protocol.  E.g. open/read/close files, sockets; use library wrapper.
> >>    [3] Sample/example UI code which illustrates how to build apps/applets/tools  [Color Picker, File List, ...]
> >>
> >> A test class, as you point out, is probably appropriate for case [1], where a zillion test<collection><access>API kinds of things would swamp a name search.  A class naming convention would be good here.
> >>
> >> Individual test methods, with a searchable name convention, would be best for [2] where there would be few illustrative usage tests.
> >>
> >> We could call out example code in a README doc for [3] where examples might have auxiliary architectural documentation.
> >>
> > Agreed.
> >
> > Case 2 was what originally got me going on this subject, as that's
> > where my code usually seems to break due to my guessing wrong about
> > what the longer-term entry points would be.  I wasn't envisioning a
> > kitchen-sink approach to tests, just some basic 'here's the
> > recommended way to use this area of functionality' tests.
> >
> > Unless Cuis evolves into a completely separate Smalltalk dialect,
> > case 1 can probably be handled via traditional unit tests /
> > categorization and the numerous intro-to-Smalltalk tutorials out
> > there.  Case 3 seems like the point where written tutorials (along
> > the lines of what Hannes pointed out earlier in the thread) and/or
> > videos would best come into play.
> >
> 
> I essentially agree with all you say. And obviously, I support any and 
> all efforts to enhance Cuis (including docs, external tools, etc).
> 
> WRT using tests to document stable or standardized APIs, another 
> reason for marking them, or making them identifiable as such, is 
> managing change. When I change an API in Cuis, I update all senders 
> in the image and core packages, including those in tests. So, I need 
> to know when a test documents a standardized API, so that I can avoid 
> the change, or at least discuss it with you here, or move the old API 
> to SqueakCompatibility.pck.st, etc.
> 
> Another alternative is to have some symbol, like #standardAPI or 
> #stableAPI, be called from such methods. This makes them easy to spot 
> with 'senders', and it is obvious to anyone modifying them.
> 

self flag: #stableAPI? (assuming you're talking about doing this in test
cases)  I don't really have strong feelings about symbols vs. categories
vs. something else as the way to indicate them, other than that the
convention should be consistent so that everyone concerned knows how to
identify and find them.
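For concreteness, here is a rough sketch of what a marked test might
look like.  The class and selector names are hypothetical; the only
mechanism assumed is Object>>flag:, a no-op whose senders can be
browsed, so 'senders of #stableAPI' would list every marked method:

```smalltalk
"Hypothetical test documenting a stable API (sketch only)."
ApiFileTest >> testWriteTextFile
	self flag: #stableAPI.	"marker: documents a stable, supported API"
	"... exercise only the supported file-writing API here,
	 never private entry points ..."
```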

> In any case, Phil, given that all this is not yet done, when you face 
> such breakage, please don't just fix it yourself (unless you think 
> that's best). Please use that frustration to open a specific discussion 
> on that particular API. Ideally, we would agree on a reasonable API 
> (maybe the old one, maybe a better new one), implement it, and document 
> it. Just remember that we can and should discuss decisions.
> 

That's why I started this thread.  As I've been going through this round
of upgrade fixes, I kept thinking 'there's got to be a better way'.  One
thing I can do to help is to write test cases for the things I'm having
to fix in my own code where it seems there should be a standard API.
You could then look at each one and say 'yes, that looks right' or
'no... here's how it should be done instead', and the result could live
on in the tests so that, hopefully, that particular issue wouldn't recur
in future releases.
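As a sketch of what such a contributed test could look like, here is a
hypothetical 'recommended usage' test.  The class name is made up, and
the collection protocol shown is just standard Smalltalk standing in for
whichever API actually needed pinning down:

```smalltalk
"Hypothetical test pinning down a recommended usage pattern,
 marked so it is findable via senders of #stableAPI."
ApiCollectionTest >> testOrderedCollectionBasicUsage
	| c |
	self flag: #stableAPI.
	c := OrderedCollection new.
	c add: 1; add: 2.
	self assert: c size = 2.
	self assert: (c includes: 1)
```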

> Cheers,
> Juan Vuletich
> 
