[Cuis] (Rescued) Re: Fwd: Re: DIRECT version number

Peter van Rooijen peter at aba-instituut.nl
Wed Jul 22 06:29:05 CDT 2015


I'm thinking about some features (pun not intentional) of this Feature Test
framework:

1. It's reasonable to assume that many tests will depend on something else
working, but that something cannot be counted on. We would not want to
repeat that test everywhere, nor have it fail over and over and flood our
feedback.

1a. So it would make sense to add a #precondition method to each Feature
Test class.

FeatureAnsiArray
    class
        precondition

        self run: 'Array' "i.e. the global Array must be present"

Then the test methods are executed only if the precondition for the class
is satisfied. If most of them start with

'Array new …' they would all fail anyway, so we don't need to run them.
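A runner honoring such preconditions could be sketched like this (just a
sketch; FeatureTestRunner, #reportSkipped:, #testSelectors and
#runTest:of: are hypothetical names I'm making up to illustrate the flow):

FeatureTestRunner
    class
        run: aFeatureTestClass

        "Evaluate the class-side precondition first; if it raises an
        error, skip the whole class instead of running every test
        method into the same failure."
        [aFeatureTestClass precondition]
            on: Error
            do: [:ex | ^self reportSkipped: aFeatureTestClass].
        aFeatureTestClass testSelectors
            do: [:each | self runTest: each of: aFeatureTestClass]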

2. You would want to be able to verify that a particular variable is
accessible in a particular object.

so in the test method you would write:

FeatureTest1
    class
        test1

        self setContext: OrderedCollection new

        self run: 'elements' "determine if the inst var elements is present
in a new OrderedCollection"

        self run: 'elements == nil' expect: false

        self run: 'elements isOrderedCollection' expect: true

Let's say the test runner would continue testing even if something failed,
e.g. the instance variable is called array, not elements. Then we already
know that the following expressions will fail,

so we might instead write:

        self run: 'elements' ifFail: [^self]
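One plausible implementation of run:ifFail: is to compile and evaluate the
string, treating any compilation or runtime error as a failure (a sketch:
Compiler evaluate:for:logged: is the Squeak-style API, and context is
assumed to hold the receiver installed by setContext:):

    run: aString ifFail: aBlock

    "Compile and evaluate aString against the current context.
    Answer the result, or the value of aBlock if compiling or
    evaluating fails for any reason."
    ^[Compiler evaluate: aString for: context logged: false]
        on: Error
        do: [:ex | aBlock value]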

3. Instead of implicitly testing for a global using run: 'NameOfTheGlobal',
or for a class var using setContext: and then run: 'NameOfTheClassVar',
there could be convenience methods:

        self expectGlobal: 'NameOfTheGlobal' "argument may be given as a
Symbol as well"

        self expectClass: 'NameOfTheClass' "additionally verified that the
global holds a class"

        self expectSharedVariable: 'Foo' inClass: 'Bar'

This would make for nicer feedback, since the expectation is made clearer.
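Those convenience methods could be thin wrappers over run: and run:expect:
(again a sketch; the selectors and the Smalltalk at: lookup are my
assumptions about the framework, not existing code):

    expectGlobal: aName

    "Clearer feedback than a bare run: of the global's name."
    ^self run: aName asString

    expectClass: aName

    "Additionally verify that the global holds a class."
    self expectGlobal: aName.
    ^self run: aName asString, ' isKindOf: Class' expect: true

    expectSharedVariable: varName inClass: className

    "Check that the class defines the shared (class) variable."
    self expectClass: className.
    ^self run: '(Smalltalk at: #', className,
            ') classPool includesKey: #', varName
        expect: true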

Okay just 2 more cents!

Cheers, Peter



On Wed, Jul 22, 2015 at 12:57 PM, Peter van Rooijen <peter at aba-instituut.nl>
wrote:

> Hi Ken,
>
> On Wed, Jul 22, 2015 at 12:33 AM, Ken.Dickey <Ken.Dickey at whidbey.com>
> wrote:
>
>> On Tue, 21 Jul 2015 07:59:47 -0700
>> Peter van Rooijen <peter at aba-instituut.nl> wrote:
>>
>> >> I was thinking: "What should a Feature Test be?".
>>
>> I tend to think of a hierarchy of requirements.  Perhaps something like:
>>
>>  Feature requireAll: #( <feature name>.. ).
>>  Classes requireAll: #( <class name>.. ).
>>  Methods requireAll: #( <selector name>.. ).
>>  MethodsForKind class: <class name> requireAll: #( <selectorName>.. ).
>>  Tests requireAllPass: #( <unit test name> ).
>>
>
> Yeah, that's not at all what I'm thinking :). I'm thinking of something
> that is automatically runnable, like a unit test. It tests something, like
> a unit test. But if the test does not pass, it is NOT a bug, unlike with a
> unit test. It's just that we would like to know about it. Also, with unit
> tests there is the assumption that the code representing the test is
> always compilable; with feature tests that cannot be assumed, so there
> would need to be protection against that. Of course we want to be able to
> load the feature tests in any condition, so putting them in the form of
> source text and compiling that is a possibility. The fact that this makes
> them slower than unit tests is not a problem, because unlike unit tests,
> we don't have to run them continuously.
>
>
>> The Feature class lets us require named (macro) Features with a version
>> check.  I prefer that requirements at this level actually load the packages
>> required and only report failure if that is not possible, although we could
>> have a "preflight" version which just checks without loading any featured
>> packages.
>>
>
> I see. The thing I was thinking about merely reports on the state of a
> system (of code); it does not try to configure it in any way.
>
>
>>
>> API's are basically "protocols", which in the absence of symbolic
>> execution means checking that classes and specific method selectors exist,
>> or more specifically, method selectors are applicable to specific KindOf:
>> classes.
>>
>
> Well, in my mind some semantics could be expected (and tested for). For
> instance I might be interested if there is a DateTime class in the image
> and if it implements the ANSI DateAndTime protocol (not sure if there is
> one named that), or perhaps another class that does. Such tests can
> check actual semantics without a problem.
>
> Perhaps some of you remember that Camp Smalltalk started with Ralph
> Johnson's desire to build an ANSI test suite. The way people went about it
> (extension methods to SUnit? I don't really remember) was wrong and could
> not possibly work (in my opinion anyway), but I could not convince a lot of
> people and such a test suite was never written. But with Feature Tests I
> think we could come a long way.
>
>>
>> Further, some Unit Tests may be required to pass to ensure compliance
>> with some specification.
>>
>
> Well, except that the tests would not be unit tests in the strictest
> sense. But semantics, not merely interface, can be tested for sure.
>
>>
>> We should be able to automate at least some of this
>
>
> Automate the running of the feature tests? Of course.
>
>
>> including a first pass of generating the test sets, which could then be
>> pruned by hand as required.
>>
>
> That I don't see happening. You test what YOU think is important to be
> sure of. No machine can decide/calculate that for you. Perhaps I'm
> misunderstanding you.
>
> Cheers, Peter
>
>
>>
>> $0.02,
>> -KenD
>>
>
>
> _______________________________________________
> Cuis mailing list
> Cuis at jvuletich.org
> http://jvuletich.org/mailman/listinfo/cuis_jvuletich.org
>
>

