[Cuis] (Rescued) Re: Fwd: Re: DIRECT version number

H. Hirzel hannes.hirzel at gmail.com
Fri Jul 31 01:29:54 CDT 2015


Could we do examples of such Feature Tests?

Class String is a good candidate to start with.

Reasons
a) It is used everywhere.
b) Its interface is non-trivial:
        String has 166 selectors in Cuis,
                   338 in Pharo and
                   331 in Squeak.
        --> so there are issues when porting.

We might want to have a StringExtensions package
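
To make this concrete, here is a rough sketch of what a String feature
test could look like, using the hypothetical FeatureTest protocol
discussed below (expressions are passed as source strings so the test
still loads even if String were missing or incomplete):

FeatureTestString
    class
        precondition

        self run: 'String'

        test1

        self run: '''foo'', ''bar''' expect: 'foobar'.
        self run: '''abc'' asUppercase' expect: 'ABC'.
        self run: '''hello'' size' expect: 5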

--Hannes

On 7/22/15, Phil (list) <pbpublist at gmail.com> wrote:
> On Wed, 2015-07-22 at 13:29 +0200, Peter van Rooijen wrote:
>> I'm thinking about some features (pun not intentional) of this Feature
>> Test framework:
>>
>>
>> 1. It's reasonable to assume that many tests will depend on something
>> else working, but that cannot be counted on, and we would not want to
>> repeat testing for that, nor run into it failing all the time and
>> flooding our feedback.
>>
>
> Why not?  I agree that these would be a different category of test in
> that the dependencies would be more complex and often dependent on more
> than one package, but why would their functioning be considered
> optional?  If they fail, shouldn't they either be addressed right away
> or explicitly deprecated?  If you make the tests easy to ignore/forget
> about, they will be.  If the functionality they are testing can't be
> counted on, it won't be used.
>
> If your thinking is that these would be tests that are part of package X
> but might rely on package Y which might not be loaded yet, why not
> instead just make the tests part of package Z which depends on packages X
> and Y?  The whole idea is that these are not unit tests in that sense...
> have them live wherever it is appropriate.
>
>>
>> 1a. So it would make sense to add a #precondition method to each
>> Feature Test class.
>>
>>
>> FeatureAnsiArray
>>     class
>>         precondition
>>
>>
>>         self run: 'Array' "i.e. the global Array must be present"
>>
>>
>> Then the test methods will be executed only if the precondition for
>> the class is satisfied. So if most of them start with
>>
>>
>> 'Array new …', they would all fail anyway, so we don't need to test
>> them.
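>>
>> A sketch of how a runner might use this (all names here are
>> hypothetical; #preconditionSatisfied would run the class-side
>> #precondition and answer whether every run: in it passed):
>>
>> FeatureTestRunner
>>     runClass: aFeatureTestClass
>>
>>         "Skip the whole class when its precondition fails, instead
>>         of reporting the same failure once per test method."
>>         aFeatureTestClass preconditionSatisfied
>>             ifFalse: [^self recordSkipped: aFeatureTestClass].
>>         aFeatureTestClass testSelectors do: [:each |
>>             self runTest: each inClass: aFeatureTestClass]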
>>
>>
>> 2. You would want to be able to verify that in a particular object you
>> can access a particular variable.
>>
>>
>> so in the test method you would write:
>>
>>
>> FeatureTest1
>>     class
>>         test1
>>
>>         self setContext: OrderedCollection new
>>
>>
>>         self run: 'elements' "determine if the inst var elements is
>> present in a new OrderedCollection"
>>
>>
>>         self run: 'elements == nil' expect: false
>>
>>
>>         self run: 'elements isOrderedCollection' expect: true
>>
>>
>> Let's say the test runner would continue testing even if something
>> failed, e.g. the instance variable is called array, not elements. Then
>> we already know that the following expressions will fail,
>>
>>
>> so we might instead write:
>>
>>
>>         self run: 'elements' ifFail: [^self]
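>>
>> A possible shape for run:expect: itself, just as a sketch: compile and
>> evaluate the source string, and record (rather than signal) any
>> problem. Compiler evaluate: is assumed to be available as in
>> Squeak/Cuis, recordFailure:reason: is a hypothetical reporting hook,
>> and evaluating against the object given to setContext: (e.g. to reach
>> inst vars like elements) would need a receiver-aware compile, which is
>> glossed over here.
>>
>> run: sourceString expect: expectedValue
>>
>>         | actual |
>>         actual := [Compiler evaluate: sourceString]
>>             on: Error
>>             do: [:e | ^self recordFailure: sourceString reason: e messageText].
>>         actual = expectedValue ifFalse: [
>>             self recordFailure: sourceString reason: 'unexpected value']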
>>
>>
>>
>>  3. Instead of implicitly testing for a global using run:
>> 'NameOfTheGlobal', or for a class var using setContext: and then
>> run: 'NameOfTheClassVar', there could be convenience methods such as:
>>
>>
>>         self expectGlobal: 'NameOfTheGlobal' "argument may be given as
>> a Symbol as well"
>>
>>
>>         self expectClass: 'NameOfTheClass' "additionally verifies that
>> the global holds a class"
>>
>>
>>         self expectSharedVariable: 'Foo' inClass: 'Bar'
>>
>>
>> This would make for nicer feedback, since the expectation is made
>> clearer.
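>>
>> As a sketch, these could stay very thin (Smalltalk at:ifAbsent: is
>> assumed to answer globals as in Squeak/Cuis; recordFailure: is again a
>> hypothetical reporting hook):
>>
>> expectGlobal: aName
>>
>>         (Smalltalk at: aName asSymbol ifAbsent: [nil]) isNil ifTrue: [
>>             self recordFailure: 'missing global ', aName]
>>
>> expectClass: aName
>>
>>         ((Smalltalk at: aName asSymbol ifAbsent: [nil]) isKindOf: Class)
>>             ifFalse: [self recordFailure: aName, ' is not a class']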
>
> I went the other way when I did the ApiFile tests, in that it didn't seem
> terribly important to use most of the testing framework capabilities
> (other than the overall pass/fail aspect, to keep the initial PoC as
> simple as possible).  So they are simply small snippets of code that
> performed the desired task but didn't care where they failed (if they
> failed): the failure to successfully complete the task would be the
> indicator that there was a problem, and we would know that either
> something being depended on had broken and needed to be fixed, or that
> the test needed to be updated/overhauled to represent the new way of
> accomplishing the task.
>
> My thinking was that as we started to build up a number of these, we
> might start to see common breakage patterns, and then we could decide
> whether or not to handle them more explicitly (whether using the
> existing test framework capabilities, creating a new one, etc.).  Trying
> to engineer it up front didn't seem like a great idea, not knowing yet
> what common failure states would look like.
>
>>
>>
>> Okay just 2 more cents!
>>
>
> Mine as well.  This is a worthwhile discussion/exercise IMO as we need
> to get to a common understanding of what we are doing here.
>
>>
>> Cheers, Peter
>>
>>
>>
>>
>>
>> On Wed, Jul 22, 2015 at 12:57 PM, Peter van Rooijen
>> <peter at aba-instituut.nl> wrote:
>>         Hi Ken,
>>
>>         On Wed, Jul 22, 2015 at 12:33 AM, Ken.Dickey
>>         <Ken.Dickey at whidbey.com> wrote:
>>                 On Tue, 21 Jul 2015 07:59:47 -0700
>>                 Peter van Rooijen <peter at aba-instituut.nl> wrote:
>>
>>                 >> I was thinking: "What should a Feature Test be?".
>>
>>                 I tend to think of a hierarchy of requirements.
>>                 Perhaps something like:
>>
>>                  Feature requireAll: #( <feature name>.. ).
>>                  Classes requireAll: #( <class name>.. ).
>>                  Methods requireAll: #( <selector name>.. ).
>>                  MethodsForKind class: <class name> requireAll:
>>                 #( <selectorName>.. ).
>>                  Tests requireAllPass: #( <unit test name> ).
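>>
>>                 A sketch of the simplest of these, just to give the
>>                 flavour (the Classes facade and Smalltalk includesKey:
>>                 as a globals check are assumptions here):
>>
>>                 Classes class >> requireAll: classNames
>>                         "Fail loudly for every named class that is
>>                         not present in the image."
>>                         classNames do: [:each |
>>                                 (Smalltalk includesKey: each asSymbol)
>>                                         ifFalse: [self error: 'missing class ', each]]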
>>
>>
>>         Yeah, that's not at all what I'm thinking :). I'm thinking of
>>         something that is automatically runnable, like a unit test. It
>>         tests something, like a unit test. But if the test does not
>>         pass, it is NOT a bug, unlike with a unit test. It's just that
>>         we would like to know about it. Also, with unit tests there is
>>         the assumption that the code that represents the test is
>>         always compilable; with feature tests that cannot be assumed,
>>         so there would need to be protection against that. Of course
>>         we want to be able to load the feature tests in any condition,
>>         so putting them in the form of source text and compiling that
>>         is a possibility. The fact that that makes them slower than
>>         unit tests is not a problem, because unlike unit tests, we
>>         don't have to run them continuously.
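>>
>>         A minimal sketch of that protection, with Compiler evaluate:
>>         assumed as in Squeak/Cuis and recordFailure: standing in for
>>         whatever the reporting ends up being: keep each test as
>>         source text and evaluate it under a broad handler, so a
>>         missing class or selector becomes a recorded failure rather
>>         than a walkback.
>>
>>                 [Compiler evaluate: sourceText]
>>                         on: Error
>>                         do: [:e | self recordFailure: e messageText]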
>>
>>                 The Feature class lets us require named (macro)
>>                 Features with a version check.  I prefer that
>>                 requirements at this level actually load the packages
>>                 required and only report failure if that is not
>>                 possible, although we could have a "preflight" version
>>                 which just checks without loading any featured
>>                 packages.
>>
>>
>>         I see. The thing I was thinking about merely reports on the
>>         state of a system (of code); it does not try to configure it
>>         in any way.
>>
>>
>>                 APIs are basically "protocols", which in the absence
>>                 of symbolic execution means checking that classes and
>>                 specific method selectors exist, or more specifically,
>>                 that method selectors are applicable to specific
>>                 KindOf: classes.
>>
>>
>>         Well, in my mind some semantics could be expected (and tested
>>         for). For instance, I might be interested in whether there is
>>         a DateTime class in the image and whether it implements the
>>         ANSI DateAndTime protocol (not sure if there is one named
>>         that), or perhaps another class that does that. These tests
>>         can test some actual semantics, no problem.
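>>
>>         For instance, a feature test in the style sketched elsewhere
>>         in this thread could check both presence and a little
>>         behaviour (class and selector names are only illustrative):
>>
>>         FeatureAnsiDateAndTime
>>             class
>>                 precondition
>>
>>                 self run: 'DateAndTime'
>>
>>                 test1
>>
>>                 self run: 'DateAndTime now isKindOf: DateAndTime' expect: true.
>>                 self run: 'DateAndTime now <= DateAndTime now' expect: true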
>>
>>
>>         Perhaps some of you remember that Camp Smalltalk started with
>>         Ralph Johnson's desire to build an ANSI test suite. The way
>>         people went about it (extension methods to SUnit? I don't
>>         really remember) was wrong and could not possibly work (in my
>>         opinion anyway), but I could not convince a lot of people and
>>         such a test suite was never written. But with Feature Tests I
>>         think we could come a long way.
>>
>>                 Further, some Unit Tests may be required to pass to
>>                 ensure compliance with some specification.
>>
>>
>>         Well, except that the tests would not be unit tests in the
>>         strictest sense. But semantics, not merely interface, can be
>>         tested for sure.
>>
>>                 We should be able to automate at least some of this
>>
>>
>>         Automate the running of the feature tests? Of course.
>>
>>                 including a first pass of generating the test sets,
>>                 which could then be pruned by hand as required.
>>
>>
>>         That I don't see happening. You test what YOU think is
>>         important to be sure of. No machine can decide/calculate that
>>         for you. Perhaps I'm misunderstanding you.
>>
>>
>>         Cheers, Peter
>>
>>
>>                 $0.02,
>>                 -KenD
>



