<div dir="ltr">Hi Hannes,<div><br></div><div>You asked for an example of a Feature Test. I assume, but may be wrong, that you want this to understand better what the intention of such a test is. Of course I can only answer form my own imagination.</div><div><br></div><div>1 It should be something that provides a definite answer to a question about the image/system, but where there is not necessarily a good or bad outcome (as there is with a unit test; pass=good, fail or error=bad). So for instance it could test if there is any class loaded that implements AnsiDateAndTime protocol (or at least part of it and how much). Or it could test if the number of classes is less than the number of instance variables. Or it could test how many classes have a class comment. It could test how many literals may be in a method. Still, it seems logical to want to define Features in such a way that the presence of the feature is a good situation (better or no worse than the absence). </div><div><br></div><div>2 An essential feature of a Feature Test is that it should be able to load and run in in any image. Its no use of course if you want to figure something out and you can't load and run the test that determines what you want to know. So The Feature test must be able to compile and run source code (it could conceivably interpret syntax trees but that seems rather contrived given that the compiler is normally available in any image). And while doing that it should be able to suppress any problems.</div><div><br></div><div>3 There should be some way to (build Feature Test Suites suites and/or) run all the loaded Feature tests together, silently, and receive feedback only at the end. Most simply in the form of a document/text that shows how it went. Optionally using some kind of GUI, viz. SUnitBrowser. The default FeatureTestSuite could simply run all loaded Feature Tests, and you could subclass it if you wanted to define specific Suites.</div><div><br></div><div>I think it would be great to have a FeatureTest class that you could subclass to run tests on how much of the Ansi standard is implemented in the image and how accurately it follows the semantics prescribed. The original Camp Smalltalk attempts at such a test suite were based on SUnit as the test framework, but that didn't work as it is essentially unsuited to loading tests about code that may not be laoded.</div><div><br></div><div>Cheers, Peter </div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jul 31, 2015 at 8:29 AM, H. Hirzel <span dir="ltr"><<a href="mailto:hannes.hirzel@gmail.com" target="_blank">hannes.hirzel@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Could we do examples of such Feature Tests?<br>
<br>
Class String is a good candidate to start with<br>
<br>
Reasons<br>
a) Is used everywhere<br>
b) Interface is non-trivial<br>
String selectors size: 166 (in Cuis), 338 (in Pharo), 331 (in Squeak)<br>
--> there are issues when porting.<br>
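<br>
A first sketch of one such test, borrowing the run:expect: style from<br>
Peter's earlier mail quoted below (FeatureStringTest and the framework<br>
selectors are hypothetical; #substrings is present in all three dialects):<br>
<br>
FeatureStringTest class >> testSubstrings<br>
    self expectClass: 'String'.<br>
    self run: '''a b c'' substrings size' expect: 3<br>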
<br>
We might want to have a StringExtensions package<br>
<br>
--Hannes<br>
<div class="HOEnZb"><div class="h5"><br>
On 7/22/15, Phil (list) <<a href="mailto:pbpublist@gmail.com">pbpublist@gmail.com</a>> wrote:<br>
> On Wed, 2015-07-22 at 13:29 +0200, Peter van Rooijen wrote:<br>
>> I'm thinking about some features (pun not intentional) of this Feature<br>
>> Test framework:<br>
>><br>
>><br>
>> 1. It's reasonable to assume that many tests will depend on something<br>
>> else working, but that cannot be counted on, and<br>
>> we would not want to repeat testing for that, nor run<br>
>> into it failing all the time and flooding our feedback.<br>
>><br>
><br>
> Why not? I agree that these would be a different category of test in<br>
> that the dependencies would be more complex and often dependent on more<br>
> than one package, but why would their functioning be considered<br>
> optional? If they fail, shouldn't they either be addressed right away<br>
> or explicitly deprecated? If you make the tests easy to ignore/forget<br>
> about, they will be. If the functionality they are testing can't be<br>
> counted on, it won't be used.<br>
><br>
> If your thinking is that these would be tests that are part of package X<br>
> but might rely on package Y which might not be loaded yet, why not<br>
> instead just make the tests part of package Z which depends on package X<br>
> and Y? The whole idea is that these are not unit tests in that sense...<br>
> have them live where ever it is appropriate.<br>
><br>
>><br>
>> 1a. So it would make sense to add a #precondition method to each<br>
>> Feature Test class.<br>
>><br>
>><br>
>> FeatureAnsiArray<br>
>> class<br>
>> precondition<br>
>><br>
>><br>
>> self run: 'Array' "i.e. the global Array must be present"<br>
>><br>
>><br>
>> Then, only if the precondition for the class is satisfied will the<br>
>> test methods be executed. So if most of them start with<br>
>><br>
>><br>
>> 'Array new …' then they would all fail anyway so we don't need to test<br>
>> them.<br>
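>><br>
>> A runner sketch for this check (recordNotApplicable: and runTest:in:<br>
>> are made-up names; on:do: and Error are standard):<br>
>><br>
>> FeatureTestRunner >> run: aFeatureTestClass<br>
>>     "Run the class-side test methods only when the precondition<br>
>>     holds; otherwise record the whole class as not applicable."<br>
>>     [aFeatureTestClass precondition]<br>
>>         on: Error<br>
>>         do: [:ex | ^self recordNotApplicable: aFeatureTestClass].<br>
>>     (aFeatureTestClass class selectors<br>
>>         select: [:each | each beginsWith: 'test'])<br>
>>         do: [:each | self runTest: each in: aFeatureTestClass]<br>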
>><br>
>><br>
>> 2. You would want to be able to verify that in a particular object<br>
>> you can access a particular variable.<br>
>><br>
>><br>
>> so in the test method you would write:<br>
>><br>
>><br>
>> FeatureTest1<br>
>> class<br>
>> test1<br>
>><br>
>> self setContext: OrderedCollection new<br>
>><br>
>><br>
>> self run: 'elements' "determine if the inst var elements is<br>
>> present in a new OrderedCollection"<br>
>><br>
>><br>
>> self run: 'elements == nil' expect: false<br>
>><br>
>><br>
>> self run: 'elements isOrderedCollection' expect: true<br>
>><br>
>><br>
>> Let's say the test runner would continue testing even if something<br>
>> failed, e.g. the array is called 'array', not 'elements'. Then we already<br>
>> know that the following expressions will fail,<br>
>><br>
>><br>
>> so we might instead write:<br>
>><br>
>><br>
>> self run: 'elements' ifFail: [^self]<br>
>><br>
>><br>
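>> One way run:expect: itself might work (a sketch: it assumes a<br>
>> Squeak-style Compiler evaluate:for:logged:, and the context and<br>
>> record... selectors are made up):<br>
>><br>
>> FeatureTest class >> run: sourceString expect: expectedValue<br>
>>     "Compile and evaluate sourceString against the context object,<br>
>>     trapping compile-time and run-time problems alike."<br>
>>     | result |<br>
>>     result := [Compiler evaluate: sourceString for: self context logged: false]<br>
>>         on: Error<br>
>>         do: [:ex | ^self recordFailure: sourceString reason: ex description].<br>
>>     result = expectedValue<br>
>>         ifTrue: [self recordSuccess: sourceString]<br>
>>         ifFalse: [self recordFailure: sourceString reason: result printString]<br>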
>><br>
>> 3. Instead of implicitly testing for a global using run:<br>
>> 'NameOfTheGlobal' or for a class var using setContext: and then<br>
>> run: 'NameOfTheClassVar', there could be convenience methods for<br>
>><br>
>><br>
>> self expectGlobal: 'NameOfTheGlobal' "argument may be given as<br>
>> a Symbol as well"<br>
>><br>
>><br>
>> self expectClass: 'NameOfTheClass' "additionally verified that<br>
>> the global holds a class"<br>
>><br>
>><br>
>> self expectSharedVariable: 'Foo' inClass: 'Bar'<br>
>><br>
>><br>
>> This would make for nicer feedback, since the expectation is made<br>
>> clearer.<br>
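>><br>
>> These could be thin wrappers around run:expect:, for example<br>
>> (sketch, with the same hypothetical record... machinery underneath):<br>
>><br>
>> FeatureTest class >> expectGlobal: aName<br>
>>     "Pass when a global of that name is bound in the system."<br>
>>     self run: 'Smalltalk includesKey: #', aName expect: true<br>
>><br>
>> FeatureTest class >> expectClass: aName<br>
>>     "Additionally verify that the global holds a class."<br>
>>     self expectGlobal: aName.<br>
>>     self run: '(Smalltalk at: #', aName, ') isKindOf: Class' expect: true<br>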
><br>
> I went the other way when I did the ApiFile tests in that it didn't seem<br>
> terribly important to use most of the testing framework capabilities<br>
> (other than the overall pass/fail aspect to keep the initial PoC as<br>
> simple as possible). So they are simply small snippets of code that<br>
> performed the desired task but didn't care where it failed (if it<br>
> failed): the failure to successfully complete the task would be the<br>
> indicator that there was a problem and we would know that either<br>
> something being depended on had broken and needed to be fixed or that<br>
> the test needed to be updated/overhauled to represent the new way of<br>
> accomplishing the task.<br>
><br>
> My thinking was that as we started to build up a number of these, we<br>
> might start to see common breakage patterns and then we could decide<br>
> whether or not to handle them more explicitly (whether using the<br>
> existing test framework capabilities, creating a new one, etc.) Trying<br>
> to engineer it up front didn't seem like a great idea not knowing what<br>
> common failure states would look like yet.<br>
><br>
>><br>
>><br>
>> Okay just 2 more cents!<br>
>><br>
><br>
> Mine as well. This is a worthwhile discussion/exercise IMO as we need<br>
> to get to a common understanding of what we are doing here.<br>
><br>
>><br>
>> Cheers, Peter<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> On Wed, Jul 22, 2015 at 12:57 PM, Peter van Rooijen<br>
>> <<a href="mailto:peter@aba-instituut.nl">peter@aba-instituut.nl</a>> wrote:<br>
>> Hi Ken,<br>
>><br>
>> On Wed, Jul 22, 2015 at 12:33 AM, Ken.Dickey<br>
>> <<a href="mailto:Ken.Dickey@whidbey.com">Ken.Dickey@whidbey.com</a>> wrote:<br>
>> On Tue, 21 Jul 2015 07:59:47 -0700<br>
>> Peter van Rooijen <<a href="mailto:peter@aba-instituut.nl">peter@aba-instituut.nl</a>> wrote:<br>
>><br>
>> >> I was thinking: "What should a Feature Test be?".<br>
>><br>
>> I tend to think of a hierarchy of requirements.<br>
>> Perhaps something like:<br>
>><br>
>> Feature requireAll: #( <feature name>.. ).<br>
>> Classes requireAll: #( <class name>.. ).<br>
>> Methods requireAll: #( <selector name>.. ).<br>
>> MethodsForKind class: <class name> requireAll:<br>
>> #( <selectorName>.. ).<br>
>> Tests requireAllPass: #( <unit test name> ).<br>
>><br>
>><br>
>> Yeah, that's not at all what I'm thinking :). I'm thinking of<br>
>> something that is automatically runnable, like a unit test. It<br>
>> tests something, like a unit test. But if the test does not<br>
>> pass, it is NOT a bug, unlike with a unit test. It's just that<br>
>> we would like to know about it. Also, with unit tests there is<br>
>> the assumption that the code that represents the test is<br>
>> always compilable, with feature tests that cannot be assumed,<br>
>> so there would need to be protection against that. Of course<br>
>> we want to be able to load the feature tests in any condition,<br>
>> so putting it in the form of source text and compiling that is<br>
>> a possibility. The fact that that makes it slower than unit<br>
>> tests is not a problem, because unlike unit tests, we don't<br>
>> have to run them continuously.<br>
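>><br>
>> The protection needed is small. Something along these lines might<br>
>> do (Compiler evaluate: is the standard Squeak/Cuis entry point;<br>
>> sourceOfOneTest and FeatureTestResult are made up):<br>
>><br>
>> | outcome |<br>
>> outcome := [Compiler evaluate: sourceOfOneTest]<br>
>>     on: Error<br>
>>     do: [:ex | FeatureTestResult failed: ex description].<br>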
>><br>
>> The Feature class lets us require named (macro)<br>
>> Features with a version check. I prefer that<br>
>> requirements at this level actually load the packages<br>
>> required and only report failure if that is not<br>
>> possible, although we could have a "preflight" version<br>
>> which just checks without loading any featured<br>
>> packages.<br>
>><br>
>><br>
>> I see. The thing I was thinking about merely reports about the<br>
>> state of a system (of code), it does not try to configure that<br>
>> in any way.<br>
>><br>
>><br>
>> APIs are basically "protocols", which in the absence<br>
>> of symbolic execution means checking that classes and<br>
>> specific method selectors exist, or more specifically,<br>
>> method selectors are applicable to specific KindOf:<br>
>> classes.<br>
>><br>
>><br>
>> Well, in my mind some semantics could be expected (and tested<br>
>> for). For instance I might be interested if there is a<br>
>> DateTime class in the image and if it implements the ANSI<br>
>> DateAndTime protocol (not sure if there is one named that). Or<br>
>> perhaps another class that does that. These tests can test<br>
>> some actual semantics no problem.<br>
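>><br>
>> For example (a sketch; the selector list would come from the<br>
>> standard, and the FeatureTest API is the hypothetical one from<br>
>> earlier in this thread):<br>
>><br>
>> FeatureAnsiDateAndTime class >> testCoreProtocol<br>
>>     self expectClass: 'DateAndTime'.<br>
>>     #('year' 'month' 'hour' 'minute' 'second') do: [:sel |<br>
>>         self run: 'DateAndTime now respondsTo: #', sel expect: true]<br>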
>><br>
>><br>
>> Perhaps some of you remember that Camp Smalltalk started with<br>
>> Ralph Johnson's desire to build an ANSI test suite. The way<br>
>> people went about it (extension methods to SUnit? I don't<br>
>> really remember) was wrong and could not possibly work (in my<br>
>> opinion anyway), but I could not convince a lot of people and<br>
>> such a test suite was never written. But with Feature Tests I<br>
>> think we could come a long way.<br>
>><br>
>> Further, some Unit Tests may be required to pass to<br>
>> ensure compliance with some specification.<br>
>><br>
>><br>
>> Well, except that the tests would not be unit tests in the<br>
>> strictest sense. But semantics, not merely interface, can be<br>
>> tested for sure.<br>
>><br>
>> We should be able to automate at least some of this<br>
>><br>
>><br>
>> Automate the running of the feature tests? Of course.<br>
>><br>
>> including a first pass of generating the test sets,<br>
>> which could then be pruned by hand as required.<br>
>><br>
>><br>
>> That I don't see happening. You test what YOU think is<br>
>> important to be sure of. No machine can decide/calculate that<br>
>> for you. Perhaps I'm misunderstanding you.<br>
>><br>
>><br>
>> Cheers, Peter<br>
>><br>
>><br>
>> $0.02,<br>
>> -KenD<br>
>><br>
>><br>
>><br>
><br>
><br>
><br>
><br>
<br>
_______________________________________________<br>
Cuis mailing list<br>
<a href="mailto:Cuis@jvuletich.org">Cuis@jvuletich.org</a><br>
<a href="http://jvuletich.org/mailman/listinfo/cuis_jvuletich.org" rel="noreferrer" target="_blank">http://jvuletich.org/mailman/listinfo/cuis_jvuletich.org</a><br>
</div></div></blockquote></div><br></div>