Fwd: Extreme Programming Digest Number 8684

> Messages
>
> 1.
>
> Incremental design material (was YAGNI Football Analogy)
>
> Posted by: "Kim Gräsman" kim.grasman@gmail.com kimgrasman
>
> Fri May 9, 2008 5:14 am (PDT)
>
> Hello all,
>
> It seems incremental design has become a theme (if implicit) on the
> list these past weeks -- does anyone have good references to reading
> on incremental design? Kent's note below whetted my appetite...
>
> Ron's C# Adventure book is finally in the mail headed for my house,
> but it'd be interesting to see if there's more material available.
>
> Thanks,
> - Kim
>
> On Wed, May 7, 2008 at 6:22 PM, Kent Beck <kentb@earthlink.net> wrote:
>> <snip>
>> I don't know if that was helpful, but I have design-y ideas swirling
>> around
>> in my head as I work on the incremental design material, so it came out
>> easily. Please let us know how your situation came out.
>
> Back to top
> Reply to sender | Reply to group | Reply via web post
> Messages in this topic (1)
> 2.1.
>
> Re: Unit Testing Question [ some explanations & a better example]
>
> Posted by: "Wilson, Michael" michael.wilson@itg.com madwilliamflint
>
> Fri May 9, 2008 7:06 am (PDT)
>
> I really love the idea of Selenium and what it appears to be able to do.
> But I have to confess, I've had some significant trouble finding
> meaningfully rich documentation, which has stalled me almost completely.
>
> -----Original Message-----
> From: extremeprogramming@yahoogroups.com
> [mailto:extremeprogramming@yahoogroups.com] On Behalf Of Jonathan
> Rasmusson
> Sent: Thursday, May 08, 2008 6:17 PM
> To: extremeprogramming@yahoogroups.com
> Subject: [XP] Re: Unit Testing Question [ some explanations & a better
> example]
>
> Have you seen or tried Selenium?
>
> http://selenium.openqa.org/
>
> I am using it and find it quite good for high-level smoke tests through
> web applications.
>
> It has a separate recorder plug-in (for Firefox) - so no programming.
>
> Cheers - Jonathan
>
> --- In extremeprogramming@yahoogroups.com, "Gary Brown" <glbrown@...>
> wrote:
>>
>> Hi, Steve,
>>
>> ----- Original Message -----
>> From: "Steven Gordon" <sgordonphd@...>
>> To: <extremeprogramming@yahoogroups.com>
>> Sent: Wednesday, May 07, 2008 8:10 PM
>> Subject: Re: [XP] Re: Unit Testing Question [ some explanations & a
>> better example]
>>
>>
>> > Gary,
>> >
>> > I would generally recommend the book "Fit for Developing Software:
>> > Framework for Integrated Tests" by Mugridge and Cunningham. The
>> > standard approach is to "test below the GUI" to create executable
>> > specifications of the business rules that are independent of
>> > presentation and workflow. This tends to promote a more flexible
>> > architecture that will allow the GUI to evolve independently from
>> > the generally more stable business rules implementation. Sometimes,
>> > it can be a challenge when the customer sees the GUI as the
>> > application.
>>
>> I agree with all of the above.
>>
>> >
>> > Perhaps I can provide more specific help if you can tell me more
>> > about the following:
>> > - What made this approach successful for your data conversion
>> >   project?
>> > - How are the projects different?
>> > - What specific problems are you encountering when you try to do the
>> > same thing for your web project?
>>
>> The data conversion process is the same thing over and over. Read the
>> input file, convert to our format, clean it up, write the output
>> file. We had a stable text document used to describe the
>> requirements. We pulled it into FitNesse, added test cases, and
>> presto, instant executable specification. They are a bit verbose for
>> my tastes, but they seem to work pretty well.
>>
>> The other projects are a wide variety of web applications. We insist
>> on testing through the GUI. The available tools are slow and klunky.
>> The customers don't like the underlying table formats.
>>
>> GB.
>>
>
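The "test below the GUI" approach Steven describes, which Gary applied to his data conversion work, can be sketched in miniature: a business rule implemented with no presentation code, exercised by a Fit-style table of examples. This is a hypothetical illustration (the discount rule, the names, and the example rows are all invented), not anyone's actual fixture code:

```python
# A business rule implemented "below the GUI": no web framework, no
# presentation code, so it can be exercised directly by a Fit-style
# table of examples.

def discount(order_total):
    """Business rule: 5% off orders of 100 or more, 10% off 500 or more."""
    if order_total >= 500:
        return round(order_total * 0.10, 2)
    if order_total >= 100:
        return round(order_total * 0.05, 2)
    return 0.0

# Fit-style executable specification: each row is (input, expected).
EXAMPLES = [
    (50, 0.0),
    (100, 5.0),
    (499, 24.95),
    (500, 50.0),
]

def run_examples(rule, examples):
    """Run the rule against each example row; return result rows with a
    pass/fail flag, the way a Fit table colors cells green or red."""
    return [(i, e, rule(i), rule(i) == e) for i, e in examples]

if __name__ == "__main__":
    for row in run_examples(discount, EXAMPLES):
        print(row)
```

Because nothing here touches a GUI, the same table can keep passing while the presentation layer is rewritten freely.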
> ------------------------------------
>
> To Post a message, send it to: extremeprogramming@eGroups.com
>
> To Unsubscribe, send a blank message to:
> extremeprogramming-unsubscribe@eGroups.com
>
> ad-free courtesy of objectmentor.com
>
> Yahoo! Groups Links
>
> This message is for the named person's use only. This communication is for
> informational purposes only and has been obtained from sources believed to
> be reliable, but it is not necessarily complete and its accuracy cannot be
> guaranteed. It is not intended as an offer or solicitation for the purchase
> or sale of any financial instrument or as an official confirmation of any
> transaction. Moreover, this material should not be construed to contain any
> recommendation regarding, or opinion concerning, any security. It may
> contain confidential, proprietary or legally privileged information. No
> confidentiality or privilege is waived or lost by any mistransmission. If
> you receive this message in error, please immediately delete it and all
> copies of it from your system, destroy any hard copies of it and notify the
> sender. You must not, directly or indirectly, use, disclose, distribute,
> print, or copy any part of this message if you are not the intended
> recipient. Any views expressed in this message are those of the individual
> sender, except where the message states otherwise and the sender is
> authorized to state them to be the views of any such entity.
>
> Securities products and services provided to Canadian investors are offered
> by ITG Canada Corp. (member CIPF and IDA), an affiliate of Investment
> Technology Group, Inc.
>
> ITG Inc. and/or its affiliates reserves the right to monitor and archive
> all electronic communications through its network.
>
> ITG Inc. Member NASD, SIPC
> -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
>
> 2.2.
>
> Re: Unit Testing Question [ some explanations & a better example]
>
> Posted by: "Brian Victor" homeusenet3@brianhv.org brianhvictor
>
> Fri May 9, 2008 7:16 am (PDT)
>
> Wilson, Michael wrote:
>> I really love the idea of Selenium and what it appears to be able to do.
>> But I have to confess, I've had some significant trouble finding
>> meaningfully rich documentation, which has stalled me almost completely.
>
> I'm not sure what qualifies as meaningfully rich, but most of what I've
> needed to know I've been able to figure out from these two pages:
>
> http://release.openqa.org/selenium-remote-control/0.9.2/doc/dotnet/Selenium.DefaultSeleniumMembers.html
> http://release.openqa.org/selenium-remote-control/0.9.2/doc/dotnet/Selenium.DefaultSelenium.html
>
> It took me a while to find those, so I thought I'd point them out. I'm
> using the .NET wrapper, so YMMV.
>
> --
> Brian
>
> 2.3.
>
> Re: Unit Testing Question [ some explanations & a better example]
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 8:40 am (PDT)
>
> On May 8, 2008, at 07:55 , Gary Brown wrote:
>> The other projects are a wide variety of web applications. We insist
>> on testing through the GUI. The available tools are slow and klunky.
>> The customers don't like the underlying table formats.
>>
> I get that a lot, too, but only on projects where I don't get to talk
> directly to the customer. Have you been able to find out why the
> customers don't like the table formats? When I have designed tests
> with my customers, they have been happy; and when I hear they aren't
> happy, it's been when I haven't been able to design tests with them. I
> note the correlation without claiming cause and effect, but I suspect
> the correlation is strong, and not just a coincidence.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 3a.
>
> Re: Unit Testing Queries and Stored Procedures
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 8:23 am (PDT)
>
> On May 8, 2008, at 08:24 , Christopher K. Joiner, Jr. wrote:
>> During our last Reflections, this week, we have discovered that when
>> our code is mostly table driven and requires mostly queries or stored
>> procedures to accomplish the task, we tend to write fewer red lights
>> than normal. We will tend to write a light to ensure that the query
>> will actually pull back records that we put in the table during
>> SetUp. We are at a loss as to what else needs to be tested, if
>> anything. It makes a lot more sense when we are testing functions
>> that involve logic because every decision that code can make needs at
>> least one light plus the extreme cases, etc. But we cannot fully
>> figure out how to apply this to queries. Does anyone have similar
>> experiences and/or does anyone have any pointers on how to create a
>> more substantial, more extensive Unit Test coverage in those
>> situations?
>>
> I would like to echo Kent's comment: test the parts of the system you
> fear getting wrong the most or that you tend to get wrong the most often.
> Once you have those parts of the system under more control, then use
> some of your extra energy to revisit this problem and the rest to keep
> adding features to your solidly-built system.
>
> Questions: I assume that /some/ code /somewhere/ invokes your stored
> procedures and queries, then displays the results on a screen. Did you
> write that code? How confident are you that it works? Do you have to
> write duplicate boilerplate code? Could you remove that duplication
> somehow? That might lead both to making new features even cheaper and
> making the design more worth testing (and the second benefit amplifies
> the first benefit).
>
> Good luck.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 3b.
>
> Re: Unit Testing Queries and Stored Procedures
>
> Posted by: "D. André Dhondt" d.andre.dhondt@gmail.com wile_e_kycodey
>
> Fri May 9, 2008 11:11 am (PDT)
>
> J.B. said:
>> test the parts of the system you fear getting wrong the most
>> or that you tend to get wrong the most often. Once you have those
>> parts of the system under more control ...
> Having been a former member of that team, I don't think Chris is asking for
> help on other parts of the system... he's got those parts under control. I
> think the issue is that TDD doesn't really translate well for a language or
> behavior that is non-deterministic. I'm not sure if "non-deterministic" is
> the right phrase for what I'm trying to say, but taking SQL as an example of
> a high-level language that expresses intent without describing the how
> (e.g., with "inner join", "top 10 percent", where we don't have to do the
> row-by-row comparisons), what can we do to verify the behavior? Even more
> complicated than that is what to do when we're not sure how to express
> the intent in this language? It's almost like I'd rather take a set of data
> in a Fit/FitNesse-like grid and pick out the rows I'm interested in and then
> monkey-up my SQL until the filter criteria are right. I'm not sure if this
> attitude is because I'm stronger in other paradigms (I think that SQL is a
> declarative language, while most of the time I'm thinking in object-oriented,
> procedural, or functional paradigms) -- but I think that I need an
> example-driven model for writing tests that is much more verbose than the
> typical single-line asserts of a good TDD cadence.
>
> The tools for TDDing database-related code that I'm aware of are:
> * test doubles to verify CRUD intent without actually touching the database
> * test frameworks (in-memory databases) like Derby, HSQLDB, etc.
> * integration/acceptance tests with sample data
>
> In struggling to apply TDD and well-factored code to a stored procedure
> format, I wonder if one could follow rules like:
> * every OR, AND, JOIN requires a red light against a table with SPECIFIC
> sample data that should be included and sample data that should be excluded
> * at the first cut, use only two entities at a time in your queries--and
> layer them on top of one another so that a table joined to a view allows you
> to use more than two entities
> * once you have a good safety harness, refactor away the layers so that the
> db engine can run more efficiently
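The first rule above (a red light per OR, AND, or JOIN, with specific rows that should be included and rows that should be excluded) can be sketched against an in-memory database. This is a hypothetical Python/sqlite3 illustration; the tables, columns, and query are invented for the example:

```python
import sqlite3

# In-memory database so the test needs no external server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         total REAL);
""")

# SPECIFIC sample data: order 1 should be included (east region, big
# order); order 2 excluded (west region); order 3 excluded (too small).
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "east"), (2, "west")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 250.0), (2, 2, 300.0), (3, 1, 10.0)])

QUERY = """
    SELECT o.id FROM orders o
    JOIN customers c ON c.id = o.customer_id   -- the JOIN gets a light
    WHERE c.region = 'east' AND o.total > 100  -- each AND gets a light
"""

def matching_order_ids(conn):
    """Run the query under test and return the ids it pulls back."""
    return [row[0] for row in conn.execute(QUERY)]

# The light: both the included and the excluded rows are verified,
# because the result must be exactly [1].
assert matching_order_ids(conn) == [1]
```

Each clause earns its keep: delete the `AND o.total > 100` and order 3 sneaks in, turning the light red.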
>
> Has anyone been thinking about these approaches, or found other
> alternatives? I'd love to hear about them!
>
> On Fri, May 9, 2008 at 11:23 AM, J. B. Rainsberger <jbrains762@gmail.com>
> wrote:
>
>> On May 8, 2008, at 08:24 , Christopher K. Joiner, Jr. wrote:
>> > During our last Reflections, this week, we have discovered that
>> > when our code is mostly table driven and requires mostly queries or
>> > stored procedures to accomplish the task, we tend to write fewer
>> > red lights than normal. We will tend to write a light to ensure
>> > that the query will actually pull back records that we put in the
>> > table during SetUp. We are at a loss as to what else needs to be
>> > tested, if anything. It makes a lot more sense when we are testing
>> > functions that involve logic because every decision that code can
>> > make needs at least one light plus the extreme cases, etc. But we
>> > cannot fully figure out how to apply this to queries. Does anyone
>> > have similar experiences and/or does anyone have any pointers on
>> > how to create a more substantial, more extensive Unit Test coverage
>> > in those situations?
>> >
>> I would like to echo Kent's comment: test the parts of the system you
>> fear getting wrong the most or that you tend to get wrong the most often.
>> Once you have those parts of the system under more control, then use
>> some of your extra energy to revisit this problem and the rest to keep
>> adding features to your solidly-built system.
>>
>> Questions: I assume that /some/ code /somewhere/ invokes your stored
>> procedures and queries, then displays the results on a screen. Did you
>> write that code? How confident are you that it works? Do you have to
>> write duplicate boilerplate code? Could you remove that duplication
>> somehow? That might lead both to making new features even cheaper and
>> making the design more worth testing (and the second benefit amplifies
>> the first benefit).
>>
>> Good luck.
>> ----
>> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
>> Your guide to software craftsmanship
>> JUnit Recipes: Practical Methods for Programmer Testing
>> 2005 Gordon Pask Award for contributions to Agile Software Practice
>>
>>
>>
>
> --
> D. André Dhondt
> mobile: 267-283-8270
> home: 267-286-6875
>
> If you're a software developer in the area, join Agile Philly (
> http://groups.yahoo.com/group/agilephilly/)!
>
> [Non-text portions of this message have been removed]
>
> 3c.
>
> Re: Unit Testing Queries and Stored Procedures
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 5:42 pm (PDT)
>
> On May 9, 2008, at 14:11 , D. André Dhondt wrote:
>>
>> Having been a former member of that team, I don't think Chris is
>> asking for help on other parts of the system... he's got those parts
>> under control. I think the issue is that TDD doesn't really translate
>> well for a language or behavior that is non-deterministic. I'm not
>> sure if "non-deterministic" is the right phrase for what I'm trying
>> to say, but taking SQL as an example of a high-level language that
>> expresses intent without describing the how (e.g., with "inner join",
>> "top 10 percent", where we don't have to do the row-by-row
>> comparisons), what can we do to verify the behavior?
>>
> SQL is still "just" a query language. When you design a query, you
> have in mind which rows in which tables you want to retrieve, so to
> check that, put data in the tables, run the query, then verify which
> rows you get back. How you get the bits to move from where to where is
> just a question of which tools are available.
>
> I don't know how else to do it.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 4.1.
>
> Re: YAGNI Football Analogy
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 8:29 am (PDT)
>
> On May 7, 2008, at 16:07 , Steven Gordon wrote:
>> On Wed, May 7, 2008 at 11:35 AM, J. B. Rainsberger <jbrains762@gmail.com
>> > wrote:
>> >
>> > On May 6, 2008, at 22:09 , Ron Jeffries wrote:
>> >
>> > > Hello, Doug. On Tuesday, May 6, 2008, at 7:56:20 PM, you wrote:
>> > >
>> > > > Team to customer: While implementing x we could implement it in
>> > > > such a way that would allow us to easily add y shortly
>> > > > afterwards. Doing this does mean that we take a bit longer with
>> > > > x but then we'd be quicker at adding y and the net result would
>> > > > be x and y quicker than just x and then y.
>> > >
>> > > I have heard teams say this and have said it myself. I'm not sure
>> > > it was ever true and I'm sure it was never quantifiable. Mostly
>> > > it was just a way of getting permission to go on a boondoggle.
>> > >
>> > > I'd love to have even one example of a problem where x plus y was
>> > > actually easier if we did something for y in the first pass,
>> > > compared to working on x in the first and y in the second.
>> > >
>> > Moreover, if Y would be easier by building more of X, then there's
>> > probably some subfeature X' of X that we could build even cheaper
>> > than X that would give the customer 80% of the value of X. The best
>> > teams I've seen find and build X' first, which often (strangely
>> > enough) points to an even cheaper way to build Y.
>>
>> By the same logic, why would there not probably be some X'' even
>> cheaper than X' that provides 80% of the value of X' which provides
>> the same feedback and learning at an even cheaper cost and lower risk?
>> Why would there not also be an X''' or X''''?
>>
> Well, Steve, for every feature X there exists a natural number n such
> that we can split X n times in the way I described, but n is finite,
> because features are not infinitely divisible.
>
> Also, what I didn't make explicit in my statement is the cost of
> finding X' compared to the waste involved in building unnecessary
> features. I'm very confident, based solely on observation and
> experience, and having in mind the teams I generally work with, that
> the cost of finding the X's on a single release is lower than the
> waste of building the unnecessary parts of the corresponding Xs. I am
> not confident that this cost/benefit tradeoff is the same for X'',
> X''', X(4) up to X(n).
>
> As teams learn to write smaller stories, it becomes less valuable to
> look for X', because, compared to what they used to do, they already
> write X' stories.
>
> Take care.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 4.2.
>
> Re: YAGNI Football Analogy
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 8:37 am (PDT)
>
> On May 8, 2008, at 01:10 , Matt wrote:
>> Pointing out that deploying a web service shouldn't take a long time
>> is
>> helpful (maybe?).
>>
>> Pointing out that perhaps I am missing a bigger piece of the
>> architectural puzzle as Marty did is more helpful.
>>
>> Going to the level of detail that Kent did in his post is extremely
>> helpful.
>>
>> As always, I am grateful for all help from those on the list.
>> Sarcastic
>> or not.
>>
> I don't know whether you think my original reply was sarcastic. It
> wasn't. My style is to ask questions first, get some context, then
> make a recommendation. My style is also to look for low-hanging fruit
> first before recommending something more involved. Sometimes the
> answer's right there.
>
> But then you answered me as though I were a moron. I'm fairly sure I'm
> not, and a few people around here might be willing to concur.
>
> You see, I used to go into detail with all my answers, but I found
> that many of the details were based on faulty assumptions, which
> annoyed people often enough that I stopped doing that.
>
> Incidentally, if there were an interesting reason why deploying your
> web service were expensive, it would likely have led me to offer the
> kind of advice that Marty and Kent did. I suppose this time I was
> unsuccessful at reading your mind. I don't think I'll be any better
> next time. :)
>
> So it seems we just didn't communicate well this time. I hope it goes
> better in the future.
>
> Take care.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 4.3.
>
> Re: YAGNI Football Analogy
>
> Posted by: "Matt" maswaffer@gmail.com maswaffer
>
> Fri May 9, 2008 9:05 am (PDT)
>
> J.B.
>
> --- In extremeprogramming@yahoogroups.com, "J. B. Rainsberger"
> <jbrains762@...> wrote:
>>
> <snip/>
>
>> I don't know whether you think my original reply was sarcastic. It
>> wasn't. My style is to ask questions first, get some context, then
>> make a recommendation. My style is also to look for low-hanging fruit
>> first before recommending something more involved. Sometimes the
>> answer's right there.
>>
>
> No.. it was the "Wow!!!! Really?!??!?!" reply. I have teenagers so I
> recognize sarcasm when I see it :)
>
>> But then you answered me as though I were a moron. I'm fairly sure I'm
>> not, and a few people around here might be willing to concur.
>>
>
> Been following your posts since probably 2003 or so... so yes, I concur,
> you are a smart guy. Sorry you felt I treated you as a moron. I was
> trying to convey very simply that the problem wasn't "how expensive" it
> was but that it had some cost associated with doing it. Probably should
> have just said that.
>
>> You see, I used to go into detail with all my answers, but I found
>> that many of the details were based on faulty assumptions, which
>> annoyed people often enough that I stopped doing that.
>>
>> Incidentally, if there were an interesting reason why deploying your
>> web service were expensive, it would likely have led me to offer the
>> kind of advice that Marty and Kent did. I suppose this time I was
>> unsuccessful at reading your mind. I don't think I'll be any better
>> next time. :)
>>
>> So it seems we just didn't communicate well this time. I hope it goes
>> better in the future.
>>
>
> Agreed.
>
>> Take care.
>> ----
>> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
>> Your guide to software craftsmanship
>> JUnit Recipes: Practical Methods for Programmer Testing
>> 2005 Gordon Pask Award for contributions to Agile Software Practice
>>
>
> Thanks,
>
> Matt
>
> 5a.
>
> Re: Executable Specifications
>
> Posted by: "Joshua Kerievsky" joshua@industriallogic.com jlk112067
>
> Fri May 9, 2008 8:33 am (PDT)
>
> On Wed, May 7, 2008 at 5:33 PM, Gary Brown <glbrown@inebraska.com> wrote:
>
>> We've done it with our data conversion process. We'd like to do it with
>> our web apps. Anyone have that experience?
>
> We've got loads of experience with this, Gary. We just lack time to
> participate here just now. More soon....
>
> --
> best regards,
> jk
>
> Industrial Logic, Inc.
> Joshua Kerievsky
> Founder, Extreme Programmer & Coach
> http://industriallogic.com
> 866-540-8336 (toll free)
> 510-540-8336 (phone)
> Berkeley, California
>
> Learn Code Smells, Refactoring and TDD at
> http://industriallogic.com/elearning
>
>
> 5b.
>
> Re: Executable Specifications
>
> Posted by: "Wilson, Michael" michael.wilson@itg.com madwilliamflint
>
> Fri May 9, 2008 8:40 am (PDT)
>
> Looking forward to it JK.
>
> -----Original Message-----
> From: extremeprogramming@yahoogroups.com
> [mailto:extremeprogramming@yahoogroups.com] On Behalf Of Joshua
> Kerievsky
> Sent: Friday, May 09, 2008 11:33 AM
> To: extremeprogramming@yahoogroups.com
> Subject: Re: [XP] Executable Specifications
>
> On Wed, May 7, 2008 at 5:33 PM, Gary Brown <glbrown@inebraska.com>
> wrote:
>
>> We've done it with our data conversion process. We'd like to do it
> with
>> our web apps. Anyone have that experience?
>
> We've got loads of experience with this, Gary. We just lack time to
> participate here just now. More soon....
>
> --
> best regards,
> jk
>
> Industrial Logic, Inc.
> Joshua Kerievsky
> Founder, Extreme Programmer & Coach
> http://industriallogic.com
> 866-540-8336 (toll free)
> 510-540-8336 (phone)
> Berkeley, California
>
> Learn Code Smells, Refactoring and TDD at
> http://industriallogic.com/elearning
>
>
>
> 5c.
>
> Re: Executable Specifications
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 8:43 am (PDT)
>
> On May 7, 2008, at 20:33 , Gary Brown wrote:
>
>> We've done it with our data conversion process. We'd like to do it
>> with our web apps. Anyone have that experience?
>>
> Yes. The #1 fatal mistake teams make is trying to make their end-to-
> end web app tests exhaustive. This way lie dragons: brittle, slow
> tests that mostly duplicate what programmer tests do.
>
> Customer Tests are meant to give the Customer confidence that a
> feature is present, and not to test the feature exhaustively. When I
> have used this principle to guide writing end-to-end web app tests, I
> have tended to get good value for time and energy.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 5d.
>
> Re: Executable Specifications
>
> Posted by: "gwynatdezyne" mail@gwynmorfey.com gwynatdezyne
>
> Fri May 9, 2008 9:07 am (PDT)
>
> Hi -
>
>> We've done it with our data conversion process. We'd like to do it
>>with our web apps. Anyone have that experience?
> We're doing this for Ruby On Rails and Merb applications using RSpec
> Stories and Webrat. RSpec Stories let us do this:
>
> Scenario: Creating a new discussion
>   Given a group in the database
>   When I visit the group page
>   And I enter "hello" in title
>   And I enter "hi there" in body
>   And I click the button
>   Then I should see the group page
>   And there should be a new discussion
>
> And Webrat lets us do this:
>
> When("I visit the group page") do
>   visits "/groups/33"
> end
>
> It's taken some time to learn, but is working well for us.
>
> Gwyn.
>
> 5e.
>
> Re: Executable Specifications
>
> Posted by: "Joshua Kerievsky" joshua@industriallogic.com jlk112067
>
> Fri May 9, 2008 9:44 am (PDT)
>
> Gary,
>
> Of late (and after years of doing FIT), we're now using Selenium RC (Remote
> Control) with xUnit (JUnit, NUnit, etc) for Storytest-Driven Development
> (SDD).
>
> Consider this storytest:
>
> class StudentTakesSingleSelectQuizTest...
>     public void testStudentChoosesCorrectAnswer() {
>         navigateToTwoPlusTwoQuiz();
>         pickAnswerOfFour();
>         showAnswers();
>         verifyAnswerOfFourIsReportedCorrect();
>         verifyAnswersOfOneAndFiveAreReportedIncorrect();
>         verifyStudentToldOfChoosingCorrectly();
>     }
>
> Note how we're embedding "example" data in method names. We also embed
> example data via parameters, like so:
>
> class UserPostsFeedbackTests...
>     public void testPostingAllFeedbackTypes() {
>         navigateToContentPage();
>         verifyFeedbackCounts(0, 0, 0);
>
>         openFeedbackDialog();
>
>         String subject = "This is a question";
>         String body = "The quick brown fox jumps over the lazy dog.";
>         postAQuestion(subject, body);
>         verifyQuestionPosted(subject, body);
>
>         subject = "This is a test comment";
>         body = "All work and no play makes Jack a dull boy.";
>         postAComment(subject, body);
>         verifyCommentPosted(subject, body);
>
>         String errorSubject = "This is a test error and it's a good one";
>         body = "Fourscore and seven years ago";
>         postAnError(errorSubject, body);
>         verifyErrorPosted(errorSubject, body);
>
>         closeFeedbackDialog();
>         verifyFeedbackCounts(1, 1, 1);
>     }
>
> The idea is that non-technical customers (including QA) help specify
> storytests (on a whiteboard, document, whatever) and programmers automate in
> an OO environment (where we have re-use, superclasses, helper
> methods/classes, etc). The trick is to produce the automated storytests
> in a way that preserves their readability to non-techies. We run a script
> to extract the storytests (and not any helper methods) so that customers can
> easily study what storytests they have.
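[Editor's note: the extraction script itself isn't shown in the post. A minimal sketch of how such a script might work is below (Python; `extract_storytests` and the sample source are illustrative assumptions, not Industrial Logic's actual tooling). It pulls out only the `public void test...()` methods and leaves helper methods behind:]

```python
import re

def extract_storytests(java_source):
    """Return the source of each method whose name starts with 'test',
    skipping helper methods, so non-technical customers can review
    the storytests on their own."""
    tests = []
    # Find the opening of each candidate storytest method.
    for match in re.finditer(r"public void (test\w+)\s*\(\s*\)\s*\{", java_source):
        depth = 0
        # Walk forward counting braces until the method body closes.
        for i in range(match.end() - 1, len(java_source)):
            if java_source[i] == "{":
                depth += 1
            elif java_source[i] == "}":
                depth -= 1
                if depth == 0:
                    tests.append(java_source[match.start():i + 1])
                    break
    return tests

# Illustrative input: one storytest plus one helper method.
SOURCE = """
class StudentTakesSingleSelectQuizTest {
    public void testStudentChoosesCorrectAnswer() {
        navigateToTwoPlusTwoQuiz();
        pickAnswerOfFour();
    }
    private void navigateToTwoPlusTwoQuiz() { /* helper: not extracted */ }
}
"""

for storytest in extract_storytests(SOURCE):
    print(storytest)
```

[A real version would also need to cope with nested braces in comments and strings; the brace counting here is deliberately simple.]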
>
> This is definitely a departure from using documents hooked up to fixtures
> hooked up to production code. When time permits, I could describe why we've
> abandoned that approach.
>
> Also, we're finding that by using Selenium RC we can take advantage of the
> Selenium Grid. So we can run our storytests in parallel to speed
> execution. For example, we run storytests in IE7 on box A, Firefox 2.x on
> box B, Safari on box C, etc.
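[Editor's note: the fan-out Joshua describes can be sketched roughly as below (Python; `run_storytests`, the launcher strings, and the box names are hypothetical stand-ins, not the actual harness). Each browser/box pair gets its own concurrent suite run:]

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical browser/box pairs; the *-prefixed strings mimic
# Selenium RC browser-launcher names.
TARGETS = [("*iexplore", "boxA"), ("*firefox", "boxB"), ("*safari", "boxC")]

def run_storytests(browser, host):
    # In a real harness this would point an xUnit suite at the
    # Selenium Grid hub, passing the browser string and remote box.
    return f"ran suite in {browser} on {host}"

# Run one suite per browser/box pair concurrently.
with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
    results = list(pool.map(lambda target: run_storytests(*target), TARGETS))

for line in results:
    print(line)
```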
>
> --
> best regards,
> jk
>
> Industrial Logic, Inc.
> Joshua Kerievsky
> Founder, Extreme Programmer & Coach
> http://industriallogic.com
> 866-540-8336 (toll free)
> 510-540-8336 (phone)
> Berkeley, California
>
> Learn Code Smells, Refactoring and TDD at
> http://industriallogic.com/elearning
>
>
> 5f.
>
> Re: Executable Specifications
>
> Posted by: "Brian Button" bbutton@agilestl.com bbutton
>
> Fri May 9, 2008 11:29 am (PDT)
>
> J. B. Rainsberger wrote:
>> On May 7, 2008, at 20:33 , Gary Brown wrote:
>>
>>> We've done it with our data conversion process. We'd like to do it
>>> with our web apps. Anyone have that experience?
>>>
>> Yes. The #1 fatal mistake teams make is trying to make their end-to-
>> end web app tests exhaustive. This way lie dragons: brittle, slow
>> tests that mostly duplicate what programmer tests do.
>>
>> Customer Tests are meant to give the Customer confidence that a
>> feature is present, and not to test the feature exhaustively. When I
>> have used this principle to guide writing end-to-end web app tests, I
>> have tended to get good value for time and energy.
>
> So what's the other side of this, JB? In order for us to have mobile
> code, we have to have tests surrounding it to the level that we can be
> free to refactor. If we are only writing enough customer tests to ensure
> that the feature is present, where does the other customer-level testing
> come from?
>
> bab
>
> 5g.
>
> Re: Executable Specifications
>
> Posted by: "Steven Gordon" sgordonphd@gmail.com sfman2k
>
> Fri May 9, 2008 11:53 am (PDT)
>
> On Fri, May 9, 2008 at 11:25 AM, Brian Button <bbutton@agilestl.com> wrote:
>>
>>
>> J. B. Rainsberger wrote:
>> > On May 7, 2008, at 20:33 , Gary Brown wrote:
>> >
>> >> We've done it with our data conversion process. We'd like to do it
>> >> with our web apps. Anyone have that experience?
>> >>
>> > Yes. The #1 fatal mistake teams make is trying to make their end-to-
>> > end web app tests exhaustive. This way lie dragons: brittle, slow
>> > tests that mostly duplicate what programmer tests do.
>> >
>> > Customer Tests are meant to give the Customer confidence that a
>> > feature is present, and not to test the feature exhaustively. When I
>> > have used this principle to guide writing end-to-end web app tests, I
>> > have tended to get good value for time and energy.
>>
>> So what's the other side of this, JB? In order for us to have mobile
>> code, we have to have tests surrounding it to the level that we can be
>> free to refactor. If we are only writing enough customer tests to ensure
>> that the feature is present, where does the other customer-level testing
>> come from?
>>
>> bab
>>
>
> There is a middle ground between feature presence and exhaustive
> testing. Following TDD does not create exhaustive unit tests, but it
> does create sufficient coverage to support refactoring.
>
> Under TDD, developers create a reasonable set of tests that
> specifies/drives the development of code that does what the developer
> thinks it should do. As long as the code is the simplest possible
> code that passes all the tests without cheating (e.g., not hard coding
> things to pass those specific tests), we can have high confidence that
> the code does what the people who wrote those tests expected it to do.
>
> The same goes for ATDD (acceptance test driven development). The
> tests should give us confidence that the story specified works as
> expected if the program does not cheat explicitly to cover those
> specific test cases.
>
> It very much helps to keep our individual stories focused on specific
> scenarios so that it is clear what specific scenarios have been
> developed and tested in each story (as opposed to any individual story
> and its acceptance tests supposedly standing for an entire feature
> working for every imaginable scenario). As I have been arguing in
> several threads lately, making each story a vertical slice of a
> feature provides several benefits, including testability without the
> confusion that ensues when any single story and its tests purports to
> represent a finished feature. Features tend to be open-ended.
> Individual stories and their acceptance tests should not be
> open-ended.
>
> Steve
>
> 5h.
>
> Re: Executable Specifications
>
> Posted by: "Brian Button" bbutton@agilestl.com bbutton
>
> Fri May 9, 2008 12:08 pm (PDT)
>
> Steven Gordon wrote:
>>> So what's the other side of this, JB? In order for us to have mobile
>>> code, we have to have tests surrounding it to the level that we can be
>>> free to refactor. If we are only writing enough customer tests to ensure
>>> that the feature is present, where does the other customer-level testing
>>> come from?
>
>> There is a middle ground between feature presence and exhaustive
>> testing. Following TDD does not create exhaustive unit tests, but it
>> does create sufficient coverage to support refactoring.
>
> Thanks for your answer, Steve. I understand it completely, having done
> it on projects of my own and on teams I've coached.
>
> My question really has a different focus, though.
>
> I fully appreciate, believe, and understand that manual QA is too slow
> to keep up with an agile team. So many of my clients have refused to
> believe this and have tried to carry on with their existing testing
> philosophy of having programmers create their programmer tests and
> having their testers test the system as best as they can at the end (you
> can't win every battle, can you :()
>
> But, according to the idea JB put forth, if we are not writing a good
> set of regression tests for the application, and manual QA is too slow,
> then where does the hard core system testing take place? It has to
> happen. But if we're not writing tests for it beforehand at the story
> level, and if the testers aren't manually banging on the system with
> manual tests (note that I am not talking about exploratory testing
> here), then where and when does this happen? Where are the boundary
> condition checks? Where are the checks for doing the proper thing if the
> user is overdrawn, it's a Tuesday, payday was at midnight last night, and
> the bank is closed for an extended holiday on all days that end with an AY?
>
> I suppose one possible answer is to make your stories small enough that
> handling each of these error conditions is its own story, so the ATDD
> test would cover each of these cases, one per story, but I suspect that
> those stories would get to be too small to be managed.
>
> So how do you handle this?
>
> bab
>
> 5i.
>
> Re: Executable Specifications
>
> Posted by: "John Roth" JohnRoth1@gmail.com jhrothjr
>
> Fri May 9, 2008 12:28 pm (PDT)
>
> Brian Button wrote:
>
>>
>> But, according to the idea JB put forth, if we are not writing a good
>> set of regression tests for the application, and manual QA is too slow,
>> then where does the hard core system testing take place? It has to
>> happen. But if we're not writing tests for it beforehand at the story
>> level, and if the testers aren't manually banging on the system with
>> manual tests (note that I am not talking about exploratory testing
>> here), then where and when does this happen? Where are the boundary
>> condition checks? Where are the checks for doing the proper thing if the
>> user is overdrawn, it's a Tuesday, payday was at midnight last night, and
>> the bank is closed for an extended holiday on all days that end with an
>> AY?
>>
>> I suppose one possible answer is to make your stories small enough that
>> handling each of these error conditions is its own story, so the ATDD
>> test would cover each of these cases, one per story, but I suspect that
>> those stories would get to be too small to be managed.
>>
>> So how do you handle this?
>
> The first thing is to realize that stories are not the issue. A story is
> simply a vehicle to facilitate scheduling and note that a conversation
> needs to take place. That conversation is where the necessary
> tests are defined and written. Once the story is implemented, then the tests
> are the specification; the story itself is of historical interest, if that.
>
> Your example is certainly amusing, since that particular bank is never
> open for business. However, a more significant fact is that the number of
> combinations to be tested explodes astronomically the more factors you
> attempt to combine, and the efficiency of finding problems falls
> drastically.
>
> If that kind of testing is finding enough defects to be worthwhile, then I'd
> submit that there's something wrong earlier in the process.
>
> John Roth
>
>>
>> bab
>>
>
> 5j.
>
> Re: Executable Specifications
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 5:44 pm (PDT)
>
> On May 9, 2008, at 14:25 , Brian Button wrote:
>
>> J. B. Rainsberger wrote:
>> > On May 7, 2008, at 20:33 , Gary Brown wrote:
>> >
>> >> We've done it with our data conversion process. We'd like to do it
>> >> with our web apps. Anyone have that experience?
>> >>
>> > Yes. The #1 fatal mistake teams make is trying to make their end-to-
>> > end web app tests exhaustive. This way lie dragons: brittle, slow
>> > tests that mostly duplicate what programmer tests do.
>> >
>> > Customer Tests are meant to give the Customer confidence that a
>> > feature is present, and not to test the feature exhaustively. When I
>> > have used this principle to guide writing end-to-end web app
>> tests, I
>> > have tended to get good value for time and energy.
>>
>> So what's the other side of this, JB? In order for us to have mobile
>> code, we have to have tests surrounding it to the level that we can be
>> free to refactor. If we are only writing enough customer tests to
>> ensure
>> that the feature is present, where does the other customer-level
>> testing
>> come from?
>>
> I'm not sure what other customer-level testing you want. If you mean
> system testing (integration smoke test, performance, scalability),
> then usually you have a separate testing team that uses testing tools
> like Silk Performer. I don't use these tests to aid refactoring,
> though: that's what the programmer tests are for.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
>
> 5k.
>
> Re: Executable Specifications
>
> Posted by: "J. B. Rainsberger" jbrains762@gmail.com nails762
>
> Fri May 9, 2008 5:47 pm (PDT)
>
> On May 9, 2008, at 15:04 , Brian Button wrote:
>
>> Steven Gordon wrote:
>> >> So what's the other side of this, JB? In order for us to have
>> mobile
>> >> code, we have to have tests surrounding it to the level that we
>> can be
>> >> free to refactor. If we are only writing enough customer tests to
>> ensure
>> >> that the feature is present, where does the other customer-level
>> testing
>> >> come from?
>>
>> > There is a middle ground between feature presence and exhaustive
>> > testing. Following TDD does not create exhaustive unit tests, but it
>> > does create sufficient coverage to support refactoring.
>>
>> Thanks for your answer, Steve. I understand it completely, having done
>> it on projects of my own and on teams I've coached.
>>
>> My question really has a different focus, though.
>>
>> I fully appreciate, believe, and understand that manual QA is too slow
>> to keep up with an agile team. So many of my clients have refused to
>> believe this and have tried to carry on with their existing testing
>> philosophy of having programmers create their programmer tests and
>> having their testers test the system as best as they can at the end
>> (you
>> can't win every battle, can you :()
>>
>> But, according to the idea JB put forth, if we are not writing a good
>> set of regression tests for the application, and manual QA is too
>> slow,
>> then where does the hard core system testing take place? It has to
>> happen. But if we're not writing tests for it beforehand at the story
>> level, and if the testers aren't manually banging on the system with
>> manual tests (note that I am not talking about exploratory testing
>> here), then where and when does this happen? Where are the boundary
>> condition checks? Where are the checks for doing the proper thing if
>> the
>> user is overdrawn, it's a Tuesday, payday was at midnight last night,
>> and
>> the bank is closed for an extended holiday on all days that end with
>> an AY?
>>
>> I suppose one possible answer is to make your stories small enough
>> that
>> handling each of these error conditions is its own story, so the ATDD
>> test would cover each of these cases, one per story, but I suspect
>> that
>> those stories would get to be too small to be managed.
>>
>> So how do you handle this?
>>
> If I really want to handle specifically the above case, I hire one of
> those crazy, lovable exploratory testers for a week (Bolton, Bach,
> Hendrickson, Marick, Pettichord, ...), and if they find that defect,
> automate the corresponding test.
>
> For me, hardcore system testing isn't for feature correctness, it's
> for robustness and fitness. Hardcore /programmer/ testing gives me
> correctness.
> ----
> J. B. (Joe) Rainsberger :: http://www.jbrains.ca
> Your guide to software craftsmanship
> JUnit Recipes: Practical Methods for Programmer Testing
> 2005 Gordon Pask Award for contributions to Agile Software Practice
