[aosd-discuss] AOP and testing
TedN at icode.com
Thu May 8 16:37:14 EDT 2003
I am a total newbie to AOP & AOSD. I have always worked on Microsoft technologies and would like to know more about implementing AOP/AOSD techniques using the Microsoft .NET Framework. Is there enough literature available on this?
Thanks in Advance
From: Juri Memmert [mailto:jadawinarc at yahoo.com]
Sent: Thursday, May 08, 2003 2:25 PM
To: discuss at aosd.net
Cc: rickard at dreambean.com
Subject: Re: [aosd-discuss] AOP and testing
--- Rickard Öberg <rickard at dreambean.com> wrote:
> I made a blog entry about AOP and testing, and I'm curious if anyone on
> this list has any comments and ideas on this topic. How do you deal
> with the issues that are mentioned?
I'd like to go through your blog and give you my replies on the way...
> AOP is great. It allows us to do things that aren't really possible with any
> other coding paradigm. AOP is also a big responsibility, because it means
> there's no single place to look for bugs when bugs occur. And bugs do,
> occasional claims to the contrary, occur.
Sorry... there is exactly one place to look. ;-)
I do not use an AspectJ-like approach but a Hyperspaces-based one, so my
thoughts and practices might not be applicable, but any application I write
(beyond a very few classes) is captured in a Cosmos schema.
Within that schema, I define the interactions of the various aspects/concerns
of my application. There, I define what code is merged where.
When a bug occurs, I go back to the schema and check which concerns were part
of the code that created the bug (I also go back through the stack trace to see
what concerns were there).
In 99% of the cases, the error occurred where two or more concerns converge in
one method or class. So finding the concerns in the schema (which can be done
with a simple search on the schema) points me directly to the concerns that
caused the problem, and from there to the code that I wrote. From there,
debugging is normally really simple.
> If you've built an entire system on AOP, like we have, then how do you make
> sure it actually works? How do you get the "green light" that TDD'ers crave?
> To be honest: I don't have the slightest idea.
IMHO, you need a meta-level description of your application that tells you what
converges where and why... something you cannot find in the code.
You can find the "what" and with more work the "where"... often a _lot_ more
work... but definitely not the "why" (unless you document _way_ more than I've
ever seen in any commercial project).
> You could test each advice independently, but that might be difficult since
> they usually rely on some particular context which may be impossible to
> replicate for testing purposes.
This is different with hyperslices. Each slice is in itself complete enough to
test separately, and I just merge in a test context that is based on my best
estimate of possible use cases.
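To make that concrete, here is a minimal sketch of testing a concern in isolation; the names (`InvocationContext`, `LoggingConcern`) are invented for illustration, not taken from any real framework. The slice gets a stubbed context instead of the full woven application:

```java
import java.util.ArrayList;
import java.util.List;

public class LoggingConcernTest {
    // stand-in for whatever context the concern expects at a join point
    interface InvocationContext {
        String methodName();
    }

    // the "slice" under test: records every method entry it sees
    static class LoggingConcern {
        final List<String> log = new ArrayList<>();
        void before(InvocationContext ctx) {
            log.add("enter " + ctx.methodName());
        }
    }

    public static void main(String[] args) {
        LoggingConcern concern = new LoggingConcern();
        // a lambda serves as the stubbed test context; no weaver or container needed
        concern.before(() -> "transfer");
        if (!concern.log.equals(List.of("enter transfer")))
            throw new AssertionError(concern.log);
        System.out.println("ok");
    }
}
```

The point is only that the slice's dependencies are narrow enough to stub by hand.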
> Also, even if an advice is considered to work by a JUnit test, maybe it will
> break when combined with other advice?
I do integration tests on all hyperslices that are combined as defined in the
Cosmos schema. The number is not too big.
These integration tests are basically nothing more than the unit tests for each
hyperslice plus a bit.
If the unit tests for the various hyperslices do not work on the combined
hypermodule anymore, you have modified the contract and need to document this.
Often, this points to a problem, as there could be a third hyperslice involved...
But if it's not a problem, it also means that any class calling any method of
the resulting hypermodule needs to adhere to this new contract. That way the
new contract is propagated, and all integration tests need to reflect that (this
is the "bit" I mentioned above).
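A sketch of that idea (all names hypothetical, and the composition is done by hand here rather than by a real composer): the core slice's unit test is reused unchanged against the composed module, so a contract change introduced by composition shows up exactly there.

```java
public class CompositionTest {
    interface Counter { void add(int n); int total(); }

    // core slice: plain counting behaviour
    static class CoreCounter implements Counter {
        private int total = 0;
        public void add(int n) { total += n; }
        public int total() { return total; }
    }

    // validation slice, composed in front of the core by hand here
    static class ValidatedCounter implements Counter {
        private final Counter inner;
        ValidatedCounter(Counter inner) { this.inner = inner; }
        public void add(int n) {
            if (n < 0) throw new IllegalArgumentException("negative amount");
            inner.add(n);
        }
        public int total() { return inner.total(); }
    }

    // the core slice's unit test, written once and reusable against any composition
    static void coreUnitTest(Counter c) {
        c.add(5);
        if (c.total() != 5) throw new AssertionError("core contract broken");
    }

    public static void main(String[] args) {
        coreUnitTest(new CoreCounter());                        // slice alone
        coreUnitTest(new ValidatedCounter(new CoreCounter()));  // composed module
        System.out.println("ok");
    }
}
```

If the validation slice had, say, silently capped the amount, the reused core test would fail on the composed module and flag the changed contract.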
> Or, what if the pointcuts are incorrectly defined and the advice is never
> applied in the first place?
This is a large problem with the way pointcuts are defined in AspectJ. I've
seen pointcut definitions longer than my arm that still were wrong. :-(
It is much easier with hyperslices and a Cosmos schema from which to derive the
mappings.
> This will be a common issue if you're using method regexps to define pointcuts
> and do refactoring without using a tool.
Regexes are, in my opinion, a really bad idea. They are very powerful and
flexible... and often you match more or less than you want. Unfortunately, you
find out about that too late, too.
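A small illustration of the over-matching (the glob-to-regex translation is a guess at what a name-based matcher might do, not any particular tool's actual behaviour): a pattern meant to catch setters also picks up unrelated methods.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class PointcutPatternDemo {
    // naive glob-to-regex translation, roughly what a name-based matcher might do
    static List<String> matches(String glob, List<String> methods) {
        Pattern p = Pattern.compile(glob.replace("*", ".*"));
        return methods.stream()
                      .filter(m -> p.matcher(m).matches())
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> methods = List.of("setName", "setUp", "settle", "getName");
        // "setUp" (a test fixture) and "settle" are caught unintentionally
        System.out.println(matches("set*", methods)); // [setName, setUp, settle]
    }
}
```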
> And even if you use a tool, just how is it supposed to work? Let's say a method
> is being advised because of a regexp including a "*". If that method is renamed
> so that it's not covered anymore, should the regexp be changed so that it is
> indeed still included? E.g. "foo*" -> "foo* | someMethod" when "fooBar" is
> renamed to "someMethod". That could easily become a real nightmare.
It _is_ a nightmare. It's one of the reasons I really dislike using that kind
of method to define pointcuts.
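The rename scenario from the quoted paragraph can be played out in a few lines (again a toy matcher, not any real weaver):

```java
import java.util.regex.Pattern;

public class RenameDemo {
    // toy name-based pointcut: "*" matches any run of identifier characters
    static boolean advised(String pattern, String method) {
        return Pattern.compile(pattern.replace("*", "\\w*")).matcher(method).matches();
    }

    public static void main(String[] args) {
        System.out.println(advised("foo*", "fooBar"));                // true
        // after the refactoring, the method is silently no longer advised
        System.out.println(advised("foo*", "someMethod"));            // false
        // the "fix": grow the pattern by hand, once per rename
        System.out.println(advised("foo*|someMethod", "someMethod")); // true
    }
}
```

Nothing fails loudly in the middle step; the advice just stops applying, which is exactly the nightmare.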
> The issue of what advice is applied where can be made more transparent by
> tools that allow you to introspect a system to see how it is actually composed.
Yes, tools to inspect the application are helpful, but they're nowhere near
enough. You need to delve into the concerns that make up the application before
they are merged.
> AspectJ relies on plugins to IDE's, and personally I'm leaning towards
> a JavaDoc extension that creates HTML showing the actual structure of the
> program. This, of course, relies on the idea that there are classes of advice
> that work the same way, and some other AOP implementations do not have this
> property (JBoss AOP comes to mind).
JavaDoc is good... when you can ascertain that your code adheres to the
standards needed by the doclet... and, as you point out, this might not hold for
all AOP implementations.
> But the above is only related to advice. What about introductions and mixins?
> Well, it's similar problems there really, since using introductions is a
> kind of weak typing approach where the only way to know whether an object has a
> particular interface is to do instanceof or casting at runtime. The JavaDoc
> thing can help again though, to make things a little more manageable and catch
> the simple errors.
Again... Doclets can help... I use them myself... but they rely entirely too
much on the documentation of the code base and not enough on the concerns that
build the underlying conceptual base. Many of the problems with concern
interaction are already visible in the concern model and can be resolved there,
they don't have to be resolved in the code... unless you have nothing but the
code to do it all...
> Our own experience is that we have indeed been bitten by some of the above
> problems, in particular the advice interaction and refactoring parts. All you
> can do really, at this point, is to be aware of the problem and take care of it
> in any way possible. And hope for understanding customers and a quick
> turnaround time when bugs occur, both of which we have so far.
I have found that I had to drastically increase the number of tests... but
decrease the complexity of the tests at the same time, as the tests need to
cover less of the problem.
That way, where I had 10 unit tests, I have 50 or even 100, but each is much
simpler.
Add to that a very few integration tests that test various hyperslices
combined, maybe another 10%, and you get a very stable, thoroughly tested
application with not only fewer bugs but also better coverage, better (in terms
of "more explicit") test cases, and a much easier road to that "green light",
because not only is the impact of change limited to smaller components, but the
impact of bugs is also limited to a very few components.
> So, there are definite drawbacks with this approach.
I don't see that as a drawback, especially since I can insert probes into my
application to find, analyze, and fix the bug.
> There are two ways that I can think of to minimize the pains involved here. One
> is to have tools that allow you to see the structure of the final program.
The structure of the final program is interesting, but imho, not half as
important as understanding what concerns you merge, where and why. The
resulting structure is then a direct mapping of those concerns.
> Second, using runtime attributes instead of regular expressions to define
> pointcuts would minimize the risk of getting pointcut definition problems,
> since defining runtime attributes is a much more semantically clear way to
> define what should happen than using method naming conventions.
I shudder at the "runtime" term... I do not think that "compile time" is too
early... but regexes are definitely problematic.
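For what it's worth, the attribute idea can be sketched with Java annotations; the `@Lifecycle` marker below is hypothetical, standing in for the "lifecycle" attribute mentioned in the post, and real attribute support would of course look different. The "pointcut" selects methods by the attribute rather than by a naming pattern:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AttributeDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Lifecycle {}  // hypothetical marker replacing a naming convention

    static class Entity {
        @Lifecycle public void create() {}
        @Lifecycle public void remove() {}
        public void render() {}  // not lifecycle, regardless of its name
    }

    // the "pointcut": select methods by attribute, not by name pattern
    static List<String> advisedMethods(Class<?> cls) {
        List<String> names = new ArrayList<>();
        for (Method m : cls.getDeclaredMethods())
            if (m.isAnnotationPresent(Lifecycle.class))
                names.add(m.getName());
        names.sort(String::compareTo);
        return names;
    }

    public static void main(String[] args) {
        System.out.println(advisedMethods(Entity.class)); // [create, remove]
    }
}
```

Renaming `create` to something else changes nothing here, because the marker travels with the method.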
> This is also how we do it. The pointcut definitions in our system that are
> regexp either look like this: "set*|add*|remove*" or like this:
> "create*|clone*|remove|", and we intend to replace them both with "!readonly"
> and "lifecycle" runtime attribute declarations down the road to avoid any
> potential problems.
If you can get away with these regexes, you're _very_ lucky. ;-)
It has been said that the regex support of Hyper/J is lacking because it cannot
match all the things you want... but I found that not to be a limitation.
I'd rather do explicit mappings all over my application, or use even simpler
expressions than the ones you listed above. That way, I have much more control
over what I do... I do not like "automagically"... "automatically" is ok... but
as soon as you need to check a few thousand methods to make sure that no one
messed up the naming conventions, I'm rather unhappy.
I hope that helps... (otherwise ignore my ramblings. ;-) )
AOSD Discuss mailing list - discuss at aosd.net
To be removed send mail to discuss-admin at aosd.net
or visit http://aosd.net