Semantic MediaWiki test case system is alpha-ish, needs owner
I've been remiss in posting updates lately. This post is based on a post to fedora-test-list, but has more links than the original email.
We've been experimenting with using Semantic MediaWiki (SMW) for test case management, as OLPC and the W3C OWL working group have done (the latter wiki was set up by the SMW developers, so the semantic stuff is quite nice). But this is very much bleeding-edge and not yet a common or widespread notion, and it's in need of development. Yay, First!
We now have a first working (minimal) setup for the SoaS Fedora Test Day, which is Thursday, September 3. Yes, this is a shameless plug; if you want to try it out, come over and test some liveusb images with us. If you want to see our journey down the rabbit hole of how this was created, Sebastian Dziallas, James Laska, and I logged and took detailed notes of not just our steps but also our thought process through them. (The hamster dance is somewhere in those logs, for those of you who remember that particular blast from the past.)
It turns out the SMW community is interested in what we're doing. (Again: Yay, First!) I was in the #semanticmediawiki IRC channel tonight and was asked if we could document our work on the upstream SMW users' community wiki and ping the semediawiki-user mailing list when we have something there that we want to share - so I made a stub page for Fedora and put in what I could.
Some more knowledgeable thoughts from Markus Krötzsch, one of SMW's lead developers:
A typical test: http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/FS2RDF-literals-ar. Click "edit with form" on this page to see how the input is presented. We adopted a tab-based scheme to avoid overwhelming the user.
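For readers who haven't seen form-based editing in SMW before: the "edit with form" link comes from the Semantic Forms extension, which replaces the raw wikitext box with structured fields. A minimal sketch of such a form definition is below; the form name and field names here are my own placeholders, not taken from the OWL test wiki (a tabbed layout like theirs would typically layer an extension such as Header Tabs on top of this).

```wikitext
<!-- Form:Test Case - a hypothetical Semantic Forms definition.
     "Edit with form" on a page in this category renders these
     fields instead of raw wikitext. -->
{{{info|page name=<Test Case[name]>}}}
{{{for template|Test Case}}}
{| class="formtable"
! Name:
| {{{field|name}}}
|-
! Status:
| {{{field|status|input type=dropdown|values=proposed,approved,rejected}}}
|-
! Description:
| {{{field|description|input type=textarea}}}
|}
{{{end template}}}
```

The template the form writes to is what attaches semantic properties to the page, which is what makes the automatic listings below possible.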
A typical listing, generated automatically: http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/Rdfs:range (the properties used here, in fact, were also entered automatically by analysing the user input; this is possible with custom code when the tests are of a sufficiently formal nature)
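The listings Markus describes are inline queries over those semantic properties, so a listing page stays up to date as tests are added. A minimal sketch of such a query follows; the category and property names (`Tests feature`, `Has status`, `Has author`) are hypothetical stand-ins for whatever annotations the wiki actually uses.

```wikitext
<!-- An inline #ask query: builds a sortable table of every page in
     Category:Test Case annotated with the (hypothetical) property
     "Tests feature" set to rdfs:range. -->
{{#ask: [[Category:Test Case]] [[Tests feature::rdfs:range]]
 |?Has status
 |?Has author
 |format=table
 |sort=Has status
}}
```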
We also generate bulk exports of tests that are used by a software suite to run the tests and report results. Much of this could be improved in various ways (the site was originally hacked together on one weekend). I learned that one should really plan the details of the underlying knowledge model and user interaction before adding content to the wiki. Also, we use some non-Web software to execute tests and publish results, while your system would probably be more similar to a bug tracker.
I don't have the bandwidth to be "the SMW Test Case System Person" creating and maintaining this system, but I'd be happy to teach what I know of the process if someone is interested in taking this on, especially now that we've figured out how to actually make it work (it's usable! we're in alpha!), identified resources to draw on to learn more, and found an upstream SMW community to engage with. Using SMW for test case management is a pretty new thing to do. Any takers?