C'est la Z

Testing, Testing

With CS4All being the buzzword of the day, we're paying a lot of attention to the fact that when we teach a CS class for all students, most of them won't end up studying CS or going directly into a tech-heavy field.

Among those who do study CS, though, there's another division: those studying CS as an academic exercise or as preparation for graduate study, and those headed into the tech sector. For the latter group, it's important that they're prepared in terms of software development, design, and all those goodies not covered in CS classes.

I'm not advocating removing the good stuff currently in most college CS curricula, although I will say that most CS programs I've reviewed could benefit from trimming some requirements and adding some electives. I am advocating doing more than telling your students to "comment your code" - a refrain many undergrads have heard frequently with little guidance.

I'm advocating that we begin by bringing in tools and practices that either don't detract from current CS classes or can actually add to or streamline them.

Last year I wrote about using Git and GitHub in my classes. Students have to submit projects anyway and frequently have to work in teams, so using Git and a Git hosting service can actually make the class experience better for both students and instructors while introducing them to an industry best practice.

Today I want to talk about testing.

Students are always told to test their code but frequently aren't given much guidance. The results can include:

  • projects that don't compile
  • projects that compile and run, but not the way they're supposed to
  • wonky input or interfaces
  • projects that don't work for all cases

On the instructor side, we have to evaluate the students' submissions and deal with all of this. Some teachers use autograders to help. I have mixed feelings about them. On the one hand, they can speed up grading, but on the other, it's important for me as the teacher to actually dive into the students' code. In any case, using an autograder is actually somewhat similar to running a test suite.

Why not have the students create their own test suites? Done right, this should encourage students to evaluate their own code more carefully and also cut down on the time it takes the instructor to evaluate a given submission.

This means that we have to use a testing framework with a very low cost of entry.

I ended up finding doctest for C++. It's really simple - the whole framework is a single header file, so students don't have to actually install anything on their machines. Here's the example from the project page:

https://github.com/onqtam/doctest/raw/master/scripts/data/using_doctest_888px_wide.gif
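For those who'd rather not watch the gif, a minimal doctest file looks something like the following. This is a sketch along the lines of the project's own example; the factorial function is just a stand-in for whatever the student is actually writing:

```cpp
// The entire framework is this one header - nothing to install.
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include "doctest.h"

// The code under test - a stand-in example.
int factorial(int number) {
    return number <= 1 ? number : factorial(number - 1) * number;
}

// Tests live right alongside the code and run automatically
// when the compiled program is executed.
TEST_CASE("testing the factorial function") {
    CHECK(factorial(1) == 1);
    CHECK(factorial(2) == 2);
    CHECK(factorial(3) == 6);
    CHECK(factorial(10) == 3628800);
}
```

Compile, run, and doctest reports which CHECKs passed or failed.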

Basically, the students can just start writing tests.

Python also has a couple of low-friction testing options. One is the built-in doctest facility. Basically, you put sample runs, along with their expected output, in the docstring at the top of a function:
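Here's a minimal sketch, using a made-up average function (the `>>>` lines are the sample runs, each followed by its expected output):

```python
def average(values):
    """Return the arithmetic mean of a list of numbers.

    >>> average([1, 2, 3, 4])
    2.5
    >>> average([5])
    5.0
    """
    return sum(values) / len(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs every >>> example found in the docstrings
```

Running the file directly (or running python -m doctest filename.py -v) checks each sample run against the recorded output.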

This might seem a little cumbersome, particularly if you look at the example at the link. It also means the tests have to sit in each function's docstring, so they show up whenever that docstring is displayed.

The other easy Python option is the built-in unittest module.

Here's an example of testing strings from the link above:
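As I remember it, it goes something like this - each method whose name starts with test_ runs as a separate test:

```python
import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # split should fail when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
```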

I haven't played with Java testing in years, but I'd guess there's something similarly lightweight.

Tomorrow I'm hoping to finish the groundwork on using doctest with my C++ class and I'll write another post afterwards - probably after SIGCSE, since I'll be attending that from Wednesday on.

We'll see how this goes, but I'm guessing it will work well. If it does, it should make my life as a grader easier and also get the kids on track to using test frameworks - something they'll need wherever they end up.
