Test-Driven Development vs. Behavior-Driven Development

Overview

If you’ve spent any time at all reading about software development and testing methodologies, you’ve probably come across the terms Test-Driven Development (TDD) and Behavior-Driven Development (BDD). But what exactly do we mean by these? How do they differ? Can they be used together?

In this article, we’ll take a high-level look at both TDD and BDD, and hopefully, by the end, we’ll have answers to these three questions and more.

Example Problem Statement

To demonstrate these two concepts, we’ll use a relatively simple problem: input validation. We’ll establish the general goal of a specific input validation task and how we want the system to respond based on this goal. Then, we’ll implement and test a rudimentary solution in Java.

And as you may have guessed, we’ll show the TDD and BDD approaches to solving this problem, pointing out their basic philosophies and key differences along the way.

Test-Driven Development

Test-Driven Development is a lower-level, iterative, code-centric approach that uses unit tests to show the correctness of small units of code, like methods or classes.

The basic pattern of Test-Driven Development is straightforward:

  1. Optional: Stub out the desired class/method and/or test class
  2. Write a failing test
  3. Write just enough code to make the test pass
  4. Iterate steps 2 and 3 until you’re sure the code works as expected for all inputs/conditions
  5. Optional: Refactor the implementation without changing behavior, being sure to retest after every refactoring

That sounds rather simplistic, so let’s see it in action.

Suppose we’re tasked with writing an input validation class with a single method that checks whether an input string is alphanumeric. We want our method to return true if the input string is alphanumeric; otherwise, it should return false.

To get started, let’s stub the test class:

class ValidatorTest {
    //...
}

Then, the first code we’ll write is a test that fails when we compile or execute it. We’ll choose the latter:

@Test
void testWithAlphanumericInput() {
    fail("not implemented yet!");
}

While we’ve technically written a failing test, a test that simply forces a failure doesn’t buy us much – it only acts as a placeholder for the actual test and doesn’t set us up for the next step. So, let’s modify our test case, imagining a class and method signature for our validation:

@Test
void testWithAlphanumericInput() {
    assertTrue(Validator.isAlphaNumeric("alpha12345"));
}

Since we haven’t written any implementation code yet (the Validator class doesn’t even exist!), we’ll get a compiler error. Great! We’ve written a failing test case. Now, let’s write enough implementation so that the test at least compiles:

class Validator {
    static boolean isAlphaNumeric(String s) {
        return false;
    }
}

When we return to our test, we’ll see that it now compiles, but if we run it, it fails. That’s because we haven’t really implemented any validation logic – the method just returns false, regardless of input. So, let’s modify our implementation so that our test passes:

static boolean isAlphaNumeric(String s) {
    for (int i = 0; i < s.length(); i++) {
        if (!Character.isLetterOrDigit(s.charAt(i))) {
            return false;
        }
    }
    return true;
}

Good, our test case passes. But what if the input isn’t alphanumeric? Let’s add a test case:

@Test
void testWithNonAlphanumericInput() {
    assertFalse(Validator.isAlphaNumeric("alpha 12345"));
}

When we run this, it’ll pass. So far, so good – our latest implementation works for this case. But what if we get a null input string? It’ll throw a NullPointerException. Let’s write the test first, and ensure that it fails:

@Test
void testWithNullInput() {
    assertFalse(Validator.isAlphaNumeric(null));
}

Sure enough, when we run the test, we get a NullPointerException. Now, let’s fix the implementation so that our test passes:

static boolean isAlphaNumeric(String s) {
    if (s == null) {
        return false;
    }
    for (int i = 0; i < s.length(); i++) {
        if (!Character.isLetterOrDigit(s.charAt(i))) {
            return false;
        }
    }
    return true;
}

All three tests should be passing now! But wait: What if our input string isn’t null but is empty? Surely, an empty string shouldn’t be considered alphanumeric. Yikes! Let’s add the test first:

@Test
void testWithEmptyInput() {
    assertFalse(Validator.isAlphaNumeric(""));
}

Now, when we run our new test, we’ll see that it fails! That’s because the input string’s length is 0, causing the body of the loop to be skipped and the method to return true. Let’s fix it:

static boolean isAlphaNumeric(String s) {
    if (s == null || s.isEmpty()) {
        return false;
    }
    for (int i = 0; i < s.length(); i++) {
        if (!Character.isLetterOrDigit(s.charAt(i))) {
            return false;
        }
    }
    return true;
}

Now, our tests should all be passing! We’ve accounted for both null and empty input strings.

Of course, we can write a few additional test cases with various alphanumeric inputs and non-alphanumeric inputs to give us a high degree of confidence that our implementation is correct. For example, we may want to test with strings consisting of a single letter, digit, whitespace, or special character, and we could test strings containing combinations of letters, digits, whitespace, and special characters – including otherwise alphanumeric strings with leading or trailing whitespace.
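For instance, a compact way to sketch that broader table of cases – using plain Java with bare assertions rather than a parameterized-test framework, so the example stays self-contained – might look like the following. The class name is my own choice, and the method body is the final implementation from above:

```java
public class ValidatorSpotChecks {

    // The final implementation from above, reproduced so this sketch is self-contained
    static boolean isAlphaNumeric(String s) {
        if (s == null || s.isEmpty()) {
            return false;
        }
        for (int i = 0; i < s.length(); i++) {
            if (!Character.isLetterOrDigit(s.charAt(i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Inputs we expect to validate successfully
        String[] valid = { "a", "7", "alpha12345", "ABC123" };
        // Inputs we expect to be rejected, including whitespace edge cases
        String[] invalid = { null, "", " ", "#", "alpha 12345", " alpha12345", "alpha12345 " };

        for (String s : valid) {
            if (!isAlphaNumeric(s)) {
                throw new AssertionError("expected valid: " + s);
            }
        }
        for (String s : invalid) {
            if (isAlphaNumeric(s)) {
                throw new AssertionError("expected invalid: " + s);
            }
        }
        System.out.println("all spot checks passed");
    }
}
```

In a real test suite, each of these inputs would more naturally become its own named test case or a parameterized test, but the table-of-inputs idea is the same.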

Optionally, we can refactor our solution to improve its readability if we like – being sure to test after each refactoring. And this is a trivial example, no doubt. If we’re given this task in the real world, we’ll probably consider both null and empty strings up front. But, basically, we’ve demonstrated how to iteratively test and develop a method using the TDD approach. Very cool!

TDD is a proven approach to software development and is particularly effective when we may not have all of the requirements fleshed out up front.

Next, let’s have a look at the other side of the coin: BDD.

Behavior-Driven Development

As the name suggests, Behavior-Driven Development is a higher-level, scenario-based development and testing methodology that is less about testing smaller units of code and more about ensuring that our code behaves according to agreed-upon requirements or design specifications.

And even though BDD is often used at a higher level, it can also be used at the class or unit level to help us flesh out our unit tests. For the sake of demonstration, we’ll focus on the same problem as in the TDD section and employ BDD at the unit level.

Similar to TDD, we often write BDD tests before the implementation code, where each test case corresponds to a different expected behavior that has been predetermined jointly by developers, product owners, and business analysts.

This is slightly different from what we saw above in the TDD approach, where we didn’t list out all the expected behaviors up front. In TDD, we wrote a failing test, then implemented enough code for that test to pass, then iterated, writing additional tests and implementation details, until we were satisfied that the code did everything we wanted it to do.

You may have noticed that the test method names we used in the TDD section were descriptive, but not prescriptive. There really was no specific naming convention. That’s where BDD-style test naming conventions come to our aid.

In BDD, we generally name tests using the convention: givenX_whenY_thenZ, where ‘X’ is the precondition for the test, ‘Y’ is the code being tested, and ‘Z’ is the expected behavior or result.

We could also skip the precondition (‘X’) and just use the whenY_thenZ pattern in trivial cases, or if we want to use a more descriptive naming pattern for what’s being tested, depending on our preference.

For example, by iterating through the design in the TDD section, we’ve already identified four expected behaviors of our alphanumeric validator method:

  • Alphanumeric strings return true
  • Non-alphanumeric strings return false
  • Null input string returns false
  • Empty string input returns false

Let’s see what the test names might look like, using BDD-style naming:

  • givenAlphanumericInput_whenValidate_thenReturnsTrue
  • givenNonalphanumericInput_whenValidate_thenReturnsFalse
  • givenNullInput_whenValidate_thenReturnsFalse
  • givenEmptyInput_whenValidate_thenReturnsFalse

Here’s another example set of test names, still in the BDD style:

  • whenValidateAlphanumericInputString_thenValid
  • whenValidateNonalphanumericInputString_thenInvalid
  • whenValidateNullInputString_thenInvalid
  • whenValidateEmptyInputString_thenInvalid

As you can see, the BDD-style naming convention helps us arrive at a full set of tests covering all four of the required behaviors.

Of course, as we add methods to our Validator class, we may need to adjust the test names to clarify what method or behavior is being tested.

Next, let’s convert our TDD test cases to use the BDD-style naming convention. We’ll use the first set of names noted above:

@Test
void givenAlphanumericInput_whenValidate_thenReturnsTrue() {
    assertTrue(Validator.isAlphaNumeric("alpha12345"));
}

@Test
void givenNonalphanumericInput_whenValidate_thenReturnsFalse() {
    assertFalse(Validator.isAlphaNumeric("alpha 12345"));
}

@Test
void givenNullInput_whenValidate_thenReturnsFalse() {
    assertFalse(Validator.isAlphaNumeric(null));
}

@Test
void givenEmptyInput_whenValidate_thenReturnsFalse() {
    assertFalse(Validator.isAlphaNumeric(""));
}

Some Key Differences

Although TDD demands a test-first approach and BDD merely emphasizes one, BDD-style testing can also be applied retroactively, after the implementation is written, given an adequate collection of expected behaviors. QA developers often use it this way to establish and maintain automated regression tests.

When using a test-first approach, TDD is more of an iterative, test-and-implement-as-you-go approach, whereas BDD lends itself to cases where we want to cleanly map out all the expected kinds of inputs and behaviors first, write these expected behaviors as tests, and then write the implementation code.

Can We Combine TDD and BDD?

As is often the case in software development, the answer is: “it depends”. Specifically, it depends on how strictly we define TDD. But generally speaking, it’s often a good practice to combine the two.

If we can enumerate all of the expected behaviors of our code, then we can define our BDD test cases accordingly. And if we write stubs for the implementation (or at least know its classes and method signatures), we can fully code the tests before we’ve implemented any of the desired behaviors!

Then, we implement one behavior at a time until all the tests pass! In that way, the process is similar to TDD, though we’ve written all the tests first instead of iterating through pairs of alternating tests and implementation details on-the-fly.
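As a concrete sketch of this combined flow (the class name and check names here are my own, but the method and behaviors are the ones from our example), we can stub the implementation, encode all four expected behaviors up front, and then implement one behavior at a time until every check reports PASS:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BehaviorChecklist {

    // Stub: just enough to compile. Implement one behavior at a time
    // until every check below reports PASS.
    static boolean isAlphaNumeric(String s) {
        return false;
    }

    public static void main(String[] args) {
        // All expected behaviors, written up front (BDD style), keyed by name
        Map<String, Boolean> results = new LinkedHashMap<>();
        results.put("givenAlphanumericInput_thenTrue", isAlphaNumeric("alpha12345"));
        results.put("givenNonalphanumericInput_thenFalse", !isAlphaNumeric("alpha 12345"));
        results.put("givenNullInput_thenFalse", !isAlphaNumeric(null));
        results.put("givenEmptyInput_thenFalse", !isAlphaNumeric(""));

        // With the stub in place, only the first behavior fails
        results.forEach((name, passed) ->
            System.out.println((passed ? "PASS  " : "FAIL  ") + name));
    }
}
```

In practice, these checks would live in a real test framework rather than a main method; the point is simply that the full behavior list exists before any real implementation does.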

Alternatively, we can work through each behavior one by one in the TDD style, first implementing the test code that verifies the expected behavior, then implementing the behavior so that the test passes.

Final Thoughts

It’s worth noting that there are many TDD evangelists who say that it’s the absolute best approach to software development. And maybe they’re right. It’s certainly been shown to be effective and is a great way to map out a problem and solution iteratively, as we’ve seen in our examples.

On the other hand, BDD is a great way to map out our code’s behaviors in the form of higher-level test cases before diving into the implementation. That way, when all our tests are passing, we can be relatively confident in our implementation. And we can use it in conjunction with TDD in case we haven’t quite fleshed out all of the desired behaviors during the first pass.

That said, there are many proven approaches to software development, and the choice of which one to use may depend on personal preferences, company or customer policy, the problem at hand, and project requirements regarding quality, test coverage, and more.

See the GitHub repository for all the sample code in this article.

Migrating Gradle Projects to Java 17

Overview

According to New Relic, Java 17 officially surpassed Java 11 in 2024 as the most-used Java version by professional developers, with 35% of applications reported to now be using it. And let’s face it – lots of us who have been on Java 11 or Java 8 would love to take advantage of the new language features and runtime enhancements, as well as garbage collection choices, offered in JDK 17.

However, many developers are challenged with upgrading a complex system of custom libraries and applications that are running on Java 11 (or earlier). If you’re one of those developers, then this article is for you!

We’ll create a simple, Gradle-based “Hello World” Java project, and we’ll see how to upgrade this Gradle project to build and run in Java 17. We’ll also cover some additional considerations and common misconceptions for the endeavor.

Prerequisites

You probably already have JDK 11 or JDK 8 installed, but for this tutorial, you’ll need to install JDK 17 or later on your system so that you can build and run a project using JDK 17. Oracle JDK 17 or any OpenJDK 17 distribution will work fine for this purpose – my personal favorite is Amazon Corretto, but there are several others.

For best results, you’ll also need Gradle (or Gradle wrapper) version 7.3 or later. Earlier Gradle/wrapper versions may encounter errors building with JDK 17 or later and should be avoided for this reason.

For the purpose of this tutorial, we’ll stick with the Groovy DSL for Gradle, but the same principles can be applied using the Kotlin DSL.

Sample “Hello World” Project

Suppose we have a Gradle project with a simple build.gradle file:

plugins {
    id 'java'
}

group = 'blog.coherentjava'
version = '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

Pretty straightforward, right? Since our project is a simple “Hello World” app, we don’t need any external dependencies – just Java, which is added via the java plugin!

Now, let’s create the HelloWorld.java file under src/main/java:

public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

Updating the build.gradle File for Java 17

For our simple example, we only need to add a java configuration block, telling Gradle to use the toolchain for Java 17:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}

This tells Gradle that we want to build the project using JDK 17, with source code compatible with Java 17 and bytecode compiled to run on JRE 17.

This means that, even if we later decide to run Gradle with a newer version of Java, like JDK 21, this build will always be performed using JDK 17 – going so far as to automatically download the Java compiler and related tools for Java 17 if it can’t find them on our system, thereby future-proofing our Java 17 Gradle build.
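Putting the pieces shown so far together, the complete build.gradle for our upgraded project now looks like this:

```groovy
plugins {
    id 'java'
}

group = 'blog.coherentjava'
version = '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}
```

As a quick sanity check, Gradle’s built-in javaToolchains task (./gradlew -q javaToolchains) lists the JDK installations Gradle has detected on your system.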

Updating the Java Version in IntelliJ IDEA

If you’re using IntelliJ IDEA, you’ll also need to make sure to configure the project to use JDK 17 and set the language level to Java 17.

Common Pitfalls and Considerations

If you’re working with an IDE such as IntelliJ IDEA, Eclipse, Netbeans, or Visual Studio Code, you may have been tempted to skip ahead and simply configure your IDE project to use JDK 17 and be compatible with Java 17 language features. But make no mistake – this gets you only halfway there and “Livin’ on a Prayer” (cue Bon Jovi).

Or maybe you’re working with an established codebase and encounter settings for sourceCompatibility and/or targetCompatibility instead of the toolchain configuration in the build.gradle file:

java {
    sourceCompatibility = 11
    targetCompatibility = 11
}

And you may be tempted to just change the version numbers in this section:

java {
    sourceCompatibility = 17
    targetCompatibility = 17
}

Now, this was probably done with good intentions, but you should know that these settings are intended only to maintain backward compatibility with the specified older Java version – for example, if you intend the code to run on JDK 11 even though you only have JDK 17 on your development system, and therefore need to restrict the source code syntax to be Java 11 compatible and the bytecode to be executable on Java 11.

However, these settings only establish the language and runtime compatibility versions of your project and do not guarantee that your Gradle build uses JDK 17. In fact, according to Gradle documentation, you may encounter issues with this alone and should only use this approach if you need backward compatibility but can’t use toolchains. So, if you see this, avoid the Bon Jovi syndrome and change this to use the toolchain construct.

Additional Notes

By default, when you build a Java project, Gradle will target the same JDK version that Gradle itself is running with. However, as shown above, there are some situations you may encounter when upgrading an established project that will require additional changes.

The toolchain also allows you to designate a specific JDK vendor and implementation:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
        vendor = JvmVendorSpec.AMAZON
        implementation = JvmImplementation.VENDOR_SPECIFIC
    }
}

And for some advanced toolchain specifications, you may need to also specify a custom toolchain resolver, such as the Foojay Toolchains Plugin:

plugins {
    id "java"
    id "org.gradle.toolchains.foojay-resolver-convention" version "0.4.0"
}

The use and configuration of custom toolchain vendors, implementations, and resolvers is beyond the scope of this article.

Next Steps

Now that we’ve seen how simple it is to upgrade our Gradle projects to build and run using JDK 17, we can take advantage of the new language features and garbage collection improvements the platform has to offer.
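For instance, once the build targets Java 17, code like the following compiles and runs as part of the project (the class name is my own; records, switch expressions, and text blocks are all standard by JDK 17):

```java
public class Java17Demo {

    // Records (standard since Java 16): concise immutable data carriers
    record Point(int x, int y) { }

    public static void main(String[] args) {
        Point p = new Point(3, 4);

        // Switch expressions (standard since Java 14) yield a value directly
        String quadrant = switch (Integer.signum(p.x())) {
            case 1  -> p.y() > 0 ? "I" : "IV";
            case -1 -> p.y() > 0 ? "II" : "III";
            default -> "on an axis";
        };

        // Text blocks (standard since Java 15)
        String msg = """
                point %s lies in quadrant %s""".formatted(p, quadrant);
        System.out.println(msg);
    }
}
```

If the toolchain is misconfigured and the build falls back to an older JDK, code like this fails to compile – a handy early warning that the upgrade didn’t take.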

See the GitHub repository for all the sample files in this article.

Back in the Saddle Again

Simply put, I haven’t had the time or energy to put much (or any) effort into Coherent Java lately and have basically been on hiatus from the site since 2016.

Now that I’ve got both the time and energy to focus more on this site, you might say I’m “back in the saddle again,” to quote the 1938 song first made popular by Gene Autry.

I’ll start out with some basic Java topics and then gradually move towards intermediate and more advanced topics involving Spring Boot and other components of the Spring Stack. Who knows – I may sprinkle in some AWS-related topics as well.

Anyway…

Welcome back, thanks for reading, and I hope you’ll find these articles interesting enough to share and/or subscribe.

Code to Interfaces, Not to Implementations

A common mantra of object-oriented development is that one should code to the interface, not the implementation.  But what does it mean?  What does it look like in Java?  And why does it matter?  Let’s explore this mantra in detail and see why it is considered a best practice.

What does it mean?

Quite simply, it means that your code should be “class inclusive” rather than “class exclusive”.  In other words, don’t be a class snob.  When specifying a method parameter or return value, you may think you know what class you will eventually instantiate or return, or what class others are likely to pass in to your method.  But if that class is one of many implementing a common interface (or extending an abstract class), it is better to declare the field or variable using the lowest-level interface or abstract class whose behaviors your code actually depends on, rather than the implementation type.  Think of it like finding the least common denominator when simplifying fractions in math class, or finding the simplest tool needed to perform a certain task.

Here’s a real-world analogy: suppose you’re on vacation, and when you arrive at the hotel, you realize that you’ve forgotten to pack your turbo-charged, battery-operated electric toothbrush with spinning heads, and that you will need to brush your teeth at some point before you return home.  Do you really need to go out and buy another turbo-charged, battery-operated electric toothbrush with spinning heads?  Of course not.  You really just need a toothbrush.  It’s the minimum tool that you need in order to perform the task of brushing your teeth.

What Does it Look Like in Java?

Let’s first take a look at an example of what not to do.  Have you ever had to maintain or review someone else’s code and they did something like this?

public void printWords(ArrayList<String> words) {
  for (String word : words) {
    System.out.println(word);
  }
}

public void printKeys(HashMap<Integer, String> pairs) {
  for (Integer key : pairs.keySet()) {
    System.out.println(key);
  }
}

Did you immediately get the feeling that the code just didn’t look right?  Did you cringe as soon as you saw it?  Could you smell it?  If you answered “yes”, then you are already well on your way to understanding this basic principle.  If you answered “no”, read on.

Suppose you want to print a list of words that is stored in a LinkedList, or print all the keys in a TreeMap.  You can’t use the above methods as written, because LinkedList is not a descendant of ArrayList, and TreeMap is not a descendant of HashMap.

Now look at the implementations above.  Does the printWords method perform any operations on its parameter that are exclusive to an ArrayList?  Or does printKeys perform any operations on its parameter that are exclusive to a HashMap?  The answer is clearly no in both cases.  Here is a better way to write these methods that is inclusive of more common types:

public void printWords(List<String> words) {
  for (String word : words) {
    System.out.println(word);
  }
}

public void printKeys(Map<Integer, String> pairs) {
  for (Integer key : pairs.keySet()) {
    System.out.println(key);
  }
}
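With the parameter types widened to the interfaces, the same two methods now accept any List or Map implementation.  Here’s a quick sketch (the class name is my own) showing them called with four different concrete types:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PrintDemo {

    // Declared against List, so any List implementation is welcome
    static void printWords(List<String> words) {
        for (String word : words) {
            System.out.println(word);
        }
    }

    // Declared against Map, so any Map implementation is welcome
    static void printKeys(Map<Integer, String> pairs) {
        for (Integer key : pairs.keySet()) {
            System.out.println(key);
        }
    }

    public static void main(String[] args) {
        // Array-backed or linked -- both are Lists
        printWords(new ArrayList<>(List.of("alpha", "beta")));
        printWords(new LinkedList<>(List.of("gamma", "delta")));

        // Hash-based or sorted -- both are Maps
        printKeys(new HashMap<>(Map.of(1, "one", 2, "two")));
        printKeys(new TreeMap<>(Map.of(1, "one", 2, "two")));
    }
}
```

Had the parameters stayed ArrayList and HashMap, the second call in each pair would not even compile.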

Toothbrushes Revisited…

To further illustrate this principle, let’s revisit the toothbrush analogy.  The following is a sample class interface and class hierarchy fitting the scenario described above:

public interface Toothbrush {
  void brushTeeth();
}

public interface ElectricToothbrush extends Toothbrush {
  void charge();
}

public interface BatteryPoweredToothbrush 
            extends ElectricToothbrush {
  void replaceBattery();
}

public class BasicToothbrush implements Toothbrush {
  public void brushTeeth() {
    . . .
  }
}

public class TurboChargedBatteryPoweredSpinningToothbrush
            implements BatteryPoweredToothbrush {
  public void brushTeeth() {
      . . .
  }
  public void charge() {
      . . .
  }
  public void replaceBattery() {
      . . .
  }
}

And here is a simplistic example showing how one might use the Toothbrush interface and class hierarchy in everyday life and while on vacation.

A Day in the Life…

public class DayInTheLife {
  public void getUp() {
      . . .
  }
  public void brushYourTeeth(Toothbrush brush) {
      brush.brushTeeth();
  }
  public void takeShower() {
      . . .
  }
  public void writeAwesomeJavaCode(int hours) {
      . . .
  }
  public void eat(Meal meal) {
      . . .
  }
  public void chill(int hours) {
      . . .
  }
  public void sleep(int hours) {
      . . .
  }
  public void doFunStuff(int hours) {
      . . .
  }
  public void buyStuff(Object... obj) {
      . . .
  }
}

Today, a Regular Day…

public class RegularDayAtHome {
  private Meal breakfast;
  private Meal secondBreakfast;
  private Meal lunch;
  private Meal dinner;
  public RegularDayAtHome(Meal[] meals) {
      //instantiate Meal fields
  }
  public void main() {
      DayInTheLife today = new DayInTheLife();
      today.getUp();
      today.takeShower();
      today.eat(breakfast);
      Toothbrush tb = 
          new TurboChargedBatteryPoweredSpinningToothbrush();
      today.brushYourTeeth(tb);
      today.writeAwesomeJavaCode(2);
      today.eat(secondBreakfast);  //hobbits
      today.sleep(1);  //power nap
      today.writeAwesomeJavaCode(3);
      today.buyStuff(lunch);
      today.eat(lunch);
      today.writeAwesomeJavaCode(3);
      today.eat(dinner);
      today.writeAwesomeJavaCode(1);
      today.chill(4);
      today.sleep(8);  //good night
  }
}

Vacation Day!

public class VacationDay {
  private Meal brunch;
  private Meal dinner;
  public VacationDay(Meal[] meals) {
      //instantiate Meal fields
  }
  public void main() {
      DayInTheLife someday = new DayInTheLife();
      someday.getUp();
      someday.takeShower();
      someday.eat(brunch);
      //oops, I forgot my toothbrush
      Toothbrush tb = new BasicToothbrush();
      someday.buyStuff(tb);  //buy a toothbrush
      someday.brushYourTeeth(tb);
      someday.doFunStuff(5);
      someday.buyStuff("souvenirs");
      someday.sleep(3);  //I need a nap!
      someday.doFunStuff(2);
      someday.eat(dinner);
      someday.chill(2);
      someday.sleep(10);  //good night
  }
}

Why does it matter?

Coding to interfaces increases the reusability of your code.  When it comes down to specifying a method parameter, you are primarily interested in its behaviors and its potential use as a parameter to be passed to another method.  Don’t paint yourself into a corner by unnecessarily restricting the classes your methods will accept.

Recap

Learning to consistently code to interfaces is an important part of a developer’s arsenal and is one of many techniques that separate senior developers from junior or entry-level developers. I could write an entire book chapter on this subject, with lots of examples, but I think I’ve hit the main points.  So when designing and writing a piece of code, remember the following:

  • Be inclusive rather than exclusive!
  • Don’t be a class snob!
  • Reusability is king!