Pop Quiz: Software Testing

Pop quizzes are a fun way to discover knowledge gaps. I'm experimenting with this format to distill my software engineering experience and hopefully provide something both informative and entertaining!

While AI code generation tools are now accessible to everyone, experienced engineers maintain a critical advantage: they know how to direct AI to write testable code and verify it properly. Here's a pop quiz on software testing to evaluate your engineering acumen:

Question 1: Why do applications with global state present challenges for testing? (Hint: Think about test isolation and predictability.)

Question 2: What is a "flaky test" and what are the most common factors that cause this testing challenge? (Hint: It's about inconsistent results.)

Question 3: Your test suite shows all green, yet users still encounter critical bugs in production. What are the most common testing blind spots? (Hint: Are your tests seeing the whole picture and validating it thoroughly enough?)

Question 4: An AI generates tests with 100% line coverage for your code. What critical failures might still lurk? (Hint: What does coverage not tell you?)

Question 5: At what point do mocks in your tests become a liability rather than an asset? (Hint: Think about what happens during refactoring.)


ANSWERS:

Q1: Global State Challenges

  • Tests can influence each other

  • A change in one test (e.g., modifying a global variable) can unexpectedly break another

  • This makes achieving test isolation and predictable results difficult
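
Here's a minimal pytest-style sketch (all names hypothetical) of how a module-level global couples tests: whichever test runs second inherits the other's leftover state.

    # Hypothetical module with global state, tests included in the same file.
    _count = 0

    def increment():
        global _count
        _count += 1
        return _count

    def test_starts_at_one():
        assert increment() == 1   # passes when run in isolation

    def test_also_starts_at_one():
        assert increment() == 1   # fails if the test above ran first: order-dependent

Resetting the global in a per-test fixture, or avoiding module-level state entirely, restores isolation.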

Q2: Flaky Tests

  • Definition: Tests that pass or fail inconsistently without code changes

  • Common Causes: Race conditions, timing assumptions (e.g., fixed Thread.sleep() waits instead of real synchronization), shared state between tests, reliance on unstable external services, and unseeded random numbers
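
To make the sleep-based race concrete, here's a hypothetical pytest-style sketch in Python (Thread.sleep() being the Java counterpart): the first test guesses how long the work takes, the second waits for it to finish.

    import threading
    import time

    def worker(results):
        time.sleep(0.01)              # simulated work of unpredictable duration
        results.append("done")

    def test_worker_flaky():
        results = []
        threading.Thread(target=worker, args=(results,)).start()
        time.sleep(0.02)              # hope the worker finished in time: a race
        assert results == ["done"]    # fails whenever the machine is slow

    def test_worker_reliable():
        results = []
        t = threading.Thread(target=worker, args=(results,))
        t.start()
        t.join()                      # synchronize on completion, not on a guess
        assert results == ["done"]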

Q3: Green Tests, Production Bugs (Blind Spots)

  • Testing only "happy paths"

  • Weak or missing assertions (the test runs but doesn't actually verify the right thing)

  • Environment differences (test vs. prod)

  • CAUTION: AI tools may generate tests that optimize for passing, not for verifying correctness!
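
The weak-assertion blind spot in particular is easy to miss. In this hypothetical Python sketch, both tests execute the same code, but only one would catch a wrong discount.

    def apply_discount(price, percent):
        return price - price * percent / 100

    def test_discount_weak():
        result = apply_discount(100, 20)
        assert result is not None          # runs the code, verifies almost nothing

    def test_discount_strong():
        assert apply_discount(100, 20) == 80   # pins down the expected value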

Q4: 100% Coverage, Still Failing

  • Does the code correctly implement the business logic? Coverage can't answer that

  • Coverage ≠ Correctness! Coverage tells you which lines were executed, not whether the logic is sound, whether boundary cases are handled, or whether the assertions actually verify the results
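
A minimal illustration (the is_adult function is hypothetical): both tests pass and every line is executed, yet a boundary bug survives.

    def is_adult(age):
        return age > 18                # bug: the boundary should be age >= 18

    def test_is_adult_true():
        assert is_adult(30) is True    # together these hit 100% line coverage

    def test_is_adult_false():
        assert is_adult(5) is False

    # Nothing tests the boundary: is_adult(18) wrongly returns False,
    # and line coverage alone can never reveal that.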

Q5: Mocks as a Liability

  • When mocks are too tightly coupled to the "implementation details" of a dependency (exact call sequences, internal helpers) rather than its "interface"

  • This means benign refactoring (which doesn't change the contract/interface) can break tests unnecessarily, eroding trust in the suite
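
Here's a sketch using Python's unittest.mock (total_price and tax_for are hypothetical names): the result assertion checks the contract, while the interaction assertion couples the test to one particular implementation.

    from unittest.mock import Mock

    def total_price(cart, tax_service):
        subtotal = sum(cart)
        return subtotal + tax_service.tax_for(subtotal)

    def test_total_price():
        tax_service = Mock()
        tax_service.tax_for.return_value = 10

        # Behavioral assertion: survives any refactoring that keeps the contract.
        assert total_price([50, 50], tax_service) == 110

        # Implementation assertion: breaks if total_price is refactored to call
        # tax_for() once per item, even though the final result is unchanged.
        tax_service.tax_for.assert_called_once_with(100)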

You can go surprisingly far in improving AI-generated code with thoughtful test design!

For a great start, use Uncle Bob's FIRST acronym:

  • Fast

  • Isolated

  • Repeatable

  • Self-Validating

  • Timely/Thorough
