Testing@LMAX
LMAX Exchange relies heavily on automated testing in a wide range of forms. This is a catalog of posts, from Symphonious and from other LMAX folk, about the tools and techniques we use to make testing effective and maintainable.
Testing Everything, Everywhere
- Testing in Live
  Taking testing all the way into production.
- Test Isolation
  Designing the exchange in a way that lets us isolate tests and improve testability.
- Making End-to-End Tests Work
  An overview of our acceptance testing setup, highlighting the key features that make it effective.
- End to End Tests Update
  Slightly more recent stats on our acceptance tests.
- Time Travel and the TARDIS
  Testing processes that happen over time (a controllable-clock sketch follows this list).
- Compatibility Tests
  Testing that data migrations actually work against production data.
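The Time Travel and the TARDIS entry deals with behaviour that only emerges as time passes. One common way to make that testable, offered here as a hedged sketch rather than LMAX's actual mechanism, is to inject a controllable clock so a test can jump the system past timeouts and expiries without waiting:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch: production code asks this clock for the time instead of
// calling Instant.now() directly, so tests can fast-forward on demand.
public final class TestClock {
    private final AtomicReference<Instant> now;

    public TestClock(Instant start) {
        this.now = new AtomicReference<>(start);
    }

    public Instant now() {
        return now.get();
    }

    // Jump forward, e.g. past an order's expiry, without actually sleeping.
    public void advanceBy(Duration duration) {
        now.updateAndGet(current -> current.plus(duration));
    }
}
```

A test can then place an order, call advanceBy(Duration.ofDays(1)) and assert the expiry behaviour immediately.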
UI Tests
- UI Test Isolation with vncserver
  Keeping UI tests isolated by running each one in its own X session with vncserver.
- Taking Screenshots with Selenium/WebDriver
  How we capture screenshots when tests fail, to make it easier to see what went wrong (a sketch follows this list).
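As a flavour of the screenshot technique, here is a minimal JUnit 4 rule; the class name and output path are illustrative assumptions, not LMAX's actual implementation:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

// Attach with @Rule in a test class; on failure it saves a PNG named after the test.
public class ScreenshotOnFailure extends TestWatcher {
    private final WebDriver driver;

    public ScreenshotOnFailure(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    protected void failed(Throwable e, Description description) {
        try {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Path dir = Files.createDirectories(Paths.get("build", "screenshots"));
            Files.copy(shot.toPath(), dir.resolve(description.getMethodName() + ".png"));
        } catch (Exception suppressed) {
            // Never let screenshot capture hide the original test failure.
        }
    }
}
```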
The DSL
- Abstraction by DSL
  How our DSL introduces abstraction to make it easier to maintain tests as the system changes.
- Aliases
  One of the key abstractions the DSL uses to provide isolation: memorable aliases act as placeholders for longer, unique names.
- Replacements in DSL
  A simple but very effective little pattern for working with output that references something we normally use an alias for (both aliases and replacements are sketched after this list).
- Abstracting Acceptance Tests
  Early thoughts on the DSL and acceptance test approach, written just after I joined LMAX.
- Making Test Output Useful
  Some useful techniques we use to ensure the log output from our tests is meaningful and understandable.
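To make the alias and replacement ideas concrete, here is a hedged sketch; the class and method names are assumptions, not LMAX's code. The test refers to "user", the framework substitutes a unique real name per run, and system output can be rewritten back so assertions stay readable:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: tests use short memorable aliases; the framework maps each to a
// unique real name so concurrently running tests never collide on shared state.
public final class Aliases {
    private final Map<String, String> aliasToReal = new ConcurrentHashMap<>();

    // "user" -> "user-1f9c..." (stable within one test, unique across tests)
    public String realNameFor(String alias) {
        return aliasToReal.computeIfAbsent(alias, a -> a + "-" + UUID.randomUUID());
    }

    // The "replacements" direction: rewrite system output so assertions can be
    // written against the aliases the test actually used.
    public String replaceRealNamesWithAliases(String output) {
        String result = output;
        for (Map.Entry<String, String> entry : aliasToReal.entrySet()) {
            result = result.replace(entry.getValue(), entry.getKey());
        }
        return result;
    }
}
```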
Static Analysis
- Adventures with Files.lines()
  Tom Johnson documents how we use custom FindBugs detectors to ensure entire classes of problems never recur.
- How To Find Bugs, Part 1: A Minimal Bug Detector
  Some details from Tom Johnson about how to build a simple FindBugs detector (a sketch in the same spirit follows this list).
- How To Find Bugs, Part 2: Well, this is somewhat confusing and frustrating
  Tom Johnson continues his exploration of how to build FindBugs detectors.
- How To Find Bugs, Part 3: @VisibleForTesting
  More from Tom Johnson, this time building a FindBugs detector that works with annotations.
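As a flavour of what a minimal detector looks like, here is a hedged sketch against the standard FindBugs plugin API rather than Tom's actual code; the bug type name is made up, and the findbugs.xml/messages.xml descriptor wiring is omitted. It flags calls to Files.lines(), whose returned Stream must be closed:

```java
import edu.umd.cs.findbugs.BugInstance;
import edu.umd.cs.findbugs.BugReporter;
import edu.umd.cs.findbugs.BytecodeScanningDetector;

// Visits every bytecode instruction in the analysed classes and reports a bug
// whenever it sees a static call to java.nio.file.Files.lines(...).
public class FilesLinesDetector extends BytecodeScanningDetector {
    private final BugReporter bugReporter;

    public FilesLinesDetector(BugReporter bugReporter) {
        this.bugReporter = bugReporter;
    }

    @Override
    public void sawOpcode(int seen) {
        if (seen == INVOKESTATIC
                && "java/nio/file/Files".equals(getClassConstantOperand())
                && "lines".equals(getNameConstantOperand())) {
            bugReporter.reportBug(
                    new BugInstance(this, "FILES_LINES_MUST_BE_CLOSED", NORMAL_PRIORITY)
                            .addClassAndMethod(this)
                            .addSourceLine(this));
        }
    }
}
```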
Tooling
- Distributed Builds with Romero
  Romero is the in-house tool we built to intelligently distribute our acceptance tests across servers. It has some clever little tricks (one common scheduling trick is sketched after this list).
- Test Results Database
  Storing test results in the database.
- Static and Dynamic Languages
  Writing unit tests for Java code in Spock and Groovy turned out to be a bad idea.
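One widely used trick for distributing tests, offered as a sketch and not as a description of what Romero actually does, is longest-processing-time scheduling: sort tests by historical run time and always hand the next-slowest test to the least-loaded server:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public final class TestScheduler {
    public record TimedTest(String name, long historicalMillis) {}

    // Greedy longest-processing-time assignment: slowest tests first,
    // each one handed to the currently least-loaded server.
    public static Map<Integer, List<String>> assign(List<TimedTest> tests, int servers) {
        // Each entry: [serverIndex, totalAssignedMillis], ordered by load.
        PriorityQueue<long[]> byLoad = new PriorityQueue<>(Comparator.comparingLong(s -> s[1]));
        for (int i = 0; i < servers; i++) {
            byLoad.add(new long[] {i, 0});
        }
        Map<Integer, List<String>> plan = new HashMap<>();
        tests.stream()
                .sorted(Comparator.comparingLong(TimedTest::historicalMillis).reversed())
                .forEach(test -> {
                    long[] server = byLoad.poll();
                    plan.computeIfAbsent((int) server[0], k -> new ArrayList<>()).add(test.name());
                    server[1] += test.historicalMillis();
                    byLoad.add(server);
                });
        return plan;
    }
}
```

Greedy assignment like this keeps overall wall-clock time close to optimal when run times vary widely, which is typically the case for acceptance suites.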
Team & Culture
- Revert First, Ask Questions Later
  There’s no shame in reverting.
Talks
- Testing into Production and Back Again
  Sam Adams gives an overview of continuous delivery practices at LMAX. He demonstrates the variety of testing they have built in, how good test isolation has enabled them to extend their functional tests into live monitoring of production, and how a commitment to incremental delivery, quality and automation has created a sustainable environment for producing great software fast.
- Testing Without Examples
  Tom Johnson describes a number of techniques we use to ensure some property holds for our systems under all circumstances (a bare-bones sketch of the idea follows this list).
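The core idea behind testing without examples is property-based: instead of asserting on hand-picked inputs, assert that an invariant holds across many generated ones. A bare-bones sketch in plain Java, where Codec is a hypothetical encode/decode pair standing in for any round-trippable component:

```java
import java.util.Random;

public class RoundTripPropertyTest {
    // Hypothetical component under test: any encode/decode pair will do.
    interface Codec {
        byte[] encode(long value);
        long decode(byte[] bytes);
    }

    // Property: decode(encode(x)) == x, checked over many random inputs
    // rather than a handful of hand-written examples.
    static void checkRoundTrip(Codec codec) {
        Random random = new Random(42); // fixed seed keeps failures reproducible
        for (int i = 0; i < 100_000; i++) {
            long value = random.nextLong();
            long roundTripped = codec.decode(codec.encode(value));
            if (roundTripped != value) {
                throw new AssertionError("Round-trip failed for " + value);
            }
        }
    }
}
```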
Open Source Libraries
The full range of projects LMAX Exchange has open sourced is available from the LMAX Exchange organisation on GitHub. Listed below are the testing-related ones.
- Simple-DSL
  The re-usable part of our DSL, handling the parsing and validation of parameters (a usage sketch follows this list).
- elementspec
  A tiny library that makes Selenium/WebDriver selectors much nicer.
- nanofix
  A testing library for FIX, designed to make it easy to generate FIX messages that are invalid in specific ways.
- parallel-junit
  Runs JUnit tests in parallel within Ant. No longer used at LMAX since we migrated to Buck, but still very useful for anyone using Ant.
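A usage sketch of Simple-DSL, written from memory of its README; the exact class and method names (DslParams, RequiredParam, OptionalParam, valueAsInt) may differ between versions:

```java
import com.lmax.simpledsl.DslParams;
import com.lmax.simpledsl.OptionalParam;
import com.lmax.simpledsl.RequiredParam;

public class TradingDsl {
    // A DSL method takes free-form "name: value" strings from the test...
    public void placeOrder(String... args) {
        // ...and Simple-DSL parses and validates them against declared params.
        DslParams params = new DslParams(args,
                new RequiredParam("user"),
                new RequiredParam("instrument"),
                new OptionalParam("price").setDefault("10"));

        // Hand the validated values on to whatever drives the real system.
        System.out.printf("placing order: user=%s instrument=%s price=%d%n",
                params.value("user"),
                params.value("instrument"),
                params.valueAsInt("price"));
    }
}
```

A test would then read as tradingDsl.placeOrder("user: bob", "instrument: FTSE100", "price: 12"), with alias machinery like the sketch in The DSL section resolving "bob" to a unique real user.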