In this paper, we present the testing approach of the Genesys code generator framework. The approach is based on back-to-back testing, which tests the translation performed by a code generator from a semantic perspective rather than merely checking the syntactic correctness of the generated result. We describe the basic testing framework and show that it scales along three dimensions: parameterized tests, testing across multiple target platforms, and testing on multiple meta-levels.
In particular, the latter is possible only because Genesys code generators are themselves constructed as models. Furthermore, for the sake of simplicity, Genesys consistently employs a single notation for all artifacts involved in this testing approach: test data, test cases, the code generators under test, and even the testing framework itself are all modeled in the same graphical modeling language.
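To illustrate the core idea of back-to-back testing in isolation, the following minimal sketch compares a reference implementation against a stand-in for generator output on the same inputs, passing only when their observable behavior agrees. All names here are hypothetical; Genesys itself is model-based, and its actual test harness is not reproduced here.

```python
# Minimal sketch of back-to-back testing (hypothetical names; not the
# actual Genesys framework, which is expressed as graphical models).

def reference_behavior(xs):
    """Reference implementation defining the expected semantics."""
    return sorted(xs)

def generated_behavior(xs):
    """Stand-in for code produced by the generator under test.
    It takes a different computational path but must agree semantically."""
    result = list(xs)
    result.sort()
    return result

def back_to_back_test(inputs):
    """Run both implementations on identical inputs and compare results.
    The test passes only if their outputs coincide for every input."""
    for data in inputs:
        expected = reference_behavior(data)
        actual = generated_behavior(data)
        if expected != actual:
            return False
    return True

print(back_to_back_test([[3, 1, 2], [], [5, 5, 0]]))  # → True
```

The key point, mirrored in the paper's approach, is that correctness is judged by comparing behavior against a trusted reference rather than by inspecting the syntactic form of the generated code.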