CAUT (C Analysis and Unit Testing)

Software testing is the most commonly adopted technique to ensure software quality. A program is usually composed of units, where each unit contains a number of functions. Unit testing aims to check the functional correctness of each unit independently.

CAUT is a DSE (dynamic symbolic execution)-based tool that automatically generates test data for C programs at the unit testing level. It is built on CIL and uses lp_solve and Z3 as the underlying constraint solvers. It supports constraint reasoning on both scalar types and derived types, i.e., structures, arrays and pointers (equality/inequality constraints). It currently supports coverage-driven testing for the branch and MC/DC criteria, and it can easily be extended to support other logical coverage criteria, such as condition, condition/decision or multiple condition coverage. So far, several path exploration strategies have been implemented in CAUT.
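To give a flavor of the constraints CAUT must reason about, consider the following hypothetical unit (purely illustrative, not taken from CAUT itself): covering the innermost branch requires solving a pointer (in)equality constraint together with a scalar constraint over a structure field.

/* Hypothetical unit under test: covering the "return 1" branch requires a
   non-null p, the pointer equality p == q, and the scalar constraint
   p->len > n + 10 to hold simultaneously. */
struct buf { int len; char *data; };

int check(struct buf *p, struct buf *q, int n) {
    if (p != 0 && p == q) {        /* pointer (in)equality constraints */
        if (p->len > n + 10)       /* scalar constraint over a structure field */
            return 1;              /* target branch */
    }
    return 0;
}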

CAUT has only been tested on 32-bit Ubuntu Linux. You can download it from our GitHub (updated on 04/01/2014), and a simple video demo is here.

update 2013.5: we have implemented several coverage-based white-box testing techniques for C programs at the unit testing level.

update 2014.3: our paper on coverage-driven testing has been accepted at SERE 2014! (see References for details)

update 2014.4: we have extended CAUT from the unit testing level to the whole-program testing level.

update 2014.9: we have implemented a data flow testing framework (in particular, for the all def-use coverage criterion) on top of our DSE engine, and combined it with the counterexample-guided abstraction refinement (CEGAR)-based model checking technique to further improve testing performance. This combined data flow testing approach exploits the strengths of both techniques.

update 2014.12: our paper on “Combining Symbolic Execution and Model Checking for Data Flow Testing” was accepted by ICSE 2015! (see References for details)

update 2015.4: we conducted a detailed survey on data flow testing, and the technical report is ready! (see References for details)

future update: we will make our tool open source soon!

Input/Output

CAUT takes a C unit/program under test and a target coverage criterion (statement, branch, MC/DC, or all def-use coverage) as input, and outputs coverage-driven test data. For more details, please refer to the papers in the References section.
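As a small, purely illustrative example (the function below is hypothetical and not shipped with CAUT), a branch-coverage-driven run on the following unit needs two test inputs, one per branch; any concrete values satisfying the corresponding path constraints would do.

int abs_val(int x) {
    if (x < 0)       /* true branch: covered by, e.g., x = -1 */
        return -x;
    return x;        /* false branch: covered by, e.g., x = 0 */
}
/* Possible generated test data: { x = -1 }, { x = 0 } */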

Search Strategies

CAUT implements a bundle of path exploration strategies so that their effectiveness in coverage-driven testing can be evaluated.

1. CREST cfg-guided search strategy

Brief: Drive the execution down the branch with the minimal distance to uncovered goals, measured over static CFG paths. It also adopts a branch heuristic to backtrack or restart the current search under certain failing circumstances. It is essentially a locally optimal search, as it focuses on each recently covered branch and never explicitly revisits previous paths.
For details: here
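A minimal sketch of the underlying distance computation, assuming a hypothetical adjacency representation of the CFG (this is not CREST's actual code): a backward breadth-first pass assigns every node its shortest static distance to any still-uncovered goal, and at a branching point the execution then follows the successor with the smaller distance.

#include <limits.h>

#define MAX_NODES 1024
#define INF INT_MAX

/* Hypothetical CFG representation: preds[v] lists the predecessors of node v. */
struct cfg {
    int n_nodes;
    int n_preds[MAX_NODES];
    int preds[MAX_NODES][8];
};

/* dist[v] = length of the shortest static CFG path from v to an uncovered goal. */
void min_dist_to_uncovered(const struct cfg *g, const int *uncovered, int *dist) {
    int queue[MAX_NODES], head = 0, tail = 0;
    for (int v = 0; v < g->n_nodes; v++) {
        dist[v] = uncovered[v] ? 0 : INF;
        if (uncovered[v]) queue[tail++] = v;   /* start BFS from all uncovered goals */
    }
    while (head < tail) {                      /* backward BFS over predecessor edges */
        int v = queue[head++];
        for (int i = 0; i < g->n_preds[v]; i++) {
            int p = g->preds[v][i];
            if (dist[p] == INF) {
                dist[p] = dist[v] + 1;
                queue[tail++] = p;
            }
        }
    }
}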

2. KLEE rp-md2u search strategy

Brief: Interleave the Random Path strategy with the Min-Dist-to-Uncovered heuristic. The Random Path strategy is essentially a probabilistic version of breadth-first search: it weights a path candidate of length l by 2^-l and randomly chooses among candidates of the same length. The Min-Dist-to-Uncovered heuristic prefers the path candidate with the minimal distance to uncovered goals in the CFG.
For details: here
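The 2^-l weighting can be realized without explicit weights: a random walk from the root of the execution tree that picks each branch with probability 1/2 reaches a pending leaf at depth l with probability 2^-l (when every internal node on the way has two children). The sketch below illustrates this selection over a hypothetical execution-tree node type.

#include <stdlib.h>

/* Hypothetical execution-tree node: internal nodes have up to two children,
 * leaves are pending path candidates waiting to be explored. */
struct tree_node {
    struct tree_node *left, *right;   /* both NULL for a pending leaf */
    void *pending_state;              /* symbolic state at a leaf */
};

/* Random Path selection: walk down from the root, choosing each branch with
 * probability 1/2, so a leaf at depth l is picked with probability 2^-l
 * whenever both subtrees are present along the way. */
struct tree_node *random_path_select(struct tree_node *root) {
    struct tree_node *cur = root;
    while (cur->left || cur->right) {
        if (!cur->left)       cur = cur->right;
        else if (!cur->right) cur = cur->left;
        else                  cur = (rand() & 1) ? cur->left : cur->right;
    }
    return cur;   /* pending leaf = next path candidate to explore */
}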

3. Predictive path search strategy

Brief: Drive the path exploration towards the parts of the code that are locally predicted to contain denser coverage goals, i.e., give higher priority to those path candidates that tend to cover more coverage goals at one blow (collateral coverage). This prediction is obtained from a designated coverage structure.
For details: refer to our technical report
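As a rough illustration (hypothetical code, not the tool's implementation), each pending path candidate can carry an estimate of how many uncovered goals it would collaterally cover, obtained from the coverage structure, and the worklist is ordered by that estimate.

#include <stdlib.h>

/* Hypothetical path candidate with a predicted number of uncovered goals
 * it would cover "at one blow" (collateral coverage). */
struct candidate {
    void *symbolic_state;
    int   predicted_goals;   /* estimate taken from the coverage structure */
};

/* Order candidates so the one predicted to cover the most goals is tried first. */
static int by_predicted_goals(const void *a, const void *b) {
    const struct candidate *ca = a, *cb = b;
    return (cb->predicted_goals > ca->predicted_goals)
         - (cb->predicted_goals < ca->predicted_goals);   /* descending */
}

void prioritize(struct candidate *worklist, size_t n) {
    qsort(worklist, n, sizeof *worklist, by_predicted_goals);
}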

4. Other search strategies

DFS (depth-first search, see the sketch after this list)
BFS (breadth-first search)
LSF (local shortest first)
LLF (local longest first)
RANDOM (uniform random path search)
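These exhaustive strategies differ mainly in how the worklist of pending path candidates is ordered. The following sketch (illustrative only, using a hypothetical worklist type) contrasts DFS and BFS as stack-style versus queue-style selection from the same worklist.

#include <stddef.h>

/* Hypothetical worklist of pending path candidates, kept in insertion order. */
struct worklist {
    void  *items[4096];
    size_t count;
};

/* DFS: take the most recently added candidate (stack behaviour). */
void *select_dfs(struct worklist *w) {
    return w->count ? w->items[--w->count] : NULL;
}

/* BFS: take the oldest candidate (queue behaviour). */
void *select_bfs(struct worklist *w) {
    if (!w->count) return NULL;
    void *first = w->items[0];
    for (size_t i = 1; i < w->count; i++)   /* shift; fine for a sketch */
        w->items[i - 1] = w->items[i];
    w->count--;
    return first;
}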

5. Fitness-guided search strategy

For details: Fitness-Guided Path Exploration in Dynamic Symbolic Execution. Xie et al.

Bug report

If you find any bugs in CAUT or have any questions about it, please do not hesitate to contact me. Let's make it better!
Email: tsuletgo@gmail.com
Ting Su, here is my CV (updated on 8/9/2013).
Department: Software Engineering Institute, East China Normal University
Address: No. 3663, North Zhongshan Road, Shanghai, China

References

[1]. Ting Su, Siyuan Jiang, Geguang Pu, Bin Fang, Jifeng He, Jun Yan, Jianjun Zhao. Coverage-Driven Test Data Generation Using Dynamic Symbolic Execution. Technical Report. 2013 (revised on 8/9/2013) pdf
[2]. Xiao Yu, Shuai Sun, Geguang Pu, Siyuan Jiang and Zheng Wang, A Parallel Approach to Concolic Testing with Low-cost Synchronization, In Proc. the 4th International Workshop on Harnessing Theories for Tool Support in Software, Shanghai, China, November 15, 2010. pdf
[3]. Zheng Wang, Xiao Yu, Tao Sun, Geguang Pu, Zuohua Ding and Jueliang Hu, Test Data Generation for Derived Types in C Program, In Proc. the 2009 Third IEEE International Symposium on Theoretical Aspects of Software Engineering, Tianjin, China, July 29-31, 2009. pdf

Thanks

I appreciate the following smart people who have made great contributions to this project:
Geguang Pu, Zheng Wang, Siyuan Jiang, Xiao Yu, Zhaotan Fang, Bo Li, Sihang Shao, Mengdi Wang, Jingyi Yang, Jiazheng Yu.
