In Chinese semantic sentence matching, existing models typically use a single architecture to both distinguish semantic differences and extract interaction information. However, this not only introduces substantial redundant information but also makes the model heavier and more complicated. To alleviate this problem, this paper presents SNMA, a deep architecture in which the comparison and interaction modules are separated. SNMA uses a Siamese network to extract contextual information and employs a multi-head attention mechanism to extract interaction information from sentence pairs separately. Experimental results on four recent Chinese sentence matching datasets demonstrate the effectiveness of our approach.
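The separation described above can be sketched in a minimal NumPy example. This is an illustrative toy, not the authors' implementation: the shared Siamese encoder is reduced to a single shared projection, the interaction module is a from-scratch multi-head cross-attention, and all dimensions (embedding size 8, 2 heads, sentence lengths 5 and 7) are hypothetical.

```python
import numpy as np

def multi_head_attention(Q, K, V, num_heads):
    """Multi-head scaled dot-product attention over (len, dim) matrices."""
    d = Q.shape[-1]
    dh = d // num_heads  # per-head dimension
    outs = []
    for h in range(num_heads):
        q = Q[:, h * dh:(h + 1) * dh]
        k = K[:, h * dh:(h + 1) * dh]
        v = V[:, h * dh:(h + 1) * dh]
        scores = q @ k.T / np.sqrt(dh)
        # softmax over the key axis
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        outs.append(w @ v)
    return np.concatenate(outs, axis=-1)

def siamese_encode(x, W):
    """Shared-weight encoder: both sentences pass through the same W."""
    return np.tanh(x @ W)

rng = np.random.default_rng(0)
d, heads = 8, 2
W = rng.normal(size=(d, d))        # one set of weights, shared (Siamese)
s1 = rng.normal(size=(5, d))       # sentence 1: 5 token embeddings
s2 = rng.normal(size=(7, d))       # sentence 2: 7 token embeddings

# comparison module: context encoding with shared weights
h1, h2 = siamese_encode(s1, W), siamese_encode(s2, W)

# interaction module: each sentence attends over the other separately
inter_1 = multi_head_attention(h1, h2, h2, heads)  # shape (5, 8)
inter_2 = multi_head_attention(h2, h1, h1, heads)  # shape (7, 8)
```

Keeping the two modules separate means the context encoder and the cross-attention can be sized and tuned independently, rather than one oversized network doing both jobs.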