Currently, we rely on hard-coded grammars that do not cover the targeted grammar specifications completely. One approach to improving this is a learning tokenizer that evolves at runtime, picking up new tokens from the inputs it sees. This could take the form of a complete rewrite of the existing grammar mutator or an additional extension alongside it.
Ref: https://www.usenix.org/system/files/sec21-salls.pdf
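
As a rough illustration of the idea (not the referenced paper's implementation), the sketch below shows a tokenizer that accumulates tokens at runtime and mutates inputs at token granularity instead of byte granularity. The lexical pattern, class name, and mutation strategy are all hypothetical choices for the sketch.

```python
import random
import re


class LearningTokenizer:
    """Sketch: a tokenizer whose token set grows as it observes inputs.

    Hypothetical design - inputs are split by a simple lexical pattern,
    every observed token is remembered, and mutation replaces whole
    tokens with previously learned ones rather than flipping bytes.
    """

    # Identifiers, integer literals, whitespace runs, or any single byte.
    TOKEN_RE = re.compile(rb"[A-Za-z_]\w*|\d+|\s+|.", re.DOTALL)

    def __init__(self) -> None:
        self.tokens: set[bytes] = set()

    def learn(self, data: bytes) -> list[bytes]:
        # Split the input into tokens and record any new ones.
        toks = self.TOKEN_RE.findall(data)
        self.tokens.update(toks)
        return toks

    def mutate(self, data: bytes, rng: random.Random) -> bytes:
        # Token-level mutation: swap one token for a learned token.
        toks = self.learn(data)
        if toks and self.tokens:
            i = rng.randrange(len(toks))
            toks[i] = rng.choice(sorted(self.tokens))
        return b"".join(toks)
```

Because the token set is learned rather than hard-coded, coverage of the target grammar can improve as the corpus grows, which is the gap this note describes.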