I found this super cool new approach from Meta called just-in-time (JIT) testing! Basically, instead of relying on pre-built test suites like we're used to seeing (your typical `test_suite.py`), they generate tests during code review.
This isn't a one-size-fits-all solution; it's more dynamic and context-aware, aimed at AI-assisted development with LLMs or similar tools. I'm curious, has anyone tried this out in their projects? Did you see anything like the 18% lift they claimed?
I've heard good things about Dodgy Diff, which seems to be a part of the JIT system. Does it catch bugs as well as traditional methods? I'm definitely going to give this a try in my next dev cycle!
Found this here:
https://www.infoq.com/news/2026/04/meta-jit-testing-ai-detection/