sometimes it feels like, after all these years of protecting our data with atomic commits and distributed transactions, you'd think AI agents would have a basic rollback feature by now.
like imagine this: an agent starts a task, then hits a roadblock midway through its loop [1]. instead of blindly starting over (which can land it right back in the same problem), why not let it roll back to the last known-good state? like hitting "undo" on your computer.
i mean, come on, we have the tech. surely there's a way to implement this without making everything brittle and failure-prone, right?
[1] think of an agent trying to learn from its mistakes but getting stuck in infinite loops or repeating the same bad decisions
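for what it's worth, here's a minimal sketch of what that "undo" could look like: snapshot the agent's state before each step, and restore the snapshot when the step fails instead of restarting the whole run. this assumes state is a plain dict and each step is a function; `run_with_rollback` and the retry policy are hypothetical, not taken from the linked article.

```python
import copy


def run_with_rollback(steps, initial_state, max_retries=2):
    """Run agent steps with per-step checkpoints (a hypothetical sketch).

    Before each step we take a deep-copy snapshot of the state; if the
    step raises, we roll the state back to that snapshot and retry,
    rather than restarting the whole loop from scratch.
    """
    state = initial_state
    for step in steps:
        snapshot = copy.deepcopy(state)  # the "undo" point for this step
        for _ in range(max_retries + 1):
            try:
                state = step(state)
                break  # step committed; move to the next one
            except Exception:
                # discard any half-done mutations and restore the checkpoint
                state = copy.deepcopy(snapshot)
        else:
            raise RuntimeError(f"step {step.__name__} failed after all rollbacks")
    return state
```

a quick usage example with a flaky step that corrupts state on its first attempt: the rollback throws away the junk it wrote, and the retry succeeds with clean state.

```python
calls = {"n": 0}

def flaky_step(state):
    calls["n"] += 1
    if calls["n"] < 2:
        state["notes"].append("junk")  # partial work that must be undone
        raise ValueError("transient failure")
    state["notes"].append("ok")
    return state

result = run_with_rollback([flaky_step], {"notes": []})
# result["notes"] is ["ok"] — the "junk" write was rolled back
```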
article:
https://dzone.com/articles/transactional-boundaries-agentic-loops