When we were getting started with Agile methods, a lot of energy went into working out the definition of done. We followed the debates on whether it is something the team decides or something the product owner decides, and had our own share of discussions.
At first, it was not easy to even include testing in the definition of done, at least not all the kinds of testing that were actually needed. Eventually that passed, and the lesson was learned: if it is not tested (and ready to be published), it is not actually done. The value is not available "from concept to cash", as lean thinking puts it.
I still feel that the definition of done, especially the testing part, is quite a complex exercise. Testing is an endless task. At some point, however, it stops providing value and should be deliberately stopped.
This is a typical approach in "traditional testing" with a risk-based test management focus. So what I tried introducing is a practice of "risk-based test management for the definition of done". Essentially, this is a practice of discussing what "testing" in the definition of done should mean for each product backlog item, by understanding the acceptable level of risk for that item.
"Testing" in the definition of done is not just one. Some changes can be quite safely tested mostly on unit level. Some changes can quite safely be tested with automation. Some changes need extensive exploratory testing.
Similarly "acceptable risk" is not the same for all product backlog items. Some items end up being very visible and commonly used features. Some items are for fewer users, but perhaps more important as customers. Some items are tick box features for sales purposes. You would look at acceptable risk very differently on each of these. Risk-avoidance through added testing adds costs. While velocity may remain similar (when the sizes are visible in the product backlog items), the value experience by users for the same velocity would not be.