Benchmark against best champion model

In champion/challenger model development, you typically have a stable, deployed champion model against which you can benchmark any future results. This lets you discard any challenger whose results fall far below that benchmark. If you do not have a current champion, you can build a theoretical best model with an algorithm such as random forests or SVM, and use it as a benchmark for what is achievable but (you assume) not actionable. Compare the results of the model you have just developed against this benchmark: if they come close to it, they are probably not attainable in practice, and you might want to look elsewhere.
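A minimal sketch of this benchmarking check, assuming scikit-learn, a classification task, and cross-validated accuracy as the metric. The champion score, the margin thresholds, and the choice of random forest as the "theoretical best" model are illustrative assumptions, not prescribed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data; in practice use your own training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# "Theoretical best" benchmark: a strong but possibly non-actionable model
# (here a random forest) marking the upper end of what seems achievable.
best_benchmark = cross_val_score(
    RandomForestClassifier(n_estimators=300, random_state=0), X, y, cv=5
).mean()

# Score of the model you have just developed (a simple stand-in here).
candidate_score = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, cv=5
).mean()

# Deployed champion score; in practice this comes from production monitoring.
champion_score = 0.85  # assumed value for illustration

if candidate_score < champion_score - 0.05:        # far below the champion
    print("Discard: candidate falls well short of the champion benchmark.")
elif candidate_score >= best_benchmark - 0.01:     # suspiciously near the ceiling
    print("Investigate: results this close to the theoretical best are "
          "probably not attainable in practice.")
else:
    print("Plausible: candidate sits between the champion and the ceiling.")

print(f"champion={champion_score:.3f}, candidate={candidate_score:.3f}, "
      f"ceiling={best_benchmark:.3f}")
```

The margins (0.05 below the champion, 0.01 below the ceiling) are placeholders; set them from the metric's typical variance across validation folds rather than fixed constants.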