As I know from my Computer Science lectures, these considerations are already implemented in several libraries and frameworks. For example, scikit-learn's support vector machines can be configured with a kernel, which determines the type of function used to approximate the given data points: linear, polynomial, RBF, and others. But every choice comes with trade-offs, too.
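As a minimal sketch of what this looks like in scikit-learn, the snippet below fits an `SVR` regressor with each of the kernel types mentioned (the toy sine data is a hypothetical example, not from the original text):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical toy data: noisy samples of a smooth 1-D function
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=40)

# The kernel parameter selects the function type used for approximation
models = {
    "linear": SVR(kernel="linear"),
    "poly": SVR(kernel="poly", degree=3),
    "rbf": SVR(kernel="rbf", gamma="scale"),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "train R^2:", round(model.score(X, y), 3))
```

Comparing the training scores already hints at how differently each kernel shapes the fitted function on the same data.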
A polynomial function may tend to overfit, since you can almost always find a polynomial of high enough degree that hits every given data point exactly. But what happens between the points, or at the beginning/end of the range, is often far from what you actually want. An example of overfitting can be found here:
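The effect described above can be demonstrated with plain NumPy: a degree-7 polynomial through 8 points matches every sample, yet behaves poorly just outside the sampled range (the data here is again a hypothetical example):

```python
import numpy as np

# Hypothetical data: 8 noisy samples of a smooth underlying function
rng = np.random.RandomState(1)
x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=8)

# A degree-7 polynomial through 8 points hits every sample (almost) exactly...
coeffs = np.polyfit(x, y, deg=7)
residuals = np.polyval(coeffs, x) - y
print("max error at the samples:", np.abs(residuals).max())

# ...but can swing far outside the data just beyond the sampled range
grid = np.linspace(-0.2, 1.2, 200)
print("max |value| around the data range:", np.abs(np.polyval(coeffs, grid)).max())
```

The near-zero residuals at the samples look like a perfect fit, while the second number shows the interpolant escaping the range of the data as soon as you step outside it.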
Nevertheless, it is always good to have several modeling options available and to find out which one fits your data best.