
A failed parallel between water sprouts and AI strategies

Find a tree. I mean a real-life, biological tree, and a mature one at that. Can you see any silly young branches that either shoot straight up toward the sky or seem to explode sideways from the trunk in a crazy bunch? You can also approach the search from an aesthetic perspective: you’re looking for ugly, inharmonious young branches going all over the place.

If you can see any of those, you have probably found water sprouts, and if so, you’re looking at an unhappy tree. Water sprouts are the tree’s middle age made manifest: I must have a plan B, hence I shall explore new strategies! Joking aside, water sprouts really are a sign of old age, bad pruning, damaged bark, or, as is the case for my own plum tree (which triggered this blog post), all of the above.

Then again, I’m not an arborist. My entire train of thought was triggered by a false connection between my plum tree’s water sprouts and the wonderful AI strategy against overfitting. I only recently internalized overfitting myself: it’s the AI equivalent of brittle programming. That is, your code (AI model) is so tightly constrained by the original functional requirements (training data) that it fails in spectacular fashion when presented with even the slightest variation from your original expectations (excellent results on the training data, horrendous aberrations on the test data).
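Here’s a toy illustration of that brittleness, a sketch assuming NumPy; the degree-7 polynomial and the noisy sine curve are my own illustrative stand-ins for a model and its training data:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, size=8)

# A degree-7 polynomial has just enough freedom to pass through all
# 8 training points: excellent results for training data...
coeffs = np.polyfit(x_train, y_train, deg=7)
train_err = np.abs(np.polyval(coeffs, x_train) - y_train).max()

# ...but between those points it wiggles far away from the underlying
# curve: horrendous aberrations for test data.
x_test = np.linspace(0.0, 1.0, 101)
test_err = np.abs(np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)).max()

print(f"max train error: {train_err:.1e}   max test error: {test_err:.2f}")
```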

But how do you guard against overfitting in AI? Well, in pretty much the same way you guard against brittleness in software development (or in a healthy human mind): you avoid dogma and allow for flexibility instead. Specifically, you force the model to learn less than it could at each training step. Counter-intuitive as that might be, this strategy allows it to alternate between learning dogma and exploring new avenues for learning.
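One concrete regularizer in this spirit is dropout, which throws away a random slice of what a layer computes on every training step. The post doesn’t name a specific technique, so take this minimal NumPy sketch as just one possible embodiment of “learning less than it could”:

```python
import numpy as np

def dropout(activations, rate=0.5, training=True, seed=None):
    """Inverted dropout: during training, randomly zero a fraction of a
    layer's activations so the network can't over-rely on any one of them."""
    if not training or rate == 0.0:
        return activations
    rng = np.random.default_rng(seed)
    keep = rng.random(activations.shape) >= rate
    # Scale the survivors so the expected activation stays the same,
    # which lets inference run with dropout switched off entirely.
    return activations * keep / (1.0 - rate)

layer_output = np.array([0.3, 1.2, -0.7, 0.9])
print(dropout(layer_output, rate=0.5, seed=42))  # roughly half zeroed, rest doubled
```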

Because this anti-overfitting strategy ends up alternating between reinforcing previously learned lessons and deliberately ignoring them in favor of a child-like, naive examination of the data, it is known as the exploration/exploitation strategy: exploit the lessons already learned by reinforcing them, but make sure to also explore new learning avenues.
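The textbook embodiment of this trade-off, in reinforcement learning, is the epsilon-greedy rule, sketched below; the value estimates and the 0.1 exploration rate are illustrative assumptions, not anything from the post:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon, explore: pick a random action and ignore
    the lessons learned so far. Otherwise exploit: pick the action with
    the best value estimate, reinforcing what is already known."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit

q = [0.2, 0.5, 0.1]        # current value estimates for three actions
print(epsilon_greedy(q))   # usually 1, occasionally a random index
```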

Well, when I noticed how my plum tree shot some branches straight up toward the sky, I remembered exploration/exploitation, and I thought I had found a wonderful parallel between biology and AI strategies: behold, a tree applying an AI strategy! Then I learned about water sprouts and realized I just happen to own an unkempt old tree, which has nothing whatsoever to do with AI or anything else, really.

Overfitting happens in meat brains, too.
