So how does machine learning work? You’ve probably read an explainer or two on the subject, but often the best way to understand a thing is to try it for yourself. With that in mind, take a look at this little in-browser experiment from Google called Teachable Machine. It’s a perfect two-minute overview of what a lot of modern AI (artificial intelligence) can, and more importantly can’t, do. Teachable Machine lets you use your webcam to train an extremely basic AI program. Just hit the “train green/purple/orange” buttons, and the machine will record whatever it can see through your webcam. Once it’s “learned” enough, it’ll output whatever you like (a GIF, a sound effect, or some speech) when it sees the object or activity you trained it on. I trained it to recognize my houseplants and respond with relevant GIFs, but others have used it to make their hands go moo or play air guitar on command.
It’s pretty fun, but it also demonstrates some essential aspects of machine learning. First, these programs learn by example: they look, they find patterns, and they memorize them. Second, they need a lot of examples to learn from. And third, and most importantly, their understanding of the world is shallow and easily broken.
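The “look, find patterns, memorize them” idea can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (not how Teachable Machine is actually implemented): a nearest-neighbor classifier that simply memorizes labeled pixel vectors and labels new input by whichever stored examples it most resembles. The function names and toy data are my own inventions for the example.

```python
from collections import Counter

def distance(a, b):
    # Squared Euclidean distance between two equal-length pixel vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(examples):
    # "Training" here is literally just memorizing the labeled examples.
    return list(examples)

def classify(model, sample, k=3):
    # Find the k memorized examples closest to the new sample,
    # then return the majority label among them.
    nearest = sorted(model, key=lambda ex: distance(ex[0], sample))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Toy "images": three-pixel brightness vectors with labels.
model = train([
    ([0, 0, 0], "dark"),
    ([1, 0, 0], "dark"),
    ([9, 9, 9], "bright"),
    ([8, 9, 8], "bright"),
])
print(classify(model, [1, 1, 0]))   # resembles the "dark" examples
```

Notice what this kind of program never does: it never forms any concept of what a plant (or anything else) *is*. It only compares pixels to pixels, which is exactly why its “understanding” is so easy to break.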
Earlier, for instance, I said that I “taught” the machine to recognize my houseplants. The truth is that I only trained it to recognize a vaguely green and fuzzy array of pixels. It has never looked at my asparagus fern and thought (like I do): “Ah, this needs to be kept out of direct sunlight and watered semi-frequently. I wonder why millennials are drawn to houseplants in the first place? I’ve heard it’s because they can’t afford houses, but also, hashtag urban jungle, I guess.” All the machine knows is the pixels it can see, and any extra information has to be programmed in.
All this is worth remembering the next time you’re reading about machine learning or artificial intelligence. Yes, the field has made enormous strides in recent years, but as we’re discovering more and more, the algorithms being produced are nowhere near as clever as we’d like them to be. In other words, they’re still learning.