"Black box" AI, feature or bug?
In 'old school' symbolic AI, the system reasoned using rules and structures 'hand-coded' by humans, often derived from protocols in which human experts recorded their reasoning about a problem. In the current regime of machine learning, the system is programmed with a general strategy for learning from examples. Just how it learns to classify those examples, its internal strategies, that's NOT programmed, and it's not easy to open the system up and examine those strategies. Hence it's sometimes referred to as 'black box' AI: we know what goes in and we know what comes out, but what happens in between is largely opaque.
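To make the contrast concrete, here is a minimal sketch in Python. The spam-filter rules, the toy messages, and all the function names are invented for illustration; the point is only the difference in kind. The symbolic filter's reasoning is right there in the code. The learned classifier uses a strategy we did program (logistic regression trained by gradient descent), but what it actually learns ends up as a bag of numbers.

```python
import math

# 'Old school' symbolic AI: the rules are hand-coded and human-readable.
def symbolic_spam_filter(message):
    """Classify using rules an expert wrote down; the reasoning is inspectable."""
    suspicious = {"winner", "free", "urgent"}          # hypothetical expert rules
    return bool(set(message.lower().split()) & suspicious)

# Machine learning: we code the *learning strategy* (here, logistic
# regression via gradient descent), not the classification rules themselves.
def train(examples, labels, vocab, lr=0.5, epochs=500):
    weights = {w: 0.0 for w in vocab}
    bias = 0.0
    for _ in range(epochs):
        for text, label in zip(examples, labels):
            words = set(text.lower().split())
            z = bias + sum(weights[w] for w in vocab if w in words)
            p = 1.0 / (1.0 + math.exp(-z))             # sigmoid: P(spam)
            err = p - label
            for w in vocab:
                if w in words:                         # gradient step per feature
                    weights[w] -= lr * err
            bias -= lr * err
    return weights, bias

# Toy training data, invented for the sketch: 1 = spam, 0 = not spam.
examples = ["free money winner", "meeting at noon", "urgent free offer", "lunch tomorrow"]
labels   = [1, 0, 1, 0]
vocab    = sorted({w for ex in examples for w in ex.lower().split()})

weights, bias = train(examples, labels, vocab)
# 'Opening up' the trained system yields numbers, not a readable rule:
print({w: round(v, 2) for w, v in weights.items()})
```

For a toy model like this the weights are still small enough to eyeball, but the character of the thing is already different: nobody wrote those numbers down, the training loop found them. Scale that up to a deep network with millions or billions of parameters and you have the 'black box': the same bag-of-numbers opacity, far past the point where inspection helps.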