I tend to think that what this article is asking for isn't achievable, because what people mean by "AI" is precisely "we don't know how it works".
An analogy I've sometimes used when talking with people about AI is the "I know a guy" situation. Someone you know comes to you and says "I know a guy who can do X for you", where "do X" might be "write your class paper", "book a flight", "describe what a supernova is", or "invest your life savings". The more important the task, the more you would probably want to know about this "guy". What are his credentials? Has he done this before? How often has he failed? What were the consequences? Can he be trusted? Etc.
The thing that "a guy" and an AI have in common is that you don't know what they're doing. Where they differ is in your ability to gradually gain that knowledge. In real life, "know a guy" situations turn into something more specific as you learn who the person is and how they do what they do, and especially as you come to understand the system of consequences in which they are embedded (e.g., "if this painter had ruined many people's houses, he would have been sued into oblivion, or at least I would have heard about it"). Real people are also unavoidably embedded in physical reality, which imposes constraints that bound plausibility (e.g., if someone tells you "I know a guy who can paint your entire house in five seconds", you will smell a rat).
Asking for "reliability" means asking for a network of causes and effects that surrounds and supports whatever "guy" or AI you're relying on. At this point I don't see any mechanism for providing that other than social and, ultimately, legal pressure, and I don't see any strong action being taken in that direction.