You generally can't attribute the accuracy of an ML system to individual pieces of data in the training set. Each batch of examples nudges the model slightly, and these updates interact and compound over the course of training, so it becomes extremely difficult to assign credit to any single example. Of course you could retrain the model with one example left out, but that would be exceedingly slow, and a single run would be inconclusive because the stochastic noise of the training process is larger than the effect of removing or adding one example.
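As a rough illustration of that last point, here is a toy sketch (the dataset, model and hyperparameters are all hypothetical choices, not anything from a real system): retrain a small scikit-learn classifier many times with different seeds, once on the full training set and once with a single example dropped, and compare the per-example effect to the seed-to-seed noise.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Toy setup: a synthetic binary classification problem standing in for a real dataset.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def test_accuracy(X_tr, y_tr, seed):
    # Stochastic training: the seed controls example shuffling, so accuracy varies run to run.
    clf = SGDClassifier(random_state=seed, max_iter=20, tol=None)
    clf.fit(X_tr, y_tr)
    return clf.score(X_test, y_test)

seeds = range(20)
full = np.array([test_accuracy(X_train, y_train, s) for s in seeds])
# Leave-one-out: drop training example 0 and retrain with the same seeds.
loo = np.array([test_accuracy(X_train[1:], y_train[1:], s) for s in seeds])

print(f"seed-to-seed std of full-data accuracy:        {full.std():.4f}")
print(f"mean accuracy change from dropping one example: {(loo - full).mean():.4f}")
```

On a toy problem like this the run-to-run standard deviation typically dwarfs the mean effect of dropping one example, which is exactly why a single leave-one-out retraining tells you very little.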
Related areas are confidence calibration, active learning and hard-example detection during training. Another approach is to synthesise a new, much smaller dataset that trains a neural net to the same accuracy as the original, larger dataset.
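That last idea is often called dataset distillation. A minimal sketch of one way to do it is below (everything here is an illustrative assumption: a linear model on Gaussian blobs, a handful of synthetic points, plain SGD in the inner loop): treat the small synthetic dataset as learnable parameters, train a fresh model on it with a few differentiable gradient steps, and update the synthetic data so that the resulting model does well on the real data.

```python
import torch

torch.manual_seed(0)

# Toy "real" dataset: two Gaussian blobs, one per class.
def make_real_data(n=512, d=20):
    y = torch.randint(0, 2, (n,))
    x = torch.randn(n, d) + 2.0 * (y.float().unsqueeze(1) - 0.5)
    return x, y

x_real, y_real = make_real_data()
d, n_syn, inner_steps, inner_lr = x_real.shape[1], 10, 15, 0.5

# Learnable synthetic dataset: a few examples with fixed, balanced labels.
x_syn = torch.randn(n_syn, d, requires_grad=True)
y_syn = torch.arange(n_syn) % 2

def loss_fn(w, b, x, y):
    return torch.nn.functional.cross_entropy(x @ w + b, y)

opt = torch.optim.Adam([x_syn], lr=0.05)

for outer in range(200):
    # Inner loop: train a fresh linear model on the synthetic data with plain SGD,
    # keeping the updates differentiable with respect to x_syn.
    w = torch.zeros(d, 2, requires_grad=True)
    b = torch.zeros(2, requires_grad=True)
    for _ in range(inner_steps):
        inner_loss = loss_fn(w, b, x_syn, y_syn)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w, b = w - inner_lr * gw, b - inner_lr * gb
    # Outer loss: how well the synthetically-trained model fits the real data.
    outer_loss = loss_fn(w, b, x_real, y_real)
    opt.zero_grad()
    outer_loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((x_real @ w + b).argmax(1) == y_real).float().mean().item()
print(f"real-data accuracy of a model trained only on {n_syn} synthetic points: {acc:.2f}")
```

The point of the sketch is the structure, not the numbers: backpropagating through the inner training loop is what lets the synthetic examples absorb the information the model needs from the much larger original dataset.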