It would need to be more than that. The same prompt can produce different results on different models. Even with the same model, differences in inference setup, e.g. quantization, mean the quantized and unquantized versions can give different outputs for the same prompt.
One of several reasons to use an open model even if it isn't quite as good: you can version-control the models and commit the prompts along with the model name and a hash of the parameters. I'm not really sure what value that reproducibility adds, though.
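A rough sketch of what that commit could contain. All file names and record fields here are made up for illustration:

```python
import hashlib
import json

def sha256_file(path: str) -> str:
    """Stream the file through SHA-256 so large weight files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths and model name -- adjust to whatever you actually run.
record = {
    "model_name": "my-open-model-7b",
    "weights_sha256": sha256_file("weights/my-open-model-7b.safetensors"),
    "prompt": open("prompt.txt").read(),
}

# Commit this file next to the prompt so the prompt/model pairing lives in version control.
with open("prompt_record.json", "w") as f:
    json.dump(record, f, indent=2)
```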
To verify provenance you'd also need to hash the model weights and save the seed for the sampling PRNG, since anything run with temperature > 0 is stochastic. Ideally it would be fully reproducible, right?
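A minimal sketch of that verification side, extending the record from the sketch above. Field names are invented and no particular inference runtime's API is implied:

```python
import hashlib
import json

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record everything the sampler's output depends on, not just the prompt.
with open("prompt_record.json") as f:
    record = json.load(f)
record.update({
    "sampling_seed": 42,     # seed for the sampler's PRNG
    "temperature": 0.7,      # with temperature > 0, the seed is what makes reruns match
    "quantization": "none",  # inference-time treatment changes outputs too (see above)
})
with open("prompt_record.json", "w") as f:
    json.dump(record, f, indent=2)

# Replay-time check: refuse to run if the weights on disk don't match the committed hash.
if sha256_file("weights/my-open-model-7b.safetensors") != record["weights_sha256"]:
    raise ValueError("weights differ from the committed hash; outputs are not comparable")
```

Even with all of that pinned, bit-identical replays still depend on the runtime itself (kernel versions, batching, hardware nondeterminism), so the record is a necessary condition, not a sufficient one.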
Including prompts would create transparency, but it still wouldn't resolve the underlying copyright uncertainty around the output or guarantee the model wasn't trained on incompatibly-licensed material.