Quantum Generative Adversarial Networks (qGANs) are among the leading image-generating quantum machine learning models. As demand for Noisy Intermediate-Scale Quantum (NISQ) devices grows, more third-party vendors are expected to offer quantum hardware as a service, raising the risk of proprietary information theft. To mitigate this, we propose a novel watermarking technique that uses the noise signature embedded during qGAN training as a non-invasive watermark. This watermark is detectable in the generated images, traces the specific quantum hardware used for training, and provides strong proof of ownership. To enhance security, we further propose training qGANs on multiple quantum hardware devices, embedding a composite watermark that comprises the noise signatures of all training hardware and is therefore difficult for adversaries to replicate. A machine learning classifier extracts this watermark, identifying the training hardware from the generated images and thereby validating the authenticity of the model. The watermark remains robust even when inference is performed on hardware different from that used for training. We achieve watermark extraction accuracy of 100% for single-hardware and 90% for multi-hardware training setups, and a validation accuracy of 90% for both setups in the presence of temporal variation of the noise. Because quantum noise strongly modulates parameter evolution during training, this watermarking method can be extended to other quantum machine learning models.
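The extraction step described above (a classifier that attributes generated images to the training hardware via its noise signature) can be illustrated with a minimal, self-contained sketch. This is not the paper's actual pipeline: the per-hardware signatures, feature dimensions, and the nearest-centroid classifier below are all stand-in assumptions chosen only to show the attribution idea in runnable form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption for illustration: each "hardware" backend shifts the feature
# statistics of generated images by a fixed, hardware-specific bias vector.
# Real qGAN outputs carry a far subtler, learned noise signature.
N_HW, N_IMG, DIM = 3, 50, 16
signatures = rng.normal(0.0, 1.0, (N_HW, DIM))            # per-hardware bias
images = signatures[:, None, :] + rng.normal(0.0, 0.3, (N_HW, N_IMG, DIM))

X = images.reshape(-1, DIM)                                # image features
y = np.repeat(np.arange(N_HW), N_IMG)                      # true hardware labels

# Toy watermark extractor: learn the mean feature vector (centroid) per
# hardware, then attribute a generated image to the closest centroid.
centroids = np.array([X[y == k].mean(axis=0) for k in range(N_HW)])

def identify_hardware(img: np.ndarray) -> int:
    """Return the index of the training hardware whose signature best matches."""
    return int(np.argmin(np.linalg.norm(centroids - img, axis=1)))

preds = np.array([identify_hardware(x) for x in X])
accuracy = float((preds == y).mean())
print(f"watermark extraction accuracy: {accuracy:.2f}")
```

In this toy setting the signatures are well separated, so extraction accuracy is near perfect; the paper's reported 100% (single-hardware) and 90% (multi-hardware) figures come from its actual classifier on real qGAN outputs, not from this sketch.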