Quantum computing (QC) has the potential to revolutionize fields such as machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area that enhances learning algorithms using quantum computers. However, QML models are lucrative targets because of their high training costs and long training times, a challenge exacerbated by the scarcity of quantum resources and long queue times. Moreover, QML providers may rely on third-party quantum clouds to host their models, exposing the models and their training data to potential threats; as QML-as-a-Service (QMLaaS) becomes more prevalent, this reliance poses a significant security risk. This work demonstrates that an adversary in a quantum cloud environment can exploit white-box access to a QML model to infer the user's encoding scheme by analyzing circuit transpilation artifacts. The extracted information can be reused to train clone models or sold for profit. We validate the proposed attack through simulations and show that the encoding scheme can be predicted correctly ≈95% of the time. To mitigate this threat, we propose a transient obfuscation layer that masks encoding fingerprints using randomized rotations and entanglement, reducing adversarial detection to near-random chance (≈42%) at a depth overhead of ≈8.5% for a 5-layer QNN design.
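To make the defense concrete, the following is a minimal sketch of the obfuscation-layer idea described above: random single-qubit rotations followed by an entangling ring appended after the data-encoding block to mask its structural fingerprint. Qiskit is assumed purely for illustration (the paper does not prescribe a framework), and the helper `obfuscation_layer` is hypothetical; this is not the paper's exact construction.

```python
# Sketch (Qiskit assumed): a transient obfuscation layer of randomized
# rotations plus entanglement, composed after the encoding block.
import numpy as np
from qiskit import QuantumCircuit

def obfuscation_layer(num_qubits: int, rng: np.random.Generator) -> QuantumCircuit:
    """One layer of random Rz-Ry-Rz rotations followed by a CNOT ring."""
    layer = QuantumCircuit(num_qubits)
    # Randomized rotations: each qubit receives a random-angle rotation block.
    for q in range(num_qubits):
        a, b, c = rng.uniform(0, 2 * np.pi, size=3)
        layer.rz(a, q)
        layer.ry(b, q)
        layer.rz(c, q)
    # Entanglement: nearest-neighbour CNOT ring to scramble structural cues.
    for q in range(num_qubits):
        layer.cx(q, (q + 1) % num_qubits)
    return layer

# Example: mask a stand-in angle-encoding block before cloud submission.
rng = np.random.default_rng(seed=7)
n = 4
circuit = QuantumCircuit(n)
for q in range(n):                      # placeholder angle encoding of features
    circuit.ry(0.5 * (q + 1), q)
circuit.compose(obfuscation_layer(n, rng), inplace=True)
print(circuit.draw())
```

Because the layer is transient, the circuit owner would record the random angles so the layer can be stripped or inverted after execution; the adversary, lacking those angles, sees only the scrambled transpilation artifacts.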