The increasing reliance on Machine Learning (ML) for malware detection has enhanced security across various computing environments. However, these models remain vulnerable to adversarial manipulations such as code obfuscation, which alters program structure and execution patterns to evade detection while preserving malicious functionality. This challenge is particularly pronounced in Hardware-assisted Malware Detection (HMD) techniques, where performance counter-based features can be manipulated by obfuscated malware to deceive ML-based classifiers. In this paper, we examine the impact of obfuscation techniques on the efficacy of malware detection systems, using HMD as a case study to analyze the effectiveness of ML-based security solutions. By generating diverse obfuscated malware variants and conducting an extensive evaluation across multiple ML models, we demonstrate a substantial reduction in detection performance, highlighting the evasive nature of code obfuscation. To address these vulnerabilities, we propose ObfusGate, an intelligent performance-aware defense mechanism based on feature representation learning via a stacked denoising autoencoder, which enhances the performance of ML models against both obfuscated and unobfuscated malware. Experimental results show that ObfusGate improves detection rates by up to 24% across various ML models, reinforcing the resilience of malware detection systems against adversarial obfuscation techniques in real-time execution environments.
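To make the core idea behind ObfusGate concrete, the sketch below shows a minimal stacked denoising autoencoder in plain NumPy: each layer is trained to reconstruct clean inputs from noise-corrupted ones, and the layers' codes are stacked to form a more robust feature representation. This is an illustrative toy only; the layer sizes, masking-noise rate, and the random stand-in for performance-counter feature vectors are assumptions, not the paper's actual configuration or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae_layer(X, n_hidden, noise=0.2, lr=0.5, epochs=300):
    """Train one denoising autoencoder layer with tied weights.

    The input is corrupted with masking noise, and the layer is trained
    to reconstruct the *clean* input, which encourages features that are
    robust to perturbations of the input statistics."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(d)          # decoder bias
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > noise)   # masking corruption
        H = sigmoid(Xn @ W + b)                  # encode
        Xr = sigmoid(H @ W.T + c)                # decode (tied weights)
        # Gradients of the squared reconstruction error w.r.t. pre-activations
        dXr = (Xr - X) * Xr * (1.0 - Xr)
        dH = (dXr @ W) * H * (1.0 - H)
        W -= lr * (Xn.T @ dH + dXr.T @ H) / n
        b -= lr * dH.mean(axis=0)
        c -= lr * dXr.mean(axis=0)
    return W, b

# Toy stand-in for performance-counter feature vectors scaled to [0, 1];
# real inputs would come from profiled benign and (obfuscated) malware runs.
X = rng.random((64, 16))

# Stack two layers: each layer is trained on the previous layer's codes.
W1, b1 = train_dae_layer(X, n_hidden=12)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = train_dae_layer(H1, n_hidden=8)
H2 = sigmoid(H1 @ W2 + b2)  # denoised representation fed to the ML classifier
```

The denoised codes `H2` would then replace the raw counter features as input to each downstream classifier; the denoising objective is what lets the representation absorb some of the distribution shift that obfuscation introduces.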