Hyperdimensional computing (HDC) processes information in a high-dimensional space, offering advantages over deep neural networks (DNNs) such as smaller model sizes, robustness to soft errors, and the capability for one- or few-shot learning without significant accuracy drops in some applications. Although HDC shows great potential, encoding data into a hyperdimensional space poses challenges due to the significant data movement between processing units and memory in conventional architectures. Processing-in-memory (PIM) embeds computation directly within memory, reducing latency and energy consumption. Among PIM approaches, in-DRAM computation leverages the parallelism of DRAM to perform bulk bit-wise operations with high throughput. In this paper, we present HIDE, the first hyperdimensional in-DRAM encoder for fast and energy-efficient classification. HIDE integrates HDC encoding directly within commodity DRAM, significantly reducing energy consumption and improving system performance. HIDE achieves encoding speedups of up to 272.8× over a CPU and 23.3× over a GPU during training, and reduces encoding energy by 94.8% during training and 95.0% during inference. Integrating HIDE with the GPU yields an overall system speedup of 15× during training and 7.2× during inference compared to a GPU-only system, along with energy savings of 92.3% and 82.7%, respectively, for classification tasks with the MNIST, ISOLET, and UCHAR datasets.
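To make concrete why HDC encoding maps well onto bulk bit-wise in-DRAM operations, the following is a minimal sketch of one common binary encoding scheme (record-based encoding with XOR binding and majority bundling). This is an illustrative example only, not HIDE's exact pipeline; the dimensionality, feature count, and quantization levels are placeholder values.

```python
import numpy as np

D = 1024          # hypervector dimensionality (illustrative; HDC often uses ~10k)
N_FEATURES = 16   # features per input sample (placeholder)
N_LEVELS = 8      # quantization levels per feature value (placeholder)
rng = np.random.default_rng(0)

# Random binary ID hypervectors, one per feature position
id_hvs = rng.integers(0, 2, size=(N_FEATURES, D), dtype=np.uint8)
# Random binary level hypervectors, one per quantization level
level_hvs = rng.integers(0, 2, size=(N_LEVELS, D), dtype=np.uint8)

def encode(sample):
    """Encode a feature vector (values in [0, 1]) into a binary hypervector.

    Each feature's level hypervector is bound (XOR) to its position-ID
    hypervector, and the bound vectors are bundled with a bit-wise
    majority vote. Both steps are element-wise bit operations, which is
    what makes this style of encoding amenable to bulk in-DRAM compute.
    """
    # Quantize each feature value to a level index
    levels = np.minimum((sample * N_LEVELS).astype(int), N_LEVELS - 1)
    bound = id_hvs ^ level_hvs[levels]   # XOR binding, one row per feature
    # Majority bundling across features (ties round down to 0)
    return (bound.sum(axis=0) > N_FEATURES // 2).astype(np.uint8)

sample = rng.random(N_FEATURES)
hv = encode(sample)
print(hv.shape, hv.dtype)  # (1024,) uint8
```

In a classification flow, encoded hypervectors for training samples are bundled per class into class prototypes, and inference reduces to a nearest-prototype search in Hamming distance, which is again dominated by bit-wise operations.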