Large-scale integrated circuit simulation faces challenges in both simulation speed and memory consumption. Based on the divide-and-conquer idea, partitioning techniques can effectively mitigate these issues. However, as the number of partitioned subcircuits increases, the Schur complement matrix grows larger and denser, undermining scalability. Although previous studies introduce iterative methods to reduce the order and density of Schur matrices, both the convergence and the convergence rate of iterative methods are sensitive to circuit structure, limiting their applicable simulation scenarios. This paper proposes a machine learning-based pruning algorithm that reduces the density of Schur matrices while precisely controlling simulation accuracy. In addition, a partitioning-based circuit simulation tool is developed, and a constrained random scanning method is introduced to generate reliable samples for training a high-precision model. Combined with direct methods such as LU factorization, the proposed approach exhibits robust performance. Experiments on real circuit netlists and matrices demonstrate that our model achieves an accuracy of 94.23%. Compared to the traditional method, this work reduces the Schur matrix density by an average of 15.54× and accelerates simulation by an average of 1.71×, while restricting the simulation error to within 0.533‰.
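For context, the Schur complement that arises in partitioned simulation can be illustrated with a minimal sketch. This is not the paper's tool; the blocks below are hypothetical values chosen only to show the structure. For a system partitioned as [[A, B], [C, D]], eliminating the interior subcircuit block A leaves the Schur complement S = D − C·A⁻¹·B on the boundary (interface) nodes, which is dense in general even when A, B, C, D are sparse:

```python
import numpy as np

# Hypothetical partitioned system [[A, B], [C, D]] (illustrative values only).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # interior (subcircuit) block
B = np.array([[1.0],
              [0.0]])             # coupling from interior to boundary nodes
C = B.T                           # coupling from boundary to interior nodes
D = np.array([[2.0]])             # boundary (interface) block

# Eliminate the interior block via a direct solve (LU under the hood):
# S = D - C A^{-1} B is the Schur complement on the boundary nodes.
S = D - C @ np.linalg.solve(A, B)
print(S)   # a 1x1 dense boundary system
```

As the number of partitions grows, the boundary system built from these Schur complements becomes larger and denser, which is the scalability bottleneck the proposed pruning algorithm targets.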