Conventional reliability-improvement methods may not be efficient for Edge systems, where hardware and processing resources are limited. To address this gap, this paper proposes using the performance-monitoring metrics of the central processing unit in Edge devices to improve reliability. We use the performance-monitoring toolset PERF together with the LLFI fault-injection tool to inject a variety of fault models into a prototypical Edge processor while it runs MiBench benchmark programs. The injected faults, covering four software fault types and five hardware fault types, are used to collect a dataset that captures the system's behavior under various reliability conditions. The collected dataset is then used to train machine learning models that support run-time monitoring and detection of potential fault conditions on the Edge system. Our experiments show that the trained models achieve a fault-detection accuracy of 91.5%. We evaluated several tiny machine-learning (ML) models, e.g., random forests, that can be implemented as hardware modules with very low resource requirements. Implementations of the tiny ML models show that accuracy remains above 90% while model summarization methods reduce the models' parameters by more than 80%.
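To make the described pipeline concrete, the following minimal sketch shows how a small random-forest fault detector could be trained on CPU performance-counter features of the kind collected with PERF during fault-injection runs. It is not the paper's actual implementation: the counter names, the binary fault/fault-free labels, and the synthetic data are illustrative placeholders standing in for the real dataset gathered with LLFI and MiBench.

```python
# Minimal sketch (assumptions: synthetic data, placeholder feature names) of a
# "tiny ML" fault detector trained on perf-style CPU counters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical performance counters sampled per benchmark run (placeholder values,
# not real PERF measurements).
FEATURES = ["instructions", "cache-misses", "branch-misses", "page-faults", "context-switches"]
X = rng.lognormal(mean=10.0, sigma=1.0, size=(2000, len(FEATURES)))
# 0 = fault-free run, 1 = faulty run (a stand-in for the paper's software/hardware
# fault classes; random here, so accuracy is only meaningful with the real dataset).
y = rng.integers(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A deliberately small forest, in the spirit of a model compact enough to map
# onto a low-resource hardware module.
clf = RandomForestClassifier(n_estimators=10, max_depth=4, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Keeping the number and depth of trees small is what makes such a model a candidate for hardware implementation; the parameter-reduction (summarization) step reported in the paper would further shrink the trained forest before deployment.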