Emphasis on the Minimization of False Negatives or False Positives in Binary Classification

Authors: Sanskriti Singh

5 pages, 6 figures
License: CC BY-NC-SA 4.0

Abstract: Minimizing specific error types in binary classification, such as false negatives or false positives, becomes increasingly important as machine learning is built into more products. While a few methods exist to bias a model toward reducing a specific error type, they are not very effective, which is why they see little use in practice. To this end, a new method is introduced that reduces false negatives or false positives without drastically changing the model's overall performance or F1 score. The method involves a careful change to the real value of the input after pre-training the model. Results are presented for this method applied to various datasets, some more complex than others. Through experiments with multiple model architectures on these datasets, the best model was identified. In all models, an increase in recall or precision (corresponding to the minimization of false negatives or false positives, respectively) was achieved without a large drop in F1 score.
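For reference, the link the abstract draws between error types and metrics follows directly from the standard definitions: fewer false negatives raise recall, fewer false positives raise precision, and F1 is the harmonic mean of the two.

    precision = TP / (TP + FP)        (fewer FP -> higher precision)
    recall    = TP / (TP + FN)        (fewer FN -> higher recall)
    F1        = 2 * precision * recall / (precision + recall)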
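The abstract does not spell out the training procedure, so the following is only a minimal sketch of one plausible reading: that "changing the real value of the input" means shifting the ground-truth target attached to selected training examples after an initial pre-training pass, then fine-tuning so that errors on one class become more costly. All function names, the regression-style loss, and the shift value are assumptions for illustration, not the paper's confirmed method.

    # Illustrative sketch only; not the paper's confirmed procedure.
    # Assumes X and y are NumPy arrays with y containing 0/1 labels.
    import tensorflow as tf

    def build_model(n_features):
        # Small binary classifier with a linear output so shifted targets
        # outside [0, 1] remain meaningful under an MSE loss.
        return tf.keras.Sequential([
            tf.keras.Input(shape=(n_features,)),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1),  # raw score; threshold at 0.5 after training
        ])

    def train_with_emphasis(X, y, emphasize="false_negatives", shift=0.5, epochs=10):
        model = build_model(X.shape[1])
        model.compile(optimizer="adam", loss="mse")

        # Stage 1: ordinary pre-training on the unmodified 0/1 targets.
        y = y.astype("float32")
        model.fit(X, y, epochs=epochs, verbose=0)

        # Stage 2: shift the target value of the class whose errors should be
        # avoided, then fine-tune. Raising positive targets above 1 makes missed
        # positives (false negatives) more costly under MSE; lowering negative
        # targets below 0 penalizes false positives instead.
        y_shifted = y.copy()
        if emphasize == "false_negatives":
            y_shifted[y == 1] += shift
        else:
            y_shifted[y == 0] -= shift
        model.fit(X, y_shifted, epochs=epochs, verbose=0)
        return model

Under this reading, calling train_with_emphasis(X_train, y_train, emphasize="false_positives") would bias the fitted model toward fewer false positives, at the cost of some shift in the precision/recall balance rather than a large drop in F1.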

Submitted to arXiv on 06 Apr. 2022
