In general, there are three ways to deal with nondetects and missing data:
Drop the whole observation with the missing data
Use some statistical method designed for dealing with missing data
Impute the missing values, either with a single value or by using a model
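The first and third options above can be sketched in a few lines of pandas. This is a minimal illustration on a made-up data set (the column names and values are hypothetical); the second option, statistical methods designed for missing data, generally requires a modeling library rather than a one-liner.

```python
import numpy as np
import pandas as pd

# Hypothetical data set with nondetects recorded as NaN
df = pd.DataFrame({
    "site": ["A", "B", "C", "D", "E"],
    "value": [1.2, np.nan, 3.4, np.nan, 5.0],
})

# Option 1: drop the whole observation with missing data
dropped = df.dropna(subset=["value"])

# Option 3: impute with a single value (here, the column median)
imputed = df.assign(value=df["value"].fillna(df["value"].median()))
```

Note that single-value imputation like this shrinks the variance of the column, which is one of the trade-offs discussed below.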
To decide which method is best, it’s important to think carefully about a few questions.
Most importantly, are your data systematically missing in some way? For example, if you’re looking at crime data for neighborhoods, are all the high-crime neighborhoods underreported? Systematically missing data can cause major problems, so it’s important to rule out such sources as much as you can.
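One quick diagnostic for systematic missingness is to compare the missing-value rate across groups; if the rates differ sharply, the data are unlikely to be missing completely at random. A sketch with hypothetical column names, in the spirit of the crime-data example:

```python
import numpy as np
import pandas as pd

# Hypothetical crime data: reported counts per neighborhood, some missing
df = pd.DataFrame({
    "neighborhood_type": ["high", "high", "high", "low", "low", "low"],
    "reported_crimes": [np.nan, np.nan, 12.0, 3.0, 4.0, 2.0],
})

# Missing rate per group; a large gap between groups suggests the data
# are systematically, not randomly, missing
missing_rate = (
    df["reported_crimes"].isna().groupby(df["neighborhood_type"]).mean()
)
```

A gap like this is only a red flag, not proof; confirming the mechanism requires knowledge of how the data were collected.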
How much data are you missing? 0.5%? 20%? Different methods are more or less successful depending on the percentage of missing data.
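Measuring that percentage is straightforward with pandas; here is a minimal sketch on a toy data set (column names are hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": [1.0, np.nan, 3.0, 4.0],
    "y": [np.nan, np.nan, 7.0, 8.0],
})

# Percentage of missing cells overall, and per column
overall_pct = df.isna().mean().mean() * 100  # 3 of 8 cells -> 37.5
per_column = df.isna().mean() * 100          # x: 25.0, y: 50.0
```

Checking the per-column rates, not just the overall rate, matters: a data set that is 5% missing overall may still have one column that is mostly empty.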
What is the real-world meaning of the data set you’re studying, and what are the possible sources of missing data? Let the type of data you’re using, its purpose, and its structure inform the strategies you use.
Finally, what are the potential consequences if missing data are handled incorrectly? Are imputed values going to be problematic in some way, and how does that balance against the reduced statistical power that comes from dropping observations?
It’s easy to look up particular methods, but one thing you can’t just look up is how to think critically about the most intelligent ways to handle your missing data. Not all data sets can be treated the same.
(Once you’ve thought through these things, check out one of my favorite Python packages, which helps you examine missing data: missingno.)