JPscreener.com

March 11, 2024

Photo Acceptance Variance

[Photo: a set of stacked dice]

A roll of the dice, or something more?

March 25, 2024 Update

Photos submitted as “hot” are screened immediately (within 24 hours). Hence the 7-day figure includes many “hot” photos in its totals, but that would mean the predominant rejection reason is “doesn’t qualify as hot,” which seems a stretch. Or, alternatively, photos submitted as “hot” are worse than the rest of the regular queue by an appreciable margin? What gives?

It’s in the numbers!

I can’t find any published data on the acceptance rate of the entire database at Jetphotos.com, but I have noticed something curious about the current state of accepted photos on Jetphotos. (These limited stats are viewable when you check on the status of your queued images.)

7-Day Period Totals Ending March 10:

Total Screened: 1,221
Total Accepted: 443
Total Rejected: 778
% Accepted: 36.28%

30-Day Period Totals Ending March 10:

Total Screened: 44,740
Total Accepted: 26,577
Total Rejected: 18,163
% Accepted: 59.4%
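
As a quick sanity check, the percentages do follow from the raw counts (a minimal Python sketch, with the numbers copied straight from the stats above):

    # Acceptance rates recomputed from the raw screening counts above.
    week = {"accepted": 443, "rejected": 778}
    month = {"accepted": 26_577, "rejected": 18_163}

    for label, counts in (("7-day", week), ("30-day", month)):
        total = counts["accepted"] + counts["rejected"]
        rate = counts["accepted"] / total
        print(f"{label}: {total:,} screened, {rate:.2%} accepted")
    # 7-day: 1,221 screened, 36.28% accepted
    # 30-day: 44,740 screened, 59.40% accepted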

Signal or noise?

I would posit the following:

  1. There are a large number of photographers uploading to Jetphotos.com. As of March 10: 26,883.
  2. The average quality of submitted photos should remain roughly stable: any given week will have good pictures and bad pictures, but across a group of photographers this large, the mix should average out to about the same.
  3. Based on #2, the acceptance rate for images should remain fairly consistent week to week (the simulation sketch below shows how little it should wander).
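
If assumptions #2 and #3 hold, how much should a weekly acceptance rate bounce around? A minimal simulation sketch, assuming each screening is an independent coin flip at the 30-day rate of 59.4% (my simplification, not anything Jetphotos publishes):

    import random

    random.seed(42)
    TRUE_RATE = 0.594          # 30-day acceptance rate, taken as "truth"
    WEEKLY_N = 1221            # screenings in the most recent week

    # Simulate 10,000 hypothetical weeks of screening at a stable quality level.
    rates = []
    for _ in range(10_000):
        accepted = sum(random.random() < TRUE_RATE for _ in range(WEEKLY_N))
        rates.append(accepted / WEEKLY_N)

    rates.sort()
    print(f"2.5th percentile: {rates[250]:.1%}, 97.5th: {rates[-251]:.1%}, "
          f"min: {rates[0]:.1%}, max: {rates[-1]:.1%}")
    # Simulated weeks stay within a few points of 59.4%;
    # none come remotely close to the observed 36.3%.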

What’s happening in the last week? It really speaks to an abysmal acceptance rate. Is there an influx of bad photographers? Have the screeners been especially tough? Is it a sample-size issue? Perhaps the last 7 days include currently queued photos, but even so, it’s still over 1,000 images.

Let’s use 1,000 as the sample size (roughly the most recent week). We must keep in mind that this only works if the statistical assumptions are true (garbage in, garbage out, of course), but I think they are reasonable.

p_hat = 0.364
standard error of the proportion: SE = sqrt(p_hat(1 − p_hat)/n) = sqrt(0.364 × 0.636 / 1,000) ≈ 0.015

The 95% confidence z-value of 1.96 gives bounds of:

0.364 ± 1.96 × 0.015 = (0.334, 0.394)
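
Here’s the same calculation as a quick Python sketch (the 1,000 sample size and 0.364 rate are the round figures used above):

    import math

    p_hat = 0.364              # observed 7-day acceptance rate
    n = 1000                   # round sample size for the most recent week
    z = 1.96                   # z-value for a 95% confidence interval

    se = math.sqrt(p_hat * (1 - p_hat) / n)
    low, high = p_hat - z * se, p_hat + z * se
    print(f"SE = {se:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
    # SE = 0.015, 95% CI = (0.334, 0.394)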

Let’s take the roughly 45,000 screenings from the 30-day period to represent the population and the ‘truth’, with a typical acceptance rate of about 60%. Then the last week is a statistical outlier, if the assumptions regarding the sample and population hold true.
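
To put a number on “outlier,” here is one way to test it: a one-sample z-test for a proportion, treating the 30-day rate as the true population rate. This is my own sketch of the comparison, using only the totals published above:

    import math

    p0 = 26_577 / 44_740       # 30-day "population" acceptance rate, ~0.594
    k, n = 443, 1_221          # 7-day accepted count and total screened
    p_hat = k / n              # observed 7-day rate, ~0.363

    # One-sample z-test for a proportion against the 30-day rate.
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    # Two-sided p-value from the normal CDF (standard library only).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    print(f"z = {z:.1f}, two-sided p-value = {p_value:.3g}")
    # z ≈ -16.4; the p-value underflows to 0 at double precision.
    # A week like this is essentially impossible if the true rate were ~59.4%.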

Whether it’s really signal or noise would take more detailed data than I have access to, but intuitively, and applying the statistical analysis with its assumptions, it’s quite bizarre to see such a gap.

Oh yeah, sorry, I contributed to bringing the numbers down.