how does lucky imaging in astrophotography work?

I understand how lucky imaging gets the results it gets, but I’m wondering specifically how the 10% of frames are chosen.

They’re not picked based on clarity/blur, because the problem is distorted images rather than blurry ones, which causes issues when averaging the stack.

Searching online gives me lots of answers about how lucky imaging produces clearer images, but not how the lucky frames are chosen.

Anyone know how lucky frames get chosen?

count_of_monte_carlo Mod ,

This isn’t exactly my area of expertise, but I have some information that might be helpful. Here’s the description of the frame selection from a paper on a lucky imaging system:

The frame selection algorithm, implemented (currently) as a post-processing step, is summarised below:

  1. A Point Spread Function (PSF) guide star is selected as a reference to the turbulence induced blurring of each frame.
  2. The guide star image in each frame is sinc-resampled by a factor of 4 to give a sub-pixel estimate of the position of the brightest speckle.
  3. A quality factor (currently the fraction of light concentrated in the brightest pixel of the PSF) is calculated for each frame.
  4. A fraction of the frames are then selected according to their quality factors. The fraction is chosen to optimise the trade-off between the resolution and the target signal-to-noise ratio required.
  5. The selected frames are shifted-and-added to align their brightest speckle positions.

If you want all the gory details, the best place to look is probably the thesis the same author wrote on this work. That’s available here (PDF warning).
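
Not from the paper itself, but here’s a rough Python sketch of how steps 1–5 could fit together, with some big simplifications: the sinc resampling is replaced by plain integer-pixel peak finding, and the names `frames` (a list of 2-D numpy arrays, one short exposure each) and `guide_box` (a pair of slices around the guide star) are made up for illustration:

```python
import numpy as np

def quality_factor(frame, guide_box):
    """Fraction of the guide star's light landing in its brightest pixel."""
    psf = frame[guide_box]
    return psf.max() / psf.sum()

def lucky_stack(frames, guide_box, keep_fraction=0.10):
    # 1. Score every frame with the quality factor.
    scores = np.array([quality_factor(f, guide_box) for f in frames])

    # 2. Keep only the best `keep_fraction` of frames (the "lucky" 10%).
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:n_keep]

    # 3. Shift-and-add: align every selected frame on its brightest speckle,
    #    using the best frame's speckle position as the reference.
    ref_y, ref_x = np.unravel_index(frames[best[0]][guide_box].argmax(),
                                    frames[best[0]][guide_box].shape)
    stack = np.zeros_like(frames[0], dtype=float)
    for i in best:
        y, x = np.unravel_index(frames[i][guide_box].argmax(),
                                frames[i][guide_box].shape)
        # np.roll wraps around at the edges; ignored here for simplicity.
        stack += np.roll(frames[i], (ref_y - y, ref_x - x), axis=(0, 1))
    return stack / n_keep
```

In the real system the brightest-speckle position is estimated to sub-pixel accuracy (step 2), so the shift-and-add isn’t limited to whole-pixel shifts the way `np.roll` is here.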

Mbourgon ,

Looking at this: skyandtelescope.org/…/lucky-imaging/

Reading between the lines, my bet is that it’s looking for photos with less atmospheric blurring. Since it sets reference points, it can measure the delta from a good shot, add up those values to determine how close to ideal a particular photo is, then choose the overall “luckiest” photos and stack them.
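
If that reading is right, the scoring could be as simple as “how far did my reference points move compared to the reference shot”. A minimal sketch (all names here are made up, and detecting the points is assumed to happen elsewhere):

```python
import numpy as np

def displacement_score(frame_points, ref_points):
    """Mean distance (in pixels) between a frame's detected reference points
    and the same points in the chosen reference shot."""
    return np.linalg.norm(frame_points - ref_points, axis=1).mean()

# frame_points_list: one (N, 2) array of point positions per frame
# scores = [displacement_score(p, ref_points) for p in frame_points_list]
# keep   = np.argsort(scores)[:int(0.10 * len(scores))]  # smallest deltas = "luckiest"
```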

PeriodicallyPedantic OP ,

I read that article, and it’s very good! But it didn’t explain how to detect atmospheric blurring, since it’s not actually blurring, it’s distortion. To quote that article:

“even if the sharpest image is very clear, it may still be distorted in varying degrees around the frame”

So you can’t just score the frames by sharpness.

Assuming all images are compared to a reference shot as you suggested, how is the reference shot selected?

I’ve actually got my own ideas about how it could be done, but this is coming from a background in computer science, not from astronomy, so I don’t trust my solution.

Mbourgon ,

Yeah, I’m guessing your ideas and mine are going to be similar then; wish I could add more!
