A friend of mine in grad school worked on a project similar in spirit called ASAS-SN. It also used off-the-shelf cameras, but distributed them around the world so that they could detect supernovae and other transients. Because everything was off the shelf, they could build out their network on a shoestring budget. I believe they're now the first to discover the vast majority of bright supernovae.
These are the kinds of things amateur photographers with small telescopes (and lots of patience) sometimes discover:
https://old.reddit.com/r/space/comments/13uco46/i_discovered... ("I discovered this planetary nebula using a $500 camera lens, now it carries my name")
https://www.astrobin.com/i9yy6f/ (18 hours!)
Maybe we can keep stacking them. Build an array of arrays of cameras/telescopes
What would be the limit?
I mean, can millions of phone cameras make one giant virtual telescope?
The key term you are looking for is "exposure stacking". See for example https://markus-enzweiler.de/software/starstax/ and https://www.cloudynights.com/topic/719318-stacking-data-from...
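If it helps intuition, here's a toy sketch (all numbers made up) of why stacking works: averaging N exposures shrinks the uncorrelated noise by sqrt(N), so a signal invisible in any single frame becomes detectable in the stack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical faint "star" buried in per-frame noise (arbitrary units).
true_signal = 5.0    # photons landing on the pixel each exposure
read_noise = 20.0    # noise sigma per frame
n_frames = 400

# Simulate n_frames noisy exposures of the same pixel.
frames = true_signal + rng.normal(0.0, read_noise, size=n_frames)

single_snr = true_signal / read_noise
stacked = frames.mean()
stacked_snr = true_signal / (read_noise / np.sqrt(n_frames))

print(f"single-frame SNR: {single_snr:.2f}")  # 0.25 -- invisible
print(f"stacked SNR:      {stacked_snr:.2f}")  # 5.00 -- a clear detection
print(f"stacked estimate: {stacked:.2f}")
```

Whether the 400 frames come from one camera over 400 exposures or 400 co-aligned cameras in one exposure, the arithmetic is the same; the array just buys you time.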
I have no clue what I am talking about, but would love to hear somebody knowledgeable speculate on this.
Since you're talking about video footage, I would guess it's rolling shutter distortion you saw. This can result in wobbles, skew, or aliasing artifacts.
Here's a mirror,
https://web.archive.org/web/20240507234024/https://www.dunla...
https://www.dunlap.utoronto.ca/new-dragonfly/
The publication resulting from the work, "Giant Shell of Ionized Gas Discovered near M82", can be found:
https://iopscience.iop.org/article/10.3847/1538-4357/ac50b6/...
The companion publication from Yale, "Nascent Tidal Dwarf Galaxy Forming", can be found:
https://iopscience.iop.org/article/10.3847/2041-8213/ac3ca6/...
Edit: And, actually from the post below, full publications related at:
Ah!
So there's a large mirror involved?
(sorry)
It uses a raspberry pi for each lens(https://www.raspberrypi.com/news/dragonfly-spectral-line-map...)
Early paper about the telescope. https://ui.adsabs.harvard.edu/abs/2014PASP..126...55A/abstra...
About the only point you can discuss as a comparator is the angular resolution and even that means asking how much has been interpolated.
I love modern astro images, but I'm unsure we should even call them "photography", because they're "painted" by the astronomer or their appointed colour artist.
"this one has more IR boost, but I added a hint of green for dramatic effect"
Also to the angular resolution thing, we're often also looking at what is projected into our mind as a 2D structure. The horse head nebula is a 3D state of matter. It wouldn't look like JFK's head from our angle, but might well from another point of view, or a melon, or like.. nothing at all.
Most "constellations" are not physically interconnected in any sense; they are not a group the way a galaxy or a star and its planets are, and they need not lie at the same distance from us. They're the view of these objects projected onto an imaginary surface, as painted by the old astrologers.
The disadvantage is that it is in space: you have to spend 10x or 100x as much making something that can work there, and you can't maintain it. I bet it would be much better to spend that money making dozens of these around the world, or iterating on the design.
The other advantage is that the atmosphere is opaque at some wavelengths. The infrared wavelengths that JWST observes in are absorbed. It also helps to be able to cool the detectors to lower temperatures. One reason we aren't seeing a direct replacement for Hubble is that the big ground telescopes with adaptive optics are now about as good.
Once in space they cannot tweak the array.
Launch weight and stresses would damage this array.
2) Because you're setting fire to a big tube of propellant that then goes crazy fast, you need all sorts of permits and safety reviews to do it
3) Space is hard, so your rocket will almost certainly blow up / fail a couple of times.
All of this means: big budgets and state-level patience and persistence is needed
Rocket fuel is also not exactly easy to come by.
- cost to get into geostationary orbit might dominate the value/saving of the cheap instrument, so it might be smart to spend more on that
- managing and controlling it might be very challenging
- need to get the data down from it, which might create difficulties and costs that kill the value
- heating and cooling in space might kill it
- radiation in space might kill the hardware
- acceleration during launch might kill the hardware
- the payload needs to be stable during launch or there will be an accident
- scientific value might be lower than other missions for similar spend
/not an astrophotographer
First is weather. We can't see through clouds. Most new astronomy is about sources too faint to have been analyzed a hundred years ago, and even clouds that are barely visible to the human eye will drown those out.
Second is various engineering difficulties resulting from differential temperatures in the air in close proximity to the telescope dome, defects in the mirror surface, and limitations to the optical design (you're projecting a spherical globe onto a flat surface).
Third is 'atmospheric seeing' - high-order distortions caused by thermal patterns in the air, which change significantly on a timescale of tens of milliseconds and ultimately smear the light into a gaussian blur in long exposures. The lower your altitude, the more disturbed the airmass, and the more humid the air, the worse the seeing.
Fourth is sky glow - light pollution from nearby upwards facing lightbulbs, from the full moon, and from the sun at twilight & in the daytime
Fifth is the diffraction limit. A perfectly engineered, spherical-cow-world telescope with a perfect sensor has fundamental optical limits to the resolution it can observe, and optical resolution in arc seconds scales with wavelength / aperture.
Sixth is bright-source confusion and the limitations of your background field. It's very difficult with CCD & CMOS sensors (and even with spherical-cow sensors, the optics present limitations) to image a faint thing next to a bright thing. This is why we have fewer galaxies mapped on the other side of the Milky Way, and why it can be very difficult to pick up, say, a nebula right next to a bright nearby star.
Seventh is light-gathering ability, thermal noise, and readout noise. If you're trying to capture a photon every second, it's going to be very difficult if your CCD is absorbing thousands of photons per second thermally from the surrounding blackbody radiation and the readout circuitry.
Eighth is differential focus. To make matters more complicated, optical resolution is not 'fixed', because focus is not identical in different parts of the image. Typically telescopes are optimized for nominal focus at the center of their field, but get a few arc-minutes off the center and optical resolution goes down. Get a few degrees off and it can degrade to unusability. There are characteristic aberrations that crop up, and every optical design that aims for wide fields is a compromise between these aberrations.
Ninth is atmospheric windows. Atmosphere absorbs hard UV. And portions of infrared. And portions of radio. To get a full spectrograph of a source, to detect the exotic portions of the EM spectrum that we don't really deal with frequently, you can't do it through atmosphere.
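To put the diffraction limit from point five in rough numbers, here's a quick sketch (the 143 mm aperture matches the Canon lens Dragonfly uses; the 550 nm wavelength choice is mine):

```python
import math

ARCSEC_PER_RAD = 206265.0

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: theta ~ 1.22 * lambda / D (radians), in arcsec."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# Green light through a 143 mm telephoto aperture:
print(f"{diffraction_limit_arcsec(550e-9, 0.143):.2f} arcsec")  # ~0.97
# The same light through a 10 m Keck-class mirror:
print(f"{diffraction_limit_arcsec(550e-9, 10.0):.3f} arcsec")   # ~0.014
```

Note that typical atmospheric seeing at a good site is around 1 arcsec, so the small lens is already seeing-limited rather than diffraction-limited, while the 10 m mirror gives up almost two orders of magnitude of resolution to the atmosphere.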
Generally speaking, it's relatively easy on Earth for professional observatories to reach a point where atmospheric seeing limits your observations more than diffraction, readout noise, field distortions, sky glow, or ambient light. It's not easy to defeat bright-source confusion with a larger and larger telescope. Many astronomers have had to content themselves with knowing little about the sky right next to bright sources like nearby stars. The telescope in the article tries to probe this known unknown with numerous small low-res cameras.
Space observatories give us somewhat (10x?) better surveys because of no sky glow, daytime observations, no weather, etc. They eliminate the atmospheric-window problem and simplify some engineering issues (while complicating others).
Part of the big remaining purpose of space observatories, the thing it's very difficult to do on the ground (we've tried!), is to defeat the atmospheric seeing limit and allow us to use very large telescopes which are relatively simply designed. Light-gathering ability from a source scales with aperture^2, and light-concentrating ability scales with aperture^2, so ideally sensitivity to point sources should scale with aperture^4. It rarely does on the ground, because we have to put up with atmospheric seeing. The technologies we've used on the ground to fight atmospheric seeing are extremely limiting, expensive, complex, the subject of an insane number of PhD theses, and only suitable for very small fields.
This goal of survey astronomy is at cross purposes to the telescopes in the article, which aim to get diffuse low-resolution impressions of the light near bright objects, defeating problem number six. They can do this with relatively short exposures over hundreds of sensors, so that none of the electron wells in the sensors ever saturates from being too full of light and spills over into its neighboring wells.
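That aperture^2-vs-aperture^4 point can be sketched as a toy model (my own simplification: on the ground the PSF area is fixed by seeing, while in space it shrinks as 1/D^2):

```python
def point_source_speed(aperture: float, seeing_limited: bool) -> float:
    """Relative speed at which a telescope detects a faint point source."""
    collecting_area = aperture ** 2  # light gathered scales with D^2
    # A diffraction-limited PSF concentrates that light into an area
    # shrinking as (1/D)^2; a seeing-limited PSF has a fixed size.
    concentration = 1.0 if seeing_limited else aperture ** 2
    return collecting_area * concentration

# Gain from doubling the aperture:
in_space = point_source_speed(2, seeing_limited=False) / point_source_speed(1, seeing_limited=False)
on_ground = point_source_speed(2, seeing_limited=True) / point_source_speed(1, seeing_limited=True)
print(in_space)   # 16.0 -- the ideal aperture^4 scaling
print(on_ground)  # 4.0  -- collecting area only
```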
Some kind of technical measurement for me to better appreciate the work.
Also known as an anti-reflection coating. Definitely not unprecedented.
Cool project, though.
In particular, I don't see how N co-aligned cameras is any different than N images taken in sequence with one camera (averaging over noise), other than a reduction in time required to take N images with one camera.
If you click the “ADS” link you will find most will have links to a free preprint on arXiv.
> other than a reduction in time required to take N images with one camera.
Life is short!
https://arxiv.org/abs/1401.5473 ("Ultra Low Surface Brightness Imaging with the Dragonfly Telephoto Array")
[1]https://warwick.ac.uk/fac/sci/physics/research/astro/researc... [2]https://www.superwasp.org/about/
"The 1.8 gigapixel sensor is made up of a matrix of 368 Aptina MT9P031 5-megapixel smartphone CCDs."
> However, the latest generation of Canon lenses features the first commercialized availability of nano-fabricated coatings with sub-wavelength structure on optical glasses.
Nikon was actually first to market with this, as far as I can tell in the AF-S 300mm f2.8 VR in 2004.
I work with microscopes that cost $1M and just sit in a lab. That's not atypical for an academic or industrial microscope.
One of the biggest issues in modern science is that to make many discoveries you need to pay very high prices to get the latest and greatest hardware. I've been exploring how to make lower-cost telescopes and microscopes (and definitely love this project) that are "good enough" to open up new areas of research/discovery for people with budgets in the $1K-10K range. But it's hard! So far I have mostly been relearning what people already knew in the 1800s and early 1900s, which is now achievable with easily obtained off-the-shelf tech.
But talking apples to apples, a $12000 professional camera lens is closer to something like a $12000 research microscope than you think in terms of the build-or-buy decision. There's also a whole industry of telescopes for amateurs that are made in low volumes to much higher optical standards than photographic lenses that are mostly not that expensive. A top tier 6" refractor might be $15K and blow the camera lens away as a general purpose astronomical instrument, but it would not be nearly as fast, which is important for this application, and if you placed an order for 120 of them, you might get them in five years. Maybe. I'd guess that made-for-the-purpose tubes would come in within a factor of two or three of the off-the-shelf lens if you could find a supplier. They might even be cheaper. The project risk would be larger, and that might be determining also.
Ooo, what for!? I always love hearing what people do with optical equipment I can't afford. :)
I'm sure I'd be interested in your pet project as well!
I wonder if Dr. Philipp Seidel has any relation to Philipp Ludwig von Seidel.
So, is there an advantage of having all of the lenses on the same tracking platform to justify the expense of the single mount? If you place individual 6" scopes in an area where humans could comfortably move between them all pointed at the same object or even slightly different areas to get the wider image, would that not be the same/similar result? Essentially, building the VLA but with commodity off-the-shelf visible scopes.
Your 6" scope is slower, probably much slower, than the telephoto lens they used. There really aren't any amateur telescope tubes I know of that you could directly compare to the 143mm aperture f2.8 Canon lens. The right comparison would be to a 6" apo, which would run $8K-$16K and still be slower.
Even if this isn't doing the same "science", it would be an interesting thing to play with for science. Instead of stacking images from the same camera, just stack each image from the array. Or capture an entire mosaic in one "snap", which is essentially what WASP is doing (mentioned in a post from yesterday).
Also they're not so much using the speed of the lens for shorter exposure times but for field of view and for high sensitivity.
In general photographic lenses make mediocre telescopes and telescopes make mediocre photographic lenses - try using one of your tubes for some terrestrial photography to see. So it's pretty amazing that the Canon lens performs so well to begin with, that they're able to use it like a fast apochromat, and even more so that they're able to build it out to be roughly equivalent to a really large apochromat. With eight lenses, the early paper claims the instrument is equivalent to a 40cm f/1 refractor. How would you build such a thing? Well, this is how.
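A quick sanity check on that 40cm f/1 equivalence claim, assuming the lens's 143 mm clear aperture and 400 mm focal length from the article:

```python
import math

n_lenses = 8
aperture_mm = 143.0        # clear aperture of one Canon telephoto lens
focal_length_mm = 400.0    # focal length of the lens

# N identical co-pointed lenses collect as much light as a single lens
# of diameter D * sqrt(N); the focal length is unchanged by co-adding.
equiv_aperture = aperture_mm * math.sqrt(n_lenses)
equiv_f_ratio = focal_length_mm / equiv_aperture

print(f"equivalent aperture: {equiv_aperture:.0f} mm")  # ~404 mm
print(f"equivalent f-ratio:  f/{equiv_f_ratio:.2f}")    # ~f/0.99
```

So the eight-lens configuration does indeed work out to roughly a 40 cm f/1 refractor, which matches the early paper's claim.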