diff --git a/content/docs/3d-scanning/_index.md b/content/docs/3d-scanning/_index.md
index d31c7d7..3fc5ab1 100644
--- a/content/docs/3d-scanning/_index.md
+++ b/content/docs/3d-scanning/_index.md
@@ -2,21 +2,12 @@
 title: 3D Scanning
 ---
 
-{{< callout type="info" >}}
-  **Hey!** This page is a work in progress. If you'd like to assist in the process of writing, take a look at the [git repository](https://git.myco.systems/mycosystems/midtowndrafting.com)
-{{< /callout >}}
-
 In the prototype lab, we provide many methods for students to learn 3D scanning through practice. This page serves as the starting point to briefly outline what we can do.
 
-# Hardware
+{{< cards >}}
+  {{< card link="./nerf" title="NeRF and 3D Gaussians" icon="nerf" >}}
+{{< /cards >}}
 
-The prototype lab comes equipped with top-of-the-line hardware to facilitate the processing of large 3D models. The main components of the hardware setup include:
-
-- **Computer**: The lab is equipped with a powerful computer, featuring:
-  - 2x **NVIDIA A5000** GPUs: These high-performance GPUs provide the necessary computing power for inference with NeRF models or heavy math compute for software like Meshroom, with a total of 48 GB of GDDR6 memory.
-  - 1x **AMD EPYC 7763 CPU**: This processor features 64 cores and 128 threads, offering exceptional multitasking capabilities.
-  - **8x 64GB DDR4** DIMMs: A total of 512 GB of RAM ensures smooth operation of large image datasets.
-
-- **Camera**: The lab maintains all of the equipment to do photogrammetry, including:
-  - 2x **GVM LED Ring Lights**: Ring lights with 6 removeable light bars, a color range of 3200 to 5600K, adjustable from 32" to 87" height.
-  - 1x **Sony A6500**: A 24MP mirrorless camera capable of 4K video, has in-body stabilization, a decent 107 RAW image buffer, and an 11 fps still photo mode.
\ No newline at end of file
+{{< callout type="info" >}}
+  **Hey!** This page is a work in progress. If you'd like to assist in the process of writing, take a look at the [git repository](https://git.myco.systems/mycosystems/midtowndrafting.com)
+{{< /callout >}}
\ No newline at end of file
diff --git a/content/docs/3d-scanning/nerf/2024-05-31-17-55-11.mp4 b/content/docs/3d-scanning/nerf/2024-05-31-17-55-11.mp4
deleted file mode 100644
index 72ad18c..0000000
Binary files a/content/docs/3d-scanning/nerf/2024-05-31-17-55-11.mp4 and /dev/null differ
diff --git a/content/docs/3d-scanning/nerf/2024-05-31-17-55-14.mp4 b/content/docs/3d-scanning/nerf/2024-05-31-17-55-14.mp4
deleted file mode 100644
index 970aaea..0000000
Binary files a/content/docs/3d-scanning/nerf/2024-05-31-17-55-14.mp4 and /dev/null differ
diff --git a/content/docs/3d-scanning/nerf/4000-001.min.webp b/content/docs/3d-scanning/nerf/4000-001.min.webp
new file mode 100644
index 0000000..c9a4bd8
Binary files /dev/null and b/content/docs/3d-scanning/nerf/4000-001.min.webp differ
diff --git a/content/docs/3d-scanning/nerf/4000-002.min.webp b/content/docs/3d-scanning/nerf/4000-002.min.webp
new file mode 100644
index 0000000..a454b4c
Binary files /dev/null and b/content/docs/3d-scanning/nerf/4000-002.min.webp differ
diff --git a/content/docs/3d-scanning/nerf/4000-003.min.webp b/content/docs/3d-scanning/nerf/4000-003.min.webp
new file mode 100644
index 0000000..46185f5
Binary files /dev/null and b/content/docs/3d-scanning/nerf/4000-003.min.webp differ
diff --git a/content/docs/3d-scanning/nerf/4000-004.min.webp b/content/docs/3d-scanning/nerf/4000-004.min.webp
new file mode 100644
index 0000000..e3b9fc1
Binary files /dev/null and b/content/docs/3d-scanning/nerf/4000-004.min.webp differ
diff --git a/content/docs/3d-scanning/nerf/_index.md b/content/docs/3d-scanning/nerf/_index.md
index 0633978..a16f643 100644
--- a/content/docs/3d-scanning/nerf/_index.md
+++ b/content/docs/3d-scanning/nerf/_index.md
@@ -2,13 +2,34 @@
 title: NeRF
 ---
 
-3D scanning is a very wide field, including many, many different use-cases. I personally like to look at the ideas around using 3D scanning as another modal for allowing people to interact with the world around them, I'm talking about museums 3D scanning all of the works they hold, or capturing high quality 3D scans of monuments that are at risk of being lost forever, and I think education in general can be greatly aided by introducing more active methods of learning based on working in 3D.
+
 
-In the past, it has been prohibitively expensive and has had many drawbacks. It used to be that capturing a 3D model meant expensive equipment and prep time for your model, every surface to be captured needed to be perfectly matte or even had to be covered in a partical pattern. Photogrammetry and the wider concept of neural radiance fields have introduced a more software-defined approach to 3D scanning. [NeRF](https://www.matthewtancik.com/nerf), [MIP-NeRF](https://arxiv.org/abs/2103.13415), [3D Gaussians](https://arxiv.org/abs/2308.04079), and more techniques show off the ability to use neural networks to define a complete 3D model from a 2D reference point.
+## Intro
 
-I should say that none of these techniques are anywhere near a "production ready" stage, you still cannot derive an accurate 3D mesh from these techniques, but they have brought a lot of interesting concepts forward. For one, being able to share a color-accurate model has been the focus of my research recently, using what are called 3D guassians you can use an entirely software-defined approach to create very light (the below model is 6MB) and high quality 3D models.
+The field of 3D scanning encompasses a wide range of applications, and [NeRF](https://www.matthewtancik.com/nerf) makes a few of them considerably easier. The concept of using 3D scanning as an alternative means for people to interact with their surroundings is intriguing. For instance, museums could 3D scan their entire collections, or capture high-quality scans of monuments at risk of being lost forever. Additionally, education can be enhanced by introducing more active learning methods based on working in 3D. All of these uses are better served by how easy it is to produce models using a *more* software-defined method like NeRF.
 
-Below you can see an example of a 3D guassian using the method "splatfacto", created by the engineers working on the [nerfstudio](https://docs.nerf.studio/) project inspired by the SIGGRAPH paper "[3D Gaussian Splatting for Real-Time Rendering of Radiance Fields](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/)".
+Techniques such as NeRF, [MIP-NeRF](https://arxiv.org/abs/2103.13415), [3D Gaussians](https://arxiv.org/abs/2308.04079), and others demonstrate a faster and simpler method of 3D scanning than traditional laser or projection scanning. However, neural-network-based scanning is still not considered a "production ready" method: these models are made up of very dense point clouds that cannot easily be translated into CAD. Supplementary software is getting better, but laser and projection scanning remain the best options for high-accuracy work.
+
+3D Gaussians have been a recent focus of research: an entirely software-defined approach that produces very light (the model below is 6 MB) yet high-quality 3D models. With [polygon-based](https://en.wikipedia.org/wiki/Polygonal_modeling) modeling, you could never achieve this level of accuracy in a 3D model that runs on a phone.
+
+## Showcase
+
+### 3D Gaussian Splatting
+
+Below you can see an example of a 3D Gaussian splat produced with the "Splatfacto" method, created by the engineers working on the [nerfstudio](https://docs.nerf.studio/) project and inspired by the SIGGRAPH paper "[3D Gaussian Splatting for Real-Time Rendering of Radiance Fields](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/)".
@@ -29,15 +50,25 @@ Below you can see an example of a 3D guassian using the method "splatfacto", cre
 left click rotate, right click pan
+
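
The "very light (the model below is 6 MB)" claim in the new NeRF page can be sanity-checked from storage math. A minimal sketch, assuming the uncompressed `.ply` layout used by the reference 3D Gaussian Splatting implementation (3 position + 3 normal + 48 spherical-harmonic color + 1 opacity + 3 scale + 4 rotation float32 values per splat; exporters and compressed formats vary, so treat the exact layout as an assumption):

```python
# Per-splat attributes in the reference implementation's uncompressed
# .ply export: position, normal, SH color, opacity, scale, rotation.
FLOATS_PER_SPLAT = 3 + 3 + 48 + 1 + 3 + 4   # 62 float32 values
BYTES_PER_SPLAT = FLOATS_PER_SPLAT * 4      # float32 is 4 bytes

def splat_capacity(file_size_bytes: int) -> int:
    """How many raw (uncompressed) splats fit in a file of the given size."""
    return file_size_bytes // BYTES_PER_SPLAT

print(BYTES_PER_SPLAT)                  # 248 bytes per splat
print(splat_capacity(6 * 1024 * 1024))  # 25368 -- a 6 MB file holds ~25k splats
```

Real exports are often smaller per splat, since viewers commonly quantize or compress the attributes, so a 6 MB file can represent even more splats than this raw estimate suggests.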