update docs structure for 3d scanning

brooke 2024-06-02 18:08:03 -04:00
parent 072f0fdc66
commit d50059d751
8 changed files with 57 additions and 35 deletions

@@ -2,42 +2,21 @@
title: 3D Scanning
---
3D scanning is a very wide field covering many different use cases. I personally like the idea of using 3D scanning as another modality for letting people interact with the world around them: museums 3D scanning all of the works they hold, or high-quality 3D scans capturing monuments that are at risk of being lost forever. Education in general can also be greatly aided by introducing more active methods of learning based on working in 3D.
{{< callout type="info" >}}
**Hey!** This page is a work in progress. If you'd like to assist in the process of writing, take a look at the [git repository](https://git.myco.systems/mycosystems/midtowndrafting.com)
{{< /callout >}}
In the past, 3D scanning was prohibitively expensive and had many drawbacks. Capturing a 3D model meant expensive equipment and prep time for your subject; every surface to be captured needed to be perfectly matte, or even covered in a particle pattern. Photogrammetry and the wider concept of neural radiance fields have introduced a more software-defined approach to 3D scanning. [NeRF](https://www.matthewtancik.com/nerf), [MIP-NeRF](https://arxiv.org/abs/2103.13415), [3D Gaussians](https://arxiv.org/abs/2308.04079), and other techniques show how neural networks can define a complete 3D model from 2D reference images.
In the prototype lab, we provide many ways for students to learn 3D scanning through practice. This page serves as a starting point, briefly outlining what we can do.
I should say that none of these techniques are anywhere near a "production ready" stage (you still cannot derive an accurate 3D mesh from them), but they have brought a lot of interesting concepts forward. For one, being able to share a color-accurate model has been the focus of my research recently: using what are called 3D Gaussians, an entirely software-defined approach can produce very lightweight (the model below is 6 MB), high-quality 3D models.
# Hardware
Below you can see an example of a 3D Gaussian scene created with the "splatfacto" method, developed by the engineers working on the [nerfstudio](https://docs.nerf.studio/) project and inspired by the SIGGRAPH paper "[3D Gaussian Splatting for Real-Time Radiance Field Rendering](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/)".
The prototype lab comes equipped with top-of-the-line hardware to facilitate the processing of large 3D models. The main components of the hardware setup include:
<head>
<link rel="stylesheet" href="styles.css">
</head>
<br>
<div class="hextra-card hx-group hx-flex hx-flex-col hx-justify-start hx-overflow-hidden hx-rounded-lg hx-border hx-border-gray-200 hx-text-current hx-no-underline dark:hx-shadow-none hover:hx-shadow-gray-100 dark:hover:hx-shadow-none hx-shadow-gray-100 active:hx-shadow-sm active:hx-shadow-gray-200 hx-transition-all hx-duration-200 hover:hx-border-gray-300 hx-bg-gray-100 hx-shadow dark:hx-border-neutral-700 dark:hx-bg-neutral-800 dark:hx-text-gray-50">
<div id="container">
<div id="progress-container">
<dialog open id="progress-dialog">
<p>
<label for="progress-indicator">Waiting</label>
</p>
<progress max="100" id="progress-indicator"></progress>
</dialog>
</div>
<canvas id="canvas"></canvas>
<script type="module" src="script.js"></script>
</div>
<button style="margin:0.5rem; padding:0.25rem; padding-left: 1rem; padding-right: 1rem;" class="not-prose hx-font-medium hx-cursor-pointer hx-px-6 hx-py-3 hx-rounded-lg hx-text-center hx-text-white hx-inline-block hx-bg-primary-600 hover:hx-bg-primary-700 focus:hx-outline-none focus:hx-ring-4 focus:hx-ring-primary-300 dark:hx-bg-primary-600 dark:hover:hx-bg-primary-700 dark:focus:hx-ring-primary-800 hx-transition-all hx-ease-in hx-duration-200" id="load-button">Load Scene</button>
<div style="display: flex;justify-content: space-between; padding: 0.5rem; padding-top: 0;">
<a class="gsplat-js" href="https://github.com/huggingface/gsplat.js/">gsplat.js</a>
<a style="color:#fff;text-decoration: none;" class="gsplat-js"> left click <strong>rotate</strong>, right click <strong>pan</strong>
</a>
</div>
</div>
- **Computer**: The lab is equipped with a powerful computer, featuring:
  - 2x **NVIDIA A5000** GPUs: These high-performance GPUs provide the computing power needed for inference with NeRF models and for compute-heavy photogrammetry software like Meshroom, with a total of 48 GB of GDDR6 memory.
  - 1x **AMD EPYC 7763** CPU: This processor features 64 cores and 128 threads, offering exceptional multitasking capability.
  - 8x **64 GB DDR4** DIMMs: A total of 512 GB of RAM ensures smooth processing of large image datasets.
Below are some pre-rendered scenes based on the [nerfacto](https://docs.nerf.studio/nerfology/methods/nerfacto.html) method, also developed by nerfstudio. Though I have found that in some contexts the "splatting" method produces a smaller file and is more efficient to run, NeRF still provides excellent quality.
{{< cards cols="2" >}}
<a class="hextra-card hx-group hx-flex hx-flex-col hx-justify-start hx-overflow-hidden hx-rounded-lg hx-border hx-border-gray-200 hx-text-current hx-no-underline dark:hx-shadow-none hover:hx-shadow-gray-100 dark:hover:hx-shadow-none hx-shadow-gray-100 active:hx-shadow-sm active:hx-shadow-gray-200 hx-transition-all hx-duration-200 hover:hx-border-gray-300 hx-bg-gray-100 hx-shadow dark:hx-border-neutral-700 dark:hx-bg-neutral-800 dark:hx-text-gray-50 hover:hx-shadow-lg dark:hover:hx-border-neutral-500 dark:hover:hx-bg-neutral-700"><video controls alt="Watch Toast at Cynosport Finals 2019" src="2024-05-31-17-55-11.mp4"></video><span class="hextra-card-icon hx-flex hx-font-semibold hx-items-start hx-gap-2 hx-p-4 hx-text-gray-700 hover:hx-text-gray-900 dark:hx-text-neutral-200 dark:hover:hx-text-neutral-50"></span></a>
<a class="hextra-card hx-group hx-flex hx-flex-col hx-justify-start hx-overflow-hidden hx-rounded-lg hx-border hx-border-gray-200 hx-text-current hx-no-underline dark:hx-shadow-none hover:hx-shadow-gray-100 dark:hover:hx-shadow-none hx-shadow-gray-100 active:hx-shadow-sm active:hx-shadow-gray-200 hx-transition-all hx-duration-200 hover:hx-border-gray-300 hx-bg-gray-100 hx-shadow dark:hx-border-neutral-700 dark:hx-bg-neutral-800 dark:hx-text-gray-50 hover:hx-shadow-lg dark:hover:hx-border-neutral-500 dark:hover:hx-bg-neutral-700"><video controls alt="Watch Toast at Cynosport Finals 2019" src="2024-05-31-17-55-14.mp4"></video><span class="hextra-card-icon hx-flex hx-font-semibold hx-items-start hx-gap-2 hx-p-4 hx-text-gray-700 hover:hx-text-gray-900 dark:hx-text-neutral-200 dark:hover:hx-text-neutral-50"></span></a>
{{< /cards >}}
- **Camera**: The lab maintains all of the equipment needed for photogrammetry, including:
  - 2x **GVM LED Ring Lights**: Ring lights with 6 removable light bars, a color temperature range of 3200 K to 5600 K, and a stand adjustable from 32" to 87" in height.
  - 1x **Sony A6500**: A 24 MP mirrorless camera capable of 4K video, with in-body stabilization, a decent 107-shot RAW buffer, and an 11 fps continuous shooting mode.


@@ -0,0 +1,43 @@
---
title: NeRF
---
3D scanning is a very wide field covering many different use cases. I personally like the idea of using 3D scanning as another modality for letting people interact with the world around them: museums 3D scanning all of the works they hold, or high-quality 3D scans capturing monuments that are at risk of being lost forever. Education in general can also be greatly aided by introducing more active methods of learning based on working in 3D.
In the past, 3D scanning was prohibitively expensive and had many drawbacks. Capturing a 3D model meant expensive equipment and prep time for your subject; every surface to be captured needed to be perfectly matte, or even covered in a particle pattern. Photogrammetry and the wider concept of neural radiance fields have introduced a more software-defined approach to 3D scanning. [NeRF](https://www.matthewtancik.com/nerf), [MIP-NeRF](https://arxiv.org/abs/2103.13415), [3D Gaussians](https://arxiv.org/abs/2308.04079), and other techniques show how neural networks can define a complete 3D model from 2D reference images.
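For a sense of how a neural network can stand in for a full 3D model, the NeRF paper linked above optimizes a network $F_\Theta$ that maps a 3D position $\mathbf{x}$ and viewing direction $\mathbf{d}$ to a color $\mathbf{c}$ and volume density $\sigma$; the color of a pixel is then a volume-rendering integral along its camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)$$

The entire scene lives in the network weights, which is why the capture side only needs posed 2D photographs.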
I should say that none of these techniques are anywhere near a "production ready" stage (you still cannot derive an accurate 3D mesh from them), but they have brought a lot of interesting concepts forward. For one, being able to share a color-accurate model has been the focus of my research recently: using what are called 3D Gaussians, an entirely software-defined approach can produce very lightweight (the model below is 6 MB), high-quality 3D models.
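For a rough sense of why these files stay so small: the viewer below uses gsplat.js, which reads the common `.splat` format. As I understand it, that format packs each Gaussian into 32 bytes, so a 6 MB file holds roughly 200,000 Gaussians. A quick sketch of that arithmetic (the 32-byte layout is my assumption, not something stated on this page):

```js
// Back-of-the-envelope splat count for a .splat file, assuming the
// 32-bytes-per-Gaussian layout used by gsplat.js-style viewers:
// 12 B position (3x float32) + 12 B scale (3x float32)
// + 4 B RGBA color + 4 B rotation quaternion = 32 B per splat.
const BYTES_PER_SPLAT = 32;
const fileSizeBytes = 6 * 1024 * 1024; // the ~6 MB model below
const splatCount = Math.floor(fileSizeBytes / BYTES_PER_SPLAT);
console.log(splatCount); // 196608 Gaussians, each with color, shape, and orientation
```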
Below you can see an example of a 3D Gaussian scene created with the "splatfacto" method, developed by the engineers working on the [nerfstudio](https://docs.nerf.studio/) project and inspired by the SIGGRAPH paper "[3D Gaussian Splatting for Real-Time Radiance Field Rendering](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/)".
<head>
<link rel="stylesheet" href="styles.css">
</head>
<br>
<div class="hextra-card hx-group hx-flex hx-flex-col hx-justify-start hx-overflow-hidden hx-rounded-lg hx-border hx-border-gray-200 hx-text-current hx-no-underline dark:hx-shadow-none hover:hx-shadow-gray-100 dark:hover:hx-shadow-none hx-shadow-gray-100 active:hx-shadow-sm active:hx-shadow-gray-200 hx-transition-all hx-duration-200 hover:hx-border-gray-300 hx-bg-gray-100 hx-shadow dark:hx-border-neutral-700 dark:hx-bg-neutral-800 dark:hx-text-gray-50">
<div id="container">
<div id="progress-container">
<dialog open id="progress-dialog">
<p>
<label for="progress-indicator">Waiting</label>
</p>
<progress max="100" id="progress-indicator"></progress>
</dialog>
</div>
<canvas id="canvas"></canvas>
<script type="module" src="script.js"></script>
</div>
<button style="margin:0.5rem; padding:0.25rem; padding-left: 1rem; padding-right: 1rem;" class="not-prose hx-font-medium hx-cursor-pointer hx-px-6 hx-py-3 hx-rounded-lg hx-text-center hx-text-white hx-inline-block hx-bg-primary-600 hover:hx-bg-primary-700 focus:hx-outline-none focus:hx-ring-4 focus:hx-ring-primary-300 dark:hx-bg-primary-600 dark:hover:hx-bg-primary-700 dark:focus:hx-ring-primary-800 hx-transition-all hx-ease-in hx-duration-200" id="load-button">Load Scene</button>
<div style="display: flex;justify-content: space-between; padding: 0.5rem; padding-top: 0;">
<a class="gsplat-js" href="https://github.com/huggingface/gsplat.js/">gsplat.js</a>
<a style="color:#fff;text-decoration: none;" class="gsplat-js"> left click <strong>rotate</strong>, right click <strong>pan</strong>
</a>
</div>
</div>
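The `script.js` module that drives the viewer above is not part of this diff; below is a minimal sketch of what such a loader could look like with [gsplat.js](https://github.com/huggingface/gsplat.js/). The `scene.splat` file name and the progress wiring are assumptions; the `SPLAT.*` calls follow the gsplat.js README.

```js
// Hypothetical script.js: load a .splat scene into the canvas above with gsplat.js.
import * as SPLAT from "gsplat";

const canvas = document.getElementById("canvas");
const progress = document.getElementById("progress-indicator");
const dialog = document.getElementById("progress-dialog");
const button = document.getElementById("load-button");

const scene = new SPLAT.Scene();
const camera = new SPLAT.Camera();
const renderer = new SPLAT.WebGLRenderer(canvas);
const controls = new SPLAT.OrbitControls(camera, canvas);

button.addEventListener("click", async () => {
  // Stream the scene, updating the <progress> element as chunks arrive.
  await SPLAT.Loader.LoadAsync("scene.splat", scene, (p) => {
    progress.value = p * 100;
  });
  dialog.close();

  // Render loop: orbit controls give the left-click rotate / right-click pan
  // behavior noted under the viewer.
  const frame = () => {
    controls.update();
    renderer.render(scene, camera);
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
});
```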
Below are some pre-rendered scenes based on the [nerfacto](https://docs.nerf.studio/nerfology/methods/nerfacto.html) method, also developed by nerfstudio. Though I have found that in some contexts the "splatting" method produces a smaller file and is more efficient to run, NeRF still provides excellent quality.
{{< cards cols="2" >}}
<a class="hextra-card hx-group hx-flex hx-flex-col hx-justify-start hx-overflow-hidden hx-rounded-lg hx-border hx-border-gray-200 hx-text-current hx-no-underline dark:hx-shadow-none hover:hx-shadow-gray-100 dark:hover:hx-shadow-none hx-shadow-gray-100 active:hx-shadow-sm active:hx-shadow-gray-200 hx-transition-all hx-duration-200 hover:hx-border-gray-300 hx-bg-gray-100 hx-shadow dark:hx-border-neutral-700 dark:hx-bg-neutral-800 dark:hx-text-gray-50 hover:hx-shadow-lg dark:hover:hx-border-neutral-500 dark:hover:hx-bg-neutral-700"><video controls alt="Watch Toast at Cynosport Finals 2019" src="2024-05-31-17-55-11.mp4"></video><span class="hextra-card-icon hx-flex hx-font-semibold hx-items-start hx-gap-2 hx-p-4 hx-text-gray-700 hover:hx-text-gray-900 dark:hx-text-neutral-200 dark:hover:hx-text-neutral-50"></span></a>
<a class="hextra-card hx-group hx-flex hx-flex-col hx-justify-start hx-overflow-hidden hx-rounded-lg hx-border hx-border-gray-200 hx-text-current hx-no-underline dark:hx-shadow-none hover:hx-shadow-gray-100 dark:hover:hx-shadow-none hx-shadow-gray-100 active:hx-shadow-sm active:hx-shadow-gray-200 hx-transition-all hx-duration-200 hover:hx-border-gray-300 hx-bg-gray-100 hx-shadow dark:hx-border-neutral-700 dark:hx-bg-neutral-800 dark:hx-text-gray-50 hover:hx-shadow-lg dark:hover:hx-border-neutral-500 dark:hover:hx-bg-neutral-700"><video controls alt="Watch Toast at Cynosport Finals 2019" src="2024-05-31-17-55-14.mp4"></video><span class="hextra-card-icon hx-flex hx-font-semibold hx-items-start hx-gap-2 hx-p-4 hx-text-gray-700 hover:hx-text-gray-900 dark:hx-text-neutral-200 dark:hover:hx-text-neutral-50"></span></a>
{{< /cards >}}

@@ -1 +1 @@
Subproject commit ba7707d4d9f922ea82a9645af150a6216d343669
Subproject commit 1313415c8b4e2c559b7b133506d6599d1723b807