<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>photogrammetry &#8226; Archives Creative Shrimp</title>
	<atom:link href="https://www.creativeshrimp.com/tag/photogrammetry/feed" rel="self" type="application/rss+xml" />
	<link>https://www.creativeshrimp.com/tag/photogrammetry</link>
	<description>Blender tutorials and courses for 3D artists</description>
	<lastBuildDate>Thu, 26 Feb 2026 13:45:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.3.1</generator>
	<item>
		<title>100% Free Mobile Photogrammetry Workflow: RealityScan</title>
		<link>https://www.creativeshrimp.com/free-mobile-photogrammetry-tutorial.html</link>
		
		<dc:creator><![CDATA[Gleb Alexandrov]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 13:43:03 +0000</pubDate>
				<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[Blender]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[RealityScan]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.creativeshrimp.com/?p=17944</guid>

					<description><![CDATA[<p>Can you do photoscanning for free, with your smartphone? In this tutorial, we show a 100% free photogrammetry workflow in RealityScan Mobile, Agisoft Texture De-lighter and Blender, with no AI involved. While AI-based tools like Sparc3D or Hunyuan 3D can generate genuinely impressive results from a single photo, they often rely on credits, paywalls, and</p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/free-mobile-photogrammetry-tutorial.html">100% Free Mobile Photogrammetry Workflow: RealityScan</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Can you do photoscanning for free, with your smartphone? In this tutorial, we show a 100% free photogrammetry workflow in RealityScan Mobile, Agisoft Texture De-lighter and Blender, with no AI involved.</p>



<span id="more-17944"></span>


<div class="youtube-responsive-container"><iframe width="560" height="315" src="https://www.youtube.com/embed/AbO1Dvbq1Ms?si=L_pMuLl_ZR_3KHJb" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>



<p>While AI-based tools like <a href="https://sparc3d.org/">Sparc3D</a> or <a href="https://hunyuan-3d.org/">Hunyuan 3D</a> can generate genuinely impressive results from a single photo, they often rely on credits, paywalls, and not-so-transparent data processing. In this video, we strip it down to the basics: a smartphone, free software, and measurement-based photogrammetry.</p>



<h2 class="wp-block-heading">Free Photogrammetry Software</h2>



<p><a href="https://www.realityscan.com/en-US/mobile">RealityScan Mobile</a></p>



<p><a href="https://www.agisoft.com/downloads/installer/">Agisoft Texture De-lighter</a></p>



<p>And of course, <a href="https://www.blender.org/">Blender</a> is a hub for assembling, editing and previewing photoscanned content.</p>



<p>&#8212;</p>



<h2 class="wp-block-heading">Photogrammetry: Best Practices and What Camera to Choose</h2>



<p>The capturing practices described in this tutorial still apply to mobile photoscanning, so it could be worth (re)watching.</p>


<div class="youtube-responsive-container"><iframe width="560" height="315" src="https://www.youtube.com/embed/JLdxBtECGuc?si=ARIo_l8NHIvXF4s-" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>



<p>And here&#8217;s the basic guide answering the question &#8220;what camera to choose for photogrammetry?&#8221; Or, understood more broadly &#8211; what makes a capturing device better or worse for the purpose of capturing photographic data.</p>



<p>If you haven&#8217;t heard the term <strong><em>spatial fidelity</em></strong> before, make sure to watch this video.</p>


<div class="youtube-responsive-container"><iframe width="560" height="315" src="https://www.youtube.com/embed/ZN8-tzqBLTs?si=aqR1AiEQXmMELQ_8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>



<p>&#8212;</p>



<h2 class="wp-block-heading">Photogrammetry Course</h2>



<p>If you’d like to go deeper into DSLR-based photogrammetry, advanced capture techniques, and desktop RealityScan workflows (again, way more advanced), check out our full Photogrammetry course.</p>



<p><a href="https://superhivemarket.com/products/photogrammetry-course/?=ref88">On Superhive</a></p>



<p><a href="https://creativeshrimp.gumroad.com/l/photogrammetry-course">On Gumroad</a></p>



<p><strong><em>For this week only (till the 3rd of March) it&#8217;s -25% OFF (Winter Sale).</em></strong></p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1920" height="589" src="https://www.creativeshrimp.com/wp-content/uploads/2026/02/2026_photogrammetry_course_01_1.jpg" alt="photogrammetry course 2026" class="wp-image-17948" srcset="https://www.creativeshrimp.com/wp-content/uploads/2026/02/2026_photogrammetry_course_01_1.jpg 1920w, https://www.creativeshrimp.com/wp-content/uploads/2026/02/2026_photogrammetry_course_01_1-768x236.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2026/02/2026_photogrammetry_course_01_1-1536x471.jpg 1536w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>



<p>&#8212;</p>



<h2 class="wp-block-heading">Sparc3D: An Alternative 3D-from-Image Method (AI)</h2>



<p>In this video, we explored an alternative 3D reconstruction method – represented here by SPARC3D – and discussed what makes it stand out, along with its potential impact on asset creation and 3D workflows.</p>


<div class="youtube-responsive-container"><iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/XRFlnXeOdww?si=IGdAR9D2OxXuH65O" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>




<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/free-mobile-photogrammetry-tutorial.html">100% Free Mobile Photogrammetry Workflow: RealityScan</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SPARC3D: Revolutionary Single-Image 3D Reconstruction Explained (with Live Demo)</title>
		<link>https://www.creativeshrimp.com/sparc3d-ai-mesh-from-image.html</link>
		
		<dc:creator><![CDATA[Gleb Alexandrov]]></dc:creator>
		<pubDate>Wed, 18 Jun 2025 12:29:23 +0000</pubDate>
				<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[Blender]]></category>
		<category><![CDATA[modeling]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<guid isPermaLink="false">https://www.creativeshrimp.com/?p=17478</guid>

					<description><![CDATA[<p>In this video, we explore a new 3D reconstruction method – SPARC3D – and discuss what makes it stand out, along with its potential impact on asset creation and 3D workflows. (No sponsorships: this content is just for your benefit). 👉 https://lizhihao6.github.io/Sparc3D/ – Sparc3D – Sparse Representation and Construction for High-Resolution 3D Shapes Modeling by</p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/sparc3d-ai-mesh-from-image.html">SPARC3D: Revolutionary Single-Image 3D Reconstruction Explained (with Live Demo)</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this video, we explore a new 3D reconstruction method – SPARC3D – and discuss what makes it stand out, along with its potential impact on asset creation and 3D workflows. <em>(No sponsorships: this content is just for your benefit).</em></p>



<span id="more-17478"></span>


<div class="youtube-responsive-container"><iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/XRFlnXeOdww?si=IGdAR9D2OxXuH65O" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>



<p><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://lizhihao6.github.io/Sparc3D/">https://lizhihao6.github.io/Sparc3D/</a> – Sparc3D – Sparse Representation and Construction for High-Resolution 3D Shapes Modeling by Nanyang Technological University. This method offers a robust alternative to traditional photogrammetry by generating detailed 3D meshes from a single input image.</p>



<p><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://huggingface.co/spaces/ilcve21/Sparc3D">https://huggingface.co/spaces/ilcve21/Sparc3D</a> &#8211; live demo</p>



<p><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://huggingface.co/spaces/tencent/Hunyuan3D-2.1">Tencent&#8217;s Hunyuan 3D 2.1 (open source) alternative</a> </p>



<p>&#8212;</p>



<h2 class="wp-block-heading">What is Sparc3D? A Diffusion-based 3D Mesh Generator from a Single Image</h2>



<p>Developed by Nanyang Technological University, <a href="https://lizhihao6.github.io/Sparc3D/">Sparc</a> is a new algorithm that builds 3D meshes from just <strong>A SINGLE IMAGE!</strong> It&#8217;s a significant leap beyond anything we&#8217;ve had before. </p>



<p>While traditional photogrammetry captures real-world objects using multiple photos to create detailed 3D models (something we dive deep into in our <a href="https://www.creativeshrimp.com/photogrammetry-course">Creative Shrimp Photogrammetry Course</a>), Sparc takes a different approach. This new tech can handle complex inputs – open surfaces, intricate geometry – all from one photo, powered by the latest in diffusion models. It’s pretty wild.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_01-1170x720.jpg" alt="SPARC3D AI 3D Reconstruction from Image" class="wp-image-17572"/></figure>



<p>This technology has serious potential to shake up our industry. I don&#8217;t particularly like the word &#8220;disrupt&#8221;, as it&#8217;s overused in all sorts of silly contexts. But it feels disruptive. </p>



<p><em>Just browse through <a href="https://www.youtube.com/watch?v=XRFlnXeOdww">that comment section</a> (1100+ comments, my god!), and you&#8217;ll see what I mean</em>.</p>



<p>Now, let&#8217;s see what it means for us all.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1653" height="580" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/comment_01.jpg" alt="" class="wp-image-17571" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/comment_01.jpg 1653w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/comment_01-768x269.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/comment_01-1536x539.jpg 1536w" sizes="(max-width: 1653px) 100vw, 1653px" /></figure>



<p>&#8212;</p>



<h2 class="wp-block-heading">How to Use the SPARC3D Demo</h2>



<p>There’s <a href="https://huggingface.co/spaces/ilcve21/Sparc3D">a public demo</a> online right now. </p>



<p>For how long? Who knows – the authors have said something about making it open-source once all permissions are received, but you can never rule out a sudden sale to a commercial company.</p>



<p>How do you use the SPARC3D demo?</p>



<ul>
<li>upload an image,</li>

<li>hit &#8220;generate&#8221;,</li>

<li>wait a short while (the queue is starting to get longer though, haha!),</li>

<li>and you get a full 3D model.</li>
</ul>



<p>If you don&#8217;t get a full 3D model, then it&#8217;s most likely the fault of the source image. Try again, with a different one.</p>



<p>&#8212;</p>



<h2 class="wp-block-heading">Use Cases: Game Assets, AI Art to 3D Props, and More</h2>



<p>So, what’ll be the most popular use case (popular as in pop songs)? I&#8217;m betting on this: turning AI-generated images (from <a href="https://www.midjourney.com/home">MidJourney</a>, <a href="https://openai.com/">ChatGPT</a>, <a href="https://en.wikipedia.org/wiki/Stable_Diffusion">Stable Diffusion</a>, <a href="https://www.rundiffusion.com/">Rundiffusion</a> etc.) directly into 3D props and assets. Not that we endorse that – but let&#8217;s be FRANK, it&#8217;s the first thing that comes to mind and I can already hear the sloshing of AI slop (with occasional amber) crashing onto our shores. I&#8217;m bad at metaphors though.</p>



<p>And before you ask – I&#8217;m writing this article <em>mostly</em> the old-school way, with <a href="https://x.com/AidyBurrows3D">Aidy GPT</a> as my only helper.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="2560" height="785" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_ai_generated_model_stable_diffusion_concept_art-scaled.jpg" alt="SPARC3D AI 3D Reconstruction from Image" class="wp-image-17573" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_ai_generated_model_stable_diffusion_concept_art-scaled.jpg 2560w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_ai_generated_model_stable_diffusion_concept_art-768x235.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_ai_generated_model_stable_diffusion_concept_art-1536x471.jpg 1536w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_ai_generated_model_stable_diffusion_concept_art-2048x628.jpg 2048w" sizes="(max-width: 2560px) 100vw, 2560px" /></figure>



<p>What is so new and revolutionary about Sparc?</p>



<p>Well, it does something crazy: <em><strong>it fills in the blanks</strong></em>. The underside, hidden geometry – whatever’s missing from your single image. This workflow makes mass production of 3D assets remarkably feasible compared to old-school photogrammetry (where an abundance of input images has always been necessary to avoid gaps and general geometry syphilis).</p>



<p>&#8212;</p>



<h2 class="wp-block-heading">Limitations of SPARC3D (And What Needs Improvement)</h2>



<p>Is the resulting model a mess? Yeah – it’s not fully production-ready (yet). You still get dense meshes, no real topology flow, no textures (<em>in Sparc3D at least – some other tools can do textures</em>). It&#8217;s still a pain in the back to do retopo. It&#8217;ll probably take a few more research papers to change that.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="2104" height="862" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/geometry_topology_of_sparc3d_01.jpg" alt="SPARC3D AI 3D Reconstruction from Image" class="wp-image-17574" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/geometry_topology_of_sparc3d_01.jpg 2104w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/geometry_topology_of_sparc3d_01-768x315.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/geometry_topology_of_sparc3d_01-1536x629.jpg 1536w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/geometry_topology_of_sparc3d_01-2048x839.jpg 2048w" sizes="(max-width: 2104px) 100vw, 2104px" /></figure>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="731" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_topology_blender_ai_generate_3d_01-1170x731.jpg" alt="SPARC3D AI 3D Reconstruction from Image" class="wp-image-17580" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_topology_blender_ai_generate_3d_01-1170x731.jpg 1170w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_topology_blender_ai_generate_3d_01-720x450.jpg 720w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/sparc3d_topology_blender_ai_generate_3d_01-340x213.jpg 340w" sizes="(max-width: 1170px) 100vw, 1170px" /></figure>



<p>Nevertheless, the surface quality is a generation ahead. Flat surfaces are actually mostly flat now. The connections are smoother and more coherent. The geometry actually doesn&#8217;t look that bad (I&#8217;m surprised to say that).</p>



<p>What sets Sparc apart from traditional photogrammetry, where meticulous coverage is key, is its ability to infer geometry <strong>on the hidden side of an object</strong>. It even tackles thin, intricate parts surprisingly well &#8211; something previous 3D reconstruction tools always struggled with.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="731" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/image-1-1170x731.jpg" alt="SPARC3D AI 3D Reconstruction from Image" class="wp-image-17575" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/image-1-1170x731.jpg 1170w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/image-1-720x450.jpg 720w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/image-1-340x213.jpg 340w" sizes="(max-width: 1170px) 100vw, 1170px" /></figure>



<p>To be fair, gaussian splatting solved thin and furry stuff to *some* extent, but this is a step forward (and it outputs good old polygons, unlike gaussian splatting).</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876325525?h=4d8b8a3603&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Neural Radiance Fields - Gaussian Splatting - Inria's open source app"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h2 class="wp-block-heading">The &#8216;Traditional&#8217; Photoscan Apps Will Have to Upgrade Their Tech</h2>



<p>I&#8217;m dying to see what <a href="https://www.realityscan.com/en-US">RealityScan&#8217;s</a> (aka Reality Capture&#8217;s) response will be. I bet the devs are already speedrunning ways to integrate tech like this into their app – which is one of the best already, and just waits for that extra push. Gosh, I can only imagine the power it&#8217;ll give to a tech that has been somewhat overdependent on physical-world constraints (objects that are often impossible to shoot from all sides, or cars covered in bird excrement that makes them difficult to scan).</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/photogrammetry_tutorial_creative_shrimp_reality_scan_01-1170x720.jpg" alt="Photogrammetry - RealityScan" class="wp-image-17576"/></figure>



<p>And how would it interplay with <a href="https://www.youtube.com/watch?v=F0H3NAHP9r0">the latest advances in triangle splatting</a>, 3D gaussian splatting and more – another tech that&#8217;s becoming mainstream really quickly?</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876358366?h=81dfd1614c&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="NERF, Gaussian Splatting - Reflections/Transparency demo"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h2 class="wp-block-heading">The AI Slop Wave Incoming?</h2>



<p>The blend of increased speed and quality in this new diffusion-based method makes it clear (to me) that we&#8217;re heading for an avalanche of AI slop. Soon we&#8217;ll see a lot more models that are <em>good enough</em>, mass-produced and flooding the market. As a viewer of our original video pointed out: the future will be <em>good enough</em>.</p>



<p>Putting aside the massive issue of the environment for a moment, this raises big questions about copyright, too, and how marketplaces like <a href="https://sketchfab.com/feed">Sketchfab</a> will handle it. Remember <a href="https://80.lv/articles/this-one-fab-user-has-over-38-000-ai-generated-assets">that user who uploaded 38,000 AI assets to Fab</a>? </p>



<p>We&#8217;ll see more of that.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="968" height="1207" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/fab_ai-generated_models_01.jpg" alt="Fab" class="wp-image-17577" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/fab_ai-generated_models_01.jpg 968w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/fab_ai-generated_models_01-768x958.jpg 768w" sizes="(max-width: 968px) 100vw, 968px" /></figure>



<p>&#8212;</p>



<h2 class="wp-block-heading">Blender&#8217;s Role: The Hub</h2>



<p>So what is <a href="https://www.blender.org/">Blender</a>&#8216;s role in all that? With a huge influx of new 3D props (if and when it happens), Blender effectively becomes <strong><em>the hub</em></strong> – the central point where all elements are assembled and made sense of.</p>



<p>As we&#8217;ve mentioned, Sparc-generated assets are similar to photogrammetry-exported assets: messy, dense, high-poly. They will have to be prepared for export to Unreal Engine, Unity, Godot or other game engines, or simply for use within Blender for scene creation. From this perspective, very little will change for 3D artists&#8230; <em>just yet</em>.</p>



<p>Your skills remain valuable, YOU remain valuable and this will likely continue&#8230; </p>



<p>for some time.</p>



<p>&#8212;</p>



<p>If you&#8217;re looking to take these high-poly, often messy assets and optimize them for games or other production pipelines, our <a href="https://www.creativeshrimp.com/game-asset-workflow-complete-blender-guide">Game Asset Creation Course</a> focuses specifically on retopology, optimization, UV maps and efficient asset preparation.</p>



<p>Just mentioning.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="731" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/game_asset_workflow_creativeshrimp_blender_course_cs_01-1170x731.jpg" alt="Game Asset Workflow course for Blender " class="wp-image-17578" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/game_asset_workflow_creativeshrimp_blender_course_cs_01-1170x731.jpg 1170w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/game_asset_workflow_creativeshrimp_blender_course_cs_01-720x450.jpg 720w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/game_asset_workflow_creativeshrimp_blender_course_cs_01-340x213.jpg 340w" sizes="(max-width: 1170px) 100vw, 1170px" /></figure>



<p>&#8212;</p>



<h2 class="wp-block-heading">Creativity vs Complacency</h2>



<p>This new accessibility that AI-driven tools bring is pretty exciting – and as many of you pointed out <a href="https://www.youtube.com/watch?v=XRFlnXeOdww">in the comments to our video</a>, pretty disturbing as well. If you have an image – whether your own creation, something found online, or (god forbid) AI-generated – you are now significantly closer to realizing that thing in 3D.</p>



<p>From there, the possibilities to build your own worlds and tell your own stories are immense.</p>



<p>However, as Aidy pointed out when proofreading this article: fast and easy does not always equate to better or more meaningful. Like processed fast food, it&#8217;s usually the easy option but best consumed with caution. AI should probably serve as a tool to augment our creativity, not diminish the artistic skills that make our work distinctive. Think atrophied creative muscles.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="2088" height="998" src="https://www.creativeshrimp.com/wp-content/uploads/2025/07/ai_generated_3d_models_01.jpg" alt="SPARC3D AI 3D Reconstruction from Image" class="wp-image-17581" srcset="https://www.creativeshrimp.com/wp-content/uploads/2025/07/ai_generated_3d_models_01.jpg 2088w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/ai_generated_3d_models_01-768x367.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/ai_generated_3d_models_01-1536x734.jpg 1536w, https://www.creativeshrimp.com/wp-content/uploads/2025/07/ai_generated_3d_models_01-2048x979.jpg 2048w" sizes="(max-width: 2088px) 100vw, 2088px" /></figure>



<p>Tech like SPARC will spark (ahem) new workflows and a lot of questions: Where does it fit? Kitbashing? Concept art? What about automatic retopo, for heaven&#8217;s sake? Expect tools like ZBrush&#8217;s ZRemesher or <a href="https://exoside.com/">Quad Remesher</a> to become even more valuable, as clean topology could be in higher demand than ever.</p>



<p>But why, oh why, isn&#8217;t AI-based automatic retopology a thing yet?</p>



<p>Also, will Blender cement its role as the hub where such content is assembled?</p>



<p>&#8212;</p>



<p>So many questions!</p>



<p>So, what do YOU think? Do you feel threatened, or see this as an opportunity to level up your workflow? Let us know <a href="https://discord.gg/nNReRkzWsh">on Discord</a>!</p>



<p>That’s it from us, Gleb Alexandrov and Aidy Burrows at <a href="/">Creative Shrimp</a>, keeping you updated on the latest in computer graphics.</p>



<p>&#8212;</p>



<h2 class="wp-block-heading">Links</h2>



<p><a href="https://lizhihao6.github.io/Sparc3D/">https://lizhihao6.github.io/Sparc3D/</a> &#8211; Sparc3D &#8211; Sparse Representation and Construction for High-Resolution 3D Shapes Modeling by Nanyang Technological University (research paper)</p>



<p><a href="https://huggingface.co/spaces/ilcve21/Sparc3D">https://huggingface.co/spaces/ilcve21/Sparc3D</a> &#8211; live demo</p>



<p><a href="https://huggingface.co/spaces/tencent/Hunyuan3D-2.1">Tencent&#8217;s Hunyuan 3D 2.1 (open source) alternative</a></p>



<p><a href="https://www.creativeshrimp.com/turn-2d-images-into-3d-in-a-few-clicks-true-depth-add-on-for-blender.html">TrueDepth add-on for Blender &#8211; tutorial</a></p>



<p><a href="https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg">Dr. Károly Zsolnai-Fehér &#8211; Two Minute Papers channel</a> </p>



<p>&#8212;</p>



<p><a href="https://www.creativeshrimp.com/game-asset-workflow-complete-blender-guide">Game Asset Creation Course</a> <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f990.png" alt="🦐" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p><a href="https://superhivemarket.com/products/photogrammetry-course">Photogrammetry Course: Photoreal 3D With Blender And Reality Capture</a> <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f990.png" alt="🦐" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p><a href="https://discord.gg/nNReRkzWsh">Creative Shrimp Discord</a> <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f990.png" alt="🦐" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p><a href="https://www.youtube.com/@GlebAlexandrov">Gleb Alexandrov Youtube</a> <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f990.png" alt="🦐" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>&#8212;</p>



<p><a href="https://www.youtube.com/watch?v=XRFlnXeOdww">Are We Screwed? The original Youtube video</a></p>



<p><a href="https://www.youtube.com/@pixelreconstruct">PixelReconstruct Youtube channel &#8211; gaussian splatting and more</a></p>



<p><a href="https://www.creativeshrimp.com/photoreal-environments-and-photogrammetry.html">Photoreal Environments and Photogrammetry: an article</a></p>



<p><a href="https://www.realityscan.com/en-US">RealityScan</a></p>



<p><a href="https://www.blender.org/">Blender</a></p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/sparc3d-ai-mesh-from-image.html">SPARC3D: Revolutionary Single-Image 3D Reconstruction Explained (with Live Demo)</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>I Photoscanned… a Nebula</title>
		<link>https://www.creativeshrimp.com/gaussian-splatting-tutorial.html</link>
		
		<dc:creator><![CDATA[Gleb Alexandrov]]></dc:creator>
		<pubDate>Mon, 01 Jan 2024 16:30:24 +0000</pubDate>
				<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<guid isPermaLink="false">https://www.creativeshrimp.com/?p=17177</guid>

					<description><![CDATA[<p>Basically I&#8217;ve just photoscanned a fully volumetric Blender nebula using 3d Gaussian Splatting. Links: Luma AI Polycam Nebula gaussian splats in Polycam (preview) Nebula gaussian splats in Luma AI (preview) Getting Started With 3D Gaussian Splatting for Windows (Beginner Tutorial) GitHub Repo for Windows Nebula course on BlenderMarket More about photogrammetry workflows (including Gaussian Splatting)</p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/gaussian-splatting-tutorial.html">I Photoscanned… a Nebula</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Basically, I&#8217;ve just photoscanned a fully volumetric Blender nebula using 3D Gaussian Splatting.</p>



<span id="more-17177"></span>


<div class="youtube-responsive-container"><iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/AZn8lu2jqow?si=IXQVlMOeVV2IY8qP" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div>



<h2 class="wp-block-heading">Links:</h2>



<p><a href="https://lumalabs.ai/">Luma AI</a></p>



<p><a href="https://poly.cam/">Polycam</a></p>



<p><a href="https://poly.cam/capture/68165413-8438-4419-8e36-e78ae7b0f9b5">Nebula gaussian splats in Polycam (preview)</a></p>



<p><a href="https://lumalabs.ai/embed/b86b7928-f130-40a5-8cac-8095f30eed54?mode=sparkles&amp;background=%23ffffff&amp;color=%23000000&amp;showTitle=true&amp;loadBg=true&amp;logoPosition=bottom-left&amp;infoPosition=bottom-right&amp;cinematicVideo=undefined&amp;showMenu=false">Nebula gaussian splats in Luma AI (preview)</a></p>



<p><a href="https://www.youtube.com/watch?v=UXtuigy_wYc&amp;t=2141s&amp;ab_channel=TheNeRFGuru">Getting Started With 3D Gaussian Splatting for Windows (Beginner Tutorial)</a></p>



<p><a href="https://github.com/jonstephens85/gaussian-splatting-Windows">GitHub Repo for Windows</a></p>



<p><a href="https://blendermarket.com/products/nebula-course">Nebula course on BlenderMarket</a></p>



<p><a href="/photoreal-environments-and-photogrammetry.html">More about photogrammetry workflows (including Gaussian Splatting)</a></p>



<p><a href="https://creativeshrimp.gumroad.com/l/nebula-course">Nebula course on Gumroad</a></p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/gaussian-splatting-tutorial.html">I Photoscanned… a Nebula</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Photoreal Environments and Photogrammetry: 5 Workflows For 3D Artists and Environment Artists</title>
		<link>https://www.creativeshrimp.com/photoreal-environments-and-photogrammetry.html</link>
		
		<dc:creator><![CDATA[Gleb Alexandrov]]></dc:creator>
		<pubDate>Mon, 06 Nov 2023 15:33:34 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<guid isPermaLink="false">https://www.creativeshrimp.com/?p=16488</guid>

					<description><![CDATA[<p>Photogrammetry opens up a whole new realm of creative potential for 3d artists and environment artists. In this article, written as a follow-up to Gleb&#8217;s Blender Conference 2023 talk, we&#8217;ll explore how we can tap into this exciting technique for creating environments that look authentic and real. Photoreal, if you wish. &#8212; As it</p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/photoreal-environments-and-photogrammetry.html">Photoreal Environments and Photogrammetry: 5 Workflows For 3D Artists and Environment Artists</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Photogrammetry opens up a whole new realm of creative potential for 3d artists and environment artists. In this article, written as a follow-up to<a href="https://www.youtube.com/watch?v=flb8RP8F8_E"> <strong>Gleb&#8217;s Blender Conference 2023 talk</strong></a>, we&#8217;ll explore how we can tap into this exciting technique for creating environments that look authentic and real. Photoreal, if you wish.</p>



<p>&#8212;</p>


<div class="youtube-responsive-container"><iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/flb8RP8F8_E?si=I2abF3pnqHbjTxaY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div>



<p>As it turns out, we’ve been using photogrammetry quite a lot in our <a href="/">Creative Shrimp</a> projects: to digitize <a href="https://www.artstation.com/artwork/nbvQE">historical artifacts</a>, and to capture details that would be a pain in the neck to model <a href="https://www.artstation.com/artwork/xZwx2">in any other way</a>. We’ve also built some miniature movie sets, like this Millennium Falcon, and then photo-scanned them for fun. </p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="2560" height="1204" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/demo_reel_01.jpg" alt="" class="wp-image-16496" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/demo_reel_01.jpg 2560w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/demo_reel_01-768x361.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/demo_reel_01-1536x722.jpg 1536w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/demo_reel_01-2048x963.jpg 2048w" sizes="(max-width: 2560px) 100vw, 2560px" /></figure>



<p>Most importantly though, we&#8217;ve been using photoscanning to create life-like 3d environments. Mostly <a href="https://www.artstation.com/artwork/9NygoW">abandoned hallways</a> and such, made of the shards of Brest (Belarus), Warsaw, Vilnius and other places we&#8217;ve been to.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876466901?h=6d81802c25&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Creative Shrimp - Photoscanned Environments"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">A GENERIC PHOTOSCANNING WORKFLOW</h3>



<p>There are a few really interesting things that we can do with photogrammetry as 3d artists and as environment artists&#8230;<br /><br />But first, here’s a quick refresher on what a typical photogrammetry workflow entails.</p>



<ul>
<li>capture a bunch of photos of an object or an interior space from different angles, using the best capturing device you have and making sure there&#8217;s enough overlap between the photos and everything’s sharp and clear;</li>



<li>load these photos into your photogrammetry software of choice, e.g. <a href="https://github.com/alicevision/Meshroom">Meshroom</a>, <a href="https://colmap.github.io/">Colmap</a>, <a href="https://www.agisoft.com/">Metashape</a>, <a href="https://www.capturingreality.com/">Reality Capture</a>, etc.;</li>



<li>let the software calculate similarities between the source photos and spit out a point cloud that can then be used to deliver a high-density 3d surface;</li>



<li>optionally, with a texture.</li>
</ul>



<p>That’s how it works. That’s the generic workflow anyway.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/875952235?h=e72a47a78b&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Photogrammetry - the generic workflow"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">CAPTURING EQUIPMENT</h3>



<p>What equipment do you need to start photo-scanning?</p>



<p>We often hear that all you need is a device with a sensor, and you can start doing photogrammetry. And that is true. But as a 3d artist, I find it somewhat frustrating how much photographic equipment has accumulated over time, in addition to my camera. (I use a Sony A7iii with a Tamron 24-75mm f/2.8 lens, or some of my primes, because primes are generally sharper; but there was a time when we were using a Canon 80D with a Sigma 17-55 f/2.8, or pretty much any DSLR or mirrorless camera available.)</p>



<p>Then you start thinking about getting a tripod. It is a must-have for keeping shots steady.</p>



<p>A ruler and a color checker come next.</p>



<p>And then: “Should we get a drone?”</p>



<p>It escalates quickly.</p>



<p>Let&#8217;s put it like this: a low- or mid-range DSLR or mirrorless camera is probably all you need if you&#8217;ve just started.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/devices_thumbnail_01-1170x720.jpg" alt="" class="wp-image-16532"/></figure>



<p>But what camera is best for photoscanning? <a href="https://www.youtube.com/watch?v=ZN8-tzqBLTs">This video</a> is as close as it gets to an answer. There, we talk about the 3 parameters of any capturing device that make all the difference in photoscanning quality. </p>



<p>And in this one we talk about <a href="https://www.youtube.com/watch?v=JLdxBtECGuc">the optimal camera settings</a> for capturing the highest quality photos for 3d reconstruction.</p>


<div class="youtube-responsive-container"><iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/ZN8-tzqBLTs?si=ubpNpfMfdoO4GfBR" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div>
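<p>One concrete number hiding behind &#8220;sharp and clear&#8221; is the hyperfocal distance: focus there, and everything from roughly half that distance to infinity stays acceptably sharp. A minimal sketch using the standard formula (the example values are ours, not settings from the video):</p>

```python
def hyperfocal_mm(focal_length_mm: float, f_number: float,
                  circle_of_confusion_mm: float = 0.03) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres.

    c = 0.03 mm is the commonly used circle-of-confusion value for full frame.
    """
    f = focal_length_mm
    return f * f / (f_number * circle_of_confusion_mm) + f

# A 24mm lens stopped down to f/8 on a full-frame body:
h = hyperfocal_mm(24, 8)
print(round(h / 1000, 2), "m")  # -> 2.42 m
```

<p>Focusing at ~2.4 m in that case keeps everything from ~1.2 m to infinity in usable focus, which is handy when walking through a location instead of refocusing every shot.</p>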



<p>&#8212;</p>



<h3 class="wp-block-heading">PHOTOSCANNING SOFTWARE</h3>



<p>As for the software, there are many options to choose from.</p>



<p>What we’ve been using mostly is <a href="https://www.capturingreality.com/">Reality Capture</a> and <a href="https://www.agisoft.com/">Agisoft Metashape</a>. Fantastic tools, really fast and good quality. And commercial as well. Reality Capture has an interesting pricing option called PPI or Pay-per-Input (<a href="https://www.youtube.com/watch?v=z3aSA0qXsIg">here&#8217;s our video with some advice on optimizing the cost</a>).</p>



<p><a href="https://github.com/alicevision/Meshroom">Meshroom</a> and <a href="https://colmap.github.io/">Colmap</a> are the open-source alternatives. They&#8217;re also quite capable, in fact fantastic considering the price, just not as obscenely fast as Reality Capture and Metashape just yet.</p>



<p>Somewhat counter-intuitively, the open-source tools are arguably <em>more</em> demanding about the quality of the photos (for now). In other words, you may need slightly more expensive equipment and a more powerful computer to get reconstruction results on par with the aforementioned commercial software options.</p>



<p>Still, we should absolutely support open-source, because we know what happens to commercial enterprises.<br /><br />It&#8217;s also worth noting that there are mobile apps like <a href="https://lumalabs.ai/">Luma AI</a> and <a href="https://poly.cam/captures">Polycam</a>.</p>


<div class="youtube-responsive-container"><iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/j3lhPKF8qjU?si=fiDVMokjJgCO9mOy" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div>



<p>&#8212;</p>



<h2 class="wp-block-heading">5 PHOTOGRAMMETRY WORKFLOWS FOR ENVIRONMENT ARTISTS</h2>



<p>When it comes to environmental design and photogrammetry, there are different ways to use the toolset offered by 3d reconstruction software. Here are 5 workflows (not an exhaustive list!) that are distinct in their goals and the type of data that you gather.</p>



<p>Distinct enough so we can write about each of them separately.</p>



<p>1. THE WHOLE SEAMLESS ENVIRONMENTS</p>



<p>2. MODULAR ASSETS</p>



<p>3. CHARACTERS</p>



<p>4. PBR MATERIALS AND MAPS</p>



<p>5. LIGHTING</p>



<p>&#8212;</p>



<h3 class="wp-block-heading">WORKFLOW 1a: (CAPTURING) THE WHOLE SEAMLESS ENVIRONMENTS</h3>



<p>The first workflow on our list is capturing whole seamless environments. This might seem silly: what would you do with such a model anyway? But it&#8217;s the workflow that has given us some amazing location snapshots.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1920" height="810" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/v6_01.jpg" alt="" class="wp-image-16509" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/v6_01.jpg 1920w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/v6_01-768x324.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/v6_01-1536x648.jpg 1536w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>



<p>When we arrived at the abandoned warehouse near Brest (Belarus), our goal was to digitize the Volga car that had been standing there for half a century, collecting dust and bird excrement.</p>



<p>And that’s what we did.</p>



<p>But then, before leaving the place, Nik proposed bringing a drone next time to capture, basically, an isometric view.</p>



<p>And when I saw it reconstructed in Reality Capture, I thought, damn! I can totally imagine this as a game level in some indie RPG that uses the isometric view.</p>



<p>Especially if you pepper it up with dramatic lighting.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/875960853?h=ec42ca8265&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Warehouse Breakdown - Photogrammetry"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Doubly so if you throw in a playable character (this is a quick demo put together in <a href="https://godotengine.org/">Godot</a> by Aidy).</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/875963005?h=516f807fcf&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Godot Test - Photoscanned Environment"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>If you&#8217;re lucky enough to have some amazing historical locations nearby, like this 18th-century Bernardine monastery in Brest (rebuilt into a military hospital, a gem in our collection of abandoned places, photographed some 15 years ago), you can totally get some nice environments captured almost in one piece. Given you have enough time to spend there.</p>



<p>It’s not a super flexible workflow, capturing the entire space like that, walking with a wide-angle lens and an on-camera flash through the dark and cold corridors. But you can’t argue with the photorealistic output it produces. </p>



<p>It just looks… real. Which it is.</p>



<p>Even with a cinematic lighting treatment in Blender it still looks arguably better than many games. It’s hard to beat that kind of photorealism, which comes unfortunately at the cost of flexibility.</p>






<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876020888?h=0ac544dcd7&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Photoscan - Bernardine Monastery, Brest, Belarus"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<p>Aside from that, a notable downside of such an approach is that, since every polygon and texel in such a model is unique, the meshes and textures end up pretty heavy.</p>



<ul>
<li>Good locations are hard to find</li>



<li>Lack of versatility</li>



<li>Performance</li>
</ul>



<p>&#8212;</p>



<p>That being said, even the lower res environment scans found on <a href="https://sketchfab.com/">Sketchfab</a> like this <a href="https://sketchfab.com/3d-models/tholos-do-escoural-83b3d3fca738488289dd584c7962dfa4">funerary monument by Morbase | Museu Virtual</a> can work really well as a base for your next Blender animation.</p>



<p>Once you start throwing in lights, effects and particles.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876324198?h=6762030514&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Fire Pit - Breakdown"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">WORKFLOW 1b: (CAPTURING) THE WHOLE ENVIRONMENTS &#8211; RADIANCE FIELDS</h3>



<p>When it comes to creating <em>complete snapshots</em> of environments, neural radiance fields (or NeRFs) seem to do an even better job.</p>



<p>Unlike the traditional techniques, these novel view-synthesis methods can actually capture a lot of extra data: wispy details like vegetation or hair, but, amazingly, also view-dependent material properties like reflection and refraction.</p>



<p>The stuff that was unthinkable using more traditional photogrammetry tools now comes to life thanks to Gaussian Splatting and other NeRF algorithms. </p>



<p>The demo below was created in <a href="https://github.com/jonstephens85/gaussian-splatting-Windows">Inria&#8217;s open-source Gaussian Splatting GitHub app</a>. The beginner tutorial by The NeRF Guru, without which it would have been impossible for me to make sense of it, <a href="https://www.youtube.com/watch?v=UXtuigy_wYc">is linked here</a>. </p>



<p>A caveat: it requires a powerful GPU (I have an RTX 2080 Ti, but an NVIDIA GPU with 24 GB of VRAM or more is recommended).</p>
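<p>To get a feel for where the VRAM goes, here&#8217;s a back-of-the-envelope estimate of the per-gaussian parameters alone (position, scale, rotation, opacity and degree-3 spherical-harmonics color, as stored by the Inria implementation); training overhead comes on top of this, so treat it as a lower bound:</p>

```python
def splat_param_gib(num_gaussians: int, sh_degree: int = 3) -> float:
    """Approximate float32 storage for the gaussian parameters alone, in GiB."""
    sh_coeffs = 3 * (sh_degree + 1) ** 2             # RGB x 16 coefficients at degree 3
    floats_per_gaussian = 3 + 3 + 4 + 1 + sh_coeffs  # pos, scale, quaternion, opacity, SH
    return num_gaussians * floats_per_gaussian * 4 / 2**30

# A typical trained scene can end up with millions of gaussians:
print(round(splat_param_gib(5_000_000), 2), "GiB (parameters only)")
```

<p>A few million gaussians already cost on the order of a gigabyte just to store, which is why training, with its gradients and optimizer state, wants a much bigger card.</p>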



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1920" height="1080" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/radiance_fields_thumbnail_01.jpg" alt="" class="wp-image-16930" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/radiance_fields_thumbnail_01.jpg 1920w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/radiance_fields_thumbnail_01-768x432.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/radiance_fields_thumbnail_01-1536x864.jpg 1536w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876325525?h=4d8b8a3603&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Neural Radiance Fields - Gaussian Splatting - Inria's open source app"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>Speaking of reflections, here’s a quick demo of the highly glossy scenery that we built and scanned just before going to <a href="https://conference.blender.org/2023/">Blender Conference 2023</a>. </p>



<p>Honestly, it’s pretty mind-boggling to see it captured by means of photogrammetry at all.</p>



<p>I wonder what happens when <a href="https://twitter.com/laanlabs/status/1704939447521845398">the animation part</a> of the puzzle falls into place. Will the photogrammetry bingo card finally be complete?</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1280" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/glass_thumbnail_01a.jpg" alt="" class="wp-image-16933" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/glass_thumbnail_01a.jpg 1280w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/glass_thumbnail_01a-768x432.jpg 768w" sizes="(max-width: 1280px) 100vw, 1280px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876358366?h=81dfd1614c&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="NERF, Gaussian Splatting - Reflections/Transparency demo"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>This tech is becoming mainstream really quickly, too. We can already try gaussian splatting in the <a href="https://lumalabs.ai/">Luma AI</a> and <a href="https://poly.cam/gaussian-splatting">Polycam</a> apps on our smartphones. </p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876359186?h=b0cd8a1e3e&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Luma AI / Polycam demo of Gaussian Splattering"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<ul>
<li>tricky to edit</li>



<li>non-versatile</li>
</ul>



<p>The problem with radiance fields is that it isn’t super clear how to edit this stuff just yet. In other words, the tools for editing gaussian splats and NeRFs are still in their infancy.</p>



<p>But this tech is developing fast. We should watch this space.</p>



<p>P.S. Here you can see a demo testing the thin-structure capabilities of gaussian splatting in Luma AI.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876359879?h=112bc77c73&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Luma AI Gaussian Splatting demo"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">WORKFLOW 2. (EXTRACTING) MODULAR ASSETS</h3>



<p>A far more versatile workflow of course would be a modular one.</p>



<p>Meaning, finding a cool location and dissecting it for <em>props</em>, with the goal of creating an asset library that captures the essence of this environment. So we can make <em>any</em> kind of environment like the one we&#8217;ve just scanned.</p>



<p>Incidentally, that&#8217;s the industry standard. </p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1280" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_props_header_01.jpg" alt="" class="wp-image-16935" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_props_header_01.jpg 1280w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_props_header_01-768x432.jpg 768w" sizes="(max-width: 1280px) 100vw, 1280px" /></figure>



<p>&#8212;</p>



<p>Going back to our warehouse environment, we could have scanned:</p>



<ul>
<li>The hero assets like this Volga<br /></li>



<li>The medium and small-scale props<br /></li>



<li>Surface textures<br /></li>



<li>Decals</li>
</ul>



<p>Whatever can be salvaged there. The goal is cohesiveness: scanning such a library around the same environment is like locked-in art direction. Everything in this warehouse was equally dusty and forsaken, coming roughly from the same historical slice. Imagine fine-tuning such a look across dozens of models. Using this workflow, you get it for free.</p>



<p>Have a look at any of the <a href="https://www.artstation.com/scansfactory">Scans Factory</a> packs to see how realistic an environment composed of such modular props can look.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1280" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_workflow_warehouse_thumbnail_01.jpg" alt="" class="wp-image-16936" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_workflow_warehouse_thumbnail_01.jpg 1280w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_workflow_warehouse_thumbnail_01-768x432.jpg 768w" sizes="(max-width: 1280px) 100vw, 1280px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876361178?h=08dc36dde5&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Modular Photoscanned Props - Abandoned Warehouse"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Of course, each asset would then have had to be optimized; meaning, the polycount reduced and the geometry details transferred from the high- to the low-res meshes (as normal maps, for example).</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876362424?h=f1752486ee&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Optimization - Photoscanned Modular Assets"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Sometimes this process is a breeze. </p>



<p>Say, when you can take a prop back home or back to the studio and scan it on a turntable, maybe flipping it during the scanning process so the software reconstructs a full, 360-degree watertight mesh.</p>



<p>Then it requires almost no post-processing, aside maybe from some polycount reduction (which can be done automatically) and texture baking (which can be automated as well, to a certain extent).</p>
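<p>As a side note on the baking step, the target texture size can be derived from a texel density budget. A tiny rule-of-thumb helper (our own illustrative math, not any tool&#8217;s API):</p>

```python
import math

def bake_texture_size(surface_area_m2: float,
                      texel_density_px_per_m: float = 1024) -> int:
    """Smallest power-of-two square texture covering the given surface area
    at the requested texel density (assuming near-full UV space usage)."""
    texels_needed = surface_area_m2 * texel_density_px_per_m ** 2
    side = math.sqrt(texels_needed)
    return 2 ** math.ceil(math.log2(side))

# A small prop with ~1.5 square metres of surface at 1024 px/m:
print(bake_texture_size(1.5))  # -> 2048
```

<p>In practice UV islands waste some space, so you&#8217;d often round one step up from whatever this returns.</p>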



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="2560" height="1440" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/skull_polycount_01_1c.jpg" alt="" class="wp-image-16938" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/skull_polycount_01_1c.jpg 2560w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/skull_polycount_01_1c-768x432.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/skull_polycount_01_1c-1536x864.jpg 1536w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/skull_polycount_01_1c-2048x1152.jpg 2048w" sizes="(max-width: 2560px) 100vw, 2560px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876362951?h=705b6fa3c5&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Turntable - Photogrammetry"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>But it isn’t always like that. In fact, it’s almost never like that!</p>



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<ul>
<li>Planning</li>



<li>Post-pro and clean-up</li>
</ul>



<p>More often than not, raw scans are pretty messy and require <a href="https://www.youtube.com/watch?v=5NY9lOgnpLA">an enormous amount of post-pro</a> to make them usable.</p>



<p>Like, imagine cleaning up something like that.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1280" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_hard_thumbnail_01b.jpg" alt="" class="wp-image-16939" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_hard_thumbnail_01b.jpg 1280w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/modular_hard_thumbnail_01b-768x432.jpg 768w" sizes="(max-width: 1280px) 100vw, 1280px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876364794?h=caa5ca41e1&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="A 14-floor building"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>But anyway, nothing is more satisfying than scattering the photoscanned assets that have been prepared in advance. Gathered in the same location. Carefully optimized. The resulting environments can look just as cohesive as the full location scans.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1920" height="1080" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/scattering_thumbnail_01.jpg" alt="" class="wp-image-16940" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/scattering_thumbnail_01.jpg 1920w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/scattering_thumbnail_01-768x432.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/scattering_thumbnail_01-1536x864.jpg 1536w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876365960?h=ecffdad1a6&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Scattering the Photoscanned Modular Assets"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">WORKFLOW 3. CHARACTERS</h3>



<p>Speaking of modular props, characters also fall into this category and can be made with the help of photogrammetry. Let’s say, background NPCs for your environments, like those seen in <a href="https://www.youtube.com/@IanHubert2">Ian Hubert’s videos</a>.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/characters_thumbnail_02-1170x720.jpg" alt="" class="wp-image-16943"/></figure>



<p>&#8212;</p>



<p>As a quick reminder, it goes like this. First you make a quick scan, add a few bones in <a href="https://www.blender.org/">Blender</a>, then animate these bones to rotate the torso and the head a little bit while recording the motion. And you get a nice background character doing the background character stuff.</p>



<p>The advantage of this workflow is that we don’t have to use a proper A- or T-pose for scanning; we can just scan the characters in the poses they’re supposed to be in, like sitting or riding a bicycle.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876369822?h=8d5f986737&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Quick Photoscanned NPCs"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>A more advanced variation of this workflow would be to apply real motion-capture data to such quick character scans using one of the markerless motion-tracking apps like <a href="https://www.mixamo.com/">Mixamo</a> or <a href="https://www.rokoko.com/products/vision">Rokoko Vision</a>. You can even record your own motion and transfer it to the skeleton with these apps, like <a href="https://twitter.com/AidyBurrows3D">Aidy</a> is doing here.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876371159?h=4d2f4a5ba9&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Mixamo to Blender, Rokoko to Blender - Motion Capture Workflow"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>Still, it&#8217;s way simpler if the character doesn&#8217;t need to move.</p>



<p>People tend to fidget and make the scanning process more complicated than it needs to be. It’s my motionless body you see in this scan by the way.</p>



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<ul>
<li>people are tricky to scan</li>
</ul>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/characters_thumbnail_03-1170x720.jpg" alt="" class="wp-image-16944"/></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876372648?h=f6bfb84096&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Character Scan - Still"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">WORKFLOW 4. PBR MATERIALS AND MAPS</h3>



<p>Yet another application for photogrammetry in environment creation is, of course, capturing the PBR materials.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1280" height="712" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/pbr_maps_header_01.jpg" alt="" class="wp-image-16945" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/pbr_maps_header_01.jpg 1280w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/pbr_maps_header_01-768x427.jpg 768w" sizes="(max-width: 1280px) 100vw, 1280px" /></figure>



<p>The process is no different from capturing props (or characters for that matter!): first we take a bunch of photos of whatever surface we want to scan, hovering over the muddy ground like a fairy; then reconstruct a high-definition mesh in photogrammetry software.</p>



<p>But then, we bake the detail from that mesh onto a flat plane to derive such maps as:</p>



<ul>
<li>the diffuse color (or albedo),</li>



<li>ambient occlusion,</li>



<li>normals,</li>



<li>and, most importantly, the height data, which captures the surface displacement detail</li>
</ul>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1280" height="720" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/pbr_maps_thumbnail_01.jpg" alt="" class="wp-image-16959" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/pbr_maps_thumbnail_01.jpg 1280w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/pbr_maps_thumbnail_01-768x432.jpg 768w" sizes="(max-width: 1280px) 100vw, 1280px" /></figure>
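To make the baking idea concrete, here is a minimal pure-Python sketch of how a height map can be turned into a tangent-space normal map via finite differences. The function name and the <code>strength</code> parameter are illustrative, not what any particular baker actually uses; real bakers ray-cast from the high-poly mesh instead of differentiating a grid.

```python
import math

def height_to_normals(height, strength=1.0):
    """Convert a 2D grid of height values (0..1) into per-pixel
    tangent-space normals using central differences; returns
    (nx, ny, nz) tuples with components in -1..1."""
    h = len(height)
    w = len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences with edge clamping; the 2-pixel
            # step is folded into the 'strength' scale factor
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # Normalize the surface normal of the local gradient plane
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals

# A flat height field yields normals pointing straight up
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normals(flat)[0][0])
```

This is also, roughly, what a bump node does at shading time when you feed it the baked height map directly instead of a normal map.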



<p>Then, these maps can be made seamless in <a href="https://www.gimp.org/">Gimp</a> by offsetting them and painting across all of them at once with the clone brush. This is a fairly new feature that mirrors the <a href="https://affinity.serif.com/en-gb/photo/">Affinity Photo</a> approach to multi-layer editing, and we&#8217;re proud to have helped shape its development for Gimp together with <a href="https://twitter.com/zemarmot">ZeMarmot</a>.</p>



<p>And voila! We’ve got the ingredients for a seamless PBR material.</p>



<p>Shaders like that work extremely well with displacement. No surprise: in the end we feed them a real heightmap derived from a real surface. The result looks extremely realistic (and runs in realtime thanks to Eevee Next).</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876375467?h=3a84973075&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Capturing PBR Materials - Photoscanning"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Additionally, we can use the extracted heightmaps for blending between different materials to create much-needed variation: in <a href="https://www.blender.org/">Blender</a> via nodes or, alternatively, in <a href="https://quixel.com/mixer">Quixel Mixer</a>.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876377270?h=453a90262d&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="PBR Materials blending in Blender"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>
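As a rough illustration of what a height-based blend node setup does under the hood, here is a hypothetical pure-Python version of the classic height-lerp: whichever material "sticks up" higher at a pixel wins, the paint mask biases the comparison, and a contrast value controls how soft the transition is. The function name and parameters are invented for this sketch.

```python
def height_blend(color_a, color_b, height_a, height_b, mask, contrast=0.2):
    """Blend two material colors at one pixel by comparing their
    mask-offset heights; returns the blended color tuple."""
    # Offset each height by the blend mask, then compare
    delta = (height_a + (1.0 - mask)) - (height_b + mask)
    # Map the difference to a 0..1 factor with a soft edge of width 'contrast'
    t = min(max(0.5 + delta / (2.0 * contrast), 0.0), 1.0)  # 1.0 means A wins
    return tuple(a * t + b * (1.0 - t) for a, b in zip(color_a, color_b))

# Mud (red, tall here) completely covers gravel (blue, low here)
print(height_blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.9, 0.1, 0.5))
```

In node terms this is a Map Range (or a sharpened Mix factor) driven by the difference of the two height maps plus a painted mask.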



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<ul>
<li>less flexible than procedural workflows (in Substance Designer and so on)</li>
</ul>



<p>Aside from that, variation isn&#8217;t the strongest suit of this technique. Photoscanned materials are quite limited by the source footage: by its resolution and other properties, and simply by physical availability, unlike their procedural counterparts.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876378316?h=d2402e1c07&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Skull - Photogrammetry"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h3 class="wp-block-heading">WORKFLOW 5. LIGHTING</h3>



<p>The last type of data that can be extracted from a real-world location, and the last we’ll look at in this article, is lighting. Lighting can be extracted and thrown away, or extracted and… cherished.</p>



<p>Let’s talk about removing it first.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="1170" height="731" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/overexposure_01-1170x731.jpg" alt="" class="wp-image-16947" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/overexposure_01-1170x731.jpg 1170w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/overexposure_01-720x450.jpg 720w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/overexposure_01-340x213.jpg 340w" sizes="(max-width: 1170px) 100vw, 1170px" /></figure>



<p>&#8212;</p>



<p>Typically, the shadows, at least the indirect ones, are removed from textures as soon as possible with the help of <a href="https://www.agisoft.com/downloads/installer/">Agisoft Texture De-Lighter</a> or other de-lighting tools.</p>



<p>So we can set our own lighting in Blender.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876381181?h=747c50d1e6&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Agisoft Texture De-lighter"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Alternatively, the lighting can be flattened during the capturing phase by using an on-camera flash. This also gives you the supernatural ability to shoot in the dark.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876381521?h=becc5146e6&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Using an on-camera flash to flatten out the lighting - Photogrammetry"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Or we can go as far as removing both shadows and reflections using the flash AND polarizing filters, in a process called cross-polarization.</p>



<p>We don’t have time to go into detail here, but cross-polarization happens when you put a polarizing filter on both the lens and the light.</p>



<p>Like <a href="https://www.youtube.com/watch?v=yhjKO1a99OQ">in this video by James Candy</a>.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876381915?h=10b28a1080&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Cross-polarization demo (by Classy Dog, James Candy)"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>
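The physics behind this trick is Malus&#8217;s law: polarized light passing through a second polarizer is attenuated by the squared cosine of the angle between the two filters. Specular reflections keep the flash&#8217;s polarization, so crossing the lens filter at 90 degrees kills them, while diffuse light gets depolarized by the surface and partially passes through. A tiny illustrative sketch:

```python
import math

def malus(i0, theta_deg):
    """Malus's law: transmitted intensity of polarized light of
    intensity i0 after an analyzer at angle theta to its polarization."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(malus(1.0, 0))   # aligned filters pass (almost) everything
print(malus(1.0, 90))  # crossed filters block the polarized specular component
```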



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<p>If you were capturing for a virtual movie set, though, the lighting data would become precious. Definitely worth retaining!</p>



<p>To capture the full range of light, each shot can be taken multiple times at different exposure settings (a technique called exposure bracketing), then merged into a single HDR image, in <a href="https://www.darktable.org/">Darktable</a> for example.</p>
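To give a flavor of what the merge step does, here is a minimal, Debevec-style weighted merge of a single pixel across bracketed exposures. It assumes a linear camera response (real tools like Darktable also recover the response curve), and the function names are hypothetical.

```python
def merge_hdr(pixels, exposure_times):
    """Estimate scene radiance at one pixel from bracketed shots.
    pixels: the pixel's 0..1 values at each exposure;
    exposure_times: the corresponding shutter times in seconds."""
    def weight(v):
        # Trust mid-range values; distrust near-black / near-white ones
        return 1.0 - abs(2.0 * v - 1.0)

    num = den = 0.0
    for v, t in zip(pixels, exposure_times):
        w = weight(v)
        num += w * (v / t)  # this exposure's radiance estimate
        den += w
    return num / den if den > 0 else 0.0

# The same pixel seen at 1/4s and 1/8s agrees on a radiance of 2.0
print(merge_hdr([0.5, 0.25], [0.25, 0.125]))
```

Fully saturated samples get zero weight, which is exactly why the bracketed shots are needed: every part of the scene must be well-exposed in at least one frame.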



<p>The photoscan can then be made from such high dynamic range images. As a result, the environment emits a proper amount of light, so that 3d objects placed into the scene receive a proper amount of light and integrate seamlessly.</p>



<p>Imagine a better HDRI panorama that is fully three-dimensional &#8211; this is it.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1530" height="714" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/hdr_environment_01_1a.jpg" alt="" class="wp-image-16966" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/hdr_environment_01_1a.jpg 1530w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/hdr_environment_01_1a-768x358.jpg 768w" sizes="(max-width: 1530px) 100vw, 1530px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876381643?h=094722f067&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="HDR Photoscan - 3d Environment"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p><strong>THE CONS</strong></p>



<ul>
<li>Processing for HDR takes time</li>



<li>Limited versatility (the use-cases are quite specific)</li>
</ul>



<p>Unfortunately, preparing the HDR images can be time-consuming and the use-cases for such a workflow are truly specific.</p>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876383687?h=f0c66e2cd8&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="HDR Photoscanned Environment, Character Test | Photogrammetry"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<h2 class="wp-block-heading">CONCLUSION</h2>



<p>Alright! These are just some of the things we can do with photogrammetry, as 3d artists.</p>



<p>Even if the output isn&#8217;t always versatile, the tech itself certainly is. There is something for everyone.</p>



<p>So whether you’re a 3d artist, a director, a cinematographer or a level designer, or maybe a scientist or an archaeologist, photogrammetry gives us powerful tools and workflows to bridge the gap between our physical world and the digital realms we create.</p>



<figure class="wp-block-image size-full"><img decoding="async" loading="lazy" width="1920" height="966" src="https://www.creativeshrimp.com/wp-content/uploads/2023/10/photoscanning_and_blender_01.jpg" alt="" class="wp-image-16963" srcset="https://www.creativeshrimp.com/wp-content/uploads/2023/10/photoscanning_and_blender_01.jpg 1920w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/photoscanning_and_blender_01-768x386.jpg 768w, https://www.creativeshrimp.com/wp-content/uploads/2023/10/photoscanning_and_blender_01-1536x773.jpg 1536w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/876424968?h=b61be4872a&amp;badge=0&amp;autopause=0&amp;quality_selector=1&amp;progress_bar=1&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Photoreal Environments with Photogrammetry | Demo rendered in Blender"></iframe></div>
<p><script src="https://player.vimeo.com/api/player.js"></script></p>



<p>&#8212;</p>



<p>Before we wrap it up, I&#8217;d like to say thanks to everyone who helped us on the ground to gather the content: Eugeny, Nik, Pawel, Kaciaryna, Ryncuk, Aidy, Lena &#8211; thank you so much indeed!</p>



<p>&#8212;</p>



<p>Feel free to check our <a href="https://www.creativeshrimp.com/photogrammetry-course">Photogrammetry Course</a>. Thank you!</p>
<p>The post <a rel="nofollow" href="https://www.creativeshrimp.com/photoreal-environments-and-photogrammetry.html">Photoreal Environments and Photogrammetry: 5 Workflows For 3D Artists and Environment Artists</a> appeared first on <a rel="nofollow" href="https://www.creativeshrimp.com">Creative Shrimp</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
