Turning a VIDEO into 3D using LUMA AI and BLENDER!
TLDR
This video showcases the groundbreaking technology of Luma AI, which enables the conversion of video into 3D models using photogrammetry. The process involves capturing an object on video from various angles and then uploading the footage to Luma AI's website. Despite challenges such as low light and reflective surfaces, the AI successfully separates the object from its background, creating detailed 3D models. The video compares results from different sources, including an iPhone and a Sony DSLR, highlighting the superior quality of the DSLR footage due to its closer proximity and higher resolution. The technology's potential is evident even with a difficult subject like a car with reflective paint. The creator plans to use the generated 3D assets in a short video to test their performance in 3D software, promising further exploration of the technology's capabilities in future content.
Takeaways
- 🚀 Luma AI has enabled video to photogrammetry, which allows turning videos into 3D models.
- 🕒 The process was demonstrated to be relatively quick, with the video to 3D conversion taking about 20 to 30 minutes per clip.
- 📹 The video source can vary, with examples given from both an iPhone and a Sony DSLR.
- 📈 The AI technology separates the scene from the object in the video, which was shown to work effectively in the examples provided.
- 📦 Downloadable 3D models come in formats like glTF, and there's also an Unreal Engine plugin available.
- 🛠️ Blender was used to clean up and smooth the 3D models, removing sharp edges for better quality.
- 📷 Reflective surfaces and low light conditions present challenges, but the AI still produced usable 3D models.
- 🚗 A car with reflective paint was used as a test subject, and despite the difficulties, the result was impressive.
- 🌆 The 3D models created can be used in various scenarios, offering flexibility for different applications.
- 🔍 The quality of the 3D models is expected to improve as the technology advances.
- 📅 A follow-up demonstration is planned to show how these 3D assets perform when used in 3D software for background purposes.
Q & A
What is the main subject of the video?
-The main subject of the video is demonstrating how to turn a video into a 3D model using Luma AI and Blender.
What is the significance of Luma AI's video to photogrammetry feature?
-Luma AI's video to photogrammetry feature allows users to create 3D models from videos instead of having to take multiple photos to capture an object in 3D space.
What was the time constraint the creators were facing?
-The creators had a time constraint as they only had a couple of minutes before it turned dark.
What was the process of uploading video clips to Luma AI?
-The creator uploaded the video clips one by one to Luma AI's website. The DSLR footage initially failed to process, so it was re-encoded in DaVinci with H.265 before being uploaded again.
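The video uses DaVinci for the re-encode, but the same H.265 conversion can be done from the command line with ffmpeg. The sketch below (a hedged alternative, not the creator's actual workflow; the filenames are hypothetical) builds the ffmpeg command in Python so the flags are easy to inspect before running it:

```python
def h265_reencode_cmd(src, dst, crf=23):
    """Build an ffmpeg command that re-encodes a clip to H.265 (HEVC).

    Lower crf = higher quality / larger file; 23 is a reasonable default.
    Pass the returned list to subprocess.run(cmd, check=True) to execute.
    """
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "libx265",        # H.265 / HEVC video codec
        "-crf", str(crf),         # constant-quality rate control
        "-tag:v", "hvc1",         # helps players recognize HEVC in MP4/MOV
        "-c:a", "copy",           # leave the audio stream untouched
        str(dst),
    ]

# Hypothetical filenames for illustration:
cmd = h265_reencode_cmd("dslr_clip.mov", "dslr_clip_h265.mp4")
print(" ".join(cmd))
```

This only prints the command; running it requires ffmpeg built with libx265.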
How long did it take for Luma AI to process each 3D mesh?
-Each 3D mesh took about 20 to 30 minutes to be processed by Luma AI.
What tool did the creator use in Blender to refine the 3D model?
-The creator used Blender's smooth tool to remove sharp edges from the 3D model.
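The video does not show the exact Blender steps, but a typical way to soften a jagged scan can be scripted with Blender's Python API. This is a sketch under that assumption, meant to be run from Blender's Scripting workspace with the scanned mesh selected; outside Blender it just reports that bpy is unavailable:

```python
# Runs inside Blender's Scripting workspace; degrades gracefully elsewhere.
try:
    import bpy
except ImportError:
    bpy = None
    print("bpy not available - run this inside Blender")

def smooth_active_mesh():
    """Shade the active mesh smooth and add a light Smooth modifier,
    a common way to soften the jagged edges of a photogrammetry scan."""
    obj = bpy.context.active_object
    if obj is None or obj.type != 'MESH':
        print("select a mesh object first")
        return
    # Shade every face smooth (visual smoothing only)
    for poly in obj.data.polygons:
        poly.use_smooth = True
    # A Smooth modifier relaxes the geometry itself
    mod = obj.modifiers.new(name="ScanSmooth", type='SMOOTH')
    mod.factor = 0.5
    mod.iterations = 5

if bpy is not None:
    smooth_active_mesh()
```

The modifier settings here are illustrative starting points; too many iterations will erode fine detail from the scan.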
How long was the footage that was used to create the 3D model of the cone?
-The footage used to create the 3D model of the cone was only one minute and forty-two seconds long.
What was the quality difference between the iPhone and Sony DSLR footage?
-The Sony DSLR footage was sharper, not only because of the camera's image quality but also because the creator got closer to the object. The iPhone footage was captured in three loops around the object, at high, mid, and low angles, as the website's instructions recommend.
What were the challenges faced when trying to create a 3D model of the car?
-The challenges included the car being a limo and very long, the entire car not being visible in the footage, the car's reflective paint, and the darkness outside during filming.
What is the next step the creators are planning to take with the 3D models?
-The creators plan to use the 3D models to create a quick and short video to see how these assets perform when used in 3D software for background purposes.
What is the potential of Luma AI's technology according to the video?
-According to the video, the technology is in its early stages and the quality is expected to improve over time.
What is the Unreal Engine plug-in mentioned in the video?
-The Unreal Engine plug-in is a companion tool for bringing the generated 3D models into Unreal Engine; the creator plans to cover it in a separate video.
Outlines
🎥 Luma AI Video to Photogrammetry Discovery
The speaker expresses excitement about a new feature in Luma AI that allows for 3D modeling from video footage rather than still photos. They describe a rush to capture a video of a scene before sunset to demonstrate the technology. The process involves uploading video clips to Luma AI's website, with some footage needing re-encoding due to issues with the DSLR footage. The AI successfully separates the scene from the object, creating a 3D mesh that can be manipulated in software like Blender. The speaker also discusses the quality differences between iPhone and Sony DSLR footage, noting that the DSLR provided sharper results. Despite challenges like reflective surfaces and low light conditions, the AI produced impressive 3D models, including one of a car with reflective paint.
Mindmap
Keywords
💡3D model
💡Luma AI
💡Photogrammetry
💡Video to 3D conversion
💡DaVinci
💡H.265 encoding
💡Blender
💡glTF model
💡Unreal Engine
💡Payphone
💡Reflective surfaces
Highlights
Luma AI enables video to photogrammetry, allowing 3D capture without photos.
A video can now be used to create a 3D model.
The workflow was tested under time pressure before nightfall.
Different camera angles and heights were utilized for the video capture.
The DSLR footage was re-encoded in DaVinci with H.265 before uploading.
Each 3D mesh generated by Luma AI took approximately 20 to 30 minutes to complete.
The AI automatically separates the scene from the object in the video.
A glTF model and Unreal Engine plugin are available for the 3D models.
Blender was used to refine the 3D model by smoothing sharp edges.
The video footage was compared with the 3D model output by Luma AI.
Recording the video that generated the 3D model took only one minute and 42 seconds.
The technology is expected to improve in quality over time.
Payphone and car models were created from iPhone and Sony DSLR footage.
Reflective surfaces and low light conditions presented challenges.
The DSLR footage provided sharper quality due to closer proximity and better camera capabilities.
A quick and short video will be created using the 3D assets for background purposes.
Stay tuned for a demonstration of the 3D assets in a short video.