Turning a VIDEO into 3D using LUMA AI and BLENDER!

Bad Decisions Studio
17 Apr 2023 · 03:18

TL;DR: This video showcases the groundbreaking technology of Luma AI, which enables the conversion of video into 3D models using photogrammetry. The process involves capturing various angles of an object on video and then uploading the footage to Luma AI's website. Despite challenges such as low light conditions and reflective surfaces, the AI successfully separates the object from its background, creating detailed 3D models. The video demonstrates the results from different sources, including an iPhone and a Sony DSLR, highlighting the superior quality of the DSLR due to its closer proximity and higher resolution. The technology's potential is evident, even with a complex subject like a car with reflective paint. The creator plans to use the generated 3D assets in a short video to test their performance in 3D software, promising further exploration of the technology's capabilities in future content.

Takeaways

  • 🚀 Luma AI has enabled video to photogrammetry, which allows turning videos into 3D models.
  • 🕒 The process was demonstrated to be relatively quick, with the video to 3D conversion taking about 20 to 30 minutes per clip.
  • 📹 The video source can vary, with examples given from both an iPhone and a Sony DSLR.
  • 📈 The AI technology separates the scene from the object in the video, which was shown to work effectively in the examples provided.
  • 📦 Downloadable 3D models come in formats like glTF, and there's also an Unreal Engine plugin available.
  • 🛠️ Blender was used to clean up and smooth the 3D models, removing sharp edges for better quality.
  • 📷 Reflective surfaces and low light conditions present challenges, but the AI still produced usable 3D models.
  • 🚗 A car with reflective paint was used as a test subject, and despite the difficulties, the result was impressive.
  • 🌆 The 3D models created can be used in various scenarios, offering flexibility for different applications.
  • 🔍 The quality of the 3D models is expected to improve as the technology advances.
  • 📅 A follow-up demonstration is planned to show how these 3D assets perform when used in 3D software for background purposes.

Q & A

  • What is the main subject of the video?

    -The main subject of the video is demonstrating how to turn a video into a 3D model using Luma AI and Blender.

  • What is the significance of Luma AI's video to photogrammetry feature?

    -Luma AI's video to photogrammetry feature allows users to create 3D models from videos instead of having to take multiple photos to capture an object in 3D space.

  • What was the time constraint the creators were facing?

    -The creators had a time constraint as they only had a couple of minutes before it turned dark.

  • What was the process of uploading video clips to Luma AI?

    -The creator uploaded the video clips one by one to Luma AI's website. The DSLR footage initially failed to process, so it was re-encoded in DaVinci with H.265 before being re-uploaded.
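
The video doesn't show the exact DaVinci export settings; as an assumption, the same re-encode can be sketched with ffmpeg. A small Python helper that builds the equivalent command line (the filenames and CRF quality value are placeholders, not from the video):

```python
def hevc_reencode_cmd(src, dst, crf=23):
    """Build an ffmpeg argument list that re-encodes a clip to H.265/HEVC.

    The creators used DaVinci for this step; this ffmpeg equivalent is an
    assumption, not their exact workflow. Run the result with
    subprocess.run(...) on a machine where ffmpeg is installed.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",   # encode video with the x265 HEVC encoder
        "-crf", str(crf),    # constant-rate-factor quality target
        "-tag:v", "hvc1",    # HEVC tag that improves MP4 player support
        "-c:a", "copy",      # pass the audio stream through untouched
        dst,
    ]

print(" ".join(hevc_reencode_cmd("dslr_clip.mov", "dslr_clip_h265.mp4")))
```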

  • How long did it take for Luma AI to process each 3D mesh?

    -Each 3D mesh took about 20 to 30 minutes to be processed by Luma AI.

  • What tool did the creator use in Blender to refine the 3D model?

    -The creator used Blender's smooth tool to soften the sharp edges of the 3D model.
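
The video doesn't name the exact Blender operation, but smoothing of this kind is typically Laplacian smoothing: each vertex is nudged toward the average of its neighbors. A minimal, dependency-free Python sketch of one smoothing pass (the mesh representation is illustrative, not the video's scanned model):

```python
def smooth_pass(vertices, neighbors, factor=0.5):
    """One Laplacian smoothing pass over a mesh.

    vertices  -- list of (x, y, z) tuples
    neighbors -- neighbors[i] lists the vertex indices adjacent to vertex i
    factor    -- 0.0 leaves the mesh unchanged; 1.0 moves each vertex
                 fully onto the centroid of its neighbors
    """
    smoothed = []
    for i, (x, y, z) in enumerate(vertices):
        nbrs = neighbors[i]
        if not nbrs:  # isolated vertex: leave it where it is
            smoothed.append((x, y, z))
            continue
        # Centroid of the neighboring vertices.
        cx = sum(vertices[j][0] for j in nbrs) / len(nbrs)
        cy = sum(vertices[j][1] for j in nbrs) / len(nbrs)
        cz = sum(vertices[j][2] for j in nbrs) / len(nbrs)
        # Blend each coordinate toward the centroid.
        smoothed.append((x + factor * (cx - x),
                         y + factor * (cy - y),
                         z + factor * (cz - z)))
    return smoothed
```

Repeated passes flatten spiky photogrammetry artifacts at the cost of fine detail, which is why a moderate factor is usually applied a few times rather than one aggressive pass.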

  • How long was the footage that was used to create the 3D model of the cone?

    -The footage used to create the 3D model of the cone was only one minute and forty-two seconds long.

  • What was the quality difference between the iPhone and Sony DSLR footage?

    -The Sony DSLR footage was sharper, both because of the camera's image quality and because the creator got closer to the object. The iPhone footage was captured in three loops, at high, mid, and low angles, as per the website's instructions.

  • What were the challenges faced when trying to create a 3D model of the car?

    -The challenges included the car being a limo and very long, the entire car not being visible in the footage, the car's reflective paint, and the darkness outside during filming.

  • What is the next step the creators are planning to take with the 3D models?

    -The creators plan to use the 3D models to create a quick and short video to see how these assets perform when used in 3D software for background purposes.

  • What is the potential of Luma AI's technology according to the video?

    -According to the video, the technology is in its early stages and the quality is expected to improve over time.

  • What is the Unreal Engine plug-in mentioned in the video?

    -The Unreal Engine plug-in is a tool the creator plans to cover in another video; it suggests the captured 3D models can be brought into Unreal Engine for further use.

Outlines

00:00

🎥 Luma AI Video to Photogrammetry Discovery

The speaker expresses excitement about a new feature in Luma AI that allows 3D modeling from video footage rather than still photos. They describe rushing to capture video of a scene before sunset to demonstrate the technology. The process involves uploading video clips to Luma AI's website, with the DSLR clips needing re-encoding before they would upload. The AI successfully separates the scene from the object, creating a 3D mesh that can be manipulated in software like Blender. The speaker also discusses the quality differences between iPhone and Sony DSLR footage, noting that the DSLR provided sharper results. Despite challenges like reflective surfaces and low light conditions, the AI produced impressive 3D models, including one of a car with reflective paint.

Keywords

💡3D model

A 3D model refers to a digital representation of a three-dimensional object or environment, used in various fields such as video games, animation, and virtual reality. In the video, the creation of a 3D model from a video using Luma AI signifies a breakthrough in technology, as it allows for the conversion of 2D video footage into a 3D representation, which can be manipulated and used in different scenarios.

💡Luma AI

Luma AI is a technology or software mentioned in the video that enables the conversion of video into 3D models through photogrammetry. It represents a significant advancement as it simplifies the process of creating 3D models from visual data. The video demonstrates how Luma AI can analyze video footage and generate a 3D mesh of the objects within it.

💡Photogrammetry

Photogrammetry is a technique for extracting measurements and 3D structure from photographs. In the context of the video, Luma AI uses photogrammetry to create 3D models from video footage, which is a novel application of this technique. The process reconstructs 3D information from a sequence of 2D images, which is particularly useful for creating detailed models from videos.

💡Video to 3D conversion

The process of converting a video into a 3D model involves analyzing the video frames to extract depth information and reconstruct a 3D representation of the objects within the video. The video demonstrates this process using Luma AI, which is a significant development as it traditionally required a series of photographs. The result is a 3D model that can be used in various applications such as 3D printing, virtual reality, or animation.
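
Photogrammetry pipelines of this kind typically begin by sampling the video into still frames. As a rough sketch of that step, the following picks evenly spaced capture timestamps across a clip; the function and sampling count are illustrative, not Luma AI's actual internals:

```python
def frame_timestamps(duration_s, frames):
    """Evenly spaced capture timestamps across a clip, endpoints included."""
    if frames == 1:
        return [0.0]
    step = duration_s / (frames - 1)
    return [round(i * step, 3) for i in range(frames)]

# For a 1:42 (102 s) clip like the cone footage, five sample points:
print(frame_timestamps(102.0, 5))   # [0.0, 25.5, 51.0, 76.5, 102.0]
```

Real reconstruction uses far more frames than this; the point is that even a short orbit of an object yields many distinct viewpoints for the solver.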

💡DaVinci

DaVinci (DaVinci Resolve) is professional video-editing software that was used in the video to process the DSLR footage before uploading it to Luma AI. The video mentions using DaVinci to apply H.265 encoding to the footage, a video compression standard that reduces file size while maintaining quality. This step was necessary because the original DSLR footage failed to upload directly to Luma AI.

💡H.265 encoding

H.265 encoding, also known as High Efficiency Video Coding (HEVC), is a video compression standard that offers better data compression than its predecessor, H.264. In the video, the footage was processed through DaVinci with H.265 encoding to make it compatible for upload to Luma AI, indicating the importance of file format and compression in the video-to-3D conversion process.

💡Blender

Blender is a free and open-source 3D creation suite used for modeling, rigging, animation, simulation, rendering, compositing, and motion tracking. In the video, Blender is used to refine the 3D model generated by Luma AI by smoothing out the edges and preparing it for further use. This demonstrates Blender's role in the post-processing of 3D models to achieve desired aesthetics and functionality.

💡glTF model

A glTF model is a file in the GL Transmission Format, which is designed for efficient storage and transmission of 3D models and scenes. The video mentions downloading a glTF model, which is one of the output options provided by Luma AI. The format is versatile and widely supported, making it suitable for use in various 3D applications and platforms.
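
A .gltf file is plain JSON, so its structure can be inspected with nothing beyond the standard library. A minimal hand-written glTF 2.0 document as a sketch (the mesh name here is made up; a real Luma AI export would also carry buffers, accessors, and textures):

```python
import json

# A minimal, hand-written glTF 2.0 document for illustration only.
gltf_text = json.dumps({
    "asset": {"version": "2.0"},
    "meshes": [{"name": "payphone_scan", "primitives": []}],
    "nodes": [{"mesh": 0}],
    "scenes": [{"nodes": [0]}],
})

# A downloaded .gltf file would be parsed the same way with json.load().
doc = json.loads(gltf_text)
mesh_names = [m.get("name", "<unnamed>") for m in doc.get("meshes", [])]
print(mesh_names)   # ['payphone_scan']
```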

💡Unreal Engine

Unreal Engine is a game engine developed by Epic Games, used for creating video games and other interactive applications. The video script mentions an Unreal Engine plug-in, suggesting that Luma AI's 3D models can be integrated into this engine for high-quality rendering and real-time applications, such as video games or virtual environments.

💡Payphone

In the context of the video, a payphone is the object that the creators are capturing in 3D using Luma AI. The video shows the process of recording a video of a payphone from different angles and heights, which is then used to generate a 3D model. The payphone serves as a practical example of how everyday objects can be turned into 3D models using the technology showcased.

💡Reflective surfaces

Reflective surfaces are materials that bounce light back towards the source, which can cause issues in 3D modeling due to the complexity of capturing and representing reflections accurately. The video discusses the challenges of modeling a car with reflective paint in the dark, highlighting the limitations and difficulties when dealing with reflective materials in the video-to-3D conversion process.

Highlights

Luma AI enables video to photogrammetry, allowing 3D capture without photos.

A video can now be used to create a 3D model.

The process was tested with limited time before dark conditions.

Different camera angles and heights were utilized for the video capture.

DaVinci with H.265 encoding was used to process the DSLR footage.

Each 3D mesh generated by Luma AI took approximately 20 to 30 minutes to complete.

The AI automatically separates the scene from the object in the video.

A glTF model and Unreal Engine plugin are available for the 3D models.

Blender was used to refine the 3D model by smoothing sharp edges.

The video footage was compared with the 3D model output by Luma AI.

Recording the video that generated the 3D model took only one minute and 42 seconds.

The technology is expected to improve in quality over time.

Payphone and car models were created from iPhone and Sony DSLR footage.

Reflective surfaces and low light conditions presented challenges.

The DSLR footage provided sharper quality due to closer proximity and better camera capabilities.

A quick and short video will be created using the 3D assets for background purposes.

Stay tuned for a demonstration of the 3D assets in a short video.