Initial Commit
commit b6dfa8b236
Binary file not shown (2.2 MiB).
@ -0,0 +1,4 @@
.DS_Store
._.*
.idea/
@ -0,0 +1 @@
3.7.6
@ -0,0 +1,21 @@
MIT License

Copyright (c) [year] [fullname]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -0,0 +1,138 @@
# mitsuba-visualize: Automates High-Quality Mitsuba Renders

Visualizes meshes, point clouds, and video flythroughs in publication quality
- Collection of scripts automating path-traced rendering with [mitsuba (0.6.0)](https://www.mitsuba-renderer.org/index_old.html), directly using its Python API
- Can also render flythroughs from predefined camera poses, using bezier curves and catmull-rom splines for interpolation

This implementation was used for the visualizations in *Dai, Angela, Christian Diller, and Matthias Nießner. "SG-NN: Sparse Generative Neural Networks for Self-Supervised Scene Completion of RGB-D Scans."* (CVPR'20)

![Sample Rendering](.github/sample_rendering.png)

You can find more samples, including flythroughs, in our [SG-NN video](https://www.youtube.com/watch?v=rN6D3QmMNuU).


## Installation

1. Install mitsuba 0.6.0, following the tutorial here: [10 Steps to Install Mitsuba Renderer on Ubuntu](https://medium.com/@sreenithyc21/10-steps-to-install-mitsuba-renderer-on-ubuntu-38a9318fbcdf)
2. Clone this repository and install the Python dependencies:
```bash
git clone https://github.com/chrdiller/mitsuba-visualize
cd mitsuba-visualize
poetry install
```
3. After building mitsuba, adjust the path in ``set_python_path.py`` to point to the folder you cloned mitsuba to (it has to contain the subdirectory ``dist/python``):
```python
MITSUBA_BASE = Path('/path/to/mitsuba/clone')
```

## Usage
Enter the environment: ``poetry shell``

### Single images
- Meshes: ``python mesh.py ...``
```bash
usage: mesh.py [-h] -o OUTPUT [--scenes_list SCENES_LIST] [--scene SCENE]
               [--cameras CAMERAS] [--width WIDTH] [--height HEIGHT]
               [--samples SAMPLES] [--workers WORKERS]
               [input_paths [input_paths ...]]

Render directory

positional arguments:
  input_paths           Path(s) to directory containing all files to render

optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        Path to write renderings to
  --scenes_list SCENES_LIST
                        Path to file containing filenames to render in base
                        path
  --scene SCENE         One scene. Overrides scenes_list
  --cameras CAMERAS     XML file containing meshlab cameras
  --width WIDTH         Width of the resulting image
  --height HEIGHT       Height of the resulting image
  --samples SAMPLES     Number of integrator samples per pixel
  --workers WORKERS     How many concurrent workers to use
```

- Point Clouds: ``python pointcloud.py ...``
```bash
usage: Render pointcloud with mitsuba by placing little spheres at the points' positions
       [-h] -o OUTPUT [--radius RADIUS] [--width WIDTH] [--height HEIGHT]
       [--samples SAMPLES] [--workers WORKERS]
       input [input ...]

positional arguments:
  input                 Path(s) to the ply file(s) containing a pointcloud(s)
                        to render

optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        Path to write renderings to
  --radius RADIUS       Radius of a single point
  --width WIDTH         Width of the resulting image
  --height HEIGHT       Height of the resulting image
  --samples SAMPLES     Number of integrator samples per pixel
  --workers WORKERS     How many concurrent workers to use
```

### Flythroughs
As in the [SG-NN video](https://www.youtube.com/watch?v=rN6D3QmMNuU)

- (Optional) Export cameras from meshlab
    - Open the mesh in meshlab and press ``Cmd/Ctrl + C`` to copy the current camera view
    - Paste into a text editor (this will be a few lines of XML; see the sketch below this list)
    - Save as an xml file
    - Repeat for every camera keypoint
- (Alternatively) Do not specify cameras; this will render a slightly tilted top-down turntable view
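
When collecting several exported cameras in one file, wrap them in a single root node so the file parses as valid XML. A minimal sketch of the structure ``cameras.read_meshlab_sensor_transforms`` expects (attribute values omitted here):
```xml
<!DOCTYPE ViewState>
<project>
    <VCGCamera ... />
    <ViewSettings ... />
</project>
```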
- Usage: ``python flythrough.py ...``
```bash
usage: Render a flythrough video of a scene [-h] -o OUTPUT
                                            [--remote [REMOTE [REMOTE ...]]]
                                            [--novideo] [--norender] [--keep]
                                            [--scenes_list SCENES_LIST]
                                            [--frames FRAMES]
                                            [--cameras CAMERAS]
                                            [--shutter_time SHUTTER_TIME]
                                            [--width WIDTH] [--height HEIGHT]
                                            [--fov FOV] [--samples SAMPLES]
                                            [--interpolation {catmullrom,bezier}]
                                            [--workers WORKERS]
                                            input [input ...]

positional arguments:
  input                 Path to the ply file to render

optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        Path to write output video to
  --remote [REMOTE [REMOTE ...]]
                        Urls of the remote render servers
  --novideo             Only render frames, do not produce video
  --norender            Only render video from existing frames, no rendering
  --keep                Whether to keep the frame images
  --scenes_list SCENES_LIST
                        Path to file containing filenames to render in base
                        path
  --frames FRAMES       Number of frames to render (The video file will have
                        30fps)
  --cameras CAMERAS     XML file containing meshlab cameras (or path to
                        directory only containing such files). If set, this is
                        used for spline interpolation. Otherwise, a rotating
                        flyover is generated
  --shutter_time SHUTTER_TIME
                        Shutter time of the moving sensor
  --width WIDTH         Width of the resulting image
  --height HEIGHT       Height of the resulting image
  --fov FOV             Field of view of the sensor in degrees (meshlab
                        default is 60)
  --samples SAMPLES     Number of integrator samples per pixel
  --interpolation {catmullrom,bezier}
                        Which method to use for interpolation between control
                        points
  --workers WORKERS     How many local concurrent workers to use
```
@ -0,0 +1,247 @@
import sys
import argparse
from pathlib import Path
from typing import List
import math

import numpy as np
import xml.etree.ElementTree as etree

sys.path.append('')
import set_python_path
from mitsuba.core import Transform, Matrix4x4, Vector3, PluginManager, normalize, AnimatedTransform, Point3
from mitsuba.render import Sensor

import util
from math_util.catmull_rom_spline import catmull_rom_spline
from math_util.bezier_curve import bezier_curve


def convert_meshlab2mitsuba_camera(camera_def: dict) -> Transform:
    """
    Takes a meshlab camera dict (usually loaded from a meshlab xml file) and turns it into a valid mitsuba Transform
    which can then be applied to a sensor
    :param camera_def: The camera def dict, containing at least the keys RotationMatrix and TranslationVector
    :return: A mitsuba transform constructed from the given values
    """
    # Meshlab camera matrix is transposed
    matrix = Matrix4x4([
        float(elem) for elem in camera_def['RotationMatrix'].split(' ')[:16]
    ]).transpose()

    # Add translation vector
    translation = [-float(elem) for elem in camera_def['TranslationVector'].split(' ')]
    for i in range(3):
        matrix[i, 3] = translation[i]

    # Make Mitsuba transform and flip rotation signs (y is not flipped, otherwise the resulting image will be flipped vertically)
    transform = Transform(matrix)
    transform *= transform.scale(Vector3(-1, 1, -1))

    return transform


def read_meshlab_sensor_transforms(file_path: Path) -> List[Transform]:
    """
    Reads meshlab camera properties from an xml file. To obtain those, simply press ctrl/cmd+c in meshlab and paste into a text editor.
    If you paste multiple cameras into one file, make sure to have a valid document structure with one root node, e.g.
    <!DOCTYPE ViewState>
    <project>
        <VCGCamera ... />
        <ViewSettings ... />
    </project>

    :param file_path: Path to the xml file to read
    :return: A list of Mitsuba transforms
    """
    assert file_path.suffix == '.xml'
    root_node = etree.parse(file_path).getroot()

    transforms = [convert_meshlab2mitsuba_camera(elem.attrib) for elem in root_node if elem.tag == 'VCGCamera']

    return transforms


def create_transform_on_bbsphere(bbox, radius_multiplier: float, positioning_vector: Vector3, tilt=None) -> Transform:
    """
    Create a camera with its origin on the bounding sphere around an object, looking towards the center of that sphere.
    The camera center is calculated as: bbsphere_center + radius_multiplier * bbsphere_radius * normalized_positioning_vector
    :param bbox: The bounding box of the object from which to calculate the bounding sphere
    :param radius_multiplier: The value to multiply the bounding sphere's radius with
        (= distance of the camera origin to the center of the bounding sphere)
    :param positioning_vector: The vector pointing towards the camera center
    :param tilt: Transform to apply to the camera origin. Usually a small rotation to tilt the camera perspective a little
    :return: A transform on the bbsphere
    """
    bsphere = bbox.getBSphere()
    camera_target = bsphere.center

    camera_offset = radius_multiplier * bsphere.radius * normalize(positioning_vector)
    camera_origin = camera_target + camera_offset

    camera_transform = Transform.lookAt(tilt * camera_origin if tilt is not None else camera_origin, camera_target, Vector3(0., 0., 1.))

    return camera_transform


def create_trajectory_on_bbsphere(bbox, initial_positioning_vector: Vector3, rotation_around: Vector3, num_cameras: int,
                                  radius_multiplier: float, tilt: Transform = None) -> List[Transform]:
    """
    Creates an interpolated trajectory on the bounding sphere of a scene/object
    :param bbox: The bounding box of the object/scene to render
    :param initial_positioning_vector: Controls where to position the camera initially
    :param rotation_around: The axis to rotate the camera around
    :param num_cameras: Number of cameras to generate
    :param radius_multiplier: Cameras will be placed at distance multiplier * bbsphere radius
    :param tilt: Additional transformation meant to tilt the camera a bit
    :return: List with the generated transforms
    """
    transforms = []

    # tilt is optional but was used unconditionally below; fall back to the identity transform
    tilt = tilt if tilt is not None else Transform()
    step_angle = 360. / float(num_cameras)
    for camera_idx in range(num_cameras):
        transform = create_transform_on_bbsphere(bbox, radius_multiplier,
                                                 Transform.rotate(rotation_around, step_angle * camera_idx) * tilt * initial_positioning_vector)
        transforms.append(transform)

    return transforms


def create_interpolated_trajectory(cameras: List[Transform], method: str,
                                   num_cameras_per_m: int = 75, num_cameras_per_rad: int = 225,
                                   num_total_cameras: int = 1000) -> List[Transform]:
    """
    Creates an interpolated camera trajectory from the supplied key camera transforms
    :param cameras: Camera transformations to use as control points for interpolation
    :param method: How to interpolate: *bezier* (used for both camera centers and rotations):
        Smooth trajectory, but does not necessarily pass through the control points (except the first and last); or
        *catmullrom* (used for camera centers, using quaternion slerp for rotations):
        Passes through all control points, but needs more tuning to prevent weird behaviour
    :param num_cameras_per_m: catmullrom parameter: Camera centers per meter
    :param num_cameras_per_rad: catmullrom parameter: Camera rotations per radian
    :param num_total_cameras: bezier parameter: Number of interpolated cameras between the first and last key camera pose
    :return: A list of interpolated transforms
    """
    all_interpolated = []

    camera_pose_matrices = [util.convert_transform2numpy(camera) for camera in cameras]
    if method == 'bezier':
        interpolated_cameras = bezier_curve(camera_pose_matrices, num_steps=num_total_cameras)
        for elem in interpolated_cameras:
            position = Point3(*elem[:3, 3].tolist())
            look = Vector3(*elem[:3, :3].dot(np.array([0, 0, 1])).tolist())
            up = Vector3(*(elem[:3, :3].dot(np.array([0, 1, 0]))).tolist())
            all_interpolated.append(Transform.lookAt(position, position + look, up))

    elif method == 'catmullrom':
        assert len(cameras) >= 4
        camera_groups = [camera_pose_matrices[idx: idx + 4] for idx in range(len(cameras) - 3)]
        for camera_group in camera_groups:
            key_positions = (camera_group[1][:3, 3], camera_group[2][:3, 3])
            # The look directions are the rotated z axes (matrix-vector product, as in the bezier branch above)
            key_looks = (camera_group[1][:3, :3].dot(np.array([0, 0, 1])), camera_group[2][:3, :3].dot(np.array([0, 0, 1])))
            dist = np.linalg.norm(key_positions[1] - key_positions[0])
            angle = math.acos(np.clip(np.sum(key_looks[1] @ key_looks[0]), -1., 1.))

            num_t = int(np.round(dist / 100. * num_cameras_per_m))
            num_r = int(np.round(angle * num_cameras_per_rad))
            num = max(40, max(num_t, num_r))

            interpolated_cameras = catmull_rom_spline(camera_group, num)

            for elem in interpolated_cameras:
                position = Point3(*elem[:3, 3].tolist())
                look = Vector3(*elem[:3, :3].dot(np.array([0, 0, 1])).tolist())
                up = Vector3(*(elem[:3, :3].dot(np.array([0, 1, 0]))).tolist())
                all_interpolated.append(Transform.lookAt(position, position + look, up))

    else:
        raise NotImplementedError(f'The method you chose ({method}) is not implemented')

    return all_interpolated


def create_animated_sensors(trajectory: List[Transform], shutter_time: float = 1., width: int = 1920, height: int = 1440,
                            fov: float = 45., num_samples: int = 256) -> List[Sensor]:
    """
    Create animated sensors (applies motion blur to the rendered image)
    :param trajectory: The trajectory containing all Transforms
    :param shutter_time: The shutter time to be used to set the Mitsuba setShutterOpenTime
    :param width: Width of the generated image
    :param height: Height of the generated image
    :param fov: The sensor field of view
    :param num_samples: Number of samples per pixel (controls the noise in the resulting image)
    :return: A list of Mitsuba animated sensors
    """
    animated_sensors = []
    for transform_idx in range(len(trajectory)):
        atrafo = AnimatedTransform()
        atrafo.appendTransform(0, trajectory[transform_idx])
        atrafo.appendTransform(1, trajectory[min(transform_idx + 1, len(trajectory) - 1)])
        atrafo.sortAndSimplify()
        sensor = create_sensor_from_transform(atrafo, width, height, fov, num_samples)
        sensor.setShutterOpen(0)
        sensor.setShutterOpenTime(shutter_time)
        animated_sensors.append(sensor)

    return animated_sensors


def create_sensor_from_transform(transform, width=1920, height=1440, fov=45., num_samples=256) -> Sensor:
    """
    Create a Mitsuba sensor from a camera transform
    :param transform: The transform (camera to world) that this sensor uses
    :param width: The width of the resulting image
    :param height: The height of the resulting image
    :param fov: The field of view in degrees (default 45, meshlab uses 60)
    :param num_samples: Number of samples per pixel
    :return: A Mitsuba sensor
    """
    sensor = PluginManager.getInstance().create({
        'type': 'perspective',
        'film': {
            'type': 'ldrfilm',
            'width': width,
            'height': height,
            'pixelFormat': 'rgba',
            'exposure': 1.0,
            'banner': False
        },
        'sampler': {
            'type': 'ldsampler',
            'sampleCount': num_samples
        },
        'toWorld': transform,
        'fov': fov,
    })

    return sensor


if __name__ == '__main__':
    parser = argparse.ArgumentParser("Load and parse cameras from different formats for mitsuba. Called directly, this will print the parsed mitsuba cameras")
    parser.add_argument('camera_filename', type=str, help="The file containing camera definitions")
    parser.add_argument('--type', default='meshlab', choices=['meshlab'], help="Which type the camera definitions are")
    parser.add_argument('--interpolate', action='store_true', help="Whether to interpolate cameras between the given ones (needs at least 4 in this case)")

    args = parser.parse_args()

    camera_filepath = Path(args.camera_filename)
    assert camera_filepath.is_file(), "Camera file has to exist"

    if args.type == 'meshlab':
        transforms = read_meshlab_sensor_transforms(camera_filepath)
    else:
        raise NotImplementedError

    print(f"Read {len(transforms)} camera transformations from type {args.type}:")
    print(transforms)

    if args.interpolate:
        interpolated_cameras = create_interpolated_trajectory(transforms, method='bezier')

        # The camera position is the translation column of the 4x4 pose, so convert the mitsuba Transforms back to numpy first
        points = np.array([util.convert_transform2numpy(elem)[:3, 3] for elem in interpolated_cameras])
        from io_util import ply

        ply.write_ply('/home/christian/tmp/cameras.ply', points=points, as_text=True)

        print(f"Interpolated {len(interpolated_cameras)} camera transformations:")
        print(interpolated_cameras)
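
A minimal usage sketch for the helpers in this file (hedged: the keypoint path is hypothetical, and mitsuba must be importable via set_python_path):
```python
from pathlib import Path
import cameras

# Hypothetical meshlab keypoint file; see read_meshlab_sensor_transforms for the expected XML layout
key_poses = cameras.read_meshlab_sensor_transforms(Path('keypoints/scene0000.xml'))
trajectory = cameras.create_interpolated_trajectory(key_poses, method='bezier', num_total_cameras=300)
sensors = cameras.create_animated_sensors(trajectory, shutter_time=1., width=1280, height=960)
```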
@ -0,0 +1,164 @@
import sys
import argparse
from pathlib import Path
from typing import Tuple, List
import shutil

from tqdm import tqdm
import moviepy.editor as mpy

sys.path.append('')
import set_python_path
from mitsuba.core import *
from mitsuba.render import RenderJob, RenderQueue

import mesh
import cameras
import util
import pointcloud


def render_trajectory_frames(mesh_path: Tuple[Path, Path], trajectory: List[Transform], queue: RenderQueue,
                             shutter_time: float, width: int = 1920, height: int = 1440, fov: float = 60.,
                             num_samples: int = 32) -> None:
    """
    Render a camera trajectory through a mesh loaded from mesh_path[0], at the transformations given in trajectory
    :param mesh_path: Path tuple (input_filepath, output_dirpath) of the mesh to render
    :param trajectory: List of Mitsuba transformations, corresponding to the cameras in the trajectory
    :param queue: The Mitsuba render queue to use for all the frames
    :param shutter_time: The camera shutter time. Controls the amount of motion blur between every pair of consecutive frames
    :param width: Parameter passed to cameras.create_sensor_from_transform
    :param height: Parameter passed to cameras.create_sensor_from_transform
    :param fov: The field of view
    :param num_samples: Parameter passed to cameras.create_sensor_from_transform
    :return:
    """
    input_path, output_path = mesh_path

    # Make Mesh
    mesh_obj = mesh.prepare_ply_mesh(input_path, util.get_predefined_spectrum('light_blue'))

    # Create sensors with animation transforms
    sensors = cameras.create_animated_sensors(trajectory, shutter_time, width, height, fov, num_samples)

    scene = util.construct_simple_scene([mesh_obj], sensors[0])

    with tqdm(total=len(sensors), bar_format='Total {percentage:3.0f}% {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}] {desc}', dynamic_ncols=True) as t:
        util.redirect_logger(tqdm.write, EError, t)
        t.write(input_path.stem)
        for idx, sensor in enumerate(sensors):
            # Make Scene
            scene.setSensor(sensor)
            scene.setDestinationFile(str(output_path / f'{input_path.stem}-{idx}.png'))
            # Make Result
            job = RenderJob(f'Render-{input_path.stem}-{idx}', scene, queue)
            job.start()
            queue.waitLeft(0)
            queue.join()
            t.update()


def create_movie_from_frames(images_path: Path, output_filepath: Path, frame_durations=None):
    """
    Create a movie from images (like the ones rendered by Mitsuba).
    Uses moviepy (which uses ffmpeg). The movie will have 30fps if frame_durations is None
    :param images_path: Path containing only image files
    :param output_filepath: Path to the output video file
    :param frame_durations: If not None, specifies the duration of each frame (1 = one frame).
        Must contain one entry for every file in images_path
    :return:
    """
    files = list(images_path.iterdir())
    files = sorted(files, key=lambda filepath: int(filepath.stem.split('-')[-1]))

    if frame_durations is not None:
        assert len(files) == len(frame_durations)
        clip = mpy.ImageSequenceClip([str(file) for file in files], durations=frame_durations)
    else:
        clip = mpy.ImageSequenceClip([str(file) for file in files], fps=30)

    clip.on_color().write_videofile(str(output_filepath), fps=30, audio=False)


def main(args):
    input_paths = [Path(path_str) for path_str in args.input]
    output_path = Path(args.output)
    assert output_path.parent.is_dir()
    if not output_path.is_dir():
        output_path.mkdir(parents=False)

    assert all([path.exists() for path in input_paths])
    mesh_paths = list(util.generate_mesh_paths(input_paths, output_path, selected_meshes=util.read_filelist(Path(args.scenes_list)) if args.scenes_list is not None else None))

    for input_path, output_path in mesh_paths:
        intermediate_path = Path(output_path) / f'{input_path.stem}-frames'
        if not args.norender:
            if intermediate_path.is_dir():
                res = input(f"Mesh {input_path.stem} has already been rendered, delete and re-render? Y/n")
                if res == 'n':
                    sys.exit(0)
                shutil.rmtree(intermediate_path)

    render_queue = util.prepare_queue(args.workers, remote_stream=[SocketStream(remote, 7554) for remote in args.remote] if args.remote is not None else None)
    for input_path, output_path in mesh_paths:
        intermediate_path = Path(output_path) / f'{input_path.stem}-frames'
        if not args.norender:
            intermediate_path.mkdir()

        if args.cameras is None:
            bbox = pointcloud.get_pointcloud_bbox(pointcloud.load_from_ply(input_path))
            trajectory = cameras.create_trajectory_on_bbsphere(bbox,
                                                               initial_positioning_vector=Vector3(0, 1, 1),
                                                               rotation_around=util.axis_unit_vector('z'),
                                                               num_cameras=args.frames, radius_multiplier=3.,
                                                               tilt=Transform.rotate(util.axis_unit_vector('x'), 20.))

        else:
            cameras_path = Path(args.cameras)
            if cameras_path.is_dir():
                camera_filepath = cameras_path / f'{input_path.stem}.xml'
            else:
                camera_filepath = cameras_path

            assert camera_filepath.is_file(), f"Camera file {camera_filepath} has to exist"

            key_poses = cameras.read_meshlab_sensor_transforms(camera_filepath)
            trajectory = cameras.create_interpolated_trajectory(key_poses, method=args.interpolation, num_total_cameras=args.frames)

        if not args.norender:
            render_trajectory_frames((input_path, intermediate_path), trajectory, render_queue,
                                     args.shutter_time, args.width, args.height, args.fov, args.samples)
        if not args.novideo:
            create_movie_from_frames(intermediate_path, output_path / f'{input_path.stem}.mp4')

        if not args.keep:
            shutil.rmtree(intermediate_path)


if __name__ == '__main__':
    parser = argparse.ArgumentParser("Render a flythrough video of a scene")
    parser.add_argument('input', nargs='+', help='Path to the ply file to render')
    parser.add_argument('-o', '--output', required=True, help='Path to write output video to')

    # Scene render parameters
    parser.add_argument('--remote', nargs='*', required=False, help='Urls of the remote render servers')
    parser.add_argument('--novideo', action='store_true', help='Only render frames, do not produce video')
    parser.add_argument('--norender', action='store_true', help='Only render video from existing frames, no rendering')
    parser.add_argument('--keep', action='store_true', help='Whether to keep the frame images')
    parser.add_argument('--scenes_list', required=False, help='Path to file containing filenames to render in base path')
    parser.add_argument('--frames', required=True, type=int, help='Number of frames to render (The video file will have 30fps)')
    parser.add_argument('--cameras', required=False, help='XML file containing meshlab cameras (or path to directory only containing such files). '
                                                          'If set, this is used for spline interpolation. Otherwise, a rotating flyover is generated')

    # Sensor parameters
    parser.add_argument('--shutter_time', default=1., type=float, help='Shutter time of the moving sensor')
    parser.add_argument('--width', default=1280, type=int, help='Width of the resulting image')
    parser.add_argument('--height', default=960, type=int, help='Height of the resulting image')
    parser.add_argument('--fov', default=60., type=float, help='Field of view of the sensor in degrees (meshlab default is 60)')
    parser.add_argument('--samples', default=128, type=int, help='Number of integrator samples per pixel')
    parser.add_argument('--interpolation', default='catmullrom', choices=['catmullrom', 'bezier'], help='Which method to use for interpolation between control points')

    # General render parameters
    parser.add_argument('--workers', required=False, default=8, type=int, help="How many local concurrent workers to use")

    main(parser.parse_args())
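
For reference, a small sketch of reusing create_movie_from_frames on already-rendered frames (the paths are hypothetical; the frame files must be named ``<stem>-<index>.png``, as render_trajectory_frames produces them):
```python
from pathlib import Path
import flythrough

# Stitch previously rendered frames into a 30fps mp4 without re-rendering
flythrough.create_movie_from_frames(Path('out/scene0000-frames'), Path('out/scene0000.mp4'))
```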
@ -0,0 +1,219 @@
import sys
from typing import List
from collections import defaultdict

import numpy as np
import pandas as pd


sys_byteorder = ('>', '<')[sys.byteorder == 'little']

# Keys are bytes objects because the ply header is read in binary mode
ply_dtypes = {
    b'int8': 'i1', b'char': 'i1', b'uint8': 'u1', b'uchar': 'u1',
    b'int16': 'i2', b'short': 'i2', b'uint16': 'u2', b'ushort': 'u2',
    b'int32': 'i4', b'int': 'i4', b'uint32': 'u4', b'uint': 'u4',
    b'float32': 'f4', b'float': 'f4', b'float64': 'f8', b'double': 'f8'
}

valid_formats = {'ascii': '', 'binary_big_endian': '>', 'binary_little_endian': '<'}


def describe_element(name: str, df: pd.DataFrame) -> List[str]:
    """
    Takes the columns of the dataframe and builds a ply-like description
    :param name: Name of the element (e.g. vertex or face)
    :param df: The dataframe holding the element's data
    """
    property_formats = {'f': 'float', 'u': 'uchar', 'i': 'int'}
    element = ['element ' + name + ' ' + str(len(df))]

    if name == 'face':
        element.append("property list uchar int vertex_indices")

    else:
        for i in range(len(df.columns)):
            # get the first letter of the dtype to infer the format
            f = property_formats[str(df.dtypes[i])[0]]
            element.append('property ' + f + ' ' + df.columns.values[i])

    return element


def read_ply(filename):
    """
    Read a .ply (binary or ascii) file and store the elements in pandas DataFrames
    Parameters
    ----------
    filename: str
        Path to the file
    Returns
    -------
    data: dict
        Elements as pandas DataFrames; comments and ob_info as lists of strings
    """
    if not isinstance(filename, str):
        filename = str(filename)
    with open(filename, 'rb') as ply:
        if b'ply' not in ply.readline():
            raise ValueError('The file does not start with the word ply')
        # get binary_little/big or ascii
        fmt = ply.readline().split()[1].decode()
        # get the extension for building the numpy dtypes
        ext = valid_formats[fmt]

        line = []
        dtypes = defaultdict(list)
        count = 2
        points_size = None
        mesh_size = None
        has_texture = False
        while b'end_header' not in line and line != b'':
            line = ply.readline()

            if b'element' in line:
                line = line.split()
                name = line[1].decode()
                size = int(line[2])
                if name == "vertex":
                    points_size = size
                elif name == "face":
                    mesh_size = size

            elif b'property' in line:
                line = line.split()
                # element faces
                if b'list' in line:

                    if b"vertex_indices" in line[-1] or b"vertex_index" in line[-1]:
                        mesh_names = ["n_points", "v1", "v2", "v3"]
                    else:
                        has_texture = True
                        mesh_names = ["n_coords"] + ["v1_u", "v1_v", "v2_u", "v2_v", "v3_u", "v3_v"]

                    if fmt == "ascii":
                        # the first number has a different dtype than the list
                        dtypes[name].append(
                            (mesh_names[0], ply_dtypes[line[2]]))
                        # the rest of the numbers have the same dtype
                        dt = ply_dtypes[line[3]]
                    else:
                        # the first number has a different dtype than the list
                        dtypes[name].append(
                            (mesh_names[0], ext + ply_dtypes[line[2]]))
                        # the rest of the numbers have the same dtype
                        dt = ext + ply_dtypes[line[3]]

                    for j in range(1, len(mesh_names)):
                        dtypes[name].append((mesh_names[j], dt))
                else:
                    if fmt == "ascii":
                        dtypes[name].append((line[2].decode(), ply_dtypes[line[1]]))
                    else:
                        dtypes[name].append((line[2].decode(), ext + ply_dtypes[line[1]]))
            count += 1

        # for binary files: remember where the header ends
        end_header = ply.tell()

    data = {}

    if fmt == 'ascii':
        top = count
        bottom = 0 if mesh_size is None else mesh_size

        names = [x[0] for x in dtypes["vertex"]]

        data["points"] = pd.read_csv(filename, sep=" ", header=None, engine="python", skiprows=top,
                                     skipfooter=bottom, usecols=names, names=names)

        for n, col in enumerate(data["points"].columns):
            data["points"][col] = data["points"][col].astype(dtypes["vertex"][n][1])

        if mesh_size:
            top = count + points_size

            names = np.array([x[0] for x in dtypes["face"]])
            usecols = [1, 2, 3, 5, 6, 7, 8, 9, 10] if has_texture else [1, 2, 3]
            names = names[usecols]

            data["faces"] = pd.read_csv(filename, sep=" ", header=None, engine="python",
                                        skiprows=top, usecols=usecols, names=names)

            for n, col in enumerate(data["faces"].columns):
                data["faces"][col] = data["faces"][col].astype(dtypes["face"][n + 1][1])

    else:
        with open(filename, 'rb') as ply:
            ply.seek(end_header)
            points_np = np.fromfile(ply, dtype=dtypes["vertex"], count=points_size)
            if ext != sys_byteorder:
                points_np = points_np.byteswap().newbyteorder()
            data["points"] = pd.DataFrame(points_np)
            if mesh_size:
                mesh_np = np.fromfile(ply, dtype=dtypes["face"], count=mesh_size)
                if ext != sys_byteorder:
                    mesh_np = mesh_np.byteswap().newbyteorder()
                data["faces"] = pd.DataFrame(mesh_np)
                data["faces"].drop('n_points', axis=1, inplace=True)

    return data


def write_ply(filename, points=None, faces=None, as_text=True):
    """
    Write points and faces to a .ply file
    Parameters
    ----------
    filename: str
        The created file will be named with this
    points: ndarray
    faces: ndarray
    as_text: boolean
        Set the write mode of the file: ascii if True (the default), binary otherwise
    Returns
    -------
    boolean
        True if no problems occurred
    """
    filename = str(filename)
    if not filename.endswith('ply'):
        filename += '.ply'

    # open in text mode to write the header
    with open(filename, 'w') as ply:
        header = ['ply']

        if as_text:
            header.append('format ascii 1.0')
        else:
            header.append('format binary_' + sys.byteorder + '_endian 1.0')

        if points is not None:
            points = pd.DataFrame(points, columns=['x', 'y', 'z'])
            header.extend(describe_element('vertex', points))
        if faces is not None:
            faces = pd.DataFrame(faces.copy())
            faces.insert(loc=0, column="n_points", value=3)
            faces["n_points"] = faces["n_points"].astype("u1")
            header.extend(describe_element('face', faces))

        header.append('end_header')

        for line in header:
            ply.write("%s\n" % line)

    if as_text:
        if points is not None:
            points.to_csv(filename, sep=" ", index=False, header=False, mode='a', encoding='ascii')
        if faces is not None:
            faces.to_csv(filename, sep=" ", index=False, header=False, mode='a', encoding='ascii')
    else:
        with open(filename, 'ab') as ply:
            if points is not None:
                points.to_records(index=False).tofile(ply)
            if faces is not None:
                faces.to_records(index=False).tofile(ply)

    return True
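
A minimal roundtrip sketch for these helpers (the file path is hypothetical):
```python
import numpy as np
from io_util import ply

# Write three points as an ascii ply, then read them back into a pandas DataFrame
points = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
ply.write_ply('/tmp/triangle_points.ply', points=points, as_text=True)
data = ply.read_ply('/tmp/triangle_points.ply')
print(data['points'])  # columns x, y, z
```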
@ -0,0 +1,69 @@
from typing import List

import numpy as np
from scipy.special import comb
from pyquaternion import Quaternion

from math_util.util import find_closest_orthogonal_matrix


def avoid_jumps(cur: Quaternion, prev: Quaternion):
    """
    Avoid jumps between quaternions that happen when a flipped quaternion follows a non-flipped one
    :param cur: Current quaternion
    :param prev: Previous quaternion
    :return: A new current quaternion which is either the original one or the sign-flipped version thereof
    """
    if (prev - cur).norm < (prev + cur).norm:
        return cur
    else:
        return -cur


def bernstein_poly(i, n, t):
    """
    The Bernstein polynomial of n, i as a function of t
    :param i: Step
    :param n: Total number of steps
    :param t: Variable
    :return: Bernstein polynomial at t with parameters i and n
    """
    return comb(n, i) * (t**(n-i)) * (1 - t)**i


def bezier_curve(control_poses: List[np.ndarray], num_steps=1000):
    """
    Given a set of control points, return the bezier curve defined by the control points.
    See http://processingjs.nihongoresources.com/bezierinfo/
    :param control_poses: List of poses (4x4 numpy arrays (R|t))
    :param num_steps: Number of camera poses to interpolate between the first and last keypoint, considering the key
        points in between (but not necessarily passing through them)
    :return: List with num_steps interpolated camera poses (4x4 numpy arrays (R|t))
    """
    interpolated_camera_poses = []

    control_quaternions = [Quaternion(matrix=find_closest_orthogonal_matrix(pose[:3, :3])) for pose in control_poses]
    control_rotations = [control_quaternions[0]]
    for quat_idx in range(1, len(control_quaternions)):
        control_rotations.append(avoid_jumps(control_quaternions[quat_idx], control_rotations[-1]))
    control_rotations = np.stack([q.elements for q in control_rotations])
    control_points = np.stack([pose[:3, 3] for pose in control_poses])

    t = np.linspace(0.0, 1.0, num_steps)

    polynomial_array = np.array([bernstein_poly(i, len(control_poses)-1, t) for i in range(0, len(control_poses))])

    interpolated_rotations = np.stack([np.dot(control_rotations[:, i], polynomial_array) for i in range(4)], axis=1)[::-1]
    interpolated_points = np.stack([np.dot(control_points[:, i], polynomial_array) for i in range(3)], axis=1)[::-1]

    for rotation, point in zip(interpolated_rotations, interpolated_points):
        ext = np.eye(4)
        ext[:3, :3] = Quaternion(rotation).rotation_matrix
        ext[:3, 3] = point
        interpolated_camera_poses.append(ext)

    return interpolated_camera_poses


if __name__ == "__main__":
    raise NotImplementedError("Cannot call this math_util script directly")
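
Two quick sanity checks on the pieces above. Note that this bernstein_poly variant is reversed compared to the textbook definition (t and 1-t are swapped), which bezier_curve compensates for with its [::-1] reversals:
```python
import numpy as np
from math_util.bezier_curve import bernstein_poly

# The n+1 Bernstein polynomials of degree n form a partition of unity on [0, 1],
# so every interpolated position is a convex combination of the control points
t = np.linspace(0., 1., 11)
assert np.allclose(sum(bernstein_poly(i, 3, t) for i in range(4)), 1.)

# In this reversed variant, the i=n polynomial is the one active at t=0 (and i=0 at t=1),
# which is why the curve still passes exactly through the first and last control poses
assert bernstein_poly(3, 3, 0.) == 1. and bernstein_poly(0, 3, 1.) == 1.
```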
@ -0,0 +1,69 @@
from typing import List

import numpy as np
from pyquaternion import Quaternion

from math_util.util import find_closest_orthogonal_matrix


def catmull_rom_spline(camera_poses: List[np.ndarray], num_points: int) -> List[np.ndarray]:
    """
    Interpolate points on a Catmull-Rom spline
    Reading: Edwin Catmull, Raphael Rom, A CLASS OF LOCAL INTERPOLATING SPLINES,
    Computer Aided Geometric Design, Academic Press, 1974, Pages 317-326
    :param camera_poses: List of 4 camera poses. Points are interpolated on the curve segment between poses 1 and 2
    :param num_points: Number of points to interpolate between poses 1 and 2
    :return: A list of num_points points on the curve segment between poses 1 and 2
    """
    # Formulas from https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
    assert len(camera_poses) == 4, "Need exactly 4 camera poses for interpolation"
    interpolated_cameras = []

    # rotations
    q1 = Quaternion(matrix=find_closest_orthogonal_matrix(camera_poses[1][:3, :3]))
    q2 = Quaternion(matrix=find_closest_orthogonal_matrix(camera_poses[2][:3, :3]))

    # translations
    translations = [pose[:3, 3] for pose in camera_poses]

    if np.linalg.norm(translations[1] - translations[2]) < 0.001:
        # The two inner poses (almost) coincide: keep the position fixed and only slerp the rotation
        for i in range(num_points):
            ext = np.eye(4)
            ext[:3, :3] = Quaternion.slerp(q1, q2, float(i) / num_points).rotation_matrix
            ext[:3, 3] = translations[1]
            interpolated_cameras.append(ext)
    else:
        eps = 0.001
        alpha = 0.5  # Default for centripetal catmull-rom splines

        # Calculate knot parameters
        t0 = 0.0
        t1 = np.linalg.norm(translations[1] - translations[0]) ** alpha + t0
        t2 = np.linalg.norm(translations[2] - translations[1]) ** alpha + t1
        t3 = np.linalg.norm(translations[3] - translations[2]) ** alpha + t2

        # Calculate points on curve segment C
        new_points = []
        for t in np.arange(t1, t2, (t2 - t1) / float(num_points)):
            A1 = (t1 - t) / max(eps, t1 - t0) * translations[0] + (t - t0) / max(eps, t1 - t0) * translations[1]
            A2 = (t2 - t) / max(eps, t2 - t1) * translations[1] + (t - t1) / max(eps, t2 - t1) * translations[2]
            A3 = (t3 - t) / max(eps, t3 - t2) * translations[2] + (t - t2) / max(eps, t3 - t2) * translations[3]

            B1 = (t2 - t) / max(eps, t2 - t0) * A1 + (t - t0) / max(eps, t2 - t0) * A2
            B2 = (t3 - t) / max(eps, t3 - t1) * A2 + (t - t1) / max(eps, t3 - t1) * A3

            C = (t2 - t) / max(eps, t2 - t1) * B1 + (t - t1) / max(eps, t2 - t1) * B2
            new_points.append(C)

        # For each point, also slerp the rotation matrix
        for idx, point in enumerate(new_points):
            ext = np.eye(4)
            ext[:3, :3] = Quaternion.slerp(q1, q2, float(idx) / len(new_points)).rotation_matrix
            ext[:3, 3] = point
            interpolated_cameras.append(ext)

    return interpolated_cameras


if __name__ == "__main__":
    raise NotImplementedError("Cannot call this math_util script directly")
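
A small usage sketch with synthetic poses: four key poses translated along x, with interpolation happening on the segment between the middle two:
```python
import numpy as np
from math_util.catmull_rom_spline import catmull_rom_spline

# Four poses at x = 0, 1, 2, 3; points are interpolated between poses 1 and 2.
# Note: exactly orthogonal (here identity) rotations make find_closest_orthogonal_matrix
# run its full iteration budget and may emit a numpy warning; still fast for 3x3 matrices.
poses = []
for x in range(4):
    pose = np.eye(4)
    pose[0, 3] = float(x)
    poses.append(pose)

interpolated = catmull_rom_spline(poses, num_points=10)
xs = [p[0, 3] for p in interpolated]
assert min(xs) >= 1. and max(xs) < 2.  # stays on the middle segment
```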
@ -0,0 +1,43 @@

import numpy as np


def find_closest_orthogonal_matrix(A: np.ndarray) -> np.ndarray:
    """
    Find the closest orthogonal matrix to *A* using an iterative method.
    Based on the code from the REMOVE_SOURCE_LEAKAGE function of the OSL Matlab package.
    Reading:
    Colclough GL et al., A symmetric multivariate leakage correction for MEG connectomes.,
    Neuroimage. 2015 Aug 15;117:439-48. doi: 10.1016/j.neuroimage.2015.03.071

    :param A: array shaped k, n, where k is the number of channels and n the number of data points
    :return: Orthogonalized matrix with amplitudes preserved
    """
    # Code from https://gist.github.com/dokato/7a997b2a94a0ec6384a5fd0e91e45f8b
    MAX_ITER = 2000
    TOLERANCE = np.max((1, np.max(A.shape) * np.linalg.svd(A.T, False, False)[0])) * np.finfo(A.dtype).eps
    reldiff = lambda a, b: 2 * abs(a - b) / (abs(a) + abs(b))
    convergence = lambda rho, prev_rho: reldiff(rho, prev_rho) <= TOLERANCE

    A_b = A.conj()
    d = np.sqrt(np.sum(A * A_b, axis=1))

    rhos = np.zeros(MAX_ITER)

    for i in range(MAX_ITER):
        scA = A.T * d
        u, s, vh = np.linalg.svd(scA, False)
        V = np.dot(u, vh)
        d = np.sum(A_b * V.T, axis=1)

        L = (V * d).T
        E = A - L
        rhos[i] = np.sqrt(np.sum(E * E.conj()))
        if i > 0 and convergence(rhos[i], rhos[i - 1]):
            break

    return L


if __name__ == "__main__":
    raise NotImplementedError("Cannot call this math_util script directly")
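
A quick check of the helper on a synthetic input (a rotation perturbed by small noise, similar to what pose interpolation produces; pyquaternion's Quaternion(matrix=...) constructor rejects non-orthogonal matrices, hence this projection step in the spline code):
```python
import numpy as np
from math_util.util import find_closest_orthogonal_matrix

# 90-degree rotation about z, perturbed with small noise
rng = np.random.RandomState(0)
noisy = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]]) + 1e-4 * rng.randn(3, 3)

closest = find_closest_orthogonal_matrix(noisy)
gram = closest @ closest.T
# Rows are mutually orthogonal; their norms ("amplitudes") stay close to the input's row norms
assert np.allclose(gram, np.diag(np.diag(gram)), atol=1e-8)
```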
@ -0,0 +1,162 @@
import sys
import argparse
from pathlib import Path
from typing import List, Tuple

from tqdm import tqdm

sys.path.append('')
import set_python_path
from mitsuba.core import *
from mitsuba.render import RenderJob

import util
import cameras


def prepare_ply_mesh(filepath: Path, spectrum, transformation=None):
    """
    Prepare a ply mesh: Load it from the given filepath, apply the given color spectrum and (optionally) a transformation.
    Uses a two-sided bsdf (so no black faces) and recomputes the normals before returning
    :param filepath: Path to a ply file
    :param spectrum: The color spectrum to use for the diffuse bsdf used for the mesh
    :param transformation: Optional transformation to apply to the mesh (toWorld transform)
    :return: A mitsuba mesh object
    """
    assert filepath.suffix == '.ply', f"{filepath} does not seem to be a ply file"
    mesh = PluginManager.getInstance().create({
        'type': 'ply',
        'filename': str(filepath),
        'bsdf': {
            'type': 'twosided',
            'bsdf': {
                'type': 'diffuse',
                'diffuseReflectance': spectrum
            }
        },
        'toWorld': transformation if transformation is not None else Transform()
    })
    mesh.computeNormals(True)

    return mesh


def render_multiple_perspectives(mesh_path: Tuple[Path, Path], sensors: list, num_workers=8) -> None:
    """
    Render one mesh from multiple camera perspectives
    :param mesh_path: Path tuple (input_filepath, output_dirpath) of the mesh to render
    :param sensors: The Mitsuba sensor definitions to render the mesh with
    :param num_workers: Number of CPU cores to use for rendering
    :return:
    """
    queue = util.prepare_queue(num_workers)

    input_path, output_path = mesh_path
    # Make Mesh
    mesh = prepare_ply_mesh(input_path, util.get_predefined_spectrum('light_blue'))

    with tqdm(total=len(sensors), bar_format='Total {percentage:3.0f}% {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}] {desc}', dynamic_ncols=True) as t:
        util.redirect_logger(tqdm.write, EWarn, t)
        for idx, sensor in enumerate(sensors):
            t.write(f"Rendering with sensor {idx}")
            # Make Scene
            scene = util.construct_simple_scene([mesh], sensor)
            scene.setDestinationFile(str(output_path / f'{input_path.stem}-{idx}.png'))
            # Make Result
            job = RenderJob(f'Render-{input_path.stem}-{idx}', scene, queue)
            job.start()

            queue.waitLeft(0)
            queue.join()
            t.update()


def render_multiple_meshes(mesh_paths: List[Tuple[Path, Path]],
                           radius_multiplier=3., positioning_vector=Vector3(0, 1, 1), tilt=Transform.rotate(util.axis_unit_vector('x'), 20.),
                           width=1920, height=1440, num_samples=256, num_workers=8) -> None:
    """
    Render multiple meshes with the camera always in the same relative position (based on the mesh).
    :param mesh_paths: Path tuples (input_filepath, output_dirpath) of the meshes to render
    :param radius_multiplier: Parameter passed to cameras.create_transform_on_bbsphere
    :param positioning_vector: Parameter passed to cameras.create_transform_on_bbsphere
    :param tilt: Parameter passed to cameras.create_transform_on_bbsphere
    :param width: Parameter passed to cameras.create_sensor_from_transform
    :param height: Parameter passed to cameras.create_sensor_from_transform
    :param num_samples: Parameter passed to cameras.create_sensor_from_transform
    :param num_workers: Number of CPU cores to use for rendering
    :return:
    """
    queue = util.prepare_queue(num_workers)

    mesh_paths = list(mesh_paths)
    with tqdm(total=len(mesh_paths), bar_format='Total {percentage:3.0f}% {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}] {desc}', dynamic_ncols=True) as t:
        util.redirect_logger(tqdm.write, EWarn, t)
        for mesh_path in mesh_paths:
            input_path, output_path = mesh_path
            t.write(f'Rendering {input_path.stem}')
            # Make Mesh
            mesh = prepare_ply_mesh(input_path, util.get_predefined_spectrum('light_blue'))
            # Make sensor
            sensor_transform = cameras.create_transform_on_bbsphere(mesh.getAABB(), radius_multiplier, positioning_vector, tilt)
            sensor = cameras.create_sensor_from_transform(sensor_transform, width, height, fov=45., num_samples=num_samples)
            # Make Scene
            scene = util.construct_simple_scene([mesh], sensor)
            scene.setDestinationFile(str(output_path / f'{input_path.stem}.png'))
            # Make Result
            job = RenderJob(f'Render-{input_path.stem}', scene, queue)
            job.start()

            queue.waitLeft(0)
            queue.join()
            t.update()


def main(args):
    input_paths = [Path(path_str) for path_str in args.input_paths]
    output_path = Path(args.output)

    assert all([path.exists() for path in input_paths])
    assert output_path.parent.is_dir()

    if not output_path.is_dir():
        output_path.mkdir(parents=False)

    if args.scenes_list is None and args.scene is None:
        mesh_paths = util.generate_mesh_paths(input_paths, output_path)
    elif args.scenes_list is not None and args.scene is None:
        mesh_paths = util.generate_mesh_paths(input_paths, output_path, util.read_filelist(Path(args.scenes_list)))
    else:  # args.scene is not None
        mesh_paths = util.generate_mesh_paths(input_paths, output_path, [args.scene])

    if args.cameras is None:
        render_multiple_meshes(mesh_paths,
                               radius_multiplier=3., positioning_vector=Vector3(0, 1, 1),
                               tilt=Transform.rotate(util.axis_unit_vector('x'), 20.),
                               width=args.width, height=args.height, num_samples=args.samples,
                               num_workers=args.workers)
    else:
        sensor_transforms = cameras.read_meshlab_sensor_transforms(Path(args.cameras))
        sensors = [cameras.create_sensor_from_transform(transform, args.width, args.height, fov=45., num_samples=args.samples)
                   for transform in sensor_transforms]
        render_multiple_perspectives(next(mesh_paths), sensors, num_workers=args.workers)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Render directory')
    parser.add_argument('input_paths', nargs='*', help='Path(s) to directory containing all files to render')
    parser.add_argument('-o', '--output', required=True, help='Path to write renderings to')

    # Scene render parameters
    parser.add_argument('--scenes_list', required=False, help='Path to file containing filenames to render in base path')
    parser.add_argument('--scene', required=False, help='One scene. Overrides scenes_list')
    parser.add_argument('--cameras', required=False, help='XML file containing meshlab cameras')

    # Sensor parameters
    parser.add_argument('--width', default=1280, type=int, help='Width of the resulting image')
    parser.add_argument('--height', default=960, type=int, help='Height of the resulting image')
    parser.add_argument('--samples', default=128, type=int, help='Number of integrator samples per pixel')

    # General render parameters
    parser.add_argument('--workers', required=False, default=8, type=int, help="How many concurrent workers to use")

    main(parser.parse_args())
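
A sketch of driving these helpers directly from Python rather than through the CLI (the paths are hypothetical; mitsuba must be importable and the output directory must exist):
```python
from pathlib import Path
import mesh

# Render one mesh from the default camera position on its bounding sphere
mesh_paths = [(Path('scenes/scene0000.ply'), Path('out'))]
mesh.render_multiple_meshes(mesh_paths, num_samples=128, num_workers=8)
```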
@ -0,0 +1,130 @@
import sys
import argparse
from pathlib import Path
from typing import List, Tuple
from itertools import cycle

import numpy as np
from tqdm import tqdm

sys.path.append('')
import set_python_path
from mitsuba.core import PluginManager, Point3, Spectrum, Vector3, Transform, AABB, EWarn
from mitsuba.render import RenderJob

from io_util import ply
import util
import cameras


def create_spheres(pointcloud: np.ndarray, spectrum: Spectrum, sphere_radius: float = 1.) -> List:
    """
    Create little spheres at the pointcloud's points' locations.
    :param pointcloud: 3D pointcloud, as Nx3 ndarray
    :param spectrum: The color spectrum to use with the spheres' diffuse bsdf
    :param sphere_radius: The radius to use for each sphere
    :return: A list of mitsuba shapes, to be added to a Scene
    """
    spheres = []
    for row in pointcloud:
        sphere = PluginManager.getInstance().create({
            'type': 'sphere',
            'center': Point3(float(row[0]), float(row[1]), float(row[2])),
            'radius': sphere_radius,
            'bsdf': {
                'type': 'diffuse',
                'diffuseReflectance': spectrum
            },
        })
        spheres.append(sphere)

    return spheres


def get_pointcloud_bbox(pointcloud: np.ndarray) -> AABB:
    """
    Create a mitsuba bounding box by using min and max on the input ndarray
    :param pointcloud: The pointcloud (Nx3) to calculate the bounding box from
    :return: The bounding box around the given pointcloud
    """
    min_values = np.min(pointcloud, axis=0)
    max_values = np.max(pointcloud, axis=0)
    min_corner = Point3(*[float(elem) for elem in min_values])
    max_corner = Point3(*[float(elem) for elem in max_values])
    return AABB(min_corner, max_corner)


def load_from_ply(filepath: Path) -> np.ndarray:
    """
    Load points from a ply file
    :param filepath: The path of the file to read from
    :return: An ndarray (Nx3) containing all read points
    """
    points = ply.read_ply(str(filepath))['points'].to_numpy()
    return points


def render_pointclouds(pointcloud_paths: List[Tuple[Path, Path]], sensor, sphere_radius=1., num_workers=8) -> None:
    """
    Render pointclouds by placing little Mitsuba spheres at all point locations
    :param pointcloud_paths: Path tuples (input_mesh, output_path) for all pointcloud files
    :param sensor: The sensor containing the transform to render with
    :param sphere_radius: Radius of the individual point spheres
    :param num_workers: Number of concurrent render workers
    :return:
    """
    queue = util.prepare_queue(num_workers)

    pointcloud_paths = list(pointcloud_paths)
    with tqdm(total=len(pointcloud_paths), bar_format='Total {percentage:3.0f}% {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}] {desc}', dynamic_ncols=True) as t:
        util.redirect_logger(tqdm.write, EWarn, t)
        for pointcloud_path in pointcloud_paths:
            input_path, output_path = pointcloud_path
            t.write(f'Rendering {input_path.stem}')
            # Make Pointcloud Spheres
            spheres = create_spheres(load_from_ply(input_path), util.get_predefined_spectrum('orange'), sphere_radius)
            # Make Scene
            scene = util.construct_simple_scene(spheres, sensor)
            scene.setDestinationFile(str(output_path / f'{input_path.stem}.png'))
            # Make Result
            job = RenderJob(f'Render-{input_path.stem}', scene, queue)
            job.start()

            queue.waitLeft(0)
            queue.join()
            t.update()


def main(args):
    input_filepaths = [Path(elem) for elem in args.input]
    output_path = Path(args.output)

    assert all([elem.is_file() for elem in input_filepaths])
    assert output_path.is_dir()

    bbox = get_pointcloud_bbox(load_from_ply(input_filepaths[0]))
    sensor_transform = cameras.create_transform_on_bbsphere(bbox,
                                                            radius_multiplier=3., positioning_vector=Vector3(0, -1, 1),
                                                            tilt=Transform.rotate(util.axis_unit_vector('x'), -25.))
    sensor = cameras.create_sensor_from_transform(sensor_transform, args.width, args.height,
                                                  fov=45., num_samples=args.samples)
    render_pointclouds(zip(input_filepaths, cycle([output_path])), sensor, args.radius, args.workers)


if __name__ == '__main__':
    parser = argparse.ArgumentParser("Render pointcloud with mitsuba by placing little spheres at the points' positions")
    parser.add_argument('input', nargs='+', help="Path(s) to the ply file(s) containing a pointcloud(s) to render")
    parser.add_argument('-o', '--output', required=True, help='Path to write renderings to')

    # Pointcloud parameters
    parser.add_argument('--radius', default=1., type=float, help='Radius of a single point')

    # Sensor parameters
    parser.add_argument('--width', default=1280, type=int, help='Width of the resulting image')
    parser.add_argument('--height', default=960, type=int, help='Height of the resulting image')
    parser.add_argument('--samples', default=128, type=int, help='Number of integrator samples per pixel')

    # General render parameters
    parser.add_argument('--workers', required=False, default=8, type=int, help="How many concurrent workers to use")

    main(parser.parse_args())
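
The same idea for a point cloud, driven from Python (hypothetical paths; each point becomes a sphere of the given radius):
```python
from pathlib import Path
from mitsuba.core import Vector3, Transform
import pointcloud
import cameras
import util

# Place the camera on the cloud's bounding sphere, then render with small point spheres
points = pointcloud.load_from_ply(Path('clouds/scene0000.ply'))
bbox = pointcloud.get_pointcloud_bbox(points)
transform = cameras.create_transform_on_bbsphere(bbox, radius_multiplier=3., positioning_vector=Vector3(0, -1, 1),
                                                 tilt=Transform.rotate(util.axis_unit_vector('x'), -25.))
sensor = cameras.create_sensor_from_transform(transform, 1280, 960, fov=45., num_samples=128)
pointcloud.render_pointclouds([(Path('clouds/scene0000.ply'), Path('out'))], sensor, sphere_radius=0.01)
```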
@ -0,0 +1,20 @@
[tool.poetry]
name = "mitsuba-visualize"
version = "1.0.0"
description = ""
authors = ["Christian Diller <git@christian-diller.de>"]

[tool.poetry.dependencies]
python = "^3.7"
numpy = "^1.18.2"
scipy = "^1.4.1"
pyquaternion = "^0.9.5"
tqdm = "^4.45.0"
pandas = "^1.0.3"
moviepy = "==1.0.1"

[tool.poetry.dev-dependencies]

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
@ -0,0 +1,12 @@
|
|||
import os
import sys
from pathlib import Path

# Adjust this to point to the folder mitsuba was cloned and built in; it has to contain dist/python
MITSUBA_BASE = Path('/home/christian/github/mitsuba')

# Make the compiled mitsuba python bindings importable
for path in (MITSUBA_BASE / 'dist/python').iterdir():
    if str(path) not in sys.path:
        sys.path.append(str(path.absolute()))

# Prepend the mitsuba binary directory so the renderer executables are found
if str(MITSUBA_BASE / 'dist') not in os.environ['PATH']:
    os.environ['PATH'] = str(MITSUBA_BASE / 'dist') + os.pathsep + os.environ['PATH']
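Importing this module for its side effects is all that is needed; a minimal sketch of the intended use, mirroring how util.py below consumes it:

    import set_python_path  # must run before any mitsuba import
    from mitsuba.core import Vector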
@ -0,0 +1,200 @@
from pathlib import Path
from typing import List
import numpy as np

import set_python_path  # noqa: F401 -- extends sys.path; must come before the mitsuba imports below
from mitsuba.core import Vector, Spectrum, Scheduler, LocalWorker, RemoteWorker, PluginManager, Appender, EInfo, Thread, Transform
from mitsuba.render import Scene, RenderQueue


def axis_unit_vector(axis: str):
    """
    Create a unit vector along the given axis
    :param axis: 3D axis x, y, or z
    :return: A Vector of length one along the specified axis
    """
    if axis == 'x':
        return Vector(1, 0, 0)
    elif axis == 'y':
        return Vector(0, 1, 0)
    elif axis == 'z':
        return Vector(0, 0, 1)
    else:
        raise ValueError("Choose between x, y, and z")


def read_filelist(filepath: Path) -> List[str]:
    """
    Read a list of files from a file (one file per line). Lines beginning with # are ignored
    :param filepath: The path of the file containing filenames
    :return: A list of filenames as strings
    """
    assert filepath.is_file()
    with open(filepath, 'r') as f:
        return [line for line in f.read().splitlines() if not line.startswith('#')]


def create_spectrum_from_rgb(r: int, g: int, b: int) -> Spectrum:
    """
    Create a Mitsuba Spectrum from r, g, and b values
    :param r: Red component
    :param g: Green component
    :param b: Blue component
    :return: A Mitsuba Spectrum
    """
    assert (0 <= r <= 255 and 0 <= g <= 255 and 0 <= b <= 255), "Provide integer rgb values in the range 0-255"
    spectrum = Spectrum()
    spectrum.fromSRGB(float(r) / 255., float(g) / 255., float(b) / 255.)

    return spectrum


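# Example (the same values the 'orange' preset below uses):
#   orange = create_spectrum_from_rgb(200, 160, 0)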
def get_predefined_spectrum(name: str) -> Spectrum:
    """
    Get a predefined spectrum from a name string.
    Currently supports: light_blue, cornflower_blue, orange
    :param name: The spectrum name
    :return: A Mitsuba Spectrum
    """
    if name == 'light_blue':
        return create_spectrum_from_rgb(160, 180, 200)
    elif name == 'cornflower_blue':
        return create_spectrum_from_rgb(100, 149, 237)
    elif name == 'orange':
        return create_spectrum_from_rgb(200, 160, 0)
    else:
        raise ValueError(f"Unknown spectrum name: {name}")


def prepare_queue(num_local_workers, remote_stream=None) -> RenderQueue:
    """
    Prepare a Mitsuba render queue with the given amount of workers
    :param num_local_workers: Number of local rendering workers on the CPU (corresponding to CPU cores fully used during rendering)
    :param remote_stream: Optional list of streams to remote machines; one RemoteWorker is registered per stream
    :return: A mitsuba RenderQueue object
    """
    scheduler = Scheduler.getInstance()
    queue = RenderQueue()
    for worker_idx in range(num_local_workers):
        local_worker = LocalWorker(worker_idx, f'LocalWorker-{worker_idx}')
        scheduler.registerWorker(local_worker)
    if remote_stream is not None:
        for idx, stream in enumerate(remote_stream):
            remote_worker = RemoteWorker(f'RemoteWorker-{idx}', stream)
            scheduler.registerWorker(remote_worker)
    scheduler.start()

    return queue


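# Usage sketch (mirrors how the render scripts drive the queue; RenderJob comes from mitsuba.render):
#   queue = prepare_queue(num_local_workers=8)
#   job = RenderJob('Render-example', scene, queue)
#   job.start()
#   queue.waitLeft(0)
#   queue.join()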
def construct_simple_scene(scene_objects, sensor) -> Scene:
    """
    Construct a simple scene containing the given objects and using the given sensor.
    Uses the path integrator and a constant emitter
    :param scene_objects: All scene child objects to add
    :param sensor: The mitsuba sensor definition to use for this scene
    :return: The scene created, already configured and initialized
    """
    pmgr = PluginManager.getInstance()
    integrator = pmgr.create({'type': 'path'})
    emitter = pmgr.create({'type': 'constant'})

    scene = Scene()
    scene.addChild(integrator)
    scene.addChild(emitter)
    scene.addChild(sensor)
    for obj in scene_objects:
        scene.addChild(obj)

    scene.configure()
    scene.initialize()

    return scene


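# Usage sketch (the render scripts pass shapes such as the pointcloud spheres plus a sensor):
#   scene = construct_simple_scene(spheres, sensor)
#   scene.setDestinationFile(str(output_path / 'rendering.png'))  # then render via a RenderJob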
def convert_transform2numpy(transform: Transform) -> np.ndarray:
    """
    Get a numpy array containing the same transformation matrix values as a Mitsuba Transform object
    :param transform: Mitsuba Transform
    :return: 4x4 Numpy array representing the transformation matrix
    """
    mitsuba_matrix = transform.getMatrix()  # Fetch the Matrix4x4 once instead of once per element
    matrix = np.zeros([4, 4], dtype=float)
    for i in range(4):
        for j in range(4):
            matrix[i, j] = mitsuba_matrix[i, j]
    return matrix


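# Illustrative example: the translation column of a pure translation survives the conversion:
#   t = Transform.translate(Vector(1, 2, 3))
#   convert_transform2numpy(t)[:3, 3]  # -> array([1., 2., 3.])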
def distances_from_control_points(camera_poses: List[Transform], control_poses: List[Transform]) -> List[float]:
    """
    Calculate the distance to the closest control pose for every camera pose
    :param camera_poses: Camera poses in a trajectory
    :param control_poses: Control poses to calculate distances to
    :return: The minimal distance to a control pose for every camera pose
    """
    # convert_transform2numpy already calls getMatrix() internally, so the Transforms are passed directly
    camera_matrices = [convert_transform2numpy(pose) for pose in camera_poses]
    control_matrices = [convert_transform2numpy(pose) for pose in control_poses]
    distances = []
    for pose in camera_matrices:
        curr_distances = [np.linalg.norm(pose[:3, 3] - control_pose[:3, 3]) for control_pose in control_matrices]
        distances.append(np.min(curr_distances))
    return distances


def generate_mesh_paths(input_paths: List[Path], base_output_path: Path, selected_meshes: List[str] = None):
    """
    Generator for mesh paths. Takes a list of input paths, a base output path, and a list of selected meshes.
    If an item in input_paths is a ply file, it is yielded directly. If it is a directory, its entries are
    traversed and yielded if they are ply files and in the list of selected meshes
    :param input_paths: List of ply files or directories containing ply files, or a mixture of both
    :param base_output_path: The base output path. If input_paths contains directories, their last components
    translate into a subdirectory of base_output_path
    :param selected_meshes: A list of selected meshes in the input_paths subdirectories. Elements have to end with .ply.
    Does not apply to elements in input_paths that are ply files.
    :return: Yields tuples: (path to ply file, directory output path for this file)
    """
    for input_path in input_paths:
        if input_path.is_file():  # A ply file: yield it directly
            if input_path.suffix == '.ply':
                yield input_path, base_output_path
        else:  # A directory: create its output subdirectory once, then iterate over all files in it
            output_path = base_output_path / input_path.name
            if not output_path.is_dir():
                output_path.mkdir(parents=False)
            for file in input_path.iterdir():
                if file.suffix == '.ply' and (selected_meshes is None or str(file.name) in selected_meshes):
                    yield file, output_path


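# Example (hypothetical paths): with input_paths=[Path('scans')] and base_output_path=Path('out'),
# a file scans/scene0000.ply is yielded as (Path('scans/scene0000.ply'), Path('out/scans')).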
def redirect_logger(write_function=print, log_level=EInfo, tqdm_progressbar=None):
    """
    Redirect Mitsuba's Logger output to a custom function (so it can be used with e.g. tqdm).
    Additionally, can be used to control the log level
    :param write_function: A function like print() or tqdm.write() that is used to write log messages and progress bars
    :param log_level: The Mitsuba log level (mitsuba.EError, ...)
    :param tqdm_progressbar: Optionally, pass a tqdm progress bar. The Mitsuba rendering bar will be set as that bar's description
    :return:
    """
    class RedirectedAppender(Appender):
        def __init__(self, write_function, tqdm_progressbar):
            self.write_function = write_function
            self.tqdm_progressbar = tqdm_progressbar
            super().__init__()

        def append(self, log_level, message):
            self.write_function(message)

        def logProgress(self, progress, name, formatted, eta):
            if self.tqdm_progressbar is not None:
                self.tqdm_progressbar.set_description_str(formatted.replace('\r', ''), refresh=True)
            else:
                # Both print() and tqdm.write() accept the end keyword
                self.write_function(f"\r{formatted}", end='')

    logger = Thread.getThread().getLogger()
    logger.clearAppenders()
    logger.addAppender(RedirectedAppender(write_function, tqdm_progressbar))
    logger.setLogLevel(log_level)


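# Usage sketch (this is how the render scripts call it; EWarn is imported from mitsuba.core there):
#   with tqdm(total=len(paths)) as t:
#       redirect_logger(tqdm.write, EWarn, t)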
if __name__ == '__main__':
    raise NotImplementedError("Cannot call the util script directly")