% mitsuba/doc/python.tex
\section{Python integration}
\label{sec:python}
A recent feature of Mitsuba is a Python interface to the renderer API.
While the interface is still limited at this point, it can already be
used for many useful purposes. To access the API, start your Python
interpreter and enter
\begin{python}
import mitsuba
\end{python}
\paragraph{Mac OS X:}
For this to work on Mac OS X, you will first have to run the ``\emph{Apple
Menu}$\to$\emph{Command-line access}'' menu item from within Mitsuba.
In the unlikely case that you run into shared library loading issues (this is
taken care of by default), you may have to set the \code{LD\_LIBRARY\_PATH}
environment variable before starting Python so that it points to where the
Mitsuba libraries are installed (e.g. the \code{Mitsuba.app/Contents/Frameworks}
directory).
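For instance, assuming the application bundle resides in \code{/Applications}
(a hypothetical location; adjust it to match your installation), the variable
could be set as follows before launching the interpreter:
\begin{shell}
# Example only: point the loader at the bundled Mitsuba frameworks
export LD_LIBRARY_PATH=/Applications/Mitsuba.app/Contents/Frameworks
python
\end{shell}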
\paragraph{Windows and Linux:}
On Windows and \emph{non-packaged} Linux builds, you may have to explicitly
specify the required extension search path before issuing the \code{import} command, e.g.:
\begin{python}
import os, sys
# Specify the extension search path on Linux/Windows (may vary depending on your
# setup. If you compiled from source, 'path-to-mitsuba-directory' should be the
# 'dist' subdirectory)
# NOTE: On Windows, specify these paths using FORWARD slashes (i.e. '/' instead of
# '\' to avoid pitfalls with string escaping)
# Configure the search path for the Python extension module
sys.path.append('path-to-mitsuba-directory/python/<python version, e.g. 2.7>')
# Ensure that Python will be able to find the Mitsuba core libraries
os.environ['PATH'] = 'path-to-mitsuba-directory' + os.pathsep + os.environ['PATH']
import mitsuba
\end{python}
In rare cases when running on Linux, it may also be necessary to set the
\code{LD\_LIBRARY\_PATH} environment variable before starting Python so that it
points to where the Mitsuba core libraries are installed.
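For example, this could be done as follows before starting the interpreter (as
above, for a source build \code{path-to-mitsuba-directory} refers to the
\code{dist} subdirectory; append any existing value of the variable if one is set):
\begin{shell}
# Example only: make the Mitsuba core libraries visible to the dynamic linker
export LD_LIBRARY_PATH=path-to-mitsuba-directory
python
\end{shell}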
For an overview of the currently exposed API subset, please refer
to the following page: \url{http://www.mitsuba-renderer.org/api/group__libpython.html}.
\subsubsection{Accessing signatures in an interactive Python shell}
The plugin exports comprehensive Python-style docstrings; hence, a convenient
alternative way of getting information on classes, functions, or entire
namespaces is to inspect them directly from an interactive Python shell:
\begin{shell}
>>> help(mitsuba.core.Bitmap) # (can be applied to namespaces, classes, functions, etc.)
class Bitmap(Object)
| Method resolution order:
| Bitmap
| Object
| Boost.Python.instance
| __builtin__.object
|
| Methods defined here:
| __init__(...)
| __init__( (object)arg1, (EPixelFormat)arg2, (EComponentFormat)arg3, (Vector2i)arg4) -> None :
| C++ signature :
| void __init__(_object*,mitsuba::Bitmap::EPixelFormat,mitsuba::Bitmap::EComponentFormat,mitsuba::TVector2<int>)
|
| __init__( (object)arg1, (EFileFormat)arg2, (Stream)arg3) -> None :
| C++ signature :
| void __init__(_object*,mitsuba::Bitmap::EFileFormat,mitsuba::Stream*)
|
| clear(...)
| clear( (Bitmap)arg1) -> None :
| C++ signature :
| void clear(mitsuba::Bitmap {lvalue})
...
\end{shell}
The docstrings list the currently exported functionality, as well as C++ and Python signatures, but they
don't document what these functions actually do. The web API documentation is
the preferred source of this information.
\subsection{Basics}
Generally, the Python API tries to mimic the C++ API as closely as possible.
Where applicable, the Python classes and methods replicate overloaded operators,
overridable virtual function calls, and default arguments. Under rare circumstances,
some features are inherently non-portable due to fundamental differences between the
two programming languages. In this case, the API documentation will contain further
information.
Mitsuba's linear algebra-related classes are usable with essentially the
same syntax as their C++ versions --- for example, the following snippet
creates and rotates a unit vector.
\begin{python}
import mitsuba
from mitsuba.core import *
# Create a normalized direction vector
myVector = normalize(Vector(1.0, 2.0, 3.0))
# 90 deg. rotation around the Y axis
trafo = Transform.rotate(Vector(0, 1, 0), 90)
# Apply the rotation and display the result
print(trafo * myVector)
\end{python}
\subsection{Recipes}
The following section contains a series of ``recipes'' on how to do
certain things with the help of the Python bindings.
\subsubsection{Loading a scene}
The following script demonstrates how to use the
\code{FileResolver} and \code{SceneHandler} classes to
load a Mitsuba scene from an XML file:
\begin{python}
import mitsuba
from mitsuba.core import *
from mitsuba.render import SceneHandler
# Get a reference to the thread's file resolver
fileResolver = Thread.getThread().getFileResolver()
# Register any search paths needed to load scene resources (optional)
fileResolver.appendPath('<path to scene directory>')
# Optional: supply parameters that can be accessed
# by the scene (e.g. as $\text{\color{lstcomment}\itshape\texttt{\$}}$myParameter)
paramMap = StringMap()
paramMap['myParameter'] = 'value'
# Load the scene from an XML file
scene = SceneHandler.loadScene(fileResolver.resolve("scene.xml"), paramMap)
# Display a textual summary of the scene's contents
print(scene)
\end{python}
\subsubsection{Rendering a loaded scene}
Once a scene has been loaded, it can be rendered as follows:
\begin{python}
from mitsuba.core import *
from mitsuba.render import RenderQueue, RenderJob
import multiprocessing
scheduler = Scheduler.getInstance()
# Start up the scheduling system with one worker per local core
for i in range(0, multiprocessing.cpu_count()):
    scheduler.registerWorker(LocalWorker('wrk%i' % i))
scheduler.start()
# Create a queue for tracking render jobs
queue = RenderQueue()
scene.setDestinationFile('renderedResult')
# Create a render job and insert it into the queue
job = RenderJob('myRenderJob', scene, queue)
job.start()
# Wait for all jobs to finish and release resources
queue.waitLeft(0)
queue.join()
# Print some statistics about the rendering process
print(Statistics.getInstance().getStats())
\end{python}
\subsubsection{Rendering over the network}
To render over the network, you must first set up one or
more machines that run the \code{mtssrv} server (see \secref{mtssrv}).
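On each render node, it typically suffices to launch the server with its default
settings, in which case it listens on port 7554 (see \secref{mtssrv} for the
available command line options):
\begin{shell}
mtssrv
\end{shell}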
A network node can then be registered with the scheduler as follows:
\begin{python}
from mitsuba.core import *
# Connect to a socket on a named host or IP address
# 7554 is the default port of 'mtssrv'
stream = SocketStream('128.84.103.222', 7554)
# Create a remote worker instance that communicates over the stream
remoteWorker = RemoteWorker('netWorker', stream)
scheduler = Scheduler.getInstance()
# Register the remote worker (and any other potential workers)
scheduler.registerWorker(remoteWorker)
scheduler.start()
\end{python}
\subsubsection{Constructing custom scenes from Python}
Dynamically constructing Mitsuba scenes entails loading a series of external
plugins, instantiating them with custom parameters, and finally assembling
them into an object graph.
For instance, the following snippet shows how to create a basic
perspective sensor with a film that writes PNG images:
\begin{python}
from mitsuba.core import *
pmgr = PluginManager.getInstance()
# Encodes parameters on how to instantiate the 'perspective' plugin
sensorProps = Properties('perspective')
sensorProps['toWorld'] = Transform.lookAt(
    Point(0, 0, -10),   # Camera origin
    Point(0, 0, 0),     # Camera target
    Vector(0, 1, 0)     # 'up' vector
)
sensorProps['fov'] = 45.0
# Encodes parameters on how to instantiate the 'ldrfilm' plugin
filmProps = Properties('ldrfilm')
filmProps['width'] = 1920
filmProps['height'] = 1080
# Load and instantiate the plugins
sensor = pmgr.createObject(sensorProps)
film = pmgr.createObject(filmProps)
# First configure the film and then add it to the sensor
film.configure()
sensor.addChild('film', film)
# Now, the sensor can be configured
sensor.configure()
\end{python}
The above code fragment uses the plugin manager to construct a
\code{Sensor} instance from an external plugin named
\texttt{perspective.so/dll/dylib} and adds a child object
named \texttt{film}, which is a \texttt{Film} instance loaded from the
plugin \texttt{ldrfilm.so/dll/dylib}.
Whenever a plugin is instantiated in this manner, all of its child objects must be
added first, and finally the plugin's \code{configure()} method must be called.
Creating scenes in this manner ends up being rather laborious.
Since Python comes with a powerful dynamically-typed dictionary
primitive, Mitsuba additionally provides a more ``pythonic''
alternative that makes use of this facility:
\begin{python}
from mitsuba.core import *
pmgr = PluginManager.getInstance()
sensor = pmgr.create({
    'type' : 'perspective',
    'toWorld' : Transform.lookAt(
        Point(0, 0, -10),
        Point(0, 0, 0),
        Vector(0, 1, 0)
    ),
    'film' : {
        'type' : 'ldrfilm',
        'width' : 1920,
        'height' : 1080
    }
})
\end{python}
This code does exactly the same as the previous snippet.
By the time \code{PluginManager.create} returns, the object
hierarchy has already been assembled, and the
\code{configure()} method of every object
has been called.
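As a quick check, the returned object can be printed to obtain a textual summary
of the assembled hierarchy, analogous to the \code{print(scene)} call in the
scene loading recipe above:
\begin{python}
print(sensor)
\end{python}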
Finally, here is a full example that creates a basic scene
which can be rendered. It describes a sphere lit by a point
light, rendered using the direct illumination integrator.
\begin{python}
from mitsuba.core import *
from mitsuba.render import Scene
pmgr = PluginManager.getInstance()
scene = Scene()
# Create a sensor, film & sample generator
scene.addChild(pmgr.create({
    'type' : 'perspective',
    'toWorld' : Transform.lookAt(
        Point(0, 0, -10),
        Point(0, 0, 0),
        Vector(0, 1, 0)
    ),
    'film' : {
        'type' : 'ldrfilm',
        'width' : 1920,
        'height' : 1080
    },
    'sampler' : {
        'type' : 'ldsampler',
        'sampleCount' : 2
    }
}))
# Set the integrator
scene.addChild(pmgr.create({
    'type' : 'direct'
}))
# Add a light source
scene.addChild(pmgr.create({
    'type' : 'point',
    'position' : Point(5, 0, -10),
    'intensity' : Spectrum(100)
}))
# Add a shape
scene.addChild(pmgr.create({
    'type' : 'sphere',
    'center' : Point(0, 0, 0),
    'radius' : 1.0,
    'bsdf' : {
        'type' : 'diffuse',
        'reflectance' : Spectrum(0.4)
    }
}))
scene.configure()
\end{python}
\subsubsection{Taking control of the logging system}
Many operations in Mitsuba will print one or more log messages
during their execution. By default, they will be printed to the console,
which may be undesirable. Similar to the C++ side, it is possible to define
custom \code{Formatter} and \code{Appender} classes to interpret and direct
the flow of these messages. This is also useful to keep track of the progress
of rendering jobs.
Roughly, a \code{Formatter} turns detailed
information about a logging event into a human-readable string, and an
\code{Appender} routes it to some destination (e.g. by appending it to
a file or a log viewer in a graphical user interface). Here is an example
of how to activate such extensions:
\begin{python}
import mitsuba
from mitsuba.core import *
class MyFormatter(Formatter):
    def format(self, logLevel, sourceClass, sourceThread, message, filename, line):
        return '%s (log level: %s, thread: %s, class %s, file %s, line %i)' % \
            (message, str(logLevel), sourceThread.getName(), sourceClass,
             filename, line)

class MyAppender(Appender):
    def append(self, logLevel, message):
        print(message)

    def logProgress(self, progress, name, formatted, eta):
        print('Progress message: ' + formatted)
# Get the logger associated with the current thread
logger = Thread.getThread().getLogger()
logger.setFormatter(MyFormatter())
logger.clearAppenders()
logger.addAppender(MyAppender())
logger.setLogLevel(EDebug)
Log(EInfo, 'Test message')
\end{python}
\subsubsection{Rendering a turntable animation with motion blur}
Rendering a turntable animation is a fairly common task that is
conveniently accomplished via the Python interface. In a turntable
video, the camera rotates around a completely static object or scene.
The following snippet does this for the material test ball scene downloadable
on the main website, complete with motion blur. It assumes that the
scene and scheduler have been set up appropriately using one of the previous
snippets.
\begin{python}
sensor = scene.getSensor()
sensor.setShutterOpen(0)
sensor.setShutterOpenTime(1)
stepSize = 5
for i in range(0, 360 // stepSize):
    rotationCur = Transform.rotate(Vector(0, 0, 1), i*stepSize)
    rotationNext = Transform.rotate(Vector(0, 0, 1), (i+1)*stepSize)
    trafoCur = Transform.lookAt(rotationCur * Point(0, -6, 4),
        Point(0, 0, .5), rotationCur * Vector(0, 1, 0))
    trafoNext = Transform.lookAt(rotationNext * Point(0, -6, 4),
        Point(0, 0, .5), rotationNext * Vector(0, 1, 0))
    atrafo = AnimatedTransform()
    atrafo.appendTransform(0, trafoCur)
    atrafo.appendTransform(1, trafoNext)
    atrafo.sortAndSimplify()
    sensor.setWorldTransform(atrafo)
    scene.setDestinationFile('frame_%03i.png' % i)
    job = RenderJob('job_%i' % i, scene, queue)
    job.start()
    queue.waitLeft(0)
    queue.join()
\end{python}