\section{Python integration}
\label{sec:python}
A recent feature of Mitsuba is a Python interface to the renderer API.
While the interface is still limited at this point, it can already be
used for many useful purposes. To access the API, start your Python
interpreter and enter
\begin{python}
import mitsuba
\end{python}
\paragraph{Mac OS:}
For this to work on Mac OS X, you will first have to run the ``\emph{Apple
Menu}$\to$\emph{Command-line access}'' menu item from within Mitsuba.
In the unlikely case that you run into shared library loading issues (this is
taken care of by default), you may have to set the \code{LD\_LIBRARY\_PATH}
environment variable before starting Python so that it points to where the
Mitsuba libraries are installed (e.g. the \code{Mitsuba.app/Contents/Frameworks}
directory).
If Python crashes directly after the \code{import mitsuba} statement,
make sure that Mitsuba is linked against the right Python distribution
(i.e. one matching the \code{python} binary you are using). For Python
2.7, this can be done by adjusting the \code{PYTHON27INCLUDE} and
\code{PYTHON27LIBDIR} variables in \code{config.py}; for other versions,
adjust the numbers accordingly.
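For instance, the relevant declarations in \code{config.py} might look as
follows (a sketch with example values only; the exact paths depend on where
your Python headers and libraries are installed):
\begin{python}
# Example config.py excerpt for Python 2.7 (adjust paths to your system)
PYTHON27INCLUDE = ['/usr/include/python2.7']
PYTHON27LIBDIR  = ['/usr/lib/python2.7/config']
\end{python}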
\paragraph{Windows and Linux:}
On Windows and \emph{non-packaged} Linux builds, you may have to explicitly
specify the required extension search path before issuing the \code{import} command, e.g.:
\begin{python}
import os, sys
# Configure the search path for the Python extension module (this may vary
# depending on your setup; if you compiled from source,
# 'path-to-mitsuba-directory' refers to the 'dist' subdirectory).
# NOTE: On Windows, specify these paths using FORWARD slashes (i.e. '/' instead of
# '\' to avoid pitfalls with string escaping)
sys.path.append('path-to-mitsuba-directory/python/<python version, e.g. 2.7>')
# Ensure that Python will be able to find the Mitsuba core libraries
os.environ['PATH'] = 'path-to-mitsuba-directory' + os.pathsep + os.environ['PATH']
import mitsuba
\end{python}
In rare cases when running on Linux, it may also be necessary to set the
\code{LD\_LIBRARY\_PATH} environment variable before starting Python so that it
points to where the Mitsuba core libraries are installed.
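For instance, using a Bourne-compatible shell, this could be done as follows
(the path is a placeholder that should point to the directory containing the
Mitsuba core libraries):
\begin{shell}
export LD_LIBRARY_PATH=path-to-mitsuba-directory
python
\end{shell}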
For an overview of the currently exposed API subset, please refer
to the following page: \url{http://www.mitsuba-renderer.org/api/group__libpython.html}.
\subsubsection*{Accessing signatures in an interactive Python shell}
The plugin exports comprehensive Python-style docstrings; hence,
the following is a convenient alternative way of getting information on
classes, functions, or entire namespaces when running an interactive Python shell.
\begin{shell}
>>> help(mitsuba.core.Bitmap) # (can be applied to namespaces, classes, functions, etc.)
class Bitmap(Object)
| Method resolution order:
| Bitmap
| Object
| Boost.Python.instance
| __builtin__.object
|
| Methods defined here:
| __init__(...)
| __init__( (object)arg1, (EPixelFormat)arg2, (EComponentFormat)arg3, (Vector2i)arg4) -> None :
| C++ signature :
| void __init__(_object*,mitsuba::Bitmap::EPixelFormat,mitsuba::Bitmap::EComponentFormat,mitsuba::TVector2<int>)
|
| __init__( (object)arg1, (EFileFormat)arg2, (Stream)arg3) -> None :
| C++ signature :
| void __init__(_object*,mitsuba::Bitmap::EFileFormat,mitsuba::Stream*)
|
| clear(...)
| clear( (Bitmap)arg1) -> None :
| C++ signature :
| void clear(mitsuba::Bitmap {lvalue})
...
\end{shell}
The docstrings list the currently exported functionality, as well as C++ and Python signatures, but they
don't document what these functions actually do. The web API documentation is
the preferred source of this information.
\subsection{Basics}
Generally, the Python API tries to mimic the C++ API as closely as possible.
Where applicable, the Python classes and methods replicate overloaded operators,
overridable virtual function calls, and default arguments. Under rare circumstances,
some features are inherently non-portable due to fundamental differences between the
two programming languages. In this case, the API documentation will contain further
information.
Mitsuba's linear algebra-related classes are usable with essentially the
same syntax as their C++ versions --- for example, the following snippet
creates and rotates a unit vector.
\begin{python}
import mitsuba
from mitsuba.core import *
# Create a normalized direction vector
myVector = normalize(Vector(1.0, 2.0, 3.0))
# 90 deg. rotation around the Y axis
trafo = Transform.rotate(Vector(0, 1, 0), 90)
# Apply the rotation and display the result
print(trafo * myVector)
\end{python}
\subsection{Recipes}
This section contains a series of ``recipes'' describing how to accomplish
common tasks with the help of the Python bindings.
\subsubsection{Loading a scene}
The following script demonstrates how to use the
\code{FileResolver} and \code{SceneHandler} classes to
load a Mitsuba scene from an XML file:
\begin{python}
import mitsuba
from mitsuba.core import *
from mitsuba.render import SceneHandler
# Get a reference to the thread's file resolver
fileResolver = Thread.getThread().getFileResolver()
# Register any search paths needed to load scene resources (optional)
fileResolver.appendPath('<path to scene directory>')
# Optional: supply parameters that can be accessed
# by the scene (e.g. as $\text{\color{lstcomment}\itshape\texttt{\$}}$myParameter)
paramMap = StringMap()
paramMap['myParameter'] = 'value'
# Load the scene from an XML file
scene = SceneHandler.loadScene(fileResolver.resolve("scene.xml"), paramMap)
# Display a textual summary of the scene's contents
print(scene)
\end{python}
\subsubsection{Rendering a loaded scene}
Once a scene has been loaded, it can be rendered as follows:
\begin{python}
from mitsuba.core import *
from mitsuba.render import RenderQueue, RenderJob
import multiprocessing
scheduler = Scheduler.getInstance()
# Start up the scheduling system with one worker per local core
for i in range(0, multiprocessing.cpu_count()):
    scheduler.registerWorker(LocalWorker(i, 'wrk%i' % i))
scheduler.start()
# Create a queue for tracking render jobs
queue = RenderQueue()
scene.setDestinationFile('renderedResult')
# Create a render job and insert it into the queue
job = RenderJob('myRenderJob', scene, queue)
job.start()
# Wait for all jobs to finish and release resources
queue.waitLeft(0)
queue.join()
# Print some statistics about the rendering process
print(Statistics.getInstance().getStats())
\end{python}
\subsubsection{Rendering over the network}
To render over the network, you must first set up one or
more machines that run the \code{mtssrv} server (see \secref{mtssrv}).
A network node can then be registered with the scheduler as follows:
\begin{python}
from mitsuba.core import SocketStream, RemoteWorker, Scheduler

# Connect to a socket on a named host or IP address
# 7554 is the default port of 'mtssrv'
stream = SocketStream('128.84.103.222', 7554)
# Create a remote worker instance that communicates over the stream
remoteWorker = RemoteWorker('netWorker', stream)
scheduler = Scheduler.getInstance()
# Register the remote worker (and any other potential workers)
scheduler.registerWorker(remoteWorker)
scheduler.start()
\end{python}
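When several render nodes are available, the same pattern extends naturally;
the following sketch registers one remote worker per node (the host names are
placeholders):
\begin{python}
# Register one remote worker per render node, then start the scheduler
scheduler = Scheduler.getInstance()
for host in ['node1.example.org', 'node2.example.org']:
    stream = SocketStream(host, 7554)
    scheduler.registerWorker(RemoteWorker('worker-' + host, stream))
scheduler.start()
\end{python}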
\subsubsection{Constructing custom scenes from Python}
Dynamically constructing Mitsuba scenes entails loading a series of external
plugins, instantiating them with custom parameters, and finally assembling
them into an object graph.
For instance, the following snippet shows how to create a basic
perspective sensor with a film that writes PNG images:
\begin{python}
from mitsuba.core import *
pmgr = PluginManager.getInstance()
# Encodes parameters on how to instantiate the 'perspective' plugin
sensorProps = Properties('perspective')
sensorProps['toWorld'] = Transform.lookAt(
    Point(0, 0, -10),  # Camera origin
    Point(0, 0, 0),    # Camera target
    Vector(0, 1, 0)    # 'up' vector
)
sensorProps['fov'] = 45.0
# Encodes parameters on how to instantiate the 'ldrfilm' plugin
filmProps = Properties('ldrfilm')
filmProps['width'] = 1920
filmProps['height'] = 1080
# Load and instantiate the plugins
sensor = pmgr.createObject(sensorProps)
film = pmgr.createObject(filmProps)
# First configure the film and then add it to the sensor
film.configure()
sensor.addChild('film', film)
# Now, the sensor can be configured
sensor.configure()
\end{python}
The above code fragment uses the plugin manager to construct a
\code{Sensor} instance from an external plugin named
\texttt{perspective.so/dll/dylib} and adds a child object
named \texttt{film}, which is a \texttt{Film} instance loaded from the
plugin \texttt{ldrfilm.so/dll/dylib}.
After a plugin has been instantiated, all of its child objects must be added,
and finally the plugin's \code{configure()} method must be called.
Creating scenes in this manner ends up being rather laborious.
Since Python comes with a powerful dynamically-typed dictionary
primitive, Mitsuba additionally provides a more ``pythonic''
alternative that makes use of this facility:
\begin{python}
from mitsuba.core import *
pmgr = PluginManager.getInstance()
sensor = pmgr.create({
    'type' : 'perspective',
    'toWorld' : Transform.lookAt(
        Point(0, 0, -10),
        Point(0, 0, 0),
        Vector(0, 1, 0)
    ),
    'film' : {
        'type' : 'ldrfilm',
        'width' : 1920,
        'height' : 1080
    }
})
\end{python}
This code does exactly the same as the previous snippet.
By the time \code{PluginManager.create} returns, the object
hierarchy has already been assembled, and the
\code{configure()} method of every object
has been called.
Finally, here is a full example that creates a basic scene
which can be rendered. It describes a sphere lit by a point
light, rendered using the direct illumination integrator.
\begin{python}
from mitsuba.core import *
from mitsuba.render import Scene
pmgr = PluginManager.getInstance()
scene = Scene()

# Create a sensor, film & sample generator
scene.addChild(pmgr.create({
    'type' : 'perspective',
    'toWorld' : Transform.lookAt(
        Point(0, 0, -10),
        Point(0, 0, 0),
        Vector(0, 1, 0)
    ),
    'film' : {
        'type' : 'ldrfilm',
        'width' : 1920,
        'height' : 1080
    },
    'sampler' : {
        'type' : 'ldsampler',
        'sampleCount' : 2
    }
}))

# Set the integrator
scene.addChild(pmgr.create({
    'type' : 'direct'
}))

# Add a light source
scene.addChild(pmgr.create({
    'type' : 'point',
    'position' : Point(5, 0, -10),
    'intensity' : Spectrum(100)
}))

# Add a shape
scene.addChild(pmgr.create({
    'type' : 'sphere',
    'center' : Point(0, 0, 0),
    'radius' : 1.0,
    'bsdf' : {
        'type' : 'diffuse',
        'reflectance' : Spectrum(0.4)
    }
}))

scene.configure()
\end{python}
\subsubsection{Taking control of the logging system}
Many operations in Mitsuba will print one or more log messages
during their execution. By default, they will be printed to the console,
which may be undesirable. Similar to the C++ side, it is possible to define
custom \code{Formatter} and \code{Appender} classes to interpret and direct
the flow of these messages. This is also useful to keep track of the progress
of rendering jobs.
Roughly, a \code{Formatter} turns detailed
information about a logging event into a human-readable string, and an
\code{Appender} routes it to some destination (e.g. by appending it to
a file or a log viewer in a graphical user interface). Here is an example
of how to activate such extensions:
\begin{python}
import mitsuba
from mitsuba.core import *
class MyFormatter(Formatter):
    def format(self, logLevel, sourceClass, sourceThread, message, filename, line):
        return '%s (log level: %s, thread: %s, class %s, file %s, line %i)' % \
            (message, str(logLevel), sourceThread.getName(), sourceClass,
             filename, line)

class MyAppender(Appender):
    def append(self, logLevel, message):
        print(message)

    def logProgress(self, progress, name, formatted, eta):
        print('Progress message: ' + formatted)

# Get the logger associated with the current thread
logger = Thread.getThread().getLogger()
logger.setFormatter(MyFormatter())
logger.clearAppenders()
logger.addAppender(MyAppender())
logger.setLogLevel(EDebug)

Log(EInfo, 'Test message')
\end{python}
\subsubsection{Rendering a turntable animation with motion blur}
Rendering a turntable animation is a fairly common task that is
conveniently accomplished via the Python interface. In a turntable
video, the camera rotates around a completely static object or scene.
The following snippet does this for the material test ball scene downloadable
on the main website, complete with motion blur. It assumes that the
scene and scheduler have been set up appropriately using one of the previous
snippets.
\begin{python}
sensor = scene.getSensor()
sensor.setShutterOpen(0)
sensor.setShutterOpenTime(1)

stepSize = 5
for i in range(0, 360 // stepSize):
    rotationCur  = Transform.rotate(Vector(0, 0, 1), i*stepSize)
    rotationNext = Transform.rotate(Vector(0, 0, 1), (i+1)*stepSize)

    trafoCur  = Transform.lookAt(rotationCur * Point(0, -6, 4),
        Point(0, 0, .5), rotationCur * Vector(0, 1, 0))
    trafoNext = Transform.lookAt(rotationNext * Point(0, -6, 4),
        Point(0, 0, .5), rotationNext * Vector(0, 1, 0))

    atrafo = AnimatedTransform()
    atrafo.appendTransform(0, trafoCur)
    atrafo.appendTransform(1, trafoNext)
    atrafo.sortAndSimplify()
    sensor.setWorldTransform(atrafo)

    scene.setDestinationFile('frame_%03i.png' % i)
    job = RenderJob('job_%i' % i, scene, queue)
    job.start()

    queue.waitLeft(0)
    queue.join()
\end{python}
A useful property of this approach is that scene loading and initialization
need only take place once. Performance-wise, this compares favourably with
running many separate rendering jobs, e.g. using the \code{mitsuba}
command-line executable.
\subsubsection{Creating triangle-based shapes}
It is possible to create new triangle-based shapes directly in Python, though
doing so is discouraged: because Python is an interpreted programming language,
the construction of large meshes will run very slowly. The built-in shapes
and shape loaders are to be preferred whenever this is an option. That said, the
following snippet shows how to create \code{TriMesh} objects from within Python:
\begin{python}
from mitsuba.core import *
from mitsuba.render import TriMesh

# Create a new mesh with 1 triangle, 3 vertices,
# and allocate buffers for normals and texture coordinates
mesh = TriMesh('Name of this mesh', 1, 3, True, True)
v = mesh.getVertexPositions()
v[0] = Point3(0, 0, 0)
v[1] = Point3(1, 0, 0)
v[2] = Point3(0, 1, 0)
n = mesh.getVertexNormals()
n[0] = Normal(0, 0, 1)
n[1] = Normal(0, 0, 1)
n[2] = Normal(0, 0, 1)
t = mesh.getTriangles() # Indexed triangle list: tri 1 references vertices 0,1,2
t[0] = 0
t[1] = 1
t[2] = 2
uv = mesh.getTexcoords()
uv[0] = Point2(0, 0)
uv[1] = Point2(1, 0)
uv[2] = Point2(0, 1)
mesh.configure()
# Add the mesh to a scene (assumes 'scene' is available)
scene.addChild(mesh)
\end{python}
\subsubsection{Calling Mitsuba functions from a multithreaded Python program}
By default, Mitsuba assumes that threads accessing Mitsuba-internal
data structures were created by (or at least registered with) Mitsuba. This is the
case for the main thread and subclasses of \code{mitsuba.core.Thread}. When a
Mitsuba function is called from an event dispatch thread of a multithreaded
Python application that is not known to Mitsuba, an exception or crash will result.
To avoid this, get a reference to the main thread right after loading the Mitsuba plugin
and save some related state (the attached \code{FileResolver} and \code{Logger} instances).
\begin{python}
mainThread = Thread.getThread()
saved_fresolver = mainThread.getFileResolver()
saved_logger = mainThread.getLogger()
\end{python}
Later, when Mitsuba is accessed from an unregistered thread, execute the following:
\begin{python}
# This rendering thread was not created by Mitsuba -- register it
newThread = Thread.registerUnmanagedThread('render')
newThread.setFileResolver(saved_fresolver)
newThread.setLogger(saved_logger)
\end{python}
It is fine to execute this several times (\code{registerUnmanagedThread} just returns
a reference to the associated \code{Thread} instance if it was already registered).
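For instance, when worker threads are created via Python's built-in
\code{threading} module, this registration boilerplate can simply be placed at
the top of each thread's entry point. The following is a minimal sketch;
\code{renderWorker} is a hypothetical function that uses the state saved above:
\begin{python}
import threading

def renderWorker():
    # This thread was created by Python, not Mitsuba -- register it first
    newThread = Thread.registerUnmanagedThread('render')
    newThread.setFileResolver(saved_fresolver)
    newThread.setLogger(saved_logger)
    # ... it is now safe to call Mitsuba functions from this thread ...

t = threading.Thread(target=renderWorker)
t.start()
t.join()
\end{python}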
\subsubsection{PyQt/PySide interaction with Mitsuba (simple version)}
The following listing contains a complete program that
renders a sphere and efficiently displays it in a PyQt window
(to make this work in PySide, change all occurrences of \code{PyQt4} to \code{PySide} in the
import declarations and replace the call to \code{getNativeBuffer()} with \code{toByteArray()},
which is a tiny bit less efficient).
\begin{python}
import mitsuba, multiprocessing, sys

from mitsuba.core import Scheduler, PluginManager, \
    LocalWorker, Properties, Bitmap, Point2i, FileStream
from mitsuba.render import RenderQueue, RenderJob, Scene

from PyQt4.QtCore import QPoint
from PyQt4.QtGui import QApplication, QMainWindow, QPainter, QImage

class MitsubaView(QMainWindow):
    def __init__(self):
        super(MitsubaView, self).__init__()
        self.setWindowTitle('Mitsuba/Qt demo')
        self.initializeMitsuba()
        self.image = self.render(self.createScene())
        self.resize(self.image.width(), self.image.height())

    def initializeMitsuba(self):
        # Start up the scheduling system with one worker per local core
        self.scheduler = Scheduler.getInstance()
        for i in range(0, multiprocessing.cpu_count()):
            self.scheduler.registerWorker(LocalWorker(i, 'wrk%i' % i))
        self.scheduler.start()
        # Create a queue for tracking render jobs
        self.queue = RenderQueue()
        # Get a reference to the plugin manager
        self.pmgr = PluginManager.getInstance()

    def shutdownMitsuba(self):
        self.queue.join()
        self.scheduler.stop()

    def createScene(self):
        # Create a simple scene containing a sphere
        sphere = self.pmgr.createObject(Properties("sphere"))
        sphere.configure()
        scene = Scene()
        scene.addChild(sphere)
        scene.configure()
        # Don't automatically write an output bitmap file when the
        # rendering process finishes (want to control this from Python)
        scene.setDestinationFile('')
        return scene

    def render(self, scene):
        # Create a render job and insert it into the queue
        job = RenderJob('myRenderJob', scene, self.queue)
        job.start()
        # Wait for the job to finish
        self.queue.waitLeft(0)
        # Develop the camera's film into an 8 bit sRGB bitmap
        film = scene.getFilm()
        size = film.getSize()
        bitmap = Bitmap(Bitmap.ERGB, Bitmap.EUInt8, size)
        film.develop(Point2i(0, 0), size, Point2i(0, 0), bitmap)
        # Write to a PNG bitmap file
        outFile = FileStream("rendering.png", FileStream.ETruncReadWrite)
        bitmap.write(Bitmap.EPNG, outFile)
        outFile.close()
        # Also create a QImage (using a fast memory copy in C++)
        return QImage(bitmap.getNativeBuffer(),
            size.x, size.y, QImage.Format_RGB888)

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.drawImage(QPoint(0, 0), self.image)
        del painter

def main():
    app = QApplication(sys.argv)
    view = MitsubaView()
    view.show()
    view.raise_()
    retval = app.exec_()
    view.shutdownMitsuba()
    sys.exit(retval)

if __name__ == '__main__':
    main()
\end{python}
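For reference, the PySide version of the lines that differ would look roughly
as follows (a sketch based on the changes described above; \code{developToQImage}
is a hypothetical helper that stands in for the corresponding lines of
\code{MitsubaView.render()}, and the rest of the program stays the same):
\begin{python}
from PySide.QtCore import QPoint
from PySide.QtGui import QApplication, QMainWindow, QPainter, QImage

def developToQImage(bitmap, size):
    # PySide cannot wrap the native buffer directly; copy the pixels instead
    return QImage(bitmap.toByteArray(),
        size.x, size.y, QImage.Format_RGB888)
\end{python}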
\subsubsection{PyQt/PySide interaction with Mitsuba (fancy)}
The following snippet is a much fancier version of the previous PyQt/PySide example.
Instead of waiting for the rendering to finish and then displaying it, this example launches the
rendering in the background and uses Mitsuba's \code{RenderListener} interface to update the
view and show image blocks as they are being rendered.
As before, some changes will be necessary to get this to run on PySide.
\begin{center}
\includegraphics[width=10cm]{images/python_demo.jpg}
\end{center}
When using this snippet, please be wary of threading-related issues; the key thing to remember is that
in Qt, only the main thread is allowed to modify Qt widgets. On the other hand, rendering and logging-related
callbacks will be invoked from different Mitsuba-internal threads---this means that it's not possible to e.g.
directly update the status bar message from the \code{finishJobEvent} callback. To do this, we must
use Qt's \code{QueuedConnection} to communicate this event to the main thread via signals and slots. See the
code that updates the status and progress bar for more detail.
\begin{python}
import mitsuba, multiprocessing, sys, time

from mitsuba.core import Scheduler, PluginManager, Thread, Vector, Point2i, \
    LocalWorker, Properties, Bitmap, Spectrum, Appender, EWarn, Transform, FileStream
from mitsuba.render import RenderQueue, RenderJob, Scene, RenderListener

from PyQt4.QtCore import Qt, QPoint, pyqtSignal
from PyQt4.QtGui import QApplication, QMainWindow, QPainter, QImage, QProgressBar

class MitsubaView(QMainWindow):
    viewUpdated        = pyqtSignal()
    renderProgress     = pyqtSignal(int)
    renderingCompleted = pyqtSignal(bool)

    def __init__(self):
        super(MitsubaView, self).__init__()
        self.setWindowTitle('Mitsuba/Qt demo')
        self.initializeMitsuba()
        self.qimage = self.render(self.createScene())

        status = self.statusBar()
        status.setContentsMargins(0, 0, 5, 0)
        self.progress = QProgressBar(status)
        status.addPermanentWidget(self.progress)
        status.setSizeGripEnabled(False)
        self.setFixedSize(self.qimage.width(), self.qimage.height() +
            self.progress.height()*1.5)

        def handleRenderingCompleted(cancelled):
            status.showMessage("Rendering finished.")
            self.progress.setVisible(False)
            if not cancelled:
                outFile = FileStream("rendering.png", FileStream.ETruncReadWrite)
                self.bitmap.write(Bitmap.EPNG, outFile)
                outFile.close()

        self.viewUpdated.connect(self.repaint, Qt.QueuedConnection)
        self.renderProgress.connect(self.progress.setValue, Qt.QueuedConnection)
        self.renderingCompleted.connect(handleRenderingCompleted,
            Qt.QueuedConnection)
        status.showMessage("Rendering ..")

    def initializeMitsuba(self):
        # Start up the scheduling system with one worker per local core
        self.scheduler = Scheduler.getInstance()
        for i in range(0, multiprocessing.cpu_count()):
            self.scheduler.registerWorker(LocalWorker(i, 'wrk%i' % i))
        self.scheduler.start()
        # Create a queue for tracking render jobs
        self.queue = RenderQueue()
        # Get a reference to the plugin manager
        self.pmgr = PluginManager.getInstance()

        # Appender to process log and progress messages within Python
        class CustomAppender(Appender):
            def append(self2, logLevel, message):
                print(message)
            def logProgress(self2, progress, name, formatted, eta):
                self.renderProgress.emit(progress)

        logger = Thread.getThread().getLogger()
        logger.setLogLevel(EWarn)
        logger.clearAppenders()
        logger.addAppender(CustomAppender())

    def closeEvent(self, e):
        self.job.cancel()
        self.queue.join()
        self.scheduler.stop()

    def createScene(self):
        scene = self.pmgr.create({
            'type' : 'scene',
            'sphere' : {
                'type' : 'sphere'
            },
            'envmap' : {
                'type' : 'sunsky'
            },
            'sensor' : {
                'type' : 'perspective',
                'toWorld' : Transform.translate(Vector(0, 0, -5)),
                'sampler' : {
                    'type' : 'halton',
                    'sampleCount' : 64
                }
            }
        })
        return scene

    def render(self, scene):
        film = scene.getFilm()
        size = film.getSize()

        # Bitmap that will store pixels of the developed film
        self.bitmap = Bitmap(Bitmap.ERGB, Bitmap.EUInt8, size)
        self.bitmap.clear()

        # Listener to update bitmap subregions when blocks finish rendering
        class CustomListener(RenderListener):
            def __init__(self):
                super(CustomListener, self).__init__()
                self.time = 0

            def workBeginEvent(self2, job, wu, thr):
                self.bitmap.drawRect(wu.getOffset(), wu.getSize(), Spectrum(1.0))
                now = time.time()
                if now - self2.time > .25:
                    self.viewUpdated.emit()
                    self2.time = now

            def workEndEvent(self2, job, wr):
                film.develop(wr.getOffset(), wr.getSize(),
                    wr.getOffset(), self.bitmap)
                now = time.time()
                if now - self2.time > .25:
                    self.viewUpdated.emit()
                    self2.time = now

            def refreshEvent(self2, job):
                film.develop(Point2i(0), size, Point2i(0), self.bitmap)
                self.viewUpdated.emit()

            def finishJobEvent(self2, job, cancelled):
                self2.refreshEvent(job)
                self.renderingCompleted.emit(cancelled)

        # Create a render job and insert it into the queue
        self.job = RenderJob('rjob', scene, self.queue)
        self.queue.registerListener(CustomListener())
        self.job.start()

        # Return a QImage that directly points into the contents of self.bitmap
        return QImage(self.bitmap.getNativeBuffer(),
            size.x, size.y, QImage.Format_RGB888)

    def keyPressEvent(self, e):
        if e.key() == Qt.Key_Escape:
            self.close()

    def paintEvent(self, event):
        super(MitsubaView, self).paintEvent(event)
        painter = QPainter(self)
        painter.drawImage(QPoint(0, 0), self.qimage)
        del painter

def main():
    import signal
    # Stop the program upon Ctrl-C (SIGINT)
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    app = QApplication(sys.argv)
    view = MitsubaView()
    view.show()
    view.raise_()
    retval = app.exec_()
    sys.exit(retval)

if __name__ == '__main__':
    main()
\end{python}