removed repeated words in the documentation

metadata
Wenzel Jakob 2014-05-08 12:29:00 +02:00
parent d2fb59ca4e
commit ab767f0328
24 changed files with 33 additions and 32 deletions
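The duplicated-word slips fixed throughout this commit ("the the", "if if", "be be", ...) are easy to catch mechanically. As a rough illustration (the function name and regex below are ours, not part of Mitsuba's tooling), a backreference regex finds a word immediately followed by one or more copies of itself:

```python
import re

def collapse_repeated_words(text):
    # \b(\w+)(\s+\1)+\b matches a word followed by one or more copies
    # of itself, separated by whitespace (\s also matches newlines, so
    # duplicates split across a line break are caught too), and keeps
    # a single copy of the word.
    return re.sub(r'\b(\w+)(\s+\1)+\b', r'\1', text)

print(collapse_repeated_words("From the the performance standpoint"))
# From the performance standpoint
```

Run over LaTeX or comment sources, a filter like this would also flag false positives (intentional repetition, macro arguments), so it is better used as a linter that reports candidate lines than as an automatic rewriter.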

View File

@@ -86,7 +86,7 @@ $\texttt{\$}$ mitsuba -c machine1;machine2;... path-to/my-scene.xml
 There are two different ways in which you can access render nodes:
 \begin{itemize}
 \item\textbf{Direct}: Here, you create a direct connection to a running \code{mtssrv} instance on
-another machine (\code{mtssrv} is the Mitsuba server process). From the the performance
+another machine (\code{mtssrv} is the Mitsuba server process). From the performance
 standpoint, this approach should always be preferred over the SSH method described below when there is
 a choice between them. There are some disadvantages though: first, you need to manually start
 \code{mtssrv} on every machine you want to use.

View File

@@ -48,7 +48,7 @@ Visual Studio 2010 for legacy 32 bit builds.
 Versions XE 2012 and 2013 are known to work.
 \end{description}
 \paragraph{Mac OS:}
-On Mac OS, builds can either be performed using the the XCode 4 \code{llvm-gcc} toolchain or Intel XE Composer.
+On Mac OS, builds can either be performed using the XCode 4 \code{llvm-gcc} toolchain or Intel XE Composer.
 It is possible to target MacOS 10.6 (Snow Leopard) or 10.7 (Lion) as the oldest supported operating system release.
 In both cases, XCode must be installed along with the supplementary command line tools.
 \begin{description}
@@ -57,7 +57,7 @@ In both cases, XCode must be installed along with the supplementary command line
 Versions XE 2012 and 2013 are known to work.
 \end{description}
 Note that the configuration files assume that XCode was
-installed in the \code{/Applications} folder. They must be be manually updated
+installed in the \code{/Applications} folder. They must be manually updated
 when this is not the case.
 \subsubsection{Selecting a configuration}
 Having chosen a configuration, copy it to the main directory and rename it to \code{config.py}, e.g.:

View File

@@ -1,8 +1,8 @@
 \part{Development guide}
 \label{sec:development}
 This chapter and the subsequent ones will provide an overview
-of the the coding conventions and general architecture of Mitsuba.
-You should only read them if if you wish to interface with the API
+of the coding conventions and general architecture of Mitsuba.
+You should only read them if you wish to interface with the API
 in some way (e.g. by developing your own plugins). The coding style
 section is only relevant if you plan to submit patches that are meant
 to become part of the main codebase.

View File

@@ -7,7 +7,7 @@ The framework distinguishes between \emph{sampling-based} integrators and
 \emph{generic} ones. A sampling-based integrator is able to generate
 (usually unbiased) estimates of the incident radiance along a specified rays, and this
 is done a large number of times to render a scene. A generic integrator
-is more like a black box, where no assumptions are made on how the the image is
+is more like a black box, where no assumptions are made on how the image is
 created. For instance, the VPL renderer uses OpenGL to rasterize the scene
 using hardware acceleration, which certainly doesn't fit into the sampling-based pattern.
 For that reason, it must be implemented as a generic integrator.
@@ -261,7 +261,7 @@ As you can see, we did something slightly different in the distance
 renderer fragment above (we called \code{RadianceQueryRecord::rayIntersect()}
 on the supplied parameter \code{rRec}), and the reason for this is \emph{nesting}.
 \subsection{Nesting}
-The idea of of nesting is that sampling-based rendering techniques can be
+The idea of nesting is that sampling-based rendering techniques can be
 embedded within each other for added flexibility: for instance, one
 might concoct a 1-bounce indirect rendering technique complete with
 irradiance caching and adaptive integration simply by writing the following

View File

@@ -545,7 +545,7 @@ command-line executable.
 \subsubsection{Simultaneously rendering multiple versions of a scene}
 Sometimes it is useful to be able to submit multiple scenes to the rendering scheduler
 at the same time, e.g. when rendering on a big cluster, where one image is not enough to keep all
-cores on all machines busy. This is is quite easy to do by simply launching multiple \code{RenderJob}
+cores on all machines busy. This is quite easy to do by simply launching multiple \code{RenderJob}
 instances before issuing the \code{queue.waitLeft} call.
 However, things go wrong when rendering multiple versions of the \emph{same} scene simultaneously (for instance
@@ -753,7 +753,7 @@ As before, some changes will be necessary to get this to run on PySide.
 When using this snippet, please be wary of threading-related issues; the key thing to remember is that
 in Qt, only the main thread is allowed to modify Qt widgets. On the other hand, rendering and logging-related
 callbacks will be invoked from different Mitsuba-internal threads---this means that it's not possible to e.g.
-directly update the status bar message from the callback \code{finishJobEvent}. To do this, we must use
+directly update the status bar message from the callback \code{finishJobEvent}. To do this, we must
 use Qt's \code{QueuedConnection} to communicate this event to the main thread via signals and slots. See the
 code that updates the status and progress bar for more detail.
 \begin{python}

View File

@@ -96,7 +96,7 @@ MTS_NAMESPACE_BEGIN
 * Internally, this is model simulates the interaction of light with a diffuse
 * base surface coated by a thin dielectric layer. This is a convenient
 * abstraction rather than a restriction. In other words, there are many
-* materials that can be rendered with this model, even if they might not not
+* materials that can be rendered with this model, even if they might not
 * fit this description perfectly well.
 *
 * \begin{figure}[h]

View File

@@ -103,13 +103,13 @@ MTS_NAMESPACE_BEGIN
 * interaction of light with a diffuse base surface coated by a thin dielectric
 * layer (where the coating layer is now \emph{rough}). This is a convenient
 * abstraction rather than a restriction. In other words, there are many
-* materials that can be rendered with this model, even if they might not not
+* materials that can be rendered with this model, even if they might not
 * fit this description perfectly well.
 *
 * The simplicity of this setup makes it possible to account for interesting
 * nonlinear effects due to internal scattering, which is controlled by
 * the \texttt{nonlinear} parameter. For more details, please refer to the description
-* of this parameter given in the the \pluginref{plastic} plugin section
+* of this parameter given in the \pluginref{plastic} plugin section
 * on \pluginpage{plastic}.
 *
 *

View File

@@ -80,7 +80,7 @@ MTS_NAMESPACE_BEGIN
 *
 * The implementation loads a captured illumination environment from a image in
 * latitude-longitude format and turns it into an infinitely distant emitter.
-* The image could either be be a processed photograph or a rendering made using the
+* The image could either be a processed photograph or a rendering made using the
 * \pluginref{spherical} sensor. The direction conventions of this transformation
 * are shown in (b).
 * The plugin can work with all types of images that are natively supported by Mitsuba

View File

@@ -64,7 +64,7 @@ MTS_NAMESPACE_BEGIN
 * \parameter{resolution}{\Integer}{Specifies the horizontal resolution of the precomputed
 * image that is used to represent the sun environment map \default{512, i.e. 512$\times$256}}
 * \parameter{scale}{\Float}{
-* This parameter can be used to scale the the amount of illumination
+* This parameter can be used to scale the amount of illumination
 * emitted by the sky emitter. \default{1}
 * }
 * \parameter{samplingWeight}{\Float}{
@@ -160,7 +160,7 @@ MTS_NAMESPACE_BEGIN
 * (512$\times$ 256) of the entire sky that is then forwarded to the
 * \pluginref{envmap} plugin---this dramatically improves rendering
 * performance. This resolution is generally plenty since the sky radiance
-* distribution is so smooth, but it it can be adjusted manually if
+* distribution is so smooth, but it can be adjusted manually if
 * necessary using the \code{resolution} parameter.
 *
 * Note that while the model encompasses sunrise and sunset configurations,
@@ -212,7 +212,7 @@ MTS_NAMESPACE_BEGIN
 * \medrendering{\code{albedo}=100%}{emitter_sky_albedo_1}
 * \medrendering{\code{albedo}=20% green}{emitter_sky_albedo_green}
 * \caption{\label{fig:sky_groundalbedo}Influence
-* of the ground albedo on the apperance of the sky}
+* of the ground albedo on the appearance of the sky}
 * }
 */
 class SkyEmitter : public Emitter {

View File

@@ -62,7 +62,7 @@ MTS_NAMESPACE_BEGIN
 * \parameter{resolution}{\Integer}{Specifies the horizontal resolution of the precomputed
 * image that is used to represent the sun environment map \default{512, i.e. 512$\times$256}}
 * \parameter{scale}{\Float}{
-* This parameter can be used to scale the the amount of illumination
+* This parameter can be used to scale the amount of illumination
 * emitted by the sun emitter. \default{1}
 * }
 * \parameter{sunRadiusScale}{\Float}{

View File

@@ -67,11 +67,11 @@ MTS_NAMESPACE_BEGIN
 * \parameter{resolution}{\Integer}{Specifies the horizontal resolution of the precomputed
 * image that is used to represent the sun environment map \default{512, i.e. 512$\times$256}}
 * \parameter{sunScale}{\Float}{
-* This parameter can be used to separately scale the the amount of illumination
+* This parameter can be used to separately scale the amount of illumination
 * emitted by the sun. \default{1}
 * }
 * \parameter{skyScale}{\Float}{
-* This parameter can be used to separately scale the the amount of illumination
+* This parameter can be used to separately scale the amount of illumination
 * emitted by the sky.\default{1}
 * }
 * \parameter{sunRadiusScale}{\Float}{

View File

@@ -155,7 +155,7 @@ MTS_NAMESPACE_BEGIN
 *
 * Apart from querying the render time,
 * memory usage, and other scene-related information, it is also possible
-* to `paste' an existing parameter that was provided to another plugin---for instance,the
+* to `paste' an existing parameter that was provided to another plugin---for instance,
 * the camera transform matrix would be obtained as \code{\$sensor['toWorld']}. The name of
 * the active integrator plugin is given by \code{\$integrator['type']}, and so on.
 * All of these can be mixed to build larger fragments, as following example demonstrates.

View File

@@ -48,7 +48,7 @@ MTS_NAMESPACE_BEGIN
 * \begin{enumerate}[(i)]
 * \item \code{gamma}: Exposure and gamma correction (default)
 * \vspace{-1mm}
-* \item \code{reinhard}: Apply the the
+* \item \code{reinhard}: Apply the
 * tonemapping technique by Reinhard et al. \cite{Reinhard2002Photographic}
 * followd by gamma correction.
 * \vspace{-4mm}

View File

@@ -56,7 +56,7 @@ MTS_NAMESPACE_BEGIN
 * paths starting at the emitters and the sensor and connecting them in every possible way.
 * This works particularly well in closed scenes as the one shown above. Here, the unidirectional
 * path tracer has severe difficulties finding some of the indirect illumination paths.
-* Modeled after after a scene by Eric Veach.
+* Modeled after a scene by Eric Veach.
 * }
 * }
 * \renderings{

View File

@@ -65,7 +65,7 @@ MTS_NAMESPACE_BEGIN
 *
 * \remarks{
 * \item Due to the data dependencies of this algorithm, the parallelization is
-* limited to to the local machine (i.e. cluster-wide renderings are not implemented)
+* limited to the local machine (i.e. cluster-wide renderings are not implemented)
 * \item This integrator does not handle participating media
 * \item This integrator does not currently work with subsurface scattering
 * models.

View File

@@ -63,7 +63,7 @@ MTS_NAMESPACE_BEGIN
 *
 * \remarks{
 * \item Due to the data dependencies of this algorithm, the parallelization is
-* limited to to the local machine (i.e. cluster-wide renderings are not implemented)
+* limited to the local machine (i.e. cluster-wide renderings are not implemented)
 * \item This integrator does not handle participating media
 * \item This integrator does not currently work with subsurface scattering
 * models.

View File

@@ -34,7 +34,7 @@ MTS_NAMESPACE_BEGIN
 * path termination criterion. \default{\code{5}}
 * }
 * \parameter{granularity}{\Integer}{
-* Specifies the work unit granularity used to parallize the the particle
+* Specifies the work unit granularity used to parallize the particle
 * tracing task. This should be set high enough so that accumulating
 * partially exposed images (and potentially sending them over the network)
 * is not the bottleneck.

View File

@@ -2815,7 +2815,8 @@ void Bitmap::writeOpenEXR(Stream *stream) const {
 Imf::ChannelList &channels = header.channels();
 if (!m_channelNames.empty()) {
 if (m_channelNames.size() != (size_t) m_channelCount)
-Log(EError, "writeOpenEXR(): 'channelNames' has the wrong number of entries!");
+Log(EError, "writeOpenEXR(): 'channelNames' has the wrong number of entries (%i, expected %i)!",
+(int) m_channelNames.size(), (int) m_channelCount);
 for (size_t i=0; i<m_channelNames.size(); ++i)
 channels.insert(m_channelNames[i].c_str(), Imf::Channel(compType));
 } else if (pixelFormat == ELuminance || pixelFormat == ELuminanceAlpha) {

View File

@@ -70,7 +70,7 @@ MTS_NAMESPACE_BEGIN
 * The Hammerlsey sequence is closely related to the Halton sequence and yields a very
 * high quality point set that is slightly more regular (and has lower discrepancy),
 * especially in the first few dimensions. As is the case with the Halton sequence,
-* the points should be scrambled to reduce patterns that manifest due due to correlations
+* the points should be scrambled to reduce patterns that manifest due to correlations
 * in higher dimensions. Please refer to the \pluginref{halton} page for more information
 * on how this works.
 *

View File

@@ -65,7 +65,7 @@ MTS_NAMESPACE_BEGIN
 * \fbox{\includegraphics[width=6cm]{images/shape_hair}}\hspace{4.5cm}
 * \caption{A close-up of the hair shape rendered with a diffuse
 * scattering model (an actual hair scattering model will
-* be needed for realistic apperance)}
+* be needed for realistic appearance)}
 * }
 * The plugin implements a space-efficient acceleration structure for
 * hairs made from many straight cylindrical hair segments with miter

View File

@@ -39,7 +39,7 @@ MTS_NAMESPACE_BEGIN
 * \parameter{faceNormals}{\Boolean}{
 * When set to \code{true}, any existing or computed vertex normals are
 * discarded and \emph{face normals} will instead be used during rendering.
-* This gives the rendered object a faceted apperance.\default{\code{false}}
+* This gives the rendered object a faceted appearance.\default{\code{false}}
 * }
 * \parameter{maxSmoothAngle}{\Float}{
 * When specified, Mitsuba will discard all vertex normals in the input mesh and rebuild

View File

@@ -51,7 +51,7 @@ MTS_NAMESPACE_BEGIN
 * }
 * \parameter{faceNormals}{\Boolean}{
 * When set to \code{true}, Mitsuba will use face normals when rendering
-* the object, which will give it a faceted apperance. \default{\code{false}}
+* the object, which will give it a faceted appearance. \default{\code{false}}
 * }
 * \parameter{maxSmoothAngle}{\Float}{
 * When specified, Mitsuba will discard all vertex normals in the input mesh and rebuild

View File

@@ -46,7 +46,7 @@ extern MTS_EXPORT_RENDER void pushSceneCleanupHandler(void (*cleanup)());
 * \parameter{faceNormals}{\Boolean}{
 * When set to \code{true}, any existing or computed vertex normals are
 * discarded and \emph{face normals} will instead be used during rendering.
-* This gives the rendered object a faceted apperance.\default{\code{false}}
+* This gives the rendered object a faceted appearance.\default{\code{false}}
 * }
 * \parameter{maxSmoothAngle}{\Float}{
 * When specified, Mitsuba will discard all vertex normals in the input mesh and rebuild

View File

@@ -39,7 +39,7 @@ MTS_NAMESPACE_BEGIN
 * \default{automatic}
 * }
 * \parameter{stepWidth}{\Float}{
-* Controls the width of of step function used for the
+* Controls the width of the step function used for the
 * color transition. It is specified as a value between zero
 * and one (relative to the \code{lineWidth} parameter)
 * \default{0.5}