Wenzel Jakob 2014-05-20 18:42:30 +02:00
commit 93b7cdbb1c
36 changed files with 139 additions and 78 deletions

View File

@ -86,7 +86,7 @@ $\texttt{\$}$ mitsuba -c machine1;machine2;... path-to/my-scene.xml
There are two different ways in which you can access render nodes:
\begin{itemize}
\item\textbf{Direct}: Here, you create a direct connection to a running \code{mtssrv} instance on
-another machine (\code{mtssrv} is the Mitsuba server process). From the the performance
+another machine (\code{mtssrv} is the Mitsuba server process). From the performance
standpoint, this approach should always be preferred over the SSH method described below when there is
a choice between them. There are some disadvantages though: first, you need to manually start
\code{mtssrv} on every machine you want to use.

View File

@ -48,7 +48,7 @@ Visual Studio 2010 for legacy 32 bit builds.
Versions XE 2012 and 2013 are known to work.
\end{description}
\paragraph{Mac OS:}
-On Mac OS, builds can either be performed using the the XCode 4 \code{llvm-gcc} toolchain or Intel XE Composer.
+On Mac OS, builds can either be performed using the XCode 4 \code{llvm-gcc} toolchain or Intel XE Composer.
It is possible to target MacOS 10.6 (Snow Leopard) or 10.7 (Lion) as the oldest supported operating system release.
In both cases, XCode must be installed along with the supplementary command line tools.
\begin{description}
@ -57,7 +57,7 @@ In both cases, XCode must be installed along with the supplementary command line
Versions XE 2012 and 2013 are known to work.
\end{description}
Note that the configuration files assume that XCode was
-installed in the \code{/Applications} folder. They must be be manually updated
+installed in the \code{/Applications} folder. They must be manually updated
when this is not the case.
\subsubsection{Selecting a configuration}
Having chosen a configuration, copy it to the main directory and rename it to \code{config.py}, e.g.:

View File

@ -1,8 +1,8 @@
\part{Development guide}
\label{sec:development}
This chapter and the subsequent ones will provide an overview
-of the the coding conventions and general architecture of Mitsuba.
-You should only read them if if you wish to interface with the API
+of the coding conventions and general architecture of Mitsuba.
+You should only read them if you wish to interface with the API
in some way (e.g. by developing your own plugins). The coding style
section is only relevant if you plan to submit patches that are meant
to become part of the main codebase.

View File

@ -7,7 +7,7 @@ The framework distinguishes between \emph{sampling-based} integrators and
\emph{generic} ones. A sampling-based integrator is able to generate
(usually unbiased) estimates of the incident radiance along a specified rays, and this
is done a large number of times to render a scene. A generic integrator
-is more like a black box, where no assumptions are made on how the the image is
+is more like a black box, where no assumptions are made on how the image is
created. For instance, the VPL renderer uses OpenGL to rasterize the scene
using hardware acceleration, which certainly doesn't fit into the sampling-based pattern.
For that reason, it must be implemented as a generic integrator.
@ -261,7 +261,7 @@ As you can see, we did something slightly different in the distance
renderer fragment above (we called \code{RadianceQueryRecord::rayIntersect()}
on the supplied parameter \code{rRec}), and the reason for this is \emph{nesting}.
\subsection{Nesting}
-The idea of of nesting is that sampling-based rendering techniques can be
+The idea of nesting is that sampling-based rendering techniques can be
embedded within each other for added flexibility: for instance, one
might concoct a 1-bounce indirect rendering technique complete with
irradiance caching and adaptive integration simply by writing the following

View File

@ -51,8 +51,8 @@
\ofoot[]{}
\cfoot[]{}
\automark[subsection]{section}
-\ihead{\sc\leftmark}
-\ohead{\sc\rightmark}
+\ihead{\normalfont\scshape\leftmark}
+\ohead{\normalfont\scshape\rightmark}
\chead{}
\setheadsepline{.2pt}
\setkomafont{pagenumber}{\normalfont}

View File

@ -545,7 +545,7 @@ command-line executable.
\subsubsection{Simultaneously rendering multiple versions of a scene}
Sometimes it is useful to be able to submit multiple scenes to the rendering scheduler
at the same time, e.g. when rendering on a big cluster, where one image is not enough to keep all
-cores on all machines busy. This is is quite easy to do by simply launching multiple \code{RenderJob}
+cores on all machines busy. This is quite easy to do by simply launching multiple \code{RenderJob}
instances before issuing the \code{queue.waitLeft} call.
However, things go wrong when rendering multiple versions of the \emph{same} scene simultaneously (for instance
@ -753,7 +753,7 @@ As before, some changes will be necessary to get this to run on PySide.
When using this snippet, please be wary of threading-related issues; the key thing to remember is that
in Qt, only the main thread is allowed to modify Qt widgets. On the other hand, rendering and logging-related
callbacks will be invoked from different Mitsuba-internal threads---this means that it's not possible to e.g.
-directly update the status bar message from the callback \code{finishJobEvent}. To do this, we must use
+directly update the status bar message from the callback \code{finishJobEvent}. To do this, we must
use Qt's \code{QueuedConnection} to communicate this event to the main thread via signals and slots. See the
code that updates the status and progress bar for more detail.
\begin{python}

View File

@ -31,7 +31,8 @@ this is a windowed Gaussian filter with configurable standard deviation.
It produces pleasing results and never suffers from ringing, but may
occasionally introduce too much blurring.
When no reconstruction filter is explicitly requested, this is the default
-choice in Mitsuba.
+choice in Mitsuba. Takes a standard deviation parameter (\code{stddev})
+which is set to 0.5 pixels by default.
\item[Mitchell-Netravali filter (\code{mitchell}):]
Separable cubic spline reconstruction filter by Mitchell and Netravali
\cite{Mitchell:1988:Reconstruction}
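The new \code{stddev} documentation above describes a windowed Gaussian. As a rough illustration only (hypothetical helper; the window radius and lack of normalization here are assumptions, not Mitsuba's exact implementation), such a filter is a Gaussian lobe shifted so it reaches exactly zero at the window edge:

```python
import math

def gaussian_filter_weight(x, stddev=0.5, radius=2.0):
    """Windowed Gaussian: subtract the value at the cutoff so the
    weight falls continuously to zero at the window radius.
    Constants are illustrative, not Mitsuba's exact defaults."""
    if abs(x) >= radius:
        return 0.0
    alpha = 1.0 / (2.0 * stddev * stddev)
    return math.exp(-alpha * x * x) - math.exp(-alpha * radius * radius)
```

A smaller \code{stddev} concentrates the weight near the pixel center, trading blur for aliasing.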

View File

@ -24,7 +24,7 @@
MTS_NAMESPACE_BEGIN
/*! \plugin{blendbsdf}{Blended material}
-* \order{16}
+* \order{17}
* \parameters{
* \parameter{weight}{\Float\Or\Texture}{A floating point value or texture
* with values between zero and one. The extreme values zero and one activate the

View File

@ -113,6 +113,11 @@ public:
} else if (child->getClass()->derivesFrom(MTS_CLASS(Texture))) {
if (m_displacement != NULL)
Log(EError, "Only a single displacement texture can be specified!");
+const Properties &props = child->getProperties();
+if (props.getPluginName() == "bitmap" && !props.hasProperty("gamma"))
+Log(EError, "When using a bitmap texture as a bump map, please explicitly specify "
+"the 'gamma' parameter of the bitmap plugin. In most cases the following is the correct choice: "
+"<float name=\"gamma\" value=\"1.0\"/>");
m_displacement = static_cast<Texture *>(child);
} else {
BSDF::addChild(name, child);
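The check added above exists because low-dynamic-range bitmaps are typically assumed to be sRGB-encoded, which silently warps linear height data before it reaches the bump map. A small demonstration (plain Python with standard sRGB constants, not Mitsuba code) of what goes wrong when a linear value is misread as sRGB:

```python
def srgb_to_linear(c):
    """Standard sRGB decode (electro-optical transfer function)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# A mid-level height value of 0.5 stored in a linear bitmap...
stored = 0.5
# ...comes out heavily distorted if the loader applies sRGB decoding,
# which is why an explicit gamma of 1.0 must be requested.
misread = srgb_to_linear(stored)  # roughly 0.214 instead of 0.5
```

Setting \code{gamma} to 1.0 disables this decode so the texel values pass through unchanged.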

View File

@ -22,7 +22,7 @@
MTS_NAMESPACE_BEGIN
/*!\plugin{mask}{Opacity mask}
-* \order{17}
+* \order{18}
* \parameters{
* \parameter{opacity}{\Spectrum\Or\Texture}{
* Specifies the per-channel opacity (where $1=$ completely opaque)\default{0.5}.

View File

@ -23,7 +23,7 @@
MTS_NAMESPACE_BEGIN
/*! \plugin{mixturebsdf}{Mixture material}
-* \order{15}
+* \order{16}
* \parameters{
* \parameter{weights}{\String}{A comma-separated list of BSDF weights}
* \parameter{\Unnamed}{\BSDF}{Multiple BSDF instances that should be

View File

@ -21,6 +21,29 @@
MTS_NAMESPACE_BEGIN
/*! \plugin{normalmap}{Normal map modifier}
* \order{13}
+* \icon{bsdf_bumpmap}
+*
+* \parameters{
+* \parameter{\Unnamed}{\Texture}{
+* The color values of this texture specify the perturbed
+* normals relative in the local surface coordinate system.
+* }
+* \parameter{\Unnamed}{\BSDF}{A BSDF model that should
+* be affected by the normal map}
+* }
+*
+* This plugin is conceptually similar to the \pluginref{bump} map plugin
+* but uses a normal map instead of a bump map. A normal map is a RGB texture, whose color channels
+* encode the XYZ coordinates of the desired surface normals.
+* These are specified \emph{relative} to the local shading frame,
+* which means that a normal map with a value of $(0,0,1)$ everywhere
+* causes no changes to the surface.
+* To turn the 3D normal directions into (nonnegative) color values
+* suitable for this plugin, the
+* mapping $x \mapsto (x+1)/2$ must be applied to each component.
*/
class NormalMap : public BSDF {
public:
NormalMap(const Properties &props) : BSDF(props) { }
@ -61,7 +84,12 @@ public:
m_nested = static_cast<BSDF *>(child);
} else if (child->getClass()->derivesFrom(MTS_CLASS(Texture))) {
if (m_normals != NULL)
-Log(EError, "Only a single normals texture can be specified!");
+Log(EError, "Only a single normal texture can be specified!");
+const Properties &props = child->getProperties();
+if (props.getPluginName() == "bitmap" && !props.hasProperty("gamma"))
+Log(EError, "When using a bitmap texture as a normal map, please explicitly specify "
+"the 'gamma' parameter of the bitmap plugin. In most cases the following is the correct choice: "
+"<float name=\"gamma\" value=\"1.0\"/>");
m_normals = static_cast<Texture *>(child);
} else {
BSDF::addChild(name, child);
@ -192,7 +220,7 @@ protected:
// ================ Hardware shader implementation ================
/**
-* This is a quite approximate version of the bump map model -- it likely
+* This is a quite approximate version of the normal map model -- it likely
* won't match the reference exactly, but it should be good enough for
* preview purposes
*/
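The documentation added in this file specifies the mapping $x \mapsto (x+1)/2$ for turning normal directions into color values. As a quick sketch (plain Python helpers for illustration, not part of Mitsuba), the encoding and its inverse are:

```python
def encode_normal(n):
    """Map a unit normal from [-1, 1]^3 to color values in [0, 1]^3."""
    return tuple((x + 1.0) / 2.0 for x in n)

def decode_normal(c):
    """Invert the mapping: recover the normal from stored colors."""
    return tuple(2.0 * x - 1.0 for x in c)

# The "flat" normal (0, 0, 1) encodes to (0.5, 0.5, 1.0) -- the
# characteristic light-blue tint of typical normal maps.
flat = encode_normal((0.0, 0.0, 1.0))
```

This is also why a uniform $(0,0,1)$ map leaves the surface unchanged: it decodes to the unperturbed shading normal everywhere.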

View File

@ -23,7 +23,7 @@
MTS_NAMESPACE_BEGIN
/*!\plugin{phong}{Modified Phong BRDF}
-* \order{13}
+* \order{14}
* \parameters{
* \parameter{exponent}{\Float\Or\Texture}{
* Specifies the Phong exponent \default{30}.

View File

@ -96,7 +96,7 @@ MTS_NAMESPACE_BEGIN
* Internally, this is model simulates the interaction of light with a diffuse
* base surface coated by a thin dielectric layer. This is a convenient
* abstraction rather than a restriction. In other words, there are many
-* materials that can be rendered with this model, even if they might not not
+* materials that can be rendered with this model, even if they might not
* fit this description perfectly well.
*
* \begin{figure}[h]

View File

@ -103,13 +103,13 @@ MTS_NAMESPACE_BEGIN
* interaction of light with a diffuse base surface coated by a thin dielectric
* layer (where the coating layer is now \emph{rough}). This is a convenient
* abstraction rather than a restriction. In other words, there are many
-* materials that can be rendered with this model, even if they might not not
+* materials that can be rendered with this model, even if they might not
* fit this description perfectly well.
*
* The simplicity of this setup makes it possible to account for interesting
* nonlinear effects due to internal scattering, which is controlled by
* the \texttt{nonlinear} parameter. For more details, please refer to the description
-* of this parameter given in the the \pluginref{plastic} plugin section
+* of this parameter given in the \pluginref{plastic} plugin section
* on \pluginpage{plastic}.
*
*

View File

@ -23,7 +23,7 @@
MTS_NAMESPACE_BEGIN
/*!\plugin{twosided}{Two-sided BRDF adapter}
-* \order{18}
+* \order{19}
* \parameters{
* \parameter{\Unnamed}{\BSDF}{A nested BRDF that should
* be turned into a two-sided scattering model. If two BRDFs

View File

@ -24,7 +24,7 @@
MTS_NAMESPACE_BEGIN
/*!\plugin{ward}{Anisotropic Ward BRDF}
-* \order{14}
+* \order{15}
* \parameters{
* \parameter{variant}{\String}{
* Determines the variant of the Ward model to use:

View File

@ -80,7 +80,7 @@ MTS_NAMESPACE_BEGIN
*
* The implementation loads a captured illumination environment from a image in
* latitude-longitude format and turns it into an infinitely distant emitter.
-* The image could either be be a processed photograph or a rendering made using the
+* The image could either be a processed photograph or a rendering made using the
* \pluginref{spherical} sensor. The direction conventions of this transformation
* are shown in (b).
* The plugin can work with all types of images that are natively supported by Mitsuba

View File

@ -64,7 +64,7 @@ MTS_NAMESPACE_BEGIN
* \parameter{resolution}{\Integer}{Specifies the horizontal resolution of the precomputed
* image that is used to represent the sun environment map \default{512, i.e. 512$\times$256}}
* \parameter{scale}{\Float}{
-* This parameter can be used to scale the the amount of illumination
+* This parameter can be used to scale the amount of illumination
* emitted by the sky emitter. \default{1}
* }
* \parameter{samplingWeight}{\Float}{
@ -160,7 +160,7 @@ MTS_NAMESPACE_BEGIN
* (512$\times$ 256) of the entire sky that is then forwarded to the
* \pluginref{envmap} plugin---this dramatically improves rendering
* performance. This resolution is generally plenty since the sky radiance
-* distribution is so smooth, but it it can be adjusted manually if
+* distribution is so smooth, but it can be adjusted manually if
* necessary using the \code{resolution} parameter.
*
* Note that while the model encompasses sunrise and sunset configurations,
@ -212,7 +212,7 @@ MTS_NAMESPACE_BEGIN
* \medrendering{\code{albedo}=100%}{emitter_sky_albedo_1}
* \medrendering{\code{albedo}=20% green}{emitter_sky_albedo_green}
* \caption{\label{fig:sky_groundalbedo}Influence
-* of the ground albedo on the apperance of the sky
+* of the ground albedo on the appearance of the sky
* }
*/
class SkyEmitter : public Emitter {

View File

@ -62,7 +62,7 @@ MTS_NAMESPACE_BEGIN
* \parameter{resolution}{\Integer}{Specifies the horizontal resolution of the precomputed
* image that is used to represent the sun environment map \default{512, i.e. 512$\times$256}}
* \parameter{scale}{\Float}{
-* This parameter can be used to scale the the amount of illumination
+* This parameter can be used to scale the amount of illumination
* emitted by the sun emitter. \default{1}
* }
* \parameter{sunRadiusScale}{\Float}{

View File

@ -67,11 +67,11 @@ MTS_NAMESPACE_BEGIN
* \parameter{resolution}{\Integer}{Specifies the horizontal resolution of the precomputed
* image that is used to represent the sun environment map \default{512, i.e. 512$\times$256}}
* \parameter{sunScale}{\Float}{
-* This parameter can be used to separately scale the the amount of illumination
+* This parameter can be used to separately scale the amount of illumination
* emitted by the sun. \default{1}
* }
* \parameter{skyScale}{\Float}{
-* This parameter can be used to separately scale the the amount of illumination
+* This parameter can be used to separately scale the amount of illumination
* emitted by the sky.\default{1}
* }
* \parameter{sunRadiusScale}{\Float}{

View File

@ -155,7 +155,7 @@ MTS_NAMESPACE_BEGIN
*
* Apart from querying the render time,
* memory usage, and other scene-related information, it is also possible
-* to `paste' an existing parameter that was provided to another plugin---for instance,the
+* to `paste' an existing parameter that was provided to another plugin---for instance,
* the camera transform matrix would be obtained as \code{\$sensor['toWorld']}. The name of
* the active integrator plugin is given by \code{\$integrator['type']}, and so on.
* All of these can be mixed to build larger fragments, as following example demonstrates.

View File

@ -48,7 +48,7 @@ MTS_NAMESPACE_BEGIN
* \begin{enumerate}[(i)]
* \item \code{gamma}: Exposure and gamma correction (default)
* \vspace{-1mm}
-* \item \code{reinhard}: Apply the the
+* \item \code{reinhard}: Apply the
* tonemapping technique by Reinhard et al. \cite{Reinhard2002Photographic}
* followd by gamma correction.
* \vspace{-4mm}

View File

@ -56,7 +56,7 @@ MTS_NAMESPACE_BEGIN
* paths starting at the emitters and the sensor and connecting them in every possible way.
* This works particularly well in closed scenes as the one shown above. Here, the unidirectional
* path tracer has severe difficulties finding some of the indirect illumination paths.
-* Modeled after after a scene by Eric Veach.
+* Modeled after a scene by Eric Veach.
* }
* }
* \renderings{

View File

@ -437,7 +437,7 @@ public:
unsigned int bsdfType = bsdf->getType() & BSDF::EAll;
-/* Irradiance cachq query -> trat as diffuse */
+/* Irradiance cache query -> treat as diffuse */
bool isDiffuse = (bsdfType == BSDF::EDiffuseReflection) || cacheQuery;
bool hasSpecular = bsdfType & BSDF::EDelta;

View File

@ -65,7 +65,7 @@ MTS_NAMESPACE_BEGIN
*
* \remarks{
* \item Due to the data dependencies of this algorithm, the parallelization is
-* limited to to the local machine (i.e. cluster-wide renderings are not implemented)
+* limited to the local machine (i.e. cluster-wide renderings are not implemented)
* \item This integrator does not handle participating media
* \item This integrator does not currently work with subsurface scattering
* models.

View File

@ -63,7 +63,7 @@ MTS_NAMESPACE_BEGIN
*
* \remarks{
* \item Due to the data dependencies of this algorithm, the parallelization is
-* limited to to the local machine (i.e. cluster-wide renderings are not implemented)
+* limited to the local machine (i.e. cluster-wide renderings are not implemented)
* \item This integrator does not handle participating media
* \item This integrator does not currently work with subsurface scattering
* models.

View File

@ -34,7 +34,7 @@ MTS_NAMESPACE_BEGIN
* path termination criterion. \default{\code{5}}
* }
* \parameter{granularity}{\Integer}{
-* Specifies the work unit granularity used to parallize the the particle
+* Specifies the work unit granularity used to parallize the particle
* tracing task. This should be set high enough so that accumulating
* partially exposed images (and potentially sending them over the network)
* is not the bottleneck.

View File

@ -2815,7 +2815,8 @@ void Bitmap::writeOpenEXR(Stream *stream) const {
Imf::ChannelList &channels = header.channels();
if (!m_channelNames.empty()) {
if (m_channelNames.size() != (size_t) m_channelCount)
-Log(EError, "writeOpenEXR(): 'channelNames' has the wrong number of entries!");
+Log(EError, "writeOpenEXR(): 'channelNames' has the wrong number of entries (%i, expected %i)!",
+(int) m_channelNames.size(), (int) m_channelCount);
for (size_t i=0; i<m_channelNames.size(); ++i)
channels.insert(m_channelNames[i].c_str(), Imf::Channel(compType));
} else if (pixelFormat == ELuminance || pixelFormat == ELuminanceAlpha) {

View File

@ -107,12 +107,19 @@ void VPLShaderManager::setScene(const Scene *scene) {
shape = instantiatedShapes[j];
if (!m_renderer->unregisterGeometry(shape))
continue;
-m_renderer->unregisterShaderForResource(shape->getBSDF());
+const BSDF *bsdf = shape->getBSDF();
+if (!bsdf)
+bsdf = const_cast<Shape *>(shape)->createTriMesh()->getBSDF();
+m_renderer->unregisterShaderForResource(bsdf);
}
} else {
+const BSDF *bsdf = shape->getBSDF();
+if (!bsdf)
+bsdf = const_cast<Shape *>(shape)->createTriMesh()->getBSDF();
if (!m_renderer->unregisterGeometry(shape))
continue;
-m_renderer->unregisterShaderForResource(shape->getBSDF());
+m_renderer->unregisterShaderForResource(bsdf);
}
}
@ -168,9 +175,13 @@ void VPLShaderManager::setScene(const Scene *scene) {
if (!gpuGeo)
continue;
-Shader *shader = m_renderer->registerShaderForResource(shape->getBSDF());
+const BSDF *bsdf = shape->getBSDF();
+if (!bsdf)
+bsdf = gpuGeo->getTriMesh()->getBSDF();
+Shader *shader = m_renderer->registerShaderForResource(bsdf);
if (shader && !shader->isComplete()) {
-m_renderer->unregisterShaderForResource(shape->getBSDF());
+m_renderer->unregisterShaderForResource(bsdf);
shader = NULL;
}
@ -192,13 +203,14 @@ void VPLShaderManager::setScene(const Scene *scene) {
GPUGeometry *gpuGeo = m_renderer->registerGeometry(shape);
if (!gpuGeo)
continue;
-Shader *shader = m_renderer->registerShaderForResource(shape->getBSDF());
+const BSDF *bsdf = shape->getBSDF();
+if (!bsdf)
+bsdf = gpuGeo->getTriMesh()->getBSDF();
+Shader *shader = m_renderer->registerShaderForResource(bsdf);
if (shader && !shader->isComplete()) {
-m_renderer->unregisterShaderForResource(shape->getBSDF());
+m_renderer->unregisterShaderForResource(bsdf);
shader = NULL;
}
gpuGeo->setShader(shader);
m_geometry.push_back(std::make_pair(gpuGeo, identityTrafo));

View File

@ -70,7 +70,7 @@ MTS_NAMESPACE_BEGIN
* The Hammerlsey sequence is closely related to the Halton sequence and yields a very
* high quality point set that is slightly more regular (and has lower discrepancy),
* especially in the first few dimensions. As is the case with the Halton sequence,
-* the points should be scrambled to reduce patterns that manifest due due to correlations
+* the points should be scrambled to reduce patterns that manifest due to correlations
* in higher dimensions. Please refer to the \pluginref{halton} page for more information
* on how this works.
*
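The Hammersley construction discussed above pairs $i/N$ with the radical inverse of $i$. A minimal unscrambled sketch under the standard definition (base 2 only; the scrambling the text recommends is omitted for brevity):

```python
def radical_inverse_base2(i):
    """Mirror the binary digits of i around the radix point."""
    inv, f = 0.0, 0.5
    while i > 0:
        if i & 1:
            inv += f
        i >>= 1
        f *= 0.5
    return inv

def hammersley(n):
    """First n points of the (unscrambled) 2D Hammersley set."""
    return [(i / n, radical_inverse_base2(i)) for i in range(n)]
```

Unlike the Halton sequence, the first coordinate depends on the total sample count $N$, which is why the Hammersley set cannot be extended incrementally.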

View File

@ -65,7 +65,7 @@ MTS_NAMESPACE_BEGIN
* \fbox{\includegraphics[width=6cm]{images/shape_hair}}\hspace{4.5cm}
* \caption{A close-up of the hair shape rendered with a diffuse
* scattering model (an actual hair scattering model will
-* be needed for realistic apperance)}
+* be needed for realistic appearance)}
* }
* The plugin implements a space-efficient acceleration structure for
* hairs made from many straight cylindrical hair segments with miter

View File

@ -39,7 +39,7 @@ MTS_NAMESPACE_BEGIN
* \parameter{faceNormals}{\Boolean}{
* When set to \code{true}, any existing or computed vertex normals are
* discarded and \emph{face normals} will instead be used during rendering.
-* This gives the rendered object a faceted apperance.\default{\code{false}}
+* This gives the rendered object a faceted appearance.\default{\code{false}}
* }
* \parameter{maxSmoothAngle}{\Float}{
* When specified, Mitsuba will discard all vertex normals in the input mesh and rebuild
@ -52,15 +52,21 @@ MTS_NAMESPACE_BEGIN
* }
* \parameter{flipTexCoords}{\Boolean}{
* Treat the vertical component of the texture as inverted? Most OBJ files use
-* this convention. \default{\code{true}, i.e. flip them to get the
-* correct coordinates}.
+* this convention. \default{\code{true}}
* }
* \parameter{toWorld}{\Transform\Or\Animation}{
* Specifies an optional linear object-to-world transformation.
* \default{none (i.e. object space $=$ world space)}
* }
+* \parameter{shapeIndex}{\Integer}{
+* When the file contains multiple meshes, this parameter can
+* be used to select a single one. \default{\code{-1}, \mbox{i.e. load all}}
+* }
* \parameter{collapse}{\Boolean}{
-* Collapse all contained meshes into a single object \default{\code{false}}
+* Collapse all meshes into a single shape \default{\code{false}}
* }
+* \parameter{loadMaterials}{\Boolean}{
+* \mbox{Import materials from a \code{mtl} file, if it exists?\default{\code{true}}}
+* }
* }
* \renderings{
@ -134,7 +140,6 @@ MTS_NAMESPACE_BEGIN
* valid vertex normals).
*
* \remarks{
-* \item The plugin currently only supports loading meshes constructed from triangles and quadrilaterals.
* \item Importing geometry via OBJ files should only be used as an absolutely
* last resort. Due to inherent limitations of this format, the files tend to be unreasonably
* large, and parsing them requires significant amounts of memory and processing power. What's worse
@ -205,9 +210,15 @@ public:
/* Causes all texture coordinates to be vertically flipped */
bool flipTexCoords = props.getBoolean("flipTexCoords", true);
+/// When the file contains multiple meshes, this index specifies which one to load
+int shapeIndex = props.getInteger("shapeIndex", -1);
/* Object-space -> World-space transformation */
Transform objectToWorld = props.getTransform("toWorld", Transform());
+/* Import materials from a MTL file, if any? */
+bool loadMaterials = props.getBoolean("loadMaterials", true);
/* Load the geometry */
Log(EInfo, "Loading geometry from \"%s\" ..", path.filename().string().c_str());
fs::ifstream is(path);
@ -226,7 +237,7 @@ public:
std::set<std::string> geomNames;
std::vector<Vertex> vertexBuffer;
fs::path materialLibrary;
-int geomIdx = 0;
+int geomIndex = 0;
bool nameBeforeGeometry = false;
std::string materialName;
@ -260,10 +271,12 @@ public:
if (triangles.size() > 0) {
/// make sure that we have unique names
if (geomNames.find(targetName) != geomNames.end())
-targetName = formatString("%s_%i", targetName.c_str(), geomIdx++);
+targetName = formatString("%s_%i", targetName.c_str(), geomIndex);
+geomIndex += 1;
geomNames.insert(targetName);
-createMesh(targetName, vertices, normals, texcoords,
-triangles, materialName, objectToWorld, vertexBuffer);
+if (shapeIndex < 0 || geomIndex-1 == shapeIndex)
+createMesh(targetName, vertices, normals, texcoords,
+triangles, materialName, objectToWorld, vertexBuffer);
triangles.clear();
} else {
nameBeforeGeometry = true;
@ -271,13 +284,15 @@ public:
name = newName;
} else if (buf == "usemtl") {
/* Flush if necessary */
-if (triangles.size() > 0) {
+if (triangles.size() > 0 && !m_collapse) {
/// make sure that we have unique names
if (geomNames.find(name) != geomNames.end())
-name = formatString("%s_%i", name.c_str(), geomIdx++);
+name = formatString("%s_%i", name.c_str(), geomIndex);
+geomIndex += 1;
geomNames.insert(name);
-createMesh(name, vertices, normals, texcoords,
-triangles, materialName, objectToWorld, vertexBuffer);
+if (shapeIndex < 0 || geomIndex-1 == shapeIndex)
+createMesh(name, vertices, normals, texcoords,
+triangles, materialName, objectToWorld, vertexBuffer);
triangles.clear();
name = m_name;
}
@ -298,27 +313,25 @@ public:
iss >> tmp; parse(t, 1, tmp);
iss >> tmp; parse(t, 2, tmp);
triangles.push_back(t);
-if (iss >> tmp) {
-t.p[1] = t.p[0];
-t.uv[1] = t.uv[0];
-t.n[1] = t.n[0];
-parse(t, 0, tmp);
+/* Handle n-gons assuming a convex shape */
+while (iss >> tmp) {
+t.p[1] = t.p[2];
+t.uv[1] = t.uv[2];
+t.n[1] = t.n[2];
+parse(t, 2, tmp);
triangles.push_back(t);
+}
-if (iss >> tmp)
-Log(EError, "Encountered an n-gon (with n>4)! Only "
-"triangles and quads are supported by the OBJ loader.");
-} else {
-/* Ignore */
-}
}
if (geomNames.find(name) != geomNames.end())
/// make sure that we have unique names
-name = formatString("%s_%i", m_name.c_str(), geomIdx);
+name = formatString("%s_%i", m_name.c_str(), geomIndex);
-createMesh(name, vertices, normals, texcoords,
-triangles, materialName, objectToWorld, vertexBuffer);
+if (shapeIndex < 0 || geomIndex-1 == shapeIndex)
+createMesh(name, vertices, normals, texcoords,
+triangles, materialName, objectToWorld, vertexBuffer);
if (props.hasProperty("maxSmoothAngle")) {
if (m_faceNormals)
@ -329,7 +342,7 @@ public:
m_meshes[i]->rebuildTopology(maxSmoothAngle);
}
-if (!materialLibrary.empty())
+if (!materialLibrary.empty() && loadMaterials)
loadMaterialLibrary(fileResolver, materialLibrary);
Log(EInfo, "Done with \"%s\" (took %i ms)", path.filename().string().c_str(), timer->getMilliseconds());
@ -401,6 +414,7 @@ public:
}
Properties props("bitmap");
props.setString("filename", path.string());
+props.setFloat("gamma", 1.0f);
ref<Texture> texture = static_cast<Texture *> (PluginManager::getInstance()->
createObject(MTS_CLASS(Texture), props));
texture->configure();
@ -535,7 +549,7 @@ public:
bsdf->configure();
if (bump) {
-props = Properties("bump");
+props = Properties("bumpmap");
ref<BSDF> bumpBSDF = static_cast<BSDF *> (PluginManager::getInstance()->
createObject(MTS_CLASS(BSDF), props));
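The replacement face-parsing loop in this file triangulates convex n-gons as a fan around the first vertex, repeatedly shifting the last parsed index into the middle slot. The same idea over plain index lists, as an illustrative sketch (hypothetical helper, not the Mitsuba code itself):

```python
def triangulate_fan(indices):
    """Split a convex polygon into triangles sharing the first vertex,
    mirroring the 'shift p[2] into p[1]' loop in the OBJ loader."""
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

# A pentagon face "f 0 1 2 3 4" becomes three triangles.
tris = triangulate_fan([0, 1, 2, 3, 4])
```

Note the convexity assumption stated in the code comment: a fan can fold over itself on concave polygons, so those still need a more careful ear-clipping triangulation.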

View File

@ -51,7 +51,7 @@ MTS_NAMESPACE_BEGIN
* }
* \parameter{faceNormals}{\Boolean}{
* When set to \code{true}, Mitsuba will use face normals when rendering
-the object, which will give it a faceted apperance. \default{\code{false}}
+the object, which will give it a faceted appearance. \default{\code{false}}
* }
* \parameter{maxSmoothAngle}{\Float}{
* When specified, Mitsuba will discard all vertex normals in the input mesh and rebuild

View File

@ -46,7 +46,7 @@ extern MTS_EXPORT_RENDER void pushSceneCleanupHandler(void (*cleanup)());
* \parameter{faceNormals}{\Boolean}{
* When set to \code{true}, any existing or computed vertex normals are
* discarded and \emph{face normals} will instead be used during rendering.
-This gives the rendered object a faceted apperance.\default{\code{false}}
+This gives the rendered object a faceted appearance.\default{\code{false}}
* }
* \parameter{maxSmoothAngle}{\Float}{
* When specified, Mitsuba will discard all vertex normals in the input mesh and rebuild

View File

@ -39,7 +39,7 @@ MTS_NAMESPACE_BEGIN
* \default{automatic}
* }
* \parameter{stepWidth}{\Float}{
-* Controls the width of of step function used for the
+* Controls the width of the step function used for the
* color transition. It is specified as a value between zero
* and one (relative to the \code{lineWidth} parameter)
* \default{0.5}