Retrospective | Utilizing real-time technologies for product visualization.

Client: Restoration Hardware

Project: Cloud Modular

Artist: Steven Wilson - Epigraph

Project Summary and Scope

The goal of the project was to produce product-photography images on par with Epigraph's current work whilst fulfilling the client's standards. Additionally, utilizing real-time technologies would provide quicker feedback from the client via "live" reviews, and consequently a quicker turnaround time for delivery.


The scope of this project was to produce images of four specific furniture assemblies in five variations each, for a total of 20 images delivered to the client.


Successful Discoveries

  • Render time was successfully diminished by a significant margin, e.g. 8k renders in 5 minutes via the Path Tracer (similar renders on our hardware with V-Ray would take 1+ hours).
  • Unreal Engine is capable of rendering extremely high resolution images without extreme render times. (8k+)
  • Live client reviews are feasible and create a more collaborative environment between Epigraph and the client.
  • Unreal Engine is capable of producing high quality, photo-realistic images.
  • Raw data from CAD and other DCC software can be imported into Unreal via Datasmith or universal formats (.fbx, .obj, etc.)
  • Complex material setups are feasible.
  • Various parameters can be created for live and fast editing of materials/finishes for client reviews.
  • Toggleable setups for fast switching between specific setups/configurations are possible via Variant Manager.
  • Lookdev is highly responsive for artists - lighting, post-process, and camera effects are all accessible. Some additional information for further post-production is also available after rendering (e.g. cryptomatte).

Obstacles Discovered

  • Images produced for this particular project did not meet expectations for final delivery.
  • This was due primarily to the specific challenges of this project itself, rather than the overall quality Unreal can achieve.
  • Various bugs or anomalies are present whilst interacting with Unreal. The largest would be instability whilst using the Path Tracer for previewing or rendering. I experienced well over 50 crashes whilst working on this project.
  • Instability is an issue not only for lookdev and production, but for “live” reviews as well.
  • Clunky UI makes live reviews a bit cumbersome.
  • Utilizing the Path Tracer, whilst producing great results, comes with several limitations.
  • It’s not complete.
  • No access to what produces the image. (Gloss/Specular/Lighting passes)
  • No anti-aliasing without losing control over the final image look. (Disables tone-mapping)
  • No high-resolution imagery (hardware limited) without losing control over the final image look. (Also disables tone-mapping).

*Disabled tone mapping is primarily an issue because lookdev is no longer a WYSIWYG experience for the artist. It also makes live reviews pointless.


  • No bump maps.
  • Bump maps must be converted to normal maps, which produces a noticeably different look. This was a particular issue with the Roma finish on this project.
  • Constrained to PBR workflow.
  • Can be an issue when replicating very specific material setups from other workflows.
  • No general color correction nodes for matching material setups from other DCC apps.


Other Considerations/Project Specific Challenges

  • We constrained all lookdev and outputs based on results from the Path Tracer. We have not fully explored reaching similar results via Lumen/ray tracing.
  • Our reference for our final output was also CG.
  • We were essentially converting/translating some workflows from the 3D dept. for the client. 
  • 3ds Max Corona shaders/materials utilize a reflection/glossiness workflow (non-PBR). These shaders could not be directly translated or converted over to Unreal Engine.
  • Some organizational issues were also present in the client files, which led to some confusion and incorrect usage of some resources.
  • Various features inside of Unreal are still in beta/early access and not complete.
  • The client requested changes to the models we used, even though the models had been approved and sent to us by the client themselves.


The Process

Data Delivery

We acquired the necessary information about the product and the specific finishes required to begin. For this project, we received 3ds Max project files containing a model made up of various components, which we used to create the final four models/products required for production. The model/components did not derive from CAD or scan data; they were created by an artist at RH.

In addition to the model, the files also contained the light setups, scene, and materials used in their renderings.

Once this data was validated as usable (in our case we had a couple of Max files that weren't compatible), we could proceed with the next step.

Data to Unreal

There were primarily two ways to get the information/data from Max to Unreal.


  1. Datasmith

Datasmith was the initial way we moved most of the assets/data into Unreal. It is essentially a set of tools that allows you to pull all kinds of information in from other applications, serving as a middleman between them. Whilst it is mostly successful, there are some gaps in translation where information is interpreted incorrectly (or simply not supported), which can create errors or make some assets unusable. The largest roadblock here was the use of Corona - a rendering engine utilized by both us and Restoration Hardware.
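For reference, the same Datasmith import can also be scripted from the editor's Python console. A minimal sketch, assuming a hypothetical .udatasmith file exported from Max:

import unreal

# Parse the Datasmith file into an in-memory scene element.
scene = unreal.DatasmithSceneElement.construct_datasmith_scene_from_file(
    "C:/Projects/RH_Cloud/cloud_modular.udatasmith")  # hypothetical path
if scene is None:
    raise RuntimeError("Could not parse the .udatasmith file")

# Import geometry, lights, and (best-effort) materials into the project.
scene.import_scene("/Game/Datasmith/CloudModular")

# Release the temporary scene data when done.
scene.destroy_scene()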

Lights and geometry/scenery all converted over successfully with only some minor tweaks to some parameters needed like the intensity of individual lights.

Materials, however, did not convert over with much success. Datasmith could not interpret some Corona-specific nodes, and the end result looked like a spaghetti nightmare in the editor - largely unreadable for an artist to logically navigate and utilize.

Pictured above: A material brought in via Datasmith. Note the large number of connections, with various arbitrary math functions used to edit texture images.


  2. Manual Import

The other option was to manually import assets using the already existing tools inside of Unreal. I eventually imported the models manually for this project, primarily because we needed more control over specific components (the leather panels) and couldn't get it with the models brought in directly from Max. There were two ways to handle texturing each individual leather panel in Unreal: either split each panel into its own object and manage a material per object, or keep the mesh whole and assign a unique material to each panel. I went with the latter, since there wasn't really a tradeoff between the two beyond how the data is organized in Unreal.
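For reference, this manual import can also be batched through Unreal's asset tools in Python. A minimal sketch with hypothetical paths, deliberately skipping Max's materials since we rebuilt them from scratch:

import unreal

task = unreal.AssetImportTask()
task.filename = "C:/Projects/RH_Cloud/chair_roma.fbx"  # hypothetical source file
task.destination_path = "/Game/Meshes/CloudModular"
task.automated = True   # suppress the interactive import dialog
task.save = True

options = unreal.FbxImportUI()
options.import_mesh = True
options.import_materials = False  # materials were rebuilt by hand in Unreal
options.import_textures = False
task.options = options

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])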

Materials, on the other hand, required the most work; lookdev on materials was by far the largest time-sink of the project.

Pictured below: One of the variations of the Berkshire material, built from scratch. A large number of nodes are used, but the graph is organized by the different properties that make up the material, and it is parametric so these properties can be changed live.

An additional reason we could not utilize the Corona materials imported via Datasmith was that we required the flexibility to change various properties of the materials on the fly for live reviews. These properties included texture tiling, roughness, specular intensity, normal map intensity, albedo color tinting, and the intensity of maps composited with other maps. All the materials used in the final images were built from scratch, using the Corona shaders as a roadmap for which texture images to use and how to use them.
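To illustrate what "parametric" buys you here: exposed parameters on a material instance can be nudged on the fly, by hand in the details panel or via the editor's Python bindings. A minimal sketch, with a hypothetical asset path and parameter names that would need to match those exposed in the parent material:

import unreal

# Load one of the finish material instances (hypothetical path).
mi = unreal.load_asset("/Game/Materials/MI_Berkshire_Brown")

mel = unreal.MaterialEditingLibrary
mel.set_material_instance_scalar_parameter_value(mi, "Roughness", 0.55)
mel.set_material_instance_scalar_parameter_value(mi, "TileScale", 4.0)
mel.set_material_instance_scalar_parameter_value(mi, "NormalIntensity", 0.8)
mel.set_material_instance_vector_parameter_value(
    mi, "AlbedoTint", unreal.LinearColor(0.85, 0.78, 0.70, 1.0))

# Recompile/propagate so the viewport reflects the change immediately.
mel.update_material_instance(mi)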


The problem with this particular process was that:

  1. Recreating certain color correction functions such as adjusting contrast, saturation, brightness, etc. is up to the artist's discretion and interpretation, since there is no general color correction node in Unreal. Additionally, the limited tools that do exist for saturation/contrast do not function the same way as they do in Corona (see the sketch below).
  2. Due to the above issues, and the differences in moving from a glossiness/reflection workflow to PBR, it became increasingly difficult to achieve the target render results.
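To make the first point concrete, below is the kind of correction math we ended up approximating by hand in the material graph (via Desaturation/Lerp/Power-style nodes), written out in plain Python for clarity. These are common approximations, not Corona's exact formulas:

# Saturation as a lerp between luminance and the original color.
def desaturate(rgb, saturation):
    r, g, b = rgb
    grey = 0.299 * r + 0.587 * g + 0.114 * b  # Rec.601 luma weights
    return tuple(grey + (c - grey) * saturation for c in rgb)

# Contrast as a scale away from a mid-grey pivot; amount=1.0 is a no-op.
def contrast(rgb, amount, pivot=0.5):
    return tuple(pivot + (c - pivot) * amount for c in rgb)

# Example: slightly desaturate, then add contrast to a leather albedo sample.
albedo = (0.42, 0.30, 0.22)
print(contrast(desaturate(albedo, 0.85), 1.2))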


I would like to note that some color corrections can be made inside the texture editor (not the material editor) in Unreal; however, we required these adjustments to be made in the material editor instead.


These issues, combined with some limitations inside of Unreal, made matching our work to the client's a particular challenge of the project.

Variants

Once all the assets were imported into Unreal, we needed to have easy control over what was rendered or visible to the viewer. This included what products were visible, what materials/finishes were applied to the products, the light setup, and also what post-processing adjustments were used. Need to render out the Chair in Berkshire finish? Just toggle everything needed for that.


It is possible to link these options by creating dependencies, but for this project I did not dive that deep; however, you can save yourself a few clicks by doing so.

The Variant Manager system was used to control the presets for everything. Since there is no automation for this out of the box, each preset for each image needed to be toggled before rendering - not a big deal, but definitely a monotonous process after rendering each image out several times for review.
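In hindsight, this toggling could likely be scripted through the Variant Manager's Python bindings, though we did not do so on this project. A hypothetical sketch (the asset path and variant names are invented, and the method names should be verified against your engine version's Python API):

import unreal

# Load the LevelVariantSets asset that holds all of our presets.
lvs = unreal.load_asset("/Game/Variants/LVS_CloudModular")  # hypothetical asset

def activate(variant_set_name, variant_name):
    # Method names per the Variant Manager Python bindings; verify per version.
    variant_set = lvs.get_variant_set_by_name(variant_set_name)
    variant = variant_set.get_variant_by_name(variant_name)
    variant.switch_on()  # applies every property captured by this variant

# "Chair in Berkshire" = product + finish + matching light rig in one go.
activate("Product", "Chair")
activate("Finish", "Berkshire")
activate("Lighting", "Studio_A")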

Without this, managing everything would surely have been a nightmare. Thanks, Epic!

Rendering

In Unreal there are two ways to render your image. Both were utilized for this project, but ultimately we went with option 1 due to limitations with the Path Tracer.

  1. The Sequencer

The sequencer is where you create camera shots, similar to 3D apps or video editing software like After Effects/Premiere Pro. You can keyframe and animate various parameters for the camera as well as objects, lights - virtually anything else in your scene. You can also render from the sequencer editor with your chosen sequence open. From the rendering window you have some straightforward options for setting up what you would like to output, such as additional passes (cryptomatte, roughness, etc.). It is important to note, though, that these passes do not look like or function the same way as their offline-render counterparts in Corona, V-Ray, etc. Thus, their usability is limited for any actual post-production work.

The primary difference between rendering with this and the following method, Movie Render Queue, is that some options are not available, such as tiled rendering (AKA High Resolution Output), specific anti-aliasing options, and more. You can only render up to 8k resolution from the sequencer; however, our GPUs could not handle this, and we were stuck rendering at up to 4k resolution.

There was no difference in render time between these two methods. We rendered via Path Tracer with around 2048 samples/passes.

  2. Movie Render Queue

The Movie Render Queue functions primarily as a way to render out multiple sequences in succession. The benefit of using it versus rendering directly from the sequencer was the High Resolution Output option, which allows tiled rendering. Tiled rendering lets you render extremely large resolutions like 16k without your GPU exploding and burning the whole office down: it breaks your image up into manageable chunks (tiles) and then stitches them together at the end to form one giant image.
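For reference, this setup can also be queued through the Movie Render Queue's Python API. A minimal sketch with hypothetical sequence/map paths, subject to the tone-mapping caveat described below:

import unreal

# Queue a single tiled high-resolution still through Movie Render Queue.
subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath("/Game/Sequences/SEQ_Chair_Berkshire")  # hypothetical
job.map = unreal.SoftObjectPath("/Game/Maps/Studio")                         # hypothetical

config = job.get_configuration()
config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_PNG)

output = config.find_or_add_setting_by_class(unreal.MoviePipelineOutputSetting)
output.output_resolution = unreal.IntPoint(15360, 8640)  # a "16k" frame

high_res = config.find_or_add_setting_by_class(unreal.MoviePipelineHighResSetting)
high_res.tile_count = 4  # render as a 4x4 grid of tiles, stitched at the end

# Render in-editor via the PIE executor.
subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)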

The problem we confronted with this feature was that images rendered with it lacked the post-process effects from post-process volumes. Additionally, we discovered that some tone mapping inherently present within Unreal Engine also seemed to be omitted. Essentially, our renders would not fully match what you see in the viewport, and thus lookdev and live reviews aren't fruitful.

This issue is perhaps project specific, and had the burden of matching the client's target images not been so pertinent, this feature might have been used.

Post Processing

Once the images were rendered out of Unreal, they underwent some small post-processing inside of Nuke in order to comp them into consistently lit backgrounds. There was some small additional grading to aid with matching, but nothing immediately noticeable from viewport to render. There is nothing really specific to note in this part of the process other than:


  1. Cryptomattes were perhaps the single most important pass to have.
  2. We could not get a proper lighting pass out of Unreal to add shadows to our backgrounds. The detail lighting and regular lighting passes are only material overrides and don't actually contain the information used in the final beauty image that Unreal produces.
    1. Epic/Unreal has stated in documentation that there is currently no way to comp passes and achieve a 1-to-1 match with the raw beauty render.
    2. The way we achieved shadows was a cheat: we took one beauty render, made it monotone, and adjusted values to isolate the shadows, which we then comped in with a mask (a sketch of this follows below).
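A minimal Nuke-Python sketch of that shadow cheat, with hypothetical file paths and eyeballed knob values:

import nuke

# Desaturate a beauty render, grade it so only the shadow range survives,
# then multiply the resulting matte over the background plate.
beauty = nuke.nodes.Read(file="renders/chair_berkshire_beauty.####.exr")
bg = nuke.nodes.Read(file="plates/studio_background.exr")

mono = nuke.nodes.Saturation(saturation=0.0)    # monotone the beauty
mono.setInput(0, beauty)

shadows = nuke.nodes.Grade()                    # crush values to isolate shadows
shadows.setInput(0, mono)
shadows["blackpoint"].setValue(0.02)            # eyeballed per shot
shadows["whitepoint"].setValue(0.35)

comp = nuke.nodes.Merge2(operation="multiply")  # darken the bg where shadows fall
comp.setInput(0, bg)       # B input: background plate
comp.setInput(1, shadows)  # A input: shadow matte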

Conclusion

Overall, the project proved several things. Unreal is certainly capable of replacing current rendering pipelines, with some limitations. There were specific requirements in this project that Unreal at times could not fulfill; however, the overall quality of images and the speed of the application in terms of lookdev and rendering is a massive boon. It must also be considered that throughout the project we were utilizing Unreal in rather unorthodox ways to achieve these results.

First and foremost, I don't believe the Path Tracer engine is currently fit for final production output. There is little to no documentation for it, and there is little one can do to control it. We don't have access to what creates the path-traced image (passes), and the number of samples is the only option we have for controlling it. There is currently little reason to believe Epic is preparing the Path Tracer to be used as a final output, but the future could prove this wrong. For now, and perhaps for quite some time, we should expect that Epic supports Lumen/RTX for use in production. For this project, we did not delve deeper into this more frequently used and familiar part of Unreal. The path-traced engine is also considerably slower than Lumen/RTX, but still faster than typical offline DCC apps (V-Ray/Corona). There are just too many cons and issues to consider moving forward with the path-traced engine in Unreal; it would be wise to consider the other rendering options.

Another consideration is how Unreal fares for our process against other software such as Omniverse (which we are currently exploring). Both use similar technologies, but being able to fully utilize the power of the USD format inside Omniverse may be a better option. Unreal also supports USD, but it is not quite as organized compared to how USD stages/assets are structured in Omniverse.

I also believe our success and results would have differed had we been matching photo references with our own resources, versus replicating something that was already computer-generated and incompatible with our workflows. This was perhaps the largest and most difficult obstacle in the project. We were challenged to match five leather finishes previously rendered in Corona with complex, non-PBR material setups (lots of custom falloffs for fresnel and such). We were mostly successful, matching three of the five finishes. Considering the incompatibilities between the two workflows - Unreal vs. Corona - this speaks volumes about the success we did have.

The two finishes we couldn't match as well were Roma and Sherwood. Roma was arguably the simplest setup; however, the client's Corona materials used a lot of bump mapping rather than normal maps. Unreal Engine does not accept or convert bump maps very well, and we couldn't derive a normal map from their bump maps that matched the same look. I think this is primarily because bump maps and normal maps are calculated differently when rendered and ultimately produce different looks in their own right. Regardless of which is more physically correct, we just couldn't manage to reproduce the same exact look in time.
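For context, a standard bump(height)-to-normal conversion derives slopes from neighboring height values, which is precisely why the converted result shades differently from Corona's direct bump perturbation. A small numpy sketch of the idea, using a stand-in texture rather than the actual Roma asset:

import numpy as np

def height_to_normal(height, strength=1.0):
    """height: 2D float array in [0, 1]; returns an HxWx3 normal map in [0, 1]."""
    dx = np.gradient(height, axis=1) * strength  # slope along X
    dy = np.gradient(height, axis=0) * strength  # slope along Y
    normal = np.dstack((-dx, -dy, np.ones_like(height)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)  # normalize vectors
    return normal * 0.5 + 0.5  # remap [-1, 1] into texture range

bump = np.random.rand(512, 512)  # stand-in for the Roma bump texture
normal_map = height_to_normal(bump, strength=4.0)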

For Sherwood, the client's Corona material setup contained a lot of images composited with each other, in addition to lots of color corrections and custom falloffs. For the falloffs, we replicated this using material functions, which worked quite well. This finish had a complex suede-like appearance that blended into a glossier, smoother, rubber-like leather at particular angles. We came close, but the setup was just too custom and specific to Corona for us to replicate it exactly in Unreal.

Some limitations in Unreal, combined with other variables, prevented our march to 100% accuracy, but I don't think this missed match says anything about how capable Unreal is for other projects.

Without the requirement for absolute matching to the target references, the images would have worked fine in terms of quality.

It is highly likely that we will revisit this and continue to utilize Unreal for projects in the future.

Links

Below are links to the final outputs from this project at 4k resolution. A video showcasing the power of Unreal in this project will also be included here at a later date.


Final Images

Target Reference Images from RH





Steven Wilson