Playing with Maya: Rendering Tests

[Images: hill_model_test_geo; blessyou_backgrounmodelling_lighttest 1, 2, and 4]

Above are some sample tests of the lighting source that will be used in the animation. Some were unsuccessful, as they read as interior lighting rather than exterior. What I am trying to figure out is how the scene's objects will react to the light positions: where the shadows will fall within the scene, and how much light is actually needed. We want the staging to look believable, to explain the scene, and to light the characters well.

These images are not the actual staging; I have been collaborating with another team member who is setting up the staging within the scene.

Actual Stage Set Up Scene

[Images: stagelight_scene 1, 2, and 5]

I arranged the lighting of the scene by placing the main source of light behind the objects, which gives the scene a better sense of form; the side light still needs to be moved to match the position of the sun. We planned to have the light coming in from stage right. I like how these shadows and the light source are shaping the scene, though I would like to balance it a little better.

Notes: (info taken from What is Rendering?)

When an artist is working on a 3D scene, the models he manipulates are actually a mathematical representation of points and surfaces (more specifically, vertices and polygons) in three-dimensional space.

The term rendering refers to the calculations performed by a 3D software package’s render engine to translate the scene from a mathematical approximation into a finalized 2D image. During the process, the entire scene’s spatial, textural, and lighting information is combined to determine the colour value of each pixel in the flattened image.
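As a toy illustration of that per-pixel combination (a minimal sketch, nothing like Maya's actual engine; the function name and values are made up for this example), here is how a renderer might turn geometry (a surface normal) plus lighting (a light direction) into one pixel's colour using Lambert's cosine law:

```python
import math

def shade_pixel(normal, light_dir, base_colour):
    """Shade one pixel from a surface normal, a light direction,
    and a base colour, using simple Lambert (diffuse) shading."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    n = norm(normal)
    l = norm(light_dir)
    # Lambert's cosine law: brightness = max(0, N . L)
    brightness = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(round(c * brightness) for c in base_colour)

# A surface facing the light is fully lit...
print(shade_pixel((0, 0, 1), (0, 0, 1), (200, 180, 160)))   # (200, 180, 160)
# ...one facing away from it receives no direct light.
print(shade_pixel((0, 0, -1), (0, 0, 1), (200, 180, 160)))  # (0, 0, 0)
```

A real engine repeats a (much more sophisticated) version of this calculation for every pixel in the frame, which is why render time grows with resolution and scene complexity.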

Two Types of Rendering

1. Real-Time Rendering: Real-Time Rendering is used most prominently in gaming and interactive graphics, where images must be computed from 3D information at an incredibly rapid pace.

  • Interactivity: Because it is impossible to predict exactly how a player will interact with the game environment, images must be rendered in “real-time” as the action unfolds.
  • Speed Matters: In order for motion to appear fluid, a minimum of 18-20 frames per second must be rendered to the screen. Anything less than this and the action will appear choppy.
  • The Methods: Real-Time Rendering is drastically improved by dedicated graphics hardware (GPUs, Graphics Processing Units), and by pre-computing as much information as possible. A great deal of a game environment’s lighting information is pre-computed and “baked” directly into the environment’s texture files to improve render speed.
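To put those frame-rate numbers in concrete terms, a quick sketch of the per-frame time budget (just the arithmetic, not tied to any particular engine):

```python
def frame_budget_ms(fps):
    """How many milliseconds a renderer has to draw each frame
    at a given frame rate: 1 second divided by frames per second."""
    return 1000.0 / fps

for fps in (20, 30, 60):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 20 fps -> 50.0 ms per frame
# 30 fps -> 33.3 ms per frame
# 60 fps -> 16.7 ms per frame
```

So at the 18-20 fps minimum mentioned above, everything in the frame must be computed in roughly 50 milliseconds, which is why games lean so heavily on GPUs and baked lighting.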

2. Offline or Pre-Rendering: Offline rendering is used in situations where speed is less of an issue, with calculations typically performed using multi-core CPUs (Central Processing Units) rather than dedicated graphics hardware.

  • Predictability: Offline rendering is seen most frequently in animation and effects work, where visual complexity and photorealism are held to a much higher standard. Since there is no unpredictability as to what will appear in each frame, large studios have been known to dedicate up to 90 hours of render time to individual frames.
  • Rendering Techniques: There are three major computational techniques used for most rendering. Each has its own set of advantages and disadvantages, making all three viable options in certain situations.
  • Scanline (or rasterization): Scanline rendering is used when speed is a necessity, which makes it the technique of choice for real-time rendering and interactive graphics. Instead of rendering an image pixel by pixel, scanline renderers compute on a polygon-by-polygon basis. Scanline techniques used in conjunction with precomputed (baked) lighting can achieve speeds of 60 frames per second or better on a high-end graphics card.
  • Raytracing: In raytracing, for every pixel in the scene, one (or more) rays of light are traced from the camera to the nearest 3D object. The light ray is then passed through a set number of “bounces”, which can include reflections or refractions depending on the materials in the 3D scene. The colour of each pixel is then computed algorithmically based on these interactions. Raytracing is capable of greater photorealism than scanline, but is exponentially slower.
  • Radiosity: Unlike raytracing, radiosity is calculated independently of the camera, and is surface-oriented rather than computed pixel by pixel. The primary function of radiosity is to more accurately simulate surface colouring by accounting for indirect illumination (bounced diffuse light). Radiosity is typically characterized by soft graduated shadows and colour bleeding onto nearby surfaces.
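As a toy sketch of the raytracing idea above (a single-sphere tracer I wrote for illustration, nothing like a production renderer; every name and value here is invented for the example), the following Python casts one ray per pixel from a pinhole camera, tests it against a sphere, and shades hits with a simple diffuse rule:

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Return the nearest positive distance t where the ray
    origin + t*direction hits the sphere, or None if it misses.
    The direction is assumed to be unit length."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a = 1)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """ASCII-render one sphere: brighter characters = more light."""
    sphere_centre, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_dir = (0.577, 0.577, 0.577)  # normalized (1, 1, 1)
    ramp = ".:-=+*#@"
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a ray through a simple pinhole camera.
            u = (x + 0.5) / width * 2 - 1
            v = 1 - (y + 0.5) / height * 2
            d = (u, v, -1.0)
            length = math.sqrt(sum(c * c for c in d))
            d = tuple(c / length for c in d)
            t = ray_sphere((0.0, 0.0, 0.0), d, sphere_centre, sphere_radius)
            if t is None:
                row += " "  # ray missed: background
            else:
                # Shade the hit point with the Lambert rule: N . L.
                hit = tuple(t * c for c in d)
                n = tuple((h - s) / sphere_radius
                          for h, s in zip(hit, sphere_centre))
                shade = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
                row += ramp[min(len(ramp) - 1, int(shade * len(ramp)))]
        rows.append(row)
    return "\n".join(rows)

print(render(40, 20))
```

Even this tiny example shows why raytracing is slower than scanline: the per-pixel intersection-and-shading work grows with resolution and with every extra object and bounce, whereas a scanline renderer walks the polygon list once.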

In practice, these techniques are often used in conjunction with one another, using the advantages of each system to achieve impressive levels of photorealism.
