Wednesday, 19 June 2013

6. Constraints


Polygon Count and File Size
The two common measurements of an object's 'cost' or file size are the polygon count and the vertex count. For example, a game character may range anywhere from 200-300 polygons to 40,000+ polygons. A high-end third-person console or PC game may use many vertices or polygons per character, while an iOS tower-defence game might use very few.

Polygons Vs. Triangles
When a game artist talks about the poly count of a model, they really mean the triangle count. Games almost always use triangles, not polygons, because most modern graphics hardware is built to accelerate the rendering of triangles.
The polygon count reported in a modelling app is misleading, because a model's triangle count is always higher. It's usually best, therefore, to switch the polygon counter to a triangle counter in your modelling app, so you're using the same counting method as everyone else.
Polygons however do have a useful purpose in game development. A model made of mostly four-sided polygons (quads) will work well with edge-loop selection & transform methods that speed up modelling, make it easier to judge the "flow" of a model, and make it easier to weight a skinned model to its bones. Artists usually preserve these polygons in their models as long as possible. When a model is exported to a game engine, the polygons are all converted into triangles automatically. However different tools will create different triangle layouts within those polygons. A quad can end up either as a "ridge" or as a "valley" depending on how it's triangulated. Artists need to carefully examine a new model in the game engine to see if the triangle edges are turned the way they wish. If not, specific polygons can then be triangulated manually.
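The ridge-or-valley choice comes down to which diagonal a quad is split along. A rough Python sketch (the function name and vertex labels here are made up for illustration, not from any particular tool):

```python
def triangulate_quad(quad, diagonal=0):
    """Split a quad (v0, v1, v2, v3) into two triangles.

    diagonal=0 cuts across v0-v2, diagonal=1 cuts across v1-v3.
    On a non-planar quad the two choices give the "ridge" and
    "valley" shapes described above.
    """
    v0, v1, v2, v3 = quad
    if diagonal == 0:
        return [(v0, v1, v2), (v0, v2, v3)]
    return [(v0, v1, v3), (v1, v2, v3)]

print(triangulate_quad(("a", "b", "c", "d"), 0))
print(triangulate_quad(("a", "b", "c", "d"), 1))
```

Manually triangulating a polygon in a modelling app amounts to picking one of these two results yourself instead of letting the exporter pick for you.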

Triangle Count vs. Vertex Count
Vertex count is ultimately more important for performance and memory than the triangle count, but for historical reasons artists more commonly use triangle count as a performance measurement. On the most basic level, the triangle count and the vertex count can be similar if all the triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as a physical break in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks, so the model can be sent in renderable chunks to the graphics card.
Overuse of smoothing groups, over-splitting of UVs, too many material assignments (and too much misalignment between these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance, and it can also increase the memory cost for the mesh because there are more vertices to send and store.
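The counting rules above are easy to sketch in Python (the helper names are hypothetical, not from any real engine):

```python
def strip_vertex_count(triangles):
    # In a fully connected run of triangles, n triangles need
    # n + 2 vertices: 1 -> 3, 2 -> 4, 3 -> 5, 4 -> 6, and so on.
    return triangles + 2

def rendered_vertex_count(base_vertices, breaks):
    # Every vertex sitting on a UV seam, smoothing split or material
    # boundary is duplicated once per break before being sent to the
    # graphics card.
    return base_vertices + breaks

print(strip_vertex_count(4))           # 6
print(rendered_vertex_count(100, 25))  # 125
```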


Rendering Time
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialised, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited to either photo-realistic rendering or real-time rendering.

Real-time
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e. one frame. The primary goal is to achieve the highest possible degree of photorealism at an acceptable minimum rendering speed (usually 24 frames per second, as that is the minimum the human eye needs to see to successfully create the illusion of movement). In fact, renderers exploit the way the eye perceives the world: the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur, which attempt to reproduce visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.

Non Real-time
Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disc. These frames are displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

Reflection/Scattering - How light interacts with the surface at a given point
Shading - How material properties vary across the surface

5. 3D Development Software

3D Studio Max

Autodesk 3ds Max, formerly 3D Studio Max, is 3D computer graphics software for making 3D animations, models, and images. It was developed and produced by Autodesk Media and Entertainment. It has modelling capabilities, a flexible plugin architecture and can be used on the Microsoft Windows platform. It is frequently used by video game developers, TV commercial studios and architectural visualization studios. It is also used for movie effects and movie pre-visualization.



In addition to its modelling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.

Maya

Autodesk Maya, commonly shortened to Maya, is 3D computer graphics software that runs on Microsoft Windows, Mac OS and Linux, originally developed by Alias Systems Corporation (formerly Alias|Wavefront) and currently owned and developed by Autodesk, Inc. It is used to create interactive 3D applications, including video games, animated films, TV series, and visual effects. The product is named after the Sanskrit word maya, the Hindu concept of illusion.



Maya 1.0 was released in February 1998. Following a series of acquisitions, Maya was bought by Autodesk in 2005.[8][9] Under the name of the new parent company, Maya was renamed Autodesk Maya. However, the name "Maya" continues to be the dominant name used for the product.

LightWave


LightWave is a software package used for rendering 3D images, both animated and static. It includes a rendering engine that supports such advanced features as realistic reflection and refraction, radiosity, and caustics. The 3D modelling component supports both polygon modelling and subdivision surfaces. The animation component has features such as forward and inverse kinematics for character animation, particle systems and dynamics. Programmers can expand LightWave's capabilities using an included SDK which offers LScript scripting (a proprietary scripting language) and common C language interfaces.

Blender


Blender is a free and open-source 3D computer graphics software product used for creating animated films, visual effects, interactive 3D applications or video games. Blender's features include 3D modeling, UV unwrapping, texturing, rigging and skinning, fluid and smoke simulation, particle simulation, animating, match moving, camera tracking, rendering, video editing and compositing. It also features a built-in game engine.

Cinema 4D


CINEMA 4D is a 3D modelling, animation and rendering application developed by MAXON Computer GmbH of Friedrichsdorf, Germany. It is capable of procedural and polygonal/subdivision-surface modelling, animating, lighting, texturing and rendering, and offers the common features found in 3D modelling applications.

Four variants are currently available from MAXON: a core CINEMA 4D 'Prime' application, a 'Broadcast' version with additional motion-graphics features, 'Visualize', which adds functions for architectural design, and 'Studio', which includes all modules. CINEMA 4D runs on Windows and Macintosh computers.



Initially, CINEMA 4D was developed for Amiga computers in the early 1990s, and the first three versions of the program were available exclusively for that platform. With v4, however, MAXON began to develop the program for Windows and Macintosh computers as well, citing the wish to reach a wider audience and the growing instability of the Amiga market following Commodore's bankruptcy.

ZBrush


ZBrush is a digital sculpting tool that combines 3D/2.5D modeling, texturing and painting. It uses a proprietary "pixol" technology which stores lighting, colour, material, and depth information for all objects on the screen. The main difference between ZBrush and more traditional modelling packages is that it is more akin to sculpting.



ZBrush is used as a digital sculpting tool to create high-resolution models (up to ten million polygons) for use in movies, games, and animations. It is used by companies ranging from ILM to Electronic Arts. ZBrush uses dynamic levels of resolution to allow sculptors to make global or local changes to their models. ZBrush is most known for being able to sculpt medium to high frequency details that were traditionally painted in bump maps. The resulting mesh details can then be exported as normal maps to be used on a low poly version of that same model. They can also be exported as a displacement map, although in that case the lower poly version generally requires more resolution. Or, once completed, the 3D model can be projected to the background, becoming a 2.5D image (upon which further effects can be applied). Work can then begin on another 3D model which can be used in the same scene. This feature lets users work with extremely complicated scenes without heavy processor overhead.

Sketchup
SketchUp is a 3D modelling program for a broad range of applications such as architectural, civil, mechanical, film and video game design, and is available in free as well as 'professional' versions.

The program highlights its ease of use,[4] and an online repository of model assemblies (e.g., windows, doors, automobiles, entourage, etc.) known as 3D Warehouse enables designers to locate, download, use and contribute free models. The program includes a drawing layout functionality, allows surface rendering in variable "styles," accommodates third-party "plug-in" programs enabling other capabilities (e.g., near photo realistic rendering) and enables placement of its models within Google Earth.

File Formats
Each 3D application allows the user to save their work, both objects and scenes, in a proprietary file format and export in open formats.

A proprietary format is a file format where the mode of presentation of its data is the intellectual property of an individual or organisation which asserts ownership over the format. In contrast, a free format is a format that is either not recognised as intellectual property, or has had all claimants to its intellectual property release claims of ownership. Proprietary formats can be either open if they are published, or closed, if they are considered trade secrets. In contrast, a free format is never closed.
Proprietary formats are typically controlled by a private person or organization for the benefit of its applications, protected with patents or as trade secrets, and intended to give the license holder exclusive control of the technology to the (current or future) exclusion of others.

Examples of proprietary formats: AutoCAD (.dxf), 3D Studio Max (.3ds), Maya (.mb), LightWave (.lwo).
Examples of open formats: .obj and .dae.
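The open .obj format mentioned above is plain text, which is part of why it travels so well between packages. A minimal Python sketch that writes a single triangle in OBJ form ("v" lines are vertex positions, "f" lines are 1-based vertex indices per face):

```python
# A single triangle written out in Wavefront OBJ text form.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]  # OBJ face indices start at 1, not 0

lines = [f"v {x} {y} {z}" for x, y, z in vertices]
lines += ["f " + " ".join(str(i) for i in face) for face in faces]
obj_text = "\n".join(lines)
print(obj_text)
```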

4. Mesh Construction

Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools. A wide variety of 3d graphics software packages are available for use in constructing polygon meshes.
 

Box Modelling
 

One of the more popular methods of constructing meshes is box modelling, which uses two simple tools:
1. The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a square would be subdivided by adding one vertex in the center and one on each edge, creating four smaller squares.

2. The extrude tool is applied to a face or a group of faces. It creates a new face of the same size and shape which is connected to each of the existing edges by a face. Thus, performing the extrude operation on a square face would create a cube connected to the surface at the location of the face.
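The effect of the subdivide tool on a mesh's element counts can be sketched in Python (the formulas assume a pure quad mesh and are for illustration only):

```python
def subdivide_counts(faces, vertices, edges):
    # One uniform subdivision of a quad mesh: every face gains a
    # centre vertex, every edge gains a midpoint and splits in two,
    # and each quad becomes four smaller quads.
    return (faces * 4,
            vertices + edges + faces,
            edges * 2 + faces * 4)

# Subdividing a single square (1 face, 4 vertices, 4 edges)
# gives 4 squares, 9 vertices and 12 edges.
print(subdivide_counts(1, 4, 4))
```

This is why a few careless subdivisions can quadruple the poly count each time, which matters given the constraints discussed earlier.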

 


http://theorangeduck.com/page/subdivision-modelling

Extrusion Modelling
 

A second common modelling method is sometimes referred to as inflation modeling or extrusion modelling. In this method, the user creates a 2d shape which traces the outline of an object from a photograph or a drawing. The user then uses a second image of the subject from a different angle and extrudes the 2d shape into 3d, again following the shape’s outline. This method is especially common for creating faces and heads. In general, the artist will model half of the head and then duplicate the vertices, invert their location relative to some plane, and connect the two pieces together. This ensures that the model will be symmetrical.
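The duplicate-and-mirror step is simple to sketch in Python (assuming the half-model was built on one side of the x = 0 plane; the function name is made up for illustration):

```python
def mirror_x(vertices):
    """Duplicate a half-model's vertices across the x = 0 plane.

    A real tool would also merge the duplicated vertices that lie
    exactly on the mirror plane; this sketch skips that step.
    """
    mirrored = [(-x, y, z) for (x, y, z) in vertices]
    return vertices + mirrored

half = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
print(mirror_x(half))
```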

http://forum.thegamecreators.com/?m=forum_view&t=93981&b=3

Primitive Modelling


Another common method of creating a polygonal mesh is by connecting together various primitives, which are predefined polygonal meshes created by the modelling environment. Common primitives include:

Cubes, pyramids, cylinders, spheres, and 2D primitives such as squares, triangles, and disks.
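Under the hood a primitive is just a predefined vertex and face list. A cube, for example, could be sketched in Python as:

```python
# A unit cube primitive: 8 corner vertices and 6 quad faces
# (each face is a tuple of indices into cube_vertices).
cube_vertices = [(x, y, z)
                 for x in (0.0, 1.0)
                 for y in (0.0, 1.0)
                 for z in (0.0, 1.0)]
cube_faces = [
    (0, 1, 3, 2), (4, 6, 7, 5),  # x = 0 and x = 1 sides
    (0, 4, 5, 1), (2, 3, 7, 6),  # y = 0 and y = 1 sides
    (0, 2, 6, 4), (1, 5, 7, 3),  # z = 0 and z = 1 sides
]
print(len(cube_vertices), len(cube_faces))  # 8 6
```

A modelling environment generates exactly this sort of data for you when you drop a primitive into the scene.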


Specialised Modelling

 

Finally, some specialised methods of constructing high- or low-detail meshes exist. Sketch-based modelling is a user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create high-detail meshes based on existing real-world objects in an almost automatic way. These devices are very expensive and are generally only used by researchers and industry professionals, but they can generate highly accurate, sub-millimetre digital representations.



http://banburywalker.com/labs/labs/scan-3d-test/


http://en.wikipedia.org/wiki/Polygonal_modeling

3. Geometric Theory


Geometric theory describes how a 3D image is created using an extra axis. In 2D, images are created by connecting points on the x and y axes.
Cartesian Coordinates System

3D programs work using a grid of 3D co-ordinates. 3D co-ordinates are the same as 2D co-ordinates, but they add a z axis, known as the "depth" axis.
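Turning those 3D co-ordinates back into a 2D image is done by projecting away the depth axis. A minimal perspective projection in Python (focal_length is a made-up camera parameter for this sketch):

```python
def project(point, focal_length=1.0):
    """Perspective-project a 3D point onto a 2D image plane.

    Dividing by z (the depth axis) makes distant points shrink
    towards the centre of the image.
    """
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

print(project((2.0, 1.0, 4.0)))  # (0.5, 0.25)
```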

2. Displaying 3D Polygon Animations

API

API is an abbreviation of application programming interface: the set of tools and protocols that lets programs communicate with the operating system and with each other, and is basically what makes a program work.
APIs are used in Microsoft Windows to allow developers to create applications, and they ultimately make learning the basics of the software easier for the user.


Direct3D
Direct3D is an API created by Microsoft to allow the user to produce 3D programs that can use whatever graphics-acceleration device is installed in the computer.

Graphics Pipeline

The graphics pipeline, more commonly known as the rendering pipeline, refers to the rasterisation-based rendering of a 3D scene, transforming it into a 2D image.

Clipping
Geometric primitives that fall completely outside of the viewing frustum will not be visible, so they are discarded at this stage.
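A rough Python sketch of that discard step, testing vertices in clip space, where a point (x, y, z, w) is inside the frustum when -w <= x, y, z <= w (the function names here are made up for illustration):

```python
def inside_frustum(vertex):
    # In clip space a vertex (x, y, z, w) lies inside the viewing
    # frustum when each of x, y and z is within [-w, w].
    x, y, z, w = vertex
    return -w <= x <= w and -w <= y <= w and -w <= z <= w

def discard_invisible(triangles):
    # Keep any triangle with at least one vertex inside; triangles
    # that are only partly inside would then be clipped exactly.
    return [t for t in triangles if any(inside_frustum(v) for v in t)]

inside = ((0.0, 0.0, 0.0, 1.0), (0.5, 0.0, 0.5, 1.0), (0.0, 0.5, 0.5, 1.0))
outside = ((3.0, 3.0, 3.0, 1.0), (4.0, 3.0, 3.0, 1.0), (3.0, 4.0, 3.0, 1.0))
print(len(discard_invisible([inside, outside])))  # 1
```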


1. Application of 3D

3D in games
3D in gaming was first introduced in a game called 3D Monster Maze, created in 1981 by J. K. Greye and Malcolm Evans.
The game was developed for the Sinclair ZX81 and released by J. K. Greye Software in 1982, then re-released by Malcolm Evans's own company later the same year.
Ultimately this was the game that brought 3D gaming to the public and into their own homes.
The game itself puts the player in a randomly generated maze with one exit and a monster trying to catch them before they escape. This game has been re-invented multiple times, most recently as the popular game Slender, but of course 3D in gaming has evolved since 1981.

Evolution of 3D in gaming
3D in gaming has evolved dramatically since 3D Monster Maze. The first major step was Virtua Racing, or V.R. for short, created in 1992 by Sega and brought to the Sega Mega Drive/Genesis in 1994; the game was later released for the Sega Saturn and the PlayStation 2 in disc form.
Virtua Racing allowed the player to race on a completely 3D course, which was revolutionary for its time; it was even said to be the most impressive 16-bit game ever created from a technical standpoint.
With home gaming being all the rage back then, the game cost a huge £70 in the UK and $100 in America.
Current gaming
Hyper-realistic 3D technology is now a standard element in games, but companies and developers still strive to improve.
A fairly new element has been introduced to current gaming that has also been used in films like Avatar and Tron: motion capture.
Motion capture uses actors to perform an action and records their movement; it has been used in games like the Batman: Arkham series, Beyond: Two Souls, Tron and DC's Injustice.

3D in Animation
3D animation was first developed in 1972 at the University of Utah by Ed Catmull, who went on to co-found Pixar and is now the president of Walt Disney and Pixar Animation Studios.



In 1988 Pixar decided to create a short film called Tin Toy using computer animation.
The short film was directed by John Lasseter and ran for only about five minutes; it features a small tin toy trying to escape the grasp of a baby. John pitched the idea by storyboard to the company's then owner, Steve Jobs, who decided to fund it, and it was then officially used as a test for the PhotoRealistic RenderMan software. It also forged the partnership between Pixar and Disney, and it was the first animation to win Pixar an Oscar, in 1988.
Little did Pixar know this would also influence some of their greatest films, such as Toy Story 1, 2 and 3, Monsters, Inc. and even Up.
 Accessing the Technology
Although cel and stop-motion animation are still very popular, high-end 3D software is now fairly affordable for the average animator, allowing them to set up a company from scratch and start working for a larger audience; for example, they may further their careers by creating children's animations or even making films like Toy Story, Cloudy with a Chance of Meatballs and WALL-E.
Techniques
There are many different techniques in 3D animation; these are a few of the main ones.



3D in film and TV
First 3D animation in film
The first 3D animation used in a film was the 3D hand and face created by Ed Catmull; coincidentally, this was also among the first computer-generated animation to appear in a feature film. The film, Futureworld, was released in 1976.



Evolution of 3d animation in films
3D animation in films today has excelled dramatically with the use of motion capture and facial tracking. The first major improvement in 3D animation in films was in Jurassic Park, made in 1993 by Universal.


3D in education
3D in education uses programs such as Gaia 3D, which re-creates multiple scenes from history, giving the user a tour of ancient temples; it also helps as a visual aid for students in biology, as it can re-create a 3D model of a heart or lung.


3D in architecture
3D modelling in architecture is used to create a building virtually before it is physically made.
This is used as a visual aid.

3D in Engineering
As in architecture, 3D modelling in engineering is used to create a design virtually and to check that everything fits together and works the way it should.


3D in medicine
3D animation is used in medicine for training, as a visual representation of the subject.


3D in meteorology
3D animation in meteorology is used to simulate what might happen in extreme weather conditions.

3D in product design
3D modelling is used in product design to give an investor an idea of what the product is.