Tuesday, April 24, 2007

Engine Design - RCModel

Now this class is probably the biggest deviation from the tutorial. I have added a number of fields and properties for my shaders, from the basic AmbientLightColor to an environment cube map, bump maps and height maps. I don't see any point in documenting those as they are pretty much run of the mill and are required by the shaders you use, so I am going to document the fundamental differences here.

New Fields

public bool UseBasicRender;
private float tick;
private float animSpeed;

UseBasicRender
This field (it should really be a property) exists because, with Mike's original render method, shaders using reflective cube mapping did not draw correctly. I don't know why, they just didn't work, so I had to fall back to a more basic draw method. If this field is set to true then this basic method is used:


for (int msh = 0; msh < myModel.Meshes.Count; msh++)
{
    ModelMesh mesh = myModel.Meshes[msh];
    for (int prt = 0; prt < mesh.MeshParts.Count; prt++)
        mesh.MeshParts[prt].Effect = shader.Effect;
    mesh.Draw();
}



tick & animSpeed
This is how we decide how fast we want our shader animation to run. For example, the Microsoft HLSL sample ripples the mesh at a given speed. The speed is set externally in a property called AnimationSpeed; this sets the animSpeed field, which is then passed to the shader.
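Roughly, the idea looks like this (a minimal sketch, not the exact engine code; the shader parameter name "time" is an assumption):


public float AnimationSpeed
{
    get { return animSpeed; }
    set { animSpeed = value; }
}

// Called once per frame before the model is drawn (sketch only).
private void UpdateShaderTime(Effect effect)
{
    tick += animSpeed;                       // advance the animation clock

    if (effect.Parameters["time"] != null)   // parameter name "time" is an assumption
        effect.Parameters["time"].SetValue(tick);
}
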

New Properties

public BoundingSphere ObjectBoundingSphere
I added this property so I could get a bounding sphere that is not always positioned at 0,0,0.


public BoundingSphere ObjectBoundingSphere
{
    get
    {
        BoundingSphere bs = new BoundingSphere(myModel.Meshes[0].BoundingSphere.Center + myPosition, myModel.Meshes[0].BoundingSphere.Radius);
        return bs;
    }
}



New Methods

protected override void DrawBounds(GraphicsDevice myDevice, Color col)
This method draws the bounding boxes for the mesh.


protected override void DrawBounds(GraphicsDevice myDevice, Color col)
{
    List<BoundingBox> bounds = (List<BoundingBox>)((object[])myModel.Tag)[0];
    for (int b = 0; b < bounds.Count; b++)
    {
        myBounds = bounds[b];
        BuildBoxCorners();

        myDevice.DrawUserIndexedPrimitives<VertexPositionColor>(PrimitiveType.LineList, points, 0, 8, index, 0, 12);
    }
}



public void UpdateAnimation()
This was added recently to update the bones of an animated mesh. This method is called at the top of the RenderChildren method.


public void UpdateAnimation()
{
    if (animationData != null)
    {
        Matrix[] m_bonetrans = new Matrix[animationData.m_bones.Count];

        currentAnimationFrame++;
        if (currentAnimationFrame >= animationData.m_timeframes.Count)
            currentAnimationFrame = 0;

        int i = 0;
        foreach (TimeFrame tf in animationData.m_timeframes.Values)
        {
            if (i == currentAnimationFrame)
            {
                for (int j = 0; j < tf.m_transforms.Count; j++)
                {
                    m_bonetrans[tf.m_transforms[j].m_boneindex] = tf.m_transforms[j].m_transform;
                }
                break;
            }
            i++;
        }
        Effect effect = RCShaderManager.GetShader(myShader).Effect;
        if(effect.Parameters["mWorldMatrixArray"] != null)
            effect.Parameters["mWorldMatrixArray"].SetValue(m_bonetrans);

        if (effect.Parameters["Bones"] != null)
            effect.Parameters["Bones"].SetValue(m_bonetrans);
    }
}



And finally, the modified public void SetMaterialProperties(), which now takes Semantics and Annotations into account.


public void SetMaterialProperties()
{
    Effect effect = RCShaderManager.GetShader(myShader).Effect;

    for (int parm = 0; parm < effect.Parameters.Count; parm++)
    {
        string paramSemantic = effect.Parameters[parm].Semantic;

        if (paramSemantic != null)
        {
            switch (paramSemantic.ToLower())
            {
                case "position": // Presume it's a light
                    for (int note = 0; note < effect.Parameters[parm].Annotations.Count; note++)
                    {
                        EffectAnnotation desc = effect.Parameters[parm].Annotations[note];

                        switch (desc.ParameterType)
                        {
                            case EffectParameterType.String:
                            switch (desc.ParameterClass)
                            {
                                case EffectParameterClass.Object:
                                    switch (desc.GetValueString().ToLower())
                                    {
                                        case "lightfromsky":
                                            effect.Parameters[parm].SetValue(myLightFromSky);
                                            break;
                                        case "directionallight":
                                        case "pointlight":
                                            effect.Parameters[parm].SetValue(myLightPosition);
                                            break;
                                    }
                                    break;
                            }
                            break;
                        }
                    }
                    break;
                case "ambient":
                    switch (effect.Parameters[parm].ParameterClass)
                    {
                        case EffectParameterClass.Vector: // Color
                            if (effect.Parameters[parm].Name.ToLower().IndexOf("light") != -1)
                                effect.Parameters[parm].SetValue(myLightAmbientIntensity);
                            else
                                effect.Parameters[parm].SetValue(myAmbientLightColor);
                            break;
                    }
                    break;
                case "diffuse":
                    switch (effect.Parameters[parm].ParameterClass)
                    {
                        case EffectParameterClass.Vector: // Color
                            if (effect.Parameters[parm].Name.ToLower().IndexOf("light") != -1)
                                effect.Parameters[parm].SetValue(myLightDiffuseColor);
                            else
                            {
                                bool loaded = false;
                                for (int an = 0; an < effect.Parameters[parm].Annotations.Count; an++)
                                {
                                    if (effect.Parameters[parm].Annotations[an].Name == "UIName")
                                    {
                                        switch (effect.Parameters[parm].Annotations[an].GetValueString().ToLower())
                                        {
                                            case "groundcolor":
                                                loaded = true;
                                                effect.Parameters[parm].SetValue(myGroundColor);
                                                break;
                                            case "skycolor":
                                                loaded = true;
                                                effect.Parameters[parm].SetValue(mySkyColor);
                                                break;
                                        }
                                        break;
                                    }
                                }
                                if (!loaded)
                                    effect.Parameters[parm].SetValue(myDiffuseColor);
                            }
                            break;
                    }
                    break;
                case "specular":
                    switch (effect.Parameters[parm].ParameterClass)
                    {
                        case EffectParameterClass.Vector: // Color
                            if (effect.Parameters[parm].Name.ToLower().IndexOf("light") != -1)
                                effect.Parameters[parm].SetValue(myLightSpecularColor);
                            else
                                effect.Parameters[parm].SetValue(mySpecularColor);
                            break;
                        case EffectParameterClass.Scalar: // Value
                            effect.Parameters[parm].SetValue(mySpecularPower);
                            break;
                    }
                    break;
                case "environment":
                    if (myCube != null)
                        effect.Parameters[parm].SetValue(myCube);
                    else if (myColorMap != null)
                        effect.Parameters[parm].SetValue(myColorMap);
                    break;
                case "rcmaterialparameter":
                    switch (effect.Parameters[parm].ParameterClass)
                    {
                        case EffectParameterClass.Vector:
                            for (int an = 0; an < effect.Parameters[parm].Annotations.Count; an++)
                            {
                                if (effect.Parameters[parm].Annotations[an].Name == "UIName")
                                {
                                    switch (effect.Parameters[parm].Annotations[an].GetValueString().ToLower())
                                    {
                                        case "etas":
                                            effect.Parameters[parm].SetValue(new Vector3(0.80f, 0.82f, 0.84f));
                                            break;
                                    }
                                    break;
                                }
                            }
                            break;
                        case EffectParameterClass.Scalar:
                            for (int an = 0; an < effect.Parameters[parm].Annotations.Count; an++)
                            {
                                if (effect.Parameters[parm].Annotations[an].Name == "UIName")
                                {
                                    switch (effect.Parameters[parm].Annotations[an].GetValueString().ToLower())
                                    {
                                        case "reflectivestrength":
                                            effect.Parameters[parm].SetValue(1.0f);
                                            break;
                                        case "refractstrength":
                                            effect.Parameters[parm].SetValue(1.0f);
                                            break;
                                    }
                                    break;
                                }
                            }
                            break;
                        case EffectParameterClass.Object:
                            for (int an = 0; an < effect.Parameters[parm].Annotations.Count; an++)
                            {
                                if (effect.Parameters[parm].Annotations[an].Name == "UIName")
                                {
                                    switch (effect.Parameters[parm].Annotations[an].GetValueString().ToLower())
                                    {
                                        case "colormap":
                                            effect.Parameters[parm].SetValue(myColorMap);
                                            break;
                                        case "bumpmap":
                                            effect.Parameters[parm].SetValue(myBumpMap);
                                            break;
                                        case "heightmap":
                                            effect.Parameters[parm].SetValue(myHeightMap);
                                            break;
                                    }
                                    break;
                                }
                            }
                            break;
                    }
                    break;
                default:
                    break;
            }
        }
    }
}

Sunday, April 22, 2007

New Engine Component - Fog

Well, I have been a little busy over the last few weeks, but I have managed to get some time to play with XNA and have added fog to my engine.

I thought I would post a screen shot or two of it in action. I have posted my method on the HM forum and will go into more detail here once I have finished documenting my engine.

Here is the link to the forum post: FOG. Alas, if you are not a member of the Hazy Mind forum you will not be able to view it, but it costs you nothing to register.

Starting at the top of a mountain:

[screenshot]

Moving down the mountain through the fog:

[screenshot]

Get to the water's edge:

[screenshot]

Then look back up the mountain:

[screenshot]
So that's fog in XNA (Windows only, I am afraid). This also shows a bit of the terrain and water objects, all yet to be documented here, if I ever get the time...
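
My fog method itself is the one described in that forum post, so I won't repeat it here yet, but purely for reference a basic fixed-function fog setup in XNA looks roughly like this (a generic sketch only, not necessarily the engine's approach; the distances and colour are made-up values):


// Generic fixed-function fog sketch (Windows only) - not the engine's actual method.
myDevice.RenderState.FogEnable = true;
myDevice.RenderState.FogColor = Color.LightGray;      // colour the scene fades out to
myDevice.RenderState.FogTableMode = FogMode.Linear;   // per-pixel linear fog
myDevice.RenderState.FogStart = 50.0f;                // distance at which fog starts
myDevice.RenderState.FogEnd = 400.0f;                 // fully fogged beyond this distance
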

Sunday, April 15, 2007

Engine Design - Shader, Textured Quad and Camera Objects

RCShader & RCShaderManager
I will start with the shader classes. The ShaderManager is as the tutorial has it, but with the Hashtable replaced by a Dictionary.

Basically I have redefined the myShaders container and added a variable to manage the index like this:

private static Dictionary<string, RCShader> myShaders = new Dictionary<string,RCShader>();
private static NameValueCollection keys = new NameValueCollection();



Altered the AddShader method like this:

public static void AddShader(RCShader newShader,string shaderLabel)
{
    myShaders.Add(shaderLabel, newShader);
    keys.Add((myShaders.Count - 1).ToString(), shaderLabel);
}



And replaced the foreach loop with a for loop in the LoadGraphicsContent method, like this:

public static void LoadGraphicsContent(GraphicsDevice myDevice,ContentManager myLoader)
{
    for (int sh = 0; sh < myShaders.Count; sh++)
        myShaders[keys[sh.ToString()]].LoadGraphicsContent(
                    myDevice, myLoader);
}



The Shader class itself, however, has been reworked to use semantics. The class has stayed essentially the same, but the SetParameters method has been totally redone.


public void SetParameters(RCObject myObject)
{
    Matrix World = Matrix.Identity;
    Matrix View = Matrix.Identity;
    Matrix Projection = Matrix.Identity;
    Matrix WVP = Matrix.Identity;
    Matrix WorldView = Matrix.Identity;
    Matrix ViewProjection = Matrix.Identity;

    if (!myObject.AlwaysFacingCamera)
    {
        if (myObject.UseLeftHandedWorldCalc)
        {
            World = Matrix.CreateScale(myObject.Scaling) *
                Matrix.CreateTranslation(myObject.Position) *
                Matrix.CreateFromQuaternion(myObject.Rotation);
        }
        else
        {
            World = Matrix.CreateScale(myObject.Scaling) *
                Matrix.CreateFromQuaternion(myObject.Rotation) *
                Matrix.CreateTranslation(myObject.Position);
        }
    }
    else
    {
        World = Matrix.CreateScale(myObject.Scaling) *
            Matrix.CreateFromQuaternion(RCCameraManager.ActiveCamera.Rotation * -1) *
            Matrix.CreateTranslation(myObject.Position);
    }

    if (RCHelpers.RCHelper.UseRefelctionViewMatrix)
        View = RCHelpers.RCHelper.reflectionViewMatrix;
    else
        View = RCCameraManager.ActiveCamera.View;

    Projection = RCCameraManager.ActiveCamera.Projection;

    ViewProjection = View * Projection;
    WorldView = World * View;
    WVP = World * View * Projection;

    for (int parm = 0; parm < myEffect.Parameters.Count; parm++)
    {
        string paramSemantic = "";
        paramSemantic = myEffect.Parameters[parm].Semantic;
        if (paramSemantic != null)
        {
            switch (paramSemantic.ToLower())
            {
                case "worldviewprojection":
                    myEffect.Parameters[parm].SetValue(WVP);
                    break;
                case "world":
                    myEffect.Parameters[parm].SetValue(World);
                    break;
                case "view":
                    myEffect.Parameters[parm].SetValue(View);
                    break;
                case "projection":
                    myEffect.Parameters[parm].SetValue(Projection);
                    break;
                case "cameraposition":
                    myEffect.Parameters[parm].SetValue(RCCameraManager.ActiveCamera.Position);
                    break;
                case "worldinversetranspose":
                    myEffect.Parameters[parm].SetValue(Matrix.Transpose(Matrix.Invert(World)));
                    break;
                case "worldinverse":
                    myEffect.Parameters[parm].SetValue(Matrix.Invert(World));
                    break;
                case "worldview":
                    myEffect.Parameters[parm].SetValue(WorldView);
                    break;
                case "viewprojection":
                    myEffect.Parameters[parm].SetValue(ViewProjection);
                    break;
                case "viewinverse":
                    myEffect.Parameters[parm].SetValue(Matrix.Invert(View));
                    break;
            }
        }
    }
}


You will see above three ways of creating the World matrix. One is the one given in the tutorial, Scale * Rotation * Translation. Another is used when the UseLeftHandedWorldCalc property of myObject is set to true. I put this in because I was having issues with a few shaders I found: models were not rotating but revolving around their position when I called the rotate method on them. I thought this was due to the shaders having been written for a left-handed system (XNA is right-handed), and being new to all this 3D stuff I could not alter the shaders to behave properly, so I altered the world calculation instead; the Scale * Translation * Rotation order seemed to do the trick. If I ever find a better solution, or work out why those shaders behave like this, I will post the fix here. The third method is only really used by my billboard class to keep the textured quad facing the camera.

RCTexturedQuad & RC2SidedTexturdeQuad
With the base Textured Quad code I have just added the ability to allow for alpha checking (image transparency). I have done this by adding a field and associated property called alphaCheck. If it is set to true, the Render method of the textured quad sets the render state to use the alpha test.

This code is called before the Vertex declaration:

bool alphaTest = myDevice.RenderState.AlphaTestEnable;
bool alphaBlend = myDevice.RenderState.AlphaBlendEnable;
CompareFunction alphaFunc = myDevice.RenderState.AlphaFunction;

if (alphaCheck)
{
    if (myDevice.RenderState.AlphaTestEnable != true)
        myDevice.RenderState.AlphaTestEnable = true;
    if (myDevice.RenderState.AlphaBlendEnable != true)
        myDevice.RenderState.AlphaBlendEnable = true;
    if (myDevice.RenderState.AlphaFunction != CompareFunction.NotEqual)
        myDevice.RenderState.AlphaFunction = CompareFunction.NotEqual;
}



and this after to put the states back as they were if needed:

if (alphaCheck)
{
    if (myDevice.RenderState.AlphaTestEnable != alphaTest)
        myDevice.RenderState.AlphaTestEnable = alphaTest;
    if (myDevice.RenderState.AlphaBlendEnable != alphaBlend)
        myDevice.RenderState.AlphaBlendEnable = alphaBlend;
    if (myDevice.RenderState.AlphaFunction != alphaFunc)
        myDevice.RenderState.AlphaFunction = alphaFunc;
}



The bounding box for the Textured Quad is managed like this in the Render method:

myBounds = new BoundingBox(myPosition - (myScaling / 2), myScaling / 2);



I have also added a double sided textured quad; this is basically two textured quads back to back, displaying the same image.
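
A rough idea of how that might look (frontQuad and backQuad are hypothetical names, not the actual fields in RC2SidedTexturdeQuad):


// Sketch only: the second quad is spun 180 degrees about the up axis once, when it is created,
// so the pair sit back to back and face opposite directions.
backQuad.Rotate(Vector3.Up, MathHelper.Pi);

// ...then each frame both quads are rendered at the same position.
frontQuad.Position = myPosition;
backQuad.Position = myPosition;
frontQuad.Render(myDevice);
backQuad.Render(myDevice);
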

RCCamera & RCCameraManager
Again there is very little deviation here. I have added an enum to manage the different camera types I want to use, and added a field with an associated property of this type to the RCCamera class.


public enum CameraViewType
{
    Floating,
    ThirdPerson,
    POV
}



This tells me what kind of movement the camera should have and is managed in the calling assembly.
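For example, the calling game might switch on it in its update loop; a minimal sketch (the ViewType property name and playerPosition are assumptions, not the engine's actual members):


switch (RCCameraManager.ActiveCamera.ViewType)   // ViewType is an assumed property name
{
    case CameraViewType.Floating:
        // free camera: driven directly by keyboard/mouse input
        break;
    case CameraViewType.ThirdPerson:
        // follow the player from behind and above (playerPosition is hypothetical)
        RCCameraManager.ActiveCamera.Position = playerPosition + new Vector3(0, 5, -10);
        break;
    case CameraViewType.POV:
        // first person: sit the camera at the player's eye level
        RCCameraManager.ActiveCamera.Position = playerPosition + new Vector3(0, 1.8f, 0);
        break;
}
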

Sunday, April 08, 2007

Engine Design - Interfaces and RCScene

The interfaces are pretty much as the tutorial has them. I have removed the set methods from the IRCHasMaterial interface as I didn't think they were of much use; I prefer setting fields in my objects with properties rather than methods. I don't think there is any performance issue with it, it is just a personal preference.

Interfaces
interface IRCObjectComponent
interface IRCHasMaterial : IRCObjectComponent
interface IRCChildRenderer : IRCObjectComponent
interface IRCLoadable : IRCObjectComponent
interface IRCRenderable : IRCObjectComponent

IRCObjectComponent has no methods associated with it, yet.

IRCHasMaterial has a single method:


void SetMaterialProperties();



The method is used to invoke the setting up of shader parameters from the object's material fields. Things like the material ambient color and material diffuse color should be set using this method.

IRCChildRenderer has a single method:


void RenderChildren(GraphicsDevice myDevice);



The method should be used to call an object's child render methods. For example, one of my particle emitters has an array of RCBillboard objects; this class implements IRCChildRenderer as it needs to render each billboard in the array.
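
A minimal sketch of such an implementation (the myBillboards field name is an assumption, not the engine's actual member):


public void RenderChildren(GraphicsDevice myDevice)
{
    // Render each billboard owned by this emitter (myBillboards is a hypothetical field name).
    for (int b = 0; b < myBillboards.Length; b++)
        myBillboards[b].Render(myDevice);
}
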

IRCLoadable has a single method:


void LoadGraphicsContent(GraphicsDevice myDevice,     ContentManager myLoader);



This method is used to load any graphical content ready to be drawn. For example, an object whose shader uses an environment cube map would use this to load the cube map asset into memory ready for use.
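
For instance, something along these lines (a sketch only; the asset name is made up for illustration):


public void LoadGraphicsContent(GraphicsDevice myDevice, ContentManager myLoader)
{
    // Load the cube map the environment-mapping shader needs ("Content/SkyCubeMap" is a made-up asset name).
    myCube = myLoader.Load<TextureCube>("Content/SkyCubeMap");
}
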

IRCRenderable has a single method:


void Render(GraphicsDevice myDevice);



Objects that need to be rendered should use this method, so that is probably 99% of all objects in the engine!

The Scene Objects
I have added a new class called RCSceneLoader, which can load and save scenes to XML. I implemented this class ages ago and it is now totally out of date, so I will leave it for another post once I have brought it up to date with the engine as it stands now.

The only real change to the RCSceneGraph class is the addition of one method, GetObject. This method is passed the name of the object you want to retrieve from the scene graph and returns the RCObjectNode containing that object, which you can then cast to its expected type and use.


public RCObjectNode GetObject(string Name)
{
    for (int on = 0; on < myRoot.Nodes.Count; on++)
    {
        if (((RCObjectNode)myRoot.Nodes[on]).Name == Name)
            return (RCObjectNode)myRoot.Nodes[on];
    }
    return null;
}



Here is an example of it being used:

RCTerrain terrain = (RCTerrain)game.Scene.GetObject("terrain").Object;



RCScenePicker
This class uses ray picking to select an object from the scene with the mouse.


public class RCScenePicker
{
    private static Ray myRay;
    private static ArrayList RayHitList;

    public static Ray RayPicker
    {
        get { return myRay; }
        set { myRay = value; }
    }

    private RCScenePicker() { }

    public static RCObject GetClickedModel(Point mousecoords,
                RCSceneGraph Scene)
    {
        RCCamera camera = RCCameraManager.ActiveCamera;

        Vector3 nearSource = camera.Viewport.Unproject(
            new Vector3(mousecoords.X, mousecoords.Y,
            camera.Viewport.MinDepth), camera.Projection, camera.View,
            Matrix.Identity);

        Vector3 farSource = camera.Viewport.Unproject(
            new Vector3(mousecoords.X, mousecoords.Y,
            camera.Viewport.MaxDepth), camera.Projection, camera.View,
            Matrix.Identity);

        Vector3 direction = farSource - nearSource;

        direction.Normalize();

        myRay = new Ray(nearSource, direction);
        return RayIntersects(Scene);
    }

    private static RCObject RayIntersects(RCSceneGraph Scene)
    {
        RCObject retVal = null;
        RayHitList = new ArrayList();

        for (int n = 0; n < Scene.SceneRoot.Nodes.Count; n++)
        {
            RCObject obj =
                (RCObject)(((RCObjectNode)
                    Scene.SceneRoot.Nodes[n]).Object);
            BoundingBox bb = obj.ObjectsBoundingBox;
            Nullable<float> distance;
            myRay.Intersects(ref bb,out distance);
            if (distance != null)
            {
                object[] thisObj = new object[2];
                thisObj[0] = (int)distance;
                thisObj[1] = obj;

                RayHitList.Add(thisObj);
            }
        }

        // Now get the object nearest the camera.
        object[] lastDist = new object[] {(int)
            RCCameraManager.ActiveCamera.Viewport.MaxDepth,
            new RCObject("tmp")};

        for (int o = 0; o < RayHitList.Count; o++)
        {
            if((int)((object[])RayHitList[o])[0] < (int)lastDist[0])
                lastDist = ((object[])RayHitList[o]);
        }

        if(RayHitList.Count > 0)
            retVal = (RCObject)lastDist[1];

        return retVal;
    }
}
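

A quick example of how it might be called from the game (the mouse handling and selectedObject are illustrative, not engine code):


// Illustrative only: pick whatever is under the mouse when the left button is pressed.
MouseState ms = Mouse.GetState();
if (ms.LeftButton == ButtonState.Pressed)
{
    RCObject picked = RCScenePicker.GetClickedModel(new Point(ms.X, ms.Y), game.Scene);
    if (picked != null)
        selectedObject = picked;   // selectedObject is a hypothetical field in the game
}
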



In my next post I will be documenting the changes I have made to the objects in Tutorial 3, the Shader and ShaderManager class, the TexturedQuad, the Camera and CameraManager class.

Engine Design - RCObject

As mentioned before my engine is based on the HazyMind XNA tutorials. I think Mike's idea for the tutorial was to give people a sense of how to go about creating an engine in XNA. The tutorials do not necessarily tell you how you should do it but rather point you in the right direction.

Tutorial 2, Scene Graph and Object Framework, is where the deviation really starts. Mike starts by creating a basic Object class used to render our 3D models, along with some interfaces and a Scene object. The basic mechanics of this are pretty much unchanged (as of the time of writing).

I have extended this basic object in a number of ways:

Fields
protected VertexPositionColor[] points;
protected short[] index;
protected BoundingBox myBounds;
protected Color myBoundsColor;
protected bool alwaysFacingCamera;
private bool myUseLeftHandedWorldCalc;
private bool myVisible;
public bool WireFrame;
protected bool myDrawBounds;
private string myName;

These fields all have a respective property, with the exception of WireFrame (it should have one, I just have not got round to doing it yet).

points & index
These are an array of VertexPositionColor and an array of shorts used for drawing the object's bounding box. In the tutorial an array of type int is used; I have changed this to short as it allows quads to be rendered on my laptop. For some reason, if the index was declared as an int array only half of the textured quads would render.

protected BoundingBox myBounds;
This field holds the object's bounding box. Its accessor property is a little different: rather than just returning the field myBounds, the property returns the calculated bounding box offset by the object's current position.


public BoundingBox ObjectsBoundingBox
{
    get
    {
        BoundingBox bb = new BoundingBox(myBounds.Min + myPosition, myBounds.Max + myPosition);
        return bb;
    }
}



protected Color myBoundsColor;
This field simply holds the color to be used to draw the bounding box.

protected bool alwaysFacingCamera;
This field lets me tell the shader to build the World matrix for this object so that it is always facing the camera. This is really only of any use with textured quads.

private bool myUseLeftHandedWorldCalc;
Now this is a bit of a bodge to fix an issue I have with the odd shader or two. I found that when rotating objects with certain shaders the object would revolve around its position rather than rotate, so this flag forces the shader to generate the World matrix in such a way as to compensate for this odd behaviour. I will get to the bottom of why this happens, but as yet I have no fix for it other than this.

private bool myVisible;
This field basically causes the object not to be drawn if set to false.

public bool WireFrame;
This field tells the draw method to use wire frame.

protected bool myDrawBounds;
If this field is set to true then (if possible) the object's bounding box is drawn.

private string myName;
Now I added this field so I can pull objects out of the scene by name.

Methods
public virtual void Rotate(Vector3 axis, float angle)
public void Translate(Vector3 distance)
protected void BuildBoxCorners()
protected virtual void DrawBounds(GraphicsDevice myDevice,Color col)
public Point Get2DCoords(GameWindow window)

public virtual void Rotate(Vector3 axis, float angle)
This has been taken from the Camera class; it allows me to rotate the models.
public void Translate(Vector3 distance)
Again, this method was taken from the Camera class, to allow translation of models; see the sketch below.
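
Neither body is shown here, but roughly they look like this (a sketch based on the camera-class versions, not necessarily the exact engine code; myRotation is an assumed field name):


public virtual void Rotate(Vector3 axis, float angle)
{
    // Rotate the supplied axis into the object's current orientation, then apply the new rotation.
    axis = Vector3.Transform(axis, Matrix.CreateFromQuaternion(myRotation));
    myRotation = Quaternion.Normalize(Quaternion.CreateFromAxisAngle(axis, angle) * myRotation);
}

public void Translate(Vector3 distance)
{
    // Move relative to the object's current orientation.
    myPosition += Vector3.Transform(distance, Matrix.CreateFromQuaternion(myRotation));
}
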

protected void BuildBoxCorners()
This method was added so that the corners of the bounding box could be created ready to be used should the bounding box need to be drawn.


protected void BuildBoxCorners()
{
    points = new VertexPositionColor[8];
    Vector3[] corners = myBounds.GetCorners();

    points[0] = new VertexPositionColor(corners[1] , Color.Green);
    points[1] = new VertexPositionColor(corners[0] , Color.Green);
    points[2] = new VertexPositionColor(corners[2] , Color.Green);
    points[3] = new VertexPositionColor(corners[3] , Color.Green);
    points[4] = new VertexPositionColor(corners[5] , Color.Green);
    points[5] = new VertexPositionColor(corners[4] , Color.Green);
    points[6] = new VertexPositionColor(corners[6] , Color.Green);
    points[7] = new VertexPositionColor(corners[7] , Color.Green);

    short[] inds = {
        0, 1, 0, 2, 1, 3, 2, 3,
        4, 5, 4, 6, 5, 7, 6, 7,
        0, 4, 1, 5, 2, 6, 3, 7
        };

    index = inds;
}



protected virtual void DrawBounds(GraphicsDevice myDevice,Color col)
This is the method that actually draws the object's bounding box.


protected virtual void DrawBounds(GraphicsDevice myDevice,Color col)
{
    BuildBoxCorners();

    if (this is RCModel)
    {
        myDevice.DrawUserIndexedPrimitives<VertexPositionColor>(PrimitiveType.LineList, points, 0, 8, index, 0, 12);
    }
    else
    {
        BasicEffect shader = RCShaderManager.GetShader("BasicEffect").Effect as BasicEffect;
        shader.View = RCCameras.RCCameraManager.ActiveCamera.View;
        shader.Projection = RCCameras.RCCameraManager.ActiveCamera.Projection;
        shader.DiffuseColor = col.ToVector3();

        shader.Begin();
        for (int i = 0; i < shader.CurrentTechnique.Passes.Count; i++)
        {
            shader.CurrentTechnique.Passes[i].Begin();
            myDevice.DrawUserIndexedPrimitives<VertexPositionColor>(PrimitiveType.LineList, points, 0, 8, index, 0, 12);
            shader.CurrentTechnique.Passes[i].End();
        }
        shader.End();
    }
}


The check for RCModel is now redundant and will be removed, as I now have multiple bounding boxes per model. Meshes can contain meshes within meshes, so if the model is an FBX file it can have a bounding box for each mesh inside the mesh.

public Point Get2DCoords(GameWindow window)
This method returns the object's 2D screen co-ordinates, very useful for placing a targeting glyph on an object in the scene.


public Point Get2DCoords(GameWindow window)
{
    RCCameras.RCCamera camera = RCCameras.RCCameraManager.ActiveCamera;
    Matrix ViewProjectionMatrix = camera.View * camera.Projection;

    Vector4 result4 = Vector4.Transform(myPosition, ViewProjectionMatrix);
    if (result4.W == 0)
        result4.W = RCHelper.Epsilon;

    Vector3 result = new Vector3(result4.X / result4.W, result4.Y / result4.W, result4.Z / result4.W);

    return new Point((int)Math.Round(+result.X * (window.ClientBounds.Width / 2)) + (window.ClientBounds.Width / 2), (int)Math.Round(-result.Y * (window.ClientBounds.Height / 2)) + (window.ClientBounds.Height / 2));
}


My next post will describe how I have altered the interfaces and what I have changed in the Scene class.

Saturday, April 07, 2007

The Randomchaos3DEngine

What I intend to do is document what I have added to my engine, including where it deviates from the original source in the Hazy Mind tutorials.

Here is a list of some of the things I have added and/or changed in my engine:

Namespace
Altered namespaces: classes have a prefix of RC rather than HM. Probably not worth a mention, but if you read any of my code snippets you might otherwise think, "where did he get that RCShaderManager class from?", etc.

Hashtable -> Dictionary
Have changed all Hashtables to Dictionaries; this cuts down on the amount of boxing and un-boxing needed to get at the objects stored in them, as a Dictionary can be typed.

RCObject & RCScene
Additions to the base Object to allow retrieval from a scene by name as well as index.

RCModel
More model properties to account for extra shaders I have found or written. Also (still in progress), the ability to use annotations and semantics in shaders for models rather than hard coding shader variable names into the RCModel class.

RCShader
Again, the ability to use semantics rather than hard code shader variable names in code.

New Classes
RCBillboard - Billboard object
RCFire - Volume fire effect
RCParticle - Part of the Particle system
RCSkyDome - A sky dome class [Riemer]
RCSnowEmmiter - Part of the particle system
RCSpriteBatch - You guessed it, for sprite batches
RC2SidedTexturdeQuad - A TQ with both sides rendered
RCTerrain - A Terrain class [Riemer]
RCWater - A water effect using the NVIDIA Ocean shader
RCRiemersWater - Another water effect (still under construction) [Riemer]
RCScenePicker - Used to Ray pick models in the scene
RCSound - Used to play sound effects and music
RCFont - Bitmap font manager
RCNuclexFont - Font manager using Nuclex fonts
RCHelper - Static class used to hold methods used throughout the engine

And there is a bit more still under development. I have an RCGUI namespace that holds my graphical user interface classes, like buttons, labels etc. There is also a debug screen and a RenderState viewer, all yet to be finished off. There is also my RandomchaosContentPipelineManager for overriding the base content pipeline should I need it; this is used for the creation of bounding boxes for a mesh and for that mesh's sub-meshes. Oh yes, and the latest addition: mesh animation.

I will probably start by documenting my changes to the base engine objects and then move on to my additions.