People who use TitaniumGL usually have a configuration with broken or missing OpenGL support. If you are a game developer, it may be a good idea to ensure your product works with TitaniumGL. Optimizing your game for TitaniumGL is very easy: you will face the same kind of cases as when optimizing for ATi or nVidia drivers.
Don't forget: most TitaniumGL users have old graphics cards. These old graphics cards are very slow, so if you want to create a TitaniumGL optimization profile, you should probably use the lowest settings.
Geometry
-TitaniumGL is triangle based. This means triangle data input will be the fastest in EVERY case. Triangle strips, quads, quad strips and triangle fans will be slower than plain triangle geometry; quad strips and triangle fans are the slowest.
-Display lists are the fastest way to render. Display lists are converted to TitaniumGL's native data representation, avoiding any further conversion when they are rendered. Vertex Buffer Objects are only fast if the vertex data is stored as float triplets, the UV coordinates as float pairs and, if color is present, as RGBA float quadlets. Element and index arrays are slow, and vertex arrays are also slower than display lists. If you use the texture matrix, speed will drop. The difference between glVertex3f, glDrawArrays, vertex buffers and display lists can be quite large when more than a few tens of thousands of polygons are rendered in the scene.
-If you can, avoid the use of glDrawArrays. It may be slow.
-Rendering with a lot of glBegin/glEnd blocks or a lot of separate draw calls will also be slow. To compensate for this, TitaniumGL will create data batches and perform batched rendering.
-If you are planning to save geometry bandwidth by using chars or integers instead of floats, it will not help. It will just make things worse.
-Display lists are the fastest, but they eat a lot of RAM. A display list contains the data and the commands in pre-optimised form, so creating a display list can sometimes be a bit slow. It does not matter what data type (float, integer, short, ...) or geometry type (triangles, quads, fans, strips, ...) you submit: TitaniumGL will internally represent it as pre-tessellated, optimised data. With TitaniumGL, display lists are the fastest way to render your geometry (see the first sketch after this list).
-If your scene uses more than a few hundred thousand polygons, TitaniumGL's RAM usage can become too high in some situations due to memory area caching. It is not recommended to draw objects bigger than a few tens of thousands of polygons.
-Drawing lines and points can be buggy.
-If your engine can do it, use geometry LOD (see the second sketch after this list).
-Old versions of TitaniumGL were able to process a maximum of 1-6 million triangles/sec; the newest versions can usually do more than 10 million triangles/sec. In the best case (when hardware-accelerated vertex processing can be used for an object, hardware lights are disabled, and display lists are used), even more than 100 million triangles/sec is possible, but this is only a theoretical limit that will not be reached in practice.
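Since display lists come up several times above, here is a minimal sketch of that path, assuming static geometry stored as plain float triangles; the function and array names are only illustrative:

  /* Sketch: compile static triangle geometry into a display list once,
     then replay it every frame. Names and layout are illustrative. */
  #include <GL/gl.h>

  static GLuint scene_list = 0;

  void build_scene_list(const float *xyz, const float *uv, int triangle_count)
  {
      scene_list = glGenLists(1);
      glNewList(scene_list, GL_COMPILE);      /* record commands, do not draw yet */
      glBegin(GL_TRIANGLES);                  /* plain triangles: the fastest input */
      for (int i = 0; i < triangle_count * 3; i++) {
          glTexCoord2f(uv[i * 2 + 0], uv[i * 2 + 1]);
          glVertex3f(xyz[i * 3 + 0], xyz[i * 3 + 1], xyz[i * 3 + 2]);
      }
      glEnd();
      glEndList();
  }

  void draw_scene(void)
  {
      glCallList(scene_list);                 /* replay the pre-optimised data */
  }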
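The geometry LOD tip can be as simple as picking a lower-polygon display list for distant objects, so fewer triangles reach the driver. The distance thresholds and list names below are invented for this example:

  /* Illustrative sketch of geometry LOD selection. */
  #include <GL/gl.h>

  GLuint pick_lod_list(float distance, GLuint lod_near, GLuint lod_mid, GLuint lod_far)
  {
      if (distance < 20.0f)  return lod_near;  /* full-detail mesh */
      if (distance < 100.0f) return lod_mid;   /* reduced mesh */
      return lod_far;                          /* very low-poly mesh */
  }

  void draw_object(float distance, GLuint lod_near, GLuint lod_mid, GLuint lod_far)
  {
      glCallList(pick_lod_list(distance, lod_near, lod_mid, lod_far));
  }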
Texture handling
-Rendering multitextured geometry is faster than rendering the geometry twice (see the first sketch after this list).
-A four-texture pipeline (four texture units) is available.
-Switching textures rapidly for no reason will decrease speed.
-Copying the frame buffer to a texture is slow, but it can be used on modern GPUs. Older, buggy GPUs will probably produce a broken picture.
-Uploading texture data is slow. RGB and RGBA texture uploads are the fastest, as these two formats are supported internally. Switch off mipmapping if you can (see the second sketch after this list). The old TitaniumGL texture handler was very slow; the versions released after 2023 are equipped with new texture processing code. On a 1.6 GHz machine, texture uploading initially ran at approximately 30 MB/sec; now it can reach 100 MB/sec in lucky cases. It is not recommended to upload animations as textures every frame; disable such features if you can. If you use float, integer or short textures, the upload speed will decrease even more, resulting in longer loading times for your game. Buffering subimages is even slower. With TitaniumGL, overwriting textures in memory will NOT be faster than deleting and recreating them. TitaniumGL will accept textures of any size, but the graphics card may receive a smaller image if it does not support large textures. This does not affect TitaniumGL's internal texture representation, but if the graphics card does not support the full resolution, the image in the VGA RAM will be downscaled.
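As noted above, one multitextured pass beats drawing the geometry twice. Below is a hedged sketch that binds a base texture and a lightmap to two of the four texture units; it assumes glActiveTexture and glMultiTexCoord2f are available (OpenGL 1.3 or the matching extension; on Windows these entry points may have to be fetched with wglGetProcAddress), and the texture names are placeholders:

  #define GL_GLEXT_PROTOTYPES 1
  #include <GL/gl.h>
  #include <GL/glext.h>

  void draw_lightmapped_triangle(GLuint base_tex, GLuint lightmap_tex)
  {
      glActiveTexture(GL_TEXTURE0);           /* unit 0: base texture */
      glEnable(GL_TEXTURE_2D);
      glBindTexture(GL_TEXTURE_2D, base_tex);

      glActiveTexture(GL_TEXTURE1);           /* unit 1: lightmap, modulated */
      glEnable(GL_TEXTURE_2D);
      glBindTexture(GL_TEXTURE_2D, lightmap_tex);
      glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

      glBegin(GL_TRIANGLES);                  /* each vertex needs UVs for both units */
      glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
      glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
      glVertex3f(-1.0f, -1.0f, 0.0f);
      glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
      glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f);
      glVertex3f(1.0f, -1.0f, 0.0f);
      glMultiTexCoord2f(GL_TEXTURE0, 0.5f, 1.0f);
      glMultiTexCoord2f(GL_TEXTURE1, 0.5f, 1.0f);
      glVertex3f(0.0f, 1.0f, 0.0f);
      glEnd();

      glActiveTexture(GL_TEXTURE1);           /* clean up: disable unit 1 */
      glDisable(GL_TEXTURE_2D);
      glActiveTexture(GL_TEXTURE0);           /* restore the default unit */
  }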
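For the fastest upload path described above (8-bit RGBA data, mipmapping switched off), a minimal sketch could look like this; the function name and the pixel source are placeholders:

  #include <GL/gl.h>

  GLuint upload_rgba_texture(const unsigned char *pixels, int width, int height)
  {
      GLuint tex;
      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);

      /* No mipmap chain: use non-mipmapped filters and upload level 0 only. */
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      /* RGBA unsigned bytes: one of the two internally supported formats. */
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, pixels);
      return tex;
  }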
Pipeline implementation
Old version of TitaniumGL: (pipeline diagram)
New version of TitaniumGL: (pipeline diagram)
-If your graphics card supports it and has more than 128 MB of VRAM, TitaniumGL will attempt to use anti-aliasing.
-Do not use the stencil, alpha or logic-op buffers. They will not work.
-Some blending modes are unimplemented on old cards; the exceptions are additive blending and alpha blending, so it is best to use these two kinds of blending (see the first sketch after this list). Some blending modes are unimplemented even in TitaniumGL itself.
-Material handling is not yet correctly implemented, and light management is buggy. Do not expect good shading at all.
-TitaniumGL (like some other vendors) will NOT lose your device, even if you try to force it. Destroying and recreating a context just to delete your textures is a very bad idea, as TitaniumGL will simply not do it; doing it ten times to be sure is an even worse idea. Delete the textures directly instead (see the second sketch after this list).
-You cannot access TitaniumGL from multiple threads. It will crash. Don't even attempt it.
-Hammering thousands of OpenGL initialization functions to search for magical bits here and there is a very bad idea.
-TitaniumGL does not rely on the DirectX stack too much; it minimizes the interaction with Direct3D. Unlike other OpenGL-to-Direct3D wrappers, DirectX is only used to render the geometry with hardware acceleration; it is not used to implement the whole OpenGL API. Because of this, TitaniumGL avoids the graphics cards' hardware design flaws and driver problems, but still stays fast because the 3D rendering is accelerated. This is why the same features are available on all graphics cards with TitaniumGL.
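The two blending setups recommended above are plain OpenGL state. A small sketch (the GL_SRC_ALPHA/GL_ONE combination is just one common way to get additive blending):

  #include <GL/gl.h>

  void use_alpha_blending(void)          /* classic transparency */
  {
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  }

  void use_additive_blending(void)       /* glow / particle style blending */
  {
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE);
  }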
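Instead of destroying and recreating the context, textures can be freed directly with glDeleteTextures; a minimal sketch, with an illustrative texture array:

  #include <GL/gl.h>

  void free_level_textures(GLuint *textures, int count)
  {
      glDeleteTextures(count, textures);  /* releases the driver-side copies */
      for (int i = 0; i < count; i++)
          textures[i] = 0;                /* forget the stale texture names */
  }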
Bug reports
Every TitaniumGL version is tested with 30-40 games, 10-20 feature-testing mini-apps, and some stress-test applications. TitaniumGL has a pointer-corruption detector, internal memory protection, pointer-overrun protection, and several other safety locks. TitaniumGL has some debugging features too, but these require a specially compiled version.
If you send me a bug report, it will be added to my bug list and I will check it. This does not mean that it will be fixed in the upcoming versions, but I will try to fix it. If you have donated, your bug report will be investigated with higher priority, but this still does not mean that I can fix your issue, because a driver is always very complicated.
Always use the newest version!
-If you are a gamer and you have found a bug, you can send a bug report to my email address. The bug report must contain the name of the buggy game/software, the name of your graphics card, your CPU, the size of your RAM, and of course the bug itself. No other information is needed from your computer.
-If you are a developer and you have found a bug, you can send a bug report to my email address. Often it is much simpler to fix the problem in your application by replacing the problematic parts with something else. Your bug report must contain the name of your software, the name of your graphics card, and of course the bug itself. No other information is needed from your computer. Please write which function caused the bug; if it crashes, please remove the other code and try to test the crashing code alone. Is it still crashing? Do you use multithreading? Or are the graphics corrupt? How do you render the corrupt parts? Are they still corrupt if you render them in a different way? It is good to send me screenshots of the bug and of the correct image. You can also send me source code; in this case, please tell me the corresponding file and line in your source.