
Friday, August 17, 2012

Video Game Optimization

Here is a link to the slides for the recent workshop I held on video game optimization:

Optimization presentation slides

Monday, August 06, 2012

JC

Watching Carmack's keynote at QuakeCon 2012 is really interesting in many ways:

1 - A case study in how to maintain your energy through a 3.5 hour talk.
2 - The passion of the man.
3 - Respect for the audience.
4 - How context affects the relevance of phenomena.
5 - Lots of informative tidbits from a mature code guru.

http://www.youtube.com/watch?v=wt-iVFxgFWk

Monday, September 12, 2011

Refactoring

Refactoring code is like being in the middle of a battlefield. You study, change, fix, enhance, test, change, design, observe, reduce, change, optimize, extend, measure, change ... just as you would slash, guard, run, move, stop, breathe, hit, defend, jump, focus, push and attack in the middle of an ancient war.

In both, the outcome is good as long as you are alive, moving, changing, doing and commanding every second. Your failure is the moment you feel secure and satisfied.

Neither relies on grand designs and strategies; both bring out the best in you, both often require you to act before you think, both show you the meaning of every instant, both value your subconscious self over your conscious reflection, both need you to embrace the opportunities of failure with ultimate courage and both require Real Men!

When no colossal armies are around and waves of uncertainty loom on the horizon, you gotta grab the sharpest blade and dive into the adrenaline-pumping moments of life, looking for nothing but miracles and staying true, every moment.

Just like nature itself, how it continually moves and makes and breaks.

Let's work carefully on good creations, only to watch them shattered to pieces and rebuilt better, all by us; let's embrace the destructive nature of change and look forward to the rising sun...

Friday, August 26, 2011

Zen Coding

You wrote the code; now you have to build it, and it takes a while before you can see the results. You see the results, bring down the application, write more code and build again. In most serious applications the build time is considerable. It is the time the machine needs to prepare itself for your commands. To you it can feel like a short recess, or is it?

The build time is well defined for the machine, but it is a critical time for the coder. Why, you say? Let's see. There is a huge tendency for coders to drift away during this time, which can sometimes be rather long (it can include both the code build time and the application startup and initialization time). It seems to be the best time to check the emails again, read a bit more of that article in the open browser, do a few more clicks on that sporadic junk web MMO, try to be cool on the social net, or a million other small time-suckers that we are all aware of. Why should it matter anyway? The machine is building and there is nothing else for me to do!

Well, there are three main issues with the above scenario:

1 - A shift of attention during this time is a context switch for the brain, which can make you lose all the thought process and data you had in your mind regarding your problem. The longer the break, the higher the risk; it is quite easy to completely forget what you were working on by the time the application is up for testing.

2 - The time when you stop coding and wait is the best time to focus more on the problem at hand, and even if you stop thinking about anything in particular (a zen state), your subconscious will carry on and work on the problem from different angles. This can increase the quality of your problem solving and, over the course of the day, greatly reduce the number of code/build iterations you need for one specific problem. (Providing you with a lot more free time at the end of the day.)

3 - Once you are carried away, there is no guarantee that you will come back and test the application once it is up and ready for you, you might spend a lot more time on the other task you started and who knows, maybe even get involved with the article for more than thirty minutes.

So next time, before jumping out to do something else right after you initiate the build process, double check whether it is really going to be to your benefit, and consider the negative effects it can have on your overall development quality. It is not easy, but it could be well worth it.

If the problem at hand is too easy and there is absolutely no need for thinking more on it, you can always think about all the things you can do to reduce the lengthy code build time.

(from http://xkcd.com/303/)



Wednesday, August 03, 2011

The new Hell

We recently thought it was a good time to upgrade the tools and library code that we use. So we went ahead and upgraded to VS 2010, took the latest version of OGRE 1.8 (unstable), and upgraded almost all the other dependent libraries: PhysX, NxOgre, OgreVideo, Theora, ogg, vorbis, etc. We recompiled all the ones with available source code using VS 2010, quickly fixed the porting errors, put everything where it belonged and tried to run the game with the old levels we had. Guess what: nothing worked!

Serious crashes and hangs everywhere.

Hmm... it might be related to the new OGRE, we thought. Maybe because of the way we have changed the use of threading? Maybe an incompatibility with the new OpenAL? Could it be VS 2010? So we started to try out all the different permutations of the libraries: the new code with the old OGRE, the old code with the new OGRE, the new code with VS 2008, OGRE 1.7, OGRE 1.6 ... and any other imaginable configuration, hoping to find the exact module causing the new issues. After almost a week, nothing was found! Strange stuff happened with every change.

Now we are back to the old code base that runs perfectly on VS 2008 with all the rusty libraries that are rock solid. We will resume upgrading the libraries sometime in the future, but this time one by one, with proper test suites to run after each change.

Lesson learned: do not upgrade everything possible OVERNIGHT!

Sunday, August 01, 2010

Game Engine Design Course

We will be running a Game Engine Design and Implementation course for those interested in knowing more about the topics and gaining some insight into the inner workings of modern Game Engines.
More information can be found here.

Thursday, June 10, 2010

Sinner

Oh dear lord ... forgive me for I have sinned ... forgive me for the unforgivable sin ... a sin with deep roots in ignorance ... a sin of not being conscious over memory usage from the first day ... having blind eyes, deaf ears and irresponsible hands...
Upon my forgiveness ... I shall promise thee to be memory conscious from the first line in the next evolution step of the code base ... acceptance of all these sacrifices is all I hope ... every bit will be measured ... I shall tame the beast ...

Friday, May 07, 2010

To OOP or not to OOP !!

Object Oriented Programming has been a buzzword in computer programming for quite a long time. The concept has even spread to higher-level concerns such as OO Design and OO Analysis.

Just like any other buzzword in the world of information technology, Object Oriented Programming has received much more attention than it deserves.

But what does it really do? We all know that all code written in a programming language eventually needs to be translated into machine code, and once in machine code it is all the same: whether you write your code in VB or Python, it ends up in a unified language of instructions which the target machine knows well.

Object Oriented Design/Programming is a way of adding a few conceptual layers between the problem domain in our real world and the instructions which need to be run by the target machine: a way of making a smooth transition between the layers of abstraction, a way of helping our minds understand the problem and find a proper solution for it. After all, Object Oriented Programming is something to help us human beings overcome our limitations and make better computer programs. Some argue that Object Oriented Programming brings new functionality that was not possible before, such as inheritance and polymorphism; however, simple C structs and function pointers are all you need in a structured programming language such as C to simulate inheritance and polymorphism.
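To make that last point concrete, here is a minimal C-style sketch (all the names are illustrative, not taken from any real code base) of simulating a virtual call with plain structs and a hand-rolled table of function pointers:

```cpp
#include <cstdio>

struct Shape;

// The "vtable": a table of function pointers shared by all objects of one type.
struct ShapeVTable {
    double (*area)(const Shape*);
};

// The "base class": every derived struct embeds this as its first member,
// which is how the inheritance relationship is simulated.
struct Shape {
    const ShapeVTable* vtable;
};

// A "derived" type.
struct Circle {
    Shape base;     // must be first, so a Circle* is also usable as a Shape*
    double radius;
};

double circle_area(const Shape* s) {
    const Circle* c = (const Circle*)s;  // safe: Shape is the first member
    return 3.14159265358979 * c->radius * c->radius;
}
const ShapeVTable circle_vtable = { circle_area };

// Another "derived" type.
struct Square {
    Shape base;
    double side;
};

double square_area(const Shape* s) {
    const Square* q = (const Square*)s;
    return q->side * q->side;
}
const ShapeVTable square_vtable = { square_area };

// The "polymorphic" call site: dispatch goes through the function pointer,
// so the right code runs regardless of the pointer's static type.
double area(const Shape* s) {
    return s->vtable->area(s);
}
```

This is essentially what a C++ compiler generates for you behind the `virtual` keyword.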

All of this added value toward writing easier code comes with costs, sometimes huge costs. In other words, intellectual manageability, which is probably the most important benefit of Object Oriented Design, comes at a price, and the most significant cost is usually performance.

Real-time simulation software usually has performance high on its priority list, so Object Oriented Design in such programs, such as games, needs some second thoughts. Such applications usually see their world as groups of data rather than groups of objects containing data and behavior, unlike the way we see our world.

An example of the above is the way the main loop of a physics simulation engine works: all it cares about is a small set of data related to physical properties, and for it, the other properties of an object, such as its visuals, sound and game-related metadata, are of no use.

In these cases, when a part of an application needs to work on specific data, it is best for cache utilization and performance to provide only the relevant data to the subsystem, rather than pushing in a whole object full of data just because handling the code in an object oriented manner is easier for us programmers. Multiprocessing is also easier to manage once related data is batched together.
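A minimal sketch of the idea (the types and fields here are hypothetical, just for illustration): compare an object that drags all its data through the physics loop with a layout that batches only what the integration step needs:

```cpp
#include <cstddef>
#include <vector>

// Object-oriented style: the physics loop pulls every object's unrelated
// data (render and audio fields) through the cache alongside what it needs.
struct GameObjectAoS {
    float px, py, pz;     // position  - what physics needs
    float vx, vy, vz;     // velocity  - what physics needs
    char  meshName[64];   // rendering data, dead weight for physics
    int   soundHandle;    // audio data, dead weight for physics
};

// Data-oriented style: the physics-relevant data batched in contiguous arrays.
struct PhysicsState {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
};

// The integration step touches only the data it needs, so far more entries
// fit per cache line, and the loop is trivially splittable across threads.
void integrate(PhysicsState& s, float dt) {
    for (std::size_t i = 0; i < s.px.size(); ++i) {
        s.px[i] += s.vx[i] * dt;
        s.py[i] += s.vy[i] * dt;
        s.pz[i] += s.vz[i] * dt;
    }
}
```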

Enter Data Oriented Design: a somewhat harder way to look at and model the problem domain, but one that is by far easier and more efficient for the machine to process.

A very good article regarding Data Oriented Design can be found here:
Data-Oriented Design (Or Why You Might Be Shooting Yourself in The Foot With OOP)

And another very inspiring article is from Sony Research:
Pitfalls of Object Oriented Programming

Also a very nice presentation by Mike Acton:
Three Big lies: Typical design failures in game programming

Friday, January 15, 2010

Real coder's arena

One thing we, a few guys on our tech team, wish we could experience is console development. Being able to make software for one specific hardware platform is quite exciting, and the root of this excitement comes from many different factors. The hardware specification is fixed and known beforehand, unlike PC development where every machine is different. Knowing the hardware is fixed, you can try to create the best code possible for that machine, you can always compare this with what others are doing, and any enhancement in software means something because everybody is running on the same ground. The memory is fixed, and very limited too, so a new dimension of memory consciousness needs to be added to the development process, and the art of the coder should come up with elegant memory solutions to support the whole product.
You need to know the hardware very well in order to utilize it best, and the borderline between software and hardware really tends to become non-existent. The abstraction layers in this kind of software engineering run much deeper, de-abstracting all the way down to the hardware itself.
The machine stays the same for a few years, yet the games improve; the only way this can happen is by writing better, more efficient code and using the hardware better, unlike the usual trend in PC application development, which is to write less efficient, more bloated code every year and rely on the enhancements in hardware performance.
Modern consoles are all multi-processor machines, which makes them a great platform for parallel software development, which is quite complex and exciting. The best system in this area is the Cell processor in the PS3 with its eight processing cores.
All of the above needs a tremendous amount of work, but the whole idea of console development seems to be a fair and well-defined game and challenge. Playing a game where the rules change randomly isn't much fun.
Hope to engage in the real coder's arena one day soon.

Wednesday, May 06, 2009

Haskellers

Our first Haskell programming session was held this week together with three Fanafzar ninjas: yzt, fhm and barj. Our goal is to get to know the language and see the world from a different angle. We may not end up using it for daily coding tasks, but thinking from a Functional Programming perspective should be valuable and help the thinking process in general.

Real World Haskell is the free online book we are following. Quite a good book.

Sunday, May 03, 2009

The Real Ones

A real coder once said:

"Of course, none of this would have mattered unless the code could be compiled very quickly, so full-blown traditional compilers were out of the question. Instead, we wrote a streamlined compiler custom designed for the task, which we call the "welder.""

In fact, this is how it can be: if the environment isn't what you really need, just BUILD the environment for yourself.

This quote is from Michael Abrash, talking about the software renderer they made called Pixomatic, which performs DX7-level hardware functionality in software.

In fact, it is the work of people like him and Carmack on Quake that shaped the world of 3D games forever. Magic happens when real people join forces.

Saturday, April 25, 2009

The Future

We went over an interesting presentation from GDC09 regarding parallel coding strategies for game AI, which covered a few basic strategies for making the main component interactions parallel to embrace multi-core and many-core architectures. These include double buffering, messaging, asynchronous requests and job scheduling. This is one major area we'll need to focus on in the very near future of our development. The Zorvan engine is not threaded properly right now (except for resource loading and the potential to run the physics simulation loop in a different thread), but one of the most important design goals for MAGE is a multi-threaded kernel.

An interesting software engineering concept I came across during the presentation, and one that is well suited to multi-threaded job scheduling systems, is "Futures" and "Promises". These ideas have been around for a while, but they are finding practical uses these days with the heightened significance of parallel code. Languages which have supported this feature include Joule and E.

yzt pointed me to the Boost implementation of the idea.

As always, the engineers building the Boost library (probably aliens from outer space) provide C++ implementations for any statement of the form "... but C++ does not support this feature!". With Boost, it can!
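As a small sketch of the concept, here it is using std::async and std::future, the C++11 standardization of what the Boost library pioneered (the function and its use of a split sum are illustrative, not from the presentation):

```cpp
#include <future>
#include <numeric>
#include <vector>

// A future is a handle to a result that may not exist yet. The worker thread
// fulfills the associated promise when done; get() blocks only if the result
// is not ready at the time of the call.
long parallel_sum(const std::vector<long>& data) {
    auto mid = data.begin() + data.size() / 2;

    // Kick off the first half on another thread; we receive a future
    // immediately and keep going instead of waiting.
    std::future<long> firstHalf = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), mid, 0L);
    });

    // Meanwhile, sum the second half on this thread.
    long secondHalf = std::accumulate(mid, data.end(), 0L);

    // Rendezvous point: blocks until the promise is fulfilled.
    return firstHalf.get() + secondHalf;
}
```

The appeal for a job scheduler is that the caller expresses only the data dependency ("I need this value eventually") and the rendezvous happens as late as possible.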

Sunday, February 08, 2009

It will return

Bad software design is like a boomerang: it will come back and hit you in the head. It will hit you, unless you throw it, run away and never come back. In the case of a major code base whose development spans a long time, better watch out for bad design decisions and short-term hacks.

Monday, January 05, 2009

Debug Clusters

There are many issues to bear in mind as design goals in software development, the -ability qualities among the most popular, and many of these goals even contradict one another. However, designing and implementing code which has no functional value and is used only for debugging purposes becomes important in most complex projects. It can be considered a subset of the maintainability goals, but such code can be significant in developing the proper functional code in the first place. The designer's decision making therefore includes an optimization problem: how much of this debug cluster to build around the functional code. More of it eases functional development but costs time to write non-functional code; less of it saves that time but makes development harder by increasing functional code development time.
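As a hypothetical sketch of such a debug cluster (the macro and function names are made up for illustration): debug-only tracing that surrounds the functional code, yet compiles away entirely when the build flag is off, so it costs nothing in the shipped product:

```cpp
#include <cstdio>

// Non-functional code whose only job is to make the functional code
// observable. Define ENABLE_DEBUG_CLUSTERS in debug builds; in release
// builds every DBG_TRACE call vanishes at compile time.
#ifdef ENABLE_DEBUG_CLUSTERS
  #define DBG_TRACE(fmt, ...) std::fprintf(stderr, "[dbg] " fmt "\n", __VA_ARGS__)
#else
  #define DBG_TRACE(fmt, ...) ((void)0)
#endif

// The functional code, wrapped in its debug cluster.
float applyDamage(float health, float damage) {
    DBG_TRACE("applyDamage: health=%.1f damage=%.1f", health, damage);
    float result = health - damage;
    if (result < 0.0f) result = 0.0f;  // clamp: health never goes negative
    DBG_TRACE("applyDamage: result=%.1f", result);
    return result;
}
```

The trade-off from the paragraph above is visible here: the two trace lines took time to write and maintain, but they make the clamping behavior verifiable without a debugger.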

Wednesday, August 20, 2008

Lets get disconnected from Time!

How can some code behave absolutely fine when inspected under a debugger, and go absolutely nuts once running free!?
Well, enter real-time code which is synchronized with the time we measure! This happens in the Soshiant project. The project is real time, although "soft real-time": it has to respond within a specific time frame, known as a frame of the application, during which the main loop executes once, and it also depends on the actual time that passes in the real world. Every frame knows the time elapsed since the last one, for example 16ms. This time is used as an input value to different algorithms in the code, such as the animation code, which uses it to find the correct transformations to apply to the character bones in the current frame.
What the debugger does is, first of all, stop the execution of the program at a specific point, the breakpoint, and let you trace the code line by line from there. Stopping the execution means stopping the main loop and bringing its frequency down, but the real time we sense is of course not going to stop, so the result is a loss of synchronization between the frames and time. If in normal execution the 10th frame runs at time 300ms, tracing line by line in the debugger might cause the 10th frame to run at time 900ms. This means that, while debugging, the frame witnesses different states in any function that depends on real time. A practical consequence: you see weird behavior while two animations are being blended into each other over 0.2 seconds, but breaking at the start of the blend will not let you follow the logic as it happens, since that 0.2 seconds of real time passes as soon as your debugger stops the code for you to evaluate.
Solutions? Short answer: output traces of what you want to monitor while the application is running, and do not use breakpoints. Not a really nice solution, and sometimes cumbersome. Another solution is to decouple the algorithms that rely on time from the real time we measure and provide a virtual time system for the application. Speaking of time, we might not be able to find out which time is the real one and which is virtual; relativity, anyone?
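A minimal sketch of the virtual time idea (the interfaces and names are illustrative, not from the Soshiant code): the game asks a clock for elapsed time instead of reading the wall clock directly, so a debug clock can hand out a fixed step per frame no matter how long the debugger kept the process stopped:

```cpp
// The time source the rest of the code depends on.
struct Clock {
    virtual ~Clock() {}
    virtual float frameDeltaSeconds() = 0;
};

// Normal play would wrap the real timer (omitted here). For debugging,
// a clock that advances by a fixed step per frame: breakpoints no longer
// desynchronize the simulation from time.
struct FixedStepClock : Clock {
    float step;
    explicit FixedStepClock(float s) : step(s) {}
    float frameDeltaSeconds() override { return step; }
};

// A time-dependent algorithm, like the 0.2-second animation blend above,
// now advances deterministically: one call per frame, same result whether
// or not a debugger paused between frames.
float advanceBlend(Clock& clock, float blend, float blendDuration) {
    blend += clock.frameDeltaSeconds() / blendDuration;
    return blend > 1.0f ? 1.0f : blend;
}
```

Swapping the real clock for the fixed-step one at startup is enough to make the blend traceable line by line.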


Saturday, August 09, 2008

Black-Risks

The best parts of development in a coding project are the ones where you don't yet know what you want to do, and once you decide, you simply implement. The worst parts, project-wise, are the ones where you know exactly what you want to do but do not know how to do it. The reason for not knowing is usually the abstractions inside the frameworks and libraries you are using. These are the real risks that can take a lot of time the first time they are encountered, and get solved right away the next time. These are the issues that make deterministic project effort estimation almost impossible. A good name for such things might be "Semantic Black Holes".
We face a lot of them in Soshiant.
Of course it is possible to reach deep inside these black holes and find out everything about how things are done and need to be done but the only thing lost during this process is the single crucial parameter usually represented using the "t" symbol.
On the other end the same frameworks speed up and help out in many tasks and provide lots of useful features which if you wanted to start and implement yourself it would have needed lots of time too.
Time saved for needed features which exist in a library and can be used: t1
Time spent finding out how the library can be tweaked to provide the exact thing you need: t2
Happiness = t1 - t2

Thursday, May 15, 2008

I had a bug!

Trying to attach a physically simulated cloth to the clothes Soshiant is wearing had me puzzled for a few days with some bugs. Well, the bug was the kind that, after finding out what it was, you just wish you had thought a bit deeper into the layers of code; it was in fact very easy to find, except you couldn't see it from the upper layer. Hmm... well, aren't all software bugs like this? Something happens, and we call it a bug since we don't get what we expect, and the reason it is really happening is that we do not really understand what is going on underneath. We think we know everything and call it a bug, but it is really us who have the bug, and the system is working perfectly. It is doing what it has to do; we expect it to do something else, because we do not know!

In order to find out about these issues, you usually need to uncover the code being used by your code to see what it is really doing, and then modify your usage and maybe your expectations. If you see a bug in the lower layer, you might be able to peel that away too and look underneath that layer or API. This is why programmers trained only in high-level languages and advanced IDEs usually have trouble finding the problems: they cannot do this peeling and looking inside. The tools and high-level constructs of the programming language abstract everything away from the developer.

What we see here is, in fact, the danger of abstraction. We need abstraction to manage the intellectual demands of software development, but any abstraction brings with it expectations. Those expectations might not always be correct, or might depend on many complex configurations that are not obvious at that level of abstraction; hence we expect incorrectly and call it a bug. A car is an abstraction of a reality of which I know some details: it has to move and it needs gas. When my car stops moving because something broke in the engine, I look at this as a problem and somehow blame the car (the source of the problem being in an abstract part I cannot see through), but when the car stops due to having no gas (the source being at the same abstraction level I can see), I usually do not blame the car but probably myself, since I am very well aware of the reason. In reality, these two cases are not any different.

It seems that if we could comprehend all the details and go deeper and deeper into the logical relationships of the code, we could find the reason for every bug. Would any bugs remain? What about hardware bugs? How far can we go? Low-level code, assembly instructions, machine code, hardware behavior due to physical properties, molecules, atoms ... are there any real bugs in the world? Can we relate this topic to determinism and indeterminism? If the world is deterministic, can we say there are no real bugs at all? In the story "Beyond" from the Animatrix series, there was a bug in the world, but it really had a reason in another, lower-level layer.

What about generalizing this concept from computer software to real life: are there any problems in the world? Or do we call something a problem because we expect something else to happen, and it doesn't happen because it should not happen, our assumption being false in the context of the problem? If you find the reason for a software bug, you fix it and don't call it a bug anymore; can we do this with life? Can we find the reasons for the things we call problems or troubles and somehow fix them and not call them problems anymore? We get really frustrated when we don't get what we expect, but what "WE EXPECT" may be wrong. The Cynic philosophy is somewhat related to this view, and it all falls in the domain of epistemology. In mathematical terms, our operations should not have closure, since we need to be able to understand from a different set (layer), or in fact enter the meta-layer, like deductive closure in logic.

The above discussion may have some psychological conclusions as well. People get sad when faced with problems, but knowing the causes of those problems might eliminate this sadness, either by changing something so that the problem does not happen, or by changing our expectations. Can we conclude that knowledge provides the foundation for a happier man? This is the opposite of the belief that more knowledge brings more suffering, a belief I am very much against.

Friday, May 09, 2008

Multiple Dispatch

Specifying collision detection logic in games that have an inheritance hierarchy for game objects usually leads to code for handling the collision event. If this code happens to be polymorphic, we encounter code that looks something like "gameObjectA->handleCollision(gameObjectB)". The two game objects are probably just base class pointers to subclass objects, and we usually need different handling code for collisions between different object types: an arrow hitting an enemy needs different handling than an arrow hitting a tree.

Now, in a single dispatch language such as C++, the above code will have trouble figuring out how exactly to handle the event, since different handlers cannot be selected at run time based on the real type of gameObjectB (assuming we have handleCollision(Tree*) and handleCollision(Enemy*) for the Arrow object). Dynamic binding works on only one parameter, and in C++ that means virtual methods. The collision case needs at least a double dispatch mechanism.

However one technique which can be used for the above scenario and would help out a little is the use of the visitor pattern.
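A sketch of how that visitor-style double dispatch could look for the collision case (the classes and result strings are illustrative, not from any real game code): two virtual calls, one per object, recover both dynamic types even though the call site only sees base-class references:

```cpp
#include <string>

struct Arrow;
struct Tree;

struct GameObject {
    virtual ~GameObject() {}
    // First virtual call: resolves *this* object's real type.
    virtual std::string collideWith(GameObject& other) = 0;
    // Second virtual call: resolves the *other* object's real type,
    // now that the caller's concrete type is known statically.
    virtual std::string hitByArrow(Arrow& a) = 0;
    virtual std::string hitByTree(Tree& t) = 0;
};

struct Arrow : GameObject {
    std::string collideWith(GameObject& other) override {
        return other.hitByArrow(*this);  // 'this' is statically an Arrow here
    }
    std::string hitByArrow(Arrow&) override { return "arrows deflect each other"; }
    std::string hitByTree(Tree&)   override { return "arrow sticks in tree"; }
};

struct Tree : GameObject {
    std::string collideWith(GameObject& other) override {
        return other.hitByTree(*this);
    }
    std::string hitByArrow(Arrow&) override { return "arrow sticks in tree"; }
    std::string hitByTree(Tree&)   override { return "trees just stand there"; }
};
```

The drawback is clear too: every new game object type means a new hitByX method on the base class, which is exactly the maintenance burden multiple dispatch support in the language would remove.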

Stroustrup and some of his students are working on an interesting project to add multiple dispatch to the C++ compiler. The paper can be found here. It would be nice to see this feature in standard C++ someday.

Sunday, December 09, 2007

Using the cores

Visual Studio 2005 by default uses multiple CPU cores to compile different projects at the same time, so if your solution consists of only one project, the extra cores aren't really going to help you. The tool MPCL installs into VS and makes it use all cores for compiling a single project. Very nice and handy.
VS 2008 is going to have this feature built in; other than that, there don't seem to be many enhancements on the VC side.