probably the key to pushing audio forward in terms of integration with the dynamics of games is to reconsider what IS music in relation to the game world.
in the world of music, sound is broken into beats and bars, keys and chords. compared to the flow of games, this way of thinking is too rigid and restrictive.
instead, why are game sound designers not thinking more along the lines of "what does that leaky steam pipe sound like in relation to the engine hum over in that corner, mixed with the sound of the rain outside? does it mesh well with the sound of the player's footfall on this surface?"
in game worlds you have at least 64 tracks of simultaneous audio that can be triggered and mixed in real time, in a true 3d space (and that's not even taking into account the spatial algorithms from EAX and the like). a good sound designer should make the environment itself the theme music. if you approach a project like this, you will find your old "chords and keys" ideas quickly seem redundant as the main meat of your sound structure, and they can instead be relied on to do more subtle work.
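to make the idea concrete, here is a minimal sketch of what "the environment as the mix" might look like under the hood: a handful of positioned ambient sources, each attenuated by distance from the listener every frame. the source names, positions, and the inverse-distance falloff are all made up for illustration — a real engine would apply these gains to actual audio buffers.

```python
import math

# hypothetical ambient sources: name, 3d position, base loudness
SOURCES = [
    ("steam_pipe", (2.0, 0.0, 1.0), 0.6),
    ("engine_hum", (8.0, 0.0, -3.0), 0.9),
    ("rain",       (0.0, 5.0, 0.0), 0.4),
]

def gain(listener, source_pos, loudness, min_dist=1.0):
    """simple inverse-distance attenuation, clamped so gain never
    exceeds the source's base loudness."""
    d = math.dist(listener, source_pos)
    return loudness * min(1.0, min_dist / max(d, min_dist))

def mix(listener):
    """one 'frame' of the environmental mix: the per-source gains
    the engine would apply to each source's buffer this frame."""
    return {name: round(gain(listener, pos, vol), 3)
            for name, pos, vol in SOURCES}

print(mix((0.0, 0.0, 0.0)))
```

as the player moves, re-running `mix` with the new listener position re-balances the whole soundscape continuously — no bars or beats involved, just space.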
for an example of how to make the environment itself the instrument, see doom3 (terrible game, but great sound work)
for an example of how NOT to do it, see half-life 2 (great game, terrible sound work)
(to qualify that last statement) half-life 2 features standalone "showpieces" of triggered music (which is good enough music, and more or less fits the action),
but each piece plays until it either ends or, even worse, stops abruptly when you leave the current area and the next one starts loading. it is jarring, and lends little to the immersive experience of the game.
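even without making the score fully adaptive, the abrupt cut has a cheap fix: ramp the track's gain to zero when the level load is triggered instead of silencing it instantly. a minimal sketch, with the fade length and frame rate chosen arbitrarily:

```python
def fade_out_gains(start_gain=1.0, fade_seconds=2.0, frame_rate=60):
    """per-frame gains for a linear fade-out; the engine would multiply
    the music buffer by each value, then unload the track at zero."""
    frames = int(fade_seconds * frame_rate)
    return [start_gain * (1.0 - i / frames) for i in range(frames + 1)]

gains = fade_out_gains()
# first value is full volume, last is silence
```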
this is what i think anyway on the subject of adaptive sound in games.