Getting started: Caching graphs

Paul Doyle Administrator, Fabric Employee Posts: 229 admin

CEO at Fabric Software

Comments

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Thanks a ton for this!

    Although I have to admit that I don't find this example "perfect" - because it seems to show that, instead of the VALUES in the array being cached, the execution of the "upper part" of the graph is being cached (so not the values, but the actual code). I say this because the content of the array stays the same.

    Now, I do still get what the cache node is for and why it makes sense to use it (to considerably speed up execution of iterative code), but maybe comparing the cache node to a CPU's cache might explain it a bit more clearly - then again it may well be my own development history that makes me understand "cache" differently (i.e. not so much on the code-caching side but on the actual data caching itself).

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • Roy Nieterau Posts: 258 ✭✭✭

    Great and clear example for newcomers. Also good explanation that the full graph doesn't get triggered again.

    @malbrecht It could just as well be your background in a certain area, because to me this seemed spot on.

    The only thing I didn't get from the video was that caches (unlike variables) are not required to keep the value. If memory gets bogged down, the application is free to release the cache. See here:

    Caches are not guaranteed to be kept in memory. A memory manager might decide to clear them. The assumption is that it’s always possible to recompute the cache deterministically. You can not rely on the cache to always be kept around, if you need a guaranteed container please use a variable.
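
    Roughly how I picture that difference, as a quick Python analogy (nothing to do with Fabric's actual implementation, just the semantics I take from that quote):

    # A cache may be evicted and transparently recomputed; a variable always keeps its value.
    class Cache:
        def __init__(self, compute):
            self._compute = compute   # must be deterministic so the value can be rebuilt
            self._value = None
            self._valid = False

        def evict(self):              # what a memory manager might do under pressure
            self._value = None
            self._valid = False

        def get(self):
            if not self._valid:       # evicted or never computed: rebuild on demand
                self._value = self._compute()
                self._valid = True
            return self._value

    class Variable:
        def __init__(self, value):
            self.value = value        # a guaranteed container: stays until you change it

    expensive = Cache(lambda: sum(i * i for i in range(1000000)))
    print(expensive.get())   # computed
    expensive.evict()        # memory manager cleared it
    print(expensive.get())   # silently recomputed, same result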

    This also makes me very curious whether variables themselves behave somewhat like a cache, or whether you would still put a cache node in front of them.
    I assume you'll have to put a cache node there as well. Correct?

    Plus I'd also love to know more about what it "costs" to have a cache node in place when the values before it are always changing.
    (For example, you have a preset that caches and works for 95% of your cases. Is it worth removing the cache if your input is changing every frame anyway?)

  • EricT Administrator, Moderator, Fabric Employee Posts: 304 admin

    @malbrecht Thanks for the comments.

    In order to demonstrate to users that the cache node was in fact not recalculating the array with each value change, a report statement was put into the graph before the cache node. Concepts that run "invisibly" in the background can't really be shown in a video, so that was the simplest way I could think of to illustrate it.

    The report node is a simple pass-through, and the values of the array get printed simply because the report node is there. If you remove it, the array will be cached just the same, only without printing anything. The cache node is caching data, not specifically directing execution of that branch of the graph. Its output is a Scalar[], so the overall graph still passes the data downstream, just more efficiently when the cache node is used.
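
    If it helps to see the same behaviour outside of a video, here is a tiny Python sketch (purely illustrative; the names are made up and this is not Fabric/KL code):

    def build_array(count):
        print("report: building array")                # stands in for the report node
        return [float(i) * 0.5 for i in range(count)]  # stands in for the upstream branch

    _cache = {}

    def cached_build_array(count):
        if count not in _cache:           # upstream input unchanged -> reuse the stored data
            _cache[count] = build_array(count)
        return _cache[count]

    a = cached_build_array(10)   # prints "report: building array" once
    b = cached_build_array(10)   # no print: the data, not the code path, is what was cached
    assert a == b                # downstream still receives the full array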

    Eric Thivierge
    Kraken Developer
    Kraken Rigging Framework

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Fair enough :-)

    All good here, I do grasp the concept. I would expect some kind of "invalidate cache" mechanism, though, which any change to the data feeding the cache would trigger anyway. And, like Roy asks, how can code know about the validity of the cache? Or would the cache always get refilled when a read node accesses it after it has been cleared?

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • Roy Nieterau Posts: 258 ✭✭✭
    edited December 2015

    To me the cache is somewhat like a dirty-state propagation of the combined inputs of the graph that define this particular output value. If none of the inputs have changed, there is no "dirty" value, so the currently cached value is still valid and will be used. As such it takes the value from memory (if it's there) and returns it. If it's not yet (or no longer) in memory, it recomputes the graph before it to retrieve the value.

    So to me the cache basically assumes the currently stored (in memory) value remains valid as long as none of the inputs of any nodes before it in the graph (upstream) change.

    As such, the cache assumes that a node, when recalculated with the same inputs, produces the same result (i.e. it is not random); otherwise a random value would never update/propagate unless one of its inputs changed. In that view, a node that for example retrieves the system time, has no inputs, and is cached directly afterwards would not update... until the cache is released from memory.
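
    If I had to sketch that mental model in Python (my own illustration, not how Fabric actually implements it), it would be something like:

    class CachedOutput:
        def __init__(self, compute, inputs):
            self._compute = compute    # assumed deterministic for the same inputs
            self._inputs = dict(inputs)
            self._value = None
            self._dirty = True         # nothing computed yet

        def set_input(self, name, value):
            if self._inputs[name] != value:
                self._inputs[name] = value
                self._dirty = True     # an upstream change invalidates the cached value

        def release(self):             # the memory manager dropping the cached value
            self._value = None
            self._dirty = True

        def get(self):
            if self._dirty:            # recompute only when invalidated or released
                self._value = self._compute(**self._inputs)
                self._dirty = False
            return self._value

    node = CachedOutput(lambda a, b: a * b, {"a": 2.0, "b": 3.0})
    print(node.get())         # computes 6.0
    print(node.get())         # reuses the cached 6.0, no recompute
    node.set_input("a", 4.0)
    print(node.get())         # input changed -> dirty -> recomputes 12.0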

    Would love to be corrected if I'm wrong somewhere.
