WIP: Animation Helpers for Fabric

malbrecht Fabric for Houdini Posts: 752 ✭✭✭

Moin,

I have started another ambitious project to make capturing, playback and editing of animation data in Fabric "easy". Although I do have some very concrete ideas about how to proceed, I would welcome feedback on one question: are overly simple presets/nodes the right way to go to appeal to artists, or should I try to come up with some "complex ornamentation"?

Or, still serious here, is the whole idea of "abstracting" animation data into a single (floating point) data stream completely off anyway?

(Vimeo: )

I have a lot of ideas and plans and will - like with my other projects - continue to work on this, but any discussion, feedback, ideas, suggestions, questions are highly welcome, encouraged, asked for!

Marc


Marc Albrecht - marc-albrecht.de - does things.

Comments

  • mootzoid Fabric Employee Posts: 185

    Hi Marc,

    very interesting and promising tool.
    Having an (ASCII?) file format and a set of really easy presets is great, but wouldn't it maybe be better to use Alembic under the hood instead?

    Cheers,
    Eric

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Moin, Sire,

    thanks for the feedback!

    Having an (ASCII?) file format and a set of really easy presets is great, but wouldn't it maybe be better to use Alembic under the hood instead?

    Yes, it's ASCII - originally I was heading for XML, as I feel more comfortable reading and writing that, but some comments around this forum made me go for JSON instead.
    I think the advantage of having a human-readable file is that you can reliably get around issues like those discussed elsewhere in this community - and hand-crafting your own reader/writer is a breeze in whatever application you want to use the data in.
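
    Just to make this concrete, here is a minimal sketch of what such a human-readable archive could look like, together with a throwaway Python reader. The field names ("fps", "channels") and the channel naming are made up for illustration - this is not the actual FuNLAsh layout:

        # Minimal sketch (not the actual FuNLAsh layout): named scalar channels
        # stored as per-frame samples in a JSON archive.
        import json

        take = {
            "fps": 25,
            "channels": {
                "puppeteer.head.rx": [0.0, 0.12, 0.31, 0.47],   # one value per frame
                "puppeteer.head.ry": [0.0, -0.05, -0.02, 0.01],
            },
        }

        # Writing and reading back is a couple of lines in any language with a JSON parser.
        with open("take_001.json", "w") as f:
            json.dump(take, f, indent=2)

        with open("take_001.json") as f:
            loaded = json.load(f)

        def sample(archive, channel, frame):
            """Stored value of a channel at an integer frame (no interpolation)."""
            return archive["channels"][channel][frame]

        print(sample(loaded, "puppeteer.head.rx", 2))  # 0.31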

    Although I am a fan of Alembic, it sadly has some serious acceptance problems in most places I have had contact with (in comparison with FBX, sigh). Its implementation across various tools is not exactly "pain-free". And, probably the most important point against any "binary" format (though it's not really about binary vs. text): if I store, say, an "Xfo" into the ABC, it would depend on the DCC's interpretation of that to actually make use of this data. Read: the theoretical advantage of using a standard format collapses if the data you are going to store is not understood in a standardized way (the "animation cache" data I am currently dealing with is a stream of one-dimensional scalar values, so it would be of no use in any software dealing with Alembic).

    That is not to say that writing/reading Alembic data should not be on the to-do list! It would merely mean a limitation of the foundation if I relied on that from the start. For the experimental phase I think I should stick with human-readable data - and add layers of "abstraction" (read: Alembic, FBX) later on.

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • Roy Nieterau Posts: 258 ✭✭✭
    edited July 2016

    Although I am a fan of Alembic, it sadly has some serious acceptance problems in most places I have had contact with (in comparison with FBX, sigh). Its implementation across various tools is not exactly "pain-free". And, probably the most important point against any "binary" format (though it's not really about binary vs. text): if I store, say, an "Xfo" into the ABC, it would depend on the DCC's interpretation of that to actually make use of this data.

    The whole point of Alembic was always to make exactly this "more pain-free". With that task as one of its primary focuses, I must say it's been doing a really great job, and I don't know the industry acceptance issues you talk about. But it might be because we're in a slightly different field. Aside from that, I'm not sure if implementing your own format solves that particular problem. Yes, you can take total control of it in Maya, but it doesn't have "any acceptance" and as such is even a step back from Alembic if that's your primary concern.

    You're building this tool (first and foremost) for Fabric Engine. As long as it's consistent with Alembic in Fabric Engine, using Alembic is (on that point alone) just as good as rolling your own. All the other things (efficient caching, reliable API) can then just be taken for granted.

    Read: the theoretical advantage of using a standard format collapses if the data you are going to store is not understood in a standardized way (the "animation cache" data I am currently dealing with is a stream of one-dimensional scalar values, so it would be of no use in any software dealing with Alembic).

    A single-dimensional value "by itself" doesn't have any meaning in the context of Alembic, that's totally right. There it's only a relevant stream (I think) if it's related to a specific object, like an Xform. (In that sense it's similar to FBX, right?)

    Anyway, if it's purely about animation curves (key->value pairs, and maybe even their tangents) that you'd want to take along, I think looking at formats like .atom (Maya's animation transfer format) would also be really interesting, since it's meant for that purpose.

    I have a lot of ideas and plans and will - like with my other projects - continue to work on this, but any discussion, feedback, ideas, suggestions, questions are highly welcome, encouraged, asked for!

    Aside from the above points, I think some easy external loading of a "stream" like that in FE could be a good addition. What are your further plans for this? I'd love to hear some more of your ideas.

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭
    edited July 2016

    Moin, Roy,

    thanks!

    implementing your own format solves that particular problem

    ... like I said to Eric above: having a human-readable storage format while experimenting with the whole thing seems, to me, to make a lot more sense than storing data in a binary form that has to be interpreted by an extra tool before being debuggable.
    Just because I am using a readable text file while developing the system as such - in order to be able to spot problems in the implementation right away - does not mean other formats are off the table or may not become the format of choice.

    If the experimental format I use while playing with ideas is the major concern and would keep people from giving feedback, I'd be the first to implement an Alembic loader/saver. Right now, to me, that would only mean making things unnecessarily difficult, but since feedback/suggestions are what I am looking for, I can shift all efforts from making the system actually usable over to first creating Alembic saver/loader routines ;-)

    What are your further plans for this?

    The vision is to create a set of tools (nodes/"presets", GUI elements and processing/library functions) to handle "animation input/output" easily, accessible in a Fabric "environment". For this I am going to learn the necessary Qt/Python elements to build GUI elements (graph displays, click'n'drag controls etc.) that provide editing functions.

    • One of the concrete scopes is my BVH-manipulation system ("poor man's motion builder"), which is why I have BVH loading/saving as one of the short-term to-dos.
    • Another (probably more obvious) use case is being able to load in animation - say, from Alembic :) - and edit it (for which, really obviously, manipulation GUI elements are necessary).
    • Another (completely useless, because only targeted at my own needs) use case is to load and interpret physical data (measurements) and create animations from those ("procedural animation"). Example: weather data to automatically adjust scene settings for renders that represent actual weather states (rain, gusts, wind direction, sun, clouds), with an extension to read satellite weather data and render weather prediction ("forecast") movies.
    • Another use case is iterative animation capturing - as hinted at with the "puppeteer" channel in the video. Allowing an animator to iteratively add animation layers "by hand", playing back the previously recorded elements and adding new layers on top, might be an interesting way of, maybe, getting fast results without the need for actual motion capturing (see the layering sketch after this list).
    • Although, for the most part, I have left "my" previously beloved platform for 3D creativity and have not yet made a switch to another platform, I can still imagine adding what is in this tool's name to the DCC in question (i.e. "non-linear animation", allowing for "takes" and "shots" to be layered and recombined).
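
    To illustrate the layering idea from the "iterative capturing" bullet: in the simplest case a recorded channel is just a list of per-frame offsets, and playback sums all layers on top of a base value. A toy sketch (names invented, this is not the actual implementation):

        # Toy sketch of additive animation layers: each recorded pass stores
        # per-frame offsets for a channel, playback sums them on top of a base.
        def evaluate(base, layers, frame):
            value = base
            for layer in layers:
                value += layer[frame]
            return value

        head_rx_layers = [
            [0.0, 0.1, 0.2, 0.3],     # first recorded pass
            [0.0, 0.0, -0.05, -0.1],  # correction layer recorded on top
        ]
        print(evaluate(0.0, head_rx_layers, 3))  # 0.3 - 0.1 = 0.2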

    ... this just being a rough overview of ideas, with a lot more "fine-tuned stuff" under the hood. If all fails, I am, again, learning a lot while doing this. Creating and using the JSON archive alone has taught me something I wasn't able to do before. And since I am hoping for Fabric to become THE (or at least one of THE TWO) go-to systems for technical solutions (not only) in 3D, I have nothing against having some development to present if someone asks me to solve her or his problem.
    Which I was able to do a few times already.

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    ... what the heck, why not tell the whole picture.

    I intend to recreate modo's animation tools in/around Fabric. modo's animation department has a really great, almost fantastic foundation; it just suffers from being modo (read: everything gets done to about 75-80 percent, the rest never gets done), and the completely outdated GUI is a pain to work with.
    So my goal is, step by step, to provide the functionality of modo's animation tools, add on top, complete them, and be flexible enough to adapt to users'/animators' needs.

    My problem is that I don't get paid for that. I estimate that I would need about 8 weeks to recreate about 75% of modo's animation tools, enough to be usable inside Fabric, provided that I already know enough about Qt/Python to "just do it" (the Qt part I get, it's Python that I struggle with, to the point where I am considering doing the GUI part in C++ and just ignoring non-Windows platforms). Since I only have a few minutes to at most 2 hours a day, these 8 weeks may span about one and a half years even if I only worked on this, and that would only get me to said 75%. That's not exactly a great perspective - and there you have the reason why I want to get some feedback in the first place.
    I do have some contacts with people who animate and some with people who rig. So I can, once I am at a point where the tools are usable, refine stuff and go from there. It's the base that I need to get done, and I have so many other ideas of what I want to do with Fabric that it's the "moral support" that would keep me going.

    Done. Soul-striptease over a third Espresso. Now rip me apart.

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Looking at the animation extension in Fabric, I think I can wrap my own ideas around the core that is in there.
    My initial impression was that the animation interface is too limited, but I guess the stuff I want to do should even be possible using the built-in features with some additional layers.
    The advantage here being, obviously, that the FBX helpers already create clips, so that importing animation from FBX would be straightforward.

    So maybe the next step should be, taking your feedback into account, to write Alembic IO functions for tracks and clips.


    Marc Albrecht - marc-albrecht.de - does things.

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Slowly getting into it.

    Question: I do get how to create my own "implement some Canvas in your own app" tool, based on the AlembicViewer sample. But what would be the best approach to create a Python-based tool that can be ADDED to a "normal" Canvas, so that I can hook in my own set of tools (like "add keyframe" etc.)? My tests so far ended with the "main app loop" getting locked.
    What I would like to have is a way to simply load an additional tool (the anim tool shelf) AFTER a canvas has been opened - not only in the standalone, but everywhere I can open a canvas.
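
    To make the question more concrete, what I am after is roughly the following kind of thing: a plain non-modal Qt widget that attaches itself to whatever QApplication the host is already running, instead of spinning up its own blocking event loop (PySide assumed, and the shelf contents are just placeholders):

        # Rough sketch of the tool shelf: reuse the host's Qt event loop instead
        # of blocking it with a second exec_() call. PySide assumed; buttons are
        # placeholders for the actual animation tools.
        from PySide import QtGui

        def show_anim_shelf():
            # Reuse the already-running QApplication if there is one (e.g. inside
            # the Canvas standalone or a DCC); only create one when run standalone.
            app = QtGui.QApplication.instance()
            owns_app = app is None
            if owns_app:
                app = QtGui.QApplication([])

            shelf = QtGui.QWidget()
            shelf.setWindowTitle("FuNLAsh anim tools")
            layout = QtGui.QVBoxLayout(shelf)
            layout.addWidget(QtGui.QPushButton("Add keyframe"))
            layout.addWidget(QtGui.QPushButton("Record layer"))
            shelf.show()  # non-modal: returns immediately, does not lock the host loop

            if owns_app:
                app.exec_()  # only spin our own loop when nobody else does
            return shelf

        shelf = show_anim_shelf()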

    Sorry for maybe asking something too stupid.

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • EricT Administrator, Moderator, Fabric Employee Posts: 305 admin

    Hey Marc,

    You should have a look at the code for the UICmdHandler and how it implements commands that are passed to the main app. I think this is the correct way to go about it and how I tell the DFG Graph (invisible in the Alembic Viewer sample app) to change the alembic it is pointing to.

    Also, I'm not sure there is a mechanism built in to open a UI when a particular file is loaded. You'll probably have to save some metadata to the .canvas graph file (I think there are ways to inject that), and then your standalone can automatically look for that and open the UI if some data is found. Not sure about how to do that in the DCC integrations though.

    Eric Thivierge
    Kraken Developer
    Kraken Rigging Framework

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Thanks, @EricT

    Handing over data from the PyQt window to the underlying graph seems to be pretty straightforward judging from the samples. What I wonder is if there is a way to load the additional GUI element from an empty canvas, meaning: can I create some kind of extension that "fires up" the GUI when a preset node is instanced in a canvas?

    My goal is to have the PyQt window pop up when someone instances a FuNLAsh-archive preset in the graph. So whenever someone wants to use these helpers for animation in Fabric, the GUI will be available.

    The other way around, having a prepared canvas based on a modified *.py file, is simple; the AlembicViewer sample is sufficient for that. But I would like to be able to extend the "original" canvas and, obviously, the canvas that is visible from inside DCC integrations. The "connection" between a Fabric extension and an add-on GUI is what I currently don't understand.

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • EricT Administrator, Moderator, Fabric Employee Posts: 305 admin

    Hi Marc,

    I'd look at the UICmdHandler method dfgDoInstPreset. You could probably add some code there that checks the node name / preset path being instanced and pops up a UI.
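
    Roughly something like this - completely untested, and the exact dfgDoInstPreset signature may differ between Fabric versions, so treat the arguments as placeholders and check UICmdHandler in the sample code:

        # Untested sketch of the idea: subclass the command handler used by the
        # standalone and watch for the preset path you care about. Argument
        # handling is deliberately generic; verify against your Fabric version.
        class AnimShelfCmdHandler(UICmdHandler):
            def dfgDoInstPreset(self, *args, **kwargs):
                # Let the normal handler instance the preset first.
                nodeName = super(AnimShelfCmdHandler, self).dfgDoInstPreset(*args, **kwargs)
                # One of the arguments is the preset path; watch for yours.
                if any(isinstance(a, str) and a.endswith("FuNLAshArchive") for a in args):
                    openAnimShelf()  # placeholder for whatever opens your UI
                return nodeName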

    Eric Thivierge
    Kraken Developer
    Kraken Rigging Framework

  • seltzdesign Posts: 80

    Hi @malbrecht

    This looks amazing! I am still in the process of further developing our toolkit for creating and transferring animation (instance) data between our different tools. Remember, you helped me with a .csv reader for files that basically contain per-frame 4x4 matrix data of instances.

    So now that I have developed the other tools further, I am coming back to Fabric soon and am looking for a more sensible data format for exchange between the different software packages (vvvv, Grasshopper, Fabric and MASH in Maya). It will be either XML or JSON based (preferably the latter), but I would like a format that is as painless as possible to deal with in all of them. Since I know the least about Fabric and it has the fewest extensions/plugins, it might be good to start there and have a good data format for it. It seems like FuNLAsh is basically exactly what would make sense, since it can read and write streams of data, which can contain all sorts of data, but mainly things like Xfos or 4x4 matrices - either recorded per frame or using keyframes - and per-instance information like ID, geometry to use, color, etc.
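
    Just to sketch what I mean, something along these lines would already cover my cases: per-instance static data plus a 4x4 matrix either per frame or as sparse keyframes (all field names invented on the spot, not a proposal for a final format):

        # Sketch of a possible instance-exchange payload (field names invented,
        # not a final format): per-instance static data plus a 4x4 matrix stored
        # either per frame or as sparse keyframes.
        import json

        payload = {
            "fps": 30,
            "instances": [
                {
                    "id": 17,
                    "geometry": "pebble_A",
                    "color": [0.8, 0.7, 0.6],
                    "matrices": {              # frame -> flattened row-major 4x4
                        "0":  [1,0,0,0, 0,1,0,0, 0,0,1,0, 0.0, 0.0, 0.0, 1],
                        "12": [1,0,0,0, 0,1,0,0, 0,0,1,0, 2.5, 0.0, 1.0, 1],
                    },
                },
            ],
        }
        print(json.dumps(payload, indent=2))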

    Would you mind sharing a sample .txt file with me, so I can see the data structure you are using!?

    Is it possible to share this data directly between software using some sort of network protocol, something similar to OSC or Maya's command port, without the "detour" to the disk? Something that uses UDP, probably. This is probably unrelated to FuNLAsh, but maybe you have tried this. The idea would be to directly share data between software and computers without using a file written to the server, which seems like a lot of overhead for anything close to real-time.
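
    (What I have in mind is nothing fancier than the following: pushing the same JSON payload through a UDP socket instead of writing it to disk. Chunking and reliability would still need to be sorted out for larger streams; the host and port below are arbitrary.)

        # Minimal UDP round trip for small payloads (no chunking, no reliability;
        # larger streams would need framing on top). Host/port are arbitrary.
        import json
        import socket

        ADDRESS = ("127.0.0.1", 9000)

        def send(payload):
            data = json.dumps(payload).encode("utf-8")
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.sendto(data, ADDRESS)
            sock.close()

        def receive_once(timeout=5.0):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(ADDRESS)
            sock.settimeout(timeout)
            data, _sender = sock.recvfrom(65507)  # max safe UDP datagram size
            sock.close()
            return json.loads(data.decode("utf-8"))

        # e.g. send({"channel": "head.rx", "frame": 12, "value": 0.31})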

    Regards, Armin.

  • malbrecht Fabric for Houdini Posts: 752 ✭✭✭

    Moin, Armin,

    I have put this project aside for the time being, waiting for more information about Fabric's general direction (data wrangling? Animation? Rigging?). At the moment I am a bit unclear about what the Fabric developers have up their sleeves for the coming major releases.

    This is not to say that I would mind sharing what I did back then - but it really was more of an "experimental thing", figuring out what makes sense and what does not. I have to agree, now, that using a common file format like FBX or Alembic to transfer animation data between applications just makes more sense, even if an animation "host" for Fabric would treat that data differently from other applications - just keeping stuff in the pipeline instead of throwing more file formats (even if it's only CSV with arbitrary content) into the mix.

    I have had quite a lot of chats with riggers and animators over the last half year, and my impression so far is that there are two major concerns:
    One, obviously, being the rigging/animation pipeline (back'n'forth). As long as there isn't a dependable rigging environment in Fabric (Kraken is on its way for sure, but it just isn't there yet), riggers won't jump onto Fabric. And if the animator cannot rely on rigging being able to provide him with what he needs, animators won't use the platform either.
    Second: export, also obviously. If you are exporting to FBX/Alembic anyway, there is no real advantage in using an animation system in its infancy, because you could already use Maya (or Houdini 16, smirk). If we were able to get an export pipeline working that allowed better/easier tweaking in the render application - or game engine, for that matter - we would have a much better starting point. Not exporting keys on each and every frame would be a good thing, but we'd need to make sure that the target application interprets/interpolates the same way as the animation application. That, so far, isn't a given.
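
    A trivial example of why the interpolation point matters: the same two keys produce different in-between values depending on whether the target interpolates linearly or with an ease-in/ease-out curve, so sparse keys only travel safely if both sides agree on the interpolation. A quick sketch:

        # Same two keys, two common interpolation schemes: the in-betweens differ,
        # which is exactly the problem with exporting sparse keys between tools.
        def lerp(v0, v1, t):
            return v0 + (v1 - v0) * t

        def smoothstep(v0, v1, t):           # a typical ease-in/ease-out
            t = t * t * (3.0 - 2.0 * t)
            return v0 + (v1 - v0) * t

        key0, key1 = 0.0, 10.0               # keys at frame 0 and frame 10
        for frame in (2, 5, 8):
            t = frame / 10.0
            print(frame, lerp(key0, key1, t), smoothstep(key0, key1, t))
        # frame 2: 2.0 vs. 1.04 -> same keys, different animation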

    So what we would need to discuss is:

    • how do we get the target applications to interpret animation data 100% the same way the animation application does (topic "interpolation"). Here an animation interpreter based on Fabric using the same animation core would be needed in relevant target applications (game engines are a start). With my former favored DCC being out of the picture (modo) and me personally not using Game Engines, I am in a bad position for this one, because I don't have the render options I would like to have in a 100% Fabric workflow.

    • how do we make sure that the animation-rigging workflow (in both directions) is painless for both departments. If the rig is based on Kraken, we have a solid foundation; I am keeping an eye on Fabric 2.4.0++ for that. So far the feedback I got on rigging/animating in Fabric from non-Fabric-fanboys (like me) was frustrating, to say the least.

    Marc


    Marc Albrecht - marc-albrecht.de - does things.

  • AlexanderM Posts: 132 ✭✭
    edited January 23

    I am also interested to know the general direction of Fabric development. Game engines, web browser integration: it's interesting, but I am very far from it.

    Let's say NO to Autodesk®Fabric®
