Hello guys, it's me again, trying to do machine learning with motion capture data in FE. I have created a Neural Network object so I can simulate networks directly in FE. The learning phase is still done offline, outside Fabric.
I think this is an improvement over my existing pipeline because I can see results in real time, and also because I have more tools to manipulate/use/mix data.
I have created two videos. In the first video I give a brief overview of the applications of such an extension. In the second video I show how one can actually set up a network and bring in learned parameters, in this case from MATLAB. Of course, you can bring learned parameters from anywhere (OpenNN, OpenCV, and so on) as long as you store them in CSV format.
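To illustrate the idea of importing learned parameters, here is a minimal sketch in Python rather than KL. The CSV layout (one row per output neuron, bias in the last column) and the tanh activation are assumptions for the example, not the format the videos use:

```python
import csv
import io
import math

# Hypothetical weight export: one row per output neuron,
# the last column of each row being that neuron's bias.
csv_text = """0.5,-0.25,0.1
-0.3,0.8,0.0"""

weights = [[float(v) for v in row] for row in csv.reader(io.StringIO(csv_text))]

def forward(layer, inputs):
    """One fully connected layer with tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(row[:-1], inputs)) + row[-1])
            for row in layer]

out = forward(weights, [1.0, 2.0])
print(out)  # two activations, one per row of the CSV
```

Once the weights are plain CSV like this, it does not matter whether they were learned in MATLAB, OpenNN, or OpenCV; the simulation side only needs to read the numbers back in the agreed layout.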
In this small video series I outline some problems and data manipulation needs with moCap data in the context of Machine Learning, and try to make the case for how those can be addressed by using Fabric.
I know this is not Fabric's main target, but I actually find it quite useful in this context.
Sorry for my (at times) confusing presentation.
I have been working with moCap data, and most moCap research databases are in the BVH format. I had been converting everything in MotionBuilder to avoid writing my own BVHReader, but in order to scale, writing one was unavoidable.
So I have written a very simple BVHReader (read only; I have no intention of writing a writer) for the most common BVH case: only the root has translation and rotation channels, and all other joints have local rotations.
I have made the code available in my fabricUtils extension (https://github.com/gustavoeb/fabricUtils), although it has no dependencies on the rest of the extension (aside from the sample file), so it is easy to rip it out and use it on its own.
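For readers unfamiliar with the format, the common case described above can be sketched roughly as follows. This is an illustrative Python parser, not the KL code in fabricUtils, and it deliberately handles only that common case (it tokenizes the whole file, collects joint names and channel labels from the HIERARCHY section, and slices the MOTION data into per-frame lists):

```python
# Minimal BVH reader sketch (illustrative only, not the fabricUtils KL code).
# Assumes the common case: root has translation + rotation channels,
# other joints have local rotations; End Sites and offsets are skipped.

def parse_bvh(text):
    tokens = text.split()
    joints, channels = [], []  # joint names, channel labels per joint
    i = 0
    # --- HIERARCHY section: collect joints and their channel lists ---
    while tokens[i] != "MOTION":
        if tokens[i] in ("ROOT", "JOINT"):
            joints.append(tokens[i + 1])
        elif tokens[i] == "CHANNELS":
            n = int(tokens[i + 1])
            channels.append(tokens[i + 2:i + 2 + n])
            i += n + 1
        i += 1
    # --- MOTION section: "Frames: N" then "Frame Time: t" then the data ---
    n_frames = int(tokens[i + 2])
    frame_time = float(tokens[i + 5])
    values = [float(v) for v in tokens[i + 6:]]
    per_frame = sum(len(c) for c in channels)
    frames = [values[f * per_frame:(f + 1) * per_frame]
              for f in range(n_frames)]
    return joints, channels, frame_time, frames

# Tiny hand-written sample in the common layout described above.
sample = """HIERARCHY
ROOT Hips
{
  OFFSET 0 0 0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0 5 0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0 5 0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0 0 0 0 0 0 10 0 0
1 2 3 4 5 6 7 8 9
"""

joints, channels, frame_time, frames = parse_bvh(sample)
print(joints)  # joint names in hierarchy order
```

A reader like this stays simple precisely because of the restriction the post mentions: with translation only on the root, every frame is just a fixed-width row of floats in hierarchy order.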
Hope this can be useful to other people.
PS: Here are some databases you can try: