
Wednesday, January 15, 2014

Final mocap with keyframe

Below is my final motion capture with the keyframe aspects added in. In a previous post I had managed to get the motion capture to a workable stage by cleaning up the data. I found that in some areas the performance wasn't as strong as I was hoping, which may have been because the data wasn't very accurate when recording. To get around this, I emphasised certain actions whilst trying to keep the main performance at the core. I found it hard to distinguish between what makes it motion capture and what makes it keyframe. Surely animating over the top of the data turns it more into keyframe, so what is the point of motion capture? It's about finding that fine line between the two, keeping the integrity of the actor's performance whilst ensuring the animation has weight and believability in its movements.

I chose to use iPiSoft as it was a more accessible piece of technology that is becoming more commonly used. I wanted to see how well markerless technology could capture a performance. It had its issues with the accuracy of the data; however, when it did capture something, the movement came across as realistic and had subtle motions within it, something that may not have been achievable with keyframe. Through doing this project I can see the benefits of using motion capture: it's a quick way to get a base performance down or to work out the layout of a scene in 3D space. If you do add keyframe into motion capture you have to make sure there is stylistic consistency between the two, and this is why games companies may choose to use motion capture without any keyframe. It ensures that throughout the game the animation is of the same style, something that could be hard to achieve with keyframe if the animators have not had much experience with it. Both motion capture and keyframe have their strengths and weaknesses, but it should be about the performance and the believability of a character. If an audience can connect and empathise with a character because of the believability of the performance, then that should be all that matters.

Tuesday, January 14, 2014

The Imaginarium Studios / Marker motion capture

The Imaginarium was founded by Andy Serkis and Jonathan Cavendish in 2011. Using performance capture technology they currently have two productions under way: Animal Farm and, most recently (having secured the rights), The Bone Season. Andy Serkis is well known for his performance as Gollum in The Lord of the Rings and The Hobbit. The Imaginarium ran a workshop at BAF Game, showing their motion capture process and the software they use. When capturing a performance they will use around 50 Vicon cameras for the body, and sometimes another 50 just to capture the face. What's great about motion capture is that it provides real-time results: directors are able to see what is being produced without having to wait for animation to be completed. It's a much faster process than the pre-viz stage in animation, where characters would be animated in 3D space according to the storyboards. The virtual 3D camera can also be adjusted and moved around accordingly.

Fig.1 shows the setup at the workshop; there were around 10-12 cameras for this demonstration. Each Vicon camera works by projecting infra-red light, which is bounced back off the reflective markers on the body. Marker-based motion capture is very accurate, more so than markerless motion capture, because of these reflective markers. The markers are placed on the body in relation to where the joints bend and move. Props can be used to aid the performance and allow for believable reactions, and these can also be captured (Fig.2). This makes the interaction between actor and prop more realistic and believable in terms of contact with the object.

A fairly recent piece of technology that Vicon has produced is the Cara rig, which allows for more accurate facial motion capture. It has been designed for comfort and with the actor's performance in mind, whilst still being able to achieve high quality results. Previously, games such as L.A. Noire were able to achieve highly realistic performances with the use of MotionScan, however this could only be used on the face. This meant the body motion capture was not of the same standard as the face. The Cara rig attaches to the actor's head, and with this, full body motion capture can be achieved. Fig.3 shows the cameras' view of the face and the points that are being tracked.

Fig.1 

Fig.2 

Fig.3

Tuesday, December 24, 2013

Final Mocap Progress

For my final piece of motion capture I planned out a scene where I would recreate a shop window environment with a collection of mannequins in that window. One would come alive and be confused as to what has happened and why they can't get out. I captured this performance using a two camera setup as there was more movement than in my previous tests. The data I obtained from this was very messy and involved a lot of cleanup, to the point where it probably would have been just as quick, or even quicker, for me to keyframe the animation. What I have managed to clean up is shown below. That is the stage I've got to at the minute; I've had to adapt a lot of the motion as it was all in the wrong position and over-rotating in some joints. The movements are quite jittery in places and very snappy. I've tried adjusting this and it has reduced the amount, however it's not to the standard I would have liked. I want to try and put some more personality into the movements and add that appeal. I'll try this out using animation layers so that it's not destructive on the base motion and I can always delete it if it's not working out.
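As a rough sketch of how the animation layer idea might work in Maya (this is just my own minimal Python example, with placeholder control names rather than anything from my actual scene):

    import maya.cmds as cmds

    # Placeholder names - these would be whichever rig controls hold the baked mocap.
    controls = ['spine_01_ctrl', 'spine_02_ctrl', 'head_ctrl']

    # Create an additive animation layer so any exaggeration sits on top of the base motion.
    layer = cmds.animLayer('exaggeration_layer', override=False)

    # Add the controls to the layer; keys set while it is active won't touch the base keys.
    cmds.select(controls, replace=True)
    cmds.animLayer(layer, edit=True, addSelectedObjects=True)

    # Deleting the layer later removes only the extra keys and leaves the mocap untouched.
    # cmds.delete(layer)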

I've been using markerless motion capture, a fairly recent technology, and because of that it doesn't seem to be as accurate as marker-based motion capture such as Vicon. This is one reason it's taken me a lot longer to get it to a workable stage. I feel like I could animate it a lot better, and with more ease, if I was starting from scratch, and right now I do not feel comfortable with what I'm achieving. I don't know if I will be able to get the motions smoothed out like I would with keyframe animation, and it's frustrating me that I can't figure it out.

Scared Animation / MoCap

Another one of my tests with motion capture was to create a reaction to something; in this instance I chose to capture a scared/shocked emotion. I did this with a one camera setup as there wasn't any movement that would go behind the body and therefore not be captured by the camera. I tried two different ways of retargeting this time, one within iPi Mocap Studio and my original way within Maya. The two videos below show the original mocap retargeted within iPi Mocap Studio, and below that the Maya version. What was interesting to see was that when retargeted using the iPi software, the original data came out quite well and accurately, more so than doing it in Maya. The only issue with this process is that only FBX models can be imported, and I was not sure how to export one of my own rigged models as an FBX and into Mocap Studio. In Mocap Studio you have to connect each joint from the data skeleton to the imported character's joints, and I wasn't able to do this with my own model. In order to get more practice with cleaning up the data and using my own model, I think I'm going to stick with retargeting in Maya.





As with my previous mocap test I also created a keyframe version. This time I keyframed using my own reference, and from this I also created a more exaggerated animation on top of the base animation, using animation layers. Because I've used the Stewart rig from Animation Mentor, there is immediately more appeal in terms of design. In the exaggerated animation there is more of a reaction and it comes across more vividly than the base animation or the mocap version. To make it more appealing I would need to concentrate more on the line of action of the body and the arcs that are being created. These arcs are a natural motion and add appeal to movements. It's easier to create these with keyframe as you are starting from scratch; I'll try and implement these techniques into my motion capture to see how much I can edit the motion whilst keeping the main performance at the core.



Tuesday, December 17, 2013

Two Camera Setup - iPiSoft

For the past few tests that I have undertaken, I have just used a single Kinect camera. This is fine for recording performances from the front that won't really have any actions that disappear from the camera's view. I decided to try out a two Kinect setup, where the Kinects are placed at roughly 60 degrees to each other. Because there are two cameras, a calibration process needed to take place in order to determine where both cameras sit in 3D space. I held up a piece of cardboard and moved it from one camera to the other whilst standing in the same place. After this I was free to record my performance and take the data into iPi Mocap Studio. The calibration video needed to be opened up in Mocap Studio and calibrated based on the board's 3D plane. By saving this file, it could then be used as a reference when opening the main motion capture data file.


Below are two videos showing the raw data and the cleanup. I found that with two cameras the feet were more stabilised, however it didn't capture the movement of the left arm very well. This could have been down to the calibration setup, and I will need to check this before I go on to creating my final animation. Even the raw data is quite accurate, more so than a single camera setup would give me. For the cleanup I had to put in the movements for the left arm myself as they were not captured; I also corrected the feet and spine. The rigged model I've been using is my own, which I managed to fix from my previous attempt. It still needs some more adjusting but it's given me a decent result this time round.



Friday, November 29, 2013

Synthesis Crit

A few weeks ago we had a synthesis crit where we presented our ideas, research and any practical work, showing how we were going to synthesise practice and theory together. So far I had produced some practical tests of motion capture and also keyframe animation, to see the difference between the two and how it would affect the overall performance of the character. Something I've come across whilst doing this is that the rig I've been retargeting the motion capture data to doesn't have as much control as I would like. There aren't enough controls for certain parts of the body, e.g. the spine, so when I need to tweak parts of the motion capture data in the spine it's not that easy and sometimes not possible to get to certain joints. The way around this is to build my own model and rig (which I've done in a previous post) where I'm able to add in as much control as I need.

Regarding the synthesis part, I've been researching the pros and cons of motion capture along with performance, appeal and realism vs believability. All of this will help inform my practice by giving me prior knowledge of what will work better, e.g. certain software, performance and animation techniques, and what makes a believable character/performance. I will incorporate this into my work, and whatever I produce I will end up with my own findings which I will then relate back to my writing. I will be learning throughout and making links between both aspects.

Because I'm looking into performance and aiming more at film than games, it will be good to give my final animation a story, something I can work with that will make the whole thing more believable. In order to give a good performance you should understand your character first. A story can also add appeal in the form of how the character may react in certain situations. It will be interesting to see how it all turns out and whether I come out of it with any new findings.

Monday, November 11, 2013

Practical Element - Character

Through testing out motion capture and retargeting mocap data onto rigs, I have found that I'm not able to get the amount of control that I would like from some of the rigs. The rig I have been using is also not plain enough, so its aesthetics still become a contributing factor. As I am focusing purely on the animation and performance, I don't want the model or textures to affect this. To overcome this I've modelled and rigged my own character. I've tried to create a model that doesn't have appeal in the way of design or textures, and to make sure it doesn't fall into the uncanny valley I've taken away the face and hands and replaced them with basic forms. Eye lines are one of the key factors in any performance, but they can also be the tipping point for something becoming uncanny; the eyes need to be believable and not feel 'dead'. As there is no face, it wouldn't look right aesthetically to have human-like hands, and creating mitten hands also allows me to spend more time on the mechanics of movement and the pure performance. The iPi software doesn't capture finger movements, only the wrist action, so I would have had to add these all in myself using keyframe.

I used the HumanIK system within Maya to set up my skeleton. When you retarget data to a rig, you first have to make sure the rig is characterised, which you do within HumanIK, so it made sense to create a rig this way. I've weighted all the joints, however I'm having a few issues with the controls. HIK will set up the controls for you; I need to go back in and check a few things, because when I tested it earlier the controls weren't working as they should. I may find that I have to put the controls in by hand (which isn't a major task); the worst case scenario would be having to delete the rig and start again, this time rigging it by hand. Hopefully it won't come to that and I can get it sorted soon. Once it is sorted, I plan to put my old mocap data onto this rig just to see how it works. I may still need to tweak and adjust a few things at that point. I want to make sure the rig is working as well as it can before I move on to my major piece of mocap animation.
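If I do end up putting controls in by hand, the basic idea would be something like the Maya Python sketch below (the joint and control names here are just placeholders, not my actual rig):

    import maya.cmds as cmds

    # Placeholder joint name - swap in the real joint from my skeleton.
    joint = 'LeftForeArm'

    # Build a simple circle control, snap it to the joint, then freeze its transforms.
    ctrl = cmds.circle(name=joint + '_ctrl', normal=(1, 0, 0), radius=2)[0]
    cmds.delete(cmds.parentConstraint(joint, ctrl))
    cmds.makeIdentity(ctrl, apply=True, translate=True, rotate=True, scale=True)

    # Drive the joint's rotation from the control.
    cmds.orientConstraint(ctrl, joint, maintainOffset=True)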


Wednesday, November 6, 2013

Ratatouille

One of the films that I'm going to look at for my extended writing piece is Ratatouille. As I'm going to be researching appeal and performance, Ratatouille is a really good film to look into, as Pixar managed to make a rat appealing to the audience. A lot of it was down to the design of the characters, as this is the first thing an audience sees; they needed to take away previous misconceptions about rats and portray them as something cute and appealing.
"If the visual qualities of character get the viewer’s attention during the storytelling, that’s appeal." 
With Emile it was a lot easier to make a chubby rat more appealing, as this in itself makes something a lot cuter. The type of personality that Emile has also adds to the appeal: clumsy yet friendly, and mostly interested in his food. These personality traits can influence the animation style and performance. The 12 principles of animation will always play a big part in any animation work - arcs, follow through and overlapping action, exaggeration, appeal, solid drawing etc. - they all create a more realistic performance but in a stylised way. Animation performances can be extremely exaggerated, but the characters still need to be believable and connect with the audience. For Ratatouille, one thing to pick up on is how the rats run; it's very quick but has a rhythm to it. The animators studied real rats, how they moved and interacted with objects, and by understanding the real motion they could then adapt it to produce a performance that expressed personality and charm.


Friday, October 25, 2013

L.A. Noire and Facial Capture

L.A. Noire (2011) used the MotionScan system (created by Depth Analysis) to record its facial motion capture. It was vital that they got accurate facial capture in order for players to be able to tell when a character is lying. MotionScan is markerless technology, so there is no need for actors to be put in any type of suit or have markers on their face. This reduces the time it takes and adds an ease to the whole process; it took around 30-60 minutes to scan the whole head. From here the actors would be recorded on the capture stage, where 32 cameras were set up in order to capture the whole head. Depth Analysis were able to record audio at the same time as the facial capture so that it would all sync up. The body and neck movements come from a separate session of motion capture; the problem with this is that the body can sometimes look disjointed from the head, and the movements aren't as natural and realistic as the facial expressions. It somewhat breaks the player's immersion, and I found it really noticeable myself.
"We didn't want animators touching up the data. Each time you do that, you lose a bit of the personality. We wanted LA Noire to be as authentic as possible"  - Oliver Bao, head of research, Depth Analysis
As the MotionScan system eliminated the need for data clean-up, it increased the amount of animation that could be produced in a day. Around 15 minutes of animation were achieved a day, an amount an animator could otherwise spend almost a week cleaning up. Since the game was released in 2011, Depth Analysis has been aiming to increase the capture resolution so that it can also include the full body. This will be really beneficial, and it will be interesting to see how it progresses. When it is complete it will be great to see how the two compare - facial capture with separate motion capture data for the body versus full-body capture.

The faces themselves are very realistic and almost an exact match to the actor's face. When captured, the faces are of a much higher resolution (more suitable for film), so when transferred in game they lose some of their quality, but the core believability is still there. There are some eye movements which sometimes take away from the realism of the characters, yet it's not at the point of the Uncanny Valley. L.A. Noire hasn't fallen into this category; normally when games or films attempt to achieve really realistic results, it ends up becoming eerie and feeling 'uncanny'. It's been said that the MotionScan technology was the main reason for escaping the Uncanny Valley. Because of the high realism in the face, it didn't have that 'fake' feeling or any of the eeriness associated with the uncanny.

Thursday, October 24, 2013

First Mocap Test with iPi Soft

On Tuesday I did my first test with iPi Soft - markerless motion capture. This is the main piece of equipment I'll be using for the practical side of my extended essay. I had previously tested it out during summer, however I wasn't able to export the data due to it being a trial version. Because of that test, though, I knew the basics of setting it all up and recording. When setting up the Kinect for recording you have to sort out the background first. Choosing depth mode in the recorder allows you to see different colours; the main thing you want to get rid of is any areas of yellow, so the Kinect can get the best scan possible. Another thing to consider is what clothing you wear: you want slim clothing, and if you have long hair, to tie it back. (I didn't tie my hair back, so the head on the pure motion capture test was very wonky.)

For this initial test I just used one Kinect. The more cameras you use the better the scan, but it also takes longer to process. There wasn't much movement happening in this test, so one Kinect was more than enough for what I wanted to capture. Before recording you must take a snapshot of the background (once this has been done the camera cannot be moved), then you can go ahead and record. The actor must strike a T-pose before commencing any main action, as this helps with the building and tracking of the character.

Once this was all recorded there were a few options within iPi Mocap Studio to edit the data and fit it onto a character. Using tools such as refit pose (the model is fitted to the motion capture data) and track forward (tracks the data onto the character) will get you what you need to export the BVH file and take it into another programme. It is also possible to import a pre-rigged character into Mocap Studio and retarget the data to it there, which eliminates the need for another piece of software such as MotionBuilder.

I tried taking the BVH file into MotionBuilder and editing it there, however I had some issues with it and wasn't able to pin the feet down too well. I thought that by flattening out the F-Curves it would hold the pose, yet this wasn't the case. This test had a lot of issues with the feet, which may have been down to the floor being shiny - the Kinect may not have coped very well with that. Hopefully the next test might work out better, as I plan to do it on carpet.

I took the FBX file into Maya and retargeted this onto a character rig. I followed a Digital Tutors tutorial which helped me greatly with this stage. Within MotionBuilder it is a much simpler process, however in Maya it takes a lot longer to do. Luckily a script was provided to speed up the whole process; without it you would need to group each control and create other groups for retargeting. All in all it wouldn't have been as simple as MotionBuilder. To clean up the data I had to bake the animation and then simplify the curves. Simplifying reduced the number of keyframes, and I was then able to go in and edit. I found this stage quite easy and had no issue with the cleanup. Because of the restricted space I worked in, the jump that I performed was very unenthusiastic, with not much energy, so when cleaning up I thought I would make it more obvious that it was a jump and pushed it a bit further (without making it too exaggerated, as that is what I want to keep for the pure keyframe animation version).
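For reference, the bake and simplify step looks roughly like the Maya Python below - this is only my own sketch with placeholder control names, not the Digital Tutors script itself:

    import maya.cmds as cmds

    # Placeholder names for the rig controls that received the retargeted motion.
    controls = ['hips_ctrl', 'chest_ctrl', 'l_foot_ctrl', 'r_foot_ctrl']

    start = cmds.playbackOptions(query=True, minTime=True)
    end = cmds.playbackOptions(query=True, maxTime=True)

    # Bake the constrained motion down to keys, sampling every 2 frames to thin out the data.
    cmds.bakeResults(controls, simulation=True, time=(start, end), sampleBy=2)

    # Smooth the baked keys a little before hand-editing in the Graph Editor.
    cmds.keyTangent(controls, inTangentType='auto', outTangentType='auto')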

As you can see below, the first video is of the pure motion capture data retargeted to a character rig provided by Digital Tutors. Underneath that is the cleaned up motion capture, which I keyframed in order to correct some of the movements. I had to do a lot more clean up and editing of keyframes than I had anticipated - this may have been due to the environment affecting the data capture. Maybe next time it will take less time, but it all depends; motion capture and keyframe could end up taking about the same amount of time.


Monday, October 21, 2013

Retargeting options

One option for my practical piece of work is to explore retargeting with different pieces of software, e.g. within MotionBuilder or Maya. Retargeting is the process of transferring motion capture data onto another rig, where it is then possible to clean up and edit. At the minute I am far more accustomed to Maya, so you might expect me to find that process a lot easier, however what I'm finding is that it seems to be much more complex than with MotionBuilder. It will be interesting to see which technique will be more beneficial to me, not only in how long the process takes but also in the results it produces. Right now I will be testing out the Maya approach, and then I will move on to retargeting within MotionBuilder.

When it comes to retargeting within Maya you need to have a compatible rig. For this you need to either define an existing rig or create and define one from scratch. I will be trying out both techniques; for the existing rig I will be using one from Digital Tutors. Creating and defining one from scratch is an option to consider and possibly take further, as it would give me my own model and rig using the HumanIK system. An initial idea was to possibly do a collaboration with Alex on my course, however it will depend heavily on our time constraints and the equipment available on my part. As of now, though, I am just working on tests to see if it would be possible and to create my own body of work that will help explore themes within my extended essay.
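To give an idea of what the Maya retargeting amounts to underneath, here is a stripped-down Python sketch of the constrain-then-bake approach - the joint and control names are hypothetical, and real rigs usually need offset groups and unlocked channels for this to work:

    import maya.cmds as cmds

    # Hypothetical mapping from mocap skeleton joints (from the imported data) to rig controls.
    joint_to_ctrl = {
        'Hips':     'hips_ctrl',
        'Spine':    'chest_ctrl',
        'LeftFoot': 'l_foot_ctrl',
    }

    # Constrain each control to its matching mocap joint, keeping the current offset,
    # so the rig follows the data and can be baked down to keys afterwards.
    for joint, ctrl in joint_to_ctrl.items():
        cmds.parentConstraint(joint, ctrl, maintainOffset=True)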

Thursday, October 10, 2013

Quick Motion Capture Test

Before I start properly on the practical side of this module I thought I should try out cleaning up some motion capture within MotionBuilder and then taking it into Maya. I've been watching tutorials over the summer about how to use MotionBuilder, as it's new software that I will be taking on. The tutorial I followed for this test was from 3D Artist, Issue 58. Using the motion capture data that came with the magazine, I plotted it onto a standard character in MotionBuilder, cutting up the data sequence in order to get a specific part of the run. It wasn't as easy as I thought it was going to be, and I'll definitely need some more practice with the software as a whole.

Motion capture needs to be cleaned up as a lot of the time there will be sliding of the feet, and some unnatural movements may also occur, e.g. flipping of joints. I managed to clean up some of the feet but I think it could still be improved. I'm used to having a rig that I have full control of, e.g. foot roll, so I'm trying to learn how to compensate for this. The motion capture data I was working with was from optical motion capture; I, however, will be using markerless motion capture. I hope this won't affect the quality of the data and that I'll still be able to achieve good results with it.

One thing I need to test out is editing motion capture within Maya. I know it's possible to do, however I need to explore this option some more and see if it requires a specific file type. The ideal situation would be to use MotionBuilder as part of my pipeline as this is industry standard. This is just the start, so I'll have to keep testing and see where it gets me. Once I get the hang of it I'll be able to implement my theory and knowledge of motion and performance into the characters, and see if this has an effect on their overall appeal.
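As an example of the kind of Maya edit I have in mind for foot sliding, here's a minimal Python sketch - the control name and contact frames are made up purely for illustration:

    import maya.cmds as cmds

    # Hypothetical foot control and the frame range where the foot should stay planted.
    foot_ctrl = 'l_foot_ctrl'
    contact_start, contact_end = 12, 24

    # Read where the foot is at the first contact frame...
    planted = cmds.getAttr(foot_ctrl + '.translate', time=contact_start)[0]

    # ...then re-key the contact range so the foot holds that position instead of sliding.
    for frame in range(contact_start, contact_end + 1):
        for axis, value in zip('XYZ', planted):
            cmds.setKeyframe(foot_ctrl, attribute='translate' + axis, time=frame, value=value)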

Sunday, October 6, 2013

Final year begins...

Before I broke up for summer, I had to start thinking about what I wanted to do for the upcoming module, Context of Practice 3. Unlike previous years, this has now changed from a dissertation to an extended writing piece with a synthesised practical. With this in mind I immediately knew I wanted to base my practical around motion capture, and for the essay side to combine keyframe animation (which I specialise in now) and compare it to motion capture. One of the main reasons for doing something like this is so that I can explore a new technique and broaden my knowledge within the animation field. At the minute I'm working around the idea of 'keyframe animation vs motion capture - how this can affect the overall performance and appeal of the characters'. This is subject to change and is not a definite title as of yet. Over the summer I've been researching several different topic areas I may cover, such as the Uncanny Valley, appealing animation and the pros and cons of mocap, and also learning the software I need to use in order to create my own pieces of motion capture.

For the practical side of the module I'm hoping to do some short tests to compare and contrast the animation between mocap and keyframe. The keyframe animation I'll be doing myself, using the raw footage of the motion capture as reference. I'm planning to exaggerate the performance and see if I can achieve more appeal in the movements and the characters. A possibility at the end may be to combine both motion capture and keyframe within the same scene to see whether or not there is a visually noticeable difference.

Throughout this module I'll be posting up my research and findings on the subject, and will be heading off to BAF Game for some specific motion capture talks and workshops. This should all benefit me a great deal and I'll be able to implement it in my essay. I hope to get some practical tests up soon; right now I'm planning on using iPi Soft - markerless mocap - and then taking this into MotionBuilder for cleanup. I'll also be keyframing other animations, sticking with Maya for those. I'm looking forward to seeing how this all ends up - the research I've done so far has really intrigued me and I can't wait to really get started on it all!