
Critical appraisal

The task was to create and animate a character, using motion capture data to control its movements, and to adapt that data to fit an alternate character rig. We started with storyboards, mood boards, and concept designs, and considered the type of immersive experience we wanted to create. The goal was to render the final animation and submit it before the deadline.

Initially I made good progress with my character's modelling, but some things could have been better: the legs should have been thicker, and one of the arms was too thin and uneven. This caused issues during the photogrammetry process, leaving gaps around the arm. I managed to fix it and continue working, and in the end the character turned out reasonably well. To make rigging easier, I created a mirrored version of the character by cutting it in half and duplicating one side so both halves matched. I successfully unwrapped the UVs for both versions. However, I had difficulty painting directly on the UV map, so I used different texture paints instead. I wanted to add a face to the character to improve its appearance, but I ran out of time.
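The mirroring step described above is essentially what Blender's Mirror modifier automates. As a rough illustration, the plain-Python sketch below (using hypothetical vertex data, not Blender's bpy API) reflects one half of a mesh across the X axis while keeping centre-line vertices single so the seam stays welded:

```python
def mirror_mesh(vertices, tol=1e-6):
    """Mirror a half-mesh across the X=0 plane, roughly as
    Blender's Mirror modifier does, sharing seam vertices."""
    mirrored = []
    for x, y, z in vertices:
        mirrored.append((x, y, z))
        # Vertices on the centre line (x ~ 0) are shared, not duplicated.
        if abs(x) > tol:
            mirrored.append((-x, y, z))
    return mirrored

# Hypothetical half-character: two seam vertices and one off-centre vertex.
half = [(0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (0.5, 0.2, 0.5)]
full = mirror_mesh(half)
# Seam vertices stay single; the off-centre vertex gains a mirrored twin.
```

In Blender itself the same result comes from adding a Mirror modifier with clipping enabled, which is usually preferable because it stays non-destructive while modelling.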

For the first part of the assignment, I animated my character in Blender using a basic rig and added textures. I also gave the character a moving mouth using shape keys. According to the storyboard, the character was supposed to perform with background singers, with an audience clapping at the end of the song. However, I ran into difficulties rendering the character's hair, so I removed it and replaced it with a hat that I hadn't designed myself. I successfully rendered the animation in Blender and added audio, but it didn't turn out exactly as planned: the character's movements were not as good as I had hoped.
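Shape keys like the mouth movement above work by blending a basis mesh toward stored target shapes by a per-key value between 0 and 1. A minimal sketch of that interpolation (plain Python with hypothetical vertex data, rather than Blender's bpy API):

```python
def apply_shape_keys(basis, keys):
    """Blend a basis mesh with weighted shape keys:
    result = basis + sum(value * (target - basis)) per vertex."""
    result = [list(v) for v in basis]
    for target, value in keys:
        for i, (b, t) in enumerate(zip(basis, target)):
            for axis in range(3):
                result[i][axis] += value * (t[axis] - b[axis])
    return [tuple(v) for v in result]

# A "mouth open" key at half strength moves the jaw vertex halfway down.
basis = [(0.0, 0.0, 0.0), (0.0, -0.1, 0.0)]
mouth_open = [(0.0, 0.0, 0.0), (0.0, -0.3, 0.0)]
posed = apply_shape_keys(basis, [(mouth_open, 0.5)])
# posed[1] is approximately (0.0, -0.2, 0.0)
```

Animating the mouth then amounts to keyframing each key's value over time, which is what Blender does when you insert keyframes on a shape key's Value slider.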

The second part of the assignment focused on improving the character's movements by rigging it and adapting motion capture data in Maya. Connecting the data to my character using inverse kinematics (IK) went well, but I faced some challenges in defining the character's appearance, and had to try different methods, rig edits, and weight-painting techniques to achieve the desired result. Once the character was rigged to the IK data, I started animating it. I tried animating in Maya first and was satisfied with the results; I also liked the environments I created there and the character's movements. However, I felt more comfortable using Blender, so I decided to replicate the process there.

In Blender, I successfully applied the motion capture data to my character's rig, which allowed me to build the right environment with the desired lighting. Overall, the process in Blender worked well, but there were some issues: the character could have been better proportioned, and the rig controls could have been more precise. I decided to change the story slightly and introduce a different character I had created in Blender, a pink bear. This character had better topology, which made the retargeting process smoother. I also made some adjustments in weight painting, focusing on the head and arms; this improved the character's motion, which had previously been dragging in the wrong areas.

With the characters rigged and the motion capture data applied, I thought I was ready to create the final video and follow the story. However, some movements didn't turn out as well defined as I had hoped, and it was difficult to control which direction the character was facing.
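The weight-painting fixes mentioned above come down to how skinning weights distribute bone influence over each vertex. The plain-Python sketch below (hypothetical weights and bone offsets, translation only for brevity) shows why a stray weight makes a vertex "drag" with the wrong bone, and how normalising and repainting fixes it:

```python
def normalize_weights(weights):
    """Normalise per-vertex bone weights so they sum to 1,
    similar in spirit to Blender's auto-normalize weight-paint option."""
    total = sum(weights.values())
    if total == 0:
        return weights
    return {bone: w / total for bone, w in weights.items()}

def skin_vertex(position, weights, bone_offsets):
    """Simplified linear blend skinning: move the vertex by the
    weighted sum of bone translations (real skinning uses full
    bone transforms, but the blending principle is the same)."""
    x, y, z = position
    for bone, w in weights.items():
        ox, oy, oz = bone_offsets[bone]
        x, y, z = x + w * ox, y + w * oy, z + w * oz
    return (x, y, z)

# A head vertex accidentally painted 25% onto the arm bone is dragged
# whenever the arm moves; after repainting, it follows only the head.
bad = normalize_weights({"head": 0.75, "arm": 0.25})
fixed = normalize_weights({"head": 1.0, "arm": 0.0})
offsets = {"head": (0.0, 0.0, 0.0), "arm": (1.0, 0.0, 0.0)}
# skin_vertex((0, 0, 2), bad, offsets)   -> dragged to (0.25, 0.0, 2.0)
# skin_vertex((0, 0, 2), fixed, offsets) -> stays at   (0.0, 0.0, 2.0)
```

This is why cleaning up the head and arm weights stopped the mesh from dragging in the wrong areas during retargeted motion.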

The spaces used for the assignment included a bathroom, a stage created in Blender, and a street environment made with Polycam. Despite not having the right equipment for automatic environment scanning, I used video photogrammetry to capture the street environment. The result was decent, but I believe it could have been better.
