More specifically, I'm doing a theatre demonstration sequence adapted from Shakespeare's Henry V.
There are restrictions in Unity on how much I can vary the visuals, and they're limiting the direction I'm heading in at this point, so I'm cutting it out of the pipeline.
Instead, I'm pulling in pre-rendered clips and attaching them to user interaction. That takes away the latency problem and keeps things no more complicated than they need to be.
Chorus. O for a Muse of fire, that would ascend
The brightest heaven of invention,
A kingdom for a stage, princes to act
And monarchs to behold the swelling scene!
Then should the warlike Harry, like himself,
Assume the port of Mars; and at his heels,
Leash'd in like hounds, should famine, sword and fire
Crouch for employment. But pardon, gentles all,
The flat unraised spirits that have dared
On this unworthy scaffold to bring forth
So great an object: can this cockpit hold
The vasty fields of France? or may we cram
Within this wooden O the very casques
That did affright the air at Agincourt?
O, pardon! since a crooked figure may
Attest in little place a million;
And let us, ciphers to this great accompt,
On your imaginary forces work.
Suppose within the girdle of these walls
Are now confined two mighty monarchies,
Whose high upreared and abutting fronts
The perilous narrow ocean parts asunder:
Piece out our imperfections with your thoughts;
Into a thousand parts divide one man,
And make imaginary puissance;
Think when we talk of horses, that you see them
Printing their proud hoofs i' the receiving earth;
For 'tis your thoughts that now must deck our kings,
Carry them here and there; jumping o'er times,
Turning the accomplishment of many years
Into an hour-glass: for the which supply,
Admit me Chorus to this history;
Who prologue-like your humble patience pray,
Gently to hear, kindly to judge, our play.
[Exit]
Seven bundles of dialogue, Seven pre-rendered sequences, Seven areas to mark.
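Since the seven pre-rendered sequences map one-to-one onto the seven marked areas, the triggering logic boils down to a lookup from the tracked user position to a clip. Below is a minimal sketch of that mapping, written as if the clips were still being played from inside Unity via its VideoPlayer; the class name, the rectangular zone layout and the playback method are my assumptions, not the actual setup:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Rough sketch of the zone-to-clip mapping described above.
// Assumptions: the tracked user position arrives as a Vector3 (e.g. from the
// Kinect), the seven marked floor areas are axis-aligned rectangles, and the
// seven pre-rendered sequences are assigned as VideoClips in the inspector.
// Names and thresholds here are illustrative, not the actual project values.
public class ZoneClipTrigger : MonoBehaviour
{
    [System.Serializable]
    public struct Zone
    {
        public Rect floorArea;   // x/z bounds of one marked area, in metres
        public VideoClip clip;   // the pre-rendered sequence tied to that area
    }

    public Zone[] zones = new Zone[7];   // seven areas, seven sequences
    public VideoPlayer player;           // plays the selected clip

    int activeZone = -1;                 // -1 = user not inside any zone

    // Call once per frame with the user's tracked position.
    public void UpdateUserPosition(Vector3 userPos)
    {
        for (int i = 0; i < zones.Length; i++)
        {
            // Compare the user's x/z against the zone's floor rectangle.
            if (zones[i].floorArea.Contains(new Vector2(userPos.x, userPos.z)))
            {
                if (i != activeZone)     // only trigger on entering a new zone
                {
                    activeZone = i;
                    player.clip = zones[i].clip;
                    player.Play();
                }
                return;
            }
        }
        activeZone = -1;                 // user stepped out of all zones
    }
}
```

If the clips end up being handed off to an external player instead, the same zone lookup still applies; only the two lines that set and play the clip would change.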
Feedback Welcome.
Worked on debugging/cleaning the code:
-Researched ways to have the Kinect automatically detect the user's position without needing the calibration pose
-Removed the image jitter by damping the rotation and position values the Kinect reports for the user (see the sketch after this list).
-Removed the huge sway that appeared when the user rotated their body left and right: the camera now stays at a single fixed rotation and only translates, so when you move around the area the proper illusion holds.
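As a rough illustration of those last two fixes, here is a minimal Unity-style sketch of damping the tracked head position and locking the camera rotation. The component name, the smoothing constant and the way the head position is fed in are my own assumptions, not the project's actual code:

```csharp
using UnityEngine;

// Sketch of the two fixes described in the list above: damping the raw Kinect
// position to remove jitter, and locking the camera rotation so that only
// translation follows the viewer.
public class HeadTrackedCamera : MonoBehaviour
{
    public Camera viewCamera;                  // the camera driving the illusion
    [Range(0.01f, 1f)]
    public float smoothing = 0.15f;            // lower = heavier damping

    Vector3 smoothedHead;                      // filtered head position
    Quaternion fixedRotation;                  // rotation frozen at start-up

    void Start()
    {
        // Lock the rotation once; from here on the camera only translates,
        // so turning your body no longer sways the whole projected scene.
        fixedRotation = viewCamera.transform.rotation;
        smoothedHead = viewCamera.transform.position;
    }

    // Call each frame with the raw head position reported by the Kinect.
    public void OnHeadPosition(Vector3 rawHead)
    {
        // Exponential smoothing: blend a fraction of the new reading into the
        // running value, which kills frame-to-frame jitter at the cost of a
        // small amount of lag.
        smoothedHead = Vector3.Lerp(smoothedHead, rawHead, smoothing);

        viewCamera.transform.position = smoothedHead;
        viewCamera.transform.rotation = fixedRotation;   // never rotates
    }
}
```

The trade-off with this kind of smoothing is a little lag behind fast movements; the smoothing value sets where that balance sits.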
Cleaned up my general creative pipeline/workflow for this specific project; it's now smarter, faster and easier to work with.
-Optimised the pipeline to allow an easier transition from my current toolset to surface projection.
-More portability: any demonstration I set up can easily be adjusted to match the angle of projection (rough sketch of the idea below).
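The projection-angle point could look something like this in practice: keep the whole demo parented under one rig whose orientation is a single exposed value, so re-aiming it at a new surface is one change. This is only a simplified sketch under that assumption; the single-angle model and all the names are mine, and a real surface-projection setup may need a fuller calibration step:

```csharp
using UnityEngine;

// Sketch of the "adjust the angle of projection" idea: instead of baking the
// projector angle into the scene, everything sits under one rig whose tilt is
// a single exposed value.
public class ProjectionRig : MonoBehaviour
{
    [Tooltip("Tilt of the projection surface relative to the projector, in degrees")]
    public float surfaceAngle = 0f;

    void LateUpdate()
    {
        // Re-orient everything parented under this rig to match the surface,
        // so moving the demo to a new wall or angle is one value change.
        transform.localRotation = Quaternion.Euler(surfaceAngle, 0f, 0f);
    }
}
```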
Currently need to:
-Finish reading up on the history of trompe l'oeil.
-Finish reading up properly on the DSLR's low-light recording functions.
-Sort out the delay in the image.
-Finish experimenting with animating textures onto models.