We have been working on a novel way to tie collaborative set design to pre-viz for virtual production. All of the software used is open source. Everything is running on a £250 second-hand Dell 1U rack server. The ambition has been to provide access to creative workflows for underserved global communities, so that they can bring their creative ideas into high end #virtualproduction. What you're seeing here is a raw alpha straight out of the very first render attempt. This is the proposed workflow:
- Open a local, open source collaborative VR space using an engine which can serve both headsets and a simple WebGL window in a browser.
- Get senior visual stakeholders together for collaborative pre-viz in the space, but also bring in collaborators from all over the world using our open source flossverse stack.
- Ideate the massing, lighting, camera angles, shot tracks, assets etc. in the collaborative space using supportive ML tools.
- Capture the co-ordinates of the camera track and export the low resolution frames.
- Run the frames through Stable Diffusion, ControlNet and EBSynth for rapid iteration by the director, the DP and the set designers (see the sketch just after this list).
- Apply the style you want, upscale, and go straight back into the in-camera VFX LED wall pipeline. This can be 8K, at whatever framerate.
- Apply the camera track co-ordinates to a camera robot
- The result is a "refined look and feel" from the day, ON the day, with all of the digital asset massing and co-ordinates exportable to Unreal for refinement. These can obviously also be used in post.
- All of this without ever having to leave the £250 Dell box (assuming a Colab to do a bit of Stable Diffusion) and no Unreal. Crucially, all the creators in the system are communicated with over #nostr, which requires no Western identity proofs, and they can be paid in real time over the Lightning Network as they assemble the scene (a sketch of the nostr event structure is just after this list). This is profoundly enfranchising, and nobody can stop it. The stack is deployed and all the components work, but there is more to do.
- The video output below was based on the single prompt "high quality professional photo of a luxury island resort". The originating low-polygon collaborative VR graphics are shown in the comments. I'll try to get this up on the Pathway XR wall to see what it looks like in camera, and post in the replies, but I am pretty excited, so it's getting posted now. #vfx #metaverse
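To make the Stable Diffusion + ControlNet step concrete, here is a minimal sketch of styling a single exported pre-viz frame. It assumes the Hugging Face diffusers library and a Canny-edge ControlNet; the model IDs and file names are illustrative placeholders, not the exact checkpoints in our stack.

```python
# Minimal sketch: style one low-poly pre-viz frame while keeping its massing
# and camera framing, via a Canny edge map fed to ControlNet.
# Assumes the Hugging Face diffusers library; model IDs and file names are
# illustrative placeholders, not the exact checkpoints used in this stack.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

frame = cv2.imread("previz_frame_0001.png")   # low-res frame exported from the VR space
edges = cv2.Canny(frame, 100, 200)            # edge map preserves the blockout geometry
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

styled = pipe(
    "high quality professional photo of a luxury island resort",
    image=control_image,
    num_inference_steps=30,
).images[0]
styled.save("styled_frame_0001.png")  # then upscale and hand off to the LED wall pipeline
```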
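On the #nostr side, the point is that a creator's identity is nothing more than a keypair, so no Western identity proofs are needed. Below is a sketch of the standard NIP-01 event structure and id computation for a note sent between collaborators; the public key is a placeholder, and Schnorr signing plus relay publishing are left out for brevity.

```python
# Sketch of a nostr (NIP-01) text-note event. Identity is just a public key,
# and the event id is the sha256 of a canonical JSON serialisation.
# Signing and relay publishing are omitted; the pubkey below is a placeholder.
import hashlib
import json
import time

pubkey = "<creator hex public key>"   # 32-byte x-only key, hex encoded
created_at = int(time.time())
kind = 1                              # kind 1 = plain text note
tags = []
content = "Island resort blockout v3 is up in the shared space"

serialized = json.dumps(
    [0, pubkey, created_at, kind, tags, content],
    separators=(",", ":"),
    ensure_ascii=False,
)
event = {
    "id": hashlib.sha256(serialized.encode()).hexdigest(),
    "pubkey": pubkey,
    "created_at": created_at,
    "kind": kind,
    "tags": tags,
    "content": content,
    # "sig": Schnorr signature of the id with the creator's private key
}
print(json.dumps(event, indent=2))
```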
This isn't the end, this is the start. The terminal point for this is the senior team being able to change the visual parameters in "something like" near real time using voice commands, as the virtual set is being built out. This is potentially disruptive to the whole pre-viz pipeline, and it opens it up to the whole world. Note especially that anyone anywhere can download and run this stack for free, giving a local creative community an instance of the software which can simply be 'dropped in' on the day of the shoot.
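As a rough sketch of what that voice loop could look like: transcribe a spoken direction with the open source whisper model and re-run the styling pass with the updated prompt. The restyle helper here is a placeholder standing in for the ControlNet pipeline sketched above, and the audio clips are assumed to be recorded elsewhere.

```python
# Rough sketch of a voice-driven restyle loop. Assumes the open source
# `whisper` package; restyle_frame() is a placeholder for the ControlNet
# styling pass above, and the audio clips are assumed to be recorded elsewhere.
import whisper

model = whisper.load_model("base")
base_prompt = "high quality professional photo of a luxury island resort"

def restyle_frame(prompt: str) -> None:
    """Placeholder: call the ControlNet styling pass with the new prompt."""
    print(f"re-rendering wall plate with prompt: {prompt}")

while True:
    clip = input("path to recorded direction (blank to stop): ").strip()
    if not clip:
        break
    direction = model.transcribe(clip)["text"].strip()
    restyle_frame(f"{base_prompt}, {direction}")
```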
You'll notice there isn't a huge difference between the original and the target; I didn't want to push it into 'artifacts' on the first try, but ControlNets allow you to go straight from A all the way to Z in one smooth go, and that's where you can start to see how a whole AAA city-scale set could be created in a day from a bunch of cubes. That's a really big deal. It doesn't get you all the way to your in-camera final product, but it's a hell of a way to replace a day of making storyboards.
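The dial for that A-to-Z range is essentially how hard the ControlNet is allowed to constrain the output to the original blockout. A simple sweep like the one below, reusing the `pipe` and `control_image` from the earlier sketch with illustrative rather than tuned values, is one way to find the point just before the geometry starts to break into artifacts.

```python
# Illustrative sweep of ControlNet strength: 1.0 hugs the low-poly blockout,
# lower values give the model more freedom (and more risk of artifacts).
# Reuses `pipe` and `control_image` from the earlier sketch; values are examples.
def sweep_conditioning(pipe, control_image, prompt):
    for scale in (1.0, 0.8, 0.6, 0.4):
        image = pipe(
            prompt,
            image=control_image,
            num_inference_steps=30,
            controlnet_conditioning_scale=scale,
        ).images[0]
        image.save(f"styled_scale_{scale:.1f}.png")

sweep_conditioning(pipe, control_image,
                   "high quality professional photo of a luxury island resort")
```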
Here's the simple scene

This is the first pass ML output for the LED wall.
