flossverse
Bitcoin and open source collaboration tools
Replying to flossverse

We have been working on a novel way to tie collaborative set design to pre-viz for virtual production. All of the software used is open source. Everything is running on a £250 second-hand Dell 1U rack server. The ambition has been to provide access to creative workflows for underserved global communities, so that they can bring their creative ideas into high-end #virtualproduction. What you're seeing here is a raw alpha straight out of the very first render attempt. This is the proposed workflow:

- Open a local, open source collaborative VR space using an engine which can serve both headsets and a simple WebGL window in a browser.

- Get senior visual stakeholders together for collaborative pre-viz in the space, but also bring in people from all over the world, using our open source flossverse stack.

- Ideate the massing, lighting, camera angles, shot tracks, assets etc. in the collaborative space, using supportive ML tools.

- Capture the co-ordinates of the camera track and export the low resolution frames (a sketch of one possible export format follows this list).

- Run the frames through Stable Diffusion, ControlNet, and EBSynth, with rapid iteration by the director, DP, and set designers.

- Apply the style you want, upscale, and go straight back into the in-camera VFX LED wall pipeline. This can be 8K, at whatever framerate.

- Apply the camera track co-ordinates to a camera robot.

- The result is a "refined look and feel" from the day, ON the day, with all of the digital asset massing and co-ordinates exportable to Unreal for refinement. These can also obviously be used in post.

- All of this without ever having to leave the £250 Dell box (assuming a Colab to do a bit of Stable Diffusion), no Unreal, and crucially all the creators in the system communicate over #nostr, which requires no Western identity proofs, and they can be paid in real time over the Lightning Network as they assemble the scene. This is profoundly enfranchising, and nobody can stop it. The stack is deployed. All the components work, but there is more to do.

- The video output below was based on the single prompt "high quality professional photo of a luxury island resort". The originating low-polygon collaborative VR graphics are shown in the comments. I'll try to get this up on the Pathway XR wall to see what it looks like in camera, and post in the replies, but I am pretty excited, so it's getting posted now. #vfx #metaverse
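
The workflow above doesn't pin down how the camera track is serialised, so here is a minimal sketch of one possible hand-off: per-frame position and rotation captured from the VR session, written to JSON so the same numbers can later drive the camera robot or be re-imported into Unreal. The CameraSample structure, field names, and file name are illustrative assumptions, not the actual flossverse export format.

```python
# Hypothetical camera-track hand-off format; field names and layout are
# assumptions, not the actual flossverse export.
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class CameraSample:
    frame: int                  # frame index matching the exported low-res render
    position: List[float]       # world-space x, y, z in metres
    rotation: List[float]       # Euler angles in degrees
    focal_length_mm: float

def export_track(samples: List[CameraSample], path: str = "camera_track.json", fps: int = 25) -> None:
    """Write the captured VR camera path to JSON for the robot / Unreal."""
    payload = {"fps": fps, "samples": [asdict(s) for s in samples]}
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)

# Example: a tiny dolly move captured from the collaborative session.
track = [
    CameraSample(0, [0.0, 1.6, 5.0], [0.0, 180.0, 0.0], 35.0),
    CameraSample(1, [0.1, 1.6, 4.9], [0.0, 179.5, 0.0], 35.0),
]
export_track(track)
```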

This isn't the end, this is the start. The terminal point for this is the senior team being able to change the visual parameters in "something like" near real time using voice commands, based on the buildout of the virtual set, as the set is being built. This is potentially disruptive to the whole pre-viz pipeline, and it opens it up to the whole world. Note especially that anyone anywhere can download and run this stack for free, giving a local creative community an instance of the software which can be simply 'dropped in' on the day of the shoot.

You'll notice there's not a huge difference between the original and the target; I didn't want to push it into 'artifacts' on the first try, but ControlNets allow you to go straight from A all the way to Z in one smooth go, and that's where you can start to see how a whole AAA city-scale set could be created in a day from a bunch of cubes. That's a really big deal. It doesn't get you all the way to your in-camera final product, but it's a hell of a way to replace a day of making storyboards.
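
For anyone who wants to try that Stable Diffusion + ControlNet step themselves, here is a minimal sketch using the Hugging Face diffusers library. The canny-edge conditioning, the specific checkpoints, and the file names are assumptions; the post only says that the low-poly frames and the island-resort prompt drove the output, not which ControlNet or runtime was used.

```python
# Hedged sketch: restyle one low-poly pre-viz frame with Stable Diffusion +
# ControlNet. Model choices and the canny conditioning are assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# An edge map of the exported low-resolution frame keeps the massing and camera
# framing agreed in the VR session, while the prompt supplies the look.
frame = np.array(Image.open("previz_frame_0001.png").convert("RGB"))
edges = cv2.Canny(frame, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

styled = pipe(
    "high quality professional photo of a luxury island resort",
    image=control,
    num_inference_steps=20,
).images[0]
styled.save("styled_frame_0001.png")  # hand off to EBSynth / upscaling for the LED wall
```

Relaxing the ControlNet conditioning scale is what lets the output drift further from the grey-box geometry, which is the A-to-Z territory mentioned above, at the cost of more artefacts to iterate on.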

Here's the simple scene

This is the first pass ML output for the LED wall.

https://media.nostrgram.co/v/da/media_da8611ee6af36.mp4

Just got a local chatbot running with 30B params. Answers take over a minute, and it (obviously) makes things up, as one would expect. ChatGPT it is not, but free, under development, and locally running? HELL YEAH.
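
The post doesn't say which 30B model or runtime is behind that chatbot; a minimal sketch, assuming a quantised model file served through llama-cpp-python on the same box, would look something like this (the model path and settings are placeholders):

```python
# Hedged sketch of a local ~30B chatbot; the model file, context size, and
# thread count are placeholders, not the exact setup in the post.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/30B/ggml-model-q4_0.bin",  # hypothetical quantised weights
    n_ctx=2048,     # context window
    n_threads=8,    # CPU threads on the 1U server
)

result = llm(
    "Q: What is virtual production? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(result["choices"][0]["text"])  # expect slow and occasionally made-up answers
```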

My new girlfriend accused me of being rubbish in bed. I said it was unfair to reach that conclusion after less than a minute.

When I was a kid, my biggest fear was getting locked in a small room with Santa. I had a bad case of Claustrophobia.

This morning I made a Belgian waffle. In the afternoon I made a Frenchman talk bollocks.

What do you call a magician who has lost his magic?

I've been trying to read a chapter a day from my new book The A to Z of Fruit but it's been really quite difficult to find the time and I'd fallen behind a bit, anyway I'm up to dates now.

Ian.

Blimey trousers, I got a whopper 2100 zap there from some pleb. Thanks!

Video-to-avatar like this was obviously only a matter of time.

http://moygcc.github.io/vid2avatar

I lost count of the number of times I saw Orbital live decades ago, and then I kept going back a couple of times a year for those decades. Just clocked they have a new album. I wasn't expecting that for some reason.

https://open.spotify.com/track/2uJbLXWgxBEgkUtz2Ie07l?

Cool

THE U-2 AND BALLOONS – SOME HISTORY, AND SOME THOUGHTS | Dragon Lady Today https://dragonladytoday.com/2023/02/21/the-u-2-and-balloons-some-history-and-some-thoughts/

ControlNets in Stable Diffusion are the next new thing I've not had time to try. I've seen some great character and video work, but this is a nice taste of a workflow for modellers.

Workflow: UV texture map generation with ControlNet Image Segmentation · Discussion #204 · Mikubill/sd-webui-controlnet · GitHub https://github.com/Mikubill/sd-webui-controlnet/discussions/204
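
The linked discussion drives this through the sd-webui-controlnet extension; the sketch below approximates the same idea with diffusers, reusing the pattern from the island-resort example above but swapping in the segmentation ControlNet: feed a colour-coded segmentation image of the model's UV layout and let the prompt fill in the surface detail. The checkpoint names, prompt, and file names are assumptions.

```python
# Rough approximation of the linked UV-texture workflow using diffusers rather
# than the webui extension; model names and inputs are assumptions.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A colour-coded segmentation image of the mesh's UV layout (one colour per
# material region), exported from the DCC tool of your choice.
uv_segments = Image.open("uv_layout_segmentation.png").convert("RGB")

texture = pipe(
    "weathered painted wood planks, photorealistic surface texture",
    image=uv_segments,
    num_inference_steps=20,
).images[0]
texture.save("generated_uv_texture.png")  # map back onto the mesh's UVs
```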

I'm gonna start floating posts from my browsing of non-Bitcoin research to give some colour other than orange and purple round here. Many of you will have seen NeRFs (Neural Radiance Fields) and the impact they're having on scene capture. What you might have missed is that the Nvidia Instant NeRF GitHub gets features rolled quietly out into the wild without fanfare. Their most recent updates allow you to connect directly to a VR headset and edit the "floaters" out of the scene with the controllers. It's wild.

https://github.com/NVlabs/instant-ngp#vr-controls