Prashant Gahlot 🇮🇳
ad8d4000f436c4fd66256ab022ad044b1cba58f8d27c87c4d7fa2790d64804b4
Software Engineer 👨‍💻
Ok, this is pretty good.
https://video.nostr.build/c766a09e4549c54bcc05cc7e382e5beafa192a35782afc79b008a4496910e98a.mp4
They pre-trained the body structure; I'm waiting for real-time capture using those glasses or other hardware. Currently it's a real avatar that is pre-trained on gestures. For example, if I trained my avatar with a bearded face and you connect with someone after I've shaved, that would really impact the experience. Same for haircuts, etc.
Testing multiple videos in the new video player (unreleased). Plz ignore
Testing on a public relay, next level of development on Damus.
Which data APIs is it using?
Fragmentation in Android really sucks.

