I want to ask a bit of a dumb question: why isn't this a full implementation of what we see in the tech demo video, especially when Nvidia have released the whole code? Why does further development still need to be done?
They did not release the whole code - they just released some code to test and evaluate a few pretrained models. In the demo, they are using the Flickr-trained model, which they did not release, and they are running it on a very expensive computer with many GPUs to achieve near real-time image synthesis. The code they released is very much research-oriented, and is not at all written to be used in a user-facing application.
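To make concrete what "research-oriented evaluation code" means in practice, here is a minimal sketch of the usual pattern for semantic image synthesis: one-hot encode a segmentation map and run a single forward pass through a pretrained generator. Everything here is an illustrative assumption - the class count, the stand-in generator module - not the actual API of the released code.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 182  # e.g. the COCO-Stuff label count (assumption)

# Stand-in for a pretrained generator; the real thing is a deep GAN
# generator restored from a released checkpoint, not a single conv layer.
generator = nn.Conv2d(NUM_CLASSES, 3, kernel_size=3, padding=1)
generator.eval()

# A segmentation map: one integer class label per pixel.
label_map = torch.randint(0, NUM_CLASSES, (1, 1, 256, 256))

# Evaluation code typically one-hot encodes the labels before the forward pass.
one_hot = torch.zeros(1, NUM_CLASSES, 256, 256)
one_hot.scatter_(1, label_map, 1.0)

with torch.no_grad():           # one image at a time, no UI, no streaming -
    image = generator(one_hot)  # fine for offline evaluation, far from real time

print(image.shape)  # torch.Size([1, 3, 256, 256])
```

The point is the shape of the workflow: batch-of-one offline inference with no interactive front end, which is a long way from the live painting app in the demo video.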
However, I'll add that getting to an app that functions like their demo is mostly a matter of time and money, as technically everything they used can be acquired now. There is some talk about training an open-source version of the Flickr model, since the dataset is open, and if someone has a mammoth computer to run the models on, we could in theory make something like what was in the demo. I think this might be a waste of time, though, as someone from NVIDIA said they would be making and releasing their own demo app in the near future.