tiktok!!
And remember the Mannequin Challenge? Yep, they used that too.
Subscribe and turn on notifications 🔔 so you don’t miss any videos: http://goo.gl/0bsAjO
The quest for computer vision requires lots of data, including real-world images. But that data can be hard to find, which has led researchers to look in some pretty creative places.
The above video shows how researchers used TikTok dances and the Mannequin Challenge to train AI. The quest is for “ground truth”: real-world examples that can be used to train an AI or to grade its guesses. TikTok datasets provide this by showing a wide variety of movement, clothing, backgrounds, and people. That diversity is key to training a model that can handle the randomness of the real world.
The same thing happened with the Mannequin Challenge: all those people pretending to stand still gave researchers, and their models, more real-world data to train with than they could have hoped for.
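To make “grading an AI on its guesses” concrete, here is a minimal, hypothetical sketch: a predicted depth map is scored against a ground-truth depth map with a root-mean-square error. The arrays are random stand-ins; in practice the ground truth would come from multi-view geometry on a frozen scene, like a Mannequin Challenge clip.

```python
import numpy as np

# Hypothetical example: "grading" a depth model against ground truth.
# In real research the ground truth comes from geometry on a frozen
# scene; here both maps are random stand-in data.
rng = np.random.default_rng(0)
ground_truth = rng.uniform(1.0, 10.0, size=(4, 4))            # true depth (meters)
prediction = ground_truth + rng.normal(0.0, 0.5, size=(4, 4))  # model's guess

# Root-mean-square error: one common way to score a depth prediction.
rmse = np.sqrt(np.mean((prediction - ground_truth) ** 2))
print(f"RMSE: {rmse:.3f} m")
```

A lower score means the model's guesses are closer to reality; training repeats this comparison millions of times and nudges the model toward lower error.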
Watch the above video to learn more.
Further Reading:
Here are the original project pages for each project in the video:
TikTok-aided depth: https://www.yasamin.page/hdnet_tiktok
Mannequin Challenge: https://google.github.io/mannequinchallenge/www/index.html
Geofill and Reference-Based Inpainting: https://paperswithcode.com/paper/geofill-reference-based-image-inpainting-of
Virtual Correspondence: https://virtual-correspondence.github.io/
DensePose: http://densepose.org/
Make sure you never miss behind the scenes content in the Vox Video newsletter, sign up here: http://vox.com/video-newsletter
Vox.com is a news website that helps you cut through the noise and understand what’s really driving the events in the headlines. Check out http://www.vox.com
Support Vox’s reporting with a one-time or recurring contribution: http://vox.com/contribute-now
Shop the Vox merch store: http://vox.com/store
Watch our full video catalog: http://goo.gl/IZONyE
Follow Vox on Facebook: http://facebook.com/vox
Follow Vox on Twitter: http://twitter.com/voxdotcom
Follow Vox on TikTok: http://tiktok.com/@voxdotcom
I’m loving these AI-themed videos from Phil!
Imagine if your entire purpose in life was to watch tiktok dance videos endlessly.
When the robot uprising happens I bet this one will be extra cruel in return for what we made it do 😂
i really like how phil is so personable, it feels like he’s casually having a conversation with just me.
a great presenter
It’s really cool to see my current research direction summarized in a video. 😀
Using old Mannequin Challenge videos for training a depth prediction model is such a wildly clever idea! 🤯
Forcing an AI to watch countless hours of TikTok is definitely how the robot uprising starts
I believe that the challenges are a way to collect data to train AI.
Not just TikTok,
People share photos from their childhood, teens and adult versions. No wonder how good those filters work 😂
Some challenges line up neatly with AI data projects I’ve been involved in.
Phil’s shower dance is 100% ground and life truth
From learning to exploring how-tos, I love how rapidly tech, especially AI, is growing.
This is the fourth of five videos Phil is doing on the ins, outs, and struggles of AI! Watch more of our robot revolution coverage by checking out our AI playlist here: https://www.youtube.com/playlist?list=PLJ8cMiYb3G5ek1Ux66aJ_qWf6CfBaAkGG
Though the methodology is fascinating, the real question is what this research will be applied to. I can imagine predictive surveillance and ever-expanding tracking and warfare capabilities. It’d also be nice to know who funded all of this research.
Can you make a video about the rollout of digital IDs in some African states and how it threatens a variety of human rights?
I don’t even like TikTok or dance challenges, but this is wild. Also, great video guys 🎉
Never thought I’d see a day when the mannequin challenge becomes useful.
But doesn’t the Mannequin Challenge generate a lot of inaccuracy? Even slight movements will compromise the precision. And I believe using scenes from TV shows like Friends may have an even worse effect, since sometimes the shot and reverse shot are not filmed simultaneously and can differ significantly.
That’s actually insanely cool. A bizarre social trend from years ago becomes the weirdly perfect dataset for these computer vision models. Those researchers literally couldn’t have asked for better
Everything is a training set. You used to be an advertising target; now you are a training set.
TikTok dances, with their diverse and dynamic movements, provided a rich dataset for training AI algorithms to recognize and interpret human motion. By analyzing and learning from countless dance videos, the AI system gains the ability to “see” and understand the intricacies of human movement and gestures.
This is extremely important. I appreciate you guys for explaining this because it revolutionizes the way I view some of these video applications.
It’s scary to think that AI companies can use data like this, even if it’s just to train an AI. I guess it’s similar to how image generators like BlueWillow were trained, with a few differences.