The TUNU app.
A while back, I got interested in an unusual university student team. Despite being crazy busy at the time, I decided to join the team instead of mentoring it as they had asked. The image above was the result of a whole sleepless night getting the app to run.
After a five-minute deadline decision and the pre-fair rounds, we landed on another fancy idea: a game for studying ASL (American Sign Language), and we quickly made it into the second round. We had a week to prepare the app, but we were all busy; as for me, I was struggling with my own work at the time. So when there was only a day left to finish the app, we gathered at a coffee shop and quickly prototyped it. We decided to call the app TUNU because it is a kind of slang for “Thủ ngữ”, which means sign language in Vietnamese.
When it comes to AI apps, my first thought is always to reuse open-source projects, because doing full research from the ground up is always a huge challenge (though I did try it in some projects), and achieving SOTA results is an even bigger one. But ego gets involved here: I sometimes feel uncomfortable, or not smart enough, when using other people’s public work. Then I remind myself: if you insist on creating everything yourself, how about writing your own library? Oh, but the library still runs on “other people’s” computers and OS, so how about building your own computer? Oh, but you would still be using chips from different countries, so how about making your own chips? At this point, I think you and I should both realize how small we are in this ever-developing world. By my definition, there is simply no way to say “I made it myself” without using other people’s work.
Anyway, back to the app: on the final day we didn’t have a single line of code. So at midnight our team met at a coffee shop and started vibe coding with ChatGPT, but it didn’t go anywhere (it wasn’t as smart as the recent models, though it’s hard to tell whether even those could have made the whole app). So we divided the software into parts and… continued vibe coding each part, then integrated them into our MVP. After several hours, around 3 AM, we got the app working and submitted it later. But the app was… visually unattractive, which I guess is why we stopped there.
Still, this MVP allowed me to, theoretically, make my first public game. It was also the first time I ran a computer vision system in a web browser.
If you want to try the game, here is the link. All the computer vision and ML run locally in your web browser; I don’t collect any information from your camera or any detection results. I also tried it on a few phones and it seems to work there too, though sometimes you need to reload and allow camera permission for the web app to work. In the app, raise your hand and the model will predict the meaning of your hand gesture. There are several characters it doesn’t detect correctly, but overall it is just another fun thing. For the source code, you can take a look here.
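For the curious, here is a minimal sketch of what an in-browser hand-detection loop can look like. I’m using TensorFlow.js’s handpose model purely as an illustration (TUNU’s actual stack may differ), and the step that maps landmarks to a letter is only hinted at:

```ts
// Minimal sketch: on-device hand detection in the browser.
// Illustrative only; not necessarily the exact libraries TUNU uses.
import * as tf from '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-webgl';
import '@tensorflow/tfjs-converter';
import * as handpose from '@tensorflow-models/handpose';

async function main() {
  await tf.setBackend('webgl');

  // Ask for camera permission; frames never leave the device.
  const video = document.querySelector('video') as HTMLVideoElement;
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Downloads the model weights once, then runs inference locally.
  const model = await handpose.load();

  async function frame() {
    const hands = await model.estimateHands(video);
    if (hands.length > 0) {
      // Each hand gives 21 [x, y, z] landmarks; a small classifier on top
      // of these keypoints would map a pose to a letter (hypothetical step).
      console.log(hands[0].landmarks);
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

main();
```

The nice part of this setup is exactly what the app relies on: both the camera stream and the predictions stay in the browser, so there is nothing to upload and nothing to store server-side.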
In early 2025, taking a deeper look at sign language, I realized there are many versions of it, like French Sign Language (FSL), and even Vietnam has its own sign language. Without a professor to guide us, it would be challenging to continue the project.