Needless to say, this is a fantastic article. Great job and thank you for going so in-depth.
It's interesting that in my time in software engineering, I never had to really learn how GPUs work in depth. I wish I had. I have a friend working on deep learning over at Nvidia, and he seemingly operates at a different level of technicality than I do. At the same time, I try to remind myself that my expertise and experience are mostly in hyperscale distributed systems, and what I work on is probably foreign to him.
Regardless, I feel like at least the GPU basics should be common knowledge for ambitious software engineers, especially as the world moves toward GPU-powered computing thanks to AI.
Thank you, Leonardo.
Although I also never had to work with GPUs directly (apart from running deep learning models), there have been a few instances in my career where we wondered if we could use GPUs for a problem. But the lack of a basic understanding of how they operate made things difficult.
Ciao Abhinav, greetings from Italy. I really enjoy and admire your posts. I have written to you via LinkedIn, hope that's okay.
Hi Tony, thank you so much. (already connected with you on LinkedIn) :-)
Nice article, refreshed my 2016 memory of CUDA programming.
Keep up the great job 👏
Thanks, Nat :)
Excellent fundamentals about GPUs. Great post, thanks a lot!
Abhinav, good article! I'm wondering if we could translate your blog into Chinese and post it in the Chinese community. We will highlight your name and keep the original link at the top of the translated version. Thank you!
Your comment is valid. The article was written for an audience that may not have much background in parallel computing, and the mention of Little's law was meant to provide better intuition. However, I didn't spend enough words elaborating on it, because that would have taken too much space and diluted other parts of the article. I've removed the mention of Little's law; it really wasn't needed to explain how GPUs work.
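(For anyone curious about the intuition in question: Little's law states L = λ × W, i.e. the average number of items in flight in a system equals throughput times latency. Applied to a GPU, with illustrative numbers of my own rather than figures from the article: to sustain λ = 1 memory request per cycle against a memory latency of W = 400 cycles, you need L = 1 × 400 = 400 requests in flight at all times. That is one way to see why GPUs keep thousands of threads resident, so there is always enough outstanding work to hide memory latency.)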