Confessions of a Code Addict

Live Session: How Hyper-Threading (Simultaneous Multithreading) Works — A Microarchitectural Perspective

Learn about the microarchitecture implementation of SMT & its performance implications

Abhinav Upadhyay
Jun 20, 2024


Have you ever wondered how Simultaneous Multithreading (SMT) works at the hardware level? Or thought about its impact on your code's performance, such as whether it can affect single-threaded applications?

Simultaneous Multithreading (SMT), also known as Hyper-Threading (HT), is a hardware feature available on many modern processors that enables a single processor core to execute two threads simultaneously. This technology improves instruction throughput and can significantly boost system performance.
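The throughput benefit is easiest to see with a toy model. The sketch below is a deliberately simplified simulation (not how any real core schedules work, and all names are made up for illustration): each simulated thread either issues one instruction or stalls in a given cycle, and with a second hardware thread present, it can claim the issue slots the first thread leaves idle.

```python
# Toy model of one issue slot per cycle on a single core.
# A thread is a generator yielding True (ready to issue) or False (stalled).

def run_core(threads, cycles):
    """Simulate `cycles` cycles; the first ready thread wins the issue slot.

    A stalled thread's generator still advances (its stall cycle passes),
    but a lower-priority thread only advances when it gets a chance to issue.
    """
    issued = 0
    states = [iter(t) for t in threads]
    for _ in range(cycles):
        for s in states:
            if next(s):      # first ready thread claims this cycle's slot
                issued += 1
                break
    return issued

def workload(stall_every):
    """A synthetic thread that stalls (e.g. on memory) every Nth cycle."""
    i = 0
    while True:
        i += 1
        yield i % stall_every != 0

# A single thread that stalls every other cycle wastes half the issue slots;
# adding a second thread lets the core fill most of those wasted cycles.
single = run_core([workload(2)], 1000)
smt = run_core([workload(2), workload(3)], 1000)
print(single, smt)
```

The point of the model is only the mechanism: SMT does not make either thread faster, it raises the core's utilization by overlapping one thread's stalls with another thread's ready instructions.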

In our next live session, we will answer these questions by exploring the microarchitectural implementation of SMT in Intel CPUs. Beyond covering how SMT works, this discussion will give you a thorough overview of the microarchitecture of x86 CPUs and a deep understanding of how your program's instructions are executed. This knowledge is extremely useful for low-level performance optimization and for squeezing every last bit of efficiency out of the CPU.

Here’s what we’ll cover:

  • What simultaneous multithreading (SMT) is and the motivation behind its introduction in CPUs

  • A brief background on the CPU microarchitecture

  • How SMT instruction execution works at the microarchitecture level; we will cover:

    • Instruction fetch & decode

    • ITLB and branch prediction

    • Micro-op (uop) queue

    • Out-of-order execution engine

    • Instruction scheduling & retirement

    • Memory access

If you are not familiar with these microarchitectural details of the CPU, this talk will be a good first introduction.
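If you would like to check ahead of the session whether SMT is enabled on your own machine, recent Linux kernels expose the state through sysfs. A minimal sketch, assuming Linux 4.18 or newer (other platforms simply report unknown):

```python
from pathlib import Path

def smt_active():
    """Return True/False for the kernel-reported SMT state, None if unknown.

    /sys/devices/system/cpu/smt/active reads "1" when SMT is enabled
    (available on Linux 4.18+); the file is absent on other platforms.
    """
    p = Path("/sys/devices/system/cpu/smt/active")
    if p.exists():
        return p.read_text().strip() == "1"
    return None

print(smt_active())
```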


Date & Time

July 6th, 16:30 to 18:00 UTC


Logistics

The session is free for all paid subscribers. You can RSVP at the link in the next section. If you are not a paid subscriber, you can upgrade to access the link.

Payment Issues on Substack

If you have trouble paying for the membership on Substack, you can instead sponsor me on GitHub or become a member on buymeacoffee, and I will upgrade you to a paid subscription here.


RSVP

To register, please RSVP at the link below to receive the Zoom meeting details:

© 2025 Abhinav Upadhyay