Pointer arithmetic: how does the compiler determine the number of bytes to increment?

Consider the following code.

#include <iostream>

int main(){
  int a[] = {1,2,3,4,5};
  int b = 2; // keep the index in bounds; a has elements 0..4
  std::cout << a[b] << std::endl;
  std::cout << b[a] << std::endl;
}

I understand that a[b] and b[a] are identical, as specified by the standard:

Except where it has been declared for a class (13.5.5), the subscript operator [] is interpreted in such a way that E1[E2] is identical to *((E1)+(E2)). Because of the conversion rules which apply to +, if E1 is an array and E2 is an integer, then E1[E2] refers to the E2-th member of E1. Therefore, despite its asymmetric appearance, subscripting is a commutative operation.

However, I still don't quite understand how this works at the machine level. The compiler does address arithmetic in bytes, and assuming an int occupies 4 bytes, both a[b] and b[a] effectively become *(a + b * 4). My question is: how does the compiler determine that the correct translation is *(a + b * 4), rather than *(b + a * 4)? When the compiler is given an expression of the form E1[E2], it could in principle translate it into either *(E1 + E2 * 4) or *(E2 + E1 * 4) - how does it know which one is the correct choice?