For fun, I'm writing a bignum library in Rust. My goal (as with most bignum libraries) is to make it as efficient as possible, and I'd like it to remain efficient even on unusual architectures.
It seems intuitive to me that a CPU will perform arithmetic faster on integers of the architecture's native width (i.e., u64 for 64-bit machines, u16 for 16-bit machines, etc.). As such, since I want to create a library that is efficient on all architectures, I need to take the target architecture's native integer size into account. The obvious way to do this would be to use the cfg attribute target_pointer_width. For instance, to define the smallest type that is always able to hold more than the maximum native int size:
#[cfg(target_pointer_width = "16")]
type LargeInt = u32;
#[cfg(target_pointer_width = "32")]
type LargeInt = u64;
#[cfg(target_pointer_width = "64")]
type LargeInt = u128;
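(One gap worth noting in this approach: on a target whose pointer width is none of the above, LargeInt silently never gets defined, and the build fails later with a confusing "cannot find type" error. A minimal guard, assuming we only intend to support these three widths, could be:)

#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!("unsupported target_pointer_width; cannot choose a LargeInt type");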
However, while looking into this, I came across this comment. It gives an example of an architecture whose native int size differs from its pointer width, so my solution will not work for all architectures. Another potential solution would be to write a build script that generates a small module defining LargeInt based on the size of a usize (which we can acquire with std::mem::size_of::<usize>()); a sketch of this is below. However, this has the same problem as above, since usize is based on the pointer width as well. A final obvious solution is to simply keep a map of native int sizes for each architecture, but that solution is inelegant and doesn't scale well, so I'd like to avoid it.
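For concreteness, here is a minimal sketch of what that build-script approach might look like; the generated file name and error message are my own invention. One detail worth flagging: a build script runs on the host, so calling std::mem::size_of::<usize>() inside it would measure the host's usize and give the wrong answer when cross-compiling. The sketch therefore reads the CARGO_CFG_TARGET_POINTER_WIDTH environment variable, which Cargo sets for build scripts based on the target. Either way, it rests on the same pointer-width assumption as the cfg approach:

// build.rs (illustrative sketch)
use std::env;
use std::fs;
use std::path::Path;

fn main() {
    // Cargo exposes the *target's* cfg values to build scripts as env vars;
    // std::mem::size_of::<usize>() here would describe the *host* instead.
    let width = env::var("CARGO_CFG_TARGET_POINTER_WIDTH")
        .expect("Cargo always sets this for build scripts");

    let large_int = match width.as_str() {
        "16" => "u32",
        "32" => "u64",
        "64" => "u128",
        other => panic!("unsupported pointer width: {}", other),
    };

    // Write a tiny module that the crate can pull in with include!.
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&out_dir).join("large_int.rs");
    fs::write(&dest, format!("pub type LargeInt = {};\n", large_int)).unwrap();
}

The crate would then pull the definition in with include!(concat!(env!("OUT_DIR"), "/large_int.rs"));.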
So, my question is: is there a way to find the target's native int size, preferably before compile time so as to avoid runtime overhead? And is it worth the trouble, i.e., is there likely to be a significant performance difference between using the native int size and the pointer width?