Time bounded random access machines
Authors:
Abstract
The RAM, an abstract model for a random access computer, is introduced. A unique feature of the model is that the execution time of an instruction is defined in terms of l(n), a function of the size of the numbers manipulated by the instruction. This model has a fixed program, but it is shown that the computing speeds of this model and a stored-program model can differ by no more than a constant factor. It is proved that a T(n) time-bounded Turing machine can be simulated by an O(T(n)·l(T(n))) time-bounded RAM, and that a T(n) time-bounded RAM can be simulated by a Turing machine whose execution time is bounded by (T(n))^2 if l(n) is logarithmic, or (T(n))^3 if l(n) is constant.

The main result states that if T2(n) is a function such that there is a RAM that computes T2(n) in time O(T2(n)), and if T1(n) is any function such that lim inf_{n→∞} T1(n)/T2(n) = 0, then there is a set S that can be recognized by some RAM in time O(T2(n)), but no RAM recognizes S in time O(T1(n)). This is a sharper diagonal result than has been obtained for Turing machines.

The proofs of most of the above results are constructive and are aided by the introduction of an ALGOL-like programming language for RAMs.
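To make the cost measure concrete: the abstract charges each instruction l(n), where n reflects the size of the numbers the instruction manipulates, and the two simulation bounds quoted above correspond to the constant and logarithmic choices of l(n). The Python sketch below illustrates that distinction; the helper names (instruction_cost, uniform_cost, logarithmic_cost) are hypothetical, and the charging rule is a simplified reading of the model, not the paper's formal definition.

```python
from math import log2

def uniform_cost(n: int) -> int:
    """l(n) constant: every instruction costs one unit regardless of operand size."""
    return 1

def logarithmic_cost(n: int) -> int:
    """l(n) proportional to the length of n in binary; l(0) taken as 1."""
    return 1 + int(log2(n)) if n > 0 else 1

def instruction_cost(operands, l) -> int:
    """Charge an instruction l(n), where n is the largest value it manipulates.

    A sketch of the cost convention under the assumptions stated above,
    not the paper's formal definition.
    """
    n = max((abs(x) for x in operands), default=0)
    return l(n)

# Example: an ADD instruction whose operands are 5 and 1000.
print(instruction_cost([5, 1000], uniform_cost))      # 1
print(instruction_cost([5, 1000], logarithmic_cost))  # 10 (1000 occupies 10 bits)
```

Under the uniform measure a RAM can build very large numbers in few charged steps, which is why its Turing-machine simulation carries the larger (T(n))^3 overhead, whereas the logarithmic measure keeps charged time in step with operand length and yields the (T(n))^2 bound.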
Keywords:
Review history: Received 3 August 1972; available online 27 December 2007.
Paper URL: https://doi.org/10.1016/S0022-0000(73)80029-7