Question

Consider the counting function $\{x\}^* \rightarrow \mathbb N$ that counts the number of occurrences of the symbol $x$.

I am confused about the (asymptotic) complexity of computing this function, where the output should be represented in a non-unary system (e.g., binary). My intuition strongly suggests that this should be linear, i.e., $O(n)$ where $n$ is the number of occurrences of the symbol $x$ in the input.

As far as I understand, there are multiple models of computation, e.g.,

  • single-tape Turing machines, for which my best idea has run time $\Omega(n^2 \log n)$, I think (the $\log n$ factor comes from the assumption that the binary successor function takes $\Omega(\ell)$ time, where $\ell$ is the length of the binary representation of the count, and the $n^2$ factor from the machine having to travel over all the $x$'s to reach its current count),
  • multi-tape Turing machines, for which I think I have an idea with run time $\Omega(n \log n)$,
  • random-access machines, which I do not know at all.

So my question is the following.

What is the computational complexity of the counting function in the various models of computation? Is it linear in any of them?

If at all relevant, I ask from the point of view of abstract algebra, where I am trying to assess the computational complexity of the word problem in some specific group.


Solution

Turing machines are a nice model with several advantages, chief among them their simplicity, but they are not the first choice for analyzing algorithms. Algorithms are typically (implicitly) analyzed on the RAM machine model, and in some cases on the BSS model.

Here are some comments on the computational complexity of counting in various models:

Single-tape Turing machine: These are mainly studied because it is relatively easy to prove lower bounds for them; as a model of computation they are even less realistic than multi-tape Turing machines.

Multi-tape Turing machine: It is a standard example in amortized analysis that a binary counter can be incremented using $O(1)$ amortized bit operations: half the time only one bit changes, a quarter of the time only two bits, and so on, for an average of $1/2 + 2/4 + 3/8 + \cdots = 2$ changed bits per increment. Keeping the counter on a separate work tape with the least significant bit toward the head, the Turing machine's cost per increment is linear in the number of changed bits, so counting can be implemented in $O(n)$ overall.
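To make the bookkeeping concrete, here is a minimal Python sketch of that amortized analysis; the list of bits (least significant first) stands in for the counter tape, and `increment` is just a name chosen for this sketch:

```python
def increment(bits):
    """Increment a binary counter stored as a list of bits
    (least significant bit first); return the number of bits touched."""
    flips = 0
    i = 0
    # Carry propagation: turn the run of trailing 1s into 0s ...
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        flips += 1
        i += 1
    # ... then set the first 0, appending a new bit if necessary.
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    flips += 1
    return flips

n = 2**20
counter = []
total_flips = sum(increment(counter) for _ in range(n))
print(total_flips / n)  # ~2 flips per increment: 1/2 + 2/4 + 3/8 + ... = 2
```

On an actual multi-tape machine the head simply walks up the carry run and back, so the same accounting applies.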

RAM machine: A RAM machine consists of finitely many registers and a random-access memory. The registers are $O(\log n)$ bits long, where $n$ is the size of the input, and reasonable operations on registers take constant time. In particular, incrementing a counter that counts up to $\mathit{poly}(n)$ takes $O(1)$ worst-case time, so your function runs in $O(n)$.
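On this model the algorithm is just a single left-to-right pass with one register-sized counter. A minimal Python sketch, with a Python int standing in for the $O(\log n)$-bit register (`count_x` is a name chosen for the sketch):

```python
def count_x(w, symbol="x"):
    """One pass over the input; on a word RAM the counter fits in a
    single O(log n)-bit register, so each increment is O(1) worst case."""
    count = 0
    for c in w:
        if c == symbol:
            count += 1
    return count  # the answer, held in binary in a single register

print(bin(count_x("x" * 37 + "y" * 5)))  # 0b100101
```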

BSS machine: One has to be careful when performing arithmetic on large numbers. On the RAM machine, arithmetic takes constant time only if the magnitude of the operands is $\mathit{poly}(n)$. A BSS machine allows access to special registers which store values in some field, say the real numbers. You can perform constant-time arithmetic and comparisons, but not functions like floor. You are also not allowed to use such values for indexing. (If you don't limit yourself enough, you will soon be solving SAT in polynomial time.) You can think of a BSS machine as accommodating floating-point operations, which in practice take constant time, while ignoring the finite precision involved in them.
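The caveat about large numbers is easy to see concretely: if multiplication were unit-cost regardless of operand size, repeated squaring would build numbers of exponential bit length in linearly many "constant-time" steps. A small Python illustration of that blow-up (not a BSS program; Python's arbitrary-precision integers just make the hidden cost visible):

```python
# Repeated squaring: after i squarings, x = 2**(2**i), so its bit
# length roughly doubles at every step.
x = 2
for _ in range(20):
    x = x * x          # one "unit-cost" multiplication per step
print(x.bit_length())  # 1048577 bits (2**20 + 1) after only 20 steps
```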
