NP and non-deterministic Turing machines

The complexity class P is pretty straightforward: it’s the class of problems that can be solved in polynomial time. What is NP, though?

I’m now attending a course on computational complexity. While I’ve known about complexity classes for a long time, my understanding has been hazy: I’ve known that problems solvable in polynomial time belong to P, that harder problems belong to NP, and that there’s something beyond that. This course has finally clarified to me what NP is.

The key to NP is non-deterministic Turing machines (NDTMs). They’re a generalization of the ordinary, deterministic Turing machines: there can be more than one rule for each state-symbol pair. This is the same distinction as between non-deterministic and deterministic finite automata.

An NDTM can be thought of as running all the possible computations in parallel. A possible intuition is that it “makes a copy of itself”, multiverse-style, when it reaches a configuration where it can apply two or more rules, and the different copies then apply different rules.1

An NDTM accepts an input when there exists at least one computation (a sequence of configurations of the machine) that ends in an accepting state. It halts on an input if all the possible computations halt.
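To make the branching concrete, here is a minimal Python sketch (my own illustration, not anything from the course) that simulates a toy NDTM by exploring all branches breadth-first. The machine, its transition table `delta`, and the toy language it accepts (binary strings containing at least one ‘1’) are all made up for this example.

```python
from collections import deque

# delta maps (state, symbol) -> set of (new_state, written_symbol, head_move);
# having more than one option for a pair is what makes the machine non-deterministic.
delta = {
    ("scan", "0"): {("scan", "0", +1)},            # keep scanning
    ("scan", "1"): {("scan", "1", +1),             # keep scanning, or ...
                    ("accept", "1", +1)},          # ... guess that this '1' is the one we wanted
    ("scan", "_"): set(),                          # reached the blank end: this branch rejects
}

def accepts(input_string, max_steps=1000):
    """Breadth-first search over configurations: accept if *any* branch accepts."""
    # A configuration is (state, tape, head_position, steps_taken).
    start = ("scan", list(input_string) + ["_"], 0, 0)
    queue = deque([start])
    while queue:
        state, tape, head, steps = queue.popleft()
        if state == "accept":
            return True                            # one accepting computation suffices
        if steps >= max_steps:
            continue                               # crude cutoff, just for this sketch
        symbol = tape[head] if head < len(tape) else "_"
        for new_state, written, move in delta.get((state, symbol), set()):
            new_tape = tape.copy()
            new_tape[head] = written
            queue.append((new_state, new_tape, max(head + move, 0), steps + 1))
    return False                                   # every branch halted without accepting

print(accepts("0010"))  # True: some branch guesses the '1' correctly
print(accepts("0000"))  # False: every branch runs off the end and dies
```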

Formally, P is the complexity class of decision problems that can be solved in polynomial time by a deterministic Turing machine. NP is the complexity class of decision problems that can be solved in polynomial time by a non-deterministic Turing machine.

It is sometimes said that NP is the class of problems whose solutions can be checked efficiently (i.e. in polynomial time). Here’s how it works: construct an NDTM that non-deterministically generates a solution candidate and then checks it with that efficient procedure. This NDTM accepts the input if at least one computation ends in an accepting state, and hence it solves the decision problem in polynomial time.

Consider the partition problem: you have a set of items with known weights and two knapsacks, and you want to split the items between the sacks so that the two sacks weigh the same. The decision problem asks, given the weights, whether this can be done. A possible solution can be efficiently verified: if you come up with a way to split up the items, you can easily sum the weights and see if they match. This problem is indeed NP-complete: it can be solved efficiently with an NDTM, but not with a deterministic Turing machine unless P=NP.
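Here is a small Python sketch of that guess-and-check structure for the partition problem. The function names are my own: `verify` is the polynomial-time checker, while `solvable` deterministically simulates the non-deterministic guess by trying every subset, which is why it takes exponential time on an ordinary machine.

```python
from itertools import combinations

def verify(weights, first_sack_indices):
    """Polynomial-time check: do the two sacks weigh the same?"""
    first = sum(weights[i] for i in first_sack_indices)
    return first * 2 == sum(weights)

def solvable(weights):
    """Deterministic simulation of the non-deterministic guess: try all subsets."""
    indices = range(len(weights))
    return any(
        verify(weights, subset)
        for size in range(len(weights) + 1)
        for subset in combinations(indices, size)
    )

print(verify([3, 1, 4, 2], {0, 3}))  # True: 3 + 2 == 1 + 4
print(solvable([3, 1, 1, 2]))        # False: the total weight 7 is odd
```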


  1. This multiverse intuition is sometimes offered as an explanation of what quantum computers do. I do not know much about quantum computing, but I’m pretty sure this is wrong and quantum computers are less powerful than NDTMs. Edit: See a later post about this.
