I want to tell you about descriptive complexity theory, but it’s hard to explain if you don’t know what first-order logic is. Let’s have a tiny refresher.
Even if you haven’t heard of first-order logic, you’re probably familiar with it anyway. It’s the basic language of logic with the following elements:
 logical connectives ∧ for conjunction (AND), ∨ for disjunction (OR), and ¬ for negation (NOT)^{1}
 existential quantifier $\exists$ and universal quantifier $\forall$
 equality symbol $=$
Using this language, you can talk about a domain, which is a set of values. It’s like having a database and writing queries against it, using first-order logic as the query language. For example, the statement $\exists x (\exists y (\neg (x = y)))$ asserts that there are at least two distinct values in the domain.
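To make the database analogy concrete, here is a small Python sketch (the function name is mine, not from any standard library) that evaluates the sentence $\exists x (\exists y (\neg (x = y)))$ over a finite domain by brute force: each quantifier becomes a loop over the domain.

```python
# Brute-force evaluation of the FO sentence  ∃x ∃y ¬(x = y)
# over a finite domain: ∃ becomes any(...), the quantified
# variable ranges over the whole domain.

def has_two_distinct_values(domain):
    """True iff the domain contains at least two distinct values."""
    return any(any(not (x == y) for y in domain) for x in domain)

print(has_two_distinct_values({1, 2, 3}))  # True
print(has_two_distinct_values({42}))       # False
```

This exhaustive-loop translation only works for finite domains, but that is exactly the setting descriptive complexity theory cares about.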
First-order logic, abbreviated FO, is not very powerful. You can make it more powerful by adding some extra predicates. For example, if you want to talk about natural numbers, you could add + and <. Then you can state theorems like $\forall x (\forall y (x \leq x + y))$, where $x \leq y$ abbreviates $x < y \vee x = y$. The resulting logic is called FO(+,<).
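A universal quantifier over the infinite domain of naturals can’t be checked exhaustively, but we can spot-check the sentence $\forall x (\forall y (x \leq x + y))$ on a finite prefix of the naturals. A minimal sketch (names are mine), with ≤ defined from < and = as in the text:

```python
# Spot-check the FO(+,<) sentence  ∀x ∀y (x ≤ x + y)  on a finite
# sample of the naturals. ∀ becomes all(...); ≤ is defined from < and =.

def leq(a, b):
    return a < b or a == b

N = range(100)  # finite sample of the infinite domain
print(all(leq(x, x + y) for x in N for y in N))  # True
```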
FO is called first-order logic because the existential and universal quantifiers quantify over atomic values. In second-order logic (SO), you’re allowed to quantify over predicates. In third-order logic, you’re allowed to quantify over predicates of predicates, and so on. This gives you a lot of leverage. For example, in SO you can assert that the domain contains an even number of elements. This is not expressible in FO.
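One standard way to state evenness in SO (the encoding here is mine, chosen for illustration) is to assert the existence of a fixed-point-free involution: a function $f$ with $f(x) \neq x$ and $f(f(x)) = x$ for all $x$, which pairs the elements up and so exists exactly when the domain has an even number of elements. Quantifying over functions is a special case of quantifying over binary relations, so this is a second-order sentence. On a tiny finite domain, the second-order quantifier $\exists f$ can be checked by looping over all functions from the domain to itself:

```python
from itertools import product

# SO sentence: ∃f ∀x (f(x) ≠ x ∧ f(f(x)) = x)
# "there exists a fixed-point-free involution", true exactly when
# the domain has an even number of elements. The second-order
# quantifier ∃f becomes a loop over all functions domain → domain.

def has_even_size(domain):
    domain = list(domain)
    for values in product(domain, repeat=len(domain)):
        f = dict(zip(domain, values))
        if all(f[x] != x and f[f[x]] == x for x in domain):
            return True
    return not domain  # the empty domain is trivially even

print(has_even_size({'a', 'b', 'c', 'd'}))  # True
print(has_even_size({'a', 'b', 'c'}))       # False
```

The loop over all $n^n$ functions is hopelessly slow for anything but toy domains, but it makes the leverage visible: the first-order part inside is just another pair of loops, while the second-order quantifier ranges over a vastly bigger space.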

The exact set of connectives varies from source to source, but it doesn’t really matter. Having ∧, ∨, and ¬ is enough, because you can express all the other logical connectives in terms of them. ↩︎