tags: [ research, complexity ]

# A note about XNLP-completeness

Classically, we aim to determine whether a computational problem is NP-hard or polynomial-time solvable. For an NP-hard problem, this distinction is only the first step, as we often still need to devise an algorithm that provides a solution.

Within the field of parameterized complexity we aim to find algorithms that are “terrible” only with respect to some facet of the input. More precisely, instead of a running time like $\mathcal O(2^n)$, where $n$ is the size of the input, we may have $f(k)\, n^{\mathcal O(1)}$, where $f$ is any computable function and $k$ describes some parameter of the input.

Extracting a parameter from the input leads us to a finer distinction of complexity classes. Even with a parameter $k$ extracted, the problem may remain NP-hard for some fixed value of $k$ – we call this para-NP-hardness. On the other hand, the problem may turn out to be fixed-parameter tractable (FPT), which means there is an algorithm with complexity $f(k)\, n^{\mathcal O(1)}$. Somewhere in the middle, we may prove that there is an XP algorithm with complexity $n^{f(k)}$ while the problem is at the same time W[1]-hard; W[1]-hardness refutes (under standard assumptions) the existence of an FPT algorithm.
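To make the FPT running time $f(k)\, n^{\mathcal O(1)}$ concrete, here is a minimal Python sketch of the textbook bounded-search-tree algorithm for Vertex Cover parameterized by the solution size $k$: the branching tree has depth at most $k$, giving roughly $\mathcal O(2^k \cdot (n+m))$ time.

```python
def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k.

    Bounded search tree: pick any uncovered edge (u, v); at least one of its
    endpoints must be in any cover, so branch on both choices. The recursion
    depth is at most k, so the tree has at most 2^k leaves.
    """
    if not edges:          # no uncovered edges left: the chosen cover works
        return True
    if k == 0:             # edges remain but no budget: this branch fails
        return False
    u, v = edges[0]
    # Branch 1: put u into the cover and drop all edges incident to u.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v into the cover and drop all edges incident to v.
    rest_v = [e for e in edges if v not in e]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# A triangle needs two vertices to cover all three edges.
triangle = [(1, 2), (2, 3), (1, 3)]
print(has_vertex_cover(triangle, 1))  # False
print(has_vertex_cover(triangle, 2))  # True
```

The exponential blow-up is confined to the parameter $k$; for small covers the algorithm is fast even on large graphs, which is exactly the promise of FPT.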

Going a little deeper, we may prove that a problem is XNLP-complete – we are finally getting to the point of this post. XNLP-hardness implies W[t]-hardness for every $t$, while XNLP membership implies membership in XP, so this class pins some problems down more precisely within the standard parameterized complexity analysis.

Typically, XNLP membership is proved by exhibiting a nondeterministic algorithm that runs in $f(k) \cdot n^{\mathcal O(1)}$ time and $f(k)\cdot \log n$ space, where $f$ is a computable function, $n$ the input size, and $k$ the parameter. XNLP-hardness is shown via a parameterized tractable logspace reduction – a parameterized reduction that runs in $\mathcal O(g(k) \cdot n^c)$ time and uses $\mathcal O(f(k) \cdot \log n)$ space.
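The XP membership mentioned above follows from the space bound by the standard configuration-counting argument (a sketch; the constants are illustrative):

```latex
% A configuration of the machine consists of the work tape content,
% the head positions, and the internal state:
\#\mathrm{conf}
  \;\le\;
  \underbrace{2^{\mathcal O(f(k)\log n)}}_{\text{work tape content}}
  \cdot
  \underbrace{f(k)\, n^{\mathcal O(1)}}_{\text{head positions}}
  \cdot
  \underbrace{\mathcal O(1)}_{\text{states}}
  \;=\;
  n^{\mathcal O(f(k))}.
```

Deciding acceptance is then reachability in the configuration graph, which a deterministic algorithm solves in time polynomial in the number of configurations, i.e. $n^{\mathcal O(f(k))}$ – an XP algorithm.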

The hardness reduction begs a question: “How can a reduction that does not have enough space to store its own output work?” Typically, one thinks of a Turing machine with an input tape, an output tape, and a work tape, where space complexity counts only the cells used on the work tape. Another way to think about such a reduction is that it can produce any single bit of its output on demand. One can then compose such reductions: composition increases the space used additively and the time multiplicatively, which preserves the class in question.
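The on-demand view can be sketched in a few lines of Python. This is a toy illustration only – the two “reductions” (bitwise complement and reversal) are trivial stand-ins – but it shows the composition pattern: each stage exposes a `bit(i)` function and queries the previous stage one symbol at a time, so the intermediate string is never materialized and work space only adds up, while time multiplies because bits get recomputed.

```python
def reduction_a(x):
    """Toy stage 1: conceptually maps x to its bitwise complement.

    Instead of writing the output down, return a function that computes
    the i-th output bit on demand, plus the output length.
    """
    def bit(i):
        return 1 - x[i]
    return bit, len(x)

def reduction_b(y_bit, n):
    """Toy stage 2: conceptually maps y to its reversal.

    It never sees y as a whole; it only queries y_bit, one symbol at a time.
    """
    def bit(i):
        return y_bit(n - 1 - i)
    return bit, n

def compose(x):
    """Compose the two stages without storing the intermediate string."""
    a_bit, n = reduction_a(x)
    return reduction_b(a_bit, n)

x = [0, 1, 1, 0, 1]
out_bit, n = compose(x)
print([out_bit(i) for i in range(n)])  # [0, 1, 0, 0, 1]
```

In the Turing machine picture, `bit(i)` corresponds to rerunning the earlier reduction until it is about to write the $i$-th output cell – an additive $\mathcal O(\log n)$ space overhead for the counter $i$, at a multiplicative cost in time.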

## Sources

- XNLP class was introduced (under name $N[f\ \textrm{poly}, f\ \textrm{log}]$) in – Michael Elberfeld, Christoph Stockhusen, and Till Tantau. On the space and circuit complexity of parameterized problems: Classes and completeness. Algorithmica, 71(3):661–701, 2015. doi:10.1007/s00453-014-9944-y ↗.
- Its significance was shown in – Hans L. Bodlaender, Carla Groenland, Jesper Nederlof, and Céline M. F. Swennenhuis. Parameterized problems complete for nondeterministic FPT time and logarithmic space. In Proc. 62nd Ann. IEEE Symp. Foundat. Comput. Sci. (FOCS), pages 193–204, 2022. doi:10.1109/FOCS52979.2021.00027 ↗.
- Then, it was used to show results for linear width measures in – Hans L. Bodlaender, Carla Groenland, Hugo Jacob, Lars Jaffke, and Paloma T. Lima. XNLP-completeness for parameterized problems on graphs with a linear structure. In Holger Dell and Jesper Nederlof, editors, 17th Int. Symp. Param. Exact Comput. (IPEC), volume 249 of LIPIcs, pages 8:1–8:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.IPEC.2022.8 ↗.