A Brief History of the Theoretical Computer Science Approach to Computation


by AIthics, September 4th, 2024

Too Long; Didn't Read

Theoretical computer science, starting with Turing's 1930s model, explores what is computable and efficient. Turing’s work laid the foundation for the Theory of Computation, leading to insights into resource constraints and problems like the SAT problem. Modern TCS investigates the efficiency of algorithms and the impact of intractable problems, such as the P vs NP challenge.

Authors:

(1) Lenore Blum ([email protected]);

(2) Manuel Blum ([email protected]).

Abstract and 1 Introduction

2 Brief Overview of CtmR, a Robot with a CTM Brain

2.1 Formal Definition of CtmR

2.2 Conscious Attention in CtmR

2.3 Conscious Awareness and the Feeling of Consciousness in CtmR

2.4 CtmR as a Framework for Artificial General Intelligence (AGI)

3 Alignment of CtmR with Other Theories of Consciousness

4 Addressing Kevin Mitchell’s questions from the perspective of CtmR

5 Summary and Conclusions

6 Acknowledgements

7 Appendix

7.1 A Brief History of the Theoretical Computer Science Approach to Computation

7.2 The Probabilistic Competition for Conscious Attention and the Influence of Disposition on it

References

7.1 A Brief History of the Theoretical Computer Science Approach to Computation

The theoretical computer science approach to computation starts with Alan Turing in the 1930s and focuses on the question, “What is computable (decidable) and what is not?” (Turing, 1937). Turing defined a simple formal model of computation, now called the Turing Machine (TM), and defined a function to be computable if and only if it can be realized as the input-output map of a TM. The formal definition of a TM (program) also provides a formal definition of the informal concept of an algorithm.


Using his model, Turing proved properties (theorems) of computable functions, including the existence of universal computable functions (universal Turing machines) and the fact that some functions are not computable. The former foreshadows the realization of general-purpose programmable computers; the latter shows that some problems cannot be decided even by the most powerful computers. For example, Turing showed there is no Turing machine (Turing-computable function) that, given the description of a TM M and an input x, outputs 1 if M on input x (eventually) halts, and 0 if not. This is known as the “halting problem” and is equivalent to Gödel’s theorem on the undecidability of arithmetic.
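Turing's diagonal argument can be sketched in a few lines of code. The sketch below is illustrative and not from the paper: the names `diagonal` and `D` are our own, and `halts` stands for a hypothetical decider that, by the theorem, cannot actually exist.

```python
def diagonal(halts):
    """Given a claimed halting decider halts(program, input) -> bool
    (hypothetical -- Turing proved no such decider exists), build the
    self-referential program D from Turing's diagonal argument."""
    def D(p):
        if halts(p, p):       # if the decider says p halts on its own description...
            while True:       # ...then D deliberately loops forever
                pass
        # otherwise D halts immediately
    return D

# Any candidate decider is refuted on D itself. For instance, a decider
# that always answers "does not halt" is wrong about D: D(D) in fact halts.
claims_never_halts = lambda program, inp: False
D = diagonal(claims_never_halts)
D(D)  # returns (i.e., halts), contradicting the decider's answer
```

Whatever `halts(D, D)` answers, `D(D)` does the opposite, so no total, correct `halts` can exist.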


But why should we believe the Church-Turing Thesis, suggested first in (Turing, 1937), that the TM embodies the informal notion of computability (decidability)? That’s because a great many very different, independently defined models of discrete computation, including TMs and Alonzo Church’s effective calculability (Church, 1936), all define exactly the same class of functions: the computable functions. In programming parlance, all sufficiently powerful practical programming languages are equivalent in that any one can simulate (be compiled into) any other. The ensuing mathematical theory is often called the Theory of Computation (TOC).


In the 1960s, with the wider accessibility of computers, newly minted theoretical computer scientists such as Jack Edmonds (Edmonds, 1965) and Richard Karp (Karp, 1972) pointed out that resources matter. Certain problems that were decidable in principle seemed intractable given feasible time and space resources. Moreover, intractability seemed to be an intrinsic property of the problem, not of the method of solution or the implementing machine. The ensuing sub-theory of TOC, which introduces resource constraints into what is or is not computable efficiently, is called Theoretical Computer Science (TCS).


TCS focuses on the question, “What is or is not computable (decidable) given limited resources?” A key problem here is the deceptively simple “SAT problem”: Given a boolean formula φ, is it satisfiable, meaning is there a truth assignment to its variables that makes φ true? This problem is decidable. Here is a decision procedure: Given a boolean formula φ with n variables, systematically check whether any of the 2^n possible truth assignments makes the formula true. If yes, output 1; otherwise output 0. This brute-force procedure takes exponential(n) time in general. But is the “SAT problem” tractable, meaning decidable efficiently, i.e., in polynomial(n) time? This is equivalent to the well-known P = NP? problem of (Cook, 1971), (Karp, 1972), (Levin, 1973).
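The brute-force decision procedure above can be sketched directly. This is a minimal illustration, not code from the paper; representing a formula as a Python callable over an assignment dictionary is our own convenience choice.

```python
from itertools import product

def brute_force_sat(formula, variables):
    """Decide satisfiability by trying all 2^n truth assignments.

    formula: a callable taking a dict {variable: bool} and returning bool.
    Returns (1, witness) if satisfiable, (0, None) otherwise.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return 1, assignment   # satisfiable: output 1 plus a witness
    return 0, None                 # unsatisfiable: output 0

# Example: (x OR y) AND NOT (x AND y) -- satisfiable, e.g. x=False, y=True
xor_like = lambda a: (a["x"] or a["y"]) and not (a["x"] and a["y"])
sat, witness = brute_force_sat(xor_like, ["x", "y"])
```

The loop visits up to 2^n assignments, which is exactly why this procedure is exponential in n and why the tractability of SAT remains open.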


The design of novel and efficient algorithms is a key focus of TCS.


Turning the tables, a key insight of TCS has been the ability to exploit the power of hard problems, problems that cannot be solved efficiently. An example is the definition of pseudorandomness. This ability to exploit the power of hardness is novel for mathematics.


This paper is available on arxiv under CC BY 4.0 DEED license.