"Security" has been a concept since violence, and theft, probably earlier. Why haven't we "solved" security issues in the "real world" or "computer systems and networks" ?

Attackers attack defenders. Any successful attack must be addressed quickly by the defender, or it can be repeated. It has been this way for a long time. Why does it remain this way?

Is it possible to design network protocols, services, system tools, etc. with input and output spaces small enough that all possible permutations of use can be tested in a reasonable time? If we focus attention on each layer, reduce its complexity, and then validate all possible inputs, would that give us secure foundations on which to build the other layers?
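As a toy illustration of what "validate all possible inputs" might look like, here is a sketch in Python. The parser, its flag names, and its one-byte input space are all hypothetical, chosen only to show that a small enough input space can be enumerated completely:

    def parse_flags(byte_value):
        """Hypothetical parser: one input byte carries three boolean options;
        any byte with unknown bits set is rejected."""
        if byte_value & 0xF8:
            raise ValueError("unknown bits set")
        return {
            "compress": bool(byte_value & 0b001),
            "encrypt":  bool(byte_value & 0b010),
            "ack":      bool(byte_value & 0b100),
        }

    # The input space is one byte, so every possible input can be tested.
    for value in range(256):
        try:
            result = parse_flags(value)
            assert value <= 0b111            # only the 8 legal values may parse
            assert isinstance(result, dict)
        except ValueError:
            assert value > 0b111             # everything else must be rejected

    print("all 256 possible inputs behave as specified")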

The complexity of each "feature" for network protocols, services, tools, if just a binary "Yes" or "No" doubles the number of tests required to validate. A feature that offers a numeric value ranging from -2G to +2G or 0G to 4G multiplies the number of possible things to test by 2^32 (or around 4 billion)

Consider "TCP" ... the concept of a session means open-ended collections of packets and "logically" an unending stream of bits. Why do we need session management at this layer? What if we accepted a smaller max size of payload, and suffered the lower through-put and efficiency, and limited an alternative to TCP to have only one job: routing and no sessions. We would leave for another layer the task of dealing with sessions.

This same kind of approach could be applied to many other fundamental and core services. If they are made simple enough that all possible combinations of inputs AT THEIR LEVEL can be tested, exhausting every possible input, then we have systems or protocols (once implemented) that we can rely upon as "secure" against attack through any input.

Yes, I realize the amount of work required to do this is large, and throughput and processing efficiency may suffer, but if we postpone those issues until later, would this really old idea (Keep It Simple, Stupid: KISS) give us products that we could call secure against attack with "bad data"?

Reduction of focus at each layer also allows cost-benefit decisions to be made per layer. It is not the job of Layer 2 or Layer 3 to provide protection from physical attack; for that, we would have to focus on Layer 1.

I do not think this is a "new" idea. I am guessing other people have proposed similar ideas. (For example, you can find elements of this in the Iso OSI 7 layer networking model, and the old "UNIX way" of tools designed with a single purpose in mind, that end-users can combine as needed.)

What do you think?