As a quick recap:
- Authentication, i.e. verifying that someone has the proper credentials to enter the system, can be done via:
- Passwords
- Public key encryption. The gatekeeper sends the person some agreed-upon challenge message; the person signs it with their private key, and the gatekeeper verifies the signature with the matching public key, something only that key can do. This way, the gatekeeper knows that the person has proven possession of the private key behind the public key (see the first sketch after this list). Should the gatekeeper "trust" this person? That is a separate issue, and it is handled via "Certificates".
- The CA confirms that the public key that shows up in the certificate really belongs to the party named in it, and to attest this, the CA signs this public key (bundled into the certificate) with its own private key. What if the CA has vouched for some fake machine? That then becomes the CA's fault, which raises the question of how a CA can come to trust someone in the first place.
- Once the end-user receives a certificate signed by a CA it trusts, it can accept that it is talking to someone trustworthy (see the second sketch below for the verification step).
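
To make the challenge-response step concrete, here is a minimal sketch in Python, assuming the third-party `cryptography` package and RSA-PSS signatures; the key size, the 32-byte random challenge, and all variable names are illustrative choices, not part of any particular protocol.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The person being authenticated holds a key pair; the gatekeeper
# only ever sees the public half (sizes are illustrative).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The gatekeeper issues a fresh random challenge (the "agreed upon message").
challenge = os.urandom(32)

# The person proves possession of the private key by signing the challenge.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(challenge, pss, hashes.SHA256())

# The gatekeeper checks the signature with the public key; verify()
# raises InvalidSignature if it does not match the challenge.
try:
    public_key.verify(signature, challenge, pss, hashes.SHA256())
    print("challenge passed: the peer holds the private key")
except InvalidSignature:
    print("challenge failed: reject the peer")
```

Passing this check only proves possession of the private key behind `public_key`; whether the holder of that key deserves trust is exactly what certificates, sketched next, are for.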
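
The certificate step can be sketched the same way. Below, the "certificate" is boiled down to just the subject's serialized public key plus the CA's signature over it; a real certificate (e.g. X.509) also binds in an identity, a validity period, and other metadata. Again, the names and parameters are illustrative assumptions.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The CA has its own long-lived key pair (parameters illustrative).
ca_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_public_key = ca_private_key.public_key()

# A subject asks the CA to vouch for its public key.
subject_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject_public_bytes = subject_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# The CA "issues a certificate": it signs the subject's public key
# with its own private key. (A real CA signs a full X.509 structure.)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
ca_signature = ca_private_key.sign(subject_public_bytes, pss, hashes.SHA256())

# An end-user that already trusts ca_public_key can now check the
# endorsement before trusting the subject's key.
try:
    ca_public_key.verify(ca_signature, subject_public_bytes, pss, hashes.SHA256())
    print("certificate accepted: the CA vouches for this public key")
except InvalidSignature:
    print("certificate rejected: the CA never signed this key")
```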
In other words, the learning is that "trust" has to be earned, and further that it has to be retained and maintained. One must also note that "trust" is a human emotion; we create all these systems because we as humans want to get our work done by trustworthy sources, i.e. sources that will help solve our problems and satisfy our desires.
In this sense, trust is an engineering problem.