Monthly Archives: June 2016

Is there room for an ehttp:// protocol?

Let's say you run a small website. It does not make you any money, so you really can't put much money into it. Perhaps you host it at Amazon Web Services for a hundred dollars a year out of your own pocket. Let's say also that you want to provide your visitors with the privacy of encrypted connections.

“Alright,” one might say, “HTTPS is the solution.”

So next you go and configure Apache with OpenSSL, read up on configuration and hardening, and get everything set up right. You self-sign a certificate, do a graceful restart of the web server, and visit your site. The next thing you know, your browser is telling you that your website is doing something that could be harmful.

Let's just ignore for the moment that we knew that was going to happen, or why it happens, and think about what is actually different about the connection.

Before TLS was enabled, your web browser got the IP address from DNS, opened a connection to that IP address, and trusted that whoever responded was who it wanted to talk to. Everything was transferred in clear text for any passive attacker to see. Your visitors' ISP can see it, the NSA can see it, China can probably see it, and their brother might be able to see it if he is good enough with computers.

You also don't know that it is actually the website you wanted, so an active attacker can change the content in either direction. They can inject frames or scripts into the website, or replace it with a redirect to a malicious site.

The good news, on the other hand, is that the browser is fine with it.

Checklist for Unencrypted HTTP:
Resistant to Passive Attacks: No
Resistant to Active Attacks: No
Browser Warnings: No
Cost: Nothing

Alright, well with HTTP we expect no security and we get none.

Let's see how HTTPS with a certificate issued by a valid Certificate Authority fares:

The DNS request is made unencrypted (except in exceptional configurations). This is not a big deal, since the Server Name Indication (SNI) is usually sent in clear text before any key exchange happens, and the server has to provide the certificate for the domain in clear text, so a passive attacker knows the hostname, such as gmail.com, zanthra.com, or insertsomethingveryembarassing.com (don't actually try to go there, as someone may buy that domain sometime).

On the other hand, after the handshake, the connection is now encrypted. This greatly limits the ability of a passive attacker to gather data on the things you like to view or research.

You also have protection from an active attacker, since the key exchange relies on the browser using the public key of the host certificate to open the connection. That public key is in turn secured by a chain of cryptographic signatures rooted in the private key of some company that your browser vendor, or OS vendor, chose to trust. This makes it very difficult for an active attacker to trick you into trusting a malicious site. There would be a risk only if users did not get warnings for sites using self-signed certificates and the browser had not received headers like HPKP from an earlier visit.

So let's see how that stacks up.

HTTPS (with trusted Certificate):
Resistant to Passive Attacks: Yes
Resistant to Active Attacks: Yes
Browser Warnings: None (green lock icon)
Cost: $0 – $1000 a year

Now let's look again at HTTPS with a self-signed certificate.

Against a passive attacker it still has the vulnerability of revealing the hostname, but the data is still encrypted, so the protection is the same as with a trusted certificate.

Against an active attacker the story is a bit different, and not as simple as the prior examples. First, let's assume that you have never visited this site before and have no knowledge of the server's certificate. Let's also assume that the active attacker is targeting all visitors of this particular webpage (rather than all webpages visited by a particular visitor). They probably have a pre-generated certificate for that website that they can use to impersonate it. They can return whatever page they want and pretend to be that URL. Remember, though, that the same attacker could have done that to the standard HTTP site as well.

Against an active attacker who is targeting all the webpages visited by a particular visitor, perhaps in order to decrypt communications that would otherwise be inaccessible to a passive attacker, the situation is only slightly better. If the attacker does not go through the effort of creating and signing a new certificate for each site on the fly, a look at the certificate details would reveal a wildcard certificate.

This does not matter much for self-signed certificates, as no one would really try this sort of attack against websites with self-signed certificates: people don't visit those sites in enough volume, and when they do, there is not enough useful information to be gathered from their sessions. These attackers want your browsing habits, interests, hobbies, and the like. If you have reason to visit a site with a self-signed certificate that you would accept, you likely know something about verifying that certificate's identity. For example, for the admin page of a remote website, you can get the key fingerprint through a side channel, such as SSH or your web host's own TLS-secured page. I note all this to contrast it with my proposed EHTTP certificate extension later.

So, as a checklist:

HTTPS (with self-signed certificate):
Resistance to passive attackers: Yes
Resistance to active attackers: No
Browser Warnings: Yes
Cost: None

So, to recap: moving from unencrypted HTTP to self-signed HTTPS, you have gained protection from passive attackers, and gained browser warnings. Those warnings are not there for no reason; they serve to protect against active attacks on websites that have trusted certificates.

I propose that ehttp be made a new protocol prefix designating Encrypted Hypertext Transfer Protocol, designed to be resistant to eavesdropping, both passive and active. By giving up resistance to impersonation, it can use self-signed or otherwise untrusted certificates. This makes it free to provide basic privacy to your users, as a simple upgrade for sites that would otherwise be HTTP only.

The protocol itself is the same as HTTPS, but would not display the standard browser warnings for untrusted certificates. It also would not honor HPKP and HSTS headers, because an impersonator could provide those headers with a fake certificate in order to block legitimate visits. A browser could prompt the user to switch to HTTPS if a site visited over ehttp presents a trusted certificate, but when visiting a site over the https protocol, an ehttp certificate should produce at least as much warning as a self-signed certificate.
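The warning rules above can be condensed into a small decision function. This is only an illustrative sketch: the indicator names and return values are my own, not taken from any real browser.

```python
from dataclasses import dataclass

@dataclass
class Certificate:
    trusted: bool  # chains to a CA the browser already trusts

def browser_indicator(scheme: str, cert: Certificate) -> str:
    """Sketch of the proposed warning rules (names are illustrative)."""
    if scheme == "https":
        # Over https://, an untrusted (EHTTP-style) certificate gets at
        # least the usual self-signed-certificate warning.
        return "green-lock" if cert.trusted else "full-warning"
    if scheme == "ehttp":
        if cert.trusted:
            # Encrypted and verifiable: nudge the user toward real HTTPS.
            return "suggest-https"
        # Encrypted but unverified: no warning, but no lock icon either.
        return "no-indicator"
    return "plain-http"
```

The key asymmetry is that the same untrusted certificate produces a full warning over https:// but none over ehttp://, which is what keeps EHTTP from weakening expectations for HTTPS.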

Against passive attackers, this system is as resilient as HTTPS, since the encryption uses the same methods and key sizes. The hostname will be known, and the fact that it is a self-signed certificate will be known, but this is still an advantage over standard HTTP.

An active attacker that wants to impersonate the server can again pre-generate a certificate in order to look like the server at that destination. Users should be warned, perhaps by clicking a yellow warning icon where the green lock icon normally appears, that the website may not be who it claims to be, and that personal information should not be provided over insecure connections.

Against a semi-active eavesdropper, one who wishes to watch all the websites a user visits by posing as the remote server, there is some protection. Restricting EHTTP to hostname-only certificates requires the eavesdropper to dynamically generate a new certificate whenever the user visits a new ehttp site. With standard self-signed certificates this would not be a significant deterrent, as signing a certificate is a very cheap operation. To raise that cost, I propose that an extension be added to EHTTP certificates, called the EHTTP proof of work.

The EHTTP proof-of-work extension takes the certificate's valid domain name, concatenates it with the public key fingerprint and a random nonce, and hashes the result; new nonces are generated until the hash ends in a sufficient number of zero bits. I suggest selecting a hash function with relatively good parity of speed between GPUs and CPUs (relative to other hash functions), and a minimum zero-bit count that requires about five minutes of computation on a fast quad-core desktop. This is similar to the proof of work used in other systems, best known for its use in Bitcoin.
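A minimal sketch of generating and checking such a proof of work. SHA-256 and the 16-byte nonce are my assumptions for illustration; the post deliberately leaves the hash function open.

```python
import hashlib
import os

def trailing_zero_bits(digest: bytes) -> int:
    """Count zero bits at the end of the hash's binary string."""
    bits = 0
    for byte in reversed(digest):
        if byte == 0:
            bits += 8
            continue
        while byte & 1 == 0:  # count trailing zeros of the last nonzero byte
            bits += 1
            byte >>= 1
        break
    return bits

def generate_pow(domain: str, key_fingerprint: bytes, difficulty: int) -> bytes:
    """Search for a nonce; expected work doubles with each extra zero bit."""
    material = domain.encode() + key_fingerprint
    while True:
        nonce = os.urandom(16)
        digest = hashlib.sha256(material + nonce).digest()
        if trailing_zero_bits(digest) >= difficulty:
            return nonce

def verify_pow(domain: str, key_fingerprint: bytes,
               nonce: bytes, difficulty: int) -> bool:
    """Verification is a single hash, however long generation took."""
    digest = hashlib.sha256(domain.encode() + key_fingerprint + nonce).digest()
    return trailing_zero_bits(digest) >= difficulty
```

The asymmetry is the point: verification costs one hash, while generation costs about 2^difficulty hashes on average, which is what makes forging a fresh certificate for every site a visitor opens expensive for a semi-active eavesdropper.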

Because of the difficulty of quickly creating a new EHTTP certificate with a valid proof-of-work extension, semi-active eavesdroppers would be at a disadvantage. They would be at an even larger disadvantage if the initial handshake were indistinguishable from HTTPS: to remain transparent to both EHTTP and HTTPS servers, the eavesdropper would have to accept every TCP connection from the user, establish its own connection with the server, and relay all HTTPS sessions in software for no gain. Adding an HTTP header carrying a verification of the public key fingerprint would raise the cost further by requiring layer 7 inspection to strip the header in the application layer. None of this guarantees that an active attacker is not eavesdropping, but it is far more secure than HTTP, and moderately more secure than HTTPS with self-signed certificates.

On top of this, browser extensions could be made that ask one or more third-party TLS-secured websites to also download the certificate for that web server (or request the certificate through a proxy such as Tor). The semi-active attacker cannot impersonate the trusted TLS certificate, so if it presented its own EHTTP certificate with proof of work to the client but was unable to intercept the third-party request, there would be a certificate mismatch.
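A minimal sketch of that cross-check using Python's standard ssl module. The SHA-256 fingerprint choice is my assumption, and fetching through a Tor proxy or third-party service is omitted; both vantage points here would simply call fetch_fingerprint from different networks.

```python
import hashlib
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def fetch_fingerprint(host: str, port: int = 443) -> str:
    # get_server_certificate does not validate the chain by default,
    # so self-signed (EHTTP-style) certificates are retrievable too.
    pem = ssl.get_server_certificate((host, port))
    return fingerprint(ssl.PEM_cert_to_DER_cert(pem))

def suspected_interception(direct_fp: str, third_party_fp: str) -> bool:
    """True when the certificate seen directly differs from the one a
    third-party vantage point observed for the same server."""
    return direct_fp != third_party_fp
```

A semi-active attacker sitting between one user and the server can substitute its own certificate on the direct path, but not on the path the third party sees, so the fingerprints diverge.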

So let's consider the checklist again.

Proposed EHTTP:
Resistance to passive attackers: Yes
Resistance to active attackers: Users should assume none, or minimal with third-party certificate verification
Browser Warnings: No
Cost: None (besides some CPU time)

So why not just get a free trusted certificate from StartSSL, like I did for this blog? Because StartSSL is still a company that needs to make money to stay in business. I don't use long HSTS headers on this website for that reason: I don't know that, when it comes time to renew my certificate, they will still be offering free certificates. It also still requires registration and confirmation through a third party. In general, most certificates are expensive for people who pay for their websites out of their own pockets and get nothing in return.

Encrypted HTTP connections could provide a better experience for users of some public WiFi systems, as HTML injection is sometimes used to insert advertisements, and it can protect against some more malicious code injection attacks as well. It will not have any effect on corporate or school security systems that use locally trusted wildcard certificates to decrypt all HTTPS traffic for content filtering or data leak prevention.

Because EHTTP is considered a different protocol from HTTPS, it will not reduce the security of HTTPS. Browsers should not provide any indication of security in the address bar, and if the user looks for more information, as they would for HTTPS, they should be told that the identity of the server cannot be verified. I suggest that if a user visits a page over HTTPS and receives an EHTTP certificate, they be presented with a warning at least as harsh as the harshest warning given for standard certificate errors, and if they choose to proceed anyway, the browser should not downgrade to EHTTP but should remain on HTTPS with the normal error indicators in the address bar. EHTTP should not be classed apart from HTTP with regard to downgrade protection or mixed content for HTTPS.

Site owners can use the same redirect or rewrite rules to upgrade users from HTTP to EHTTP, and EHTTP can be upgraded to HTTPS if a trusted certificate is received. Website owners should publish inbound links using ehttp://, and browsers should naturally attempt ehttp before http whenever they would normally fall back to http after https did not respond.
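The upgrade path above can be expressed as simple redirect logic. A sketch only: the scheme ordering comes from the paragraph above, while the function name and signature are illustrative.

```python
from typing import Optional

def upgrade_target(scheme: str, host: str, path: str,
                   has_trusted_cert: bool) -> Optional[str]:
    """Return the URL a server should redirect to, or None to serve the page."""
    if scheme == "http":
        # Plain HTTP visitors get bumped to the encrypted variant.
        return f"ehttp://{host}{path}"
    if scheme == "ehttp" and has_trusted_cert:
        # A trusted certificate allows a further upgrade to full HTTPS.
        return f"https://{host}{path}"
    return None
```

A site without a trusted certificate simply stops at ehttp, so every visitor gets at least an encrypted connection at no cost to the operator.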

If the concern is rising certificate error rates due to sites accepting HTTPS connections on the same port EHTTP is on, then a separate port could be defined. This, however, would allow semi-active attackers to preselect EHTTP candidates for interception and let HTTPS traffic through, reducing their processing requirements.

What I have proposed is a Better Than Nothing Security protocol to enhance user privacy on the internet compared to unencrypted HTTP. It requires a nontrivial effort to circumvent for ISPs or others who may want to collect data on people's browsing habits. The potential attacks on EHTTP are no worse than those on unencrypted HTTP, and it does not in any way diminish the security of HTTPS with trusted certificates.

It may, on the other hand, cause some websites that would otherwise have obtained trusted certificates to delay or forgo an HTTPS transition, leaving those websites open to impersonation.

If Encrypted HTTP makes it sound too secure for people's liking, perhaps AHTTP, for Anonymously encrypted HTTP, based on the anonymous cipher suites available for TLS. While I don't propose that anonymous ciphers actually be allowed, as they provide no discernible protection against semi-active eavesdroppers and could be used to downgrade any protection EHTTP/AHTTP attempts with proof of work, the term conveys the same idea: the client still does not know who the server is.

While I don't have high hopes that this blog post will have a big impact, I do hope that this is something that will be looked at. Server hardware is at the point where the CPU power to do encryption is available even to small sites, but the effort and cost involved in getting trusted certificates is not. At the same time, the requirements for doing large-scale passive data collection are coming down too. With many recent discussions of online privacy, and with ISPs like AT&T starting to charge customers not to be tracked, I believe the time is right for this sort of standard.